A mobile app that celebrates social networking as an embodied and emergent user experience, based on primary research with the visually impaired community and built around a concept of spatial information architecture.
UI/UX Designer and Creative Technologist
Figma, Unity3D, C#, Blender, Ableton, Illustrator, and Photoshop
MFA Thesis project
With the rise of mobile technology, much of our social interaction has shifted from in-person exchanges to app-based platforms on smartphones such as Snapchat and Instagram. These social media companies, driven by advertising revenue, continue to invest heavily in visually immersive technology like spatial computing, with Snap pioneering AR features in 2015.
Moreover, approximately 8% of the U.S. population has a visual impairment, hindering their participation in this form of modern socialization, which can lead to social and health issues, per Georgetown University's Health Policy Institute.
To understand the lived experience of the visually impaired and their relationship with smartphone technology, I interviewed two individuals directly involved in the community:
Various features and visual languages of different applications were compiled and evaluated for their UI/UX.
During my design research into Snapchat, I was particularly drawn to the Periscope-like interaction design of the Snap Map feature.
From the map's top-down view, I was struck by the strategy used to represent localized, user-generated content; to me, it was reminiscent of clouds.
Yet the media shared on this application is principally visual information, so I chose an audio-focused route instead.
To assist in visually articulating the project's concept, I collaborated with Midjourney to illustrate a digital cloud of information intertwined within the urbanscape.
Using Unity3D, I examined different cloud types and interpreted each into its own spatialized prototype.
User input was primarily WASD keyboard controls and a mouse, with audio delivered over headphones.
Different audio effects and behaviors were applied to interrogate the emergent qualities that a nearby collection of Audio Notes might manifest.
To assist in user navigation, audio was binaurally recorded and played back. Each Audio Note contained a beacon that dynamically adjusted to the User's location in the space.
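To give a sense of how such a beacon might be wired up in Unity, here is a minimal C# sketch, assuming a looping beacon clip on each Audio Note and an AudioListener on the player camera; the component name AudioBeacon, the audibleRadius value, and the pitch cue are my own illustrative assumptions rather than the exact prototype code.

```csharp
using UnityEngine;

// Illustrative sketch: an Audio Note beacon whose loudness and pitch respond
// to the listener's distance, so a user can locate it by ear.
// Assumes an AudioSource with a looping beacon clip on the same GameObject
// and an AudioListener on the player/camera.
[RequireComponent(typeof(AudioSource))]
public class AudioBeacon : MonoBehaviour
{
    public Transform listener;          // the user's position (e.g., main camera)
    public float audibleRadius = 25f;   // assumed distance at which the beacon fades out

    private AudioSource source;

    void Start()
    {
        source = GetComponent<AudioSource>();
        source.spatialBlend = 1f;       // fully 3D so Unity pans and attenuates the sound
        source.loop = true;
        source.maxDistance = audibleRadius;
        source.rolloffMode = AudioRolloffMode.Linear;
        source.Play();
    }

    void Update()
    {
        // Raise the beacon's pitch slightly as the listener approaches,
        // adding a distance cue on top of volume and panning.
        float d = Vector3.Distance(listener.position, transform.position);
        float proximity = 1f - Mathf.Clamp01(d / audibleRadius);
        source.pitch = Mathf.Lerp(1f, 1.5f, proximity);
    }
}
```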
By prototyping the concept in an interactive simulated environment, I was able to move beyond conversations and loose ideas and consider a holistic, novel interaction paradigm.
Three volunteers interacted with each cloud type via the interactive Unity Scenes and provided feedback on the user experience, visual language and immersive qualities.
The feedback was collected via a Google Form and synthesized into the following five comments/questions.
How does a user leave an audio recording?
And, how does the system know where to place it?
What are the limits per voice recording?
How does the system know when to switch between behaviors?
And, how does the cloud behavior affect the audio?
How can the metaphor of a cloud be better expressed visually and behaviorally?
How might someone be able to filter the space across time and content?
From the extensive primary and secondary research, and feedback from User Testing and my Thesis Advisors, I synthesized the following three key insights:
“Despite VoiceOver capabilities on my iPhone, most applications, including all social media applications, are nearly inaccessible. This severely impacts my ability to interact with others, unless I'm able to speak to them on the phone.”
Ryan Richards
The techniques of Orientation and Mobility (O&M) empower the visually impaired to develop a mental model of space based on environmental sounds, enabling the individual to travel safely without relying on others.
The Stratus and Nimbus interaction design studies were the most effective at conveying the idea of networked audio recordings and emergent behaviors that might result from content-specific information and/or density.
By placing the primary action button in the center of the screen, all Users can easily interact with the app's main function, without the need to rely on VoiceOver.
Each Audio Note's beacon sound and audio information provides a mental model for the user’s surroundings, while facilitating connections with others.
The sketches were expanded to include actions for activating a spatial filter based on Natural Language Processing (NLP) and a toggle switch for Users to move between AR and Planar modes.
Regardless of mode, the UI and beacon sounds suggest to the User how the system interprets the Audio Note content and density in the vicinity, according to the Nimbus and Cumulus interaction design studies (see above).
One of the feedback notes from the interactive prototype—How can the metaphor of a cloud be better expressed visually and behaviorally?—instigated visual research into my own cloud forms and behaviors.
To differentiate my interpretation of a cloud, I experimented with an analog method of making clouds through the use of a cloud tank. I filmed the cloud-like effect with my digital camera at 120 fps.
From the numerous experiments I created with the cloud tank, one in particular captured the essence of a fluffy cloud.
A still from that footage became the app's background image, and I sampled many of its colors, with some adjustments, for use throughout the app.
The cloud motif was applied throughout the app, including interactive animations and UI elements.
From the behavioral qualities observed in the cloud tank, I attempted to mimic the circular, billowy feeling in how a User creates their own Audio Note.
For the visual treatment of most elements, including the menus, I feathered the hard edge to accentuate the soft feel of a cloud, while the typographic choice, Proxima Nova, suited the overall circular forms.
I structured the Audio Notes to make them identifiable and more meaningful for Users.
When these Audio Notes gather in one place, they form clusters and embody cloud-like, emergent characteristics.
A user can easily engage with a space full of Audio Notes without directly interacting with the application and be guided to each Audio Note location by listening for its spatial beacon sound via their wireless earbuds.
To listen to an Audio Note, the user simply walks toward its fixed location and the recording automatically plays as spatialized audio.
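A minimal Unity C# sketch of how this proximity-triggered playback could work, assuming each Audio Note carries its recording in an AudioSource; the playbackRadius value and the AudioNotePlayback component name are illustrative, not the shipped implementation.

```csharp
using UnityEngine;

// Illustrative sketch: play an Audio Note's recording automatically once the
// listener walks within an assumed trigger radius of its fixed location.
[RequireComponent(typeof(AudioSource))]
public class AudioNotePlayback : MonoBehaviour
{
    public Transform listener;
    public float playbackRadius = 2f;   // meters; assumed trigger distance

    private AudioSource recording;

    void Start()
    {
        recording = GetComponent<AudioSource>();
        recording.spatialBlend = 1f;    // keep the voice localized in space
    }

    void Update()
    {
        bool inRange = Vector3.Distance(listener.position, transform.position) <= playbackRadius;
        if (inRange && !recording.isPlaying)
            recording.Play();           // begins automatically on approach
        else if (!inRange && recording.isPlaying)
            recording.Stop();           // stops once the user walks away
    }
}
```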
To record an Audio Note, an individual user simply presses the central button.
Up to four users can record a single Audio Note if they are within 3 ft. of one another. In these instances, the recording spatializes each user's voice so that others may inhabit the conversation.
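As a rough illustration of this grouping rule, the sketch below gathers the users within 3 ft. of the person who starts recording, capped at four participants; the GroupRecording class and its method are hypothetical names for the sake of the example.

```csharp
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

// Illustrative sketch of the grouping rule described above: when a recording
// starts, users within 3 ft of the initiator join the same Audio Note,
// capped at four participants in total.
public static class GroupRecording
{
    const float JoinRadiusMeters = 0.9144f; // 3 ft
    const int MaxParticipants = 4;

    public static List<Transform> GatherParticipants(Transform initiator, IEnumerable<Transform> nearbyUsers)
    {
        var group = new List<Transform> { initiator };
        group.AddRange(nearbyUsers
            .Where(u => u != initiator &&
                        Vector3.Distance(u.position, initiator.position) <= JoinRadiusMeters)
            .Take(MaxParticipants - 1));
        return group; // each participant's track would later be spatialized per voice
    }
}
```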
In denser spaces, the user can initiate a spatial filter based on the topics discussed in the Audio Note recordings.
This feature is made possible by the app's NLP capabilities.
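The sketch below shows one way the filter could be applied on the client, assuming topic tags have already been produced by an upstream NLP step; the SpatialTopicFilter component and its fields are illustrative assumptions.

```csharp
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

// Illustrative sketch of the spatial filter: each Audio Note carries topic tags
// (assumed to come from an NLP/topic-extraction step), and the filter mutes
// the beacons of notes whose topics do not match the user's query.
public class SpatialTopicFilter : MonoBehaviour
{
    [System.Serializable]
    public class NoteEntry
    {
        public AudioSource beacon;
        public List<string> topics;     // e.g., tags produced by the NLP pipeline
    }

    public List<NoteEntry> notes;

    public void ApplyFilter(string topic)
    {
        foreach (var note in notes)
        {
            bool matches = note.topics.Any(t => t.Equals(topic, System.StringComparison.OrdinalIgnoreCase));
            note.beacon.mute = !matches; // only matching Audio Notes remain audible
        }
    }

    public void ClearFilter()
    {
        foreach (var note in notes)
            note.beacon.mute = false;   // restore the full soundscape
    }
}
```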
Users can tap an individual Audio Note to view its metadata, such as the User who posted it and the number of plays and responses.
Based on the density of Audio Notes within a 24-hour timeframe, the system automatically switches between two states, expressed by a hue change in the application's background, purple versus blue, and a distinct auditory experience.
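A minimal sketch of this density-driven state change, assuming the app supplies a rolling 24-hour count of nearby Audio Notes; the threshold and color values below are placeholders, not the final palette or logic.

```csharp
using UnityEngine;

// Illustrative sketch of the density-driven state change: given the number of
// Audio Notes posted nearby in the last 24 hours, shift the background hue
// between the two states described above.
public class DensityStateController : MonoBehaviour
{
    public Camera mainCamera;
    public Color calmBlue = new Color(0.35f, 0.55f, 0.85f);   // placeholder, not the final palette
    public Color denseViolet = new Color(0.55f, 0.40f, 0.80f);
    public int densityThreshold = 20;   // assumed note count that flips the state

    // Called with the number of Audio Notes recorded nearby in the last 24 hours.
    public void UpdateState(int notesInLast24Hours)
    {
        bool dense = notesInLast24Hours >= densityThreshold;
        mainCamera.backgroundColor = dense ? denseViolet : calmBlue;
        // A matching change to the soundscape (e.g., swapping an ambient
        // AudioSource clip) would accompany the hue shift in the full design.
    }
}
```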
Furthermore, the experience can be observed and interacted with in a more immersive manner through the Mixed Reality mode.
Fundamentally, Sonus offers a way to reunite our physical and social spaces, one that accommodates the varied accessibility needs of all potential users and acknowledges the major role that context and wonder play in socialization.