
Acoustic Garden: Exploring Accessibility and Interactive Music with Distance-related Audio Effect Modulation in AR/XR

Watch Live Streaming on:


Beijing (GMT+8): 22:00, November 11

Los Angeles (PST): 6:00, November 11

Melbourne (GMT+11): 1:00, November 12

London (GMT+0): 14:00, November 11


Acoustic Garden: Exploring Accessibility and Interactive Music with Distance-related Audio Effect Modulation in AR/XR


Yufan Xie

Designer, Artist


Yufan Xie is a multi-disciplinary designer and artist operating at the intersection of architecture, computational design, sound synthesis, and audio-visual performance. A graduate of the China Central Academy of Fine Arts and the University of Southern California, he currently works as a computational designer at Refik Anadol Studio.

Yufan's research focuses on decision-making processes and the perceptual and behavioral functions within multi-sensory narratives. He also explores the cultural nuances inherent to sensory experience, challenging prevailing sensory biases and offering a distinct perspective on spatial and temporal narratives. Since 2019, Yufan has worked on audience-oriented performance narratives and innovative spatial musical systems. Notable projects and studies have been showcased at UABB (Shenzhen Bi-City Biennale), BMAB (Beijing Media Art Biennale), Shenzhen MoCAUP, SUSAS (Shanghai Urban Space Art Season), the A+D Museum, and Shanghai Digital Futures (CDRF).



Abstract


In AR/XR design, spatial audio and audio-driven narratives are generally relegated to secondary roles, and non-visual content production and research for visually impaired users remain underrepresented. Similarly, in music, visual interfaces dominate discussions of gesture-based sound synthesis, leaving intuitive, audience-driven experiences underexplored. Historically, studies in architecture and music examined spatial and musical sequences, but they largely remained confined to print-media representations.

In this project, inspired by the progressive structure of electronic music, we introduce a spatialized sound synthesis method based on distance-related audio effect modulation combined with binaural spatialization. The approach is intended to guide users through space without depending on visual indicators, relying instead on auditory cues from a multitude of virtual audio objects. These objects respond to user movement, providing immersive musical experiences at walking scale. Interacting with specific audio objects lets users steer different musical progressions, forming a self-similar spatial structure.
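As a rough illustration of the core idea (not the project's actual implementation), the sketch below uses the Web Audio API: each virtual audio object is rendered binaurally through an HRTF panner, while a per-object low-pass filter is modulated by the object's distance to the listener. The function names and the distance-to-cutoff mapping are illustrative assumptions.

// A minimal sketch in TypeScript against the Web Audio API, assuming an
// AR session supplies the tracked listener pose each frame.
// attachAudioObject, updateListener, and the distance-to-cutoff mapping
// are hypothetical, for illustration only.

const ctx = new AudioContext();

// Binaural spatialization: an HRTF panner renders each virtual audio
// object's position for headphone playback.
function attachAudioObject(source: AudioNode, x: number, y: number, z: number) {
  const panner = new PannerNode(ctx, {
    panningModel: "HRTF",     // binaural rendering
    distanceModel: "inverse", // loudness falls off with distance
    positionX: x, positionY: y, positionZ: z,
  });

  // Distance-related effect modulation: a per-object low-pass filter
  // whose cutoff opens as the listener approaches.
  const filter = new BiquadFilterNode(ctx, { type: "lowpass" });
  source.connect(filter).connect(panner).connect(ctx.destination);
  return { panner, filter };
}

// Called every frame with the tracked listener position.
function updateListener(
  px: number, py: number, pz: number,
  objects: { panner: PannerNode; filter: BiquadFilterNode }[],
) {
  ctx.listener.positionX.value = px;
  ctx.listener.positionY.value = py;
  ctx.listener.positionZ.value = pz;

  for (const { panner, filter } of objects) {
    const dist = Math.hypot(
      panner.positionX.value - px,
      panner.positionY.value - py,
      panner.positionZ.value - pz,
    );
    // Assumed mapping: 0-20 m of distance sweeps the cutoff 8 kHz -> 200 Hz,
    // so nearby objects sound bright and distant ones sound muffled.
    const t = Math.max(0, 1 - dist / 20);
    filter.frequency.setTargetAtTime(200 + t * 7800, ctx.currentTime, 0.05);
  }
}

Mapping proximity to filter cutoff (closer sounds brighter) mirrors the filter sweeps common in progressive electronic music, and is one plausible way to make distance audible beyond simple loudness attenuation.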

Unlike traditional sonification methods, which mainly replicate or translate data, our approach emphasizes the emotional impact of auditory messages and unveils the potential for audience-involved, spatially driven musical narratives. In testing across various platforms, we encountered challenges in sound design, hardware limitations, and cognitive processing.


 

Host

Wei Wu


Wei Wu is a designer and computational artist with a Master's degree in Design Studies from Harvard University Graduate School of Design. She operates at the intersection of design and emerging technologies, producing work that encompasses robotic installations, interactive media art, and extended reality design.


