In our day-to-day lives, sound is auditory information generated by our immediate surroundings and received by the inner ear. How will these daily interactions change once we are set in space? Will they be quieter, louder, or nonexistent?
Sound is tied to many things, and it shapes how we behave and navigate our way around spaces. A sudden, unexpected loud noise nearby alerts and startles us; the rattling patterns of water against the trees, the windows, and the pavement declare that it's raining at this moment; a familiar song tells us we're near that same old store on the street corner again.
Sound is identification.
The aim of this project, within the context of space colonization, is the conceptualization of a mixed reality experience: a melting pot where all environments are united into a new kind of perception through changing soundscapes. Like theme songs, each location plays its own tune; moving to another changes the music altogether.
Not only does it serve as an incentive to go exploring, but it also familiarizes inhabitants with their new surroundings by song, a kind of audio ID. In other cases, the sound can be an indulgent, healing experience. It is an optional experience meant for the individual, yet set within the public realm. While interpretation of the sounds is at the discretion of each listener, the collective act of tuning in, via a wireless receiver connected to any pair of headphones or earbuds, converts a space into a temporary communal gathering, creating social networks all over.
This section is still under development; however, possible solutions include transmissions broadcast via satellite, or automatic downloads to receivers from wi-fi networks located throughout the colony (should wi-fi still exist by the time of migration).
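Either delivery approach ultimately reduces to the same step on the receiver: determine which area of the colony the listener is in, then fetch the audio feed assigned to that area. A minimal sketch of that lookup is below; all zone names and track filenames are hypothetical placeholders, not part of the project as designed.

```python
# Hypothetical mapping from colony zones to their assigned soundtracks.
# In the wi-fi variant described above, a receiver would download the
# track for its current zone from the local network.
ZONE_TRACKS = {
    "habitat-commons": "ambient_commons.ogg",
    "greenhouse": "ambient_greenhouse.ogg",
    "observation-deck": "drone_observation.ogg",
}

DEFAULT_TRACK = "ambient_default.ogg"  # fallback for unmapped zones


def track_for_zone(zone_id: str) -> str:
    """Return the soundtrack assigned to a zone, or a fallback track."""
    return ZONE_TRACKS.get(zone_id, DEFAULT_TRACK)
```

A fallback track keeps the experience continuous even in areas that have not yet been given a dedicated composition.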
SCENARIO // SCRIPTING
In environments pervaded by naturally occurring sounds, sometimes noisy and unpleasant, a participant puts on headphones or earbuds, plugs them into a receiver, and turns the receiver on. It begins playing an audio feed designed for that specific area. Suddenly, the reality once occupied is replaced by a stream of pleasant ambient music that becomes that area's soundtrack. As the participant walks into a different area, somewhere brighter but shaded, perhaps more open in space, the music transitions into something darker and more droning: slightly less melodic, but reflective of the area and its architecture. (See below.)
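The slow transition between areas could be realized as a crossfade driven by the participant's progress from one zone to the next. This is a sketch of one simple option, a linear crossfade; the `position` parameter (0.0 = fully in the first area, 1.0 = fully in the second) is an assumed abstraction over whatever positioning the receiver actually uses.

```python
def crossfade_gains(position: float) -> tuple[float, float]:
    """Linear crossfade between two area soundtracks.

    position: 0.0 means fully inside area A, 1.0 fully inside area B.
    Returns (gain_a, gain_b), the volume multipliers for each feed.
    """
    p = min(max(position, 0.0), 1.0)  # clamp to the valid range
    return (1.0 - p, p)
```

As the participant crosses the boundary, area A's soundtrack fades out while area B's fades in, so the soundscape shifts gradually rather than cutting abruptly.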
MOCK DEMONSTRATION - Slowly transitioning environments from one to the next
Additional audio sketches:
Content © Kelly Chen
IMPLEMENTATION & ACTUALIZATION
Over the course of the next 5 or 6 weeks, I will attempt a final product that expands on the sample experience demonstrated by the above mock video, but in documentary style. A first-person video of a walk is projected onto the wall of a space, with the audio from that walk played back through external speakers. A separate audio feed is transmitted to a wireless receiver available on the side, which the viewer can freely plug his or her headphones into. Compared to the content played over the speakers, what the viewer hears through the receiver is a lush soundtrack directly tied to what's occurring on screen. Thus, there is a crossing between realities, in which the viewer, by tuning in to the receiver, becomes an indirect participant in the video walk (a passive form of interactivity).