“adaptation/volume” is a 16-channel sound installation consisting of several programming structures written in Max/MSP, a Mac mini, a PreSonus FireStudio sound card, a Behringer preamp and recorded sound material.
The sound is abstracted through different real-time algorithms and processed through a spatialization technique called ViMiC (Virtual Microphone Control).
ViMiC is a tool for real-time spatialization synthesis, particularly for concert situations and site-specific immersive installations, and especially for larger or non-centralized audiences. Based on the concept of virtual microphones positioned within a virtual 3D room, ViMiC supports loudspeaker reproduction up to 24 discrete channels for which the loudspeakers do not necessarily have to be placed uniformly and equidistant around the audience. Also, through the integrated Open Sound Control protocol (OSC), ViMiC is easily accessed and manipulated. (Nils Peters)
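Since ViMiC is controlled through OSC, it helps to see what an OSC packet actually looks like on the wire. The sketch below encodes a minimal OSC message in pure Python (address string and float arguments only); the address `/source/1/xyz` is a hypothetical example, not ViMiC's actual OSC namespace.

```python
import struct

def osc_message(address: str, *args: float) -> bytes:
    """Encode a minimal OSC message (string address, float32 arguments only).

    OSC null-terminates strings and pads them to a multiple of 4 bytes,
    and encodes floats as 32-bit big-endian IEEE 754.
    """
    def pad(s: bytes) -> bytes:
        return s + b"\x00" * (4 - len(s) % 4)  # always at least one NUL

    msg = pad(address.encode("ascii"))
    msg += pad(("," + "f" * len(args)).encode("ascii"))  # type tag string
    for a in args:
        msg += struct.pack(">f", a)
    return msg

# Hypothetical address: move virtual source 1 to (x, y, z) in the virtual room.
packet = osc_message("/source/1/xyz", 1.0, 0.5, 2.0)
```

In practice such a packet would be sent over UDP to the Max patch hosting ViMiC, e.g. with `socket.sendto(packet, ("127.0.0.1", port))`.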
My final project for my master's degree in Fine Art dealt with acousmatics, space, and the idea of how organized sound as structure/composition achieves a body, a sculptural form, through spatialization techniques. It became a discourse between the physical space and the virtual space, the auditory perception of space and the visual perception of space, one superimposed on the other.
Being occupied with formalism, I was quite interested in the way Pierre Schaeffer (2) describes the formal aspects of sound as sound objects: listening to sound itself and to the perception of the sound. I started recording sounds for their quality as sound, and not for their references or as symbols of social activity, culture or political meaning.
I became aware through this process that the recorded sound was embedded with acousmatic space. This led me to the writings of Denis Smalley (3), who states that all listening involves a trans-modal operation, that is, the interaction and interdependence of various sense modalities. This means that the perception of space in an acousmatic situation (the playback of a field recording) generates an understanding or knowledge of the space embedded in the recording, owing to our experience of space. A trans-modal perception of an acousmatic recording draws upon the experience of the body and its prior encounters with space.

This led to the realization that listening to the spatialization of these real-time processed digital sound recordings described not only the recorded space captured within the recording, but also the body's relation to space and its experience of space. Listening to sound as an object, or as sound sculpture, involves a complexity which draws upon past experience, the present, and the properties of the physical space. It fuses into a singularity where the apprehension of the virtual space, generated by the speakers and various software programs, overlaps the physical space and generates a dialogue between the two. It becomes an idea of space superimposed onto the physical space.
How did this project use Max?
The sound is generated in real time using different algorithms written in Max/MSP. The third-party Jamoma framework is also used to create a virtual room rendered over 16 speakers.
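The actual rendering happens inside the Jamoma/ViMiC externals in Max, but the underlying idea can be sketched conceptually: each of the 16 loudspeaker channels is fed by a virtual microphone placed in the virtual room, whose gain follows a first-order directivity pattern and distance attenuation, and whose delay follows the propagation time from source to microphone. This is a simplified illustration under those assumptions, not ViMiC's actual implementation.

```python
import math

SPEED_OF_SOUND = 343.0  # metres per second, at roughly room temperature

def virtual_mic_feed(mic_pos, mic_axis, source_pos, pattern=0.5):
    """Gain and delay of one virtual microphone for a virtual source.

    pattern: 0.0 = omnidirectional, 0.5 = cardioid, 1.0 = figure-of-eight
    (first-order directivity: g(theta) = (1 - pattern) + pattern * cos(theta)).
    """
    dx = [s - m for s, m in zip(source_pos, mic_pos)]
    dist = math.sqrt(sum(d * d for d in dx)) or 1e-9
    axis_len = math.sqrt(sum(a * a for a in mic_axis))
    # cosine of the angle between the mic's facing axis and the source direction
    cos_theta = sum(a * d for a, d in zip(mic_axis, dx)) / (axis_len * dist)
    directivity = (1.0 - pattern) + pattern * cos_theta
    gain = max(directivity, 0.0) / dist   # 1/r distance attenuation
    delay = dist / SPEED_OF_SOUND         # seconds of propagation delay
    return gain, delay

# E.g. 16 virtual cardioid mics on a unit circle, axes pointing outward,
# one per loudspeaker channel:
mics = [(math.cos(2 * math.pi * k / 16), math.sin(2 * math.pi * k / 16), 0.0)
        for k in range(16)]
source = (2.0, 0.0, 0.0)
feeds = [virtual_mic_feed(m, m, source) for m in mics]
```

Applying each channel's gain and delay to the source signal yields the 16 speaker feeds; moving the virtual source (for instance via OSC) smoothly redistributes the sound across the array.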