The primary purpose of every technological device is to improve the quality and quantity of information conveyed and manipulated (in various forms) through highly efficient and ergonomic operational methods.
“Empty Rooms” introduces three separate devices, two dedicated to the playback of musical elements and one to their digital (symbolic) processing.
The first and second, a turntable and a latest-generation iPod, represent two different forms of playback and “consumption” of sound, mainly as a shared, global language.
They come from two different and distant listening cultures, both in their technological nature (analog/digital) and in their use within our individual and collective infosphere.
Each technology, while “returning” its own mediation between technique and content, produces a “residuo altro” (an “other residue”), a by-product of its activity that normally reflects, in a non-linear way, both the operational and constructive limits of the medium itself and its potential.
This “residue” is the focus of our installation and represents its source material (audio/visual).
First, the artifacts of the record, produced by imperfections in the medium due to its mechanical nature and its inexorable “existential frailty”. Second, the fluctuations of the electromagnetic field generated by the iPod, suitably detected and amplified by analog transducers.
This material, which is usually “discarded”, is subjected to the operational mediation of the Artifex, in this case an algorithm modeled through a series of behaviors evaluated in real time and implemented in a Kyma Sound (the third device that composes the installation).
By virtue of its numerical/symbolic nature, the Artifex dynamically synthesizes the sound material of the installation in order to “craft” a musically relevant result, one that involves a plastic/timbral exploration and reorganization (sometimes even structural) of the source material.
The perceptible expression of this sonological dimension is enhanced by the projection, on one or more screens, of a continuous and statistically controlled flow of “inactive areas” – spaces without any biological or human activity – via Max/MSP Jitter.
These “empty rooms” have previously been algorithmically generated and “re-sampled” through a surveillance camera subjected to radio interference. Again, this “residue” (the disturbances of the radio signal) becomes predominant and structural for the work itself. But what is the “connective tissue” that allows the two domains (audio and visual) to establish a mutual connection and influence?
The Link is established through two transducers, a microphone and a video camera. Both tools collect relevant information from what they hear and see (several loudspeakers placed in the room, and the images projected on the screen(s)): they act as the senses of the system.
This Link defines the system's unique connection (as implemented within the performative space) with our world and our biosphere – in which the system is placed and supplied – and with which the human being may interfere (e.g. by intentionally or unintentionally acting on the microphone, or by passing in front of the camera).
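The Link described above can be pictured as a reduction of the two transducer streams into simple control data. The following Python sketch is purely illustrative: the feature names and values are invented, and the installation's actual Kyma/Max implementation is not shown here.

```python
# Hypothetical sketch of the Link: two "senses" (microphone and camera)
# reduced to control values that steer the rest of the system.

def mic_feature(samples):
    """RMS level of a block of microphone samples (roughly 0..1)."""
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def camera_feature(pixels):
    """Mean luminance of a grabbed frame (roughly 0..1)."""
    return sum(pixels) / len(pixels)

def link(mic_block, frame):
    """Fuse the two transducer readings into one control vector.

    A visitor covering the lens or speaking near the microphone
    shifts these values, and with them the system's behavior.
    """
    return {
        "loudness": mic_feature(mic_block),
        "brightness": camera_feature(frame),
    }

controls = link([0.0, 0.5, -0.5, 0.5], [0.2, 0.4, 0.6])
```

In this reading, human interference is not an error condition but simply another change in the sensed values.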
Empty Rooms is a “possible” organism, inspired by a systematic reflection on the technological medium as a “means of knowledge” rather than a passive “tool” located in an anthropocentric techno-sphere.
How did this project use Max?
“Empty Rooms” instantiates itself as a self-organized audio-visual performance space.
The paradigm itself discloses a synesthetic approach between the “aural” and “visual” experiences.
A Movie made of algorithmically generated “inactive spaces” is projected on a screen via Max/MSP Jitter.
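A “statistically controlled flow” of locations could be driven by something as simple as a weighted random clip sequencer. The sketch below is a guess at that idea in Python, not the actual Jitter patch; the clip names and weights are invented.

```python
# Illustrative weighted-random sequencing of "inactive space" clips,
# avoiding immediate repetition of the same location.
import random

clips = ["corridor", "stairwell", "parking_lot", "waiting_room"]  # invented names
weights = [0.4, 0.3, 0.2, 0.1]  # bias the flow toward some locations

def next_clip(previous=None):
    """Draw the next clip; re-draw if it would repeat the last one."""
    choice = random.choices(clips, weights=weights, k=1)[0]
    while choice == previous:
        choice = random.choices(clips, weights=weights, k=1)[0]
    return choice

sequence = []
prev = None
for _ in range(8):
    prev = next_clip(prev)
    sequence.append(prev)
```

Changing the weights at runtime is one way such a flow could remain “statistically controlled” while never repeating deterministically.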
An overlapping stream of pre-recorded “sound activities” is then diffused from a record player and from 4 different iPods running in shuffle mode, creating recombinant “invisible actions” that fit into the Movie.
A self-organizing link between sound and visuals is established via cybernetic procedures defined as interconnected spin networks, produced by a video camera “observing” the movie and by a microphone “listening” to the space, both placed inside the performance Locus.
The Kyma sound design environment is then engaged to compute the data and perform real-time evaluations across the different types of numerical information (audio and video), producing a “sonorous response” to the asynchronous stream of audio-visual content.
The synthesized information is then diffused back into the performance space through 4 loudspeakers.
Various types of feedback take place during this highly dynamic process, implying a self-regulating behavior that establishes new connections between the pacing of the movie locations and the “sonorous” content produced by processing the iPod sound streams.
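The self-regulating behavior can be caricatured as a damped cross-coupled update: sound level nudges the movie's pacing, and the new pacing feeds back into the sound. The numbers below are arbitrary; this is a toy model of the feedback idea, not the installation's logic.

```python
# Toy model of self-regulating audio-visual feedback:
# damping < 1 keeps the coupled loop from running away.

def step(pace, level, drive=0.2, gain=0.1, damping=0.8):
    """One feedback iteration.

    drive stands in for the incoming turntable/iPod material;
    gain couples the two domains; damping provides self-regulation.
    """
    pace = damping * pace + gain * level + drive   # audio drives visual pacing
    level = damping * level + gain * pace          # pacing feeds back into audio
    return pace, level

pace, level = 1.0, 1.0
for _ in range(50):
    pace, level = step(pace, level)
# The loop settles toward a stable equilibrium rather than diverging.
```

With these constants the iteration converges to pace = 4/3 and level = 2/3; raising the gain past the damping's stability margin would instead make the feedback blow up, which is the knife-edge such a system has to be tuned around.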
The Observer will then experience the following layers of information:
- a Real-Time recombinant Movie made of “inactive” locations.
- an Overlapping Stream of “possible actions”, diffused by the iPods, that fits into the Movie.
- a Sonorous link between the above domains of activity via 4 full-range loudspeakers.
The Observer can take into account one or more layers of information (even all of them) in order to construct a cinematic experience of their own via a correlation process.