Kaleidoscope Music is a six-channel, real-time, algorithmic sound installation. By extracting a rich, harmonic soundscape from ambient sound, it seeks to mirror the behavior of a kaleidoscope, transforming the everyday environment into something strange and beautiful.
The piece takes two microphone signals as input (capturing the ambient sound of the exhibition space) and runs them through a bank of thirty narrow bandpass filters (Jean-Claude Risset style) to extract pure tones from the broadband sound coming in. The frequency and amplitude of the filters are manipulated algorithmically according to one of three different behaviors (which may overlap): dense, steady chords; intermittently swooping lines; and percolating rhythms. Different sections of the piece are strung together on the fly, and each section chooses from one of five possible scales in just intonation.
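The five just-intonation scales themselves aren't spelled out above, so as a rough illustration of the idea, here is a Python sketch that expands one standard 5-limit just major scale into a set of filter center frequencies across several octaves. The ratio list and the `scale_frequencies` helper are my own stand-ins, not the piece's actual scales or code:

```python
# Illustration: deriving a bank of filter center frequencies from a
# just-intonation scale. The ratios below are a textbook 5-limit just
# major scale; the installation's five actual scales are not published.

JI_MAJOR = [1/1, 9/8, 5/4, 4/3, 3/2, 5/3, 15/8]  # ratios above the tonic

def scale_frequencies(tonic_hz, ratios, octaves=3):
    """Expand scale ratios over several octaves into center frequencies."""
    freqs = []
    for octave in range(octaves):
        for r in ratios:
            freqs.append(tonic_hz * r * 2**octave)
    return freqs

# e.g. 21 candidate center frequencies rooted at 110 Hz
centers = scale_frequencies(110.0, JI_MAJOR)
```

Because just ratios are exact fractions of the tonic rather than equal-tempered approximations, chords built from these frequencies beat very little, which suits a piece built on dense, steady chords.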
Kaleidoscope Music was first exhibited at Beijing’s Today Art Museum as part of the exhibition Music to My Eyes. It served as the audio component of the installation Kaleidoscope Wallpaper, developed with my friend the Shanghai-based artist Chen Hangfeng 陈航峰, who built two kaleidoscopes and attached them to closed-circuit cameras positioned around the gallery. These video signals were then projected onto the walls of a custom-built, six-sided room, where my sound installation was also running. Another exhibition at the Axiom Center for Experimental Media in Boston is in the planning stages. I’ve also presented the work in live performance settings at Opensound (Boston), D-22 (Beijing), Yu Yin Tang (Shanghai), Chapel Performance Space (Seattle), and Studio Z (St. Paul).
How was MAX used?
The piece was programmed entirely in Max/MSP. The mic signals come in, and I wrote a little algorithmic mixer to vary the balance between the two over time. I use tapin~/tapout~ with varying delay times to thicken up the sound, then squash everything with omx.peaklim~. The filtering is done with reson~, which is fairly straightforward; the hard part was writing the functions that vary the filters' pitch, amplitude, and Q over time. Scales are stored in coll objects. New rhythmic patterns are intermittently generated and saved to a coll on the fly, allowing for some mid-level recurrence.
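To give a feel for what reson~ is doing under the hood, here is a minimal Python sketch of a textbook two-pole resonant bandpass filter of the same family. This is an illustration, not Max's exact implementation or the patch's code; the `reson` function name, the bandwidth-from-Q formula, and the rough gain compensation are all my assumptions:

```python
import math

# Sketch of a reson~-style two-pole bandpass resonator. At high Q the
# passband is so narrow that broadband input comes out as a near-pure
# tone ringing at the center frequency: the basic move behind a
# Risset-style filter bank.

def reson(samples, center_hz, q, sr=44100):
    """Narrow bandpass: passes center_hz, attenuates everything else."""
    bw = center_hz / q                            # bandwidth in Hz
    r = math.exp(-math.pi * bw / sr)              # pole radius from bandwidth
    b1 = 2 * r * math.cos(2 * math.pi * center_hz / sr)
    b2 = -r * r
    a0 = 1 - r                                    # rough gain compensation
    y1 = y2 = 0.0
    out = []
    for x in samples:
        y = a0 * x + b1 * y1 + b2 * y2
        out.append(y)
        y2, y1 = y1, y
    return out
```

Thirty of these, each tuned to a scale degree and with its amplitude and Q modulated over time, gets you the broad shape of the filter bank described above.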