Synaesthetic Sequencer takes its title from synaesthesia, the phenomenon some people experience whereby stimulation of one sense provokes a response in another.
In the case of Synaesthetic Sequencer, colours captured by a video camera are converted into spontaneously composed music in real time.
Synaesthetic Sequencer divides the camera's view into an 8×8 grid of cells. It works through the grid cell by cell, analysing each one to determine how much red, green and blue make up its colour. These values are then used to play a sequence of three-note chords, with one note each for the red, green and blue components of the cell. The whole process runs in real time.
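The per-cell analysis can be sketched in a few lines of Python. This is an illustration only, not the actual implementation (the piece does this inside Max/Jitter); the frame format, a 2-D list of (r, g, b) tuples with 0-255 channel values, is an assumption made for the sketch.

```python
GRID = 8  # the installation divides the scene into an 8x8 grid

def cell_averages(frame):
    """Return an 8x8 grid of average (r, g, b) values for a frame,
    where frame is a 2-D list of (r, g, b) tuples."""
    h, w = len(frame), len(frame[0])
    ch, cw = h // GRID, w // GRID  # pixel size of each grid cell
    grid = []
    for gy in range(GRID):
        row = []
        for gx in range(GRID):
            # collect every pixel belonging to this cell
            pixels = [frame[y][x]
                      for y in range(gy * ch, (gy + 1) * ch)
                      for x in range(gx * cw, (gx + 1) * cw)]
            n = len(pixels)
            # average each of the three channels independently
            row.append(tuple(sum(p[c] for p in pixels) // n
                             for c in range(3)))
        grid.append(row)
    return grid

# A uniformly coloured 64x64 frame averages to that colour in every cell.
frame = [[(200, 30, 10)] * 64 for _ in range(64)]
print(cell_averages(frame)[0][0])  # (200, 30, 10)
```

Each of the 64 resulting (r, g, b) triples is what gets turned into a three-note chord.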
The complete installation involves a laptop, a camera, a projector showing the 8×8 grid of coloured cells, three MIDI sound modules/synths and three active speakers (one for each of the primary colours).
Synaesthetic Sequencer is a highly interactive installation. The music being created can be influenced in many ways, either by changing what the camera sees or by adjusting various controls on a control panel.
An early version of the piece was demonstrated at the 2013 Edinburgh International Science Festival during the Rocket Lolly cabaret evening, but was premiered in its full form at BEDROOM for the Edinburgh Annuale in June 2013.
The video to the right captures Synaesthetic Sequencer in action at the premiere and was produced by Emma Bowen, with graphic design by Andy Fielding.
How did this project use Max?
Max forms the core of Synaesthetic Sequencer, or, to be precise, a combination of Max and Jitter. Jitter processes the video feed from a USB camera and calculates the RGB values for each part of the image. Max then maps these values to MIDI note values. Max also reads controller values from a MIDI control surface and uses them to alter both the visual and the audio aspects of the patch.
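The RGB-to-MIDI mapping might look like the following sketch, which mimics a linear scaling in the spirit of Max's [scale] object. The note range (MIDI 36-96) is an assumption for illustration; the source does not specify the range the installation uses.

```python
def scale(value, lo_in, hi_in, lo_out, hi_out):
    """Linear mapping of value from one range to another,
    in the spirit of Max's [scale] object (integer output)."""
    return lo_out + (value - lo_in) * (hi_out - lo_out) // (hi_in - lo_in)

def cell_to_chord(r, g, b, note_lo=36, note_hi=96):
    """Map one cell's 0-255 RGB channels to three MIDI note numbers,
    one note per primary colour. The 36-96 range is an assumption."""
    return tuple(scale(v, 0, 255, note_lo, note_hi) for v in (r, g, b))

# Full red maps to the top of the range, absent blue to the bottom.
print(cell_to_chord(255, 128, 0))  # (96, 66, 36)
```

In the installation each of the three notes is routed to its own MIDI sound module and speaker, one per primary colour.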