The system at the core of Cortex was built in openFrameworks and Max/MSP.
Max uses data from the processed images to control the synthesis systems that shape the white noise, generating a specific soundscape for each image.
The images are broken up into four matrices of 1×171 pixels, and the RGB values are extracted for each pixel.
Each line generated from these values produces a sequence of 171 notes in C major.
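The text does not specify how pixel values are quantized to the scale, so the following is a hypothetical sketch of one plausible mapping: each pixel's brightness (the mean of its RGB channels) is quantized onto C-major scale degrees, turning a 1×171 line into 171 MIDI note numbers. The function names, octave range, and brightness mapping are assumptions, not the installation's actual patch.

```python
# Hypothetical sketch: mapping one 1x171 pixel line to a C-major note sequence.
# The brightness-based quantization below is an assumption; the source does not
# describe the exact pixel-to-note rule.

C_MAJOR_PITCH_CLASSES = [0, 2, 4, 5, 7, 9, 11]  # C D E F G A B

def pixel_to_midi_note(r, g, b, low_octave=3, n_octaves=4):
    """Map an RGB pixel (0-255 per channel) to a MIDI note in C major."""
    brightness = (r + g + b) / 3.0                      # 0..255
    degrees = n_octaves * len(C_MAJOR_PITCH_CLASSES)    # available scale steps
    step = min(int(brightness / 256.0 * degrees), degrees - 1)
    octave, degree = divmod(step, len(C_MAJOR_PITCH_CLASSES))
    return 12 * (low_octave + 1 + octave) + C_MAJOR_PITCH_CLASSES[degree]

def line_to_sequence(line_rgb):
    """Convert one image line (a list of (r, g, b) tuples) to 171 notes."""
    return [pixel_to_midi_note(r, g, b) for (r, g, b) in line_rgb]

# Example: a dark-to-bright gradient line of 171 pixels
line = [(i * 255 // 170,) * 3 for i in range(171)]
notes = line_to_sequence(line)
```

Each resulting note number can then be converted to a frequency and sent on to the filter stage described below.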
The notes – each played after a random interval between 500 and 2500 ms – control the center frequency of two band-pass filters with very high Q resonance coefficients, transforming the white noise into pitched sound. Throughout the different scenes of the installation, the sound is shaped by waveshaping functions that distort the signal, amplifying it and limiting its maximum value to ±1. In addition, band-pass filters with random cutoff frequencies and Q resonance coefficients are used, and their amplitude modulations are likewise randomized.
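The synthesis chain above can be sketched in a few lines. This is a minimal, hypothetical reconstruction, not the actual Max patch: white noise passes through a resonant (high-Q) band-pass biquad whose center frequency stands in for the current note, followed by a waveshaping stage that amplifies the signal and hard-limits it to ±1. The biquad coefficients follow the well-known RBJ Audio EQ Cookbook formulas; the gain value and sample count are arbitrary choices for illustration.

```python
import math, random

def bandpass_coeffs(fc, q, sr=44100):
    """Band-pass biquad (constant 0 dB peak gain), RBJ cookbook formulas."""
    w0 = 2 * math.pi * fc / sr
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha
    b = [alpha / a0, 0.0, -alpha / a0]
    a = [1.0, -2 * math.cos(w0) / a0, (1 - alpha) / a0]
    return b, a

def biquad(x, b, a):
    """Direct form I filtering of the sequence x."""
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for s in x:
        out = b[0] * s + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x1, x2, y1, y2 = s, x1, out, y1
        y.append(out)
    return y

def waveshape(x, gain=8.0):
    """Amplify and hard-limit to +/- 1, as in the distortion stage above."""
    return [max(-1.0, min(1.0, gain * s)) for s in x]

random.seed(0)
noise = [random.uniform(-1, 1) for _ in range(4410)]  # 0.1 s of white noise
b, a = bandpass_coeffs(fc=440.0, q=50.0)              # very high Q resonance
tone = waveshape(biquad(noise, b, a))                 # pitched, clipped output
```

In the installation the center frequency would be updated every 500–2500 ms from the note sequence, and the random cutoff frequencies, Q values, and amplitude modulations of the additional filters would be drawn in the same way.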