Sep 23, 2013 at 8:52am
A Max patch that uses video (or static images) to spatialise audio in three dimensions and to control parameters of granular synthesis, filters, and volume. I made this in 2011 for my master's at the Sonic Arts Research Centre in Belfast. It uses the RGB, alpha, HSL, etc. outputs from Jitter to control the spatialisation (via the ICST ambisonic panners), mapping colour to frequency, light/dark to volume, and so on. In an earlier patch, if a red pixel appeared in the top right of the screen, a matrix would pan the sound to that position in a horizontal surround-sound field; in this patch the concept is extended to an entire 3D audio space. It was originally used in the Sonic Lab at the Sonic Arts Research Centre in Belfast, a full 3D system with speakers below, above, and all around the listener. Just as a moving image exhibits cohesion as it changes through time, so do the spatialisation and other parameters when controlled by the same video: there is a noticeable correlation. The sounds you hear here are not matched to the video, as it is hard to render 48 channels down to stereo.
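The mapping described above can be sketched outside of Max. The following is a minimal Python illustration (not the original Jitter patch): it assumes a pixel's horizontal and vertical position map to azimuth and elevation for an ambisonic panner, hue maps to frequency, and lightness maps to gain. The function name and the specific numeric ranges are assumptions for illustration only.

```python
# Hypothetical sketch of the video-to-audio mapping (not the original
# Max/Jitter patch). Positions and colours drive spatial and synthesis
# parameters; all ranges below are assumed for illustration.
import colorsys

def pixel_to_params(x, y, r, g, b, width, height):
    """Map a pixel (x, y) with 8-bit RGB colour to
    (azimuth_deg, elevation_deg, frequency_hz, gain)."""
    # Normalise coordinates to [0, 1].
    nx = x / (width - 1)
    ny = y / (height - 1)

    # Horizontal position -> azimuth (-180..180 degrees);
    # vertical position -> elevation (-90..90, top of frame = up).
    azimuth = (nx - 0.5) * 360.0
    elevation = (0.5 - ny) * 180.0

    # Colour -> hue/lightness/saturation components.
    h, l, s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)

    # Hue -> frequency on a log scale, 100 Hz to 5 kHz (assumed range).
    frequency = 100.0 * (5000.0 / 100.0) ** h

    # Lightness -> gain: dark pixels are quiet, light pixels are loud.
    gain = l

    return azimuth, elevation, frequency, gain
```

For example, a pure red pixel in the top-right corner of a 640x480 frame would be panned fully right and upward, with a mid-level gain, matching the behaviour described in the post.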
[See the full post at: The Holoverse]