I was wondering if anyone would be interested in helping out with a patch I'm doing for a final college project.
The idea is a 'voice painting' installation: translating a live audio input into a constantly evolving projected 'painting' using Jitter. I have a patch set up at present where, at a certain audio level, a bang sends a 'point' message to the jit.gl.sketch object, and I'm hoping to use the frequency of the source signal to determine the Y value of that point. I'm also deciding between cross-correlation of two microphones and subsampling a live video stream to get a 'position sensing' X value.
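For the two-microphone option, the maths behind the position sensing could be sketched outside Max like this (a hypothetical illustration, not a patch: in practice this would sit in something like a gen~ or mxj stage). The idea is that the lag at which the two mic buffers best correlate tells you which mic the sound reached first, which you can then scale into an X value:

```python
# Hypothetical sketch: estimate which side a sound came from by
# cross-correlating two short microphone buffers and finding the
# best-matching lag in samples.

def cross_correlation_lag(a, b, max_lag):
    """Return the lag (in samples) at which b best matches a.
    A positive lag means b is delayed relative to a, i.e. the
    source is nearer microphone a."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = 0.0
        for i in range(len(a)):
            j = i + lag
            if 0 <= j < len(b):
                score += a[i] * b[j]
        if score > best_score:
            best_score, best_lag = score, lag
    return best_lag

# A toy click that reaches mic B three samples after mic A:
mic_a = [0.0] * 8 + [1.0, 0.5, 0.25] + [0.0] * 8
mic_b = [0.0] * 11 + [1.0, 0.5, 0.25] + [0.0] * 5
lag = cross_correlation_lag(mic_a, mic_b, max_lag=6)  # lag of 3 samples
```

Normalising the lag, e.g. `(lag + max_lag) / (2 * max_lag)`, would then give an X value between 0 and 1 for the point message.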
The idea is then that the user or users can create their own painting that is constantly evolving (did I mention that I'd like each mark made on the screen to have a set lifespan, so everything is constantly fading as new marks are being made?).
Can anyone suggest a way to get a frequency reading, in numbers, from an incoming signal, to be used as my Y value? I'm also hoping to use the frequency to somehow determine colour, if anyone has any ideas?
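In case it helps anyone answering: what I'm after could be sketched in plain numbers like this (a hypothetical illustration only, pitch-tracking objects like fiddle~ or sigmund~ presumably do this far more robustly than a crude zero-crossing count):

```python
# Hypothetical sketch (not a Max object): rough frequency estimate from
# a buffer of samples by counting zero crossings. Each full cycle of a
# sine wave crosses zero twice.

import math

def zero_crossing_freq(samples, sample_rate):
    crossings = 0
    for prev, cur in zip(samples, samples[1:]):
        if prev < 0.0 <= cur or prev >= 0.0 > cur:
            crossings += 1
    duration = len(samples) / sample_rate
    return crossings / (2.0 * duration)

# 0.1 s of a 440 Hz sine at 44.1 kHz gives an estimate near 440:
sr = 44100
sig = [math.sin(2 * math.pi * 440 * n / sr) for n in range(sr // 10)]
freq = zero_crossing_freq(sig, sr)
```

The same number could then be scaled twice: once into screen coordinates for Y, and once into a hue for the colour, so pitch drives both.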
Also, after grayscaling and subsampling a live video stream down to, say, 64x1 pixels and then applying a threshold, how could I actually output, again in numbers, the X position of the 'movement'? And then there's the whole fading thing...
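Again, just to make the two remaining pieces concrete (a hypothetical sketch of the logic outside Jitter, with made-up names): finding the X index of the brightest above-threshold cell in the 64x1 row, and multiplying every mark's alpha down each frame so old marks die off:

```python
# Hypothetical sketch: (1) X position of 'movement' in a thresholded
# 64x1 row, and (2) per-frame fading of marks with a set lifespan.

def movement_x(row, threshold=0.5):
    """Return the index of the brightest above-threshold cell, or None."""
    best = None
    for x, v in enumerate(row):
        if v > threshold and (best is None or v > row[best]):
            best = x
    return best

def fade(marks, decay=0.95, floor=0.01):
    """Scale every (x, y, alpha) mark down and drop the ones that died."""
    return [(x, y, a * decay) for (x, y, a) in marks if a * decay > floor]

row = [0.0] * 64
row[41] = 0.9                      # movement detected at pixel 41
x = movement_x(row)                # -> 41
marks = fade([(10, 20, 1.0), (5, 5, 0.005)])  # second mark fades out
```

Inside Jitter I'd guess the fading half could be done by multiplying the whole output matrix by a value just below 1.0 each frame (jit.op with @op *), rather than tracking marks individually, but I'd welcome better ideas.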
Any and all help would be really appreciated,