I'm working on a project in which I'd like to map the dynamics/intensity of various points of the stereo field to LED-driven lanterns. A nighttime symphony of frogs, specifically. It's kind of the complement to something I've done a lot of before: chopping up a Jitter image into several vertical slabs and then using the brightness etc. to control sounds.
How I thought I'd do it is something like this:
(a) play the stereo recording in Max (sfplay~ or similar) - it's quite spatial because it was recorded with a Sonic Studios DSM mic
(b) somehow, on the fly, divide it into 14 channels, left to right (these won't be going out to actual outputs, though, just used for numbers)
(c) measure the intensity of each of these streams and spit it out as a number, say 0. to 1.
(d) translate each of these numbers into 0-255 and send each one out via an Arduino Mega to 14 LEDs to control dimming.
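For (d), this is roughly the Mega sketch I have in mind: Max packs the 14 scaled values into a list, prepends a start byte, and shoves the bytes out the [serial] object; the Mega just PWMs them straight to the LED pins. The 0xFF start byte, the baud rate, and the pin numbers are all placeholders I picked (and I'd cap the brightness values at 254 so 255 can stay reserved as the start byte):

```
// Receive 14 brightness bytes (0-254) from Max over serial and PWM them to 14 LEDs.
// Frame format (my own invention): one 0xFF start byte, then 14 data bytes.
// Pin numbers are placeholders -- any of the Mega's PWM pins should work.

const int NUM_LEDS = 14;
const int ledPins[NUM_LEDS] = {2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 44, 45};

void setup() {
  Serial.begin(57600);               // match whatever baud rate the [serial] object uses
  for (int i = 0; i < NUM_LEDS; i++) {
    pinMode(ledPins[i], OUTPUT);
  }
}

void loop() {
  // Wait until a full frame (start byte + 14 values) has arrived; anything that
  // isn't the 0xFF start byte gets discarded, which resyncs the stream.
  if (Serial.available() > NUM_LEDS && Serial.read() == 0xFF) {
    for (int i = 0; i < NUM_LEDS; i++) {
      int b = Serial.read();
      if (b >= 0) {
        analogWrite(ledPins[i], b);  // 0 = off, 254 = essentially full brightness
      }
    }
  }
}
```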
I think I have most of it figured out, but I'm banging my head against how to analyze the stereo file and divert it into 14 areas.
Essentially, what I want is this: if there are discrete sounds/attacks at the same time, say far to the left, 3/4 of the way to the right, and all the way to the right, but nowhere else, then the LEDs in those locations light up. Preferably through dimming, so they flicker.
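For what it's worth, here's the per-window math I've been picturing, written out as plain C++ just to make the idea concrete (the window size, the RMS-balance pan estimate, and the 14-zone snap are all guesses on my part):

```
// Toy sketch of the per-window analysis I've been imagining (plain C++, not Max).
// For each window: RMS of L and R -> one pan position plus an overall level,
// snapped to one of 14 zones. Note it only finds ONE position per window.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

const int NUM_ZONES = 14;

// Analyze one window of stereo samples and return 14 zone intensities, roughly 0..1.
std::vector<float> analyzeWindow(const std::vector<float>& left,
                                 const std::vector<float>& right) {
    float sumL = 0.f, sumR = 0.f;
    for (size_t i = 0; i < left.size(); ++i) {
        sumL += left[i] * left[i];
        sumR += right[i] * right[i];
    }
    float rmsL = std::sqrt(sumL / left.size());
    float rmsR = std::sqrt(sumR / right.size());

    float level = std::max(rmsL, rmsR);                             // overall intensity of the window
    float pan = (rmsL + rmsR > 0.f) ? rmsR / (rmsL + rmsR) : 0.5f;  // 0 = hard left, 1 = hard right

    std::vector<float> zones(NUM_ZONES, 0.f);
    int zone = std::min(NUM_ZONES - 1, (int)(pan * NUM_ZONES));
    zones[zone] = level;                 // all of this window's energy lands in ONE zone
    return zones;
}

int main() {
    // Fake test window: a 440 Hz tone panned mostly to the right.
    std::vector<float> L(1024), R(1024);
    for (int i = 0; i < 1024; ++i) {
        float s = std::sin(2.f * 3.14159265f * 440.f * i / 44100.f);
        L[i] = 0.2f * s;
        R[i] = 0.8f * s;
    }
    std::vector<float> zones = analyzeWindow(L, R);
    for (int z = 0; z < NUM_ZONES; ++z)
        std::printf("zone %2d: %.3f\n", z, zones[z]);
    return 0;
}
```

That test tone lands around zone 11, which is fine, but the obvious problem is that one balance estimate per window only ever gives a single position, so two frogs calling at different spots in the field just get averaged into one LED - which is exactly the part I can't figure out.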
Any and all ideas welcome. I could be missing something easy - I hope I am.