I am in the process of developing a Max patch for an installation. The installation uses Jitter blob tracking to track an individual who has an infra-red emitter attached to a hat. The installation will only have one person in it at a time. I have managed to produce a patch that tracks the blob on an xy axis. However, I am having trouble figuring out how to translate that data into something that will manipulate the volume faders (one for each of the six speakers). I have attached the patch below (IMT4.txt), although odds are it won't work entirely for you (I am using a modified PS3 Eye as an IR camera).
The idea is simple. Depending on where the blob is found, the volume faders on the 6 speakers will change accordingly: the closer the blob is to a speaker, the higher that speaker's fader will be. However, all of the faders will sit at a 20% minimum so that a nice ambient bed of sound can always be heard. The x and y coordinates range from 1 to 300. Each speaker is labelled based on its location in the booth. For example, the front right speaker's extreme location would be x = 300, y = 300. The centre speakers sit closer to 150 on one axis, so the centre right speaker's extreme location would be x = 150, y = 300.
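To make the mapping concrete, here is a rough sketch of the math in Python (in Max you would build the same thing with `expr`/`scale` objects feeding your faders). The speaker positions are my guess from the description above, with x as front/back and y as left/right; adjust them to your actual booth layout. Each gain falls off linearly with the blob's distance from the speaker and is scaled so it never drops below the 0.2 floor:

```python
import math

# Hypothetical speaker positions on the 1-300 grid, inferred from the
# post (front right at 300,300; centre right at 150,300) -- adjust to
# match your real booth layout.
SPEAKERS = {
    "front_left":   (300, 1),
    "front_right":  (300, 300),
    "centre_left":  (150, 1),
    "centre_right": (150, 300),
    "rear_left":    (1, 1),
    "rear_right":   (1, 300),
}

MIN_GAIN = 0.2                    # the 20% ambient floor
MAX_DIST = math.hypot(299, 299)   # farthest a blob can be from a corner speaker

def gains(x, y):
    """Map one blob position (x, y) to a 0.2-1.0 gain per speaker."""
    out = {}
    for name, (sx, sy) in SPEAKERS.items():
        d = math.hypot(x - sx, y - sy)
        # Linear falloff: 1.0 right at the speaker, MIN_GAIN at MAX_DIST.
        out[name] = MIN_GAIN + (1.0 - MIN_GAIN) * (1.0 - d / MAX_DIST)
    return out
```

Standing directly on the front right speaker (`gains(300, 300)`) would give that speaker a gain of 1.0 while the rear left speaker bottoms out at 0.2. A linear falloff is the simplest choice; you might prefer an inverse-square or cosine curve for a smoother sense of movement between speakers.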
As of now, I have not been able to turn the above idea into a working patch. Any ideas on how to start programming this? Any pointers would be most helpful.