Dynamic Sound Generation Based on Face Tracking in Max/MSP
Hey, how can I generate sound in Max based on the distance between multiple tracked faces? I'm working on a patch that uses cv.jit.faces to track multiple faces in real time.
I want to:
Extract the X and Y coordinates of each detected face.
Calculate the distance between each pair of faces (rough sketch of this right after the list).
Map each distance to sound parameters (mapping sketch at the end of the post), such as:
    Frequency (closer faces = higher pitch, farther faces = lower pitch)
    Amplitude (volume changes with distance)
    Timbre (different textures depending on face proximity)
Generate sound dynamically using oscillators (cycle~, saw~, etc.).
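To make the distance step concrete, here's a rough sketch of what I'm imagining inside a [js] object. I'm assuming I can first flatten the cv.jit.faces output matrix into one list of bounding boxes (left, top, right, bottom per face) with something like [jit.spill] — if that's the wrong approach, please say so:

```js
// sketch for a [js] object: expects a flat list of face bounding boxes
// (left, top, right, bottom per face), e.g. from [jit.spill]
inlets = 1;
outlets = 1;

function list() {
    var v = arrayfromargs(arguments);

    // centre point of each face rectangle
    var centers = [];
    for (var i = 0; i + 3 < v.length; i += 4) {
        centers.push([(v[i] + v[i + 2]) * 0.5, (v[i + 1] + v[i + 3]) * 0.5]);
    }

    // Euclidean distance for every pair of faces
    var out = [];
    for (var a = 0; a < centers.length; a++) {
        for (var b = a + 1; b < centers.length; b++) {
            var dx = centers[a][0] - centers[b][0];
            var dy = centers[a][1] - centers[b][1];
            out.push(a, b, Math.sqrt(dx * dx + dy * dy));
        }
    }

    // output triples: faceA index, faceB index, distance (in pixels)
    if (out.length) outlet(0, out);
}
```

I'd then split the triples with something like [zl.iter 3] to drive the synthesis section.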
What would be the best way to structure this? Are there any specific objects or approaches that would make the distance calculations and sound mapping more efficient? Any suggestions or example patches would be greatly appreciated!
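For the frequency mapping, this is roughly what I have in mind: a plain linear map (similar to what I'd otherwise do with [scale] or [expr]), inverted so that smaller distances give higher pitches. All the ranges and names here are placeholders:

```js
// hypothetical mapping: distance in -> frequency out, inverted
// so that closer faces produce a higher pitch
function distanceToFreq(d, dMin, dMax, fLow, fHigh) {
    var t = (d - dMin) / (dMax - dMin); // normalise to 0..1
    t = Math.max(0, Math.min(1, t));    // clamp to the expected range
    return fHigh - t * (fHigh - fLow);  // small distance -> high frequency
}

// e.g. distanceToFreq(80, 0, 320, 110, 880) gives a value I'd send
// to the frequency inlet of a [cycle~] or [saw~]
```

Amplitude would get a similar map feeding a [*~]; for the timbre part I'm less sure.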