Sonification with webcam

Barbara Lefranc's icon

Dear all,

I am a complete beginner with Max/MSP. I would like to use my webcam and sonify the X and Y position of my upper body. For example, when I move to the left or to the right, I would like to hear the sound of raindrops, perhaps as a function of the velocity, unless Max/MSP handles that automatically. I would like to use the gesture follower package for gesture recognition, but I don't understand how to link the sound and the gesture. I can only get the webcam up with jit.pwindow, and I tried to combine the mouse example from the gesture follower package with the webcam, but it doesn't work. Could I have some suggestions?

Thank you !

Vincent Goudard's icon

Hi Barbara,
there are many (many, not to say infinite) ways to map gesture to sound in Max.
First of all, a webcam sends you a matrix of pixels, which includes everything in its field of view, not just your body.
If you want to go this way, you can either map that matrix of pixels directly to sound (various techniques exist here) or analyse the image matrix to get higher-level info (optical flow, shape recognition, contour extraction, etc.). Check out the cv.jit package if you want to go in that direction (but know that it is not an easy way).
If you want to map your body as a skeletal model (limbs and joints), you'd be better off using a Kinect rather than a webcam; the Kinect will provide you with that information directly.
The gesture follower package (from IRCAM, I guess) is mostly intended to analyse and recognize temporal parameter patterns (not specifically gestures, actually). Its use cases usually rely on accelerometers rather than webcam data. So in short, if you want to use it with a webcam, you'll probably have to narrow the incoming matrix data from the webcam down to a handful of higher-level analysis values. If you can grab a Kinect, you could feed the data about your body joints to the gesture follower somehow. But if you're not interested in the image itself, only in your own movements, you'd be better off using accelerometers and/or gyroscopes.
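To illustrate what "narrowing the matrix data down to higher-level values" can mean in practice, here is a minimal Python sketch (not Max code; the function name, the 0.1 change threshold, and the synthetic frames are all invented for the example). It reduces two grayscale frames to three numbers — the motion centroid's x/y and the amount of motion — which are the kind of compact features you could then map to a sound parameter or feed to a follower:

```python
import numpy as np

def motion_features(prev_frame, frame, thresh=0.1):
    """Reduce two grayscale frames (2-D float arrays in 0..1) to a few
    high-level motion features: the centroid x/y of the moving pixels
    (normalized 0..1) and the fraction of pixels that changed."""
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    moving = diff > thresh                # pixels that changed between frames
    amount = moving.mean()                # overall motion energy, 0..1
    if not moving.any():
        return 0.5, 0.5, 0.0              # no motion: report centre, zero energy
    ys, xs = np.nonzero(moving)
    h, w = frame.shape
    cx = xs.mean() / (w - 1)              # 0 = left edge, 1 = right edge
    cy = ys.mean() / (h - 1)              # 0 = top edge, 1 = bottom edge
    return cx, cy, amount

# Toy example: a bright square "moves" from the left side to the right side.
a = np.zeros((120, 160)); a[50:70, 10:30] = 1.0
b = np.zeros((120, 160)); b[50:70, 130:150] = 1.0
cx, cy, amount = motion_features(a, b)
print(round(cx, 2), round(cy, 2), round(amount, 3))   # prints: 0.5 0.5 0.042
```

In Max you would do the equivalent reduction with jit.* or cv.jit objects, but the idea is the same: turn tens of thousands of pixels into two or three continuous control values per frame.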
I hope this makes some sense to you and helps...
-Vincent

Barbara Lefranc's icon

Thank you very much for your response, Vincent. I understand that the webcam may not be the easiest way, so I am now trying to use my computer's mouse for sonification. However, I am having problems again. I used the gesture follower example, in which the computer mouse is recognized very well. I would like to map a sound that would let me, after learning a letter for example, reproduce the same letter. To achieve this, I need to determine a threshold beyond which the sound is modified by filtering with an if. I saw that there is a thresh object, but some people use togedge. Would you know how to do that? To begin with, I tried to add a polybuffer~ into which I could drag files, but it doesn't work. Also, I'd like to track the lowest and highest values of the mouse, but I can't find an object that would let me do that.
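For reference, the two things being asked about here — reacting once when a value crosses a threshold (an edge, rather than on every sample above it) and tracking a running minimum/maximum — can be sketched outside Max like this (illustrative Python only; the function name and the sample values are made up, and this does not claim to reproduce the exact behaviour of any Max object):

```python
def crossings_and_range(values, threshold):
    """Scan a stream of controller values (e.g. mouse x positions).
    Return the indices where the value rises above `threshold`
    (a rising edge, i.e. a 0->1 transition, not every sample above it),
    plus the running minimum and maximum seen so far."""
    above = False
    edges = []
    lo = hi = values[0]
    for i, v in enumerate(values):
        lo, hi = min(lo, v), max(hi, v)   # running min/max of the stream
        if v > threshold and not above:
            edges.append(i)               # fire only on the crossing itself
        above = v > threshold
    return edges, lo, hi

edges, lo, hi = crossings_and_range([0.1, 0.4, 0.7, 0.6, 0.2, 0.8], 0.5)
print(edges, lo, hi)   # prints: [2, 5] 0.1 0.8
```

The point of the edge test is that a held value above the threshold triggers only once; the sound change fires at the crossing, not continuously.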

Thank you in advance,

-Barbara

gf try 3.maxpat
Max Patch