Converting movement to sound! Advice please?
So I'm very new to Max/MSP/Jitter..
but I wish to use a combination of the three, as well as cv.jit - to create a project for my Art course.
I plan to have a system where the audience create the music: their motion is detected and somehow converted into audio signals. Their movement creates the music..
I also want to pair this with a live video feed streaming the audience back to themselves, which would ideally be manipulated using the software as well.
I have a rough knowledge of Max now.. and was advised - by the cycling74 team - to check out cv.jit, which I now have.
So with cv.jit, and the Max software - I wish to create this system, but I was wondering..
Has anyone made any systems like this that they could share with me, or advise me upon?
As I said, I'm very new to Max, and I have limited time to create this system. I've seen examples of what I'm looking for on YouTube etc., but with no patches or other help attached.
While I know it's rude to expect someone to hand over their whole project to me.. I am desperate for help.. advice.. anything!
Thanks a lot!
Samuel
I am doing work like this. Short answer: change your goals. Extracting meaningful information from large groups of moving people is off the table right now. Unless you want to put bracelets on everybody (and pay at _least_ $25 per bracelet), the current state of the art is limited to about 4 tracked people.
Hmmm I see,
Does it make a difference if I don't need to track precise movements, but rather just general shifts in space between frames, or something similar? I would only be dealing with small groups, as this would be set up in an exhibition situation..
I don't know - I have the idea somewhere in my mind and it needs to be released!
"General shifts in space" is muddy. If you come up with a clearer definition maybe we can help you.
You may have more success with microphone triangulation or sensors in the floor - it depends on your budget and presentation space.
Apologies..
something like this.. http://www.youtube.com/watch?v=g7R769OPl3g
Couldn't you put up a few webcams that dump into a few matrices, and use the numbers in the matrices either as frequency inputs to oscillators or as float data to control effect parameters on audio files you load into buffers?
it would change the sound based on the video which is based on the movements.
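To make that suggestion concrete, here is a rough sketch of the arithmetic involved: take the overall pixel change between two frames and map it onto an oscillator frequency. In Max this would be something along the lines of jit.grab into jit.op @op absdiff into a matrix-averaging object, with the result scaled into a frequency for cyclo~ or similar; the plain-Python version below is just an illustration of the numbers, with made-up frame data and a frequency range I picked arbitrarily.

```python
# Illustration of mapping "amount of motion" between two camera frames
# onto an oscillator frequency. Frames are fake 4-pixel greyscale lists;
# the 110-880 Hz range is an arbitrary choice for the example.

def motion_amount(prev, curr):
    """Mean absolute pixel difference between two greyscale frames (0-255)."""
    diffs = [abs(a - b) for a, b in zip(prev, curr)]
    return sum(diffs) / len(diffs)

def motion_to_freq(motion, lo=110.0, hi=880.0):
    """Linearly map a 0-255 motion value onto a frequency in Hz."""
    return lo + (hi - lo) * min(motion, 255) / 255

# Two tiny "frames": a small change in one corner of the image.
frame_a = [10, 10, 10, 10]
frame_b = [10, 10, 10, 61]

m = motion_amount(frame_a, frame_b)   # (0 + 0 + 0 + 51) / 4 = 12.75
print(motion_to_freq(m))              # 148.5 Hz
```

The point of the sketch is only that the mapping itself is trivial; as the replies below note, the hard part is whether the resulting numbers carry any musical information.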
t3mpuser, you have succinctly described the difference between data and information. Yes, he could put a nice rectangle on the wall saying "audio is generated by camera activity", but the informational content would only have value insofar as it detected movement in the room. It would, for example, be next to impossible to create rhythms without drastic action on the part of the participants (i.e. synchronous bending at the waist to the floor, which would reduce the "occupied" portion of the frame by 50%).
Yes, it is possible to use a matrix from a camera to control many things. The trick is to get it to differ from a random number generator.
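One common way to push camera numbers away from noise and toward something usable is to smooth the raw motion value and gate out small fluctuations, so only deliberate movement reaches the synthesis parameters. In Max this is roughly a [slide] object followed by a threshold test; the sketch below is a hypothetical plain-Python version, with the smoothing factor and threshold chosen arbitrarily for the example.

```python
# One way to make camera-derived values "differ from a random number
# generator": exponential moving average to smooth out frame-to-frame
# jitter, then a gate so small fluctuations are treated as silence.
# alpha and threshold are illustrative values, not recommendations.

def smooth_and_gate(values, alpha=0.2, threshold=5.0):
    """EMA-smooth a stream of motion values, zeroing anything under threshold."""
    out, ema = [], 0.0
    for v in values:
        ema = alpha * v + (1 - alpha) * ema
        out.append(ema if ema >= threshold else 0.0)
    return out

# Sensor jitter (small, random-looking values) followed by a real gesture.
raw = [1, 2, 1, 2, 40, 45, 50, 48]
print(smooth_and_gate(raw))
```

With this kind of filtering, the jitter at the start produces no output at all, while the sustained gesture comes through as a smoothly rising control value.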
My teacher in college did things like this. He required half an hour of prep time before every performance - if he was lucky.
Re video, check the comments on YouTube. Start with cv.whatever (http://jmpelletier.com/cvjit/). Again, note that handling more than one person at a time is a big problem for that package.