Sound Trigger with Kinect
Hello! Forgive me for my struggles with something that will most likely sound simple.
I am trying to use a Kinect to gather information for a simple but accurate sound trigger in a video installation project.
Seemed much easier before I began.
Any guidance would be sincerely appreciated.
What have you done and what doesn't work as expected?
I have researched how to use sound triggers, and everything I've found has been an incomplete demo that shows the general idea more than a working approach. I'm also unable to get the Kinect to display consistently, let alone sync that information with what I'm trying to do in Max. So basically I have a series of failed attempts at finding the proper way to create a simple sound trigger with a Kinect.
How do you want your trigger to operate? Trigger when a specific spot in 3D space is crossed? When a certain limb extends? Or...?
What Kinect framework do you have running: jit.openni, dp.kinect or freenect? Perhaps Synapse?
This is pretty broad, can you focus it down a bit? When you say 'sound trigger', what do you want to do exactly? Is it something like Synapse (synapsekinect.tumblr.com), or do you want to trigger things when people move around a room or use gestures etc etc.
How far have you got? Posting the patch you are working on may help.
Hello, I think it sounds broad because I view it as a basic trigger. What I wish to occur is a sound response to a person entering a designated space. The evolution of the idea would eventually have multiple sounds triggered by the number of people in the space, but for now simple is best.
I am using Synapse, but am obviously open to alternatives. I just want to get this to work without imploding.
Thank you
So you want to:
- identify when a person enters a space
- count how many people are in the space
Do you want to track their movement around the space? Do you want to use the video feed?
Sounds like Kinect is overkill for this if you want to keep it simple. Maybe use an overhead camera and do blob tracking with the cv.jit library. You might also want to look at Zach Poff's video trigger system.
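To give a rough idea of the logic, here's a sketch in Python with OpenCV rather than Max (in a patch you'd do the equivalent with something like jit.grab feeding cv.jit's blob objects); the camera index and the blob-size threshold are placeholders you'd tune on site:

```python
# Overhead-camera presence trigger, sketched in Python/OpenCV (not Max).
# MIN_AREA is a placeholder value to tune for your camera height and lens.
import cv2

cap = cv2.VideoCapture(0)                        # overhead webcam
bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
MIN_AREA = 5000                                  # ignore blobs smaller than this (pixels)

occupied = False
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                       # foreground = whatever moved into the space
    mask = cv2.medianBlur(mask, 5)               # knock out speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    person_present = any(cv2.contourArea(c) > MIN_AREA for c in contours)

    if person_present and not occupied:
        print("trigger sound")                   # in practice: send OSC or MIDI into Max here
    occupied = person_present

    cv2.imshow("mask", mask)
    if cv2.waitKey(1) == 27:                     # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```

The same trigger-on-rising-edge idea (only fire when the space goes from empty to occupied) is what you'd want in the Max patch too, so one person doesn't retrigger the sound every frame.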
You could use a Kinect suspended high overhead looking down in your space. The higher, the wider the floorspace that can be watched.
The same is true for a webcam.
dp.kinect works well on Windows with the Kinect.
freenect is an open-source library for the Kinect that you could use on a Mac.
Synapse is illegally distributed software and I don't recommend it.
jit.openni works, but the middleware library it needs to talk to the Kinect is no longer distributed.
Overhead, you could use simple blob tracking. You would know the distance to the floor; anything at a shorter distance is probably a person.
You could then react to one or more people. This would work in the dark or light. A webcam is a little less reliable and doesn't easily work in the dark.
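Just to make the depth-threshold idea concrete, here's a minimal sketch in Python (NumPy + OpenCV) rather than Max; in a patch this would be a jit.op comparison on the depth matrix feeding cv.jit's blob objects or similar, and the floor distance and blob-size numbers below are assumptions you'd measure and tune in the actual space:

```python
# Counting people from an overhead Kinect depth frame: everything that is
# significantly closer to the sensor than the floor is treated as a body,
# then connected blobs of sufficient size are counted.
# FLOOR_MM, HEAD_CLEARANCE and MIN_PIXELS are placeholders to tune on site.
import numpy as np
import cv2

FLOOR_MM = 3000        # measured sensor-to-floor distance, in mm
HEAD_CLEARANCE = 400   # anything this much closer than the floor counts as a person
MIN_PIXELS = 2000      # ignore blobs smaller than this

def count_people(depth_mm: np.ndarray) -> int:
    """depth_mm: 2D array of depth values in millimetres (0 = no reading)."""
    person_mask = ((depth_mm > 0) & (depth_mm < FLOOR_MM - HEAD_CLEARANCE)).astype(np.uint8)
    num_labels, _, stats, _ = cv2.connectedComponentsWithStats(person_mask)
    # label 0 is the background; count components big enough to be a body
    return sum(1 for i in range(1, num_labels) if stats[i, cv2.CC_STAT_AREA] >= MIN_PIXELS)

if __name__ == "__main__":
    # synthetic test frame: flat floor with two "people" closer to the sensor
    frame = np.full((480, 640), FLOOR_MM, dtype=np.uint16)
    frame[100:180, 100:180] = 1600
    frame[300:380, 400:480] = 1700
    print(count_people(frame))   # -> 2
```

In practice you'd feed the function real depth frames from whichever framework you settle on (dp.kinect or freenect), and use the count to choose which sound to play.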
The best solution is jit.openni. If you want it on your Mac, the Zigfu installer is still available: http://zigfu.com/en/downloads/browserplugin/. This is how I have installed it on my machine and a dozen others, and it works.
Zigfu is also illegally distributed software. I didn't think to mention it as it's not often used.
dp.kinect on Windows and freenect as a fallback (if you are on a Mac) are both workable solutions. Both are legal and supported by their developers and the community.