Quick Kinect & jit.freenect.grab question
Hi
I've just started using MaxMSP after being introduced to it through university, and I'm working on a personal project where I use jit.freenect.grab to follow a red object, which then draws onto a second LCD.
It seems like I could use a Kinect to achieve the same thing, and it might even be easier: perhaps by following the location of a hand when it's held forward at a different depth from the body, it would hopefully be easy to get the x and y coordinates and feed them into the other LCD.
However, if anyone could offer any advice, direction or help, it would be greatly appreciated, as I'm nervous about buying such an expensive device only to find out that I can't use it, or that it's incredibly difficult.
Example projects or source code would be even better, but it may be too early for these things to exist.
Certainly sounds possible, since Memo Akten has already made a 3D drawing tool with it ( http://memo.tv/first_tests_with_kinect_gestural_drawing_in_3d ). The difficulty comes with actually figuring out the right tools for the tracking. Of course you'll be using the cv.jit stuff mostly, but really getting the data into the right shape for proper analysis will take some work. You'll probably end up using cv.jit.blobs and its centroids output, but I haven't done much feature tracking with the Kinect yet...still waiting to get my own.
Tracking things will be way easier with the depth image than with the actual RGB camera, because you won't be subject to things like lighting changes and complicated background environments. The Kinect depth data is rather noisy, though, so it won't be a walk in the park.
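To make the depth-based idea concrete, here's a minimal sketch (in Python/NumPy rather than Max, purely to illustrate the logic you'd build with jit operators): threshold the depth frame so only pixels nearer than some cutoff survive, then take the centroid of those pixels as the hand position. The function name, threshold value, and toy frame are all made up for illustration; they're not part of jit.freenect.grab or cv.jit.

```python
import numpy as np

def track_nearest_blob(depth, threshold_mm=900):
    """Return the (x, y) centroid of all pixels closer than threshold_mm.

    depth: 2-D array of depth values in millimetres (0 = no reading),
    roughly what a Kinect depth frame gives you. Hypothetical helper --
    it just shows tracking by depth instead of by colour.
    """
    # Keep only valid pixels inside the tracking volume
    mask = (depth > 0) & (depth < threshold_mm)
    if not mask.any():
        return None  # nothing close enough to track
    ys, xs = np.nonzero(mask)
    # Centroid of the near pixels = rough hand position
    return float(xs.mean()), float(ys.mean())

# Toy frame: a small "hand" of near pixels in an otherwise distant scene
frame = np.full((480, 640), 2000)
frame[100:120, 300:320] = 800   # 20x20 patch at ~0.8 m
print(track_nearest_blob(frame))  # -> (309.5, 109.5)
```

In a Max patch the equivalent would be a jit.op threshold feeding cv.jit.blobs (or a simple mean of the masked coordinates), but the principle is the same: depth gives you segmentation for free, which colour tracking has to fight for.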
Not sure if this will help, but I posted a few examples of things you can do with the Kinect here: http://blairneal.com/blog/jit-freenect-examples/
Thanks for the examples; they've helped me get started and I've used them as a jumping-off point. I've been trying to recreate something like your recent blog post about the Kinect, but so far I haven't had any success working from the picture you uploaded.
I was hoping someone here might be able to take a look at what I've done and offer some advice. Currently I'm colour-tracking a black glove whenever it enters the depth field, but that isn't really using the Kinect to the best of its abilities.
http://stephendavidbeck.wordpress.com/2010/12/22/too-much-fun-at-my-job-part-2/ is much closer to what I want to achieve: I'd like the patch to follow my hands and output the x and y data onto the canvas.
The problems so far seem to be that:
- Adding the cv.jit objects hasn't worked.
- My depth map is very temperamental, whereas the patch I'm trying to recreate only has one slider. At the moment my hand can pass all the way through the depth region and out the other side, when I want tracking to start as my hand enters the region and stop when I pull it back out.
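One common fix for the "passing through and out the other side" problem is to track within a depth *band* rather than past a single threshold: a pixel only counts while it sits between a near and a far limit, so pushing your hand past the far limit drops it out of tracking just as pulling it back does. A tiny sketch of that idea (NumPy standing in for jit.op with > and < operators; the NEAR/FAR values are illustrative and would need tuning for your setup):

```python
import numpy as np

# Assumption: depth values in millimetres. Two thresholds define the
# active band, the equivalent of two sliders instead of one.
NEAR, FAR = 600, 1000

def in_band(depth, near=NEAR, far=FAR):
    """Boolean mask of pixels inside the active depth band."""
    return (depth >= near) & (depth <= far)

# 400 mm is too close, 700 mm is inside the band, 1200 mm is past it
depth = np.array([[400, 700, 1200]])
print(in_band(depth))  # [[False  True False]]
```

In Max terms this is two jit.op thresholds ANDed together before the blob/centroid stage, which should give you the enter-and-pull-out behaviour you're after.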