Kinect and interactive 3d models
hey my jitter friends!
Me: I'm an interactive arts student at Concordia University in Montreal, QC. We've just entered Christmas break... and my lab technician has been kind enough to lend me a Kinect for the period.
I want to do this.
Using a Kinect, I want to be able to map a person's position in a room (for this example, let's say a dancer). Furthermore, I want to be able to control a 3D model as in the final example of the Physics patch-a-day examples. But it would be live and would (ideally) correspond exactly to the dancer.
So to summarize: the dancer would control the lego man's position in a space, and a projection of the lego man would mirror the dancer exactly. This is more of a general how-to, a call for tips on how to achieve it. Any ideas? Anything to point me towards? Am I crazy? Just the right amount, right?
Be well my jitter friends,
Thanks in advance for the insight
Marlon
I am a headintheclouds and can't link (or figure out editing), sorry.
Hey there, out of curiosity, do you have a Windows OS and the game "Garry's Mod" at hand? It does just that! Though ultimately I'd like to do this in Max too, Garry's Mod seems like a lot of fun:
http://www.youtube.com/watch?v=avgqr75MVcg
http://www.youtube.com/watch?v=cKDRcHab6mc
Going back to Max only... this is probably completely doable. I'm not experienced at all with the Kinect and/or Jitter; I just acquired a Kinect recently and want to do something similar to what you describe. The Jitter physics tutorials are an ideal starting point. To acquire Kinect data: https://cycling74.com/wiki/index.php?title=Kinect_Page
And I suppose that by looking through these very forums you can also find a lot of ready-to-use things?
Latency is your biggest problem when tracking the whole body. Plus, getting the calibration zone right is tricky: at least 3 meters from the camera and no further than 5 or 6 meters, if I'm correct. It's a small zone for a dancer to move in.
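To put a rough number on that zone (a back-of-the-envelope sketch only, assuming the original Kinect's nominal 57 degree horizontal field of view; check your sensor's specs), a few lines of Python give the usable width at a given distance:

import math

def coverage_width(distance_m, fov_deg=57.0):
    # horizontal width of the Kinect's view cone at a given distance
    return 2.0 * distance_m * math.tan(math.radians(fov_deg / 2.0))

for d in (3.0, 4.0, 5.0):
    print("at %.0f m the view is roughly %.1f m wide" % (d, coverage_width(d)))

So even near the far end of the tracking range you only get a few metres of side-to-side room.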
Projecting the image onto the dancer will require your beamer and Kinect not to be too far away from each other. Perspective distortion is the greatest problem here; it can easily be fixed by using good mapping software such as MadMapper.
Synapse is the software to send out all the joint position data, though getting it to connect to the bone structure of a 3d model in Max can be quite a challenge.
All very possible though :)
With the caveat that I only partly know what I'm talking about here, breaking the process down into coarse steps might go something like:
1. Establish a means of tracking Kinect joint positions in 3D space and getting them into Max via one of the Kinect drivers/externals outlined above by Vichug. The Synapse system might be the easiest, at least as far as getting started (see the sketch after this list).
2. In OpenGL, map primitives (spheres or whatever) to track the Kinect joint positions and thus create a rudimentary humanoid puppet avatar. This should give you some idea of how the coordinates relate and whether they are mapped properly to your primitives, etc.
3. Either elaborate step 2 to get a more complex model by incorporating the jit.phys objects to link up the various joints, or directly attempt to re-map the lego man (via the patch supplied) to move according to the Kinect data. This part will probably be more painstaking and require study of the patch(es) and probably of the coordinate systems, but having gone through the first two steps it should hopefully be clearer what you actually need to do to get this happening...
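As a rough illustration of step 1 (not the only way to do it), here is a minimal sketch, outside Max, of listening to the joint positions Synapse broadcasts over OSC; inside Max the equivalent would be [udpreceive] feeding [route]. I'm going from memory for the port number and address names, so double-check them against the Synapse docs:

# minimal sketch using the python-osc package (pip install python-osc)
# assumption: Synapse is running locally and sending joint data on port 12345,
# with addresses like /righthand_pos_body carrying x, y, z floats
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def joint_handler(address, *xyz):
    # e.g. "/righthand_pos_body 0.12 0.80 1.95"
    print(address, xyz)

dispatcher = Dispatcher()
for joint in ("head", "torso", "righthand", "lefthand", "rightelbow", "leftelbow"):
    dispatcher.map("/{}_pos_body".format(joint), joint_handler)

# note: if I remember right, Synapse also wants periodic "trackjointpos" messages
# sent back to its input port to keep streaming; that part is omitted here
server = BlockingOSCUDPServer(("127.0.0.1", 12345), dispatcher)
server.serve_forever()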
HTH
Thanks @Vichug, I will look into that more. Sadly, I don't have access to any Windows OS!
@Andro, the space is really only that big? I had no idea it was so small. Thank you for the MadMapper suggestion; I hadn't even heard of it. I'm not totally crazy in this, right? I did download Synapse last night just to get an idea of what's going on and what I can work with. From my rudimentary understanding, I feel like I will get some values which can be used effectively for inverse kinematics. Thanks again, I'll take your advice and get cracking!
@Spectro, I see where you are going. I'll simply make a rudimentary physics world to begin with, and connect the joints! Ah, amazing. A flaw in my thinking is wanting to go directly from idea to finished product, which obviously isn't possible... thank you. You've given me a better idea, and this project seems completely tangible.
I have experience tracking dancers with a Kinect. It can be done, yet you and the dancer will need to learn the boundaries of the OpenNI software (jit.openni and Synapse both use it) and the Kinect hardware. Dancers love to be expressive, spin, and move quickly: all the things that tend to confuse the OpenNI joint tracking software. Try the demo applications and see for yourself how it tracks (or doesn't) dancing movements.
At the same time, dancers also like to play. Given something like the Kinect tracking, they quickly see how to play with it and can usually find a way to express themselves that works.
Like any camera, its view is a cone with the point at the lens, extending outward. For joint tracking, the limit is about 4 meters away. You can get as close as about 0.8 m; however, at that distance the sensor and software can't see your whole body, so they guess where your joints might be. And those guesses are often wrong.
If you don't need joint tracking, then the Kinect and OpenNI technology is more likely to meet your needs. For example, if you only need to know where the center of the body is, that is generally accurate. Or if you want the depth pixels (aka the depthmap), those are highly accurate.
If you are on Macintosh, jit.openni can be used, and it's a native Max external rather than a separate app like Synapse.
Thanks, your words are amazing. The only issue I have now is installing your software, for instance the OpenNI install.sh file. I will do my best and get back to you!
Hey guys,
So I stumbled across Jitter Recipe 45, Humanoid (pasted here:
) and thought "woohoo, it's going to be easy to use the Kinect info to animate this". Obviously I was wrong. Here's a little Kinect-receiving patch I put together;
it's a messy sketch and only tracks the upper part of the skeleton, using Synapse. The problem is that it gives only spatial information, not angular information relative to those positions. So how can I translate the positions into angular information that I could pass to the Humanoid patch?
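To be clearer about what I mean by going from positions to angles, here is a rough Python sketch of the kind of conversion I'm after (the joint names and numbers are just placeholders for whatever Synapse sends):

import math

def joint_angle(a, b, c):
    # angle at joint b in radians, from the 3D positions of a, b, c
    # (e.g. shoulder, elbow, hand)
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(v1[i] * v2[i] for i in range(3))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

# made-up example positions for shoulder, elbow, hand
shoulder, elbow, hand = (0.0, 1.4, 2.0), (0.3, 1.1, 2.0), (0.5, 1.3, 1.8)
print(math.degrees(joint_angle(shoulder, elbow, hand)))

Though I suspect a scalar bend angle isn't enough on its own: to orient each limb of the humanoid you probably need a full rotation per bone (something like an axis-angle or quaternion from the bone's rest direction to the tracked direction). Is that the right way to think about it?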