Map Kinect tracking data to a 3D model
I would like to mirror the movements of a user onto an avatar with the Kinect.
After reading different posts, I decided to map the data from a Kinect skeleton to the nodes of a 3D model.
So I made a patch where I control the positions of the nodes (the patch is attached).
But I would like to know if there is a more efficient solution?
Also I'm afraid that the positions of the Kinect skeleton joints don't match the node positions.
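For what it's worth, the mapping logic itself is simple regardless of environment. Here is a minimal Java sketch of the idea (all joint and node names here are hypothetical, not from any actual patch): each tracked joint position is looked up in a joint-to-node table, and x is negated so the avatar mirrors the user like a reflection.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the joint-to-node mapping idea. Joint and node names
// are hypothetical; a real patch would use its own naming scheme.
public class JointMapper {
    // Hypothetical table: Kinect joint name -> model node name.
    static final Map<String, String> JOINT_TO_NODE = new HashMap<>();
    static {
        JOINT_TO_NODE.put("head", "Head");
        JOINT_TO_NODE.put("l_hand", "LeftHand");
        JOINT_TO_NODE.put("r_hand", "RightHand");
    }

    // Negate x so the avatar faces the user like a mirror image.
    static float[] mirror(float[] p) {
        return new float[] { -p[0], p[1], p[2] };
    }

    public static void main(String[] args) {
        float[] head = { 0.2f, 1.6f, 2.0f };     // metres, made-up values
        String node = JOINT_TO_NODE.get("head");
        float[] m = mirror(head);
        System.out.println(node + " " + m[0] + " " + m[1] + " " + m[2]);
    }
}
```

Whether the joints line up with the model's nodes depends on the rig; most skeletons will need per-node offsets on top of this lookup.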
Hi, has anyone made any headway with actually scaling a 3D model to Kinect data in Max yet? I'm finding it tough, and would love to see some way of not having to do the psi pose (I'm on Mac).
Hi Arthur and Nick,
I'm doing lots of Kinect + Max. I'm not mapping a 3D model to the skeleton but generating 3D graphics based on the skeleton data in Jitter, mainly using jit.gl.mesh. I can precisely map it to the real-world body position (and, as a next step, also project the generated graphics onto that body). So mapping a model to the skeleton should work as well. See some pics here, video coming asap: http://www.dietervandoren.net/index.php?/project/integration04/
As for the psi pose, I have a Processing app (using the SimpleOpenNI library) doing the skeleton tracking and sending it to Max over OSC. SimpleOpenNI supports the auto-calibration of recent OpenNI versions, so there's no need for the psi pose anymore. This is on OS X but should work on Windows as well.
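The oscP5 library handles the wire format for you, but as an illustration of what actually travels from Processing to Max's [udpreceive], here is a minimal hand-encoding of one joint message following the OSC 1.0 spec (the /skeleton/1/head address and the three-float layout are my assumptions, not the sketch's actual format):

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Hand-encodes a single OSC 1.0 message for one skeleton joint.
// The address pattern and argument layout are assumptions; real
// messages depend on how the Processing sketch formats its output.
public class OscJoint {
    // OSC strings are NUL-terminated and padded to a multiple of 4 bytes.
    static byte[] padded(byte[] s) {
        int len = ((s.length / 4) + 1) * 4;   // always at least one NUL
        byte[] out = new byte[len];
        System.arraycopy(s, 0, out, 0, s.length);
        return out;
    }

    // Address pattern, then type tag string ",fff", then three
    // big-endian 32-bit floats (ByteBuffer is big-endian by default).
    static byte[] encode(String address, float x, float y, float z) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] addr = padded(address.getBytes(StandardCharsets.US_ASCII));
        out.write(addr, 0, addr.length);
        byte[] tags = padded(",fff".getBytes(StandardCharsets.US_ASCII));
        out.write(tags, 0, tags.length);
        ByteBuffer args = ByteBuffer.allocate(12);
        args.putFloat(x).putFloat(y).putFloat(z);
        out.write(args.array(), 0, 12);
        return out.toByteArray();
    }

    public static void main(String[] args) {
        // One packet like this per joint, sent via a DatagramSocket
        // to the port Max's udpreceive is listening on.
        byte[] msg = encode("/skeleton/1/head", 0.1f, 1.5f, 2.2f);
        System.out.println(msg.length + " bytes");
    }
}
```

In Max, [udpreceive] plus [route /skeleton/1/head] then unpacks the three floats directly.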
That sounds great. Is the app you're using available, or is it something you developed for your own work? I'm having trouble using Synapse etc.
It’s a modification of the User3D sample included with SimpleOpenNI (https://code.google.com/p/simple-openni/). I added OSC output using the OSC-P5 lib.
I'm considering releasing my (dual-Kinect) Processing + Max skeleton tracking system. I need to strip, clean, and package it first, though. Probably sometime in summer.