depending on your 3d model, this can be super complex.
jit.anim.node is there to assist.
hopefully dale will chime in, as i'm not entirely sure how openni outputs joint rotations, but i assume they are in world-space coordinates. in order to control the individual joints of a 3d model, those rotations need to be converted to local-space, relative to their parent joint.
jit.anim.node is what you need to make these conversions.
also, check out the copynodestoclipboard message to jit.gl.model, which will copy the entire node structure of a loaded jit.gl.model file, allowing you to paste it into your patch as a hierarchy of jit.anim.node objects. these can be used to control the model using the data from the kinect.
your best bet might be to look at the Day 10 physics patch-a-day patch, found here:
the patch includes information describing the algorithm that converts between world-space and local-space orientations. it deals with controlling a model skeleton with physics objects, but it's a similar concept to controlling via kinect.
Thanks for the answer. Yes, my first idea was to use world coordinates, hoping there was a way to get the orientation, but so far I did not find anything... Thanks for the links, I'll take a look and try jit.anim.node!
When using jit.openni, the NITE middleware does provide rotation information, and my jit.openni external exposes it. Check https://github.com/diablodale/jit.openni/wiki#skeleton-joint-data for orientation-related attributes and output formats. Just hit the page and use your browser's page search for "orientation". If something is not clear in that documentation, please open an issue there and I can help resolve the ambiguity and improve the documentation for all.
Caution: the NITE orientation data is of lower quality than the joint positions. It is very difficult to detect some orientation changes. For example, you can hold out your arm and rotate it 90-180 degrees (a big change) without your arm joints changing position. This physical fact, plus the limits of the sensor technology/software, combine to make the orientation data unreliable. It is better used to supplement the joint position data than as the primary source. The NITE documentation from OpenNI discusses some of these limitations.
What you might find useful is to use Max 6 physics objects to define how a typical human body's joints operate (their limits, etc.). Then drive the joints using Kinect joint position data. After that, apply rotation data only to those joints which need it and for which the data is reliable.
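As a rough sketch of the joint-limit idea above, outside of Max: clamp a joint's Euler angles to a plausible human range before applying them. The joint name and the limit values below are made-up illustrative numbers, not anatomical data.

```python
# Hypothetical per-axis limits for an elbow, in degrees: (lo, hi).
ELBOW_LIMITS = {"x": (0.0, 150.0), "y": (-10.0, 10.0), "z": (-90.0, 90.0)}

def clamp_joint(angles, limits):
    """Clamp each Euler angle (degrees) into its allowed (lo, hi) range."""
    return {axis: max(lo, min(hi, angles[axis]))
            for axis, (lo, hi) in limits.items()}
```

In Max, the same constraint would come from the physics objects' joint limits rather than explicit clamping; this just shows the idea.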
Bump on this problem. I used to have a system that kind of worked with Synapse, OSCeleton, and jit.ogre.
Though some rotations were incorrect, it mostly worked for what I needed. jit.openni doesn't work the same way. Is the matrix different? Has anybody converted the rotation matrix coming out of jit.openni to axis/angle or Euler angles?
Thanks Rob for the @xaxis message trick. That saved me from having to do a lot of math. I still don't have the right results, though. I think there might be a difference in the initial pose of the jit.gl.model and the skeleton values coming out of jit.openni. I have been struggling to understand world v. local rotations, how to convert between them, and how to adjust initial values so that the rotations are correct.
Here is a simplified patch that tracks one arm. Can somebody who has done this before take a look and see if I am doing anything obviously wrong? The documentation for jit.anim.node is sparse, and it doesn't really go into how to use messages to convert rotations. I feel pretty lost at this point.
I caution everyone that OpenNI is dead. Apple killed it. On 23 April 2014, the official OpenNI website is closing. When this happens, there may be no legal place to download NITE (the essential component of OpenNI that does skeletal joint tracking). This could lead to software piracy and illegal distribution. NiMATE, jit.openni, Synapse…they all use OpenNI.
I recommend switching to dp.kinect which has always used official Microsoft technology and is regularly updated at http://hidale.com/dp-kinect/
the basic idea is that you have an orientation from the kinect for a specific bone, and that is in world-space. you need to translate that to a local orientation for the corresponding bone in the 3d model's node structure.
you do that by multiplying the world orientation by the inverse of the parent's world orientation.
you may also need to specify an offset orientation, depending on the model and controller you are using.
the patch below uses a gl.handle to simulate the kinect world-space orientation, that rotates the model's elbow.
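the multiply-by-the-inverse-parent step above can be sketched in plain python (quaternions as (w, x, y, z) tuples; the function names are mine, not jit.anim.node messages):

```python
def q_mul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def q_inverse(q):
    """Inverse of a unit quaternion is its conjugate."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def world_to_local(child_world, parent_world):
    """local = inverse(parent_world) * child_world"""
    return q_mul(q_inverse(parent_world), child_world)
```

an offset orientation, if needed, would be one more q_mul applied to the result.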
We are getting really close. I am posting a new version that tracks both arms. It still needs some work. I think there might be some feedback in the worldquat messages from the jit.anim.nodes. It wasn't this jittery when I was sending the wrong rotations, so I think something is hopping back and forth.
If you don't have a kinect sensor, you can still run jit.openni with the "read jit.openni_debugrec.xml" message. You will need to have jit.openni installed, of course (see Dale's blog: http://hidale.com/jit-openni), and the file skeletonrec.oni saved to your openni directory inside of Max. You can download it here:
The rotation data from OpenNI is jittery and unreliable. OpenNI documents this in their NITE reference. There will be no fix. Perhaps consider throwing out any results that are less than 1.0 confidence.
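One simple way to apply the confidence-threshold suggestion above: hold the last accepted orientation per joint and only replace it when a new sample arrives at full confidence. The joint names and data layout here are hypothetical, not jit.openni's actual output format.

```python
# Last accepted orientation per joint name.
last_good = {}

def filter_orientation(joint, quat, confidence, threshold=1.0):
    """Return the new quaternion if confident enough, else the last good one."""
    if confidence >= threshold:
        last_good[joint] = quat
    return last_good.get(joint)  # None until a confident sample has been seen
```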
Even without rotations driving your model, you can get jittery puppet behavior. This is expected if you have any rigid bones/positions associated with OpenNI joints. The "bones" would have to be exactly the right size across the entire body for it to work smoothly, and their lengths would need to change on each frame as the distances between the OpenNI joints change.
Make a loose puppet *hinted* strongly by joints and very weakly by rotations for better results.
Here is a cleaned up and annotated patch for reference. I added head, neck, hip, and arm rotations, and put in some timing strategies and data smoothing for less jitter in the rotations. The slide objects are probably adding some latency and performance reduction, so I am still hopeful there is a more elegant way to do this. It seems to run better on the recorded .oni than live through the sensor on my machine. There also appear to be occasional math errors in composing or multiplying the quaternions that cause some weird glitches in the model. This may be caused by sliding values as well.
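One possible alternative to per-component slide objects is smoothing the quaternion as a whole with normalized linear interpolation (nlerp). Smoothing each component independently can glitch when the tracker flips between q and -q (the same rotation); handling that sign flip explicitly may account for some of the math errors mentioned above. This is a sketch, with a made-up smoothing factor, not the patch's actual implementation:

```python
import math

def nlerp(a, b, alpha):
    """Normalized linear interpolation from quaternion a toward b by alpha (0..1)."""
    # q and -q represent the same rotation; flip b if needed so we
    # interpolate along the shorter arc instead of through a sign flip.
    dot = sum(x * y for x, y in zip(a, b))
    if dot < 0.0:
        b = tuple(-c for c in b)
    mixed = tuple((1 - alpha) * x + alpha * y for x, y in zip(a, b))
    norm = math.sqrt(sum(c * c for c in mixed))
    return tuple(c / norm for c in mixed)
```

Each frame, the smoothed value would be `nlerp(previous, incoming, alpha)` with a small alpha for heavy smoothing.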
I need to work on positioning and rotating the avatar to conform with the movements of the subject in front of the sensor. Right now, the avatar is pinned at the hips (which always face forward) and the feet seem to dangle from that point. I added counter-rotations in the ankles, which helped a lot in making the movements look more natural, but now it seems like the avatar is sliding around on ice, and feet are always pointed forward. I would like to make the feet be able to rotate around the y axis, but remain parallel to the ground plane.
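The "rotate around y but stay parallel to the ground" behavior for the feet can be expressed as extracting just the twist about the y axis from the foot's quaternion (the swing/twist decomposition). For a y twist axis this reduces to projecting the quaternion onto its w and y components; a sketch, assuming (w, x, y, z) ordering:

```python
import math

def yaw_only(q):
    """Return the closest pure y-axis rotation to q (the twist about y)."""
    w, x, y, z = q
    norm = math.hypot(w, y)
    if norm < 1e-9:
        return (1.0, 0.0, 0.0, 0.0)  # degenerate: no well-defined yaw
    return (w / norm, 0.0, y / norm, 0.0)
```

Applying this to the foot orientation would keep the heading while zeroing out pitch and roll.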
dp.kinect seems to be the future as it can output hierarchical quaternion rotations natively and has other features like joint gravity for better extrapolations to untracked joints. I plan to upgrade my macbook soon and I will definitely bootcamp it. Which is more stable for this kind of work, Windows 7 or 8? Will dp.kinect work with the upcoming kinect v2? Will I need a purchased sdk license?
Thanks to everybody who has contributed.....ONWARD!!!
PS the forum wouldn't let me paste the patch in line for some reason
For the setup requirements of dp.kinect, I recommend you reference the wiki at https://github.com/diablodale/dp.kinect/wiki, which lists support for Windows 7 and Windows 8. Compatibility is driven by the Kinect driver/runtime support. Microsoft currently provides the Kinect drivers/runtime and the Kinect SDK for free. There are licensing rules/restrictions, and I recommend you reference the wiki above and/or Microsoft documentation for those details.
I cannot speak to the stability of Win 7 or Win 8 on Boot Camp, nor its relation to dp.kinect. I have never done any testing or heard of anyone else trying. However, if Apple and Microsoft both write great OS/software and Apple can make their hardware work well on Windows, then I would expect dp.kinect to work, because it makes API calls to both companies' software.
Of high importance is very fast-performing USB hardware and drivers. The Kinect sensor uses *a lot* of bandwidth. The Kinect v1 uses most of the bandwidth of a USB 2.0 host controller, and the Kinect v2 uses the majority of a USB 3.0 host controller. If you are planning to use the Kinect v2 sensor, I recommend you install Windows 8. It has far better support for USB 3.0, and the Kinect v2 sensor requires USB 3.0.
dp.kinect2 is already ported to work with the new (unreleased) Kinect v2 sensor. I have a prototype sensor unit. My goal is for dp.kinect and dp.kinect2 to be so compatible that you only have to type an extra "2" in the object name. IF there is any incompatibility (I am under a non-disclosure agreement), it will be driven by a major featureset change in the sensor.
An example incompatibility I can speak about is the resolution output of the matrices. Color is now 1920x1080 and depth/ir/playermap/pointcloud is 512x424. dp.kinect2 will accept your resolution parameters (e.g. @depthmapres 1) and internally change them to the new values (@depthmapres 3). You'll want to test/adjust your patch to ensure it works with the new resolution. An alternative (at a CPU cost) is to put a jit.matrix directly after dp.kinect2 to resize the matrix to the old size.