jit.openni kinect skeleton xyz joint orientation / rotation

MRTN's icon

Hello, I would like to know the best way to obtain the rotation of the joints of a skeleton tracked with the Kinect, as I need to control a 3D model in position and rotation. Thanks in advance!

Rob Ramirez's icon

depending on your 3d model, this can be super complex.
jit.anim.node is there to assist.

hopefully dale will chime in, as i'm not entirely sure how openni outputs joint rotations, but i assume they are in world-space coordinates. in order to control the individual joints of a 3d model, they need to be converted to local-space, relative to their parent joint.

jit.anim.node is what you need to make these conversions.
also check out the copynodestoclipboard message to jit.gl.model, which copies the entire node-structure of a loaded model file, allowing you to paste it into your patch as a hierarchy of jit.anim.node objects. these can then be used to control the model with the data from the kinect.

your best bet might be to look at the Day 10 physics patch-a-day patch, found here:
https://cycling74.com/tutorials/00-physics-patch-a-day/

there's information describing the algorithm that does conversions between world-space and local-space orientations in the patch. it deals with controlling a model skeleton with physics objects, but it's a similar concept to controlling via kinect.

MRTN's icon

Thanks for the answer! Yes, my first idea was to use world coordinates, hoping there was a way to get the orientation, but so far I did not find anything... Thanks for the links, I'll take a look and try jit.anim.node!
You always save my programming life! :)

diablodale's icon

when using jit.openni, the NITE middleware does provide rotation information and my jit.openni external exposes it. Check https://github.com/diablodale/jit.openni/wiki#skeleton-joint-data
for the orientation-related attributes and output formats. Just hit the page and use your browser's page search for "orientation". If something is not clear in that documentation, please open an issue there and I can help resolve the ambiguity and improve the documentation for all.

Caution: the NITE orientation data is of lower quality than the joint positions. It is very difficult to detect some orientation changes. For example, you can hold out your arm and rotate it 90-180 degrees (a big change) without your arm joints changing position. This physical fact plus the limits of sensor technology/software combine to make the orientation data not great. It is better used to supplement the joint position data rather than as the primary source of data. The NITE documentation from OpenNI discusses some of these limitations.

What you might find useful is to use the Max 6 physics objects to define how a typical human body and its joints operate (limits, etc.). Then drive the joints using the Kinect joint position data, and after that apply rotation data only to those joints which need it and which provide reliable data.

MRTN's icon

Thanks DIABLODALE! It's a little bit clearer now; in the end I solved it using physics objects as you and ROB suggested. Thanks again!

aartcore's icon

i'm trying to figure out the same problem,
i'm trying to link the skeleton output to the blocky patch (Day 10 of the physics patch-a-day).
what is the best calculation to convert the coordinates?
i tried to understand the jit.quat and jit.euler2quat conversions but i've no idea how to convert the skeleton coords.

Martina (or someone else?), do you already have a sort of working model?

Thanks in advance!

madjax's icon

Bump on this problem. I used to have a system that kind of worked with synapse, osceleton, and jit.ogre.
Though some rotations were incorrect, it mostly worked for what I needed. Jit.openni doesn't work the same way. Is the matrix different? Has anybody converted the rotation matrix coming out of jit.openni to axis/angle or euler angles?

diablodale's icon

The orientation matrix is defined in the wiki at https://github.com/diablodale/jit.openni/wiki#skeleton-joint-data

Using those instructions and Max's objects like jit.anim.node you can rotate things.

I recommend you create your patch and try to get it working. Then if it doesn't work, post the patch here so others can assist in your specific usage/issue.

Rob Ramirez's icon

you can use jit.quat to construct a quaternion from rotation axes, by setting the @xaxis, @yaxis, and @zaxis attributes.
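
if you want to sanity-check the numbers outside of Max, the math behind that step is just a rotation-matrix-to-quaternion conversion. here's a rough python sketch (not Max code; treating the three axes as the columns of the rotation matrix is an assumption here, so check the jit.openni wiki for the actual matrix layout):

```python
# sketch (not Max code): build a quaternion from the three rotation axes
# that jit.openni outputs per joint, assuming the axes are the columns
# of the joint's rotation matrix.
import math

def axes_to_quat(xaxis, yaxis, zaxis):
    """Convert three orthonormal axis vectors to a quaternion (x, y, z, w)."""
    # rows of the rotation matrix, rebuilt from its columns
    m00, m01, m02 = xaxis[0], yaxis[0], zaxis[0]
    m10, m11, m12 = xaxis[1], yaxis[1], zaxis[1]
    m20, m21, m22 = xaxis[2], yaxis[2], zaxis[2]

    trace = m00 + m11 + m22
    if trace > 0.0:
        s = math.sqrt(trace + 1.0) * 2.0            # s = 4*w
        w = 0.25 * s
        x = (m21 - m12) / s
        y = (m02 - m20) / s
        z = (m10 - m01) / s
    elif m00 > m11 and m00 > m22:
        s = math.sqrt(1.0 + m00 - m11 - m22) * 2.0  # s = 4*x
        x = 0.25 * s
        y = (m01 + m10) / s
        z = (m02 + m20) / s
        w = (m21 - m12) / s
    elif m11 > m22:
        s = math.sqrt(1.0 + m11 - m00 - m22) * 2.0  # s = 4*y
        x = (m01 + m10) / s
        y = 0.25 * s
        z = (m12 + m21) / s
        w = (m02 - m20) / s
    else:
        s = math.sqrt(1.0 + m22 - m00 - m11) * 2.0  # s = 4*z
        x = (m02 + m20) / s
        y = (m12 + m21) / s
        z = 0.25 * s
        w = (m10 - m01) / s
    return (x, y, z, w)

# identity rotation -> (0, 0, 0, 1)
print(axes_to_quat((1, 0, 0), (0, 1, 0), (0, 0, 1)))
```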

madjax's icon

Thanks Rob for the @xaxis message trick. That saved me from having to do a lot of math. I still don't have the right results, though. I think there might be a difference between the initial pose of the jit.gl.model and the skeleton values coming out of jit.openni. I have been struggling to understand world vs. local rotations, how to convert between them, and how to adjust initial values so that the rotations are correct.

Max Patch
Copy patch and select New From Clipboard in Max.

Here is a simplified patch that tracks one arm. Can somebody who has done this before take a look and see if I am doing anything obviously wrong? The documentation for jit.anim.node is sparse, and it doesn't really go into how to use messages to convert rotations. I feel pretty lost at this point.

diablodale's icon

Here is a simple example below. I also added more to the wiki at https://github.com/diablodale/jit.openni/wiki#skeleton-joint-data to help remind everyone that OpenNI reports orientation in the *world* coordinate system.

I caution everyone that OpenNI is dead. Apple killed it. On 23 April 2014, the official OpenNI website is closing. When this happens, there may be no legal place to download NITE (the essential component of OpenNI that does skeletal joint tracking). This could lead to software piracy and illegal distribution. NiMATE, jit.openni, Synapse…they all use OpenNI.

Max Patch
Copy patch and select New From Clipboard in Max.

I recommend switching to dp.kinect which has always used official Microsoft technology and is regularly updated at http://hidale.com/dp-kinect/

MRTN's icon

thanks for the example patch, and many thanks for this information! "very nice" of apple to decide to kill OpenNI, very very "nice"

If anybody wants to read more info, you can at this link (and lots more on the web): http://createdigitalmotion.com/2014/02/open-kinect-tools-go-closed-and-dead-limiting-artist-and-hacker-options-call-for-help/

madjax's icon

Yes! Thanks so much Dale.

Rob Ramirez's icon

the basic idea is that you have an orientation from the kinect for a specific bone, and that is in world-space. you need to translate that to a local orientation for the corresponding bone in the 3d model's node structure.

you do that by multiplying the orientation with the inverse parent-orientation.
you may also need to specify an offset orientation, depending on the model and controller you are using.
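
in plain math terms, if you want to check it outside of Max, the conversion looks roughly like this (a python sketch, not a Max patch; it assumes unit quaternions in (x, y, z, w) order, and the function names are just illustrative):

```python
# sketch of the world-to-local conversion described above:
# local = inverse(parent_world) * child_world, plus an optional offset.

def quat_mul(a, b):
    """Hamilton product a*b: apply rotation b, then rotation a."""
    ax, ay, az, aw = a
    bx, by, bz, bw = b
    return (aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw,
            aw*bw - ax*bx - ay*by - az*bz)

def quat_inverse(q):
    """Inverse (conjugate) of a unit quaternion."""
    x, y, z, w = q
    return (-x, -y, -z, w)

def world_to_local(child_world, parent_world, offset=(0.0, 0.0, 0.0, 1.0)):
    """Child orientation relative to its parent. The offset is a constant
    rotation to line up the model's bind pose with the sensor; whether it
    multiplies on the left or right depends on your model."""
    local = quat_mul(quat_inverse(parent_world), child_world)
    return quat_mul(local, offset)
```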

Max Patch
Copy patch and select New From Clipboard in Max.

the patch below uses a gl.handle to simulate the kinect world-space orientation, which rotates the model's elbow.

madjax's icon

Alright! Very Elegant.

We are getting really close. I am posting a new version that tracks both arms. It still needs some work. I think there might be some feedback in the worldquat messages from the jit.anim.nodes. It wasn't this jittery when I was sending the wrong rotations, so I think something is hopping back and forth.

If you don't have a kinect sensor, you can still run jit.openni with the "read jit.openni_debugrec.xml" message. You will need to have jit.openni installed, of course (see Dale's blog: http://hidale.com/jit-openni), and the file skeletonrec.oni saved to your openni directory inside of Max. You can download it here:

Max Patch
Copy patch and select New From Clipboard in Max.

THINGS THAT STILL NEED FIXING:

* The jit.gl.camera and the model have to be turned around to see the front of the model
I wonder if this can be addressed in the xml file settings?

* The rotations of the arms are not entirely relative to the rotation of the model.
If you rotate the model, the arms don't rotate with it.

* Rotations are overly jittery -- Feedback?

Thanks so much to everybody who is helping to crack this tough nut.

Rob Ramirez's icon

very cool!

yeah, the jitteriness is something i've noticed before when controlling a gl.model structure with a physics ragdoll.

unfortunately i don't have a solution right now, but it's something i'm looking into and will post back here with any further information on this.

diablodale's icon

The rotation data from OpenNI is jittery and unreliable. OpenNI documents this in their NITE reference. There will be no fix. Perhaps consider throwing out any results that have less than 1.0 confidence.

Even without rotations driving your model, you can get jittery puppet behavior. This is expected if you have any rigid bones/positions associated with OpenNI joints. You would have to have exactly the right size "bones" in the entire body for it to work smoothly. And... the length of the bones would need to change on each frame as the OpenNI joints move and the "bones" between them change length.

Make a loose puppet *hinted* strongly by joints and very weakly by rotations for better results.
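
As a rough illustration of the confidence gating (this is a Python sketch, not Max; the per-joint routing you would do in your patch, and the names are just illustrative):

```python
# Sketch of confidence gating: keep the last good orientation for a joint
# and only replace it when the sensor reports full confidence.
last_good = {}  # joint name -> last accepted orientation

def gated_orientation(joint, orientation, confidence, threshold=1.0):
    """Return the incoming orientation only if confidence meets the
    threshold; otherwise fall back to the previously accepted value."""
    if confidence >= threshold:
        last_good[joint] = orientation
    # If a joint has never been confidently tracked, return None so the
    # caller can leave the model's bind pose untouched.
    return last_good.get(joint)
```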

benoit-1842's icon

What an interesting topic! I hope there's going to be further development on that patch. I am going to help as much as I can to get that project rolling.

Thanx
Ben

madjax's icon

Here is a cleaned up and annotated patch for reference. I added head, neck, hip, and arm rotations, and put in some timing strategies and data smoothing for less jitter in the rotations. The slide objects are probably adding some latency and performance reduction, so I am still hopeful there is a more elegant way to do this. It seems to run better on the recorded .oni than live through the sensor on my machine. There also appear to be occasional math errors in composing or multiplying the quaternions that cause some weird glitches in the model. This may be caused by sliding values as well.
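
One possibly more elegant approach, sketched in Python rather than Max: a slerp-based low-pass stays on the unit quaternion, which should avoid the normalization glitches that sliding each component separately can cause. The names and the 0.2 smoothing amount here are just illustrative.

```python
# Sketch of slerp-based smoothing as an alternative to per-component slide.
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions (x, y, z, w)."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:            # take the short way around
        q1 = tuple(-c for c in q1)
        dot = -dot
    if dot > 0.9995:         # nearly identical: lerp and renormalize
        out = tuple(a + t * (b - a) for a, b in zip(q0, q1))
        n = math.sqrt(sum(c * c for c in out))
        return tuple(c / n for c in out)
    theta = math.acos(dot)
    s0 = math.sin((1.0 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))

def smooth(prev, incoming, amount=0.2):
    """One-pole style low-pass: move only 'amount' of the way toward the
    new orientation each frame; smaller = smoother but more latency."""
    return slerp(prev, incoming, amount)
```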

I need to work on positioning and rotating the avatar to conform with the movements of the subject in front of the sensor. Right now, the avatar is pinned at the hips (which always face forward) and the feet seem to dangle from that point. I added counter-rotations in the ankles, which helped a lot in making the movements look more natural, but now it seems like the avatar is sliding around on ice, and feet are always pointed forward. I would like to make the feet be able to rotate around the y axis, but remain parallel to the ground plane.

dp.kinect seems to be the future as it can output hierarchical quaternion rotations natively and has other features like joint gravity for better extrapolations to untracked joints. I plan to upgrade my macbook soon and I will definitely bootcamp it. Which is more stable for this kind of work, Windows 7 or 8? Will dp.kinect work with the upcoming kinect v2? Will I need a purchased sdk license?

Thanks to everybody who has contributed.....ONWARD!!!

PS the forum wouldn't let me paste the patch inline for some reason

kinect_seymour_avatar.maxpat
Max Patch

diablodale's icon

For the setup requirements of dp.kinect, I recommend you reference the wiki at https://github.com/diablodale/dp.kinect/wiki, which lists support for Windows 7 and Windows 8. The compatibility is driven by the Kinect driver/runtime support. Microsoft currently provides the Kinect drivers/runtime and the Kinect SDK for free. There are licensing rules/restrictions, and I recommend you reference the wiki above and/or Microsoft's documentation for those details.

I cannot speak to the stability of Win 7 or Win 8 under Boot Camp, nor its relation to dp.kinect. I have never done any testing or heard of anyone else trying. However, if Apple and Microsoft both write great OS/software and Apple can make their hardware work well on Windows, then I would expect dp.kinect to work, because it makes API calls to both companies' software.

Of high importance is very fast USB hardware and drivers. The Kinect sensor uses *a lot* of bandwidth. The Kinect v1 uses most of the bandwidth of a USB 2.0 host controller, and the Kinect v2 uses a majority of a USB 3.0 host controller. If you are planning to use the Kinect v2 sensor, I recommend you install Windows 8. It has far better support for USB 3.0, and the Kinect v2 sensor requires USB 3.0.

dp.kinect2 is already ported to work with the new (unreleased) Kinect v2 sensor. I have a prototype sensor unit. My goal is for dp.kinect and dp.kinect2 to be so compatible that you only have to type an extra "2" in the object name. IF there is any incompatibility (I am under a non-disclosure agreement), it will be driven by a major featureset change in the sensor.

An example incompatibility I can speak about is the resolution output of the matrices. Color is now 1920x1080 and depth/ir/playermap/pointcloud is 512x424. dp.kinect2 will accept your resolution parameters (e.g. @depthmapres 1) and internally change them to the new values (@depthmapres 3). You'll want to test/adjust your patch to ensure it works with the new resolution. An alternative (at a CPU cost) is to put a jit.matrix directly after dp.kinect2 to resize the matrix to the old size.

benoit-1842's icon

Hi guys.... I am still working on that Seymour Kinect guy.... But it's difficult! Has anybody figured out how to move a 3D model with the Kinect and Max 6?

Thanx in advance,

Benoit