jit.openni Skeleton Depth Culling/Limiting
Hi.
I've been trying out the jit.openni external for the Mac. It seems really good.
I've been wanting to limit the depth range within which the skeleton analysis operates. Preferably this would affect only the skeleton analysis (i.e. I would still get the full depth map from the depthmap outlet), but my priority is getting the skeleton analysis working in a constrained range.
From reading some of the posts on here, it looks like I need to edit the OpenNI XML file. After reading the OpenNI website, it seemed like I only had to add a couple of lines to the existing openni_config.xml.
This is what I changed (comments mark the small amount of code I added):
[attached image: the edited openni_config.xml, with the added lines marked by comments]
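For anyone trying the same thing, here is a rough sketch of the kind of addition I mean, based on the UserPosition bounding-box element described in the OpenNI 1.x documentation. The element and attribute names here are my reading of the docs rather than verified working syntax, and the Min/Max values are just examples:

```xml
<!-- Sketch only: a UserPosition bounding box (as described in the OpenNI
     1.x docs) added inside the existing Depth node of openni_config.xml.
     Coordinates are in millimeters; Min/Max values are example numbers. -->
<ProductionNodes>
  <Node type="Depth" name="Depth1">
    <Configuration>
      <MapOutputMode xRes="640" yRes="480" FPS="30"/>
      <!-- added: bounding box meant to restrict analysis to roughly 1m-3m -->
      <UserPosition index="0">
        <Min x="-2000" y="-2000" z="1000"/>
        <Max x="2000" y="2000" z="3000"/>
      </UserPosition>
    </Configuration>
  </Node>
  <!-- other nodes (Image, User, etc.) unchanged -->
</ProductionNodes>
```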
So... it doesn't work; it initialises in exactly the same way as it did before. Troubleshooting is made harder by the newness of this object (as in: is this feature even working/supported?) and by my lack of familiarity with XML. So I was wondering if anybody has had any luck with this approach? Or is there another way to limit the depth range the skeleton analysis operates on? I know I could take the numerical values output for the z-dimension and filter them, but I want to use this in a busy space, and I'm worried that if I don't limit the working range things could get quite messy (e.g. if there are forty people within the Kinect's entire range, I might want it to work only between 1m and 3m).
There is probably something obvious that I am doing wrong. Has anybody successfully used this functionality?
...Still failing to get this to work.
I was wondering whether anybody knows if this is a supported capability, and I just need to keep trying to understand the OpenNI code?
try the jit.openni mac thread or diablodale's github:
https://cycling74.com/forums/jit-openni-external-now-ported-to-mac-osx-native-kinect-no-extra-apps-needed
https://github.com/diablodale/jit.openni/issues
oh my gosh, I'm amazed I saw this. Yes, please do open questions or issues like this at https://github.com/diablodale/jit.openni/issues
However, I'll answer right quick for you here on this one. The OpenNI stack never implemented this functionality. They had grand ideas but did not implement them. As an interesting twist, OpenNI has recently released v2.0 of their SDK, which is a complete toss of the old API with zero compatibility: a completely new architectural approach, API, etc.
Somewhere in the stack (OpenNI core, NITE, or SensorKinect), some or all parts of the bounding-box type functionality were never successfully implemented. If they had been, jit.openni would automatically benefit when you configured it in XML. (BTW, I tried to use this OpenNI feature myself in the past.)
Some of the XML config features work as documented, like turning depth on/off, setting resolutions, and aligning depth to color. What I've found is that if you then try to configure the specific use of the depth, color, skeleton, etc., you'll discover they didn't implement working code. Unfortunately, I do not know of any list of "things that OpenNI documented yet never successfully implemented".
Oh, duh... I should also provide some advice. You mention your intention to focus the Kinect's attention on a busy space, and you mention a range of 1-3m.
The skeleton detection in OpenNI is reasonably good. It does need to see your body to be able to identify and create the skeleton. If you were only 1 meter away from a Kinect, it is unlikely it would see all of a typical adult body. If... *if* you were lucky enough to get a skeleton output at all, it would likely be very inaccurate and its values would jump wildly. As you move further away than 1m, you will get more accurate results. You will also find that getting skeletons past 4m is usually unreliable. So that leaves you with maybe 1.5m-4m as a range in which you get useful skeletons. Also remember that the Kinect is a camera with a lens; at 1.5m you have very little left/right movement possible, while at 4m you have much more. You have a "cone" of interaction space.
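To put rough numbers on that cone (my own sketch, assuming the commonly cited ~57° horizontal field of view of the Kinect v1; the exact figure may differ), the usable left/right width at distance d is about 2 · d · tan(fov/2):

```typescript
// Rough width of the Kinect's interaction cone at a given distance.
// Assumes the commonly cited ~57 degree horizontal field of view for
// the Kinect v1; treat the exact figure as approximate.
const HORIZONTAL_FOV_DEG = 57;

function coneWidthAtDistance(distanceM: number): number {
  const halfFovRad = (HORIZONTAL_FOV_DEG / 2) * (Math.PI / 180);
  return 2 * distanceM * Math.tan(halfFovRad);
}

console.log(coneWidthAtDistance(1.5).toFixed(2)); // ~1.63 m of left/right room
console.log(coneWidthAtDistance(4).toFixed(2));   // ~4.34 m of left/right room
```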
NITE is the portion of the OpenNI stack which does the skeleton identification and tracking. It is able to track multiple skeletons. However, there are physical limits + technical limits. Getting three people in the cone of interaction space makes them very close friends. Three *moving* people makes them friends hitting each other. The NITE code tends to get confused and lose tracking with 3+ people. I have on rare occasions gotten 4 skeletons tracked... but it was unreliable. My external jit.openni has an internal limit of 15 users; a limit you will never reach in practice.
Reference the wiki https://github.com/diablodale/jit.openni/wiki in the User and Skeleton output section.
jit.openni outputs both unidentified bodies AND full skeletons. The former is the "User Center of Mass Data"; the latter is the "Skeleton Joint Data". You will always get a unique userID for any human it sees, tracked or not tracked. You can then use the x, y, z values of the /user message, or of the skeleton's torso joint, to filter out or choose specific users/skeletons to interact with.
I have specifically coded jit.openni to always output this data in the order: floor, center of mass (COM), skeletons.
So one approach could be to look at the data on each bang. If you see a COM whose z is between 1.5m and 3m, then process all skeletons with that same userID and ignore all other skeleton data.
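Here is a minimal sketch of that per-bang filter, in plain TypeScript rather than a Max patch. The record shapes below are simplified stand-ins for the /user and /skeleton output described in the wiki, not jit.openni's literal message format:

```typescript
// Minimal sketch of the per-bang filtering approach. The record shapes
// are simplified stand-ins for jit.openni's OSC-style output (user
// center-of-mass and skeleton joint messages), not its literal API.
interface UserCOM { userId: number; x: number; y: number; z: number; }
interface SkeletonJoint { userId: number; joint: string; x: number; y: number; z: number; }

const MIN_Z = 1.5; // meters: nearest distance we care about
const MAX_Z = 3.0; // meters: farthest distance we care about

// Given this bang's COM and skeleton data (COMs arrive before skeletons),
// keep only joints belonging to users whose COM lies in range.
function filterSkeletons(coms: UserCOM[], joints: SkeletonJoint[]): SkeletonJoint[] {
  const activeIds = new Set(
    coms.filter(c => c.z >= MIN_Z && c.z <= MAX_Z).map(c => c.userId)
  );
  return joints.filter(j => activeIds.has(j.userId));
}
```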
Hi, I'm not sure this is the right place to ask this, but since DIABLODALE said jit.openni has a maximum of 15 trackable users, I was wondering if there is a way to lower that limit of 15 to a number I want (like a maximum of 3 or 4)?
Thanks in advance