Depth, color, playermap, infrared, and accelerometer data streams are available
Speech recognition and sound position streams are available
Pointcloud, face tracking, and face modeling streams are available.
Supports multiple Kinects on the same PC.
32-bit and 64-bit support
OSC or native Max messages
It was developed and tested primarily with Max 6.x. Casual testing also demonstrates it works on Max 5.1.9.
The output of this external is highly compatible with my other external, jit.openni. All output is supported and should be equivalent, except that the optional joint rotations use a new format.
@Julien, unfortunately, Microsoft does not support OS X with their Microsoft Kinect SDK. Until that happens, it is impossible.
I have written an open-source version that could be recompiled for OS X. Details are at http://hidale.com/jit-openni/
All it needs is a Mac developer to recompile it and address any porting issues.
The wiki explains the setup; follow the steps there to set up your computer and Kinect.
After that, run the patcher included with dp.kinect. Play with it, and learn from it and the wiki.
If you need help learning Max, I recommend the forums here on the Cycling '74 website.
Just to report that on Windows 10 the Kinect Xbox 360 model 1414 works perfectly with dp.kinect and Kinect SDK 1.8...
On another note, I'm going to buy a Kinect v2 and dp.kinect2, but I need to transfer the license of dp.kinect to another computer so it can be useful as well... How do I go about accomplishing this?
Christian and I solved it over email. It was a Microsoft licensing issue with the Xbox 360 Kinect.
For best performance, I recommend the Kinect for Windows (v1 and v2). They both have better depth cameras and have the full set of features.
At the moment I am getting information for two skeletons by selecting the minimum and maximum of the skeleton indices that the external outputs. However, if a third person enters the camera view, the min and max skeleton indices may change, and I want to avoid that. I attach the patch I use as a reference.
My problem is how to build the logic to select more skeletons in a robust way. I am having trouble understanding how to use the signal flow of Max to distinguish different skeleton IDs for different users. And if I use "if" objects to build any logic, the "if" object always selects the minimum of all the IDs... so I am a bit lost in all of this.
What I would like is a robust way of getting the position information of several skeletons, from 1 to 4... and to get hold of the skeleton IDs in such a way that they do not get altered when new people enter the frame.
@Marcos, you are inquiring about high-level application logic. You are asking how to implement your interaction model without knowing your interaction model. This isn't a Kinect question.
I recommend you think deeply about the rules for your application; the rules for your interaction model. Compare it to how a worker at a meat market (Fleischerei) helps customers...when and which customer does the worker help? The worker "sees" all the customers that want to buy meat. However...which do they help?
Is it the first customer?
Is it the customer closest to the meat?
Is it the loudest customer?
Is it the repeat/loyal customer?
All of these questions and answers together are called the "interaction model". Different interaction models will have different code/patches. Therefore, I can't give you suggestions until you know your own model and its rules.
Here are examples of topics/questions to think about. Then you can code your rules in the patch.
1) What is the rule for when your application starts interacting with a person? This is distinct from when the Kinect recognizes and tracks their joints. However, your answer could be "immediately after the Kinect tracks the joints", or "when they have been tracked by the Kinect for at least 5 seconds", or "when they wave their hands", or something else.
2) What is the rule for when your application stops interacting with a person?
3) What is the rule for how many persons (min and max) with which you interact?
4) If you have fewer than your min -or- more than your max, what is the rule? Do you ignore people? Which people do you ignore? Do you ignore the people furthest away? Do you ignore the people not moving?
5) What happens when a person disappears (behind someone else, outside the camera, etc.)? Do you stop interacting with this person immediately? Do you wait 5 seconds for them to reappear? Do you remember them and wait forever for them?
6) and more...
When you have these and more answers, you will be able to create your patch to match them.
I have shared two example patches for two different interaction models. They are part of the help file with dp.kinect2. You can download the ZIP file at http://hidale.com/shop/dp-kinect2/ and then look at the help file inside it. Many of the examples will work by changing "dp.kinect2" -> "dp.kinect". Look on the "tips" tab for the two interaction model examples.