@Spa, it should work on both versions of the Kinect, as allowed in the Microsoft license. There are limits, and Microsoft specifies them.
You will need to install the Microsoft Kinect drivers, which are available with the SDK or with the Kinect runtime. Download and install them as you need. I have not tested having the Kinect SDK and SensorKinect (OpenNI) installed at the same time. I am sure that only one set of low-level Kinect hardware drivers can be active at a time. I recommend you choose which Kinect stack you want to use and have only that installed.
I recommend you query Google to find the advantages of the Kinect SDK or OpenNI. It's somewhat of a religious discussion, except that the Kinect SDK has much richer support for the Kinect. The OpenNI stack is bound by the driver support exposed in SensorKinect. At this time, the dp.kinect and jit.openni objects are almost the same in functionality. The underlying APIs might have more features on both sides (Microsoft does audio and face tracking... OpenNI does gestures), but the Max externals I've written don't expose that data yet.
There is no dp.kinect planned for OS X. Microsoft will first have to release an SDK, and then a Mac developer somewhere will have to port dp.kinect. I doubt that would happen given the lack of porting of jit.openni.
@stefantiedje you are correct. This external is Windows only. Microsoft does not support the Macintosh with their SDK.
If you know of a Mac developer, I am interested in someone porting jit.openni to the Mac. So far, no one has stepped up to do so.
Ooooo. GitHub is somewhat confusing. Sorry. On the download page look below what you clicked. You will see a download package dp.kinect v0.3.9 BETA.zip
@matmat Any speed increase in the dp.kinect object is probably due to the underlying Kinect SDK. I do have some minor code improvements due to my approach in dp.kinect vs. jit.openni, but it is probably nothing noticeable.
@benoit, the one at the right is connected to the playermap outlet. This is equivalent output to the usermap on jit.openni. Check the documentation to see how you can use it. In short, it gives you a matrix whose cells contain the IDs of the players (users) identified. It allows you to ignore chairs, floors, walls, etc., and instead only "see" the people.
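As a rough illustration of the idea (not dp.kinect's actual output format, and the values here are made up), a playermap-style matrix can be used to mask a depth frame so that only pixels belonging to identified users remain:

```python
# Hypothetical 4x3 playermap: each cell holds the ID of the user (player)
# occupying that pixel; 0 means background such as chairs, floors, walls.
playermap = [
    [0, 0, 1, 1],
    [0, 2, 2, 1],
    [0, 0, 2, 0],
]

# Hypothetical depth frame of the same size (values in millimeters).
depth = [[2500] * 4 for _ in range(3)]

# Keep depth only where a person was identified; zero out everything else.
people_only = [
    [d if p > 0 else 0 for p, d in zip(prow, drow)]
    for prow, drow in zip(playermap, depth)
]
print(people_only[0])  # [0, 0, 2500, 2500]
```

In a patch you would do the equivalent masking with Jitter matrix operators, but the principle is the same: nonzero playermap cells mark people, zero cells mark everything else.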
dp.kinect supports anything the Microsoft Kinect runtime supports for multiple Kinects regarding RGB, depth, and skeleton tracking.
Unfortunately, the Microsoft Kinect runtime (at this time) only supports tracking on one Kinect per PC. As soon as their runtime is updated to support 2+, dp.kinect will immediately support it.
FYI, Microsoft is artificially limiting it to one engine due to their self-imposed CPU limit. I hope that soon they will remove this artificial limit. For example, on my PC I can enable everything on dp.kinect with two Kinects, and I'm only using 15-20% of my CPU.
There is a new runtime and SDK coming out in a few weeks. We can both...hope...that they raise the ceiling.
Thanks for clarifying! It's strange that MS has this limitation too. OpenNI has it as well, but there it just malfunctions, which makes it look like a bug rather than an imposed limitation. Well, at least it's clear in MS's case. AFAIK PrimeSense has never even commented on the issue.
Got an overclocked quad-core multi-threaded i7 CPU with cycles to spare so here's hoping they remove this ridiculous limit...
Hi all. The v0.4.9 beta above has the new @sync feature. I believe it does... technically... work correctly. You will get only synced frames. However, I'm not pleased with the number of frames which are discarded. It is due to subtle drift between the three major components (bang from Max, Kinect camera hardware, Kinect SDK subsystem). Very small drift in any of the three can put them out of sequence and result in the 1/60 sec rule not being satisfied.
I have an idea on how to increase the number of synced frames made available. I will explore this idea in a later beta. Until then, you can choose to have all the frames without the 1/60 sec sync (the default behavior) or only frames synced to within 1/60 sec (@sync 1). The latter will likely reduce the number of frames made available to you.
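The 1/60 sec rule can be sketched as a simple timestamp-matching loop. This is only an illustration of the idea, not dp.kinect's actual code; the function name and timestamp values are invented for the example:

```python
# Illustrative sketch of the @sync idea: only emit an rgb/depth pair
# whose timestamps differ by less than 1/60 of a second.
SYNC_WINDOW = 1.0 / 60.0  # seconds

def synced_pairs(rgb_times, depth_times):
    """Greedy match of two sorted timestamp lists; drop frames that drift."""
    pairs = []
    i = j = 0
    while i < len(rgb_times) and j < len(depth_times):
        delta = rgb_times[i] - depth_times[j]
        if abs(delta) < SYNC_WINDOW:
            pairs.append((rgb_times[i], depth_times[j]))
            i += 1
            j += 1
        elif delta < 0:
            i += 1   # rgb frame too old relative to depth; discard it
        else:
            j += 1   # depth frame too old; discard it
    return pairs

# Two slightly drifting 30 fps streams: every frame still pairs up here,
# but a larger drift would push pairs outside the window and drop frames.
rgb = [n / 30.0 for n in range(5)]
depth = [n / 30.0 + 0.005 for n in range(5)]
print(len(synced_pairs(rgb, depth)))  # prints 5
```

When the drift between any two sources exceeds the window, one side's frame is discarded, which is why small drift in the bang, the camera, or the SDK reduces the number of frames you receive.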
I made a big improvement in the @sync functionality. There should be many more synced frames available. You may also notice that the default value for @waittime is no longer zero. If you enable @sync, I recommend you keep the new default value or set @waittime to a value greater than the framerate of the depth sensor.
Please note, the skeleton data does not yet utilize the new sync approach. It will in an upcoming update.
Very good question!!!! Or is there going to be a large cost to pay for personal usage? Because diablodale, your Kinect objects are very good and useful; your place belongs in the books written about the Kinect...
During the beta period, I have an expiration date to encourage feedback and bug reports, and to get people to retrieve the new beta versions which fix bugs or test new features. When these new betas are released, the expiration date for that beta version of dp.kinect is extended for another 30 days.
At http://hidale.com/dp-kinect/ I have recently updated the licensing terms for the beta. It also hints at the "flavor" or direction of the eventual licensing terms.
I hope that the external provides the Max community a useful and reliable means to use the Kinect. The feedback from you all during the beta period will help us all.
@Ed, it is true. I have not yet exposed skeleton joints in screen coordinates; the equivalent of @skeleton_value_type 1
I do have two questions.
1) What is the usefulness of having a skeleton orientation (which is a real-world 3x3 xyz rotation matrix) yet having screen coordinates which are only x,y and to which the rotation matrix should not be applied?
2) I have not used the orientation matrix on dp.kinect. I output what I get from the SDK but haven't checked that it is correct. Is it correct? Please note, in dp.kinect it is a 4x4 matrix (the last row/column are zero except the bottom right, which is 1), while in jit.openni it is just the 3x3. It should be simple to convert from the 3x3 to the 4x4 if you want that. I kept it 4x4 since that's what Max 6 uses in the anim nodes.
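The 3x3-to-4x4 conversion mentioned above is simple enough to sketch in plain Python (an illustration only, not part of either external): the 4x4 form is the 3x3 rotation embedded in a homogeneous matrix whose extra row and column are zero except for a 1 in the bottom-right corner.

```python
# Embed a 3x3 rotation in a 4x4 homogeneous matrix (the Max 6 anim layout).
def to_4x4(m3):
    m4 = [row[:] + [0.0] for row in m3]  # pad each row with a 0 column
    m4.append([0.0, 0.0, 0.0, 1.0])     # bottom row: 0 0 0 1
    return m4

# And the reverse: drop the last row and column to get the 3x3 rotation
# that jit.openni outputs.
def to_3x3(m4):
    return [row[:3] for row in m4[:3]]

# Example: a 90-degree rotation about the z axis round-trips unchanged.
rz = [[0.0, -1.0, 0.0],
      [1.0,  0.0, 0.0],
      [0.0,  0.0, 1.0]]
m4 = to_4x4(rz)
assert to_3x3(m4) == rz
assert m4[3][3] == 1.0
```

In a patch you could do the equivalent with jit.matrix operations, but the principle is just padding or trimming the matrix.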
Well, for my current app, I'm not using orientation at all, just the screen position coordinates of the head and hands. The positions are used to interact with video on the screen, with the normal Kinect video modified and mixed in. The user video is partially used for user position feedback. If I understand your question correctly, I could see similar applications where the orientation/direction of the head or hands, etc., could be used as another modifier or parameter for certain types of position-dependent interaction with video. I wouldn't, at this time at least, give it a very high priority though.
@Ed, ok, I understand. FYI, if you don't need the orientation, I recommend you not set @output_skeleton_orientation=1. It'll save a few CPU cycles. Of course, it's completely your choice. :-)
There is likely to be an update to jit.openni very soon; a few minor tweaks were made to make it Mac OS compatible. The orientations are strictly for the 3D x,y,z real-world space. I might disable the orientation data if @skeleton_value_type=1. Why? Because there is no relation to the projective coordinates without some rather complex mathematics.
My priority at this time is to jump on some features finally exposed in the new v1.6 SDK. It came out a few days ago and I've been digging into the changes like raw IR camera support, depth values greater than 4m, etc.
dp.kinect v0.6.4 is now available at the normal download locations. The major change is the addition of infrared (IR) output. With this addition, the dp.kinect external now has a complete superset of the features of jit.openni yet maintains very good compatibility, so you can choose the external best for your patch.
Just ported my whole system from OSX to Win7 (basically: downloading a whole bunch of Win versions of the externals I'm using). Adapted your help patch to output 2 skeletons and route them into my merging patch. Quick first test seems to be running smoothly. The 2 skeletons are working and I'm getting very good rendering frame rates. CPU load doesn't exceed 50%.
Really hate having to go back to Windows, but if it means no longer having to rely on my elderly MacBook Pro for the 2nd Kinect then it just needs to be done... Have been wanting to compare Win vs. OSX performance for a long time too.
I'm up for testing your new build. Though I'm working in my small home studio for the time being. Don't have the whole installation setup for full on testing right now.
Would the Xtion Pro and Pro Live work with dp.kinect (other driver?) (multiple)?
Would the Xtion work with jit.openni (OSX and Win)?
Have you got experience with them?
Using dp.kinect, can I have 2 different depth cameras at the same time: an Xbox 360 Kinect and a Kinect for Windows, or an Xtion Pro? Can I swap them without changing drivers?
From what I understood, you recommend dp.kinect over jit.openni because of long-term development with the Microsoft SDK. The problem is that your licensing is only 30 days, so no permanent installs. After nearly 1 year of betas, wouldn't it be time for a license allowing users to make long-term installations?
I've got to take a decision regarding this issue.
By the way, can we hope you'll have the same interest in developing a Max external for the Kinect2 (Xbox One)?
dp.kinect is exclusively for use with the Microsoft Kinect.
Yes, I do know some people have success using the Xtion with jit.openni. There are multiple versions of the Asus sensors. I do not know which do or don't work. It depends on the hardware driver they provide. I do not own an Asus sensor.
dp.kinect supports multiple Kinects on the same PC. Each Kinect must be on its own USB 2.0 controller. It is not possible to have two Kinects on the same USB 2.0 controller and have them fully function. There is not enough bandwidth.
It is not possible to install the official SDK Kinect hardware drivers -and- the open source Kinect OpenNI v1.x hardware drivers at the same time. The drivers conflict with each other because they are trying to control the same hardware. It should be possible to install the Xtion drivers at any time and there be no conflict.
I already have private clients which have licensed dp.kinect for their own private use. Each client has their own needs and I provide them a version of dp.kinect which meets those needs. If you are interested in licensing dp.kinect, you are welcome to contact me at email@example.com
My intention is to continue to develop dp.kinect to support new features of the Microsoft SDK. Two that I am working on now are face recognition and gesture support. As the SDK is updated to support the Kinect2, I would like to support the enhancements or new features which that hardware allows.
I am working on puppetry with the Kinect and was surprised that there wasn't orientation tracking of the head in OpenNI, but fortunately there is in dp.kinect!
I have one question though (I hope I haven't overlooked it in the wiki):
When I select position + orientation, I get 3 floats for the position, followed by 5 floats. Looking at it, I guess the first 4 floats are the quaternion. Is that correct? What is the last float then? Quality? Some extra orientation?
First, I want to say thank you for this object; it's very nice to have the Kinect directly in Max (and on PC too!). I worked with OpenNI first, and Synapse, but it seems very unstable on my computer... so dp.kinect is kind of a grail! :)
I read the whole [dp.kinect] wiki, and it's working, as I have a moving picture of myself, but now I have problems making the skeleton calibration work. How can I get the [calib_success] thing? Is it a message? How can I do it? I checked the "skeleton" attribute. "skeleton format" is checked too. I'm a bit confused... I don't work with osc-route. Next, it says:
"The format of the max route friendly message @skeletonformat=1 with no orientation data:
skel userid jointname x y z confidence"
Does it mean that, to track someone's head, I have to make a [route skel "userid" head x y z confidence] object??? What does it mean?
Then open the demo patcher that is included with the ZIP file. In that top-level patch you will see dp.kinect. The 5th outlet will output the messages that you are seeking. Everything out the 5th outlet is a message, and all those messages are documented at the URL above.
The demo patch shows how to look for "user". You could use the same method to look for "calib_success" if you want.
Are you familiar with Max messages? This is a very important topic to learn in order to use Max. The tutorials that come with Max (Help menu, Max Tutorials) are excellent and can teach you how to view Max messages, manipulate them, use the route object on them, etc. Perhaps tutorial #18 (Data Collections) can assist you.
I suspect you will be using a [route] object to route messages and to get to the data (coordinates or other values) that you want.
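As a rough sketch of what that routing extracts (Python here only for illustration; in a patch you would chain [route] objects instead), a "skel userid jointname x y z confidence" message can be unpacked like this:

```python
# Pull the pieces out of a skeleton message in the @skeletonformat=1
# layout quoted above: "skel userid jointname x y z confidence".
def parse_skel(message):
    parts = message.split()
    if parts[0] != "skel":
        return None  # a [route skel] object would pass other messages through
    userid, joint = int(parts[1]), parts[2]
    x, y, z, confidence = map(float, parts[3:7])
    return userid, joint, (x, y, z), confidence

# Example message with made-up coordinate values.
msg = "skel 1 head 0.1 1.4 2.2 1.0"
userid, joint, position, confidence = parse_skel(msg)
print(joint, position)  # prints: head (0.1, 1.4, 2.2)
```

In Max, [route skel] strips the leading "skel" selector, and a following [route 1] (or the user ID you want) plus [route head] would leave you with the x y z confidence list to unpack.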