Forums > MaxMSP

Open Kinect

Feb 06 2011 | 7:29 pm


ok so i put the 2 calculation methods side by side in one patch, switching with a gate/switch combo:

jit.expr method: 12.5 – 13.5 fps
jit.slab method: 19.5 – 20.5 fps

although less than before, this is still a serious speed improvement.

just one question regarding your slab method:
what is the difference in output format that i notice?

the 3 jitter planes are totally different than in the jit.expr method, but apart from the perspective correction part the rest of the patch seems identical to me;

i was hoping to get the same view with the same camera mode/position when switching views, but that doesn’t work.

Feb 07 2011 | 2:29 pm

for me it's 12 vs 16 fps with the patch below.

i’m not sure i understand your question. the resulting mesh of both methods is identical. one difference is that the slab outputs a 4-plane matrix whereas the jit.expr outputs a 3-plane one. that’s because the slab method requires 4 planes to work but the last plane can just be discarded. it doesn’t hold any relevant data.
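For anyone more comfortable with array code than with Jitter planes: discarding that unused 4th plane is just a slice. A minimal numpy sketch (the 480x640 geometry matches the Kinect depth map; the plane layout here is an assumption, not taken from the patch):

```python
import numpy as np

# hypothetical stand-in for the slab method's 4-plane output
# (planes 0-2 hold x, y, z; plane 3 holds no relevant data)
slab_out = np.zeros((480, 640, 4), dtype=np.float32)

# discard the 4th plane to match the jit.expr method's 3-plane output
mesh = slab_out[:, :, :3]
print(mesh.shape)  # (480, 640, 3)
```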

btw, i don’t know what happens with the 4th-plane data at the vertex array inlet (it accepts 3- and 4-plane matrices). when rendering points it doesn’t seem to have any influence. i didn’t check other modes.

-- Pasted Max Patch --

Feb 09 2011 | 3:30 pm

my confusion came from the different scaling, and from the additional plane.
in your last patch you used a scaling of 0.13 instead of 0.33; this way the size is identical.
i didn’t understand this difference at first sight.

thanks for the clarification!

Feb 12 2011 | 1:44 am

I put a clip of some recent stereo 3D kinect stuff on youtube – check it out:

(or in 720p)

(you should see a little 3D popup menu in the lower-right corner; use it to change the viewing method – most likely red/cyan anaglyph)

Feb 13 2011 | 9:28 am

too bad, no glasses… very curious though

Feb 25 2011 | 10:09 pm

i condensed my undistortion findings into a blog post, with files:

Feb 26 2011 | 10:14 pm

is there a way to refine the depth map for use as a controller?

How do you isolate specific portions of the matrix? Say, by distance?

I have followed several examples (including using Processing, OSCeleton, and also the freenect.grab object) but still can’t seem to figure out how to do this…
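One common way to isolate a portion of the depth matrix by distance is to threshold it: keep only the cells whose depth falls between a near and a far bound, and zero the rest. A hedged numpy sketch (units and value range depend on which Kinect object feeds you the matrix — the numbers below are made up):

```python
import numpy as np

def isolate_range(depth, near, far):
    """Return a copy of the depth map keeping only cells whose
    depth lies in [near, far]; everything else is zeroed out."""
    mask = (depth >= near) & (depth <= far)
    return np.where(mask, depth, 0)

# toy 2x3 "depth map" with arbitrary raw values
depth = np.array([[500.0, 1200.0, 2500.0],
                  [800.0, 3000.0, 1500.0]], dtype=np.float32)
print(isolate_range(depth, 700, 1600))
# [[   0. 1200.    0.]
#  [ 800.    0. 1500.]]
```

The resulting mask (or masked matrix) can then drive blob detection or act directly as a controller signal.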


Mar 08 2011 | 10:06 am

Hello peoples!

Yesterday i couldn't resist anymore, so even though i'm on winxp i bought a kinect, spent a lot of hours compiling the provided source, and got it to work (in general, not in max yet). Now i came across this link,

Tried it and it works!!! I can now access the kinect directly from max!!! Aah man, didn't expect this so i'm stoked!


(As proof, check the jit.dx.grab in the picture)


  1. kinect.png


Mar 09 2011 | 3:00 pm

hi, got a problem running 2 kinects at the same time. did anyone try it with the latest external? see thread here:

Mar 15 2011 | 11:05 am

Could somebody send me the old pbox2d libraries?

the link for downloading them is dead, and when I try to download them somewhere else I get the new library, which doesn't have the same functions (I get function errors when I try the examples).
And I couldn't post on his blog…


Best regards


Mar 15 2011 | 11:10 am

there you go…

Mar 16 2011 | 9:06 pm

@DTR thanks, it works perfectly now

Mar 21 2011 | 12:18 am

Hi, I have been working with the freenect source for a couple of hours and I am trying to accomplish a specific task: pulling depth measurements from the kinect as numbers that correspond to distance.

For example, if I am 1 ft from the kinect it gives a reading of 0, while if I am 3 ft away it gives a reading of 30.

Is it easy to extract this information?
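That kind of reading is essentially a linear rescale of the raw depth value. A sketch under the assumption that raw depth grows roughly linearly with distance over your working range (the calibration values 400 and 1000 are invented for illustration; you would measure your own raw readings at known distances):

```python
def scale_depth(raw, raw_min, raw_max, out_min=0.0, out_max=30.0):
    """Linearly map a raw depth reading onto an output range,
    e.g. 0 at the nearest distance of interest, 30 at the farthest."""
    t = (raw - raw_min) / float(raw_max - raw_min)
    return out_min + t * (out_max - out_min)

# hypothetical calibration: raw 400 at 1 ft, raw 1000 at 3 ft
print(scale_depth(400, 400, 1000))   # 0.0
print(scale_depth(1000, 400, 1000))  # 30.0
print(scale_depth(700, 400, 1000))   # 15.0
```

Note that the Kinect's raw disparity values are not truly linear in distance, so for accuracy you would first convert raw values to real depth, then rescale.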

Jul 13 2011 | 1:39 pm

I’m having a serious issue with the latest jit.freenect.grab: it eats my RAM!

with the help patch, I can see in the system activity monitor that Max's memory usage grows quite quickly: about 0.1 MB per second.
when I close jit.freenect.grab.maxhelp the memory stops growing but does not decrease.

so after a couple of hours of usage, Max takes up hundreds of MB of RAM, and finally crashes…

same problem on a macBookPro corei7 / osX.6.8
and a macPro quad Core Xeon / osX.6.8
+ uptodate Max & Jitter.

does anyone have the same problem?
any workaround?

thanks in advance.


Jul 14 2011 | 10:25 am

Unfortunately yes. I talked to Jean-Marc quite a lot when he was releasing the first versions. On my aging macbook pro c2d I got frequent crashes and memory leaks. The strange thing is that it seemed hard to reproduce and probably some users with more RAM just didn’t notice the leaks.

@Jean-Marc: Have you picked this up again?

PS: In the meantime I just moved to Osceleton, to get basic skeleton tracking, but I would really love to be able to process it inside Jitter through a native object.

Mar 15 2012 | 1:47 pm

how can i display x, y and z in the skeletal viewer interface?

Mar 15 2012 | 4:44 pm

Hi Alice,

A bit more information would be useful: your patch, and what software you’re using with Max.

Mar 19 2012 | 8:50 am


Mar 21 2012 | 4:26 pm

Ok, well I haven’t looked into that software (it looks Windows-only, and I use a Mac).

But I use a really neat piece of software called Synapse, which tracks the x,y,z coordinates of all the major body joints.
It then sends the x,y,z joint data via OSC into Max (or another program).

It’s all free, available on Windows and Mac, and it’s really quick and easy to get skeleton tracking.
It’s definitely worth checking out.

It sounds like that’s what you are after.

Mar 21 2012 | 8:34 pm

thank you, so my project is to develop a motion tracking system using the kinect (left and right hands and head)
have you any information or website that can help me

Mar 22 2012 | 5:12 pm

Yeah, Synapse is perfect for you then.

Basically the link I posted above has everything you need to know, including a tutorial (sorta) on how to use it within Max!

I’ve also dug up a Kinect routing patch (below) I made when I was trying to make sense of it all…

Have fun!

-- Pasted Max Patch --

Mar 28 2012 | 11:32 am

how do i retrieve two hand positions at two different times?

Apr 01 2012 | 4:30 pm

My above patch does this, using Synapse.

May 08 2012 | 12:41 pm

I’ve been working with @dtr ‘s patch for a while and don’t understand why the jit.expr multiplies the x and y values of the depth information by the z value. The expression looks like this:

jit.expr 3 float32 640 480 @expr "(cell[0]-dim[0]/2.) * (in[0]-in[1]) * in[2]" "(cell[1]-dim[1]/2.) * (in[0]-in[1]) * in[2]" in[0] @inputs 3

Why not something like the following? It seems to give much better results.

jit.expr 3 float32 640 480 @expr "(cell[0]-dim[0]/2.) * in[1] " "(cell[1]-dim[1]/2.) * in[1]" in[0] @inputs 2

May 08 2012 | 2:14 pm

hmm i haven’t used this one in a while. i found the math on the net, didn’t come up with it myself. what i get from it is that you need to factor in the depth to translate the xy from the 2d depth map to 3d real world coordinates. seems logical to me.

why don’t you test which one’s correct by measuring real-world distances and comparing them to the calculated values? i think they’re supposed to be in millimeters.
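For reference, the back-projection both expressions approximate is the standard pinhole model: a pixel's offset from the image center corresponds to a real-world offset that grows with depth, which is why the x and y terms are multiplied by z. A Python sketch using the widely circulated libfreenect approximation (the constants 0.0021 and -10 come from that community formula, not from the patch itself):

```python
def depth_to_world(u, v, z, width=640, height=480,
                   scale=0.0021, min_dist=-10.0):
    """Back-project depth-map cell (u, v) with depth z (mm) to
    approximate real-world x, y, z coordinates in millimeters."""
    x = (u - width / 2.0) * (z + min_dist) * scale
    y = (v - height / 2.0) * (z + min_dist) * scale
    return x, y, z

# a point 1 m away, 100 pixels right of center,
# maps to roughly 21 cm to the side
print(depth_to_world(420, 240, 1000.0))
```

This matches the structure of the jit.expr above: `(cell - dim/2)` is the pixel offset from center, and the `(in[0]-in[1]) * in[2]` factor is the depth-dependent scale. The simpler expression without the z multiply would only be correct for an orthographic (non-perspective) camera.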

May 08 2012 | 4:24 pm

@dtr Thanks for that. I will keep reading and see if I can figure out where the math comes from.

Feb 08 2017 | 3:54 pm

Hey guys!
I’m pretty late to the party here. Gonna have a read through the forum in a bit, but first i thought i’d throw it out there and ask whether people have had much success using tracking or blob recognition to trigger events.
I’m presently a total noob with Kinect but have been immersing myself in Processing+Kinect.
Have had a good time with Max+Monome the past few months so i’m sure i’ll pick it up fine.
Any tips or links would be much appreciated.
Like I said, I’m gonna trawl through the forum here and get what I can.

Feb 12 2017 | 3:50 pm

The help patch for dp.kinect2 has an example of triggering/playing midi music with the Kinect. The download is here

Feb 22 2017 | 11:17 am

So as far as I can tell…diablodale you have literally the only means of "Kinecting" to Max 7 using Windows 10 and a model 1414.

I’ve spent a few hours researching and I keep looping back to your dp.kinect and dp.kinect2 for the 2nd gen Kinect.

I certainly respect the need to pay the bills, but if I’m intermediate with Max/MSP/Jitter and looking to build my own patch using the Kinect data, can anyone advise on how to get started without paying? Perhaps this is just the dead end I’m at since I don’t have enough familiarity to program my own workaround, but if anyone has an open-source solution for Max 7, Windows 10 and the model 1414 (1st gen) Kinect I would be incredibly appreciative.

That said, I appreciate the hard work diablodale, I just can’t afford the fruits of your labor at the moment. I’ll download the trial, so kudos for supplying that; perhaps I can make a skeletal version for my own immediate needs from it until I can afford the full package.

Happy patching!

Feb 22 2017 | 12:59 pm

Haven’t done this for a good while, so I’m not sure if these work on Windows 10 or if I have my facts straight, but FWIW you could look into OSCeleton or Synapse as ways to get Kinect data into Max. Both require several installs for drivers, middleware etc. I remember it being painful, but I also seem to remember that the Zigfu installer took care of it all for you, so maybe try that first:
