Open Kinect

Feb 6, 2011 at 7:29pm

@dtr

ok so i put the 2 calculation methods side by side in one patch, switching with a gate/switch combo:

jit.expr method: 12.5 – 13.5 fps
jit.slab method: 19.5 – 20.5 fps

although less than before, this is still a serious speed improvement.

just one question regarding your slab method:
what is the difference in output format that i notice?

the 3 jitter planes are totally different from those of the jit.expr method, but apart from the perspective-correction part the rest of the patch looks identical to me.

i was hoping to get the same view with the same camera mode/position when switching methods, but that doesn’t work.

#191556
Feb 7, 2011 at 2:29pm

for me it’s 12 vs 16 fps with the patch below.

i’m not sure i understand your question. the resulting mesh of both methods is identical. one difference is that the slab outputs a 4-plane matrix whereas jit.expr outputs a 3-plane one. that’s because the slab method requires 4 planes to work, but the last plane can simply be discarded; it doesn’t hold any relevant data.

btw, i don’t know what jit.gl.mesh does with the 4th plane’s data in the vertex array inlet (it accepts both 3- and 4-plane matrices). when rendering points it doesn’t seem to have any influence. i didn’t check other modes.
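The plane bookkeeping above can be sketched outside Jitter. This plain-Python stand-in (not Jitter code; the cell values are made up) shows that dropping the slab’s unused 4th plane leaves the same 3-plane vertex data the jit.expr method produces:

```python
def drop_fourth_plane(matrix_4plane):
    """Strip the unused 4th component from every cell of a 4-plane matrix."""
    return [[cell[:3] for cell in row] for row in matrix_4plane]

# a tiny 2x2 "matrix" of 4-plane cells; the 4th value carries no data
m4 = [
    [(0.1, 0.2, 1.0, 0.0), (0.3, 0.2, 1.1, 0.0)],
    [(0.1, 0.4, 1.2, 0.0), (0.3, 0.4, 1.3, 0.0)],
]
m3 = drop_fourth_plane(m4)
print(m3[0][0])  # (0.1, 0.2, 1.0)
```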

– Pasted Max Patch, click to expand. –
#191557
Feb 9, 2011 at 3:30pm

my confusion came from the different scaling size and the additional plane.
in your last patch you used a scaling of 0.13 instead of 0.33 – that way the sizes are identical.
i didn’t understand this difference at first sight.

thanks for the clarification!

#191558
Feb 12, 2011 at 1:44am

I put a clip of some recent stereo 3D kinect stuff on youtube – check it out:

http://www.youtube.com/watch?v=mCHVwcnkO3Q

(or in 720p)

http://www.youtube.com/watch?v=mCHVwcnkO3Q&hd=1

(you should see a little 3D popup menu in the lower-right corner; use it to change the viewing method – most likely red/cyan anaglyph)

#191559
Feb 13, 2011 at 9:28am

too bad, no glasses… very curious though

#191561
Feb 25, 2011 at 10:09pm

i condensed my undistortion findings into a blog post, with files: http://dtr.noisepages.com/2011/02/2-methods-for-undistorting-the-kinect-depth-map-in-maxjitter/

#191562
Feb 26, 2011 at 10:14pm

is there a way to refine the depth map for use as a controller?

How do you isolate specific portions of the matrix – say, by distance?

I have followed several examples (including Processing, OSCeleton, and the freenect.grab object) but still can’t seem to figure out how to do this…

thanks,
-levy
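One common answer to the by-distance part of this question is a range mask: compare every depth cell against near/far thresholds and zero out everything else (in Jitter that kind of per-cell comparison is typically done with jit.op). A plain-Python sketch of the idea, with made-up depth values:

```python
def isolate_range(depth_row, near, far):
    """Keep cells whose depth lies in [near, far]; zero out the rest."""
    return [d if near <= d <= far else 0 for d in depth_row]

row = [0.4, 0.9, 1.5, 2.2, 3.0]      # hypothetical depths in meters
print(isolate_range(row, 0.8, 2.0))  # [0, 0.9, 1.5, 0, 0]
```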

#191563
Mar 8, 2011 at 10:06am

Hello peoples!

Yesterday i couldn't resist anymore, so even though i'm on WinXP i bought a Kinect, spent a lot of hours compiling the source provided by openkinect.org, and got it to work (in general, not in Max yet). Then i came across this link:

http://www.e2esoft.cn/kinect

Tried it and it works!!! I can now access the Kinect directly from Max!!! Aah man, didn't expect this so i'm stoked!

FRid

(As proof, check the jit.dx.grab in the picture)


Attachments:
  1. kinect.png
#191564
Mar 9, 2011 at 3:00pm

hi, got a problem running 2 kinects at the same time. did anyone try it with the latest external? see thread here: http://cycling74.com/forums/topic.php?id=31621

#191565
Mar 15, 2011 at 11:05am

Hi
Could somebody send me the old pbox2d library?

the download link on http://tohmjudson.com/?p=30 is dead, and when I try to download it somewhere else I get the new library, which doesn't have the same functions (I get function errors when I try the examples).
And I couldn't post on his blog…

thanks

Best regards

Arthur

#191566
Mar 15, 2011 at 11:10am

there you go…

Attachments:
  1. pbox2d0.02.zip
#191567
Mar 16, 2011 at 9:06pm

@DTR thanks, it works perfectly now

#191568
Mar 21, 2011 at 12:18am

Hi, I have been working with the freenect source for a couple of hours and I am trying to accomplish a specific task: pulling depth measurements from the Kinect as a number that represents range.

For example, if I am 1 ft from the Kinect it gives a reading of 0, whereas if I am 3 ft away it gives a reading of 30.

Is it easy to extract this information?
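The mapping described here (1 ft → 0, 3 ft → 30) is just a linear rescale of the depth value, which in Max would typically be a [scale] or [zmap] object once you have a metric reading. A minimal sketch, with the endpoints taken from the post and the clamping behavior an assumption:

```python
def depth_to_reading(depth_ft, near_ft=1.0, far_ft=3.0, out_max=30.0):
    """Map depths in [near_ft, far_ft] linearly onto [0, out_max], clamped."""
    t = (depth_ft - near_ft) / (far_ft - near_ft)
    return max(0.0, min(out_max, t * out_max))

print(depth_to_reading(1.0))  # 0.0
print(depth_to_reading(2.0))  # 15.0
print(depth_to_reading(3.0))  # 30.0
```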

#191569
Jul 13, 2011 at 1:39pm

I’m having a serious issue with the latest jit.freenect.grab: it consumes my RAM!

with the help patch, I can see in the system activity monitor that Max’s memory usage grows quite quickly: about 0.1 MB/second.
when I close jit.freenect.grab.maxhelp the memory stops growing but does not decrease.

so after a couple of hours of use, Max takes up hundreds of MB of RAM, and finally crashes…

same problem on a MacBook Pro Core i7 / OS X 10.6.8
and a Mac Pro quad-core Xeon / OS X 10.6.8,
+ up-to-date Max & Jitter.

does anyone have the same problem?
any workaround?

thanks in advance.

Mathieu

#191570
Jul 14, 2011 at 10:25am

Unfortunately yes. I talked to Jean-Marc quite a lot when he was releasing the first versions. On my aging macbook pro c2d I got frequent crashes and memory leaks. The strange thing is that it seemed hard to reproduce and probably some users with more RAM just didn’t notice the leaks.

@Jean-Marc: have you picked this up again?

PS: In the meantime I just moved to Osceleton, to get basic skeleton tracking, but I would really love to be able to process it inside Jitter through a native object.

#191571
Mar 15, 2012 at 1:47pm

Hi
how can I display x, y and z in the Skeletal Viewer interface?

#191572
Mar 15, 2012 at 4:44pm

Hi Alice,

A bit more information would be useful, patch/ what software you’re using with Max.

#191573
Mar 19, 2012 at 8:50am

hi
kinectSDK32

#191574
Mar 21, 2012 at 4:26pm

Ok, well I haven’t looked into that software (it looks Windows-only, and I use a Mac).

But I use a really neat piece of software called Synapse, which tracks the x, y, z coordinates of all the major body joints.
It then sends that x, y, z joint data via OSC into Max (or another program).

It’s all free, available on Windows and Mac, and it makes skeleton tracking really quick and easy to set up.
It’s definitely worth checking out.

It sounds like that’s what you’re after.

http://synapsekinect.tumblr.com/
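On the Max side, [udpreceive] plus routing objects handle the incoming OSC; outside Max the wire format is simple enough to read by hand. A stdlib-only Python sketch of parsing one float-argument OSC message follows. Note the address "/righthand" and the coordinate values are made up for illustration; check Synapse’s documentation for the actual addresses and port it uses:

```python
import struct

def _read_padded_string(data, offset):
    """Read a null-terminated OSC string padded to a 4-byte boundary."""
    end = data.index(b"\x00", offset)
    s = data[offset:end].decode("ascii")
    return s, (end + 4) & ~3  # skip the null byte and padding

def parse_osc_message(data):
    """Parse an OSC message whose arguments are all float32 ('f')."""
    address, offset = _read_padded_string(data, 0)
    typetags, offset = _read_padded_string(data, offset)
    args = []
    for tag in typetags.lstrip(","):
        if tag == "f":
            (value,) = struct.unpack_from(">f", data, offset)
            args.append(value)
            offset += 4
    return address, args

# a hand-built message: hypothetical address "/righthand", three float32 args
msg = (b"/righthand\x00\x00"                  # address, padded to 12 bytes
       + b",fff\x00\x00\x00\x00"              # type tags, padded to 8 bytes
       + struct.pack(">fff", 0.1, 0.2, 0.3))  # big-endian float payload
addr, xyz = parse_osc_message(msg)
print(addr, [round(v, 3) for v in xyz])  # /righthand [0.1, 0.2, 0.3]
```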

#191575
Mar 21, 2012 at 8:34pm

thank you. my project is to develop a motion tracking system using the kinect (left and right hands, and the head).
do you have any information or websites that could help me?

#191576
Mar 22, 2012 at 5:12pm

Yeah, Synapse is perfect for you then.

Basically the link I posted above has everything you need to know, including a tutorial (sorta) on how to use it within Max!

I’ve also dug up a Kinect routing patch (below) I made when I was trying to make sense of it all…

Have fun!

– Pasted Max Patch, click to expand. –
#191577
Mar 28, 2012 at 11:32am

how do I retrieve two hand positions at two different times?

#191578
Apr 1, 2012 at 4:30pm

My patch above does this, using Synapse.

#191579
May 8, 2012 at 12:41pm

I’ve been working with @dtr’s patch for a while and don’t understand why the jit.expr multiplies the x and y values of the depth information by the z value. The expression looks like this:

jit.expr 3 float32 640 480 @expr "(cell[0]-dim[0]/2.) * (in[0]-in[1]) * in[2]" "(cell[1]-dim[1]/2.) * (in[0]-in[1]) * in[2]" in[0] @inputs 3

Why not something like the following? It seems to give much better results.

jit.expr 3 float32 640 480 @expr "(cell[0]-dim[0]/2.) * in[1]" "(cell[1]-dim[1]/2.) * in[1]" in[0] @inputs 2

#191580
May 8, 2012 at 2:14pm

hmm, i haven’t used this one in a while. i found the math on the net, didn’t come up with it myself. what i get from it is that you need to factor in the depth to translate the x/y from the 2d depth map into 3d real-world coordinates. seems logical to me.

why don’t you test which one’s correct by measuring real-world distances and comparing them to the calculated values? i think they’re supposed to be millimeters.
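The structure of both expressions matches standard pinhole-camera back-projection: a pixel’s offset from the image center is scaled by depth over focal length, so depth has to multiply the x and y terms (the (in[0]-in[1]) factor presumably folds in calibration constants). A sketch with illustrative, uncalibrated constants, not real Kinect intrinsics:

```python
def depth_to_world(u, v, z, cx=320.0, cy=240.0, f=580.0):
    """Back-project one depth-map pixel (u, v) at depth z to world x, y, z.
    cx, cy = image center; f = focal length in pixels (illustrative values)."""
    x = (u - cx) * z / f
    y = (v - cy) * z / f
    return x, y, z

# the same pixel 20 px right of center lands further off-axis as depth grows:
print(depth_to_world(340, 240, 1.16))  # x doubles when z doubles
print(depth_to_world(340, 240, 2.32))
```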

#191581
May 8, 2012 at 4:24pm

@dtr Thanks for that. I will keep reading and see if I can figure out where the math comes from.

#191582
