Forums > MaxMSP

Open Kinect

February 6, 2011 | 7:29 pm

@dtr

ok so i put the 2 calculation methods side by side in one patch, switching with a gate/switch combo:

jit.expr method: 12.5 – 13.5 fps
jit.slab method: 19.5 – 20.5 fps

although less than before, this is still a serious speed improvement.

just one question regarding your slab method:
what is the difference in output format that i notice?

the 3 jitter planes are totally different than in the jit.expr method, but apart from the perspective correction part the rest of the patch seems identical to me;

i was hoping to get the same view with the same camera mode/position when switching views, but that doesn’t work.



dtr
February 7, 2011 | 2:29 pm

for me it's 12 vs 16 fps with the patch below.

i’m not sure i understand your question. the resulting mesh of both methods is identical. one difference is that the slab outputs a 4-plane matrix whereas the jit.expr outputs a 3-plane one. that’s because the slab method requires 4 planes to work but the last plane can just be discarded. it doesn’t hold any relevant data.

btw, i don’t know what jit.gl.mesh does with the 4th plane data in the vertex array inlet (it accepts 3 and 4 plane matrices). when rendering points it doesn’t seem to have any influence. i didn’t check other modes.

[pasted Max patch]
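As an aside, for anyone mirroring this step outside Jitter: discarding the unused 4th plane is a plain array slice. A minimal NumPy sketch (the 480×640 float32 shape is an assumption matching the depth map's resolution):

```python
import numpy as np

# Stand-in for the 4-plane (x, y, z, unused) matrix the slab method outputs.
verts4 = np.zeros((480, 640, 4), dtype=np.float32)

# Keep only the first 3 planes; the 4th holds no relevant data.
verts3 = verts4[..., :3]
print(verts3.shape)  # (480, 640, 3)
```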

February 9, 2011 | 3:30 pm

my confusion came from the different scaling and the additional plane.
in your last patch you used a scaling factor of 0.13 instead of 0.33 – that way the size is identical.
i didn't understand this difference at first sight.

thanks for the clarification!


February 12, 2011 | 1:44 am

I put a clip of some recent stereo 3D kinect stuff on youtube – check it out:

http://www.youtube.com/watch?v=mCHVwcnkO3Q

(or in 720p )

http://www.youtube.com/watch?v=mCHVwcnkO3Q&hd=1

(you should see a little 3D popup menu in the lower-right corner; use it to change the viewing method – most likely red/cyan anaglyph)



dtr
February 13, 2011 | 9:28 am

too bad, no glasses… very curious though



dtr
February 25, 2011 | 10:09 pm

i condensed my undistortion findings into a blog post, with files: http://dtr.noisepages.com/2011/02/2-methods-for-undistorting-the-kinect-depth-map-in-maxjitter/


February 26, 2011 | 10:14 pm

is there a way to refine the depth map for use as a controller?

How do you isolate specific portions of the matrix? Say, by distance?

I have followed several examples (including using Processing, OSCeleton, and also the freenect.grab object) but still can’t seem to figure out how to do this…

thanks,
-levy
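One common way to isolate a slice of the depth map by distance is to threshold it into a mask and multiply. This is not from the thread, just a hedged NumPy sketch (assuming the depth arrives as a float matrix where larger values mean farther away):

```python
import numpy as np

def isolate_range(depth, near, far):
    """Mask selecting only the pixels whose depth lies between near and far."""
    return (depth >= near) & (depth <= far)

# Toy 2x3 depth map in arbitrary units (a real Kinect frame would be 640x480).
depth = np.array([[0.5, 1.2, 2.8],
                  [3.1, 1.9, 0.2]], dtype=np.float32)

mask = isolate_range(depth, 1.0, 2.0)
print(depth * mask)  # everything outside the 1.0-2.0 band is zeroed
```

In Jitter terms this is roughly a pair of jit.op threshold comparisons multiplied together.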


March 8, 2011 | 10:06 am

Hello peoples!

Yesterday i couldn't resist anymore, so even though i'm on winxp i bought a kinect and spent a lot of hours compiling the source provided by openkinect.org and got it to work (in general, not in max yet). Then i came across this link:

http://www.e2esoft.cn/kinect

Tried it and it works!!! I can now access the kinect directly from max!!! Aah man, didn't expect this so i'm stoked!

FRid

(As proof, check the jit.dx.grab in the picture.)


Attachments:
  1. kinect.png


dtr
March 9, 2011 | 3:00 pm

hi, got a problem running 2 kinects at the same time. did anyone try it with the latest external? see thread here: http://cycling74.com/forums/topic.php?id=31621


March 15, 2011 | 11:05 am

Hi
Could somebody send the old pbox2d libraries?

The link to download them on http://tohmjudson.com/?p=30 is dead, and when I try to download them somewhere else I get the new library, which doesn't have the same functions (I get function errors when I try the examples).
And I could not post on his blog…

thanks

Best regards

Arthur



dtr
March 15, 2011 | 11:10 am

there you go…

Attachments:
  1. pbox2d0.02.zip

March 16, 2011 | 9:06 pm

@dtr thanks, it works perfectly now


March 21, 2011 | 12:18 am

Hi, I have been working with the freenect source for a couple of hours and I am trying to accomplish a specific task: I want to pull depth measurements from the Kinect as numbers that correspond to range.

For example, if I am 1 ft from the Kinect it gives a reading of 0, whereas if I am 3 ft away it gives a reading of 30.

Is it easy to extract this information?
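What this describes is a linear rescale of distance into a convenient controller range. A hedged sketch of just that mapping (the 1 ft → 0 / 3 ft → 30 scheme is taken from the example above; converting raw depth values to metres depends on the driver and is not shown):

```python
def metres_to_feet(m):
    """Convert metres to feet."""
    return m / 0.3048

def reading_from_distance(feet):
    """Linear map matching the example: 1 ft -> 0, 3 ft -> 30."""
    return (feet - 1.0) * 15.0

print(reading_from_distance(1.0))  # 0.0
print(reading_from_distance(3.0))  # 30.0
```

In a Max patch this is roughly what a [scale 1. 3. 0. 30.] object does to the distance value.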


July 13, 2011 | 1:39 pm

I’m having a serious issue with the latest jit.freenect.grab: it consumes my RAM!

with the help patch, I can see in the system activity monitor that Max’s memory usage grows quite quickly: about 0.1 MB / second.
when I close jit.freenect.grab.maxhelp the memory stops growing but does not decrease.

so after a couple of hours of usage, Max takes up hundreds of MB of RAM, and finally crashes…

same problem on a MacBook Pro Core i7 / OS X 10.6.8
and a Mac Pro quad-core Xeon / OS X 10.6.8
+ up-to-date Max & Jitter.

does anyone have the same problem?
any workaround?

thanks in advance.

Mathieu


July 14, 2011 | 10:25 am

Unfortunately yes. I talked to Jean-Marc quite a lot when he was releasing the first versions. On my aging macbook pro c2d I got frequent crashes and memory leaks. The strange thing is that it seemed hard to reproduce and probably some users with more RAM just didn’t notice the leaks.

@Jean-Marc: Have you picked this up again?

PS: In the meantime I just moved to Osceleton, to get basic skeleton tracking, but I would really love to be able to process it inside Jitter through a native object.


March 15, 2012 | 1:47 pm

Hi
how can i display x, y and z in the skeletal viewer interface?


March 15, 2012 | 4:44 pm

Hi Alice,

A bit more information would be useful: a patch, and what software you’re using with Max.


March 19, 2012 | 8:50 am

hi
kinectSDK32


March 21, 2012 | 4:26 pm

Ok, well I haven’t looked into that software (it looks Windows-only, and I use a Mac).

But I use a really neat piece of software called Synapse, which sends x,y,z coordinates of all the major body joints.
It then sends the x,y,z joint data via OSC into Max (or another program).

It’s all free, available on Windows and Mac, and it’s really quick and easy to get skeleton tracking.
It’s definitely worth checking out.

It sounds like that’s what you are after.

http://synapsekinect.tumblr.com/
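For completeness, the routing side of such a joint stream can be done in any OSC-capable environment, not just Max. A minimal Python sketch of the bookkeeping (hand-rolled tuples stand in for the network layer, and the /head-style addresses are assumptions about Synapse's scheme, so check its docs for the real ones):

```python
# Latest known position per joint, keyed by an OSC-style address.
joints = {}

def handle_message(address, args):
    """Store the x, y, z triple carried by a joint-position message."""
    x, y, z = args
    joints[address] = (x, y, z)

# Stand-ins for messages arriving over UDP (addresses are hypothetical).
incoming = [
    ("/head",      (0.0, 1.6, 2.1)),
    ("/lefthand",  (-0.4, 1.1, 1.9)),
    ("/righthand", (0.5, 1.2, 2.0)),
]
for address, args in incoming:
    handle_message(address, args)

print(joints["/righthand"])  # (0.5, 1.2, 2.0)
```

In Max the equivalent is a [udpreceive] feeding a route/OSC-route object per joint address.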


March 21, 2012 | 8:34 pm

thank you. my project is to develop a motion tracking system using the kinect (left and right hands and head).
do you have any information or websites that could help me?


March 22, 2012 | 5:12 pm

Yeah, Synapse is perfect for you then.

Basically the link I posted above has everything you need to know, including a tutorial (sorta) on how to use it within Max!

I’ve also dug up a Kinect routing patch (below) I made when I was trying to make sense of it all…

Have fun!

[pasted Max patch]

March 28, 2012 | 11:32 am

how do i retrieve the two hand positions at two different times?


April 1, 2012 | 4:30 pm

My above patch does this, using Synapse.


May 8, 2012 | 12:41 pm

I’ve been working with @dtr’s patch for a while and don’t understand why the jit.expr multiplies the x and y values of the depth information by the z value. The expression looks like this:

jit.expr 3 float32 640 480 @expr "(cell[0]-dim[0]/2.) * (in[0]-in[1]) * in[2]" "(cell[1]-dim[1]/2.) * (in[0]-in[1]) * in[2]" in[0] @inputs 3

Why not something like the following? It seems to give much better results.

jit.expr 3 float32 640 480 @expr "(cell[0]-dim[0]/2.) * in[1] " "(cell[1]-dim[1]/2.) * in[1]" in[0] @inputs 2



dtr
May 8, 2012 | 2:14 pm

hmm, i haven’t used this one in a while. i found the math on the net, didn’t come up with it myself. what i get from it is that you need to factor in the depth to translate the xy from the 2d depth map to 3d real-world coordinates. seems logical to me.

why don’t you test which one’s correct by measuring real-world distances and comparing them to the calculated values? i think they’re supposed to be millimeters.
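The translation dtr describes is essentially pinhole-camera back-projection: each pixel's offset from the image centre is scaled by its depth and divided by the focal length, which is why the x/y terms carry a depth factor — a point at the same pixel but twice the distance sits twice as far off-axis in the real world. A hedged NumPy sketch (the focal-length values are commonly quoted Kinect approximations, not a calibration):

```python
import numpy as np

def depth_to_world(depth_mm, fx=594.0, fy=591.0):
    """Back-project a depth map in millimetres to x/y/z world coordinates.

    fx and fy are approximate Kinect focal lengths in pixels -- an
    assumption taken from commonly quoted values; calibrate for real
    measurements.
    """
    h, w = depth_mm.shape
    u = np.arange(w) - w / 2.0           # horizontal pixel offset from centre
    v = np.arange(h)[:, None] - h / 2.0  # vertical pixel offset from centre
    x = u * depth_mm / fx                # off-axis offset grows with depth
    y = v * depth_mm / fy
    return np.dstack([x, y, depth_mm])

depth = np.full((480, 640), 1000.0)      # a flat wall 1 m from the sensor
world = depth_to_world(depth)
print(world.shape)                       # (480, 640, 3)
```

The second expression in the post above, which drops the depth factor, would render a flat wall correctly but squeeze everything onto the same lateral scale regardless of distance.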


May 8, 2012 | 4:24 pm

@dtr Thanks for that. I will keep reading and see if I can figure out where the math comes from.

