
Kinect depth data

January 19, 2011 | 2:09 pm

Hi all,

I am using the freenect object by Jean-Marc Pelletier to combine depth data and camera video into a textured mesh. However, the depth data doesn't line up with the camera image, so the video texture isn't properly aligned with whoever is actually in front of the Kinect. I'm guessing it has something to do with the shadow from the infrared light. Does anybody have ideas on how I could clean this data up?

Regards,
Edwin


January 20, 2011 | 5:04 am

Yep… just throw a jit.rota on the camera feed and knock it about -33 in the x direction, I think, using the offset attribute.

You can check out my example patch here; I think I got that fixed: http://blairneal.com/blog/jit-freenect-examples/
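
Roughly what that offset does, sketched in Python/NumPy rather than Max (the -33 is just my eyeballed value, and the wrap-around at the edges stands in for whatever boundmode you set on jit.rota):

import numpy as np

def shift_rgb(rgb, offset_x=-33):
    # Shift every pixel of an H x W x 3 frame along x by the same amount,
    # which is all a single global offset can do.
    return np.roll(rgb, offset_x, axis=1)

rgb = np.zeros((480, 640, 3), dtype=np.uint8)  # one Kinect RGB frame
aligned = shift_rgb(rgb)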


January 20, 2011 | 8:58 am

Thanks for the reply laserpilot.

I checked out your patch, it’s almost identical to mine! :)

Changing the offset only moves the problem around: when I shift it to one side I get 'blind spots' on the other side, and vice versa. Sitting in front of the camera, my right hand seems to 'fit' in the NURBS mesh, but the left one has part of the background in it (I've mirrored my camera inputs, by the way).

Is anyone else experimenting with freenect and jit.gl stuff?


January 20, 2011 | 10:11 am

Offset is not enough; the image needs to be scaled as well.
I guess these settings are different for each Kinect…

To align the video to the depth input, I send srcdimstart + srcdimend messages to a jit.matrix (with usesrcdim on) with these settings:

– Pasted Max patch (not reproduced here) –
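
In non-Max terms, those messages read only a sub-rectangle of the RGB frame and stretch it over the full destination matrix, which handles both the offset and the scale at once. A rough equivalent in Python/NumPy; the corner values below are placeholders, since as said they differ per Kinect:

import numpy as np

def srcdim_crop(rgb, start=(8, 6), end=(632, 476), out_wh=(640, 480)):
    # start/end play the role of srcdimstart/srcdimend: the crop rectangle.
    x0, y0 = start
    x1, y1 = end
    crop = rgb[y0:y1, x0:x1]
    # stretch the crop back to the destination size (nearest-neighbour)
    w, h = out_wh
    rows = np.arange(h) * crop.shape[0] // h
    cols = np.arange(w) * crop.shape[1] // w
    return crop[rows][:, cols]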

Mathieu


January 21, 2011 | 3:22 pm

Thanks Mathieu, I’ll give it a try!



dtr
January 22, 2011 | 9:48 am

Check this out for the full technical explanation: http://www.ros.org/wiki/kinect_calibration/technical
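
The short version of that page: the depth and RGB lenses sit a few centimetres apart, so the shift between the two images depends on distance (parallax), and a single offset + scale can only be exact at one depth. Proper registration goes per pixel: convert the raw depth value to metres, back-project through the depth camera's intrinsics, move into the RGB camera's frame, re-project. A sketch assuming the commonly cited calibration numbers (the intrinsics, the raw-to-metres fit, and the ~2.5 cm baseline are all approximate; your own Kinect's values will differ):

import numpy as np

FX_D, FY_D, CX_D, CY_D = 594.21, 591.04, 339.5, 242.7    # depth intrinsics (assumed)
FX_R, FY_R, CX_R, CY_R = 529.22, 525.56, 328.94, 267.48  # RGB intrinsics (assumed)
R = np.eye(3)                    # rotation between the cameras, near identity
T = np.array([0.025, 0.0, 0.0])  # ~2.5 cm baseline along x (assumed)

def raw_to_metres(raw):
    # empirical fit for the 11-bit raw disparity, from the linked page
    return 1.0 / (raw * -0.0030711016 + 3.3309495161)

def depth_to_rgb_pixel(u, v, raw):
    z = raw_to_metres(raw)
    # back-project the depth pixel to a 3D point...
    p = np.array([(u - CX_D) * z / FX_D, (v - CY_D) * z / FY_D, z])
    # ...transform into the RGB camera's frame...
    p = R @ p + T
    # ...and project through the RGB camera.
    return p[0] * FX_R / p[2] + CX_R, p[1] * FY_R / p[2] + CY_R

In Max this per-pixel warp is jit.expr/shader territory rather than a single srcdim crop, which is why the crop-and-scale trick is only ever approximately right.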

