Kinect depth data

Jan 19, 2011 at 2:09pm

Hi all,

I am using the freenect object by Jean-Marc Pelletier to combine depth data and camera video into a textured mesh. However, the video texture doesn't line up with the actual location of whoever is in front of the Kinect. I'm guessing it has something to do with the shadow from the infrared projector. Anybody have ideas on how I could clean this data up?

Regards,
Edwin

#54473
Jan 20, 2011 at 5:04am

Yep… just throw a jit.rota on the camera feed and knock it about -33 in the x direction, I think, using the offset attribute.

You can check out my example patch here, I think I got that fixed: http://blairneal.com/blog/jit-freenect-examples/
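For anyone following along outside Max: a constant x offset on the camera feed amounts to shifting the RGB frame sideways by a fixed number of pixels before texturing. A minimal NumPy sketch of that idea (the -33 is just the value mentioned above, not a universal constant; it varies per device):

```python
import numpy as np

def shift_rgb(rgb, dx=-33):
    """Shift an RGB frame horizontally by dx pixels so it lines up
    better with the depth map. Negative dx shifts content left;
    vacated columns are filled with zeros (black)."""
    shifted = np.zeros_like(rgb)
    if dx < 0:
        shifted[:, :dx] = rgb[:, -dx:]
    elif dx > 0:
        shifted[:, dx:] = rgb[:, :-dx]
    else:
        shifted = rgb.copy()
    return shifted
```

As the next post points out, a pure shift only moves the misalignment around; it can't fix it on both sides at once.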

#196211
Jan 20, 2011 at 8:58am

Thanks for the reply laserpilot.

I checked out your patch, it’s almost identical to mine! :)

Changing the offset only moves the problem around: when I shift it to one side I get 'blind spots' on the other side, and vice versa. Sitting in front of the camera, my right hand seems to 'fit' in the NURBS mesh, but the left one has part of the background in it (I've mirrored my camera inputs, by the way).

Is anyone else experimenting with freenect and jit.gl stuff?

#196212
Jan 20, 2011 at 10:11am

Offset is not enough; the image needs to be scaled as well.
I'm guessing these settings differ from Kinect to Kinect…

To align the video to the depth input, I send srcdimstart + srcdimend messages to a jit.matrix with these settings:

– Pasted Max Patch –
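For readers outside Max: srcdimstart/srcdimend effectively crop the source matrix before it gets stretched into the destination, so the equivalent operation is a crop followed by a resize. A NumPy sketch of that crop-and-scale registration (the crop box below is a hypothetical placeholder, not Mathieu's actual values, which are in the patch above):

```python
import numpy as np

def register_rgb_to_depth(rgb, crop=(8, 6, 616, 466), out_size=(640, 480)):
    """Crop the RGB frame to the region overlapping the depth
    camera's view, then stretch it back to full size -- the same
    idea as sending srcdimstart/srcdimend to a jit.matrix.
    crop is (x0, y0, x1, y1); out_size is (width, height)."""
    x0, y0, x1, y1 = crop
    cropped = rgb[y0:y1, x0:x1]
    # Nearest-neighbour resize back to out_size.
    w, h = out_size
    ys = np.arange(h) * cropped.shape[0] // h
    xs = np.arange(w) * cropped.shape[1] // w
    return cropped[ys][:, xs]
```

Because each Kinect's cameras are mounted slightly differently, the crop values generally need tuning per device, as noted above.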

Mathieu

#196213
Jan 21, 2011 at 3:22pm

Thanks Mathieu, I’ll give it a try!

#196214
Jan 22, 2011 at 9:48am

Check this out for the full technical explanation: http://www.ros.org/wiki/kinect_calibration/technical
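The page linked above covers the full model: each depth pixel is back-projected to a 3D point using the depth camera's intrinsics, transformed by the stereo extrinsics (R, t) between the two cameras, and reprojected with the RGB camera's intrinsics. A minimal sketch of that pipeline, with made-up intrinsic/extrinsic values that you would replace with your own calibration:

```python
import numpy as np

# Hypothetical intrinsics/extrinsics -- replace with calibrated values.
FX_D, FY_D, CX_D, CY_D = 594.2, 591.0, 339.5, 242.7   # depth camera
FX_R, FY_R, CX_R, CY_R = 529.2, 525.6, 329.0, 267.5   # RGB camera
R = np.eye(3)                       # rotation depth->RGB (near identity)
t = np.array([0.025, 0.0, 0.0])     # ~2.5 cm baseline in metres (guess)

def depth_pixel_to_rgb_pixel(u, v, z):
    """Map one depth pixel (u, v) with metric depth z to RGB pixel
    coordinates: back-project -> rigid transform -> reproject."""
    # Back-project with the depth camera intrinsics.
    p = np.array([(u - CX_D) * z / FX_D, (v - CY_D) * z / FY_D, z])
    # Move the point into the RGB camera's frame.
    p = R @ p + t
    # Reproject with the RGB camera intrinsics.
    return (FX_R * p[0] / p[2] + CX_R, FY_R * p[1] / p[2] + CY_R)
```

This depth-dependent mapping also explains why a single offset/scale can't fix everything: near and far objects shift by different amounts, which is the parallax behind the 'blind spots' described earlier in the thread.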

#196215
