Kinect depth data

staxas:

Hi all,

I am using the freenect object by Jean-Marc Pelletier to combine depth data and camera video into a textured mesh. However, the video texture isn't properly aligned with the depth data, so it doesn't match the actual location of whoever is in front of the Kinect. I'm guessing it has something to do with the shadow from the infrared light. Does anybody have ideas on how I could clean this data up?

Regards,
Edwin

laserpilot:

Yep... just throw a jit.rota on the camera feed and knock it about -33 in the x direction, I think, using the offset command.

You can check out my example patch here; I think I got that fixed: http://blairneal.com/blog/jit-freenect-examples/
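In non-Max terms, that amounts to sliding the RGB frame sideways by a fixed pixel count before using it as a texture. A minimal Python/numpy sketch; the -33 is just laserpilot's ballpark figure and will differ per device:

import numpy as np

def shift_rgb(rgb, offset_x=-33):
    # Slide the RGB frame horizontally so it lines up better with the
    # depth map. np.roll wraps pixels around the edge; a real patch
    # would pad with zeros instead, but this keeps the sketch short.
    return np.roll(rgb, offset_x, axis=1)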

staxas:

Thanks for the reply, laserpilot.

I checked out your patch; it's almost identical to mine! :)

Changing the offset only moves the problem around: when I shift it to one side I get 'blind spots' on the other side, and vice versa. When sitting in front of the camera my right hand seems to 'fit' in the NURBS mesh, but the left one has part of the background in it (I've mirrored my camera inputs, by the way).

Is anyone else experimenting with freenect and jit.gl stuff?

Mathieu Chamagne:

Offset is not enough; the image needs to be scaled as well. I guess these settings are different for each Kinect...

To align the video to the depth input, I use srcdimstart + srcdimend messages to a jit.matrix with these settings:

[Max patch attached; copy it and select New From Clipboard in Max.]

Mathieu
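Outside of Max, Mathieu's srcdimstart/srcdimend trick (together with usesrcdim 1) is equivalent to cropping a sub-region of the source frame and stretching it back to full size, which combines a scale and an offset in one step. A minimal Python/OpenCV sketch, assuming a 640x480 feed; the corner values here are placeholders, since every Kinect needs its own:

import cv2

def align_rgb_to_depth(rgb, x0=12, y0=8, x1=628, y1=472):
    # Emulate jit.matrix with usesrcdim 1 plus srcdimstart/srcdimend:
    # read only a sub-region of the source and stretch it back to the
    # full frame, scaling and offsetting the image at once.
    # The corners (x0, y0)-(x1, y1) are made-up example values; tune
    # them by eye or take them from calibration.
    h, w = rgb.shape[:2]
    crop = rgb[y0:y1, x0:x1]
    return cv2.resize(crop, (w, h), interpolation=cv2.INTER_LINEAR)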

staxas:

Thanks Mathieu, I'll give it a try!

dtr:

Check this out for the full technical explanation: http://www.ros.org/wiki/kinect_calibration/technical
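That page also explains why a single offset can never be exact: the IR and RGB cameras sit a few centimeters apart, so the required shift depends on each pixel's depth, and parts of the scene visible to one camera are occluded from the other, which is where the 'blind spots' come from. The proper fix is per-pixel registration: back-project each depth pixel to 3D, transform it into the RGB camera's frame, and project it back to pixels. A minimal numpy sketch with placeholder calibration numbers; real values have to come from calibrating your own device:

import numpy as np

# Placeholder intrinsics and extrinsics; substitute the numbers from
# calibrating your own Kinect (see the page linked above).
FX_D, FY_D, CX_D, CY_D = 580.0, 580.0, 320.0, 240.0  # depth camera
FX_R, FY_R, CX_R, CY_R = 525.0, 525.0, 320.0, 240.0  # RGB camera
R = np.eye(3)                     # depth-to-RGB rotation (near identity)
T = np.array([-0.025, 0.0, 0.0])  # ~2.5 cm baseline, also a placeholder

def depth_pixel_to_rgb_pixel(u, v, z):
    # Back-project depth pixel (u, v) with depth z (in meters) to a 3D
    # point, move it into the RGB camera's frame, and project it back
    # to RGB pixel coordinates.
    p = np.array([(u - CX_D) * z / FX_D, (v - CY_D) * z / FY_D, z])
    p = R @ p + T
    return FX_R * p[0] / p[2] + CX_R, FY_R * p[1] / p[2] + CY_R

Since the horizontal shift works out to roughly FX_R * T[0] / z plus a constant, it shrinks as depth grows, which is why one global offset can fit the hands or the background but not both.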