Shaders for the Kinect

Jan 12, 2011 at 12:40pm

I’m looking for a bit of direction regarding comparing depth maps obtained from the Kinect.

I’ve written a patch which saves both the RGB and depth output from the Kinect into matrixsets.

Now I would like to composite the RGB images using the depth maps.

I think what I want to do is compare the depth maps (call them d1 and d2), and if d1 > d2 use a pixel from rgb1; otherwise rgb2.
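As a sketch of that per-pixel rule (using numpy arrays as stand-ins for the Jitter matrices – the values here are made up for illustration):

```python
import numpy as np

# Toy stand-ins for the Kinect captures: d1/d2 are one-plane depth
# maps, rgb1/rgb2 the matching colour images (values are made up).
d1 = np.array([[1.0, 5.0], [3.0, 2.0]])
d2 = np.array([[2.0, 4.0], [3.5, 1.0]])
rgb1 = np.zeros((2, 2, 3)); rgb1[..., 0] = 255.0   # all red
rgb2 = np.zeros((2, 2, 3)); rgb2[..., 2] = 255.0   # all blue

# Per-pixel rule as stated above: where d1 > d2 take rgb1, else rgb2.
mask = d1 > d2
composite = np.where(mask[..., None], rgb1, rgb2)
```

One caveat: in a Kinect depth map, smaller values are nearer the camera, so a "nearest pixel wins" composite would actually key on d1 < d2.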

I’m assuming that shaders are the way to go here (are they?). I’ve just been through the relevant tutorials and am slightly wiser, but I’m not sure whether I need to write my own shader from scratch or can just use existing ones.

The depth images are 1 plane as opposed to 4 – does this mean that the maths shaders (such as gtep) won’t work off the shelf? I can’t seem to get them to work anyway.

As you can no doubt tell I’m just feeling my way through this stuff so any direction would be very useful!

Thanks

#54371
Jan 12, 2011 at 2:09pm

vade made a nice shader for mesh deformation

http://cycling74.com/forums/topic.php?id=17837

It’s a pity that the attachment is lost.

Perhaps Cycling will make it available again.
It would also be great for the new tools section.

best,
d.

#195744
Jan 12, 2011 at 5:02pm

Thanks – maybe I didn’t explain myself well, but that wasn’t quite what I was after.
Perhaps a patch might explain it more clearly.

This allows you to isolate a depth range (which can be used to carry out background subtraction) and save the RGB output and depth data into one of two (pairs of) video buffers – jit.matrixsets.

Then it carries out a series of matrix operations using Jitter shaders to work out which pixels are nearest to the camera. It basically compares the depth maps, makes two mask matrices from the lowest-value depth info, and multiplies each mask by the original RGB image.
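The compare-then-mask-then-multiply step might be sketched like this outside of Jitter (a numpy stand-in; the function name is mine, not a Jitter object):

```python
import numpy as np

def composite_nearest(d1, d2, rgb1, rgb2):
    """Per pixel, keep the colour whose depth map holds the lower
    (nearer-to-camera) value."""
    mask1 = (d1 <= d2).astype(np.float32)   # 1.0 where camera 1 is nearer
    mask2 = 1.0 - mask1                     # complementary mask
    # Multiply each colour image by its mask and sum the two halves.
    return rgb1 * mask1[..., None] + rgb2 * mask2[..., None]
```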

It kind of works.

Problems I’m having are:

1) The background subtraction is very rough – I haven’t managed to dilate the depth map yet.
2) I guess I need to calibrate the camera somehow to get the RGB and depth-map images to line up.
3) The frame rate is quite low.
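On problem 1, one cheap way to fatten up a rough mask is a binary dilation. A minimal 3×3 dilation sketch in numpy (a stand-in for a dedicated erode/dilate stage, not an existing Jitter object):

```python
import numpy as np

def dilate3x3(mask):
    """Binary 3x3 dilation: a pixel becomes True if any pixel in its
    3x3 neighbourhood is True."""
    h, w = mask.shape
    p = np.pad(mask, 1)               # pad with False around the border
    out = np.zeros_like(mask)
    for dy in (0, 1, 2):              # OR together the 9 shifted copies
        for dx in (0, 1, 2):
            out |= p[dy:dy + h, dx:dx + w]
    return out
```

Running this once or twice on the depth mask before the multiply step should close small holes at the silhouette edges.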

Anyone got any ideas?

(Max patch pasted here – not reproduced.)
#195745
Jan 12, 2011 at 6:55pm

Apply the two RGB matrices as textures to two deformed jit.gl.mesh objects and let OpenGL’s occlusion algorithm do the rest?

#195746
Jan 13, 2011 at 12:19am

Sounds like a plan. I need more info on the mesh object than I’ve been able to find in the docs.

What do I need to do to the depth image before I feed it into the jit.gl.mesh?

Thanks

#195747
Jan 13, 2011 at 7:40am

If the depth image is one-plane float32, you can simply use it as the z plane of the vertex matrix input to the mesh.
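In numpy terms, that might look like the following sketch (the function name and the [-1, 1] normalization are my choices, just to illustrate the idea): build a 3-plane vertex matrix whose x/y planes span a regular grid and whose z plane is the depth image itself.

```python
import numpy as np

def depth_to_vertices(depth):
    """Turn a one-plane float32 depth image (h x w) into an
    (h x w x 3) vertex grid: x/y on a normalized grid, z = depth."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    xs = xs / (w - 1) * 2.0 - 1.0     # normalize x to [-1, 1]
    ys = ys / (h - 1) * 2.0 - 1.0     # normalize y to [-1, 1]
    return np.dstack([xs, ys, depth.astype(np.float32)])
```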

#195748
Jan 13, 2011 at 5:33pm

It seems like what you are trying to do isn’t mesh deformation but rather keying based on depth. I’d recommend checking out the alphaglue and alphablend shaders together. You can find helper patches for these in examples/jitter-examples/render/slab-helpers.
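The general shape of depth keying – derive an alpha channel from a depth range, then blend foreground over background – might look like this in numpy (an illustration of the idea only; the actual Jitter shaders and their parameters are in the slab-helpers examples):

```python
import numpy as np

def depth_key(rgb_fg, depth, rgb_bg, near, far):
    """Hard-key the foreground wherever its depth falls in [near, far],
    then alpha-blend it over the background."""
    alpha = ((depth >= near) & (depth <= far)).astype(np.float32)
    a = alpha[..., None]              # broadcast over colour channels
    return rgb_fg * a + rgb_bg * (1.0 - a)
```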

#195749
