Forums > Jitter

Shaders for the Kinect

Jan 12 2011 | 12:40 pm

I’m looking for a bit of direction regarding the comparison of depth maps obtained from the Kinect.

I’ve written a patch which saves both the RGB and depth output from the Kinect into matrixsets.

Now I would like to composite the RGB images using the depth maps.

I think what I want to do is compare the depth maps (call them d1 and d2): if d1 > d2 I want to use the pixel from rgb1, otherwise the pixel from rgb2.
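Just to make sure I'm describing this right, here's the per-pixel logic in plain Python (toy data and function names are mine, not Jitter objects – in the patch this would happen per matrix cell or per fragment in a shader):

```python
def composite_by_depth(d1, d2, rgb1, rgb2):
    """For each pixel, keep the rgb1 pixel where d1 > d2, otherwise the rgb2 pixel."""
    out = []
    for i in range(len(d1)):
        row = []
        for j in range(len(d1[0])):
            row.append(rgb1[i][j] if d1[i][j] > d2[i][j] else rgb2[i][j])
        out.append(row)
    return out

# Tiny 2x2 example: depth values plus RGB tuples (red vs blue test images).
d1 = [[5, 1], [3, 9]]
d2 = [[2, 4], [3, 8]]
rgb1 = [[(255, 0, 0)] * 2 for _ in range(2)]   # all red
rgb2 = [[(0, 0, 255)] * 2 for _ in range(2)]   # all blue

print(composite_by_depth(d1, d2, rgb1, rgb2))
```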

I’m assuming that shaders are the way to go here (are they?). I’ve just been through the relevant tutorials and am slightly wiser, but I’m not sure whether I need to write my own shader from scratch or can just use existing ones.

The depth images are 1-plane as opposed to 4-plane – does this mean that the maths shaders (such as gtep) won’t work off the shelf? I can’t seem to get them to work anyway.

As you can no doubt tell I’m just feeling my way through this stuff so any direction would be very useful!


Jan 12 2011 | 2:09 pm

vade made a nice shader for mesh deformation

It’s a pity that the attachment is lost.

Perhaps Cycling ’74 will make it available again.
It would also be great for the new Tools section.


Jan 12 2011 | 5:02 pm

Thanks – maybe I didn’t explain myself well, but that wasn’t quite what I was after.
Perhaps a patch will explain it more clearly:

This allows you to isolate a depth range (which can be used to carry out background subtraction) and save the RGB output and depth data into one of two (pairs of) video buffers – jit.matrixsets.

Then it carries out a series of matrix operations using Jitter shaders to work out which pixels are nearest to the camera. It basically compares the depth maps, makes two mask matrices from the lowest-value (nearest) depth info, and multiplies each mask by the corresponding RGB image.
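For clarity, the mask-and-multiply chain sketched in plain Python (function names are mine, not shader names; each step corresponds roughly to one slab in the patch – a compare, two multiplies, and an add):

```python
def depth_masks(d1, d2):
    """mask1 is 1 where d1 holds the lower (nearer) depth value; mask2 is its inverse."""
    mask1 = [[1 if a <= b else 0 for a, b in zip(r1, r2)] for r1, r2 in zip(d1, d2)]
    mask2 = [[1 - m for m in row] for row in mask1]
    return mask1, mask2

def apply_mask(mask, rgb):
    """Multiply every RGB pixel by its 0/1 mask value."""
    return [[tuple(m * c for c in px) for m, px in zip(mr, pr)]
            for mr, pr in zip(mask, rgb)]

def add_images(a, b):
    """Per-channel sum of two masked images."""
    return [[tuple(x + y for x, y in zip(pa, pb)) for pa, pb in zip(ra, rb)]
            for ra, rb in zip(a, b)]

# One-row example: pixel 0 is nearer in d1, pixel 1 is nearer in d2.
m1, m2 = depth_masks([[1, 5]], [[2, 3]])
out = add_images(apply_mask(m1, [[(10, 10, 10), (10, 10, 10)]]),
                 apply_mask(m2, [[(7, 7, 7), (7, 7, 7)]]))
print(out)
```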

It kind of works.

Problems I’m having are:

1) The background subtraction is very rough – I haven’t managed to dilate the depth map yet.
2) I guess I need to calibrate the camera somehow to get the RGB and depth-map images to line up.
3) The frame rate is quite low.

Anyone got any ideas?

[Pasted Max patch]

Jan 12 2011 | 6:55 pm

Apply the two RGB matrices as textures to two deformed objects and let OpenGL’s occlusion algorithm do the rest?

Jan 13 2011 | 12:19 am

Sounds like a plan. I need more info on the mesh object than I’ve found in the docs.

What do I need to do to the depth image before I feed it into the mesh?


Jan 13 2011 | 7:40 am

If the depth image is a one-plane float32 matrix, you can simply use it as the z plane of the vertex matrix input to the mesh.
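In other words, the mesh wants a 3-plane vertex matrix where x and y come from the grid position and z comes straight from the depth value. A rough plain-Python sketch of that packing (toy sizes, my own function name and scaling – tweak to taste):

```python
def depth_to_vertices(depth, scale=1.0):
    """Turn a 1-plane depth image into a grid of (x, y, z) vertices.

    x and y are the cell's grid position mapped to [-1, 1];
    z is the depth value itself (optionally scaled)."""
    rows, cols = len(depth), len(depth[0])
    verts = []
    for i in range(rows):
        row = []
        for j in range(cols):
            x = j / (cols - 1) * 2.0 - 1.0   # column -> [-1, 1]
            y = i / (rows - 1) * 2.0 - 1.0   # row -> [-1, 1]
            z = depth[i][j] * scale
            row.append((x, y, z))
        verts.append(row)
    return verts

print(depth_to_vertices([[0, 1], [2, 3]]))
```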

Jan 13 2011 | 5:33 pm

It seems like what you are trying to do isn’t mesh deformation but rather keying based on depth. I’d recommend checking out the alphaglue and alphablend shaders together. You can find helper patches for these in examples/jitter-examples/render/slab-helpers.
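To spell out what depth keying plus an alpha blend does (this is my own plain-Python paraphrase, not the actual shader code – the function names and the hard 0/1 alpha are assumptions for illustration): build an alpha channel from the depth range, then do the standard out = alpha·fg + (1 − alpha)·bg per channel.

```python
def depth_key_alpha(depth, near, far):
    """Alpha is 1.0 for pixels inside the chosen depth range, 0.0 outside."""
    return [[1.0 if near <= d <= far else 0.0 for d in row] for row in depth]

def alpha_blend(alpha, fg, bg):
    """Standard alpha blend, per channel: a*fg + (1 - a)*bg."""
    return [[tuple(a * f + (1 - a) * b for f, b in zip(fp, bp))
             for a, fp, bp in zip(ar, fr, br)]
            for ar, fr, br in zip(alpha, fg, bg)]

# One-row example: only the middle pixel falls inside the depth range [2, 6].
a = depth_key_alpha([[1, 5, 9]], 2, 6)
fg = [[(10, 10, 10), (10, 10, 10), (10, 10, 10)]]
bg = [[(2, 2, 2), (2, 2, 2), (2, 2, 2)]]
print(alpha_blend(a, fg, bg))
```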

