Shaders for the Kinect
Jan 12, 2011 at 12:40pm
I’m looking for a bit of direction regarding the comparison of depth maps obtained from the Kinect.
I’ve written a patch which saves both the RGB and depth output from the Kinect into matrixsets.
Now I would like to composite the rgb image by using the depth maps.
I think what I want to do is compare the depth maps (call them d1 and d2): if d1 > d2 I want to use the pixel from rgb1; if not, then from rgb2.
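Outside of Jitter, that per-pixel rule is just a conditional select. A minimal NumPy sketch of the same logic, with made-up placeholder arrays standing in for the Kinect data:

```python
import numpy as np

# Hypothetical data standing in for two Kinect captures:
# d1/d2 are single-plane depth maps, rgb1/rgb2 are 3-plane colour images.
h, w = 4, 4
rng = np.random.default_rng(0)
d1 = rng.random((h, w)).astype(np.float32)
d2 = rng.random((h, w)).astype(np.float32)
rgb1 = rng.integers(0, 256, (h, w, 3), dtype=np.uint8)
rgb2 = rng.integers(0, 256, (h, w, 3), dtype=np.uint8)

# Per-pixel selection: where d1 > d2, take the pixel from rgb1, else rgb2.
mask = d1 > d2                        # boolean, shape (h, w)
out = np.where(mask[..., None], rgb1, rgb2)
```

A shader doing the same thing would evaluate this comparison per fragment instead of per matrix cell.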
I’m assuming that shaders are the way to go here (are they?). I’ve just been through the relevant tutorials and am slightly wiser, but I’m not sure whether I need to write my own shader from scratch or can just use existing ones.
The depth images are 1 plane as opposed to 4 – does this mean that the maths shaders (such as gtep) won’t work off the shelf? I can’t seem to get them to work anyway.
As you can no doubt tell I’m just feeling my way through this stuff so any direction would be very useful!
Jan 12, 2011 at 2:09pm
vade made a nice shader for mesh deformation.
It’s a pity that the attachment is lost.
Perhaps Cycling ’74 will make it available again.
Jan 12, 2011 at 5:02pm
Thanks – maybe I didn’t explain myself well, but that wasn’t quite what I was after.
This allows you to isolate a depth range (which can be used to carry out background subtraction) and save the RGB output and depth data into one of two (pairs of) video buffers – jit.matrixsets.
Then it carries out a series of matrix operations using Jitter shaders to work out which pixels are nearest to the camera. It basically compares the depth maps, makes two mask matrices from the lowest-value depth info, and multiplies each mask by the corresponding original RGB image.
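That compare-and-mask pipeline can be sketched outside Jitter as well. A minimal NumPy version of the same maths, using made-up float data rather than the patch’s actual matrices:

```python
import numpy as np

# Sketch of the mask-and-multiply compositing described above, assuming
# float images in [0, 1] (hypothetical data, not the actual patch output).
h, w = 4, 4
rng = np.random.default_rng(1)
d1 = rng.random((h, w), dtype=np.float32)
d2 = rng.random((h, w), dtype=np.float32)
rgb1 = rng.random((h, w, 3), dtype=np.float32)
rgb2 = rng.random((h, w, 3), dtype=np.float32)

# Two complementary binary masks: 1 where that camera's pixel is nearest
# (lowest depth value), 0 elsewhere.
mask1 = (d1 <= d2).astype(np.float32)[..., None]
mask2 = 1.0 - mask1

# Multiply each RGB image by its mask and sum; every output pixel then
# comes from exactly one of the two source images.
out = rgb1 * mask1 + rgb2 * mask2
```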
It kind of works.
Problems I’m having are:
1) The background subtraction is very rough – I haven’t managed to dilate the depth map yet.
Anyone got any ideas?
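On the dilation point: a depth-map dilate is just a local maximum filter, which fills small zero-valued holes with their nonzero neighbours. A generic sketch of that operation (plain NumPy, not a Jitter object; the function name is illustrative):

```python
import numpy as np

# 3x3 grey-level dilation built from shifted copies of the image:
# every pixel takes the maximum of its 3x3 neighbourhood.
def dilate3x3(depth):
    padded = np.pad(depth, 1, mode="edge")
    shifted = [padded[y:y + depth.shape[0], x:x + depth.shape[1]]
               for y in range(3) for x in range(3)]
    return np.max(shifted, axis=0)

# Toy depth map with dropout holes (zeros) around two valid readings.
d = np.array([[0., 5., 0.],
              [0., 0., 0.],
              [2., 0., 0.]], dtype=np.float32)
d_dilated = dilate3x3(d)
```

Running it a few times in a row grows the valid regions further, at the cost of fattening silhouettes.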
– Pasted Max patch –
Jan 12, 2011 at 6:55pm
Apply the two RGB matrices as textures to two deformed jit.gl.mesh objects and let OpenGL’s occlusion algorithm do the rest?
Jan 13, 2011 at 12:19am
Sounds like a plan. I need more info on the mesh object than I can get from the docs I’ve found.
What do I need to do to the depth image before I feed it into the jit.gl.mesh?
Jan 13, 2011 at 7:40am
If the depth image is one plane of float32, you can simply use it as the z plane of the vertex matrix input to the mesh.
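In array terms, that step amounts to building a 3-plane (x, y, z) grid whose z plane is the depth map. A sketch under those assumptions (the [-1, 1] x/y range, scale parameter, and function name are made up for illustration, not jit.gl.mesh specifics):

```python
import numpy as np

# Turn a one-plane depth map into a 3-plane vertex grid (x, y, z),
# the general shape of a vertex matrix for a grid-style mesh.
def depth_to_vertices(depth, z_scale=1.0):
    h, w = depth.shape
    # Regular grid of x/y positions spanning [-1, 1] in each direction.
    ys, xs = np.mgrid[-1:1:h * 1j, -1:1:w * 1j].astype(np.float32)
    # Stack into shape (h, w, 3): one (x, y, z) vertex per depth pixel.
    return np.stack([xs, ys, depth * z_scale], axis=-1)

depth = np.zeros((4, 6), dtype=np.float32)   # flat placeholder depth map
v = depth_to_vertices(depth)
```

With a real depth map the z plane displaces the grid, and the RGB image applied as a texture rides on top of that relief.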
Jan 13, 2011 at 5:33pm
It seems like what you are trying to do isn’t mesh deformation but rather keying based on depth. I’d recommend checking out the alphaglue and alphablend shaders together. You can find helper patches for these in examples/jitter-examples/render/slab-helpers.
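For reference outside Max, that style of depth keying amounts to mixing by a per-pixel alpha rather than a hard mask. A hedged NumPy sketch (the function name, the softness parameter, and the [0, 1] float images are assumptions for illustration, not the behaviour of the actual alphaglue/alphablend shaders):

```python
import numpy as np

# Depth keying with a soft alpha: alpha -> 1 where camera 1 is clearly
# nearer (smaller depth), 0 where camera 2 is, with a smooth transition
# of width `soft` around equal depths.
def depth_key(rgb1, rgb2, d1, d2, soft=0.05):
    alpha = np.clip(0.5 + (d2 - d1) / (2.0 * soft), 0.0, 1.0)[..., None]
    return alpha * rgb1 + (1.0 - alpha) * rgb2

fg = np.ones((2, 2, 3), dtype=np.float32)    # placeholder "camera 1" image
bg = np.zeros((2, 2, 3), dtype=np.float32)   # placeholder "camera 2" image
near = np.zeros((2, 2), dtype=np.float32)    # camera 1 very close
far = np.ones((2, 2), dtype=np.float32)      # camera 2 far away
out = depth_key(fg, bg, near, far)           # fg wins everywhere here
```

The soft edge avoids the single-pixel flicker you get from a hard greater-than comparison on noisy Kinect depth.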