
OpenGL shapes created with camera tracking points

September 14, 2010 | 5:51 am

I’m attempting to put together something which maps depth via infrared camera tracking. My patch is getting a little unruly, so this video might explain it more simply… http://www.flickr.com/photos/kodamapixel/4982093309/

At the moment, I’m showing depth by laying 3 copies of the same video image on top of each other – just with different thresholds and colour. However, instead of using the binary video image, I’d like to create simple OpenGL shapes that loosely fit each colour.
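For what it's worth, the logic behind those three stacked layers boils down to quantizing brightness into bands – here it is sketched as plain JavaScript (e.g. for a [js] object; the function name and threshold list are just mine for illustration):

```javascript
// Quantize a luminance value (0..1) into depth bands, one per threshold.
// This mirrors stacking three thresholded copies of the same frame:
// the highest threshold a pixel clears decides its band (and colour).
// Thresholds are assumed to be sorted ascending.
function depthBand(lum, thresholds) {
  var band = 0;
  for (var i = 0; i < thresholds.length; i++) {
    if (lum >= thresholds[i]) band = i + 1;
  }
  return band;
}
```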

My knowledge of OpenGL is fairly (very) limited, and I’m having difficulty translating either cv.jit tracking points or the binary video into meaningful glvertex commands (or something similar).


September 16, 2010 | 5:01 am

A bit more information might be helpful with my question…

This isn’t the patch I’m working on in the above video, but a thought on how the GL shapes might be created with cv.jit. I’ve used the centre coordinate from [cv.jit.blobs.centroids] and the x, y coordinates spat out by [cv.jit.features] to create several triangle_strip shapes, thinking they might come together to form pizza-like slices (see the video above for the shapes I’m trying to create).
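In case it helps picture the idea, the ordering step is the part my patch fumbles – sorting the feature points by angle around the centroid before drawing. Here's that logic as a plain JavaScript sketch (function name and point format are my own; a triangle_fan via jit.gl.sketch might actually suit the pizza slices better than a triangle_strip):

```javascript
// Build an ordered "pizza slice" vertex list from a blob centroid and a
// loose cloud of feature points: sort the points by angle around the
// centroid, then emit centroid-first, fan style, closing back on the
// first ring point. Each point is a [x, y] pair.
function fanVertices(cx, cy, points) {
  var sorted = points.slice().sort(function (a, b) {
    return Math.atan2(a[1] - cy, a[0] - cx) - Math.atan2(b[1] - cy, b[0] - cx);
  });
  var verts = [[cx, cy]];                  // fan centre first
  for (var i = 0; i < sorted.length; i++) verts.push(sorted[i]);
  verts.push(sorted[0]);                   // close the fan
  return verts;
}
```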

Unfortunately it doesn’t even come close to what I’d like to achieve, but hopefully will trigger a few ideas out there. I’m using a very controlled lighting environment, so the threshold will probably need to be set fairly high to get an idea of where I’m at…

– Pasted Max Patch, click to expand. –

September 16, 2010 | 8:45 pm

I’m not entirely sure what your question is, and being no GL king I’m probably not the one to answer it… but it looks interesting, and I wondered if you had tried using your data to control a NURBS surface with jit.gl.nurbs. You can represent a 3D plane with depth, and you could map a texture onto it too. How you manage the control matrix is the tricky bit – I think you are using three levels of shade difference to sense depth?
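To sketch what I mean by managing the control matrix – each cell's brightness could become the z of a control point. This is purely illustrative JavaScript, not the actual jit.gl.nurbs API; the grid format and the depthScale parameter are assumptions on my part:

```javascript
// Turn a grid of brightness values (0..1) into [x, y, z] control points
// for something like a NURBS control matrix: x/y are spread over -1..1,
// z is scaled by brightness so brighter (nearer) regions bulge forward.
// grid is an array of rows; each row is an array of brightness values.
function brightnessToControlPoints(grid, depthScale) {
  var rows = grid.length, cols = grid[0].length, pts = [];
  for (var r = 0; r < rows; r++) {
    for (var c = 0; c < cols; c++) {
      var x = cols > 1 ? (c / (cols - 1)) * 2 - 1 : 0;
      var y = rows > 1 ? (r / (rows - 1)) * 2 - 1 : 0;
      pts.push([x, y, grid[r][c] * depthScale]);
    }
  }
  return pts;
}
```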


September 16, 2010 | 10:51 pm

Thanks for the suggestion, scatalogic. [jit.gl.nurbs] could be a great idea – I’ll see if I can wrangle one into the shapes I need.

Just to try and clarify my question, I’m trying to create something like a topographic map – http://mail.colonial.net/~hkaiter/imagextras/topographic-map.jpg – which gives a sense of depth but is really only made of 2D shapes.

The obvious difference with my project is that these shapes are not static – they will change in response to motion tracking. What I’m struggling with is how to get the motion tracking to create the shapes.

Thanks again for your response scatalogic, I’m struggling a bit with this one, so I really do appreciate any suggestions at all.


September 18, 2010 | 9:57 pm

hi there

i’ve been working on trying to track 4 IR points and then find the homography of the points (no luck yet). this is my question patch, but i think it might help you, as it takes points from cv.jit.track.draw and makes a 2D shape with them.
i’ll be asking a question about the homography very soon with this patch, but on the off chance anyone sees this and has done it already, please give me some pointers!
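for what it's worth, the maths i'm chasing is the four-point direct linear transform – here's a plain JavaScript sketch of it (nothing Jitter-specific; function names and point format are my own, e.g. mapping four tracked IR dots onto the corners of a known rectangle):

```javascript
// Direct Linear Transform: estimate the 3x3 homography mapping four
// source points onto four destination points. Returns the 8 unknowns
// [h0..h7] (h8 is fixed at 1), where
//   u = (h0*x + h1*y + h2) / (h6*x + h7*y + 1)
//   v = (h3*x + h4*y + h5) / (h6*x + h7*y + 1)
function homography4(src, dst) {
  var A = [], b = [];
  for (var i = 0; i < 4; i++) {
    var x = src[i][0], y = src[i][1], u = dst[i][0], v = dst[i][1];
    A.push([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.push(u);
    A.push([0, 0, 0, x, y, 1, -v * x, -v * y]); b.push(v);
  }
  return solve(A, b); // 8x8 linear solve below
}

// Gaussian elimination with partial pivoting (assumes A is nonsingular,
// i.e. neither quad is degenerate).
function solve(A, b) {
  var n = b.length;
  for (var col = 0; col < n; col++) {
    var piv = col;
    for (var r = col + 1; r < n; r++)
      if (Math.abs(A[r][col]) > Math.abs(A[piv][col])) piv = r;
    var t = A[col]; A[col] = A[piv]; A[piv] = t;
    t = b[col]; b[col] = b[piv]; b[piv] = t;
    for (var r2 = col + 1; r2 < n; r2++) {
      var f = A[r2][col] / A[col][col];
      for (var c = col; c < n; c++) A[r2][c] -= f * A[col][c];
      b[r2] -= f * b[col];
    }
  }
  var h = new Array(n);
  for (var row = n - 1; row >= 0; row--) {
    var s = b[row];
    for (var c2 = row + 1; c2 < n; c2++) s -= A[row][c2] * h[c2];
    h[row] = s / A[row][row];
  }
  return h;
}
```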

– Pasted Max Patch, click to expand. –

thanks ben


September 19, 2010 | 8:21 am

Thanks strimbob, that’s something I’d never thought of. I think I’m probably inching closer to a solution after playing around with your patch, but getting the track points to initialise in a meaningful position is the tricky part.

At the moment I’m just using [cv.jit.blobs.bounds] to get corner points. I don’t know if this will help you at all, but thought it worth posting anyway…
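The corner-point step is trivial but maybe worth spelling out – assuming the bounding box arrives as left, top, right, bottom in pixels (function name and corner ordering are my own choice):

```javascript
// Expand a blob bounding box into the four corner points used to seed
// the trackers, ordered clockwise from the top-left corner.
function boundsToCorners(left, top, right, bottom) {
  return [[left, top], [right, top], [right, bottom], [left, bottom]];
}
```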

– Pasted Max Patch, click to expand. –

Thanks again!


September 20, 2010 | 8:07 pm

without looking at your patches, and not knowing much about cv, my guess is the cv points are in screen coordinates (meaning 0 to screen width and 0 to screen height).

you can use jit.gl.render’s screentoworld message to transform these coordinates into opengl world coordinates.
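roughly, the transform does something like this for points on the z=0 plane – the real screentoworld message uses the renderer's actual camera state, so this is only a sketch assuming the default straight-on view (names are mine):

```javascript
// Map pixel coordinates (origin top-left, y down) to GL-style world
// coordinates (origin centre, y up): vertical range -1..1, horizontal
// range scaled by the window's aspect ratio. Returns [x, y, z] on the
// z=0 plane.
function screenToWorld(px, py, width, height) {
  var aspect = width / height;
  var x = (px / width * 2 - 1) * aspect;
  var y = 1 - py / height * 2;  // flip: screen y grows downward
  return [x, y, 0];
}
```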

