Looking for a slightly different approach to motion tracking.

geddonn:

Hi there, I'm beginning preparation for an installation that involves tracking the motion of thin lines of video tape hanging from the ceiling.
I'm using a Kinect with the jit.freenect.grab object, only utilising the RGB outlet at the moment.

My problem is that the installation will also invite people to enter and walk around in it.
I would ideally like the Kinect to ignore people (maybe by ignoring blobs above a certain size?) and only focus on the movement of the light reflecting off the tape.

I've tried a few background subtraction techniques and have messed around a bit with cv.jit.blobs, but I'm having trouble ignoring people while also ignoring small pixel-noise artifacts.

I'm just looking for a nudge in the right direction.
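
To spell out the kind of background subtraction I've been trying (a rough sketch of the logic in Processing-style Java, just to illustrate; the real thing is Jitter objects, and the threshold value is a placeholder):

// Background subtraction in its simplest form: compare the current frame
// against a reference frame grabbed while the scene was empty, and keep
// only the pixels that have changed by more than a threshold.
// Assumes 'frame' and 'background' are the same size.
PImage diffAgainstBackground(PImage frame, PImage background, float thresh) {
  PImage changed = createImage(frame.width, frame.height, RGB);
  frame.loadPixels();
  background.loadPixels();
  changed.loadPixels();
  for (int i = 0; i < frame.pixels.length; i++) {
    float d = abs(brightness(frame.pixels[i]) - brightness(background.pixels[i]));
    changed.pixels[i] = d > thresh ? color(255) : color(0);  // white = movement
  }
  changed.updatePixels();
  return changed;
}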

dtr:

Here's one speculative approach: since you're using the Kinect, you can have it analyse the scene and look for people ('users' in Kinect terms). No calibration is needed for that, so people can walk freely in and out.

There's a function in the Kinect libraries called GetUserPixels or something similar. It returns the pixels of the image that 'belong' to a user. Subtract those from the image and you've removed the people from your scene; do your tracking on what's left.

Max externals for the Kinect, and apps like Synapse and OSCeleton, don't have this functionality (AFAIK). If you're, like me, not much of a code ninja, I'd advise looking into SimpleOpenNI for Processing. It exposes the OpenNI and NITE libraries in Processing, which is much easier than doing the whole linking and compiling business in Xcode, Visual Studio and the like.
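
Roughly what that looks like as a SimpleOpenNI sketch (a minimal illustration, from memory; method names differ a bit between SimpleOpenNI versions, e.g. in older releases the per-pixel user mask is getUsersPixels() rather than userMap()):

import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  context.enableDepth();
  context.enableRGB();
  context.enableUser();                       // turn on user (people) detection
  context.alternativeViewPointDepthToImage(); // align depth/user map with the RGB image, if your version has it
}

void draw() {
  context.update();

  PImage rgb = context.rgbImage();
  int[] userMap = context.userMap();   // 0 = background, >0 = a detected user

  rgb.loadPixels();
  for (int i = 0; i < userMap.length && i < rgb.pixels.length; i++) {
    if (userMap[i] != 0) {
      rgb.pixels[i] = color(0);        // black out pixels that belong to a person
    }
  }
  rgb.updatePixels();

  image(rgb, 0, 0);                    // people removed; do the tape tracking on this
}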

geddonn:

Thanks very much for your suggestion. I've steered away from anything involving Xcode and compiling. I'd like to eventually learn that stuff but for the purposes of this project I'm gonna keep it simple.
Looking into SimpleOpenNI now!

Any tips on tracking thin vertical lines in Jitter?
cv.jit.blobs doesn't seem to be ideal for this.

Thanks again

dtr:

Well, an entirely different approach would be to coat them with IR-reflective paint or something. Use an IR-sensitive camera (cheap webcams are; you just need to remove their IR-cut filter and replace it with a filter that only lets IR through). Expose the threads to IR light and there you go...

This doesn't tell you how to track their shape, though. Not sure about that.

BTW, a Kinect projects a whole lot of IR dots from its projector and has an IR camera built in. Many apps, including the jit.freenect.grab external, let you read out the IR image. You'd want to make sure your tracked threads are a lot more reflective than the rest of the scene so you can easily filter them out.
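
To make that last step concrete: once the IR frame is available as an image, isolating the highly reflective threads is just a brightness threshold. A minimal drop-in function in Processing terms (in Max the equivalent would be a jit.op @op > against a threshold value):

// Keep only pixels brighter than 'thresh'. With the threads lit by IR and
// more reflective than everything else, they should be all that survives.
PImage isolateThreads(PImage ir, int thresh) {
  PImage mask = createImage(ir.width, ir.height, RGB);
  ir.loadPixels();
  mask.loadPixels();
  for (int i = 0; i < ir.pixels.length; i++) {
    mask.pixels[i] = brightness(ir.pixels[i]) > thresh ? color(255) : color(0);
  }
  mask.updatePixels();
  return mask;
}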

geddonn:

That's a good idea. The project is centered around the various qualities of the tape though so painting it might make it look/feel different.

I think I'm going to stick with cv.jit as I can't get my head around the 'Processing' application.

I've been experimenting further with cv.jit.blobs.centroids and am wondering how easy it would be to use plane 3 (centroid size) to exclude centroids over a certain size.
Am I right in thinking this would be a job for either jit.expr or jit.pix?
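
Just to be explicit about the logic I'm after, written out as Java-style code rather than a patch (I'm assuming the three planes are x, y and then the blob's mass/size; I'm guessing jit.expr could do the same per cell by multiplying each plane by a comparison against the mass plane):

// Filter a centroid list by size: keep only centroids whose mass (pixel count)
// is below 'maxMass', so large blobs such as people are ignored.
// 'centroids' is assumed to be a flat array of {x, y, mass} triplets,
// i.e. the same three planes cv.jit.blobs.centroids outputs.
ArrayList<PVector> smallCentroids(float[] centroids, float maxMass) {
  ArrayList<PVector> kept = new ArrayList<PVector>();
  for (int i = 0; i + 2 < centroids.length; i += 3) {
    float x = centroids[i];
    float y = centroids[i + 1];
    float mass = centroids[i + 2];
    if (mass < maxMass) {
      kept.add(new PVector(x, y, mass));   // small blob: probably the tape, keep it
    }
  }
  return kept;
}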

Thanks