practical advice on projecting onto moving bodies/blobs

Dec 14, 2011 at 12:34am

So I have a top-down installation in which people move around a space, and that comes through the Kinect as depth info. I have the blob tracking working well enough, but I’m having a hell of a time getting the projection to land back onto the participants with accuracy.
Does anyone have experience with this? I’m going to take another stab at it tomorrow. I figure I’ll mark off the extents of the Kinect’s vision with tape, then try to project onto that, and try mapping it out if it’s still not right. Bleh.

Dec 14, 2011 at 10:54am

are you converting the kinect’s depth map to an orthogonal system? the raw output is distorted (what you get is the distance to the camera, not 3d coordinates). have a look at this:

once you have that it shouldn’t be too hard to get the projections right.
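For anyone following along, here’s a minimal sketch of that conversion (in Python rather than a Max patch, just to show the math), treating each depth-map value as a distance along the pixel’s viewing ray, as described above. The intrinsics are assumed placeholder values, not calibrated ones:

```python
import math

# Assumed Kinect v1 depth-camera intrinsics, for illustration only;
# calibrate your own device for real accuracy.
FX = FY = 594.0          # focal length in pixels
CX, CY = 320.0, 240.0    # principal point for a 640x480 depth map

def depth_to_xyz(u, v, dist_mm):
    """Convert one depth-map sample (distance along the viewing ray)
    at pixel (u, v) into orthogonal 3D coordinates in metres."""
    # direction of the ray through this pixel (pinhole model)
    rx = (u - CX) / FX
    ry = (v - CY) / FY
    norm = math.sqrt(rx * rx + ry * ry + 1.0)
    d = dist_mm / 1000.0   # millimetres to metres
    return (d * rx / norm, d * ry / norm, d / norm)
```

Run that over the whole depth matrix and you get an undistorted point cloud to track in.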

Dec 15, 2011 at 1:13am

I’m just using the depth map to see the presence of a body and convert that to a blob; I don’t think I need to smooth it out, although it might help. I’m having some success, but I can’t quite map the projection onto the participants. Is there such a thing as logarithmic scaling?

Dec 15, 2011 at 4:32pm

no way for me to be sure, as that needs to be verified on your actual setup, but using the raw depth map as-is probably already throws off the accuracy of your system. do you understand the difference between the depth map and a perspective-corrected 3D mesh/pointcloud of your space?

The depth map gives you the distance of a ‘pixel’ to the camera. Think of it as rays shooting out of the camera lens. Plot that out in a 3D mesh/point cloud and you’ll see that a straight wall right in front of the camera comes out curved. Same if your camera is hanging from the ceiling and looking straight down at the floor.
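A quick numeric sketch of that curved-wall effect (Python for illustration; the focal length is an assumed placeholder):

```python
import math

# A flat wall 2 m in front of the camera, sampled at the image centre
# and at two pixels progressively nearer the edge of the frame.
FX, CX = 594.0, 320.0    # assumed focal length / principal point, in pixels
WALL_Z = 2.0             # metres

def ray_distance(u):
    """Distance along the viewing ray through pixel u to the flat wall."""
    theta = math.atan((u - CX) / FX)   # angle of the ray off the optical axis
    return WALL_Z / math.cos(theta)

samples = [ray_distance(u) for u in (320, 480, 620)]
# the centre pixel reads 2 m, but the reading grows toward the edges,
# which is why the flat wall plots as a curve in the raw depth map
```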

Now perhaps you could work with that but you get in trouble when trying to project graphics on top of it. The coordinate systems don’t match. The depth map is distorted because of the camera perspective. But your graphics are in orthogonal 2d or 3d.

This is fixed by first converting the depth map to orthogonal 3d and then performing your tracking processes. This ‘space’ will be much easier to match with your projected graphics.
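As a rough illustration of that matching step: once your blobs live in orthogonal floor coordinates, a simple calibrated linear map can take a blob centre into projector pixels. Python sketch; all the extents and the resolution below are hypothetical and would come from measuring your own setup:

```python
# Hypothetical calibration: the patch of floor the projector covers,
# in the same orthogonal metres the tracking now runs in.
FLOOR_MIN_X, FLOOR_MAX_X = -2.0, 2.0
FLOOR_MIN_Y, FLOOR_MAX_Y = -1.5, 1.5
PROJ_W, PROJ_H = 1280, 800             # projector resolution

def floor_to_projector(x, y):
    """Map an orthogonal floor coordinate (metres) to a projector pixel."""
    px = (x - FLOOR_MIN_X) / (FLOOR_MAX_X - FLOOR_MIN_X) * PROJ_W
    py = (y - FLOOR_MIN_Y) / (FLOOR_MAX_Y - FLOOR_MIN_Y) * PROJ_H
    return px, py
```

With the depth data undistorted first, this stage is just a scale and offset, which is the point of doing the conversion before tracking.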

btw, z.scale does logarithmic scaling (scale in max5 does too, but it’s a bit weird; apparently maintained for legacy compatibility)
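To show the general idea of logarithmic scaling (this is an illustration of the concept, not z.scale’s exact formula):

```python
import math

def log_scale(x, in_lo, in_hi, out_lo, out_hi):
    """Map x from [in_lo, in_hi] to [out_lo, out_hi] along a logarithmic
    curve rather than a straight line."""
    t = (x - in_lo) / (in_hi - in_lo)          # normalise to 0..1
    t = math.log1p(9.0 * t) / math.log(10.0)   # log curve through 0 and 1
    return out_lo + t * (out_hi - out_lo)
```

The endpoints map exactly like a linear scale, but values in between are pushed up the output range, which is handy when a sensor’s response isn’t linear.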

Dec 16, 2011 at 7:28pm

Ah, thanks. Just started messing with it and yeah it looks much better.
Question… why is it outputting a 4-plane matrix? Hard to work with for blobs :( I’m very new to Max, and I swear I do the tutorials, but I can’t seem to get it down to a single-plane, black-and-white output based on range. Also, why do I always get “mismatch” errors when I’m messing with matrices?! It says one object outputs a matrix and the other inputs one, but they don’t like each other.

Dec 16, 2011 at 8:30pm

Hm I’ve managed to get something decent out of it for blob tracking, but there are strange shadow outlines and it has double-vision!

Dec 17, 2011 at 9:40pm

difficult to comment on that without actual media and/or patch…

Dec 18, 2011 at 7:50pm
– Pasted Max Patch. –
Mar 15, 2012 at 2:53pm

up!

