Kinect or carpet switches?

alexandra Verhaest:

Hi all,

I am working on an interactive video installation showing a scene of a window view. I would like to have a second video of a reflection of someone in that window, triggered by the position of the user within a defined area, so that it would appear as if the reflection were the user's own reflection in the window. (Sorry, crappy explanation.)

So, I was looking into using the depth map info of a Kinect. But after some research, I understood that it is a bit buggy and quite CPU-dependent.
(See dtr's post saying: "jit.freenect.grab is buggy. has a memory leak (only shows after some hours of non-stop operation) and it loses connection under high cpu loads. it's also from pre-OpenNI times, ie. hacked and reverse engineered. of course it might still be useful to you, for example if you need the depth map directly in max.")

So I wanted to pick your brains before I spend 100 euro on the Kinect or on the carpet switches. Any good experiences with the depth map info in Max? Or would carpet switches and an Arduino be a safer option? Any recommendations on those? Other advice?

Thanks,

Alex

dtr:

That post of mine is from a very long time ago. jit.freenect.grab has been updated since, and there is now also the jit.openni external, which uses the official OpenNI libraries. The Windows version has been out for a while and the Mac one should come out any day now. Working with a Kinect directly in Max is now perfectly possible and reliable. Of course, there's always the option of having a separate program do the Kinect tracking and send its data to Max.
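For the separate-program route, the usual approach is to send the tracking data to Max as OSC over UDP and receive it with [udpreceive]. Here's a rough, hand-rolled C++ sketch of what the sending side involves; the address pattern "/user/1/pos", port 7400, and the coordinate values are placeholders, not from any particular tracker:

```cpp
// Minimal sketch (POSIX sockets, no OSC library): pack one OSC message
// carrying three floats and send it over UDP to Max, where
// [udpreceive 7400] -> [route /user/1/pos] would pick it up.
#include <arpa/inet.h>
#include <cstdint>
#include <cstring>
#include <string>
#include <sys/socket.h>
#include <unistd.h>
#include <vector>

// Append a null-terminated OSC string, padded to a multiple of 4 bytes.
static void oscString(std::vector<char>& buf, const std::string& s) {
    buf.insert(buf.end(), s.begin(), s.end());
    buf.push_back('\0');
    while (buf.size() % 4 != 0) buf.push_back('\0');
}

// Append a big-endian 32-bit float, as the OSC spec requires.
static void oscFloat(std::vector<char>& buf, float f) {
    uint32_t bits;
    std::memcpy(&bits, &f, 4);
    bits = htonl(bits);
    const char* p = reinterpret_cast<const char*>(&bits);
    buf.insert(buf.end(), p, p + 4);
}

int main() {
    std::vector<char> msg;
    oscString(msg, "/user/1/pos"); // address pattern (placeholder)
    oscString(msg, ",fff");        // type tags: three floats
    oscFloat(msg, 0.5f);           // x -- would come from the tracker
    oscFloat(msg, 1.2f);           // y
    oscFloat(msg, 2.0f);           // z

    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in dest{};
    dest.sin_family = AF_INET;
    dest.sin_port = htons(7400);   // must match [udpreceive 7400] in Max
    inet_pton(AF_INET, "127.0.0.1", &dest.sin_addr);
    sendto(sock, msg.data(), msg.size(), 0,
           reinterpret_cast<sockaddr*>(&dest), sizeof(dest));
    close(sock);
    return 0;
}
```

In practice you'd call the packing/sending part once per tracking frame instead of once in main; the point is just that the wire format is simple enough that no heavy dependency is needed.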

Whether the switches or the Kinect is more appropriate depends on other aspects as well. How large is the area you want to track? How many people at a time? Is it a problem if one user gets occluded behind another? Etc.

If you choose the Kinect, I'd recommend using the user/skeleton tracking over doing your own blob tracking on the depth map. User/skeleton tracking gives you the positions of the users directly, and using those as control input for your video would be trivial.
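For instance, once you have a user's position, the mapping to a trigger is only a few lines. A sketch of the idea, where the tracking-area width and number of zones are made-up example values:

```cpp
// Hedged sketch: map a tracked x position (metres, from skeleton tracking)
// to one of N "reflection video" zones across the installation area.
#include <algorithm>
#include <cstdio>

// Map x in [xMin, xMax] to a zone index 0..zones-1, clamped at the edges.
int zoneForX(float x, float xMin, float xMax, int zones) {
    float t = (x - xMin) / (xMax - xMin);       // normalise to 0..1
    int z = static_cast<int>(t * zones);
    return std::max(0, std::min(zones - 1, z)); // clamp to valid range
}

int main() {
    // e.g. a 3 m wide tracking area split into 4 trigger zones
    std::printf("%d\n", zoneForX(1.6f, 0.0f, 3.0f, 4)); // prints 2
    return 0;
}
```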

Stephane Morisse:

But skeleton tracking requires the user to strike the calibration pose...

Vjacobs:

Not if you use the dp.kinect external in Max, which is based on the Microsoft Kinect SDK (Windows only).

dtr:

Auto-calibration was introduced in OpenNI (Mac + Windows) ages ago, i.e. the calibration pose isn't necessary anymore.

Vjacobs:

hah interesting, didn't know that!

alexandra Verhaest:

Thanks dtr!

I finally decided on using ultrasonic sensors and an Arduino, since the position info I need is pretty straightforward and doesn't even require Y-axis data. But I'll keep the Kinect in mind for more advanced NI projects in the future.
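In case it helps anyone finding this thread later, the Arduino side of that setup can be as simple as the sketch below, assuming HC-SR04-style ultrasonic sensors (the pin numbers and sensor model are placeholders, not Alex's actual hardware); Max would read the printed values with the [serial] object:

```cpp
// Hedged Arduino sketch: read one HC-SR04 ultrasonic sensor and print the
// distance in cm over serial, for Max to parse via [serial].
const int TRIG_PIN = 9;   // placeholder wiring
const int ECHO_PIN = 10;

void setup() {
    pinMode(TRIG_PIN, OUTPUT);
    pinMode(ECHO_PIN, INPUT);
    Serial.begin(9600);   // match the baud rate on the [serial] object
}

void loop() {
    // Fire a 10 microsecond trigger pulse
    digitalWrite(TRIG_PIN, LOW);
    delayMicroseconds(2);
    digitalWrite(TRIG_PIN, HIGH);
    delayMicroseconds(10);
    digitalWrite(TRIG_PIN, LOW);

    // Echo pulse width in microseconds -> distance in cm
    long duration = pulseIn(ECHO_PIN, HIGH, 30000); // 30 ms timeout
    long distanceCm = duration / 58;                // standard HC-SR04 conversion

    Serial.println(distanceCm);
    delay(50); // roughly 20 readings per second
}
```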

Thanks,

Alex