I am working on an interactive video installation showing a scene of a window view. I would like a second video — a reflection of someone in that window — to be triggered by the position of the user within a defined area, so that it appears as if the user is seeing their own reflection in the window.
So I was looking into using the depth-map info from a Kinect, but after some research I understood that the support for it in Max is a bit buggy and quite CPU-dependent.
(See dtr's post saying: "jit.freenect.grab is buggy. has a memory leak (only shows after some hours of non-stop operation) and it loses connection under high cpu loads. it's also from pre-OpenNI times, ie. hacked and reverse engineered. of course it might still be useful to you, for example if you need the depth map directly in max.")
So I wanted to pick your brains before I spend 100 euro on a Kinect or on carpet switches. Any good experiences with the depth-map info and Max? Or would carpet switches and an Arduino be the safer option? Any recommendations on those? Other advice?
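For context, the trigger logic I have in mind is roughly this, regardless of whether the depth data comes from jit.freenect.grab or something else. This is only a minimal Python sketch with made-up thresholds and a synthetic frame, not a real Kinect API — just to show what "user is standing in the zone" means in depth-map terms:

```python
# Hypothetical zone-trigger logic (illustrative names and thresholds,
# not a real Kinect/freenect API): given a depth frame in millimetres,
# count how many pixels inside a defined rectangle fall within a
# near/far depth band, and fire the reflection video when enough do.

def person_in_zone(depth_frame, zone, near_mm=800, far_mm=2500, min_pixels=200):
    """depth_frame: 2D list of depth values in mm (0 = no reading).
    zone: (x0, y0, x1, y1) rectangle in pixel coordinates."""
    x0, y0, x1, y1 = zone
    hits = 0
    for row in depth_frame[y0:y1]:
        for d in row[x0:x1]:
            if near_mm <= d <= far_mm:
                hits += 1
    return hits >= min_pixels

# Tiny synthetic 64x48 frame with a 4x4 "person blob" at ~1.5 m.
frame = [[0] * 64 for _ in range(48)]
for y in range(20, 24):
    for x in range(30, 34):
        frame[y][x] = 1500

print(person_in_zone(frame, zone=(25, 15, 40, 30), min_pixels=10))  # True
```

The `min_pixels` count is there so sensor noise or a single stray reading doesn't flicker the video on and off; a carpet switch gives you the same boolean, just without the depth band.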