I’ve tried to map the x values like this [zmap 0. 640. -1. 1.0] and the y values like this [zmap 0. 480. 1. -1.], but this gives me strange results.
What would be the right mapping data to map the kinect to jitter?
You might want to look at this if you’re trying to map the raw depth map to an OpenGL context: http://dtr.noisepages.com/2011/02/2-methods-for-undistorting-the-kinect-depth-map-in-maxjitter/
I did this a long time ago, so the newer Kinect tools may handle it automatically now; I’m not sure.
I have seen your example before, thanks, but I’m using Synapse and getting OSC data.
I’m not using Synapse, but it seems improbable that the values you get are in the 0-640 and 0-480 ranges. More likely you’re getting meter or millimeter values, with 0 in the middle on x and at camera height on y, so you’d have both positive and negative values.
I’m also using Synapse with the Kinect. It can output pixel values as well; I think it’s 640 by 480.
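If Synapse really is sending pixel coordinates, the [zmap] objects from the first post are the right idea (note the flipped y range, since pixel y runs downward while GL y runs upward). Here’s a quick sketch of the same linear scaling in Python, just to check the math; the zmap-style function below is my own helper, not part of any Kinect library:

```python
def zmap(x, in_lo, in_hi, out_lo, out_hi):
    """Linear scaling, same idea as Max's [zmap] object."""
    return out_lo + (x - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

# Pixel coordinates (640x480, y pointing down) -> GL coordinates (-1..1, y up)
gl_x = zmap(320.0, 0.0, 640.0, -1.0, 1.0)  # center of the image -> 0.0
gl_y = zmap(0.0, 0.0, 480.0, 1.0, -1.0)    # top row -> 1.0
```

If the values turn out to be in meters or millimeters instead, the same function works; you’d just swap in whatever input range your tracking volume actually covers.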