kinect depth map and rgb together
I'm having trouble figuring out how to get a depth-informed color image out of the Kinect. I can get the RGB image and the depth point map separately, but I'm not sure how to marry them.
Do you draw a point map, then paint each point with the color from the matching RGB pixel? (How do you specify a color per point?)
Or do you take the RGB information and give it depth?
I'm trying to make colored 3D points that I can fly around with the [jit.gl.camera].
Thanks kindly!
Hey folks,
This is an old thread, but it asks exactly the question I'd like answered...
I am using OpenNI to get RGB data and depth information from the Kinect camera. I would then like to add particle effects to the image at the coordinates specified in the depth map.
Does anyone know what range of depth values the Kinect outputs and how to convert them to a 640x480 image?
Your setup isn't clear to me, and that greatly affects the answer, so I'll answer generically.
The Kinect can see to about 8 meters. Various software drivers/middleware might limit that to 4m; for example, NITE (a component of the OpenNI stack) and Microsoft's skeleton-detection code are limited to 4m. However, the raw RGB and depth pixels can go out to the full 8m.
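If all you want is to view the depth map as a 640x480 picture, the usual trick is to clamp and rescale the raw values. Here's a minimal sketch in Python/NumPy, assuming your driver reports depth in millimeters (units vary by driver, so check yours); depth_to_grayscale is just a name I made up:

    import numpy as np

    def depth_to_grayscale(depth_mm, max_range_mm=8000.0):
        # Clamp to the sensor's nominal maximum range, then scale to 8-bit.
        # Pixels the Kinect could not measure come through as 0 and stay black.
        d = np.clip(depth_mm.astype(np.float32), 0.0, max_range_mm)
        return (d / max_range_mm * 255.0).astype(np.uint8)

You can then treat the result as an ordinary single-plane char matrix. If you only care about the skeleton-tracking zone, pass max_range_mm=4000.0 instead to spread that 4m over the full grayscale range.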
Different software/drivers support different resolutions. You will want your software to align the depth and color images and output both at the same resolution (you say you want 640x480). Then, for example, you can use jit.gl.mesh to draw the depth map as 3D points and color them with the RGB image; see the sketch below.
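To turn an aligned depth+RGB pair into colored 3D points, you back-project each depth pixel through a pinhole camera model. A rough sketch, again in Python/NumPy; the focal length and principal point below are commonly quoted ballpark values for the Kinect at 640x480, not calibrated numbers, and depth_rgb_to_points is a hypothetical name:

    import numpy as np

    FX = FY = 525.0        # approximate focal length in pixels (assumption; calibrate per device)
    CX, CY = 319.5, 239.5  # approximate principal point for a 640x480 image

    def depth_rgb_to_points(depth_mm, rgb):
        # depth_mm: 480x640 depth map in millimeters, aligned to rgb (480x640x3, uint8).
        # Returns (N,3) positions in meters and (N,3) colors in 0..1.
        h, w = depth_mm.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth_mm.astype(np.float32) / 1000.0   # mm -> meters
        valid = z > 0                              # drop pixels with no depth reading
        x = (u - CX) * z / FX                      # pinhole back-projection
        y = (v - CY) * z / FY
        points = np.stack([x[valid], y[valid], z[valid]], axis=1)
        colors = rgb[valid].astype(np.float32) / 255.0
        return points, colors

In Jitter terms, you'd pack the positions into one 3-plane float32 matrix and the colors into another, then feed them to jit.gl.mesh's vertex and color inputs with draw_mode points, and fly around the result with [jit.gl.camera].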
If you are on Windows, I recommend dp.kinect. Why? Because it is richly integrated into Max, supports nearly all features of the Kinect, is updated regularly, and is well supported. http://hidale.com/dp-kinect/
If you are on Mac, I caution you that Apple killed the OpenNI project and its website is being taken down later this month. After that, a core component of the OpenNI stack (NITE) will be unavailable unless pirated, putting all of the Mac solutions like jit.openni, Synapse, and NIMate at risk.