jit.openni (inc. Kinect) by diablodale
This is a Max/Jitter external for OpenNI on Windows and Mac OS X.
- Configuration of OpenNI by an OpenNI XML configuration file; see OpenNI documentation for format
- ImageMap of RGB24 output in a 4-plane char jitter matrix
- DepthMap output in a 1-plane long, float32, or float64 jitter matrix
- IrMap output in a 1-plane long, float32, or float64 jitter matrix
- UserPixelMap output in a 1-plane long, float32, or float64 jitter matrix
- Skeleton joints in OSC, native Max route, or legacy OSCeleton format
- Skeleton events in OSC, native Max route, or legacy OSCeleton format (e.g. user seen, user lost, calibration success)
- User centers of mass
- Scene floor identification and data
- Values for user centers of mass and joints in OpenNI native, projective coordinate, or OSCeleton legacy “normalized” values
- Attributes to filter data based on position or orientation confidence, to show or hide orientation data, and to smooth skeleton data using OpenNI’s smoothing API
- Depth camera field of view
- Compiled as a Win32 and Mac OS X Max/Jitter external
Project documentation and setup instructions are at https://github.com/diablodale/jit.openni/wiki.
Example of the OSC joint output format (bracketed orientation values appear only when orientation output is enabled):
/userid/jointname x y z confidPosition [Xx Xy Xz Yx Yy Yz Zx Zy Zz orientPosition]
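To make the field layout above concrete, here is a minimal sketch of how a receiving application might unpack such a message. The `parse_joint` helper and its example values are hypothetical and purely illustrative; the field names follow the format line in this README, not any API of the external itself.

```python
# Illustrative parser for a jit.openni-style OSC joint message:
# /userid/jointname x y z confidPosition [Xx Xy Xz Yx Yy Yz Zx Zy Zz orientPosition]

def parse_joint(address, args):
    """Split '/userid/jointname' and map positional args to named fields."""
    _, userid, jointname = address.split("/")
    joint = {
        "user": int(userid),
        "joint": jointname,
        "x": args[0], "y": args[1], "z": args[2],
        "position_confidence": args[3],
    }
    # The 3x3 orientation matrix plus its confidence value is optional.
    if len(args) >= 14:
        joint["orientation"] = [args[4:7], args[7:10], args[10:13]]
        joint["orientation_confidence"] = args[13]
    return joint

# Example: a position-only message for user 1's head joint.
msg = parse_joint("/1/head", [100.0, 200.0, 1500.0, 1.0])
```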
It has been casually tested with Max 5 and 6, SensorKinect, and OpenNI binaries on Windows and Mac OS X. If you find problems and can reproduce them with clear steps, please open an issue at https://github.com/diablodale/jit.openni/issues
This external’s output is very similar to that of jit.freenect.grab, so it can often be used in its place with small changes. jit.openni outputs depth in mm, while jit.freenect.grab outputs cm; a simple math operation resolves the difference. Note that jit.openni does not provide jit.freenect.grab’s “raw” values; it provides mm depth values via OpenNI.
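The mm-to-cm scaling mentioned above is a single divide by 10 per cell. Inside a Max patch this would typically be one jit.op object on the depthmap (e.g. division with a scalar value of 10.); the short sketch below just illustrates the per-cell arithmetic on plain Python lists standing in for matrix cells, and the variable names are illustrative.

```python
# Scale jit.openni depth values (millimetres) to the centimetre
# range used by jit.freenect.grab: cm = mm / 10.

depth_mm = [[1500.0, 2000.0],
            [0.0, 4000.0]]          # example depthmap cells in mm
depth_cm = [[v / 10.0 for v in row] for row in depth_mm]
```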
The OSC skeleton data should be easy to use if you are familiar with OSCeleton.
I would like to see support for other generators (gestures, hand points, etc.) added in the future, either by myself or with the assistance of others.