Documentation is nonexistent aside from the video and the comments in the JSON config file, but there's probably enough info there to get started.
1. 3D background subtraction
2. custom culling of the Kinect depth matrix for carving out an ideal tracking space (steps 1-2 are sketched in code after this list)
3. reports blob location data via OSC to a variable number of listening clients, as follows (a parsing sketch also appears after this list):
--- /kinect/blobs label1 pixelX1 pixelY1 x1 y1 z1 label2 pixelX2 pixelY2 x2 y2 z2 ... and so on for as many blobs as are present (the message's argument list is always a multiple of 6 elements long)
4. by default, blobs are sorted and labeled in ascending order from nearest to farthest from the camera (labels start at 1)
--- pixel x/y coords are the blob centroid in pixel (camera-frame) space, normalized from -1. to 1., with the origin at the center of the frame
--- x/y/z coords are the real-space 3D blob centroid coordinates in meters (right-handed coordinate system, with z pointing out from the camera and y pointing up)
5. optional depth-sensitive tracking: attempts to keep each label consistently associated with the correct blob, regardless of overlap, etc. (a sketch of the idea appears last, below)
--- like cv.jit.blobs.sort, but considering depth as well
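
For orientation, here is a rough Python/NumPy sketch of what steps 1 and 2 amount to. Everything in it (the function names, the 0.05 m threshold, the min/max culling bounds) is made up for illustration; the app's actual parameters live in the JSON config.

    import numpy as np

    def foreground_mask(depth, background, threshold_m=0.05):
        """True where a pixel reads at least threshold_m closer than the
        captured background reference, i.e. something new is in front of it."""
        valid = (depth > 0) & (background > 0)  # 0 = no Kinect reading
        return valid & (background - depth > threshold_m)

    def cull(depth, z_min, z_max):
        """Zero out depth readings outside the ideal tracking range."""
        out = depth.copy()
        out[(out < z_min) | (out > z_max)] = 0
        return out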
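On the listener side, unpacking a /kinect/blobs message (step 3) is just a matter of splitting the flat argument list into 6-element records. A minimal Python sketch; the Blob and parse_blobs names are hypothetical, not part of this project:

    from dataclasses import dataclass
    from typing import List, Sequence

    @dataclass
    class Blob:
        label: int      # 1 = nearest blob to the camera
        pixel_x: float  # centroid in camera-frame space, -1. to 1.
        pixel_y: float
        x: float        # real-space centroid in meters (right-handed,
        y: float        # z out from the camera, y up)
        z: float

    def parse_blobs(args: Sequence[float]) -> List[Blob]:
        """Split the flat OSC argument list into per-blob records."""
        if len(args) % 6 != 0:
            raise ValueError("blob message length must be a multiple of 6")
        return [Blob(int(args[i]), *args[i + 1:i + 6])
                for i in range(0, len(args), 6)]

    # e.g. two blobs, nearest first:
    blobs = parse_blobs([1, 0.10, -0.25, 0.12, -0.30, 1.40,
                         2, -0.40, 0.05, -0.55, 0.07, 2.10])
    print(blobs[0].z)  # 1.4 -> blob 1 is 1.4 m from the camera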
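The depth-sensitive tracking in step 5 presumably boils down to nearest-centroid matching, as in cv.jit.blobs.sort, but with z included in the distance metric so blobs that overlap in the image can still be told apart. A greedy sketch of that idea (an assumption, not necessarily the app's actual algorithm):

    import math

    def associate_labels(prev, current):
        """Greedily hand existing labels to the nearest new 3D centroids;
        unmatched centroids get the lowest unused labels.
        prev: {label: (x, y, z)}; current: [(x, y, z), ...]"""
        pairs = sorted((math.dist(p, c), label, i)
                       for label, p in prev.items()
                       for i, c in enumerate(current))
        assigned, taken_idx = {}, set()
        for _, label, i in pairs:  # closest pairs claim their match first
            if label in assigned or i in taken_idx:
                continue
            assigned[label] = current[i]
            taken_idx.add(i)
        next_label = 1
        for i, c in enumerate(current):  # label any brand-new blobs
            if i in taken_idx:
                continue
            while next_label in assigned or next_label in prev:
                next_label += 1
            assigned[next_label] = c
        return assigned

    prev = {1: (0.12, -0.30, 1.40), 2: (-0.55, 0.07, 2.10)}
    now = [(-0.50, 0.06, 2.05), (0.15, -0.28, 1.45)]
    print(associate_labels(prev, now))  # labels follow the blobs, not depth order

A production version would also cap the match distance, so a blob that leaves the scene doesn't have its label stolen by a distant newcomer.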