thanks for the tip, dtr. I wasn't aware of that. I imagine it's a huge task to port some of those capabilities into Max. But it would be really great to merge, for instance, the "physics" from Max with the real-time modeled/segmented world of point clouds / KinectFusion. Do you have any clue how this could be done?
Well... I'm trying to get their 'GPU People' to compile and run. That's their alternative to OpenNI / Kinect SDK skeleton tracking. It runs on an Nvidia CUDA-accelerated graphics card instead of the CPU. Pretty interesting. I'd like it to eventually replace my current tracking system, which is based on the SimpleOpenNI library in Processing. But not being a programmer, I have a really hard time getting the whole PCL package and all its dependencies to compile. Haven't succeeded yet. GPU People lives in the dev/trunk branch (as opposed to the stable branch), which doesn't help either.
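For anyone else attempting this, the build I'm fighting with follows the usual CMake out-of-source pattern. This is only a rough sketch, not a recipe that's known to work: the exact CMake flag names (`BUILD_GPU`, `BUILD_CUDA`) and the repository location are assumptions based on how PCL's build is typically configured, and the GPU modules pull in extra dependencies (CUDA toolkit, etc.) on top of PCL's usual ones (Boost, Eigen, FLANN, VTK).

```shell
# Hypothetical build sketch for PCL trunk with the GPU modules enabled.
# Flag names and repo URL are assumptions -- check PCL's own build docs.
git clone https://github.com/PointCloudLibrary/pcl.git
cd pcl
mkdir build && cd build

# Enable the GPU/CUDA modules that GPU People depends on;
# a Release build matters a lot for point-cloud performance.
cmake .. \
  -DCMAKE_BUILD_TYPE=Release \
  -DBUILD_GPU=ON \
  -DBUILD_CUDA=ON

# Build with several parallel jobs; PCL is a big compile.
make -j4
```

If configuration fails, the CMake output usually names the missing dependency, which at least tells you what to install next.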
So no, I have no clue how this could be done. ;)
It surely can be done but will take an experienced code ninja to do it...