I’m doing a lot of Kinect + Max work. I’m not mapping a 3D model to the skeleton but generating 3D graphics from the skeleton data in Jitter, mainly using jit.gl.mesh. I can map the graphics precisely to the real-world body position (and, as a next step, project them back onto that body). So mapping a model to the skeleton should work as well. See some pics here, video coming asap: http://www.dietervandoren.net/index.php?/project/integration04/
As for the psi pose: I have a Processing app (using the SimpleOpenNI library) doing the skeleton tracking and sending the joint data to Max over OSC. SimpleOpenNI supports the auto-calibration introduced in recent OpenNI versions, so the psi pose is no longer needed. I’m on OSX, but this should work on Windows as well.
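To give an idea of the Processing side, here’s a minimal sketch of that setup. It assumes the SimpleOpenNI and oscP5 libraries are installed; the OSC address pattern, ports, and choice of joint are just examples, not my actual patch:

```java
// Minimal sketch: SimpleOpenNI skeleton tracking -> OSC to Max.
// Assumes SimpleOpenNI and oscP5 libraries are installed in Processing.
import SimpleOpenNI.*;
import oscP5.*;
import netP5.*;

SimpleOpenNI context;
OscP5 oscP5;
NetAddress maxPatch;

void setup() {
  context = new SimpleOpenNI(this);
  context.enableDepth();
  context.enableUser();   // auto-calibration: no psi pose required
  oscP5 = new OscP5(this, 12000);
  // Example port; match it to udpreceive in your Max patch
  maxPatch = new NetAddress("127.0.0.1", 7400);
}

void draw() {
  context.update();
  for (int userId : context.getUsers()) {
    if (context.isTrackingSkeleton(userId)) {
      // Head joint as an example; repeat for other SKEL_* joints as needed
      PVector head = new PVector();
      context.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_HEAD, head);
      OscMessage msg = new OscMessage("/skeleton/" + userId + "/head");
      msg.add(head.x);
      msg.add(head.y);
      msg.add(head.z);   // real-world coordinates in mm
      oscP5.send(msg, maxPatch);
    }
  }
}

// SimpleOpenNI callback: start tracking as soon as a user is detected
void onNewUser(SimpleOpenNI curContext, int userId) {
  curContext.startTrackingSkeleton(userId);
}
```

In Max you’d pick this up with [udpreceive 7400] and route the /skeleton messages into your Jitter patch.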