[OT?] pointcloud animation workflow: from live scene capture to modeling app
Hi, this may be slightly OT, but it could certainly be done in Jitter, so here goes:
I've been working on a limited DIY 3d live scene capture + virtual camera solution. The virtual camera recorder is up and running, all made in Max and JS, so far using a Gametrak controller and a wireless 6DOF IMU.
However, the live 3d scene capture is the current problem.
I am using a Kinect for this and cannot for the life of me figure out the proper workflow for capturing a point cloud sequence (i.e. a "4d animation", with a point cloud file saved for each frame) and actually using it in 3d modeling software (currently Maya, but I can switch to anything else if it works better). There's an "obj sequence loader" plugin for Maya, but it's broken with recent versions.
The capturing part is manageable - I'm currently using "Brekel Kinect" for it, and it could quite easily be rewritten in Jitter or whatever to be faster; right now I'm getting only 10 fps at most. I would love to find a utility that processes raw Kinect depth+image pairs into a sequence of obj files.
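To make it concrete, this is roughly the conversion I mean - an untested Python sketch that turns 16-bit depth PNGs into per-frame obj point clouds. The intrinsics are ballpark uncalibrated Kinect v1 values and the filenames are just placeholders:

# Sketch: convert a folder of 16-bit depth PNGs (millimetres) into obj point clouds.
# Intrinsics below are approximate Kinect v1 values, not calibrated ones.
import glob
import numpy as np
from PIL import Image

FX, FY, CX, CY = 594.2, 591.0, 339.3, 242.7  # ballpark Kinect v1 intrinsics

def depth_to_obj(depth_png, obj_path):
    depth = np.asarray(Image.open(depth_png), dtype=np.float32)  # depth in mm
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth / 1000.0          # metres
    valid = z > 0               # drop pixels with no depth reading
    x = (us - CX) * z / FX
    y = (vs - CY) * z / FY
    # flip to a y-up, -z-forward frame so it lands sensibly in a modeling app
    pts = np.stack([x[valid], -y[valid], -z[valid]], axis=1)
    with open(obj_path, "w") as f:
        for px, py, pz in pts:
            f.write("v %.4f %.4f %.4f\n" % (px, py, pz))

for i, path in enumerate(sorted(glob.glob("depth_*.png"))):
    depth_to_obj(path, "cloud_%04d.obj" % i)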
But I have no clue how to actually import a sequence of obj/ply/etc. files into Maya or similar. Any pointers welcome!
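For reference, the only workaround I've come up with so far is brute force - a Maya Python sketch (untested, placeholder paths, assumes Maya's obj import plugin is loaded) that imports every obj into its own group and keys visibility so only one cloud shows per frame:

# Sketch: brute-force obj "sequence" in Maya by keying per-frame visibility.
import glob
import maya.cmds as cmds

obj_files = sorted(glob.glob("D:/capture/cloud_*.obj"))  # placeholder path

for frame, path in enumerate(obj_files, start=1):
    new_nodes = cmds.file(path, i=True, returnNewNodes=True)
    transforms = cmds.ls(new_nodes, type="transform")
    grp = cmds.group(transforms, name="cloud_f%04d" % frame)
    # visible only on its own frame, stepped so it doesn't blend between frames
    cmds.setKeyframe(grp, attribute="visibility", time=frame - 1, value=0, outTangentType="step")
    cmds.setKeyframe(grp, attribute="visibility", time=frame, value=1, outTangentType="step")
    cmds.setKeyframe(grp, attribute="visibility", time=frame + 1, value=0, outTangentType="step")

That obviously won't scale to hundreds of dense frames, which is why I'm hoping someone knows a proper sequence-loader or cache-based workflow instead.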
Will share the virtual camera code once it's generalized and cleaned enough.
best
Nadav