Ambitious project. Normally I do brash things, get okay results, then have to comb the forums for ages optimizing my poorly conceived method. Not this time (maybe)!
I have a highly accurate 6DOF tracking system available to me. I want to strap a tracker to a Kinect and use the resulting depth field as a point cloud to create a map of the room. Other AR apps do this with Parallel Tracking and Mapping, extrapolating the camera's position from the points it's tracking. I will already have the tracker's position in the room to the centimeter, plus pitch, roll, and yaw, so all I need to do is convert cm to OpenGL coordinates.
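For what it's worth, the cm-to-OpenGL step is just a scale and offset once you pick a convention. Here's a minimal sketch, assuming a made-up convention of 1 GL unit = 1 meter with the GL origin at the center of the room; the room dimensions and axis layout are placeholders to adapt to your own scene:

```python
def cm_to_gl(x_cm, y_cm, z_cm, room_size_cm=(500.0, 250.0, 400.0)):
    """Map room-frame centimeters to a GL frame centered on the room.

    Hypothetical convention: 1 GL unit = 1 m, room origin at one corner.
    """
    w, h, d = room_size_cm
    # Shift so the room center sits at the GL origin, then scale cm -> m.
    x = (x_cm - w / 2.0) / 100.0
    y = (y_cm - h / 2.0) / 100.0
    z = (z_cm - d / 2.0) / 100.0
    return (x, y, z)

print(cm_to_gl(250.0, 125.0, 200.0))  # room center -> (0.0, 0.0, 0.0)
```

The same numbers would feed the position/rotation attributes of whatever GL object represents the Kinect in the patch.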
So I could use help on the viz side. How would I go about creating such a massive point cloud with color data derived from the Kinect's RGB camera? So far I've been able to make very low-res point clouds with the Kinect, but I'd like to be able to make stable points in 3D space (it'll be utilizing the cosm objects).
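One common way to keep accumulated points "stable" (and keep the cloud from growing without bound as you rescan the same surfaces) is to snap incoming world-space points to a fixed voxel grid and keep one point per cell. This is a hedged sketch of that idea, not a Max patch; the 2 cm cell size is an arbitrary example:

```python
def voxel_key(p_cm, cell_cm=2.0):
    """Quantize a world-space point (in cm) to its voxel cell index."""
    return tuple(int(c // cell_cm) for c in p_cm)

cloud = {}  # voxel key -> (point, rgb); one stable point per cell

def add_point(p_cm, rgb, cloud=cloud):
    # Later scans of the same 2 cm cell overwrite rather than duplicate.
    cloud[voxel_key(p_cm)] = (p_cm, rgb)

add_point((100.0, 50.0, 30.0), (255, 0, 0))
add_point((100.5, 50.5, 30.5), (250, 5, 5))  # same cell, replaces
print(len(cloud))  # 1
```

In a patch you'd do the equivalent by writing points into a fixed-size matrix indexed by quantized position, so the stored cloud stays bounded no matter how long you scan.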
Is jit.gl.sketch the way to go? Maybe make an airbrush that draws jit.gl.sketch points? Hmmm… but then how would I make each point retain its texture from the RGB camera without bogging down the processor too much?
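On the "without bogging down the processor" question, the usual trick is to avoid issuing per-point draw commands at all: pair each depth point with its RGB sample once, pack position plus color into a single per-vertex buffer, and hand the whole thing to the GPU in one draw. (In Max terms that's roughly what feeding jit.gl.mesh a geometry matrix plus a color matrix does, versus looping jit.gl.sketch commands.) A minimal sketch of the interleaving step, with hypothetical names:

```python
def build_vertex_array(points, colors):
    """Interleave (x, y, z) and (r, g, b) per point into one flat buffer.

    points: list of (x, y, z) in GL units.
    colors: list of (r, g, b) as 0-255 ints from the RGB image.
    """
    assert len(points) == len(colors)
    buf = []
    for (x, y, z), (r, g, b) in zip(points, colors):
        # Normalize 0-255 color to the 0.0-1.0 range GL expects.
        buf.extend([x, y, z, r / 255.0, g / 255.0, b / 255.0])
    return buf

verts = build_vertex_array([(0.0, 0.0, 1.0)], [(255, 128, 0)])
```

Per-point color stored this way is essentially free to render; the expensive part is the one-time lookup of each depth pixel's matching RGB pixel, which the Kinect's registration between the two cameras gives you.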