Problems with jit.gl.mesh and auto_normals
Hi,
I can't get jit.gl.mesh auto_normals to work like I expect with "draw_mode triangles".
I use a Kinect and Processing to scan a 3D model. I then record all the points in a file called model.txt (each line is one vertex of a triangle: the first three floats are the x, y, and z coordinates; ignore the other two).
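For reference, reading that file format back could look something like this (plain Java rather than a Processing sketch; the class and method names are just for illustration, and the five-floats-per-line layout follows the description above):

```java
import java.util.ArrayList;
import java.util.List;

public class ModelParser {
    // Each line of model.txt is one triangle vertex: x, y, z followed by
    // two extra floats that we ignore.
    public static List<float[]> parse(List<String> lines) {
        List<float[]> vertices = new ArrayList<>();
        for (String line : lines) {
            String[] parts = line.trim().split("\\s+");
            if (parts.length < 3) continue; // skip malformed lines
            vertices.add(new float[] {
                Float.parseFloat(parts[0]),
                Float.parseFloat(parts[1]),
                Float.parseFloat(parts[2])
            });
        }
        return vertices;
    }
}
```

Every three consecutive vertices then form one triangle, which matches "draw_mode triangles".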
When I use Processing to view the file, the normals are created properly and the model reflects light as expected.
When I use Max to view the file, all the normals seem to be identical even with auto_normals set to "1". You can compare the output of Processing and Max with "processing.png" and "max.png". What am I doing wrong? See attached files.
It's already working; it's just hard to see the shading. All I did was add a point light to the scene, define some material settings, and remove @color to use @mat_diffuse instead.
Patch below automatically moves the mesh to a blob where you can see the shading.
I noticed that your mesh is not contiguous and sometimes appears out of order. The ordering of vertices is needed to know which vertex is adjacent to which, and that in turn affects the normals. Since you are capturing with a Kinect, you'll have a grid there — can you keep it in a 2-dimensional matrix so that jit.gl.mesh can better understand how to connect everything together?
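To see why vertex ordering matters for normals: a face normal is typically computed from the cross product of two edge vectors, so swapping two vertices of a triangle flips the normal's direction. A minimal sketch in plain Java (the method name is illustrative, not anything from Max or Processing):

```java
public class FaceNormal {
    // Normal of triangle (a, b, c) via the cross product (b-a) x (c-a).
    // Winding order matters: swapping b and c negates the result,
    // which is why out-of-order vertices produce wrong shading.
    public static float[] normal(float[] a, float[] b, float[] c) {
        float[] e1 = { b[0] - a[0], b[1] - a[1], b[2] - a[2] };
        float[] e2 = { c[0] - a[0], c[1] - a[1], c[2] - a[2] };
        return new float[] {
            e1[1] * e2[2] - e1[2] * e2[1],
            e1[2] * e2[0] - e1[0] * e2[2],
            e1[0] * e2[1] - e1[1] * e2[0]
        };
    }
}
```

For the triangle (0,0,0), (1,0,0), (0,1,0) this gives (0,0,1); listing the same vertices in the opposite order gives (0,0,-1).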
Oh... ok, thanks!
I noticed that your mesh is not contiguous and sometimes appears out of order.
Yeah, I think that is necessary. I want to ignore all mesh data that is past a certain distance (2000 cm right now) from the Kinect camera. If I use the grid directly from jit.freenect.grab, for example, too much "garbage" data gets displayed (see http://www.youtube.com/watch?v=wvJKaViF7p0 for an example of what I mean).
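That distance cutoff can be sketched as a simple depth filter (plain Java; the class name and the idea of also dropping z = 0.0 "no reading" points are my assumptions, with the 2000 threshold taken from above):

```java
import java.util.ArrayList;
import java.util.List;

public class DepthFilter {
    // Keep only vertices whose depth (z) is positive and within maxDepth
    // of the camera; z == 0.0 typically means the sensor had no reading.
    public static List<float[]> filter(List<float[]> vertices, float maxDepth) {
        List<float[]> kept = new ArrayList<>();
        for (float[] v : vertices) {
            float z = v[2];
            if (z > 0.0f && z <= maxDepth) {
                kept.add(v);
            }
        }
        return kept;
    }
}
```

The trade-off is exactly the one described here: filtering per-vertex discards the grid structure, so the mesh loses the adjacency information that auto_normals relies on.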
Right now, this is the cleanest method I have found, and with it I can produce results similar to https://vimeo.com/35858119 in Processing.
But I'd rather work in Max :)
I've heard that you have to undistort the data in freenect. If you don't want to deal with that, or want additional features, you can use jit.openni, which is available for Mac and Windows.
There is an "edge flag" feature of jit.gl.mesh that I've always been curious about; if anyone knows it well, perhaps it can help here. It should let you mark the edges of the data so that you don't draw the z=0.0 points you see in your YouTube video.
jit.openni can identify human outlines, and you can use that to mask the data. Maybe color the points you don't want black, or give them 0 alpha. If you're a shader writer, you can also use geometry shaders to restructure the mesh and filter out vertices you don't want, e.g. on Z.
The Vimeo effect is fairly easy to do after you make the mesh; you only have to color the vertices.
If you want higher resolution than the Kinect's 640x480, there is a hi-res technique available that allows you to calibrate a high-definition camera to the Kinect depth sensor at http://www.rgbdtoolkit.com/