[jit.openni] how do I define a plane using the floor output?

    Apr 19 2013 | 9:29 am
    Hello everybody,
    I'm trying to learn jit.openni for user tracking with the Kinect. For now, I'd like to understand how I can draw a plane representing the floor. As reported in https://github.com/diablodale/jit.openni/wiki:
    scene_floor is an array of 6 floats: the first 3 are the X, Y, Z coordinates of a point on the plane, the second 3 are the X, Y, Z of a normal vector.
    Definition with a point and a normal vector
    In a three-dimensional space, another important way of defining a plane is by specifying a point and a normal vector to the plane.
    Let r0 be the position vector of some known point P_0 in the plane, and let n be a nonzero vector normal to the plane. The idea is that a point P with position vector r is in the plane if and only if the vector drawn from P_0 to P is perpendicular to n. Recalling that two vectors are perpendicular if and only if their dot product is zero, it follows that the desired plane can be expressed as the set of all points r such that
    n · (r − r_0) = 0.
    (The dot here denotes the dot product, not scalar multiplication.) Expanded, this becomes
    n_x(x − x_0) + n_y(y − y_0) + n_z(z − z_0) = 0,
    which is the familiar equation for a plane.
    Note that this means a plane can also be defined by two distinct points, so long as they are ordered and used according to an agreed convention: for example, the first point P_0 sits on the plane and the normal vector is taken implicitly as (P_1 − P_0).
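    To make the point-and-normal definition concrete, here is a short Python sketch (the function name is illustrative, not jit.openni API) that checks whether a point satisfies n · (r − r_0) = 0 for a plane given as scene_floor's six floats:

```python
# Test whether a point lies on a plane described the way jit.openni's
# scene_floor output does: a point on the plane (px, py, pz) followed by
# a normal vector (nx, ny, nz). `on_plane` is an illustrative name.

def on_plane(point, plane_point, normal, tol=1e-6):
    """Return True if n . (r - r0) is zero within `tol`."""
    dx = point[0] - plane_point[0]
    dy = point[1] - plane_point[1]
    dz = point[2] - plane_point[2]
    dot = normal[0] * dx + normal[1] * dy + normal[2] * dz
    return abs(dot) < tol

# Example: a floor through the origin with an upward-pointing normal.
print(on_plane((2.0, 0.0, -1.5), (0.0, 0.0, 0.0), (0.0, 1.0, 0.0)))  # True
print(on_plane((2.0, 0.3, -1.5), (0.0, 0.0, 0.0), (0.0, 1.0, 0.0)))  # False
```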
    Ok, now my ignorance does the rest... how can I translate all these beautiful things into the Jitter world (i.e. a matrix controlling a jit.gl.mesh)?

    • Apr 19 2013 | 1:40 pm
      Hello. Consider using jit.gl.gridshape @shape=plane, then connect a jit.anim.node as its parent to control the position and rotation. For example...
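      In case it helps to see the geometry as code, here is a hedged Python sketch (illustrative names, not a Max or jit.openni API) that turns the scene_floor normal into an axis-angle rotate value for an object whose untransformed plane normal is +Z (an assumption about @shape plane here):

```python
import math

# Hedged sketch: derive an axis-angle "rotate" message (angle in degrees,
# then axis x y z) that tilts a plane whose untransformed normal is +Z so
# it matches the measured floor normal. axis = Z x n, angle = acos(Z . n).

def rotate_from_floor_normal(nx, ny, nz):
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    nx, ny, nz = nx / length, ny / length, nz / length
    ax, ay, az = -ny, nx, 0.0  # cross product (0, 0, 1) x (nx, ny, nz)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, nz))))
    # Note: the axis degenerates to (0, 0, 0) when n is parallel to +/-Z;
    # a real patch would special-case that.
    return angle, ax, ay, az

# An upright floor normal (0, 1, 0) tilts the plane 90 degrees about -X:
print(rotate_from_floor_normal(0.0, 1.0, 0.0))  # (90.0, -1.0, 0.0, 0.0)
```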
    • Apr 19 2013 | 4:53 pm
      Thanks diablodale for the hint, it seems just what I need!
      But I have another (maybe silly) question: is the Kinect supposed to detect the floor automatically? At the moment, the values output by jit.openni give me the results you can see in the attached screenshot; in fact, the floor ends up on the ceiling...
      What I'm trying to achieve is to translate skeleton coords into a jit.gl.mesh in order to visualize the skeleton joints in Max, and possibly also visualize the floor. But I'm getting lost... with coordinates! Can you help me? Here's the patch:
      I've been able to visualize the "skeletal" mesh, but to display it correctly I have to give it rotatexyz 90 180 0, which is obviously different from the plane's rotation.
    • Apr 20 2013 | 10:48 pm
      Several issues that I found: 1) You erroneously set skeleton_value_type=2. This was the #1 issue. It needs to be the default value of zero. As a new coder, you will never need to set this to a value of 2; it is for crusty-old-buggy-legacy-3rd-party-emulation-apps only. And unless you become an expert at projective coordinate systems in 3D graphics and need such, you will never set it to 1.
      2) For your sanity and better graphics, you'll want to express all your values in meters. As you currently have it configured, it is in mm. So after issue 1 was fixed, the plane was at -1334 down the Y axis... way off screen. Due to arcane OpenGL depth-buffer issues, I highly suggest you avoid large coordinates like that. Instead, set @distmeter=1 on jit.openni.
      3) You have some problems in your delacing. The approach you have taken won't work. I'll leave that debugging to you.
    • Apr 21 2013 | 7:19 pm
      Thanks again for your hints. About issue number 3, it seemed to work for me in getting a correct visualization of skeletal joints, but I'll see if I can do something better.
    • Apr 21 2013 | 11:03 pm
      Here's the updated patch:
      Now, thanks to the @distmeter 1 attribute, the floor plane appears in the right place. But there are some discrepancies with the skeleton joint positions. In the attached screenshot you can see the feet are lower than the floor, while in other tests users in the same position seem to be "floating in air". At a first rough test, this seems to be related to the Kinect's tilt: in the screenshot the tilt was set to 0, and changing it gives different results. Do you have any recommendations about an optimal tilt for getting correct floor and skeleton values? Or could it be related to other issues? Maybe I'll run some tests in a bigger room, with a larger floor area and no furniture. At the moment I'm trying it in a furnished room with the Kinect on a desk.
    • Apr 23 2013 | 2:09 pm
      You still have the bug in the portion that delaces the messages and fills the matrix for the mesh. That root problem cascades, and you try to hack around it by rotating the mesh, which is unneeded.
      Look carefully at how you are filling the matrix. I'm confident that Y and Z are in the incorrect planes.
    • Apr 23 2013 | 2:52 pm
      Yes, I fixed that last night but didn't have time to post. Planes Y and Z were inverted. Thanks for pointing that out. Now I'm applying a 180° rotation to jit.gl.camera to correctly visualize the skeleton distance, since jit.openni outputs growing values for farther objects, while in the GL world greater Z values represent closer objects. Is this approach OK, or should I use another method? I still have to do serious testing on the "feet and floor" issue, but it seems better. I'll post something soon. Thanks again.
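      As a plain-code illustration of the convention change discussed here (an editor's sketch, not part of the patch): rotating the scene 180° about Y is equivalent to negating both x and z. Negating z alone would also move far objects to negative Z, but it mirrors the scene (it flips handedness):

```python
# Sketch: convert Kinect-style coordinates (z grows with distance from the
# sensor) to the common OpenGL default (camera at +Z looking down -Z) by
# rotating 180 degrees about the Y axis, i.e. negating both x and z.

def kinect_to_gl(x, y, z):
    return (-x, y, -z)

print(kinect_to_gl(0.3, 1.0, 2.5))  # (-0.3, 1.0, -2.5)
```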
    • Apr 24 2013 | 12:47 am
      The Kinect does represent Z distance from the sensor as an increasing value. This approach is shared with all commodity depth sensors in the marketplace.
      OpenGL itself has no preference for positive or negative numbers representing "closer objects". Instead, OpenGL documentation and tools often choose default values which place a camera at a positive Z coordinate looking towards the negative Z axis. As you have written, you can easily change that at will. It's completely your choice and has no effect on rendering quality.
      Combinations of rotations with jit.gl.mesh, jit.anim.node(s), and even the mirror option in the OpenNI XML configuration file give you a rich set of options that can make a solution which fits your need and your programming style.
      With the bug fixes I made in your patch, the feet and floor appear to align well. It will not be perfect: both the feet *and* the floor are estimated. Often, feet cannot be seen clearly by the sensor, so NITE estimates their position based on the other joints and common body proportions. You can also adjust the feet coordinates yourself by, for example, forcing them to always be within 5 cm above the floor. It's all math... and math is fun! ;-)
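      The "force the feet near the floor" idea can be sketched in a few lines of Python (illustrative names, meters assumed, and the floor normal assumed unit length and pointing up): compute each foot's signed distance to the floor plane and shift it along the normal so it ends up between 0 and 5 cm above the floor.

```python
# Signed distance of a point from the scene_floor plane, then a clamp that
# keeps a foot joint within [0, 0.05] m above the floor. Names are
# illustrative, not jit.openni API; all values in meters.

def signed_distance(point, plane_point, normal):
    """n . (p - p0), assuming `normal` is unit length and points up."""
    return sum(n * (p - p0) for n, p, p0 in zip(normal, point, plane_point))

def clamp_to_floor(point, plane_point, normal, max_height=0.05):
    d = signed_distance(point, plane_point, normal)
    clamped = min(max(d, 0.0), max_height)
    # Shift the point along the normal so its height lands in range.
    shift = clamped - d
    return tuple(p + shift * n for p, n in zip(point, normal))

floor_pt, up = (0.0, 0.0, 0.0), (0.0, 1.0, 0.0)
# A foot 8 cm below the floor gets lifted onto it:
print(clamp_to_floor((0.4, -0.08, 2.1), floor_pt, up))  # (0.4, 0.0, 2.1)
```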
    • Apr 25 2013 | 8:48 am
      Thank you very much for your help, diablodale. And thanks for your great job with jit.openni!
    • Jun 30 2013 | 10:37 pm
      Hi diablodale and LSKA. I am very interested in this discussion, and wondering whether an updated patch was posted that had the "bug fixes I made in your patch". I ran LSKA's patch from April 23rd, and the point plotting looked like it still had the bug in it. Would one of you be willing to post the updated patch?
      Thanks very much. This is so exciting to be working on it!
    • Jul 01 2013 | 5:05 pm
      Hi, Ceberman, here's an updated version of the patch discussed above, with some enhancements and a better interface, also including a method for recording and storing skeletal data.
    • Jul 02 2013 | 1:33 am
      Thank you, very cool. Can't wait to read it in detail
    • Jul 02 2013 | 9:58 am
      Hi LSKA, really enjoying this patch! I have two very newbie questions: 1. How do I flip the x coordinates so the point skeleton is in mirror mode? 2. How do I get the floor to show?
      Thanks again for posting this, and many, many thanks to diablodale for jit.openni!
    • Jul 02 2013 | 10:42 am
      1. You can open jit.openni_config.xml and set the "mirror_on" parameter to "false" in the Depth node configuration. Actually, in my patch the whole scene is rotated 180°, so it's already in some sort of "mirror mode". Alternatively, if you want a quick switch, you can invert the sign of the x values inside the "skeleton_mesh" subpatcher (see the vexpr object in the patch below).
      2. The floor should be visible when clicking the "show floor" toggle (you should see the values change in the "floor coords" box), but it might be too small to be seen from the standard camera position. Try moving the camera around or changing the floor scale and see if it appears!
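      The x-sign flip in point 1 (done with a vexpr object in the patch) looks like this as a plain-Python sketch over a list of joint coordinates:

```python
# Mirror a skeleton horizontally by negating each joint's x coordinate,
# the same operation the patch's vexpr object performs per matrix cell.

def mirror_x(joints):
    return [(-x, y, z) for (x, y, z) in joints]

print(mirror_x([(0.3, 1.0, 2.0), (-0.1, 0.5, 1.5)]))
```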
    • Jul 03 2013 | 1:37 am
      Thanks a ton. Using your skeleton_mesh patch, I was able to get the right mirror orientation to match the RGB camera. I will keep working on showing the floor.