depthtoworld cg shader kinect jit.openni

Ad.'s icon

hi all,
I'm working on a depth-to-world-coordinates Cg shader (using the Kinect and the jit.openni object) to do very efficient real-time mapping.
I have an issue with the Cg shader; Max tells me:
"(33) : error C3004: function "texRECT" not supported in this profile"
I'm asking for help :)

Here are my jxs, vp and fp files. Thank you!

Ad

jxs:

maps Kinect depth to world coordinates.

vp:

uniform samplerRECT depthMap : TEXUNIT0;
uniform float4x4 mvp : state.matrix.mvp;

float rawDepthToMeters(float depthValue) {
    // libfreenect raw-disparity-to-meters fit; the sampled depth arrives
    // normalized to 0..1, so scale by 2047 back to the raw 11-bit range.
    // The top raw value (2047, i.e. 1.0 here) means "no reading".
    if (depthValue < 1.0) {
        return 1.0 / (depthValue * -0.0030711016 * 2047.0 + 3.3309495161);
    }
    return 0.0;
}

float3 depthToWorld(float x, float y, float depthValue) {
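    // commonly circulated Kinect depth-camera intrinsics (from Nicolas
    // Burrus' calibration): fx_d/fy_d are inverse focal lengths, cx_d/cy_d
    // the principal point in pixels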

    float fx_d = 1.0 / 5.9421434211923247e+02;
    float fy_d = 1.0 / 5.9104053696870778e+02;
    float cx_d = 3.3930780975300314e+02;
    float cy_d = 2.4273913761751615e+02;

    float3 result = float3(0, 0, 0);
    float depth = rawDepthToMeters(depthValue);
    result.x = (x - cx_d) * depth * fx_d;
    result.y = (y - cy_d) * depth * fy_d;
    result.z = depth * 200; // extra scale factor on z, presumably scene tuning
    return result;
}

void main(
    in float4 iVertex : POSITION,
    in float2 iTexCoord : TEXCOORD0,
    out float4 oVertex : POSITION,
    out float2 oTexCoord : TEXCOORD0
){
    float4 vertex = iVertex;
    // NOTE: a texture fetch in a vertex program needs a Cg profile with
    // vertex texture fetch (e.g. vp40); arbvp1 rejects it with error C3004
    float depth = texRECT(depthMap, iTexCoord).r;
    vertex.xyz += depthToWorld(iVertex.x, iVertex.y, depth);
    oVertex = mul(mvp, vertex);

    oTexCoord = iTexCoord;
}

fp:

uniform samplerRECT colorMap : TEXUNIT1;

void main(
    in float2 iTexCoord : TEXCOORD0,
    out float4 oColor : COLOR
){
    oColor = texRECT(colorMap,iTexCoord);
}
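A note on that error: C3004 points at the texRECT call in the vertex program. The default Cg vertex profile (arbvp1) has no vertex texture fetch, so any texture lookup there is rejected; a profile such as vp40 (NV40-class GPUs and later) does support it. Assuming Jitter's <program> tag accepts a profile attribute (an assumption; check the Cg .jxs examples that ship with Jitter), the binding could look something like this, with placeholder source names:

<language name="cg" version="1.0">
    <program name="vp" type="vertex" source="depthtoworld.vp.cg" profile="vp40" />
    <program name="fp" type="fragment" source="depthtoworld.fp.cg" profile="fp40" />
</language>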

diablodale's icon

As I consider the features for the Microsoft SDK version (and potentially a backport to the OpenNI version), would you mind sharing your intention?

Are you wanting to create a point cloud? To have some kind of data structure which has, for each point, an x,y,z value in real-world coordinates?

I had considered providing this in jit.openni but chose to postpone it so I could see more development with the OpenNI SDK and point cloud projects like PCL.

The parallelism afforded by a shader would likely make the x,y -> real-world conversion faster. That gain, though, would be offset by having to move the full-frame data into the graphics card's memory and back to main memory (if it is needed there).

Ad.'s icon

I'm working on real-time mapping onto moving objects. I need to work with the real-world coordinates, then do a calibration between the Kinect and the projector (intrinsic and extrinsic points), deduce my transform matrix and implement it in the jxs shader. The GPU is the solution for doing this.
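Roughly, that last step could look like this in the vertex program above (a sketch; kinectToProjector is a hypothetical uniform holding the calibrated transform):

uniform float4x4 kinectToProjector; // from the intrinsics/extrinsics calibration

// back-project to world space, then move into the projector's frame
float4 world = float4(depthToWorld(iVertex.x, iVertex.y, depth), 1.0);
oVertex = mul(mvp, mul(kinectToProjector, world));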

Ad.

ćwiek's icon
(Max patch attached: copy it and select New From Clipboard in Max.)

hi!
I'm doing a very similar thing, and I thought the easiest and most accurate way to do this is to place the camera on the projector, plus a little correction with jit.gl.cornerpin. Of course, then you have to deal with IR lighting and filters.
Anyway, what do you mean by "real world coordinates"? I think the main problem is in the optics of both the Kinect and the projector - similar to this thread: https://cycling74.com/forums/kinect-z-depth-range-selector-problem
I have made a patch to map the depth onto a mesh, textured from the RGB camera, so it may be helpful somehow.
You just have to play around with @srcdimstart and @srcdimend to match the RGB and depth images (see the example below). I'm very interested in this thread, so please keep me informed.
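For example, cropping the RGB stream to line up with the depth map could look like this (the crop offsets are placeholder values you tune by eye):

jit.matrix 4 char 640 480 @usesrcdim 1 @srcdimstart 8 6 @srcdimend 632 474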

Ad.'s icon

Thank you, I'll keep you posted on this!
I think the only way to make an efficient mapping is to deal with the intrinsics, extrinsics and distortion of both the projector and the Kinect (or to use the POSIT algorithm, but that's another problem).
Cornerpin won't be enough in my case, because it works for a given plane, whereas I need a mapping in a whole 3D room.
big up

urmatter's icon

I would like to invite you all to this thread. I would love to get all of your input on this...

marlus's icon

I'm using jit.openni, and in order to get a depth image where the nearest parts are lighter pixels and the farthest are darker, I had to create two objects:

  • jit.op @op / @val 22 (fix the image that shows looping shades of grey, so it makes just one "pass")

  • jit.op @op * @val -1 (invert)

Is that correct? Do you guys have to fix the depth image too?

Then I discovered a more direct way to do this: jit.expr @expr "in[0] / 22 * -1"

I'm new to Jitter and matrix concepts. Do you have any example of how to create a point cloud from this treated depth image with jit.gl.mesh? And how to undistort it?

Attaching a patch that I'm working on:

openni001.maxpat (Max patch attached)
diablodale's icon

You can use math to adjust the numbers you get in the depthmap to be anything you want. jit.openni can provide you the numbers in meters, mm, floats, integers, etc. Depending on how you configure it, you can adjust the numbers to then give you the visual that you want. For example, sending 3412 to a jit.window is something that I can't readily understand. However, sending 0.5 to a window makes me think I will see a medium grey.
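For example (a sketch assuming the depthmap arrives as millimeters in a float32 matrix), dividing by the roughly 10 m maximum range yields a 0..1 greyscale that a jit.window can display:

jit.op @op / @val 10000.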

You must first learn about Jitter and matrices. You admit that above. It's a good thing to know what you don't know. :-)  Go through *all* of the Jitter tutorials. They are an amazing way to learn. After you understand these basic skills, you will then be ready to take on more complex topics like creating and manipulating point clouds.

There are APIs in the Kinect SDK which you could use to create point clouds. Using the SDK is non-trivial C++ or C# coding. I recommend you use C++ so that you can more easily write an external for Max. If you are not a C++ or C# coder, then you have a big fun project in which to learn new skills. ;-)

marlus's icon

Thank you, @diablodale! I will have a look at those examples. And maybe later, once I'm used to Max, I will try some C++ stuff (do you recommend oF, Cinder, or just plain C++?).

I just had to change the jit.openni depth outlet to output floats instead of integers; a small thing, but it makes all the difference. I also had to jit.map from 0-10000 to 0-2047. I still don't understand what this object does:

jit.expr 3 float32 160 120 @expr "(cell[0]-dim[0]/2.) * (in[0]-in[1]) * in[2] " "(cell[1]-dim[1]/2.) * (in[0]-in[1]) * in[2]" in[0] @inputs 3

But by changing some values on inlets 2 and 3, I got similar results and a point cloud! I will experiment a little bit more, organize the patch, and then post it here ;)
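(For what it's worth, that expression appears to be the same pinhole back-projection as the depthToWorld function in the shader above, computed per cell:

// x = (cell[0] - dim[0]/2) * (in[0] - in[1]) * in[2]   ~   (x - cx) * depth / fx
// y = (cell[1] - dim[1]/2) * (in[0] - in[1]) * in[2]   ~   (y - cy) * depth / fy
// z = in[0]

so in[1] acts as a depth offset and in[2] plays the role of an inverse focal length.)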

diablodale's icon

OpenFrameworks and Cinder are libraries of C++ code that make it easier for less experienced coders to have access to powerful solutions. The choice of Cinder versus OpenFrameworks is not easily answered, and I recommend you google for some suggestions/reviews. And... try both of them to see which you like better.

LSka's icon

Hi Marlus,
have you made any progress on your point cloud? How did you change the depth output to floats in jit.openni?

marlus's icon

Here's what I did:

  1. converted the openni output to float32

  2. mapped 0-10000 (openni) to 0-2047 (libfreenect)

  3. used libfreenect expressions

  4. little hack to push the Kinect's black frame far away: (in[0]+2046)%2047 (see the combined sketch after this list)
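Steps 2 and 4 fold into a single expression; an untested sketch (0.2047 being 2047/10000):

jit.expr @expr "(in[0]*0.2047 + 2046.) % 2047."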

I'm still trying to get a cool visualization with jit.gl.mesh and to create a better camera control so I can map the Kinect to the projector...

Here is the patch

(Max patch attached: copy it and select New From Clipboard in Max.)