
Open Kinect



November 11, 2010 | 11:50 pm

Yes, hopefully someone will port the driver to a jitter external!

I believe we won’t see anyone getting skeletons out of it for some time though =/


November 13, 2010 | 8:52 pm

The open driver has been released for Linux; hopefully someone will port it to OS X.
We're working here on retrieving Kinect data from a Linux computer and sending it to another machine with Max installed…


November 14, 2010 | 3:40 am

You can get the depth map into Jitter right now via Syphon (jit.gl.syphon), using the OpenFrameworks depth map app that uses libfreenect. Here it is working in QC:

http://mansteri.com/2010/11/kinect-of-syphon-kinect-in-quartz-composer/

The same could be done for Jitter. Just saying, no need to wait.


November 14, 2010 | 5:03 am

Just DLed the OpenKinect code base. Might be over my head… would love to make it a jitter object.

Vade: Openframeworks depthmap app? Link would be awesome, thx.



November 14, 2010 | 7:00 pm

thanks vade for the info, i'll give it a try!


November 16, 2010 | 7:30 pm

Alright, got a Kinect, and the OpenFrameworks code works a treat. It's pretty slick.

Going to see if I can hack together jit.kinect.grab today.

@vade: info on your Syphon from OpenFrameworks code? Were you going to release it?


November 16, 2010 | 9:30 pm

Here's the bare skeleton, *****doesn't work yet******, but it initializes the camera and free() works properly. Next I've got to figure out how to get the information into a matrix or two. Ideally I think it would be great to have the raw depth from outlet 1, RGB from outlet 2, and a built-in threshold (just like the openframeworks example for blob detection).

Not sure I'll have this done any time soon, but I'm making progress. I'd post this on github for people to help with, but I haven't had time to set it up. So for now feel free to repost in the forum, I guess?

Attachments:
  1. kinect.zip

November 16, 2010 | 10:16 pm

i hope you crack it!
did you see this link on that openframeworks topic? it's using the Kinect to make a 3d recreation of a live video and move through it in 3d. it's brilliant

http://idav.ucdavis.edu/~okreylos/ResDev/Kinect/index.html


November 16, 2010 | 10:25 pm

Okay, little help would be great.
Seems I’m getting the info I need loaded here:
memcpy(gl_depth_front, gl_depth_back, sizeof(gl_depth_back));

So what's the best way to read the info of type:
uint8_t gl_depth_front[640*480*4];
into out_bp?

I’m sure the example is somewhere in the Jitter N-dimensional Matrices examples, but I’m starting to go bleary eyed.

Dunno, could be I'm not understanding the OpenGL example in the OpenKinect example code, but that seems to make sense.


November 16, 2010 | 10:59 pm

Hi cap10subtext,

how about setting up github? I'd certainly like to co-develop this one (already trying to hack something up).
Too bad I have to work tomorrow :(

uint8_t is char in jitter land.
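For the matrix copy, something like this should do it (rough sketch, untested; names like out_matrix/out_minfo/out_bp are the usual jit matrix_calc boilerplate, adapt to whatever your external actually calls them):

uint8_t *src = gl_depth_front;   /* 640*480*4 bytes filled by the depth callback */
char *out_bp = NULL, *dst;
t_jit_matrix_info out_minfo;
long i;

jit_object_method(out_matrix, _jit_sym_getinfo, &out_minfo);
jit_object_method(out_matrix, _jit_sym_getdata, &out_bp);

if (out_bp && out_minfo.type == _jit_sym_char && out_minfo.planecount == 4) {
    for (i = 0; i < 480; i++) {
        dst = out_bp + i * out_minfo.dimstride[1];   /* start of row i in the Jitter matrix */
        memcpy(dst, src + i * 640 * 4, 640 * 4);     /* copy one full row of 4-plane char data */
    }
}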


November 17, 2010 | 2:21 am

I've got a github set up now but I'm having serious noob-related issues at the moment. :P I had everything set up, got my project directory ready, created .gitignore and .gitattributes to make sure it wouldn't muck up my Xcode files. I thought I was supposed to git commit to get things online, but I'm stuck in vim…

lemme guess, rtfm? :P This is making me feel stupid.


November 17, 2010 | 2:54 am

Bah! Github is making me really frustrated; I'm missing something really simple and I can't get it to upload the project. If someone takes pity on me and sends me a link to a cheat sheet for github, I'll put up the xcodeproj.

https://github.com/cap10subtext/jit.kinect


November 17, 2010 | 5:15 am

okay it’s up…


November 17, 2010 | 6:50 am

cap10subtext: openframeworks is up and working on the syphon google code svn, and has been for a while :) It just was not ready for the nice packaged Beta 1 release.

Nice work on the jitter object. Hope this makes progress :)


November 17, 2010 | 1:42 pm

Gee, I should have checked the forums before starting to roll my own…

Anyway, I’ve put my project up on Github also. (It’s probably going to be a good idea to get everyone working on the same project, though…)

https://github.com/jmpelletier/jit.freenect.grab

I’ve got a compiled external over there that loads properly, and -in theory- should output something. I say in theory because the Kinect is only going on sale on the 20th here.

I looked over cap10subtext's code and it looks like there's a lot of leftover code from the glView example that doesn't make sense (or do anything) in the context of a Jitter external.

Anyway, I’d appreciate if someone with actual hardware could test this out.

Jean-Marc


November 17, 2010 | 2:55 pm

Sense? Who needs to make sense? ;) Seriously though, I’m a noob at max development. I’d dumped a few things in there I was trying to make sense of so I know it’s crap.

I’ll test yours right now…


November 17, 2010 | 3:18 pm

struggling with libusb. Can you point me to the version you are using? I tried compiling and installing the one here:
http://www.libusb.org/ v. 1.0.8 but I get this error:

jit.freenect.grab, 262): Library not loaded: /usr/local/lib/libusb-1.0.0.dylib
Referenced from: /Applications/Max5/Cycling '74/jitter-externals/jit.freenect.grab.mxo/Contents/MacOS/jit.freenect.grab
Reason: Incompatible library version: jit.freenect.grab requires version 2.0.0 or later, but libusb-1.0.0.dylib provides version 1.0.0

Little help?


November 17, 2010 | 5:31 pm

Hi all,

@JMP – it totally makes sense for me if this object is part of cv.jit.

What do you guys think?

I’ll test the object with kinect late tonight(eu) when I finally get home.

@cap

I think you need the patched libusb. Funny thing is that you should already have one if you got the glview example working, you just have to find it :)

check out this openFrameworks thread:

http://www.openframeworks.cc/forum/viewtopic.php?f=14&t=4947&hilit=kinect&start=15

Sorry for shooting in the dark, hopefully we’ll get this one running in a day or two!

Best,
nesa


November 17, 2010 | 6:11 pm

Thanks nesa, just before I saw your post, I realized I was using the "outdated" ones and JMP linked to the new ones, so I updated according to Theo’s instructions.

JMP, I finally got your external loaded, but sending it the open message does nothing. It doesn't even turn on the little IR emitter. I may be doing it wrong, but my own external does open correctly: you can see the laser powering up, and it returns a serial number for the device.

I’m going to try fiddling a bit but I think you may be missing a call somewhere… I’ll report back in a bit.


November 17, 2010 | 7:09 pm

JMP: I've tried compiling your Xcode project but I'm not having any luck. I also tried the newest freenect source files and now my example isn't working anymore either, so I doubt it's something you've done. It's either something specific to my machine (botched libusb install?) or there's something going on in the new rev.

I’m not sure what it is, I might try my original code with your handling of Jitter and that might have more success for now.

Rats, I have a Max workshop today I was really hoping to present this at. Oh well.


November 18, 2010 | 2:29 am

Hi,
I was just looking at this video, and was blown away by the 3D precision of the kinect :

http://vimeo.com/16788233

wow, 1 centimeter depth precision. A time-of-flight-of-the-light technique! 1 centimeter is a measurement of light speed with a precision of 1/300 of a nanosecond, or 1/100 of a cycle of our fastest computers!

other interesting videos:

http://www.youtube.com/user/okreylos#p/u/1/N9dyEyub0CE

and nice thoughts about using multiple Kinects:
http://www.youtube.com/user/okreylos#p/u/2/ttMHme2EI9I

http://www.youtube.com/user/okreylos#p/u/0/YB90t5Bssf8


Attachments:
  1. vimeo-com-16788233.png

November 18, 2010 | 4:05 am

Hi,

so I was playing around with Jean-Marc's version (lost multiple hours compiling patched libusb and freenect properly).

In Jean-Marc's version there are callbacks which were never called, because we need to start the depth/rgb grabs and process events, I guess (that's what I got from the latest glview example).

After poking around with these, I could finally see only one RGB frame and a white depth matrix. Now it seems that something is wrong in the timestamp handling, but dunno – my head doesn't work any more.

I forked Jean Marc’s object, and you can find the latest hack here:

http://github.com/npnp/jit.freenect.grab

Here are my horrible notes on compiling libusb/freenect on macos:

-libusb:
git clone git://git.libusb.org/libusb.git
apply the freenect patch
./configure
libusb must be compiled as 32-bit, so use:
make CFLAGS='-arch i386'
make check
sudo make install

-freenect:
manually edit libfreenect with ccmake to
set the include dir to /usr/include/libusb
set the system root dir to /

this causes a 'missing sdk' error;
adjust the project settings in xcode:
change the sdk to macos 10.4/10.5/10.6,
build libfreenect as i386

hope this helps, can’t wait to continue working on it!

Thanks Jean Marc and Cap!


November 18, 2010 | 6:02 am

Okay I see how that works. Thanks nesa, that does indeed initialize the laser grid now. But I’ve traced the printf’s from camera.c and yikes! I’m going to have to take a closer look but like you said it looks like it’s dropping mad frames. So far no images on my end.

Don't know if you have this in there yet, but make sure you toss this into free and close to make sure it's not crashing on exit: libusb_release_interface(x->device, 0); (until they implement a proper freenect_close). I can't wrap my brain around github at the moment to throw it into your fork.

More tomorrow.


November 18, 2010 | 7:11 am

Hi, I fixed the libusb issues and managed to remove dependencies. I also added the camera release code.

I also updated the mxo, so people with an actual device, please give it a try!

(There are also instructions in the README on how to compile a 32-bit libusb and link statically to it, which is harder than it should be.)

Jean-Marc


November 18, 2010 | 11:28 am

Jean-Marc,

thanks for the cleanup and nice instructions.

Unfortunately, the object outputs nothing – see my previous post.

Cap, thanks – I didn't have the release_interface, but I see it now in Jean-Marc's code.


November 18, 2010 | 2:24 pm

JMP and nesa: here is some debug information from camera.c. These are the errors it throws after it opens and 5 bangs are given. Maybe this will put us on the trail. I will hunt through the code and see if I can figure out what else is throwing an error.

Device Number: 1
device index: 0
new device opened.
starting grabs
First xfer: -9

CTL CMD 0003 1267 = 12
CTL RES = 10
CTL CMD 0003 1268 = 12
CTL RES = 10
CTL CMD 0003 1269 = 12
CTL RES = 10
CTL CMD 0003 126a = 12
CTL RES = 10
CTL CMD 0003 126b = 12
CTL RES = 10
CTL CMD 0003 126e = 12
CTL RES = 10
CTL CMD 0003 126f = 12
CTL RES = 10
CTL CMD 0003 1270 = 12
CTL RES = 10
CTL CMD 0003 1271 = 12
CTL RES = 10
CTL CMD 0003 1272 = 12
CTL RES = 10
CTL CMD 0003 1273 = 12
CTL RES = 10
CTL CMD 0003 1274 = 12
CTL RES = 10
CTL CMD 0003 1275 = 12
CTL RES = 10
CTL CMD 0003 1276 = 12
CTL RES = 10
CTL CMD 0003 1277 = 12
CTL RES = 10
CTL CMD 0003 1278 = 12
CTL RES = 10
CTL CMD 0003 1279 = 12
CTL RES = 10
CTL CMD 0003 127a = 12
CTL RES = 10
CTL CMD 0003 127b = 12
CTL RES = 10
CTL CMD 0003 127c = 12
CTL RES = 10
CTL CMD 0003 127d = 12
[Stream 70] Invalid magic ffff
[Stream 70] Invalid magic ffff
[Stream 70] lost 251 packets
[Stream 70] lost too many packets, resyncing…
[Stream 70] Invalid magic eebd
[Stream 70] Invalid magic f75e
[Stream 70] lost 249 packets
[Stream 70] lost too many packets, resyncing…
[Stream 70] Invalid magic ffff
[Stream 70] Invalid magic ffff
[Stream 70] Expected 1748 data bytes, but got 1908. Dropping…
[Stream 70] Invalid magic 674c
[Stream 70] Invalid magic aa75
[Stream 70] Invalid magic 3ac7
[Stream 70] Invalid magic 73ae
[Stream 70] Invalid magic d8bb
[Stream 70] Invalid magic 9d93
[Stream 70] Invalid magic a1d4
[Stream 70] Invalid magic ea9d
[Stream 70] Invalid magic 8eb1
[Stream 70] Invalid magic 5ceb
[Stream 70] lost 244 packets
[Stream 70] lost too many packets, resyncing…
CTL RES = 10
CTL CMD 0003 127e = 12
CTL RES = 10
CTL CMD 0003 127f = 12
CTL RES = 10
CTL CMD 0003 1280 = 12
CTL RES = 10
[Stream 70] Invalid magic d8bb
CTL CMD 0003 1281 = 12
CTL RES = 10
CTL CMD 0003 1282 = 12
CTL RES = 10
CTL CMD 0003 1283 = 12
[Stream 70] Invalid magic c899
[Stream 70] Invalid magic ffff
[Stream 70] Expected 1748 data bytes, but got 1908. Dropping…
[Stream 70] Invalid magic 5d8b
[Stream 70] Invalid magic c5d8
[Stream 70] Invalid magic ea9d
[Stream 70] Invalid magic 4ea9
[Stream 70] Invalid magic 5ceb
[Stream 70] Invalid magic 75ee
[Stream 70] Invalid magic 2762
[Stream 70] Invalid magic b376
[Stream 70] Invalid magic ba97
[Stream 70] Invalid magic 5bac
[Stream 70] lost 244 packets
[Stream 70] lost too many packets, resyncing…
[Stream 70] Invalid magic 84d0
[Stream 70] Invalid magic 756e
[Stream 70] Invalid magic 674c
[Stream 70] Invalid magic b176
[Stream 70] Invalid magic 3ac7
[Stream 70] Invalid magic 4387
CTL RES = 10
CTL CMD 0003 1284 = 12
CTL RES = 10
[Stream 70] Invalid magic ffff
[Stream 70] Invalid magic ffff
[Stream 80] Invalid magic 0a08
[Stream 80] lost 255 packets
[Stream 80] lost too many packets, resyncing…
[Stream 70] Invalid magic 64ec
[Stream 70] lost 255 packets
[Stream 70] lost too many packets, resyncing…
[Stream 80] Invalid magic 4424
[Stream 80] lost 255 packets
[Stream 80] lost too many packets, resyncing…
[Stream 70] Invalid magic ffff
[Stream 80] Invalid magic 020d
[Stream 80] lost 255 packets
[Stream 80] lost too many packets, resyncing…
[Stream 70] Invalid magic dd5b
[Stream 70] lost 255 packets
[Stream 70] lost too many packets, resyncing…


November 18, 2010 | 4:07 pm

Update: nesa, are those callbacks doing what they should be in your setup? A simple trace indicates the functions aren't being called at all in my setup, therefore no timestamp or pixel data, therefore no love.

This works (JMP: looks like you are missing these in your init, which explains why the grid isn't powering up):
if (freenect_start_depth(device_data[i].device )!=0) {error("start_depth failed");}
if (freenect_start_rgb(device_data[i].device )!=0) {error("start_rgb failed");}
if (freenect_process_events(device_data[i].context)<0) {error("processevents failed");}

these don’t:
freenect_set_depth_callback(device_data[i].device, depth_callback);
freenect_set_rgb_callback(device_data[i].device, rgb_callback);
freenect_set_rgb_format(device_data[i].device, FREENECT_FORMAT_RGB);
(you'll notice I'm still using JMP's multi-device loops; I know you discontinued them in your fork, but I'm 100% certain that's not the issue here).

I'm sorry I'm not doing this right on Github! I haven't had time to get into the flow; hopefully I'll have time to master it over the weekend. That'll make things easier for everyone…


November 19, 2010 | 1:43 am

Hey! I’m a dev on the libfreenect project as well as a max/pd external developer (admittedly through flext usually, jitter is gonna be new for me). If there’s any support needs from the libfreenect side, lemme know and I’ll see what we can do. Definitely interested in getting jitter going myself. :)


November 19, 2010 | 6:00 am

qDot, nice to meet you! Welcome aboard.

I think the biggest consideration from the libfreenect side would be to make sure it continues to play nice with Max. For example, they removed code from freenect_close, so freeing the external causes the app to crash. So far the hack has been to keep in the camera release call from an earlier release.

Not sure what that means to you. My past experience says there are certain calls that should be avoided at all costs when it comes to Max, i.e. exit() etc… but I'm not sure how many things like that will be a consideration. I'm not much of an authority. Just a hack. :)


November 19, 2010 | 7:44 am

Thanks cap!

First off, has someone set up a main repo for jit.freenect.grab anywhere? I'm happy to work as maintainer on this if you'd like; we could possibly even make the repo part of the OpenKinect organization on github.

Knowing where that is would make it easier for me to update you on what’s been updated in the api when changes happen, or even make the patches myself if you’d like. I’ve got the Max SDK going on here (was working on my own jitter external last weekend, but have been kinda busy just working on libfreenect this week).

The api is solidifying somewhat quickly on the OS X/Linux side. We're hoping to have Windows under the same API as is on master right now, it's just taking us a bit to get things right. I don't think we should be calling anything too volatile in the API, but I would also expect it to change pretty quickly, so you might be best statically compiling it into your external for the time being if you want things to keep working, assuming that matches with whatever license you want to use on your external too.

Also: how is the external expecting to get images? We sort of assume a streaming architecture in the api, so it may be better to go with a start/stop thread model than a "bang for an image" one, though you could certainly do that via thread spawning too.


November 19, 2010 | 9:37 am

Hi qDot,

Right now there’s my repo at https://github.com/jmpelletier/jit.freenect.grab and nesa’s fork at http://github.com/npnp/jit.freenect.grab (as well as cap10subtext’s earlier https://github.com/cap10subtext/jit.kinect).

Kinects go on sale tomorrow here, so hopefully with an actual device on hand I should be able to get something working in the next 24 hours.

As far as streaming vs. asynchronous design, I used the latter — bang to get a frame — because it fits with existing designs for jit.qt.grab and jit.dx.grab. This is not set in stone, but I think it's better that way, because it allows users to simply replace the traditional grabbers in existing patches (among other things).

Right now, my biggest request as far as libfreenect is concerned is that it would be nice to be able to have a user data pointer in the callbacks. Right now, it looks like using globals is the only way to access anything other than the function arguments. (I should probably make a more official request.)
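(To be concrete, what I mean is that the callback has to reach for a global to find the object instance. A sketch, with made-up names, and the exact callback signature depending on the libfreenect revision:

static t_jit_freenect_grab *the_instance = NULL; /* hypothetical: set when the object opens */

static void depth_cb(freenect_device *dev, void *depth, uint32_t timestamp)
{
    t_jit_freenect_grab *x = the_instance; /* no user pointer in the callback, so use a global */
    if (x) {
        /* stash the incoming frame on the instance for the next bang */
    }
}

With a user data pointer the callback could find its owner directly instead.)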

Right now everything is statically linked, and unless there are licensing hurdles, it should stay that way. You should be able to just drop externals in the Cycling74 folder and expect them to just work.

Jean-Marc


November 19, 2010 | 3:33 pm

Hi all,

qDot – welcome, great that you’re on board:)

I’ve just posted the first version that actually outputs something.

I agree with Jean-Marc about asynchronous design. For me that fits more into the ways of Max.

In my hacky version I've created a separate thread that gets the stream continuously, while the bang will just output the latest frame (à la @unique 0). No optimizations whatsoever at this point.
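Roughly, the pattern is something like this (just a sketch with made-up names, not the actual code in the fork; the callback signature and depth format depend on the libfreenect revision):

#include <pthread.h>
#include <string.h>
#include <stdint.h>
#include "libfreenect.h"

static pthread_mutex_t frame_lock = PTHREAD_MUTEX_INITIALIZER;
static uint16_t latest_depth[640 * 480];   /* assuming the 11-bit depth unpacked to uint16_t */
static int has_new_frame = 0;
static volatile int grabbing = 1;

/* depth callback, runs on the grab thread via freenect_process_events() */
static void depth_cb(freenect_device *dev, void *depth, uint32_t timestamp)
{
    pthread_mutex_lock(&frame_lock);
    memcpy(latest_depth, depth, sizeof(latest_depth));
    has_new_frame = 1;
    pthread_mutex_unlock(&frame_lock);
}

/* grab thread: just keeps libfreenect's event loop spinning */
static void *grab_thread(void *arg)
{
    freenect_context *ctx = (freenect_context *)arg;
    while (grabbing)
        freenect_process_events(ctx);
    return NULL;
}

/* called on bang from the Max side: copy out whatever frame is newest */
static int copy_latest(uint16_t *dst)
{
    int fresh;
    pthread_mutex_lock(&frame_lock);
    memcpy(dst, latest_depth, sizeof(latest_depth));
    fresh = has_new_frame;
    has_new_frame = 0;   /* checking 'fresh' is what gives you @unique-style behaviour */
    pthread_mutex_unlock(&frame_lock);
    return fresh;
}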

good luck and have fun!


November 19, 2010 | 4:08 pm

Does anyone know if they have windows drivers for Kinect?


November 19, 2010 | 5:36 pm

nesa/jean-marc: Awesome, that was pretty much going to be my thought too on frame retrieval, just not as used to jitter as I am to the rest of max (most of my hardware externals stream because they’re outputting at > 100hz).

I'll see about user data in the callbacks. I believe someone has submitted a patch somewhere for that, I just need to find it. But yeah, if you file issues on the openkinect/libfreenect github site, that's probably the best way to keep us remembering.

Anthony: There’s nothing on the main repo right now that works under windows, but this HAS been working on windows, so it’s not a hopeless cause, just one that’s taking a bit of time. We’re working on solidifying win32 under the new api. That’s our main goal right now, actually, so we can have people developing on top of it on all three platforms while the probably-going-to-be-much-slower-dev-time-wise kernel driver development process begins.


November 19, 2010 | 5:47 pm

nesa: what’s the best way to contact you (if it’s okay with you)? I just have a simple question about your code and don’t want to spam this thread (i can post here if you prefer). I’m arlabrat on twitter or at gmail d0t com.


November 19, 2010 | 6:03 pm

qDot: thanks so much. If only we could get in on the ground floor this early on other projects, it would make things so much easier down the road.

Possible project for Cycling74: top 5 points to consider on how to make your SDK max/jitter friendly? :)
Dynamically linked bundles, relative path names in the C file, no volatile commands… Maybe this stuff is all evident to programmers but I’m constantly running into trouble with how many SDKs are just poorly compatible with Max:
ARToolKit: any spaces in the data file paths won't work
Intersense: requires a text file with ports installed at root (or assigned directory)
qDot: I tried compiling a ThinkGear external once upon a time, think I even posted it to the forum, but I seem to recall that needed a bundle installed to work too?
Anyways, just a side note. Back to business…


November 20, 2010 | 4:22 am

Hi,

I just wanted to add my thanks for the work and sharing going on here. I am afraid I don't have anything to add to the jitter object discussion, but I have followed Vade's advice above and have been using openFrameworks and Syphon to get the kinect's depth map into jitter. This is really only connecting the dots between other people's work, but I thought my notes to myself might be helpful to others, so here they are… http://palace-of-memory.net/kinect-openframeworks-syphon-maxmspjitter/

I have also included the final application, which will open a window called "kinect syphon server" and should display the kinect's depth image in real time. It sends out a Syphon server stream called "Kinect Depth Image" which you can grab in max/msp/jitter using the jitter syphon implementation.


November 20, 2010 | 4:24 am

ok, so the app is too large so I will link it from the blog post above…

Attachments:
  1. kinectSyphon.jpg

November 20, 2010 | 12:53 pm

hi miscellanea,
i developed my own ofx->syphon->jitter following the same procedure as you and it does exactly the same. The image is good and i receive the picture at 30fps.
google "libusb-osx-kinect.diff" and use git to download the latest libusb; it solves the problem of the glitchy picture. You can follow the advice in the "readme" of the jit.freenect.grab sources from jean-marc pelletier.

But i have a bad issue: everything works well, ok, but i can see that the "kinectSyphon" process uses an average of 115% cpu (my cpu is an i5 2.4ghz macbook pro). My compiled version does the same.

I tried the GLView example done by theo, the first hacked kinect os x use of libfreenect, and the cpu runs at 7%…
the ofxKinect sources do the same too: 140% cpu.


November 21, 2010 | 1:53 pm

Hi all,

I’ve folded in some of the changes made by nesa and the latest update on my Github repo now works.

http://www.youtube.com/watch?v=WIJA46ocia0

It's still very alpha. I still have to implement "unique" mode, multiple camera support, and proper opening/closing, and I can't seem to release the camera properly, but the video streams work as they should.

Phew!



LLT
November 21, 2010 | 2:09 pm

Bravo JM,
I can't wait to test it….

http://www.youtube.com/watch?v=tAGnSrdOfyA


November 22, 2010 | 12:14 am

Congrats, Jean-Marc! It’s running in the background right now and it’s amazing!

I bow to the master! :)


November 22, 2010 | 9:59 pm

I've already changed the libusb on my computer so I can't verify conveniently at the moment: does the version currently on Github link to libusb dynamically? Does it work to just drop this into max-externals on another machine? Thanks…


November 23, 2010 | 1:16 am

You shouldn't need libusb. My previous static version was causing some problems, so right now I'm just including the libusb sources in my project.


November 23, 2010 | 6:42 pm

Kinect support for Cinder :

http://vimeo.com/17069720
wow, the depth resolution in this video seems far better than 1 centimeter !!!

Amazing kinect video art :

http://vimeo.com/17075378
http://vimeo.com/17107669


November 24, 2010 | 2:40 am

Hi Alexandre,

At close range (about 1 meter) the depth resolution is indeed very high. To test things out I made a short video. I'm just moving my head back and forth slightly to make it look like I'm moving in and out of a "light". You can make out my facial features quite well.

http://www.youtube.com/watch?v=wS8wyIYn77w


November 24, 2010 | 8:29 am

Bouu.. you’re scaring me!

It's hard to imagine how they get such depth resolution from measuring light time of flight… plus the resolution basically shouldn't change with distance. But maybe they apply some kind of averaging like this: http://www.youtube.com/watch?v=Z1yYu5dEFfI Could this also mean that less FPS = more depth precision possible, while more FPS = less depth precision??


November 24, 2010 | 1:34 pm

Looking forward to playing with this! Just got a simple kinect setup working, so getting it into jitter is clearly the next step!

BTW, alexandre, the kinect isn’t time of flight, they use structured light, and project IR dot patterns which they then decode


November 24, 2010 | 3:28 pm

You are right!
It was Wired.com saying stupid things about the kinect without knowing what they were talking about: http://webcache.googleusercontent.com/search?q=cache:7_wVm6TRufoJ:www.wired.com/gadgetlab/2010/11/tonights-release-xbox-kinect-how-does-it-work/+time+of+flight+kinect&cd=3&hl=fr&ct=clnk&gl=fr


November 25, 2010 | 12:09 pm

Hi JMP,

Thanks for your amazing work.
and everyone in here sharing this cool world.

i tried to use jit.freenect.grab but
i got a message below…

jit.freenect.grab: unable to load object bundle executable
2010-11-25 20:55:19.323 MaxMSP[695:20b] Error loading /Users/fuyamayousuke0/Desktop/jit.freenect.grab.mxo/Contents/MacOS/jit.freenect.grab: dlopen(/Users/fuyamayousuke0/Desktop/jit.freenect.grab.mxo/Contents/MacOS/jit.freenect.grab, 262): no suitable image found. Did find:
/Users/fuyamayousuke0/Desktop/jit.freenect.grab.mxo/Contents/MacOS/jit.freenect.grab: unknown required load command 0x80000022

sorry, i have no idea…
would you give me some help???

Thanks


November 25, 2010 | 1:20 pm

i didn’t think it would be so quick to have a jitter object for the kinect. too bad i don’t have the skill to be part of the dev process. well done !
now i have to buy a kinect…


November 25, 2010 | 3:27 pm

Big up for the developers of the object!

I tried the jit.freenect.grab object, but from the first outlet i get only a totally white image.
The second outlet works and puts out the normal live camera image.

I tried the App http://miscellanea.com/downloads/kinectSyphonApp.zip and this one worked fine.

Any suggestions?


November 25, 2010 | 4:37 pm

try sending a message "mode 1" or "mode 2" to jit.freenect.grab to change outlet 1's output mode…
BTW, it's true that a help file would be useful; is there a jit.freenect.grab.maxhelp somewhere around?

anyway.. big, big, big thanks to you guys for your work on this external!
it works fine here, and I really enjoy this microsoft toy :-)


November 25, 2010 | 4:49 pm

Hey, Mathieu! MMF for Kinect? ;-)


November 25, 2010 | 5:13 pm

"BTW, it’s true that a help file would be useful ; is there a jit.freenect.grab.maxhelp somewhere around ?"

If Jean-Marc isn’t already all over this, I can have one up in a jiff…


November 25, 2010 | 5:37 pm

Here’s a rough draft… Borrowed Jean-Marc’s cv.jit template.


November 26, 2010 | 11:15 am

thanks for the help file.
there's a small error: mode 0 (default) does not disable depth output; it outputs the raw 11-bit depth values.
(outputs a float32 matrix, values are between 0 and 2048)

(connect a jit.cellblock to see the matrix values..)

M


November 26, 2010 | 1:01 pm

Thanks Mathieu. With the different mode options selected it works.

The only thing is that the output randomly stops after a couple of minutes of working. Sometimes only the output from the first outlet, or only the output from the second one, stops updating the image.

Probably because it’s in alpha state?


November 26, 2010 | 1:15 pm

Sorry about not documenting the "mode" attribute better.

It’s definitely not "production ready" yet, but it’s almost there.

I’m not sure why the output stops randomly. It might be a problem with libfreenect because I don’t think there’s really anything in my external that might be causing these sorts of problems.

Yousuke: what version of OSX are you using? The external is still in development so the version that’s up is a debug build and I haven’t made any effort to make it compatible with anything other than 10.6.

Thanks for the help file! I made a few edits and pasted it below.

Jean-Marc

– Pasted Max Patch, click to expand. –

November 26, 2010 | 3:33 pm

Okay, forget the last help file, I made some more changes.

You can now choose to output the depth matrix as long, float32 or float64. The original data is 11-bit, so there's not much point in outputting char. You can easily do the conversion in Jitter anyway.

There was also a "unique" attribute that wasn’t in the help file. It works like for jit.qt.grab.

The Kinect needs to be still to calibrate its laser projection. If you move it or nudge it you will experience blackouts. That’s normal.

The update is up on Github, but it's still "alpha", so play at your own risk.

Jean-Marc

– Pasted Max Patch, click to expand. –

November 26, 2010 | 4:00 pm

> thanks for the help file.
> there's a small error: mode 0 (default) does not disable depth output; it outputs the raw 11-bit depth values.
> (outputs a float32 matrix, values are between 0 and 2048)
>
> (connect a jit.cellblock to see the matrix values..)
>
> M

Whoops… Should have known better. I didn’t even check that. Sorry for the mistake.

Jean-Marc & aartcore: I posted this issue on Github and promised more debug info (but haven't worked with it for any length of time since). I'm on 10.6.4 as well. It might be an issue with the freenect lib, but I haven't yet encountered this in (for example) openFrameworks.


November 27, 2010 | 3:49 pm

I just posted a release candidate on Github. Thanks to nesa, you can now bob the Kinect’s head and get accelerometer readings. I also verified that it works with two Kinects at the same time. There’s also a much-improved help file in the download.

Jean-Marc


November 27, 2010 | 7:05 pm

I’ve got the same problem as YouSuke. I’m using OS X 10.4.11. Has anyone tried it on 10.4 or 10.5?

Here’s the error I’m getting:

jit.freenect.grab: unable to load object bundle executable
2010-11-27 13:51:54.584 MaxMSP[4035] CFLog (21): dyld returns 2 when trying to load /Users/mattgilbert/Projects/kinect-dance/max/jit.freenect.grab.mxo/Contents/MacOS/jit.freenect.grab


November 27, 2010 | 8:36 pm

Could someone compile Jean-Marc’s external for Windows and post it?


November 28, 2010 | 12:32 am

Sorry, OS 10.5 and higher, Intel only for now.


November 29, 2010 | 3:32 am

>JMP

Hi,
I tried new release and it works very well!!
thanks so much.

Yousuke


November 30, 2010 | 12:35 pm

Hi,

I haven't had time to work on the external (day job), but I did hack up a quick patch that maps the output of jit.freenect.grab to OpenGL geometry. I might actually make this another mode in the external, which would get rid of the artifacts.

http://www.youtube.com/watch?v=wvJKaViF7p0


November 30, 2010 | 7:13 pm

I have the new release working well. Thanks so much for all of your work!
Looking forward to getting the data into a sound or graphic patch.


December 2, 2010 | 8:44 pm

In a previous release of jit.freenect.grab on github there was a build folder with the mxo in it, but in this newer one it's disappeared.. can anyone give me a hint as to how to build the mxo of the new version?

edit: oops.. sorry.. I just had to build the xcode project and the folder showed up… sorry, I'm terrible at xcode at the moment


December 2, 2010 | 10:54 pm

>JMP

Hi,

I downloaded your latest release; the camera is working, but most of the message objects seem not to work, and there were "doesn't understand" errors in the Max window. Do you know what might be happening? I use Max 5.1.5. Many thanks!

yan


December 3, 2010 | 4:27 am

works great! can’t thank you enough!!! http://www.youtube.com/watch?v=phGSc2KUcfw


December 3, 2010 | 1:09 pm

>jean marc pelletier

hi,

thank you so much for your work, but I have a question: why don't we have the color that changes according to depth, like in all the other driver demos? I want to use it to create several layers with color filters.

Marc lautier ( journee d’informatique musicale 2009 grenoble)


December 3, 2010 | 2:37 pm

lautier987, you can just remap the grey values to hsl.


December 3, 2010 | 2:38 pm

Yan: I’m not sure why the messages aren’t working. I got another message about this via Twitter but here everything works fine, and as far as I can tell other people are using the object without problem too. I’ll try to look into it, but it’s hard when I can’t reproduce the problem.

Marc: Bonjour! The colour is just for visualization. I didn’t include it because it’s the kind of thing Jitterists might want to make themselves. Here’s an example of how you can do it (it’s not the same mapping as in the demos). You just need to make sure you’re using "mode 3" (distance).

– Pasted Max Patch, click to expand. –

December 3, 2010 | 2:39 pm

thanks so much for your work Jean-Marc Pelletier !
the external rc1 works like charm :-)


December 3, 2010 | 2:48 pm

Also:

The official page for jit.freenect.grab is live at http://jmpelletier.com/freenect/

If you have something interesting to show, let me know and I’ll add it to the gallery.

Jean-Marc


December 6, 2010 | 11:38 am

nice work! Jean-Marc, how did you create this one:


December 7, 2010 | 2:38 am

dirkdebruin: Used jit.gencoords + jit.freenect.grab depth map to make a geometry matrix that I fed straight into jit.gl.render.

When I get the time, I want to make another external that converts the depth and rgb data to more proper OpenGL geometry.

Jean-Marc


December 7, 2010 | 11:17 am

Thanks a lot Freenect team!
grab object works just fine.
Joy.


December 7, 2010 | 12:27 pm

I used the nurbs to place video in openGL. when i get time to finish the patch i will share it. i just used jit.gl.nurbs-video-deform.maxpat from the examples as a basis


Attachments:
  1. kinect.png

December 7, 2010 | 3:18 pm

hi

sorry for the stupid question, but it doesn’t hurt to ask, no?

__when you say kinect, is it only the camera/accessory (which costs some 140 euros over here, in France) or do you need the Xbox as well?
__how do you connect it to max/jitter (running on a mac) – bluetooth? wifi??

__if I understand well, it goes way beyond the possibilities of a web-cam, doesn't it?

many thanks for some basic answers!!

best

kasper


December 7, 2010 | 3:26 pm

Hey I've got one of those. ;)

I used the XY of a plane and the Z as the Grab depth info, fed that into mesh.

Not elegant but some fun tinkering… and sadly this was the last time I touched the Kinect. Stupid work getting in the way…


Attachments:
  1. kinectgrab.jpg

December 7, 2010 | 3:34 pm

__when you say kinect, is it only the camera/accessory (which costs some 140 euros over here, in France) or do you need the Xbox as well?

No, you don't need the Xbox.

__how do you connect it to max/jitter (running on a mac) – bluetooth? wifi??

USB and the jit.freenect.grab object. It also requires external power.

__if I understand well, it goes way beyond the possibilities of a web-cam, doesn't it?

Only in that it gives you depth information (which has previously been very difficult to get), so it's easier than ever to extract, for example, presence (with a much easier way to do background subtraction).
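In code terms, the "background subtraction" part pretty much reduces to a depth threshold. A sketch (assuming mode 3 distances in cm in a plain float buffer, not any particular external's internals):

#include <stddef.h>
#include <stdint.h>

/* Keep only pixels closer than cutoff_cm; a value of 0 usually means "no reading". */
static void presence_mask(const float *depth_cm, uint8_t *mask, size_t n, float cutoff_cm)
{
    for (size_t i = 0; i < n; i++)
        mask[i] = (depth_cm[i] > 0.0f && depth_cm[i] < cutoff_cm) ? 255 : 0;
}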


December 7, 2010 | 3:39 pm

oh, thanks

so the kinect + usb cable and of course the new object, and I am set (+ the jitter patch etc etc etc of course)

I think I will get one!!!

best

kasper


December 7, 2010 | 6:11 pm

Hello Jean Marc

Any plan for a Windows version? Sooner or later?

thanks

xavier


December 7, 2010 | 7:08 pm

First, a big thank you to all that have contributed to jit.freenect external. I’ve been having a bit too much fun with it lately! And of course yet another controller where I don’t own the console it was meant for :)

As an exercise, both to help with my jit.gl.* chops (more than my usual tinkering with videoplanes) and to play more with the data coming from the kinect, I wanted to generate a 3d point cloud based on the data. Seems pretty straightforward to do given that all the coordinate data is available. Where I'm stuck is what object(s) to use to generate the points. Any pointers?

Thanks again for the great work!

David


December 7, 2010 | 8:23 pm

Try jit.gl.mesh with @draw_mode set to points


December 7, 2010 | 8:50 pm

WHOA! Oh man, I just tried the newest external and scared myself. If you don't initialize the object with a tilt value it resets to 0, and the earlier builds never turned on the red LED before. I thought my Kinect had been hijacked by Skynet! LOL, no more coffee for me…


December 8, 2010 | 2:04 am

cap10subtext: as I wrote on Twitter a while ago, I always thought robots with red-glowing eyes à la Terminator were a meaningless fantasy, but here I am with two red-glowing Kinect eyes staring at me…

pixelux: I would like a Windows version too, but it looks like libfreenect doesn’t work on Windows yet.

Kasper: you don’t even need a USB cable, it comes with the device. 140 euros? Ouch. Here, it’s 12,000 yen, about 110 euros. Very, very cheap for what it is. I used to work with a Point Grey Bumblebee (http://www.ptgrey.com/products/bumblebee2/) which costs about $2000 and didn’t give you as good a depth map as the Kinect. It works in sunlight, though, unlike the Kinect.



dtr
December 9, 2010 | 10:18 am

Hi,

The current external's working here too, but the output freezes every couple of minutes. I have to send the close and start messages for it to restart. I just started messing around last night. I'll give it another go tonight to see if this persists. I'm on OS X 10.5.8 and Max 5.1.5.

grtz dtr



dtr
December 9, 2010 | 8:42 pm

Problem persists. Every x minutes the output will stall. I had one instance where the 2D image stopped outputting while the depth field kept going. Any ideas?




December 14, 2010 | 2:57 am

hey dtr, I had trouble with that too until I used the grey USB extension that came with the kinect; the problem for me seemed to be related to a loose connection.

Jeremy

http://www.jeremybailey.net


December 17, 2010 | 5:53 pm

uuhgg this is awesome, I have to go to work now but I can't wait to come home and play with this

sooo awesome


Attachments:
  1. Untitled12.jpg

December 18, 2010 | 7:09 pm

First of all many thanks to the makers of the object!

Second, would any of you Jitter wizards be willing to post some examples of how you are manipulating the depth data like in these crazy videos turning people’s faces into topographic maps?

Thanks,

Dave


December 18, 2010 | 8:49 pm

      


Attachments:
  1. kinectsex.jpg

December 18, 2010 | 9:47 pm

Erm, yeah… I’m thinking more like a patch, but thanks Roman!


December 18, 2010 | 10:06 pm

I cobbled something together using a screengrab from one of jean-marc's demos, just to format the data correctly, tried the video distortion example, and made a colorizing example with jit.charmap… I want to clean it up and post it for everyone, but I don't have my own kinect. I'll try to clean it up but I might break a couple of things… maybe tomorrow, but someone will probably beat me to it


December 19, 2010 | 7:45 pm

Hi,

First of all, thanks to all the people who made this possible so quickly.
I've bought a kinect camera and I'm wondering how to grab the filtered IR image. I had a look at the CocoaKinect application ( http://fernlightning.com/doku.php?id=randd:kinect ) and it allows you to check "IR video".
By physically masking the Kinect IR beamer, I can only see IR illuminators and reflectors. How can I achieve this with the awesome jit.freenect.grab?

Thanks in advance.

Benoit SIMON.
La Gaité Lyrique.


December 19, 2010 | 8:50 pm

@ stringtapper

Here's the best I can come up with at the moment. It's basically a compilation of everyone's examples already posted here. If anyone wants me to remove them, let me know.. I just wanted to post an aid for people who got stuck where I did. You can grab the patch from my website: http://blairneal.com/blog/jit-freenect-examples/

Or just do the old copy and paste, BUT PLEASE RELOAD WHEN COPY PASTING… there are a lot of LOADBANGS that need to go through in order for stuff to work

I was also borrowing a friend's kinect and was flying blind when i cleaned up the patch, so i might have broken it a lot. let me know if i did

– Pasted Max Patch, click to expand. –


dtr
December 19, 2010 | 10:16 pm

@jeremybailey: The grey cable didn’t fix it for me. Would have been surprising too. It is intended for extending the Xbox’s WiFi adapter USB cable.

I tried plugging the Kinect in a powered USB hub as that fixes power issues with some of my MIDI gear but it didn’t help here.

Jean-Marc told me there have been more reports of the output freezing. He’ll try to squash the bug.


December 19, 2010 | 10:46 pm

@laserpilot

Thanks for that, its’a lot to chew on for now.


December 22, 2010 | 6:58 am

the render on the left is using NURBS, the one on the right is using jit.gl.mesh, trying to create a point-cloud data visualizer. it was so difficult getting this far!

I'm sure I can get the point cloud results looking better with some tweaking


Attachments:
  1. sterlingcrispinkinect.jpg

December 26, 2010 | 11:43 pm

my last update on this thread, I'll keep to myself after this, I'm just really excited about this


Attachments:
  1. mekinect2.gif

January 6, 2011 | 7:53 pm

Now that we can easily get a contour from a gesture, I wonder if anyone could point me in a direction for making Utterback's Text Rain effect with Jitter, or Jitter/Processing?
In other words, how to make objects fall and collide with the captured contour?

http://camilleutterback.com/projects/text-rain/

Thanks.


January 9, 2011 | 6:12 pm

Hey any plans for NITE support now that there are OS X binaries? :)


January 10, 2011 | 2:31 pm

Hi everyone.

I’ve been insanely (almost literally) busy these past few days/weeks so I haven’t had much time to do anything Kinect-related, but I made an effort to get a new update out that hopefully solves some of the stalling problems people have been having.

http://jmpelletier.com/freenect/

As far as Windows and NITE are concerned, I’d really like to do it soon, but it’s going to be awfully hard to find the time.

Cheers,

Jean-Marc


January 10, 2011 | 4:58 pm

Hi Jean-marc,
I’m reposting my comment because I had no answers. Here it is:
"I’ve bought a kinect camera and I’m wondering how to grab the filtered IR image. I had a look to the CocoaKinect application ( http://fernlightning.com/doku.php?id=randd:kinect ) and it allows you to check "IR video".
In masking physically the Kinect IR beamer, I can only see IR illuminators and reflectors. How can I achieve this with the awesome jit.freekinect.grab?"

Thanks.

Benoit SIMON.


January 11, 2011 | 2:52 am

Jean-Marc,
Thank you for this, it is great!

jeremy,

I just got OpenNI-NITE-OSCeleton connected to Max via OSC. I am planning on writing a tutorial up for it tomorrow (if we have a snow day).. will post it here if there is interest…


January 11, 2011 | 11:47 am

@tohm

yes! there’s interest!
praying for snow… ;-)


January 11, 2011 | 11:56 am

I second that (hello Joe)

Luke



dtr
January 11, 2011 | 5:55 pm

many tanx jean-marc!

running a lot stabler (and faster) now. i still get occasional hangs, but much less frequently than before.

question: can anyone point me to the necessary math for correcting the 3d distortion of the cam? what i mean is to get a rectangular room to look rectangular when rendered, unlike the skewed model you get from the uncorrected depth map. i worked out a very crude and basic formula for straightening things out a bit, but i bet someone's gonna know the correct calculations…


January 12, 2011 | 11:47 pm

Here is the run through on setting up OpenNI with Max via OSC…

Will start a new topic on it…

http://tohmjudson.com/?p=30


January 13, 2011 | 3:50 am

Hi Jean-Marc,

congrats on your jit.freenect.grab – it's a great tool. I downloaded your latest version, already did some tests, and it works really well with one Kinect sensor. Now I would like to use jit.freenect.grab with 2 Kinect sensors. It detects the two devices fine with the 'getndevices' message, but it's not clear to me what you mean by 'give an index to the open message'. I tried with 0, 1, 2…. but I only receive the message "Cannot open Kinect device 2, …….".
Is there a way to fix this problem? Did I get anything wrong with the 'index'?

Thanks Anne


January 13, 2011 | 2:35 pm

@tohm – this looks brilliant. Hopefully just what I’m looking for.

Unfortunately I won’t get time to try it until the weekend – bloody day job!

@scatalogic – let’s compare notes. Race you.. ;-)



dtr
January 13, 2011 | 2:38 pm

@tohm: great! will give it a try over the weekend

i'm very excited about all these developments. had my first test run with my project partner, a dancer, yesterday. although my 3d motion tracking is still very crude, we already had very expressive results hooking up the signals to a granular synthesizer patch.

the depth map makes optical tracking so much easier than with regular cameras. just clip off the portions of the 3d space you don't need and you're good to go… no endless fiddling with backgrounds, lighting, keying, infrared, etc etc


January 13, 2011 | 7:11 pm

oops – didn’t notice there was a new thread – moved message to:

http://cycling74.com/forums/topic.php?id=30593&replies=3#post-151197


January 15, 2011 | 7:45 am

This is just too much fun – get out yer 3D glasses!


Attachments:
  1. grab008.jpg


dtr
January 15, 2011 | 11:05 am

aargh… OpenNI requires OS X 10.6, still stuck on 10.5 here…


January 15, 2011 | 7:28 pm

Any news on the Windows 7 front? All the stuff so far only works on Mac :(



dtr
January 17, 2011 | 11:39 am

no tips for where to look?

> question: can anyone point me to the necessary math for correcting the 3d distortion of the cam? what i mean is to get a rectangular room to look rectangular when rendered, unlike the skewed model you get from the uncorrected depth map. i worked out some very crude and basic formula straightening things out a bit but i bet someone’s gonna know the correct calculations…




dtr
January 17, 2011 | 2:55 pm

hey, tanx! i had browsed that site already though. what i understand is that it describes how to precisely overlay the RGB image with the depth map. what i'm trying to do is to render the depth map pixels to 3D vertices and compensate for the lens's point perspective.

the given focal length and lens distortion values should be useful for filling in the formula though!

Lens distortion (reprojection error)
IR 0.34
RGB 0.53

Focal length (pixels) and field of view (degrees)
IR 580 57.8
RGB 525 62.7

(measurements from http://www.ros.org/wiki/kinect_calibration/technical)
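for reference, my reading of the openkinect wiki page boils down to roughly this (a sketch; the minDistance/scaleFactor constants are the hand-tuned values quoted there, so treat them as approximate):

#include <math.h>

/* raw 11-bit disparity at pixel (i, j) of the 640x480 depth map -> metric 3D point */
static void depth_to_world(int i, int j, int raw_disparity,
                           double *x, double *y, double *z)
{
    const double min_distance = -10.0;
    const double scale_factor = 0.0021;

    *z = 0.1236 * tan(raw_disparity / 2842.5 + 1.1863);   /* depth in meters */
    *x = (i - 640 / 2.0) * (*z + min_distance) * scale_factor;
    *y = (j - 480 / 2.0) * (*z + min_distance) * scale_factor;
}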


January 17, 2011 | 2:58 pm

Ah sorry, I thought it did both. To be honest, every time I read that page I go a bit dizzy!

I want to overlay the RGB image with the depth map.

Just got to wait for my head to stop spinning!


January 19, 2011 | 4:08 pm

Hi all,

RC3 is out, now with IR camera output!

Unfortunately, that's all I managed to find the time to work on. Sadly, no Windows, and no NITE skeleton tracking.

I’m guessing the Windows port shouldn’t be _too_ hard, if anyone wants to give it a shot, everything’s on Github.

http://jmpelletier.com/freenect/

Jean-Marc


January 20, 2011 | 11:53 am

Many Thanks Jean-Marc for this new version :-)

I have one small problem / question:
I can't get anything usable from the depth output in mode 3 (distance in cm).
values seem to be clipped between 29.97 and -3.38; it doesn't look like centimeters…

what would be the right formula to translate output mode 1 to distance in centimeters?

Mathieu


January 25, 2011 | 5:47 am

Dear all, this topic is very interesting, but for the future of Jitter and for personal culture I suggest reading The Geometry of Multiple Images by Olivier Faugeras and Quang-Tuan Luong. I think there are a lot of ideas and principles described inside, some of which could enlarge the future of Jitter and Jamoma. For curious people, a link:

http://mitpress.mit.edu/catalog/item/default.asp?sid=05880509-232D-489C-869B-91D6543A5B96&ttype=2&tid=4147

Maybe Cycling should organize a research team along these lines?


January 27, 2011 | 10:46 pm

@PseudoStereo
My 3d glasses are red=right / green = left eye. But I have to reverse them to get the 3d effect?
Are your glasses different?


January 28, 2011 | 12:06 am

@Jean-Marc: thank you so much!!!!!


January 29, 2011 | 4:46 pm

@Macciza

The convention in the world of anaglyph 3D is (almost) always Red=Left, Cyan/Blue/Green=Right. It’s arbitrary, but that’s the tradition.



dtr
January 31, 2011 | 5:08 pm

i'm still trying to get that perspective-corrected 3d model out of the depth map. the folks at the openframeworks forum pointed me to this info: http://openkinect.org/wiki/Imaging_Information

i implemented it in jitter but i'm still not getting correct results, see screenshot. the camera is pointing in the y (blue) axis direction.

below is my patch. is anyone getting better results with other methods?

i have a feeling that the 'rawDisparity' value mentioned on the wiki page is not the same as what the 'raw' mode of the external outputs. jean-marc, can you confirm or contradict this?

– Pasted Max Patch, click to expand. –


Attachments:
  1. kinectestrender.jpg


dtr
January 31, 2011 | 6:15 pm

ok nevermind, i got it! tweaking the minDistance some more put things in the right perspective (pun intended).

i’ll post a corrected patch asap.



dtr
January 31, 2011 | 6:27 pm

this seems to be working pretty well for me:

– Pasted Max Patch, click to expand. –


Attachments:
  1. kinectestrender2.jpg

February 1, 2011 | 8:11 am

@dtr – Fantastic!

Here's a version with the rgb texture mapped onto the mesh:

– Pasted Max Patch, click to expand. –


Attachments:
  1. grab001.jpg


dtr
February 1, 2011 | 4:15 pm

great, tanx! i wanna roll the perspective correction into a shader so it can be slabbed on the GPU. should make it much faster. that's my first shader scripting project so bear with me…

by the way, i read on the openNI forum that the official drivers include a function for precisely aligning the rgb and depth maps. perhaps it's even done in firmware.


February 1, 2011 | 4:21 pm

@dtr
wow, this un-distortion function was exactly what i was searching for!

just one question: why do you convert the 11-bit raw data to cm "by hand", when mode 3 of the jit.freenect.grab object provides distance values already?

those jit.expr objects are really slow, so saving one would help gain speed.

edit: i tried without the first jit.expr and with mode 3, and got a different image — what's the difference between these??

thanks



dtr
February 1, 2011 | 4:51 pm

@vogt: i guess it has to do with the calculations in the external being different from the ones on the wiki page. i messed around till i found mode 0 to be working with some tweaking of the minDistance parameter. jean-marc would know precisely what's going on.

btw, last time i checked the values spit out by mode 3, they didn't look anything like plausible cm ranges. this might have been with an older external version though.



dtr
February 1, 2011 | 5:00 pm

btw2, don’t know your project but i don’t need the full resolution in mine. for now i downsample to 64×48 for a BIG fps increase.


February 2, 2011 | 11:30 am

@dtr
thanks for the explanation!

i have release candidate 3 of the external – and mode 3 spits out usable values – as far as i can tell they are in decimeters, so you have to multiply them by 10; but they differ from the values that are spit out by the calculations you used.

some quick tests with a bottle and a ruler:
the minimum distance i get (and that is correctly reported by your calculation) is around 0.67 meters, while the same distance is reported as "6.4" in mode 3.
a distance of 2.5 in your calc equals about "24.5" in mode 3 – i have to check with greater distances, but all in all it seems mode 3 is about 3-5 cm off

@jean-marc pelletier
would you consider integrating the calculations dtr used into the external, as they seem to be more accurate? this could provide us with a good speed boost! pretty please! :)

@dtr / downsampling:
well actually i don’t need the 3d representation (pointcloud / mesh); i am tracking people in a room, and need to get their positions by filming them from the side. without the opengl parts i get 15fps in 640×480 – so far that’s ok. this will become heavy for the cpu though when i add the audio part of my project.

yeah, downsampling is an option for me, but i have to fiddle with the resolution to find a good compromise between precision and speed.

thanks agan for posting this patch!!!



dtr
February 2, 2011 | 1:27 pm

i'm kind of in the same boat: motion tracking a dancer in an audiovisual performance/installation. for now low rez, high fps does it, but there's a large sacrifice in spatial accuracy. eventually i'll need full rez for really fine control, so i'm now trying to work out a shader to do it instead of the jit.expr's. i'll also look at doing (some) motion analysis in shaders, coz even at 64×48 iterating through the 3d matrices seriously hurts the fps.

i'm also wondering to what extent cv.jit objects could be incorporated. they're built for analyzing 2d matrices but perhaps there are tricks to apply them in a 3d context. i'm not very experienced with cv.jit yet though.


February 2, 2011 | 2:02 pm

@dtr
ah, interesting!!

did you consider using skeleton tracking?

there’s no direct interface for max yet, but it’s possible – i tried it already!
check out this page:
http://mansteri.com/2011/01/osceleton-with-quartz-composer/
the guy receives the tracking values via osc in max, so you could dump the quartz composer part, and work directly with the values sent by OSCELETON.

btw. after having problems with those instructions, i found other, more precise instructions to set up OpenNI, NITE and all that stuff:

http://tohmjudson.com/?p=30

all the unix build stuff is a bit time consuming, but it works, and i’m a total rookie in this area!

does this help you?

about cv.jit: well you could analyse the depth tracking image with it!
at least that’s what i will do. my plan is to isolate the moving parts in the image, and map the corresponding area of the depth image to it, in order to then track the center of the moving part. still have to figure out how to do this the best way…



dtr
February 2, 2011 | 3:07 pm

yes i definitely want to check out skeleton tracking. didn’t get to it yet as i still need to upgrade to os x 10.6 for it to run…

about cv.jit, i wonder if it’s gonna work the way you expect it to but i’m curious for your results!



dtr
February 4, 2011 | 4:18 pm

i worked out a working shader. patch below and shader attached.

i didn’t get the speed improvement i was hoping for though, only a couple of fps more. looks like reading and writing to/from the GPU/VRAM is slowing things down most, plus the rendering of the 640×480 mesh. using char instead of float32 gains a couple of fps but not much.

does anyone have ideas for other strategies? for example, is there a way to render the mesh directly on the GPU without reading back to a matrix first?

in this thread a technical possibility is mentioned but it wasn’t implemented 4 years ago. is it now? http://cycling74.com/forums/topic.php?id=3435

– Pasted Max Patch, click to expand. –

February 4, 2011 | 6:37 pm

@dtr
wow, actually with your new version i get 19-20 frames more per second!!!
(28 instead of 9-10 fps!)
i wonder why there's no speed boost on your system…

i have a mbp 2,2 ghz 2007 with a GeForce 8600M GT (128mb)


February 5, 2011 | 9:12 am

i think i know how to use the depth image with cv.jit:

i want to convert the matrix in the following way: replace the y-values of the matrix with the brightness values of the matrix, while keeping the x-position. the original y-values can be deleted, and each drawn point on the y-axis can be just binary white.

this way i would get a top-down map of the scene.

i just don’t know how to do that in the best way – if anyone has some clue please help!

i also started an extra topic for this and hope someone can give me advice:

http://cycling74.com/forums/topic.php?id=31017

thanks!

ps: i also thought of just using the gl mesh data, putting the camera on top, setting the camera to isometric projection, and feeding that back into a regular matrix, but this way i would lose all accuracy of position, and i'm not sure how to align that properly.
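pps: in plain C, what i mean is roughly this (untested sketch; depth assumed normalized to 0..1, 0 = no reading):

#include <stdint.h>
#include <string.h>

#define W 640
#define H 480
#define DEPTH_BINS 480

/* collapse a WxH depth image into a W x DEPTH_BINS top-down occupancy map */
static void top_down_map(const float *depth, uint8_t *map)
{
    memset(map, 0, (size_t)W * DEPTH_BINS);
    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) {
            float d = depth[y * W + x];
            if (d > 0.0f) {
                int bin = (int)(d * (DEPTH_BINS - 1));   /* depth becomes the new "y" */
                map[bin * W + x] = 255;                   /* binary white */
            }
        }
    }
}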



dtr
February 5, 2011 | 5:17 pm

my last patch might be a bit misleading about the speed improvement. i also weeded out all the ballast from the older version, like the extra monitoring windows. do you also get a big speed improvement when switching between the jit.expr and slab methods in the same patch? that'd be the real test.

also, with slabbing and shaders the capacities of specific graphic cards come into play. perhaps mine is less performant for this particular method. i’m on a MBP 2.33GHz core2duo, Radeon X1600 256MB graphics and OS X 10.5.8.


February 5, 2011 | 7:27 pm

ok, i will make some speed tests with both variants side by side.
all in all your system seems much newer and more up to date than mine.
only exception: i'm running os x 10.6.6

btw. maybe you have some good advice on this:
http://cycling74.com/forums/topic.php?id=31017&replies=6#post-153232
?

… struggling.. :(

