Open Kinect
Yes, hopefully someone will port the driver to a jitter external!
I believe we won't see anyone getting skeletons out of it for some time though =/
The open-source driver has been released for Linux; hope someone will port it to OSX.
We're working on retrieving Kinect data from a Linux computer and sending it to another machine with Max installed...
You can get the depth map into Jitter right now via Syphon (jit.gl.syphon), using the OpenFrameworks depth map app that uses libfreenect. Here it is working in QC:
The same could be done for Jitter. Just saying, no need to wait.
Just DLed the OpenKinect code base. Might be over my head... would love to make it a jitter object.
Vade: Openframeworks depthmap app? Link would be awesome, thx.
thanks vade for the info, i'll give it a try!
Alright, got a kinect, the OpenFrameworks code works a treat. It's pretty slick.
Going to see if I can hack together jit.kinect.grab today.
@vade: any info on your Syphon-from-OpenFrameworks code? Were you going to release it?
Here's the bare skeleton, *****doesn't work yet******, but it initializes the camera and free() works properly. Next I've got to figure out how to get the information into a matrix or two. Ideally I think it would be great to have the raw depth from outlet 1, RGB from outlet 2, and a built-in threshold (just like the openframeworks example for blob detection).
Not sure I'll have this done any time soon, but making progress. I'd post this on github for people to help with but I haven't time to set it up. So for now feel free to repost in the forum I guess?
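For reference, the two-outlet layout I have in mind would be declared roughly like this in the Jitter class init (a minimal sketch against the Jitter SDK; the class variable name is just illustrative):

// 0 matrix inputs, 2 matrix outputs: raw depth on outlet 1, RGB on outlet 2
void *mop = jit_object_new(_jit_sym_jit_mop, 0, 2);
jit_class_addadornment(_jit_kinect_grab_class, mop);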
i hope you crack it!
did you see this link on that openframeworks topic? it's using the Kinect to make a 3d recreation of live video and move through it in 3d. it's brilliant
Okay, little help would be great.
Seems I'm getting the info I need loaded here:
memcpy(gl_depth_front, gl_depth_back, sizeof(gl_depth_back));
So what's the best way to read the info of type:
uint8_t gl_depth_front[640*480*4];
Into out_bp?
I'm sure the example is somewhere in the Jitter N-dimensional Matrices examples, but I'm starting to go bleary eyed.
Dunno, could be I'm not understanding the OpenGL example in the OpenKinect code, but that seems to make sense.
Hi cap10subtext,
how about setting up github? I'd certainly like to co-develop this one (already trying to hack something up).
Too bad I have to work tomorrow:(
uint8_t is char in jitter land.
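Something like this in your matrix_calc should do it (a minimal sketch, assuming a 640x480, 4-plane char output matrix; out_matrix comes from the usual MOP boilerplate):

char *out_bp = NULL;
t_jit_matrix_info out_minfo;
long row;

jit_object_method(out_matrix, _jit_sym_getinfo, &out_minfo);
jit_object_method(out_matrix, _jit_sym_getdata, &out_bp);

if (out_bp) {
    // copy one 640*4-byte row at a time; dimstride[1] accounts for row padding
    for (row = 0; row < 480; row++) {
        memcpy(out_bp + row * out_minfo.dimstride[1],
               gl_depth_front + row * 640 * 4,
               640 * 4);
    }
}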
I've got a github set up now but I'm having serious noob-related issues at the moment. :P I had everything set up, got my project directory ready, created .gitignore and .gitattributes to make sure it wouldn't muck up my Xcode files. Thought I was supposed to git commit to get things online but I'm stuck in vim...
lemme guess, rtfm? :P This is making me feel stupid.
Bah! Github is making me really frustrated, I'm missing something really simple and I can't get it to upload the project. If someone takes pity on me and sends me a link to a cheat sheet for github I'll put up the xcodeproj
okay it's up...
cap10subtext: openframeworks is up and working on the syphon google code svn, and has been for a while :) It just was not ready for the nice packaged Beta 1 release.
Nice work on the jitter object. Hope this makes progress :)
Gee, I should have checked the forums before starting to roll my own...
Anyway, I've put my project up on Github also. (It's probably going to be a good idea to get everyone working on the same project, though...)
I've got a compiled external over there that loads properly, and -in theory- should output something. I say in theory because the Kinect is only going on sale on the 20th here.
I looked over cap10subtext's code and it looks like there's a lot left over from the glView example that doesn't make sense (or do anything) in the context of a Jitter external.
Anyway, I'd appreciate if someone with actual hardware could test this out.
Jean-Marc
Sense? Who needs to make sense? ;) Seriously though, I'm a noob at max development. I'd dumped a few things in there I was trying to make sense of so I know it's crap.
I'll test yours right now...
Struggling with libusb. Can you point me to the version you are using? I tried compiling and installing the one here:
http://www.libusb.org/ (v. 1.0.8) but I get this error:
jit.freenect.grab, 262): Library not loaded: /usr/local/lib/libusb-1.0.0.dylib
Referenced from: /Applications/Max5/Cycling '74/jitter-externals/jit.freenect.grab.mxo/Contents/MacOS/jit.freenect.grab
Reason: Incompatible library version: jit.freenect.grab requires version 2.0.0 or later, but libusb-1.0.0.dylib provides version 1.0.0
Little help?
Hi all,
@JMP - it totally makes sense to me for this object to be part of cv.jit.
What do you guys think?
I'll test the object with the kinect late tonight (EU) when I finally get home.
@cap
I think you need the patched libusb; funny thing is that you should already have one if you got the glview example working, you just have to find it :)
check out this openframeworks thread:
Sorry for shooting in the dark, hopefully we'll get this one running in a day or two!
Best,
nesa
Thanks nesa, just before I saw your post, I realized I was using the "outdated" ones and JMP linked to the new ones, so I updated according to Theo's instructions.
JMP I finally got your external loaded, sending it the open message does nothing. It doesn't even turn on the little IR. I may be doing it wrong, but my external does open correctly, you can see the laser powering up, and it returns a serial number on the device.
I'm going to try fiddling a bit but I think you may be missing a call somewhere... I'll report back in a bit.
JMP: I've tried compiling your xcode project but I'm not having any luck, so I tried with the newest freenect source files, and now my example isn't working anymore either, so I doubt it's something you've done. It's either something specific to my machine (botched libusb install?) or else there's something going on in the new rev.
I'm not sure what it is, I might try my original code with your handling of Jitter and that might have more success for now.
Rats, I have a Max workshop today I was really hoping to present this at. Oh well.
Hi,
I was just looking at this video, and was blown away by the 3D precision of the kinect :
wow, 1 centimeter of depth precision, with a time-of-flight-of-light technique! 1 centimeter means timing light to about 1/30 of a nanosecond, a tenth of a clock cycle of our fastest computers!
other interesting videos:
http://www.youtube.com/user/okreylos#p/u/1/N9dyEyub0CE
and nice thoughts about using multiple Kinects:
http://www.youtube.com/user/okreylos#p/u/2/ttMHme2EI9I
http://www.youtube.com/user/okreylos#p/u/0/YB90t5Bssf8
Hi,
so I was playing around with Jean-Marc's version (lost multiple hours compiling the patched libusb and freenect properly).
In Jean-Marc's version there are callbacks that were never called, because we need to start the depth/rgb grab and process events, I guess (that's what I got from the latest glview example).
After poking around with these, I could finally see only one RGB frame and a white depth matrix. Now it seems that something is wrong in the timestamp handling, but dunno - my head doesn't work any more.
I forked Jean Marc's object, and you can find the latest hack here:
Here are my horrible notes on compiling libusb/freenect on Mac OS X:

- libusb:
  git clone git://git.libusb.org/libusb.git
  apply the freenect patch
  ./configure
  libusb must be compiled as 32-bit, so use:
  make CFLAGS='-arch i386'
  make check
  sudo make install

- freenect:
  manually edit libfreenect with ccmake to
  set the include dir to /usr/include/libusb
  and the system root dir to /
  this causes a 'missing SDK' error,
  so adjust the project settings in Xcode:
  change the SDK to Mac OS X 10.4/10.5/10.6
  and build libfreenect as i386
hope this helps, can't wait to continue working on it!
Thanks Jean Marc and Cap!
Okay I see how that works. Thanks nesa, that does indeed initialize the laser grid now. But I've traced the printf's from camera.c and yikes! I'm going to have to take a closer look but like you said it looks like it's dropping mad frames. So far no images on my end.
Don't know if you have this in there yet, but make sure you toss this bad boy into free and close to make sure it's not crashing on exit: libusb_release_interface(x->device, 0); (until they implement a proper freenect_close). Can't wrap my brain around github at the moment to throw it in your fork.
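In context, the free routine ends up looking something like this (a hedged sketch; the struct and field names are just illustrative):

void jit_freenect_grab_free(t_jit_freenect_grab *x)
{
    if (x->device) {
        // workaround: release the USB interface ourselves
        // until libfreenect implements a proper freenect_close()
        libusb_release_interface(x->device, 0);
    }
}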
More tomorrow.
Hi, I fixed the libusb issues and managed to remove dependencies. I also added the camera release code.
I also updated the mxo, so people with an actual device, please give it a try!
(There are also instructions in the README on how to compile a 32-bit libusb and link statically to it -- which is harder than it should be.)
Jean-Marc
Jean-Marc,
thanks for the cleanup and nice instructions.
Unfortunately, the object outputs nothing - see my previous post.
Cap, thanks - I didn't have the release_interface, but I see it now in Jean-Marc's code.
JMP and nesa: here's some debug information from camera.c. These are the errors it throws after it opens and 5 bangs are given. Maybe this will put us on the trail. I will hunt through the code and see if I can figure out what else is throwing an error.
Device Number: 1
device index: 0
new device opened.
starting grabs
First xfer: -9
CTL CMD 0003 1267 = 12
CTL RES = 10
CTL CMD 0003 1268 = 12
CTL RES = 10
CTL CMD 0003 1269 = 12
CTL RES = 10
CTL CMD 0003 126a = 12
CTL RES = 10
CTL CMD 0003 126b = 12
CTL RES = 10
CTL CMD 0003 126e = 12
CTL RES = 10
CTL CMD 0003 126f = 12
CTL RES = 10
CTL CMD 0003 1270 = 12
CTL RES = 10
CTL CMD 0003 1271 = 12
CTL RES = 10
CTL CMD 0003 1272 = 12
CTL RES = 10
CTL CMD 0003 1273 = 12
CTL RES = 10
CTL CMD 0003 1274 = 12
CTL RES = 10
CTL CMD 0003 1275 = 12
CTL RES = 10
CTL CMD 0003 1276 = 12
CTL RES = 10
CTL CMD 0003 1277 = 12
CTL RES = 10
CTL CMD 0003 1278 = 12
CTL RES = 10
CTL CMD 0003 1279 = 12
CTL RES = 10
CTL CMD 0003 127a = 12
CTL RES = 10
CTL CMD 0003 127b = 12
CTL RES = 10
CTL CMD 0003 127c = 12
CTL RES = 10
CTL CMD 0003 127d = 12
[Stream 70] Invalid magic ffff
[Stream 70] Invalid magic ffff
[Stream 70] lost 251 packets
[Stream 70] lost too many packets, resyncing...
[Stream 70] Invalid magic eebd
[Stream 70] Invalid magic f75e
[Stream 70] lost 249 packets
[Stream 70] lost too many packets, resyncing...
[Stream 70] Invalid magic ffff
[Stream 70] Invalid magic ffff
[Stream 70] Expected 1748 data bytes, but got 1908. Dropping...
[Stream 70] Invalid magic 674c
[Stream 70] Invalid magic aa75
[Stream 70] Invalid magic 3ac7
[Stream 70] Invalid magic 73ae
[Stream 70] Invalid magic d8bb
[Stream 70] Invalid magic 9d93
[Stream 70] Invalid magic a1d4
[Stream 70] Invalid magic ea9d
[Stream 70] Invalid magic 8eb1
[Stream 70] Invalid magic 5ceb
[Stream 70] lost 244 packets
[Stream 70] lost too many packets, resyncing...
CTL RES = 10
CTL CMD 0003 127e = 12
CTL RES = 10
CTL CMD 0003 127f = 12
CTL RES = 10
CTL CMD 0003 1280 = 12
CTL RES = 10
[Stream 70] Invalid magic d8bb
CTL CMD 0003 1281 = 12
CTL RES = 10
CTL CMD 0003 1282 = 12
CTL RES = 10
CTL CMD 0003 1283 = 12
[Stream 70] Invalid magic c899
[Stream 70] Invalid magic ffff
[Stream 70] Expected 1748 data bytes, but got 1908. Dropping...
[Stream 70] Invalid magic 5d8b
[Stream 70] Invalid magic c5d8
[Stream 70] Invalid magic ea9d
[Stream 70] Invalid magic 4ea9
[Stream 70] Invalid magic 5ceb
[Stream 70] Invalid magic 75ee
[Stream 70] Invalid magic 2762
[Stream 70] Invalid magic b376
[Stream 70] Invalid magic ba97
[Stream 70] Invalid magic 5bac
[Stream 70] lost 244 packets
[Stream 70] lost too many packets, resyncing...
[Stream 70] Invalid magic 84d0
[Stream 70] Invalid magic 756e
[Stream 70] Invalid magic 674c
[Stream 70] Invalid magic b176
[Stream 70] Invalid magic 3ac7
[Stream 70] Invalid magic 4387
CTL RES = 10
CTL CMD 0003 1284 = 12
CTL RES = 10
[Stream 70] Invalid magic ffff
[Stream 70] Invalid magic ffff
[Stream 80] Invalid magic 0a08
[Stream 80] lost 255 packets
[Stream 80] lost too many packets, resyncing...
[Stream 70] Invalid magic 64ec
[Stream 70] lost 255 packets
[Stream 70] lost too many packets, resyncing...
[Stream 80] Invalid magic 4424
[Stream 80] lost 255 packets
[Stream 80] lost too many packets, resyncing...
[Stream 70] Invalid magic ffff
[Stream 80] Invalid magic 020d
[Stream 80] lost 255 packets
[Stream 80] lost too many packets, resyncing...
[Stream 70] Invalid magic dd5b
[Stream 70] lost 255 packets
[Stream 70] lost too many packets, resyncing...
Update: nesa, are those callbacks doing what they should in your setup? A simple trace indicates the functions aren't being called at all in mine, therefore no timestamp or pixel data, therefore no love.
This works (JMP: looks like you're missing these in your init, which explains why the grid isn't powering up):
if (freenect_start_depth(device_data[i].device )!=0) {error("start_depth failed");}
if (freenect_start_rgb(device_data[i].device )!=0) {error("start_rgb failed");}
if (freenect_process_events(device_data[i].context)<0) {error("process_events failed");}
these don't:
freenect_set_depth_callback(device_data[i].device, depth_callback);
freenect_set_rgb_callback(device_data[i].device, rgb_callback);
freenect_set_rgb_format(device_data[i].device, FREENECT_FORMAT_RGB);
(you'll notice I'm still using JMP's multi device loops, I know you discontinued them in your fork, but I'm 100% certain that's not the issue here).
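Putting it together, the order that seems to matter is something like this (a sketch based on the glview example and the calls above; the callbacks only ever fire from inside freenect_process_events):

freenect_set_depth_callback(device_data[i].device, depth_callback);
freenect_set_rgb_callback(device_data[i].device, rgb_callback);
freenect_set_rgb_format(device_data[i].device, FREENECT_FORMAT_RGB);

if (freenect_start_depth(device_data[i].device) != 0) error("start_depth failed");
if (freenect_start_rgb(device_data[i].device) != 0) error("start_rgb failed");

// the callbacks above are only invoked from inside this call,
// so it has to be pumped continuously:
while (freenect_process_events(device_data[i].context) >= 0) {
    // depth_callback / rgb_callback fire here
}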
I'm sorry I'm not doing this right on Github! I haven't had time to get into the flow, hopefully I'll have time to master it on the weekend. Make things easier for everyone...
Hey! I'm a dev on the libfreenect project as well as a max/pd external developer (admittedly through flext usually; jitter is gonna be new for me). If there are any support needs from the libfreenect side, lemme know and I'll see what we can do. Definitely interested in getting jitter going myself. :)
qDot, nice to meet you! Welcome aboard.
I think the biggest consideration for libfreenect would be to make sure it continues to play nice with Max. For example, they removed code from freenect_close, so freeing the external causes the app to crash. So far the hack has been to keep in the camera release call from an early revision.
Not sure what that means to you. My past experience says there are certain calls that should be avoided at all costs when it comes to Max, e.g. exit(), etc... but I'm not sure how many things like that will be a consideration. I'm not much of an authority. Just a hack. :)
Thanks cap!
First off, has someone set up a main repo for jit.freenect.grab anywhere? I'm happy to work as maintainer on this if you'd like; we could possibly even host the repo under the OpenKinect organization on github.
Knowing where that is would make it easier for me to update you on what's been updated in the api when changes happen, or even make the patches myself if you'd like. I've got the Max SDK going on here (was working on my own jitter external last weekend, but have been kinda busy just working on libfreenect this week).
The API is solidifying fairly quickly on the OS X/Linux side; we're hoping to bring Windows under the same API as is on master right now, it's just taking us a bit to get things right. I don't think you'd be calling anything too volatile in the API, but I would also expect it to change pretty quickly, so you might be best off statically compiling it into your external for the time being if you want things to keep working, assuming that matches whatever license you want to use on your external too.
Also: how is the external expecting to get images? We sort of assume a streaming architecture in the API, so it may be better to go with a start/stop thread model than a "bang for an image" one, though you could certainly do that via thread spawning too.
Hi qDot,
Right now there's my repo at https://github.com/jmpelletier/jit.freenect.grab and nesa's fork at http://github.com/npnp/jit.freenect.grab (as well as cap10subtext's earlier https://github.com/cap10subtext/jit.kinect).
Kinects go on sale tomorrow here, so hopefully with an actual device on hand I should be able to get something working in the next 24 hours.
As far as streaming vs. asynchronous design goes, I used the latter -- bang to get a frame -- because it fits with the existing designs of jit.qt.grab and jit.dx.grab. This is not set in stone, but I think it's better that way, because it allows users to simply replace the traditional grabbers in existing patches (among other things).
Right now, my biggest request as far as libfreenect is concerned is a user data pointer in the callbacks. At the moment, it looks like using globals is the only way to access anything other than the function arguments. (I should probably make a more official request.)
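For now the workaround looks something like this (a hedged sketch; the exact callback signature depends on the libfreenect revision, and the global and field names are hypothetical):

static t_jit_freenect_grab *g_grab_instance = NULL; // hypothetical global

static void depth_callback(freenect_device *dev, void *depth, uint32_t timestamp)
{
    if (g_grab_instance) {
        // copy depth into the instance's back buffer here
        g_grab_instance->depth_timestamp = timestamp; // illustrative field
    }
}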
Right now everything is statically linked, and unless there are licensing hurdles, it should stay that way. You should be able to just drop externals in the Cycling74 folder and expect them to just work.
Jean-Marc
Hi all,
qDot - welcome, great that you're on board:)
I've just posted the first version that actually outputs something.
I agree with Jean-Marc about asynchronous design. For me that fits more into the ways of Max.
In my hacky version I've created a separate thread that gets the stream continuously, while a bang just outputs the latest frame (à la @unique 0). No optimizations whatsoever at this point.
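Roughly, the structure is this (a hedged sketch with pthreads; all the names are illustrative):

#include <pthread.h>
#include <stdint.h>
#include "libfreenect.h"

static pthread_t grab_thread;
static pthread_mutex_t frame_lock = PTHREAD_MUTEX_INITIALIZER;
static volatile int grabbing = 0;
static uint16_t latest_depth[640 * 480]; // updated by the depth callback

static void *grab_loop(void *context)
{
    // pump libfreenect continuously; the callbacks copy incoming
    // frames into latest_depth under frame_lock
    while (grabbing && freenect_process_events((freenect_context *)context) >= 0)
        ;
    return NULL;
}

// on bang, the external just locks frame_lock, copies latest_depth
// into the output matrix, and unlocks -- hence the @unique 0 feel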
good luck and have fun!
Does anyone know if they have windows drivers for Kinect?
nesa/jean-marc: Awesome, that was pretty much going to be my thought too on frame retrieval; I'm just not as used to jitter as I am to the rest of max (most of my hardware externals stream because they're outputting at > 100 Hz).
I'll see about user data in the callbacks. I believe someone has submitted a patch somewhere for that, I just need to find it. But yeah, if you file issues on the openkinect/libfreenect github site, that's probably best to keep us remembering.
Anthony: There's nothing on the main repo right now that works under windows, but this HAS been working on windows, so it's not a hopeless cause, just one that's taking a bit of time. We're working on solidifying win32 under the new api. That's our main goal right now, actually, so we can have people developing on top of it on all three platforms while the probably-going-to-be-much-slower-dev-time-wise kernel driver development process begins.
nesa: what's the best way to contact you (if it's okay with you)? I just have a simple question about your code and don't want to spam this thread (i can post here if you prefer). I'm arlabrat on twitter or at gmail d0t com.
qDot: thanks so much. If only we could get in on the ground floor this early on other projects, it would make things so much easier down the road.
Possible project for Cycling74: top 5 points to consider on how to make your SDK max/jitter friendly? :)
Dynamically linked bundles, relative path names in the C file, no volatile commands... Maybe this stuff is all evident to programmers, but I'm constantly running into trouble with how many SDKs are just poorly compatible with Max:
ARToolKit: any spaces in the data file paths won't work
Intersense: requires a text file with ports installed at root (or an assigned directory)
qDot: I tried compiling a ThinkGear external once upon a time, I think I even posted it to the forum, but I seem to recall that needed a bundle installed to work too?
Anyways, just a side note. Back to business...
Hi,
I just wanted to add my thanks for the work and sharing going on here. I'm afraid I don't have anything to add to the jitter object discussion, but I have followed Vade's advice above and have been using openFrameworks and Syphon to get the Kinect's depth map into jitter. This is really only connecting the dots between other people's work, but I thought my notes to myself might be helpful to others --> so here they are... http://palace-of-memory.net/kinect-openframeworks-syphon-maxmspjitter/
I have also included the final application, which will open a window called "kinect syphon server" and should display the Kinect's depth image in real time. It sends out a Syphon server stream called "Kinect Depth Image" which you can grab in max/msp/jitter using the jitter Syphon implementation.
hi miscellanea,
i developed my own ofx->syphon->jitter following the same procedure as you, and it does exactly the same. The image is good and i receive the picture at 30fps.
google "libusb-osx-kinect.diff" and use git to download the latest libusb; it solves the glitchy picture problem. You can follow the advice in the readme of the jit.freenect.grab sources from jean-marc pelletier.
But i have a bad issue: everything works fine, ok, but i can see that the "kinectSyphon" process uses an average of 115% cpu (my cpu is an i5 2.4ghz macbookpro). My compiled version does the same.
I tried the glview example done by theo, the first hacked kinect use of libfreenect on os x, and the cpu runs at 7%...
the ofxKinect sources do the same too: 140% cpu.
Hi all,
I've folded in some of the changes made by nesa and the latest update on my Github repo now works.
It's still very alpha. I still have to implement "unique" mode, multiple camera support and proper opening/closing, and I can't seem to release the camera properly, but the video streams work as they should.
Phew!
Bravo JM,
I can't wait to test it...
Congrats, Jean-Marc! It's running in the background right now and it's amazing!
I bow to the master! :)
I've already changed the libusb on my computer so I can't conveniently verify at the moment: does the version currently on Github link to libusb dynamically? Does it work to just drop this into the max-externals folder on another machine? Thanks...
You shouldn't need libusb. My previous static version was causing some problems, so right now I'm just including the libusb sources in my project.
Kinect support for Cinder:
http://vimeo.com/17069720
wow, the depth resolution in this video seems far better than 1 centimeter!!!
Amazing kinect video art:
Hi Alexandre,
At close range (about 1 meter) the depth resolution is indeed very high. To test things out I made a short video. I'm just moving my head back and forth slightly to make it look like I'm moving in and out of a "light". You can make out my facial features quite well.
Bouu.. you're scaring me!
It's hard to imagine how they get such depth resolution from measuring light time of flight... plus the resolution basically shouldn't change with distance. But maybe they apply some kind of averaging, like this: http://www.youtube.com/watch?v=Z1yYu5dEFfI Could this also mean that fewer FPS = more depth precision possible, while more FPS = less depth precision??
Looking forward to playing with this! Just got a simple kinect setup working, so getting it into jitter is clearly the next step!
BTW, alexandre, the kinect isn't time of flight, they use structured light, and project IR dot patterns which they then decode
You're right!
It was Wired.com saying stupid things about the kinect without knowing what they were talking about: http://webcache.googleusercontent.com/search?q=cache:7_wVm6TRufoJ:www.wired.com/gadgetlab/2010/11/tonights-release-xbox-kinect-how-does-it-work/+time+of+flight+kinect&cd=3&hl=fr&ct=clnk&gl=fr
Hi JMP,
Thanks for your amazing work.
and everyone in here sharing this cool world.
i tried to use jit.freenect.grab but
i got the message below...
jit.freenect.grab: unable to load object bundle executable
2010-11-25 20:55:19.323 MaxMSP[695:20b] Error loading /Users/fuyamayousuke0/Desktop/jit.freenect.grab.mxo/Contents/MacOS/jit.freenect.grab: dlopen(/Users/fuyamayousuke0/Desktop/jit.freenect.grab.mxo/Contents/MacOS/jit.freenect.grab, 262): no suitable image found. Did find:
/Users/fuyamayousuke0/Desktop/jit.freenect.grab.mxo/Contents/MacOS/jit.freenect.grab: unknown required load command 0x80000022
sorry, i have no idea...
would you give me some help???
Thanks
i didn't think it would be so quick to have a jitter object for the kinect. too bad i don't have the skill to be part of the dev process. well done !
now i have to buy a kinect...
Big up for the developers of the object!
I tried the jit.freenect.grab object, but from the first outlet i get only a totally white image.
The second outlet works and puts out the normal live camera image.
I tried the App http://miscellanea.com/downloads/kinectSyphonApp.zip and this one worked fine.
Any suggestions?
try sending a message "mode 1" or "mode 2" to jit.freenect.grab to change outlet 1 output mode...
BTW, it's true that a help file would be useful; is there a jit.freenect.grab.maxhelp somewhere around?
anyway... big, big, big thanks to you guys for your work on this external!
it works fine here, and I really enjoy this microsoft toy :-)
Hey, Mathieu! MMF for Kinect? ;-)
"BTW, it's true that a help file would be useful ; is there a jit.freenect.grab.maxhelp somewhere around ?"
If Jean-Marc isn't already all over this, I can have one up in a jiff...
thanks for the help file.
there's a small error: mode 0 (default) does not disable depth output; it outputs the raw depth values as 11 bits.
(it outputs a float32 matrix; values are between 0 and 2048)
(connect a jit.cellblock to see the matrix values...)
M
Thanks Mathieu. Selecting the different mode options works.
The only thing is that the output randomly stops after a couple of minutes of working. Sometimes only the output from the first outlet, sometimes only the output from the second one stops updating the image.
Probably because it's in alpha state?
Sorry about not documenting the "mode" attribute better.
It's definitely not "production ready" yet, but it's almost there.
I'm not sure why the output stops randomly. It might be a problem with libfreenect because I don't think there's really anything in my external that might be causing these sorts of problems.
Yousuke: what version of OSX are you using? The external is still in development so the version that's up is a debug build and I haven't made any effort to make it compatible with anything other than 10.6.
Thanks for the help file! I made a few edits and pasted it below.
Jean-Marc
Okay, forget the last help file, I made some more changes.
You can now choose to output the depth matrix as long, float32 or float64. The original data is 11-bit, so there's not much point in outputting char. You can easily do the conversion in Jitter anyway.
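(If you want physical units from the raw values, one empirical raw-to-distance mapping that circulated in the OpenKinect community looks like the sketch below; treat the constants as approximate and re-check them against the current wiki.)

// empirical raw-to-meters approximation; constants are approximate
float raw_depth_to_meters(int raw) // raw is the 11-bit value, 0..2047
{
    if (raw < 2047)
        return 1.0f / ((float)raw * -0.0030711016f + 3.3309495161f);
    return 0.0f; // 2047 means no reading
}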
There was also a "unique" attribute that wasn't in the help file. It works like for jit.qt.grab.
The Kinect needs to be still to calibrate its laser projection. If you move it or nudge it you will experience blackouts. That's normal.
The update is up on Github but it's still "alpha" so play at your own risk.
Jean-Marc
"thanks for the help file.
there's a small error: mode 0 (default) does not disable depth output; it outputs the raw depth values as 11 bits.
(it outputs a float32 matrix; values are between 0 and 2048)
(connect a jit.cellblock to see the matrix values...)
M"
Whoops... Should have known better. I didn't even check that. Sorry for the mistake.
Jean-Marc & aartcore: I posted this issue on Github and promised more debug info (but haven't worked with it for any length of time since). I'm on 10.6.4 as well. Might be with the freenect lib but I haven't yet encountered this in (for example) Openframeworks.
I just posted a release candidate on Github. Thanks to nesa, you can now bob the Kinect's head and get accelerometer readings. I also verified that it works with two Kinects at the same time. There's also a much-improved help file in the download.
Jean-Marc
I've got the same problem as YouSuke. I'm using OS X 10.4.11. Has anyone tried it on 10.4 or 10.5?
Here's the error I'm getting:
jit.freenect.grab: unable to load object bundle executable
2010-11-27 13:51:54.584 MaxMSP[4035] CFLog (21): dyld returns 2 when trying to load /Users/mattgilbert/Projects/kinect-dance/max/jit.freenect.grab.mxo/Contents/MacOS/jit.freenect.grab
Could someone compile Jean-Marc's external for Windows and post it?
Sorry, OS 10.5 and higher, Intel only for now.
>JMP
Hi,
I tried new release and it works very well!!
thanks so much.
Yousuke
Hi,
I haven't had time to work on the external (day job) but I did hack together a quick patch that maps the output of jit.freenect.grab to OpenGL geometry. I might actually make this another mode in the external, which would get rid of the artifacts.
I have the new release working well. Thanks so much for all of your work!
Looking forward to getting the data into a sound or graphic patch.
In a previous release of jit.freenect.grab on github there was a build folder with the mxo in it, but in this newer one it's disappeared... can anyone give me a hint as to how to build the mxo of the new version?
edit: oops... sorry... just had to build the xcode project and the folder showed up... sorry, I'm terrible at xcode at the moment
>JMP
Hi,
I downloaded your latest release; the camera is working, but most of the messages to the object seem not to work. There were "doesn't understand" errors in the Max window. Do you know what might be happening? I use Max 5.1.5. Many thanks!
yan
works great! can't thank you enough!!! http://www.youtube.com/watch?v=phGSc2KUcfw
>jean marc pelletier
hi,
thank you so much for your work, but I have a question: why don't we get the color that changes according to depth, like in all the other driver demos? I want to use it to create several layers with color filters.
Marc Lautier (Journées d'Informatique Musicale 2009, Grenoble)
lautier987, you can just remap the grey values to HSL.
Yan: I'm not sure why the messages aren't working. I got another message about this via Twitter but here everything works fine, and as far as I can tell other people are using the object without problem too. I'll try to look into it, but it's hard when I can't reproduce the problem.
Marc: Bonjour! The colour is just for visualization. I didn't include it because it's the kind of thing Jitterists might want to make themselves. Here's an example of how you can do it (it's not the same mapping as in the demos). You just need to make sure you're using "mode 3" (distance).
thanks so much for your work, Jean-Marc Pelletier!
the external rc1 works like a charm :-)
Also:
The official page for jit.freenect.grab is live at http://jmpelletier.com/freenect/
If you have something interesting to show, let me know and I'll add it to the gallery.
Jean-Marc
nice work! Jean-Marc, how did you create this one:
http://www.youtube.com/watch?v=wvJKaViF7p0
dirkdebruin: Used jit.gencoords + jit.freenect.grab depth map to make a geometry matrix that I fed straight into jit.gl.render.
When I get the time, I want to make another external that converts the depth and rgb data to more proper OpenGL geometry.
Jean-Marc
Thanks a lot Freenect team!
grab object works just fine.
Joy.
hi
sorry for the stupid question, but it doesn't hurt to ask, no?
__when you say kinect it's only the camera/accessory (which costs some 140 euros over here, in France) or do you need the Xbox as well?
__how do you connect it to max/jitter (running on a mac) - bluetooth? wifi??
__if I understand well, it goes way beyond the possibilities of a web-cam, does it?
many thanks for some basic answers!!
best
kasper
__when you say kinect it's only the camera/accessory (which costs some 140 euros over here, in France) or do you need the Xbox as well?
No, just the camera/accessory; you don't need the Xbox.
__how do you connect it to max/jitter (running on a mac) - bluetooth? wifi??
USB, and the jit.freenect.grab object. It also requires external power.
__if I understand well, it goes way beyond the possibilities of a web-cam, does it?
Only in that it gives you depth information (which has previously been very difficult to get), so it's easier than ever to extract, for example, presence (with a much easier way to do background subtraction).
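For example, presence detection more or less reduces to a depth threshold, something like this sketch (the buffer names are illustrative; depth holds raw 11-bit values that grow with distance, with 2047 meaning no reading):

// anything closer than threshold counts as presence;
// 2047 (no reading) naturally falls outside the mask
int i;
for (i = 0; i < 640 * 480; i++)
    mask[i] = (depth[i] < threshold) ? 255 : 0;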
oh, thanks
so the kinect + usb cable and of course the new object, and I am set (+ the jitter patch etc etc etc of course)
I think I will get one!!!
best
kasper
Hello Jean Marc
Any plan for a Windows version? Sooner or later?
thanks
xavier
First, a big thank you to all that have contributed to jit.freenect external. I've been having a bit too much fun with it lately! And of course yet another controller where I don't own the console it was meant for :)
As an exercise, both to help with my jit.gl.* chops (more than my usual tinkering with videoplanes) and to play more with the data coming from the kinect, I wanted to generate a 3D point cloud based on the data. Seems pretty straightforward to do given that all the coordinate data is available. Where I'm stuck is what object(s) to use to generate the points. Any pointers?
Thanks again for the great work!
David
Try jit.gl.mesh with @draw_mode set to points
WHOA! Oh man, I just tried the newest external and scared myself. If you don't initialize the object with a tilt value it resets to 0, and the other objects never turned on the red LED before. I thought my Kinect had been hijacked by Skynet! LOL, no more coffee for me...
cap10subtext: as I wrote on Twitter a while ago, I always thought robots with red-glowing eyes à la Terminator were a meaningless fantasy, but here I am with two red-glowing Kinect eyes staring at me...
pixelux: I would like a Windows version too, but it looks like libfreenect doesn't work on Windows yet.
Kasper: you don't even need a USB cable, it comes with the device. 140 euros? Ouch. Here, it's 12,000 yen, about 110 euros. Very, very cheap for what it is. I used to work with a Point Grey Bumblebee (http://www.ptgrey.com/products/bumblebee2/) which costs about $2000 and didn't give you as good a depth map as the Kinect. It works in sunlight, though, unlike the Kinect.
Hi,
The current external's working here too, but the output freezes every couple of minutes. I have to send the close and start messages for it to restart. I just started messing around last night. Will give it another go tonight to see if this persists. I'm on OS X 10.5.8 and Max 5.1.5.
grtz dtr
Problem persists. Every x minutes the output will stall. I had one instance where the 2D image stopped outputting while the depth field kept going. Any ideas?
hey DTR, I had trouble with that too until I used the grey usb extension that came with the kinect; the problem for me seemed to be related to a loose connection.
First of all many thanks to the makers of the object!
Second, would any of you Jitter wizards be willing to post some examples of how you are manipulating the depth data like in these crazy videos turning people's faces into topographic maps?
Thanks,
Dave
Erm, yeah… I'm thinking more like a patch, but thanks Roman!
I cobbled something together using a screengrab from one of jean-marc's demos... just to format the data correctly... and tried the video distortion example... and made a colorizing example with jit.charmap... I want to clean it up and post it for everyone, but I don't have my own kinect... I'll try and clean it up but I might break a couple things... maybe tomorrow, but someone will probably beat me to it