[sharing] Kinect depth-sensitive blob tracker, 3D bg subtraction, etc.
Hey everyone,
Using jit.freenect, some cv.jit, GLSL, Java, and JavaScript... here's a Kinect tracker patch/app that I've been using lately as a server for controlling other patches/apps. Some pretty cool features in this, IMO, that others might find useful...
best,
Zachary
demo video: http://youtu.be/bx02WIG7ooU
download app and max patches at: http://www.vis.kaust.edu.sa/tools/KVL_Kinect_Tracker/
- documentation is non-existent aside from the video and the comments in the JSON config file, but there's probably enough info there for a start.
FEATURES:
1. 3D background subtraction
2. custom culling of Kinect depth matrix for carving out ideal tracking space
3. reports blob location data via OSC to variable number of listening clients, as follows:
--- /kinect/blobs label1 pixelX1 pixelY1 x1 y1 z1 label2 pixelX2 pixelY2 x2 y2 z2... etc. for as many blobs as are present (message length will be a multiple of 6: one 6-element list per blob; see the sketch right after this list)
4. by default blobs are sorted and labeled in ascending order nearest to farthest from camera (labels start at 1)
--- pixel x/y coords are blob centroid in pixel (camera frame) space, normalized from -1. to 1., with origin at center of frame
--- x/y/z coords are real-space 3D blob centroid coordinates in meters (right-handed coordinate system, with z pointing out from camera, y pointing up)
5. optional depth-sensitive tracking - attempts to consistently associate labels with correct blobs, regardless of overlap, etc.
--- like cv.jit.blobs.sort, but considering depth as well
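To make the message format above concrete, here's a rough sketch in plain JavaScript (just an illustration of the layout, not code from the patch) of how a listening client might split the flat /kinect/blobs argument list into per-blob records:

// Hypothetical sketch: split the flat /kinect/blobs argument list into per-blob records.
// Assumes the 6-elements-per-blob layout described above; not part of the actual patch.
function parseBlobList(args) {
  var blobs = [];
  for (var i = 0; i + 5 < args.length; i += 6) {
    blobs.push({
      label: args[i],       // sorted nearest-to-farthest by default, starting at 1
      pixelX: args[i + 1],  // centroid in camera-frame space, -1. to 1., origin at center
      pixelY: args[i + 2],
      x: args[i + 3],       // real-space centroid in meters (right-handed, z out of camera, y up)
      y: args[i + 4],
      z: args[i + 5]
    });
  }
  return blobs;
}

// e.g. a message carrying two blobs:
var blobs = parseBlobList([1, -0.1, 0.2, -0.25, 0.5, 1.8, 2, 0.4, -0.3, 0.9, -0.6, 2.4]);
// blobs[0].label === 1, blobs[0].z === 1.8, blobs[1].z === 2.4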
nice! i had the culling etc but the blob tracking is real cool. tanx!
Waow! This is very impressive, thanks for including the original patch as well!
Great Work! Thank you very much for sharing.
Thanks guys, glad you like this. Please let me know your suggestions for future versions, or any strange behavior you experience. Or just gut what you need from the patch, and do your own thing...
@dtr, I've been wanting to implement a kinect-based cv.jit.blobs.sort for a while now. Thinking about some other tweaks that could be added... like right now with tracking enabled, if a blob disappears for one or more frames, its label gets freed, and then when that same blob reappears it gets assigned the lowest available label. It probably makes sense to make that behavior flexible, and allow a blob to disappear for a custom-defined number of frames before its label is stripped...
Added some more features to the depth-based tracking...
in v19 of app/patch, here: http://www.vis.kaust.edu.sa/tools/KVL_Kinect_Tracker/
Now you have a buffer parameter that defines how long a blob's location/label will be remembered by the tracker after it has disappeared from the frame.
Default is 0, meaning no frames of memory (when a blob disappears, it's forgotten). This is the original behavior from the previous version.
A value of -1 sets memory to infinite. When a blob disappears, its coord/label remains indefinitely. When a new blob appears within the specified distance threshold, the tracker interprets it as that old blob.
And more practically... a value of 30-300 (1-10 sec., assuming 30fps) will remember a blob's coord/label for a short amount of time. This may be good if, for example, a person turns sideways and is momentarily too small to register as a blob.
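In pseudo-terms, the logic works roughly like the JavaScript sketch below (my own illustration of the idea, not the tracker's actual Java code): remembered blobs expire once the buffer runs out (never, if it's -1), and a newly detected blob inherits the label of the nearest remembered blob within the distance threshold, otherwise it gets the lowest free label.

// Rough sketch of the frame-buffer idea (illustration only, not the tracker's Java code).
// buffer: 0 = forget immediately, -1 = remember forever, n = remember for n frames.
function makeLabelMemory(buffer, distThreshold) {
  var memory = [];   // remembered blobs: { label, x, y, z, lastSeen }
  var frame = 0;

  function dist(a, b) {
    var dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return Math.sqrt(dx * dx + dy * dy + dz * dz);
  }

  return function update(detections) {   // detections: [{x, y, z}, ...] for this frame
    frame++;
    if (buffer >= 0) {                   // expire old memories unless buffer is infinite
      memory = memory.filter(function (m) { return frame - m.lastSeen <= buffer; });
    }
    var out = [];
    detections.forEach(function (d) {
      // reuse the label of the closest remembered blob within the distance threshold
      var best = null;
      memory.forEach(function (m) {
        if (dist(d, m) <= distThreshold && (best === null || dist(d, m) < dist(d, best))) best = m;
      });
      var label;
      if (best !== null) {
        label = best.label;
        memory.splice(memory.indexOf(best), 1);   // each memory can claim at most one blob
      } else {
        // otherwise take the lowest label not used by an active or remembered blob
        label = 1;
        var taken = memory.concat(out).map(function (b) { return b.label; });
        while (taken.indexOf(label) !== -1) label++;
      }
      out.push({ label: label, x: d.x, y: d.y, z: d.z, lastSeen: frame });
    });
    memory = memory.concat(out);   // current blobs plus surviving memories carry forward
    return out;
  };
}

// e.g. ~1 second of memory at 30fps, with a 0.3 m association threshold:
var track = makeLabelMemory(30, 0.3);
track([{ x: 0, y: 0, z: 1.5 }]);   // -> [{ label: 1, x: 0, y: 0, z: 1.5, ... }]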
this seems like a really great tool, but i cannot get it to work...
When launching the patch, Max asks for a "project.osc.recv" which it doesn't find. What is this object?
I can't find any information about it.
Thank you
fantastic tool - thanks!
(works perfectly in Max 5, not at all in Max 6 - no errors, but turning the camera on just outputs a single frame - if I toggle the kinect view window, each time it opens it contains the newest frame, but no motion)
mega share!
indeed, great tool, works fine in max5
@tep, That abstraction shouldn't have been in the Max patch (it's part of a larger project); I've taken it out now. Redownload, or just delete the object yourself. But glad you got it working, regardless.
@pseudostereo, Thanks! I don't have Max6 at my work, where this was made, but I do have it at home. I'll figure out what's happening there soon. In Max5, you have to do some funky things to make gl objects work without a visible window, and that's what I'm doing. Maybe if you remove the visible-then-invisible window stuff from the patch, and just make it visible always, the problem goes away? I'll check when I can.
nice work, thanks for sharing!
Hi everyone,
I made a few minor changes to ensure that the patch works correctly in Max6.
Redownload v19 for the update. I haven't extensively checked the patch in Max6, just a few minutes of testing. So let me know if any strange behavior pops up.
Actually, I don't understand why any changes were necessary... If anyone from Cycling is out there listening, maybe they can give insight.
Here are the details:
1. In the Max5 version, I wasn't erasing the render context before banging it, since I didn't see the need in this case. But that erase is required in Max6 (haven't had time to thoroughly figure out why).
2. In the Max5 version, I was setting the gl.render object to @transform_reset 1, and then the gl.videoplane was correspondingly affected. But in Max6, setting @transform_reset to 1 for the gl.render object doesn't seem to affect the gl.videoplane at all. So I explicitly set the gl.videoplane attribute. This is a surprising one to me, and it means that lots of my old patches that globally set @transform_reset may be broken in Max6...
Anyways, it's all working in 5 and 6 now.
best,
Zachary
@Zachary Thanks for sharing. I am currently working on a project that also uses a Kinect device, but not for blob tracking; it is simply a data mapping issue. I am wondering if I can ask you for suggestions through email?
Thank you
Caroline
Sure, though if it's something that would benefit other people in the forum, it might be nice to ask here as well.
zseldess at gmail
ask here
Excellent work, Zachary. Your office chair looks really nice, too. Thanks for posting this stuff.
Hi Zachary,
as it is based on jit.freenect.grab, i suppose there's no problem in using 2 kinects & two of your patches at the same time?
Here are the things which would be necessary:
- "open 1" & "open 2" messages in [jit.freenect.grab]
- distinguishing the send&receive & OSC ports in each patch
- change the ports and make a 2nd kinect_tracker_config.json
am i forgetting something?
Hi tep,
That sounds like everything, although I won't know for sure until I get back to my Kinects in a week or so. If you use the standalone, all you should need to change is the open message, I think. I'll rebuild to allow that to be selected in the interface and via the JSON config in the near future.
Also, just fyi, I'm planning to update this to make it possible to carve out multiple tracking spaces with one camera, but need to think it through some more. I'm also going to be working in the next few months on a version as tracking server for 3D nav in cave environments...
best,
Zachary
This is super awesome. Thanks for sharing.
Hi guys,
Another small update to KVL Kinect Tracker (v20), based on tep's last post. I've added a camera index field in the config file (and in the gui) which lets you select from one of multiple connected Kinect devices.
You can get newest version (and change log) here:
http://www.vis.kaust.edu.sa/tools/KVL_Kinect_Tracker/
For now, this will work out of the box when running multiple instances of the standalone app.
If you want to run two instances of the patch, you'll need to make some mods:
1. duplicate the patch in two directories
2. make unique send/receives for each patch
3. make unique argument for all z.jsonio objects in each patch
4. modify the JSON config file uniquely for each patch, if needed
best,
Zachary
would like to add to all the positive comments for this - really great work here!
one question - when i have tracking on and set the frame buffer to -1 the blobs hold their respective labels in the video window but when i look at the unpacked OSC the labels jump.
for example: i have 2 blobs. when the one labeled "2" moves closer to the kinect it stays as "2" in the window, but it is now coming out of the first outlet from OSC-route -> unpack.
any ideas?
thanks, david
Hi David,
Thanks for the feedback. I think that is the expected behavior, although I'll have to check tomorrow when I get access to the Kinect.
Blobs will always be reordered nearest to farthest from the camera, whether or not "tracking" of the blobs is enabled, as in your settings (seems at least as sensible as left to right or other sorting logic, since it's in 3D).
But since you have tracking on, and the buffer set to -1 (i.e. infinite), the label should be correct. So if it's blob two that you're interested in, regardless of its nearness to the camera, I'd recommend passing the OSC list through a [zl iter 6] to chop it into separate 6-element lists, one per blob, then passing each to a [route desiredBlobLabel].
Let me know if that helps, or doesn't make any sense at all...
best,
Zachary
Hi Zachary,
thanks for the reply. i will give that a try.
sounds like it should sort the problem out.
cheers
david
Hi,
Thank you for sharing this, it will help me immensely with a project (I'll share the results soon).
I have a question though, as I'm new to the OSC-route object - how does one actually unpack the blob data within the patch? Or do I have to open a new patch? Sorry - maybe it's an obvious thing, but I cannot figure it out.
Thanks!
Hey great patch.
Is there any way I can take the blob data from the KVL Kinect Tracker and implement it into another patch so I can manipulate it?
thanks for the help
G
buckmulligan and jobbernaul, I think you're asking similar questions.
This depends on whether you're running KVL Kinect Tracker as a Max patch or as a standalone.
If a standalone, then
1. You need to create a [udpreceive] object at the port that you're sending the data to.
2. From there you use an OSC-Route object to route out the OSC address (whatever you've specified in the config file, defaulting to /kinect/blobs)
-so [udpreceive ] --> [OSC-Route /kinect/blobs] -->...
From there it all depends on what you want. If you want to break the list up into blob components, you can pass it through a [zl iter 6], then grab the data you want from each blob with zl slice/ecils/nth, etc.
If you're running it as a patch, you can do the same as above, or you can grab the data from a [receive kinect.blobs] object. Then pass it through [OSC-Route], etc.
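If it helps to see the same chop-and-route idea outside of Max, here's a rough plain-JavaScript sketch (just an illustration of the list layout, not something shipped with the tracker) that picks out one blob by its label:

// [zl iter 6] followed by [route wantedLabel], expressed as a function (illustration only):
function getBlob(args, wantedLabel) {
  for (var i = 0; i + 5 < args.length; i += 6) {
    if (args[i] === wantedLabel) {
      return { pixelX: args[i + 1], pixelY: args[i + 2], x: args[i + 3], y: args[i + 4], z: args[i + 5] };
    }
  }
  return null;   // that blob isn't present in this frame's message
}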
Hope that helps.
best,
Zachary
thanks for the advice, the [receive kinect.blobs] in Max worked a treat. I'm trying to use the blob data to allow the user to view a live video feed from within the user's silhouette, so this hopefully has taken me one step closer.
thanks again
Jobbernaul
I know I am taking up your time, but I was wondering if you had any ideas how I could use the blob tracking data to create a silhouette on a new jit.pwindow.
thank you in advance
Jobbernaul
Hi Jobbernaul,
You don't need any of the blob tracking stuff at all for that. If you want to mod KVL Kinect Tracker for this, just get rid of everything after the [p planCulling] subpatch. The output from that can be used as a mask, either by rolling your own method or by using jit.alphamask, etc. If that doesn't make any sense, check out the Jitter tutorials, particularly Tutorial 29: Using the Alpha Channel.
best,
Zachary
this might just be me being a complete amateur, but where is the [p planCulling] subpatch? this project is defeating me at the moment, but thank you for the help.
Thanks
Jobbernaul
Thanks very much Zachary, works very well for me.
looks awesome! i'll give it a try!
Fantastic patch. A great help to someone who is learning to interface with the Kinect.
I was wondering if anyone has managed to get this working with cv.jit.blobs.bounds rather than with centroids?
It would be nice to see whether the user is standing or sitting for example.
Thanks for the feedback.
I'm working on another version of this for VR environments with head tracking and IR markers, and that uses cv.jit.blobs.bounds. I'll share that when it's a bit more stable and mature. But it wouldn't be very hard to modify the KVL Kinect Tracker to use that object either. Some minor patch changes, and a bit of modification to the Java code. If I get any time in the near future, I'll take a look at it.
best,
Zachary
Hi Zachary,
while waiting for your next version of KVL, i would like to know if you or other people here get some random crashes?
Really random for me - after at least 15 or 20 minutes of use... Sometimes no crash for hours...
I remember a post here on the Max forum talking about jit.freenect.grab gradually filling up the RAM, which could be the cause of some crashes...
does anyone else experience crashes too?
oh by the way: Intel Core 2 Duo 2.4GHz, OS X 10.6.8
@Zachary
This is great news, I look forward to the new patch.
I haven't had the time to learn java yet so this kind of modification is out of my reach.
I've managed to get the patch working with cv.jit.blobs.bounds without your clever blob sorting mechanism, and tried to make some patching that sorts the bounds against the centroids in your patch by testing whether the middle of the bounds is equal to the value of any of the centroids, but haven't got it working properly yet.
Here is how far I got (changes are in your [blobs] patch):
@tep
I get crashes sometimes, usually with a high CPU load or if I close then reopen the kinect cam.
As I'm using freenect.grab in an installation soon I tested it by keeping it open as long as possible and managed to get 6 hours with no crashes.
I then began modifying my patch and within half an hour got another crash.
I guess we are going to have to wait for someone to make the [jit.openni] external available for OSX for proper stability.
@tep
I have experienced crashes, especially when repeatedly connecting/disconnecting to the Kinect while modifying the patch. But those seem rare on our systems. And we don't tend to leave it on for hours and hours. Sorry I can't be of more help there. The most recent jit.freenect is more stable than its predecessors though, so it's worth making sure you're using it. And wait, it looks like there's an rc5 from earlier this month (we're not using that yet). Worth a try.
best,
Zachary
Zachary, this work is so helpful. +1 on the great work feedback. I was following the request from buckmulligan and jobbernaul about how to get blob data out of the patch - e.g. some number data pertaining to the movement of three blobs - is this possible? So that one could maybe scale it to playback speed, or get values such as those from an accelerometer? I was unsure what to send with the OSC-route...
Many thanks for your time
Bests
Jay
Hi Zachary, I got somewhere getting data out, but have had the same problem as d.w.str_ - the blob data keeps swapping even though tracking is on. I then used route, as you helpfully suggested, to isolate this data, which worked well, but as you may see in my patch the data becomes very limited and actually only outputs an x-axis value for each blob instead of x, y, and z as in the normal unpack from the OSC... could you offer any insight? It would be great to get x, y, and z from the route to ensure blob number swapping doesn't happen.
Many thanks for your time.
Hi Jason,
You're confusing route's behavior with unpack, it seems. [route] looks for a list that starts with one of its arguments, and if found, removes that argument from the list and passes the remaining elements out of the corresponding outlet. So if the incoming blob list is 1 px py x y z (with 1 being the blob label), [route 1] will pass px py x y z out of the first outlet. It all works as advertised. I'm using the tracker daily where I work without issues.
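In plain JavaScript terms (just a sketch of the behavior, not anything from the patch or from CNMAT's code), [route 1] does roughly this:

// unpack splits a list positionally; route first checks the leading element,
// strips it, and passes the rest only when it matches the argument.
function routeOne(list, arg) {
  if (list[0] === arg) return list.slice(1);   // e.g. [1, px, py, x, y, z] -> [px, py, x, y, z]
  return null;                                 // no match: nothing comes out of that outlet
}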
Here's an example, to help clarify. Good luck.
best,
Zachary
Thank you for answers Zachary & DIFTL
didn't see this RC5! Going to test it thoroughly right now. Because it's for an installation, it NEEDS to run for hours and hours. Fingers crossed.
Thanks very much for the example, Zachary. I have given it a try and it seems to be working - no swapping of blobs, from what I've compared.
Many thanks for your time and patience.
J
thx a lot for sharing this... :-)
Hi Everyone, I'm just about to upgrade to Lion, any issues with this patch or is it all good?
@fp, my pleasure!
@Jason, I haven't tried it on Lion, actually, but I don't think it will be any different. I'll give it a try in a few days, when I'm back in the vicinity of a Kinect...
best,
Zachary
great job, smart and ergonomic. Thank you for sharing it with those who do not know yet : )
Great app.
The only problem I have is understanding how the Plane Culling works.
What are these planes anyway? I see that they have 3 points. Are they, well, triangle planes? :)
I am trying to move the points around but with no success.
Any ideas?
Thank you, Zachary, and thanks to everyone willing to give me some pointers.
ygr
Hi ygr,
Glad you're finding the app useful.
Plane culling, I'll admit, is not well documented. Basically, you can create up to 6 planes that define which part of the real space in front of the camera you care to track, "culling" the space on one side of the plane. The simplest way to define a plane is by specifying three points on its surface. I'm not at a computer with a Kinect right now, but try defining one plane like this, and see what happens: xyz1: -2. 1. 0. xyz2: -2. -1. 0. xyz3: -2. 0. 1.
Sometimes it culls the opposite side of the plane from what you intended, and that has to do with the way the software determines the front vs. back of the plane based on the ordering of the coordinates. If that happens, try just switching the y coords for xyz1 and xyz2, or similar. I should really make an abstracted visualization of this process so it's clearer. But once you get the hang of what's happening, it's pretty easy to use.
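If you're curious about the underlying geometry, here's a rough sketch (plain JavaScript, my own illustration rather than the actual shader code): the three points define the plane's normal via a cross product, the sign of a dot product tells you which side of the plane a point sits on, and swapping two of the three points flips that sign - which is exactly the front/back inversion described above.

// Illustration of the plane-culling idea (not the tracker's actual shader math):
// the winding order of the three points picks the plane's normal, and the normal's
// direction decides which side of the plane gets culled.
function planeFromPoints(p1, p2, p3) {
  function sub(a, b) { return [a[0] - b[0], a[1] - b[1], a[2] - b[2]]; }
  function cross(a, b) {
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]];
  }
  return { normal: cross(sub(p2, p1), sub(p3, p1)), point: p1 };
}

function sideOf(plane, q) {
  // signed distance (up to scale): positive on the normal's side, negative on the other
  var d = [q[0] - plane.point[0], q[1] - plane.point[1], q[2] - plane.point[2]];
  return plane.normal[0]*d[0] + plane.normal[1]*d[1] + plane.normal[2]*d[2];
}

// The example plane from above (roughly x = -2 m), tested against a point 1 m in front
// of the camera. Swap xyz1 and xyz2 and the sign flips, i.e. the culled side flips.
var plane = planeFromPoints([-2, 1, 0], [-2, -1, 0], [-2, 0, 1]);
sideOf(plane, [0, 0, 1]);   // negative here; with the first two points swapped it's positive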
best,
Zachary
Cool stuff, that's for sure. And very useful ;-)
It's a pity I didn't discover this project earlier and save a lot of time creating my own "isotope".
Thank you so much, Zachary, for looking into this. This is what I am trying to do, getting the hang of what is happening. ;))
I tried your example from above and everything turns black. If I play a bit with some numbers I manage to see the culling, but it's all screwed up.
I am trying to understand by playing with the examples the app comes with, namely the second (right culling) and the third (left culling) plane, as they seem to be more properly arranged. The others are way off in my setup. Don't know why. :)
Maybe if I understood what these are in OpenGL, I might manage to play with them. Are they actually triangles in 3D space? Do you know where 0. is in this perspective? (I didn't play much with the 3D camera in Max :D)
May I ask why you didn't put 4 points for each plane? It would've made more sense for me at least.
All the best,
ygreq
The coordinates are all in meters in real space, with the camera being at 0,0,0. Check out the info.rtf file that comes with the tracker (make sure you download the newest version, v20) for a *little* information on the coordinate system, etc., to help you understand what this all means. I'm positive it all works... I just used it in an installation at NYCEMF last week, and many other contexts before that. So it's just a matter of understanding what's going on. I probably won't have access to a Kinect until this evening, but when I get a chance, I'll try to snap some screenshots for you.
best,
Zachary
Thank you, Zachary. I bet it works. Like I said, I have left and right planes that are positioned pretty well. The thing is, it is a bit hard to move them around. When I try to move the y coordinate of a point, it seems that I'm not moving a point in 3D space but rather pivoting a whole line around its middle. Weird. :)
Maybe you are busy right now and I don't want to keep you from your things. I will try to play some more and maybe, just maybe, I will understand what is going on. I will come back with an answer or more pertinent questions. And I'll be waiting for those screenshots. :D
Maybe you could document the whole culling thing a bit for us dupes, whenever you have time.
Thank you so much,
ygreq
"Check out the info.rtf file that comes with the tracker"
I am guessing you are referring to this info:
"Coordinate System (in meters): right-handed
- when standing in front of camera, looking at lenses, axes defined as follows:
x axis - positive to the right
y axis - positive up
z axis - positive pointing directly out of camera into tracked space"
I copied them here for future reference to others :D
Really helpful!
Man, this is weird.
I just don't get how the culling works! Maybe after a good sleep..
Hi ygreq,
Sorry for the delay. Two things about my earlier culling plane coordinates that may have been a problem:
1. 2 meters on the x axis (relative to the camera) may not be visible in the space you're working in, as you'll need a lot of depth to see that far to the side of the camera.
2. As I wrote, sometimes the culling is inverted from what you want, due to how the software determines which side of the plane is front and back (i.e. which side needs to be culled), based on the orientation of the points. So I needed to invert the y coords of the first two points to get the desired effect. If you want to see the math behind the culling, take a look at the shader file: zs.kinectcullplane.jxs
Again, three points is one simple way to define a plane. Four points is not. You can easily define four points that don't sit on the same plane (try it, you'll see). So you just need to do some visualization and careful thinking about this and it should become a bit clearer. Skim the definition section for a plane on wikipedia: http://en.wikipedia.org/wiki/Plane_(geometry)
I could definitely come up with a way to make this culling visual and intuitive, just don't have the time right now.
I've captured a few screenshots for clarification. First, I reduced the x values from -2. to -0.5, which will allow the plane to be visible at much closer distances. I've also captured the inverted plane, so you can see what happens, and the original left cull plane together with a symmetrical right culling plane, so you can see how to mirror the planes. Move the points around, see what happens, and based on the coordinate system information in the info.rtf, see if you can make sense of it all. Hope that helps.
best,
Zachary
This looks fantastic, thanks for sharing! I can't seem to be able to download it from http://www.vis.kaust.edu.sa/tools/KVL_Kinect_Tracker/ though, any idea where I might get it from now? Thanks again!
Hi Barry,
The server seems to have gone down for some reason. I'm having someone look into it. Will post when it's back online. Should be soon...
best,
Zachary
Hi all,
Website is back up.
best,
Zachary
Hi Zachary, nice job done ;) I want to ask if there is some limitation in the Max patch for sharing the video renders with Syphon. I tried even with the first output and nothing happens; if I plug in a normal video window it works. The Syphon tags are very easy and I know I'm not doing anything wrong there. Do you have any ideas about this?
thanks!
Hi Pattyco,
Sorry for the long delay. I've added Syphon to a similar tracker that I haven't made public yet. It works fine. If I get a chance sometime soon, I'll integrate optional Syphon activation for the video matrices, exposed in the config file. Will post here when that's ready.
best,
Zachary
I would love to give this a try, but the server is down.
Can someone please check on it? You have my deepest
gratitude.
Hi Anthony,
Looks like the server is back up. I'll probably move it elsewhere in the near future, to avoid this kind of surprise downtime... Will post the new url when that is done.
best,
Zachary
Ok, I've moved the software over to my own website. All future updates will go there, and I'll get the old site to forward on to this new page soon.
best,
Zachary
Is it possible to mask the depthmap?
What I'm basically doing is playing a video with an alpha layer to track blobs which are outside of the moving mask. I got this working so far, but only when I'm close to the camera.
When the blob moves away from the camera it reappears in the mask again. This is because I'm probably only masking the first plane and not the 'depth', so to say.
Is there a solution to this? I'm subtracting the mask between the bgSubstraction & blobs subpatch.
And a big up for the tracker, really smooth!
Hi Zachary,
First of all I would like to thank you for this awesome maxpatch you have.
I'm trying to create an interactive project using a Kinect and Max/MSP, using your patch for the Kinect.
I'm trying to figure out each x and y coordinate in the display for each blob.
Like the cartesian x and y values for blob number 1 - number 2 - and so on.
Is there a way to figure this out easily?
I've been trying to unpack the values under (p blobs) -> (p get_cartesian), unpacking the list and using the first two values but that is just confusing.
I'm also new to max and am unfamiliar with java.
I want a video file that follows a particular blob. The one I have now works with 1 blob, but when there's 2 or more it gets weird. My target is to follow up to 6.
If you can help me, I'll be very grateful!
Nonetheless, Thanks Zachary
@Viesueel,
Sorry for the very delayed response. I'm not sure I understand what you're trying to do. Please re-explain in a bit more detail, and maybe I'll be able to help.
@Eyelsi,
It sounds to me like the tracker is already sending all the information you want. Take a close look at the info.rtf file to understand how the tracker formats outgoing messages. I recommend you just use the standalone app version and receive the information via a [udpreceive] object, but if you really want to use the patch version, then you can receive info either from [udpreceive] or from [receive kinect.blobs].
The information you most need to understand is at the bottom of the doc, in the section entitled "OSC messages sent by KVL Kinect Tracker". Here's a patch that grabs the real-space cartesian x and y coords of each blob (You'll need CNMAT's [OSC-route] object):
best,
Zachary
Hey Zachary, I finally took a good look at this. I'm in need of extending my depth tracking range beyond what skeleton/user tracking offers. Your patch is mighty useful!
I found one inconvenience, and that's that even though depth is taken into account for sorting, it isn't in the blob detection. At least I didn't manage to tweak the parameters so it works that way. Whenever I have 2 objects overlapping in the depth map, regardless of the depth distance between them, they get joined into one blob.
I tweaked your system a little to prevent that. I used Mrtof's contouring method found in this thread: https://cycling74.com/forums/share-blob-detection-contours-and-the-kinect/
It's in the 'camera' subpatch.
I'm using KinectSDK instead of Freenect so I also made the patch framework-agnostic. Depth and RGB images are received through 'r kinect_d' and 'r kinect_rgb' so people can use whatever kinect grabber they like outside of this patch.
I had to take out a couple of erodes from 'bgSubstraction' to make this work well. One could take out more or re-add some to tweak for their specific situation.
Patch added in zip file 'cause posting copy-compressed is broken on the forum...
Hi DTR,
Thanks for checking the tool out, and applying some changes. I'll take a look soon at all you've done.
In answer to your comment:
"I found 1 inconvenience and that’s that even though depth is taken into account for sorting it isn’t in the blob detection. At least I didn’t manage to tweak the parameters so it works that way. Whenever I have 2 objects overlapping in the depth map, regardless of the depth distance between them, they get joined in one blob."
I'm not entirely clear on what you mean here (though this may become clear after I actually look at what you've done). I think I've already implemented basically what you describe. If you set "tracking" to on (bad label here), just above the distance threshold numbox, you can get the tracker to remember all blob x/y/z locations over n frames (set frame buffer). Then when a blob disappears because it is occluded by another nearer blob, that label and location remains in memory (for n frames). Note, if you set frame buffer to -1 it will retain all labels forever (not recommended). Any blob that appears within the cartesian distance threshold (i.e. distance threshold in UI) of that stored blob will be interpreted as that blob, and rendered and served as such.
This is all demonstrated in this video: http://www.youtube.com/watch?v=4HvooLGSKwY
Let me know if that makes sense.
best,
Zachary
Hey, what I mean is seen in the video at 1:34. When hand 1 is partially occluded by hand 2, we get 1 merged blob labelled 2. In my application this needs to be prevented, so that 1 is still registered as a separate blob. That's what my mod is for. By applying a black contour to shapes in the greyscale depth image, they remain separate when partially occluding each other and register as separate blobs.
Hi everybody,
trying to run the KVL_Kinect_Tracker app (v20) on my Mac laptop, 2.4 GHz Intel Core i7.
Running OS X 10.8.5
I get no image whatsoever in the KINECT VIEW, and no updates in the fps. So it seems to not be working for some reason.
jit.freenect.grab does work and I do get an image in both IR and RGB.
has anyone else had this problem?
thanks
drew
Great share, thanks zach!
Just one more question if anyone can help, -
I need each blob to turn off/on independently -
I would like to send a '0' or turn off an oscillator synth when a blob disappears -
i.e.
is there some way I can gate on/off when each blob becomes active/disappears?
when a blob is not present, X, Y, or Z get a '0' - or a number is output when that blob is present, a 0 when not.
any help appreciated - thanks again
drew
Understanding 'Z' data
hi - just a really simple question on the Z data - sorry for not understanding.
From what I understand, X & Y output from -0.5 to +0.5 (- to the left or bottom, + to the right or top).
What is the range of the output on the Z data? I am trying to use it to control volume/amplitude, but I can't get the range I want - at the moment it only works close up to the Kinect camera (even after adjusting min/max distance). I want to use it at a larger distance from the Kinect - the sound keeps totally disappearing even when the blobs are still clearly visible.
any help appreciated, thanks
drew
Hey Zachary, in regards to adding the Syphon activation, where in the Max patch would I add that? I am quite new to Max itself, but I want to be able to send video matrices through Syphon to the visual programming software 'Isadora'. Is there a way I can add this to your already crafted program?
Thanks Zachary
Hello,
For an installation project I would like to record a bit of the video data outputted by jit.freenect.grab in order to simulate the kinect input while being off-site.
In the patch the Kinect is set to mode 0 = raw (1-plane matrices with values ranging from 0 - 2047). I can store individual matrices like this using the Jitter format, but is there a way to record a "video" of float32 matrices? I tried scaling this into the 0-255 range and saving it (raw codec) as char data. The video looks "OK" but obviously quantized (see pic); however, when using it to substitute for the live Kinect input, the blob detection in this patch doesn't work anymore. Would be great if someone could give me a hint as to whether what I'm trying to do is possible at all. Would be greatly appreciated.
Cheers,
M
OK, got it! Ever-amazing Andrew explained the coerce trick in the thread over here :) https://cycling74.com/forums/save-float32-or-float64-type-frames-jit-freenect-grab-for-later-playback/
Thank you very much for this patch.
Still working well.
You may have saved my life.
Hey Jonathan!
could you post said patch/s?
the original link at the top of the page is no more, and the whole project seems un-findable on their website.
thx
jd
Hey John
Video demo can be found here: https://www.youtube.com/watch?v=bx02WIG7ooU&list=PLAg8gEZ3dDuCgnCkKs2Y_L4RVGUht5820&index=2
And the software can be downloaded from http://www.zacharyseldess.com/KVL_KinectTracker/
Enjoy
Hi everyone,
I'm permitting myself to post in this oooold topic because of the last few messages from 2017. Happy to see that I'm not alone :D
I turned to Kinect-with-Max stuff a few days ago, and it seems like it was way easier to get data from the Kinect four years ago than today, now that PrimeSense has been bought by Apple, now that the Kinect hack "wow" effect is calming down, and now that Max is at version 7, Windows is 10, and 64-bit is becoming more common.
Fortunately I found NI mate, which seems to be the last easy way to go for tracking skeletons. (There is also dp.kinect, but I haven't given it a try yet, and it's not cross-platform.)
The fact is that now I would like to get the depth matrix from the Kinect into Max, and it's pretty funny to see that this seems way more difficult than getting complexly calculated tracked skeletons. I discovered jit.freenect.grab, I discovered the awesome work of Zachary Seldess on KVL Kinect Tracker, and I discovered THEBOYTHEYCALLJONNY saying, in April 2017, "still working well", which gave me back a lot of hope.
BUT I can't make it work. I'm on Windows 10, and I can't run the KVL KinectTracker Max patch because of its dependency on jit.freenect.grab, which comes as a .mxo that is only Mac-compatible. I read that a Windows port could be possible, but I didn't find any fork of it. And I'm stuck here. I found the jit.openni alternative (kind of a pre-dp.kinect) but, I don't know why, I can't open its .mxe file (is it only Max 6 compatible??)
Moreover, I thought that installing "NI mate" would get around the outdated/unobtainable drivers problem, making things that need the NITE stuff work, but it doesn't seem to be that simple.
So, as a brief résumé: is there a way to get the depth matrix from a Kinect (1414 for now) into Max 7 (64-bit preferably), without using dp.kinect?
I thought KVL would be the answer, but apparently it's not.
Thanks,
TFL