[sharing] Kinect depth-sensitive blob tracker, 3D bg subtraction, etc.

Jan 3, 2012 at 11:37am

Hey everyone,

Using jit.freenect, some cv.jit, GLSL, Java, and JavaScript… here’s a Kinect tracker patch/app that I’ve been using lately as a server for controlling other patches/apps. Some pretty cool features in this, IMO, that others might find useful…

best,
Zachary

demo video: http://youtu.be/bx02WIG7ooU

download app and max patches at: http://www.vis.kaust.edu.sa/tools/KVL_Kinect_Tracker/

Documentation is non-existent aside from the video and the comments in the JSON config file, but there’s probably enough info there for a start.

FEATURES:
1. 3D background subtraction

2. custom culling of Kinect depth matrix for carving out ideal tracking space

3. reports blob location data via OSC to a variable number of listening clients, as follows (see the parsing sketch after this list):
— /kinect/blobs label1 pixelX1 pixelY1 x1 y1 z1 label2 pixelX2 pixelY2 x2 y2 z2… etc., for as many blobs as are present (message length will be a multiple of 6)

4. by default, blobs are sorted and labeled in ascending order, nearest to farthest from the camera (labels start at 1)
— pixel x/y coords are the blob centroid in pixel (camera frame) space, normalized from -1. to 1., with the origin at the center of the frame
— x/y/z coords are real-space 3D blob centroid coordinates in meters (right-handed coordinate system, with z pointing out from the camera and y pointing up)

5. optional depth-sensitive tracking – attempts to consistently associate labels with correct blobs, regardless of overlap, etc.
— like cv.jit.blobs.sort, but considering depth as well
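
For anyone planning to parse these messages in a [js] object, here’s a minimal sketch of the idea (my own illustration, untested, not part of the download) that splits the flat list into per-blob records, assuming the 6-element-per-blob format above:

// parse_blobs.js -- hypothetical [js] helper, not included with the tracker.
// Put it after [udpreceive <port>] -> [OSC-route /kinect/blobs].
inlets = 1;
outlets = 1;

function list() {
    var flat = arrayfromargs(arguments);
    // each blob occupies 6 list elements: label pixelX pixelY x y z
    for (var i = 0; i + 5 < flat.length; i += 6) {
        var label  = flat[i];
        var pixelX = flat[i + 1]; // -1. to 1., origin at frame center
        var pixelY = flat[i + 2];
        var x = flat[i + 3];      // meters, right-handed, z out of camera
        var y = flat[i + 4];
        var z = flat[i + 5];
        post("blob " + label + ": " + x + " " + y + " " + z + "\n");
        outlet(0, [label, x, y, z]); // forward whatever you need
    }
}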

#60985
Jan 3, 2012 at 11:50am

nice! i had the culling etc but the blob tracking is real cool. tanx!

#219649
Jan 3, 2012 at 1:31pm

Wow! This is very impressive, thanks for including the original patch as well!

#219650
Jan 3, 2012 at 1:56pm

Great Work! Thank you very much for sharing.

#219651
Jan 3, 2012 at 2:18pm

Thanks guys, glad you like this. Please let me know your suggestions for future versions, or any strange behavior you experience. Or just gut what you need from the patch, and do your own thing…

@dtr, I’ve been wanting to implement a Kinect-based cv.jit.blobs.sort for a while now. Thinking about some other tweaks that could be added… like right now with tracking enabled, if a blob disappears for one or more frames, its label gets freed, and when that same blob reappears it gets assigned the lowest available label. It probably makes sense to make that behavior flexible, and allow a blob to disappear for a custom-defined number of frames before its label is stripped…

#219652
Jan 4, 2012 at 1:06pm

Added some more features to the depth-based tracking…

http://youtu.be/4HvooLGSKwY

in v19 of app/patch, here: http://www.vis.kaust.edu.sa/tools/KVL_Kinect_Tracker/

Now you have a buffer parameter that defines how long a blob’s location/label will be remembered by the tracker after it has disappeared from the frame.

Default is 0, meaning no frames of memory (when a blob disappears, it’s forgotten). This is the original behavior from the previous version.

A value of -1 sets memory to infinite. When a blob disappears, its coord/label remains indefinitely, and when a new blob appears within the specified distance threshold, the tracker interprets it as that old blob.

And more practically… a value of 30-300 (1-10 sec., assuming 30fps) will remember a blob’s coord/label for a short amount of time. This can be useful if, for example, a person turns sideways and is momentarily too small to register as a blob.
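
Conceptually, the buffer is just a per-label counter of frames since each blob was last seen. A rough sketch of that logic (illustrative JavaScript only, not the tracker’s actual Java):

// blob_memory.js -- sketch of the buffer idea, not the real implementation.
var buffer = 0;   // frames of memory: 0 = none, -1 = infinite
var missing = {}; // label -> frames since last seen

function contains(arr, v) {
    for (var i = 0; i < arr.length; i++) if (arr[i] === v) return true;
    return false;
}

// call once per frame with the labels detected in that frame
function updateMemory(seenLabels) {
    for (var i = 0; i < seenLabels.length; i++) {
        missing[seenLabels[i]] = 0; // seen this frame: reset its counter
    }
    for (var label in missing) {
        if (!contains(seenLabels, Number(label))) {
            missing[label] += 1; // still gone: age the label
            // forget it once it exceeds the buffer (never when buffer is -1)
            if (buffer >= 0 && missing[label] > buffer) {
                delete missing[label]; // label freed for reuse
            }
        }
    }
}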

#219653
Jan 4, 2012 at 8:14pm

this seems like a really great tool, but i cannot get it to work…

When launching the patch, Max asks for a “project.osc.recv” object which it doesn’t find. What is this object?

I can’t find any information about it.

Thank you

#219654
Jan 4, 2012 at 9:04pm

fantastic tool – thanks!

(works perfectly in Max 5, not at all in Max 6: no errors, but turning the camera on just outputs a single frame. if I toggle the kinect view window, each time it opens it contains the newest frame, but no motion)

#219655
Jan 4, 2012 at 11:46pm

mega share!

#219656
Jan 5, 2012 at 12:29am

indeed, great tool, works fine in max5

#219657
Jan 5, 2012 at 6:33am

@tep, That abstraction shouldn’t have been in the Max patch; it’s part of a larger project. I’ve taken it out now. Redownload, or just delete the object yourself. But glad you got it working, regardless.

@pseudostereo, Thanks! I don’t have Max 6 at my work, where this was made, but I do have it at home, so I’ll figure out what’s happening there soon. In Max 5, you have to do some funky things to make gl objects work without a visible window, and that’s what I’m doing. Maybe if you remove the visible-then-invisible window stuff from the patch and just make the window always visible, the problem goes away? I’ll check when I can.

#219658
Jan 6, 2012 at 11:06am

nice work, thanks for sharing!

#219659
Jan 7, 2012 at 7:15am

Hi everyone,

I made a few minor changes to ensure that the patch works correctly in Max6.

Redownload v19 for the update. I haven’t extensively checked the patch in Max6, just a few minutes of testing. So let me know if any strange behavior pops up.

Actually, I don’t understand why any changes were necessary… If anyone from Cycling ’74 is out there listening, maybe they can give some insight.

Here are the details:

1. In the Max 5 version, I wasn’t erasing the render context before banging it, since I didn’t see the need in this case. But that erase is required in Max 6 (I haven’t had time to thoroughly figure out why).

2. In the Max 5 version, I was setting the gl.render object to @transform_reset 1, and the gl.videoplane was correspondingly affected. But in Max 6, setting @transform_reset to 1 on the gl.render object doesn’t seem to affect the gl.videoplane at all, so I explicitly set the gl.videoplane attribute. This one surprises me, and it means that lots of my old patches that set @transform_reset globally may be broken in Max 6…

Anyways, it’s all working in 5 and 6 now.

best,
Zachary

#219660
Jan 7, 2012 at 4:34pm

@Zachary Thanks for sharing. I am currently working on a project that also uses a Kinect, but not for blob tracking; it is simply a data-mapping issue. I am wondering if I can ask you for suggestions through email?

Thank you

Caroline

#219661
Jan 7, 2012 at 4:39pm

Sure, though if it’s something that would benefit other people in the forum, it might be nice to ask here as well.

zseldess at gmail

#219662
Jan 7, 2012 at 4:40pm

ask here

#219663
Jan 8, 2012 at 9:18pm

Excellent work, Zachary. Your office chair looks really nice, too. Thanks for posting this stuff.

#219664
Jan 23, 2012 at 4:27pm

Hi Zachary,

as it is based on jit.freenect.grab, i suppose there’s no problem in using 2 kinects & two of your patches at the same time?

Here are the things which would seem necessary:
- “open 1” & “open 2” messages to [jit.freenect.grab]
- distinguishing the send & receive names and OSC ports in each patch
- changing the ports and making a 2nd kinect_tracker_config.json

am i forgetting something?

#219665
Jan 23, 2012 at 9:05pm

Hi tep,

That sounds like everything, although I won’t know for sure until I get back to my Kinects in a week or so. If you use the standalone, the open message should be all you need to change, I think. I’ll rebuild it to allow that to be selected in the interface and via the JSON config in the near future.

Also, just fyi, I’m planning to update this to make it possible to carve out multiple tracking spaces with one camera, but I need to think it through some more. I’m also going to be working in the next few months on a version that acts as a tracking server for 3D nav in CAVE environments…

best,
Zachary

#219666
Jan 23, 2012 at 10:51pm

This is super awesome. Thanks for sharing.

#219667
Jan 31, 2012 at 11:29am

Hi guys,

Another small update to KVL Kinect Tracker (v20), based on tep’s last post. I’ve added a camera index field in the config file (and in the gui) which lets you select from one of multiple connected Kinect devices.

You can get newest version (and change log) here:

http://www.vis.kaust.edu.sa/tools/KVL_Kinect_Tracker/

For now, this will work out of the box when running multiple instances of the standalone app.

If you want to run two copies of the patch instead, you’ll need to make some mods:

1. duplicate the patch in two directories
2. make unique send/receives for each patch
3. make unique arguments for all z.jsonio objects in each patch
4. modify the JSON config file uniquely for each patch, if needed

best,
Zachary

#219668
Feb 17, 2012 at 4:59pm

would like to add to all the positive comments for this – really great work here!

one question – when i have tracking on and set the frame buffer to -1, the blobs hold their respective labels in the video window, but when i look at the unpacked OSC, the labels jump.

for example: i have 2 blobs. when the one labeled “2” moves closer to the kinect, it stays “2” in the window, but it is now coming out of the first outlet of [OSC-route] -> [unpack].

any ideas?
thanks, david

#219669
Feb 17, 2012 at 5:27pm

Hi David,

Thanks for the feedback. I think that is the expected behavior, although I’ll have to check tomorrow when I get access to the Kinect.

Blobs will always be reordered nearest to farthest from the camera, whether or not “tracking” of the blobs is enabled, as in your settings (this seems at least as sensible as left-to-right or any other sorting logic, since it’s in 3D).

But since you have tracking on and the buffer set to -1 (i.e. infinite), the labels should be correct. So if it’s blob 2 that you’re interested in, regardless of its nearness to the camera, I’d recommend passing the OSC list through a [zl iter 6] to chop it into separate 6-element lists (one per blob), then passing each of those to a [route desiredBlobLabel].
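
If you’d rather do that step in [js], here’s a minimal equivalent (hypothetical helper, assuming the 6-element-per-blob format from the first post):

// follow_blob.js -- hypothetical [js] stand-in for [zl iter 6] -> [route N]
inlets = 1;
outlets = 1;

var targetLabel = 2; // label of the blob to follow

function list() {
    var flat = arrayfromargs(arguments);
    for (var i = 0; i + 5 < flat.length; i += 6) {
        if (flat[i] === targetLabel) {
            // pixelX pixelY x y z of the tracked blob
            outlet(0, flat.slice(i + 1, i + 6));
        }
    }
}

// a "target 3" message switches the followed label
function target(n) { targetLabel = n; }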

Let me know if that helps, or doesn’t make any sense at all…

best,
Zachary

#219670
Feb 17, 2012 at 5:43pm

Hi Zachary,

thanks for the reply. i will give that a try.
sounds like it should sort the problem out.

cheers
david

#219671
Mar 2, 2012 at 5:07pm

Hi,

Thank you for sharing this, it will help me immensely with a project (I’ll share the results soon).

I have a question though, as I’m new to the OSC-route object – how does one actually unpack the blob data within the patch? Or do I have to open a new patch? Sorry – maybe it’s an obvious thing, but I cannot figure it out.

Thanks!

#219672
Mar 3, 2012 at 1:50pm

Hey great patch.

Is there any way I can take the blob data from the KVL Kinect Tracker and use it in another patch, so I can manipulate it?

thanks for the help
G

#219673
Mar 3, 2012 at 2:09pm

buckmulligan and jobbernaul, I think you’re asking similar questions.

This depends on whether you’re running KVL Kinect Tracker as a Max patch or as a standalone.

If a standalone, then:
1. You need to create a [udpreceive] object at the port that you’re sending the data to.
2. From there, use an [OSC-route] object to route out the URL (whatever you’ve specified in the config file, defaulting to /kinect/blobs)
– so [udpreceive] –> [OSC-route /kinect/blobs] –> …

From there it all depends on what you want. If you want to break the list up into blob components, you can pass it through a [zl iter 6], then grab the data you want from each blob with [zl slice]/[zl ecils]/[zl nth], etc.

If you’re running it as a patch, you can do the same as above, or you can grab the data from a [receive kinect.blobs] object, then pass it through [OSC-route], etc.

Hope that helps.

best,
Zachary

#219674
Mar 3, 2012 at 2:16pm

thanks for the advice, the [receive kinect.blobs] in Max worked a treat. I’m trying to use the blob data to let the user view a live video feed from within the user’s silhouette, so this hopefully has taken me one step closer.

thanks again

Jobbernaul

#219675
Mar 3, 2012 at 4:05pm

I know I am taking your time, but I was wondering if you had any ideas how I could use the blob tracking data to create a silhouette in a new jit.pwindow.

thank you in advance
Jobbernaul

#219676
Mar 5, 2012 at 5:51pm

Hi Jobbernaul,

You don’t need any of the blob tracking stuff at all for that. If you want to mod KVL Kinect Tracker for this, just get rid of everything after the [p planCulling] sub patch. The output from that can be used as a mask, either by rolling your own method, using jit.alphamask, etc. If that doesn’t make any sense, check out the Jitter tutorials, particularly Tutorial 29: Using the Alpha Channel.

best,
Zachary

#219677
Mar 5, 2012 at 6:29pm

this might just be me being a complete amateur, but where is the [p planCulling] sub patch? this project is defeating me at the moment, but thank you for the help.

Thanks
Jobbernaul

#219678
Mar 6, 2012 at 9:02pm

Thanks very much Zachary, works very well for me.

#219679
Mar 21, 2012 at 10:15am

looks awesome! i’ll give it a try!

#219680
Mar 23, 2012 at 1:09am

Fantastic patch. A great help to someone who is learning to interface with the Kinect.

I was wondering if anyone has managed to get this working with cv.jit.blobs.bounds rather than with centroids?

It would be nice to see whether the user is standing or sitting for example.

#219681
Mar 23, 2012 at 8:01pm

Thanks for the feedback.

I’m working on another version of this for VR environments with head tracking and IR markers, and that one uses cv.jit.blobs.bounds. I’ll share it when it’s a bit more stable and mature. But it wouldn’t be very hard to modify the KVL Kinect Tracker to use that object either: some minor patch changes, and a bit of modification to the Java code. If I get any time in the near future, I’ll take a look at it.

best,
Zachary

#219682
Mar 23, 2012 at 9:35pm

Hi Zachary,
while waiting for your next version of KVL, i would like to know if you or some other people here get random crashes?

Really random for me – after at least 15 or 20 min of use… Sometimes no crash for hours…

I remember a post here on the Max forum about jit.freenect.grab gradually filling the RAM, which could be the cause of some crashes…

does someone experience crashes too?

oh, by the way: intel core2duo 2.4GHz, os 10.6.8

#219683
Mar 23, 2012 at 10:20pm

@Zachary
This is great news, I look forward to the new patch.
I haven’t had the time to learn Java yet, so this kind of modification is out of my reach.
I’ve managed to get the patch working with cv.jit.blobs.bounds, without your clever blob-sorting mechanism, and tried some patching that sorts the bounds against the centroids in your patch, by testing whether the middle of each bound equals the value of any of the centroids, but I haven’t got it working properly yet.
Here is how far I got (changes are in your [blobs] patch):

– Pasted Max Patch, click to expand. –

@tep
I get crashes sometimes, usually with a high CPU load or if I close then reopen the kinect cam.
As I’m using freenect.grab in an installation soon, I tested it by keeping it open as long as possible, and managed 6 hours with no crashes.
I then began modifying my patch and within half an hour got another crash.

I guess we are going to have to wait for someone to make the [jit.openni] external available for OSX for proper stability.

#219684
Mar 24, 2012 at 9:01am

@tep

I have experienced crashes, especially when repeatedly connecting/disconnecting the Kinect while modifying the patch. But those seem rare on our systems, and we don’t tend to leave it on for hours and hours, so sorry I can’t be of more help there. The most recent jit.freenect is more stable than its predecessors, though, so it’s worth making sure you’re using it. And wait, it looks like there’s an rc5 from earlier this month (we’re not using that yet). Worth a try.

best,
Zachary

#219685
Mar 27, 2012 at 1:09pm

Zachary, this work is so helpful. +1 on the great work feedback. I was following the request of buckmulligan and jobbernaul on how to get blob data out of the patch – e.g. some number data pertaining to the movement of three blobs – is this possible? So that one could maybe scale it to playback speed, or get values such as those from an accelerometer? I was unsure what to send with the OSC-route…
Many thanks for your time
Bests
Jay

#219686
Mar 27, 2012 at 2:50pm

Hi Zachary, I got somewhere getting data out, but have had the same problem as d.w.str_ – the blob data keeps swapping even though tracking is on. I then used route, as you helpfully suggested, to isolate this data, which worked well. But, as you may see in my patch, the data becomes very limited and actually only outputs an x-axis value for each blob, instead of x, y and z as in the normal unpack from the OSC… could you offer any insight? It would be great to get x, y and z from the route, to ensure blob-number swapping doesn’t happen.
Many thanks for your time.

– Pasted Max Patch, click to expand. –
#219687
Mar 27, 2012 at 4:50pm

Hi Jason,

You’re confusing route’s behavior with unpack, it seems. [route] looks for a list that starts with one of its arguments and, if found, removes that argument from the list and passes the remaining elements out of the corresponding outlet. So if the incoming blob list is 1 px py x y z (with 1 being the blob label), [route 1] will pass px py x y z out of the first outlet. It all works as advertised; we use the tracker daily where I work without issues.

Here’s an example, to help clarify. Good luck.

best,
Zachary

– Pasted Max Patch, click to expand. –
#219688
Mar 27, 2012 at 5:01pm

Thank you for the answers, Zachary & DIFTL

didn’t see this RC5! Going to test it deeply right now. Because it’s for an installation, it NEEDS to run for hours and hours. Fingers crossed.

#219689
Mar 28, 2012 at 3:43pm

Thanks very much for the example, Zachary. I have given it a try and it seems to be working – no swapping of blobs as far as I can tell.
Many thanks for your time and patience.
J

#219690
Jul 9, 2012 at 8:59pm

thx a lot for sharing this… :-)

#219691
Jul 14, 2012 at 2:49pm

Hi Everyone, I’m just about to upgrade to Lion, any issues with this patch or is it all good?

#219692
Jul 15, 2012 at 12:45am

@fp, my pleasure!

@Jason, I haven’t tried it on Lion, actually, but I don’t think it will be any different. I’ll give it a try in a few days, when I’m back in the vicinity of a Kinect…

best,
Zachary

#219693
Sep 4, 2012 at 10:03pm

great job, smart and ergonomic. Thank you for sharing it with those who do not know yet : )

#219694
Apr 8, 2013 at 4:32pm

Great app.

The only problem I have is understanding how the Plane Culling works.

What are these planes anyway? I see that they have 3 points. Are they, well, triangle planes? :)
I am trying to move the points around, but with no success.

Any ideas?

Thank you, Zachary, and thanks to everyone willing to give me some pointers.
ygr

#219695
Apr 8, 2013 at 5:18pm

Hi ygr,

Glad you’re finding the app useful.

Plane culling, I’ll admit, is not well documented. Basically, you can create up to 6 planes that define which part of the real space in front of the camera you care to track, “culling” the space on one side of each plane. The simplest way to define a plane is by specifying three points on its surface. I’m not at a computer with a Kinect right now, but try defining one plane like this and see what happens: xyz1: -2. 1. 0. xyz2: -2. -1. 0. xyz3: -2. 0. 1.

Sometimes it culls the opposite side of the plane from what you intended; that has to do with the way the software determines the front vs. back of the plane, based on the ordering of the coordinates. If that happens, try switching the y coords of xyz1 and xyz2, or similar. I should really make an abstracted visualization of this process so it’s clearer. But once you get the hang of what’s happening, it’s pretty easy to use.
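
If it helps to see the geometry: the three points define the plane’s normal via a cross product, and a point is kept or culled according to the sign of its offset along that normal. A rough sketch of the test in JavaScript (my own illustration; the patch’s actual math lives in its GLSL shader):

// plane_cull.js -- illustrative plane-side test, not the shader code.
function sub(a, b)   { return [a[0]-b[0], a[1]-b[1], a[2]-b[2]]; }
function cross(a, b) {
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]];
}
function dot(a, b)   { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

// true if point p falls on the culled side of the plane through p1, p2, p3.
// Swapping p1 and p2 flips the normal, which is why reordering the points
// inverts which side gets culled.
function culled(p, p1, p2, p3) {
    var n = cross(sub(p2, p1), sub(p3, p1)); // plane normal
    return dot(n, sub(p, p1)) < 0;           // signed side test
}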

best,
Zachary

#219696
Apr 8, 2013 at 5:44pm

Cool stuff, that’s for sure. And very useful ;-)

It’s a pity I didn’t discover this project earlier; it would have saved me a lot of time creating my own “isotope”.

#219697
Apr 8, 2013 at 5:51pm

Thank you so much, Zachary, for looking into this. This is what I am trying to do, getting the hang of what is happening. ;))

I tried your example from above and everything turns black. If I play a bit with some numbers, I manage to see the culling, but it’s all screwed up.
I am trying to understand by playing with the examples the app comes with, namely the second (right culling) and the third (left culling) planes, as they seem to be more properly arranged. The others are way off in my setup. Don’t know why. :)

Maybe if I understood what these are in OpenGL, I might manage to play with them. Are they actually triangles in 3d space? Do you know where 0. is in this perspective? (I didn’t play much with the 3d camera in Max :D)
May I ask why you didn’t use 4 points for each plane? It would’ve made more sense, for me at least.

All the best,
ygreq

#219698
Apr 8, 2013 at 5:56pm

The coordinates are all in meters in real space, with the camera at 0,0,0. Check out the info.rtf file that comes with the tracker (make sure you download the newest version, v20) for a *little* information on the coordinate system, etc., to help you understand what this all means. I’m positive it all works… I just used it in an installation at NYCEMF last week, and in many other contexts before that. So it’s just a matter of understanding what’s going on. I probably won’t have access to a Kinect until this evening, but when I get a chance, I’ll try to snap some screenshots for you.

best,
Zachary

#219699
Apr 8, 2013 at 6:05pm

Thank you, Zachary. I bet it works. Like I said, I have left and right planes positioned pretty well. The thing is, it’s a bit hard to move them around. When I try to move the y coordinate of a point, it seems that I don’t move a point in 3d space, but more like pivot a whole line around its middle. Weird. :)

Maybe you are busy right now and I don’t want to keep you from your things. I will try to play some more and maybe, just maybe, I will understand what is going on. I will come back with an answer or more pertinent questions. And I’ll be waiting for those screenshots. :D

Maybe you could document the whole culling thing a bit for us dupes, whenever you have time.

Thank you so much,
ygreq

#219700
Apr 8, 2013 at 6:21pm

“Check out the info.rtf file that comes with the tracker”

I am guessing you are referring to this info:

“Coordinate System (in meters): right-handed
- when standing in front of camera, looking at lenses, axes defined as follows:
x axis – positive to the right
y axis – positive up
z axis – positive point directly out of camera into tracked space”

I copied it here for future reference, for others :D
Really helpful!

#219701
Apr 8, 2013 at 6:56pm

Man, this is weird.

I just don’t get how the culling works! Maybe after a good sleep…

#219702
Apr 11, 2013 at 4:49pm

Hi ygreq,

Sorry for the delay. Two things about my earlier culling plane coordinates that may have been a problem:
1. 2 meters out on the x axis (relative to the camera) may not be visible in the space you're working in, as you'll need a lot of depth to see that far to the side of the camera.

2. As I wrote, sometimes the culling is inverted from what you want, due to how the software determines which side of the plane is front and back (i.e. which side gets culled), based on the orientation of the points. So I needed to invert the y coords of the first two points to get the desired effect. If you want to see the math behind the culling, take a look at the shader file: zs.kinectcullplane.jxs

Again, three points is a simple way to define a plane. Four points is not: you can easily define four points that don't sit on the same plane (try it, you'll see). So you just need to do some visualization and careful thinking about this, and it should become a bit clearer. Skim the definition section for a plane on Wikipedia: http://en.wikipedia.org/wiki/Plane_(geometry)

I could definitely come up with a way to make this culling visual and intuitive; I just don't have the time right now.

I've captured a few screenshots for clarification. First, I reduced the x values from -2. to -0.5, which will allow the plane to be visible at much closer distances. I've also captured the inverted plane, so you can see what happens, and the original left cull plane together with a symmetrical right culling plane, so you can see how to mirror the planes. Move the points around, see what happens, and, based on the coordinate system information in the info.rtf, see if you can make sense of it all. Hope that helps.

best,
Zachary


Attachments:
  1. rightcull.png
#219703
Apr 11, 2013 at 6:52pm

This looks fantastic, thanks for sharing! I can’t seem to download it from http://www.vis.kaust.edu.sa/tools/KVL_Kinect_Tracker/ though – any idea where I might get it now? Thanks again!

#219704
Apr 11, 2013 at 8:13pm

Hi Barry,

The server seems to have gone down for some reason. I’m having someone look into it. Will post when it’s back online. Should be soon…

best,
Zachary

#219705
Apr 12, 2013 at 2:49am

Hi all,

Website is back up.

best,
Zachary

#219706
Jun 16, 2013 at 4:01pm

Hi Zachary, nice job ;) I want to ask if there is some limitation in the Max patch for sharing the video renders with Syphon. I tried even with the first output and nothing happens; if I plug in a normal video window, it works. The Syphon tags are very easy and I know I’m not doing anything wrong there. Do you have any ideas about this?

thanks!

#252951
Jul 25, 2013 at 9:43am

Hi Pattyco,

Sorry for the long delay. I’ve added Syphon to a similar tracker that I haven’t made public yet, and it works fine. If I get a chance sometime soon, I’ll integrate optional Syphon activation for the video matrices, exposed in the config file. Will post here when that’s ready.

best,
Zachary

#257116
Jul 25, 2013 at 3:21pm

I would love to give this a try, but the server is down. Can someone please check on it? You have my deepest gratitude.

#257132
Jul 26, 2013 at 3:52pm

Hi Anthony,

Looks like the server is back up. I’ll probably move it elsewhere in the near future, to avoid this kind of surprise downtime… Will post the new url when that is done.

best,
Zachary

#257260
Jul 26, 2013 at 5:20pm

Ok, I’ve moved the software over to my own website. All future updates will go there, and I’ll get the old site to forward to this new page soon.

http://www.zacharyseldess.com/KVL_KinectTracker/

best,
Zachary

#257271
Sep 12, 2013 at 5:47am

Is it possible to mask the depth map?

What I’m basically doing is playing a video with an alpha layer, to track blobs which are outside of the moving mask. I’ve got this working so far, but only when I’m close to the camera.
When a blob moves away from the camera, it re-appears in the mask again. This is probably because I only mask the first plane and not the ‘depth’, so to say.
Is there a solution to this? I’m subtracting the mask between the bgSubstraction & blobs subpatches.

And a big up for the tracker, really smooth!

#265101
Oct 30, 2013 at 6:05pm

Hi Zachary,

First of all, I would like to thank you for this awesome Max patch.
I’m trying to create an interactive project using a Kinect and Max/MSP, using your patch for the Kinect.

I’m trying to figure out the x and y coordinates in the display for each blob –
like the cartesian x and y values for blob number 1, number 2, and so on.

Is there a way to figure this out easily?
I’ve been trying to unpack the values under (p blobs) -> (p get_cartesian), unpacking the list and using the first two values, but that is just confusing.
I’m also new to Max and unfamiliar with Java.

I want a video file to follow a particular blob. The one I have now works with 1 blob, but when there are 2 or more it gets weird. My target is to follow up to 6.

If you can help me, I’ll be very grateful!

Nonetheless, thanks Zachary

#269690
Nov 3, 2013 at 3:28pm

@Viesueel,

Sorry for the very delayed response. I’m not sure I understand what you’re trying to do. Please re-explain in a bit more detail, and maybe I’ll be able to help.

@Eyelsi,

It sounds to me like the tracker is already sending all the information you want. Take a close look at the info.rtf file to understand how the tracker formats outgoing messages. I recommend you just use the standalone app version and receive the information via a [udpreceive] object; but if you really want to use the patch version, you can receive the info either from [udpreceive] or from [receive kinect.blobs].

The information you most need to understand is at the bottom of the doc, in the section entitled “OSC messages sent by KVL Kinect Tracker”. Here’s a patch that grabs the real-space cartesian x and y coords of each blob (You’ll need CNMAT’s [OSC-route] object):

– Pasted Max Patch, click to expand. –

best,
Zachary

#270047
