Forums > MaxMSP

IR Blob tracking limitations

May 14, 2014 | 3:12 am

Hello maxMSP supercrowd!

I am having trouble tracking 3 blobs from IR LEDs. The problem I am running into is that when objects exit and re-enter the camera view, the blob order is lost. The shapes will certainly be moving in and out of view, so this is something I must deal with. It is important that each object is assigned a specific blob and keeps that assignment throughout.

I thought one possibility of overcoming this is by creating simple patterns on each object with the IR LEDs: a square, a triangle, a single dot (for example), then using a combination of cv.jit objects to recognise these simple patterns and assign them to blobs (cv.jit.moments and cv.jit.learn to start). I wonder if anyone has had any luck with this kind of pattern/shape recognition?

Might it be better to distinguish between the 3 blobs by brightness? If anyone has tried other methods with any success (or failure), I would love to hear your comments!

Cheers



dtr
May 14, 2014 | 5:40 am

Couldn’t you add your own ID’ing logic on top of it? Assign IDs yourself that don’t change as long as a given blob doesn’t disappear.
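Just a sketch of what I mean, in Python rather than a patch (all the names and the distance threshold here are made up; the centroids would come from something like cv.jit.blobs.centroids):

```python
import math

def assign_ids(prev, centroids, next_id, max_dist=50.0):
    """Match this frame's blob centroids to last frame's IDs by nearest neighbour.

    prev: dict {id: (x, y)} from the previous frame
    centroids: list of (x, y) centroids detected this frame
    next_id: first unused ID, handed out to newly appeared blobs
    Returns (new {id: (x, y)} dict, updated next_id).
    """
    assigned = {}
    unmatched = list(centroids)
    # Greedily pair each known ID with its closest current centroid,
    # as long as it hasn't jumped further than max_dist in one frame.
    for bid, (px, py) in prev.items():
        if not unmatched:
            break
        best = min(unmatched, key=lambda c: math.hypot(c[0] - px, c[1] - py))
        if math.hypot(best[0] - px, best[1] - py) <= max_dist:
            assigned[bid] = best
            unmatched.remove(best)
    # Any leftover centroids are new blobs: give them fresh IDs.
    for c in unmatched:
        assigned[next_id] = c
        next_id += 1
    return assigned, next_id
```

As you say, this only keeps an ID stable while the blob stays in view; once it leaves and returns it gets a fresh ID.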

If going the marker/tag way, you could look into the reactable framework: http://www.reactable.com/ (wow! someone dropped a business plan there…) It’s based on reacTIVision: http://reactivision.sourceforge.net/


May 14, 2014 | 8:45 pm

Thanks for the reply DTR.

The problem is that the blobs are definitely going to disappear from camera view. The space will also be dark, so a printed ID tag would be difficult to see.

Basically I want to track 3 objects as is done with colour tracking in the videos here, but using IR LEDs so it can be done in the dark. Maintaining the order when they come in and out of frame is proving challenging, though.

Anyone know of a method that might work? Varying brightness or pulse rate both seem unreliable.

Cheers!



dtr
May 15, 2014 | 2:23 am

Perhaps you should be clearer about what you mean by ‘maintaining order’. Do you mean that you want a fixed ID for each? That’s different from, say, maintaining the order in which they come into view.


May 15, 2014 | 4:03 am

Sorry, my explanation was pretty poor. I do mean fixed ID rather than order. By order I meant that the blob assigned to each object would not change, no matter where the objects move within the frame and, most challengingly, even if they were to disappear from and reappear in the camera view.

It seems that simple shape recognition might be the only option. But then there is the problem of changing perspective as the shapes move around in the space…



dtr
May 15, 2014 | 6:31 am

Getting closer… now tell us how large your space is, what the objects are like and how they are moved.


May 15, 2014 | 7:57 am

Glad to be edging in the right direction ;)

The objects and setup are the same as in the videos linked above and here, apart from the fact that it will be dark, hence the IR LEDs rather than colour tracking as in the clips.

The tracking is done with a webcam attached to the user’s head. The objects must be tracked with this cam for the purposes of this project.

Do you have experience with a similar kind of object tracking? I would be very grateful for advice on shape recognition, or other methods that might be effective. As it is, I am considering placing 3 IR LEDs on each object, arranged in a triangle whose proportions are unique to that object (top-heavy, bottom-heavy, side-heavy…). By comparing the ratios of the triangles’ sides, I thought it might be possible to recognise the objects. However, trouble may arise when the user moves around the scene and the triangle shapes change with perspective. Am I going about this the wrong way? Any advice or comments welcome and appreciated!
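To show what I mean by comparing ratios, here is a rough Python sketch (names are my own; it assumes the 3 LED centroids per object are already grouped). Normalising the side lengths by the perimeter makes the signature scale-invariant, though it would still distort under strong perspective:

```python
import math
from itertools import combinations

def triangle_signature(points):
    """Scale-invariant signature of a 3-LED triangle:
    the three side lengths, sorted and normalised by the perimeter."""
    sides = sorted(math.dist(a, b) for a, b in combinations(points, 2))
    perimeter = sum(sides)
    return tuple(s / perimeter for s in sides)

def closest_object(points, signatures):
    """Return the label whose stored signature best matches this triangle.

    signatures: dict {label: signature} built once per object."""
    sig = triangle_signature(points)
    return min(signatures,
               key=lambda k: sum((a - b) ** 2
                                 for a, b in zip(sig, signatures[k])))
```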

Cheers



dtr
May 15, 2014 | 8:58 am

No sorry, pattern/shape recognition is not my specialty.

You could go about it another way by tracking the orientation and position of the head cam. If you know the camera’s field of view and the objects’ positions, you can deduce which objects are in view.
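Something along these lines (a 2D floor-plan sketch with made-up names; it assumes you have the cam’s position and yaw from some other tracking source):

```python
import math

def visible_objects(cam_pos, cam_yaw_deg, fov_deg, objects):
    """Return the labels of objects inside the camera's horizontal FOV.

    cam_pos: (x, y) camera position on the floor plan
    cam_yaw_deg: camera heading in degrees
    fov_deg: horizontal field of view in degrees
    objects: dict {label: (x, y)} of known object positions
    """
    half = math.radians(fov_deg) / 2.0
    yaw = math.radians(cam_yaw_deg)
    seen = []
    for label, (ox, oy) in objects.items():
        bearing = math.atan2(oy - cam_pos[1], ox - cam_pos[0])
        # Smallest signed angle between object bearing and camera heading.
        diff = math.atan2(math.sin(bearing - yaw), math.cos(bearing - yaw))
        if abs(diff) <= half:
            seen.append(label)
    return seen
```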



dtr
May 15, 2014 | 8:59 am

Btw, fiducial markers as in the reacTIVision framework could work here. Make the markers IR-reflective and light the space with an IR lamp. Depending on your camera resolution they might have to be pretty big, though.


May 15, 2014 | 7:21 pm

Thanks DTR. I investigated using the reacTIVision project but with no luck. The system itself seems pretty robust, but the problem is that, in my project, the webcam is already in use in a Python program for the eye-tracking side of the system. While this doesn’t create a problem for use of the same video stream inside Max, reacTIVision does not seem very happy with dual usage of the camera.

Simple IR LEDs seem like the way to go… best get those maths books out.

Thanks for your help anyway :)



dtr
May 16, 2014 | 2:30 am

Sounds like that could be solved by sending the video stream from one app to the other instead of having both apps pull from the camera.


May 17, 2014 | 8:48 pm

Thanks DTR, but I didn’t think that would be possible, as the app is not compatible with Syphon, so the stream can’t be sent that way.


July 8, 2014 | 9:01 am

Hi, have you thought about flashing the IR LEDs at different frequencies, just like a remote control?
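Purely as a sketch of the idea (my own made-up code, not tested with real hardware): if each LED blinks at its own rate, well below the camera frame rate, you could estimate the blink period from each blob’s on/off history over a window of frames, without any hardware sync:

```python
def estimate_blink_period(history):
    """Estimate a blob's blink period, in frames, from its on/off history.

    history: list of booleans/0-1 values, one per camera frame,
             True when the blob was visible in that frame.
    Returns the mean spacing between off->on transitions,
    or None if fewer than two transitions were seen.
    """
    rises = [i for i in range(1, len(history))
             if history[i] and not history[i - 1]]
    if len(rises) < 2:
        return None
    gaps = [b - a for a, b in zip(rises, rises[1:])]
    return sum(gaps) / len(gaps)
```

Each object’s LED gets a distinct period, and the measured period identifies the blob whenever it has been in view long enough to blink a couple of times.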


July 9, 2014 | 3:44 am

Hi there

I did consider using flashing IR LEDs. The problem I found was synching them with the camera frames. Do you mean flashing the 3 LEDs in sequence, as shown in this patch?

– Pasted Max Patch –

I just don’t know how the synching would be possible, considering that the LEDs would be controlled by an Arduino, I guess, so there would be latency involved.

If you have an idea of a solution I would be really grateful to hear it!!


July 13, 2014 | 12:26 pm

As cv.jit.blobs.centroids gives you the area of the blobs tracked, couldn’t you just use a row of IR LEDs with a different size for each object?
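I mean something like this (a minimal Python sketch with invented names, assuming each blob comes in as a centroid plus an area value; as the next replies point out, it only holds up while all three objects are in frame at comparable distances):

```python
def ids_by_relative_size(blobs, labels=("x", "y", "z")):
    """Assign labels by area rank: the largest blob gets the first label.

    blobs: list of (cx, cy, area) tuples, one per detected blob.
    Returns {label: (cx, cy)} for as many blobs as there are labels.
    """
    ranked = sorted(blobs, key=lambda b: b[2], reverse=True)
    return {label: (b[0], b[1]) for label, b in zip(labels, ranked)}
```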


July 14, 2014 | 7:28 pm

But due to the changing apparent size as the blobs get closer to / further from the camera, they can’t be tracked by absolute size, only relative size, right?

i.e. biggest blob = x, middle blob = y, smallest blob = z

Then what happens when only one or two of them are visible?


July 15, 2014 | 12:32 am

Yes, quite right! I guess you could track how far away the user is and adjust accordingly, but that doesn’t sound like a great solution either. Some kind of marker or shape recognition sounds like the way to go. Good luck!

