IR Blob tracking limitations
Hello maxMSP supercrowd!
I am having trouble tracking 3 blobs from IR LEDs. The problem I am running into is that when objects exit and re-enter the camera view, the blob order is lost. The objects will certainly be coming in and out of view, so this is something I must deal with. It is important that each object is assigned a specific blob and keeps that assignment throughout.
I thought that one possibility of overcoming this would be to create simple patterns on each object with the IR LEDs: a square, a triangle, a single dot (for example), then use a combination of cv.jit objects to recognise these simple patterns and assign them to blobs (cv.jit.moments and cv.jit.learn to start). I wonder if anyone has had any luck with this kind of pattern/shape recognition?
Might it be better to distinguish between the 3 blobs by brightness? If anyone has tried any other methods with any success (or failures) I would love to hear your comments!
Couldn’t you add your own ID’ing logic on top of it? Assign IDs yourself that don’t change as long as a given blob doesn’t disappear.
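To make that concrete, here's a minimal sketch of persistent ID assignment by nearest-neighbour matching, assuming you get a list of blob centroids each frame (e.g. from cv.jit.blobs.centroids forwarded out of Max); the function name and distance threshold are made up:

```python
import math

def match_ids(tracks, detections, max_dist=50.0):
    """Greedily match this frame's blob centroids to existing track IDs
    by nearest neighbour; unmatched detections get fresh IDs.
    tracks: dict id -> (x, y); detections: list of (x, y)."""
    new_tracks = {}
    unused = list(detections)
    for tid, (tx, ty) in tracks.items():
        if not unused:
            break
        # pick the closest remaining detection for this track
        best = min(unused, key=lambda d: math.hypot(d[0] - tx, d[1] - ty))
        if math.hypot(best[0] - tx, best[1] - ty) <= max_dist:
            new_tracks[tid] = best
            unused.remove(best)
    next_id = max(tracks, default=-1) + 1
    for d in unused:  # blobs that just appeared get new IDs
        new_tracks[next_id] = d
        next_id += 1
    return new_tracks
```

This keeps an ID stable while the blob stays in frame, but as noted it can't know which object a blob belongs to after it has left and come back — that's where the marker/pattern idea comes in.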
If going the marker/tag way, you could look into the reactable framework: http://www.reactable.com/ (wow! someone dropped a business plan there…) It’s based on reacTIVision: http://reactivision.sourceforge.net/
Thanks for the reply DTR.
The problem is that the blobs are definitely going to disappear from the camera view. It will also be in the dark, so a visual ID tag would be difficult.
Basically I want to track 3 objects, as is done with the colour tracking in the videos here, but using IR LEDs so it can be done in the dark. Maintaining the IDs when the objects come in and out of frame is proving challenging.
Anyone know of a method that might work? Varying brightness or pulse rate both seem unreliable.
Perhaps you should be clearer about what you mean by ‘maintaining order’. Do you mean that you want a fixed ID for each? That’s different from maintaining the order in which they come in, for example.
Sorry, my explanation was pretty poor. I do mean fixed ID rather than order. By order I meant that the blob assigned to each object would not change, no matter where the objects move within the frame and, most challengingly, even if they were to disappear and reappear from camera view.
It seems that simple shape recognition might be the only option. But then there is the problem of changing perspective as the shapes move around in the space…
Getting closer… now tell us how large your space is, what the objects are like and how they are moved.
Glad to be edging in the right direction ;)
The objects and setup are the same as in the videos linked above and here, except that it will be dark, hence the IR LEDs rather than the colour tracking in the clips.
The tracking is done with a webcam attached to the user’s head. The objects must be tracked with this cam for the purposes of this project.
Do you have experience with a similar kind of object tracking? I would be very grateful for advice on shape recognition, or other methods that might be effective. As it is, I am considering placing 3 IR LEDs on each object, arranged in a triangle whose proportions are unique to that object (top-heavy, bottom-heavy, side-heavy…). By comparing the ratios of the triangles’ sides I thought it might be possible to recognise the objects. However, trouble may arise when the user moves around the scene and the apparent triangle shapes change. Am I going about this the wrong way? Any advice or comments welcome and appreciated!
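For what it's worth, the side-ratio comparison could be sketched like this, assuming the three LED centroids of one marker are already grouped together (all names here are hypothetical). Note that side-length ratios are only invariant under rotation and uniform scaling — perspective skew from the moving head cam will distort them, which is exactly the trouble anticipated above:

```python
import math

def side_ratios(p1, p2, p3):
    """Return the triangle's side lengths sorted and normalised by the
    longest side, giving a scale- and rotation-invariant signature."""
    sides = sorted(
        math.dist(a, b) for a, b in ((p1, p2), (p2, p3), (p3, p1))
    )
    longest = sides[-1]
    return tuple(s / longest for s in sides)

def closest_object(signature, templates):
    """Pick the template object whose ratio signature is nearest
    (sum of squared differences)."""
    return min(
        templates,
        key=lambda name: sum(
            (a - b) ** 2 for a, b in zip(signature, templates[name])
        ),
    )
```

You'd calibrate `templates` once per object with the marker facing the camera head-on, then accept that recognition degrades at steep viewing angles.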
No sorry, pattern/shape recognition is not my specialty.
You could go about it another way by tracking the orientation and position of the head cam. If you know the field of view, you can deduce which objects are in view, provided you know their positions too.
Btw, fiducial markers as in the reacTIVision framework could work here. Make the markers IR-reflective and light the space with an IR lamp. Depending on your camera resolution they might have to be pretty big, though.
Thanks DTR. I investigated using the reacTIVision project but with no luck. The system itself seems pretty robust, but the problem is that, in my project, the webcam is already in use in a Python program for the eye-tracking side of the system. While this doesn’t create a problem for use of the same video stream inside Max, reacTIVision does not seem very happy with dual usage of the camera.
Simple IR LEDs seem like the way to go… best get those maths books out.
Thanks for your help anyway :)
Sounds like that could be solved by sending the video stream from one app to the other instead of having both apps pull from the camera.
Thanks DTR. But I didn’t think that would be possible, as the app is not compatible with Syphon, so the stream can’t be sent that way.
Hi, have you thought about making the IR LEDs flash at different frequencies, just like a remote control?
I did consider using flashing IR LEDs. The problem I found was syncing them with the camera frames. Do you mean flashing the 3 LEDs in sequence, as shown in this patch?
----------begin_max5_patcher---------- 672.3oc4X10bhBCEF9Z3WQlbMqS9.4i8t96nyNchRVa5fA2Prqc6z+6KI.0p UkraEjVuPcRHI7bdyIubvm88fyJ1vKgfuCtE348rummsKSGdMs8fKYalmyJs CCJ4+tX1Cvf5Ko4az1ty34rm.T5DJs8Zx0KKVqy4Z6DwM89yBoVxVxsy5Fkf k2N9UL876ExE2o3y00HQinSlF.HQwSPAfXp4aBZBB7il4HxrqSEQeiD1tP02 U8Sq30qBbFSt.95bLDTJ9i8hXR0R9JtBYKsDSeu36a9JvQkYIurjsf+NoA0C BRpUPnDSCbxgEDb7QEjKmXfOmhAEaSHbRLRufhgtXwhb9oi7iGeTRbad+Ix9 i2JTpJATyU2wkrY41fAcrXWH0aC+chS7+QbNndCjvHqf3j2.5JvanUPb43.A +E2affc2njP974MzDet4MPu17FvjD2qaH8JvanUPb43.k7E2aH0ceRZxmOqg 5vyImgvjwqyPIOGf.X.4vBP3GXmmVuyGismINl1zguPv6+8eKkH77oUyKVKq 16pzKTOnW3TSD.RvmJUZ5T3oRWLpSymcxdbQnld9DpeUkiqJN6OuoQmBicPm htvOt4C4pfQjtcUl1muKhcDvbgb++1BKyl92M1KKVql2dKZesXvVty3kZgjo EEx2LHxtC5dQVFW9V5yDkl.xFznCtK3LOoiKdLuqT27fGNdvtvCc33gLx3Iz AdvCX9iK4yTxfwC0k8KSgPCEOIiKdL04.Hck+jNr7f657EZjwCYbsesmoPuy Sm9goiKdnC24KSgwcede3zGSAncxydP2q73R8X6As67TWtHa0pG4pxl0zhRU QwOTnLMiBrMEx5l1ZfgJ9ih1wWO.lppjXcU8vqU0uTvljHnu497h+ew7jKS2 -----------end_max5_patcher-----------
The problem I had was syncing the camera frames to the flashes of the LEDs. I just don’t know how this would be possible, considering that the LEDs would be controlled by an Arduino, I guess, so there would be latency involved.
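One way around frame-exact sync: if each LED blinks at a rate much slower than the camera's frame rate, you can identify a blob by the fraction of recent frames it was visible in, rather than by catching individual flashes. A hypothetical sketch (the duty cycles and window size are made up, and the blobs still need short-term tracking across frames, as discussed above):

```python
from collections import deque

# Hypothetical duty cycles: LED "A" always on, "B" lit ~2/3 of frames,
# "C" lit ~1/3. Blink periods must be well above the frame interval.
DUTY = {"A": 1.0, "B": 0.66, "C": 0.33}

class BlinkClassifier:
    """Track how often one blob was visible over a sliding window of
    frames and match that duty cycle to the nearest known LED pattern."""
    def __init__(self, window=30):
        self.history = deque(maxlen=window)

    def update(self, visible):
        self.history.append(1 if visible else 0)

    def classify(self):
        if not self.history:
            return None
        duty = sum(self.history) / len(self.history)
        return min(DUTY, key=lambda led: abs(DUTY[led] - duty))
```

This tolerates Arduino/camera latency because it only measures an average over ~a second of frames, at the cost of needing each blob to stay trackable while it blinks off.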
If you have an idea of a solution I would be really grateful to hear it!!
As cv.jit.blobs.centroids gives you the area of the tracked blobs, couldn’t you just use a row of IR LEDs of a different size for each object?
But due to the changing size as the blobs get closer to / further from the camera, they can’t be tracked by absolute size, only relative size, right?
i.e. biggest blob = x, middle blob = y, smallest blob = z
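The relative-size mapping above could look something like this, with blob areas standing in for whatever cv.jit.blobs.centroids reports (function and label names are made up):

```python
def assign_by_relative_size(blobs, labels=("x", "y", "z")):
    """Rank blobs from biggest to smallest area and map them onto
    fixed labels: biggest -> x, middle -> y, smallest -> z.
    blobs: list of (area, (cx, cy)) tuples."""
    ranked = sorted(blobs, key=lambda b: b[0], reverse=True)
    return {label: blob for label, blob in zip(labels, ranked)}
```

As the next reply points out, this silently mislabels blobs the moment fewer than three are visible, since the ranking has no way to know which one went missing.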
Then what happens when only one or two of them are visible?
Yes, quite right! I guess you could track how far away the user is and adjust accordingly, but that doesn’t sound like a great solution either. Some kind of marker or shape recognition sounds like the way to go. Good luck!