finding nemo ;-) w. motion/color tracking
Here is a little project I have.
Track the x y position of 5 goldfish in a fish bowl.
I am already able to do so using [cv.jit.track], but the points are lost way too easily.
Solution: reset the 5 points <- either where they were last seen (possible 'cause 0 is sent when a point is lost).
Or, I also tried: resetting the 5 points at specific predetermined positions.
The results of the data-parsing/reset routine are not great. Also, as I said, the points are lost too easily, too often. (The fish swim too fast!)
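The "reset where they were last seen" idea can be sketched outside Max. This is a hedged, conceptual illustration in Python (not a Max patch): cv.jit.track outputs 0 for a lost point, so we hold the last valid position and substitute it when a point drops out. All names here are illustrative, not part of any real library.

```python
# Conceptual sketch of the "reset where last seen" idea:
# cv.jit.track reports 0 for a lost point; hold the last valid
# position so it can be used to re-seed the tracker.

class LastSeenHold:
    """Keeps the last known (x, y) for each tracked point."""
    def __init__(self, n_points):
        self.last = [None] * n_points

    def update(self, positions):
        """positions: list of (x, y); (0, 0) means 'lost'.
        Returns positions with lost points replaced by last-seen coords."""
        out = []
        for i, (x, y) in enumerate(positions):
            if (x, y) == (0, 0) and self.last[i] is not None:
                out.append(self.last[i])      # point lost: hold last seen
            else:
                self.last[i] = (x, y)         # point valid: remember it
                out.append((x, y))
        return out

hold = LastSeenHold(2)
print(hold.update([(10, 20), (5, 5)]))   # both valid
print(hold.update([(0, 0), (6, 7)]))     # first point lost -> held at (10, 20)
```

In a Max patch this would correspond to gating the tracker output so a 0 doesn't overwrite the stored coordinate.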
Plan B: try color tracking with the [jit.chromakey] & [cv.jit.blobs.centroids] combo.
Test: [jit.findbounds] (Jitter Tutorial 25) is perfect for tracking a single x/y point within a color range. But what if you want many?
I've been trying with [jit.chromakey], with and without [jit.brcosa] for tweaking <- but the results are not too great.
How could I isolate 5 points of the same color and then connect them to [cv.jit.blobs.centroids]?
Again, the only reason I am trying Plan B is because with Plan A the points get lost too easily. I believe that if it's as fast as [jit.findbounds], the result will be perfect.
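For intuition, here is a rough sketch of what the chromakey-to-centroids chain does conceptually: threshold the frame to a binary mask, label the connected regions, and take each region's centroid. This is a hedged pure-Python illustration, not the actual cv.jit implementation; the function and variable names are mine.

```python
# Conceptual sketch of [jit.chromakey] -> [cv.jit.blobs.centroids]:
# binary mask in, one centroid per connected blob out.

def blob_centroids(mask):
    """mask: 2D list of 0/1. Returns one (x, y) centroid per blob
    (4-connected regions of 1s)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    centroids = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # flood-fill one blob, collecting its pixel coordinates
                stack, pts = [(x, y)], []
                seen[y][x] = True
                while stack:
                    cx, cy = stack.pop()
                    pts.append((cx, cy))
                    for nx, ny in ((cx+1, cy), (cx-1, cy), (cx, cy+1), (cx, cy-1)):
                        if 0 <= nx < w and 0 <= ny < h and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((nx, ny))
                # centroid = mean of the blob's pixel coordinates
                mx = sum(p[0] for p in pts) / len(pts)
                my = sum(p[1] for p in pts) / len(pts)
                centroids.append((mx, my))
    return centroids

mask = [[1, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 1]]
print(blob_centroids(mask))  # two separate blobs -> two centroids
```

The key point for the 5-fish problem: as long as the mask cleanly separates the fish from the background, you get one centroid per fish in a single pass, with no per-point tracker to lose.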
I might be on the right track with my choice of objects for Plan B, but I might be doing something wrong.
Any help or advice would be greatly appreciated.
Thanks in advance.
Have you tried experimenting with the environment at all? I would think that straight color tracking might have issues with shadows, highlights, etc., and have trouble working well.
I mess with a lot of vision-based touch and interaction screens, so personally I would say the best bet is to get to a binary image showing just the fish. Off the top of my head, you might try something like an array of IR LEDs, diffused behind the fish bowl, so you can separate the background from the foreground (the fish). You might even be able to set an IR light on top of the bowl and track the tops of the fish with an offset.
Here is a sample patch to get started. You might already have a base patch going, but with vision tracking I find that if I can't get decent data accurately and cleanly with these basic objects, then it's going to be tough. (Otherwise I end up trying to expand and filter the blobs too much, and it gets too noisy.)
----------begin_max5_patcher---------- 871.3ocwX00aaBCE84To9evhmSivFHjrGl19cLMUY.2D2B1LroIsU6+97Ggz zkjVSKD+PjEFCbtGetm604kquZRPFeKQD.9F3WfISdQMyDyb5YlzMwjfJ717 RrvrvfbdUEgIClt6lRxVo4FUzUqkf0jxZfjCD43RB.yJ.3h6aER.kU2J02Ay .71WedVaEkURjl2NraVZg4cxyt+FTzgqU8ncKNra563LICWQLOxOan3x8OQM VlulxVcaCIWZCz3kIyBmBhVFpGPIKMCpK.+9vWnf9r4EBQyBMS+2quROpFl5 LYwHaTgvwb08T4rFRMWbZZ.cJZH9Lz.xAZvtb4S0DKGnAvsUXYCcavTPPvqg 9oHLT3ADVRzHxWmUbIadBHWSEZAzczxRkbBjUxyD8PGgFGcTrY.hh7fNJ+wY ZoTdIWP5AS.OCS.+hRImjQV9ZdhOjQh1LYCNWRJLVSUXwCjhdPbgCqDBZkPw KLLCbTshNKmzf2.xUPtA6NQ.WNrDQT3A7P5hQjFzoK0anrB9ldDtK9Hu2ixJ 9Xa0nCSGfysR.aTegitzQH5fu0cbXBuyYCVxwEUDg.jD1ivd9P5B99rAJ01r QrUfeh770zhBB6nO+3l5uq0rtBqZ6PcuYfV1FLSaRZJwBHx7Y8fWSFXuAahR GEFMltCJ.mQZ5QrNnMkQ06QffLLakaxI6PRnm5fkWC9g522cuIVHxWMw14GE Zy.S7US+qxPksU8oTKzyTlkqR8Ra+ORKHbUu98ftF3Vz1kiYKikj5CRXMUBD 03bBHC2n8oaHU7GIJaa8L4Orpg2x5QWrCcua6JwGlbA7mOSpUkFVfHj6jP5E u3OLxdvGe3V+.4oSSMgmfZNWMs3uPMsiGbhxrZJ04hR7YAtabu.WjmMqQHyv xXOHxdtT4LspG+iVCIW8wLTmlxdHyXe0zzejyV0fy5QEMOKoh+r1V1EFTRYG 8eLa9Z5a7ernf21j2g0tte.G7IKHBIkgkTN6vUs3sq50CSs2eqhVTyU9N6PB JLV6o.ihs+0l6u38Ncp6nF4DpS6KpGFvM2IvEeNvAGUvE6D3z7KrW62o1ZHK Ma6HkZt6pAA1QNiZOre6TJD5ylAMet8nnK2e0ED0Q9gQcZ2F5ErkNJa1CBzb J01OdhHWEavuDzzSnF9Gphpk.C -----------end_max5_patcher-----------
feel free to email me if you have any other questions I can help with
Wow! What generosity. Thanks.
I will definitely get back to you.
Unfortunately for now, after trying 3 main approaches, and since the deadline is today, I have settled for the simplest, least orthodox way.
Let me clarify a few things:
The event is tomorrow <- but I am also performing live music, so I have to work on sound.
But the fish thing is something I want to develop further and present again in the very near future. <- So yes, you will hear from me again. Thanks.
What I have settled for, simple: [jit.scissors] (2 screens) / [jit.brcosa] / [jit.findbounds] = 2 pairs of (x, y) values.
I figured that the most important thing is to capture movement variations and to parse data coming from one single source.
With [cv.jit.blobs.centroids] <- the number of blobs always changes, making it hard to parse which is which; of course, the index numbers always change too.
Finally, to answer your question: yes, I have tried using different backgrounds.
Thanks a lot for your interest
will write soon
I hear you about the deadline.
For future reference though, you should be able to use the cv.jit.blobs.sort object to keep track of the right one. You still might have issues when they overlap (which you could probably handle with a second camera axis or something) but it should help a bunch. Check out the help file for it. Also, you might wanna just bounce around the cv.jit objects to get ideas. When I am messing with a new idea using vision I’ll hack around a few help files and combine until I like where it’s going.
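The idea behind that kind of blob sorting can be sketched in a few lines. This is a hedged, conceptual Python illustration of nearest-neighbor index matching (the actual cv.jit.blobs.sort object may differ in its details), assuming the same number of blobs in both frames:

```python
# Rough sketch of nearest-neighbor blob sorting: match each blob in the
# new frame to the closest blob from the previous frame, so a fish keeps
# the same index even when the raw blob order changes between frames.
# Greedy matching; assumes equal blob counts in prev and curr.

def sort_blobs(prev, curr):
    """prev, curr: lists of (x, y). Returns curr reordered so that
    index i stays with the point closest to prev[i]."""
    remaining = list(curr)
    ordered = []
    for px, py in prev:
        # pick the unclaimed current blob nearest to this previous blob
        best = min(remaining, key=lambda p: (p[0] - px)**2 + (p[1] - py)**2)
        remaining.remove(best)
        ordered.append(best)
    return ordered

prev = [(10, 10), (50, 50)]
curr = [(51, 49), (11, 12)]      # raw order flipped between frames
print(sort_blobs(prev, curr))    # -> [(11, 12), (51, 49)]
```

This is exactly why stable indices are possible even though the raw centroid list reorders itself every frame; overlapping fish remain the hard case, as noted above.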
thanks again for the advice.
"Check out the help file for it. Also, you might wanna just bounce around the cv.jit objects to get ideas."
Yes, that is precisely what inspires ideas. I've been doing it for a few weeks for this <- a few years for Max/MSP.
I am not a Jitter pro at all. But up to now, I've mostly been able to get close to what I wanted. Mostly in the GL stuff, though.
As for artistically manipulating parsed data <- I believe this comes with experience and a lot of curiosity (object testing), patience, and mostly focus.
That's the only thing I found weird when looking at the cv.jit help files: they show you what the object does, but not how to use it <- meaning parsing the values. <- Probably 'cause it's nothing for advanced users.
Alright, back to work.