finding nemo ;-) w. motion/color tracking
Hi all
Here is a little project I have.
The goal:
Track the x y position of 5 goldfish in a fish bowl.
Plan A
I am already able to do so using [cv.jit.track], but the points are lost way too easily.
Solution: reset the 5 points.
Or, alternatively: reset the 5 points at specific, predetermined positions.
The results from the data-parsing/reset routine are not great. Also, as I said, the points are lost too easily, too often. (The fish swim too fast!)
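For anyone reading along, here is roughly what that reset routine amounts to, sketched in Python/OpenCV rather than as a patch (cv.jit.track is, as far as I understand, built on optical-flow point tracking). The camera index and the five reset positions are made-up placeholders:

import cv2
import numpy as np

# Hypothetical reset positions: 5 predetermined spots in the bowl.
RESET_POINTS = np.array([[80, 60], [160, 60], [240, 60],
                         [120, 180], [200, 180]], dtype=np.float32)

cap = cv2.VideoCapture(0)                    # camera index is an assumption
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
points = RESET_POINTS.reshape(-1, 1, 2).copy()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
    # Re-seed any point the tracker lost (status == 0) at its
    # predetermined position -- the "reset the 5 points" routine.
    for i, found in enumerate(status.ravel()):
        if found == 0:
            new_pts[i, 0] = RESET_POINTS[i]
    points, prev_gray = new_pts, gray
    for x, y in points.reshape(-1, 2):
        cv2.circle(frame, (int(x), int(y)), 4, (0, 255, 0), -1)
    cv2.imshow("track", frame)
    if cv2.waitKey(1) & 0xFF == 27:          # Esc to quit
        break
cap.release()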
Plan B
Try color tracking with a [jit.chromakey] & [cv.jit.blobs.centroids] combo.
Test: [jit.findbounds] (Tutorial 25) is perfect for tracking a single x/y point from a color range. But what if you want many?
I've been trying with [jit.chromakey], with and without [jit.brcosa] for tweaking.
How could I isolate 5 points of the same color and then connect them to [cv.jit.blobs.centroids]?
Again, the only reason I am trying Plan B is because Plan A isn't reliable enough.
I might be on the right track with my choice of objects for Plan B, but I might be doing something wrong. (A sketch of the pipeline I have in mind is below.)
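To spell out the pipeline I'm after, here it is sketched in Python/OpenCV (only because that's easier to type out than a patch): key one color range to a binary mask, then take the centroid of each resulting blob, so the same color gives several (x, y) points instead of one. The HSV bounds are invented and would need tuning:

import cv2
import numpy as np

def fish_centroids(frame_bgr, max_fish=5):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Rough "goldfish orange" range -- a guess, tune for your lighting.
    mask = cv2.inRange(hsv, (5, 120, 120), (25, 255, 255))
    # Clean up keying noise before looking for blobs.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    # One centroid per connected blob, largest blobs first.
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    blobs = sorted(range(1, n),                  # label 0 is the background
                   key=lambda i: stats[i, cv2.CC_STAT_AREA],
                   reverse=True)[:max_fish]
    return [tuple(centroids[i]) for i in blobs]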
Any help or advice would be greatly appreciated.
thanks in advance
phil
Have you tried experimenting with the environment at all? I would think that straight color tracking might have issues with shadows, highlights, etc., and have trouble working reliably.
I mess with a lot of vision-based touch and interaction screens, so personally I would be thinking the best bet is to get to a binary image that shows just the fish. Off the top of my head, you might try something like an array of IR LEDs diffused behind the fish bowl, so you can separate the background from the foreground (the fish). You might even be able to set an IR light on top of the bowl and track the tops of the fish with an offset.
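In pixel terms the idea is just this (a rough Python/OpenCV sketch of the same step, assuming an IR-sensitive camera; the threshold value is a guess you would tune by eye):

import cv2

cap = cv2.VideoCapture(0)        # assumes a camera that sees IR
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Backlit fish read darker than the diffused IR panel behind them,
    # so an inverted threshold leaves white fish on a black background.
    _, binary = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY_INV)
    cv2.imshow("fish silhouettes", binary)
    if cv2.waitKey(1) & 0xFF == 27:
        break
cap.release()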
Here is a sample patch to get started. You might have already gotten a base patch going, but I find with vision tracking that if I am not able to get decent, clean data with these basic objects, then it's going to be tough. (Otherwise I end up trying to expand and filter the blobs too much, and it gets too noisy.)
feel free to email me if you have any other questions I can help with
www.demandevolution.com
jon@demandevolution.com
Wow! What generosity. Thanks.
I will definitely get back to you.
Unfortunately for now, after trying 3 main approaches, and since the deadline is today, I have settled for the simplest, least orthodox way.
Let me clarify a few things:
The event is tomorrow
But the fish thing is something I want to develop further and present again in the very near future.
What I have settled for is simple: [jit.scissors] (2 screens) / [jit.brcosa] / [jit.findbounds] = 2 pairs of (x, y) values.
I figured that the most important thing is to feel the movement variations and to parse data coming from one single source with [cv.jit.blobs.centroids]. (A sketch of the fallback is below.)
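For the record, this fallback reduces to something like the following (again sketched in Python/OpenCV rather than as a patch; the color bounds are placeholders): split the frame in two like [jit.scissors], then take the center of the bounding box of the keyed pixels in each half, which is roughly the midpoint of what [jit.findbounds] reports:

import cv2
import numpy as np

def two_points(frame_bgr, lo=(5, 120, 120), hi=(25, 255, 255)):
    """One (x, y) per half-frame; lo/hi are placeholder HSV bounds."""
    h, w = frame_bgr.shape[:2]
    points = []
    for i, half in enumerate((frame_bgr[:, :w // 2], frame_bgr[:, w // 2:])):
        hsv = cv2.cvtColor(half, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, lo, hi)
        ys, xs = np.nonzero(mask)
        if len(xs) == 0:
            points.append(None)              # nothing keyed in this half
            continue
        x_off = 0 if i == 0 else w // 2      # put right half in frame coords
        # Center of the bounding box of the keyed pixels -- roughly
        # the midpoint of findbounds' min/max output.
        points.append(((xs.min() + xs.max()) / 2 + x_off,
                       (ys.min() + ys.max()) / 2))
    return points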
Finally, yes, to answer your question, I have been using different backgrounds.
Thanks a lot for your interest
will write soon
phil
I hear you about the deadline.
For future reference, though, you should be able to use the [cv.jit.blobs.sort] object to keep track of which blob is which. You still might have issues when they overlap (which you could probably handle with a second camera axis or something), but it should help a bunch. Check out the help file for it. Also, you might wanna just bounce around the cv.jit objects to get ideas; when I am messing with a new idea using vision, I'll hack around a few help files and combine them until I like where it's going. (See the sketch below for the gist of the sorting step.)
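If I understand it right, the gist of blobs.sort is nearest-neighbor matching of centroids from frame to frame, so each fish keeps a stable index even though the blob detector returns them in arbitrary order. A minimal Python sketch, assuming the blob count stays constant:

import numpy as np

def sort_blobs(prev_pts, new_pts):
    """Reorder new_pts so new_pts[i] is the blob nearest prev_pts[i]."""
    remaining = list(new_pts)
    ordered = []
    for px, py in prev_pts:
        dists = [np.hypot(qx - px, qy - py) for qx, qy in remaining]
        ordered.append(remaining.pop(int(np.argmin(dists))))
    return ordered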
jon
thanks again for the advice.
"Check out the help file for it. Also, you might wanna just bounce around the cv.jit objects to get ideas."
Yes, that is precisely what inspires ideas. I've been doing it for a few weeks for this.
I am not a Jitter pro at all, but up to now I've mostly been able to get close to what I wanted. Mostly in the GL stuff, though.
As for artistically manipulating the parsed data, that's the only thing I found weird when looking into the cv.jit help files: they show you what each object does, but not how to use it.
Alright, back to work.
phil