cv.jit xy coordinates being scaled to a dest size

Jun 5, 2009 at 4:00pm

Hello,

I am working on a cv patch that is tracking blobs with cv.jit.blob.label. I have had a hell of a time dealing with how the label object organizes the blob IDs, but I have another issue I am hoping to get some tips on.

I have a fisheye camera tracking blobs, and I want to map its coordinates to fit my screen. I need to make the standard calibration screen used by Wiimote whiteboard apps, touchlib, reacTIVision, etc. After looking around in the stitcher external (flock) I got some ideas from its image stitching. I am thinking the best approach will be a jit.window that goes full screen on a key command, with a few basic key commands to shuffle through the calibration points.

I am new to drawing in Jitter, but I think that stuff is all doable. What I am going to have a difficult time with is processing the points. I am assuming I will need something like two colls storing the input points and the real screen points, and then a formula to convert between them, but will I need to constantly process the input coordinates, or can I just process them once? If anyone has any tips on objects to use, or somewhere to start, I would greatly appreciate it. For now I am searching around for ideas in patches and reading through the docs, so anything would be great.
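
To make the "formula to convert" part concrete, here is a rough Python sketch of the kind of mapping I am imagining (just the math, not a Max patch; the corner values are made-up placeholders that the calibration routine would fill in):

```python
# Rough sketch: convert camera/blob coordinates to screen coordinates using
# two calibration pairs (top-left and bottom-right). In the patch this could
# live in a [js] object or be built from expr/scale objects instead.
# The point values below are placeholders, not real calibration data.

cam_tl, cam_br = (42.0, 51.0), (278.0, 203.0)   # blob coords seen at two corners
scr_tl, scr_br = (0.0, 0.0), (1280.0, 800.0)    # where those corners sit on screen

def cam_to_screen(x, y):
    """Linear scale + offset per axis; enough for a flat lens, not a fisheye."""
    sx = (x - cam_tl[0]) / (cam_br[0] - cam_tl[0]) * (scr_br[0] - scr_tl[0]) + scr_tl[0]
    sy = (y - cam_tl[1]) / (cam_br[1] - cam_tl[1]) * (scr_br[1] - scr_tl[1]) + scr_tl[1]
    return sx, sy

print(cam_to_screen(160.0, 127.0))   # should land roughly mid-screen
```

The calibration pairs would only need to be computed once; the per-frame work is just the scale-and-offset on each incoming blob coordinate.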

thanks

jon

#44256
Jun 5, 2009 at 4:45pm

Just out of curiosity, what exactly are you doing and what functionality does it need to have? Is there a reason why you don't use tbeta and the TuioClient to take care of the distortion issues?

Here is my suggestion:

Use lcd to draw a circle in the top left of the projection screen. The person touches the circle, and the coordinates of that blob are used as the top-left of the source matrix (see Jitter Tutorial 14 if you don't already know it). Repeat with all four corners. Now you have a matrix that is bounded by what the camera sees. Use OpenGL to map this onto a 3D shape that compensates for the distortion of your lens, then send the result to the cv.jit blob objects.
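
Roughly, the bookkeeping behind that could look like this (a Python sketch of the idea only, not Jitter code; the corner coordinates are invented, and I am assuming the srcdimstart/srcdimend attributes covered in Tutorial 14):

```python
# Sketch: after the person touches all 4 corners, take the bounding box of
# those blob coordinates and use it as the source dimensions on a jit.matrix,
# so the matrix is cropped to what the camera sees of the projection area.
# The corner values here are invented.

corners = [(40, 48), (285, 52), (290, 210), (38, 205)]   # TL, TR, BR, BL blob coords

xs = [c[0] for c in corners]
ys = [c[1] for c in corners]

srcdimstart = (min(xs), min(ys))
srcdimend   = (max(xs), max(ys))

print("srcdimstart", srcdimstart, "srcdimend", srcdimend)
# In the patch these would become "srcdimstart $1 $2" / "srcdimend $1 $2"
# messages to a jit.matrix with usesrcdim turned on.
```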

Is f0.route_index an external?

#159088
Jun 5, 2009 at 8:25pm

f0.route is actually an abstraction: http://www.fredrikolofsson.com/pages/code-max.html

I haven't used it before, though; I had just seen it when scanning through those abstractions.

As for why I am trying to do a Max-based vision patch: I want to make a fully Max-based replacement for touchlib and reacTIVision. Since Max is the only "programming" I know how to do, I can't really do much customization of touchlib. Even though my Max-based version will be a little slower, it allows a lot more cv features to be integrated and customized. I have also been building in masking features and some other things that give users more ways to adapt the app to their setup.

It might turn out to be a CPU-heavy waste of time, but right now it seems to be running all of the vision tracking and blob detection with the various cv.jit objects at a comparable workload. Really, I had a few specific things I wanted to do, then realized that if I built the whole thing I would have an easy way to try out and integrate new Jitter and cv objects as they become available.

Thanks for the tutorial suggestion; I will go check it out. One thing I am really hoping to add is control over each calibration point. Essentially I want to combine how reacTIVision and touchlib do their calibration, to give more help with wide-angle lenses and changing setups. I don't think it will be too difficult to scale that part once I can calibrate arbitrary points.

If you're interested at all, I could post something once I get it organized.

thanks again

jon

#159089
Jun 6, 2009 at 1:06am
GhostandtheMachine wrote on Fri, 05 June 2009 11:00

I have a fisheye camera tracking blobs, and I want to map its coordinates to fit my screen. I need to make the standard calibration screen used by Wiimote whiteboard apps, touchlib, reacTIVision, etc. After looking around in the stitcher external (flock) I got some ideas from its image stitching. I am thinking the best approach will be a jit.window that goes full screen on a key command, with a few basic key commands to shuffle through the calibration points.

I am new to drawing in Jitter, but I think that stuff is all doable. What I am going to have a difficult time with is processing the points. I am assuming I will need something like two colls storing the input points and the real screen points, and then a formula to convert between them, but will I need to constantly process the input coordinates, or can I just process them once?

Not sure if this will help, but it reminds me of some experimenting I did with jit.gl.nurbs, where I had a flat plane with 9 control points (in "order 2", I think). By moving all the corners in the Z direction I got a fisheye of sorts, or at least the corners were "wrapped" in that direction, like taking a piece of paper and wrapping it over a sphere, just smoother. For a real fisheye I'm sure there's a formula somewhere that takes the center of a square (which is where the image wouldn't be altered) and, depending on how far you go in each direction, applies more displacement along curved paths away from the camera. If you can find a workable formula for this you can generate a jit.gl.mesh that you can query points from. (Actually the mesh would only be needed for visual feedback and testing; the calculations do the work.)
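
Something along these lines is what I mean by the displacement formula (a rough Python sketch of one simple radial model, not the definitive fisheye correction; the coefficient k is just a made-up number you would have to tune):

```python
import math

# Simple radial model: points keep their angle from the image centre, but
# their distance from the centre is stretched by a function of that distance.
# k is an invented coefficient, tuned by eye or against calibration touches.

k = 0.35               # invented distortion strength
cx, cy = 0.5, 0.5      # image centre in normalised 0..1 coordinates

def undistort(x, y):
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)
    if r == 0.0:
        return x, y                      # the centre is not altered
    r_corrected = r * (1.0 + k * r * r)  # one common radial form; others exist
    scale = r_corrected / r
    return cx + dx * scale, cy + dy * scale

print(undistort(0.9, 0.9))   # corners move the most, the centre not at all
```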

If you do get the formula, it would probably be fast enough to process in real time. Otherwise, choose an overall resolution (like 100 x 100 points in X/Y), run the formula offline, and save the results in a lookup table. Then when you want to calculate in real time, force the incoming values into that resolution (with scale) and do a lookup of the result, which should be very quick.
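
And a sketch of that lookup-table version (Python again, with an invented stand-in for whatever correction formula you settle on):

```python
import math

# Sketch of the offline lookup-table idea: run the correction once per grid
# point, store the results, then at run time quantise the incoming blob
# coordinate to the grid and read the answer back. "correct" stands in for
# whatever formula is used; the numbers are invented.

RES = 100                          # 100 x 100 grid, as suggested above
K = 0.35                           # invented distortion strength

def correct(x, y):
    dx, dy = x - 0.5, y - 0.5
    r = math.hypot(dx, dy)
    s = 1.0 + K * r * r            # placeholder radial correction
    return 0.5 + dx * s, 0.5 + dy * s

table = [[correct(i / (RES - 1), j / (RES - 1)) for j in range(RES)]
         for i in range(RES)]

def lookup(x, y):
    """Clamp a normalised 0..1 coordinate onto the grid and read the table."""
    i = min(max(int(round(x * (RES - 1))), 0), RES - 1)
    j = min(max(int(round(y * (RES - 1))), 0), RES - 1)
    return table[i][j]

print(lookup(0.9, 0.9))
```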

#159090
Apr 18, 2011 at 12:28am

I am really interested to know if someone found a solution!

#159091
