I am working on a cv patch that is tracking blobs with cv.jit.blob.label. I have had a hell of a time dealing with how the label object organizes the blob IDs, but I have another issue I am hoping to get some tips on.
I have a fisheye camera tracking blobs, and I want to map those blob coordinates to fit my screen. I need to build the standard calibration screen used in Wiimote whiteboard apps, touchlib, reactvision, etc. After looking around in the stitcher external (flock) I got some ideas from the image stitching. I am thinking the best approach will be a jit.window that goes full screen on a key command, then has a few basic key commands to shuffle through the calibration points.
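To make the "shuffle through calibration points" idea concrete, here is a rough sketch (in Python pseudocode, not a Max patch; the class and names are hypothetical) of the state the patch would keep: show one target at a time in the full-screen window, and on each capture key press, pair the current camera-space blob position with the known screen-space target before advancing.

```python
# Hypothetical sketch of the calibration flow: display each screen
# target in turn; on a key press, record the tracked camera-space
# blob position against the known screen-space target position.

SCREEN_TARGETS = [(0.1, 0.1), (0.9, 0.1), (0.9, 0.9), (0.1, 0.9)]

class Calibration:
    def __init__(self):
        self.index = 0
        self.pairs = []  # list of (camera_pt, screen_pt) pairs

    def current_target(self):
        """The screen point the full-screen window should draw now."""
        return SCREEN_TARGETS[self.index]

    def capture(self, camera_pt):
        """Call when the user hits the 'capture' key.

        Pairs the incoming camera coordinate with the current target
        and advances to the next one. Returns True once every target
        has been captured.
        """
        self.pairs.append((camera_pt, self.current_target()))
        self.index = (self.index + 1) % len(SCREEN_TARGETS)
        return self.index == 0
```

In a Max patch the same thing would likely be a counter driving which target gets drawn, with the key command banging the current blob coordinates and the matching target into storage.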
I am new to drawing in Jitter, but I think that part is all doable. What I am going to have a difficult time with is processing the points. I am assuming I will need something like two colls, one storing the input points and one storing the corresponding screen points, and then a formula to convert between them, but will I need to constantly process the input coordinates, or can I just process them once? If anyone has any tips on objects to use, or somewhere to start, I would greatly appreciate it. For now I am searching around for ideas in patches and just reading through the docs, so anything would be great.
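On the "process once or constantly" question, one common pattern (sketched below in plain Python, not Max objects; all names are illustrative) is to solve for the transform once from the stored calibration pairs, then apply that cheap fixed formula to every incoming coordinate each frame. This minimal version fits an affine map from three camera/screen point pairs via Cramer's rule; a real fisheye would likely need more points or a nonlinear warp, but the split between a one-time solve and a per-frame apply is the same.

```python
# Sketch: solve once for a 2D affine map camera -> screen from three
# calibration point pairs, then apply it to every incoming blob.
# (Illustrative only; a fisheye lens really needs a richer model.)

def solve_affine(cam_pts, scr_pts):
    """Return ((a, b, c), (d, e, f)) so that
    u = a*x + b*y + c and v = d*x + e*y + f."""
    (x1, y1), (x2, y2), (x3, y3) = cam_pts
    (u1, v1), (u2, v2), (u3, v3) = scr_pts
    # Determinant of the 3x3 system matrix [[x, y, 1], ...]
    det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)

    def solve_row(r1, r2, r3):
        # Cramer's rule for a*x + b*y + c = r at all three points
        a = (r1 * (y2 - y3) - y1 * (r2 - r3) + (r2 * y3 - r3 * y2)) / det
        b = (x1 * (r2 - r3) - r1 * (x2 - x3) + (x2 * r3 - x3 * r2)) / det
        c = (x1 * (y2 * r3 - y3 * r2) - y1 * (x2 * r3 - x3 * r2)
             + r1 * (x2 * y3 - x3 * y2)) / det
        return a, b, c

    return solve_row(u1, u2, u3), solve_row(v1, v2, v3)

def apply_affine(A, pt):
    """The cheap per-frame step: map one camera point to the screen."""
    (a, b, c), (d, e, f) = A
    x, y = pt
    return (a * x + b * y + c, d * x + e * y + f)
```

So the answer the sketch suggests: the expensive part (solving for the coefficients) happens once at calibration time; after that, each blob coordinate only costs a couple of multiplies and adds, which in Max could be a simple expr or jit.expr fed by the stored coefficients.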