Forums > Jitter

getting a camera and projector to play nice

February 1, 2010 | 6:38 pm

Hi all,
In need of a little advice here on methodology.

I’m using an IR camera to track performers, then using that data to map a projected image onto those performers. The problem is figuring out the relationship between the camera lens and the projector lens. For example, what the camera thinks is 2.5 meters stage left doesn’t usually correspond to what the projector considers 2.5 meters stage left, due to the different optics in the two lenses — and it’s certainly not a linear relationship. If this were a one-off it wouldn’t be too hard to figure out empirically: some combination of mapper, table, or expr would probably do the trick. But unfortunately this performance moves from venue to venue, so the changing distances and lenses make a fixed relationship impossible. To make it even more challenging, I often have only 10 minutes or so to get this part set up.

So, to quote a health club ad on TV from my youth, "there has to be a better way," and I’m looking for any advice on this. I’m imagining something like having an assistant walk around on stage while I grab a couple of key points and have the software derive the relationship. It seems like clicking a point in the live image coming into Jitter, then clicking the corresponding point in the projected image, would work — but I’m having a bit of a mental block on this.
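The click-corresponding-points idea is essentially homography estimation: with four or more camera/projector point pairs you can solve for a 3x3 projective transform. A minimal sketch in Python (assuming numpy; all point values below are invented for illustration, not from the actual setup):

```python
import numpy as np

def homography(cam_pts, proj_pts):
    """Estimate the 3x3 homography H mapping camera points to projector
    points (proj ~ H @ cam) via the direct linear transform (DLT).
    Needs at least four non-collinear correspondences."""
    A = []
    for (x, y), (u, v) in zip(cam_pts, proj_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # Null-space vector of A (smallest singular vector) is the flattened H.
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def apply_h(H, pt):
    """Map a camera pixel through H, dividing out the projective scale."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Four clicked corner pairs (camera pixel -> projector pixel), made up:
cam  = [(102, 88), (538, 95), (547, 401), (95, 410)]
proj = [(0, 0), (1024, 0), (1024, 768), (0, 768)]
H = homography(cam, proj)
```

A plain homography assumes a flat mapping surface and ignores lens distortion, so it’s a first approximation — but it can be solved from four clicks in well under the 10-minute window.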

Thanks in advance for any pointers!

Best,
David


February 1, 2010 | 8:22 pm

At some point I had to deal with something similar. I found this article very helpful:

http://local.wasp.uwa.edu.au/~pbourke/miscellaneous/lenscorrection/

Particularly the equations towards the top of the page.

My best guess is that you’d have to find appropriate lens distortion coefficients for the camera and the projector, then plug them into a pre-built translation routine that takes their relative positions in the space into account. If the routine is robust enough, I could see you getting this done within a 10-minute window.
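A minimal sketch of the kind of one-coefficient radial correction discussed on Bourke’s page (the coefficient `k1` here is invented; real values come from calibrating your particular lens):

```python
def normalize(px, py, width, height):
    """Shift pixel coordinates to be centered on the image and scaled
    to roughly [-1, 1], as radial models expect."""
    cx, cy = width / 2.0, height / 2.0
    return (px - cx) / cx, (py - cy) / cy

def undistort(xn, yn, k1):
    """One-coefficient radial correction of a normalized point:
    each point is pushed along its radius by a factor (1 + k1 * r^2)."""
    r2 = xn * xn + yn * yn
    scale = 1.0 + k1 * r2
    return xn * scale, yn * scale
```

The image center is unmoved and correction grows toward the edges, which matches the barrel/pincushion behavior the coefficients describe.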

In my case I translated camera pixel coordinates into a normalized coordinate system and then plugged those into the lens distortion equation, in order to generate pan/tilt angles for a PTZ projector (in this case a High End Systems DL1). The idea was to track a viewer’s movement and have a projected image follow them through the space.

My method wouldn’t translate cleanly for you, since you’re dealing with a fixed projector, but I think if you experiment with the lens correction equation you’ll find a solution. I also found it very useful to determine a cm/pixel ratio in the camera image prior to lens correction; that ratio fed the trig functions used to derive my projector’s pan/tilt angles.
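A hypothetical sketch of that cm/pixel-plus-trig step: convert a tracked camera pixel to a stage-floor position with a measured ratio, then aim the projector with `atan2`. All positions and ratios below are invented for illustration (and assume a flat floor with the projector overhead):

```python
import math

CM_PER_PIXEL = 0.8                 # measured from a known-size object on stage
PROJ_POS = (300.0, 400.0, 500.0)   # projector x, y, mounting height in cm

def stage_position(px, py, origin_px=(320, 240)):
    """Camera pixel -> stage floor position in cm, relative to a chosen
    pixel origin (flat-floor, lens-corrected assumption)."""
    return ((px - origin_px[0]) * CM_PER_PIXEL,
            (py - origin_px[1]) * CM_PER_PIXEL)

def pan_tilt(target_cm):
    """Pan/tilt angles (degrees) to aim the projector at a floor point."""
    dx = target_cm[0] - PROJ_POS[0]
    dy = target_cm[1] - PROJ_POS[1]
    pan = math.degrees(math.atan2(dy, dx))
    tilt = math.degrees(math.atan2(PROJ_POS[2], math.hypot(dx, dy)))
    return pan, tilt
```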

In your case you’d have to do something similar, then re-distort the image to account for the projector’s lens distortion. I would imagine jit.expr would be very useful here.
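The re-distort step can be sketched as counter-distorting a point before it passes through the projector lens, so the lens’s own distortion lands it where intended. This uses a first-order approximation of the inverse of the same radial model (`k1` is again an invented coefficient, and small-`k1` is assumed):

```python
def predistort(xn, yn, k1):
    """Counter-distort a normalized point: divide by the projector's
    radial gain so that the lens's r * (1 + k1 * r^2) distortion
    approximately cancels (valid for small k1)."""
    r2 = xn * xn + yn * yn
    scale = 1.0 / (1.0 + k1 * r2)
    return xn * scale, yn * scale
```

In Jitter this per-pixel expression is the sort of thing jit.expr can evaluate over a coordinate matrix, as Jesse suggests.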

Hope some of that is helpful – Jesse

