In need of a little advice here on methodology.
I'm using an IR camera to track performers, then using that data to map a projected image onto those performers. The problem is figuring out the relationship between the camera lens and the projector lens: what the camera thinks is 2.5 meters stage left doesn't usually correspond to what the projector considers 2.5 meters stage left, because the two lenses have different optics, so it's certainly not a linear relationship. If this were a one-off it wouldn't be too hard to figure out empirically...some combo of mapper, table, or expr would probably do the trick. But unfortunately this performance moves from venue to venue, so the different distances and lenses make a fixed relationship impossible. To make this even more challenging, I often have only 10 minutes or so to get this part set up.
So, to quote a health club ad on TV from my youth, "there has to be a better way", and I'm looking for any advice on this. I'm imagining something like having an assistant walk to a few key points on stage while I grab each point, and then deriving the relationship from those. It seems like clicking a point in the live image coming into Jitter, then clicking the corresponding point in the projected image, should work, but I'm having a bit of a mental block on this.
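For what it's worth, here's the kind of thing I'm picturing, sketched in Python with OpenCV rather than in Max. The assumption is that the performers stay near a flat stage plane, in which case the camera-to-projector relationship is a 2D homography (perspective transform) that can be solved from four or more clicked point pairs. The coordinates below are made-up placeholders for wherever the clicks land:

# Sketch only: assumes performers stay near one flat stage plane,
# so a single homography relates camera pixels to projector pixels.
import numpy as np
import cv2

# Corresponding points gathered during setup:
# cam_pts[i]  = where the assistant's marker appears in the camera image
# proj_pts[i] = the projector pixel that lands on that same spot on stage
cam_pts = np.array([[102, 87], [540, 92], [530, 410], [95, 400]],
                   dtype=np.float32)
proj_pts = np.array([[0, 0], [1920, 0], [1920, 1080], [0, 1080]],
                    dtype=np.float32)

# With exactly 4 pairs cv2.getPerspectiveTransform would do; with more,
# findHomography does a least-squares fit and RANSAC rejects bad clicks.
H, mask = cv2.findHomography(cam_pts, proj_pts, cv2.RANSAC, 5.0)

# At show time: map a tracked blob from camera coords to projector coords.
tracked = np.array([[[320.0, 240.0]]], dtype=np.float32)  # shape (N, 1, 2)
mapped = cv2.perspectiveTransform(tracked, H)
print(mapped)  # projector-space position to aim the image at

Four pairs pin the homography down exactly; grabbing a couple of extras and letting RANSAC discard a sloppy click seems like cheap insurance given the 10-minute window. It would obviously break down if performers leave the calibration plane (platforms, jumps), since one homography only relates one plane. If there's a native Jitter way to do the same solve, even better.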
Thanks in advance for any pointers!