how to track camera movement with cv.jit?
Does anyone have any idea how to use cv.jit to track the movement of the camera as it pans, tilts, and zooms?
So far I’ve tried cv.jit.orientation, which doesn’t yield optimal results for most shots because it depends on the motion of what’s in the frame. I’ve also tried cv.jit.track, specifying basically every pixel on the edge of the matrix, but that doesn’t quite work how I imagined either.
It’s not completely impossible to do, but with the current objects it would be very hard to achieve. You would have to compute the homography matrix between your original tracking point positions and the output of cv.jit.track. This will only really work if your camera is looking at a flat plane.
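For readers wondering what that homography computation looks like outside Max, here is a minimal NumPy sketch of the standard Direct Linear Transform (DLT). The function name is mine, and it assumes clean point correspondences on a flat plane, as described above (no outlier rejection, which a real tracker would need):

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H mapping src points to dst points
    using the Direct Linear Transform (DLT). src and dst are (N, 2)
    arrays of corresponding points, N >= 4, no three points collinear."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography (flattened to a 9-vector) is the right singular
    # vector of A with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so the bottom-right entry is 1
```

With your original tracking point positions as `src` and the cv.jit.track output as `dst`, `H` then describes the frame-to-frame camera motion, under the flat-plane assumption.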
If you’re lucky and I have enough time to spare, you might find some better tools in the next update.
Ultimately, though, the easiest approach will be to put some sensors on the camera.
Thanks for the tip and for your excellent library.
I cannot go the sensor route as I am actually interested in duplicating camera motion from stock footage.
It would be great to see homography estimation in the next version of cv.jit. Is it adapted from OpenCV? I know OpenCV has a homography estimation function.
In the meantime, I’m wondering whether this is something I could implement just in Max. I found these two tutorials:
However, they’re a little tough as I didn’t quite make it to linear algebra. Do you know of either an article with a little more explanation or an example patch I could look at?
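For what it's worth, once a per-frame homography is available, a very rough pan/tilt/zoom read-out can be pulled from it without much linear algebra, provided the inter-frame motion is small and close to a similarity transform. This is a sketch of that idea only, not anything in cv.jit; the function name and the approximations are mine:

```python
import numpy as np

def pan_tilt_zoom(H):
    """Rough per-frame pan/tilt/zoom from a homography, assuming small
    inter-frame motion that is approximately a similarity transform.
    (A hypothetical helper, not part of cv.jit or OpenCV.)"""
    H = np.asarray(H, dtype=float)
    H = H / H[2, 2]                       # normalize the homography
    pan = H[0, 2]                         # horizontal shift, in pixels
    tilt = H[1, 2]                        # vertical shift, in pixels
    zoom = (H[0, 0] + H[1, 1]) / 2.0      # average scale change
    return pan, tilt, zoom
```

For large rotations this approximation breaks down and a proper decomposition (H = K·R·K⁻¹ for a camera rotating about its center) is needed, but for frame-to-frame stock-footage motion it gives a usable first estimate.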
Off-topic a bit, but I saw Jean-Marc on here and really want to pose my question to the wizard himself! :)
For a project ("Words" on the Project Pages):
we have built a soundscape area with multiple layers of sound, from file playback as well as user-recorded sounds. The mix depends on where the participant is in the space, which is about 10 meters square. Everything is working great except the tracking; we’ve tested thoroughly with mouse, keyboard, and random motion to move the blobs around.
We’re using infrared LEDs attached to the top of the headphones, and an IR camera mounted above everything. The resulting image is pretty much ideal for tracking: small blobs, high contrast, total of 6 blobs possible at once.
The problem is with the index sorting. It’s absolutely essential that the blobs don’t switch with each other, which happens any time two blobs cross each other’s X axis. (Blobs melting into one another is a different issue and isn’t as much of a problem; people can just keep a bit of distance from one another.) I’ve been trying to figure this out for a long time and I keep hitting dead ends. I’ve tried cv.jit.sort, and either it’s not enough or I’m not implementing it correctly. Should cv.jit.sort be able to keep the indexed lists the same when blobs cross each other’s X axis? Or is there another object in the cv.jit library I should try? I can send you the relevant bits of the patch if you have a moment to look at it; that would be fantastic.
In the cv.jit.sort helpfile I saw how the colored balls stay correctly colored, but the indexed lists coming out still switch when the X axis is crossed. As you can imagine, this quick swapping would ruin the whole idea of our project: the sound depends on one’s position in the environment, each sound from a file or recording has a unique virtual location, and its level (as heard by each participant on their headphones) is based on that participant’s distance from the sound.
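As an aside, the distance-to-level mapping described here can be as simple as a linear rolloff per source. The sketch below is hypothetical (the linear curve and the 10 m range are assumptions based on the room size mentioned above, not taken from the actual patch):

```python
import math

def source_gain(listener, source, max_dist=10.0):
    """Linear distance rolloff: full level at the source, silence at
    max_dist. listener and source are (x, y) positions in meters.
    The curve shape and max_dist are assumptions for illustration."""
    d = math.dist(listener, source)
    return max(0.0, 1.0 - d / max_dist)
```

Each participant's headphone mix is then the sum of every source signal scaled by its `source_gain` for that participant's tracked position, which is exactly why stable blob indices matter: a swapped index instantly teleports a listener across the room.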
Thanks for any thoughts on this, and a HUGE thanks for your amazing library! It’s just the thing to help keep Maxers at the forefront of multitouch interfaces and camera tracking.