I'm working on a project that will involve tracking a dancer's position and projecting video onto them in real time. I'm using an IR-sensitive camera to avoid interference between the camera input and the projected output video.
Does anyone have experience working with position tracking like this in Jitter? What techniques/methods have you found to be most efficient and most accurate?
My first thought was to make use of the cv.jit package, using the cv.jit.blobs objects to track the center position of my subject. This is pretty great, and pretty accurate, but it means that any extremities (i.e. arms/legs reaching outwards) won't have a direct effect on the regions they are touching, only on the overall blob count/size. (cv.jit.blobs also takes up considerable computing power.)
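For anyone unfamiliar with the centroid approach, here's a rough sketch of the idea outside Max, in Python/numpy (the threshold value and function name are just illustrative, not anything from cv.jit): threshold the IR frame and take the center of mass of the bright pixels, which is roughly what a blob centroid gives you for a single subject.

```python
import numpy as np

def blob_centroid(frame, threshold=128):
    """Return the (x, y) centre of mass of pixels above `threshold`,
    or None if nothing is bright enough. A stand-in for the kind of
    per-blob centroid that cv.jit.blobs reports; `threshold` is a
    hypothetical brightness cutoff for the IR image."""
    mask = frame > threshold
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

# A tiny 4x4 "IR frame" with a bright 2x2 patch in the lower right:
frame = np.zeros((4, 4), dtype=np.uint8)
frame[2:4, 2:4] = 255
print(blob_centroid(frame))  # -> (2.5, 2.5)
```

This also shows the limitation I mean: a dancer's outstretched arm shifts the centroid a little, but the arm itself never registers in the region it's actually reaching into.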
Now I've been working with simpler methods of masking and limiting using a mix of jit.op objects, background subtraction, jit.scissors to define submatrices, and frame differencing to detect motion. This is great, and seems to be much more CPU-efficient, but I still can't get the level of detail I'm interested in.
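To be concrete about this second pipeline, here's the same combination sketched in Python/numpy, with the grid of submatrices standing in for what jit.scissors does (the function name, grid size, and thresholds are all my own illustrative choices, not Jitter defaults): background-subtract two consecutive frames, frame-difference them, then count motion pixels per region.

```python
import numpy as np

def region_motion(prev, curr, background, grid=(2, 2), diff_thresh=30):
    """Count motion pixels per screen region.

    Background subtraction isolates the dancer in each frame,
    frame differencing keeps only what moved between frames, and
    the grid split mimics carving the matrix into submatrices
    with jit.scissors. Returns a `grid`-shaped array of counts."""
    fg_prev = np.abs(prev.astype(int) - background.astype(int))
    fg_curr = np.abs(curr.astype(int) - background.astype(int))
    motion = np.abs(fg_curr - fg_prev) > diff_thresh  # frame difference
    rows, cols = grid
    h, w = motion.shape
    out = np.zeros(grid, dtype=int)
    for r in range(rows):
        for c in range(cols):
            cell = motion[r * h // rows:(r + 1) * h // rows,
                          c * w // cols:(c + 1) * w // cols]
            out[r, c] = int(cell.sum())
    return out

# Empty background, then something bright appears in the top-left quadrant:
bg = np.zeros((4, 4), dtype=np.uint8)
prev = bg.copy()
curr = bg.copy()
curr[0:2, 0:2] = 200
print(region_motion(prev, curr, bg))  # only the top-left cell is nonzero
```

The appeal is that each region reacts directly to motion inside it, which is exactly what the single centroid can't give you; the downside, as I said, is that the spatial detail is only as fine as the grid.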
I've searched around the forums and seen some references to using OpenGL shaders to handle this, although I'm totally unschooled in writing the code for that.
Any thoughts? Suggestions? Search-terms to look up? Thanks.