I found this performance on YouTube, and it is really awesome. I am very curious about how the video tracking in the performance works.
As far as I know, they use an infrared camera. But I would think it is still hard to differentiate the human body from the projected animation and the background. In this performance, they created animation only around the dancers' bodies. I was wondering how the camera "found" the shape of the human body while ignoring all the other imagery projected on the ground. I have tried some cv.jit objects, but so far in vain.
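To explain what I mean, here is a rough sketch (in plain NumPy rather than Jitter, and all the numbers are invented) of how I imagine the IR setup works: the IR camera simply never sees the visible-light projection, so a plain brightness threshold on the IR frame should be enough to pull out the dancer's silhouette and drive the animation around it:

```python
import numpy as np

# Mock IR frame: the projected visuals are invisible at IR wavelengths,
# so only the (warm / IR-lit) dancer shows up as a bright region.
frame = np.zeros((8, 8), dtype=np.uint8)
frame[2:6, 3:6] = 200  # pretend this bright blob is the dancer

# Threshold to get a silhouette mask (120 is an arbitrary cutoff).
mask = frame > 120

# Bounding box of the silhouette, which could then position the animation.
ys, xs = np.nonzero(mask)
bbox = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
print(bbox)  # → (3, 2, 5, 5)
```

Is this roughly what cv.jit objects like cv.jit.threshold and blob tracking are meant to do, or is there more to it?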
Can Jitter do something like this?
Any ideas on how to create something like the posted YouTube video?