Hi forum friends,
I've pieced together a patch that (sometimes accurately) tracks a pair of eyes by first recognizing the most prominent face captured by a webcam, using the cv.jit.faces.largest and cv.jit.faces.eyes.img objects (producing two matrices, each zoomed in on one eye).
I've spent the better part of this weekend trying to figure out how to time-delay the two eye matrices and then overlay them back onto the live feed for output / projection... Basically, I want to replace a person's eye movements with those that took place several seconds earlier.
The data produced by the cv.jit objects would have to be used to 'glue' the time-delayed matrices onto any face the patch recognizes, and to keep the delayed eyes following the face as it moves in-frame. I can't seem to figure out how to do this.
I have xyxy box dimensions for a face. I've got xy data streams for the pair of eyes that will be delayed. I have a subpatch that handles the delaying. And I found a forum reference on overlaying planes onto a matrix with jit.gl.videoplane objects, but... by now I'd have to say I've lost m'way. Or the will? Flummoxed! ;)
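To make the idea concrete outside of Max, here's a rough sketch of the delay-and-overlay logic I'm after, written in Python/NumPy as a stand-in for Jitter matrices. All the names here (make_delay_line, step, the box format) are my own invention, not cv.jit's API, and I'm assuming the eye boxes stay roughly the same size frame to frame:

```python
import numpy as np
from collections import deque

DELAY_FRAMES = 90  # ~3 seconds at 30 fps (assumed frame rate)

def make_delay_line(n=DELAY_FRAMES):
    """Ring buffer holding the last n lists of eye patches."""
    return deque(maxlen=n)

def step(frame, eye_boxes, delay_line):
    """Store the current eye patches, then paste the oldest stored
    patches back onto the frame at the CURRENT eye positions, so the
    delayed eyes follow the face as it moves in-frame."""
    # Crop this frame's eye patches and push them into the delay line.
    # Boxes are (x, y, w, h) tuples, like the tracker's xy streams.
    patches = [frame[y:y+h, x:x+w].copy() for (x, y, w, h) in eye_boxes]
    delay_line.append(patches)

    out = frame.copy()
    if len(delay_line) == delay_line.maxlen:
        old_patches = delay_line[0]  # patches from DELAY_FRAMES ago
        for (x, y, w, h), patch in zip(eye_boxes, old_patches):
            # Clip to the smaller of the old/new patch dimensions,
            # in case the detected box size drifts slightly.
            ph, pw = min(patch.shape[0], h), min(patch.shape[1], w)
            out[y:y+ph, x:x+pw] = patch[:ph, :pw]
    return out
```

In Jitter terms, the deque is the delay subpatch, the crop is jit.submatrix-style slicing driven by the eye xy data, and the paste is the compositing step I haven't solved (whether via jit.gl.videoplane or a destination-dim copy back into the live matrix).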
Anyone have feedback / advice / suggestions on this front?