Forums > Jitter

Live video w/ gl textures & cv.jit track and follow attempt…

April 1, 2014 | 8:30 pm

Problems: Trying to create masks (wrapping textures around shapes or vice versa); a maths problem translating xyz into screentoworld coords; and, when the latter works out, a problem sending / receiving / feeding coords into gridshapes.

I had posted problem patches a few months ago (http://cycling74.com/forums/topic/delay-new-feed-follow-eye-tracking-wcv-jit/) but a friend since suggested I try w/ the gl objects instead. And so…

Have run m’self in circles (figuratively, and graphically, as it happens)… Feel I must be ‘missing’ some basic gl methodology principles… ?

I’ve extracted some sample problems from my master patch and attached them here.

Any and all insights (there are no small problems here) would be greatly appreciated!


April 2, 2014 | 6:40 am

… sorry… re-uploaded w/ corrections.


April 2, 2014 | 9:17 am

hi.
it’s difficult to understand what you are asking, and no one is going to be able to dissect and debug your patch for you.

i recommend you take a single problem that you are struggling with, describe it concisely, and post a simple patch demonstrating it.


April 2, 2014 | 11:00 am

Hi Rob,

Sorry ’bout that; presenting too many prob’s at once.

Here is one, simple I think, problem (attached)… I just cut & pasted my particular requirement together from a functional patch I found previously on the forum.

More (break’n ‘em down) to come…


April 3, 2014 | 5:25 am

K so. While the above (How2_circleMaskEG) represents the trouble I’m having with masking live video input, the attached (How2_GLdimenEG) demonstrates an issue I’m encountering with overlaying a delayed stream of the same (live feed from webcam).

CORRECTION: The delayed-feed problem was just a bug. One I keep running into, for some reason, though…

However: Once the slab is replaced (as in the patch), and the dim coords connected to the videoplane, you’ll see the last of my ongoing problems… Working out the screentoworld coords.

I’m basically looking to shape the overlaying texture in accordance with face tracking coord’s (which are produced in this patch) – in effect replacing and following the face captured in the live feed.

Hope this helps to clarify!


May 20, 2014 | 11:50 am

Just coming back to this again; anyone anywhere w/ insights maybe?


May 21, 2014 | 1:25 pm

you’re going to have to try a bit harder to explain the problem and provide a simple patch demonstrating it.


June 12, 2014 | 2:58 pm

Hey Rob,

Between the two inquiries posted (link to the first provided above) and the various patches posted, I’m at a loss. There’s a problem with every approach I take to problem-solving, hence the very different patches and explanations. But the overall goal is one I’ve seen posted by others elsewhere on the forum, and none of us seem to have had any luck:

Am trying to replace a face (or the eyes particularly) in the output of a live video feed. Input is the live feed, which is then delayed, and which I have previously overlaid back onto the output. The problem at that juncture was getting the (delayed-feed) overlay to conform to the face-detection shape and then follow the movements of the face in the (new) live input (on top of which it was overlaid).

Those patches (which are not uploaded in other threads) progressed well. I have no problems with the cv.jit tracking objects, can produce all sorts of useful data, and can get still images (JPEGs) to overlay and (roughly) follow a face in the live video feed. Still, I could not for the life of me get this to work with the delayed video image.

So then I moved on to using GL objects and textures to achieve this very same goal. The above are the smaller / more specific problems I’m encountering with this approach; one pertains to creating a dynamic mask overlay and the other to converting screen coordinates to world coordinates (screentoworld).

Does this better explain?


June 12, 2014 | 3:13 pm

Maybe this will clarify best… Was referred to this video today; haven’t used openFrameworks yet, but it does represent what I’ve been trying to do with the cv.jit & GL & osc objects… Except that instead of overlaying a still image I would like to overlay a delayed moving image (not recorded, but from a live feed): http://vimeo.com/29348533

Possible? Or… Should I just ‘skip it’ and get busy instead learning the openFrameworks C++ et al. approach?


June 13, 2014 | 10:07 am

i’m not sure why you need opengl for this.
sounds like it would be easier to do the frame delay and compositing using matrices.
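for what it’s worth, the frame-delay half of that suggestion can be sketched outside Max. a minimal Python ring buffer plays the role that jit.matrixset (or a chain of jit.matrix objects) plays in a Jitter patch; frame numbers stand in for video matrices, and the names and delay length are illustrative, not from anyone’s patch:

```python
from collections import deque

def make_delay(n):
    """Return a step function that delays its input by n frames.

    Conceptually equivalent to writing each incoming matrix into a
    ring buffer (jit.matrixset) and reading out the frame written
    n frames ago. While the buffer is still filling, the live frame
    passes through unchanged.
    """
    buf = deque()

    def step(frame):
        buf.append(frame)
        if len(buf) > n:
            return buf.popleft()  # frame from n steps ago
        return frame              # buffer not yet full: pass through

    return step

delay = make_delay(2)
out = [delay(f) for f in range(6)]
# out == [0, 1, 0, 1, 2, 3]: live until full, then lagging by 2 frames
```

compositing the delayed frame back over the live one would then just be a per-pixel blend of the two matrices (jit.alphablend / jit.op in the patch).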

how are you planning to match the recorded face to the live face? are you also recording the coordinates of the face tracker for each frame?


June 17, 2014 | 9:56 am

Hi again,

I’ve tried compositing but run into trouble getting the overlaid video to follow the live-video tracking data. Also had difficulty forming an accurate mask. Also, it would be ideal if the mask would wrap, but… Might be too desperate for solutions to be an idealist ;)

I’ll upload a patch I was problem-solving with, here, which contains a whole lot of data for tracking. I was experimenting with varying methods of collecting data, converting it, and then applying it to the overlaid video (called ‘delayface’). That’s the whole problem: no matter what object library I resort to, I need to mask the overlaid video, and it needs to follow the live-feed tracking data.

I’ve a handful of these problem-solving patches, taking different approaches, so could upload other examples as well (one of the patches above outlines a screentoworld attempt). But perhaps you’d find the data collection here helpful for starters…

I should note that this is not a problem of overlaying a recorded video but that of live video which has been delayed (less than) half a minute…


June 18, 2014 | 9:37 am

whether the overlaid video is recorded or live is fairly irrelevant.
the problem it seems, is matching up the face from one video with the face from another video.
i can see now why you want to use screentoworld.

what i would do first is simply try and match up faces, then worry about masking, as that’s a much simpler problem.

to match up two faces, i would find the center point of each face in screen-space (note that this may be different than the center calculated from the output of cv.faces, as the matrix dimensions input into faces are not necessarily the same as the dims of your gl window)
so first, do the math to get the center point.
then scale each center value to screen dims (using [scale] it’d be something like [scale 0 face-matrix-width 0 screen-width], with the center x value going into the left inlet)
then find the world-space center point using screentoworld for both faces.
you should then be able to offset (using the position attribute) the overlaid face the difference between the two faces.
offset = face1-center – face2-center
face2-new-center = face2-center + offset

you probably want to make sure your gl.render is set to ortho mode to make things easier.
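the center/scale/offset arithmetic above can be sketched in plain Python. everything numeric here is a made-up example (matrix and window dims, rect values), the rect layout assumes cv.jit.faces-style left/top/right/bottom, and the actual screentoworld conversion still has to happen in the patch via jit.gl.render; this only shows the math up to and including the offset:

```python
def face_center(rect):
    """Center of a face rect given as (left, top, right, bottom)."""
    l, t, r, b = rect
    return ((l + r) / 2.0, (t + b) / 2.0)

def scale(v, in_lo, in_hi, out_lo, out_hi):
    """Linear mapping, same idea as Max's [scale] object (no clamping)."""
    return out_lo + (v - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

# Assumed example dims: tracker runs on a 320x240 matrix,
# the gl window is 1280x960.
MATRIX_W, MATRIX_H = 320, 240
SCREEN_W, SCREEN_H = 1280, 960

live_cx, live_cy = face_center((100, 60, 180, 140))   # face in live feed
delay_cx, delay_cy = face_center((40, 30, 120, 110))  # face in delayed feed

# Scale both centers from matrix coords into window pixels
# (this is the step the [scale] objects do before screentoworld).
live_px = (scale(live_cx, 0, MATRIX_W, 0, SCREEN_W),
           scale(live_cy, 0, MATRIX_H, 0, SCREEN_H))
delay_px = (scale(delay_cx, 0, MATRIX_W, 0, SCREEN_W),
            scale(delay_cy, 0, MATRIX_H, 0, SCREEN_H))

# In the patch, each pixel center would now go through jit.gl.render's
# screentoworld message; the same subtraction is then done on the
# resulting world coords. Shown here in pixel space for illustration:
offset = (live_px[0] - delay_px[0], live_px[1] - delay_px[1])
# offset is what you add to the overlay's position attribute
```

with gl.render in ortho mode the pixel-to-world mapping stays linear, which is why the subtraction works the same either side of screentoworld.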

here’s a basic screentoworld example

– Pasted Max Patch, click to expand. –

June 20, 2014 | 10:15 am

Yeah: Good point (matrix bounding-box dimensions not the same as the GL faces). Thanks also for the example…

And the off-set ‘tween two = perfect. Makes good sense.

Might take me a while to re-digest the screentoworld logic then apply it… Hard to break old (matrix-logic) habits / presumptions on that front. Shall let you know how that goes when it does ;)

Thanks already!

