Adding an effect to an isolated part of the video??!

aleighlewis:

Hi There,

I am using SOFTVNS to track heads, and then add an effect just to the head. Does anyone know a way to make the effect only affect the head area? Or an effect which can isolate certain parts of the QuickTime?

Thanks so much,
Aleigh

Wesley Smith:

Use a mask, e.g. with jit.alphablend. To do stuff to a face, create a mask for the face. You may find this tricky.

wes

aleighlewis:

Will the mask follow the face as it is being tracked? Will you be able to see the rest of the body and surroundings, or will the mask block out everything but the head?

Thanks so much,
Aleigh

Wesley Smith:

I don't know, you're doing this, not me. Take a look at jit.alphablend and start with a binary mask. Essentially, you're combining 2 different video sources. This is a basic video operation. If this is confusing, you may need to go back to basics. Take a look at iMovie or something.
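If it helps to see the idea outside of Max, here's a rough Python/numpy sketch of what a binary-mask composite boils down to (not Jitter code, and the names are just placeholders):

import numpy as np

def binary_mask_composite(original, processed, mask):
    # original, processed: (H, W, 3) uint8 frames from the two video sources
    # mask: (H, W) array, nonzero where the effect should show (e.g. the head)
    show_effect = mask[..., None] > 0     # broadcast the mask over the colour channels
    return np.where(show_effect, processed, original)

Same idea in Jitter: one source is your untouched movie, the other is the processed version, and the mask decides which one wins per pixel.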

best,
wes

dlublin:

Hey Aleigh,

Since you're using VNS, you may find it easier to work with v.lumakey.

Basically, v.lumakey has three inlets - the first two are the two video streams you want to combine, and the third takes your grayscale mask, which determines whether to use the video from the first or the second inlet (or some mixture of the two) on a pixel-by-pixel basis.
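If it helps to see what that per-pixel mixing means, here's a rough Python/numpy sketch of the idea (this is just an illustration of a luma key, not what v.lumakey actually does internally):

import numpy as np

def lumakey_mix(src1, src2, mask):
    # src1, src2: (H, W, 3) uint8 frames (the first two "inlets")
    # mask: (H, W) uint8 grayscale mask (the third "inlet")
    k = mask[..., None].astype(np.float32) / 255.0   # 0.0 where dark, 1.0 where light
    out = src1 * (1.0 - k) + src2 * k                # per-pixel crossfade
    return out.astype(np.uint8)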

So, if you were trying to blur just the heads (say, using v.blur), you'd process the entire video frame with v.blur, and then use v.lumakey with the output from v.heads (or whatever object you are using for head tracking) as your mask. In your final output, areas where the mask was dark show only the original frame, whereas the light areas show your processed (blurred) frame.
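And the whole blur-the-heads workflow, sketched the same way (again just Python/numpy/scipy to show the order of operations; the sigma value is arbitrary):

import numpy as np
from scipy.ndimage import gaussian_filter

def blur_heads(frame, head_mask, sigma=8):
    # frame: (H, W, 3) uint8 video frame
    # head_mask: (H, W) uint8, light where the tracker found a head, dark elsewhere
    blurred = np.stack(
        [gaussian_filter(frame[..., c].astype(np.float32), sigma) for c in range(3)],
        axis=-1)                                      # blur the *whole* frame first
    k = head_mask[..., None].astype(np.float32) / 255.0
    out = frame * (1.0 - k) + blurred * k             # dark mask keeps the original pixels
    return out.astype(np.uint8)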

The v.lumakey help patch is pretty self-explanatory and definitely worth checking out.

- Dave

aleighlewis:

Hi Dave,
Thanks so much for your help. I really appreciate it. I am not sure if my last response posted or not, so here it is again. My experience with Jitter is limited, so if you could answer some more questions that would be really great.
So far, here is my patch (which isn't fully working):

jit.repos 4 char connected to v.jit
r videoinput (from v.heads) to v.jit
r videoinput (from v.lumakey) to v.jit

x coord from heads to horizontal slope of gradient
y coord from heads to vertical slope of gradient

The lumakey is reading the info from the head tracking, because the numbers are changing and there is an effect in the display window.

I am not, however, seeing the video effect, or even an image, in the main display window. The only object connected to it is jit.repos, because when I try to connect v.lumakey to the display, I get an error.

If you have any suggestions about what I am doing wrong I'd love to know. I have also included a screen shot.

Cheers,
aleigh

Isjtar:

I don't have a lot of experience with soft vns, but I seem to remember that you can use the coordinates of the head (which is just a square surrounding it) to control a cutting function: separate that part of the image, process it, and paste it back in. It might get nasty with flickering, erasing and so on, but it would be cleaner than alpha masking, I think.
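Something like this, roughly (a Python/numpy sketch rather than the actual soft vns objects; the box size and the blur are just made-up examples of "processing"):

import numpy as np
from scipy.ndimage import gaussian_filter

def process_head_region(frame, x, y, box=80, sigma=8):
    # frame: (H, W, 3) uint8; (x, y) = head centre reported by the tracker
    h, w = frame.shape[:2]
    x0, x1 = max(0, x - box), min(w, x + box)
    y0, y1 = max(0, y - box), min(h, y + box)
    patch = frame[y0:y1, x0:x1].astype(np.float32)
    for c in range(3):                                # process only the cut-out square
        patch[..., c] = gaussian_filter(patch[..., c], sigma)
    out = frame.copy()
    out[y0:y1, x0:x1] = patch.astype(np.uint8)        # paste it back in place
    return out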

Sorry, don't have it on my computer so can't check what it was exactly.