Alpha blending multiple videos with jit.gl

Mar 23, 2013 at 9:28pm

Hi,

I’m trying to mix multiple video sources using irregular alpha masks of various sorts, and somehow haven’t been able to find a good approach. The test patches use an audio waveform as a basis for generating the mask shapes, then do some further image processing on the resulting matrices before applying them as masks.

Here are two things I’ve tried.
* testalpha5: uses jit.alphablend to do the masking, then sends the result to a pretty much as-is 4-way slab-renderer mixer from Jitter Tutorial #23.
* testalpha6: doesn’t use jit.alphablend; it just modifies the slab-renderer mixer to do the alpha blend with jit.gl.slab.

Both of them basically work, except:
* They are both already very slow, and the tests are at low resolution and not doing much image processing yet, so I need a more efficient approach. I’m planning to purchase a solid-state drive soon (FW800 interface) in hopes of easing the QuickTime bandwidth issues, but I suspect that won’t solve everything.

* In testalpha6, which does the alphablending in jit.gl rather than on the matrix side, the background mysteriously turns from black to grey depending on which display the window is on. Can’t figure out whether this is a bug or just something I need to do differently.

Despite several years using Jitter, my knowledge of the jit.gl side is still fairly limited. I’ve gleaned quite a bit digging through forums and tutorials, but suspect I haven’t found the best approach yet.

Thanks for advice!

Current Setup: Max 6.08. Mid-2010 MacBook Pro, OS X 10.6.8. 8 GB RAM. FW800 external drive for videos. GeForce GT 330M (I just had the board swapped out by Apple to replace the defective graphics chip previously installed in this MBP series, so hopefully it’s good now).

Attachments:
  1. testalphas.zip
Mar 23, 2013 at 10:34pm

Hello,
I opened your patches

Demovideo2 is your movie reader and your bottleneck, and I think you should rewrite it.
Use Vade’s advice for that (jit.qt.movie @colormode uyvy @unique 1, etc.); this should give you a good increase in fps:
jit.qt.movie masterout @colormode uyvy @unique 1 -> jit.gl.slab masterout @file cc.uyvy2rgba.jxs @dimscale 2 1 -> jit.gl.texture masterout @name tex1 -> jit.gl.slab masterout @file 2waymix.jxs
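(For reference, the reason uyvy helps: UYVY packs two pixels into four bytes, so uploads are half the size of 4-plane ARGB – which is also why the slab gets @dimscale 2 1, since the incoming texture arrives at half width. Here’s a rough CPU-side numpy sketch of the unpacking that cc.uyvy2rgba.jxs does on the GPU; the BT.601 full-range coefficients are an assumption on my part, and the real shader also interpolates chroma:)

```python
import numpy as np

def uyvy_to_rgba(uyvy):
    """Unpack UYVY 4:2:2 bytes, shape (height, width*2), into float RGBA,
    shape (height, width, 4). Each 4-byte group [U, Y0, V, Y1] carries two
    pixels that share one pair of chroma samples."""
    h, w2 = uyvy.shape
    w = w2 // 2
    pairs = uyvy.reshape(h, w // 2, 4).astype(np.float32)
    u = np.repeat(pairs[..., 0], 2, axis=1) - 128.0   # chroma, shared per pixel pair
    v = np.repeat(pairs[..., 2], 2, axis=1) - 128.0
    y = pairs[..., [1, 3]].reshape(h, w)              # luma, one per pixel
    r = y + 1.402 * v                                  # BT.601 weights (assumed)
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    a = np.full_like(y, 255.0)                         # fully opaque
    return np.clip(np.stack([r, g, b, a], axis=-1), 0.0, 255.0)
```

(At 2 bytes per pixel instead of 4, the jit.qt.movie -> GPU transfer is halved, which is where most of the fps gain comes from.)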

That’s the way to go…
I would rethink the data flow as jit.qt.movie -> jit.gl.texture; for the mask, try jit.matrix -> jit.gl.texture, and put both on one jit.gl.videoplane with @texture tex1 tex2, or use an overlay of 2 jit.gl.videoplanes. Don’t forget to try several @blend_mode a b settings.

And why not try the latest good job from “rob the magnific” using the new HAP stuff:
jit.gl.hap…

HTH a little bit
Cheers
Hubert

Mar 23, 2013 at 10:38pm

I don’t know why, but Max crashes when I try to turn on audio in your patches.
Anyway, about the gray background in the rendering window: you may try setting the @erase_color attribute to 0 in jit.gl.render.

Mar 24, 2013 at 8:04pm

Thanks both!
Hubert, thanks for the suggestions about going to uyvy colormode. I’ve been looking through your notes and Vade’s demos, but so far haven’t figured out exactly how to apply them to my patch correctly – could you elaborate a bit? I’ve tried various things but can’t seem to get the color space straightened back out in the end. I’ve attached new patches – the qtread1 abstraction is where I now read the QuickTime movie; the rest of the relevant stuff is still in the main patch. (Thanks also for the HAP tip… I’ll look into it – but I’m not sure if it’ll give me enough flexibility to work with the videos using matrix objects? I’m planning to do quite a bit of matrix work.)

LSka, looks like you’re right about @erase_color causing the strange changes in the background, thanks!
I suspect the audio problem could be from the patch being saved with line input; I’ve saved this one with mic input, so hopefully it’ll be easier to open. If not, try setting the input before turning on audio.

Thanks again all,
-Amy

Mar 25, 2013 at 6:33am

Oops, small correction to previous post – it’s the qtplay abstraction that plays the quicktime, not qtread1.

-Amy

Mar 25, 2013 at 9:16pm

Hello Amy,
Had time to kill while Brecht’s Master Puntila is playing on stage…
Here are some reworked patches.
HTH – and I would say there is fun potential in your idea; try some @poly_mode settings or several shapes on the gridshape…
Enjoy, it’s free!!
Cheers
Hubert

Mar 25, 2013 at 9:45pm

Oops, I was too fast – just rename the bpatchers qtplayhubert and qtread1hubert and it should work…
The funny thing is that using your post together with the sevscope post is really ideal for mixing at the same moment in different places…
Master Puntila is drunk and everything works much more easily
Cheers

Mar 30, 2013 at 4:26am

Thanks Hubert!

Sorry for slow reply – was traveling and didn’t have as much time for patching as I expected. Anyway, thanks for the patch! I’ve got it basically working and the frame rate is much better! I’m still digging through it trying to understand all the GL stuff and to see if I understand how to make it do what it’ll need to do in the end. Will let you know if I have more questions, but wanted to follow up first to say thanks!

-Amy

Apr 2, 2013 at 6:33pm

Hi again Hubert!

Ok, I’ve been working with the patch and run into a question. I’d planned to have each video use an individual alpha channel as its matte. (For example, in my original test patches, I’d done this with the alphamaker subpatcher. For simplicity’s sake I’d generated all the alpha channels the same, but in the real patch, they’d be different.) Anyway, is there a way to adapt your patch to do this? I first tried stacking alpha textures (i.e. wave, wave2) on the jit.gl.gridshape, but then discovered that this doesn’t really work because the black parts of the upper alpha layers block out parts that should be showing through from lower layers. So I’m wondering if there’s a way to apply the mattes that ties them to the individual movies?

Thanks, and sorry for confusion about that. I was trying to keep the test patch (too) simple, as it hadn’t occurred to me that a solution might involve separating the alphas from the movies!

-Amy

Apr 6, 2013 at 5:06am

Aha, I’ve just discovered cc.alphaglue.jxs … this appears to be a solution…

Apr 7, 2013 at 8:54pm

Hi again all,

Ok, I’ve gotten closer, but I’m stuck again. Using cc.alphaglue.jxs, I can apply individual alphas to my videos, but when running that through 43j-fourwaymix.js I get the combined individual alphas matting an additive mix of all the videos; I’m trying to get each matte to stay with its individual video. I also tried daisy-chaining slabs with co.alphablend.jxs (daisy-chaining because co.alphablend.jxs only expects 2 inputs), but that was a mess. I’m not sure whether this can be solved using the existing shaders or whether I need to modify them – I couldn’t quite figure out how to modify either of them to do this.

I’ve attached my latest patch. The rendering subpatcher starts with putting together what Hubert had done, then adds the alphaglues in the slabrenderer_mod subpatcher. Thanks much for any help!

Attachments:
  1. testAlphaPost3.zip
Apr 7, 2013 at 9:15pm

BTW, I also tried layering jit.gl.videoplanes with @layer and @blend_enable, but could not get the opacities to work with the layers. It seems like that might be the most straightforward solution if it can be done – perhaps I’m just missing the correct combination of attribute settings?

Thanks again for any leads…

Apr 8, 2013 at 6:57pm

hi uebergeek.

i didn’t really follow all of this thread, but the easiest way to overlay movies with alpha channels is to overlay multiple gl.videoplanes.

you simply need to set @depth_enable 0 and @blend_enable 1. the default @blend_mode will blend based on alpha channel.
if you want to control the layering order, use the @layer attribute.
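(in case the blend math helps: the default blend mode is the classic “over” operator – each new plane’s color is weighted by its alpha, and whatever is already in the framebuffer is weighted by one minus that alpha. a minimal numpy sketch, purely illustrative and not how jit.gl computes anything internally:)

```python
import numpy as np

def over(src_rgb, src_a, dst_rgb):
    """One step of GL blending with (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA):
    what each @blend_enable 1 videoplane does to the pixels beneath it."""
    a = src_a[..., None]                      # broadcast alpha across R, G, B
    return src_rgb * a + dst_rgb * (1.0 - a)

def composite(layers, background):
    """Stack layers back to front, i.e. in ascending @layer order."""
    out = background
    for rgb, alpha in layers:
        out = over(rgb, alpha, out)
    return out
```

(this is also why a properly masked upper layer doesn’t black out what’s beneath it – a pixel with alpha 0 leaves the framebuffer untouched.)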

let me know if this doesn’t answer your question.

– Pasted Max Patch –
Apr 9, 2013 at 1:58pm

hi there

i am trying to build a live-render cartoon caption patch.
i have been trying to combine an image (the speech bubble) and some text files (the words) and then render them over a piece of live video.
this is the stripped-back patch.

i have only been able to render the text by using jit.op @op, combining it with the speech bubble png to make a single image with jit.alphablend. but when i pop this into a jit.gl.videoplane i lose the alpha channel and it renders out as a box (instead of the shape of the speech bubble):

jit.gl.videoplane @blend_enable 1 @depth_enable 0 @transform_reset 2 @layer 1

any help much appreciated (i’ve spent many hours trying to work out this problem)

Apr 9, 2013 at 6:22pm

hi juzjuz.

in this case, jit.alphablend is not what you need, as it’s intended to composite two matrices together based on the alpha channel of the first.
instead, you want to add an alpha channel to one matrix (your video), based on the values in another matrix (your text).

jit.pack is your friend in this case.

once you’ve combined the two matrices with jit.pack and have a proper alpha channel, jit.gl.videoplane @blend_enable will take care of the rest:

– Pasted Max Patch –
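(in numpy terms – a hypothetical sketch, assuming the usual A, R, G, B plane order of 4-plane char matrices in Jitter – the jit.pack move looks like this:)

```python
import numpy as np

def pack_alpha(video_argb, text_mono):
    """Replace the movie's alpha plane with the 1-plane text matrix,
    as jit.pack does when the planes are fed to it separately.
    Plane 0 is alpha in Jitter's ARGB ordering."""
    out = video_argb.copy()
    out[..., 0] = text_mono    # white text -> opaque, black -> transparent
    return out
```

(with a proper alpha plane in place, jit.gl.videoplane @blend_enable 1 draws only the bubble/text shape instead of the full box.)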
Apr 10, 2013 at 7:04pm

hey rob
got it working now!
great to be put on the right path, there are so many dead ends to explore and while you learn a lot it gets a little frustrating
thanks!!!

Apr 13, 2013 at 10:12pm

Thanks Rob! Got the jit.gl.videoplanes working now thanks to your example patch. Yep, seems I just hadn’t figured out the correct combination of attributes.

Apr 17, 2013 at 10:13pm

Hi again, Rob and all,

Though the patch is basically working as it should now, it seems that adding the multiple alpha mattes back in has slowed the frame rate down again. I’m only doing minimal video processing now and will need to add more, but it’ll have to run faster before I can do that. (I’m working with 854×480 sources.) I’ve attached my revised patch – is there a faster way I could be doing this? I’ve added red comments to the render sections where I need help.

Some background info: This patch is a basic demo/framework for a research/performance system that would use various aspects of gesture and sound data to layer and manipulate irregular shaped video images. Currently I’m using sound waves with some image processing to generate alpha masks, but the idea would be to be able to use a variety of real-time gestural and sound data – plus Jitter matrix processing – to create the masks. (I’ve tried some other generative processes to create the masks by the way, and they’re just as slow. So it seems to be the layering of multiple alpha masked videos that’s the bottleneck, not the way I’m generating the masks.)

I’d also be interested in ideas on doing similar with OpenGL meshes, etc. The alpha mask is my initial approach to layerable irregular forms, but some more visual depth would be welcome. I tried using the sound data with jit.gen to transform multiple jit.gl.meshes, (based on the patch posted at http://cycling74.com/forums/topic.php?id=44597) but that ended up being very slow and didn’t really layer. I could imagine doing something like layering multiple textured cylinders and then deforming them in real time, but I’m not sure that would be possible or practical.

Thanks much for any leads, and please let me know if I’ve left out any pertinent info.

Apr 18, 2013 at 8:15pm

looks like it’s the jit.rota sub-patches that are causing the biggest cpu drain.
you can convert those to jit.gl.pix (or jit.gl.slab using the td.rota.jxs shader). the gl.pix code is taken from the gen patch-a-day thread.

you can also simplify your alpha-glueing using jit.gl.pix (there’s no need to calculate the luma value, as the alphaglue shader does. simply grab a single plane from the alpha matrix, and pack with the color matrix planes of the movie)
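(to spell out the simplification in a quick numpy sketch – the Rec.601 luma weights are my guess at what the alphaglue shader computes; the point is only that a grayscale mask makes the weighted sum redundant:)

```python
import numpy as np

def alpha_from_luma(mask_rgb):
    """What a luma-computing alphaglue does: a weighted sum of R, G, B
    (Rec.601 weights assumed here)."""
    return (0.299 * mask_rgb[..., 0]
            + 0.587 * mask_rgb[..., 1]
            + 0.114 * mask_rgb[..., 2])

def alpha_from_plane(mask_rgb, plane=0):
    """The shortcut: a grayscale mask has identical planes, so grabbing
    any single plane gives the same alpha with no arithmetic."""
    return mask_rgb[..., plane]
```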

the patch below shows this in action.

also, check out the jit.gl.hap external, to speed up movie playback. it may yield better results.

– Pasted Max Patch –
Apr 29, 2013 at 7:21am

Thanks much Rob – that helps a lot! I’ve been digging into learning the wonderful world of jit.gl.pix; seems it can be very useful as I further develop the patch.

I’m sure there will be some things that still need to be done with matrices, but it certainly seems that for speed’s sake I should do as much as possible on the GL side. One thing I’m trying to do is distort the images and alphas, something like the jit.repos distortquad example. I’ve attached my revised patch showing how I’m using it (see the blue objects in the multilayer_stuff patcher). Since I end up having to apply the distortion eight times (4 videos, RGB + alpha for each), it slows things down. (And I’m still using it at 320×240 as in the original example – when I take it up to 854×480 it’s extremely slow.) It seems like that type of distortion effect might be fairly straightforward to do with jit.gl, but I haven’t found a similar example. I can imagine it could be done by distorting a textured mesh – but then I lose the alpha blending from the videoplanes. So perhaps a shader? Is there a similar example out there I’m missing? I’m not sure how to approach writing one from scratch.

Also, I had a couple other general questions about jit.gl.pix:

* Are there performance hits from daisy chaining too many jit.gl.pix objects? I.e. is it faster to combine shader functions into one jit.gl.pix object when possible, or is it ok to keep daisy chaining individual shaders? (I realize it’s sometimes necessary to use separate objects because of vectors vs. pixels).

* Is there a way to give jit.gl.pix objects names, so I can easily see in my patch what each one does? A workaround of course would be to put them into descriptively named subpatchers, but I’m just wondering if there is a way to name them directly.

Re: jit.gl.hap – thanks for the suggestion – I played around with it some. Seems that it will really only help if I’m not doing any matrix operations on the videos, correct? As soon as I convert to matrix, frame rate slows down, which I guess makes sense. I also tried it without any matrix operations (i.e. the distort-quads) on the videos, but keeping the distorts on the 4 alphas. It didn’t seem to make much difference vs. doing the same with jit.qt.movie in uyvy mode – so I think the distorts on the alphas are bottlenecking it in that case.

Thanks very much again!

Apr 29, 2013 at 10:29pm

the jitter-gen examples are found at:
Max 6.1/examples/jitter-examples/gen/

i’ll let you find the gen version of jit.repos (hint: it’s called jit.gl.pix.repos)

my guess is one shader (gen patcher) is generally going to be more efficient than several. but as always, better to test it out yourself. 1 frame of efficiency might not be worth a decline in patching organization and readability.

did you try just typing in an argument?

gl.hap spits out textures, and you will get the most optimal performance by keeping it as a texture (as you’ve discovered).
it should be more efficient than qt.movie @colormode uyvy, especially if you use the HAP encoder on your movies.

Apr 30, 2013 at 5:17am

Thanks Rob! Yeah, the gen repos shader looks cool… the only problem is, I can’t seem to get results similar to what the Spatial Map -> Distorted Quad patch got using jit.repos. Actually the gen version’s behavior seems fairly straightforward to me – it’s the use of jit.repos in the demo patch that’s somewhat mystifying! I’m trying to get something like that twisting of the image that happens in the demo, but with the gen version I just seem to get a straight 2D mapping of one image into the other. I’m not really following how the demo patch works, though I noticed two probably significant differences that I can’t figure out how to duplicate: the use of a type long matrix and the use of the @interpbits attribute. I tried adjusting various parameters on the jit.gl.pix and found equivalents of most of the jit.repos attributes, but still couldn’t get the matrix to deform the same way. Am I overlooking something?
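For what it’s worth, here’s my rough mental model of those two differences, sketched in numpy: jit.repos fetches each output pixel from the coordinates stored in a map matrix, and with a type long map plus @interpbits n the stored values are fixed-point (coordinate × 2^n), which is what allows the smooth subpixel warping. This is an assumption-laden, nearest-neighbour sketch, not the object’s actual algorithm:

```python
import numpy as np

def repos(src, map_x, map_y, interpbits=0):
    """Nearest-neighbour model of a jit.repos-style absolute remap.
    map_x/map_y hold fixed-point coordinates: real position = value / 2**interpbits.
    Coordinates wrap at the edges, roughly like a wrapping boundmode."""
    scale = float(1 << interpbits)
    xi = np.round(map_x / scale).astype(int) % src.shape[1]
    yi = np.round(map_y / scale).astype(int) % src.shape[0]
    return src[yi, xi]
```

An identity map perturbed by fractional fixed-point offsets is (I believe) where the demo’s twisting comes from; with real interpolation instead of rounding you would get the smooth version.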

Thanks again.

– Pasted Max Patch –
Apr 30, 2013 at 9:20pm

Hello,
So I’m here again…
I took the jit.gl.pix.repos example but fed it with jit.bfg noise stuff; you might have more luck with this noiser.
I’m not sure it will help you, but it’s rich in spaces.
Also try out something with jit.cl.noise from Nesa:
http://cycling74.com/forums/topic.php?id=24488
A funny thing you’re building, Amy!!!
Cheers
Hubert

May 1, 2013 at 3:42am

Thanks Hubert! They don’t really solve this problem, but they are in fact useful, since I expect to use noise and expressions here and there throughout the patch. So thanks a bunch for those tips! The more the merrier…

-Amy

May 1, 2013 at 7:28pm

Still banging my head on this (hopefully) last major glitch in the framework. Anybody got a suggestion? The goal: use some sort of GL means to more or less duplicate the effect in the Spatial Map -> Distorted Quad demo patch – that is, making a 2D matrix/texture look like it’s being twisted/pulled/warped in 3D. The problem: the jit.gl.pix gen repos shader doesn’t seem to behave the same way as the jit.repos object used in the demo patch (see my previously pasted patch), and using jit.repos as-is is too slow, as it’ll end up being used many times simultaneously (possibly 8 instances).

The other possible solution I’m playing with is to use a mesh and capture it as a texture, but so far it doesn’t look very good. (I haven’t tried running it with the 8 instances yet, so I’m not sure how that’ll work speed-wise.) It seems like some sort of shader would be a nicer solution, but I’m much less experienced with the GL side of things than with matrix stuff, so I’m a little stumped about how to proceed.

Thanks for any ideas!

May 2, 2013 at 12:32am

here is an example of how to use the gl.pix repos to get similar effects as the matrix version.

– Pasted Max Patch –
May 7, 2013 at 6:05am

Thanks Rob! That works much better! And now it works much more smoothly with jit.gl.hap. Though interestingly, the non-HAP-compressed Jitter demo movies at 320×240, played from the laptop’s internal drive, play back more smoothly with jit.gl.hap than my own HAP-compressed videos at 854×480 played from the FW800 external drive. Hopefully I can find a “happy medium” resolution. Thanks again!

