Alpha blending multiple videos

    Mar 23 2013 | 9:28 pm
    I'm trying to mix multiple video sources using irregular alpha masks of various sorts, and somehow haven't been able to find a good approach. The test patches use an audio waveform as a basis for generating the mask shapes, then do some further image processing on the resulting matrices before applying them as masks.
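    For illustration, the waveform-to-mask idea can be sketched outside Max. This is a hypothetical stand-in (the function name and thresholding scheme are not from the actual patches), just to show one way audio samples could become a single-plane mask:

```javascript
// Hypothetical sketch: turn one frame of audio samples into a single-plane
// alpha mask by drawing the waveform envelope as a band of opaque pixels.
// makeWaveMask is illustrative only, not part of the original patches.
function makeWaveMask(samples, width, height) {
  // mask[y][x] = 255 where the waveform envelope covers that pixel, else 0
  const mask = Array.from({ length: height }, () => new Uint8Array(width));
  for (let x = 0; x < width; x++) {
    // nearest-neighbour resample of the audio frame across the mask width
    const s = samples[Math.floor((x / width) * samples.length)];
    const amp = Math.min(1, Math.abs(s));        // clamp amplitude to 0..1
    const half = Math.floor((amp * height) / 2); // half-height of the band
    const mid = Math.floor(height / 2);
    for (let y = mid - half; y < mid + half; y++) mask[y][x] = 255;
  }
  return mask;
}
```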
    Here are two things I've tried.
    * testalpha5: uses jit.alphablend to do the masking, then sends the result to a mostly as-is 4-way slab-renderer mixer from Jitter Tutorial #23.
    * testalpha6: doesn't use jit.alphablend; instead it modifies the slab-renderer mixer to do the alpha blend in the shader.
    Both of them basically work, except:
    * They are both already very slow, and the tests are at low resolution and not doing much image processing yet, so I need a more efficient approach. I'm planning to purchase a solid-state drive soon (FW800 interface) in hopes of easing the QuickTime bandwidth issues, but I suspect that won't solve everything.
    * In testalpha6, which does the alpha blending on the GL side rather than on the matrix side, the background mysteriously turns from black to grey depending on which display the window is on. I can't figure out whether this is a bug or just something I need to do differently.
    Despite several years using Jitter, my knowledge of the OpenGL side is still fairly limited. I've gleaned quite a bit digging through forums and tutorials, but suspect I haven't found the best approach yet.
    Thanks for advice!
    Current Setup: Max 6.08. Mid-2010 MacBookPro, OSX 10.6.8. 8 GB RAM. FW800 external drive for videos. GeForce GT 330M (just had board swapped out by Apple to replace previously installed defective graphics chip from this MBP series, so hopefully it's good now.)

    • Mar 23 2013 | 10:34 pm
      I opened your patches
      Demovideo2 is your movie reader and your bottleneck, and I think you should rewrite it.
      Use Vade's optimization advice for that (@colormode uyvy, @unique 1, etc.); this should give you a good increase in fps. Something like: jit.qt.movie @colormode uyvy @unique 1 -> jit.gl.slab @file cc.uyvy2rgba.jxs @dimscale 2 1 -> jit.gl.texture @name tex1 -> jit.gl.slab @file 2waymix.jxs.
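      For reference, here is roughly the arithmetic a uyvy-to-rgba unpack shader like cc.uyvy2rgba.jxs performs, sketched in JavaScript. The full-range BT.601 coefficients are an assumption; the actual shader may use slightly different constants:

```javascript
// Sketch of a uyvy -> rgb unpack, per macropixel: one U,Y0,V,Y1 group
// yields two RGB pixels that share chroma. Coefficients assume full-range
// BT.601; the real cc.uyvy2rgba.jxs shader may differ slightly.
function uyvyToRgb(u, y0, v, y1) {
  const toRgb = (y) => {
    const r = y + 1.402 * (v - 128);
    const g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128);
    const b = y + 1.772 * (u - 128);
    // clamp each channel into the 0..255 byte range
    const clamp = (c) => Math.max(0, Math.min(255, Math.round(c)));
    return [clamp(r), clamp(g), clamp(b)];
  };
  return [toRgb(y0), toRgb(y1)]; // two pixels per macropixel
}
```

This is also why the @dimscale 2 1 attribute appears in the chain above: a uyvy texture is half as wide as the rgba image it unpacks to.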
      That's the way to go...
      I would also rethink the data flow for the mask: try sending the jit.matrix to a texture and putting both on a videoplane with @texture tex1 tex2, or an overlay of two planes; don't forget to try several @blend_mode a b combinations.
      And why not try the latest good job from "rob the magnific" using the new HAP stuff?
      HTH a little bit
    • Mar 23 2013 | 10:38 pm
      I don't know why, but Max crashes when I try to turn on audio in your patches...
      Anyway, about the gray background in the rendering window: you may try setting the @erase_color attribute to 0 on jit.gl.render.
    • Mar 24 2013 | 8:04 pm
      Thanks both!
      Hubert, thanks for the suggestions about going to uyvy colormode. I've been looking through your notes and Vade's demos, but haven't so far figured out exactly how to apply them to my patch correctly - could you elaborate a bit? I've tried various things but can't seem to get the color space straightened back out in the end. I've attached new patches - the qtread1 abstraction is where I now read the QuickTime movie; the rest of the relevant stuff is still in the main patch. (Thanks also for the Hap tip... I'll look into it - but I'm not sure it'll give enough flexibility for me to work with the videos using matrix objects? I'm planning to do quite a bit of matrix work.)
      LSka, looks like you're right about the @erase_color causing the strange changes in the background, thanks!
      I suspect the audio problem could be from the patch being saved with line input selected; I've saved this one with mic input, so hopefully it's easier to open. If not, try setting the input before turning on audio.
      Thanks again all,
    • Mar 25 2013 | 6:33 am
      Oops, small correction to previous post - it's the qtplay abstraction that plays the quicktime, not qtread1.
    • Mar 25 2013 | 9:16 pm
      Hello Amy
      I had some time to kill while Brecht's Master Puntila is playing on stage...
      Here you go, some redigested patches.
      HTH, and I would say there is fun potential in your idea - try some @poly_mode settings or several shapes on the jit.gl.gridshape...
      Enjoy it's free!!
    • Mar 25 2013 | 9:45 pm
      Oops, I was too fast: just rename the bpatchers to qtplayhubert and qtread1hubert and it should work...
      The funny thing is that working from your post and the sevscope post at the same moment, in different places, is really ideal for mixing...
      Master Puntila is drunk and everything works much more easily
    • Mar 30 2013 | 4:26 am
      Thanks Hubert!
      Sorry for slow reply - was traveling and didn't have as much time for patching as I expected. Anyway, thanks for the patch! I've got it basically working and the frame rate is much better! I'm still digging through it trying to understand all the GL stuff and to see if I understand how to make it do what it'll need to do in the end. Will let you know if I have more questions, but wanted to follow up first to say thanks!
    • Apr 02 2013 | 6:33 pm
      Hi again Hubert!
      Ok, I've been working with the patch and have run into a question. I'd planned to have each video use an individual alpha channel as its matte. (For example, in my original test patches, I'd done this with the alphamaker subpatcher. For simplicity's sake I'd generated all the alpha channels the same, but in the real patch they'd be different.) Anyway, is there a way to adapt your patch to do this? I first tried stacking alpha textures (i.e. wave, wave2) on the videoplane, but then discovered that this doesn't really work, because the black parts of the upper alpha layers block out parts that should be showing through from lower layers. So I'm wondering if there's a way to apply the mattes that ties them to the individual movies?
      Thanks, and sorry for confusion about that. I was trying to keep the test patch (too) simple, as it hadn't occurred to me that a solution might involve separating the alphas from the movies!
    • Apr 06 2013 | 5:06 am
      Aha, I've just discovered cc.alphaglue.jxs ... this appears to be a solution...
    • Apr 07 2013 | 8:54 pm
      Hi again all,
      Ok, I've gotten closer, but I'm stuck again. Using cc.alphaglue.jxs, I can apply individual alphas to my videos, but when running that through 43j-fourwaymix.js, I get the combined individual alphas matting an additive mix of all the videos, whereas I'm trying to get each matte to stay with its individual video. I also tried daisy-chaining slabs with cc.alphablend.jxs (daisy-chaining because that shader only expects 2 inputs), but that was a mess. I'm not sure if this can be solved using the existing shaders or if I need to modify them - I couldn't quite figure out how to modify either of those to do this.
      I've attached my latest patch. The rendering subpatcher starts with putting together what Hubert had done, then adds the alphaglues in the slabrenderer_mod subpatcher. Thanks much for any help!
    • Apr 07 2013 | 9:15 pm
      BTW, I also tried layering with @layer and @blend_enable, but could not get opacities to work with the layers. It seems like that might be the most straightforward solution if it can be done - perhaps I'm just missing the correct combination of attribute settings?
      Thanks again for any leads...
    • Apr 08 2013 | 6:57 pm
      hi uebergeek.
      i didn't really follow all of this thread, but the easiest way to overlay movies with alpha channels is to overlay multiple jit.gl.videoplanes.
      you simply need to set @depth_enable 0 and @blend_enable 1. the default @blend_mode will blend based on alpha channel.
      if you want to control the layering order, use the @layer attribute.
      let me know if this doesn't answer your question.
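      As reference math, the behavior described above (draw order governed by @layer, then the default alpha blend for each plane) can be sketched in JavaScript. drawPlanes is a hypothetical name, not a Jitter API; it just models what the GPU does per pixel:

```javascript
// Sketch of @layer ordering plus the default alpha blend: planes draw in
// ascending @layer order, and each draw computes
//   dst = src * srcAlpha + dst * (1 - srcAlpha)
// per channel (the standard "over" operator with straight alpha).
function drawPlanes(planes) {
  let framebuffer = [0, 0, 0]; // window cleared to black
  const ordered = [...planes].sort((p, q) => p.layer - q.layer);
  for (const { rgb, a } of ordered) {
    framebuffer = framebuffer.map((dst, i) => rgb[i] * a + dst * (1 - a));
  }
  return framebuffer;
}
```

This also shows why stacking independent masks on one plane fails but per-video mattes work: each layer's own alpha decides how much of what is already drawn shows through.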
    • Apr 09 2013 | 1:58 pm
      hi there
      i am trying to build a live render cartoon caption patch
      i have been trying to combine an image (the speech bubble) and some text files (the words) and then render it over a piece of live video
      this is the stripped back patch
      i have only been able to render the text by using jit.op and combining it with the speech bubble png, making a single image with jit.alphablend. but when i put this onto a videoplane i lose the alpha channel and it renders out as a box (instead of the shape of the speech bubble)
      jit.gl.videoplane @blend_enable 1 @depth_enable 0 @transform_reset 2 @layer 1
      any help much appreciated (i've spent many hours trying to work out this problem)
    • Apr 09 2013 | 6:22 pm
      hi juzjuz.
      in this case, jit.alphablend is not what you need, as it's intended to composite two matrices together based on the alpha channel of the first.
      instead, you want to add an alpha channel to one matrix (your video), based on the values in another matrix (your text).
      jit.pack is your friend in this case.
      once you've combined the two matrices with jit.pack and have a proper alpha channel, @blend_enable will take care of the rest:
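      As a sketch of that packing step (plain arrays stand in for jit.matrix planes; packArgb is illustrative, not a real object):

```javascript
// Sketch of the jit.pack idea: take the single plane of the text matrix
// as alpha and pack it in front of the video's R, G, B planes, giving
// the 4-plane ARGB layout that Jitter char matrices use.
function packArgb(alphaPlane, rPlane, gPlane, bPlane) {
  // one [a, r, g, b] cell per pixel
  return alphaPlane.map((a, i) => [a, rPlane[i], gPlane[i], bPlane[i]]);
}
```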
    • Apr 10 2013 | 7:04 pm
      hey rob
      got it working now!
      great to be put on the right path - there are so many dead ends to explore, and while you learn a lot, it gets a little frustrating
    • Apr 13 2013 | 10:12 pm
      Thanks Rob! Got the layering working now thanks to your example patch. Yep, it seems I just hadn't figured out the correct combination of attributes.
    • Apr 17 2013 | 10:13 pm
      Hi again, Rob and all,
      Though the patch is basically working as it should now, it seems that adding the multiple alpha mattes back in has slowed the frame rate down again. I'm just doing minimal video processing now and need to add more, but it'll need to be running faster for me to do that. (I'm working with 854x480 sources.) I've attached my revised patch - is there a faster way I could be doing this? I've red-commented the render sections where I need help.
      Some background info: this patch is a basic demo/framework for a research/performance system that would use various aspects of gesture and sound data to layer and manipulate irregularly shaped video images. Currently I'm using sound waves with some image processing to generate alpha masks, but the idea is to be able to use a variety of real-time gestural and sound data - plus Jitter matrix processing - to create the masks. (I've tried some other generative processes to create the masks, by the way, and they're just as slow. So it seems to be the layering of multiple alpha-masked videos that's the bottleneck, not the way I'm generating the masks.)
      I'd also be interested in ideas on doing something similar with OpenGL meshes, etc. The alpha mask is my initial approach to layerable irregular forms, but some more visual depth would be welcome. I tried using the sound data with jit.gen to transform multiple shapes (based on a patch posted in another thread), but that ended up being very slow and didn't really layer. I could imagine doing something like layering multiple textured cylinders and then deforming them in real time, but I'm not sure that would be possible or practical.
      Thanks much for any leads, and please let me know if I've left out any pertinent info.
    • Apr 18 2013 | 8:15 pm
      looks like it's the jit.rota sub-patches that are causing the biggest cpu drain.
      you can convert those to jit.gl.pix (or to a jit.gl.slab using the td.rota.jxs shader). the gl.pix code is taken from the gen patch-a-day thread.
      you can also simplify your alpha-gluing using jit.pack (there's no need to calculate the luma value as the alphaglue shader does; simply grab a single plane from the alpha matrix, and pack it with the color planes of the movie matrix)
      the patch below shows this in action.
      also, check out the jit.gl.hap external to speed up movie playback. it may yield better results.
    • Apr 29 2013 | 7:21 am
      Thanks much Rob - that helps a lot! I’ve been digging into the wonderful world of jit.gl.pix; it seems it can be very useful as I further develop the patch.
      I’m sure there will be some things that will still need to be done with matrices, but it certainly seems that for speed’s sake I should do as much as possible on the GL side. One thing I’m trying to do is distort the images and alphas, something like the jit.repos-distortquad example. I’ve attached my revised patch showing how I’m using it (see the blue objects in the multilayer_stuff patcher). Since I end up having to apply the distortion eight times (4 videos, RGB + alpha for each), it slows things down. (And I’m still using it at 320x240 as in the original example - when I take it up to 854x480 it’s extremely slow.) It seems like that type of distortion effect might be fairly straightforward to do with jit.gl.pix, but I haven’t found a similar example. I can imagine it could be done by distorting a textured mesh - but then I lose the alpha blending from the videoplanes. So perhaps a shader? Is there a similar example out there I’m missing? I’m not sure how to approach writing that from scratch.
      Also, I had a couple of other general questions about the GL objects:
      * Are there performance hits from daisy chaining too many objects? I.e. is it faster to combine shader functions into one object when possible, or is it ok to keep daisy chaining individual shaders? (I realize it’s sometimes necessary to use separate objects because of vectors vs. pixels).
      * Is there a way to give objects names, so I can easily see in my patch what each one does? A workaround of course would be to put them into descriptively-named subpatchers, but I'm just wondering if there is a way to name them directly.
      Re: jit.gl.hap - thanks for the suggestion - I played around with it some. It seems it will really only help if I’m not doing any matrix operations on the videos, correct? As soon as I convert to a matrix, the frame rate slows down, which I guess makes sense. I also tried it without any matrix operations (i.e. the distort-quads) on the videos, but keeping the distorts on the 4 alphas. It didn’t seem to make much difference vs. doing the same with jit.qt.movie in uyvy mode - so I think the distorts on the alphas are the bottleneck in that case.
      Thanks very much again!
    • Apr 29 2013 | 10:29 pm
      the jitter-gen examples are found at:
      Max 6.1/examples/jitter-examples/gen/
      i'll let you find the gen version of jit.repos (hint: it's called jit.pix.repos)
      my guess is one shader (gen patcher) is generally going to be more efficient than several. but as always, better to test it out yourself. 1 frame of efficiency might not be worth a decline in patching organization and readability.
      did you try just typing in an argument?
      gl.hap spits out textures, and you will get the best performance by keeping it as a texture (as you've discovered).
      it should be more efficient than @colormode uyvy, especially if you use the HAP encoder on your movies.
    • Apr 30 2013 | 5:17 am
      Thanks Rob! Yeah, the gen repos shader looks cool... the only problem is, I can't seem to get results similar to what the Spatial Map -> Distorted Quad patch got using jit.repos. Actually, the gen version's behavior seems fairly straightforward to me - it's the use of jit.repos in the demo patch that's somewhat mystifying! I'm trying to get something like the twisting of the image that happens in the demo, but with the gen version I just seem to get a straight 2D mapping of one image into the other. I'm not really following how the demo patch works, though I noticed two probably significant differences that I can't figure out how to duplicate: the use of a type long matrix and the use of the @interpbits attribute. I tried adjusting various parameters on the gen version and found equivalents of most of the jit.repos attributes, but still couldn't get the matrix to deform the same way. Am I overlooking something?
      Thanks again.
    • Apr 30 2013 | 9:20 pm
      So i'm here again...
      I took the jit.pix.repos example but fed it with jit.bfg noise stuff; you might have more luck with this noiser.
      I'm not sure it will help you, but it's rich in spaces.
      Try out something with jit.expr from Nesa
      Funny thing you're building Amy!!!
    • May 01 2013 | 3:42 am
      Thanks Hubert! They don't really solve this problem, but in fact they are useful since I expect to use noise and expressions here and there throughout the patch. So thanks a bunch for those tips! The more the merrier...
    • May 01 2013 | 7:28 pm
      Still banging my head on this (hopefully) last major glitch in the framework. Anybody got a suggestion? The goal: use some sort of GL means to more or less duplicate the effect in the Spatial Map -> Distorted Quad demo patch - that is, making a 2D matrix/texture look like it's being twisted/pulled/warped in 3D. The problem: the gen repos shader doesn't seem to behave the same way as the jit.pix.repos object used in the demo patch (see my previous pasted patch), and using it as-is with jit.pix.repos is too slow, as it'll end up being used many times simultaneously (possibly 8 instances).
      The other possible solution I'm playing with is to use a mesh and capture it as a texture, but so far it doesn't look very good. (I haven't tried running it with the 8 instances yet, so I'm not sure how that'll work speed-wise.) It seems like some sort of shader would be a nicer solution, but I'm much less experienced with the GL side of things than I am with matrix stuff, so I'm a little stumped about how to proceed.
      Thanks for any ideas!
    • May 02 2013 | 12:32 am
      here is an example of how to use the gl.pix repos to get similar effects as the matrix version.
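      For anyone following along, the core of what repos computes can be sketched in a few lines of JavaScript. This is a 1-D nearest-neighbour version; the real object works in 2-D and supports fractional coordinates and interpolation (the @interpbits business discussed above):

```javascript
// Sketch of the repos idea: the map holds, for every output cell, the
// input coordinate to sample. Warping the map warps the image.
function repos(input, map) {
  return map.map((srcIndex) => input[srcIndex]);
}
```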
    • May 07 2013 | 6:05 am
      Thanks Rob! That works much better! And now it works much more smoothly with jit.gl.hap. Interestingly, though, the non-Hap-compressed Jitter demo movies @ 320x240 from the laptop's internal drive play back more smoothly than my own Hap-compressed videos @ 854x480 played from the FW800 external drive. Hopefully I can find a "happy medium" resolution. Thanks again!
    • Jan 20 2015 | 8:30 pm
      Hi everyone,
      I'm experiencing strange behaviour when loading png images into jit.gl.hap.
      Sometimes the colors of my png are changed and sometimes not.
      I've saved the images in the same png format. And typically, for example, red is changed to blue and yellow to light blue.
      There seems to be a hue rotation of 150° as I can get the original color if I apply a hue rotation filter.
      But what is weird is that other images are drawn normal.
      Ok, I've just made a test and it appears that the colorspace is changed:
      instead of argb, hap draws the image in abgr, so the red and blue channels are switched.
      And there doesn't seem to be a colormode attribute on the object.
      Has anyone experienced this problem too and solved it?
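      The symptom described above amounts to a per-pixel channel reorder, which a plane swap (or a swiz-style shader) would undo. A sketch, with an illustrative function name:

```javascript
// Sketch of the fix for an ABGR-vs-ARGB mixup: red and blue swap while
// alpha and green stay put, so reordering the planes restores the image.
function abgrToArgb(pixel) {
  const [a, b, g, r] = pixel; // incoming order: alpha, blue, green, red
  return [a, r, g, b];        // outgoing order: alpha, red, green, blue
}
```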
    • Jan 21 2015 | 6:45 pm
      i would not use jit.gl.hap for still images.
    • Oct 18 2015 | 7:09 am
      Hi Rob,
      A follow-up question for the Max patch that went with your reply:
      "...the easiest way to overlay movies with alpha-channels, is to overlay multiple gl.videoplanes.
      you simply need to set @depth_enable 0 and @blend_enable 1. the default @blend_mode will blend based on alpha channel.
      if you want to control the layering order, use the @layer attribute...."
      Suppose I wanted to adapt this code so that each video plays at full size within its quadrant, rather than showing a quarter of each video - how would I do that?
      I mainly want to use a corner of a window (one I'm already rendering to from several sources) to trigger a countdown loop programmatically as a cue for a dancer.
      So, the video would be rendering normally without the picture-in-picture loop visible at all, then on a bang the loop would play in one of the corners, then get out of the way after it was done.
      Thanks for any help from Rob or anyone else who knows the answer of how to do this.
      EDIT: Figured it out! Didn't even need the jit.matrix code, simply a matter of positioning and scaling.
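      For anyone hitting the same thing, the positioning/scaling arithmetic might look like this sketch. cornerPlacement is hypothetical, and exact values depend on the window aspect and @transform_reset mode, so treat it as a starting point:

```javascript
// Sketch: with a videoplane in normalized -1..1 coordinates, scaling it
// by s and pushing it into a corner means offsetting by (1 - s) on each
// axis, so the plane's edge meets the window's edge.
function cornerPlacement(s, cornerX, cornerY) {
  // cornerX/cornerY are -1 or 1 (left/right, bottom/top)
  return {
    scale: [s, s, 1],
    position: [cornerX * (1 - s), cornerY * (1 - s), 0],
  };
}
```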
    • Oct 18 2015 | 7:46 am
      I'm not sure I follow exactly how you want it to work, but it sounds like you don't really want mattes as I was doing; you just want to resize and reposition the layers?
      Something like this? I'm guessing the layers aren't doing exactly what you have in mind, but you can adjust as necessary.
    • Oct 18 2015 | 8:01 am
      Thanks uebergeek, managed to get it! There should be a pretty cool video coming out in November for the performance, I'll be sure to link from this thread so you can get an idea what the project was :)
    • Nov 10 2015 | 11:31 am
      This is so kooooooool thanks UeberGEEEEEEkkkkkk :)