Re: Alpha blending multiple videos with

Apr 17 2013 | 10:13 pm

Hi again, Rob and all,

Though the patch is basically working as it should now, adding the multiple alpha mattes back in has slowed the frame rate down again. I'm doing only minimal video processing at the moment and will need to add more, but it has to run faster before I can. (I'm working with 854×480 sources.) I've attached my revised patch – is there a faster way I could be doing this? I've red-commented the render sections where I need help.

Some background info: This patch is a basic demo/framework for a research/performance system that would use various aspects of gesture and sound data to layer and manipulate irregular shaped video images. Currently I’m using sound waves with some image processing to generate alpha masks, but the idea would be to be able to use a variety of real-time gestural and sound data – plus Jitter matrix processing – to create the masks. (I’ve tried some other generative processes to create the masks by the way, and they’re just as slow. So it seems to be the layering of multiple alpha masked videos that’s the bottleneck, not the way I’m generating the masks.)
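For what it's worth, here is a minimal sketch (in plain Python, purely illustrative – the actual patch operates on Jitter matrices, not Python lists) of the per-pixel "over" compositing that stacking alpha-masked layers amounts to. At 854×480 that's roughly 410k pixels per layer per frame, so CPU-side matrix compositing scales linearly with the layer count, which would explain the slowdown:

```python
def over(src, dst, alpha):
    """Composite one source pixel over a destination pixel.
    src, dst: (r, g, b) tuples in 0..1; alpha: mask value in 0..1."""
    return tuple(s * alpha + d * (1.0 - alpha) for s, d in zip(src, dst))

def composite_layers(layers, background):
    """Stack (frame, mask) layers bottom-to-top over a background frame.
    Each frame is a 2D list of (r, g, b) tuples; each mask a 2D list of
    floats. Cost is O(layers * width * height) per video frame, which is
    the bottleneck when many alpha-masked layers are composited per frame."""
    out = [row[:] for row in background]
    for frame, mask in layers:
        for y, row in enumerate(frame):
            for x, px in enumerate(row):
                out[y][x] = over(px, out[y][x], mask[y][x])
    return out
```

Moving this inner loop onto the GPU (e.g. compositing textures with jit.gl.slab-style shader passes instead of CPU-side matrix operations) turns the per-pixel work into one hardware blend pass per layer, which is usually the standard answer to this kind of frame-rate problem in Jitter.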

I’d also be interested in ideas on doing something similar with OpenGL meshes, etc. The alpha mask is my initial approach to layerable irregular forms, but some more visual depth would be welcome. I tried using the sound data with jit.gen to transform multiple meshes (based on the patch posted at [link]), but that ended up being very slow and didn’t really layer. I could imagine layering multiple textured cylinders and then deforming them in real time, but I’m not sure that would be possible or practical.

Thanks much for any leads, and please let me know if I’ve left out any pertinent info.
