I'm trying to mix multiple video sources using various kinds of irregular alpha masks, and so far I haven't been able to find a good approach. The test patches use an audio waveform as the basis for generating the mask shapes, then do some further image processing on the resulting matrices before applying them as masks.
Here are two things I've tried.
* testalpha5: uses jit.alphablend to do the masking, then sends the result to a pretty much as-is copy of the 4-way slab-based mixer from Jitter Tutorial #23.
* testalpha6: doesn't use jit.alphablend; instead it modifies that slab mixer to do the alpha blend in jit.gl.slab (a sketch of the kind of shader I mean follows this list).
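For reference, here's a rough sketch of the kind of three-input slab shader I have in mind for testalpha6. This is not my actual patch: the shader name, the param names, and the assumption that the mask matrix has the same dimensions as the first video input are just for illustration.

<jittershader name="alphamask">
	<description>mix two textures using a third texture as a per-pixel mask</description>
	<param name="tex0" type="int" default="0" />
	<param name="tex1" type="int" default="1" />
	<param name="tex2" type="int" default="2" />
	<language name="glsl" version="1.0">
		<bind param="tex0" program="fp" />
		<bind param="tex1" program="fp" />
		<bind param="tex2" program="fp" />
		<program name="vp" type="vertex" source="sh.passthrudim.vp.glsl" />
		<program name="fp" type="fragment">
<![CDATA[
// two video inputs plus a mask input (hypothetical names)
varying vec2 texcoord0;
varying vec2 texcoord1;
uniform sampler2DRect tex0; // video A
uniform sampler2DRect tex1; // video B
uniform sampler2DRect tex2; // mask

void main (void)
{
	vec4 a = texture2DRect(tex0, texcoord0);
	vec4 b = texture2DRect(tex1, texcoord1);
	// reuse texcoord0 for the mask: assumes the mask matrix matches the dims of input 0
	float m = texture2DRect(tex2, texcoord0).r;
	gl_FragColor = mix(b, a, m); // m = 1 shows video A, m = 0 shows video B
}
]]>
		</program>
	</language>
</jittershader>

In the patch this would load into a jit.gl.slab @inputs 3 via the @file attribute, with the two videos on the first two inlets and the mask matrix on the third.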
Both of them basically work, except:
* They are both already very slow, and the tests are at low resolution and not doing much image processing yet, so I need a more efficient approach. I'm planning to buy a solid-state drive (FW800 interface) soon in hopes of improving QuickTime bandwidth, but I suspect that won't solve everything.
* In testalpha6, which does the alpha blending in jit.gl rather than on the matrix side, the background mysteriously turns from black to grey depending on which display the window is on. I can't figure out whether this is a bug or just something I need to do differently (one guess I'd like to rule out is sketched after this list).
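Re: the grey background, one thing I'd like to rule out (just a guess, not a known fix): if the slab's output alpha ends up non-opaque and the final stage blends it against the window's erase color, the result could shift when the window moves to a display with a different profile. As a quick test I could force the output opaque by replacing the last line of the fragment program sketched above with:

	gl_FragColor = vec4(mix(b.rgb, a.rgb, m), 1.0); // ignore source alpha, always output opaque

but I haven't confirmed whether that's actually related to the display-dependent grey.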
Despite several years of using Jitter, my knowledge of the jit.gl side is still fairly limited. I've gleaned quite a bit from digging through forums and tutorials, but I suspect I haven't found the best approach yet.
Thanks for any advice!
Current Setup: Max 6.0.8. Mid-2010 MacBook Pro, OS X 10.6.8, 8 GB RAM. FW800 external drive for videos. GeForce GT 330M (Apple just swapped the logic board to replace the defective graphics chip this MBP series originally shipped with, so hopefully it's good now).