(First, apologies for attaching an image to the mail. I generally try to avoid that, but I believe this illustration explains what I'm up to better than I can in words.)
For a project we want to create a wide, high-resolution projection (something like 8 x 1.4 meters, 2048 x 410 pixels). To do so we'll split the image across two projectors, applying edge blending at the transition from one image to the other as described here:
The projected image will be composed from layers of moving and still images, each filling all or only part of the canvas.
From an image-processing point of view this can be considered a three-step process, as illustrated:
1) Combining the various sources
2) Splitting the combined result into two partly overlapping sections (two W-pixel-wide sections with an overlap of O pixels cover a canvas of 2 x W - O pixels)
3) Applying edge blending to each section, and projecting them with the two projectors (see the shader sketch below).
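For what it's worth, steps 2 and 3 could probably be collapsed into a single fragment shader per projector: sample that projector's slice of the composited canvas, then multiply the overlap zone by a gamma-corrected ramp. Below is a minimal plain-GLSL sketch of that idea; the uniform names and the Bourke-style S-curve are my own assumptions, and wrapping it for Jitter would additionally need the usual .jxs boilerplate:

// Steps 2 and 3 in one fragment shader: sample this projector's slice of
// the full canvas texture, then attenuate the overlap with a gamma ramp.
// A sketch only; uniform names and the S-curve are assumptions.
uniform sampler2D canvas;      // the composited full-width image
uniform float sectionStart;    // left edge of this projector's slice (0..1)
uniform float sectionWidth;    // slice width as a fraction of the canvas
uniform float overlap;         // blend width as a fraction of the slice
uniform float blendPower;      // ramp steepness, e.g. 2.0
uniform float invGamma;        // 1.0 / projector gamma, e.g. 1.0 / 2.2
uniform float rightEdge;       // 1.0 = blend on the right edge, 0.0 = left

float scurve(float x)
{
    // S-curve from 0 (outer edge of the blend) to 1 (fully inside the image).
    return (x <= 0.5) ? 0.5 * pow(2.0 * x, blendPower)
                      : 1.0 - 0.5 * pow(2.0 * (1.0 - x), blendPower);
}

void main()
{
    vec2 uv = gl_TexCoord[0].st;
    // Step 2: look up this projector's part of the combined canvas.
    vec2 src = vec2(sectionStart + uv.s * sectionWidth, uv.t);
    // Step 3: fade the overlap so the two projections sum to full brightness.
    float d = (rightEdge > 0.5) ? (1.0 - uv.s) / overlap : uv.s / overlap;
    float a = pow(scurve(clamp(d, 0.0, 1.0)), invGamma);
    gl_FragColor = texture2D(canvas, src) * vec4(vec3(a), 1.0);
}

The same shader could serve both outputs by setting sectionStart and rightEdge differently per projector.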
However, I'm unsure what the best approach would be. If I were only combining the various sources, I'd map them as textures onto videoplanes. That would also give me alpha channels and OpenGL-accelerated scaling, positioning and blend modes.
However, videoplanes seem to render directly to the context, and I don't see a way to treat the combined result as a new texture that could then be split and have edge blending applied.
The other approaches that I can think of are:
- Do the compositing with Jitter matrix processing on the CPU (if fast enough), turn the resulting image into a texture, apply edge blending using a shader, and map it onto two videoplanes. The downsides of this are losing the scaling, positioning and alpha compositing that OpenGL does so fast and elegantly, plus a general fear that matrix processing won't be up to it compared to the GPU.
- Create a custom shader that does the whole shebang. Here I'm worried about how much work might be involved, and also whether it is possible to apply alpha, blend modes, positioning, etc. separately to each of the incoming textures (see the sketch after this list).
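To make that second worry concrete, here is roughly what I imagine the per-layer part of such a shader would look like. A two-layer plain-GLSL sketch under my own assumptions (layer count, uniform names, and the over/additive mode switch are all made up for illustration):

// Two-layer compositing sketch: position, scale, opacity and blend mode
// applied separately to the foreground layer. All names are assumptions.
uniform sampler2D layer0;   // background layer, filling the canvas
uniform sampler2D layer1;   // foreground layer
uniform vec2  pos1;         // foreground offset on the canvas (0..1)
uniform vec2  scale1;       // foreground size relative to the canvas
uniform float alpha1;       // foreground opacity
uniform float mode1;        // 0.0 = over, 1.0 = additive

void main()
{
    vec2 uv = gl_TexCoord[0].st;
    vec4 base = texture2D(layer0, uv);

    // Positioning/scaling: map the canvas coordinate into the layer's space.
    vec2 uv1 = (uv - pos1) / scale1;
    vec4 fg = texture2D(layer1, uv1);

    // Mask out samples falling outside the foreground's placement.
    float inside = step(0.0, uv1.s) * step(uv1.s, 1.0)
                 * step(0.0, uv1.t) * step(uv1.t, 1.0);
    float a = fg.a * alpha1 * inside;

    // Per-layer blend mode: classic 'over' or additive.
    vec3 overRGB = fg.rgb * a + base.rgb * (1.0 - a);
    vec3 addRGB  = base.rgb + fg.rgb * a;
    gl_FragColor = vec4(mix(overRGB, addRGB, mode1), 1.0);
}

Each additional layer would repeat the lookup-and-blend block, so the shader grows with the layer count; that is where I suspect the work (and my doubt) lies.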
Are there other, preferable approaches that I'm missing?