Forums > Jitter

How to do edge blending on composited images

August 5, 2007 | 1:23 pm

Hi,

(First, pardon me for attaching an image to the mail. I generally try to avoid that, but I believe this illustration explains what I’m up to better than I could in words.)

For a project I want to work on a wide projection with high resolution (something like 8 x 1.4 meters, 2048 x 410 pixels). To do so we’ll split the image across two projectors. We want to apply edge blending at the transition from one image to the other as described here:

http://local.wasp.uwa.edu.au/~pbourke/texture_colour/edgeblend/

The images will be composed from a combination of layers of moving and still images, filling all or only parts of the canvas.

From an image processing point of view this can be considered a process in three steps as illustrated:

1) Combining the various sources

2) Splitting the combined result into two partly overlapping sections

3) Applying edge blending to each section, and projecting them with two projectors.

[img]index.php?t=getfile&id=784&private=0[/img]
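For reference, the blend ramp described in the linked Bourke article can be sketched like this (a hypothetical helper for illustration, not Jitter code; `p` shapes the curve and `gamma` compensates for projector response):

```python
def edge_blend(x, p=2.0, gamma=1.8):
    """Attenuation across the overlap band, following the curve from
    the Bourke edge-blending article: x runs 0..1 across the overlap,
    p controls the steepness of the ramp, and gamma corrects for the
    projector's non-linear response."""
    if x < 0.5:
        f = 0.5 * (2.0 * x) ** p
    else:
        f = 1.0 - 0.5 * (2.0 * (1.0 - x)) ** p
    return f ** (1.0 / gamma)
```

With gamma = 1 the two ramps are complementary (edge_blend(x) + edge_blend(1 - x) == 1), so the two projectors sum to full brightness across the overlap.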

However, I’m unsure what the best approach would be. If I were only going to combine various sources, I’d map them as textures onto videoplanes. This would also give access to alpha channels and OpenGL-accelerated scaling, positioning, and blend modes.

However, videoplanes seem to be rendered directly to the context, and I don’t seem to be able to use the result as a new texture that could then be split and have edge blending applied.

The other approaches that I can think of are:

- Do the compositing in Jitter (if fast enough), turn the resulting image into a texture, apply edge blending using a shader, and map it onto two videoplanes. The downsides of this are the loss of the positioning/scaling/alpha compositing that OpenGL seems to do speedily and elegantly, as well as a general fear that Jitter won’t be up to it compared to the GPU.

- Create a custom shader doing the whole shebang. I’m worried about how much work might be involved, and also whether it is possible to apply alpha, blend modes, positioning, etc. separately to each of the incoming textures.

Are there any other preferable approaches that I am missing?

Thanks,
Trond


August 5, 2007 | 4:25 pm

I’ve done this for a client, by creating one large composite
texture and then sending it to two videoplanes with a generated
alpha channel, using texture offsets and the alpha generation (I
think I just used jit.gradient?) to tweak the amount of overlay.

It seemed to work pretty well.

I can’t hand out the patch, however :(

so

make one large composite spanning texture

send the same texture to two different videoplanes offset in space
(but slightly overlapping)

use YUV for speed

use the cc.alphaglue shader to add an alpha channel (your gradient for
the blend amount); generate this gradient on the fly so you can tweak it

use texture offsets on each videoplane to draw only the first half
(plus offset amount) on one plane, and the second half (plus opposite
but equal offset amount) on the other.

This should be all you need, and was actually rather fast.
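To make the texture-offset step concrete, here is a small sketch (a hypothetical helper, not part of the patch) of the normalized texture-coordinate ranges each videoplane would draw, given the total width and the overlap:

```python
def split_regions(total_px, overlap_px):
    """Return (u0, u1) texture-coordinate ranges for the left and right
    videoplanes when one wide texture is split across two projectors
    sharing an overlap band. In the patch these ranges would translate
    into the texture scale/offset applied to each plane."""
    half = (total_px + overlap_px) / 2.0  # pixels each projector shows
    left = (0.0, half / total_px)
    right = ((total_px - half) / total_px, 1.0)
    return left, right
```

For a 2048-pixel-wide texture with a 128-pixel overlap this gives (0.0, 0.53125) and (0.46875, 1.0); the shared band is 128/2048 = 0.0625 of the width.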

On my system with optimized media I was able to composite and split
media at around 3068×1024 off a system with a TripleHead2Go.

Good luck!

On Aug 5, 2007, at 9:23 AM, Trond Lossius wrote:

> [...]

v a d e //

http://www.vade.info
abstrakt.vade.info


August 5, 2007 | 6:42 pm

Thanks for the input, vade!

If I can get it all onto one texture, I should be fine from there
onwards, but do you have any suggestions for where to look for further
info on building a composite texture? Do I have to go the slab or shader
route to do that, or are there specific examples or tutorials that I
should look into? I know there’s some stuff on composite slabs in Jitter
tutorials 42 and 43, but I’m not sure if they discuss applying incoming
images to part of the texture only.

Best,
Trond

vade wrote:
> [...]



Dan
August 5, 2007 | 6:57 pm

The td.rota shader should work fine for positioning your images in the texture and you can composite them together with another slab. If you don’t have much experience with shaders and slabs, have a look in jitter-examples/render/.


August 6, 2007 | 3:56 am

Check out the tr.edgeblend.jxs shader. This will generate an arbitrary
fade amount (alpha) for any given edge. There is an example in
/examples/jitter-examples/render/slab-helpers/transition/.

HTH,
Andrew B.


July 7, 2009 | 11:30 am

hi vade,

would you now share the patch you’re talking about? it’s been two years ;)

