My patch works fine, but I can't achieve the light effect at the bottom of the tunnel, even with the blend_enable and blend_mode attributes.
The problem is that with blend_enable 1 and the blend modes I tried, the full image is transparent. I'd like to have transparency only in the black region around the light sphere.
(please see the attached file).
I presume my Photoshop image file should have an alpha channel set to 0 in the black region and 255 in the light region, or something like that. That's why I considered doing it with jit.op and jit.alphablend, but I'm not very sure...
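That alpha-from-brightness idea can be sketched outside Max. This is only an illustration in Python/NumPy (the function name and the Rec. 601 luma weights are my assumptions, not Jitter's exact internals): it derives an alpha plane from pixel brightness, so the black region becomes fully transparent and the light region fully opaque.

```python
import numpy as np

def add_luma_alpha(rgb):
    """Derive an alpha channel from brightness: black pixels become
    fully transparent (alpha 0), bright pixels fully opaque (alpha 255).
    Roughly the effect of feeding a greyscale copy of the image into
    the alpha plane before blending."""
    rgb_f = rgb.astype(np.float32)
    # Rec. 601 luma weights (an assumption for this sketch)
    luma = 0.299 * rgb_f[..., 0] + 0.587 * rgb_f[..., 1] + 0.114 * rgb_f[..., 2]
    alpha = np.clip(luma, 0, 255).astype(np.uint8)
    return np.dstack([rgb, alpha])  # RGBA

# black pixel -> alpha 0, white pixel -> alpha 255
img = np.array([[[0, 0, 0], [255, 255, 255]]], dtype=np.uint8)
rgba = add_luma_alpha(img)
```

Note this sketch uses RGBA plane order for readability; Jitter matrices are typically ARGB, so the alpha plane would sit first there.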
But the main problem is that the texture disappears from the plane when I go fullscreen.
This was mentioned in another thread, but the suggestions about @automatic and rendering in the proper order aren't very clear, since Tutorial 31 doesn't seem to work as expected.
i know there are countless threads about this and i've read them all, but i still don't get it. there is this mysterious co.alphablend.jxs shader but no documentation whatsoever, so maybe someone could just show me the way:
i blend videos with proper alpha channels on videoplanes, and that works fine. the only problem is that AFTER the blending i want to apply some effects (also on the gpu side), so i would need something that works just like a videoplane with blend_enable 1 and a layer order, but which does not draw to a window and instead has an outlet i could feed into some other slab.
how would i do this???
the working videoplane version is in the patch below. i need the same thing, but with an outlet instead of drawing to a videoplane.
to be honest, the problem is i did not buy gen, and i wouldn't even know what to use it for except for this.
and as for editing shaders... frankly i have no idea how to do that (if i understood code i probably wouldn't have any of these problems anyway).
what about this co.alphablend.jxs shader? it sounds exactly like what i am looking for, but i can't find any help patch or other documentation...?
i just find it hard to believe i can't mix two inputs plus an alpha into one single texture, especially since it can be done so easily with videoplanes.
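For what it's worth, the per-pixel math that blend_enable 1 (or an alpha-blend shader) has to perform is just the standard "over" composite. A minimal sketch in Python/NumPy, not Jitter code, assuming 8-bit channel data:

```python
import numpy as np

def over(fg, alpha, bg):
    """Porter-Duff 'over': out = fg * a + bg * (1 - a), with a in 0..1.
    This is what source-alpha blending computes for each fragment."""
    a = alpha.astype(np.float32) / 255.0
    out = fg.astype(np.float32) * a + bg.astype(np.float32) * (1.0 - a)
    return out.astype(np.uint8)

fg = np.array([200], dtype=np.uint8)
bg = np.array([50], dtype=np.uint8)
opaque = over(fg, np.array([255]), bg)       # alpha 255 -> foreground wins
transparent = over(fg, np.array([0]), bg)    # alpha 0 -> background shows
```

So any slab that takes two textures plus a matte and evaluates this formula per pixel does the same job as a blended videoplane, just into a texture instead of the window.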
i currently work with 8 videoplanes and apply the very same effect chain (a bunch of slabs) to each one, which is not only ugly but also produces a different result than applying the effects once to a composited image.
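That last point is real: if an effect is non-linear, running it per layer and then mixing gives a different image than mixing first and running it once. A tiny illustration with a made-up gamma "effect" (standing in for any non-linear slab, not a specific one):

```python
def gamma_fx(x, g=2.2):
    """A stand-in non-linear effect (a gamma curve), 0..255 in and out."""
    return 255.0 * (x / 255.0) ** g

a, b = 50.0, 200.0  # pixel values from two layers

# effect applied to each layer, then a 50/50 mix
per_layer = 0.5 * gamma_fx(a) + 0.5 * gamma_fx(b)

# 50/50 mix first, then the effect applied once to the composite
composite = gamma_fx(0.5 * a + 0.5 * b)

# the two results differ, which is why 8 identical slab chains
# do not equal one chain applied to the composited image
```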
jpeg does not support alpha; i use targa. you can also use any other channel (r, g, b) of any image by selecting another outlet (2, 3, 4) of jit.unpack, or you can replace it with jit.rgb2luma and take any image (it gets converted to greyscale and then used as an alpha matte).
interesting: i changed my patch to do the compositing (6 layers, each with its own alpha) with shaders (alphaglue/alphablend, see above) and that does work, but it introduces quite noticeable frame lag. once i stop playback everything is fine and the mattes and videos match perfectly, but while playing some of them get delayed by about 1 frame. also, single wrong frames appear briefly when switching between slabs (e.g. when i bypass a slab chain with a gate).
i don't get this behaviour if i allocate a separate videoplane to each compositing process; that performs flawlessly (apart from the problem of applying effects after it...).
i haven't fully tested the jit.gl.node approach yet, because with a lot of layers it gets messy really quickly, but i will post the outcome when i try it.