Integrating shaders

Jan 13, 2010 at 4:01pm


Hello!

I am making a video processing patch that uses nearly 20 shaders from the Jitter shader library, for example blur, brcosa, compositors, … I have optimized it as well as I can, but it's still very slow, even on good machines with an NVIDIA Quadro FX and 8 cores.
I am thinking about learning GLSL (I wrote GL apps in C for 8 years) and merging all the shaders into one with a lot of adjustable parameters and some optimization (for example, only run the blur code if blur > 0.).
My question:
Is it better to use one complex shader in a single slab, or many slabs each running a tiny piece of GLSL code?
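
To make the idea concrete, here is a minimal sketch of what such a combined fragment shader could look like; all names (tex0, blur_amount, the brcosa-style parameters) are invented for illustration and are not taken from the Jitter library shaders:

```glsl
// Sketch of the combined-shader idea: every effect reads its own
// uniforms, and a stage whose parameter is zero is skipped entirely.
#extension GL_ARB_texture_rectangle : enable

uniform sampler2DRect tex0;
uniform float blur_amount;   // 0. means: skip the blur stage
uniform float brightness;    // brcosa-style parameters
uniform float contrast;
uniform float saturation;
varying vec2 texcoord0;

vec4 blur3(vec2 uv, float amt)
{
    // crude 3-tap horizontal blur, just to show how stages chain
    vec4 c = texture2DRect(tex0, uv);
    c += texture2DRect(tex0, uv + vec2(amt, 0.));
    c += texture2DRect(tex0, uv - vec2(amt, 0.));
    return c / 3.;
}

void main()
{
    vec4 c = texture2DRect(tex0, texcoord0);
    if (blur_amount > 0.)    // the "if blur > 0. then blur" idea
        c = blur3(texcoord0, blur_amount);
    c.rgb = (c.rgb - 0.5) * contrast + 0.5 + brightness;
    float luma = dot(c.rgb, vec3(0.299, 0.587, 0.114));
    c.rgb = mix(vec3(luma), c.rgb, saturation);
    gl_FragColor = c;
}
```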

Thanks a lot!
Have a good day!

#47684
Jan 13, 2010 at 8:15pm

It’s a tricky question since it also depends on the hardware/drivers, and how many effects are in use most of the time.

We had some problems on the project we're working on here with ATI cards on the Mac not supporting 'if's in vertex shaders properly: if we used 'if' in a function instead of in main(), the framerate would drop from 50fps to 5fps.
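
A rough illustration of the pattern (invented names, not the actual project shader):

```glsl
// Illustration of the slow pattern: an 'if' inside a helper function,
// which that ATI/Mac driver handled very badly.
uniform float amount;

vec3 displace(vec3 p)
{
    if (amount > 0.)    // branch in a function: 50fps -> 5fps for us
        p.y += amount;
    return p;
}

void main()
{
    // Workarounds that behaved fine: do the test directly in main(),
    // or use branch-free math such as:  p.y += max(amount, 0.);
    vec3 p = gl_Vertex.xyz;
    if (amount > 0.)
        p.y += amount;
    gl_Position = gl_ModelViewProjectionMatrix * vec4(p, 1.);
}
```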

But in my experience one big shader is usually more efficient than a stack of slabs. It should also be fun (zomg: I'm a nerd).

For vertex shaders that's currently the only option for stacking effects: we made a JavaScript that compiles little pieces of GLSL code into one big Jitter jxs, and it works like a charm.
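
I can't post the script itself, but the shape of the shader it generates is roughly this (effect names invented, a sketch rather than our actual output):

```glsl
// Each user snippet becomes a function plus its own uniforms, and the
// generated main() chains the calls in stack order.

// --- snippet "wave" (pasted in by the generator) ---
uniform float wave_amp;
vec4 fx_wave(vec4 p) { p.y += wave_amp * sin(p.x); return p; }

// --- snippet "scale" ---
uniform float scale;
vec4 fx_scale(vec4 p) { p.xyz *= scale; return p; }

// --- generated main(): one function call per stacked effect ---
void main()
{
    vec4 p = gl_Vertex;
    p = fx_wave(p);
    p = fx_scale(p);
    gl_Position = gl_ModelViewProjectionMatrix * p;
}
```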

#171480
Jan 13, 2010 at 10:29pm

Thanks Nesa for the reply!

#171481
Jan 13, 2010 at 10:30pm

Large shaders will run into issues: there are instruction count limits, and the number of uniforms, texture indirections, and samplers you can use differs across hardware. I've run into this with things as simple as loops in shaders. Also, "if blocks" are not optimized out; the GPU tends to execute all branches, even the 'false' ones, so the larger the shader and the more ifs you have, the slower it will run. As far as it has been explained to me, this is a hardware limitation, not a software thing.

The general recommendation is to stay with slimmer, more optimized slabs and do multiple passes. You can of course combine shaders, but I've been down that path and you definitely run into issues with it. Wesley calls it the 'mega shader' approach. It may not be a huge issue on modern hardware, but it's something to be aware of: you can't have arbitrarily large shaders even today.

You may also be running out of available video RAM, depending on how your textures are handled and how large they are. Multiple passes means doing some more work behind the scenes (binding and unbinding temporary resources, etc.), but it should not be too bad. That said, 20 passes is kind of a lot.

Another note about the GPU: it's been the case that modern gaming cards typically outperform the workstation variants, at least on OS X: http://arstechnica.com/apple/news/2009/12/a-second-look-at-the-nvidia-quadro-fx-4800-mac-edition.ars

I'm very curious about nesa's patch for building a meta-shader framework with realtime shader 'combinatorics'. Is this something you can share? Sounds like fun.

#171482
Jan 27, 2010 at 2:11pm

Hi,

Unfortunately I can't share the code, but I can share the idea. Changing the stack in this system is not for realtime use, since loading the new shader file would interfere with rendering. But who changes the stack during a performance anyway? :)

At the moment I’m still super busy, but would gladly put something online in the following weeks.

Since I still don't have much time to develop and debug it all by myself and then release it in a reasonable time, I was wondering if anybody else is interested in collaborating?

#171483
Jan 27, 2010 at 6:27pm

Yes, I can make the coffee.

#171484
Jan 27, 2010 at 11:00pm

I’d be potentially interested.

#171485
Jan 29, 2010 at 12:10am

I am interested.

#171486
