GLSL and different GPUs

Vilem_Otte 117 Dec 29, 2013 at 12:56 shaders glsl opengl

Hello, I have a fairly robust shader system in my engine, but as you know, GLSL implementations can sometimes be a bloody mess (though it has gotten a bit better since the #version directive, and most shaders now run on both major vendors' hardware).

The problem is maintaining shaders at various quality levels: keeping the G-buffer shader in several versions, 330 (OpenGL 3.3), 400 (OpenGL 4.0) and 420 (OpenGL 4.2), becomes bloody hell soon enough.

I’m glad I limited it to just 3 versions, but still: is there a way to use just one without writing some huge pre-processor? I don’t really want to do that, as that piece of software would take some time to create and wouldn’t pay off; I’d spend more time on it than on writing and supporting the different versions of the shaders.

Of course, this really starts to become a problem once you have, like I do, several hundred different shaders in the engine.

7 Replies


Reedbeta 167 Dec 29, 2013 at 19:54

A common trick is to hide away the version-specific stuff using macros or wrapper functions, like

#if __VERSION__ < 400
    vec4 Foo(vec4 bar) { /* GL 3.x implementation */ }
#else
    vec4 Foo(vec4 bar) { /* GL 4.x implementation */ }
#endif

When you compile the shader, you can pass in multiple source strings, so you can put the #version directive in the first string and the rest of the source in another string. Select the correct #version for the platform you’re running on, and the preprocessor does the rest.

At my last job we used this to hide differences between HLSL and PSSL (the PS4 shading language), so we could write shaders to work in either one. It worked reasonably well.

Vilem_Otte 117 Dec 30, 2013 at 18:02

Yup, I know I could use the pre-processor, thanks for mentioning it. Although we basically handle the same thing by loading different shaders from different locations (for example data/shaders/v330/ vs. data/shaders/v420/); in case some shader fails to compile, the system falls back to the one with the older version.

We also wanted to avoid preprocessor directives in shaders, as we use a somewhat modified GLSL that is already pre-processed by our own preprocessor (for example, we keep all shaders inside a single file).

Stainless 151 Dec 30, 2013 at 11:28

A lot of commercial games use a shader bank.

They all call it different things, but in general they create a few flags to cover everything they need to know about the client platform: things like whether the device supports contextless surfaces, how many instructions per shader are allowed, etc.

Then they load shaders based on those flags.

Vilem_Otte 117 Dec 30, 2013 at 18:10

Yeah, well, in the end that’s a similar system to what I have now (I decide on a GLSL version based on the hardware; of course, some really complex shaders might not compile on some hardware, in which case a lower-GLSL-version variant is supplied).

Still, they have to (as do I) write multiple versions of the shaders.

Reedbeta 167 Dec 31, 2013 at 03:45

I’m not sure what you’re looking for then. Obviously for the things that differ between GLSL versions you’re going to have to have multiple versions of the code somewhere, somehow. Nothing can save you from that.

The best you can do is segregate all the platform-specific stuff, so its impact is limited and you can write the rest of your shaders to a platform-independent API as much as possible. Whether you do it with the C preprocessor, or some preprocessor of your own, is just an implementation detail.

Stainless 151 Dec 31, 2013 at 10:40

Yes, but there are ways of managing this.

1) Single shader and translation layer

Write your shaders in your own macro language and translate it into the correct version of GLSL before compilation

2) Subroutines

Write an include file for each GLSL version and write the shaders using only these subroutines. Change the include file on the fly.

Or you can use one of the freeware translation libraries that are out there. I can’t remember the name of the one I have used, but it converts OpenGL into Direct3D on the fly. This is particularly useful on Windows, as you can use pixwin to debug.

Vilem_Otte 117 Dec 31, 2013 at 12:34

I’ll answer Reed in this reply too. Yup, I’m looking for some example of your point 1; e.g. do you know of an example of this (or better, a full implementation, though I doubt that exists)? Because this is what I’ve been looking for.