Gamma correction is important

Reedbeta 167 Apr 06, 2010 at 14:00

[Image: 10-04-05-0.jpg, spheres lit without gamma correction (left) and with it (right)]

Description
At this year’s GDC, I went to a talk by John Hable of Naughty Dog, and he showed a slide very much like this. I subsequently reproduced it in our game engine here at Sucker Punch. It’s a real eye-opener about the importance of gamma correction.

Both images here use a bog-standard ambient + Lambert diffuse + Blinn specular lighting model. The difference is that on the left we ignore gamma altogether, effectively doing the lighting computation in gamma space, which is physically incorrect. On the right, we apply inverse gamma to all the light levels and colors to convert them into linear space, do the lighting there, and then convert the result back into gamma space.
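In rough code terms, the two paths differ like this (a minimal Python sketch, using the common 2.2 power approximation of the gamma curve rather than the exact sRGB one; the light and material values are made up for illustration):

```python
# Minimal sketch: lighting in gamma space vs. linear space.
# Assumes a plain 2.2 power curve; all values are illustrative.

GAMMA = 2.2

def decode(c):
    # display (gamma) space -> linear space
    return c ** GAMMA

def encode(c):
    # linear space -> display (gamma) space, clamping at 1 like the image
    return min(c, 1.0) ** (1.0 / GAMMA)

def shade(albedo, ambient, n_dot_l, specular):
    # ambient + Lambert diffuse + Blinn specular, one color channel
    return albedo * (ambient + n_dot_l) + specular

albedo = 0.5                                  # gamma-encoded material color
ambient, n_dot_l, specular = 0.1, 0.7, 0.25   # made-up light values

# Left image: ignore gamma and shade the encoded value directly
left = min(shade(albedo, ambient, n_dot_l, specular), 1.0)

# Right image: decode to linear, shade there, re-encode for display
right = encode(shade(decode(albedo), ambient, n_dot_l, specular))

print(left, right)   # 0.65 vs ~0.68: same inputs, different pixel
```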

As you can see, it makes an enormous difference for the specular highlight. Lighting in gamma space (the left) causes the spec highlight to take on some of the color of the underlying surface, creating the odd-looking ring where the green channel saturates but the red and blue channels are still catching up. In linear space (the right) the green interpolates to white much more cleanly. The diffuse shadow line is also crisper in the gamma-correct version.

These images used the sRGB gamma curve, which is implemented in hardware on many GPUs, including the PS3’s RSX. (The X360’s Xenos GPU also supports hardware gamma encoding/decoding, but with a slightly different curve.) The hardware gamma correction is quite fast, so lighting in linear space needn’t be a performance hit even in a real game.
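For reference, the exact sRGB transfer function is piecewise, with a short linear segment near black, rather than a pure power curve; a sketch of the encode/decode math:

```python
# The sRGB transfer function: a short linear toe near black,
# then a power segment; roughly equivalent to gamma 2.2 overall.

def srgb_to_linear(c):
    # decode a display-space sRGB value in [0, 1] into linear space
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    # encode a linear value in [0, 1] back into display-space sRGB
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
```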

If I’m not mistaken, many renderers out there ignore gamma issues altogether, or perhaps just provide a single gamma slider to adjust things at the end, after all rendering is performed. This screenshot is, I hope, motivation enough to start paying attention to gamma in your own renderer! While I can’t show any screenshots, gamma-correct lighting has certainly had a positive effect on the overall lighting quality of our upcoming game.

29 Replies


tobeythorn 101 Apr 06, 2010 at 14:10

Reedbeta,
Would it be possible for you to share a screenshot of a real scene? I can’t appreciate the difference in the image of spheres you posted. In the gamma-corrected image, there is still a smaller and slightly less noticeable green ring. Is that an artifact of imperfect correction or numeric imprecision? FYI, I’m using a calibrated S-PVA display.

Reedbeta 167 Apr 06, 2010 at 16:23

Unfortunately, I can’t show any screenshots from our game. However, here is the full-resolution image of the spheres. I believe the smaller, fainter green ring (which looks much less noticeable on my display) is the result of not using HDR tonemapping. The ambient, diffuse, and specular add up to an intensity greater than 1 there, and I’m just clamping everything to 1. This produces a discontinuous change in the spatial derivatives of the color value, and the eye “overcorrects” - I can’t remember what this phenomenon is called, but I’ve seen it discussed before.

TheNut 179 Apr 06, 2010 at 18:58

Sometimes uncorrected gamma can be a good thing too. I quite like it for producing flare effects and it also looks good for blowing out a scene (nice technique especially in cinematography).

One drawback I find with gamma correction is that it tends to make an image less contrasty, so additional work is generally needed to improve quality. I find that applying an S-curve on top helps restore the shadows and highlights while maintaining the midtones.

tobeythorn 101 Apr 06, 2010 at 19:09

Reedbeta,
Thanks for the larger image. I do agree that the gamma-corrected image looks better. Is it just my eyes, or is the ring you attribute to the tone discontinuity absent from the uncorrected image? Maybe the gamma correction magnifies that artifact.

Kenneth_Gorking 101 Apr 06, 2010 at 20:02

@tobeythorn

Reedbeta,
Would it be possible for you to share a screenshot of a real scene? I can’t appreciate the difference in the image of spheres you posted.

Valve did a related presentation in 2008, with several screenshots for comparison from Half-Life 2: http://www.valvesoftware.com/publications/2008/GDC2008_PostProcessingInTheOrangeBox.pdf

Reedbeta 167 Apr 06, 2010 at 21:37

@tobeythorn

is the ring you attribute to the tone discontinuity absent from the uncorrected image? Maybe the gamma correction magnifies that artifact.

I agree, there doesn’t seem to be a ring in the same position in the uncorrected image, but that is to be expected: lighting in linear space changes the way the various lighting components accumulate, so the point where they saturate to 1.0 is different. It would be interesting to make an animation interpolating the gamma from 1.0 (left image) to 2.2 (right image, approximately)…maybe I’ll do that one of these days. :D

@Kenneth: That’s an interesting presentation as well, but it focuses more on alpha blending than the effect of gamma on lighting.

@TheNut: At the same talk, John Hable also went into filmic tonemapping, which seems to have similar S-curves - he said that’s why film-like curves look better than other tonemapping operators. It’s funny how gamma and HDR seem to be closely tied together!

appleGuy 101 Apr 06, 2010 at 21:53

Does gamma correction still apply with digital displays like LCDs?

Reedbeta 167 Apr 06, 2010 at 22:02

@appleGuy

Does gamma correction still apply with digital displays like LCDs?

Yes. LCDs have a gamma curve very similar to CRTs, by design.

roel 101 Apr 07, 2010 at 11:39

Gamma correction is important for sure. This article opened my eyes: http://http.developer.nvidia.com/GPUGems3/gpugems3_ch24.html
I didn’t know that it wasn’t yet common knowledge in the industry.

roel 101 Apr 07, 2010 at 11:43

Also: the “output stage” is not the only place that has to deal with gamma; the “input stage” (forgive me the obscure names) of your pipeline might need to as well. Take care that values read from textures are converted into linear space before you compute with them, which generally requires inverse gamma correction.
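A concrete example (a small sketch, assuming the usual 2.2 power curve): even just averaging two texels goes wrong if you do it on the encoded values.

```python
# Why texture values should be linearized before you compute with them:
# averaging a black and a white texel, gamma-2.2 approximation assumed.

black, white = 0.0, 1.0

# Wrong: average the gamma-encoded values directly
avg_gamma = (black + white) / 2                               # 0.5

# Right: decode to linear, average, re-encode for display
avg_linear = ((black ** 2.2 + white ** 2.2) / 2) ** (1 / 2.2)

print(avg_gamma, avg_linear)  # 0.5 vs ~0.73: the correct mix is brighter
```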

appleGuy 101 Apr 07, 2010 at 12:46

Are grey-scale displacement maps gamma-corrected into linear space? The Naughty Dog slides don’t mention this, for obvious reasons.

I would assume so. However, trying it out produces displacements that are very extreme, i.e. not much displacement in the mid-tones.

Reedbeta 167 Apr 07, 2010 at 17:25

It’s a hard call. I would guess that displacement maps, and other things not representing a color (e.g. normal maps), should not be gamma corrected, particularly if they’re machine-generated from a higher-res mesh or something like that. On the other hand, if the artists are painting these maps, I’ve heard the argument that gamma space is more intuitive for them to work in. We do apply inverse gamma to our specular maps, as I observed that the spec highlights seemed to get much brighter without it when comparing uncorrected vs. corrected lighting.

tobeythorn 101 Apr 07, 2010 at 17:56

Reedbeta,
FYI, on my low-quality, uncalibrated monitor at school, I can’t see the discontinuous ring at all. I suspect, then, that for most people that artifact is a non-issue.

appleGuy 101 Apr 07, 2010 at 20:09

@tobeythorn

Reedbeta,
FYI, on my low-quality, uncalibrated monitor at school, I can’t see the discontinuous ring at all. I suspect, then, that for most people that artifact is a non-issue.

The main thing I notice (very prominent in the Naughty Dog slides) is the light falloff around the sides and behind the sphere, where the light doesn’t reach. Gamma correction seems to provide a tighter “darkness” falloff, which is more like real life.

On my calibrated Apple LED display the specular differences are very obvious.

This is such an easy fix, yet yields very nice results.

TheNut 179 Apr 08, 2010 at 01:22

@tobeythorn

Reedbeta,
FYI, on my low-quality, uncalibrated monitor at school, I can’t see the discontinuous ring at all. I suspect, then, that for most people that artifact is a non-issue.

Only for people with cheap, low quality monitors :lol:
@roel

I didn’t know that it wasn’t yet common knowledge in the industry.

I think devs are aware of it; it’s just a difficult problem to solve, because no two monitors are the same. I have two monitors at work with the same make and model, but neither of them produces exactly the same luminance with the same settings (in fact, one of the monitors is really borked). Between your game and its resources, the renderer, the video card, and the monitor, who knows what’s being adjusted in there. So the best thing to do is give the end user a gamma slider and let them figure it out, or cheat ;)

Reedbeta 167 Apr 08, 2010 at 02:07

Well, just as a personal anecdote, I was aware before that gamma correction was an issue and that to be physically correct you should do lighting in linear space. But I didn’t realize just how much of a difference it makes for getting lighting to look good and correct. The fact that it has such an impact on an image as simple as this was surprising to me, and that’s why I posted it. :)

As for differences among monitors and putting a gamma slider at the end of the render: that’s a fine idea, but re-adjusting the gamma at the end of rendering doesn’t do the same thing as gamma-correcting throughout the rendering pipeline. No amount of applying gamma as a postprocess to the left image will make it look like the right, once it’s already been rendered. :D The problem is with adding together light values, e.g. ambient + diffuse + specular, or adding multiple lights: (a + b)^2.2 != a^2.2 + b^2.2, and so forth.
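A quick numeric check of that inequality (throwaway Python):

```python
# Gamma does not distribute over addition, so a post-process gamma
# slider cannot repair lighting that was summed in the wrong space.
a, b = 0.3, 0.4
print((a + b) ** 2.2)        # ~0.456
print(a ** 2.2 + b ** 2.2)   # ~0.204, clearly not equal
```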

Ideally we’d decode all the textures coming in using the monitor gamma of the artist who made them, then encode them using the user’s monitor or TV gamma…but using sRGB for both is a decent compromise, and certainly better than ignoring the issue altogether! :yes:

tobeythorn 101 Apr 08, 2010 at 04:12

To clarify, I meant that, on a low-quality display, I COULD see a noticeable improvement between the gamma-corrected and uncorrected images, BUT I couldn’t see the other visual artifact (caused by clamping) in the corrected image.

JarkkoL 102 Apr 08, 2010 at 09:03

@Reedbeta

Ideally we’d decode all the textures coming in using the monitor gamma of the artist who made them, then encode them using the user’s monitor or TV gamma…

No you wouldn’t. You want the textures to be in linear space and just do the gamma correction as a post-process for the output monitor at the very end. If you encoded the output monitor’s gamma into the textures, you would end up with wrong results because, as you said, a^2.2 + b^2.2 != (a + b)^2.2. So ideally you would have the input textures authored in linear space, not use sRGB textures, and not do anything in the shaders, to preserve the most precision.

_oisyn 101 Apr 08, 2010 at 12:30

Wouldn’t simply using actual sRGB textures (rather than sRGB data interpreted linearly) and an sRGB framebuffer do all the necessary conversions automatically? That’s what the sRGB texture formats are for.

JarkkoL 102 Apr 08, 2010 at 13:29

Sure, if you never do any lighting calculations with your textures ;) And even then, if you do any bi-/trilinear texture filtering, the results are off. sRGB isn’t a texture format but a gamma standard, which tells you how to linearize color values that assume the widely used sRGB gamma.

macnihilist 101 Apr 08, 2010 at 14:24

@.oisyn: We did this in a DX9 engine and ran into _huge_ problems with blending, because (apparently) all DX9 cards blend in gamma space (wrong) while all DX10 cards blend in linear space. And (at least at the time) there were no caps to check for this behavior. (I think it’s possible with a D3D9Ex device.)
With DX10 it should work correctly, although I’m not 100% sure about texture filtering and multisampling.

_oisyn 101 Apr 08, 2010 at 15:00

@JarkkoL

Sure, if you never do any lighting calculations with your textures :D And even then, if you do any bi-/trilinear texture filtering, the results are off. sRGB isn’t a texture format but a gamma standard, which tells you how to linearize color values that assume the widely used sRGB gamma.

Both OpenGL and DirectX support sRGB texture formats (well, OK, in DX9, for example, they are sampler and render states; see the documentation). The idea is that the values are converted to linear space and back when reading from / writing to such formats. According to the specs, at least; I haven’t ever experimented with it :)

JarkkoL 102 Apr 08, 2010 at 15:18

Heh, it’s just the way those gfx APIs limit gamma correction to specific formats, not that it’s really a different format (: In DX9 you had sRGB defined as a sampler state. Anyway, in your earlier post you said “rather than sRGB data interpreted linearly”, which kind of indicated you didn’t understand what was going on, i.e. that reading from an sRGB texture does exactly that: it linearizes the color data. So I just thought I’d correct that (:

_oisyn 101 Apr 08, 2010 at 15:29

@JarkkoL

Anyway, in your earlier post you said “rather than sRGB data interpreted linearly” which kind of indiceted you didn’t understand what was going on

Or that you were misinterpreting me. What DX9 does without the sampler state is interpret the sRGB textures as linear data. In that case you’ll need to convert the samples yourself using an inverse gamma curve. But if the hardware understands that the data in the texture is actually sRGB data (via the sampler state), it no longer interprets it as linear, and converts the samples using the inverse gamma ramp. That was the point I was trying to get across.

JarkkoL 102 Apr 08, 2010 at 15:37

People don’t talk about linear interperation of the color data when you read the sRGB data straight without any gamma correction, so please don’t use the term wrong and cause confusion ;)

_oisyn 101 Apr 08, 2010 at 15:51

Maybe you should buy an English dictionary then :D. Perhaps you’re confusing interpretation with interpolation? Given that you write “interperation”, whatever that might mean.

JarkkoL 102 Apr 08, 2010 at 16:14

Oh please ;)

Reedbeta 167 Apr 08, 2010 at 17:10

@JarkkoL

No you wouldn’t. You want the textures to be in linear space and just do the gamma correction as a post process for the output monitor in the very end.

Actually, given that we have only 8 bits per channel (even fewer with DXT), it’s a good idea to store textures in gamma space. If you store textures or your framebuffer in linear space with only 8 bits, you lose a good deal of precision in the darks, leading to banding once you convert the final image back to gamma space. But if you have higher precision (at least 11-12 bits per channel), then storing things in linear space is a great way to go.
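To put rough numbers on that (a quick sketch, assuming a plain 2.2 power curve rather than the exact sRGB one):

```python
# How many of the 256 8-bit codes land in the darkest tones
# (linear intensity < 0.1), for linear vs. gamma-2.2 storage.

codes = [i / 255 for i in range(256)]

linear_darks = sum(1 for c in codes if c < 0.1)          # stored linearly
gamma_darks = sum(1 for c in codes if c ** 2.2 < 0.1)    # stored gamma-encoded

print(linear_darks, gamma_darks)   # 26 vs 90: gamma storage spends far
                                   # more codes where the eye is sensitive
```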

Anyway, to clarify what I meant by the comment you quoted: I meant that you should store textures in gamma space, but decode them upon read (using the artist’s monitor gamma or sRGB, whatever you decided to store it in), then do all the lighting computations (in linear space), then re-encode them into the user’s monitor/TV gamma space at the end of the shader. Sorry for the unclear phrasing. :D
@.oisyn

Wouldn’t simply using actual sRGB textures (rather than sRGB data interpreted linearly) and an sRGB framebuffer do all the necessary conversions automatically? That’s where the sRGB texture formats are for.

Yep, and it’s great that the hardware has these conversions built in; it’s much faster than doing the math yourself in the pixel shader. (Some NVIDIA GPUs also have fast PS instructions available for gamma, for even more flexibility; I don’t know whether ATI GPUs do as well.)

JarkkoL 102 Apr 08, 2010 at 21:52

Yeah, you are right that it’s better to have the uneven precision of sRGB than to use linear space for 8-bit input textures.