Deferred shading in view space with light volumes

Nairou 101 Dec 06, 2011 at 02:13

Yeah, I know that narrows it down pretty far. I’ve been trying to learn how to do deferred shading. My last attempt was flawed, and I eventually learned I was trying to do it in clip space rather than view space.

I’ve been searching for days on how to properly do deferred shading, in view space, using light volumes, but most of the results I’m finding are either using fullscreen quads, or are done in world space. The ones I do find seem focused on reconstructing the fragment position, which I know is important, but I really need to learn what else there is to it.

Does anyone know of any tutorials or blogs or personal advice on learning this particular flavor of deferred shading?

(I’m using OpenGL, if it matters, but I imagine most of my learning will be concepts and math.)

5 Replies


Reedbeta 167 Dec 06, 2011 at 02:53

@Nairou

most of the results I’m finding are either using fullscreen quads, or are done in world space

Can you be more specific about what it is you’re having trouble doing? If you can do deferred shading in view space using a full-screen quad, then it should be trivial to use light volumes instead; you’re just restricting it to only process pixels within the volume of the light in screen space, rather than processing the whole screen for every light.

Nairou 101 Dec 06, 2011 at 15:11

Well, from what I understand, when you use light volumes you don’t necessarily have to do the frustum calculations needed to find the fullscreen quad coordinates, because you can just use the position of the light volume itself, transformed into view space. However, I don’t know the specifics of how you accomplish that.

I think my bigger problem is that I’m having trouble understanding some of the math involved in this type of deferred shading. My first attempt was in clip space (post-projection screen space), which made sense. Each pixel I was sampling was an actual pixel on the screen. But, due to the nature of the projection, my lights came out distorted to the shape of the window.

Now I’m trying to redo it in view space (camera space, before projection), but I’m having trouble imagining how that relates to the final pixels on the screen. In the first (geometry) pass, where I render the scene geometry, I transform the geometry as normal into projection space and then write the results to textures. But then in the lighting pass, we take that pixel data, which is laid out on a window-sized g-buffer texture according to the projection matrix, and try to perform the lighting calculations in view space instead.

I know some of the data written to the g-buffer textures is calculated in view space instead of projection space, but what is the coordinate range of view space? If I calculate that the fragment is positioned at -0.12,0.34 in view space, where is that on the projected window?

I can understand why the calculations need to be done in view space (to eliminate the projection distortion), but I’m having trouble visualizing the result and what it means.

Reedbeta 167 Dec 06, 2011 at 17:48

View space is just like world space except the origin is placed at the camera and the axes are oriented to align with the camera. So there is no specific range for the coordinates, except that visible points must be in front of the camera, i.e. z < 0 (assuming right-handed coordinates with +Z facing out of the screen).

To put it another way, view space is before the projection matrix and clip space is after the projection matrix (but before the division by w). What you were using before is likely not clip space but screen space (after the division by w). When you work with view space, you’re still going to have “Each pixel I was sampling was an actual pixel on the screen.” That doesn’t change. But for each pixel, you have to back out the projection math (both the division by w and the projection matrix) to get back to view space.
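To make your “-0.12, 0.34” example concrete: a view-space position only becomes a pixel position once you push it through the projection matrix and the perspective divide. Roughly, in GLSL (the uniform names here are just for illustration):

    // Illustration only: where a view-space point lands on the screen.
    uniform mat4 uProjection;  // same projection matrix used in the geometry pass
    uniform vec2 uScreenSize;  // viewport size in pixels

    vec2 windowCoordsFromViewPos(vec3 viewPos)
    {
        vec4 clipPos = uProjection * vec4(viewPos, 1.0); // view space -> clip space
        vec3 ndc     = clipPos.xyz / clipPos.w;          // perspective divide -> [-1, 1]
        vec2 uv      = ndc.xy * 0.5 + 0.5;               // -> [0, 1] across the screen
        return uv * uScreenSize;                         // -> pixel coordinates
    }

Reconstructing a view-space position from the G-buffer is exactly this process run in reverse.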

The way you work out view space coordinates for a pixel is almost exactly the same as the way you work out world coordinates for a pixel. It’s just that you won’t include the view-to-world matrix you’d normally have to use to get all the way back to world space; you’ll just use the inverse projection matrix.
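In GLSL that reconstruction might look something like this; a rough sketch with made-up uniform names, assuming you read the hardware depth buffer back as a texture:

    uniform sampler2D uDepthTex;      // depth buffer from the geometry pass
    uniform mat4      uInvProjection; // inverse of the projection matrix

    // uv is this pixel's position in [0, 1] texture coordinates.
    vec3 viewPositionFromDepth(vec2 uv)
    {
        float depth = texture(uDepthTex, uv).r;        // depth in [0, 1]
        vec3  ndc   = vec3(uv, depth) * 2.0 - 1.0;     // back to [-1, 1] NDC
        vec4  pos   = uInvProjection * vec4(ndc, 1.0); // undo the projection matrix
        return pos.xyz / pos.w;                        // undo the perspective divide -> view space
    }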

Here is an article that shows how to do it (in the VSPositionFromDepth routine), although his implementation isn’t as optimized as it could be.

Nairou 101 Dec 07, 2011 at 00:25

Thank you very much for your replies, and for the clarification. I was indeed using screen space in my previous attempt.

Your description of view space coordinates makes sense. I suppose the lack of a fixed range of coordinates is why I see so many articles talking about calculating frustum corners and using them to reconstruct a view-space position for each screen pixel? The article you reference is actually one of the ones I have been reading (and re-reading…) in an attempt to understand this. The VSPositionFromDepth function itself makes sense; it was just the concept of taking that view-space position and doing something useful with it that I’ve been having a hard time with.

In addition to that, because I’m using light volumes rather than fullscreen quads, I’ve seen hints at the possibility of using the light volume position in view space to calculate the screen position, rather than doing the fullscreen frustum-corners method. If you would, take a look at this reply on the forum topic that predated the article you linked. Does the method he describes, for determining position from the “bounding volume”, make sense to you? I’ve been puzzling over it for a while but don’t know enough about how this works. If there is a way to use the light volume I already have, rather than doing a fullscreen extrapolation for each pixel, it seems like it would be ideal…

Reedbeta 167 Dec 07, 2011 at 19:29

Well, an integral part of using light volumes is being able to figure out which pixel on screen you’re at so you can sample the G-buffer at the appropriate point. There are a couple of ways to do this. On certain APIs/profiles there is a pixel shader semantic that will give you the screen position directly; in DX9 it’s called VPOS, in DX10+ it’s called SV_Position, and in other APIs it’s probably called something else. If available, this is the easiest/fastest way to do it, since it’s generated directly by the hardware and you don’t have to mess around with calculating screen positions yourself. You might find you have to flip the Y axis to make it align with UV space or something, but that’s easy to solve.
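Since you’re on OpenGL: the GLSL equivalent is gl_FragCoord, which gives you the fragment’s window position directly. Something along these lines (uScreenSize is an illustrative uniform, not anything standard):

    uniform vec2 uScreenSize; // viewport size in pixels

    void main()
    {
        // gl_FragCoord.xy is the pixel center in window coordinates
        // (origin at the lower-left in OpenGL), so dividing by the
        // viewport size gives [0, 1] UVs for sampling the G-buffer.
        vec2 uv = gl_FragCoord.xy / uScreenSize;
        // ... sample the G-buffer at uv and do the lighting ...
    }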

The other way to do this is to calculate the screen position yourself by basically duplicating the hardware logic. You’d calculate clip space position in the vertex shader and output it to a texture coordinate as well as to the output position. In the pixel shader you’d read that position out of the texture coordinates and do the divide by W, then scale/bias it to get from screen space to UV space. There are a couple of optimizations that should be done, but that’s the basic idea.
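Sketched out in GLSL, that second approach looks roughly like this (names are placeholders, and the optimizations are left out):

    // Vertex shader for the light volume mesh.
    #version 330 core
    layout(location = 0) in vec3 aPosition;
    uniform mat4 uModelViewProjection;
    out vec4 vClipPos;

    void main()
    {
        vClipPos    = uModelViewProjection * vec4(aPosition, 1.0);
        gl_Position = vClipPos;
    }

    // Fragment shader: recover the G-buffer UV from the interpolated clip position.
    #version 330 core
    in vec4 vClipPos;

    void main()
    {
        vec2 ndc = vClipPos.xy / vClipPos.w; // divide by w -> [-1, 1]
        vec2 uv  = ndc * 0.5 + 0.5;          // scale/bias -> [0, 1]
        // ... sample the G-buffer at uv ...
    }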

Once you have the screen position, you sample depth at that pixel and then use the inverse projection matrix to get back to view space. This can be optimized as well by moving part of the computation to the vertex shader, but you need to walk before you can run. :) I don’t really see a compelling reason to use the bounding volume’s own view space position as part of this calculation (as that post you linked to described). I can see how that might work in principle, but it seems far simpler to me to just do it using screen position with depth value sampled from the buffer (and no worse in performance once you’ve gotten the optimizations in place).
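For completeness, here is a rough sketch of what a whole point-light volume fragment shader might look like in GLSL. The G-buffer layout, the uniform names, and the simple Lambert-plus-attenuation lighting are all my own assumptions, just to show where the reconstructed view-space position gets used:

    #version 330 core
    uniform sampler2D uDepthTex;      // depth from the geometry pass
    uniform sampler2D uNormalTex;     // view-space normals, packed into [0, 1]
    uniform sampler2D uAlbedoTex;     // diffuse albedo
    uniform mat4      uInvProjection; // inverse of the projection matrix
    uniform vec2      uScreenSize;    // viewport size in pixels
    uniform vec3      uLightPosView;  // light position already transformed to view space
    uniform vec3      uLightColor;
    uniform float     uLightRadius;

    out vec4 oColor;

    void main()
    {
        vec2 uv = gl_FragCoord.xy / uScreenSize;

        // Reconstruct the view-space position of whatever the G-buffer holds at this pixel.
        float depth   = texture(uDepthTex, uv).r;
        vec3  ndc     = vec3(uv, depth) * 2.0 - 1.0;
        vec4  unproj  = uInvProjection * vec4(ndc, 1.0);
        vec3  viewPos = unproj.xyz / unproj.w;

        vec3 normal = normalize(texture(uNormalTex, uv).xyz * 2.0 - 1.0);
        vec3 albedo = texture(uAlbedoTex, uv).rgb;

        // Simple point light, with everything in view space.
        vec3  toLight  = uLightPosView - viewPos;
        float dist     = length(toLight);
        vec3  lightDir = toLight / dist;
        float atten    = clamp(1.0 - dist / uLightRadius, 0.0, 1.0);
        float ndotl    = max(dot(normal, lightDir), 0.0);

        // This gets accumulated additively into the lighting buffer
        // (additive blending is set up on the application side).
        oColor = vec4(albedo * uLightColor * ndotl * atten, 1.0);
    }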