Deferred shading in view space with light volumes
Posted 06 December 2011 - 02:13 AM
I've been searching for days on how to properly do deferred shading in view space using light volumes, but most of the results I'm finding either use fullscreen quads or are done in world space. The ones I do find seem focused on reconstructing the fragment position, which I know is important, but I really need to learn what else there is to it.
Does anyone know of any tutorials or blogs or personal advice on learning this particular flavor of deferred shading?
(I'm using OpenGL, if it matters, but I imagine most of my learning will be concepts and math.)
Posted 06 December 2011 - 02:53 AM
Can you be more specific about what it is you're having trouble doing? If you can do deferred shading in view space using a full-screen quad, then it should be trivial to use light volumes instead; you're just restricting it to only process pixels within the volume of the light in screen space, rather than processing the whole screen for every light.
Posted 06 December 2011 - 03:11 PM
I think my bigger problem is that I'm having trouble understanding some of the math involved in this type of deferred shading. My first attempt was in clip space (post-projection screen space), which made sense: each pixel I was sampling was an actual pixel on the screen. But, due to the nature of projection, my lights came out distorted to the shape of the window.
Now I'm trying to redo it in view space (camera space, before projection), but I'm having trouble imagining how that relates to the final pixels on the screen. In the first (geometry) pass, where I render the scene geometry, I transform the geometry as normal into projection space and write the results to textures. But then in the lighting pass, we take that pixel data, which sits on a window-sized g-buffer texture according to the projection matrix, and try to perform the lighting calculations in view space instead.
I know some of the data written to the g-buffer textures is calculated in view space instead of projection space, but what is the coordinate range of view space? If I calculate that the fragment is positioned at -0.12,0.34 in view space, where is that on the projected window?
I can understand why the calculations need to be done in view space (to eliminate the projection distortion), but I'm having trouble visualizing the result and what it means.
Posted 06 December 2011 - 05:48 PM
To put it another way, view space is before the projection matrix, and clip space is after the projection matrix (but before the division by w). What you were using before is likely not clip space but screen space (after the division by w). When you work with view space, it's still true that "each pixel I was sampling was an actual pixel on the screen." That doesn't change. But for each pixel, you have to back out the projection math (the projection matrix and the divide by w) to get back to view space.
The way you work out view space coordinates for a pixel is almost exactly the same as the way you work out world coordinates for a pixel. It's just that you won't include the view-to-world matrix you'd normally have to use to get all the way back to world space; you'll just use the inverse projection matrix.
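To make the "inverse projection matrix" step concrete, here's a small sketch of just the math in Python rather than shader code (assuming a standard OpenGL-style perspective matrix; the function and variable names are mine, not from the article). It projects a view-space point to NDC, then reconstructs it the way a VSPositionFromDepth-style routine would:

```python
import math

def perspective(fovy_deg, aspect, near, far):
    """Standard OpenGL-style perspective matrix (column vectors)."""
    f = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def inverse_perspective(fovy_deg, aspect, near, far):
    """Analytic inverse of the matrix above (its sparsity means no
    general 4x4 inverse is needed)."""
    p = perspective(fovy_deg, aspect, near, far)
    a, b, c, d = p[0][0], p[1][1], p[2][2], p[2][3]
    return [
        [1.0 / a, 0.0, 0.0, 0.0],
        [0.0, 1.0 / b, 0.0, 0.0],
        [0.0, 0.0, 0.0, -1.0],
        [0.0, 0.0, 1.0 / d, c / d],
    ]

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def view_from_ndc(ndc, inv_proj):
    """The core of the reconstruction: take the NDC position (x, y from
    the pixel, z from the depth buffer), multiply by the inverse
    projection matrix, then divide by the resulting w."""
    v = mat_vec(inv_proj, [ndc[0], ndc[1], ndc[2], 1.0])
    return [v[0] / v[3], v[1] / v[3], v[2] / v[3]]

# Round trip: project a view-space point forward, then reconstruct it.
proj = perspective(60.0, 16.0 / 9.0, 0.1, 100.0)
inv_proj = inverse_perspective(60.0, 16.0 / 9.0, 0.1, 100.0)

view_pos = [1.0, 2.0, -5.0]                    # in front of the camera (-z)
clip = mat_vec(proj, view_pos + [1.0])
ndc = [clip[i] / clip[3] for i in range(3)]    # perspective divide

reconstructed = view_from_ndc(ndc, inv_proj)   # back to view space
```

In a real shader the matrix inverse would be computed once on the CPU and passed in as a uniform; the per-pixel work is just the multiply and the divide by w.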
Here is an article that shows how to do it (in the VSPositionFromDepth routine), although his implementation isn't as optimized as it could be.
Posted 07 December 2011 - 12:25 AM
Your description of view space coordinates makes sense. I suppose the lack of a fixed range of coordinates is why I see so many articles talking about calculating frustum corners and using them to extrapolate a view space position from a screen position? The article you reference is actually one of the ones I have been reading (and re-reading...) in an attempt to understand this. The VSPositionFromDepth function itself makes sense; it was just the concept of taking that view space position and doing something useful with it that I've been having a hard time with.
In addition to that, because I'm using light volumes rather than fullscreen quads, I've seen hints at the possibility of using the light volume position in view space to calculate the screen position, rather than doing the full screen frustum corners method. If you would, take a look at this reply on the forum topic that predated the article you linked. Does the method he describes, for determining position from the "bounding volume", make sense to you? I've been puzzling over it for a while but don't know enough about how this works. If there is a way to use the light volume I already have, rather than doing a fullscreen extrapolation for each pixel, it seems like it would be ideal...
Posted 07 December 2011 - 07:29 PM
The other way to do this is to calculate the screen position yourself by basically duplicating the hardware logic. You'd calculate clip space position in the vertex shader and output it to a texture coordinate as well as to the output position. In the pixel shader you'd read that position out of the texture coordinates and do the divide by W, then scale/bias it to get from screen space to UV space. There are a couple of optimizations that should be done, but that's the basic idea.
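As a sketch of just that divide-and-scale/bias step, again in plain Python rather than shader code (the y orientation is an assumption; depending on your API and texture origin you may need to flip v):

```python
def clip_to_uv(clip):
    """Duplicate what the hardware does after the vertex shader:
    perspective divide (clip -> NDC), then scale/bias NDC's [-1, 1]
    range into the [0, 1] range used for texture lookups."""
    x, y, z, w = clip
    ndc_x, ndc_y = x / w, y / w   # divide by w
    u = ndc_x * 0.5 + 0.5         # scale/bias to [0, 1]
    v = ndc_y * 0.5 + 0.5         # may be 1.0 - v depending on texture origin
    return (u, v)

# A clip-space position as interpolated into the pixel shader:
uv = clip_to_uv((2.0, -1.0, 0.5, 4.0))
```

Note that the divide by w has to happen per pixel, after interpolation; dividing in the vertex shader and interpolating the result would give the wrong answer, because screen-space interpolation of clip-space attributes is only correct before the divide.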
Once you have the screen position, you sample depth at that pixel and then use the inverse projection matrix to get back to view space. This can be optimized as well by moving part of the computation to the vertex shader, but you need to walk before you can run. I don't really see a compelling reason to use the bounding volume's own view space position as part of this calculation (as that post you linked to described). I can see how that might work in principle, but it seems far simpler to me to just do it using screen position with depth value sampled from the buffer (and no worse in performance once you've gotten the optimizations in place).