Volumetric Lighting

Anudhyan 101 Feb 17, 2006 at 14:28

I want to implement volumetric lighting with a fragment shader in OpenGL,
but I have no idea how to do it. Please help.

:mellow:

19 Replies


Faelenor 101 Feb 17, 2006 at 15:26

Did you try Google?
If you search for “volumetric lighting”, you’ll find some interesting links.

Anudhyan 101 Feb 17, 2006 at 16:06

I just need the basic idea; the rest I will try to do on my own.
Can I do it with a transparent screen-like object in front of my viewport, or do I have to do advanced stuff such as computing a color for every point in the air?

Thanks for mentioning Google, but I didn’t find anything interesting.

Jynks 101 Feb 18, 2006 at 12:19

Have you tried using 3D cones with animated alpha channels instead of proper volumes?

Anudhyan 101 Feb 19, 2006 at 14:02

No.
I want proper, foggy, spherical volumes…
…like in the game Hitman 3.

Nautilus 103 Feb 19, 2006 at 16:51

Hi,
3D graphics is the art of obtaining the most while doing the least. Never forget that.

If it tricks the eye, then it’s good to go (no matter *how* the effect is achieved).
Do not be so sure that in H3 what you see is genuine volumetric lighting.

Ciao ciao :)

Anudhyan 101 Feb 21, 2006 at 12:05

I understand your point. Guess I got a little too impatient.
However, these spherical volumes I had in mind are something like this:

volumetriclightingII_large.jpg

Nautilus 103 Feb 21, 2006 at 13:40

I see. It sure is effective.
That is probably the work of a pixel shader.
But I’m positive it’s not real volumetric lighting (it would cost too much processing power).
It seems more like a diffuse glow effect, well positioned in 3D space (in fact the blackened column in the foreground, at the center of the scene, is not affected by the glow of the light in the background).
But I’m guessing here. It’s hard to tell what it is without seeing it in action and examining its behavior when some non-static object (like a human being) enters the radius covered by that glow or partially overlaps the light source.
And I never played H3 :p
I’m sure you can find some visual glitch within the behavior of that glow. Glitches often give you important clues about the nature of a special effect.
I don’t know the game, but (if you can), try using a 3rd person camera view. Zoom out with the camera as much as you can, and then step into the light with your avatar.
As we say here, find the ‘crystal’s flaw’ ;)

Ciao ciao :)

_oisyn 101 Feb 21, 2006 at 13:48

Add the depth values of front-facing light-volume polygons to a buffer, and subtract those of the back-facing polygons. What remains is the total length of the ray’s intersection with the volume. You can use these values to apply some sort of fogging technique.

Of course, you still need to handle the degenerate case where a front-facing polygon is visible but the back-facing one is behind other geometry, but this can be done by using min(current_z, z_in_depth_buffer) instead of the actual depth value.

Nautilus 103 Feb 21, 2006 at 15:14

.oisyn you are terrible!
Can you explain that again in more high-level language?
I am interested in what you said, but have trouble understanding it.

Thank you very much,
Ciao ciao :)

Anudhyan 101 Feb 22, 2006 at 03:27

I was thinking: if this kind of volumetric lighting is too costly to perform in real time, why not precalculate it and have a sort of volumetric lightmap?

void_ 101 Feb 22, 2006 at 09:13

The screenshot seems to be just a lightmapped room (precalculated lights) with some coronas applied to visible lights.

A corona can just be a billboard that is rendered over the light, giving the effect that it shines bright.
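The billboard trick void_ describes can be sketched on the CPU. This is a hypothetical illustration, not code from any actual engine: before drawing the corona billboard (additively, with the depth test disabled), compare the light’s depth against the scene depth sampled at the light’s screen position, and skip or fade the corona when the light is occluded. The function name and parameters are made up for the example.

```python
def corona_alpha(light_depth, scene_depth_at_light, fade=0.9):
    """Opacity of a corona billboard: 0.0 when scene geometry sits in
    front of the light source, otherwise a fixed brightness.  A real
    renderer would read scene_depth_at_light from the depth buffer."""
    if scene_depth_at_light < light_depth:
        return 0.0   # something occludes the light source: no corona
    return fade      # light visible: draw the billboard over everything

print(corona_alpha(light_depth=5.0, scene_depth_at_light=10.0))  # 0.9
print(corona_alpha(light_depth=5.0, scene_depth_at_light=2.0))   # 0.0
```

Fading the alpha over a few frames instead of switching it off instantly avoids popping when the light slips behind an edge.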

Anudhyan 101 Feb 22, 2006 at 09:20

Are these coronas applied by disabling the depth test (i.e. drawn on top of everything)?
If you look at the left yellow light, you’ll see that the ‘corona’ is occluding
the pillar next to it. But if you look at the right yellow light, the corona isn’t occluding the (dark) pillar. How is this done?

kusma 101 Feb 22, 2006 at 11:45

I’d rather just use additive cones, simply because they can look quite good and are easy for an artist to tweak…

_oisyn 101 Feb 22, 2006 at 12:48

@Nautilus

.oisyn you are terrible!
Can you explain that again in more high-level language?
I am interested in what you said, but have trouble understanding it.

I think there is a sample included in the DirectX 9 SDK that does exactly this.

For fogging, you need to know how far a ray from the eye travels through the fog, so you can calculate how much of the light is scattered and how much will pass through. Classical fogging does this by taking the z-values either at the vertices or at the pixels when rendering polygons, but this obviously doesn’t work with custom volumes. Fortunately, it isn’t that hard to calculate the total distance that a ray travels through fog at every pixel.

Suppose you want to solve this with raytracing. Think of a convex volume, like a sphere. Shoot the ray through the sphere, and calculate the intersection points where the ray enters and exits the sphere. Of course, a raytracer calculates the actual z-values: the distance a ray travels until it reaches a surface. So if you subtract the z-value where the ray enters the volume from the z-value where it exits it, you are left with the total distance (in the z direction) the ray travels through the fog.
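The raytracing version of this can be written in a few lines. A minimal sketch (plain Python, names invented for the example): intersect a ray with a sphere, and the distance through the fog is simply exit minus entry.

```python
import math

def fog_distance_through_sphere(origin, direction, center, radius):
    """Distance a ray travels inside a sphere (0.0 if it misses).
    direction is assumed normalized.  Solves the ray/sphere quadratic
    and returns t_exit - t_enter, clamping the entry to the ray origin."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * e for d, e in zip(direction, oc))
    c = sum(e * e for e in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc <= 0.0:
        return 0.0                      # ray misses (or just grazes) the volume
    sq = math.sqrt(disc)
    t_enter = (-b - sq) / 2.0
    t_exit = (-b + sq) / 2.0
    if t_exit <= 0.0:
        return 0.0                      # sphere lies entirely behind the ray
    return t_exit - max(t_enter, 0.0)

# A ray through the middle of a unit sphere travels its full diameter:
print(fog_distance_through_sphere((0, 0, -5), (0, 0, 1), (0, 0, 0), 1.0))  # 2.0
```

Feeding that distance into something like exp(-density * distance) gives the classic fog attenuation.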

Since you only need the values of where a ray enters and exits the volume, you can simply render the actual volume using the GPU to a depth buffer. All the polygons that face the camera are polygons where rays enter the volume. All polygons that face away from the camera are polygons where the rays exit the volume. So if you add the z-values of all front-facing polygons of a volume to a buffer, and subtract the z-values of all back-facing polygons from that buffer, you are left with the total distance a ray travels through the volume at every pixel. Note that this also works for concave volumes, as for every volume entry there is a corresponding volume exit.

Of course, a ray stops as soon as it hits actual geometry. If the geometry is between the camera and the fog volume, the volume polygons will get z-tested away. If the geometry is completely behind the volume, you’ll get fog-values as expected. But if a ray enters the volume and then hits geometry without exiting the volume first, your backfacing volume polygons will get z-tested away while front-facing polygons won’t, which leaves you with incorrect values in the buffer. This can be solved by taking the minimum of the current z-value of the pixel of the fog-polygon being rendered with the value at that pixel in the depth buffer.

Another problem is clipping against the near and far planes, which you obviously don’t want to happen. Far-plane clipping can be resolved using an infinite far plane; I’m not sure how you can solve the near-plane clipping problem.
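Per pixel, the whole accumulation (including the min() fix for geometry inside the volume) boils down to a few lines. A CPU simulation of the buffer trick described above, with invented names, purely for illustration:

```python
def fog_thickness_at_pixel(front_depths, back_depths, scene_depth):
    """Fog thickness at one pixel: sum the depths of back-facing volume
    polygons (ray exits), subtract those of front-facing ones (ray
    enters), clamping every sample with min(z, scene_depth) so a ray
    that hits geometry inside the volume is handled correctly."""
    total = 0.0
    for z in back_depths:                # rays exit the volume here
        total += min(z, scene_depth)
    for z in front_depths:               # rays enter the volume here
        total -= min(z, scene_depth)
    return total

# A fog volume spanning z = 4..6 with nothing in the way: thickness 2.
print(fog_thickness_at_pixel([4.0], [6.0], scene_depth=100.0))  # 2.0

# A wall at z = 5 inside the same volume: only the 4..5 span is foggy.
print(fog_thickness_at_pixel([4.0], [6.0], scene_depth=5.0))    # 1.0
```

On the GPU the loops become two additive-blend passes over the volume’s front and back faces into a float render target, with the min() done in the shader against the scene’s depth.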

Nautilus 103 Feb 22, 2006 at 14:01

Praise you, .oisyn. That was perfect.
You cleared my doubts completely :yes:

Thank you again,
Ciao ciao :)

Anudhyan 101 Feb 24, 2006 at 13:35

Thanks .oisyn
for explaining fogging.
Still sounds kind of expensive. Can all this be done in real time?
If I could only get my hands on some OpenGL source code… :happy:

geon 101 Feb 24, 2006 at 16:08

.oisyn: That’s volumetric fog, not light. However, it can be used to make volumetric light.

First, “simply” subtract the light’s shadow volume from the fog volume. This new fog volume will be used to calculate the light added by the fog. The original volume should be used only for the lost light.

This (I guess) would need multiple render targets that read from and write to each other before they are combined into the final image. (Much like depth peeling.) Would this even be possible with DX10?

_oisyn 101 Feb 24, 2006 at 16:42

@geon

.oisyn: That’s volumetric fog, not light. However, it can be used to make volumetric light.

You know, physically those are exactly the same. Of course, the effect you want to achieve depends on the post-processing filter you use. But fog means light scattering; light volumes are light volumes because there is fog (lots of tiny particles) around the light that scatters it in all directions. So it’s fogging either way. But what matters is the technique, not what name you give it.
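The equivalence is easy to see in formulas. Assuming a homogeneous medium and single scattering (a simplification; real media vary in density), the very same term that removes light along the ray is the one that adds the glow. A toy sketch with invented names:

```python
import math

def scatter(distance, sigma=0.5, light_color=(1.0, 0.9, 0.7)):
    """Single scattering in a homogeneous medium: the transmittance
    exp(-sigma * d) is the fraction of light that survives the trip
    ("fog" removes light), and (1 - exp(-sigma * d)) times the light's
    color is what gets scattered toward the eye ("light volume")."""
    t = math.exp(-sigma * distance)
    glow = tuple((1.0 - t) * c for c in light_color)
    return t, glow

t, glow = scatter(0.0)
print(t, glow)  # 1.0 (0.0, 0.0, 0.0): no fog traversed yet, no glow yet
```

Whether you call the result fog or a light shaft only depends on which of the two terms you composite into the frame.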

geon 101 Feb 24, 2006 at 17:53

@.oisyn

So it’s fogging either way. But what matters is the technique, not what name you give it.

You are right, of course. But to make volumetric light, I feel it is implied that the fog should be shadowed by any object between it and the light source. Or, to put it the other way around: objects should cast shadows onto the fog.

Like this.