Posted 07 June 2007 - 04:41 AM
I tried to read stuff on shadow volumes and became quite confused,
so I decided to ask here; maybe I can be directed to some other material. ^^
When a shadow volume is created, the result is a set of depth values: every spot inside the shadow volume is shadowed. That is from the light's point of view (POV).
Then, in z-pass/z-fail, it is calculated again from the viewer's POV.
What confuses me is: why does it need to refer back to the viewer's POV?
Shouldn't it be enough to tell, from the light's POV, whether objects in the scene are shadowed or not by the shadow volume (silhouette)?
Thank you very much.
Posted 07 June 2007 - 06:15 AM
Find the shadow polygons that were lit and shadowed from the light's POV.
Is that not possible? :S
Posted 07 June 2007 - 09:05 AM
The term "shadow volume" usually describes the polygonal shadow object rendered into the stencil buffer. That is not done for shadow buffering, where the shadow volume would be defined by the z-buffer from the light's POV.
However, this sentence: "when the shadow volume is created, the result will be the depth value", indicates you are talking about shadow mapping.
Maybe this is where you are confused. Nothing is computed again; the shadow buffer is compared to the scene to see if the pixel being rendered is shadowed.
No. From the light's POV, nothing is shadowed. It is the shadow buffer, in combination with the scene rendered from the camera, that creates the shadows.
Posted 07 June 2007 - 10:54 AM
thanks geon.. :D
Actually I'm asking about shadow volumes. I guess I mixed all the terms up. Sorry, I read too many articles at once. :?
A fragment is shadowed when its stencil value is non-zero, right?
My questions are:
- Why is it not calculated from the light's POV, so that fragments facing the light get a stencil value of zero (and are lit), while the other fragments get a non-zero stencil value (and are shadowed)?
I hope this is not an annoying question, because I really don't get it.
- I'm thinking of a real-time simulation where a user can walk around the scene. If the shadow volume depends on the user's POV to determine the fragments' stencil values, then the computer needs to redo the shadow-volume passes every frame, no? That would be very heavy. :?
- Is it possible to record the stencil values, so that even after the shadow volume is deactivated, those values can still be used to render the scene?
(But the shadows would become wrong when the user moves. :? )
I think it would be cool if the shadow-generating process did not depend on the viewer's POV. :p
Posted 07 June 2007 - 02:46 PM
2. It's not usually that expensive; it's a matter of drawing the shadow volume geometry and updating the stencil buffer for each polygon drawn. (It can be expensive for some types of geometry, such as lights behind grates, however.)
3. No. The geometry would be split into different fragments as you move around (think of these fragments as small, one-pixel-sized pieces of the polygons the scene is split into; as you get closer to a polygon you need more fragments).
You can have a view-independent shadow-generating process: it's called precalculated lighting (via lightmaps or vertex lighting).
But as soon as the lights or the geometry in the scene move, your shadow information becomes wrong.
Posted 07 June 2007 - 04:18 PM
So you could say I was thinking about shadow volumes from a shadow-mapping perspective (like what geon said). I thought the shadow from a shadow volume could be stored and then mapped while the user navigates, and when the light moves, the shadow volume would be computed again and stored.
Looks like that's impossible, though. :( :S
And yeah, precalculated lighting means no light/geometry moves. :(
Posted 07 June 2007 - 05:12 PM
Are you saying the shadow-volume testing doesn't refer to the camera's view direction, but rather to the camera's position? So when the camera turns around, the shadow volume is not calculated again?
But when the camera moves (changes position), the shadow volume must go through the whole cycle again, no?
Posted 07 June 2007 - 10:02 PM
You'd create a vertex buffer for the triangles that make up the shadow volume boundaries, and rebuild it only when the light or the geometry moves. If the camera moves, you'd just re-render the same vertex buffer. Re-rendering is of course necessary, because the triangles won't be at the same screen locations any more after the camera moves.