I have a theoretical question about back-face culling.
A back-facing polygon is one where the angle between the normal for the
polygon’s front face and the viewing vector (towards the eye) is greater
than 90 degrees. This can be tested with the dot product of these two
vectors: its sign (positive or negative) tells us whether the face is
front- or back-facing.
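For concreteness, the dot-product test can be sketched like this (a minimal sketch; the vectors and values are illustrative, and the sign convention assumes the view vector points from the polygon towards the eye):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def is_back_facing(normal, to_eye):
    # With the view vector pointing from the polygon towards the eye,
    # a negative dot product means the front face points away from us.
    return dot(normal, to_eye) < 0

# A face whose front normal points along +z, seen by an eye on the +z side:
print(is_back_facing((0, 0, 1), (0, 0, 5)))   # front-facing -> False
print(is_back_facing((0, 0, -1), (0, 0, 5)))  # back-facing -> True
```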
The question is whether to use:
- the vector from the lookat point (centre of view) to the eye/camera,
- the vector from a point on the polygon to the eye/camera
In some situations the results are the same, but not in all cases, it seems.
I’ve read a few textbooks, with mixed results. Most talk about the
second option, but Hearn & Baker (3rd ed.) uses the first, and I’m not
sure it’s always right.
It might have something to do with whether the polygon is actually
visible or not. For example, a polygon directly in front of the viewer
is front-facing, but as it moves further to the side the angle using the
second vector changes; perhaps by the time the dot product changes sign,
the polygon is already outside of the field of vision… maybe.
Does anyone have any suggestions or knowledge they can add?
Ok, here is how to do it:
You have a view vector formed by this difference:
LookAtPoint - CameraPosition
You also have your polygon (positioned in the world, scaled and
rotated), and the transformed normal of this polygon.
To find whether this polygon is back-facing, take the dot product of the
view vector and the transformed polygon normal and check its sign. If it
is positive, the polygon faces away from the camera and is not visible!
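A minimal sketch of this approach, with made-up camera and lookat positions. Note the sign convention: with the view vector pointing from the camera into the scene and outward-pointing front normals, a positive dot product means the polygon faces away:

```python
def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Hypothetical scene values, just for illustration:
camera = (0.0, 0.0, 5.0)
look_at = (0.0, 0.0, 0.0)
view = sub(look_at, camera)        # points from the camera into the scene

normal = (0.0, 0.0, 1.0)           # front face pointing back at the camera
culled = dot(view, normal) > 0     # positive: facing away, so cull it
print(culled)  # False: this polygon faces the camera
```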
You have to take the vector from the polygon to the view point, since
you are trying to find the angle between the surface normal and that
vector. If you take the vector from the eye to the surface you get
another angle. Try drawing the two vectors originating from the same
point: you can easily see which vector gives you the angle you want, and
that inverting one of them gives you a different angle.
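A tiny numerical illustration of this point (made-up positions): inverting the vector flips the sign of the dot product, and hence the angle you are measuring.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

normal = (0.0, 0.0, 1.0)
poly_point = (0.0, 0.0, 0.0)
eye = (2.0, 0.0, 1.0)

to_eye   = tuple(e - p for e, p in zip(eye, poly_point))  # polygon -> eye
from_eye = tuple(-c for c in to_eye)                      # eye -> polygon

print(dot(normal, to_eye))    # positive: front-facing
print(dot(normal, from_eye))  # negative: the inverted vector flips the sign
```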
Back-face culling can be done at different stages. You can do it in
object space, by transforming the camera position into that space. For
every polygon, take the vector from one of the polygon’s vertices to the
camera position, and compute the dot product with the polygon’s normal.
The sign determines which way the polygon is facing. This approach has
great performance advantages if you can precompute the normals of the
model (i.e. when they are static). The only cost is the dot product.
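A sketch of the object-space approach, assuming for simplicity that the model transform is a pure translation (with rotation or scale you would apply the full inverse model matrix to the camera position); all values are illustrative:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

# Model is only translated here, so the inverse transform is a subtraction.
model_translation = (10.0, 0.0, 0.0)
camera_world = (10.0, 0.0, 5.0)
camera_object = sub(camera_world, model_translation)  # camera in object space

# Precomputed object-space data for one polygon:
vertex = (0.0, 0.0, 0.0)
normal = (0.0, 0.0, 1.0)

to_camera = sub(camera_object, vertex)
back_facing = dot(to_camera, normal) < 0
print(back_facing)  # False: this polygon faces the camera
```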
You can also do it in camera space. The method is the same, but now the
normal has to be recomputed with a cross product. If the model is static
we could also transform the normal from model space to camera space, but
this actually takes longer.
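A sketch of the camera-space variant, where the normal is recomputed from two triangle edges with a cross product; one convenience of camera space is that the eye sits at the origin (vertex values are made up):

```python
def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    # Cross product of two 3D vectors.
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# Camera-space triangle vertices (illustrative values):
v0, v1, v2 = (0.0, 0.0, -5.0), (1.0, 0.0, -5.0), (0.0, 1.0, -5.0)
normal = cross(sub(v1, v0), sub(v2, v0))  # recomputed each frame

# In camera space the eye is at the origin, so the vector from the
# polygon to the eye is simply the negated vertex position.
to_eye = tuple(-c for c in v0)
print(dot(normal, to_eye) > 0)  # True: front-facing
```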
The last approach is in screen space. Here you just need to know whether
the polygon’s normal is facing towards the screen or not, so only the
z-component of the cross product is required. That’s fast, but it
requires all vertices to be transformed to screen space. In many
implementations this is done anyway, to avoid complexity in the vertex
pipeline and the vertex cache. I currently use this method myself, and
have seen no significant performance advantage from doing the culling at
an earlier stage.
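The screen-space test reduces to the winding order of the projected triangle. A minimal sketch (which sign counts as front-facing depends on your winding convention and on whether the screen y-axis points up or down):

```python
def signed_area_z(v0, v1, v2):
    # z-component of the cross product of the two screen-space edges;
    # its sign gives the winding of the projected triangle.
    return ((v1[0] - v0[0]) * (v2[1] - v0[1])
            - (v1[1] - v0[1]) * (v2[0] - v0[0]))

# Screen-space (x, y) vertices:
ccw = [(0, 0), (4, 0), (0, 3)]  # counter-clockwise winding
cw  = [(0, 0), (0, 3), (4, 0)]  # same triangle, clockwise winding
print(signed_area_z(*ccw) > 0)  # True
print(signed_area_z(*cw) > 0)   # False: opposite winding, so culled
```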
Thanks for the answers so far guys.
I understand that the sign of the dot product of the front-face polygon
normal and the viewing vector gives me whether or not it is
front-facing. Whether I use the vector to the camera or from the camera
only dictates whether positive is front or back.
The question is whether to use the vector between the camera (eye) and
the lookat point or between the camera and a point on the polygon.
It is clear to me that we get different answers for these options,
because if the polygon were behind the camera and facing it, it would be
front-facing with the latter of the two options above, and back-facing
using the former. Is it true that if we do it after clipping then we
don’t need to worry about this difference? What if we have a really wide
field of view like 180 degrees and the polygon is right at the edge of
the view? It seems we could get different answers again.
Nick, you talked about what space you do it in, does that affect which
view vector we use?
As far as I know, back-face culling should always be computed using a
vector from a point on the polygon to the eye. (Be careful that you do
not use a vector from the eye to the polygon, as this will give the
opposite sign.)
Using the vector from the lookat point to the camera would be correct
only if the lookat point happens to lie in the same plane as the
polygon.
You can do this in any space you want, so long as you make sure all the
relevant vectors (point on polygon, eye, normal) are in the same space.
There is no mathematical reason why one space would be preferred over
others, but as Nick described, there is usually one space in which the
culling is most efficient.
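To see that the two options in the original question really can disagree, here is a small made-up example: a polygon far off to the side, tested once with the per-polygon vector and once with the lookat-based vector.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

camera = (0.0, 0.0, 5.0)
look_at = (0.0, 0.0, 0.0)

point  = (20.0, 0.0, 0.0)   # polygon far off to the side of the view axis
normal = (1.0, 0.0, 0.1)    # front face pointing mostly along +x

per_polygon = dot(normal, sub(camera, point))    # option 2: polygon -> eye
from_lookat = dot(normal, sub(camera, look_at))  # option 1: lookat -> eye

print(per_polygon > 0)  # False: back-facing by the per-polygon test
print(from_lookat > 0)  # True: the lookat-based test disagrees
```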