Calculating normals of a mesh

bladder 101 Sep 02, 2004 at 07:43

Just thought I’d post something small.

Given a list of vertices and a list of indices..

// Some standard mesh information that you should have lying around.
// vertex is your vertex structure, which here just contains a position and a normal.
// vertices is a pointer to the first vertex
// indices is a pointer to the first index
// num_vertices is the number of vertices
// num_indices is the number of indices
// each face of the mesh is made up of three vertices.

std::vector<Vector3>* normal_buffer = new std::vector<Vector3>[num_vertices];

for( int i = 0; i < num_indices; i += 3 )
{
  // get the three vertices that make the faces
  Vector3 p1 = vertices[indices[i+0]];
  Vector3 p2 = vertices[indices[i+1]];
  Vector3 p3 = vertices[indices[i+2]];

  Vector3 v1 = p2 - p1;
  Vector3 v2 = p3 - p1;
  Vector3 normal = v1.Cross( v2 );

  normal.Normalize();

  // Store the face's normal for each of the vertices that make up the face.
  normal_buffer[indices[i+0]].push_back( normal );
  normal_buffer[indices[i+1]].push_back( normal );
  normal_buffer[indices[i+2]].push_back( normal );
}


// Now loop through each vertex's vector and average out all the normals stored.
for( int i = 0; i < num_vertices; ++i )
{
  for( size_t j = 0; j < normal_buffer[i].size(); ++j )
    vertices[i].normal += normal_buffer[i][j];

  vertices[i].normal /= (float)normal_buffer[i].size();
}

delete[] normal_buffer;

22 Replies


SigKILL 101 Sep 02, 2004 at 11:36

I’m new here but I remember

http://www.devmaster.net/forums/index.php?showtopic=414

It’s a very nice way to calculate vertex normals fast, though. However, the idea that vertex normals should be the average of the connecting triangle normals is purely speculative. There is a paper somewhere that compares different methods for computing vertex normals (www.google.com and citeseer are our friends), but IIRC they didn’t find any “best” solution; I think averaged normals were good in general (it might have been a weighted average?).

Anyways, to get a weighted average based on triangle areas, simply don’t normalize the triangle normals before adding them (and then continue as bladder does)…
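For illustration, a minimal sketch of that variant against bladder’s face loop above (untested; it assumes the same Vector3, vertices, indices and normal_buffer names as in the first post):

// Area-weighted variant of the face loop: skip the per-face Normalize().
// The length of the cross product is twice the triangle's area, so larger
// triangles automatically contribute more to the stored normals.
for( int i = 0; i < num_indices; i += 3 )
{
  Vector3 p1 = vertices[indices[i+0]];
  Vector3 p2 = vertices[indices[i+1]];
  Vector3 p3 = vertices[indices[i+2]];

  Vector3 v1 = p2 - p1;
  Vector3 v2 = p3 - p1;
  Vector3 normal = v1.Cross( v2 );   // note: no Normalize() here

  normal_buffer[indices[i+0]].push_back( normal );
  normal_buffer[indices[i+1]].push_back( normal );
  normal_buffer[indices[i+2]].push_back( normal );
}
// ...then continue with the per-vertex loop from the first post.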

just my 0.02kr.

-Si

SimmerD 101 Oct 27, 2004 at 04:14

There seems to be a bug in this example, as well as an optional optimization.

1) The code averages the normals, but doesn’t renormalize (this is the bug).

2) There is no need to average the normals: just sum them up, and then instead of dividing by the # of normals, simply renormalize.

This way you avoid doing two divides (one by the # of normals and another by the length when renormalizing); the single renormalize at the end is enough.
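A minimal sketch of that replacement loop (untested; it reuses vertices, normal_buffer and num_vertices from the first post and assumes Vector3 has an (x, y, z) constructor and the Normalize() member used above):

// Sum the stored face normals for each vertex, then renormalize once.
for( int i = 0; i < num_vertices; ++i )
{
  Vector3 sum( 0, 0, 0 );
  for( size_t j = 0; j < normal_buffer[i].size(); ++j )
    sum += normal_buffer[i][j];

  sum.Normalize();              // replaces the divide by the count
  vertices[i].normal = sum;
}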

ALESIA 101 Mar 17, 2010 at 05:13

I was also trying this and it seems it has a bug. If anyone can help me fix it, please let me know. I’ve been trying but still no luck. :)

_oisyn 101 Mar 17, 2010 at 13:34

@SigKILL

It’s a very nice way to calculate vertex normals fast, though. However, the idea that vertex normals should be the average of the connecting triangle normals is purely speculative. There is a paper somewhere that compares different methods for computing vertex normals (www.google.com and citeseer are our friends), but IIRC they didn’t find any “best” solution; I think averaged normals were good in general (it might have been a weighted average?).

As said, averaging (which is merely a scale on the normal) makes no sense since the normalization will make them unit length anyway. But I don’t think merely normalizing them is the best solution. In the simple case of a cube made out of triangles, you won’t get normals pointing out of the verts the right way (if a vert connects two triangles from one side while it only connects one triangle for each other side, the result is skewed in the direction of the side with the two triangles).

Which brings us to the next point:

Anyways, to get a weighted average based on triangle areas, simply don’t normalize the triangle normals before adding them (and then continue as bladder does)…

Triangle area makes no sense either. Imagine the cube again. Stretching the cube along one of the main axes means that the side triangles running along that axis get longer and their area increases. Yet the normals shouldn’t change.

My money is on a weighted average based on the angle between the two outgoing edges of the triangle. This way, extra tessellation will yield equivalent results, and elongating triangles doesn’t change the vertex normals either.

poita 101 Mar 17, 2010 at 15:13

So you’re thinking of something like this, oisyn?

std::vector<Vector3> normals(num_vertices, Vector3(0,0,0));

// Walk the index list one triangle (three indices) at a time.
for (std::vector<int>::const_iterator i = indices.begin(); i != indices.end(); std::advance(i, 3))
{
  Vector3 v[3] = { vertices[*i], vertices[*(i+1)], vertices[*(i+2)] };
  Vector3 normal = Vector3::cross(v[1] - v[0], v[2] - v[0]);

  // Weight the face normal by the opening angle at each of the three corners.
  for (int j = 0; j < 3; ++j)
  {
    Vector3 a = v[(j+1) % 3] - v[j];
    Vector3 b = v[(j+2) % 3] - v[j];
    float weight = acos(Vector3::dot(a, b) / (a.length() * b.length()));
    normals[*(i+j)] += weight * normal;
  }
}

std::for_each(normals.begin(), normals.end(), std::mem_fun_ref(&Vector3::normalize));

(Untested :P)

_oisyn 101 Mar 17, 2010 at 21:25

Yes that seems about right :)

Reedbeta 168 Mar 17, 2010 at 22:11

You probably want to project the a and b vectors onto the plane perpendicular to the vertex normal (the tangent plane) before you compute angles between them. Otherwise your angles won’t always add up to 360 degrees (think of a cube; at each vertex there are three 90-degree angles, for a total of 270 degrees).

JarkkoL 102 Mar 17, 2010 at 22:39

It doesn’t matter what they add up to since you normalize the normal in the end + you don’t know the vertex normal anyway since that’s what you are calculating (: Personally I weight the vertex normal by the product of opening angle & triangle area, IIRC.
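A rough sketch of how that product weight could look (just my reading of it, not necessarily what any modelling package actually does), written against poita’s loop above:

// Hypothetical angle * area weighting for the per-corner loop in poita's
// sketch above; 'normal' is the unnormalized cross product, so its length
// is twice the triangle's area.
float angle = acos(Vector3::dot(a, b) / (a.length() * b.length()));
float area  = 0.5f * normal.length();
normals[*(i+j)] += (angle * area / normal.length()) * normal;   // unit normal scaled by angle * area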

Reedbeta 168 Mar 18, 2010 at 00:15

Ah, right, good point. It still seems like you could run into trouble with certain kinds of corners getting wrong weights, though I can’t be arsed to make up an example just now…and I guess you could use some iterative process to estimate the normal, then estimate the weights based on projecting on that tangent plane, then re-estimate the normal etc…probably way too much work for some stupid normals though. :)

JarkkoL 102 Mar 18, 2010 at 01:00

Yeah, probably a bit heavy for that (: The correct way would be to use the method the modelling software uses for the normal calculation, whatever (and regardless of how wrong) it is. I vaguely recall reading somewhere that for 3ds it was the opening angle & area, but I might remember wrong. I just use the same method for all formats, which seems to work ok.

_oisyn 101 Mar 18, 2010 at 18:27

@JarkkoL

It doesn’t matter what they add up to since you normalize the normal in the end + you don’t know the vertex normal anyway since that’s what you are calculating (: Personally I weight the vertex normal by the product of opening angle & triangle area, IIRC.

But why the area? Why does a very long triangle have more influence on the vertex normal than a short one? That means that if you have a box of measurements 1x1x1000, the normals are almost perpendicular to the long axis of the box.

JarkkoL 102 Mar 18, 2010 at 18:48

I guess because that normal will have lighting influence over that entire triangle, so it kind of makes sense that larger triangles influence the normal more. Otherwise even small changes to that tiny triangle attached to the vertex will totally change the lighting on the big triangle. Note that the box is a very bad example to refer to, since it doesn’t make sense to smooth across the box faces anyway.

Edit: Just some food for thought, you could think of beveling a cube and how it should influence the lighting: You would still expect the big cube faces to retain the lighting pretty much as it is but the lighting on the beveled edges to be “rounded”. Taking the area into account achieves this effect.

poita 101 Mar 18, 2010 at 22:54

But it only achieves that effect if those faces aren’t subdivided…

I’m of the opinion that subdividing (non-smoothing) should have little to no impact on the normals.

I don’t think that the bevelled cube is a good argument for using triangle area. The argument for using area only holds if it just so happens that the large triangles in your mesh happen to desire more normal weight, which is true for the bevelled cube, but doesn’t hold in general.

You are right in saying that the box is a bad example, but the box can only have correct normals if you duplicate vertices, so the question becomes: what makes sense if you don’t have duplicate vertices? I think everyone would agree that, whatever method you use, the normals of a box should be symmetrical – but using area doesn’t achieve that. And as oisyn said, why should stretching the box change the normals at a corner?

JarkkoL 102 Mar 18, 2010 at 23:39

If the box is subdivided, then you don’t need to achieve the effect ;) The bevelled cube is a perfect example of why you should take the area into account, because it’s a straightforward real-world technique artists use on models to make the lighting look better. If you didn’t take the area into account you would have to add extra edges on the polys next to the bevelled edge to get the expected result.

It seems to me that you are confused about what vertex smoothing tries to achieve. It tries to approximate a curved surface with lighting. Now, it doesn’t make sense to approximate a box as a curved surface since it’s not a curved surface, unless you are being cheap in modelling a sphere (: That’s why artists have options to define where the smoothing happens with smoothing groups/angles.

_oisyn 101 Mar 19, 2010 at 10:42

@JarkkoL

If the box is subdivided, then you don’t need to achieve the effect :) The bevelled cube is a perfect example of why you should take the area into account, because it’s a straightforward real-world technique artists use on models to make the lighting look better. If you didn’t take the area into account you would have to add extra edges on the polys next to the bevelled edge to get the expected result.

You’ll need that anyway, even if you take triangle area into account. Basically you want the vertex normals of the side polygons to be exactly the same as the face normal.

It seems to me that you are confused about what vertex smoothing tries to achieve. It tries to approximate a curved surface with lighting. Now, it doesn’t make sense to approximate a box as a curved surface since it’s not a curved surface, unless you are being cheap in modelling a sphere (: That’s why artists have options to define where the smoothing happens with smoothing groups/angles.

Suppose you have an approximation of a sphere. Now imagine that you take a small group of polys and replace them with a single larger one. The vertex normals shouldn’t change, it’s still representing the same sphere. However, not only does the larger poly have a normal that is further away from the original vertex normal, it also contributes more because you take poly area into account. So, in a way, for equal results, you should be using a weight that is inversely proportional to area.

The bevel of a box is indeed a good example. It shows that as an artist adds more detail, the normals of that detail should be more important, since it more closely approximates the reality. Thus again arguing for a normal weight that is inversely proportional to area.

JarkkoL 102 Mar 19, 2010 at 13:30

@.oisyn

You’ll need that anyway, even if you take triangle area into account. Basically you want the vertex normals of the side polygons to be exactly the same as the face normal.

Ideally yes, but using the triangle area is a good approximation of it, IME much better than not taking the area into account.

@.oisyn

Suppose you have an approximation of a sphere. Now imagine that you take a small group of polys and replace them with a single larger one. The vertex normals shouldn’t change, it’s still representing the same sphere.

That example doesn’t quite support your argument. If you flatten part of a sphere by replacing a bunch of polys with a larger one, it could just as well (and more likely) mean that you want that part of the sphere to be flatter. After all, you have taken the group of polys, made them lie on the same plane, and then replaced those polys with a single one ;)

@.oisyn

The bevel of a box is indeed a good example. It shows that as an artist adds more detail, the normals of that detail should be more important, since it more closely approximates the reality.

I don’t know how you ended up with that conclusion (: Anyway, here is an example of both methods on a bevelled cube. If you think the right cube is somehow a better and closer “approximation of the reality”, then I rest my case ;)

[attached image: vtx_smoothing.png]

poita 101 Mar 20, 2010 at 09:24

JarkkoL, it’s easy to engineer an example that demonstrates your case (as oisyn has for his). The question is which one works better in general.

JarkkoL 102 Mar 20, 2010 at 11:35

In case you haven’t noticed, neither oisyn nor you have “engineered” a single example that supports your case, and the above example isn’t really “engineered” but a very simple real-world scenario, easy enough for anyone to reproduce, that supports what I’m saying. If I wanted to throw an “engineered” example at you I would just pick some technically horribly modelled object and claim that’s the proper and normal way for artists to create models, but what would we gain from that other than hoping it would “win” me some silly argument? ;) I’m sure we can all agree that the bevelled cube isn’t technically horribly modelled, right?

So, what makes you think that the other method is better, other than that you just happen to think so? That’s all I’m hearing from you. Is it because you believe that the method you prefer happens to be widely used, and thus conclude that it must be the better way to go? Or do you have something up your sleeve, which you should, since you say it’s easy to engineer such an example? If so, then bring it on.

And just for the record, I’m truly interested in hearing real world counter examples. This is such a fundamental thing in getting the lighting right on objects that I like to get it done well myself. And yes, I could provide two different options, but if one method is clearly superior then I wouldn’t like to make things more complicated for nothing by giving the option.

_oisyn 101 Mar 20, 2010 at 20:13

I was merely looking at it from a mathematical point of view.

@JarkkoL

And just for the record, I’m truly interested in hearing real world counter examples.

The real world mostly just uses the normals produced by modeling applications, and artists are able to tweak them :)

poita 101 Mar 20, 2010 at 21:50

The elongated cube was the first counter example given: the normals get incorrectly skewed toward the longer sides, even though the local geometry stays the same (it’s still a cube after all). Yes, the box is a bad example, but this applies to all non-uniform scaling of meshes, the box is just a simple illustration.

Another example is just with any mesh that is more subdivided in one area than another – the more subdivided the mesh becomes, the less those triangles influence the normal, which will result in very odd normals on the boundary of lowly/highly subdivided surface regions.

JarkkoL 102 Mar 20, 2010 at 23:24

@.oisyn

The real world mostly just uses the normals produced by modeling applications, and artists are able to tweak them ;)

I wish you could rely on explicit normals. Not all 3d file formats support explicit normals, even fewer modelling programs allow you to tweak the normals, even fewer of those export the normals properly, and even if you did get the normals through the entire pipeline you would still have to run vertex smoothing to get the other two vectors for the tangent space. Tweaking normals manually is also a very daunting task for an artist, and I don’t really take pleasure in making artists’ lives miserable d:

@poita

The elongated cube was the first counter example given: the normals get incorrectly skewed toward the longer sides, even though the local geometry stays the same (it’s still a cube after all).

Oh, but it is actually correct behavior to skew the normals towards the longer sides. When you start with a cube, the vertex normals represent a sphere. When you scale the box along one of the axes, it becomes an ellipsoid and you have to skew the normals accordingly. If you didn’t, the normal interpolation would give you the lighting of a larger sphere. So this example actually supports the method of using triangle area in vertex smoothing.

Could you elaborate on what kind of concrete case you are after with the second example? I have done some uneven subdivision on terrain geometry, but I haven’t noticed any odd normals in those cases using the triangle area. On a subdivided cube that was bevelled, using the triangle area looks better IMO, because there are fewer lighting anomalies on the quads next to the cube edges, as the normals are bent more towards the cube face.

Luz_Reyes 101 Sep 28, 2010 at 14:04

A similar discussion has taken place here: http://www.gamedev.net/community/forums/topic.asp?topic_id=355340. Might want to check it out. They find a pretty elegant solution.