0
101 Feb 21, 2010 at 16:54

Hi guys! I’m a beginner in OpenGL programming. I have a problem when computing vertex normals.

I have triangles (i1, i2, i3) and vertices. The triangles come from a 3D object I load (3DS or Wavefront OBJ). To compute a normal I proceed this way:

Vec3f index = mesh->getTriangle(i);

Vec3f v1 = mesh->getVertex(index[0]);
Vec3f v2 = mesh->getVertex(index[1]);
Vec3f v3 = mesh->getVertex(index[2]);

Vec3f a = v3-v1;
Vec3f b = v2-v1;

Vec3f normal = a^b; //cross product


Then, to use the normal array, I must store the normals. I need to store a normal for every vertex, so:

normalArray has size nVertices*3 (nVertices = number of vertices)

normalArray[index[k]*3] = normal[0];
normalArray[index[k]*3 + 1] = normal[1];
normalArray[index[k]*3 + 2] = normal[2];


Unfortunately this doesn’t work… I’m sure I’m doing something wrong, but where? Here is what I see:

http://img202.imageshack.us/img202/1521/screencb.png

Can someone help me? Sorry for my English, I hope the problem is clear :)

#### 12 Replies

0
105 Feb 21, 2010 at 17:50

This is something that pops up from time to time: basically,
you need to average the normals accumulated at every vertex
from the surfaces it belongs to.
Here is the code. I have more complex versions that deal
with weighting by surface area, and with vertex normals
shared by multiple vertex coordinates.
This one is good to get you started.
Don’t mind the shift operation; I use it for padding
my arrays. Look at the code and try to see
what’s going on, it’s not so difficult.

void CMesh::ComputeNormals( void )
{
    int       i;
    int       p1, p2, p3;
    CSurface *Surface;
    Vec3f     N, N1, N2, N3;

    /////////////////////////////////////////
    // simple function for computing normals;
    // for a version with more options
    // have a look at the mesh optimizer

    // reset every vertex normal to zero
    for ( i = 0; i < Vertices; i++ )
        VertexNormal.Set( i, Vec3f( 0, 0, 0 ) );

    // accumulate each surface normal into its three vertices
    for ( i = 0; i < Surfaces; i++ )
    {
        Surface = SurfaceList[ i ];

        // >>2 strips the padding from the packed indices
        p1 = Surface->P1 >> 2;
        p2 = Surface->P2 >> 2;
        p3 = Surface->P3 >> 2;

        N1 = VertexNormal[ p1 ];
        N2 = VertexNormal[ p2 ];
        N3 = VertexNormal[ p3 ];

        N = Surface->Normal;

        N1 += N;
        N2 += N;
        N3 += N;

        VertexNormal.Set( p1, N1 );
        VertexNormal.Set( p2, N2 );
        VertexNormal.Set( p3, N3 );
    }

    ///////////////////////////////////////
    // average: normalize the accumulated sums

    for ( i = 0; i < Vertices; i++ )
        VertexNormal.Normalize( i );

    //////////////////////////////////////////////////
    // set flags

    SetBitFlag( _VT_NORMALS_COMPUTED_, true );
}
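
For reference, here is the same accumulate-then-normalize idea as a self-contained sketch over plain indexed arrays. The `V3` type and function names are illustrative, not the engine API above:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Minimal vertex-normal computation: accumulate every face normal
// into its three vertices, then normalize the sums.
struct V3 { float x, y, z; };

static V3 sub(V3 a, V3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static V3 cross(V3 a, V3 b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}

// tris holds 3 vertex indices per triangle.
std::vector<V3> computeVertexNormals(const std::vector<V3>& verts,
                                     const std::vector<unsigned>& tris)
{
    std::vector<V3> normals(verts.size(), V3{0, 0, 0});

    for (std::size_t t = 0; t + 2 < tris.size(); t += 3) {
        unsigned i0 = tris[t], i1 = tris[t + 1], i2 = tris[t + 2];
        // unnormalized face normal (length proportional to area)
        V3 n = cross(sub(verts[i1], verts[i0]), sub(verts[i2], verts[i0]));
        for (unsigned i : {i0, i1, i2}) {
            normals[i].x += n.x;
            normals[i].y += n.y;
            normals[i].z += n.z;
        }
    }

    // normalize the accumulated sums (skip zero vectors)
    for (V3& n : normals) {
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    }
    return normals;
}
```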

0
101 Feb 23, 2010 at 00:44

Hi, the method is great and it works, but it doesn’t change much.

Let me explain better how I use the normals. I want to send them to a GLSL shader. As long as I don’t use the shader, the model geometry *seems* OK. When I use the shader, the geometry is broken as you can see in the previous image. I think the normal computation is OK, so what’s wrong? Here are the shaders:

uniform vec3 fvLightPosition;

varying vec2 Texcoord;
varying vec3 LightDirection;

attribute vec3 rm_Tangent;

void main( void )
{
    Texcoord = gl_MultiTexCoord0.xy;

    vec3 fvNormal   = gl_Normal;
    vec3 fvTangent  = rm_Tangent;
    vec3 fvBinormal = cross(fvNormal, fvTangent);

    // transform the light vector into tangent space
    LightDirection.x = dot( fvTangent,  fvLightPosition.xyz );
    LightDirection.y = dot( fvBinormal, fvLightPosition.xyz );
    LightDirection.z = dot( fvNormal,   fvLightPosition.xyz );

    gl_Position = ftransform();
}


uniform vec4 fvAmbient;
uniform vec4 fvDiffuse;

uniform sampler2D baseMap;
uniform sampler2D bumpMap;

varying vec2 Texcoord;
varying vec3 LightDirection;

void main( void )
{
    float distSqr = dot(LightDirection, LightDirection);

    vec3  fvLightDirection = normalize( LightDirection * inversesqrt(distSqr) );
    // unpack the tangent-space normal from [0,1] to [-1,1]
    vec3  fvNormal         = normalize( ( texture2D( bumpMap, Texcoord ).xyz * 2.0 ) - 1.0 );
    float fNDotL           = dot( fvNormal, fvLightDirection );

    vec4  fvBaseColor      = texture2D( baseMap, Texcoord );

    vec4  fvTotalAmbient   = fvAmbient * fvBaseColor;
    vec4  fvTotalDiffuse   = fvDiffuse * fNDotL * fvBaseColor;

    gl_FragColor = ( fvTotalAmbient + fvTotalDiffuse );
}


Generated by RenderMonkey.

In case it’s useful: I apply a transformation to the vertices before the glVertexPointer call. Maybe the shader is not getting the right vertices…

0
167 Feb 23, 2010 at 00:54

The shader code looks reasonable. Try removing the bump map and looking at just the pure vertex normals for now, to eliminate the bump map as a possible source of error. Also try to reproduce the problem on a very simple model like a cube. Then you can step through it in the debugger and see exactly where things are going wrong.

0
101 Feb 26, 2010 at 17:18

Fixed!!!

The problems were:

a) the code suggested by v71;
b) a normal can become 0 during the cross product between the vectors v3-v1 and v2-v1, so I had to handle it in some way. For now I assign such a normal to the vector (1,1,1) and it works. If someone knows a better method, you are welcome;
c) thanks to Reedbeta’s suggestion, I used a cube and after some trials I found the problems :)

Thank you guys! See you around the forum. I hope to be useful too!

0
105 Feb 26, 2010 at 17:40

Strange, because I use that code every day and it works fine. Can you point me to the offending line?

0
101 Mar 02, 2010 at 11:21
Vec3s triangle = shape.triangle.at(k); //Vec3s = vector<short>

Vec3f v1 = shape.vertex.at(triangle[0]);
Vec3f v2 = shape.vertex.at(triangle[1]);
Vec3f v3 = shape.vertex.at(triangle[2]);

Vec3f a, b;
a = v3-v1;
b = v2-v1;

Vec3f normal = b^a; //cross product

if ( normal == Vec3f(0,0,0) )
    normal = Vec3f(1,1,1);

normal.normalize();


I write b^a instead of a^b to have the light in the correct direction (otherwise I get shadow where I want light, and light where I want shadow). Maybe that is the problem?
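
For what it’s worth, swapping the operands only flips the sign, since the cross product is anticommutative: a^b == -(b^a). So using b^a is the same as negating the normal, which flips which side of each triangle faces the light. A quick standalone check (illustrative types, not the Vec3f class above):

```cpp
#include <array>

// The cross product is anticommutative: cross(a, b) == -cross(b, a).
// Swapping the operand order therefore flips the normal's direction.
using V3 = std::array<float, 3>;

V3 cross(const V3& a, const V3& b) {
    return {a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]};
}
```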

0
105 Mar 02, 2010 at 11:38

The cross product is sensitive to orientation. Also, the case where the vector is zero-length is rather
unlikely; it means you have a degenerate triangle. In that case I would compare against a small epsilon
rather than setting the vector to (1,1,1), which makes it point in an arbitrary definite direction (basically a diagonal pointing away).
Or skip zero-area triangles completely; they won’t be rendered anyway.
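
The skip-the-degenerates idea can be sketched like this. Comparing the squared length of the cross product against a small epsilon avoids a square root; the epsilon value and the names are illustrative:

```cpp
#include <cmath>

// Sketch: decide whether a face normal is usable before accumulating it.
// A (near-)zero cross product means a degenerate (zero-area) triangle.
struct V3 { float x, y, z; };

// Writes the unnormalized face normal to `out` and returns true if the
// triangle has non-negligible area; returns false for degenerate ones,
// which the caller can then simply skip.
bool faceNormal(V3 v1, V3 v2, V3 v3, V3& out, float eps2 = 1e-12f) {
    V3 a{v3.x - v1.x, v3.y - v1.y, v3.z - v1.z};
    V3 b{v2.x - v1.x, v2.y - v1.y, v2.z - v1.z};
    // b x a, matching the winding used earlier in the thread
    out = {b.y * a.z - b.z * a.y,
           b.z * a.x - b.x * a.z,
           b.x * a.y - b.y * a.x};
    float len2 = out.x * out.x + out.y * out.y + out.z * out.z;
    return len2 > eps2;   // false => zero-area triangle, skip it
}
```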

0
101 Mar 02, 2010 at 17:07

I understand ;) and you’re right: I tried skipping zero-area triangles and it works!!! Thank you so much!

Can I ask one last thing? I write here because I think it’s on topic. Sorry for the trouble.

I need to calculate tangents for a sphere, and for me that is more difficult. I found a sphere implementation that sets the normal coordinates equal to the vertex coordinates; now I must compute tangents to apply bump mapping. I use the same shader I posted earlier. I tried something, but this is the result (see the bright areas, for example):

0
105 Mar 02, 2010 at 19:21

I think you want a per-vertex tangent space, don’t you?

Here it is:

bool CMeshOptimizer::CalculateTangentArray( CMesh *Mesh )
{
    //////////////////////////////
    // check for mesh validity

    if ( Mesh == NULL )
        return false;
    if ( Mesh->Assert() == false )
        return false;

    int           i;
    int           p1, p2, p3;
    int           i1, i2, i3;
    int           Vertices;
    int           Surfaces;
    float         r, w;
    float         x1, y1, z1;
    float         x2, y2, z2;
    float         s1, t1;
    float         s2, t2;
    Vec3f         V1, V2, V3;
    Vec2f         T1, T2, T3;
    Vec3f         Sdir, Tdir;
    Vec3f        *TanU, *TanV;
    Vec3f         n, t, d;
    CSurface     *Surface;

    /////////////////////////////////////

    Vertices = Mesh->GetVertexCount();
    Surfaces = Mesh->GetSurfaceCount();

    // tangent vertex coordinates
    if ( Mesh->VertexTangent.Reserve( Vertices, 4 ) == false )
        return false;

    ////////////////////////////////////////
    // allocate per-vertex accumulation arrays

    if ( ( TanU = new Vec3f[ Vertices ] ) == NULL )
        return false;

    if ( ( TanV = new Vec3f[ Vertices ] ) == NULL )
        return false;

    /////////////////////////
    // init vertex arrays

    for ( i = 0; i < Vertices; i++ )
    {
        TanU[i].zero();
        TanV[i].zero();
    }

    //////////////////////////////////
    // accumulate the per-face tangent directions

    for ( i = 0; i < Surfaces; i++ )
    {
        Surface = Mesh->GetSurfaceAt( i );

        // position indices
        p1 = Surface->P1;
        p2 = Surface->P2;
        p3 = Surface->P3;

        // texture-coordinate indices
        i1 = Surface->T1;
        i2 = Surface->T2;
        i3 = Surface->T3;

        V1.set( Mesh->VertexCoord.x(p1),
                Mesh->VertexCoord.y(p1),
                Mesh->VertexCoord.z(p1) );

        V2.set( Mesh->VertexCoord.x(p2),
                Mesh->VertexCoord.y(p2),
                Mesh->VertexCoord.z(p2) );

        V3.set( Mesh->VertexCoord.x(p3),
                Mesh->VertexCoord.y(p3),
                Mesh->VertexCoord.z(p3) );

        T1.set( Mesh->VertexTexCoord.x(i1),
                Mesh->VertexTexCoord.y(i1) );

        T2.set( Mesh->VertexTexCoord.x(i2),
                Mesh->VertexTexCoord.y(i2) );

        T3.set( Mesh->VertexTexCoord.x(i3),
                Mesh->VertexTexCoord.y(i3) );

        // position deltas
        x1 = V2[0] - V1[0];
        x2 = V3[0] - V1[0];
        y1 = V2[1] - V1[1];
        y2 = V3[1] - V1[1];
        z1 = V2[2] - V1[2];
        z2 = V3[2] - V1[2];

        // texture-coordinate deltas
        s1 = T2[0] - T1[0];
        s2 = T3[0] - T1[0];
        t1 = T2[1] - T1[1];
        t2 = T3[1] - T1[1];

        r = 1.0f / (s1 * t2 - s2 * t1);

        Sdir.set( (t2 * x1 - t1 * x2) * r,
                  (t2 * y1 - t1 * y2) * r,
                  (t2 * z1 - t1 * z2) * r );

        Tdir.set( (s1 * x2 - s2 * x1) * r,
                  (s1 * y2 - s2 * y1) * r,
                  (s1 * z2 - s2 * z1) * r );

        TanU[p1] += Sdir;
        TanU[p2] += Sdir;
        TanU[p3] += Sdir;

        TanV[p1] += Tdir;
        TanV[p2] += Tdir;
        TanV[p3] += Tdir;
    }

    ////////////////////////////////////
    // compute tangent space for
    // each vertex

    for ( i = 0; i < Vertices; i++ )
    {
        n.set( Mesh->VertexNormal.x( i ),
               Mesh->VertexNormal.y( i ),
               Mesh->VertexNormal.z( i ) );

        t.set( TanU[i][0], TanU[i][1], TanU[i][2] );

        // Gram-Schmidt orthogonalize
        d = t - n * dot(n, t);
        d.normalize();

        // calculate handedness
        w = ( dot( cross(n, t), TanV[i] ) < 0.0F ) ? -1.0F : 1.0F;

        // note: (d, w) still needs to be written into Mesh->VertexTangent here
    }

    /////////////////////
    // free memory (array form of delete)

    delete[] TanU;
    delete[] TanV;

    ///////////////////////////////////////
    // set flag

    Mesh->SetBitFlag( _VT_TANGENT_SPACE_COMPUTED_, true );

    return true;
}


Note that I still have to polish and optimize it a little bit;
I am working on other components of my engine right now.

0
101 Mar 04, 2010 at 00:22

Yes, thanks. I use the basics of this code right now, but I have problems using it on a sphere mesh. I render with GL_TRIANGLE_STRIP, so triangles share vertices, and it seems I have trouble retrieving the right faces…

0
101 Mar 04, 2010 at 17:10

OK, here is the final code I wrote. What do you think?

for (int k = 0; k < shape.triangle.size(); k++)
{
    Vec3s triangle = shape.triangle.at(k);

    Vec3f vertex1 = shape.vertex.at(triangle[0]);
    Vec3f vertex2 = shape.vertex.at(triangle[1]);
    Vec3f vertex3 = shape.vertex.at(triangle[2]);

    Vec3f v1 = vertex3 - vertex1;
    Vec3f v2 = vertex2 - vertex1;

    Vec3f normal = v1 ^ v2; // cross product

    Vec2f tex1 = shape.texture.at(triangle[0]);
    Vec2f tex2 = shape.texture.at(triangle[1]);
    Vec2f tex3 = shape.texture.at(triangle[2]);

    Vec2f st1 = tex3 - tex1;
    Vec2f st2 = tex2 - tex1;

    // note: this divides by zero for degenerate texture mappings
    float coef = 1.0f / (st1[0] * st2[1] - st2[0] * st1[1]);

    Vec3f tangent;

    tangent[0] = coef * ((v1[0] * st2[1]) + (v2[0] * -st1[1]));
    tangent[1] = coef * ((v1[1] * st2[1]) + (v2[1] * -st1[1]));
    tangent[2] = coef * ((v1[2] * st2[1]) + (v2[2] * -st1[1]));

    // accumulate the face normal and tangent into the three vertices
    for (int h = 0; h < 3; h++)
    {
        normals[triangle[h]*3]     += normal[0];
        normals[triangle[h]*3 + 1] += normal[1];
        normals[triangle[h]*3 + 2] += normal[2];

        tangents[triangle[h]*3]     += tangent[0];
        tangents[triangle[h]*3 + 1] += tangent[1];
        tangents[triangle[h]*3 + 2] += tangent[2];
    } // h
} // k

// normalize the accumulated sums
for (int k = 0; k < shape.vertex.size(); k++)
{
    Vec3f normal(normals[k*3], normals[k*3+1], normals[k*3+2]);
    normal.normalize();

    Vec3f tangent(tangents[k*3], tangents[k*3+1], tangents[k*3+2]);
    tangent.normalize();

    normals[k*3]     = normal[0];
    normals[k*3 + 1] = normal[1];
    normals[k*3 + 2] = normal[2];

    tangents[k*3]     = tangent[0];
    tangents[k*3 + 1] = tangent[1];
    tangents[k*3 + 2] = tangent[2];
}

0
105 Mar 04, 2010 at 21:59

Yes, a triangle strip uses redundant vertices, and I don’t like this way of rendering meshes; it’s a legacy of older OpenGL versions. I suggest using vertex arrays or VBOs. I also wrote a function to duplicate vertices when models have multiple texture coordinates per vertex, just to avoid using strips.
In my opinion triangle strips are obsolete, but if you still want to use them you should duplicate the redundant vertices, and for each duplicate you also have to match the corresponding texture coords and normals.
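
Unrolling a strip into an indexed triangle list is straightforward: every strip index from the third one on closes a triangle with the two before it, with the winding flipped on every other triangle so all faces keep the same orientation. A minimal sketch (degenerate triangles produced by repeated indices would still need filtering):

```cpp
#include <cstddef>
#include <vector>

// Convert triangle-strip indices into a flat triangle-list index array
// (3 indices per triangle). Odd-numbered triangles have their first two
// indices swapped to keep a consistent winding.
std::vector<unsigned> stripToTriangles(const std::vector<unsigned>& strip) {
    std::vector<unsigned> tris;
    for (std::size_t i = 2; i < strip.size(); ++i) {
        if (i % 2 == 0) {   // even triangle: keep strip order
            tris.push_back(strip[i - 2]);
            tris.push_back(strip[i - 1]);
            tris.push_back(strip[i]);
        } else {            // odd triangle: swap the first two indices
            tris.push_back(strip[i - 1]);
            tris.push_back(strip[i - 2]);
            tris.push_back(strip[i]);
        }
    }
    return tris;
}
```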