# Calculating mesh tangent space vectors

37 replies to this topic

### #1Axel

Valued Member

• Members
• 119 posts

Posted 08 January 2006 - 05:22 PM

I'm currently trying to export meshes from 3ds Max. Vertex normals can be constructed by averaging the face normals of faces in the same smoothing group, but I don't know whether the same approach is right for tangent and bitangent vectors at vertices that share texcoords.

Can someone give me a hint what to do exactly?

### #2SigKILL

Valued Member

• Members
• 200 posts

Posted 08 January 2006 - 07:46 PM

The easiest thing (AFAIK) would be to calculate the tangent and bitangent vectors per triangle and average them for vertices in the same smoothing group. You should not normalize the tangent and bitangent vectors if you calculated them 'correctly'. (I don't know if the evil Nvidia cross-product code does it correctly, but you can always do it correctly by solving a simple linear system; god knows why Nvidia doesn't do it this way.) I'm not entirely sure right now that this is the 'correct' way to do it, but it is what everybody else does. There is probably a lot of info about this on the Nvidia and ATI developer pages.
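The per-triangle solve mentioned above can be sketched like this (a minimal Python illustration; the function name and the plain-list vector representation are my own, not from any particular exporter). It solves the 2x2 linear system that expresses the triangle's position edges in terms of its UV edges:

```python
def triangle_tangent_bitangent(p0, p1, p2, uv0, uv1, uv2):
    """Solve [e1; e2] = [[du1, dv1], [du2, dv2]] * [T; B] for the
    (deliberately non-normalized) tangent T and bitangent B."""
    e1 = [p1[k] - p0[k] for k in range(3)]
    e2 = [p2[k] - p0[k] for k in range(3)]
    du1, dv1 = uv1[0] - uv0[0], uv1[1] - uv0[1]
    du2, dv2 = uv2[0] - uv0[0], uv2[1] - uv0[1]
    det = du1 * dv2 - du2 * dv1  # zero for a degenerate UV mapping
    r = 1.0 / det
    # Cramer's rule for the 2x2 system, applied per component
    T = [(dv2 * e1[k] - dv1 * e2[k]) * r for k in range(3)]
    B = [(du1 * e2[k] - du2 * e1[k]) * r for k in range(3)]
    return T, B
```

For a unit right triangle with an identity UV mapping this returns T = (1, 0, 0) and B = (0, 1, 0), as you'd expect; any stretch or skew in the UVs shows up directly in the lengths and angle of T and B.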

-si

Edit: In the text above I'm referring to the first sentence in the last sentence..

### #3Axel

Valued Member

• Members
• 119 posts

Posted 09 January 2006 - 01:24 AM

So everybody just accepts the error you get by averaging tangent and bitangent vectors when faces with different texture directions share the same vertices and texcoords?

Quote

You should not normalize tangent and bitangent vectors if you calculated them 'correctly'
Why not? Normal, tangent & bitangent should form an inverse 3x3 rotation matrix, IIRC.

PS: I deliberately do not call the bitangent a "binormal", because that's just wrong.

### #4Reedbeta

DevMaster Staff

• 5309 posts
• LocationSanta Clara, CA

Posted 09 January 2006 - 03:58 AM

Binormal is the historically used term for it, not just in computer graphics but in mathematics for at least 100 years, although I grant that it seems unintuitive when applied to surfaces. (The original meaning applied only to curves in space, where the binormal is defined as the cross product of the tangent and normal, so binormal is a reasonable name for it).

That aside, in fact the tangent space does not need to be orthogonally related to the world space. The u and v axes of the tangent space are defined by the texture coordinates, and if the texture mapping is not orthogonal, then the tangent space mapping will not be either. Similarly, if the texture is scaled so that texel areas are greater or smaller than 1 in world space, then the tangent space basis vectors will not be of unit length either.

Averaging the tangent bases at each vertex is completely correct and results in an approximation to the continuous parameterization of the curved surface that is represented by the triangle mesh, in just the same way that averaging normals at vertices approximates the continuous normal map of a curved surface. This is of course assuming that the texture mapping is consistent with some kind of continuous parameterization of a curved surface. Whenever you have a seam in the texture mapping there will also be a seam in the tangent bases (i.e. at edges of smoothing groups).
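The per-vertex averaging described above is just an accumulation pass over the faces incident to each vertex. A minimal sketch (the function name and data layout are hypothetical; per-face tangents are assumed to be computed already):

```python
def average_vertex_tangents(face_tangents, faces, n_vertices):
    """Sum each face's tangent into all three of its vertices.

    faces          -- list of (i0, i1, i2) vertex-index triples
    face_tangents  -- per-face tangent vectors, aligned with `faces`
    Returns the per-vertex accumulated (non-normalized) tangents.
    """
    acc = [[0.0, 0.0, 0.0] for _ in range(n_vertices)]
    for (i0, i1, i2), t in zip(faces, face_tangents):
        for i in (i0, i1, i2):
            for k in range(3):
                acc[i][k] += t[k]
    return acc
```

In practice you would run this separately for tangents and bitangents, and split vertices wherever the texture mapping has a seam, so that the seam in the tangent bases mentioned above is preserved.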
reedbeta.com - developer blog, OpenGL demos, and other projects

### #5SigKILL

Valued Member

• Members
• 200 posts

Posted 09 January 2006 - 09:44 AM

To AXEL:
The tangent basis should define a transformation to 'texture space'. The non-unit tangent and bitangent vectors correspond to 'stretch' in the texture mapping.

To Reedbeta:
Are you sure? I've read quite a bit of differential geometry/topology and I've never seen a definition of the binormal that makes sense for anything except curves in R^3. I do somewhat agree that 'bitangent' is even worse, since a bitangent is a line that is tangent at two points. The standard mathematical way to treat this is to call the two vectors a tangent basis, usually referred to as an ordered pair {u_x, u_y} or {u, v}; I've never seen anything but this, actually.

Referring to the main subject: I was unsure whether this is the 'correct' way to do it, since it is not enough that the texture mapping is consistent with a continuous parameterization; it just struck me that it should be C^1. I'm sure that 99% of the texture mappings out there are only piecewise C^1...

-si

### #6Axel

Valued Member

• Members
• 119 posts

Posted 09 January 2006 - 09:13 PM

Reedbeta said:

That aside, in fact the tangent space does not need to be orthogonally related to the world space. The u and v axes of the tangent space are defined by the texture coordinates, and if the texture mapping is not orthogonal, then the tangent space mapping will not be either. Similiarly, if the texture is scaled so that texel areas are greater or smaller than 1 in world space, then the tangent space basis vectors will not be of unit length either.
Now I'm thoroughly confused.

In the book "Mathematics for 3D Game Programming & Computer Graphics" there is a passage saying that you should orthonormalize the tangent space vectors :|

### #7SigKILL

Valued Member

• Members
• 200 posts

Posted 09 January 2006 - 10:28 PM

Axel said:

Now I'm thoroughly confused.

In the book "Mathematics for 3D Game Programming & Computer Graphics" there is a passage saying that you should orthonormalize the tangent space vectors :|

Well, the truth is that you should neither orthogonalize nor normalize. The tangent basis should define a transformation to 'texture space', dependent on the texture mapping coordinates. You will almost certainly have some stretch, and you will get a non-orthogonal basis when the texture mapping is 'skewed' (like a square with texture coordinates (0,0), (1,0), (0.5,1), (0.75,1)).
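The non-orthogonality under a skewed mapping is easy to check numerically. A small sketch (triangle and UV values are hypothetical, chosen only to skew the v direction), using the standard 2x2 UV-edge solve for the tangent basis:

```python
# One triangle of a skewed quad: flat geometry, sheared UVs
p0, p1, p2 = (0, 0, 0), (1, 0, 0), (0, 1, 0)
uv0, uv1, uv2 = (0, 0), (1, 0), (0.5, 1)

e1 = [p1[k] - p0[k] for k in range(3)]
e2 = [p2[k] - p0[k] for k in range(3)]
du1, dv1 = uv1[0] - uv0[0], uv1[1] - uv0[1]
du2, dv2 = uv2[0] - uv0[0], uv2[1] - uv0[1]

r = 1.0 / (du1 * dv2 - du2 * dv1)
T = [(dv2 * e1[k] - dv1 * e2[k]) * r for k in range(3)]
B = [(du1 * e2[k] - du2 * e1[k]) * r for k in range(3)]

# The basis comes out non-orthogonal: T = (1, 0, 0), B = (-0.5, 1, 0)
dot_TB = sum(T[k] * B[k] for k in range(3))  # -0.5, not 0
```

Forcing T.B = 0 here (by Gram-Schmidt) would throw away exactly the skew that the mapping contains.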

-si

### #8skynet

New Member

• Members
• 16 posts

Posted 12 January 2006 - 08:46 PM

My experience with TBN vectors is that you definitely should normalize them. I have thought about it for some time, and these are my conclusions:

The reasoning is similar to why we need special matrices to transform normals.
We use the TBN matrix to transform a _direction_ vector (the light vector) into tangent space! Matrices that contain non-uniform scaling (as non-normalized T and B would introduce) are supposed to transform direction vectors this way (transposed inverse of M):

n' = ((M^-1)^T) * n

(This kind of removes the non-uniformity in scaling, leaving us with a vector that has the right direction but may not have the right length, so a renormalization of n' is needed, too.)
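The inverse-transpose rule above can be checked with a toy example (the non-uniform scale and the plane are hypothetical, picked so everything is computable by hand):

```python
import math

# Non-uniform scale: x is stretched by 2 (diagonal matrix)
M_diag = (2.0, 1.0, 1.0)

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return [c / l for c in v]

# Plane through the origin with tangent direction (1, 1, 0)
t = (1.0, 1.0, 0.0)
n = normalize((1.0, -1.0, 0.0))  # its normal

# Tangent directions transform directly: t' = M t
t2 = [M_diag[k] * t[k] for k in range(3)]             # (2, 1, 0)

# Normal: inverse transpose (reciprocal diagonal here), then renormalize
n2 = normalize([n[k] / M_diag[k] for k in range(3)])  # (1, -2, 0)/sqrt(5)

dot_correct = sum(n2[k] * t2[k] for k in range(3))           # 0: still perpendicular
dot_naive = sum(M_diag[k] * n[k] * t2[k] for k in range(3))  # M*n is NOT perpendicular
```

The renormalization step is exactly the one the parenthetical above calls for: the inverse transpose fixes the direction but not the length.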

Eric Lengyel's derivation in fact produces a matrix that transforms a vector from tangent space to object space (let's call it matrix 'A'). But the lighting code needs the inverse of that matrix ('B'). Instead of doing a real matrix inversion, Lengyel assumes that A is nearly orthogonal (or even forces orthogonality using that Gram-Schmidt step) and normalizes T, B, N. This way he gets an orthonormalized (rotation) matrix that is perfectly suited for transforming the light vector.

Let's see what my "real" solution would yield:

B=A^-1

l' = ((B^-1)^T) * l
l' = (((A^-1)^-1)^T) * l
l' = (A^T) * l

Doh! This is the same thing Lengyel is using, except that no normalization or orthogonalization is taking place. But why doesn't this work?
I forgot that I created the per-vertex TB vectors by summing up the face TB vectors of the faces the vertex belongs to! This creates vectors that are way too long. So I would have to either a) normalize the vectors again or b) divide by the number of faces we summed over (to get some kind of "average").

Actually, I did not test whether averaging instead of normalization results in a different look.

But then it hit me. What is the basic reason why we do all that TBN stuff? We want to transform l into the space where the normal-map vectors live, so we can "safely" calculate dot(n, l). They (usually?) live in "unstretched" texture space.
Here's the thing that people arguing with "stretched texture space" seem to miss: stretching the normal map only changes the positions where the normal texels get fetched. But it DOES NOT stretch the direction of the fetched normals. They still live in unstretched space. And this is why the incoming l vector must not get stretched either; thus TBN should be orthonormal. Otherwise we would combine a stretched l vector with an unstretched n, which would result in weird lighting.

Of course, if my assumption is wrong and the normal-map vectors are kind of "pre-stretched", the reasoning above does not hold. But that would drastically reduce the reusability of normal maps (since the normals would only be right for triangles with a specific texture stretching), and we could switch to object-space normal maps anyway :-)

### #9Reedbeta

DevMaster Staff

• 5309 posts
• LocationSanta Clara, CA

Posted 12 January 2006 - 10:31 PM

The equation n' = ((M^-1)^T) * n is only correct for normal vectors, and other vectors formed by cross products. Tangent vectors transform as t' = Mt, regardless of whether the matrix M contains nonuniform scaling or whatever. The reason is that these are actually two different 'types' of vectors. Tangent vectors are what are called contravariant vectors, and normal vectors are covariant (pseudo-)vectors. The distinction is irrelevant in rectangular coordinates, for then the two types of vectors behave identically; but as soon as you transform into a "stretched" space, you have to take the distinction into account.
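The contravariant/covariant distinction above is easiest to see with a shear, which leaves one plane fixed (the matrix and plane here are a hypothetical example, not from the thread):

```python
# Shear in the xy-plane: M maps (x, y, z) -> (x + y, y, z)
M = [[1.0, 1.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]

def mat_vec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

# The plane y = 0 is mapped to itself by this shear.
t = (1.0, 0.0, 0.0)  # a tangent of the plane
n = (0.0, 1.0, 0.0)  # the plane's normal

t2 = mat_vec(M, t)       # (1, 0, 0): t' = M t still lies in the plane
n_naive = mat_vec(M, n)  # (1, 1, 0): M n is no longer perpendicular to it!

# Inverse transpose of M, worked out by hand for this shear
M_inv_T = [[1.0, 0.0, 0.0],
           [-1.0, 1.0, 0.0],
           [0.0, 0.0, 1.0]]
n2 = mat_vec(M_inv_T, n)  # (0, 1, 0): the correct transformed normal
```

Transforming both vectors with M (or both with the inverse transpose) gets one of them wrong; each type needs its own rule.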

You make a good point about dot products in a stretched tangent space. The inner product is not preserved by non-orthogonal transformations, so to be completely correct we need to evaluate a different inner product in the tangent space. In differential geometry this is called the first fundamental form.

On the whole, it is probably easier to use object space normal maps, or just transform the normals into world space at each pixel, rather than transforming the light and view vectors into tangent space.
reedbeta.com - developer blog, OpenGL demos, and other projects

### #10SigKILL

Valued Member

• Members
• 200 posts

Posted 13 January 2006 - 11:55 AM

First off, the first fundamental form is defined by taking the dot product as the inner product in every tangent space (this is then used to define a metric on the surface). The tangent space vectors should be the partial derivatives of our atlas of smooth local diffeomorphisms from R^2 (these diffeomorphisms are defined by the 'inverse' texture mapping, btw). So using the dot product in texture space is the 'correct' thing to do, diff.-geom.-wise.
The TBN transformation should not preserve inner products or angles, since stretch in the texture mapping does not preserve angles or inner products. This is easy to see if you have, for example, a slope at 45 deg. in texture space: when 'stretched' onto the surface, the angle will of course be less than 45 deg. (or more than 45 deg. if we have some 'shrink'). So the idea is that by having a non-normalized tangent basis we preserve the angle and inner product from object space when transforming to texture space.
So, if p is your local diffeomorphism to texture space, n is a normal in texture space, and v is any curve or ray, then to get the correct angle you should calculate the angle between p'(v)*v' and n (in our case p'(v) is simply the interpolated non-normalized TBN, and v' is the light/eye vector, whatever).
You have to consider this any time you pass to/from texture space, so using object-space normal maps or world-space lighting doesn't fix this magically.
Using a normalized tangent basis will of course give you some sort of bump mapping, but if you use it with parallax mapping I'm sure you will be able to construct examples where it gives weird results.
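The 45-degree example above can be made concrete. A small sketch, assuming a hypothetical basis where one texture unit in u spans two world units (|T| = 2):

```python
import math

# Non-normalized tangent basis encoding a 2x stretch in u
T = (2.0, 0.0, 0.0)
B = (0.0, 1.0, 0.0)

# A 45-degree slope in texture space...
d_tex = (1.0, 1.0)

# ...mapped onto the surface through the non-normalized basis
d_obj = [d_tex[0] * T[k] + d_tex[1] * B[k] for k in range(3)]  # (2, 1, 0)

# The angle against the u direction shrinks below 45 degrees
angle = math.degrees(math.atan2(d_obj[1], d_obj[0]))  # ~26.57 deg
```

Normalizing T before the mapping would report 45 degrees on the surface, which is exactly the angle distortion the post argues a normalized basis gets wrong.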

-si

Edited for an error in the diff. geom part at the beginning of the message.
Editing again, now everything should be correct...

### #11SigKILL

Valued Member

• Members
• 200 posts

Posted 13 January 2006 - 01:15 PM

I just realized when editing my message that this will get messed up anyway. (Gah, there are so many weird things going on in computer graphics; this is probably what confused skynet.) The messing-up is easy to see when your normal map is nothing but a 'proper' normal (i.e. it only has a value in the normal component). The simplest way would probably be to do lighting in object space (and scale the fetched normal in the tangent-plane directions; this is probably what Reedbeta meant). Since almost everybody probably does this incorrectly, whatever you do will probably look kind of correct (?).
However, orthogonalizing and normalizing the TBN is incorrect.

-si

### #12skynet

New Member

• Members
• 16 posts

Posted 13 January 2006 - 04:37 PM

Would it be enough to scale the x and y components of the fetched normal by the lengths of T and B and then renormalize?

I wonder if anyone has ever implemented it "the right way". Do you know if there's any demo showing the differences between the various techniques? I'm really interested to see how much difference it would make...

### #13SigKILL

Valued Member

• Members
• 200 posts

Posted 14 January 2006 - 08:12 PM

skynet said:

Would it be enough to scale the x and y components of the fetched normal by the lengths of T and B and then renormalize?

I wonder if anyone has ever implemented it "the right way". Do you know if there's any demo showing the differences between the various techniques? I'm really interested to see how much difference it would make...

Simply scaling the x and y components is not enough. If the basis is orthogonal you should use the inverse transpose, but I'm not sure whether this applies to non-orthogonal bases too (it should be OK to check, but it's kind of an awkward calculation; I might do it this Sunday).

I've googled for derivations of the tangent basis but can't find any satisfactory reasoning (some try, but the details are similar to ancient Egyptian mathematicians, who would justify certain things with something like 'because this is the truth'). I can't find anyone doing it 'the right way'. This is not strange at all, since trying to do it the right way obviously opens a can of worms. The best thing to do is probably to make sure your artists don't do anything funny with the normal maps, or to generate them accurately from high-detail models (in the latter case you can probably use an orthonormal tangent basis and get away with it).

-si

### #14Axel

Valued Member

• Members
• 119 posts

Posted 14 January 2006 - 09:05 PM

I still don't get why the tangent space vectors don't need to be normalized. I always thought the tangent space setup was a simple change of basis.

### #15Reedbeta

DevMaster Staff

• 5309 posts
• LocationSanta Clara, CA

Posted 14 January 2006 - 11:58 PM

It is. But bases of R^3 don't need to be orthogonal (that is, orthogonally related to the 'standard' basis). It's perfectly admissible to work in oblique coordinate systems. You just have to be aware of the issues this causes, i.e. different transformations for normal vs tangent vectors, and the need to use different inner products in different spaces.
reedbeta.com - developer blog, OpenGL demos, and other projects

### #16Axel

Valued Member

• Members
• 119 posts

Posted 15 January 2006 - 12:42 AM

I understand now that they don't need to be orthogonal, but does that mean that the vectors also don't need to be unit-length?

### #17Reedbeta

DevMaster Staff

• 5309 posts
• LocationSanta Clara, CA

Posted 15 January 2006 - 03:47 AM

Correct. I should have said bases of R^3 don't need to be orthonormal either.
reedbeta.com - developer blog, OpenGL demos, and other projects

### #18Axel

Valued Member

• Members
• 119 posts

Posted 15 January 2006 - 11:23 AM

Then I think I need to be a bit more practical.

I average face normals by adding the non-normalized cross product to the vertices (so the face area is taken into account, as you likely know), and afterwards I renormalize all vertex normals.

I thought I could do something similar with the binormal and tangent...
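The area-weighted normal accumulation described above can be sketched like this (a minimal self-contained Python version; the function name and data layout are my own):

```python
import math

def vertex_normals(positions, faces):
    """Accumulate non-normalized face cross products per vertex
    (length = 2 * face area, so larger faces weigh more), then renormalize."""
    acc = [[0.0, 0.0, 0.0] for _ in positions]
    for i0, i1, i2 in faces:
        p0, p1, p2 = positions[i0], positions[i1], positions[i2]
        e1 = [p1[k] - p0[k] for k in range(3)]
        e2 = [p2[k] - p0[k] for k in range(3)]
        fn = [e1[1] * e2[2] - e1[2] * e2[1],   # cross(e1, e2), not normalized
              e1[2] * e2[0] - e1[0] * e2[2],
              e1[0] * e2[1] - e1[1] * e2[0]]
        for i in (i0, i1, i2):
            for k in range(3):
                acc[i][k] += fn[k]
    out = []
    for n in acc:
        l = math.sqrt(sum(c * c for c in n)) or 1.0  # guard unreferenced vertices
        out.append([c / l for c in n])
    return out
```

For tangents and bitangents the accumulation loop is the same shape, but the summed quantity comes from the UV-edge solve rather than a cross product, which is why the final renormalize-vs-average question in this thread even arises.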

### #19Reedbeta

DevMaster Staff

• 5309 posts
• LocationSanta Clara, CA

Posted 15 January 2006 - 11:41 AM

That works fine for normals (in fact there's an automatic weighting by area, which is rather nice). However, when you're using normal mapping, the geometric normal is less important, as it doesn't even enter into lighting (except sometimes as a self-shadowing correction). The computation for tangents and binormals is based on the texture coordinates of each of the adjacent faces, which can then be averaged directly. Note that if there's a seam in the texture coordinates (i.e. they aren't continuous around that vertex), then there should also be a corresponding seam in the tangents/binormals, as mentioned previously.
reedbeta.com - developer blog, OpenGL demos, and other projects

### #20Axel

Valued Member

• Members
• 119 posts

Posted 15 January 2006 - 12:55 PM

So I can't simply sum up the tangents and binormals as well and normalize them afterwards?
Could I keep track of how many faces affect a vertex and divide the vectors by that number?

Reedbeta said:

Note that if there's a seam in the texture coordinates (i.e. they aren't continous around that vertex) then there should also be a corresponding seam in the tangent/binormals, as mentioned previously.
That's something I understood before posting here :)
