Calculating mesh tangent space vectors

Axel 101 Jan 08, 2006 at 17:22

I’m currently trying to export meshes from 3dsmax. Constructing vertex normals can be done by averaging face normals for faces in the same smoothing group, but I don’t know whether the same approach is right for tangent and bitangent vectors at vertices that share texcoords.

Can someone give me a hint about what exactly to do?

37 Replies


SigKILL 101 Jan 08, 2006 at 19:46

The easiest thing (AFAIK) would be to calculate the tangent and bitangent vectors per triangle and average them for vertices in the same smoothing group. You should not normalize the tangent and bitangent vectors if you calculated them ‘correctly’. (I don’t know if the evil Nvidia cross-product code does it correctly, but you can always do it correctly by solving a simple linear system; god knows why Nvidia doesn’t do it this way.) I’m somewhat unsure right now whether this is the ‘correct’ way to do it, but it is what everybody else does. There is probably a lot of info about this on the Nvidia and ATI developer pages.
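
In code, the per-triangle linear system looks something like this (just a sketch with a toy Vec3/Vec2 type; the function name is made up):

    struct Vec3 { float x, y, z; };
    struct Vec2 { float u, v; };

    static Vec3 sub(Vec3 a, Vec3 b)  { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static Vec3 mul(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

    // Solve  e1 = du1*T + dv1*B  and  e2 = du2*T + dv2*B  for T and B,
    // where e1/e2 are the position edges and (du, dv) the matching UV edges.
    void triangleTangentBasis(const Vec3 p[3], const Vec2 uv[3], Vec3& T, Vec3& B)
    {
        Vec3  e1  = sub(p[1], p[0]),   e2  = sub(p[2], p[0]);
        float du1 = uv[1].u - uv[0].u, dv1 = uv[1].v - uv[0].v;
        float du2 = uv[2].u - uv[0].u, dv2 = uv[2].v - uv[0].v;

        float det = du1 * dv2 - du2 * dv1;        // zero for degenerate UVs
        float r   = (det != 0.0f) ? 1.0f / det : 0.0f;

        // Cramer's rule.  T and B come out *unnormalized*: their lengths
        // encode the stretch of the texture mapping.
        T = mul(sub(mul(e1, dv2), mul(e2, dv1)), r);
        B = mul(sub(mul(e2, du1), mul(e1, du2)), r);
    }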

-si

Edit: In the last sentence above, by ‘it’ I’m referring to the approach in the first sentence.

Axel 101 Jan 09, 2006 at 01:24

So everybody accepts the error introduced by averaging tangent and bitangent vectors when faces with different texture directions share vertices (and texcoords)?

You should not normalize the tangent and bitangent vectors if you calculated them ‘correctly’

Why not? Normal, tangent & bitangent should form the inverse of a 3x3 rotation matrix, IIRC.

PS: I deliberately do not call the bitangent “binormal”, because that’s just wrong.

Reedbeta 167 Jan 09, 2006 at 03:58

Binormal is the historically used term for it, not just in computer graphics but in mathematics for at least 100 years, although I grant that it seems unintuitive when applied to surfaces. (The original meaning applied only to curves in space, where the binormal is defined as the cross product of the tangent and normal, so binormal is a reasonable name for it).

That aside, in fact the tangent space does not need to be orthogonally related to the world space. The u and v axes of the tangent space are defined by the texture coordinates, and if the texture mapping is not orthogonal, then the tangent space mapping will not be either. Similarly, if the texture is scaled so that texel areas are greater or smaller than 1 in world space, then the tangent space basis vectors will not be of unit length either.

Averaging the tangent bases at each vertex is completely correct and results in an approximation to the continuous parameterization of the curved surface that is represented by the triangle mesh, in just the same way that averaging normals at vertices approximates the continuous normal map of a curved surface. This is of course assuming that the texture mapping is consistent with some kind of continuous parameterization of a curved surface. Whenever you have a seam in the texture mapping there will also be a seam in the tangent bases (i.e. at edges of smoothing groups).
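
In code, the per-vertex averaging might look like this (a sketch; it assumes the mesh is already split at texture seams so every vertex has one consistent parameterization, and the types and names are illustrative):

    #include <vector>

    struct Vec3 { float x = 0, y = 0, z = 0; };

    // Sum the per-face T and B into every vertex of the face, then divide
    // by the face count.  Per the discussion above, do NOT normalize here
    // if you want the basis to keep the texture stretch.
    void averageTangents(const std::vector<int>& tris,      // 3 indices per face
                         const std::vector<Vec3>& faceT,
                         const std::vector<Vec3>& faceB,
                         std::vector<Vec3>& vertT,
                         std::vector<Vec3>& vertB)
    {
        std::vector<int> count(vertT.size(), 0);
        for (size_t f = 0; f < faceT.size(); ++f) {
            for (int k = 0; k < 3; ++k) {
                int v = tris[3 * f + k];
                vertT[v].x += faceT[f].x; vertT[v].y += faceT[f].y; vertT[v].z += faceT[f].z;
                vertB[v].x += faceB[f].x; vertB[v].y += faceB[f].y; vertB[v].z += faceB[f].z;
                ++count[v];
            }
        }
        for (size_t v = 0; v < vertT.size(); ++v) {
            if (count[v] == 0) continue;
            float s = 1.0f / count[v];
            vertT[v].x *= s; vertT[v].y *= s; vertT[v].z *= s;
            vertB[v].x *= s; vertB[v].y *= s; vertB[v].z *= s;
        }
    }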

SigKILL 101 Jan 09, 2006 at 09:44

To AXEL:
The tangent basis should define a transformation to ‘texture space’. The non-unit tangent and bitangent vectors correspond to ‘stretch’ in the texture mapping.

To Reedbeta:
Are you sure? I’ve read quite a bit of differential geometry/topology and I’ve never seen a definition of the binormal that makes sense for anything except curves in R^3. I do somewhat agree that bitangent is even worse, since a bitangent is a line that is tangent at two points. The standard mathematical way to treat this is to call the two vectors a tangent basis, usually written as an ordered pair {u_x, u_y} or {u, v}; I’ve never seen anything but this, actually.

Referring to the main subject: I was unsure whether this is the ‘correct’ way to do it, since it is not enough that the texture mapping is consistent with a continuous parameterization; it just struck me that it should be C^1. I’m sure that 99% of the texture mappings out there are only piecewise C^1…

-si

Axel 101 Jan 09, 2006 at 21:13

@Reedbeta

That aside, in fact the tangent space does not need to be orthogonally related to the world space. The u and v axes of the tangent space are defined by the texture coordinates, and if the texture mapping is not orthogonal, then the tangent space mapping will not be either. Similarly, if the texture is scaled so that texel areas are greater or smaller than 1 in world space, then the tangent space basis vectors will not be of unit length either.

Now the confusion is perfect.

In the book “Mathematics for 3D Game Programming & Computer Graphics” there is a passage that says you should orthonormalize the tangent space vectors :|

SigKILL 101 Jan 09, 2006 at 22:28

@Axel

Now the confusion is perfect. In the book “Mathematics for 3D Game Programming & Computer Graphics” there is a passage that says you should orthonormalize the tangent space vectors :|

Well, the truth is that you should not orthogonalize or normalize. The tangent basis should define a transformation to ‘texture space’, dependent on the texture mapping coordinates. You will almost certainly have some stretch, and you will get a non-orthogonal basis when the texture mapping is ‘skewed’ (e.g. a quad with texture coordinates (0,0), (1,0), (0.5,1), (0.75,1)).

-si

skynet 101 Jan 12, 2006 at 20:46

My experience with TBN vectors is that you definitely should normalize them. I have thought about it for some time and these are my conclusions:

The reasoning is similar to why we need special matrices to transform normals.
We use the TBN matrix to transform a _direction_ vector (the light vector) into tangent space! Matrices that contain non-uniform scaling (as non-normalized T and B would introduce) are supposed to transform direction vectors with the transposed inverse of M:

n' = ((M^-1)^T) * n

(This kind of removes the non-uniformity in the scaling, leaving us with a vector that has the right direction but may not have the right length, so a renormalization of n' is needed, too.)
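
For the special case of a pure non-uniform scale M = diag(sx, sy, sz), the inverse-transpose is just diag(1/sx, 1/sy, 1/sz), so the rule reduces to this minimal sketch (plain float arrays for brevity):

    #include <cmath>

    // Apply (M^-1)^T for M = diag(sx, sy, sz) to a normal-style vector,
    // then renormalize, as described above.
    void transformNormalDiag(float sx, float sy, float sz, float n[3])
    {
        n[0] /= sx;  n[1] /= sy;  n[2] /= sz;      // (M^-1)^T * n
        float len = std::sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
        if (len > 0.0f) { n[0] /= len; n[1] /= len; n[2] /= len; }
    }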

Eric Lengyel’s derivation in fact produces a matrix that transforms a vector from tangent space to object space (let’s call it matrix ‘A’). But the lighting code needs the inverse of that matrix (‘B’). Instead of doing a real inversion, Lengyel assumes that A is nearly orthogonal (or even forces orthogonality using a Gram-Schmidt step) and normalizes T, B, N. This way he gets an orthonormal (rotation) matrix that is perfectly suited for transforming the light vector.
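
That Gram-Schmidt step looks roughly like this (a sketch, assuming N is already unit length; the helper names are mine):

    #include <cmath>

    struct Vec3 { float x, y, z; };
    static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3  sub(Vec3 a, Vec3 b)   { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
    static Vec3  mul(Vec3 a, float s)  { return { a.x*s, a.y*s, a.z*s }; }
    static Vec3  norm(Vec3 a) { float l = std::sqrt(dot(a,a)); return { a.x/l, a.y/l, a.z/l }; }

    // Keep N fixed, make T perpendicular to N, then make B perpendicular
    // to both, normalizing everything along the way.
    void orthonormalizeTBN(Vec3& T, Vec3& B, const Vec3& N)
    {
        T = norm(sub(T, mul(N, dot(N, T))));            // strip N component from T
        B = norm(sub(sub(B, mul(N, dot(N, B))),
                     mul(T, dot(T, B))));               // strip N and T components from B
    }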

Let’s see what my “real” solution would yield:

B = A^-1

l' = ((B^-1)^T) * l
l' = (((A^-1)^-1)^T) * l
l' = A^T * l

Doh! This is the same thing that Lengyel is using, except no normalization or orthogonalization is taking place. But why doesn’t this work?
I forgot that I created the per-vertex TB vectors by summing up the TB vectors of the faces the vertex belongs to! This creates vectors that are way too long. So I would have to either a) normalize the vectors again or b) divide by the number of faces we summed up (in order to get some kind of “average”).

Actually, I did not test whether averaging instead of normalization results in a different look.

But then it hit me. What is the basic reason why we do this TBN stuff at all? We want to transform l into the space where the normal-map vectors live, so we can “safely” calculate dot(n, l). They (usually?) live in “unstretched” texture space.
Here’s the thing that people arguing with “stretched texture space” seem to miss: stretching the normal map only changes the positions where the normal texels get fetched. But it does NOT stretch the direction of the fetched normals. They still live in unstretched space. And this is why the incoming l vector must not get stretched either; thus TBN should be orthonormal. Otherwise we would combine a stretched l vector with an unstretched n, which would result in weird lighting.

Of course, if my assumption is wrong and the normal-map vectors are kind of “pre-stretched”, the reasoning above does not hold. But that would drastically reduce the re-usability of normal maps (since the normals would only be right for triangles with a specific texture stretching), and we could switch to object-space normal maps anyway :-)

Reedbeta 167 Jan 12, 2006 at 22:31

The equation n' = ((M^-1)^T) * n is only correct for normal vectors and other vectors formed by cross products. Tangent vectors transform as t' = M * t, regardless of whether the matrix M contains non-uniform scaling. The reason is that these are actually two different ‘types’ of vectors: tangent vectors are what are called contravariant vectors, and normal vectors are covariant (pseudo-)vectors. The distinction is irrelevant in rectangular coordinates, where the two types behave identically; but as soon as you transform into a “stretched” space, you have to take the distinction into account.
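
Here’s a tiny worked example of the difference (a sketch; M = diag(2,1,1) applied to the plane x + y = 0):

    #include <cstdio>

    // The plane x + y = 0 has tangent t = (1,-1,0) and normal n = (1,1,0).
    // Under M = diag(2,1,1): tangents transform as t' = M t, normals as
    // n' = (M^-1)^T n.  The correct pair stays perpendicular; transforming
    // the normal with M instead does not.
    int main()
    {
        float M[3]    = { 2.0f,  1.0f, 1.0f };                 // diagonal of M
        float t[3]    = { 1.0f, -1.0f, 0.0f };
        float n[3]    = { 1.0f,  1.0f, 0.0f };

        float tp[3]   = { M[0]*t[0], M[1]*t[1], M[2]*t[2] };   // t' = M t
        float np[3]   = { n[0]/M[0], n[1]/M[1], n[2]/M[2] };   // n' = (M^-1)^T n
        float nBad[3] = { M[0]*n[0], M[1]*n[1], M[2]*n[2] };   // wrong rule

        std::printf("t'.n'   = %g\n", tp[0]*np[0]   + tp[1]*np[1]   + tp[2]*np[2]);   // prints 0
        std::printf("t'.(Mn) = %g\n", tp[0]*nBad[0] + tp[1]*nBad[1] + tp[2]*nBad[2]); // prints 3
        return 0;
    }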

You make a good point about dot products in a stretched tangent space. The inner product is not preserved by non-orthogonal transformations, so to be completely correct we need to evaluate a different inner product in the tangent space. In differential geometry this is called the first fundamental form.
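
In terms of the tangent basis discussed here, evaluating that inner product can be sketched like this (assuming T and B are the unnormalized per-vertex tangent/bitangent; E, F, G are the usual first-fundamental-form coefficients):

    struct Vec3 { float x, y, z; };
    static float dot3(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Inner product of two tangent-plane vectors given in (u, v) texture
    // coordinates, using G = [E F; F G] built from the tangent basis:
    // a.b = E*au*bu + F*(au*bv + av*bu) + G*av*bv.
    float tangentInnerProduct(Vec3 T, Vec3 B,
                              float au, float av,    // first vector, in (u,v)
                              float bu, float bv)    // second vector, in (u,v)
    {
        float E = dot3(T, T), F = dot3(T, B), G = dot3(B, B);
        return E*au*bu + F*(au*bv + av*bu) + G*av*bv;
    }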

On the whole, it is probably easier to use object space normal maps, or just transform the normals into world space at each pixel, rather than transforming the light and view vectors into tangent space.

SigKILL 101 Jan 13, 2006 at 11:55

First off, the first fundamental form is defined by taking the dot product as the inner product in every tangent space (this is then used to define a metric on the surface). The tangent space vectors should be the partial derivatives of our atlas of smooth local diffeomorphisms from R^2 (these diffeomorphisms are defined by the ‘inverse’ texture mapping, btw). So, using the dot product in texture space is the ‘correct’ thing to do, diff.-geom.-wise.
The TBN transformation should not preserve inner products or angles, since stretch in the texture mapping does not preserve angles or inner products. This is easy to see if you e.g. have a slope at 45 deg. in texture space: when ‘stretched’ onto the surface the angle will of course be less than 45 deg. (or more than 45 deg. if we have some ‘shrink’). So the thought is that by having a non-normalized tangent basis we want to preserve the angle and inner product from object space when transforming to texture space.
So, if p is your local diffeomorphism to texture space, n is a normal in texture space, and v is any curve or ray, then to get the correct angle you should calculate the angle between p'(v)*v' and n (in our case p'(v) is simply the interpolated non-normalized TBN, and v' is the light/eye vector or whatever).
You have to consider this any time you pass to/from texture space, so using object-space normal maps or world-space lighting doesn’t fix this magically.
Using a normalized tangent basis will of course give you some sort of bump mapping, but if you use it with parallax mapping I’m sure you will be able to create examples where this gives weird results.

-si

Edited for an error in the diff. geom. part at the beginning of the message.
Editing again; now everything should be correct…

SigKILL 101 Jan 13, 2006 at 13:15

I just realized when editing my message that this will get messed up anyway (gah, there are so many weird things going on in computer graphics; this is probably what confused skynet). The messing up is easy to see when your normal map is nothing but a ‘proper’ normal (i.e. it only has a value in the normal component). The simplest way would probably be to do the lighting in object space (and scale the fetched normal in the tangent-plane directions, which is probably what Reedbeta meant). Since almost everybody probably does this incorrectly, whatever you do will probably look kind of correct (?).
However, orthogonalizing and normalizing the TBN is incorrect.

-si

skynet 101 Jan 13, 2006 at 16:37

Would it be enough to scale the x and y components of the fetched normal with the lengths of T and B and then renormalize?

I wonder if anyone has ever implemented it “the right way”. Do you know if there’s any demo showing the differences between the various techniques? I’m really interested to see how much difference it would make…

SigKILL 101 Jan 14, 2006 at 20:12

@skynet

Would it be enough to scale the x and y components of the fetched normal with the lengths of T and B and then renormalize? I wonder if anyone has ever implemented it “the right way”. Do you know if there’s any demo showing the differences between the various techniques? I’m really interested to see how much difference it would make…

Simply scaling by the x and y components is not enough. If the basis is orthogonal you should use the inverse-transpose; however, I’m not sure this applies to non-orthogonal bases too (it should be OK to check, but it’s kind of an awkward calculation; I might do it this Sunday).

I’ve googled for derivations of the tangent basis but can’t find any satisfactory reasoning (some try, but the details are similar to those of ancient Egyptian mathematicians, who would justify certain things with something like ‘because this is the truth’). I can’t find anyone doing it ‘the right way’. This is not strange at all, since trying to do it the right way obviously opens a can of worms. The best thing to do is probably to make sure your artists don’t do anything funny with the normal maps, or to generate them accurately from high-detail models (in the latter case you can probably use an orthonormal tangent basis and get away with it).

-si

Axel 101 Jan 14, 2006 at 21:05

I still don’t get why the tangent space vectors don’t need to be normalized. I always thought the tangent space setup is a simple change of basis.

Reedbeta 167 Jan 14, 2006 at 23:58

It is. But bases of R^3 don’t need to be orthogonal (that is, orthogonally related to the ‘standard’ basis). It’s perfectly admissible to work in oblique coordinate systems. You just have to be aware of the issues this causes, i.e. different transformations for normal vs. tangent vectors, and the need to use different inner products in different spaces.

Axel 101 Jan 15, 2006 at 00:42

I understand now that they don’t need to be orthogonal, but does that mean that the vectors also don’t need to be unit-length?

Reedbeta 167 Jan 15, 2006 at 03:47

Correct. I should have said bases of R^3 don’t need to be orthonormal either.

Axel 101 Jan 15, 2006 at 11:23

Then I think I need to be a bit more practical.

I average face normals by adding the non-normalized cross product to the vertices (this way the face area is taken into account, as you likely know), and after that I renormalize all vertex normals.
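
Concretely, something like this (a sketch with a toy Vec3 and indexed triangles):

    #include <cmath>
    #include <vector>

    struct Vec3 { float x = 0, y = 0, z = 0; };

    // Accumulate the *unnormalized* cross product per face (its length is
    // twice the face area, which gives the area weighting), then normalize
    // once at the end.
    void accumulateNormals(const std::vector<Vec3>& pos,
                           const std::vector<int>& tris,   // 3 indices per face
                           std::vector<Vec3>& normals)
    {
        normals.assign(pos.size(), Vec3{});
        for (size_t f = 0; f + 2 < tris.size(); f += 3) {
            const Vec3 &a = pos[tris[f]], &b = pos[tris[f+1]], &c = pos[tris[f+2]];
            Vec3 e1 = { b.x-a.x, b.y-a.y, b.z-a.z };
            Vec3 e2 = { c.x-a.x, c.y-a.y, c.z-a.z };
            Vec3 n  = { e1.y*e2.z - e1.z*e2.y,             // cross(e1, e2)
                        e1.z*e2.x - e1.x*e2.z,
                        e1.x*e2.y - e1.y*e2.x };
            for (int k = 0; k < 3; ++k) {
                Vec3& v = normals[tris[f+k]];
                v.x += n.x; v.y += n.y; v.z += n.z;
            }
        }
        for (Vec3& n : normals) {
            float l = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
            if (l > 0) { n.x /= l; n.y /= l; n.z /= l; }
        }
    }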

I thought I could do something similar with the binormal and tangent…

Reedbeta 167 Jan 15, 2006 at 11:41

That works fine for normals (in fact there’s an automatic weighting by area, which is rather nice). However, when you’re using normal mapping the geometric normal is less important, as it doesn’t even enter into the lighting (except sometimes as a self-shadowing correction). The computation for tangents and binormals is based on the texture coordinates of the adjacent faces, and the results can be averaged directly. Note that if there’s a seam in the texture coordinates (i.e. they aren’t continuous around a vertex) then there should also be a corresponding seam in the tangents/binormals, as mentioned previously.

Axel 101 Jan 15, 2006 at 12:55

So I can’t simply sum up the tangents and binormals as well and normalize them afterwards?
Could I keep track of how many faces affected a vertex and divide the vectors by that count?
@Reedbeta

Note that if there’s a seam in the texture coordinates (i.e. they aren’t continuous around a vertex) then there should also be a corresponding seam in the tangents/binormals, as mentioned previously.

That’s something I understood before posting here :)

Reedbeta 167 Jan 15, 2006 at 20:10

@Axel

Could I keep track of how many faces affected a vertex and divide the vectors by that count?

That’s what “average” means ;)

Axel 101 Jan 16, 2006 at 02:25

So I can normalize the normals and average the tangent vectors and everything will be fine?

Reedbeta 167 Jan 16, 2006 at 02:55

Yeah, should be.

Altair 101 Jan 16, 2006 at 03:33

@Axel

So I can’t simply sum up the tangents and binormals as well and normalize them afterwards?

That’s what I do and it works fine (:

Anyway, note that taking the cross product of edges for the normal and summing them up doesn’t give you the correct result if you want to match what 3DS Max does. In addition, you have to weight each face normal by the opening angle between the edges attached to the vertex. You are welcome (:
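
That corner weight might be computed like this (a sketch; the helper names are mine):

    #include <cmath>

    struct Vec3 { float x, y, z; };
    static Vec3  sub(Vec3 a, Vec3 b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
    static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3  norm(Vec3 a) { float l = std::sqrt(dot(a,a)); return { a.x/l, a.y/l, a.z/l }; }

    // Opening angle between the two edges meeting at 'corner'.  Accumulate
    // weight * faceNormal per vertex instead of the raw cross product to
    // approximate 3DS Max's behavior.
    float cornerAngleWeight(Vec3 corner, Vec3 next, Vec3 prev)
    {
        Vec3  e1 = norm(sub(next, corner));
        Vec3  e2 = norm(sub(prev, corner));
        float c  = dot(e1, e2);
        if (c >  1.0f) c =  1.0f;     // clamp against rounding error
        if (c < -1.0f) c = -1.0f;
        return std::acos(c);
    }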

Cheers, Altair

SigKILL 101 Jan 16, 2006 at 07:48

Well, I’ve figured out how you should do the ‘correct’ transformation from texture space to object space. The correct thing to do seems to be to calculate a normal transformation per vertex based upon the tangent basis vectors. When these are non-orthogonal you should make sure that the normals are still normal to the same surface; this is easily done with two cross products, and you get a basis for the ‘normal space’. Of course you need the correct scaling too, but this is induced from the tangent basis, and should be easy to figure out for anyone who understands the inverse-transpose thing.
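
A sketch of those two cross products: for A with columns (T, B, N), the inverse-transpose has columns (B×N, N×T, T×B)/det(A), which carries a sampled normal out of texture space (names are illustrative):

    struct Vec3 { float x, y, z; };
    static Vec3  cross(Vec3 a, Vec3 b)
    { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
    static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Apply (A^-1)^T, with A = [T B N] as columns, to a sampled normal n,
    // carrying it from texture space to object space under a non-orthogonal
    // tangent basis.
    Vec3 sampledNormalToObject(Vec3 T, Vec3 B, Vec3 N, Vec3 n)
    {
        Vec3  c0  = cross(B, N), c1 = cross(N, T), c2 = cross(T, B);
        float inv = 1.0f / dot(T, c0);      // 1/det(A); assumes a non-degenerate basis
        return { (c0.x*n.x + c1.x*n.y + c2.x*n.z) * inv,
                 (c0.y*n.x + c1.y*n.y + c2.y*n.z) * inv,
                 (c0.z*n.x + c1.z*n.y + c2.z*n.z) * inv };
    }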

If you want really correct bump mapping you should probably do a SLERP instead of a LERP. If you can spare some pixel-processing power you can pass logarithms of quaternions in the color registers and compute the exponential in the pixel shader/fragment program. This gives you per-pixel SLERP.
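
The log/exp trick might look like this (a sketch for unit quaternions; lerping the logs and exponentiating only approximates SLERP in general, but that’s the idea):

    #include <cmath>

    struct Quat { float w, x, y, z; };   // assumed unit length

    // log of a unit quaternion: a pure (w = 0) quaternion whose vector
    // part is (half-angle) * axis.
    Quat qlog(Quat q)
    {
        float s = std::sqrt(q.x*q.x + q.y*q.y + q.z*q.z);
        float a = std::atan2(s, q.w);                  // half-angle
        float k = (s > 1e-6f) ? a / s : 0.0f;
        return { 0.0f, q.x*k, q.y*k, q.z*k };
    }

    // exp of a pure quaternion: the inverse of qlog.
    Quat qexp(Quat q)                                  // expects q.w == 0
    {
        float a = std::sqrt(q.x*q.x + q.y*q.y + q.z*q.z);
        float k = (a > 1e-6f) ? std::sin(a) / a : 1.0f;
        return { std::cos(a), q.x*k, q.y*k, q.z*k };
    }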

I might write this up in TeX and make a small demo. Don’t count on it though, since I hate computers.

-si

EDIT: I should probably mention that this method would result in heavier pixel shaders, since you must transform the normals to object space for every pixel. Transforming the light vector(s) and eye vector to texture space per vertex and interpolating those is a lot cheaper. The problem is of course that I don’t see how this could be a linear transformation except in trivial cases (remember: they should be angle preserving).

Reedbeta 167 Jan 16, 2006 at 09:09

@SigKILL

I might write this up in TeX and make a small demo. Don’t count on it though, since I hate computers.

Please do! If you make it happen, we’ll post it as an article on DevMaster.
@SigKILL

Transforming the light vector(s) and eye vector to texture space per vertex and interpolating those is a lot cheaper. The problem is of course that I don’t see how this could be a linear transformation except in trivial cases (remember: they should be angle preserving).

Wouldn’t this transformation simply be the differential of the continuous texture mapping at that vertex? Of course, they would be linearly interpolated across the surface, which wouldn’t be correct but would probably be “good enough” in most cases.

SigKILL 101 Jan 16, 2006 at 09:56

@Reedbeta

Wouldn’t this transformation simply be the differential of the continuous texture mapping at that vertex? Of course, they would be linearly interpolated across the surface, which wouldn’t be correct but would probably be “good enough” in most cases.

Well, this is what makes it confusing. The TBN approach (the non-orthonormalized way) does exactly this, and it works in tangent space. The problem is that when we add another dimension to the tangent space (and we get what I try to call texture space) this breaks. It is kind of obvious, since the normal doesn’t depend on the texture mapping. This is why I suggested doing this in object space, where we already have a three-dimensional space, and we have a linear transformation (defined by the texture mapping and the normal) from texture space. This transformation is not angle preserving, but in a ‘correct’ way (I think of the problem as if I were given a displacement map and a normal map, and we want correct results for different mappings). The inverse does not give a ‘correct’ mapping for light/eye vectors, though.
Another solution might be to do some tricks in a pixel shader (like a different inner product, as you suggested).
I have not thought about the case where the artist wants to ‘scale’ the displacement map yet, but this should be easily fixable in the object-space lighting approach (it is just a different 3x3 matrix).

-si

skynet 101 Jan 16, 2006 at 10:19

@SigKILL

This is easy to see if you e.g. have a slope at 45 deg. in texture space: when ‘stretched’ onto the surface the angle will of course be less than 45 deg. (or more than 45 deg. if we have some ‘shrink’). So the thought is that by having a non-normalized tangent basis we want to preserve the angle and inner product from object space when transforming to texture space.

I have thought about this, and I think it is a wrong assumption. You imply that scaling the normal map (in the x, y texture-space directions) would not scale the bumps (z direction), so the normals get flattened when the normal map gets stretched bigger. But this is not what people expect to see.
When we apply a stone texture to a little pebble, the normal map causes little bumps to appear on the stone. Now apply the same normal map to a big rock: the former little bumps should become bigger and deeper, not flatter. And this is achieved by just preserving the direction of the fetched normals.

Axel 101 Jan 17, 2006 at 22:05

I also think that you should normalize the tangent vectors, because stretching the texture map does not stretch the normals.

_oisyn 101 Jan 18, 2006 at 01:04

Sure it does; the slope represented by the normal map becomes shallower as you stretch the texture. Think about it: imagine a mountain; non-uniformly scaling that mountain in anything but the up direction would change the steepness of the mountain.

Reedbeta 167 Jan 18, 2006 at 02:34

Yes, but as SigKILL mentioned, this is not the ‘expected’ behavior. When you apply a normal map to a larger polygon, you expect it to look the same as when you apply it to a small one, so the bump height would indeed be scaled. However, it’s debatable whether this really ‘should’ be the expected behavior (I would agree with .oisyn that it shouldn’t be) :)

_oisyn 101 Jan 18, 2006 at 10:40

Good point; you’ll have to scale the vectors by the same amount. For example, if you divide the tangent vectors by the length of the shortest one, a uniform scale won’t change the slope, but stretches will.

Axel 101 Jan 18, 2006 at 17:41

@.oisyn

Sure it does; the slope represented by the normal map becomes shallower as you stretch the texture.

No!!! It does not. When you stretch a normal map it’s like stretching the surface in both height and width, because the normals are not stretched at all.

The “color” is still the same. A normal has the same angle at a given texel, independent of the texcoords.

Take a look at this:
[image: normalexample.png]

The normal map on the right is scaled 2x, but the fetched normals still point in the same direction, so the surface they describe (green) is twice as high as well (the integral of the normal map).

I don’t think that a normal map should describe a shallower surface if it’s scaled larger.

Reedbeta 167 Jan 18, 2006 at 18:28

So this is why some people tell you to normalize your tangent basis. If you want the behavior Axel describes, you just fetch normals based on the texture coordinates, transform the light and eye vectors into an orthonormal tangent space, and the magic happens. This works fine for bump mapping… but causes problems in parallax mapping! (And relief mapping, parallax occlusion mapping, and all similar techniques….) This is because parallax mapping uses the projected eye vector to step through the texture coordinates, so if the tangent-space x, y don’t have the same scale as the s, t texture coordinates, you will get an incorrect result. So it seems that one must use a non-orthonormal tangent basis and apply some kind of scaling to the normals in order to do parallax/bump mapping on scaled (especially non-uniformly scaled) polygons.
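
For reference, here’s the classic parallax step that breaks when the basis scale is wrong (a sketch; ‘scale’ and ‘bias’ are the usual artist-tuned parameters, and eyeTS is the eye vector expressed in tangent space):

    // Step the texture coordinates along the eye vector projected into the
    // tangent plane.  If the tangent basis was normalized away from the
    // real (u, v) scale, eyeTS is in the wrong units and the offset is
    // wrong, which is the problem described above.
    void parallaxOffset(float height,             // sampled from the height map
                        const float eyeTS[3],     // eye vector in tangent space
                        float scale, float bias,
                        float uv[2])              // texcoords, adjusted in place
    {
        float h = height * scale + bias;
        uv[0] += h * eyeTS[0] / eyeTS[2];
        uv[1] += h * eyeTS[1] / eyeTS[2];
    }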

_oisyn 101 Jan 18, 2006 at 22:57

@Axel

No!!! It does not.

Yes!!! It does. :)
The example you give represents a 1-dimensional texture. You cannot non-uniformly scale a 1D texture, as it only has one dimension. Imagine the same mountain, but now in 3D (so the associated normal map is 2D), and then scale the terrain only in the x-direction. What do you get? Less steep slopes in the x-direction. You cannot scale the normal (the up direction) accordingly, as the y-direction didn’t change either. Only if you scale both x and y by the same factor is it debatable whether you want to stretch the normals or not, but I’m not talking about that case. I’m talking about texture stretching, where a square rendered on the screen doesn’t have a 1:1 texture mapping.

Axel 101 Jan 18, 2006 at 23:46

Let me think about this for a while ;)
Most probably you are right, and I have a wrong picture of the bump-mapping thing.

monjardin 102 Jan 19, 2006 at 14:51

Do you mean something like this?
[image: m806fr.png]

_oisyn 101 Jan 19, 2006 at 15:29

Exactly :)