# Calculating mesh tangent space vectors


### #21Reedbeta

DevMaster Staff

• 5307 posts
• LocationBellevue, WA

Posted 15 January 2006 - 08:10 PM

Axel said:

Could I keep track of how many faces affect a vertex and divide the vectors by that value?

That's what "average" means.
reedbeta.com - developer blog, OpenGL demos, and other projects

### #22Axel

Valued Member

• Members
• 119 posts

Posted 16 January 2006 - 02:25 AM

So I can normalize the normals and average the tangent vectors and everything will be fine?

### #23Reedbeta

DevMaster Staff

• 5307 posts
• LocationBellevue, WA

Posted 16 January 2006 - 02:55 AM

Yeah, should be.
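
A minimal sketch of the procedure being agreed on here (hypothetical `Vec3` type and `averageTangent` name; the per-face tangents are assumed to be computed already). Summing the per-face vectors at a shared vertex and normalizing the sum gives the same direction as a true average, since normalization cancels the division by the face count:

```cpp
#include <cmath>
#include <cstddef>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Sum the tangents of every face that touches this vertex, then normalize.
// Dividing the sum by the face count first is harmless but redundant:
// normalization removes any uniform scale anyway.
Vec3 averageTangent(const Vec3* faceTangents, std::size_t count) {
    Vec3 sum = {0.0f, 0.0f, 0.0f};
    for (std::size_t i = 0; i < count; ++i)
        sum = add(sum, faceTangents[i]);
    return normalize(sum);
}
```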

### #24Altair

Valued Member

• Members
• 151 posts

Posted 16 January 2006 - 03:33 AM

Axel said:

So I cannot simply sum up the tangents and binormals as well and normalize them afterwards?

That's what I do and it works fine (:

Anyway, note that taking the cross product of two edges for the face normal and summing those up doesn't give you the correct result if you want to match what 3DS Max does. In addition, you have to weight each face normal by the opening angle between the edges attached to the vertex. You are welcome (:

Cheers, Altair
"Only two things are infinite, the universe and human stupidity, and I'm not sure about the former." - Albert Einstein

### #25SigKILL

Valued Member

• Members
• 200 posts

Posted 16 January 2006 - 07:48 AM

Well, I've figured out how you should do the 'correct' transformation from texture space to object space. The correct thing to do seems to be to calculate a normal transformation per vertex based on the tangent basis vectors. When these are non-orthogonal you should make sure that the normals are still normal to the same surface; this is easily done with two cross products, and you get a basis for the 'normal space'. Of course you need correct scaling too, but this is induced from the tangent basis, and should be easy to figure out for anyone who understands the inverse-transpose thingie.
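
The cross products mentioned here come from transforming normals with the inverse-transpose: for a matrix with columns T, B, N, the inverse-transpose has columns proportional to the cofactors B×N, N×T, T×B. A sketch under that reading (hypothetical `Vec3` type and `normalBasis` name):

```cpp
struct Vec3 { float x, y, z; };

static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Basis for transforming normals given a (possibly non-orthogonal)
// tangent basis {T, B, N}: the cofactor columns {B x N, N x T, T x B},
// i.e. the inverse-transpose up to a uniform scale. Normals mapped
// through this basis stay perpendicular to the surface even when
// T and B are skewed.
void normalBasis(Vec3 T, Vec3 B, Vec3 N, Vec3 out[3]) {
    out[0] = cross(B, N);
    out[1] = cross(N, T);
    out[2] = cross(T, B);
}
```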

If you want really correct bump mapping you should probably do a SLERP instead of a LERP. If you can spare some pixel-processing power you can pass the logarithms of quaternions in the color registers and compute the exponential in the pixel shader/fragment program. This gives you per-pixel SLERP.
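
The quaternion log/exp trick, sketched on the CPU (hypothetical `Quat` struct and function names). Linearly interpolating the logs and exponentiating the result approximates SLERP, and is exact when the rotations share an axis:

```cpp
#include <cmath>

struct Quat { float w, x, y, z; };

// log of a unit quaternion (cos t, sin t * axis) -> (0, t * axis)
Quat qlog(Quat q) {
    float t = std::acos(q.w);
    float s = std::sin(t);
    if (s < 1e-6f) return {0.0f, 0.0f, 0.0f, 0.0f};
    float k = t / s;
    return {0.0f, q.x * k, q.y * k, q.z * k};
}

// exp of a pure quaternion (0, v) -> (cos |v|, sin |v| * v/|v|)
Quat qexp(Quat q) {
    float t = std::sqrt(q.x * q.x + q.y * q.y + q.z * q.z);
    if (t < 1e-6f) return {1.0f, 0.0f, 0.0f, 0.0f};
    float k = std::sin(t) / t;
    return {std::cos(t), q.x * k, q.y * k, q.z * k};
}
```

Per vertex you would emit `qlog` of the tangent-frame quaternion into a color register; the rasterizer LERPs the logs, and the fragment program applies `qexp` to recover a unit quaternion per pixel.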

I might write this up in TeX and make a small demo. Don't count on this though, since I hate computers.

-si

EDIT: I should mention that this method would probably result in heavier pixel shaders, since you must transform the normals to object space for every pixel. Transforming the light vector(s) and eye vector to texture space per vertex and interpolating these is a lot cheaper. The problem is of course that I don't see how that could be a linear transformation except in trivial cases (remember: it should be angle preserving).

### #26Reedbeta

DevMaster Staff

• 5307 posts
• LocationBellevue, WA

Posted 16 January 2006 - 09:09 AM

SigKILL said:

I might write this up in TeX and make a small demo. Don't count on this though, since I hate computers.

Please do! If you make it happen, we'll post it as an article on DevMaster.

SigKILL said:

Transforming the light vector(s) and eye vector to texture space per vertex and interpolating these is a lot cheaper. The problem is of course that I don't see how that could be a linear transformation except in trivial cases (remember: it should be angle preserving).

Wouldn't this transformation simply be the differential of the continuous texture mapping at that vertex? Of course, they would be linearly interpolated across the surface, which wouldn't be correct but would probably be "good enough" in most cases.

### #27SigKILL

Valued Member

• Members
• 200 posts

Posted 16 January 2006 - 09:56 AM

Reedbeta said:

Wouldn't this transformation simply be the differential of the continuous texture mapping at that vertex? Of course, they would be linearly interpolated across the surface, which wouldn't be correct but would probably be "good enough" in most cases.

Well, this is what makes it confusing. The TBN approach (the non-orthonormalized way) does exactly this, and it works in tangent space. The problem is that when we add another dimension to the tangent space (giving what I try to call texture space), this breaks. That is kind of obvious, since the normal doesn't depend on the texture mapping. This is why I suggested working in object space, where we already have a three-dimensional space, and we have a linear transformation (defined by the texture mapping and the normal) from texture space. This transformation is not angle-preserving, but in a 'correct' way (I think of the problem as if I were given a displacement map and a normal map, and we want correct results for different mappings). The inverse does not give a 'correct' mapping for light/eye vectors, though.

Another solution might be to do some quirks in a pixel shader (like a different inner product, as you suggested).

I have not thought about the case where the artist wants to 'scale' the displacement map yet, but this should be easily fixable in the object-space lighting approach (it is just a different 3x3 matrix).

-si

### #28skynet

New Member

• Members
• 16 posts

Posted 16 January 2006 - 10:19 AM

SigKILL said:

This is easy to see if you, for example, have a slope at 45 degrees in texture space: when 'stretched' over the surface the angle will of course be less than 45 degrees (or more than 45 degrees if we have some 'shrink'). So the idea is that by having a non-normalized tangent basis we want to preserve the angle and inner product from object space when transforming to texture space.

I have thought about this, and I think it is a wrong assumption. You imply that scaling the normal map (in the x,y texture-space directions) would not scale the bumps (z direction), so that the normals get flattened as the normal map is stretched bigger. But this is not what people expect to see.

When we apply a stone texture to a little pebble, the normal map causes little bumps to appear on the stone. Now apply the same normal map to a big rock: the former little bumps should become bigger and deeper, not flatter. And this is achieved by simply preserving the direction of the fetched normals.

### #29Axel

Valued Member

• Members
• 119 posts

Posted 17 January 2006 - 10:05 PM

I also think that you should normalize the tangent vectors, because stretching the texture map does not stretch the normals.

### #30.oisyn

DevMaster Staff

• Moderators
• 1842 posts

Posted 18 January 2006 - 01:04 AM

Sure it does: the slope represented by the normal map becomes shallower as you stretch the texture. Think about it: imagine a mountain; nonuniformly scaling that mountain in anything but the up direction would change the steepness of its slopes.
-
Currently working on: the 3D engine for Tomb Raider.

### #31Reedbeta

DevMaster Staff

• 5307 posts
• LocationBellevue, WA

Posted 18 January 2006 - 02:34 AM

Yes, but as SigKILL mentioned, this is not the 'expected' behavior. When you apply a normal map to a larger polygon, you expect it to look the same as when you apply it to a small one, so the bump height would indeed be scaled. However, it's debatable whether this really 'should' be the expected behavior (I would agree with .oisyn that it shouldn't be).

### #32.oisyn

DevMaster Staff

• Moderators
• 1842 posts

Posted 18 January 2006 - 10:40 AM

Good point; you'll have to scale the vectors by the same amount. For example, if you divide both tangent vectors by the length of the shorter one, a uniform scale won't change the slope, but stretches will.
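
.oisyn's suggestion as a sketch (hypothetical `Vec3` type and function name): divide both tangent vectors by the length of the shorter one, so a uniform texture scale cancels out, while a nonuniform stretch still shows up as a relative scale between the two axes.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float length(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }
static Vec3 scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }

// Divide tangent and bitangent by the length of the shorter one.
// Under a uniform scale both lengths grow alike, so the result is
// unchanged; a stretch leaves one vector longer than the other.
void normalizeByShortest(Vec3& t, Vec3& b) {
    float s = std::min(length(t), length(b));
    t = scale(t, 1.0f / s);
    b = scale(b, 1.0f / s);
}
```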

### #33Axel

Valued Member

• Members
• 119 posts

Posted 18 January 2006 - 05:41 PM

.oisyn said:

Sure it does: the slope represented by the normal map becomes shallower as you stretch the texture.

No!!! It does not. When you stretch a normal map it's like stretching the surface in both height and width, because the normals themselves are not stretched at all.

The "color" is still the same: a normal has the same angle at a given texel, independent of the texcoords.

Take a look at this:

The normal map on the right is scaled 2x, but the fetched normals still point in the same direction, so the surface they describe (green) is twice as high as well (the integral of the normal map).

I don't think a normal map should describe a shallower surface when it's scaled larger.

### #34Reedbeta

DevMaster Staff

• 5307 posts
• LocationBellevue, WA

Posted 18 January 2006 - 06:28 PM

So this is why some people tell you to normalize your tangent basis. If you want the behavior Axel describes, you just fetch normals based on the texture coordinates, transform the light and eye vectors into an orthonormal tangent space, and the magic happens. This works fine for bump mapping... but it causes problems in parallax mapping! (And relief mapping, parallax occlusion mapping, and all similar techniques....) This is because parallax mapping uses the projected eye vector to step through the texture coordinates, so if the tangent-space x,y axes don't have the same scale as the s,t texture coordinates, you will get an incorrect result. So it seems that one must use a non-orthonormal tangent basis, and apply some kind of scaling to the normals, in order to do parallax/bump mapping on scaled (especially nonuniformly scaled) polygons.

### #35.oisyn

DevMaster Staff

• Moderators
• 1842 posts

Posted 18 January 2006 - 10:57 PM

Axel said:

No!!! It does not.

Yes!!! It does.

The example you give represents a one-dimensional texture. You cannot nonuniformly scale a 1D texture, as it only has one dimension. Imagine the same mountain, but now in 3D (so the associated normal map is 2D), and then scale the terrain only in the x direction. What do you get? Less steep slopes in the x direction. You cannot scale the normal (the up direction) accordingly, since the y direction didn't change either. Only if you scale both x and y by the same factor is it debatable whether you want to stretch the normals or not, but I'm not talking about that case. I'm talking about texture stretching, where a square rendered on the screen doesn't have a 1:1 texture mapping.

### #36Axel

Valued Member

• Members
• 119 posts

Posted 18 January 2006 - 11:46 PM

Most probably you are right, and I have a wrong picture of the whole bump-mapping thing.

### #37monjardin

Senior Member

• Members
• 1033 posts

Posted 19 January 2006 - 02:51 PM

Do you mean something like this?

monjardin's JwN Meter (1,2,3,4,5,6):
|----|----|----|----|----|----|----|----|----|----|
*

### #38.oisyn

DevMaster Staff

• Moderators
• 1842 posts

Posted 19 January 2006 - 03:29 PM

Exactly