One of the "Graphics Gems" books has a 2D version of a length approximation. I implemented it in 3D. The isosurface looks like an inflated cube with each face cut in a + shape.

```
double cVector::FastLength() const {
    // The original idea for this approximation comes from one of the
    // "Graphics Gems" books, where I found a 2D version of the same thing.
    // It's basically the Manhattan distance, with the two smaller
    // components scaled down.
    double a = std::fabs(x);
    double b = std::fabs(y);
    double c = std::fabs(z);

    // Move the largest component into a.
    if (b > a && b > c) {
        std::swap(a, b);
    } else if (c > a && c > b) {
        std::swap(a, c);
    }

    // I found 0.366 to be about optimal. There's probably no point in
    // fine-tuning it further, since the error is still around 20%.
    // (For an integer version of the same function, use a one-step
    // bitshift instead: a + ((b + c) >> 1).)
    return a + (b + c) * 0.366;
}
```

When I tested it, it was about 20% faster, though I did no optimizations whatsoever.

I'm making a software 3D engine just for fun, and I've reached a point where I want to do some optimizations before I add new features. I noticed an extreme drop in framerate every time I add a new light source, so I figured my light calculations would be an obvious candidate for optimization. One of the more expensive operations in my light calculations is the two per-vertex divisions and square roots needed to normalize the surface-to-light and halfway vectors for diffuse and specular lighting. Now on to my question:

Is there any fast way to approximate a normalized vector? I think I read somewhere that you can approximate the length of a vector so you don't have to do a square root.

Any useful information on this would be appreciated :)