I know what you mean. Textures look great, but when you come within a few inches they look like crap. I'm looking into the same problem right now, and if I find anything I'll let you know.


Thanks guys,

Interestingly enough, I found the problem with my perspective rasterizer before seeing your posts. I fixed it by interpolating my z in 1/z space, lol.

Everything is peachy now ^^. The only thing that scares me now is the divides; I'm working on ARM and it frowns at divides :P. So I use a 1024-entry reciprocal table with a function that takes a 16.16 fixed-point number and splits it into integer and fractional parts. I use the integer part as a base lookup into my table, then use the fractional part to lerp from the base entry to the base + 1 entry. Just a simple lerp to get a reciprocal with less than 0.4% error. It doesn't work on my 3D-to-2D transform though; well, it sorta works anyway :).
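In case it helps anyone, here is a rough sketch of the kind of reciprocal table described above, in plain C with floats converted to 16.16 by hand. All the names are mine, not from my real source; it assumes inputs in the range [1.0, 1024.0), and note the lerp error grows for small integer parts where 1/x curves sharply, so the sub-0.4% figure only holds for larger inputs.

```c
#include <stdint.h>

/* 1024-entry reciprocal table, entries in 16.16 fixed point.
   One extra entry so that base + 1 is always a valid index. */
#define TABLE_SIZE 1024
static uint32_t recipTable[TABLE_SIZE + 1];

void initRecipTable(void)
{
    recipTable[0] = 0; /* unused: inputs are assumed >= 1.0 */
    for (int i = 1; i <= TABLE_SIZE; ++i)
        recipTable[i] = (1u << 16) / (uint32_t)i; /* 1/i in 16.16 */
}

/* x is 16.16 fixed point, 1.0 <= x < 1024.0; returns ~1/x in 16.16 */
uint32_t fixedRecip(uint32_t x)
{
    uint32_t base = x >> 16;     /* integer part: table index */
    uint32_t frac = x & 0xFFFFu; /* fractional part, 0..65535 */
    uint32_t a = recipTable[base];
    uint32_t b = recipTable[base + 1];
    /* lerp from a toward b; 1/x is decreasing, so b <= a */
    return a - (uint32_t)(((uint64_t)(a - b) * frac) >> 16);
}
```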

I assume you already know how to render a triangle, but in case not:
http://www.exaflop.org/docs/fatmap/
or the one Nick posted is a good tutorial.

Assuming you have your edge interpolants (e.g. delta x divided by delta y, etc.):

It's pretty much this: take your start and end u,v and divide them by their corresponding z's. Take those same z's you used to divide the texture coordinates and get their reciprocals, 1/z. Then build your edge interpolants as normal. During the scanline raster phase, you're going to have to divide your interpolated u,v by the current 1/z at each pixel. Example:

```
#include <math.h>

typedef struct
{
    float x,y,z,w;
    float u,v; // texture coordinates
} Vertex_t;

typedef struct
{
    float x,u,v,z;
    float dudy,dvdy,dzdy;
} Edge_t;

// assuming you know how to find/build edges
void BuildEdge( Edge_t* edge, Vertex_t* vertices, float yDelta )
{
    float u_end,v_end,z_end;
    // my end points, pre-divided by z
    u_end = vertices[1].u / vertices[1].z;
    v_end = vertices[1].v / vertices[1].z;
    z_end = 1.0f / vertices[1].z;
    edge->x = vertices[0].x;
    edge->u = vertices[0].u / vertices[0].z;
    edge->v = vertices[0].v / vertices[0].z;
    edge->z = 1.0f / vertices[0].z; // we interpolate in 1/z space
    // get my deltas
    edge->dudy = (u_end - edge->u) / yDelta;
    edge->dvdy = (v_end - edge->v) / yDelta;
    edge->dzdy = (z_end - edge->z) / yDelta;
}

// do some interpolating of the edges down a triangle
// ..........
// ............

// assumed to exist elsewhere in your renderer
extern unsigned int* frameBuffer;
extern unsigned int* someTexture;
extern int texWidth, screenWidth;

// time to render
void renderScan( Edge_t* edgeLeft, Edge_t* edgeRight, int yPos )
{
    // calculate my deltas
    float delta_x = edgeRight->x - edgeLeft->x;
    float delta_u = edgeRight->u - edgeLeft->u;
    float delta_v = edgeRight->v - edgeLeft->v;
    float delta_z = edgeRight->z - edgeLeft->z;
    // get my scan line interpolants
    float dudx = delta_u / delta_x;
    float dvdx = delta_v / delta_x;
    float dzdx = delta_z / delta_x;
    // width of scan etc..
    int width = (int)delta_x;
    int x = (int)ceilf( edgeLeft->x );
    if( width <= 0 )
        return; // nothing to draw
    // my starting values
    float u = edgeLeft->u;
    float v = edgeLeft->v;
    float z = edgeLeft->z;
    do
    {
        // convert our texture coordinates back: u' = u/z, so u = u' / (1/z)
        int _u = (int)(u / z);
        int _v = (int)(v / z);
        frameBuffer[ x + screenWidth * yPos ] = someTexture[ _u + _v * texWidth ];
        ++x;
        // interpolate to next pixel
        u += dudx;
        v += dvdx;
        z += dzdx;
    } while( --width );
}
```

This is just an example; I ripped this code out in 10 minutes ><. But this is pretty much how it works for the most part: it's all about the z. Or you can use w, if you are transforming your models/geometry to perspective space and not doing ortho.

Anyway, I could e-mail you my source, but take note that it is in C and in 16.16 fixed point.

Just to throw in my two cents:

If you have a non-perspective-correct texture mapper running, the way to a perspective-correct one is easy: divide everything by z prior to building the gradients, interpolate an additional 1/z per pixel, and divide it out per pixel. What you take for 1/z isn't even critical. You can use the w from your transformation, or 1/z, or 65536/z if you're into fixed-point math. The important thing is that there is some perspective part going on. The wonderful world of math will take care of the rest.

Interpolate all your pre-divided quantities over screen space as you did all the time with u,v before, but divide them by the interpolated "per-pixel z". This is all you need to tilt things into the depth. Don't think too much about performance in the first place. Get it working first and care about speed later.
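That recipe boils down to very little code. A minimal sketch with made-up endpoint values and float math (the function name and numbers are mine, purely for illustration): interpolate u/z and 1/z linearly across a span, then divide per pixel to recover the true u.

```c
/* Perspective-correct u at parameter t along a span, given the
   endpoints' texture coordinates u0,u1 and view-space depths z0,z1.
   Pre-divide by z, interpolate linearly, divide out per pixel. */
float perspectiveU(float u0, float z0, float u1, float z1, float t)
{
    float uoz = (u0 / z0) * (1.0f - t) + (u1 / z1) * t;   /* interpolated u/z */
    float ooz = (1.0f / z0) * (1.0f - t) + (1.0f / z1) * t; /* interpolated 1/z */
    return uoz / ooz; /* recover true u at this pixel */
}
```

For example, halfway across a span from (u=0, z=1) to (u=1, z=3), plain affine interpolation gives u = 0.5, while this gives u = 0.25: the far half of the span compresses, which is exactly the tilt into depth.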

Kackurot's post is nice in this way. His code shows the simplest way to get things done, but Nyad, don't take his code literally and copy it.

Gradient setups like delta_u_delta_x and delta_u_delta_y should still be derived once per polygon, not once per line. They are still constant over a polygon, no matter whether you're doing perspective texture mapping or not.
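To make that concrete, here is one way to sketch the per-polygon setup (names and structs are mine, not from any code in this thread): for any attribute a that varies linearly in screen space, such as u/z, v/z, or 1/z, the gradients da/dx and da/dy fall out of the triangle's plane equation and are the same for every pixel of the polygon.

```c
/* Per-polygon gradients for one linearly varying attribute a.
   Computed once from the three screen-space vertices; reused for
   every scanline and pixel of the triangle. */
typedef struct { float x, y, a; } GradVert_t;
typedef struct { float dadx, dady; } Gradient_t;

Gradient_t computeGradient(const GradVert_t v[3])
{
    float dx1 = v[1].x - v[0].x, dy1 = v[1].y - v[0].y;
    float dx2 = v[2].x - v[0].x, dy2 = v[2].y - v[0].y;
    float da1 = v[1].a - v[0].a, da2 = v[2].a - v[0].a;
    float denom = dx1 * dy2 - dx2 * dy1; /* twice the signed triangle area */
    Gradient_t g;
    g.dadx = (da1 * dy2 - da2 * dy1) / denom;
    g.dady = (da2 * dx1 - da1 * dx2) / denom;
    return g;
}
```

With gradients in hand, each edge only needs a start value, and each scanline just steps by dadx; no per-line divides for the attribute deltas.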

It took me years to find that out, because I took example code too literally and never cared to do the math myself (why should I, it just worked).

Read the fatmap.txt thing until you understand how the gradient setup works, then apply the 1/z or w thing and you're done.


Anyone know a good tutorial on perspective texture mapping? I mean the math behind it. I know the algorithm revolves around the transformed z coordinate: u' = u/z, v' = v/z, etc. Of course I do this, and it works well until I get really close, then the texture starts to distort. Sigh. If anyone has some good math tutorials or an explanation of how things work with software rasterizers, post a link :)

or whatnot. Thanks