I’m trying my hand at writing a software renderer. I’ve read the Advanced
Rasterization article by Nick and it works wonderfully. However, in an
attempt to learn the fixed-point representation of floating-point
numbers, I took the algorithm he presents before he adds the fill
convention and fixed-point math, and wrote the fixed-point code myself.
My understanding is that the fixed-point representation simply gives
more precision (and thus more accuracy), and that this is why it’s
effective. However, I am not seeing a difference. (As a note: I did
implement the fill-convention insight Nick outlines, so that is not
lacking and can’t be the reason my code doesn’t work properly.)
I went back to the article and looked for differences between the
implementations. I believe I’m missing an insight, because some parts of
his code don’t make sense to me, and I can only assume those concepts
are the reason my code doesn’t work correctly.
The particular areas of his implementation that don’t make sense to me
are these:
The necessity of the FDX and FDY variables. Supposedly they are the
fixed-point representations of the deltas (hence the 4-bit shift), but
aren’t the DX variables already in 28.4 fixed-point format, since they
are computed from the X and Y variables, which are themselves in 28.4
fixed point?
The addition of 0xF to the min and max variables in their conversion
back from 28.4 fixed-point format.
Why convert the min and max variables back to normal integers at all?
Would it not be equally valid to leave them in 28.4 format, skip the
4-bit shifts when computing the CY variables, and increment the x and y
counters by (1 << 4) in the two for loops?
Thanks ahead of time for any light you can shed on my confusion!