How do the commercial renderers perform antialiasing in one pass only? I
see them shooting some photons (looks like big blocks), then blur the
photons, then antialias! I always thought you had to render your image
4x+ the actual size!?
I’m not quite sure what you’re describing, but the usual way to do
antialiasing in a raytracing-based renderer is indeed to fire multiple
rays per pixel (distributed over the pixel area), and filter.
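Roughly, the brute-force version of that looks something like the sketch below. Color, trace_ray(), rand01() and so on are just stand-ins for whatever your renderer already has, not any particular API:

#include <cstdlib>

// Stand-in types/functions -- substitute your renderer's own.
struct Color { float r, g, b; };
Color trace_ray(float u, float v);   // assumed: trace a primary ray through screen point (u, v)
float rand01() { return std::rand() / (float)RAND_MAX; }

// One pixel, n x n jittered sub-pixel samples, averaged (box filter).
Color render_pixel(int px, int py, int n, int width, int height)
{
    Color sum = { 0, 0, 0 };
    for (int sy = 0; sy < n; ++sy)
        for (int sx = 0; sx < n; ++sx)
        {
            // Jitter each sample within its sub-pixel cell
            float u = (px + (sx + rand01()) / n) / width;
            float v = (py + (sy + rand01()) / n) / height;
            Color c = trace_ray(u, v);
            sum.r += c.r; sum.g += c.g; sum.b += c.b;
        }
    float inv = 1.0f / (n * n);
    sum.r *= inv; sum.g *= inv; sum.b *= inv;
    return sum;
}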
A common optimization is adaptive supersampling, where the number of
rays is adjusted for each pixel based on some error metric, e.g. local
contrast. Pixels on or near geometry or shadow edges will get more
samples, but pixels with nothing too interesting going on may just get
one sample.
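A crude version of that idea might look like this (again, trace_pixel_sample() and Color are placeholders, and the luminance-range contrast test is just one possible error metric):

#include <algorithm>
#include <vector>

Color trace_pixel_sample(int px, int py);   // assumed: one jittered sample inside the pixel

// Adaptive sampling for one pixel: start with a few samples and keep
// adding more only while they disagree too much.
Color render_pixel_adaptive(int px, int py)
{
    const int   base_samples = 4;
    const int   max_samples  = 64;
    const float threshold    = 0.05f;   // contrast threshold, tune to taste

    std::vector<Color> samples;
    for (int i = 0; i < base_samples; ++i)
        samples.push_back(trace_pixel_sample(px, py));

    while ((int)samples.size() < max_samples)
    {
        // Measure local contrast as the spread in sample luminance
        float lo = 1e30f, hi = -1e30f;
        for (const Color& c : samples)
        {
            float lum = 0.2126f * c.r + 0.7152f * c.g + 0.0722f * c.b;
            lo = std::min(lo, lum);
            hi = std::max(hi, lum);
        }
        if (hi - lo < threshold)
            break;   // smooth pixel, no edge here -- stop early
        samples.push_back(trace_pixel_sample(px, py));
    }

    Color sum = { 0, 0, 0 };
    for (const Color& c : samples)
    {
        sum.r += c.r; sum.g += c.g; sum.b += c.b;
    }
    float inv = 1.0f / samples.size();
    sum.r *= inv; sum.g *= inv; sum.b *= inv;
    return sum;
}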
I see, so they only scan the edges and re-render them using more
samples. But what about textures? If only one sample is used, wouldn’t
the textures also look bad?
Not necessarily - you can do nice texture filtering even with just one
ray, by using mipmaps and anisotropic filtering, etc. in much the same
way that GPUs do. This is faster than shooting multiple rays because you
don’t need to do so many intersection tests, and you can precompute at
least part of the texture filtering (in the form of mipmaps).
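To give a rough idea of the mipmap side of it: you estimate how big one pixel's footprint is in texture space (e.g. using ray differentials or finite differences between neighboring rays) and take log2 of that to pick a mip level. A sketch with made-up parameter names:

#include <algorithm>
#include <cmath>

// Pick a mip level from the texture-space footprint of one pixel.
// du_dx, dv_dx, du_dy, dv_dy are how much the UV coordinates change
// across one pixel in screen x and y (e.g. from ray differentials);
// tex_width/tex_height are the base texture dimensions.
float select_mip_level(float du_dx, float dv_dx, float du_dy, float dv_dy,
                       int tex_width, int tex_height)
{
    // Footprint length in texels along the screen x and y directions
    float len_x = std::sqrt(du_dx * du_dx * tex_width * tex_width +
                            dv_dx * dv_dx * tex_height * tex_height);
    float len_y = std::sqrt(du_dy * du_dy * tex_width * tex_width +
                            dv_dy * dv_dy * tex_height * tex_height);

    // Isotropic (trilinear-style) choice: use the larger axis;
    // anisotropic filtering would instead take several samples along it.
    float footprint = std::max(len_x, len_y);

    // log2 of the footprint gives the mip level; clamp at the base level
    return std::max(0.0f, std::log2(footprint));
}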
oh ok, that makes sense, thanks Reedbeta.