Multisampling? Anisotropy?

Steven_Hansen 101 Sep 30, 2004 at 16:33

Ok… I know what multisampling is at a very high level.

The DirectX parameters, however, are throwing me a curve. I can't find any information on how the video cards actually perform the sampling internally, or any kind of decent documentation on what the parameters actually mean in terms of quality vs. performance.

E.g. is 2x sampling usually good enough, or should I default to the highest level of sampling if I am going to use sampling at all? Although we could make it user selectable, I doubt the user will generally understand this any better than I do at the moment.

What about the MultiSampleQuality parameter in the D3DPRESENT_PARAMETERS structure? The docs say that this value needs to be one less than the value returned by CheckDeviceMultiSampleType, but what does that number mean anyway? Is zero ok to use? The example antialiasing initialization code does not set the quality parameter.

What about anisotropy? It always seems to come up in the same discussion as antialiasing. Video cards often boast up to 8 or 16 levels of anisotropy, but what does that mean exactly? How do I specify those levels using DirectX? I noticed a SamplerState referring to Anisotropy, but it says MaxAnisotropy … is that all there is to it? Again, is the performance such that either off or full is a good default? What is the anisotropy algorithm anyway, and how does it differ from vanilla linear filtering?

I have other antialiasing questions, but I guess we’ll see how well this discussion takes off. I appreciate your help.

11 Replies


bladder 101 Oct 01, 2004 at 09:37

The first half of your post, asking how much sampling to use and when, is very hardware- and game-specific. The best thing to do would be to make it user selectable. You can just let the user know that increasing the sampling level will lessen the jaggies and lower the frame rate. They should be able to understand that, and they'd probably try a few different levels to see if they notice any improvement or degradation before choosing a setting they like. As for what to enable at the start, most games (can't say all, because I haven't played them all) just turn multisampling off by default and leave it as an option.

As for the rest of your post, I don’t know what exactly you read from the docs, but you may have missed a bit. Try looking at these pages:

  • DirectX Graphics -> Converting to DirectX 9.0

Check out the “Multisampling Quality Changes” section on that page.

  • DirectX Graphics -> Programming Guide -> Advanced Topics -> Antialiasing

You may want to read the Full-scene Antialiasing page (and the Motion Blur page, for fun :))

And you’ll also want to read the remarks on the page with the D3DMULTISAMPLE_TYPE enumeration details. And the remarks about the D3DRS_MULTISAMPLEMASK render state.

The above texts should give you a decent amount of info and a general idea.
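To make the type/quality relationship concrete, here's a minimal sketch of the kind of check those pages describe (untested; `d3d` is your IDirect3D9* and `pp` the D3DPRESENT_PARAMETERS you're filling in, so the names are just for illustration):

    DWORD qualityLevels = 0;
    if (SUCCEEDED(d3d->CheckDeviceMultiSampleType(
            D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
            D3DFMT_X8R8G8B8,                   // back buffer format
            TRUE,                              // windowed
            D3DMULTISAMPLE_4_SAMPLES, &qualityLevels)))
    {
        pp.MultiSampleType    = D3DMULTISAMPLE_4_SAMPLES;
        pp.MultiSampleQuality = 0;             // valid range is 0 .. qualityLevels - 1
        pp.SwapEffect         = D3DSWAPEFFECT_DISCARD; // multisampling requires DISCARD
    }

So yes, zero is always okay for the quality; anything up to one less than the returned count is legal, and what the intermediate levels actually mean (sample patterns and so on) is up to the driver.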

The difference between anisotropic and plain linear filtering is that anisotropic filtering takes the angle of primitives into account.

In the docs, you can read up at:

  • DirectX Graphics -> Programming Guide -> Direct3D Textures -> Anisotropic Texture Filtering

But you’d get a better explanation from here

The usage of anisotropic filtering is as follows. Check how many levels of anisotropy the card supports by using the D3DCAPS9 structure. Then you enable the filtering by setting D3DSAMP_MINFILTER (and D3DSAMP_MAGFILTER, if the card supports anisotropic magnification) to D3DTEXF_ANISOTROPIC. Then you set the level of anisotropy with the D3DSAMP_MAXANISOTROPY sampler state.
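In code, that comes down to something like this (a minimal sketch; `device` is your IDirect3DDevice9* and sampler stage 0 is just an example):

    D3DCAPS9 caps;
    device->GetDeviceCaps(&caps);  // caps.MaxAnisotropy = highest supported level

    // Anisotropic minification; magnification support varies per card
    device->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_ANISOTROPIC);
    device->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
    device->SetSamplerState(0, D3DSAMP_MIPFILTER, D3DTEXF_LINEAR);
    device->SetSamplerState(0, D3DSAMP_MAXANISOTROPY, caps.MaxAnisotropy);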

Nick 102 Oct 01, 2004 at 11:34

@Steven Hansen

The DirectX parameters, however, are throwing me a curve. I can't find any information on how the video cards actually perform the sampling internally, or any kind of decent documentation on what the parameters actually mean in terms of quality vs. performance.

There are many variants, but I'll start with explaining how 4x regular super-sampling works. If you're working in an 800x600 screen resolution, then it uses a 1600x1200 buffer internally. So every pixel on the screen corresponds to four pixels in the internal buffer. The complete scene is rendered in this high resolution, and then it is downsampled (shrunk) to the screen resolution by averaging the groups of four pixels together.
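That averaging step is nothing fancy. A toy sketch in plain C++ (one grey channel only; `big` and `screen` are illustrative names, not a D3D API):

    #include <vector>

    // Average each 2x2 block of a 1600x1200 buffer into one 800x600 pixel.
    void Downsample4x(const std::vector<unsigned char>& big,  // 1600x1200
                      std::vector<unsigned char>& screen)     // 800x600
    {
        for (int y = 0; y < 600; ++y)
            for (int x = 0; x < 800; ++x)
            {
                int sum = big[(2*y)   * 1600 + (2*x)] + big[(2*y)   * 1600 + (2*x+1)]
                        + big[(2*y+1) * 1600 + (2*x)] + big[(2*y+1) * 1600 + (2*x+1)];
                screen[y * 800 + x] = (unsigned char)(sum / 4);
            }
    }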

Multi-sampling is an optimization of this. Rendering in a 1600x1200 resolution requires four times more work than rendering in 800x600 mode, so there's a huge performance hit. What they do is sample textures only once per group of four pixels (corresponding to one pixel on the 800x600 screen), and do the lighting calculations also just once. It still renders the triangles in the high resolution, so especially for edges it knows which of the four pixels belong to the current polygon. So after the scene is rendered we again average the pixels of the big buffer together, so we get anti-aliasing at the edges. In other words, the edges are super-sampled, while the interior of the polygons is rendered as if it was in 800x600 resolution.

The only disadvantage of multi-sampling over super-sampling is that aliasing effects can occur inside the polygon. But mostly this is not as noticeable as jaggies, and we have anisotropic filtering…

What about anisotropy? It always seems to come up in the same discussion as antialiasing. Video cards often boast up to 8 or 16 levels of anisotropy, but what does that mean exactly? How do I specify those levels using DirectX? I noticed a SamplerState referring to Anisotropy, but it says MaxAnisotropy … is that all there is to it? Again, is the performance such that either off or full is a good default? What is the anisotropy algorithm anyway, and how does it differ from vanilla linear filtering?

When we render a square, say 256x256 pixels, and we use a square texture of the same size, it will look splendid. But if we tilt that square backward, it becomes more like a rectangle, wider than it is high. Let's say its dimensions are roughly 256x32. This means that the texture that we map onto it is compressed 8 times in the vertical direction. In other words, we would skip 7 out of every 8 lines. All that information is lost. It also causes flickering, aliasing, because when you shift the tilted square up and down, some horizontal lines will become visible and others disappear. To avoid the flickering, we use a 32x32 version of the texture, where blocks of 8x8 pixels of the original texture are nicely averaged together. This is also a loss of information, but it stops the flicker. This is what is done with regular mip-mapping. The biggest disadvantage is that we now have 32 pixels horizontally in the texture, mapped onto 256 pixels on the screen. So we're stretching it out, making it look blurry.

What we really wanted is to average 1x8 pixels together, so we'd get a 256x32 texture. But that doesn't work when we start rotating the square, and it also requires a lot of extra memory. The alternative is to sample the texture 8 times per pixel, in the direction in which the polygon is tilted. That's anisotropic filtering. It's called anisotropic because it doesn't matter how the square is rotated: it can take 8 samples in the texture in any direction required, and average them together.
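In toy code, the idea looks something like this (not how hardware actually implements it; `tex` is a single-channel texture and du/dv the major axis of the pixel's footprint in texture space, all names illustrative):

    #include <algorithm>

    // Take n samples along the direction of compression and average them.
    float AnisoSample(const float* tex, int w, int h,
                      float u, float v,    // footprint centre, in [0,1]
                      float du, float dv,  // footprint major axis, texture space
                      int n)               // degree of anisotropy, e.g. 8
    {
        float sum = 0.0f;
        for (int i = 0; i < n; ++i)
        {
            float t = (i + 0.5f) / n - 0.5f; // spread the samples along the axis
            int x = std::min(w - 1, std::max(0, (int)((u + t * du) * w)));
            int y = std::min(h - 1, std::max(0, (int)((v + t * dv) * h)));
            sum += tex[y * w + x];           // nearest-neighbour tap, for brevity
        }
        return sum / n;                      // average of the taps
    }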

Now you probably wonder: why don't they use super-sampling again, then? Well, super-sampling works in a fixed pattern. Regular 4x super-sampling takes two samples vertically, and two horizontally. When the square is tilted backward, we don't care (much) about the extra samples horizontally. We only really need the samples in the vertical direction, to avoid aliasing and blur. Furthermore, the lighting calculations would be done multiple times per pixel when using super-sampling, while once suffices. With anisotropic filtering, the only cost is taking the extra samples in the texture and averaging them together. This is not that expensive in hardware, much easier than using eight times more complete pipelines.

So, multi-sampling and anisotropic filtering is like a perfect marriage. The first one avoids aliasing at the edges, the second avoids aliasing and blur inside the polygons!

Goz 101 Oct 01, 2004 at 12:44

@Nick

Multi-sampling is an optimization of this. Rendering in a 1600x1200 resolution requires four times more work than rendering in 800x600 mode, so there's a huge performance hit. What they do is sample textures only once per group of four pixels (corresponding to one pixel on the 800x600 screen), and do the lighting calculations also just once. It still renders the triangles in the high resolution, so especially for edges it knows which of the four pixels belong to the current polygon. So after the scene is rendered we again average the pixels of the big buffer together, so we get anti-aliasing at the edges. In other words, the edges are super-sampled, while the interior of the polygons is rendered as if it was in 800x600 resolution.

Are you sure? I rather thought multisampling was used to save bandwidth. Instead of having to write 4 pixels to a HUGE back buffer, it renders the 4 pixels in a block and then averages them together so that only 1 write is needed to the normal-sized backbuffer. I.e. it saves memory bandwidth…

Nick 102 Oct 01, 2004 at 13:36

@Goz

Are you sure? I rather thought multisampling was used to save bandwidth. Instead of having to write 4 pixels to a HUGE back buffer, it renders the 4 pixels in a block and then averages them together so that only 1 write is needed to the normal-sized backbuffer. I.e. it saves memory bandwidth…

I explained it a tiny bit over-simplified, sorry.

Of course, since only one texture sample is taken per four pixels, it doesn't have to be stored four times. For pixels around edges they do have to be stored separately, though. So there's a (hardware-specific) smart compression technique used in practice. It's the compression that really saves the bandwidth; multi-sampling itself only saves fillrate.

Goz 101 Oct 01, 2004 at 14:36

@Nick

Of course, since only one texture sample is taken per four pixels, it doesn't have to be stored four times. For pixels around edges they do have to be stored separately, though. So there's a (hardware-specific) smart compression technique used in practice. It's the compression that really saves the bandwidth; multi-sampling itself only saves fillrate.

AFAIK, and I am quite regularly wrong, multisampling is a standard signal manipulation scheme (just as super-sampling is). As such, the edge-pixel optimisation you talk of is an implementation-specific thing and not actually anything to do with multi-sampling per se. Which is where super-sampling has its advantage: multiple 2D signals (images, if you prefer) can be combined at higher precision using super-sampling. Multisampling can still introduce aliasing artifacts during combination of multiple anti-aliased signals. I know I'm nit-picking, but I do like my purism in an explanation of a given technique :) *Scuttles off to read up on the signal aliasing theory stuff in Foley, van Dam*

Steven_Hansen 101 Oct 01, 2004 at 17:29

You guys are great! Thank you. I spent forever googling this stuff, and got nowhere. Now I begin to feel enlightened.

I’m still confused about the whole D3DRS_MULTISAMPLEMASK render state thing. What mask are we talking about here?

I also noticed that NVidia cards only seem to support anisotropic filtering for minification (from the d3d caps viewer) while ATI supports both directions. I’m still trying to get a grasp on the anisotropic thing (though the explanations have been excellent), but is there a reason for the minification only? I also noticed the OpenGL doc that bladder recommended specified minification only in OpenGL. Is magnification less important, or simply more costly to implement? I guess I figured there had to be some other way to enable anisotropy since most cards seem to support it (at least the box says so), but the d3d caps bits for magnification are missing.

Thanks again for the quite thorough explanations. Very nice.

Nick 102 Oct 01, 2004 at 18:25

@Goz

I know I'm nit-picking, but I do like my purism in an explanation of a given technique :)

The theory behind it is quite simple. To sample a signal perfectly (it's easier to imagine a 1D signal like sound, but it works in 2D as well), you need a sampling frequency of at least two times the maximum frequency of the signal. That's Nyquist's limit. But when a sound signal is high-pitched only locally, it's not efficient to sample with double that frequency everywhere. So what is usually done is splitting the signal into shorter segments. Then you can determine the maximum frequency per segment, and use double that as the sampling frequency.
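Written out, Nyquist's criterion is simply

$$f_s \geq 2 f_{\max}$$

where $f_s$ is the sampling frequency and $f_{\max}$ is the highest frequency in the signal (or in the current segment).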

With regard to graphics rendering: the edges of polygons have -infinite- frequency. It's impossible to take an infinite number of samples, but starting at 4 the jaggies become really hard to perceive. Either way, around the edges a high number of samples is desirable. Inside the polygon, the frequency is limited to that of the projected texture, a 2D signal. Here, one sample per pixel generally suffices, unless the polygon is tilted, in which case anisotropic filtering takes more samples.

Nick 102 Oct 01, 2004 at 18:36

@Steven Hansen

I’m still confused about the whole D3DRS_MULTISAMPLEMASK render state thing. What mask are we talking about here?

In this case we're working with an uncompressed render buffer. So a screen resolution of 800x600 and 4x anti-aliasing gives a real 1600x1200 buffer. The mask determines which of the four pixels in the 1600x1200 buffer, corresponding to one averaged pixel on the 800x600 screen, you want to render to.

This allows for example a motion blur effect. Render the object four times, each time a little bit moved, to each of the four pixel groups in the 1600x1200 buffer. The averaged result on the screen will give the object a blur effect. You would be using 1, 2, 4 and 8 as mask values.
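A minimal sketch of that trick (assuming a 4x multisampled device; `DrawObjectAt`, `basePosition` and `motionStep` are made-up helpers):

    // Render the object four times, each pass writing to one sub-sample.
    for (DWORD i = 0; i < 4; ++i)
    {
        device->SetRenderState(D3DRS_MULTISAMPLEMASK, 1u << i); // mask = 1, 2, 4, 8
        DrawObjectAt(basePosition + float(i) * motionStep);     // nudge along the motion path
    }
    device->SetRenderState(D3DRS_MULTISAMPLEMASK, 0xFFFFFFFF);  // restore default: write all samples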

I also noticed that NVidia cards only seem to support anisotropic filtering for minification (from the d3d caps viewer) while ATI supports both directions. I’m still trying to get a grasp on the anisotropic thing (though the explanations have been excellent), but is there a reason for the minification only?

There's not much point in anisotropic sampling for magnification. In this case a texel of the texture is mapped onto multiple pixels on the screen. So it's more like a 'blob' than fine detail. It makes no sense to sample the texture multiple times per pixel. It won't reveal more detail; it's just the texture that is too small, or too close.

cdgray 101 Oct 15, 2005 at 19:52

This thread is great. I was trying to google for something about it and got this. I suggest adding this to the wiki.

Axel 101 Oct 15, 2005 at 20:28

Nick, I'm pretty sure 4x multisampling uses four 800x600 backbuffers, not one 1600x1200 one.

The rasterizer then renders 4 different polygons that are slightly displaced according to the subpixel pattern, but the pixel shader is only run once per sample (not per subsample). Or am I completely wrong with this belief?

To put it in simple words: multisampling only takes more edge samples and not more pixel shader samples. This is also where you can save a lot of framebuffer bandwidth, because everything inside the polygon will have the same value for all four samples :)

NVIDIA's G70 and newer ATI cards can also take 4 pixel shader samples (which means doing supersampling) for polygons that use alpha-testing or texkill, because those cause very high-frequency output that must be anti-aliased as well.
@Nick

So, multi-sampling and anisotropic filtering is like a perfect marriage. The first one avoids aliasing at the edges, the second avoids aliasing and blur inside the polygons!

This is fine today, but in the future we need to be very careful that our shaders' output is low-frequency too :|

Reedbeta 168 Oct 16, 2005 at 07:27

@Axel

the pixel shader is only run once per sample (not per subsample)…multisampling only takes more edge samples and not more pixel shader samples.

That’s exactly what he said…

The difference between four 800x600 buffers and one 1600x1200 buffer is just an issue of memory layout. I think Nick was giving an overview of the salient parts of the algorithm, not implementation details :)

Anyway, I agree, an edited version of this thread would make a good wiki article.

EDIT: I made two wiki articles - one about multisampling and the other about anisotropic filtering.