Posted 30 September 2004 - 04:33 PM
The DirectX parameters, however, are throwing me a curve. I can't find any information on how the video cards actually perform the sampling internally, or any decent documentation on what the parameters actually mean in terms of the quality vs. performance trade-off.
E.g. is 2x sampling usually good enough, or should I default to the highest level of sampling if I am going to use sampling at all? Although we could make it user-selectable, I doubt the user will generally understand this any better than I do at the moment.
What about the MultiSampleQuality parameter in the D3DPRESENT_PARAMETERS structure? The docs say that this value needs to be one less than the value returned by CheckDeviceMultiSampleType, but what does that number mean anyway? Is zero ok to use? The example antialiasing initialization code does not enable the quality parameter.
What about anisotropy? It always seems to come up in the same discussion as antialiasing. Video cards often boast up to 8 or 16 levels of anisotropy, but what does that mean exactly? How do I specify those levels using DirectX? I noticed a SamplerState referring to anisotropy, but it says MaxAnisotropy ... is that all there is to it? Again, is the performance such that either off or full is a good default? What is the anisotropy algorithm anyway, and how does it differ from vanilla linear filtering?
I have other antialiasing questions, but I guess we'll see how well this discussion takes off. I appreciate your help.
Posted 01 October 2004 - 09:37 AM
As for the rest of your post, I don't know what exactly you read from the docs, but you may have missed a bit. Try looking at these pages:
- DirectX Graphics -> Converting to DirectX 9.0
Check out the "Multisampling Quality Changes" section on that page.
- DirectX Graphics -> Programming Guide -> Advanced Topics -> Antialiasing
You may want to read the Full-scene Antialiasing page (and the Motion Blur page for fun :))
And you'll also want to read the remarks on the page with the D3DMULTISAMPLE_TYPE enumeration details. And the remarks about the D3DRS_MULTISAMPLEMASK render state.
The above texts should give you a decent amount of info and a general idea.
The difference between anisotropic and bi/trilinear filtering is that anisotropic filtering takes the angle of primitives into account.
In the docs, you can read up at:
- DirectX Graphics -> Programming Guide -> Direct3D Textures -> Anisotropic Texture Filtering
But you'd get a better explanation from here
Using anisotropic filtering works as follows. First check how many levels of anisotropy the card supports, via the MaxAnisotropy member of the D3DCAPS9 structure. Then enable the filtering by setting the D3DSAMP_MINFILTER and/or D3DSAMP_MAGFILTER sampler states to D3DTEXF_ANISOTROPIC (the mip filter stays at D3DTEXF_POINT or D3DTEXF_LINEAR). Finally, set the level of anisotropy through the D3DSAMP_MAXANISOTROPY sampler state.
Posted 01 October 2004 - 11:34 AM
Steven Hansen said:
Multi-sampling is an optimization of this. Rendering in a 1600x1200 resolution requires four times more work than rendering in 800x600 mode, so there's a huge performance hit. What they do is sample textures only once per group of four pixels (corresponding to one pixel on the 800x600 screen), and do the lighting calculations also just once. It still renders the triangles in the high resolution, so especially for edges it knows which of the four pixels belong to the current polygon. So after the scene is rendered we again average the pixels of the big buffer together, so we get anti-aliasing at the edges. In other words, the edges are super-sampled, while the interior of the polygons is rendered as if it was in 800x600 resolution.
The only disadvantage of multi-sampling over super-sampling is that aliasing effects can occur inside the polygon. But mostly this is not as noticeable as jaggies, and we have anisotropic filtering...
What we really wanted was to average 1x8 pixels together, so we'd get a 256x32 texture. But that doesn't work when we start rotating the square. And it also requires a lot of extra memory. The alternative is to sample the texture 8 times per pixel, in the direction in which the polygon is tilted. That's anisotropic filtering. It's called anisotropic because it doesn't matter how the square is rotated, it can take 8 samples in the texture in any direction required, and average them together.
Now you probably wonder why they don't use super-sampling again then? Well, super-sampling works in a fixed pattern. Regular 4x super-sampling takes two samples vertically, and two horizontally. When the square is tilted backward, we don't care (much) about the extra samples horizontally. We only really need the samples in the vertical direction, to avoid aliasing and blur. Furthermore, the lighting calculations would be done multiple times per pixel when using super-sampling, while once suffices. With anisotropic filtering, the only cost is taking the extra samples in the texture and averaging them together. This is not that expensive in hardware, much easier than using eight times more complete pipelines.
So, multi-sampling and anisotropic filtering are like a perfect marriage. The first avoids aliasing at the edges, the second avoids aliasing and blur inside the polygons!
Posted 01 October 2004 - 12:44 PM
Are you sure? I rather thought multisampling was used to save bandwidth. Instead of having to write 4 pixels to a HUGE back buffer, it renders the 4 pixels in a block and then averages them together so that only 1 write is needed to the normal-sized back buffer. I.e. it saves memory bandwidth ...
Posted 01 October 2004 - 01:36 PM
Of course since only one texture sample is taken per four pixels, it doesn't have to be stored four times. For pixels around edges they do have to be stored separately though. So there's a (hardware specific) smart compression technique used in practice. It's the compression that really saves the bandwidth, multi-sampling itself only saves fillrate.
Posted 01 October 2004 - 02:36 PM
AFAIK, and I am quite regularly wrong, multisampling is a standard signal manipulation scheme (just as supersampling is) ... As such, the edge-pixel optimisation you talk of is an implementation-specific thing and not actually anything to do with multi-sampling per se. Which is where supersampling has its advantage ... multiple 2D signals (images if you prefer) can be combined at higher precision using supersampling. Multisampling can still introduce aliasing artifacts during combination of multiple anti-aliased signals. I know I'm nit-picking, but I do like my purism in an explanation of a given technique :) *Scuttles off to read up on the signal aliasing theory stuff in Foley, van Dam*
Posted 01 October 2004 - 05:29 PM
I'm still confused about the whole D3DRS_MULTISAMPLEMASK render state thing. What mask are we talking about here?
I also noticed that NVidia cards only seem to support anisotropic filtering for minification (from the d3d caps viewer) while ATI supports both directions. I'm still trying to get a grasp on the anisotropic thing (though the explanations have been excellent), but is there a reason for the minification only? I also noticed the OpenGL doc that bladder recommended specified minification only in OpenGL. Is magnification less important, or simply more costly to implement? I guess I figured there had to be some other way to enable anisotropy since most cards seem to support it (at least the box says so), but the d3d caps bits for magnification are missing.
Thanks again for the quite thorough explanations. Very nice.
Posted 01 October 2004 - 06:25 PM
With regard to graphics rendering; the edges of polygons have -infinite- frequency. It's impossible to take an infinite number of samples, but starting at 4 the jaggies become really hard to perceive. Either way, around the edges a high number of samples is desirable. Inside the polygon, the frequency is limited to that of the projected texture, a 2D signal. Here, one sample per pixel generally suffices, unless the polygon is tilted, in which case anisotropic filtering takes more samples.
Posted 01 October 2004 - 06:36 PM
Steven Hansen said:
This allows for example a motion blur effect. Render the object four times, each time a little bit moved, to each of the four pixel groups in the 1600x1200 buffer. The averaged result on the screen will give the object a blur effect. You would be using 1, 2, 4 and 8 as mask values.
Posted 15 October 2005 - 07:52 PM
Posted 15 October 2005 - 08:28 PM
The rasterizer then renders 4 different polygons that are slightly displaced according to the subpixel pattern, but the pixel shader is only run once per pixel (not per subsample). Or am I completely wrong with this belief?
To say it in easy words: multisampling only takes more edge samples and not more pixel shader samples. This is also where you can save lots of framebuffer bandwidth, because everything inside the polygon will have the same value for all four samples :)
nVIDIA's G70 and newer ATi cards can also take 4 pixel shader samples (which means doing supersampling) for polygons that use alpha testing or texkill, because that causes very high-frequency output that must be anti-aliased as well.
Posted 16 October 2005 - 07:27 AM
That's exactly what he said...
The difference between four 800x600 buffers and one 1600x1200 buffer is just an issue of memory layout. I think Nick was giving an overview of the salient parts of the algorithm, not implementation details.
Anyway, I agree, an edited version of this thread would make a good wiki article.
EDIT: I made two wiki articles - one about multisampling and the other about anisotropic filtering.