DX10 vs DX11 and related Phong shading stuff.
Posted 16 November 2010 - 02:20 AM
Secondly, the reason I have to go to DX10 in the first place is to fix Phong shading anomalies. What happens is that when I have faces whose vertexes go from roughly concave to roughly convex, the normals for those vertexes end up being close together, and the face in between tends to look flat-shaded. I had this idea that I could split each edge in half (i.e. split each face into four new faces) but not move the new vertexes. Then for each new vertex I would generate a temporary normal (call it N1), which is the average of the normals of the two faces on either side of it. Next I would take the normals of the old vertexes on either side of the new vertex and average those to get another temporary normal (call it N2). Finally, to get the final normal NF of a new vertex, I would flip N2 to the other side of N1 such that the angle between NF and N1 is the same as the angle between N2 and N1; NF just sits on the other side.
When I draw this out in 2D it seems to give me what I want. For flat, fully convex, or fully concave areas it gives me normals similar to the average you would expect, but for zig-zag areas it kind of simulates a compound curve (at least shading-wise). However, before I try this I was wondering: is there some standard way people handle this problem? I'm not looking for some computationally expensive perfect solution; just an approximation that will get rid of the anomalies reasonably cheaply.
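For what it's worth, the "flip N2 to the other side of N1" step is just a reflection of N2 about the axis N1, i.e. NF = 2(N1 . N2)N1 - N2. Here is a minimal Python sketch of that idea (the vector math carries over directly to HLSL; the function names are made up for illustration, and all inputs are assumed to be unit-length):

```python
import math

def normalize(v):
    """Scale a 3-vector to unit length."""
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def flip_across(n2, n1):
    """Reflect unit vector n2 to the other side of unit vector n1,
    preserving the angle between them: nf = 2*(n1.n2)*n1 - n2."""
    d = sum(a * b for a, b in zip(n1, n2))
    return normalize(tuple(2.0 * d * a - b for a, b in zip(n1, n2)))

def midpoint_normal(face_n_a, face_n_b, vert_n_a, vert_n_b):
    """Normal for a new vertex inserted at an edge midpoint.
    N1 = average of the two adjacent face normals,
    N2 = average of the two old vertex normals,
    NF = N2 flipped to the other side of N1."""
    n1 = normalize(tuple(a + b for a, b in zip(face_n_a, face_n_b)))
    n2 = normalize(tuple(a + b for a, b in zip(vert_n_a, vert_n_b)))
    return flip_across(n2, n1)
```

Note that when N2 already equals N1 (a locally flat area), the reflection returns N1 unchanged, which matches the behavior you describe for flat regions.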
Posted 16 November 2010 - 05:43 AM
Posted 17 November 2010 - 09:51 AM
Anyway, if you don't really need DX10 or DX11 features, stick to DX9 to maximize the market and/or save yourself the work of writing two implementations.
It looks like your problem can be solved with just DX9, using a normal map. By storing the normals in a texture instead of at the vertices, you get full control over the curve used for lighting. You can even implement a bump mapping effect this way.
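To illustrate the normal-map route: normals have components in [-1, 1], so they are typically packed into an RGB texture as n * 0.5 + 0.5 and unpacked in the pixel shader as rgb * 2 - 1. A small Python sketch of that encode/decode, assuming 8-bit channels (the shader side would do the same arithmetic on sampled texels):

```python
import math

def encode_normal(n):
    """Pack a unit normal with components in [-1, 1] into RGB bytes [0, 255]."""
    return tuple(int(round((c * 0.5 + 0.5) * 255)) for c in n)

def decode_normal(rgb):
    """Recover the normal from RGB bytes: n = rgb*2 - 1, then renormalize
    to undo quantization error (as a pixel shader would)."""
    v = tuple(c / 255.0 * 2.0 - 1.0 for c in rgb)
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)
```

Because the lighting normal now comes from a texel rather than interpolated vertex attributes, you can paint whatever curvature you like across a face, which is exactly what sidesteps the flat-shaded look between nearly parallel vertex normals.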
Posted 17 November 2010 - 03:00 PM
In any case, the upshot is there is a lot of copying already going on and it isn't particularly cheap (I benchmarked it). I do a lot of little tricks to cull the copying of tris. For instance, the back sides of mountains and hills aren't copied, and there is a prediction algorithm that determines when to start copying them as the player moves around. Quadrupling my tris is a big hit, especially since I'm not going to release this any time soon; my guess is DX9 will be a distant memory for most gamers by then. On the other hand, since the graphics card has a lot of power, this seems like the perfect place to do this. Compared to the procedural shading (which I'm also implementing) it's a tiny fraction of the calculation.
Posted 18 November 2010 - 12:48 PM
Another thing: generating the fractal on the GPU is a lot faster than on the CPU, on the order of dozens of times faster or more. As always, this is the reality today, and it doesn't mean it will stay this way in the future.
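The reason fractal terrain maps so well to the GPU is that each sample depends only on its own coordinates, so every point can be evaluated in parallel. A minimal sketch of that pattern, a 1D fractal Brownian motion built from a hypothetical integer hash (both the hash constants and the function names are made up for illustration):

```python
import math

def value_noise_1d(x, seed=0):
    """Deterministic pseudo-random value in [0, 1) at integer lattice
    point x, via an illustrative (made-up) integer hash."""
    h = (x * 374761393 + seed * 668265263) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return (h & 0xFFFF) / 65536.0

def fbm(x, octaves=4, lacunarity=2.0, gain=0.5):
    """Fractal Brownian motion: sum octaves of interpolated value noise.
    Each call depends only on x, which is why this kind of work
    parallelizes so well on a GPU."""
    total, amp, freq = 0.0, 1.0, 1.0
    for _ in range(octaves):
        xf = x * freq
        i = math.floor(xf)
        t = xf - i
        t = t * t * (3.0 - 2.0 * t)  # smoothstep interpolation weight
        a, b = value_noise_1d(int(i)), value_noise_1d(int(i) + 1)
        total += amp * (a + (b - a) * t)
        amp *= gain
        freq *= lacunarity
    return total
```

On the GPU the loop body runs per-pixel or per-vertex in a shader with no cross-thread dependencies, which is where the "dozens of times faster" factor comes from.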
Posted 18 November 2010 - 03:59 PM
Also, fractals may be faster on the GPU, but there are a myriad of reasons why that seems more problematic for me. A lot of them have to do with matching the graphics engine to the physics engine. In practice I find the CPU fractals are fast enough. LOD doesn't have to happen every frame; even once a second is fine for typical MMO applications. In fact, I need time to alpha-fade in the new terrain, so I'm not sure doing it faster gets me much, and it takes time away from GPU texture fractals.