0
101 Sep 06, 2011 at 19:39

I’ve been following a book about the basics of game programming with D3D11, and I now understand the absolute basics of Direct3D :)

But I have a question. In the book, I always made one demo at a time. Now I’m trying to make a 2D game with it, and since I don’t want to pick up bad habits, I need your advice.

In the book, I always had to define either a struct VertexPos with texcoord and position members, or a struct VertexPos with only an XMFLOAT3 position member. In the game I’m making, I want to be able to draw both untextured solid surfaces and textured surfaces. I’m not sure how to do this, let alone do it efficiently.

Here is my rendering function:

void GameSpriteDemo::Render()
{
    if (m_pD3DContext == 0)
        return;

    float ClearColor[4] = {0.0f, 0.0f, 0.25f, 1.0f};
    m_pD3DContext->ClearRenderTargetView(m_pBackBufferTarget, ClearColor);

    UINT stride = sizeof(VertexPos);
    UINT offset = 0;

    m_pD3DContext->IASetInputLayout(m_pInputLayout);
    m_pD3DContext->IASetVertexBuffers(0, 1, &m_pVertexBuffer, &stride, &offset);
    m_pD3DContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

    m_pD3DContext->PSSetSamplers(0, 1, &m_pColorMapSampler);

    for (int i = 0; i < 2; ++i)
    {
        XMMATRIX world = m_Sprites[i].GetWorldMatrix();
        XMMATRIX mvp = XMMatrixMultiply(world, m_VpMatrix);
        mvp = XMMatrixTranspose(mvp);

        // Upload the new matrix before drawing; without this the mvp
        // computed above never reaches the shader.
        m_pD3DContext->UpdateSubresource(m_pMvpCB, 0, 0, &mvp, 0, 0);
        m_pD3DContext->VSSetConstantBuffers(0, 1, &m_pMvpCB);

        m_pD3DContext->Draw(6, 0);
    }

    m_pSwapChain->Present(0, 0);
}


So, how should I handle this efficiently with multiple vertex buffers, input layouts, shaders, blend states, etc.?

Should I just create multiple versions of those, and then set/reset the input assembler, shaders and so on around each Draw call? Or does that not work / is it not efficient?

Thanks :)

#### 19 Replies

0
167 Sep 06, 2011 at 19:58

Yes, you can and should have multiples of all the things you mentioned. And yes, you just set all the necessary states before each draw call.

At the bottom-most level you can organize things by shaders. For each shader you would have an input layout (validated against the shader when created) and corresponding render states (rasterizer, blend, depth-stencil). These could be hardcoded if you have only a handful of shaders, or if you want to get fancy you could try parsing these things from a text file or some such.

Built on top of the shader system you would have materials; a material is a shader together with settings for all its parameters, including textures. So you might have a solid-color shader and a texture shader; then you might have materials like red, yellow, black that use the solid-color shader with a specific color assigned, and other materials like metal, wood, stone that use the texture shader with a specific texture assigned.

Then on top of that you would have meshes; each mesh has a material assigned to it, and your tools should compile each mesh into the correct vertex format for the shader associated with that material. Then you’ll have a vertex buffer and index buffer for each mesh. An object could be a collection of meshes with different materials, etc.

As far as performance goes, common wisdom is that switching shaders is the most expensive change, followed by switching textures, then switching meshes. So when you draw your scene you should sort: all meshes with the same material should be drawn together, and all materials with the same shader should be drawn together. Note however that this is unlikely to make a real difference until you have quite a lot of meshes and materials (at least hundreds), so you probably don’t need to worry too much about it in the beginning stages of developing your game.
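To make the sorting idea concrete, here's a minimal sketch. The `DrawItem` struct and `SortKey` helper are invented for illustration (not part of any real API): pack the shader id into the highest bits of a sort key, then the material id, then the mesh id, so that sorting the draw list automatically groups draws by the most expensive state first.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical ids; in a real engine these would index into your
// shader/material/mesh tables.
struct DrawItem {
    uint16_t shaderId;    // most expensive to switch -> highest sort bits
    uint16_t materialId;  // next most expensive
    uint32_t meshId;      // cheapest to switch
};

// Pack the state ids into one key: shader in the top bits, then
// material, then mesh, so sorting groups draws by shader first.
static uint64_t SortKey(const DrawItem& d)
{
    return (uint64_t(d.shaderId) << 48) |
           (uint64_t(d.materialId) << 32) |
           uint64_t(d.meshId);
}

static void SortDrawList(std::vector<DrawItem>& items)
{
    std::sort(items.begin(), items.end(),
              [](const DrawItem& a, const DrawItem& b) {
                  return SortKey(a) < SortKey(b);
              });
}
```

After sorting, you walk the list and only re-bind a shader or material when its id differs from the previous item's.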

0
101 Sep 06, 2011 at 21:15

That’s a lot of work, defining all those shaders, blend states and so on!
And the code looks ugly; I’ll try to make a small engine to hide the ugly code in :)

Thank you

0
179 Sep 06, 2011 at 21:45

Xcrypt, it’s a lot of work starting from scratch, but it’s mandatory if you want to maintain your sanity down the road :) Once you have a framework in place, you’ll find it both easier and more fun to play around with settings rather than code to produce your results.

It might help if you look at how other people organize their code so you have a better idea of how to do it yourself. Take a look at the Collada file format. See how they break down shaders, materials, geometries, lights, cameras, etc. into libraries. You should essentially break down your framework into similar pieces and then bring them together at runtime to render objects.

0
101 Sep 07, 2011 at 14:16

I came across some other questions (on the same topic) today…

1) If I want to draw multiple sprites with the same settings except for a different texture, should I do that with one vertex buffer for all the sprites together, or one vertex buffer per sprite? (I’ve heard you shouldn’t be careless with pContext->Draw() calls.)

2) If the answer to ‘1)’ is ‘one vertex buffer for all’:
If I set the IA to this:

m_pD3DContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);


then is it possible to draw multiple sprites within one vertex buffer? How do I tell the API that I want to start a new triangle strip?
EDIT: Also, how do I texture a quad using TRIANGLESTRIP instead of TRIANGLELIST? I can’t manage to render the second half of the texture on the quad.

3) Let’s say I work with multiple vertex buffers (by setting

m_pD3DContext->IASetVertexBuffers(0,2,m_ppVertexBuffers, &stride, &offset);


then how do I make clear which vertex buffer I want to use at which point?

EDIT: btw, Collada might be a little overkill for what I want to achieve atm. Remember, I just want to make a rather simple 2D game (Tetris). I might extend my engine after the Tetris game, when I have a bit more experience…

Thanks

0
167 Sep 07, 2011 at 17:04
1. It doesn’t matter, as you have to use a separate Draw call for each texture anyway (you can’t change textures in the middle of a draw). Whether you use separate vertex buffers, or separate ranges in one vertex buffer, doesn’t make a difference. I’d probably keep separate vertex buffers, one for each texture (not one for each sprite) just because it would be cleaner and easier to manage.

The exception to this is if you decide to pack all the sprite textures together into one big texture (called a “texture atlas”). Then you could have sprites using multiple sub-textures all in one draw call, by setting their UVs appropriately. If you did that, you’d want all the sprites together in one vertex buffer, since you’re going to draw them all at once. I wouldn’t bother with this unless you have a LOT of different sprite textures, though.
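As a rough sketch of the atlas idea (assuming a simple uniform grid of equal-sized tiles; the function and struct names are made up for illustration): given a tile index, compute the UV sub-rectangle of that sprite inside the atlas.

```cpp
#include <cassert>

// UV rectangle of one tile inside a uniform grid atlas.
struct UvRect {
    float u0, v0;  // top-left
    float u1, v1;  // bottom-right
};

// tileIndex runs row-major across a cols x rows grid; the returned
// UVs are what you'd write into the sprite's vertices.
static UvRect AtlasTileUv(int tileIndex, int cols, int rows)
{
    const int   col = tileIndex % cols;
    const int   row = tileIndex / cols;
    const float du  = 1.0f / cols;
    const float dv  = 1.0f / rows;
    return UvRect{ col * du, row * dv, (col + 1) * du, (row + 1) * dv };
}
```

Real atlases often use tiles of varying sizes and a lookup table instead of a grid formula, but the principle is the same.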

2. Yes, you can make this work using triangle strips by inserting degenerate triangles. Here is an article about how to do it. You might want to try just using triangle lists rather than strips to start with, though, to keep things simple.

3. That’s probably not the way you want to do it. Multiple simultaneous vertex buffers are used when you want to store some attributes in one vertex buffer and other attributes in another. For instance you might put the positions in one vertex buffer and the UVs, normals, etc. in another. (You might do this if you were using shadow mapping, for instance, since drawing into the shadow map requires positions only and can use just the positions vertex buffer, while drawing into the main view requires both.) Multiple vertex buffers can also be used for instancing. But if you just want to draw first one vertex buffer, then another, just use IASetVertexBuffers with a single buffer, draw it, then use IASetVertexBuffers with the next buffer, draw it, and so on.

0
101 Sep 07, 2011 at 19:59

I wouldn’t bother with strips at all. Triangle lists have exactly the same vertex cache hit ratios; the only downside is that you need 6 indices per quad instead of 2 per adjacent quad (and 6 for non-adjacent quads), but indices are hardly the bulk of your data.

Btw, doesn’t DX11 support quad lists?

0
167 Sep 07, 2011 at 20:33

@.oisyn

Btw, doesn’t DX11 support quad lists?

It doesn’t appear to. I was wondering about that myself.

0
103 Sep 07, 2011 at 20:36

A separate draw call per uniquely moving entity isn’t unreasonable; you don’t have to make things completely impossible to code.

Only if you are sure you need millions of them would you bother doing something about it - crowd drawing, for example.

0
101 Sep 07, 2011 at 22:10

All right then. Thanks guys :)

0
101 Sep 07, 2011 at 22:50

@Reedbeta

It doesn’t appear to. I was wondering about that myself.

Well, that’s pretty stupid. I was under the impression they’d finally added it in DX10. Hardware has supported it for ages :dry:

0
101 Sep 07, 2011 at 23:36

BTW, if anyone knows a good basic reference (a simple 2D engine in D3D?) for me… please share.

It feels like I’m ‘wasting’ a lot of time on engine design; I constantly have to start over because something is inefficient or I just have to handle it completely differently :(

0
179 Sep 08, 2011 at 03:24

Take a look at XNA. More specifically, take a look at their tutorials for displaying graphics. The design of the XNA framework isn’t bad. It’s extremely easy to do 2D or 3D development with it, so it should help you with your own engine. Take a look at their sprite, effect, and state classes to see how they organize their code. You won’t be able to see the D3D internals, but it’s the high level classes that count here, not the D3D code.

0
101 Dec 03, 2011 at 04:40

@.oisyn

Well that’s pretty stupid. I was under the impression that they’ve finally added it to DX10. Hardware has supported it for ages :dry:

To my knowledge, standard consumer hardware doesn’t support quads. OpenGL supported them at the driver/runtime layer. The issue with quads is that unless all four points of the quad are coplanar, you HAVE to render it as two separate triangles. This forces the API/runtime/driver/hardware to ‘choose’ how to split the quad (there are two equally acceptable solutions from a technical standpoint, although they produce noticeably different results). Triangles are inherently planar. Stick to triangles.
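A tiny numerical illustration of why the choice of split matters (the helper names are hypothetical): for a non-planar quad, the rendered surface passes through different points depending on which diagonal is used, because the quad's center lies on whichever diagonal the split chose.

```cpp
#include <cassert>

// z-values of the four corners of a quad (v0..v3, in winding order).
// The height of the rendered surface at the quad's center depends on
// which diagonal the quad is split along:
//   split along v0-v2: center lies on that diagonal -> (z0 + z2) / 2
//   split along v1-v3: center lies on that diagonal -> (z1 + z3) / 2
static float CenterHeightSplit02(float z0, float z1, float z2, float z3)
{
    (void)z1; (void)z3;  // unused for this diagonal
    return 0.5f * (z0 + z2);
}

static float CenterHeightSplit13(float z0, float z1, float z2, float z3)
{
    (void)z0; (void)z2;  // unused for this diagonal
    return 0.5f * (z1 + z3);
}
```

For a planar quad both splits agree; lift one corner and they no longer do, which is exactly the ambiguity the post describes.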

@Reedbeta

1. It doesn’t matter, as you have to use a separate Draw call for each texture anyway (you can’t change textures in the middle of a draw). Whether you use separate vertex buffers, or separate ranges in one vertex buffer, doesn’t make a difference. I’d probably keep separate vertex buffers, one for each texture (not one for each sprite) just because it would be cleaner and easier to manage. The exception to this is if you decide to pack all the sprite textures together into one big texture (called a “texture atlas”). Then you could have sprites using multiple sub-textures all in one draw call, by setting their UVs appropriately. If you did that, you’d want all the sprites together in one vertex buffer, since you’re going to draw them all at once. I wouldn’t bother with this unless you have a LOT of different sprite textures, though.

A couple of other potential solutions for Xcrypt:

DrawInstanced() and Texture Arrays
A single vertex buffer with quads, along with the position and a ‘TextureIndex’, could be used to efficiently render several types of sprites at once.

DrawInstanced() and Texture Atlases
Same as with texture arrays except you would pass atlas parameters instead of the texture index.

Geometry Shader Expansion and Texture Atlases (or Texture Arrays)
Pass single points, expand to a quad in the geometry shader, texture using one of the two methods described above.
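A sketch of what the per-instance data for the DrawInstanced + texture array approach might look like. The struct layout and helper are illustrative assumptions, not a fixed D3D requirement: one record per sprite, bound as an instance-rate vertex buffer, with the texture index selecting the Texture2DArray slice in the shader.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// One record per sprite instance; this buffer would be bound as a
// second, instance-rate vertex buffer (D3D11_INPUT_PER_INSTANCE_DATA).
struct SpriteInstance {
    float    x, y;          // world-space position of the sprite
    float    scale;         // uniform scale
    uint32_t textureIndex;  // slice of the Texture2DArray to sample
};

// Fill an instance buffer for a batch of sprites laid out in a row,
// cycling through 4 texture slices (dummy data for illustration).
// The whole batch can then be drawn with a single
// DrawInstanced(4, instances.size(), 0, 0) over a shared quad.
static std::vector<SpriteInstance> BuildBatch(int count)
{
    std::vector<SpriteInstance> instances;
    instances.reserve(count);
    for (int i = 0; i < count; ++i) {
        instances.push_back(
            SpriteInstance{ float(i) * 32.0f, 0.0f, 1.0f, uint32_t(i % 4) });
    }
    return instances;
}
```

The payoff is one draw call for the whole batch instead of one per sprite, at the cost of requiring all the textures to share one array (same size and format per slice).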

0
101 Jan 23, 2012 at 20:20

Well, since the start of this topic I’ve managed to gain quite a bit more experience with DirectX, but now the next questions are really bothering me [ :) ]:

-> Loading and displaying models (as in import stuff from 3dsmax etc…) <-

1) What is the best format for loading models? I’ve implemented a small OBJ reader, but it seems to lack efficiency and possibilities.
2) Is it possible to convert models into a binary file while remaining portable? (For example, can you turn the text files into a binary file when the user installs the application, or is there any other commonly used method?)
3) How do I make a proper shader for these models, so that they look like what the artist created in the modelling program, while also respecting the environment of the virtual/game world (the number of lights currently affecting the model, for example)?

4) Is there any good open source rendering engine with Direct3D10+, written in C++?

0
167 Feb 15, 2012 at 00:14

Some more full-featured formats that are pretty widely used are COLLADA and FBX. I haven’t used either of them very extensively, but enough people use them that a lot of tools have pretty good support for them. They both have SDKs you can link into your project and use for reading/writing the files.

0
117 Feb 16, 2012 at 00:12

So, finally some time to spend on DM.net!

To the topic…

So generally:
There are several types of optimization. Most of the guys here focused on optimizing state changes (as those probably eat the most time today… in most cases); I also dug a lot into other optimizations (like putting the whole scene in a dynamic BVH or dynamic KD-tree :ph34r: (this one needs a ninja smiley)) - these try to reduce the amount of stuff to render.

Basically you put your scene in some kind of tree (spatial, bounding-volume based, etc. - there are lots of ways; some are better for more dynamic scenes, some for more static ones). Then you determine just the subtrees that are visible and work only with those… (Frustum culling is simple and effective; occlusion culling is quite a bit harder, and I’ve personally seen mostly slowdowns, or only a slight speed boost… in 3D. In 2D it could be a lot easier and also faster.)
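For the 2D case, culling really is simple - "frustum culling" degenerates to a rectangle overlap test against the camera's visible rectangle. A minimal sketch (types and names are illustrative):

```cpp
#include <cassert>

// Axis-aligned bounding box in 2D (world units).
struct Aabb2 {
    float minX, minY, maxX, maxY;
};

// An object is visible if its AABB overlaps the camera rectangle on
// both axes; anything failing this test can be skipped entirely.
static bool Visible(const Aabb2& object, const Aabb2& camera)
{
    return object.minX <= camera.maxX && object.maxX >= camera.minX &&
           object.minY <= camera.maxY && object.maxY >= camera.minY;
}
```

In a tree-based scheme you run the same test on each node's bounds and skip the whole subtree when the node is not visible.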

After solving this, it is time to sort things by material/shader/texture, and so render less stuff with fewer state changes.

To the questions (I’ll try to give personal opinions and decisions I’ve made during development of my game engine):

1.) Collada and FBX are widely recommended; in my opinion they’re over-complicated. I basically import models from the OBJ file format (note that in the SDK of “my game engine” you can’t edit meshes, but you can edit materials, add more textures, etc. - so the OBJ format is enough for static meshes). For deformable meshes we use our own format, written similarly to Doom 3’s MD5 (so no matrices, but rather quaternions and weights). These are used during development; in released products we export everything to binary files, storing just the necessary information laid out the same way as in the engine’s memory (so you can load them quickly - this is needed to handle large worlds without loading screens).

So you *can* use Collada and dig through dozens of pages, or you can stay minimalistic and write your own stuff into OBJ (like adding normal maps to *.mtl files) - they’re text files, easy to edit and easy to work with… but you probably won’t use either format in the end (you’ll use your own small binary format storing just the stuff necessary for you).

2.) You should convert everything to binary and ship the binary files - they will be smaller and installation will be faster. Binary files are the same on all PCs.

3.) We would all like to make the “do-everything shader”, but that is actually impossible (at least until realtime physically based path tracing runs on all machines). You’ll have to experiment a little: choose whether you want forward or deferred shading (or something in between), and then come normal mapping, shadow mapping, environment mapping and loads of other cool shader stuff.

4.) Honestly, I don’t know whether there is a good open-source D3D10 game engine, but you can look at the new id Tech (the Doom 3 engine). It uses OpenGL and isn’t that new… but one can learn a lot from its sources. And if you search the DevMaster database, I’m sure you will hit one or two.

0
101 Feb 16, 2012 at 13:53

I would like to spend a little more attention on 1) and 2):

1) How can I add normal maps to .mtl files? The export is done from within 3ds Max…
2) Are you 200% sure that binary files are portable? Types can be encoded differently on different machines, you know that, right?

0
117 Feb 16, 2012 at 14:55

1) You’ll have to edit the mtl files by hand and add another token specifying the normal map…

For example, a normal mtl file looks like this:

# Max2Mtl Version 4.0 Mar 10th, 2001
#
newmtl TreeMaterial_001
Ka  1.0 1.0 1.0
Kd  1.0 1.0 1.0
Ks  0.9 0.9 0.9
d  1.0
Ns  0.0
illum 2
map_Kd tree001_d.tga
#
# EOF


Now, as I use a texture naming system (diffuse maps have _d, normal maps _n, etc. - good for keeping things organized), I just ran a small C application on every mtl file I had, to create:

# Max2Mtl Version 4.0 Mar 10th, 2001
#
newmtl TreeMaterial_001
Ka  1.0 1.0 1.0
Kd  1.0 1.0 1.0
Ks  0.9 0.9 0.9
d  1.0
Ns  0.0
illum 2
map_Kd tree001_d.tga
map_Kn tree001_n.tga
#
# EOF


Then I added the keyword map_Kn to my mtl parser and load the target image as the normal map. It is simple, easy and fast to edit. Currently I use a huge click-fest editor (the graphics guys wanted it), and I still rather write the stuff into the file directly (it’s fewer clicks :D).

Of course, in the end (before releasing the application), I convert all the text files to binary ones… to keep things small and fast.

2.) Actually a text file is also binary… When you store text in a binary file you won’t save much; when you store numbers, you’ll save a lot of space. The edited material I wrote above takes 176 bytes stored as text.
Now, define a one-byte code for each keyword I might meet in the file:
0x01 newmtl … material name
0x02 Ka … ambient color
0x03 Kd … diffuse color
0x04 Ks … specular color
0x05 d … dissolve (opacity; I’m writing these descriptions from memory, as I use my own names)
0x06 Ns … specular exponent
0x07 illum … illumination mode
0x08 map_Kd … diffuse map
0x09 map_Kn … normal map

And let’s say I end every string with a binary zero (0x00) - why? Because I need to know where each string ends. The file now looks like this (I’ll mix hex and strings; hex is just bytes written out in hex (two hex digits = 1 byte), ignore the spaces, and each character in a string is also 1 byte):
0x01 TreeMaterial_001 0x00 0x02 0x3F800000 0x3F800000 0x3F800000 0x03 0x3F800000 0x3F800000 0x3F800000 0x04 0x3F666666 0x3F666666 0x3F666666 0x05 0x3F800000 0x06 0x00000000 0x07 0x02 0x08 tree001_d.tga 0x00 0x09 tree001_n.tga 0x00

It could be compressed a bit more, but well - that gives me… (calculating)… 99 bytes total. Almost 2 times smaller, holding the same data. And the mtl format is still quite favourable here, as it stores the floating point number 1.0 as “1.0”, not as “1.000000” (which is a lot more bytes); I’m sure OBJ, Collada and the others store it that way.

And now, why don’t binary files differ between machines? Because text files are binary too… Imagine “Hello 1.000000” written as a text file - it is 14 bytes of characters (15 with a newline at the end of the file). It is stored on a byte basis, the same as a binary file, but a binary file can store the number in a better way (not as characters, but as the raw bytes of the number) - e.g. 11 bytes total. It stores the number 1.0 as the bytes 0x3F 0x80 0x00 0x00 instead of as ASCII characters.

There are just a few problems with binary files:

Endianness - little endian vs. big endian.
When reading more than a single byte, you may have to read the bytes in flipped order - which is quite trivial to implement. I’d also note that most CPUs today are little-endian, so it is not a huge problem.

Floating point
A bigger one: theoretically floating point formats can differ (as soon as you find a CPU that doesn’t follow the common IEEE 754 standard). This one is not so trivial, as you’d need to convert the floating point value to your format at the bit level. Luckily for us, x86 (and its successor x64) follows the standard, and I think most ARM CPUs do too these days.
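A minimal sketch of the text-vs-binary point (assuming IEEE 754 floats, as the post does; the helper names are made up): writing 1.0 as text the way .mtl does costs 8 characters (“1.000000”), while the raw float is always 4 bytes, and reading those 4 bytes back reproduces the value exactly.

```cpp
#include <cassert>
#include <cstdio>
#include <cstring>

// Write a float as the raw 4 bytes of its in-memory IEEE 754
// representation (native endianness).
static size_t WriteFloatBinary(unsigned char* out, float value)
{
    memcpy(out, &value, sizeof value);
    return sizeof value;  // always 4 bytes
}

// Write the same float as text, the way a .mtl/.obj file stores it.
static size_t WriteFloatText(char* out, size_t cap, float value)
{
    return (size_t)snprintf(out, cap, "%f", value);
}

// Read the raw bytes back into a float (exact round trip on the
// same architecture; swap bytes first if endianness differs).
static float ReadFloatBinary(const unsigned char* in)
{
    float value;
    memcpy(&value, in, sizeof value);
    return value;
}
```

The memcpy dance (rather than pointer casts) is the portable, strict-aliasing-safe way to reinterpret a float's bytes in C++.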

0
101 Feb 16, 2012 at 23:47

Nice explanation.

I have another design question for my engine:

I have a struct ‘Mesh’ which contains all the data necessary to draw any geometry.
I have a ‘Renderer’ class which gathers all the information necessary for drawing and tries to optimize it a little.

Now I’m wondering what I should do for the draw() function.
Should I make ‘Mesh’ a class and give it a method ‘draw()’, with which I could determine whether that specific mesh should be drawn the next time renderer->render() is called (internally, not by the user)?
Or is there a better way to handle this?

Thanks again.