Direct3D.Texture

tlc660 101 Aug 19, 2006 at 12:27

I have read somewhere that texture sizes must be a power of 2 (1x1, 2x2, 48x48, …) for better performance.

My questions are:

  1. Is that true?
    I'm making a 2D game with Direct3D.Sprite, and not all my game elements fit those specific sizes.
    If I can't make them those sizes, is it very bad?
  2. What is the maximum size of picture that a Texture can support?

Thanks very much.

16 Replies


Nautilus 103 Aug 19, 2006 at 13:48

Hello.

1) No, it’s not always true.
It depends on the hardware and the video driver in use.
Textures of size 256x256 are the fastest, no matter what
the card is. Stick to that size as much as you can.
Now, this does not mean you can’t use bigger textures.
Use them if you really need to, just don’t abuse them.

2) This too depends on the hardware and the video driver
in use. For example, some cards support textures of
1024x1024, while others can go up to 4096x4096, etc.
You have to query the hardware capabilities if you want to
know how big a texture can be for the video card in use.

Ciao ciao : )

tlc660 101 Aug 19, 2006 at 14:17

Thanks, Nautilus.

I just wonder why we can’t have the surfaces that DirectDraw used in the old days,
which could hold graphics of any size and shape, with no need to care about the hardware.

Reedbeta 167 Aug 19, 2006 at 22:50

If you’re just doing a 2D game, try using the ‘rectangle’ texture addressing mode (not sure if that’s what it’s called in D3D). It lets texture coordinates range from 0 to the width/height rather than 0 to 1, with non-power-of-two sizes. The flipside is that you don’t get mipmapping, but this shouldn’t matter for a 2D game.

tlc660 101 Aug 20, 2006 at 23:55

I’m using MDX; I guess it does that automatically,
because I can just draw a part of a texture with a Rect.

Thank you

Goz 101 Aug 22, 2006 at 10:22

@tlc660

I have read somewhere that texture sizes must be a power of 2 (1x1, 2x2, 48x48, …) for better performance.

Btw, as an aside, 48 is not a power of 2 ;)

tlc660 101 Aug 22, 2006 at 17:34

Goz, hehe, that’s my fault =^_^=

neptune3d 101 Aug 23, 2006 at 11:35

One last note: in Direct3D the texture size is best kept square, but you do not have to use it as such. For example, if you have a 256x256 texture with two images on it, you can declare two rectangles of 256x128 and use only the part you need.

HTH,
Nep.

Goz 101 Aug 23, 2006 at 13:02

@neptune3d

One last note: in Direct3D the texture size is best kept square, but you do not have to use it as such. For example, if you have a 256x256 texture with two images on it, you can declare two rectangles of 256x128 and use only the part you need.

Can someone explain to me where this square textures and 256x256 stuff comes from?

I have never heard this before and it sounds like total rubbish to me. Sure, the 3DFX was incapable of going over 256x256, but I don’t see how this applies. How do bit depth and compression come into this? To be quite honest I’d just say stick with power-of-2 textures (i.e. 1x1, 2x2, 2x1, 1x2, 4x4, 2x4, 4x2, etc.) as much as possible and you will get the best speeds. It may be the case for one or two graphics cards out there, but I can certainly say it’s not the same for all.

So can anyone back up their claims, or should I put this down to people not fully understanding the “way it is”(tm)?

neptune3d 101 Aug 23, 2006 at 13:57

Well, I can’t post the book on Direct3D that I am reading to the board, so I guess I’m out of luck. My knowledge comes from the book DirectX Programming for Beginners from Sams, so indeed there may be more advanced information out there… but, as the book says, I am just starting out in DirectX, so you may have more intimate knowledge.

Putting the shoe on the other foot, can you offer anything to show this is not the case?

Nautilus 103 Aug 23, 2006 at 15:37

This has to do with optimal memory segment alignment, and a number of other things whose details are not told to us.

The 256x256 texture ‘claim’ dates back to the early days of 3D hardware acceleration, when cards had 2 megs of video RAM instead of half a gig like today.
And it hasn’t changed over the years, because changing it would collaterally break the performance of the older games developed with 256x256 textures in mind (still widely played, you know).
Who is going to take responsibility for this change? nVIDIA? ATI?
They aren’t rich enough to face the consequences.

Many people believe there is a sort of standard set of rules followed by the various hardware manufacturers.
The truth is, there isn’t.

It’s nonsense, really. Today we have just ATI and nVIDIA. They rule the market; other brands barely survive.
Why don’t ATI and nVIDIA sit down together and decide on a little standard to follow?
As programmers, we would be the first to benefit from it.

Every card works in its own way.
Every card segments its own on-board memory into a number of blocks (or clusters) of a size unknown to us.
These clusters of video memory work similarly to those of your HD.
Optimal video memory usage is to use memory in multiples of the size of these video clusters.

I’ll give an example.
Create a new *.txt file on your disk.
Open it, write in a single character, then save and close.
Now, how much space do you believe the file takes on your disk: 1 byte or 32 kilobytes?
The correct answer is 32 KB (right-click on the file and choose Properties if you want proof).
It could take up a different amount of space than the 32 KB of my example, of course.
It depends on how big the partition is and how the drive is formatted (FAT32 versus NTFS), but you get the idea.

The same goes for video cards, but it gets more complicated.
When you allocate video memory (i.e. upload a texture) you use space in multiples of the size of a video cluster.
Question: how big is this cluster?
Answer: you do not know. And they won’t tell you.
The card knows it. And the driver knows it.
DirectX and OpenGL, however, do not.
Neither do you.
To solve this problem we would need either a unified database of video cluster sizes for every existing card, or (even better) a known standard followed by the card manufacturers.
Neither of the two exists. And we have no means to determine the exact amount of free video memory.
The problem is further aggravated by the fact that cards may or may not silently employ texture ‘joining’ when managing the images you upload to their memory.
I’m not referring to DXT compression and such.
It’s like when you use WinZip to create an archive composed of 2 files.
You don’t necessarily need to employ compression. You have the option to simply store the bytes of the two files, uncompressed.
Is it useful? Yes. It allows you to create an uncompressed entity representing, say, 32 files of 1024 bytes each, yet keeps the data very fast to access.
Assuming you have clusters of 32 KB, this archive would fit into 1 single cluster (instead of 32 separate ones).
(For simplicity I’m not accounting for the archive’s header.)

This is the silent joining I meant.
You do not need to know when/if it is employed.
It may silently join one or more textures so that they fit in 1 video cluster (saving memory), without actually compressing the pixel array.
But when do they do it? On what do they base this decision?
You do not know (I’m repetitive, I know).
You may upload 10 small textures and use 10 separate video clusters, or you may upload 100 bigger textures, and use 50 or less video clusters.
The duo card/driver decides it.

Also, cards may upload a variable amount of extra information along with the texture (for example, to handle the texture joining). How many bytes is this extra info?
You see, we are not told any of this.

Now, when you want to ready a texture for rendering, using this command:

// I'm assuming the API is DirectX.
pD3Device->SetTexture (stage, texture);

what happens is that the card makes a copy of the actual texture contents.
Where does it copy it to? Into a fast-access buffer: the video RAM.

Question: how many bytes are gonna be copied?
Answer: a size that is a multiple of 1 cluster.

A 256x256 texture would likely fit into 1 cluster, thus minimizing the amount of bytes to copy, and there would still be room for other textures.
What if more than 1 texture is actually joined together?
All of those textures would be moved together in one single operation.
So you’d end up with multiple textures ready for rendering.
What then, if you happen to need one or more of those other textures once you have finished with the one you called SetTexture() for?
Calling SetTexture() again to ready a different texture would likely make the card discover that the needed texture is already in place.
Therefore, to render with the new texture, no real movement of data would be needed.
Ok, now, this is a happy case.
But the chances that it occurs increase with the use of 256x256 textures. When they are assigned similar priority, the textures end up tied and moved together, resulting in fewer real swaps when you render, and thus in faster rendering.
Bigger textures would (likely) require more clusters to move around, effectively resulting in more copy operations performed during rendering.
Look, I suck when it comes down to explanations. I hope you can follow me.

The bottom line is that, today like yesterday, textures of 256x256 pixels are the fastest to manage, because one way or the other they keep fitting well into video clusters.
Call it a happy coincidence, or call it an attempt by the hardware manufacturers to give us something we can count on.
Either way, this is all we’re going to be told.

I know your next question: BUT WHY does it have to be 256x256 pixels?
Why not 199x199 or 512x512 pixels?
For the same reason that 1 byte is made up of 8 bits.
Why can’t it be 9, or 12 bits?

No real reason today, other than historical ones (and, well, financial, but that’s another matter).
In the past they found that 8 bits were enough, yet not too many.
They were a good compromise between quantity and complexity, so they adopted it and based everything on it (the history of the ASCII character set explains this better than I do).

Today we have no real reason to maintain this amount.
We could change it to something better (something that would solve the big ASCII–Unicode problem in a better way, for example).
But we don’t.

Same goes for 256x256 textures. They were ok.
And everything has been based upon them.
Today we could change it. But we don’t.

Hope I haven’t confused you.
Ciao ciao : )

Goz 101 Aug 23, 2006 at 16:54

Yes, I can see that the larger the texture, the more caching failures you get, but most other systems I have used have a minimum cache, and as long as you are under that you are laughing. These days most games are using large textures, so it is in the interest of the IHVs to set their cards up to get good performance from large textures… surely? It’s all about the benchmarks now… isn’t it?

When I get an opportunity, I will write a little test app and post it up on here so we can get results from multiple graphics cards…

tlc660 101 Aug 23, 2006 at 18:34

Thanks, Nautilus, for the detailed explanation.

My problem right now is this (I’m making a 2D game with managed Direct3D): I have an animation with 100 frames, each frame about 256x256.
Is it better to split them into 100 textures, or join them into one single large texture (2560x2560 in size)?

And what if my pictures are not square? Should I make them square (power of 2), then use a Rectangle in Direct3D.Sprite.Draw() to indicate the portion of the source texture to render?

Thank you very much :worthy:

Nautilus 103 Aug 23, 2006 at 20:34

Ciao,
sure, a 2560x2560 texture is quite large.
Are you certain you need one *that* big?

It would mean either 12.5 megs (at 16 bpp) or 25 megs (at 32 bpp).
You can’t possibly display all of it on your screen.
Therefore you wouldn’t use all of it within 1 single frame.

A considerable portion of the memory you keep allocated would remain unused for several frames, which is considered bad practice unless the huge texture contains *all* of the images you’ll ever need to render until the game ends (and I don’t think that’s your case).

Better to employ several smaller textures : )

Btw, what 2D game are you making that requires sprites as large as 256x256 pixels?
Is it a beat ‘em up type (a la Street Fighter)?

Ciao ciao : )

tlc660 101 Aug 23, 2006 at 20:55

No, it’s not a beat ‘em up type; it’s a point-and-click adventure game.
The main character and story characters have many animations, about 30 to 100 frames each.

Any suggestions? One texture per frame, or a single texture containing them all: which gives better performance?

Nautilus 103 Aug 23, 2006 at 21:09

Ok. Since you can’t keep everything in 1 texture, I’d go with the multiple textures idea.

But I believe that 256x256 sprites are too big.
They would mean an impressive amount of detail.
Can’t you lower it? Even halving it to 128x128 would yield excellent results for a sprite (and would require 75% less memory).
Have you tried it, to see how it would look, before deciding that you must use 256x256 images?

Regards,
Ciao ciao : )

tlc660 101 Aug 24, 2006 at 15:08

Yes, it’s big.
My game runs at a fixed 1024x768,
and the sprites will often be in front of the “camera”, so they will look “big”.
The sprites are not actually 256x256, but about 230 to 250 pixels tall.

Maybe I have to make the game resolution lower (800x600), so that I can make the sprites smaller.

I guess Direct3D.Surface works almost like the “surface” used in DirectDraw in the old days: it can hold bitmap data of any size and shape, am I right?
Is it possible to load a bitmap into a Direct3D.Surface and use a Rect to indicate the portion of the source to render with Direct3D.Sprite?