Premature optimization vs good planning

starstutter 101 Jan 06, 2009 at 19:23

So, you probably know there’s another thread currently going on that got derailed (kind of) onto a similar topic. I was kind of wondering how to clearly distinguish between good design and planning for performance, and premature optimization, which many programmers suffer from (and that totally doesn’t sound like an innuendo).

So to throw out an example here: I designed a deferred lighting system in which I got the idea to use shadows to my advantage speed-wise. By using the stencil buffer when rendering shadows (shadow maps, that is) and culling black pixels, I was able to cut out a large chunk of light processing by causing the stencil test to fail where shadows were being projected. Now, I knew that I had to do this optimization while I was writing the lighting system itself, because implementing it later would be a malignant, actively reproducing female k9…
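
To make the idea concrete, here’s a minimal OpenGL-style sketch of the stencil trick (just an illustration written from memory; my engine actually uses DirectX, and the render* calls are placeholder hooks, not functions from my code):

```cpp
#include <GL/gl.h>

// Placeholder hooks into the renderer (not real functions from my engine).
void renderShadowProjection();      // draws only the pixels that fall in shadow
void renderDeferredLightingPass();  // full-screen deferred lighting pass

void drawLitScene()
{
    glEnable(GL_STENCIL_TEST);

    // Pass 1: wherever a shadow is projected, write 1 into the stencil buffer.
    // Color writes are disabled; we only want to mark the shadowed pixels.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    renderShadowProjection();
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

    // Pass 2: run the expensive lighting shader only where the stencil is
    // still 0, so shadowed pixels fail the stencil test and are skipped.
    glStencilFunc(GL_EQUAL, 0, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    renderDeferredLightingPass();

    glDisable(GL_STENCIL_TEST);
}
```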

So, is there some kind of guideline to follow? How do you know what you should implement immediately and what’s just a waste of time? Is it something tricky that comes with experience, or is it more just common sense (probably a bit of both)?

39 Replies


alphadog 101 Jan 06, 2009 at 21:15

I’m afraid the only guides are experience and insight.

And, would your example really be hard to revisit, assuming you had good code that showed great fundamentals like low coupling, high cohesion, etc?

Sol_HSA 119 Jan 06, 2009 at 21:45

There’s one optimization that is never premature: Optimizing for readability.

If you’re charting new ground, it’s usually best to play it safe. First make it work, then make it fast. If two routes are more or less equally safe (i.e. should I use gl extension A or gl extension B to do the same thing?) then picking the faster is probably a good decision, and not ‘premature optimization’. (Heck, in the case of FBOs compared to older techniques, it’s probably safer AND faster).

So, my guidelines would be:
1. Optimize for readability. Even if it seems to make the code slower. It probably doesn’t. Another hint: comments never made anyone’s code slower.
2. First make it work, then make it fast.

starstutter 101 Jan 06, 2009 at 22:33

@alphadog

And, would your example really be hard to revisit, assuming you had good code that showed great fundamentals like low coupling, high cohesion, etc?

Well, I will say that this example was from a while ago when I was more inexperienced, and unfortunately yes, there was a fairly high level of coupling that I now know makes things a nightmare. Now my design philosophy goes something like this:
Everything I make, except for the highest-level (game-specific) functions, should be able to be copied and pasted into another program and still work.

Seems to be working for me too. The only thing all the “tools” (as I’m calling them) link to is a simple debugging tool that writes to a file when an error occurs (either from DirectX or my engine). I re-coded my old engine in this new style and so far it retains the same capabilities with literally 1/20th of the code, and I have spent less than 1.5 hours cumulative debugging the thing, compared to many days with the old style :) . Yes I sound like a total noob right now but I’m happy, shut up! :D

Finally, it’s nice to hear that from another coder, so thanks. But btw, to answer your question, it would have been pretty difficult considering how badly things were set up; I certainly do know much better now.

And thanks Sol_HSA for the advice as well.

Goz 101 Jan 07, 2009 at 09:13

Deliberate pessimisation is just as evil as premature optimisation, if not more so…

alphadog 101 Jan 07, 2009 at 13:33

@Sol_HSA

There’s one optimization that is never premature: Optimizing for readability.

Interesting way to see it. Personally, I wouldn’t call “working towards readability” an optimization technique, but I totally agree with the spirit of the suggestion. The reason I wouldn’t call it optimization is because I consider it a “baseline” aspect of good coding, and optimization is something that is highly recommended, but can be deferred or not done at all if you still fit within requirements.
@Goz

Deliberate pessimisation is just as evil as premature optimisation, if not more so…

Just to make sure we’re on the same page, what do you mean by “pessimisation”? I take it to mean putting off optimization/refactoring until the very end.

It’s interesting to note that premature optimization syndrome (or POS, in order to optimize further typing :dry: ) will often, but not always, lead to less maintainable/readable code.

The only evil is the over-generalizations being bandied about! ;)

A balanced view is that if there is a need in the reqs, and there always is with games, you must design for performance amongst other factors. That is basically pre-optimization. However, optimization cannot occur in some cases without actual code to measure; the whole is not always the direct sum of its parts. Sub-optimization can trap you into a slower solution.

The problem with both generalizations is that they attempt to position all of the design/optimization work at one end, either the start (POS) or the end (DP).

That’s because most coders like to spit out code and never revisit it. The real way to operate is in cycles. TDD helps here…

Goz 101 Jan 07, 2009 at 13:51

Pessimisation is deliberately writing un-optimised code.

As an example: deliberate pessimisation would be saying “I need a sort” and implementing a bubble sort. It doesn’t really change the readability of your code to use a radix sort, for example, but the bubble sort makes your code pointlessly slower.
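
To put that in code, a rough C++ sketch (I’ll let std::sort stand in for the “better tool” here, since a full radix sort would be longer):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Deliberate pessimisation: a hand-rolled O(n^2) bubble sort.
void bubbleSort(std::vector<int>& v)
{
    for (std::size_t i = 0; i + 1 < v.size(); ++i)
        for (std::size_t j = 0; j + 1 < v.size() - i; ++j)
            if (v[j] > v[j + 1])
                std::swap(v[j], v[j + 1]);
}

// Just as readable, and far faster on anything non-trivial.
void properSort(std::vector<int>& v)
{
    std::sort(v.begin(), v.end());  // O(n log n)
}
```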

alphadog 101 Jan 07, 2009 at 14:21

Will the bubble sort’s slower performance actually impact the overall hypothetical application? Can you determine that when you are coding that particular block?

What will the impact of using a more complex sorting technique have on future maintainability versus the gain in performance?

Can the bubble sort be refactored if it is found to affect performance?

Do you want to stop your deliverable at every block of code to think this through?

Goz 101 Jan 07, 2009 at 16:28

@alphadog

Will the bubble sort’s slower performance actually impact the overall hypothetical application? Can you determine that when you are coding that particular block?

What will the impact of using a more complex sorting technique have on future maintainability versus the gain in performance?

Can the bubble sort be refactored if it is found to affect performance?

Do you want to stop your deliverable at every block of code to think this through?

Does algorithmic optimisation cause your code to become any less readable and harder to maintain?

Because that’s why premature optimisation is so “evil”. Algorithmic optimisation appears sensible to me. Use the best tool for the problem.

alphadog 101 Jan 07, 2009 at 16:39

@Goz

Does algorithmic optimisation cause your code to become any less readable and harder to maintain?

Answering a question with a question, are you? ;)

My answer to yours: often yes. More performant algorithms are often more complex and less common than less performant ones. Thus, less readable and harder to maintain, respectively.

PS: Of course, this is a matter of degree. On a case-by-case basis, it is usually not an order of magnitude more complex and less readable, but it is to some degree. And, when you aggregate that over an entire codebase, it has a non-negligible cost with respect to those two factors.

JarkkoL 102 Jan 07, 2009 at 18:22

@starstutter

I was kind of wondering how to clearly distinguish between good design and planning for performance, and premature optimization

If you can delay optimization to a later stage, then doing it earlier is premature. However, there is some disagreement among people about what kind of optimization you can delay and how far.

Personally I think it’s important to have a good idea about performance characteristics early in a project, for many reasons. One of the major reasons is that in a game project there are artists and designers creating content for the project, and if you drastically change performance characteristics over the course of the project, you waste a lot of artist/designer resources. And in an average game dev team there are A LOT more artists/designers than there are programmers.

Another, purely programmer reason is that if you implement features without having a good idea about the performance, you potentially implement features that you don’t have the CPU/GPU resources for, so you end up cutting them from the final product in order to hit your performance target.

It’s also important to learn to write efficient code straight off the bat and to know what kind of choices have a major impact on code performance on your target platform(s). If you don’t follow good practices while writing code, it’s very difficult and time-consuming to fix later (it’s the “goo” I referred to in the other thread).

So, any optimization that gets me within, let’s say, the ~30% mark of the final performance early in the project I wouldn’t consider premature. That last 30% you can then squeeze from code + assets when approaching the end of the project.

Grumpy 101 Jan 07, 2009 at 21:23

I have to agree with Sol_HSA about first getting it running, then optimizing. Pre-planning is probably 75% of the work, knocking out rewriting code multiple times and removing redundant code (and yes, changing that “prematurely optimized” code).
For example, Malignant had defined Pi; that would most likely be used in his engine… He also had two other floats, 2Pi and HalfPi (but not by those names), but would he actually use those floats in his proggy??
When going through the pre-planning phase, you would know (hopefully) which library functions you will be using and subsequently have an idea of what arguments you need for those functions… Do you still need 2Pi or HalfPi now?? Do you need both If_Odd and If_Even when you could get by with one of them and an else statement, ’cause it returns a bool? If not, then don’t include those definitions, saving a few bytes and a few microseconds of compile time.
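
Something along these lines (the names are made up purely for illustration, not taken from anyone’s actual code):

```cpp
// Hypothetical constants/helpers of the kind I'm talking about.
const float Pi     = 3.14159265f;
const float TwoPi  = 2.0f * Pi;   // only keep these two if the library calls
const float HalfPi = 0.5f * Pi;   // you planned for actually need them

// One predicate is enough, since it returns a bool...
inline bool If_Odd(int n) { return (n % 2) != 0; }

void handleValue(int n)
{
    if (If_Odd(n)) { /* odd case */ }
    else           { /* even case: no separate If_Even needed */ }
}
```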

JMHO…

Grumpy

Nick 102 Jan 08, 2009 at 11:58

@JarkkoL

If you can delay optimization to a later stage, then doing it earlier is premature. However, there is some disagreement among people about what kind of optimization you can delay and how far.

Personally I think it’s important to have a good idea about performance characteristics early in a project, for many reasons. One of the major reasons is that in a game project there are artists and designers creating content for the project, and if you drastically change performance characteristics over the course of the project, you waste a lot of artist/designer resources. And in an average game dev team there are A LOT more artists/designers than there are programmers.

Another, purely programmer reason is that if you implement features without having a good idea about the performance, you potentially implement features that you don’t have the CPU/GPU resources for, so you end up cutting them from the final product in order to hit your performance target.

Which is why I referred to prototyping in the other thread. But note that if you or anyone else on the team already has the prior experience to know for a fact that certain things will need certain optimizations to meet the goals, then clearly there is no reason to delay them, and they could even be included in the design phase. In all other situations, there is an extremely high likelihood that optimizing something in advance will turn out to be a waste of time, bloat the code, and attract all the other evils of premature optimization. I also believe that much too often people assume they have the experience to know what will need optimizing in advance. After 10 years of 3D graphics programming I still catch myself doing that from time to time (though I also believe that some of it is unavoidable due to design goal changes and uncertainties)…

It’s also important to learn to write efficient code straight off the bat and to know what kind of choices have a major impact on code performance on your target platform(s). If you don’t follow good practices while writing code, it’s very difficult and time-consuming to fix later (it’s the “goo” I referred to in the other thread).

I agree that ideally your team members should have all the relevant prior experience possible. Unfortunately there is no such thing as “learning to write efficient code straight off the bat” for the less experienced. And that’s when they should be most aware of premature optimization. Let’s take, for example, the popular topic of what programming language someone should use for a first project. All the pros would tell them they used C++ on their last AAA title, but would likely advise a simpler language. If he doesn’t take the advice, there is no doubt the project will take longer (if it’s ever completed) and the performance goals will have been overshot. Of course, an important flip side is that he’ll have more experience for the next project…

So, any optimization that gets me within, let’s say, the ~30% mark of the final performance early in the project I wouldn’t consider premature. That last 30% you can then squeeze from code + assets when approaching the end of the project.

Indeed, with experience from similar projects your optimization choices are much more likely not to be premature. But the point is you can’t generalize that as advice to everyone. Also, what was a good optimization in a previous project might bite you in the ass in the next (e.g. the compiler beats your bit-twiddling hack).

Like I said in the other thread: When in doubt, it’s premature.

JarkkoL 102 Jan 08, 2009 at 15:46

@Nick

Which is why I referred to prototyping in the other thread.

Well, it’s not prototyping that determines the performance in game projects, but early optimization of the final implementation. If you were doing more research-type work, then you would prototype, but that’s not what you do in game projects in general.

@Nick

Unfortunately there is no such thing as “learning to write efficient code straight off the bat” for the less experienced

I have a bit more faith in those less experienced programmers, but I wasn’t talking only about them. Even if you are an experienced programmer, you have to learn the performance characteristics of your target platform to be able to write efficient code straight off.

@Nick

But the point is you can’t generalize that as advice to everyone…

I’m not writing a newbie programmer tutorial here, but just sharing the experiences I have had regarding the topic, hoping to spawn some fruitful conversation.

starstutter 101 Jan 08, 2009 at 23:10

You know I’d just kind of like to say that game programming is kind of a depressing field… I’ve been programming for 6 years, written two 2D engines and two 3D rendering engines and I still qualify as a newbie =/
It’s a looooooooooong road…

Reedbeta 167 Jan 09, 2009 at 00:01

@starstutter

I’ve been programming for 6 years, … and I still qualify as a newbie =/

You think other fields are any different? ;)

Sol_HSA 119 Jan 09, 2009 at 10:54

@Reedbeta

You think other fields are any different? ;)

I’ve been given the impression that six years in bicycle repairs just might make you an expert in that field.

alphadog 101 Jan 09, 2009 at 13:52

OMGWTFBBQ! :ohmy: woot biking R0XX0RZ! :yes: who can say he dusnt, is a retarded hat4r n00bsauce, ill :ninja: those fuggahs wit my +4 spork of pwnage! lollololololo. i ried alot. i have ideas 4 better bike + mad screwdriver skillz. lol… hahahaha… i said screw. i liek 2 b design teh trek madone but better!!111!!!!!!!1\~\~1!!

give me bluprint n recipe 4 carbun fiberz n wats gears? how do brakez wurk? ;) ;) :wacko:

plz send me info
thx luv u guys u all rok! :wub: :worthy: :surrender :worthy: :yes:

Nick 102 Jan 09, 2009 at 14:58

@JarkkoL

Well, it’s not prototyping that determines the performance in game projects, but early optimization of the final implementation. If you were doing more research-type work, then you would prototype, but that’s not what you do in game projects in general.

A prototype can also be made using engine code that is still unfinished, but then it’s done in a different branch of source control. If it works out as expected (i.e. meets the design goals) then it’s further developed into production-quality code and merged back into the trunk; otherwise it’s discarded. What you describe sounds like doing optimizations straight in the trunk, which is a very bad idea not unlike premature optimization.

Even if you are an experienced programmer, you have to learn the performance characteristics of your target platform to be able to write efficient code straight off.

By prototyping. ;)

I have a bit more faith in those less experienced programmers, but I wasn’t talking only about them. […] I’m not writing a newbie programmer tutorial here, but just sharing the experiences I have had regarding the topic…

The problem I have is that your experiences could put both newcomers and experienced programmers on the wrong track (but most likely the former). The way you describe it, the advice about premature optimization is incorrect for certain aspects of game development, advocating to optimize things without prior information about whether it will be gainful or not, right?

Now consider the possibility that you’ve been fairly lucky with your assumptions and it never led to serious project delays. Or take it as a compliment; intelligent people are more likely to make better decisions on a complex issue having only a little prior info. But with all due respect, that doesn’t prove that your advice is any good.

In my experience development is faster to some degree if you stick to Knuth’s advice. My number one experience is being the lead developer of SwiftShader, a real-time software renderer. And I believe that’s relevant because performance is never high enough and there are always more features to implement. So I can’t afford to waste any time optimizing something I’m not sure will give me any significant gain. In almost a decade the only recipe that has proven to work is: profile, prototype, optimize.
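
To be concrete about the “profile” part, here’s the kind of trivial first-pass measurement I mean (just an illustration, not SwiftShader code; a real profiler tells you far more):

```cpp
#include <cstdio>
#include <ctime>

// Trivial scoped timer: measures a block of code and prints the result.
struct ScopedTimer
{
    const char* name;
    std::clock_t start;

    explicit ScopedTimer(const char* n) : name(n), start(std::clock()) {}

    ~ScopedTimer()
    {
        double ms = 1000.0 * double(std::clock() - start) / CLOCKS_PER_SEC;
        std::printf("%s: %.2f ms\n", name, ms);
    }
};

void suspectedHotPath()  // hypothetical function you suspect is slow
{
    ScopedTimer timer("suspectedHotPath");
    // ... the work you want to measure ...
}
```

If the number that comes out is insignificant, you just saved yourself an optimization effort; if it isn’t, now you know where to prototype alternatives.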

You could argue that I’ve proven nothing either, but then I’d have to ask you to find a faster software renderer with the same feature set that has been written by a small team. I don’t want to make this any more personal than necessary though. But keep in mind that Knuth has also been awarded the Kyoto Prize for lifetime achievement, so you still have a lot of proving and disproving to do.

…hoping to spawn some fruitful conversations.

I think you definitely achieved that already. :yes: Also please note that even though I don’t fully agree with you I certainly appreciate that you share your experience and defend it. :happy: Maybe in the end we all learn something. It already made me have a closer look at the exact definition of premature optimization, only to discover that it applies exactly as much to game development though…

Thanks.

JarkkoL 102 Jan 09, 2009 at 17:35

If you had any experience in game development, I might put some value on your comments, but you simply got no clue what works and doesn’t work in game projects. You need to understand that different types of projects have different constraints and you have to have a different strategy regarding optimization too. It’s simply ridiculous and arrogant for you to tell me what kind of strategy should be taken in game development without any experience in game development or understanding of game project constraints, don’t you think?

Btw, what do you try to achieve with provocative comments like “only to discover that it applies exactly as much to game development though”? How did you discover that? Do you imply that this conversation somehow proved your point, or could it rather be that I managed to piss you off in the previous discussion and now you’re trying to return the favor?

Nick 102 Jan 10, 2009 at 03:06

@JarkkoL

If you had any experience in game development…

Easy there. What makes you think I have no experience in game development? Just because SwiftShader is my number one experience doesn’t mean I have no other experience. Besides, what’s a game developer anyway? We’re all just part of the machine so please let go of the idea that you’re the most important cog, ok?

The whole issue with your arguments is that they are not arguments. I appreciate that you share your experience, truly I do, but it doesn’t mean that you’re right and Knuth is wrong. I’d really like to see some sound arguments first (talking about code “goo” was a nice attempt but easily refuted, sorry).

…you simply got no clue what works and doesn’t work in game projects.

Just for the record, I was the first non-U.S. NVIDIA intern, and at TransGaming we deal with optimizing actual games and implementing gaming APIs and frameworks on a daily basis. Also, as a bachelor project I wrote an H.263 decoder using SSE2 assembly and the GPU, and my master’s project was a fast dynamic compiler. Hey I don’t mean to boast but if you’re telling me I’m clueless about optimization you might as well call the pope an atheist. :huh:

You need to understand that different types of projects have different constraints and you have to have a different strategy regarding optimization too. It’s simply ridiculous and arrogant for you to tell me what kind of strategy should be taken in game development without any experience in game development or understanding of game project constraints, don’t you think?

The reason I mentioned SwiftShader first is because it has higher performance demands than any game engine. If you have trouble accepting that as fact please consider this: Michael Abrash who basically wrote the bible of game optimization, later developed Pixomatic which he describes as “perhaps the greatest performance challenge I’ve ever encountered”.

Btw, what do you try to achieve with provocative comments like “only to discover that it applies exactly as much to game development though”?

How is that provocative? That’s merely an opinion. As the discussion progressed I looked closer at the definition of premature optimization and found that I was able to refine the arguments without making exceptions to the rule. The only reason I can see why you could say it’s provocative is because you want me to look provocative, which is actually provocative of you.

…could it rather be that I managed to piss you off in the previous discussion and now you’re trying to return the favor?

Why would I be pissed off? The discussion is about Knuth’s words, not mine, I’m just repeating and interpreting them. They’re in line with my experience but that doesn’t make it personal for me. I can see why it’s personal for you though. You’re questioning what is generally accepted as truth, based only on your experience. Questioning something is absolutely fine, but I expected some more technically founded arguments as well.

My only goal is to get the best possible advice on optimization and to share that with the rest of the forum readers. If that means abandoning my own opinion, great! :happy:

JarkkoL 102 Jan 10, 2009 at 10:53

@Nick

What makes you think I have no experience in game development?

If you had, we would be having a completely different conversation here. Also in your public info you list no experience in game development. Am I wrong? If you call a 2-month internship “experience” in game development, well, umm, ok?
@Nick

but it doesn’t mean that you’re right and Knuth is wrong.

Haha, don’t try to put me up against Knuth ;) Don’t try to twist this conversation into “me and other great thinkers of the time disagree with you”. It’s simply me disagreeing with you, not Knuth. You obviously don’t understand what Knuth was saying if you think I’m disagreeing with Knuth. Or you don’t understand what I’m saying.
@Nick

Hey I don’t mean to boast but if you’re telling me I’m clueless about optimization you might as well call the pope an atheist. :huh:

You are clueless about optimization in GAME PROJECTS. The problem is that, due to your lack of experience in GAME PROJECTS, you don’t see why that would make any difference compared to any other project. Read Mick West’s article I posted in the earlier thread about early optimization to understand why your optimization strategy in GAME PROJECTS needs to be different. Maybe we should have a new mantra along the lines of “overgeneralization of software practices is the root of all evil.”
@Nick

How is that provocative?

I have been around the block enough to know when people try to provoke in forums.
@Nick

Why would I be pissed off?

I don’t know? Because I dare to disagree in public forums within your territory of optimization where you think you know it all? Because I dare to say you got no experience in game development (which is true), which makes you appear less experienced amongst your peers, where you have invested almost 1000 posts’ worth of time? I don’t know you personally to know what could have made you pissed, but I have been around various forums enough to know what can get people generally pissed, including several discussions with you in the Flipcode era.
@Nick

You’re questioning what is generally accepted as truth, based only on your experience.

Hold on, what am I questioning here? I’m questioning your idea of “prototyping” performance in game projects. That’s your idea, not Knuth’s. Notice that I always say “in GAME PROJECTS”? It might be a good idea in other types of projects. Anything else? It might be good for you to revisit where we have actually disagreed before continuing further.

Nils_Pipenbrinck 101 Jan 10, 2009 at 12:15

@JarkkoL

You are clueless about optimization in GAME PROJECTS. The problem is that, due to your lack of experience in GAME PROJECTS, you don’t see why that would make any difference compared to any other project. Read Mick West’s article I posted in the earlier thread about early optimization to understand why your optimization strategy in GAME PROJECTS needs to be different. Maybe we should have a new mantra along the lines of “overgeneralization of software practices is the root of all evil.”

Having worked on game projects (commercial) and on software-rendering technology not that different from SwiftShader, I can assure you that the optimization METHODOLOGY is almost the same.

The optimization techniques you apply to make your hot path faster are very different though. A game would not benefit from an improvement in a dynamic code generator, as there is not much to compile on the fly, while a software renderer would not benefit from pooling memory allocations, because there aren’t many allocations to start with.

However, one thing applies to both products:

You have to get an indication of the expected performance of your solutions early on in development.

Think about this: you add a nifty feature to a game that takes up 80% of the CPU time. Your team likes that feature and you don’t optimize further because you want to test it with the game mechanics first. Premature optimization is evil, after all. Who knows if it will end up in the game? You postpone the optimization phase, present the feature to your workmates, and everyone likes it.

What once started as a nifty little feature, or even a joke, may become an integral part of the game mechanics that way.

Now what happens if, at the end of the development cycle, you start optimizing that thing and the best you can do is a factor-of-two improvement, although you **expected** that a factor of 10 should be a piece of cake?

Now you have a problem. Tons of code and assets may have been developed on top of your feature. Removing it is no longer an option, and rewriting/changing the feature is a huge task because there is so much code and so many assets that need to be changed. You’ll get a slip in your schedule, unhappy bosses talking to unhappy publishers. Your girlfriend may get unhappy as well, because you’ll spend the next two months in the office in order to get the thing working fast enough - somehow…

All this could be prevented if you spend two or three days on the algorithm to make sure it will work out in practice and be fast enough. Is such an optimization premature?

Btw - I haven’t made up this “nifty feature” story. It’s one of the things that happened to me. Of course one could argue that adding experimental things to a game is something you should never do, but heck - game development is about making games that are fun to play, and in my opinion you have to experiment and prototype. Good ideas don’t emerge from a vacuum, and great ideas often start as a simple experiment or even as an accident.

JarkkoL 102 Jan 10, 2009 at 12:59

That’s a good practical example of what I have been trying to explain to Nick. If we were talking purely about programmer effort, postponing the optimization further might have been an option, but because there is content created by artists/designers which relies on the feature and on certain performance characteristics of it, you have to optimize early; it would be very risky not to.

Also, like Mick explained in his article, not optimizing code early might make it unfeasible for testing gameplay. E.g. if your game performs badly and you get a sluggish <10 fps while testing gameplay, you don’t necessarily know if your gameplay is actually fun.
@Nils Pipenbrinck

All this could be prevented if you spend two or three days on the algorithm to make sure it will work out in practice and be fast enough. Is such an optimization premature?

Earlier in this thread I said that I would personally leave the optimization to a later stage (in a game project, if that’s unclear to someone) when I’m confident that I have reached the ~30% mark in performance from the final implementation. In a non-game project I might not necessarily optimize even that far, because there isn’t such a big risk of wasting team effort.

Sol_HSA 119 Jan 10, 2009 at 20:01

@Nils Pipenbrinck

All this could be prevented if you spend two or three days on the algorithm to make sure it will work out in practice and be fast enough. Is such an optimization premature?

Very insightful.

This still doesn’t invalidate the “first make it work, then make it fast” paradigm, but the trick is when to make it fast =)

I’ve heard similar stories, from producer point of view:
“Can it go faster?” “We’ll make it faster later.”
“Can it go faster now?” “We’ll make it faster later.”
“Can you please make it go faster?” “We’ll make it faster later.”
“We’re releasing soon, is it getting faster?” “We can’t make it faster.”

vrnunes 102 Jan 10, 2009 at 21:05

It is sad to see some good people discussing to the point of shifting the discussion to personal attacks. Calm down, boys. :-)

I think everybody in here made good points in this discussion, so let’s consider them.

My humble opinion is that there are some different types of optimization, some may be applied at project start, some not.

For example, algorithmic optimizations I like to apply from the very first implementation. Machine optimizations I procrastinate until the end.

And after all, I generally optimize only based on the expected final performance. Like, if I aim at getting 60 fps, I’ll try to implement things and keep track of the overall fps; if it drops after a feature is implemented, I look into that immediately (algorithmically first, machine after).

So I think that *some* optimization is nice to be done from the start, but you really don’t need to be paranoid on that. If you keep an overall good code design, you’ll be able to optimize things later.

v71 105 Jan 11, 2009 at 11:19

I would never have thought that my little insignificant blog would have spawned such a discussion. I have deleted the blog, since I think it’s better to work in silence and be more restrictive about sharing code in the future.
I have reworked some of my functions and I can post results about which is faster and which is not; maybe some day I will. By the way, that blogger ate some symbols and this led to more confusion and incorrect code.
Bye

Nick 102 Jan 12, 2009 at 14:14

@JarkkoL

Also in your public info you list no experience in game development.

Oh, dang, I must have forgotten to mention that I work for TransGaming then. :unsure:

Seriously now, do you honestly think game development, in your strict definition, is a field entirely of its own in the whole of computer science, one that doesn’t share the same methodologies? And that someone with profound knowledge and experience in assembly programming, compiler technology, multimedia, graphics drivers and embedded systems has no right to question a REAL game programmer’s experience? I think it’s really a bit sad that you have to make it personal like this in an attempt to win an argument.

It also makes me wonder what kind of experience you have outside of game development. Clearly, to be able to state that some generic software practices don’t apply to game development must mean you have lots of experience in other fields as well, just to know how incomparably different it really is. Since I had to bare myself completely (except for what I’ve been working on in the last year, that’s not public yet), I think it’s only fair that you do the same.

Haha, don’t try to put me up against Knuth ;) Don’t try to twist this conversation into “me and other great thinkers of the time disagree with you”. It’s simply me disagreeing with you, not Knuth.

Then it must have been someone else who said “Unfortunately mr. Knuth’s words have been taken out of the context and people never talk about the “We should forget about small efficiencies, say about 97% of the time” part, and he wasn’t exactly working on games where you have to work with content people which twist these numbers a bit (:”. :unsure:

Back to serious: You’re totally right that Knuth’s words are often taken out of context. But the “forgetting about small efficiencies, say about 97% of the time” part isn’t even a tenth of the complete context. First please recall that I “agree that game development is different” so I’m not arguing about the percentage value itself (and with different I mean different from typical software development, not different from all software development). But you’re implying that it means that it’s ok to optimize certain things prior to knowing its effects (and this somehow only applies to game development).

The way Knuth really meant it (which I support), premature optimization is always a bad thing. Even in those 3% of time. Even when content creators need an early idea of final performance. Simply because of the definition of premature. An optimization is not premature if it’s mature. Simple as that. And you can and should turn a premature optimization into a mature one, by profiling and prototyping.

Since you don’t seem to agree with this, you don’t agree with Knuth. So don’t try to say it isn’t so just so you can make it personal and deliver a low blow by saying I’m no REAL game developer so I can’t possibly know a thing about game optimization.

You obviously don’t understand what Knuth was saying if you think I’m disagreeing with Knuth. Or you don’t understand what I’m saying.

Or, you don’t know what Knuth’s saying. But hey, no need to trust me, let’s see what the man himself says:

@Donald E. Knuth

There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3 %. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified. It is often a mistake to make a priori judgments about what parts of a program are really critical, since the universal experience of programmers who have been using measurement tools has been that their intuitive guesses fail.

(The underlining is mine.) I can’t find anything that contradicts what I’m saying. So can we get past the “I agree with Knuth but not with you” already?

Read Mick West’s article I posted in the earlier thread about early optimization to understand why your optimization strategy in GAME PROJECTS needs to be different.

You meant his article about mature optimization, right?

I agree with it almost entirely, except most importantly not with this: “However, if after that [profile-driven optimization] the code is still too slow, you will now find it is not because of a few weighty functions, but rather a general problem with all of the code. The code is just generally inefficient.” Even at that point 90% of execution time is spent in 10% of the code (give or take). So obviously you should still be able to identify that minority of code and concentrate on that.

Note that I’m also not denying the existence of “sick code” caused by belated pessimization. I’m just saying that optimizing a priori is like taking chemo for a benign tumor. You should strive to optimize at the right time, by profiling early and if necessary prototyping. But when still in doubt (e.g. for a feature which can only be profiled in the fully functional product), one should still hold off the optimization. The fear that it will blur into the rest of the application is unfounded.

And if someone doubts everything all the time it means he’s a n00b trying to implement an MMORPG and there is simply no saving him. :lol:

Oh and for the record, the person who coined “belated pessimization is the leaf of no good” isn’t a REAL game developer… :ninja:

Because I dare to disagree in public forums within your territory of optimization where you think you know it all?

Do I? I recall admitting that I still regularly discover that I’ve optimized something prematurely. So no, no reason to get pissed off.

Because I dare to say you got no experience in game development (which is true), which makes you appear less experienced amongst your peers, where you have invested almost 1000 posts’ worth of time?

I might not have game development experience in the narrow sense, but I’ve never hidden that either. So in as far as I actually care about my reputation, you haven’t dented it one bit. Nothing to get pissed off about either.

I don’t know you personally to know what could have made you pissed…

You simply wrongfully assume I’m pissed. Maybe it’s because you can then play the victim role, I don’t know. Either way you’re trying to turn what started as a purely technical discussion into a personal one because you’re running out of technical arguments.

Hold on, what am I questioning here? I’m questioning your idea of “prototyping” performance in game projects. That’s your idea, not Knuth’s.

You’re questioning that and Knuth’s words (else you would have certainly responded negatively when I asked “The way you describe it, the advice about premature optimization is incorrect for certain aspects of game development, advocating to optimize things without prior information about whether it will be gainful or not, right?”). So now you’re just trying to focus attention on only part of the disagreement, where you think you still have a chance by making it personal.

Besides, all I’m saying is that prototyping is a tool that can help prevent premature optimization, and I’m pretty sure Knuth agrees. If you want to call that my idea and question it, fine, but I have yet to see any counter arguments (just because you don’t use prototyping during your REAL game development doesn’t make it a convincing closing argument). And I also already tried to explain that prototyping comes in many forms (small experiments, numerical estimation, engine versions, source control branches, etc.) In fact your description of “once you start optimizing, it becomes your final implementation” could even fit the evolutionary prototyping definition (although that’s riddled with its own set of pitfalls). But if it works for you, great, I’m just advising more trusted forms of prototyping.

Either way I’m thinking you don’t even disagree all that much with me. You just focus more on warning people about optimizing later than they should have while I focus on warning people about optimizing earlier than they should have. Different direction, same goal.

Nick 102 Jan 12, 2009 at 14:21

@Nils Pipenbrinck

Having worked on game projects (commercial) and on software-rendering technology not that different from SwiftShader, I can assure you that the optimization METHODOLOGY is almost the same.

Thank you.

All this could be prevented if you spend two or three days on the algorithm to make sure it will work out in practice and be fast enough. Is such an optimization premature?

At that point, yes, but it quickly turned into a situation that called for provisional profiling/prototyping and separation of concerns. However, I recognise that it happens to the best and no matter what methodology you use there’s always a level of uncertainty. Especially when working with new technology the only way to gain experience is the hard way.

Also, sometimes you spend weeks or months finding the right approach, and when you finally got it, it looks so simple and obvious that you hardly understand why it had to take longer than a couple of days. In reality it’s just a long process of trial and error you have to go through to gain ‘negative information’ before you arrive at the optimal solution. So don’t blame yourself for hindsight bias.

But most importantly I think it doesn’t justify premature optimization. Two wrongs don’t make a right.

If instead of just becoming more aware of situations that ask for early profiling and prototyping you started to accept the idea that premature optimization is a good thing it would most definitely cost you more than two months. The difference is that this time would just be spread more evenly over the course of the project (researching optimizations, regression testing, debugging, maintenance, reoptimizing, etc.). But obviously this time could have been used in much more productive ways instead.

Good ideas don’t emerge from a vacuum and great ideas often start as a simple experiment or even as an accident.

Amen.

Nick 102 Jan 12, 2009 at 14:59

@v71

I would never have thought that my little insignificant blog would have spawned such a discussion. I have deleted the blog, since I think it’s better to work in silence and be more restrictive about sharing code in the future.

I’m sorry to hear that. :closedeye I actually applaud anyone willing to take the effort to write a blog! It’s just that the optimizations you presented were not as foolproof as an inattentive reader might have expected. But all it really needed was a small warning about the experimental nature, or some code comments about possible disadvantages of some of the functions. I’m looking forward to your next blog though! :yes:

By the way, I found that this site’s Daily Code Gem is an excellent place to get your code fully scrutinized before using it in production code. :happy:

alphadog 101 Jan 12, 2009 at 15:50

@v71

I would never have thought that my little insignificant blog would have spawned such a discussion. I have deleted the blog, since I think it’s better to work in silence and be more restrictive about sharing code in the future.

That’s too bad. :sad:

This discussion, while heated, was one of the best we’ve had in the forums for a while…

Grumpy 101 Jan 12, 2009 at 18:19

I agree,

I checked on the blog the other day to find it had moved without saying where… Too bad, I was interested in seeing a different approach to such a project.

But you can’t take any criticism too personally in this forum, for there are different levels of programmers here, from complete novices to old hacks.

Yes, I have seen this particular thread get a little nasty in posts. You must remember everybody comes from different training/schooling and experience, so optimization of code will have many different versions.

So play nice and support the programming challenged.
(and no, I am not implying v71 is programming challenged)

Grumpy-

JarkkoL 102 Jan 12, 2009 at 22:06

@Nick

Seriously now, do you honestly think game development, in your strict definition, is a field entirely of its own in the whole of computer science, one that doesn’t share the same methodologies?

Dear Nick, I said game development has different constraints which require different strategies for optimization. The problem here is that it doesn’t matter what I say, because you twist my message out of context and argue with that. That doesn’t make any sense and is a waste of time for us both. This isn’t a constructive debate. I can’t find any reason why you would want to do that except that you are somehow offended and have fallen into such silly yet very frustrating tactics.
@Nick

The way Knuth really meant it (which I support), premature optimization is always a bad thing. Even in those 3% of time.

It was actually me who said in the other optimization thread that “Nothing justifies premature optimization of course because by definition it’s premature”. The question is what is premature, and I think it depends on the project. I guess you don’t agree with that? Note that I never said “Optimize prematurely!” I said that in game projects you need early optimizations because of the various reasons I listed. Do you see the difference?
@Nick

I agree with it almost entirely, except most importantly not with this: “However, if after that [profile-driven optimization] the code is still too slow, you will now find it is not because of a few weighty functions, but rather a general problem with all of the code. The code is just generally inefficient.” Even at that point 90% of execution time is spent in 10% of the code (give or take). So obviously you should still be able to identify that minority of code and concentrate on that.

For a 1-million-line code base, 10% is still 100,000 lines of code you need to identify, scattered all over the code base. At this point identifying the badly performing code becomes very expensive. In my opinion it’s better to “learn to write efficient code straight off the bat” so that you don’t end up in the situation where you have to identify it. Note that I don’t say you have to do crazy performance optimization tricks which make your code completely unreadable and bloated. I say, learn good programming practices and understand the performance characteristics of your target platforms to be able to write efficient code straight.

Imagine a scenario where a programmer doesn’t understand how memory access patterns impact the performance of code. He will make design decisions which lead to degraded performance because of the lack of this information. If he had been aware of the performance characteristics of memory access patterns he would have made different decisions, which would probably have taken him around the same amount of time to implement. I know you agree with this, but don’t just disagree because it’s me who is saying it ;)
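
To make that scenario concrete, a small sketch (not from any real code base; the exact cost depends on your platform):

```cpp
#include <cstddef>
#include <vector>

// Summing a width*height grid stored row-major in one flat array.
// Both versions take the same effort to write; only the access pattern differs.

long long sumRowOrder(const std::vector<int>& grid, std::size_t w, std::size_t h)
{
    long long total = 0;
    for (std::size_t y = 0; y < h; ++y)        // walks memory contiguously,
        for (std::size_t x = 0; x < w; ++x)    // cache line after cache line
            total += grid[y * w + x];
    return total;
}

long long sumColumnOrder(const std::vector<int>& grid, std::size_t w, std::size_t h)
{
    long long total = 0;
    for (std::size_t x = 0; x < w; ++x)        // strides w elements per access,
        for (std::size_t y = 0; y < h; ++y)    // missing the cache on large grids
            total += grid[y * w + x];
    return total;
}
```

A programmer who knows this picks the first form by default; one who doesn’t may bake the second pattern deep into a design that’s painful to unpick later.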
@Nick

Either way you’re trying to turn what started as a purely technical discussion into a personal one because you’re running out of technical arguments.

On the contrary, I did try to put some sense into this discussion by saying straight out what I think actually drives this conversation. But because you are so emotionally charged about this debate, I think there is no way I could do that. You don’t want to argue with what I’m saying but with me personally, and that doesn’t lead to any fruitful conversation. Like you said, we probably don’t even disagree much here.

Nils_Pipenbrinck 101 Jan 13, 2009 at 01:00

+1 for a great thread.

We will never agree, but who cares? It’s about knowledge exchange.

Don’t delete your blog just because of this thread, mate.

Nick 102 Jan 13, 2009 at 02:23

@JarkkoL

Dear Nick, I said game development has different constraints which require different strategies for optimization.

Dear JarkkoL. You didn’t answer my question. We all agree it’s different, the point is different from what?

And if you still think my game development experience is relevant to the discussion, I’d also still like to hear all about your non-game-development experience.

It was actually me who said in the other optimization thread that “Nothing justifies premature optimization of course because by definition it’s premature”. The question is what is premature, and I think it depends on the project. I guess you don’t agree with that? Note that I never said “Optimize prematurely!” I said that in game projects you need early optimizations because of the various reasons I listed. Do you see the difference?

Then we both agree on that. But the problem is you don’t practice what you preach; you say premature optimization is bad, but prototyping (as a tool to prevent premature optimization) is somehow not an option (and you gave me no further arguments except that I “simply got no clue what works and doesn’t work in game projects”). Also, you don’t consider an optimization premature if it gets you within the ~30% mark of the final performance (which is exactly the kind of reasoning that will lead to premature optimization). Furthermore, I repeatedly said that you give the appearance that early optimization without knowing its effects can be necessary, especially in game development, and you never denied or refined that (while it’s just an alternate definition of premature optimization). Lastly, you went on about “early optimization” without proper warning that it requires profiling to make sure it’s a mature optimization.

So how can I not come to the conclusion that you take premature optimization a bit loosely? No offence, but it’s a bit like agreeing that killing is a sin while you put guns in kids’ hands and let them point them at each other. :huh:

For a 1-million-line code base, 10% is still 100,000 lines of code you need to identify, scattered all over the code base. At this point identifying the badly performing code becomes very expensive. In my opinion it’s better to “learn to write efficient code straight off the bat” so that you don’t end up in the situation where you have to identify it.

I agree it can be a lot of work. But you have to realize that if it came to that point while still taking all of Knuth’s advice into account, the programmer has a serious lack of experience to begin with. The only other option for him, which then must be what you’re advising, is to take the risk of premature optimization.

My stance is that two wrongs don’t make a right. It’s better to start working on that pile of optimization work ahead of you than to make the mistake of optimizing prematurely.

You must understand that my opinion is also based on the fact that people have a natural tendency to worry too much about optimization rather than too little, especially in game development. As Nils proves, you can occasionally optimize too little when fearing premature optimization too much, but I can’t even begin to count the number of threads where someone tries to optimize something that is almost certainly not going to matter in the final product. So it’s important not to give the impression that premature optimization isn’t truly the root of all evil. Learning that some optimizations shouldn’t be needlessly delayed comes quite easily with experience (and again leads to premature optimization when overly confident).

There are also numerous ambitious projects that never get finished where one of the main causes is putting more work into the performance of each feature than necessary. I can’t recall ever hearing about fully functional projects that were abandoned because it would require too much delayed optimization though. Countless people agree that you should first get it working, then get it fast. If you’re inexperienced it’s simply always going to take longer to finish the project, but if you give in to premature optimization it’s going to take even longer or just never get finished.

I say, learn good programming practices and understand the performance characteristics of your target platforms to be able to write efficient code straight.

Stating the obvious. And isn’t one of the ways to achieve that to prototype your features and profile them? :unsure:

Imagine a scenario where a programmer doesn’t understand how memory access patterns impact the performance of code. He will make design decisions which lead to degraded performance because of the lack of this information. If he had been aware of the performance characteristics of memory access patterns he would have made different decisions, which would probably have taken him around the same amount of time to implement. I know you agree with this, but don’t just disagree because it’s me who is saying it ;)

Indeed I agree. But you say nothing about how this programmer mysteriously gains knowledge about access pattern performance, or even how he knows that he should gain some insight into it first. The way you describe it he should teach himself about it regardless of just how much he would gain from it. Note that I agree that without this knowledge performance will degrade, but it might just be a totally insignificant amount. So even when a faster implementation would take equal time to implement (which I think is a bit of an optimistic assumption of yours), the time to learn these things thoroughly should not be underestimated. In my opinion you should first profile to know whether it’s even an issue, prototype to get an idea of the effects of various optimization approaches, and only then you integrate the optimal algorithm. You might have the impression that this takes a lot of time, but actually in the majority of cases you stop after profiling proves that it’s insignificant.

On the contrary, I did try to put some sense into this discussion by saying straight out what I think actually drives this conversation. But because you are so emotionally charged about this debate, I think there is no way I could do that. You don’t want to argue with what I’m saying but with me personally, and that doesn’t lead to any fruitful conversation.

I totally apologize for whatever I said that prevented you from putting some sense into this discussion. And rest assured I never felt like you stepped on my soul, I just had the impression that you tried to take advantage of my lack of real game development experience without taking my other experience into account. Anyway I hope that this post gives you lots and lots of purely technical arguments so you can reply to them without feeling forced to get personal. :blush:

JarkkoL 102 Jan 20, 2009 at 10:31

@Nick

Dear JarkkoL. You didn’t answer my question. We all agree it’s different, the point is different from what?

Off the top of my head: different from any project which 1) has a large number of non-engineers who dominate the project costs, where your work as an engineer has a major influence on how efficiently they work, 2) has to result in an entertaining and fun product which, amongst other things, depends on the performance, and 3) has very constrained performance requirements defined by console manufacturers (i.e. if your game doesn’t run at at least a specific framerate, you fail and can’t ship the game). These things influence what I consider “premature” optimization in a project.
@Nick

Also, you don’t consider an optimization premature if it gets you within ~30% of the final performance (which is exactly the kind of reasoning that will lead to premature optimization).

Could you be more specific about why you think it’s “the kind of reasoning that will lead to premature optimization”? Performance is a very important characteristic of game projects, which is why you have to make sure early on that you reach your performance goal.
@Nick

You must understand that my opinion is also based on the fact that people have a natural tendency to worry too much about optimization rather than too little, especially in game development.

According to an article I recently read in Gamasutra, on average 60% of game project cost goes to rework (and that’s not engineering rework obviously). From my experience one major influencing factor is the lack of knowledge about performance characteristics of the game, e.g. you have to cut/optimize content to hit performance constraints. That’s expensive.
@Nick

So it’s important not to give the impression that premature optimization isn’t truly the root of all evil.

My intention in this thread isn’t to guide the OP or anyone else in any direction or to give any impression to anyone. I’m only sharing my experiences, and I’m not going to twist them in order to guide people onto some track I believe is correct. I understand that you probably want to educate less experienced programmers about things you think are good/bad in software engineering and are afraid that they can’t judge for themselves. I don’t try to educate people here, and to be honest, neither should you, because I think it’s much more valuable to share your experiences regarding optimization than to try to educate. Personally, even with a fair amount of game development experience under my belt, I don’t want to risk pointing people in the wrong direction. My message is: I have done it this way and find these things good in practice, but YMMV.
@Nick

I can’t recall ever hearing about fully functional projects that were abandoned because it would require too much delayed optimization though.

Have you heard of projects being delayed? Have you heard of how many shipped game projects actually make profit? Do you know how much delaying a project costs?
@Nick

But you say nothing about how this programmer mysteriously gains knowledge about access pattern performance, or even how he knows that he should gain some insight into it first.

A good start would be to read your target platform’s performance-related material ;)

Ceee4d1295c32a0c1c08a9eae8c9459d
0
v71 105 Jan 20, 2009 at 10:43

I just wanted to say that I tested my ‘flawed’ functions on a PC running Vista, on Linux, on a Mac, and on a 64-bit SPARC station. All of my functions worked flawlessly. I also have a nice graph (not shareable) with all the timing tests; Nick’s comments ignited my curiosity about running these functions on different machines to test speed gains or losses. I also converted all divisions into multiplications by the inverse, and you would be shocked by the results (which, this time, I reserve for myself).
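(For anyone curious what that rewrite looks like, here’s a toy version with made-up names; whether it actually pays off depends on the compiler, the CPU, and whether the small accuracy difference between a/b and a*(1/b) matters for your data.)

```cpp
#include <cstddef>

// One divide per element.
void scale_by_divide(float* v, std::size_t n, float d)
{
    for (std::size_t i = 0; i < n; ++i)
        v[i] = v[i] / d;
}

// Single divide hoisted out of the loop, then a cheaper multiply per element.
// Not bit-identical to the version above, which is part of why it needs testing.
void scale_by_reciprocal(float* v, std::size_t n, float d)
{
    const float inv = 1.0f / d;
    for (std::size_t i = 0; i < n; ++i)
        v[i] = v[i] * inv;
}
```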
Just to close this thread forever: I received some disturbing emails where people insulted me, among other funny stuff. For the sake of these persons I want to say that I have an education, I work in a commercial game development team (no, I am not disclosing which one), and my flawed, slow, useless functions have been used quite a bit. I would have expected this kind of immature behaviour more in the Gamedev forum; there is no need to insult someone through email because of some code.

6673a7d3bfd3d1db5e05c5676cc040b6
0
Goz 101 Jan 20, 2009 at 13:11

@v71

Just to close this thread forever: I received some disturbing emails where people insulted me, among other funny stuff. For the sake of these persons I want to say that I have an education, I work in a commercial game development team (no, I am not disclosing which one), and my flawed, slow, useless functions have been used quite a bit. I would have expected this kind of immature behaviour more in the Gamedev forum; there is no need to insult someone through email because of some code.

Oh the joys of the internet. I’d drop an abuse message to their e-Mail provider if I were you ;)

A0c9c0649c5deacc0ae3b7f7721c94d2
0
starstutter 101 Jan 20, 2009 at 15:38

@Goz

Oh the joys of the internet. I’d drop an abuse message to their e-Mail provider if I were you ;)

definitely not a bad idea

99f6aeec9715bb034bba93ba2a7eb360
0
Nick 102 Jan 23, 2009 at 13:18

@JarkkoL

Off the top of my head, different from any project which 1) has a large number of non-engineers who dominate the project costs, and your work as an engineer has a major influence on how efficiently they work, 2) has to result in an entertaining and fun product which, amongst other things, depends on performance, 3) has very constrained performance requirements defined by the console manufacturers (i.e. if your game doesn’t hit a minimum framerate, you fail and can’t ship the game). These things influence what I consider “premature” optimization in a project.

Thanks for the detailed answer. It’s nothing radically different from other projects though.

1) It’s irrelevant that the people depending on you are non-engineers. Tons of projects (if not every single one) have people depending on your work. An engine is essentially a framework, and for any framework the results of its users (engineers or non-engineers) vary with the qualities of the framework. One example of a similar project is a programming language implementation. It could be interpreted, JIT-compiled, natively compiled, include static and/or dynamic optimizations, etc. All of this is of importance to the user, but it would be premature (read: cost way too much time) to implement a native optimizing compiler when an interpreter would do. For instance a friend of mine implemented a network packet router/firewall scripting language that was blazing fast but interpreted. All the heavy lifting was really in the (powerful) commands, so he focussed on optimizing those, after careful profiling. Others depended on this project early on, so he shouldn’t have wasted any time looking for faster translation of the language. By the way, if you have a dozen artists already working on production assets while you don’t even have an early version/prototype of the engine, something is seriously wrong with your schedule.

2) Again this isn’t something exclusive to game projects. Just like games are no fun when performance isn’t up to snuff, handheld devices for example are no fun if they aren’t responsive enough. Performance is a constant worry in all middleware too; if you can’t live up to the expectations it won’t sell.

3) I agree consoles create a bit of a cutthroat situation, but that’s true for anything that has to run at a predefined performance level on a predefined (minimum) system configuration. Take codecs for example: there’s no excuse for missing the client’s specification. In fact console games are a little less stringent than that, because there are many parameters you can scale down. You might not get the best reviews for the graphics, but at least you can still sell it if the gameplay is at all interesting. And as the people at Larian Studios confirmed to me, modern consoles behave much in line with expectations if you do some prototyping.

So to get back to the point, I do agree that games have a combination of performance requirements that is rarely seen with other projects, but individually they’re not uncommon and even relatively modest, predictable and flexible. So it’s hardly convincing that Knuth’s advice applies any less to games. Besides, at the time when he wrote it computers were not nearly as fast as today and performance was a serious concern for even the simplest looking application.

I don’t agree that any of the above aspects should directly influence what is considered premature. No matter how high the demands, spending a month writing a math library is a waste of time if you’re not sure which functions will even be called, and how frequently. I’m much inclined to say that you should do more profiling and prototyping instead of guessing so you can focus your effort better and reach that design goal in time.

And finally, someone who has experienced such performance requirements in non-game projects is surely capable of identifying what would be premature or not in a real game project.

Could you be more specific about why you think it’s “the kind of reasoning that will lead to premature optimization”? Performance is a very important characteristic of game projects, which is why you have to make sure early on that you reach your performance goal.

Yes, it’s important to be aware of performance early on. But simply aiming to get within 30% of final performance easily leads to wrong assumptions. For example, take float-to-int conversion (it applies to much more complicated things too): if you decide that using some bit manipulation code would get you within 30% of ‘final’ performance before knowing that it’s actually a hotspot, it would be a waste of time and could introduce bugs that only pop up much later. So it’s pretty meaningless to talk about final performance for individual optimization opportunities, unless you start assuming that each of them is equally important, which is the kind of reasoning that leads to premature optimization.
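For reference, the kind of bit-manipulation conversion I’m talking about looks roughly like the sketch below. It assumes IEEE-754 doubles, a little-endian target and round-to-nearest FPU mode, and it rounds instead of truncating like a plain cast does, which is exactly the sort of behavioural difference that can pop up as a bug much later:

```cpp
#include <cstdint>
#include <cstring>

// "Magic number" trick: adding 1.5 * 2^52 shifts the integer part of v into
// the low mantissa bits of the double, which are then read back as an int32.
// Only valid for values that fit in 32 bits, and it rounds to nearest.
inline std::int32_t fast_ftoi(double v)
{
    v += 6755399441055744.0;                    // 1.5 * 2^52
    std::int32_t result;
    std::memcpy(&result, &v, sizeof(result));   // low 32 bits (little-endian)
    return result;
}

// The straightforward version the trick tries to beat; whether it actually
// does depends entirely on the compiler and CPU.
inline std::int32_t plain_ftoi(double v)
{
    return static_cast<std::int32_t>(v);        // truncates toward zero
}
```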

Your advice reminds me of Belady’s algorithm. The problem is that it cannot be implemented because it requires knowing the future (which is why it’s also known as the clairvoyant algorithm). Even though you added a margin of 30%, it requires a crystal ball to know how that final performance is distributed over the code. If you do know it, it means you have the final code, at which point your advice has no value either, or you have a prototype, but for some reason you still think that’s a bad idea.
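(For the curious, here’s a rough sketch of Belady’s algorithm, just counting misses for a known access sequence; the fact that the entire future sequence has to be an input is exactly why it can only be used offline, as a lower bound.)

```cpp
#include <cstddef>
#include <unordered_set>
#include <vector>

// On a miss with a full cache, evict the entry whose next use lies farthest
// in the future (or that is never used again). Requires the whole access
// sequence up front.
std::size_t belady_misses(const std::vector<int>& accesses, std::size_t capacity)
{
    if (capacity == 0)
        return accesses.size();                     // everything misses

    std::unordered_set<int> cache;
    std::size_t misses = 0;

    for (std::size_t i = 0; i < accesses.size(); ++i)
    {
        int page = accesses[i];
        if (cache.count(page))
            continue;                               // hit

        ++misses;
        if (cache.size() >= capacity)
        {
            int victim = *cache.begin();
            std::size_t farthest = 0;
            for (int cached : cache)                // find farthest next use
            {
                std::size_t next = accesses.size(); // "never used again"
                for (std::size_t j = i + 1; j < accesses.size(); ++j)
                    if (accesses[j] == cached) { next = j; break; }
                if (next >= farthest) { farthest = next; victim = cached; }
            }
            cache.erase(victim);
        }
        cache.insert(page);
    }
    return misses;
}
```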

According to an article I recently read in Gamasutra, on average 60% of game project cost goes to rework (and that’s not engineering rework obviously). From my experience one major influencing factor is the lack of knowledge about performance characteristics of the game, e.g. you have to cut/optimize content to hit performance constraints. That’s expensive.

And where does that article mention that the solution is to take the rule against premature optimization a little loosely? Again, two wrongs don’t make a right.

Let’s say you optimize everything you assume will bring you closer to final performance. This takes time, and before you know it the deadline arrives by which you should inform the art team of performance characteristics. There’s no time left to focus on hotspots that actually show up during profiling, and there’s certainly no time left for any prototyping since you already started with the final implementation. So your information is based on unoptimized hotspots and your expectations of future optimizations. Needless to say, the final performance characteristics could be very different from your early information, leading to lots of rework.

If instead of starting right away with the final implementation you first wrote a prototype, you’d get all the key features functional in minimal time, and there would be time left to focus on the profiled hotspots before you have to give the art team clear directions. Even if there’s some “goo” left it’s much easier to make a reliable estimate of how much that can be further optimized, than trying to estimate how much a remaining significant hotspot can be optimized by more complicated optimizations.

The code quality of a prototype is typically sub-standard, so it seems wasteful to not write production quality code “right off the bat”. But throwing that code away only represents a tiny bit of rework that gives you a valuable amount of information to avoid a huge amount of rework later in the project.

My intention in this thread isn’t to guide the OP or anyone else in any direction or to give any impression to anyone. I’m only sharing my experiences, and I’m not going to twist them in order to guide people onto some track I believe is correct. I understand that you probably want to educate less experienced programmers about things you think are good/bad in software engineering and are afraid that they can’t judge for themselves. I don’t try to educate people here, and to be honest, neither should you, because I think it’s much more valuable to share your experiences regarding optimization than to try to educate. Personally, even with a fair amount of game development experience under my belt, I don’t want to risk pointing people in the wrong direction. My message is: I have done it this way and find these things good in practice, but YMMV.

So you want to share your experience but have no arguments for why it works, other than that it appeared to be the outcome for you?

Don’t you see how pointless that is? If I come here with a question I do expect to be educated. Since indeed everyone’s mileage varies, the arguments why someone used a certain technique are far more important than knowing who has had success with it and who did not.

I don’t see anything wrong with trying to educate people. Best case, you helped someone. Worst case, you learn something yourself when your arguments get overturned, which hardly counts as a worst case. So why would you fear putting people in the wrong direction? Just sharing your experience and saying “your mileage may vary” risks pointing people in the wrong direction too (like having them optimize prematurely).

Have you heard of projects being delayed? Have you heard of how many shipped game projects actually make profit? Do you know how much delaying a project costs?

You have to distinguish between justified and unjustified delays. Half-Life 2, for example, was delayed by over a year but made millions in profit. If some of that delay was due to optimization work that still needed to be done but was ignored by higher management, I think that’s infinitely better than slipping off course in the middle of the project because you wasted too much time on early optimization and can’t convince anyone that it will ever get finished due to a lack of functional parts. For what it’s worth, the Source engine is one of the most efficient of its time.

I really don’t think we can hammer enough on getting functionality done first. That doesn’t mean there shouldn’t be any regard for performance early in the project either, but you certainly shouldn’t be thinking about performance yet when you can’t do any meaningful profiling. As soon as you can profile something to check whether it meets design goals, do it, but otherwise focus on functionality.

A good start would be to read your target platform’s performance-related material ;)

Sure, when you actually need the information (which could be early in the project) you need to know where to find the relevant bits. But knowing this information doesn’t mean you should apply it “off the bat” to every piece of code. Cache line size can have an important effect on performance, but if I optimized all my data structures and algorithms for that line size I’d have wasted precious time that I could have used for more crucial optimizations instead.
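To give one concrete example of what I mean (assuming 64-byte lines; the names are made up): padding per-thread counters to a full cache line so two threads never fight over the same line is a perfectly good optimization on a proven hotspot, and a waste of precious time if you sprinkle it over every structure in the codebase “just in case”.

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>

constexpr std::size_t kCacheLine = 64;   // assumed line size for the target

// alignas pads each counter out to a full line, so two worker threads
// incrementing neighbouring counters don't bounce the same cache line
// between cores (false sharing).
struct alignas(kCacheLine) PaddedCounter
{
    std::atomic<std::uint64_t> value{0};
};

PaddedCounter perThreadCounters[8];      // one per hypothetical worker thread
```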