# Optimization

35 replies to this topic

### #21geon

Senior Member

• Members
• 939 posts

Posted 25 August 2011 - 01:10 AM

tobeythorn said:

I suppose there are other ways of dealing with this.

I would check all adjacent cells, or keep objects at the border in multiple cells.

### #22tobeythorn

Valued Member

• Members
• 189 posts

Posted 25 August 2011 - 03:23 AM

Not quite sure what you mean. I was keeping objects at the border in multiple cells, but the idea was to make sure that I didn't do collision interactions for the same pair multiple times, so I was having objects remember who they had already interacted with that frame and check that.
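That per-frame bookkeeping could be sketched roughly like this (a minimal sketch of my own in C++; the names are made up, this is not tobeythorn's actual code). The key idea is to key each tested pair on an order-independent pair of object IDs:

```cpp
#include <set>
#include <utility>

// Per-frame record of object pairs already tested for collision,
// so objects shared by several cells are not processed twice.
struct PairTracker {
    std::set<std::pair<int, int>> tested;   // cleared every frame

    // Returns true only the first time this (a, b) pair is seen this frame.
    bool firstTime(int a, int b) {
        if (a > b) std::swap(a, b);         // order-independent key
        return tested.insert({a, b}).second;
    }

    void newFrame() { tested.clear(); }
};
```

In the collision loop one would then call `firstTime(idA, idB)` before doing the narrow-phase test, and `newFrame()` once per frame.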

### #23Stainless

Member

• Members
• 577 posts
• LocationSouthampton

Posted 25 August 2011 - 08:39 AM

Sounds like you are close to a good solution.

One thing I would think about doing is changing the coordinate system to a fixed point one so you can use integer maths.

There is an incredibly fast integer square root algorithm that needs just one loop iteration for every 2 bits of the integer, with only a shift and an add inside the loop.
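A sketch of that style of routine (the classic bit-by-bit integer square root; this is my own version, not necessarily the exact algorithm Stainless means): one loop iteration per 2 bits of the input, using only shifts, adds, and compares.

```cpp
#include <cstdint>

// Bit-by-bit integer square root: returns floor(sqrt(n)) using
// only shifts, adds and compares (no multiply, no divide).
uint32_t isqrt(uint32_t n) {
    uint32_t root = 0;
    uint32_t bit = 1u << 30;          // highest power of 4 that fits
    while (bit > n) bit >>= 2;        // skip down to the top bit pair

    while (bit != 0) {
        if (n >= root + bit) {
            n -= root + bit;
            root = (root >> 1) + bit; // this bit belongs in the root
        } else {
            root >>= 1;
        }
        bit >>= 2;                    // next pair of bits
    }
    return root;
}
```

On platforms without a fast FPU (or in a fixed-point coordinate system, as suggested above) this can beat converting to float and back.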

### #24tobeythorn

Valued Member

• Members
• 189 posts

Posted 26 August 2011 - 05:03 AM

Finally got the bugs out of the partitioning system. Partitions are regenerated from scratch each frame and split down until no partition contains more than 100 objects. With 2000 objects moving and colliding in the view of the camera, it runs at 30 FPS. The cool thing is that performance scales nicely: for example, 8000 objects run at about 5-10 FPS, depending on whether all 8000 are in the view of the camera. Here's a screenshot.

@Stainless,
I'll keep the integer coordinate idea in mind for when I run into performance issues again and need more speed. I feel comfortable moving on to the AI at this point.

### #25}:+()___ (Smile)

Member

• Members
• 169 posts

Posted 26 August 2011 - 10:37 AM

100 objects per node seems like too much to me. Did you try to optimize the stopping criterion? If objects have a fixed interaction radius, then a size-based stopping criterion would be better, or a regular grid-based approach with the spacing equal to the interaction radius.
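The regular-grid idea can be sketched like this (a hypothetical helper of my own, assuming a fixed interaction radius): with the cell size equal to the radius, every possible interaction partner of an object is guaranteed to lie in its own cell or one of the 8 neighbouring cells.

```cpp
#include <cmath>

// Map a 2D position to integer grid coordinates, with the cell size
// equal to the interaction radius. Two objects closer than `radius`
// can then differ by at most 1 in each cell index.
struct GridCell { int x, y; };

GridCell cellOf(float px, float py, float radius) {
    return { static_cast<int>(std::floor(px / radius)),
             static_cast<int>(std::floor(py / radius)) };
}
```

This replaces the recursive splitting with a single pass that buckets objects by cell, followed by checks against the 3x3 neighbourhood.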
Sorry my broken english!

### #26Mihail121

Senior Member

• Members
• 1059 posts

Posted 26 August 2011 - 11:14 AM

Nick said:

That is totally wrong advice. You should not bother with low-level optimization 90% of the time, but that 10% is critical for the overall performance of your application. The reason some people have stopped bothering about performance altogether is that other libraries (and drivers) take care of some of the low-level work.

No, the real reason is that we have to meet deadlines, and the boss is sitting behind my back asking what the hell I am optimizing for. If it does not crash, it's fine.

### #27tobeythorn

Valued Member

• Members
• 189 posts

Posted 26 August 2011 - 02:58 PM

I actually made a mistake: I had it set to a max of 50 objects per partition. There seems to be relatively little difference in performance between a max of 50 and a max of 2 objects per partition; I'm not really sure why at this point. Performance does also seem to depend on how dense the objects are. This is probably because fewer objects get caught on the split planes when the density is low. In any case, performance is good enough for now.

@Mihail,
There is such a disconnect in perspective between engineers and business people. The business people are sometimes right to keep the priority on getting a product finished, but they also don't seem to care about the quality of a product as long as they think they can sell it.

### #28Stainless

Member

• Members
• 577 posts
• LocationSouthampton

Posted 27 August 2011 - 09:00 AM

A good programmer knows when to optimise code instead of algorithms.

It should be left to the engineers to decide when it is required, that's why we have software development bastar... errr ... managers.

It is easy to get trapped looking at a loop and trying to work out how to speed it up when a change in algo could mean the loop is removed completely.

These days you should really only look at algorithmic optimisation.

The code produced by a compiler is highly specific to platform and compiler issues. I can look at a block of assembler and tell if it was compiled with visual c++, gcc, or hand coded.

If you start dropping in x86 code, it won't work on an ARM platform, and these days you should not be targeting a single platform.

There are too many devices out there not based on x86 that you would be deserting.

All those sales gone because of a few nanoseconds in a tight loop ?

### #29tobeythorn

Valued Member

• Members
• 189 posts

Posted 01 September 2011 - 12:56 AM

Following JarkkoL's advice to pay more attention to big O, I've started measuring how the work in my application scales, to see where it's worst. This has helped me understand the inefficiencies better.

For example, I realized that although I was partitioning nodes effectively, I was not dealing with collisions involving objects lying on partition split-planes as efficiently as I could. Objects lying on split-planes were accounting for two orders of magnitude more collision calls than the rest of the objects. By fixing that inefficiency, I've reduced the total collision calls by a factor of 4. When I started this thread, my application could handle collisions between about 1000 objects before framerates dropped to unacceptable levels. Now it can do 10000!

Also, I installed a profiling plugin. This is the first time I've used a profiler, but it is a very nice tool indeed.

### #30Reedbeta

DevMaster Staff

• 5305 posts
• LocationBellevue, WA

Posted 01 September 2011 - 01:05 AM

Very nice, Tobey! Out of curiosity, what profiler did you use?
reedbeta.com - developer blog, OpenGL demos, and other projects

### #31tobeythorn

Valued Member

• Members
• 189 posts

Posted 01 September 2011 - 01:24 AM

Code::Blocks has a plugin which uses GNU gprof.

### #32EvilEnt

New Member

• Members
• 2 posts

Posted 01 September 2011 - 11:08 AM

edit: disregard the stuff below this line, std::abs is even faster

Stumbled on this an hour ago and benchmarked this function:
```cpp
float sine(float x)
{
    // Convert the input value to a range of -1 to 1
    x = x * (1.0f / PI);

    // Wrap around
    volatile float z = (x + 25165824.0f);
    x = x - (z - 25165824.0f);

#if LOW_SINE_PRECISION
    return 4.0f * (x - x * abs(x));
#else
    float y = x - x * abs(x);

    const float Q = 3.1f;
    const float P = 3.6f;

    return y * (Q + P * abs(y));
#endif
}
```

Things were 10-15% faster for me (gcc) when replacing abs/fabs with

```cpp
#define fastAbs(f) (((f) < 0.0f) ? (-(f)) : (f))
```

The inline-function equivalent was as slow as fabs or clearing the sign bit.
Note that without optimization enabled, fabs was the faster one. Also, the results were different.
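For reference, the "clearing the sign bit" variant mentioned above can be written portably like this (a sketch of my own, not the exact code that was benchmarked); with optimization on, compilers typically turn the memcpy pair into a single bitwise AND:

```cpp
#include <cstring>
#include <cstdint>

// Absolute value by clearing the IEEE 754 sign bit.
// memcpy is the portable way to reinterpret the bits; the copies
// are optimized away entirely.
inline float signBitAbs(float f) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);
    bits &= 0x7FFFFFFFu;              // clear the sign bit
    std::memcpy(&f, &bits, sizeof f);
    return f;
}
```

Unlike the macro, this evaluates its argument exactly once and also maps -0.0f to +0.0f.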

(Does anyone have a clue how this "Wrap around" works?)

### #33}:+()___ (Smile)

Member

• Members
• 169 posts

Posted 01 September 2011 - 11:31 AM

EvilEnt said:

things were 10-15% faster for me (gcc) when replacing abs/fabs with

Which flags do you use? Also try std::abs instead of abs/fabs.

EvilEnt said:

anyone has a clue how this "Wrap around" works?

"x - (x + 25165824 - 25165824)" is something like "x % 2" for float (IEEE 754 single precision) x in range about -8388608..8388607. It is elaborate bit manipulation of internal float representation.
Sorry my broken english!

### #34EvilEnt

New Member

• Members
• 2 posts

Posted 01 September 2011 - 12:00 PM

}:+()___ (Smile) said:

Which flags do you use? Also try std::abs instead of abs/fabs.

-O3, -s and -Os

fabs: 2.5 sec
#define: 2.0 sec
std::abs: 1.7 sec

so it's totally awesome (at least in my far-from-reality benchmark)

}:+()___ (Smile) said:

"x - (x + 25165824 - 25165824)" is something like "x % 2" for float (IEEE 754 single precision) x in range about -8388608..8388607. It is elaborate bit manipulation of internal float representation.

now I know what it does but still not how it works
evil black magic o_O

### #35}:+()___ (Smile)

Member

• Members
• 169 posts

Posted 01 September 2011 - 06:00 PM

EvilEnt said:

evil black magic o_O
Well, it is not so hard...

A float basically consists of the sign bit, position of the most significant bit in the number and values of the next 23 bits (forget about NaN, infinity and underflow).

Example calculation of 25165824 + x - 25165824 with x = 111111.25f:

25165824.0f =

1 1000 0000 0000 0000 0000 0000 . 0000 0000

^----------------------------^ first 24 bits stored in float

x = 111111.25f =

1 1011 0010 0000 0111 . 0100 0000

^------------------------------^ 24 bits

25165824 + x =

1 1000 0001 1011 0010 0000 0111 . 0100 0000

^----------------------------^ only 24 bits fit; the rest is rounded off (round-to-nearest, so here it rounds up)

float(25165824 + x) =

1 1000 0001 1011 0010 0000 1000 . 0000 0000

^----------------------------^ 24 bits

float(25165824 + x) - 25165824 =

1 1011 0010 0000 1000 . 0000 0000


So, x gets rounded to the nearest multiple of 2 (an even integer): 111112. The sine() above then subtracts this from x, leaving a remainder in the range -1..1.
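The magic-constant trick can be checked directly (a small sketch of my own; note that with the default IEEE round-to-nearest mode the addition rounds rather than truncates):

```cpp
// Adding 1.5 * 2^24 forces the float's ulp to 2, so the default
// round-to-nearest mode rounds x to the nearest multiple of 2;
// subtracting the constant back recovers that rounded value.
// `volatile` stops the compiler from folding the two steps away.
float roundToNearestEven(float x) {
    volatile float z = x + 25165824.0f;   // 1.5 * 2^24
    return z - 25165824.0f;
}

// x minus its nearest even integer: a remainder in [-1, 1],
// exactly what sine() needs after scaling the input by 1/PI.
float wrap(float x) {
    return x - roundToNearestEven(x);
}
```

For the worked example above, roundToNearestEven(111111.25f) gives 111112.0f, so wrap(111111.25f) gives -0.75f.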
Sorry my broken english!

### #36Vilem Otte

Valued Member

• Members
• 345 posts

Posted 30 November 2011 - 02:46 PM

Okay, I haven't read through all the stuff here (just went through it quickly). Anyway, first thing, on Carmack's black magic (meaning the fast rsqrt made famous by Quake III, etc.): the problem with this algorithm is that it is going to be slower (or only slightly faster) on today's CPUs compared to other approaches, especially since you can call the RSQRTPS or RSQRTSS instructions through SSE.

As for fast abs, you will get the best performance with:

```cpp
_mm_max_ps(mValue, _mm_mul_ps(mValue, _mm_set1_ps(-1.0f)));
```


The same goes for fast sin (although that code is a bit longer, so I won't write it here). You can also go with lookup tables (although I don't think you'll get much out of them today).

Anyway, as for checking collisions between lots of objects, I'd also recommend you try a BVH or BIH (a BIH could really shine in this case). You can rebuild/refit them dynamically very quickly (even with SAH-like partitioning heuristics, though I think equal-counts partitioning (with a sort) would be better in this case, and it's even faster, btw). So see it this way: if you're doing some operation 1000000 times and you hand-optimize it in assembly so it runs twice as fast, you get 200% speed; but if you avoid doing that operation in the first place and do it just 100 times, you end up with 1000000% speed. (Of course this assumes you can instantly tell when the operation can be avoided, which is impossible; still, you'll get much more speed by avoiding the operation than by brute-forcing it.)

Huh, hope you got the point (I'm not that good at explaining things).
My blog about game development (and not just game development) - http://gameprogramme...y.blogspot.com/

If you don't know how to speed up application, go "roarrrrrr!", hit the compiler with the club and use -O3 :D
