Because the log function returns a double. So in the first case you cast the result of a double division to an integer, and in the other case you first cast down to float and then to int.


No, I don’t think so.

It prints

2

3

not

3

2

So, the second one is right, and the first one is wrong.

This works:

float somefloat = log(8)/log(2);

cout << int(somefloat) << endl;

This doesn’t:

cout << int((log(8)/log(2))) << endl;

I still think the problem is within the cast. If you first cast the division to float, the result is 3 and stays the same when cast to an integer. Probably due to accuracy problems the double division yields 2.999999999…, which gets truncated to 2 (but because of the float’s lower precision, the error disappears when rounding to float).

Try outputting the double directly…

Ok… I tried out your example and get the same strange result :)

Let me get into this for a moment.

You’re being bitten by precision problems. Assuming Intel x86 under Windows, floating-point calculations take place at high precision (53-bit or 64-bit), which equates to the “double” datatype. Calculations matching the “float” datatype may be carried out at lower precision (24-bit). In this case, you’ve found that a small rounding error in the higher-precision modes is masked in the lower-precision mode.

To perform both calculations in lower precision mode, try the following:

```
// _controlfp is MSVC-specific; add this to the preprocessor directives
#include <float.h>

// Add the following before the calculation:
_controlfp(_PC_24, _MCW_PC);  // 24-bit precision: both calculations result in "3"
// Or if you try this:
_controlfp(_PC_53, _MCW_PC);  // 53-bit precision
// Or this:
_controlfp(_PC_64, _MCW_PC);  // 64-bit precision
// ...you get 2 and 3 respectively.
```

So far I came to the same answer here. What’s strange, though, is that you can do this:

double x = log(8)/log(2);

printf("%d", (int)x);

and

printf("%d", (int)(log(8)/log(2)));

and still get different results. If this error were due to something beyond floating-point precision, this should expose it…

edit: Although that might be some compiler-internal thing.

I don’t run debug mode in gcc :) Anyway… I’m 100% sure that that is a compiler thing.

edit: The OP seems to be gone though…

I’m not gone. It seems like it has to do with what happens when you convert a long double to a float or double (it gets rounded to 3).

If you do this:

long double somedouble = log(8)/log(2);

cout << int(somedouble) << endl;

you get 2.

@anubis

I don’t run debug mode in gcc :) Anyway… I’m 100% sure that that is a compiler thing.

edit: The OP seems to be gone though…


Doubles have a higher precision IN THE CPU than anywhere else. Even if you just store the result into a double, it gets rounded in some way: in the CPU it’s 80-bit (plus some extra features), but stored in a variable of type double it gets converted to 64-bit, and a float to 32-bit.

That means you only work with the original 80-bit value if you convert directly to int.

It looks like exactly this changes one of the least significant bits, which results in such a rounding change.


I have the following code:

cout << int((log(8)/log(2))) << endl;

float somefloat = log(8)/log(2);

cout << int(somefloat) << endl;

Anyone know why it prints out two different numbers?

mike

http://www.coolgroups.com/