Checking whether a number is a power of 2

john 102 Aug 25, 2004 at 21:08
bool IsPowerOfTwo (int value)
{
  return (value & -value) == value;
}

39 Replies

anubis 101 Aug 25, 2004 at 21:10

the good old trick

davepermen 101 Aug 25, 2004 at 21:18

indeed. good to get the algorithms and code snippets filled with such stuff. that's what it's meant for!

anubis 101 Aug 25, 2004 at 21:34

hmm… let’s hope people use the search function in this forum, unlike other places we don’t want to mention

bladder 101 Aug 25, 2004 at 22:41

another one

bool IsPowerOf2( int value ) 
{
  return ! (value & ( value - 1 ) ); 
}

davepermen 101 Aug 26, 2004 at 12:33

@anubis

hmm… let’s hope people use the search function in this forum, unlike other places we don’t want to mention

indeed…

let's see if we find some better solution to it.. would be nice..

eyebex 101 Oct 30, 2004 at 09:48

For two more useful functions which round to the next bigger / next smaller power of two, see: http://www.flipcode.com/cgi-bin/fcarticles…show=4&id=64182

thakurna 101 Jan 30, 2006 at 12:59

@john

bool IsPowerOfTwo (int value)
{
  return (value & -value) == value;
}

It is not going to work if the number is 1, 0, -1, ….
I can give you a solution for that case.

thakurna 101 Jan 30, 2006 at 13:01

this solution will not work if the number is 1, 0, -1, ….
need something like return (value != 0) && (value != 1) && ( ( (value-1) & value ) == 0 );
but it still needs some modification

_oisyn 101 Jan 30, 2006 at 13:41

@thakurna

It is not going to work if the number is 1, 0, -1, ….

Only not for zero. 1 & -1 == 1, which is correct as 1 is a power of two (2^0 to be exact). -1 & 1 is of course also 1, but now ‘value’ is -1 so the end result doesn’t equal ‘value’, therefore it is not a power of two (this works for every negative value, btw)

Only the zero is a special case. So it would be:

bool IsPowerOfTwo(int value)
{
    return value ? ((value & -value) == value) : false;
}
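
For reference, a quick sanity check of that version (the test values below are just ones I picked, and it assumes a two's-complement int):

#include <cstdio>

bool IsPowerOfTwo(int value)
{
    return value ? ((value & -value) == value) : false;
}

int main()
{
    // expected output: only 1, 2, 4, 8 and 1024 report "yes"
    const int tests[] = { -4, -1, 0, 1, 2, 3, 4, 6, 8, 1024 };
    for (unsigned i = 0; i < sizeof(tests) / sizeof(tests[0]); ++i)
        std::printf("%5d -> %s\n", tests[i], IsPowerOfTwo(tests[i]) ? "yes" : "no");
    return 0;
}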

Alex 101 Jan 30, 2006 at 18:16

Is a bitwise “&” consistent across platforms?? Just curious…

Alex

Mihail121 102 Jan 30, 2006 at 18:31

It depends on what language standard the compiler follows, but in most cases, yes.

eddie 101 Jan 31, 2006 at 08:11

What?

I’ve never heard of bitwise & not being consistent. That wouldn’t be “not adhering to standards”, that would be a *bug*.

Unless I’m missing something. Mihail121, do you know of a compiler that doesn’t handle this?

I’d find it incredible to hear that there’s a “standard” that defies normal bitwise &. It’s like saying that my compiler has a different standard for “+”.

Reedbeta 167 Jan 31, 2006 at 08:29

Well, bitwise & ought to work the same way everywhere, but not all machines are two’s complement and so some of the formulas posted above will not always work. (Realistically though, any machine you are going to be writing code for nowadays uses two’s complement for integer arithmetic.)

_oisyn 101 Jan 31, 2006 at 10:50

Indeed, the standard requires signed ints to be either two’s complement, one’s complement or sign and magnitude. But that doesn’t really matter, it only means we cannot use the negation operator in our test code. So either use bladder’s, or this:

bool IsPowerOfTwo (int value)
{
  return (value & (~value+1)) == value;  //~value+1 equals a two's complement -value
}

Alex 101 Jan 31, 2006 at 12:40

I wonder why there is no support for "flags" that indicate what kind of system the code was written for (little endian, pointer size, etc.).
That way the compiler could easily "fix" the code (or rather change the output to do what we meant it to do) if the current target doesn't fit the flags. Why is it up to us to worry about all that crap?
With such flags it would be inherently clear what the code is supposed to do..

Sorry to hijack the thread, but this is probably not worth starting a new one..

Alex

tbp 101 Jan 31, 2006 at 12:54

Because C was designed for writing OSes; you're supposed to know what you're doing.
As it is, the Standard is byzantine enough; it doesn't need even more assumptions.

Use the right tool for a given task.
If you want more abstraction from the hardware, use another language.
If you want less, write some asm.

Alex 101 Jan 31, 2006 at 13:05

Now I'm probably not the only one who wants to write portable C++ code. Also, the feature I suggest wouldn't stop anyone from writing OS code.. it's less a language thing, more of a compiler option. There is no good reason I can see to have interpretations of my code that I have no control over but which alter the logic.

tbp 101 Jan 31, 2006 at 13:16

@Alex

it’s less a language thing, more of a compiler option.

If it's not in the language/standard, then it's not portable across compilers.

So either you don’t want to have to handle such issues and use another language that insulates you better, or rely on something like autoconf.
@Alex

There is no good reason I can see to have interpretations of my code that I have no control over but which alter the logic

The only logic that is altered is the one you _think_ your code implements.
The compiler only cares about how your code should be interpreted vs the Standard.
It doesn't read your mind yet ;)

_oisyn 101 Jan 31, 2006 at 13:49

tbp: you are missing the point. For example, the standard states that the number of bits in a char shall be 8 or more. Fortunately, the standard also states that there shall be a CHAR_BIT constant representing the number of bits in a char. It also defines other handy constants like INT_MAX. These are all pretty much implementation defined, but the point is that you have access to such constants, so you can write portable code that runs on different architectures using these constants.

Unfortunately for endianness and how signed integers are represented, there is no such thing.
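
For example, sticking to just the constants the standard does give you (a trivial sketch using <climits>):

#include <climits>
#include <cstdio>

int main()
{
    // CHAR_BIT and INT_MAX are implementation defined, but guaranteed to exist
    std::printf("bits per char : %d\n", CHAR_BIT);
    std::printf("bits per int  : %d\n", static_cast<int>(sizeof(int) * CHAR_BIT));
    std::printf("largest int   : %d\n", INT_MAX);
    return 0;
}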

tbp 101 Jan 31, 2006 at 14:34

Again, portability has never been part of the set of problems C has been designed to solve.

It's just a thin abstraction layer over the metal, some souped-up assembler.
By design and for varied reasons like efficiency and compiler complexity.

If portability is your prime concern you shouldn’t be using C (or any of its descendant) in the first place.

So, I'm not missing any point, you're just barking up the wrong tree.

Alex 101 Jan 31, 2006 at 14:59

Bjarne Stroustrup lists C++ as a general-purpose language with a bias to “generic programming” (among others including OS stuff):

http://www.artima.com/cppsource/cpp0x.html
Where “Generic programming is about generalizing software components so that they can be easily reused in a wide variety of situations.”
So I agree that total portability is NOT an explicit goal of the standard.
Reusability, on the other hand, means that portability is desired where it doesn't hurt the standard (or the target-specific constraints).
Also note that at least I’m talking about C++, not C.
Which language(s) to use depends on the problem at hand. I doubt that a desire for portability prohibits using C++.

Alex

tbp 101 Jan 31, 2006 at 15:20

Portability isn’t and never was an explicit goal of the C++ Standard for a good reason, let me cite the prophet himself:
@Bjarne Stroustrup

C++’s C compatibility was a key language design decision rather than a marketing gimmick. Compatibility has been difficult to achieve and maintain, but real benefits to real programmers resulted, and still result today.

You can talk genericity and OO-ness all day long, but C++ is by design compatible with C.
Thus, regarding portability they stand side by side.

Alex 101 Jan 31, 2006 at 15:35

The C++0x improvements should be done in such a way that the resulting language is easier to learn and use. Among the rules of thumb for the committee are:
Provide stability and compatibility (with C++98, and, if possible, with C)

Being compatible with C is nice but rather optional (“if possible”).

This is getting nowhere, so this will be my last post on this…

Alex

tbp 101 Jan 31, 2006 at 15:42

Your quote is about C++0x, an extension to C++ (just like C99 was an extension of C).

If that compatibility is news to you, I'm sorry, but you should have asked yourself why you could happily (void *) everything, among other things.

monjardin 102 Jan 31, 2006 at 16:17

I don’t see why it would hurt to add an endianness flag to the standard. I for one would find it very useful, but I can definitely get by without it. However, I strongly disagree with a C/C++ compiler “fixing” my code. I want full control over platform specific details like that.

Otherwise, there are portable libraries that can abstract those details at a higher level. For example, the data streams in Qt handle endianness so you can rebuild your Intel Windows GUI on a big-endian Mac with no code changes.

tbp 101 Jan 31, 2006 at 16:30

@monjardin

I don’t see why it would hurt to add an endianness flag to the standard.

But then what kind of flag would it be? Because, you know, there are CPUs out there that can switch endianness per page.
It’s not as trivial as it may sound.

If I remember correctly, Ada has an endianness constant. But don't quote me on that (and anyway, it's Ada…).

Alex 101 Jan 31, 2006 at 17:38

ah..a post with content :), thx monjardin.

Why would you want control over these platform specific details?
I doubt the compiler will do much worse on elementary operations like that. Of course you want control over more complex operations because they might influence performance much more.

If the "fixing" of endianness etc. is a compiler option, you could disable it and do it yourself. The compiler applies platform-specific optimizations anyway, so your control over the output is limited. It should be fairly easy (and without any performance loss compared to a handmade fix) to compile for different endiannesses (as there are ways to do the exact same logical operation on both big- and little-endian machines).

monjardin 102 Jan 31, 2006 at 19:09

I don’t quite get where you’re going, Alex. Would you give an example of code that a compiler could know how to correct for endianness?

After thinking about it, I guess there is no real need for an endianness flag. It is fairly trivial to figure it out at runtime.
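
For completeness, the usual runtime check is a one-liner (a sketch of my own, not from any post above):

#include <cstdio>

// true if the least significant byte of an int is stored at the lowest address
bool IsLittleEndian()
{
    const unsigned int one = 1;
    return *reinterpret_cast<const unsigned char*>(&one) == 1;
}

int main()
{
    std::printf("this machine is %s endian\n", IsLittleEndian() ? "little" : "big");
    return 0;
}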

Alex 101 Jan 31, 2006 at 19:34

ok..example:

Let's say you have an int (e.g. 32 bits in size) and you'd like to access its components. For our example we do so by casting the int to a char*:

int my_int;
char* cp=(char*)&my_int;

cp[0]=0x1;
cp[1]=0x2;
cp[2]=0x3;
cp[3]=0x4;  // my_int is now 0x04030201 on a little-endian system, 0x01020304 on a big-endian one

The final value of my_int (and hence the logic of the program) depends on the endianness of the system you run it on.

If we add a flag to the code like "#pragma (endian,little)" or something like that to indicate that the code was written for a little-endian system, then a compiler which targets a big-endian system can figure out what our program actually means by "cp[0]".
So if the flag is present, the compiler swizzles the index internally to produce the same logical result the program would produce on a little-endian system. The process should be totally transparent. You decide on your endianness once, put in the flag, and then forget about the whole issue.

side note: by flag I don't mean a define or macro but a compiler directive.. I don't want to query the flag and put tons of #ifdef BIG_ENDIAN everywhere. That's exactly what I'd like to avoid.

Alex

eddie 101 Jan 31, 2006 at 19:44

Could you not just extend off a class like bitset, and mask that depending on the endianness you pass in from a #define?

Alex 101 Jan 31, 2006 at 19:47

You can fix the whole endian issue yourself with #ifdefs or other tricks. It’s just a pain, ugly to maintain and you forget to fix a certain case more often than not.

monjardin 102 Jan 31, 2006 at 20:14

It seems to me that such a compiler flag could cause more confusion than it solves. What code that isn’t meant for a specific platform is written like that?

A valid C++ solution is to use data streams with platform specific implementations. Your example could be implemented like this:

int my_int;
char buffer[] = { 0x1, 0x2, 0x3, 0x4 };
data_stream stream(buffer, 4, data_stream::BIG_ENDIAN);
stream >> my_int;

So, you handle your #if tricks in your data stream library and then your code is pretty and easy to maintain. When it comes down to it, you need to pick an endianness to use for data exchange (see the htonl family).
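
With the htonl family that exchange format is simply network byte order, i.e. big endian; something along these lines (POSIX header shown, on Windows the same functions live in winsock2.h):

#include <arpa/inet.h>  // htonl()/ntohl(); use <winsock2.h> on Windows
#include <cstdio>

int main()
{
    // network byte order is big endian, whatever the host happens to be
    unsigned int host = 0x12345678;
    unsigned int wire = htonl(host);  // host order -> big endian, before writing
    unsigned int back = ntohl(wire);  // big endian -> host order, after reading
    std::printf("host %08x  wire %08x  back %08x\n", host, wire, back);
    return 0;
}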

_oisyn 101 Jan 31, 2006 at 20:16

@Alex

If we add a flag to the code like “#pragma (endian,little)” or something like that to indicate that the code was written for a little endian system then a compiler which targets a big endian system can figure out what our program actually means by “cp[0]”.

I’m sorry but that’s not how the language works. The problem is in how an int is stored in memory, not how you access a char array. What if cp points to a C-style string, and you set that flag (so by cp[0] you actually mean cp[3], etc.), would “Hello” suddenly be read as “lleHo”? And how would you fix that when you read a whole chunk of data into a char array, and map that on a structure and want to read an int at offset 23?

No, you need a simple implementation defined macro like CHAR_BIT that lets you define other macros or inline functions that either change the endianness of a type or leave it as-is, depending on the platform. There are only two places where you would want to convert endianness, and that is either at the input or just before the output. How your application internally works is usually not a problem, it’s just that you would like to read or write data (from/to disk or network or something similar) in a cross-platform way. You know what kind of endianness your input and your output should be, the problem is usually the processor architecture itself.
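
A minimal sketch of what such helpers could look like for 32-bit ints (the IS_BIG_ENDIAN macro is assumed to be provided by the build system or a platform header, since the standard itself offers nothing of the sort, which is exactly the complaint):

// IS_BIG_ENDIAN is assumed to be defined as 0 or 1 by the build system
// or a platform header; the standard gives us no such constant.
inline unsigned int swap_bytes(unsigned int v)
{
    return (v >> 24) | ((v >> 8) & 0x0000ff00u)
         | ((v << 8) & 0x00ff0000u) | (v << 24);
}

inline unsigned int little_to_host(unsigned int v)
{
#if IS_BIG_ENDIAN
    return swap_bytes(v);  // input was little endian, host is big: swap
#else
    return v;              // host is already little endian: leave as-is
#endif
}

// swapping is its own inverse, so the other direction is identical
inline unsigned int host_to_little(unsigned int v) { return little_to_host(v); }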

Alex 101 Jan 31, 2006 at 20:42

monjardin:

You can hide the #ifdefs, or however you abstract the problem, in a lib or wherever you want; still, it's ugly having to handle it at all. Whenever you come across a problem that deals with endianness (which happens not only when serializing) you have to fix it manually somehow.

The code was just a stupid sample to clarify what I mean.. not production code :) Currently, code with intent similar to the sample isn't portable, which is what I'd like to fix.

_oisyn:

For an array of chars the index would of course not be swizzled (that's what I meant by the process being transparent). As said, the program should perform logically EXACTLY like it was run on the little-endian system. That means all operations execute normally apart from those that make assumptions about the endianness; those that do are "corrected". The compiler could internally keep track of whether you're accessing a sub-section of a larger data type. Only in that case is a correction needed.

So in an array of (32-bit) ints cast to char*, accessing index 5 means we access the 2nd int, and ONLY within that int does the offset depend on the flag, because the compiler knows that on its target platform the memory layout of the int depends on the endianness.

I'd say the problem is that you cannot specify whether you'd like to access the most or least significant part (byte or whatever) of a value. That's what makes things ambiguous.

_oisyn 101 Jan 31, 2006 at 21:27

And how would this work?

void foo()
{
    int number = 0x12345678;
    char buf[] = "1234";

    char * ptr1 = reinterpret_cast<char*>(&number);
    char * ptr2 = buf;

    for (int i = 0; i < 4; i++)
    {
        output(*ptr1++);        // output() here stands for some output routine

        if (random() > 0.5)     // random() here stands for a random real in [0,1)
            std::swap(ptr1, ptr2);
    }
}

Now I have absolutely no idea what the above should do, but the fact is that it is valid C++ code and the compiler can do nothing since it doesn’t know the result of random() as it’s a runtime function.

As I said, if you can simply convert between known endiannesses and the host’s endianness with functions like little_to_host(T), big_to_host(T), host_to_little(T) and host_to_big(T), you don’t have the trouble of #defines all over the place. You know how the input or output should be represented, you simply don’t know what architecture your program will be compiled for.

Of course, you only know what format the input/output is using if it really matters; a simple settings file could be written in the architecture’s native format. Unless it is meant to be exchangeable across platforms, at which point you have to define the endianness for yourself.

// data structure, source is assumed to be little endian
struct MyData
{
    char name[4];
    short aShort;
    int andAnInt;
    int anotherInt;
};

void read(MyData * pData, FILE * f)
{
    fread(pData, sizeof(*pData), 1, f);
    pData->aShort = little_to_host(pData->aShort);
    pData->andAnInt = little_to_host(pData->andAnInt);
    pData->anotherInt = little_to_host(pData->anotherInt);
}

Your solution won't work here: fread is just a basic precompiled C function; it has no idea what the pointer passed to it points to, it just copies the memory.

Alex 101 Jan 31, 2006 at 21:45

You're right.. there are limits to how far the compiler can trace back what you have done, so a flag won't be sufficient.. and doing it at runtime is not desirable…

In that case I demand that all systems of one type (don’t care which) are destroyed ;)

Alex

monjardin 102 Jan 31, 2006 at 22:03

@Alex

In that case I demand that all systems of one type (don’t care which) are destroyed ;)

Finally, we can agree! :) Good luck convincing the chip makers!

Why did Intel go little-endian? I don’t remember.

Alex 101 Jan 31, 2006 at 22:19

Hm.. I found little endian quite handy when writing my virtual machine. When accessing a sub-portion of a virtual register (like ax/al from eax) in memory, you don't need to add any offsets to the register's address.

But I kinda doubt that was the reason Intel went little endian :)

_oisyn 101 Jan 31, 2006 at 23:34

@monjardin

Why did Intel go little-endian? I don’t remember.

Was it Intel that was the black sheep that introduced little-endian, while all the others went for big endian? Just curious… I always found the little-endian format more practical. If you cast a pointer to a large integer type to a smaller type, you read the same number modulo 2^n, where n depends on the size of the smaller type (or, going the other way, you can increase the precision without moving memory). Also, if you start with a byte buffer of all zeroes, and keep incrementing the first byte until it overflows and you then increase the second byte (and so on), you can simply read the value as a word or dword without swapping the bytes. Similarly, if you were to store the bits of a large number as an array, you'd put bit 0 at position 0 in the array, as it is called bit 0. So why not break up the large integer into byte-sized chunks the same way?
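
For example (memcpy instead of a plain pointer cast to keep it well-defined; the values in the comment assume a little-endian host with 32-bit ints and 16-bit shorts):

#include <cstdio>
#include <cstring>

int main()
{
    unsigned int big = 0x12345678;
    unsigned short small = 0;

    // on a little-endian machine the first two bytes of 'big' hold the low half,
    // so 'small' becomes 0x5678 -- i.e. 'big' modulo 2^16, without moving anything
    std::memcpy(&small, &big, sizeof(small));
    std::printf("big = %08x, small = %04x\n", big, small);
    return 0;
}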

Of course, with big endian you can more easily read the larger values from memory in the debugger or a hex-editor, as we humans use the same notation in our numbers (which is probably where the whole big endian thing came from)