# Number of bits it takes to represent a number

1. Mar 13, 2016

### iScience

Is this accurate?

$$x = \text{some number}$$

$$\text{bits}(x) = \frac{\log(x)}{\log(2)}$$

2. Mar 13, 2016

### Staff: Mentor

If you have some way to signalize "the number is done", and send every integer in binary: sure (up to rounding).

3. Mar 13, 2016

### Staff: Mentor

No, not quite. The fraction you have on the right is the same as $log_2(x)$.

As to why your formula isn't correct, consider x = 4, and that $log_2(4) = 2$. It takes 3 bits ($100_2$) to represent 4.
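The counterexample is easy to check directly; a quick sketch (Python used here just for illustration):

```python
import math

# The OP's formula log(x)/log(2) is the same as log2(x).
x = 4
formula = math.log2(x)       # 2.0
actual = x.bit_length()      # 3 -- the binary representation 100 needs three bits
print(formula, actual)
```

`int.bit_length()` counts the bits in the plain binary representation exactly, with no floating-point rounding.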

4. Mar 13, 2016

### Staff: Mentor

If you want to represent positive integers only, you can drop the leading 1. It will be there for every number anyway.

5. Mar 13, 2016

### Staff: Mentor

I don't understand what you're saying. I was only considering positive integers. The binary representation of 4 as an unsigned number is $100_2$. Are you interpreting the 1 digit to mean the number is negative?

The OP's formula doesn't give the right result for this and many other numbers.

6. Mar 13, 2016

### Staff: Mentor

The binary representation is 100. Who said we are limited to the binary representation? We can save bits with the following scheme, assuming we know where digits end:

1: ""
2: "0"
3: "1"
4: "00"
5: "01"
6: "10"
and so on.
The binary representation for every positive integer starts with a 1. If you are interested in saving bits, as the title suggests, you do not have to store this 1. Floating point numbers do exactly this: they just do not store the 1 because it would be fully redundant.
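This drop-the-leading-1 scheme can be sketched as a pair of functions (my sketch, assuming, as stated above, that the length of the bit string is known separately):

```python
def encode(n):
    """Drop the implicit leading 1 from the binary form of n >= 1."""
    assert n >= 1
    return bin(n)[3:]          # bin(n) is '0b1...', so skip the '0b1' prefix

def decode(bits):
    """Restore the implicit leading 1."""
    return int('1' + bits, 2)

# 1 -> '', 2 -> '0', 3 -> '1', 4 -> '00', 5 -> '01', 6 -> '10', ...
for n in range(1, 7):
    print(n, repr(encode(n)))

# Round-trips for every positive integer:
assert all(decode(encode(n)) == n for n in range(1, 1000))
```

Note that `encode(1)` is the empty string, which is exactly why an out-of-band length (or terminator) is required.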

7. Mar 13, 2016

### Staff: Mentor

The trouble is, negative integers are stored with a 1 digit in the most significant bit (MSB).
The title of the thread is "Number of bits it takes to represent a number". Since "bits" is short for "binary digits," it's reasonable to assume that we're talking about a binary representation. It takes three bits to represent the decimal number 4. If you take advantage of a scheme that stores only two bits, it still takes three bits to represent the number.

Some floating point formats do this. The ones that don't follow the older x87 Extended Precision format, which stores the leading 1 explicitly.
The Borland C/C++ compiler back in the early 90's had an 80-bit long double type based on this format. The Microsoft compilers never did, to the best of my knowledge.

8. Mar 13, 2016

### DrZoidberg

If you want to represent integer numbers between 0 and x (inclusive) the correct equation is $\lceil \log_2(x+1) \rceil$.
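For non-negative integers this ceiling expression agrees with an exact integer computation of the bit count; a quick check (my sketch, Python chosen for illustration):

```python
import math

def bits_needed(x):
    """Bits needed to distinguish all integers in {0, 1, ..., x}."""
    return math.ceil(math.log2(x + 1))

# int.bit_length() computes the same value exactly, without floating point.
for x in range(1, 10_000):
    assert bits_needed(x) == x.bit_length()

print(bits_needed(4))   # 3 -- the five values 0..4 need three bits
```

In real code `x.bit_length()` is preferable, since `log2` of a huge integer can lose precision.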

9. Mar 13, 2016

### Staff: Mentor

I don't think so, and even if we did, storing them does not have to happen in the same way we would write those numbers down on paper.
Well, most do. A few do not.

10. Mar 13, 2016

### Staff: Mentor

Of course, but this thread is about how they are represented, not how they can be stored. Whether one bit is implied or not, it takes three bits to represent 4, so the OP's formula as stated doesn't give the right result.

11. Mar 14, 2016

### Tom.G

Represent to a human or to a computer?

12. Mar 14, 2016

### Staff: Mentor

Both.

13. Mar 14, 2016

### Staff: Mentor

I can represent 4 as "00", and no one can stop me.
"Number of bits it takes" implies that we want to minimize the number we need. Using binary representation would need one bit more than necessary.

14. Mar 14, 2016

### Staff: Mentor

I think that you are arguing just to be arguing. Given that this thread is in the Programming and Computer Science section, can you give us an example of one computer system or programming language or standard in which 4 is represented as "00"? I'll stipulate that the IEEE 754 standard for single-precision and double-precision floating point numbers does use an implied bit that is not stored, but can you show a similar standard for integer values?

15. Mar 14, 2016

### Staff: Mentor

I'm not aware of any computer system which stores 4 as "100" either. 00000100 and similar - sure, and you cannot drop a leading 1 here because there is none. But a storage format that uses exactly the number of bits it needs (plus one)?

16. Mar 14, 2016

### Staff: Mentor

I wrote $100_2$ as an abbreviation for the full byte representation $00000100_2$.

17. Mar 14, 2016

### Staff: Mentor

But then we use 8 bits, not 3.

18. Mar 14, 2016

### jbriggs444

Data compression. A 4 can be represented and stored in lots of ways. Those ways can depend on context. If I have a long string of 4's, I can represent those 4's pretty cheaply.
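Run-length encoding is a minimal illustration of this point (a hypothetical sketch, not from the thread):

```python
from itertools import groupby

def run_length_encode(values):
    """Represent a sequence as (value, run length) pairs."""
    return [(v, len(list(group))) for v, group in groupby(values)]

data = [4] * 1000
print(run_length_encode(data))   # [(4, 1000)] -- a thousand 4's stored as two numbers
```

How cheap the representation is depends entirely on the context: a single isolated 4 in random data compresses not at all.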

19. Mar 14, 2016

### Ibix

In short, OP, if you're still reading, the answer has some surprising complexities. In a straightforward binary representation of a positive integer x, the number of bits is the smallest integer n such that $n\geq\ln(x+1)/\ln 2$, i.e. $\lceil \log_2(x+1)\rceil$, which is not quite what you wrote. However, that is not the number of bits needed to store it (except in certain special cases).
• At least one more bit is needed if you want to handle negative numbers as well (and the answer is complicated for negative integers)
• Some way to know when the number ends is also needed, which will typically add to the storage - either through wasted space in general, an auxiliary "number of bits" variable, or reserving a particular bit sequence as a terminator.
• Much more complex representations are possible, as jbriggs444 and mfb point out.
The right answer depends on why you want to know.

20. Mar 14, 2016

### glappkaeft

The point most people participating in the thread should remember is that Shannon's information theory does not consider the value of the largest number you want to represent, but only the number of distinct symbols you need to distinguish.

Thus
• a single known number x is drawn from the one-element set {x}, so it costs $\log_2(1) = 0$ bits
• {1, 2, ..., x} uses $\log_2(x)$ bits
• {0, 1, ..., x} uses $\log_2(x+1)$ bits
• and there are many other possibilities
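The symbol-counting view can be checked numerically; a small sketch (names and the Python framing are mine, not from the post):

```python
import math

def bits_for(symbols):
    """Bits needed to distinguish the members of a finite set of symbols."""
    return math.log2(len(symbols))

print(bits_for({4}))             # 0.0 -- a single known value costs nothing
print(bits_for(range(1, 5)))     # 2.0 -- the four values {1, 2, 3, 4}
print(bits_for(range(0, 5)))     # log2(5), about 2.32 -- the five values {0, ..., 4}
```

The fractional answer for {0, ..., 4} is the information-theoretic cost; an actual fixed-width encoding would round up to $\lceil \log_2 5 \rceil = 3$ bits.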