Number of bits it takes to represent a number

  • Thread starter: iScience
  • Tags: Bits

Discussion Overview

The discussion revolves around the number of bits required to represent a number, particularly in binary form. Participants explore various representations, mathematical formulations, and implications for both positive and negative integers, as well as the context of data storage and compression.

Discussion Character

  • Debate/contested
  • Mathematical reasoning
  • Technical explanation

Main Points Raised

  • Some participants propose that the number of bits required can be calculated with the formula $$\text{bits}(x) = \frac{\log(x)}{\log(2)}$$, while others argue this does not give the length of the actual binary representation, since it is not an integer in general and comes out one short at exact powers of two.
  • It is noted that for non-negative integers the correct formula is $$\lceil \log_2(x + 1) \rceil$$, which covers every integer from 0 to x (see the sketch after this list).
  • Participants discuss the implications of leading bits in binary representations, with some suggesting that the leading 1 can be omitted to save space.
  • There is a contention regarding whether the discussion is limited to binary representation or if other encoding schemes can be considered, with examples provided for alternative methods.
  • Some participants emphasize that the representation of numbers can vary based on context, such as data compression techniques that may allow for more efficient storage.
  • Concerns are raised about the representation of negative integers and the additional bits required for their storage, complicating the discussion further.
  • Participants mention historical and technical details regarding floating-point representations and their implications for bit storage.
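
To make the contrast concrete, here is a minimal Python sketch (the helper names bits_naive and bits_needed are illustrative, not taken from the thread) comparing the raw log(x)/log(2) value, the ceiling form, and Python's exact int.bit_length():

```python
import math

def bits_naive(x: int) -> float:
    # The log(x)/log(2) formula from the thread: this is just log2(x),
    # which is not an integer and comes out one short at exact powers of two.
    return math.log(x) / math.log(2)

def bits_needed(x: int) -> int:
    # ceil(log2(x + 1)): the number of bits needed to write every
    # integer from 0 through x in plain binary.
    return math.ceil(math.log2(x + 1))

for x in (1, 2, 7, 8, 255, 256):
    # int.bit_length() is Python's exact, float-free count for x >= 1.
    print(x, round(bits_naive(x), 3), bits_needed(x), x.bit_length())
```

At x = 8 the raw logarithm gives exactly 3, yet 8 is 1000 in binary and needs four bits; that gap is essentially what the thread is arguing about.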

Areas of Agreement / Disagreement

There is no consensus on the correct formula for the number of bits required to represent a number; participants hold competing interpretations of what counts as a binary representation, and the debate remains unresolved.

Contextual Notes

Limitations include the dependence on definitions of representation, assumptions about the types of numbers being discussed (positive vs. negative), and the context in which numbers are stored or transmitted, which may affect the number of bits needed.

  • #31
There are two subjects being discussed here, and they are being conflated:
1. How many bits it takes to represent a given quantity of symbols.
2. How to code numbers in binary.

Maybe my wording is not precise, but how many symbols one might use and what people decide those symbols represent are, for the most part, orthogonal concepts.

For example, let's design a 3-bit binary coding system using 8 symbols to represent 0, 1, 2, 3, 4, 5, 6, 7.
Or let's design a 3-bit binary coding system using 8 symbols to represent -4, -3, -2, -1, 0, 1, 2, 3.

What people decide to have each symbol represent defines the coding system. (For example, unsigned, sign-magnitude, and two's complement are commonly accepted methodologies, but one can define anything.)
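
A minimal Python sketch of that point, assuming nothing beyond the 3-bit example above (the decoder names are mine, not from any posted code): the same eight bit patterns decode to 0 through 7 under an unsigned reading and to -4 through 3 under a two's-complement reading.

```python
WIDTH = 3  # the 3-bit example from the post above

def decode_unsigned(bits: str) -> int:
    # Read the pattern as a plain base-2 numeral: 000..111 -> 0..7.
    return int(bits, 2)

def decode_twos_comp(bits: str) -> int:
    # Same pattern, two's-complement reading: 100..011 -> -4..3.
    value = int(bits, 2)
    if bits[0] == "1":           # leading bit set means negative
        value -= 1 << WIDTH
    return value

for n in range(1 << WIDTH):
    pattern = format(n, f"0{WIDTH}b")
    print(pattern, decode_unsigned(pattern), decode_twos_comp(pattern))
```

The bit patterns never change; only the agreed-upon mapping from pattern to value does, which is the sense in which the symbol count and the coding system are orthogonal.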
 
