Minimum and maximum number of bits required....


Discussion Overview

The discussion revolves around the minimum and maximum number of bits required for coding distinct quantities in binary systems, with a focus on the efficiency of different coding methods. Participants explore various representations of numbers in binary and their implications in different contexts, including computing and sensor applications.

Discussion Character

  • Homework-related
  • Technical explanation
  • Debate/contested

Main Points Raised

  • One participant questions the conversion of a specific binary representation (0001000000) to decimal, suggesting that it does not yield the expected value of 6.
  • Another participant clarifies that the example given is a less efficient binary code that represents quantities by the position of a single '1' bit, rather than a standard binary representation.
  • A different participant introduces the concept of extended ASCII and Gray coding, noting that these methods can represent the number 6 in various ways, each with its own advantages in specific applications.
  • Discussion includes the idea that efficiency can be defined in multiple ways, such as storage, ease of calculation, or error avoidance, rather than solely by the number of bits used.

Areas of Agreement / Disagreement

Participants express differing views on the efficiency and appropriateness of various binary coding methods. There is no consensus on a single best approach, as the discussion highlights multiple valid perspectives and applications.

Contextual Notes

Participants note that the definitions of efficiency and coding methods can vary based on context, leading to different interpretations of what constitutes a "binary code." The discussion acknowledges the limitations of the examples provided and the specific applications they may serve.

mooncrater

Homework Statement


Here is a line in my book which says, "Although the minimum number of bits required to code ##2^n## distinct quantities is n, there is no maximum number of bits that may be used for a binary code. For example, the ten decimal digits can be coded with ten bits, and each decimal digit assigned a bit combination of nine 0's and a 1. In this particular binary code, the digit 6 is assigned the bit combination 0001000000."

Homework Equations


A binary number like 1010 is converted to its decimal form by:
##1*2^3+0*2^2+1*2^1+0*2^0##
which is equal to ##10##.
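Just to make the positional rule above concrete, here is a quick Python check (illustrative only; the string literal is the example number from the equation):

```python
# Positional (base-2) conversion of 1010, term by term as in the formula above.
bits = "1010"
value = sum(int(b) * 2**i for i, b in enumerate(reversed(bits)))
print(value)        # 10

# Python's built-in conversion agrees:
print(int(bits, 2)) # 10
```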

The Attempt at a Solution


So, in this particular case, the conversion is not clear to me. When we convert 0001000000 to decimal we get ##2^6=64##, and not 6. We could get 6 from 0000000110 instead. So where am I wrong? Am I misunderstanding something?

Moon
 
That second example, 0001000000, is not an instance of the minimum encoding and therefore cannot be converted binary<->decimal in the usual way.
It's a less efficient way of representing 10 quantities: only one bit may be on at a time.

To convert such data to base 10 you have to look at where the '1' bit is. The positions are numbered from right to left: the rightmost is 0, then 1, then 2, and so on. Here the bit in position 6 is '1', so the coded digit is 6. The bits could also be mapped to any of 10 quantities in any order.

The confusion may be that they state that this too is a "binary code". Not to be confused with "base 2" representation.
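A minimal Python sketch of the difference between the two readings (the string is just the book's example code for the digit 6):

```python
code = "0001000000"

# Base-2 reading: weighted positional value.
print(int(code, 2))  # 64, i.e. 2**6, not 6

# One-hot reading: the coded digit is the position of the single '1',
# counted from the right starting at 0.
digit = len(code) - 1 - code.index("1")
print(digit)  # 6
```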
 
Okay! That's just a different (and less efficient) way of representing numbers.
Thanks. :)
 
Just as riders.

I would think the most commonly used 8-bit code for 6 is 00110110, which is the ASCII code for the character '6'.

You can make up any code you choose, and in an IT context there are good reasons for doing it differently from the 'natural' binary code of maths. For example, the 8-bit Gray coding for 6 is 00000101. Gray code is a minimum encoding with only a single bit change for each increment, which is very useful in sensors for avoiding false values from switching transients.
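The standard binary-reflected Gray code can be computed as n XOR (n >> 1); a small sketch showing the value for 6 and the single-bit-change property around it:

```python
def to_gray(n):
    # Binary-reflected Gray code: XOR the number with itself shifted right by 1.
    return n ^ (n >> 1)

print(format(to_gray(6), "08b"))  # 00000101, matching the value above

# Each increment flips exactly one bit:
for n in range(5, 8):
    print(n, format(to_gray(n), "04b"))
```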

The 10-bit code hinted at in the OP may be derided by mathematicians, but is again useful in switch sensing. Each switch state can be represented by one bit, and which switch has been pressed is determined by the position of the 1. This code can be converted to mathematical binary (ICs are available which do it), but that only works for one switch at a time. The "inefficient" 10-bit code can deal with any combination of switches being pressed at once.
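A sketch of that multi-switch point, with a hypothetical set of pressed switches (the switch numbers are made up for illustration):

```python
pressed = {1, 4, 6}  # hypothetical: switches 1, 4 and 6 down at the same time

# One bit per switch: any combination is representable in the 10-bit code.
code = sum(1 << s for s in pressed)
print(format(code, "010b"))  # 0001010010

# Reading the code back recovers every pressed switch, not just one:
recovered = sorted(i for i in range(10) if code >> i & 1)
print(recovered)  # [1, 4, 6]
```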
And if one wanted to show which button had been pressed on a seven segment display, the code for 6 would be 01011111 (or 01111101 )

Edit after new post:
"Thats just a different ( and non-efficient) way of representing numbers."
By efficiency in this context, presumably you mean the least number of bits of storage? Efficiency could mean ease of calculation, or ability to represent a large range of numbers, or maybe even ability to avoid errors.
 
