Minimum and maximum number of bits required....

In summary, the conversation discusses different ways of representing numbers in binary code, including the example of using 10 bits to represent ten quantities, with each digit assigned a bit combination of nine 0's and a single 1. It also mentions the use of Gray coding for switch sensing and how it avoids false values from switching transients, and notes that in any real system there are always trade-offs, so the best choice of code depends on the application.
  • #1
mooncrater
217
18

Homework Statement


Here is a line in my book which says, "Although the minimum number of bits required to code ##2^n## distinct quantities is n, there is no maximum number of bits that may be used for a binary code. For example, the ten decimal digits can be coded with ten bits, and each decimal digit assigned a bit combination of nine 0's and a 1. In this particular binary code, the digit 6 is assigned the bit combination 0001000000."

Homework Equations


A binary number like 1010 is converted to its decimal form by:
##1*2^3+0*2^2+1*2^1+0*2^0##
which is equal to ##10##.
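As a quick sketch of that positional conversion (in Python, with an illustrative function name):

Code:
def to_decimal(bits: str) -> int:
    # Interpret a bit string positionally: each bit is weighted by a power of 2.
    value = 0
    for bit in bits:
        value = value * 2 + int(bit)   # shift the accumulated value left, add the next bit
    return value

print(to_decimal("1010"))  # 1*2**3 + 0*2**2 + 1*2**1 + 0*2**0 = 10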

The Attempt at a Solution


So, in this particular case, the conversion is not clear to me. When we convert 0001000000 to decimal we get ##2^6 = 64##, not 6. We would get 6 from 0000000110 instead. So where am I wrong? Am I misunderstanding something?

Moon
 
  • #2
That second example, 0001000000, is not an example of the minimum encoding and therefore cannot be converted binary<->decimal in the usual way.
It's a less efficient way of representing ten quantities: here only one bit can be on at a time.

To convert such data to base 10 you have to look at where the '1' bit is. The positions appear to be numbered from right to left: the rightmost is 0, then 1, then 2, and so on, so for the digit 6 the '1' sits in position 6. The bits could also be mapped to any of the ten quantities in any order.
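A quick sketch of that decoding, assuming the positions are numbered from the right starting at 0 (the function name is just illustrative):

Code:
def one_hot_to_digit(bits: str) -> int:
    # Exactly one bit is set in this code; its position, counted from the
    # right starting at 0, is the digit it represents.
    assert bits.count("1") == 1, "not a valid one-bit-set code word"
    return len(bits) - 1 - bits.index("1")

print(one_hot_to_digit("0001000000"))  # the '1' is in position 6, so this is the digit 6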

The confusion may be that they call this too a "binary code", which is not to be confused with "base 2" (positional) representation.
 
  • #3
Okay! That's just a different (and less efficient) way of representing numbers.
Thanks..:)
 
  • #4
Just as riders.

I would think the most commonly used 8-bit code for 6 is 00110110, which is the ASCII code for the character '6' stored in 8 bits.

You can make up any code you choose, and in an IT context there are good reasons for doing it differently from the 'natural' binary code of maths. For example, the 8-bit Gray coding for 6 is 00000101. This code is a minimum encoding in which only a single bit changes on each increment, which is very useful in sensors for avoiding false values from switching transients.
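For reference, the binary-reflected Gray code can be computed as n XOR (n >> 1); a small sketch:

Code:
def gray(n: int) -> int:
    # Binary-reflected Gray code: adjacent values differ in exactly one bit.
    return n ^ (n >> 1)

for i in range(8):
    print(i, format(gray(i), "08b"))
# gray(6) = 0b110 ^ 0b011 = 0b101, i.e. 00000101 as quoted above;
# stepping from 5 (00000111) to 6 (00000101) flips only one bit.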

The 10-bit code hinted at in the OP may be derided by mathematicians, but it is again useful in switch sensing. Each switch state can be represented by one bit, and which switch has been pressed is determined by the position of the 1. This code can be converted to mathematical binary - ICs are available which do it - but that can only work for one switch at a time. The inefficient 10-bit code can deal with any combination of switches being pressed at one time, as in the sketch below.
And if one wanted to show which button had been pressed on a seven-segment display, the code for 6 would be 01011111 (or 01111101).
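A software analogue of that switch sensing (a sketch; the variable and bit assignments are made up): the one-bit-per-switch word reports any combination of pressed switches, while a single binary index, as a priority-encoder IC would produce, only makes sense when one bit is set.

Code:
switches = 0b0001000100           # bits 2 and 6 set: switches 2 and 6 pressed together

pressed = [i for i in range(10) if switches & (1 << i)]
print(pressed)                    # [2, 6] - the 10-bit code copes with both at once

if bin(switches).count("1") == 1:
    print(switches.bit_length() - 1)   # binary position of the single pressed switch
else:
    print("more than one switch pressed - no single binary index")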

Edit after new post:
"That's just a different (and less efficient) way of representing numbers."
By efficiency in this context, presumably you mean the least number of bits of storage? Efficiency could also mean ease of calculation, the ability to represent a large range of numbers, or maybe even the ability to avoid errors.
 

1. What is the minimum number of bits required to represent a single character?

A single character is commonly stored in 8 bits (one byte), which gives 256 possible bit combinations. Strictly speaking, the standard ASCII character set has only 128 characters and so needs just 7 bits, but in practice it is almost always padded to a full byte.

2. How many bits are needed to represent a number between 0 and 255?

Eight bits are needed, because 8 bits give 2^8 = 256 distinct combinations, exactly enough for the 256 values from 0 to 255; representing any value beyond 255 would require more bits.
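As a check on that kind of count: the number of bits needed to cover the values 0..N is the smallest n with 2^n > N, which Python's int.bit_length gives directly for N of 1 or more. A small sketch (the function name is just illustrative):

Code:
def bits_needed(max_value: int) -> int:
    # Smallest number of bits whose 2**n combinations cover 0 .. max_value.
    return max(1, max_value.bit_length())

print(bits_needed(255))   # 8, since 2**8 = 256 combinations cover 0..255
print(bits_needed(9))     # 4 bits for the ten decimal digits 0..9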

3. What is the maximum number of bits that can be used to represent a single character?

In principle there is no maximum, as the discussion above points out, but in practice fixed-width encodings such as UTF-32 use 32 bits per character. That allows 4,294,967,296 combinations, far more than the roughly 1.1 million code points defined by Unicode.

4. How does the number of bits required for a character vary in different character sets?

The number of bits per character varies between character sets and encodings. Standard ASCII uses 7 bits per character (usually stored as an 8-bit byte), while Unicode encodings range from 8 to 32 bits per character: UTF-8 uses 1 to 4 bytes depending on the character, and UTF-32 always uses 4 bytes.
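One way to see that variation, using Python's built-in codecs:

Code:
for ch in ["A", "é", "€", "😀"]:
    print(ch,
          len(ch.encode("utf-8")), "byte(s) in UTF-8,",
          len(ch.encode("utf-32-be")), "bytes in UTF-32")
# 'A' takes 1 byte in UTF-8, the emoji takes 4; UTF-32 always uses 4 bytes per character.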

5. Does the minimum and maximum number of bits required for a character impact the size of a file?

Yes. A file holding characters that take more bits to encode will be larger than a file with the same number of characters that encode in fewer bits, because more bits are stored for each character.
