How a computer converts decimal to binary and back to decimal

  • Thread starter: Avichal
  • Tags: Computer

Discussion Overview

The discussion centers around the methods used by computers to convert decimal numbers to binary and back to decimal. It explores various algorithms, coding systems like BCD, and the underlying arithmetic operations involved in these conversions.

Discussion Character

  • Technical explanation
  • Exploratory
  • Debate/contested

Main Points Raised

  • Some participants mention the DIV/MOD algorithm as a standard method for decimal to binary conversion.
  • There is a question about how computers perform division on decimal numbers when they operate in binary.
  • One participant explains that computers can perform division in binary and obtain remainders and quotients, using programming constructs like modulus and integer division.
  • Another participant notes that some computers, particularly mainframes, use BCD strings for decimal data to avoid rounding issues during conversion.
  • It is suggested that conversion methods may vary depending on the application and the type of computer being used.
  • One participant describes a method of calculating decimal numbers in binary by using pre-stored binary representations of digits.
  • A participant raises a question about calculating the storage requirements for a decimal number in bytes.
  • A C++ function is provided as an example of how to parse unsigned base 10 numbers from text.

Areas of Agreement / Disagreement

Participants express differing views on the methods and systems used for decimal to binary conversion, with no consensus reached on a single approach. The discussion includes multiple competing perspectives on the topic.

Contextual Notes

Some limitations include the dependence on specific computer architectures and programming languages, as well as the potential for rounding issues when converting between number systems.

Avichal
I just studied BCD, excess 3 and other codes but I guess these were used earlier. What is the current method to convert decimal to binary and back to decimal?
Can decoders and encoders be used?
 
This is how we do it. Do computers do the same thing?
 
Avichal said:
This is how we do it. Do computers do the same thing?

Yep.
 
How? Let's say we give the number 1206 as input. The computer first needs to convert it to binary, so it must divide by two and check the remainders. But how can it perform division on a decimal number when it is only designed to do arithmetic on binary numbers?
 
Avichal said:
How? Let's say we give the number 1206 as input. The computer first needs to convert it to binary, so it must divide by two and check the remainders. But how can it perform division on a decimal number when it is only designed to do arithmetic on binary numbers?

You can divide a number stored in binary by any nonzero whole number and get both the remainder (the modulus) and the quotient (the result).

In C++ the modulus operator is % and integer division is just /. If you want to be sure you get the right answer, compute the modulus first, subtract it from the value, and then divide; the division is then exact.

It doesn't matter whether it's 10358 / 2 or 10358 / 23; it's the same kind of operation.
 
Avichal said:
I just studied BCD ... What is the current method to convert decimal to binary and back to decimal?
Depends on the computer and the application. In the case of mainframes and other computers used for accounting-type applications, the data is kept in decimal as BCD strings and the math is performed on those BCD strings, to eliminate any rounding issues due to conversion to binary and back. COBOL is an example of a high-level language that includes both BCD-based and binary-based math. The CPUs in a PC include basic add and subtract operations for BCD, which can be the basis for doing BCD-based math on a PC.

Otherwise, conversion is done by division (keeping the remainder) or by multiplication, depending on whether you are converting from another base to binary or from binary to another base, and on whether you are converting the integer portion or the fractional portion (the part to the right of the decimal or binary point) of a number.
 
Avichal said:
Let's say we give the number 1206 as input. But first the computer needs to convert it to binary.

One way to do this is to calculate
(((((1 × 10) + 2) × 10) + 0) × 10) + 6
The computer already knows what 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, and 10 are in binary -- they have been worked out in advance and stored in memory -- and so can perform the above calculation in binary.
 
How do you calculate, for example, how many bytes are needed to store the decimal number 321?
 
Remember that decimal data is usually present only as text, so the task is really about parsing.
It's easiest to show a C++ function that does this:

Code:
template<class Num> bool ParseUnsignedBase10(const char** p, const char* end, Num* res)
{
    const char* p1 = *p;
    Num v = 0;
    while (p1 < end) {
        if (*p1 < '0' || *p1 > '9') break;   // stop at the first non-digit
        Num v1 = v * 10 + (*p1 - '0');
        if (v1 < v) break;                   // unsigned wrap-around means overflow
        v = v1;
        ++p1;                                // advance past the consumed digit
    }
    if (p1 == *p) return false;              // no digits consumed
    *p = p1;                                 // report how far we parsed
    *res = v;
    return true;
}
 
