How a computer converts decimal to binary and back to decimal

In summary, a decimal number is converted to binary by repeatedly dividing it by two and recording the remainders, and a binary number is converted back to decimal by multiplying each digit by the corresponding power of two and summing the results.
  • #1
Avichal
I just studied BCD, excess-3, and other codes, but I guess these were used earlier. What is the current method to convert decimal to binary and back to decimal?
Can decoders and encoders be used?
 
  • #3
This is how we do it. Do computers do the same thing?
 
  • #4
Avichal said:
This is how we do it. Do computers do the same thing?

Yep.
 
  • #5
How? Let's say we give the number 1206 as input. First the computer needs to convert it to binary, so it must divide it by two and check the remainders. But how will it perform division on a decimal number, since it is only designed to perform arithmetic on binary numbers?
 
  • #6
Avichal said:
How? Let's say we give the number 1206 as input. First the computer needs to convert it to binary, so it must divide it by two and check the remainders. But how will it perform division on a decimal number, since it is only designed to perform arithmetic on binary numbers?

You can divide a number in binary format by any other whole number (except 0) and get both the remainder (i.e. the modulus) and the quotient (i.e. the result).

In C++ the modulus operator is % and integer division is just /. If you want to see how the two fit together, calculate the modulus and subtract it from the value; the division then comes out exact.

Doesn't matter if it's 10358 / 2 or 10358 / 23, it's the same kind of operation.
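
For example, here is a minimal C++ sketch of both operations (the specific numbers are just the ones from this post):

Code:
#include <iostream>

int main()
{
    int value = 10358;
    int divisor = 23;

    // / gives the quotient, % gives the remainder.
    std::cout << value / divisor << " remainder "
              << value % divisor << '\n';             // 450 remainder 8

    // Subtracting the modulus first makes the division exact.
    int remainder = value % divisor;
    std::cout << (value - remainder) / divisor << '\n';  // 450

    return 0;
}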
 
  • #7
Avichal said:
I just studied BCD ... What is the current method to convert decimal to binary and back to decimal?
Depends on the computer and the application. In the case of mainframes and other computers used for accounting-type applications, the data is kept in decimal as BCD strings and the math is performed on those BCD strings, to eliminate any rounding issues due to conversion to binary and back. COBOL is an example of a high-level language that includes both BCD- and binary-based math. The CPUs in a PC include basic add and subtract operations for BCD, which can be the basis for doing BCD-based math on a PC.

Otherwise, conversion is done by division (keeping the remainders) or by multiplication, depending on whether you are converting from another base to binary or from binary to another base, and on whether you are converting the integer portion or the fractional portion (the part to the right of the decimal or binary point) of the number.
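
For the fractional part, the multiply method looks like this; a minimal sketch, with 0.625 chosen only as an illustration:

Code:
#include <iostream>

int main()
{
    // Convert the fractional part 0.625 to binary by repeatedly
    // multiplying by 2; each integer part that falls out is the
    // next binary digit after the point. 0.625 = 0.101 in binary.
    double frac = 0.625;
    std::cout << "0.";
    for (int i = 0; i < 8 && frac > 0.0; ++i) {
        frac *= 2.0;
        int digit = static_cast<int>(frac);  // 0 or 1
        std::cout << digit;
        frac -= digit;
    }
    std::cout << '\n';  // prints 0.101

    return 0;
}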
 
  • #8
Avichal said:
Let's say we give the number 1206 as input. First the computer needs to convert it to binary.
One way to do this is to calculate
(((((1 × 10) + 2) × 10) + 0) × 10) + 6
The computer already knows what 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, and 10 are in binary -- they have been worked out in advance and stored in memory -- and so can perform the above calculation in binary.
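
As a loop, that calculation might look like this (a minimal sketch; the names are mine, not from the post):

Code:
#include <iostream>

int main()
{
    const char* text = "1206";

    // ((((1 * 10) + 2) * 10 + 0) * 10) + 6, built up one digit at a time.
    // All of the arithmetic happens in binary; only the digits are decimal.
    int value = 0;
    for (const char* p = text; *p != '\0'; ++p)
        value = value * 10 + (*p - '0');

    std::cout << value << '\n';  // prints 1206

    return 0;
}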
 
  • #9
How do you calculate how many bytes you need to store a decimal number, e.g. 321?
 
  • #10
Remember that decimal data is usually present only as text, so the task is really about parsing.
It's easier to show a C++ function that does this:

Code:
template<class Num>
bool ParseUnsignedBase10(const char** p, const char* end, Num* res)
{
    const char* p1 = *p;
    Num v = 0;
    while (p1 < end) {
        if (*p1 < '0' || *p1 > '9') break;  // stop at the first non-digit
        Num v1 = v * 10 + (*p1 - '0');      // accumulate Horner-style, in binary
        if (v1 < v) break;                  // rough overflow guard (assumes Num is unsigned)
        v = v1;
        ++p1;                               // advance to the next character
    }
    if (p1 == *p) return false;  // no digits were consumed
    *p = p1;                     // tell the caller how far we got
    *res = v;
    return true;
}
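
A hypothetical caller (not part of the original post) might use it like this, assuming the function above is in scope:

Code:
#include <iostream>

int main()
{
    const char* text = "1206 apples";
    const char* p = text;
    unsigned value = 0;

    if (ParseUnsignedBase10(&p, text + 11, &value))
        std::cout << value << '\n';  // prints 1206; p now points at the space

    return 0;
}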
 

1. How does a computer convert decimal to binary?

A computer converts decimal to binary by repeated division by 2. It repeatedly divides the decimal number by 2 and records the remainder until the quotient becomes 0. The sequence of remainders, read in reverse order (last remainder first), is the binary representation of the decimal number.
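
A minimal sketch of that procedure (the function name is illustrative):

Code:
#include <algorithm>
#include <iostream>
#include <string>

// Convert an unsigned value to its binary representation as text.
std::string ToBinary(unsigned n)
{
    if (n == 0) return "0";
    std::string bits;
    while (n > 0) {
        bits += char('0' + n % 2);  // the remainder is the next bit, low end first
        n /= 2;
    }
    std::reverse(bits.begin(), bits.end());  // remainders come out backwards
    return bits;
}

int main()
{
    std::cout << ToBinary(1206) << '\n';  // prints 10010110110
    return 0;
}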

2. What is the purpose of converting decimal to binary?

The purpose of converting decimal to binary is to represent numbers in a form that can be easily understood and manipulated by computers. Binary is the language of computers and is used to store and process data.

3. How does a computer convert binary back to decimal?

A computer converts binary back to decimal by weighting each digit by a power of 2. It multiplies each binary digit by the corresponding power of 2 and adds the results together to get the decimal equivalent. For example, 1011 in binary is 1*2^3 + 0*2^2 + 1*2^1 + 1*2^0 = 8 + 0 + 2 + 1 = 11 in decimal.
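
In code this can be folded into a single left-to-right loop, doubling as you go (a sketch; the function name is illustrative):

Code:
#include <iostream>
#include <string>

// Evaluate a string of binary digits: each step doubles the running
// value (shifting it one binary place) and adds the next bit.
unsigned FromBinary(const std::string& bits)
{
    unsigned value = 0;
    for (char c : bits)
        value = value * 2 + (c - '0');
    return value;
}

int main()
{
    std::cout << FromBinary("1011") << '\n';  // 8 + 0 + 2 + 1 = 11
    return 0;
}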

4. Can a computer convert any decimal number to binary?

Yes, a computer can convert any decimal number to binary as long as the number fits in the integer type being used. For example, a 32-bit unsigned integer can hold values up to 2^32 - 1, which is approximately 4.3 billion.

5. Is converting decimal to binary reversible?

Yes, for whole numbers, converting decimal to binary and back to decimal is a fully reversible process: as long as the binary number is handled correctly, it converts back to exactly the original decimal value. Fractional values are the exception, since some of them (such as 0.1) have no finite binary representation, which is why the BCD approach mentioned above is used in accounting applications.
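
A quick round-trip check over the integers, reusing the two sketches above (it assumes ToBinary and FromBinary are in scope):

Code:
#include <cassert>

int main()
{
    // Every integer survives the decimal -> binary -> decimal round trip.
    for (unsigned n = 0; n < 100000; ++n)
        assert(FromBinary(ToBinary(n)) == n);
    return 0;
}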
