## How computer converts decimal-to-binary-to-decimal

I just studied BCD, excess-3, and other codes, but I guess those were used in earlier machines. What is the current method to convert decimal to binary and back to decimal?
Can decoders and encoders be used?

 Hey Avichal and welcome to the forums. The standard algorithm is known as the DIV/MOD algorithm in programming. Here is an intuitive idea of how it works: http://www.mathsisfun.com/base-conversion-method.html
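As an illustration of the repeated-division idea from that page (this sketch and the name `ToBase` are my own, not from the link), each division by the target base yields one digit as the remainder, least significant first:

```cpp
#include <algorithm>
#include <string>

// Convert a non-negative integer to its digit string in the given base
// (2..16) by repeated division: each remainder is the next digit, least
// significant first, so the collected digits are reversed at the end.
std::string ToBase(unsigned value, unsigned base) {
    const char* digits = "0123456789ABCDEF";
    if (value == 0) return "0";
    std::string out;
    while (value > 0) {
        out.push_back(digits[value % base]);  // remainder = next digit
        value /= base;                        // quotient carries on
    }
    std::reverse(out.begin(), out.end());
    return out;
}
```

For example, `ToBase(1206, 2)` walks through 1206 → 603 → 301 → … and produces `"10010110110"`.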
 This is how we do it. Do computers do the same thing?


 Quote by Avichal This is how we do it. Do computers do the same thing?
Yep.

 How? Let's say we give the number 1206 as input. But first the computer needs to convert it to binary, so it must divide it by two and check the remainders. But how will it perform division on a decimal number, when it is only designed to perform arithmetic on binary numbers?

 Quote by Avichal How? Let's say we give the number 1206 as input. But first the computer needs to convert it to binary, so it must divide it by two and check the remainders. But how will it perform division on a decimal number, when it is only designed to perform arithmetic on binary numbers?
You can divide a number in binary format by any other whole number (not 0 though) and get the remainder (i.e. the modulus) and the quotient (i.e. the result).

In C++ the modulus is % and integer division is just /. If you want to make sure you get the right answer just calculate the modulus, subtract that from the value and then do the division and you are guaranteed to get the right answer.

It doesn't matter whether it's 10358 / 2 or 10358 / 23; it's the same kind of operation.

 Quote by Avichal I just studied BCD ... What is the current method to convert decimal to binary and back to decimal?
Depends on the computer and the application. In the case of mainframes and other computers used for accounting-type applications, the data is kept in decimal as BCD strings and the math is performed directly on those BCD strings, to eliminate any rounding issues due to conversion to binary and back. COBOL is an example of a high-level language that includes both BCD-based and binary-based math. The CPUs in a PC include basic add and subtract operations for BCD, which can be the basis for doing BCD-based math on a PC.
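As an illustration of the idea behind those BCD operations (a software sketch of my own, not the actual CPU instructions), adding two packed-BCD bytes uses the classic "+6" correction whenever a decimal digit overflows its 4-bit nibble:

```cpp
#include <cstdint>

// Illustration only: add two packed-BCD bytes (two decimal digits each).
// When a nibble's sum exceeds 9, adding 6 skips the six invalid codes
// 0xA-0xF and propagates a decimal carry, which is the same correction
// the hardware decimal-adjust operations perform.
std::uint8_t BcdAdd(std::uint8_t a, std::uint8_t b) {
    std::uint8_t lo = (a & 0x0F) + (b & 0x0F);
    if (lo > 9) lo += 6;                       // correct the low digit
    std::uint8_t carry = lo >> 4;
    lo &= 0x0F;
    std::uint8_t hi = (a >> 4) + (b >> 4) + carry;
    if (hi > 9) hi += 6;                       // correct the high digit
    hi &= 0x0F;                                // (carry out dropped here)
    return static_cast<std::uint8_t>((hi << 4) | lo);
}
```

For example, `BcdAdd(0x27, 0x38)` gives `0x65`, matching 27 + 38 = 65 in decimal.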

Otherwise, conversion is done by division (using the remainder) or by multiplication, depending on whether you are converting from another base to binary or from binary to another base, and on whether you are converting the integer portion or the fractional portion (the part to the right of the decimal or binary point) of a number.
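The fractional case is the one that uses multiplication. A sketch (my own example; `maxBits` caps non-terminating fractions like 0.1): multiply the fraction by 2 repeatedly, and each time an integer carry pops out it becomes the next bit after the binary point:

```cpp
#include <string>

// Convert the fractional part of a number to binary by repeated
// multiplication by 2: the integer carry produced at each step is the
// next bit after the binary point. maxBits bounds the output because
// many decimal fractions (e.g. 0.1) do not terminate in binary.
std::string FractionToBinary(double frac, int maxBits) {
    std::string out = ".";
    for (int i = 0; i < maxBits && frac > 0.0; ++i) {
        frac *= 2.0;
        if (frac >= 1.0) { out.push_back('1'); frac -= 1.0; }
        else             { out.push_back('0'); }
    }
    return out;
}
```

For example, 0.625 → 1.25 (bit 1) → 0.5 (bit 0) → 1.0 (bit 1), so `FractionToBinary(0.625, 8)` yields `".101"`.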

 Remember that decimal data is usually present only as text, so the task is really about parsing. It's easier to show a C++ function doing this:
Code:
template<typename Num>
bool ParseUnsignedBase10(const char** p, const char* end, Num* res) {
    const char* p1 = *p;
    Num v = 0;
    while (p1 < end) {
        if (*p1 < '0' || *p1 > '9') break;  // stop at the first non-digit
        Num v1 = v * 10 + (*p1 - '0');
        if (v1 < v) break;                  // stop on overflow wrap-around
        v = v1;
        ++p1;                               // advance to the next character
    }
    if (p1 == *p) return false;             // no digits were consumed
    *p = p1;
    *res = v;
    return true;
}
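For the opposite direction (binary machine integer back to decimal text), a counterpart in the same style could look like this; the sketch and the name `FormatUnsignedBase10` are my own, not from the post above:

```cpp
#include <string>

// Format a binary (machine) integer as decimal text: v % 10 peels off
// the next digit, v / 10 keeps the rest, and the digits come out least
// significant first, so the string is built reversed.
template <typename Num>
std::string FormatUnsignedBase10(Num v) {
    if (v == 0) return "0";
    std::string out;
    while (v > 0) {
        out.push_back(static_cast<char>('0' + v % 10));
        v /= 10;
    }
    return std::string(out.rbegin(), out.rend());
}
```

So the round trip is: parse the text digits into a binary integer on input, compute in binary, then format back to decimal text on output, e.g. `FormatUnsignedBase10(1206u)` gives `"1206"`.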