How Does a Pocket Calculator Convert Binary to Decimal?

  • Thread starter: seetherulez
  • Tags: Calculators
AI Thread Summary
A pocket calculator operates much like a simple computer: it is built around a processor that works in binary. In basic calculators the binary-to-decimal conversion is typically hardcoded in the chip itself; these devices often contain little more than a single epoxy blob integrating the display driver, keypad logic, and the logic for basic arithmetic. More advanced scientific models use something closer to a microprocessor plus a ROM holding the program for their operations, including the binary-decimal conversion routines. High-end graphing calculators use off-the-shelf microprocessors and RAM, and can run user programs or accept firmware updates; their firmware handles the conversion for display. The discussion notes that while the cheapest calculators amount to little more than dedicated logic, more sophisticated models share the fundamental components of a computer, reflecting the evolution of calculators from simple devices into genuine computing tools.
seetherulez
Hello, I'm new here, so please help me out.
Question: what does a pocket calculator use to convert binary into decimal data? I've always been curious about this.
A calculator is basically a simple computer, right? So its core component is a microprocessor that fetches instructions and processes data in binary; what is doing all the rest?
Presumably it needs some kind of programming, like BASIC, to turn the binary logic into decimal output, and it needs some kind of interrupt handler for the keypad and the LCD screen. In a desktop computer this is done by the BIOS, so my question is: does a calculator have a BIOS containing actual software for all this, or is it hardwired into the CPU, or what?
 
Welcome to PhysicsForums!

Some calculators are just highly-integrated computers. A bit of history: modern computers were made possible by the calculator boom of the 70s. It paved the way for the general-purpose microprocessor and drove clock frequencies up. When the boom collapsed (how many calculators do scientific / engineering types need?), those processors (and some of the other associated ICs) were well positioned to enable personal computers (not just 'the' PC) to come into being.

In any case, when you take apart your low-end calculators (the ones that only have a single-line display and 5 or 6 buttons in addition to the numeric pad) you'll find an epoxy blob and not much else. That blob probably contains a little sliver of silicon with the display driver, keypad interpreter, and a little bit of logic which can perform the basic math functions on the numbers you punch in, and not much more (why bother wasting space / money / time on any more?) The conversion from binary to decimal isn't particularly difficult, and is probably hardcoded into that little sliver.
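To make that concrete, the classic way to do the conversion in hardwired logic is the "double dabble" (shift-and-add-3) algorithm, which turns a binary value into binary-coded decimal using nothing but shifts and small additions, one BCD digit per display position. Here's a rough sketch in C, purely for illustration and not any particular calculator's circuitry:

Code:
#include <stdio.h>

/* "Double dabble" (shift-and-add-3): convert an 8-bit binary value into
 * three BCD digits using only shifts and adds, the sort of operation that
 * is cheap to hardwire. Illustration only. */
static void double_dabble(unsigned char binary, unsigned char bcd[3])
{
    bcd[0] = bcd[1] = bcd[2] = 0;              /* hundreds, tens, ones */

    for (int bit = 7; bit >= 0; bit--) {
        for (int d = 0; d < 3; d++)            /* any digit >= 5 gets +3 */
            if (bcd[d] >= 5)
                bcd[d] += 3;

        /* Shift the whole BCD register left one place, pulling in the
         * next binary bit on the right. */
        bcd[0] = ((bcd[0] << 1) | (bcd[1] >> 3)) & 0x0F;
        bcd[1] = ((bcd[1] << 1) | (bcd[2] >> 3)) & 0x0F;
        bcd[2] = ((bcd[2] << 1) | ((binary >> bit) & 1)) & 0x0F;
    }
}

int main(void)
{
    unsigned char bcd[3];
    double_dabble(243, bcd);                     /* 243 = 1111 0011 binary */
    printf("%d%d%d\n", bcd[0], bcd[1], bcd[2]);  /* prints 243 */
    return 0;
}

Each 4-bit BCD digit then maps straight onto a seven-segment decoder, which is why this approach fits so naturally into a display driver.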

Your more expensive scientific calculator probably contains something closer to a microprocessor, along with a ROM (Read Only Memory) holding the program the calculator needs to carry out its operations. Yes, there's probably a routine in there that takes care of the decimal-binary conversion.
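The routines themselves can be very plain. As a hedged sketch (hypothetical code, not taken from any vendor's ROM), both directions look something like this in C:

Code:
#include <stdio.h>

/* Hypothetical sketch of the two conversion routines such a ROM might hold. */

/* Keypad entry: each new decimal digit folds into the binary value. */
static unsigned int push_digit(unsigned int value, unsigned int digit)
{
    return value * 10u + digit;
}

/* Display: peel decimal digits off a binary value by repeated division. */
static int to_digits(unsigned int value, char out[], int max)
{
    int n = 0;
    do {
        out[n++] = (char)('0' + value % 10u);  /* least significant first */
        value /= 10u;
    } while (value != 0u && n < max);
    return n;
}

int main(void)
{
    unsigned int reg = 0;
    reg = push_digit(reg, 3);                  /* user types "3", "1", "5" */
    reg = push_digit(reg, 1);
    reg = push_digit(reg, 5);
    printf("stored internally as binary: 0x%X\n", reg);   /* 0x13B = 315 */

    char digits[12];
    int n = to_digits(reg, digits, (int)sizeof digits);
    for (int i = n - 1; i >= 0; i--)           /* most significant first */
        putchar(digits[i]);
    putchar('\n');                             /* prints 315 */
    return 0;
}

Nothing here needs more than multiply, divide, and add, all of which a small ROM routine has at hand.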

Your high-end graphing calculators frequently use an off-the-shelf microprocessor (fun fact: the TI-89 uses a slightly updated version of the 68000 used in the original Macintosh), have RAM, and can accept user programs, or firmware updating. The firmware will have some routine or other to convert the binary data to decimal (or hexadecimal, or whatever) for display purposes.
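As an aside on the hexadecimal case: since 16 is a power of two, no division is needed at all, just shifts and masks, which is why hex display is essentially free for firmware. A tiny illustrative snippet (again, nothing vendor-specific):

Code:
#include <stdio.h>

/* Binary to hexadecimal needs only shifts and masks: each hex digit is
 * simply four bits of the value. Illustration only, 16-bit value assumed. */
int main(void)
{
    static const char hex[] = "0123456789ABCDEF";
    unsigned int value = 48879u;               /* 0xBEEF */

    for (int shift = 12; shift >= 0; shift -= 4)
        putchar(hex[(value >> shift) & 0xFu]);
    putchar('\n');                             /* prints BEEF */
    return 0;
}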

The BIOS in your computer doesn't do much aside from initializing your system (and providing the setup program that lets you change a bunch of those parameters). Interrupts are usually handled by the CPU itself: an interrupt is signaled on the system bus, the interrupt mask determines whether it gets serviced, the CPU sets aside what it's doing to run the handler, and then it resumes what it was doing afterwards.
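The usual shape of that, in very rough C (a toy simulation with made-up names, not real firmware or any specific CPU's interrupt mechanism), is that the handler does almost nothing beyond recording the event, and the main loop deals with it when it gets a chance:

Code:
#include <stdio.h>
#include <stdbool.h>

/* Toy simulation of the interrupt-handling pattern: the "ISR" just records
 * the event; the main loop notices the flag and does the real work. */

static volatile bool key_ready = false;
static volatile int  key_code  = -1;

/* Stands in for the routine the CPU would jump to when the keypad
 * raises an interrupt. */
static void keypad_isr(int raw_code)
{
    key_code  = raw_code;     /* grab the key */
    key_ready = true;         /* flag the main loop */
}

int main(void)
{
    keypad_isr(7);            /* pretend the '7' key just interrupted us */

    if (key_ready) {          /* the main loop notices and handles it */
        printf("key pressed: %d\n", key_code);
        key_ready = false;
    }
    return 0;
}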

In the above discussion, the CPU is often treated as if it were a stand-alone unit. Some microprocessors also have on-board flash memory, which lets you drop one or two external ICs (you don't need a separate chip to store your programs / firmware on). However, there usually isn't very much of it, because it makes the chip bigger and more complicated, which in turn makes it more susceptible to manufacturing defects and more expensive.
 
OK, so basically it depends on how expensive the calculator is: a cheap calc like the one you get in a gift basket is just a handful of logic gates, but a scientific calc actually has the basics of a computer, right?
Thanks for the reply, btw.
 
A simple calculator still has a CPU; it's just integrated onto the same chip as the keypad and screen interfaces. The software is probably also burned into a ROM on the same chip, so it can't be changed.
But fundamentally it has the same components as a computer, except possibly lacking any memory.
The first CPUs (the Intel 4004) were built to be used in calculators, replacing earlier designs based on just gates.

As MATLABdude said, at the higher end there is really no difference between a calculator and a computer.
 