Why Computers Use the Binary Number System

  • Thread starter: Defennder
  • Tags: Binary System
Summary
Computers utilize the binary number system primarily because it aligns with the fundamental on/off states of electronic circuits, simplifying design and reducing error potential. The binary system allows for fast processing, as logic gates can quickly transition between states, a feat complicated by multi-level systems that would introduce noise and require longer settling times for reliable outputs. Manufacturing inconsistencies further complicate non-binary systems, as slight variations in component performance could lead to significant errors. Binary arithmetic is straightforward, relying on simple logical operations like AND, OR, and inversion, which can be efficiently executed by electronic switches. Overall, the binary system is the most efficient and reliable choice for computer architecture, balancing speed, simplicity, and manufacturing feasibility.
Defennder
Homework Helper
I have no idea where to put this thread, so I'm trying out this forum.

Why do computers use the binary number system instead of other n-base number systems? I did a Google search and I saw that one explanation was that in a circuit, certain devices are either turned on or off (1 or 0) which is why base 2 was used historically. I'm rather clueless about this, so excuse me if I sound naive. But hasn't technology progressed enough such that we can build computers which are not constrained by either on or off binary systems?

Another reason I found was that the complexity involved in using something other than a base-2 number system would burn out the system and leave more room for errors. Here: http://wiki.answers.com/Q/Why_do_computers_use_the_binary_code_instead_of_the_decimal_system

Or are there other reasons why such is not possible or feasible?
 
On or off is still the simplest system. You could use a system with more levels, but it would be more complicated.

Consider doing an addition: you could have 100 voltage levels between, say, 0 and 5 V and add them in an amplifier to give an output in the range 0-10 V. But your system would then need noise of less than 0.05 V, otherwise you would read, say, 10 as 11 or 9. And how would you do fractions? You would have to make smaller and smaller fractions of a volt represent the decimal places.
It would also be hard to store values, because any drift in the stored voltage would change the value.
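To make the noise argument concrete, here is a toy sketch in Python (the function names are mine, and the 100-level and 0.05 V figures echo the numbers above; real circuits are of course analog, not simulated this way):

```python
import random

V_MAX = 5.0  # full-scale voltage, as in the 0-5 V example above

def read_level(stored_voltage, n_levels, noise_v):
    """Decode a stored voltage into one of n_levels bands, after adding noise."""
    noisy = stored_voltage + random.uniform(-noise_v, noise_v)
    step = V_MAX / (n_levels - 1)
    level = round(noisy / step)
    return max(0, min(n_levels - 1, level))  # clamp to the valid range

def error_rate(n_levels, noise_v, trials=10_000):
    """Fraction of reads that land in the wrong band."""
    step = V_MAX / (n_levels - 1)
    errors = 0
    for _ in range(trials):
        level = random.randrange(n_levels)
        if read_level(level * step, n_levels, noise_v) != level:
            errors += 1
    return errors / trials

# With 0.1 V of noise, binary bands are 5 V apart and never misread,
# while 100 bands (~0.05 V apart) are misread most of the time.
print(error_rate(2, 0.1))
print(error_rate(100, 0.1))
```

The point is the same as above: the more levels you pack into the same voltage range, the less noise it takes to push a value into the wrong band.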
 
mgb_phys said:
Consider doing an addition, you could have 100 voltages levels between say 0 and 5V and add them in an amplifier to give an output in the range 0-10V.
What base is this working in?

But your system would have to have a noise of less than 0.05V otherwise you would read say 10 as 11 or 9. Then how would you do fractions, you would have to make smaller and smaller fractions of a volt represent the decimal places.
Why isn't the problem of fractions present in the binary system?

It would also be hard to store values because you would have to care about any changes in the stored voltage changing the value.
Can you elaborate on this, and how this doesn't also affect binary systems?
 
The reason computers use binary is ultimately because you want them to be fast. Most logic gates are effectively open-loop amplifiers with extremely high gains (as much as millions of volts / volt). When the input changes, the amplifier immediately swings to its opposite state, and this transition is engineered to be as fast as technologically possible, on the order of picoseconds.

If logic gates had many different levels, rather than just "on" and "off," you'd open up a Pandora's box of problems. If each state were represented by only a small range of voltages, you'd have to deal with the inevitable problems of ringing, oscillation, and settling time. The end result is that the inputs would have to be allowed to settle for a comparatively long time before the output would be reliable -- something on the order of hundreds of nanoseconds or maybe even microseconds.

There are also major manufacturing problems with many-state logic. No two manufactured transistors are ever exactly the same, and these mismatches cause offset errors. If your states were represented by 100 mV bands, and you had some logic gates with 240 mV of offset, you wouldn't have a computer -- you'd have a paperweight.
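As a rough sketch of that offset problem (the 100 mV bands and 240 mV offset come from the numbers above; the decoder function itself is hypothetical):

```python
BAND_MV = 100  # each logic state owns a 100 mV band, as in the example above

def decode_state(voltage_mv, offset_mv=0):
    """Map a voltage to a logic state; offset_mv models transistor mismatch."""
    return round((voltage_mv + offset_mv) / BAND_MV)

# A perfect gate decodes state 7 (700 mV) correctly...
print(decode_state(700))
# ...but 240 mV of offset pushes the same input two bands away.
print(decode_state(700, 240))
# In binary on a 5 V supply, the two bands are volts apart,
# so a 240 mV offset changes nothing.
```

With only two states, the decision threshold sits volts away from either valid level, which is why the same device mismatch that ruins a 100-state gate is harmless in a binary one.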

So, in order to make fast, easily manufactured computers, you have to use as few states as possible. Binary is, in a sense, the most "forgiving" number system possible.

- Warren
 
Defennder said:
... hasn't technology progressed enough such that we can build computers which are not constrained by either on or off binary systems?...
As you can see, binary is the most efficient system.
 
I can't say I understand most of what was said, but thanks guys.
 
In binary, arithmetic reduces to a sequence of AND, OR and inversion decisions that can be handled by electronic switches. For example, consider the four one-bit addition problems: 0+0=0, 0+1=1, 1+0=1, and 1+1=10 (10 is the binary equivalent of the decimal number 2). Look at the pattern of the answers in terms of whether the two digits are the same or different. If the two digits differ (the exclusive-OR case), either 0+1 or 1+0, the answer is 1. If the two digits are the same (the inversion of the exclusive-OR), either 0+0 or 1+1, the answer is 0, with the qualifier that for 1+1 (AND) we have to carry a 1 to the next column, while for 0+0 (NOR) we don't. Keep thinking that way and you can make an array of transistors do any arithmetic.
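The sum/carry pattern described above is exactly a half adder, and chaining the carries gives full multi-bit addition. A minimal sketch in Python (the function names are mine; `^`, `&` and `|` stand in for the XOR, AND and OR gates):

```python
def half_adder(a, b):
    """Add two bits: the sum is XOR, the carry is AND -- the pattern above."""
    return a ^ b, a & b

def full_adder(a, b, carry_in):
    """Chain two half adders to absorb the carry from the previous column."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2

def add_binary(x, y, width=8):
    """Ripple-carry addition: one column at a time, using only gate logic."""
    carry, result = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

print(add_binary(1, 1))    # 2, i.e. binary 10
print(add_binary(25, 90))  # 115
```

Every arithmetic operation in a real ALU is built from variations on this idea: a handful of gate types, repeated across the bit columns.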
 
