Why do we use binary computing systems instead of ternary systems?

AI Thread Summary
Binary computing systems are preferred over ternary systems primarily for reasons of hardware efficiency and cost-effectiveness. Historical examples, such as the Soviet Setun computer, show that while ternary systems can simplify certain operations, binary hardware ultimately proved more practical to build at scale. Advances in integrated circuit technology, driven by Moore's Law, have further solidified the dominance of binary systems, as they allow for smaller feature sizes and higher densities. The complexity of implementing ternary logic, especially in modern processes, poses significant challenges that outweigh the potential benefits. Overall, the simplicity and reliability of binary logic continue to make it the standard in computing systems.
elcaro
TL;DR Summary
In the past, computers have been built that used a ternary (three-valued: -1, 0, 1) storage system instead of the now-common binary (two-valued: 0 and 1) system. Both systems have their pros and cons, but why is computing hardware now dominated almost exclusively by binary systems?
See for example Ternary computer
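For concreteness, here is a minimal sketch (my own illustration, not taken from the linked article) of how balanced ternary represents numbers with the digits -1, 0, +1, so signed values need no separate sign bit:

```python
# Minimal sketch: balanced ternary uses digits -1, 0, +1 (often written -, 0, +),
# so negative numbers are represented directly, with no separate sign bit.

def to_balanced_ternary(n: int) -> list[int]:
    """Return the balanced-ternary digits of n, least significant first."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 3           # 0, 1, or 2
        n //= 3
        if r == 2:          # a digit of 2 becomes -1 with a carry into the next trit
            r = -1
            n += 1
        digits.append(r)
    return digits

def from_balanced_ternary(digits: list[int]) -> int:
    """Inverse: evaluate digits (least significant first) back to an integer."""
    return sum(d * 3**i for i, d in enumerate(digits))

if __name__ == "__main__":
    for n in (-5, 0, 7):
        trits = to_balanced_ternary(n)
        assert from_balanced_ternary(trits) == n
        print(n, trits)   # e.g. 7 -> [1, -1, 1]  (1 - 3 + 9)
```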
 
What have you found with your Google searching so far for the tradeoffs? I can think of several reasons (mostly hardware related), but I'd be interested to see some links to other analyses...
 
berkeman said:
What have you found with your Google searching so far for the tradeoffs? I can think of several reasons (mostly hardware related), but I'd be interested to see some links to other analyses...
In 1958 a ternary computer system was built in the Soviet Union: the Setun. Some 50 machines were produced. They were later replaced with binary computers that performed equally well but cost 2.5 times as much. See Setun

Advantages of ternary computers include, for example, branching on comparison results (greater than, equal to, less than), which is simpler to implement using ternary logic.
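As a rough illustration of that point (a toy sketch, not how any real ternary machine is programmed): a comparison has three natural outcomes, which fit in a single trit, so one test can drive a three-way branch, whereas a binary machine typically has to test condition flags twice:

```python
# Toy sketch only: a comparison has three natural outcomes, which fit in one
# trit (-1, 0, +1), so a ternary machine can branch three ways on a single test.

def compare(a: int, b: int) -> int:
    """Three-valued comparison result: -1 (less), 0 (equal), +1 (greater)."""
    return (a > b) - (a < b)

def branch_on_compare(a: int, b: int) -> str:
    # On a ternary machine this would be one three-way branch instruction;
    # a binary machine usually tests flags twice (e.g. "jump if less",
    # then "jump if equal").
    outcome = compare(a, b)
    return {-1: "take LESS path", 0: "take EQUAL path", 1: "take GREATER path"}[outcome]

if __name__ == "__main__":
    print(branch_on_compare(3, 7))   # take LESS path
    print(branch_on_compare(5, 5))   # take EQUAL path
```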

Also I found this: Why not ternary computers and Building the first ternary computer
 
Because on-off, i.e. a switch, is much easier to implement efficiently and at high speed than a 3-state system.
What would the ternary-computer equivalent of the MOSFET be?
I am sure you could implement a ternary system using MOSFET-based switching, but presumably that would defeat the point.

Btw, a modern equivalent of this discussion is the qutrit (3-level system) vs the qubit (2-level system). Since it is hard to make qubits, one could argue that it would make more sense to use (at least) 3 levels for computations. Right now this looks like an interesting idea, but one that is hard to implement. The "gain" from using fewer qubits is typically more than offset by the increased control complexity and the reduction in stability.
That is, it is not dissimilar to the old ternary vs binary discussion.
 
elcaro said:
In 1958 a ternary computer system was built in the Soviet Union
Be careful putting much stock in what was done many years ago. We've come a long way since then in computing hardware.
elcaro said:
Also I found this: Why not ternary computers and Building the first ternary computer
These are newer efforts, but not very good articles, IMO. Very popular-press-type articles with little substance.

A fruitful search on this should probably include looking into how the state of the art in cell density and operational speed has made such huge advances over the past couple of decades. You will find that super-dense, very fast computer and memory ICs have evolved because of the ability to simplify and shrink circuit features so much (related to Moore's Law). As IC feature sizes have shrunk, the power supply voltages have also gone down (otherwise you get too much leakage and risk arc-over and punch-through), which really limits you to storing and detecting just two states per cell/gate. To store and detect 3 states, you need analog detection with wider voltage margins, which keeps you at larger feature sizes and lower IC densities (more expensive and much slower).
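To make the voltage-margin argument concrete, here's a back-of-the-envelope sketch; the 0.8 V supply and 0.1 V noise allowance are made-up round numbers for illustration, not real process parameters:

```python
# Back-of-the-envelope sketch of the voltage-margin argument above.
# The supply voltage and noise figure are made-up round numbers for
# illustration, not real process parameters.

VDD = 0.8    # supply voltage in volts (illustrative)
NOISE = 0.1  # assumed peak noise/variation per level, in volts (illustrative)

for levels in (2, 3, 4):
    window = VDD / levels        # voltage window allotted to each level
    margin = window - 2 * NOISE  # what is left after noise on both sides
    print(f"{levels} levels: {window*1000:.0f} mV per level, "
          f"{margin*1000:.0f} mV margin")

# With 2 levels each state gets 400 mV (200 mV margin); with 3 levels only
# ~267 mV (~67 mV margin) -- which is why multi-level sensing wants either a
# higher supply or larger, slower analog detection circuitry.
```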

Especially if you've seen some of the state-of-the-art design rules for the smallest geometries that we are using now, it's amazing how close we are cutting voltage margins, etc., in order to maximize the performance and minimize the cost of high-density ICs.
 
It's following that old rule that it takes 2 to tango!

If you recall, the earliest computing machines were mechanical and used base 10 (Babbage). Then came the electromechanical relay machines using base 2 (Zuse).

https://en.wikipedia.org/wiki/Z3_(computer)

Finally, electronics replaced the relays, first with tube logic and then with transistor logic, and base 2 was here to stay.

One could argue that it's simpler to implement base 2 logic in hardware than base 3.
 
Why not more levels than ternary (if we're discussing a move)?

This question is mirrored in the choice between long and reduced instruction sets for computer microcode. The shorter the list of possible instructions (logic levels), the simpler the implementation, but the slower the execution times can be.
These days, long-winded calculations tend to be done with dedicated processors (image and video processing are examples), and we have hybrid structures. I guess the same could be said about logic levels. You can more or less assume distributed processing, so different processors could use different logic; the more logic levels, the more specialised the processes.
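As a rough way to quantify the "more levels vs. fewer digits" tradeoff (a textbook-style toy model, not a statement about any real hardware): a higher radix needs fewer digits for the same value range, but each digit needs more distinguishable levels, and the product of the two is a crude cost proxy:

```python
# Rough sketch of the "more levels vs. fewer digits" tradeoff mentioned above.
# The cost model (radix * number_of_digits) is a classic textbook
# simplification, not a statement about real hardware.

def digits_needed(value_range: int, radix: int) -> int:
    """Smallest d such that radix**d >= value_range (exact integer arithmetic)."""
    d, span = 0, 1
    while span < value_range:
        span *= radix
        d += 1
    return d

VALUE_RANGE = 10**6   # number of distinct values to represent (illustrative)

for radix in (2, 3, 4, 8, 10, 16):
    digits = digits_needed(VALUE_RANGE, radix)
    cost = radix * digits   # crude "hardware cost" proxy
    print(f"base {radix:2d}: {digits:2d} digits, cost ~ {cost}")

# Under this toy metric base 3 edges out base 2 slightly (the textbook "radix
# economy" result), but as the thread notes, real circuits favor the
# simplicity and noise margins of two-level logic.
```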
 