Why do we use binary computing systems instead of ternary systems?

SUMMARY

The discussion centers on the preference for binary computing systems over ternary systems, highlighting historical examples such as the Soviet Union's Setun computer from 1958. Despite ternary systems offering advantages in branching and comparison instructions, binary systems dominate due to their simpler implementation and cost-effectiveness. The evolution of integrated circuits, driven by Moore's Law, has further solidified binary computing's supremacy by optimizing for two-state logic, which is more efficient at smaller feature sizes. The complexities of implementing ternary systems, including the need for analog detectors and wider voltage ranges, contribute to their decline in favor of binary systems.

PREREQUISITES
  • Understanding of binary and ternary logic systems
  • Familiarity with integrated circuit design and Moore's Law
  • Knowledge of computer architecture and instruction sets
  • Basic concepts of quantum computing, specifically qubits and qutrits
NEXT STEPS
  • Research the historical development and performance of the Setun ternary computer
  • Explore the implications of Moore's Law on modern computing architectures
  • Investigate the challenges of implementing qubits versus qutrits in quantum computing
  • Study the trade-offs between long instruction sets and reduced instruction sets in microcode design
USEFUL FOR

This discussion is beneficial for computer engineers, hardware designers, and researchers in computing architecture, particularly those interested in the evolution of logic systems and the implications for modern computing technologies.

elcaro
TL;DR
In the past, computers have been built that used a ternary (three-valued, e.g. -1, 0, 1) storage system instead of the now-common binary (two-valued, 0 and 1) system. Both systems have their pros and cons, but why is computing hardware now dominated almost exclusively by binary systems?
See for example Ternary computer
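To make the balanced-ternary idea concrete, here is a minimal Python sketch (mine, not from the linked article) that converts an integer to balanced-ternary digits -1/0/+1 and back:

```python
# Convert an integer to balanced ternary (digits -1, 0, +1) and back.
# Digits are stored least-significant first.

def to_balanced_ternary(n: int) -> list[int]:
    """Return the balanced-ternary digits of n, least significant first."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 3          # 0, 1, or 2
        if r == 2:         # a "2" digit becomes -1 with a carry into the next trit
            r = -1
        n = (n - r) // 3
        digits.append(r)
    return digits

def from_balanced_ternary(digits: list[int]) -> int:
    """Inverse of to_balanced_ternary."""
    return sum(d * 3**i for i, d in enumerate(digits))

assert from_balanced_ternary(to_balanced_ternary(-47)) == -47
print(to_balanced_ternary(8))   # [-1, 0, 1], i.e. 8 = 9 - 1
```

Note that negative numbers need no separate sign bit: the sign is carried by the digits themselves, which is one of the tidier properties of balanced ternary.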
 
berkeman
What have you found with your Google searching so far for the tradeoffs? I can think of several reasons (mostly hardware-related), but I'd be interested to see some links to other analyses...
 
berkeman said:
What have you found with your Google searching so far for the tradeoffs? I can think of several reasons (mostly hardware-related), but I'd be interested to see some links to other analyses...
In 1958 a ternary computer, the Setun, was built in the Soviet Union; some 50 units were produced. They were later replaced with binary computers that performed equally well but cost 2.5 times as much. See Setun

Advantages of ternary computers include, for example, branching on three-way comparison instructions (greater than, equal to, less than), which are simpler to implement in ternary logic.
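As a rough illustration of that branching advantage (a Python sketch of the idea, not actual ternary hardware): in balanced ternary, the sign of a - b is itself a single trit, so one ternary value selects among all three outcomes, where a binary ISA typically needs a compare followed by two conditional jumps.

```python
# The three-way comparison a balanced-ternary machine gets "for free":
# the sign trit of (a - b) is -1, 0, or +1 and selects the branch directly.

def sign_trit(a: int, b: int) -> int:
    """Sign of a - b, i.e. the top trit of the balanced-ternary difference."""
    return (a > b) - (a < b)   # -1, 0, or +1

def three_way_branch(a, b, on_less, on_equal, on_greater):
    """Dispatch on one ternary value instead of two binary tests."""
    return {-1: on_less, 0: on_equal, 1: on_greater}[sign_trit(a, b)]()

print(three_way_branch(2, 5,
                       lambda: "less",
                       lambda: "equal",
                       lambda: "greater"))   # -> "less"
```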

Also I found these: Why not ternary computers and Building the first ternary computer
 
Because on-off, i.e. a switch, is much easier to implement efficiently and at high speed than a 3-state system.
What would the ternary computer equivalent of MOSFET be?
I am sure you could implement a ternary system using MOSFET-based switching, but presumably that would defeat the point.

Btw, a modern equivalent of this discussion is the qutrit (3-level system) vs the qubit (2-level system). Since it is hard to make qubits, one could argue that it would make more sense to use (at least) 3 levels for computations. Right now this looks like an interesting idea, but one that is hard to implement: the "gain" from using fewer qubits is typically more than offset by the increased control complexity and the reduction in stability.
That is, it is not dissimilar to the old ternary vs binary discussion.
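For a feel for the numbers behind that "gain" (a quick Python sketch, assuming only that one trit or qutrit carries log2(3) ≈ 1.585 bits of information):

```python
import math

# Matching the state space of n two-level systems (bits/qubits) needs
# ceil(n / log2(3)) three-level systems (trits/qutrits).

def qutrits_needed(n_qubits: int) -> int:
    return math.ceil(n_qubits / math.log2(3))

for n in (10, 50, 300):
    print(f"{n} qubits ~ {qutrits_needed(n)} qutrits")
# 10 qubits ~ 7 qutrits, 50 ~ 32, 300 ~ 190
```

So the count shrinks by about a third, which gives a sense of why the savings can be eaten up by harder control of each individual element.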
 
elcaro said:
In 1958 a ternary computer, the Setun, was built in the Soviet Union
Be careful putting much stock in what was done many years ago. We've come a long way since then in computing hardware.
elcaro said:
Also I found these: Why not ternary computers and Building the first ternary computer
These are newer efforts, but not very good articles, IMO. Very popular-press-type articles with little substance.

A fruitful search on this should probably include how the state of the art in cell density and operating speed has advanced so dramatically over the past couple of decades. You will find that super-dense, very fast computer and memory ICs have evolved because of the ability to simplify and shrink circuit features so much (related to Moore's Law). As IC feature sizes have shrunk, power supply voltages have also come down (otherwise you get too much leakage and risk arc-over and punch-through), which really limits you to storing and detecting just two states per cell/gate. To store and detect 3 states, you need analog detectors and wider voltage ranges, which keeps you at larger feature sizes and lower IC densities (more expensive and much slower).
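To put back-of-envelope numbers on that margin argument, here is a small Python sketch (the supply voltages are illustrative round numbers, not actual process data):

```python
# Back-of-envelope noise margins for n evenly spaced logic levels sharing
# one supply rail: adjacent levels sit Vdd/(n-1) apart, and the usable
# margin to the nearest decision threshold is half of that spacing.

def worst_case_margin(vdd: float, levels: int) -> float:
    return vdd / (levels - 1) / 2

for vdd in (5.0, 1.0, 0.7):          # older vs modern-ish supply voltages
    b = worst_case_margin(vdd, 2)    # binary: one threshold
    t = worst_case_margin(vdd, 3)    # ternary: two thresholds
    print(f"Vdd={vdd:.1f} V: binary margin {b*1000:.0f} mV, "
          f"ternary margin {t*1000:.0f} mV")
```

The point of the arithmetic: every extra level halves the spacing you have to work with, so shrinking supply voltages squeeze a ternary cell much harder than a binary one.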

Especially if you've seen some of the state-of-the-art design rules for the smallest geometries we are using now, it's amazing how close we are cutting voltage margins, etc., in order to maximize the performance and minimize the cost of high-density ICs.
 
It's following that old rule that it takes 2 to tango!

If you recall, the earliest computing machines were mechanical and used base 10 (Babbage). Then came the electromechanical relay machines and base 2 (Zuse).

https://en.wikipedia.org/wiki/Z3_(computer)

Finally, electronics replaced the mechanical relays, first with tube logic and then transistor logic, and base 2 was here to stay.

One could argue that it's simpler to implement base-2 logic in hardware than base-3.
 
Why not more levels than ternary (if we're discussing a move)?

This question is mirrored in the choice between long (complex) instruction sets and reduced instruction sets for computer microcode. The shorter the list of possible instructions (or logic levels), the simpler the implementation, but the slower the execution times can be.
These days, long-winded calculations tend to be done with dedicated processors (image and video processing are examples), and we have hybrid structures. I guess the same could be said about logic levels. You can more or less assume distributed processing, so different processors could use different logic; the more logic levels, the more specialised the processes.
 
