How come we haven't advanced since binary code?


Discussion Overview

The discussion revolves around the use of binary code in computing and the potential for alternative coding systems, such as tertiary code. Participants explore the implications of binary's simplicity, its historical context, and the challenges associated with implementing different base systems in computer architecture.

Discussion Character

  • Debate/contested
  • Technical explanation
  • Conceptual clarification

Main Points Raised

  • Some participants question why binary code, which seems simple, has remained dominant since the 1940s and propose exploring tertiary code as an alternative.
  • Others explain that binary represents electrical pulses, with 1 as a pulse and 0 as no pulse, and suggest that any alternative would fundamentally change how computers process information.
  • A participant mentions that higher-level programming languages are simpler in terms of usability compared to lower-level languages, which require more complex coding.
  • Some participants note that simplicity can be subjective, depending on whether one considers the amount of code needed or the complexity of individual instructions.
  • There are mentions of attempts to use more than two voltage levels in memory storage to increase data density, with references to past technologies that experimented with this idea.
  • Quantum computing is brought up as a potential avenue for representing more than two states, with discussions on how quantum bits (qubits) can exist in superposition, leading to different computational approaches.
  • Some participants argue that noise in electrical systems makes binary more reliable than a tertiary system, which could lead to more frequent errors in data representation.
  • Historical examples of non-binary computing architectures, such as the IBM 650 and UNIVAC, are cited, but it is noted that binary remains the standard due to efficiency in memory construction.

Areas of Agreement / Disagreement

Participants express a range of views on the effectiveness and simplicity of binary versus alternative coding systems. There is no consensus on whether tertiary or other base systems could effectively replace binary, and the discussion remains open with multiple competing perspectives.

Contextual Notes

Participants highlight limitations related to noise in electrical systems and the challenges of building efficient memory with more than two states. The discussion also touches on the historical context of computing architectures without resolving the effectiveness of different base systems.

Femme_physics
This may sound like a stupid question, but binary code seems so "simple", yet it's been with us since the 1940s. What'd be wrong with tertiary code, say?
 
Computers process information by using pulses of electricity. Binary essentially represents this electricity. A 1 in binary is a pulse, a 0 is no pulse. I'm sure it's a bit more complicated than that, but I think that would do as a rough idea. I'm not sure if any kind of code could replace binary without fundamentally changing the way computers process information.

I'm also not sure I understand you when you say simple. In my mind, the higher level the code, the simpler it is; since it is essentially getting closer and closer to spoken language. I know that to write some of the programs I write in C or C++ in Assembly, it would be a very difficult and complex task. To write the program in binary would be unthinkable.
 
Oh, that actually makes a lot of sense. :) Thanks.
 
KrisOhn said:
Computers process information by using pulses of electricity. Binary essentially represents this electricity. A 1 in binary is a pulse, a 0 is no pulse.
If we're talking about computer memory, such as RAM, it's not pulses of electricity - it's two different voltage levels, high or low. One voltage level corresponds to 1 and the other corresponds to 0. If we're talking about storage devices such as hard disks, each one of the trillions of magnetic domains can be read or changed (magnetized) to one of two orientations by the drive head.
KrisOhn said:
I'm sure it's a bit more complicated than that, but I think that would do as a rough idea. I'm not sure if any kind of code could replace binary without fundamentally changing the way computers process information.

I'm also not sure I understand you when you say simple. In my mind, the higher level the code, the simpler it is; since it is essentially getting closer and closer to spoken language. I know that to write some of the programs I write in C or C++ in Assembly, it would be a very difficult and complex task. To write the program in binary would be unthinkable.
In one way programs written in an assembly language are very simple. Most instructions in assembly cause one thing to happen, such as moving a value from memory into a particular register or adding two numbers.
 
Mark44 said:
If we're talking about computer memory, such as RAM, it's not pulses of electricity - it's two different voltage levels, high or low. One voltage level corresponds to 1 and the other corresponds to 0. If we're talking about storage devices such as hard disks, each one of the trillions of magnetic domains can be read or changed (magnetized) to one of two orientations by the drive head.
Yes, I understand that; I meant what I said to be a rough approximation of what happens.

In one way programs written in an assembly language are very simple. Most instructions in assembly cause one thing to happen, such as moving a value from memory into a particular register or adding two numbers.

I had not considered this; I can see what is meant by simple now.
 
My point was that simplicity is in the eye of the beholder. From one perspective, simplicity is being able to do complicated things with a minimum of code. For example, I remember using an implementation of BASIC that included matrix operations. (This type of BASIC ran on some kind of minicomputer back in the mid 70s.) You could add together two matrices and store the sum in another matrix using this syntax: C = A + B. Other high-level languages, such as C and Fortran, required considerably more lines of code to do the same thing.

From another perspective, simplicity could be from the point of view of what individual assembly instructions are doing, which I mentioned already.
 
Femme_physics said:
This may sound like a stupid question, but binary code seems so "simple", yet it's been with us since the 1940s. What'd be wrong with tertiary code, say?
Search Wiki for Octal and Hexadecimal.
 
I vaguely remember, perhaps 10-20 years ago when manufacturers were trying to cram more bits into the same space, that one company announced a RAM or ROM that used more than just high or low voltage; I think they used four voltage levels so they could store two bits in the space of one transistor or capacitor.
 
Isn't quantum computing trying to do that? From what I read (well, understood), the photon states are coupled and somehow can represent combinations of states, including both 1 and 0 simultaneously. I would like more of an explanation on how that works from someone who's more familiar with it.
 
timthereaper said:
Isn't quantum computing trying to do that? From what I read (well, understood), the photon states are coupled and somehow can represent combinations of states, including both 1 and 0 simultaneously. I would like more of an explanation on how that works from someone who's more familiar with it.

This is a little bit different. What happens here is that you perform a series of quantum operations on each bit such that, by the time you are done and you measure the results, there is a probability distribution over whether each bit is 1 or 0. Designing a quantum algorithm is all about structuring your operations so that at the end there is a better-than-50% probability that the final answer is "correct", and running the algorithm involves running it multiple times and taking the "majority" answer.

When we reason mathematically about a quantum algorithm, we say that at each moment before measurement the qubit has a value which is a quantum superposition of the states |0> and |1>, and that superposition lets us compute the probability distribution. However, whether the qubit actually is in multiple states simultaneously, combining |0> and |1> until the measurement and then randomly "collapsing" to one on measurement, depends on which interpretation of quantum mechanics you prefer.

I suggest reading Scott Aaronson's "Quantum Computing Since Democritus" series, in the right-hand sidebar here (note: the alphabetical order is a little jumbled): http://www.scottaaronson.com/blog/
 
Some forms of data transmission encode more than one bit per frequency cycle, using combinations of amplitude and phase shifting to encode the data.

In computers themselves, some binary math operations, such as addition or multiplication, are sped up by doing more work in parallel to reduce the number of gate propagation delays.
 
I don't see why people mix up base systems (I guess that's what you meant by binary code) with programming languages. A programming language is just a tool to control the machine; it doesn't have anything to do with the binary system. You could use C, assembly, Python, or Java even on a tertiary-base or base-20 machine. The base is just a way to represent information.

The reason we use binary and not tertiary is noise. Voltage is very hard to keep stable, and if we used a tertiary base system we would get false values very often with the technology we have nowadays.
 
Pithikos said:
The reason we use binary and not tertiary is noise. Voltage is very hard to keep stable, and if we used a tertiary base system we would get false values very often with the technology we have nowadays.

That's obviously true, but I can see things evolving as quantum computing matures. In my opinion, though, simple is best, and binary is very simple :)
 
There have indeed been various computers based on non-base 2 architecture.

The IBM 650 used base 10: http://en.wikipedia.org/wiki/IBM_650
The UNIVAC 1107 used 36-bit words: http://en.wikipedia.org/wiki/UNIVAC_1107

However base 2 is used almost universally now because you can build computer memory efficiently with one transistor representing on/off or zero/one.

Building transistor storage with 3 or more states just hasn't proven effective or efficient. DRAM rules! http://upload.wikimedia.org/wikipedia/commons/3/3d/Square_array_of_mosfet_cells_read.png
 
