How come we haven't advanced since binary code?

AI Thread Summary
The discussion centers on the simplicity and efficiency of binary code in computing, which has been the standard since the 1940s. Participants explore the idea of using ternary or other base systems, noting that while theoretically possible, such systems would introduce significant challenges, particularly with noise and voltage stability. Binary's representation of information through high and low voltage levels is highlighted as a key reason for its dominance, as it allows for reliable data processing and storage. The conversation also touches on the complexity of programming languages, where higher-level languages simplify coding tasks compared to lower-level languages like assembly or binary. Quantum computing is mentioned as a potential evolution in data representation, utilizing superposition to process information differently than traditional binary systems. Overall, the consensus is that while alternative coding systems exist, binary remains the most practical choice for current technology due to its efficiency and reliability.
Femme_physics
This may sound like a stupid question, but binary code seems so "simple", yet it's been with us since the 1940s. What'd be wrong with ternary code, say?
 
Computers process information by using pulses of electricity. Binary essentially represents this electricity. A 1 in binary is a pulse, a 0 is no pulse. I'm sure it's a bit more complicated than that, but I think that would do as a rough idea. I'm not sure if any kind of code could replace binary without fundamentally changing the way computers process information.

I'm also not sure I understand you when you say simple. In my mind, the higher-level the code, the simpler it is, since it is essentially getting closer and closer to spoken language. I know that writing some of the programs I write in C or C++ in assembly would be a very difficult and complex task. To write the program in binary would be unthinkable.
 
Oh, that actually makes a lot of sense. :) Thanks.
 
KrisOhn said:
Computers process information by using pulses of electricity. Binary essentially represents this electricity. A 1 in binary is a pulse, a 0 is no pulse.
If we're talking about computer memory, such as RAM, it's not pulses of electricity - it's two different voltage levels, high or low. One voltage level corresponds to 1 and the other corresponds to 0. If we're talking about storage devices such as hard disks, each one of the trillions of magnetic domains can be read or changed (magnetized) to one of two orientations by the drive head.
KrisOhn said:
I'm sure it's a bit more complicated than that, but I think that would do as a rough idea. I'm not sure if any kind of code could replace binary without fundamentally changing the way computers process information.

I'm also not sure I understand you when you say simple. In my mind, the higher-level the code, the simpler it is, since it is essentially getting closer and closer to spoken language. I know that writing some of the programs I write in C or C++ in assembly would be a very difficult and complex task. To write the program in binary would be unthinkable.
In one way programs written in an assembly language are very simple. Most instructions in assembly cause one thing to happen, such as moving a value from memory into a particular register or adding two numbers.
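For instance, a single line of C can compile down to several of those one-step instructions. A rough sketch in C, with comments listing the kind of individual steps an assembly-level version would spell out (the exact instructions depend on the CPU, so this is only illustrative):

#include <stdio.h>

int main(void) {
    int a = 2, b = 3, c;

    /* One C statement... */
    c = a + b;
    /* ...roughly corresponds to several single-purpose assembly steps:
         - load the value of a from memory into a register
         - load the value of b into another register
         - add the two registers
         - store the result back into the memory location for c
       Each assembly instruction does just one of those things. */

    printf("%d\n", c);   /* prints 5 */
    return 0;
}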
 
Mark44 said:
If we're talking about computer memory, such as RAM, it's not pulses of electricity - it's two different voltage levels, high or low. One voltage level corresponds to 1 and the other corresponds to 0. If we're talking about storage devices such as hard disks, each one of the trillions of magnetic domains can be read or changed (magnetized) to one of two orientations by the drive head.
Yes, I understand that; I meant what I said as a rough approximation of what happens.

In one way programs written in an assembly language are very simple. Most instructions in assembly cause one thing to happen, such as moving a value from memory into a particular register or adding two numbers.

I had not considered this; I can see what is meant by simple now.
 
My point was that simplicity is in the eye of the beholder. From one perspective, simplicity is being able to do complicated things with a minimum of code. For example, I remember using an implementation of BASIC that included matrix operations. (This type of BASIC ran on some kind of minicomputer back in the mid 70s.) You could add together two matrices and store the sum in another matrix using this syntax: C = A + B. Other high-level languages, such as C and Fortran, required considerably more lines of code to do the same thing.

From another perspective, simplicity could be from the point of view of what individual assembly instructions are doing, which I mentioned already.
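To make the contrast concrete, here is roughly what that single BASIC statement expands to in C (the matrix size and values below are made-up examples):

#include <stdio.h>

#define N 3

int main(void) {
    double A[N][N] = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
    double B[N][N] = {{9, 8, 7}, {6, 5, 4}, {3, 2, 1}};
    double C[N][N];

    /* What "C = A + B" did in that BASIC dialect takes explicit loops here. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            C[i][j] = A[i][j] + B[i][j];

    /* Print the result; every entry comes out as 10. */
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++)
            printf("%4.0f", C[i][j]);
        printf("\n");
    }
    return 0;
}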
 
Femme_physics said:
This may sound like a stupid question, but binary code seems so "simple", yet it's been with us since the 1940s. What'd be wrong with ternary code, say?
Search Wiki for Octal and Hexadecimal.
 
I vaguely remember, perhaps 10-20 years ago when manufacturers were trying to cram more bits into the same space, one company announced a RAM or ROM that used more than just high or low voltage. I think they used four voltage levels so they could store two bits in the space of one transistor or capacitor.
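If it helps to picture that, here is a toy C sketch of the idea: four voltage ranges map to two bits per cell. The thresholds are invented for the illustration, not taken from any real part:

#include <stdio.h>

/* Map one of four voltage ranges in a hypothetical multi-level cell
   to a two-bit value. Threshold voltages are made up for illustration. */
unsigned level_to_bits(double volts) {
    if (volts < 0.8) return 0;   /* 00 */
    if (volts < 1.6) return 1;   /* 01 */
    if (volts < 2.4) return 2;   /* 10 */
    return 3;                    /* 11 */
}

int main(void) {
    double readings[] = { 0.3, 1.1, 2.0, 3.0 };
    for (int i = 0; i < 4; i++) {
        unsigned bits = level_to_bits(readings[i]);
        printf("%.1f V -> %u%u\n", readings[i], (bits >> 1) & 1, bits & 1);
    }
    return 0;
}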
 
Isn't quantum computing trying to do that? From what I read (well, understood), the photon states are coupled and somehow can represent combinations of states, including both 1 and 0 simultaneously. I would like more of an explanation on how that works from someone who's more familiar with it.
 
  • #10
timthereaper said:
Isn't quantum computing trying to do that? From what I read (well, understood), the photon states are coupled and somehow can represent combinations of states, including both 1 and 0 simultaneously. I would like more of an explanation on how that works from someone who's more familiar with it.

This is a little bit different. What happens here is that you perform a series of quantum operations on each bit such that, by the time you are done and you measure the results, there is a probability distribution over whether each bit is 1 or 0. Designing a quantum algorithm is all about structuring your operations so that at the end there is a more-than-50% probability that the final answer is "correct", and running the quantum algorithm involves running it multiple times and then taking the "majority" answer. When we reason mathematically about a quantum algorithm, we say that at each moment before measurement the qubit is in a quantum superposition of the states |0> and |1>, and that superposition lets us compute the probability distribution. However, whether the qubit really is in both states simultaneously until the measurement and then randomly "collapses" to one of them, or something else, depends on which interpretation of quantum mechanics you prefer.
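The "run it many times and take the majority answer" part doesn't need anything quantum to illustrate. A classical C sketch (the 60% per-run success rate and the run count are arbitrary numbers I picked): if each run is correct with probability above one half, the majority of many runs is correct far more often than any single run.

#include <stdio.h>
#include <stdlib.h>

/* Simulate one run of an algorithm that gives the right answer with
   probability p (returns 1 = correct, 0 = wrong). */
int one_run(double p) {
    return ((double)rand() / RAND_MAX) < p;
}

int main(void) {
    double p = 0.6;              /* per-run success probability (arbitrary) */
    int runs = 101, correct = 0;

    srand(42);
    for (int i = 0; i < runs; i++)
        correct += one_run(p);

    /* Take the majority answer over all runs. */
    printf("%d of %d runs correct -> majority answer is %s\n",
           correct, runs, correct * 2 > runs ? "correct" : "wrong");
    return 0;
}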

I suggest reading Scott Aaronson's "Quantum Computing Since Democritus" series, linked in the right-hand sidebar here (note: the alphabetical order is a little jumbled): http://www.scottaaronson.com/blog/
 
  • #11
Some forms of data transmission encode more than one bit per frequency cycle, using combinations of amplitude and phase shifts to encode the data.
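As a toy illustration (the amplitude and phase values are invented, not taken from any real modulation standard), two bits can be mapped onto one transmitted symbol by combining two amplitudes with two phases:

#include <stdio.h>

/* Map two bits to one symbol described by an (amplitude, phase) pair.
   The constellation here is arbitrary; real schemes choose the points
   carefully to maximize noise margins. */
void bits_to_symbol(unsigned two_bits, double *amp, double *phase) {
    static const double amps[4]   = { 1.0, 1.0, 2.0, 2.0 };
    static const double phases[4] = { 0.0, 3.14159, 0.0, 3.14159 };
    *amp   = amps[two_bits & 3];
    *phase = phases[two_bits & 3];
}

int main(void) {
    for (unsigned b = 0; b < 4; b++) {
        double a, ph;
        bits_to_symbol(b, &a, &ph);
        printf("bits %u%u -> amplitude %.1f, phase %.2f rad\n",
               (b >> 1) & 1, b & 1, a, ph);
    }
    return 0;
}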

In computers themselves, some binary math operations, such as addition or multiplication, are sped up by doing more of the work in parallel to reduce the number of gate propagation delays.
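One classic example is carry-lookahead addition (this sketch is mine, not from the thread): instead of letting the carry ripple through one bit at a time, every carry is written directly in terms of the input bits, so the hardware can evaluate them side by side.

#include <stdio.h>

/* 4-bit carry-lookahead addition: each carry is expressed directly in
   terms of the inputs' generate (g) and propagate (p) bits, so in
   hardware all the carries can be computed in parallel instead of
   rippling from bit to bit. The carry-out is dropped for simplicity. */
unsigned add4_lookahead(unsigned a, unsigned b) {
    unsigned g = a & b;                      /* bit i generates a carry  */
    unsigned p = a ^ b;                      /* bit i propagates a carry */
    unsigned g0 = g & 1, g1 = (g >> 1) & 1, g2 = (g >> 2) & 1;
    unsigned p1 = (p >> 1) & 1, p2 = (p >> 2) & 1;
    unsigned c1 = g0;
    unsigned c2 = g1 | (p1 & g0);
    unsigned c3 = g2 | (p2 & g1) | (p2 & p1 & g0);
    unsigned carries = (c1 << 1) | (c2 << 2) | (c3 << 3);
    return (p ^ carries) & 0xF;              /* the 4-bit sum */
}

int main(void) {
    printf("%u\n", add4_lookahead(5, 6));    /* prints 11 */
    return 0;
}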
 
  • #12
I don't see why people mix up the base system (I guess that's what you meant by binary code) with programming languages. A programming language is just a tool to control the machine; it doesn't have anything to do with the binary system. You could use C, assembly, Python, or Java even on a ternary machine, or a base-20 one, and so on. The base is just a way to represent information.
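A quick C sketch of that point (42 is just an arbitrary example value): the same number can be written out in base 2, 3, 10, or 20, and nothing about the quantity itself changes.

#include <stdio.h>

/* Print the value n in the given base (supports bases 2 through 20). */
void print_in_base(unsigned n, unsigned base) {
    char digits[64];
    int len = 0;
    do {
        digits[len++] = "0123456789ABCDEFGHIJ"[n % base];
        n /= base;
    } while (n > 0);
    printf("base %2u: ", base);
    while (len > 0)
        putchar(digits[--len]);
    putchar('\n');
}

int main(void) {
    unsigned bases[] = { 2, 3, 10, 20 };
    for (int i = 0; i < 4; i++)
        print_in_base(42, bases[i]);   /* 101010, 1120, 42, 22 */
    return 0;
}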

The reason we use binary and not ternary is noise. Voltage levels are very hard to keep stable, and with a ternary system we would get false values very often with the technology we have nowadays. With a fixed voltage swing, every extra level shrinks the gap between adjacent levels, so it takes less noise to push a value over the wrong threshold.
 
  • #13
Pithikos said:
The reason we use binary and not ternary is noise. Voltage levels are very hard to keep stable, and with a ternary system we would get false values very often with the technology we have nowadays.

That's obviously true, but I can see things evolving as quantum computing matures. However, in my opinion simple is best, and binary is very simple :)
 
  • #14
There have indeed been various computers based on non-base-2 architectures.

The IBM 650 used base 10 http://en.wikipedia.org/wiki/IBM_650
The UNIVAC 1107 used 36-bit words (still a binary machine, but with a word size that is not a power of two) http://en.wikipedia.org/wiki/UNIVAC_1107

However, base 2 is used almost universally now because you can build computer memory efficiently with one transistor representing on/off or zero/one.

Building transistor storage with 3 or more states just hasn't proved effective or efficient. DRAM rules! http://upload.wikimedia.org/wikipedia/commons/3/3d/Square_array_of_mosfet_cells_read.png
 