How does a computer tell a 0 from a 1?

  • Thread starter: EnumaElish
  • Tags: Computer

Discussion Overview

The discussion revolves around how computers differentiate between binary values of 0 and 1, focusing on the underlying hardware mechanisms, particularly transistors, and the implications of noise and error in digital circuits. The scope includes theoretical aspects of digital logic, technical explanations of circuitry, and considerations of error rates in practical applications.

Discussion Character

  • Technical explanation
  • Debate/contested
  • Experimental/applied

Main Points Raised

  • Some participants suggest that a CPU operates on a continuous scale, questioning how this translates into binary values.
  • Others argue that transistors function as switches, operating in an on/off manner to represent binary states.
  • It is proposed that transistors in digital circuitry have high gain, making their linear range negligible, thus functioning primarily as switches.
  • Some participants note that the definitions of logic 0 and 1 depend on the specifications of different logic circuit types, such as CMOS and TTL.
  • One participant states that logic 1 is typically defined as any voltage above a certain threshold, while logic 0 is defined as any voltage below that threshold.
  • Questions are raised about the probability of false readings in digital circuits, with some participants mentioning the impact of noise, surges, and other physical factors on signal integrity.
  • It is mentioned that there are extensive branches of electrical engineering dedicated to addressing potential errors in digital logic design.
  • Some participants express interest in the relationship between randomness at the micro level and determinism at the macro level, suggesting that averaging may play a role in this phenomenon.
  • One participant shares an anecdote about a memory state flip caused by external factors, highlighting the unpredictability in digital systems.

Areas of Agreement / Disagreement

Participants express a range of views on how binary values are represented in digital circuitry, with some agreeing on the role of transistors as switches, while others raise questions about error rates and the effects of physical phenomena. The discussion remains unresolved regarding the implications of randomness and determinism in this context.

Contextual Notes

Limitations include the dependence on specific definitions of logic levels and the unresolved nature of how various physical factors contribute to errors in digital circuits.

EnumaElish (Science Advisor)
Since a CPU is essentially a chip, a transistor, or a capacitor (?), it must be operating on a continuous scale. How does it translate a continuous scale into a binary one?
 
On or off.
A transistor is simply a switch.
 
A transistor is an amplifier. The transistors in your analog equipment are designed to amplify their inputs faithfully, that is, linearly. All transistors have an upper limit to their linear amplification range: the transistor's output flattens out beyond this limit and eventually reaches some saturation level. This saturation effect is very undesirable in analog circuitry but is essential for making digital circuitry.

The transistors in digital circuitry have a very high gain, making the linear range negligible. Instead, the transistors in digital circuitry are either generating no output (off) or are saturated (on). The transistor becomes a switch.
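The saturation behavior described above can be sketched numerically. This is a toy model, not a real transistor equation: it uses a smooth tanh-shaped transfer curve as a stand-in for "linear region plus saturation," with made-up gain and supply values.

```python
import math

def amplifier_output(v_in, gain, v_sat=5.0):
    """Idealized transfer curve: roughly linear with slope `gain` near zero,
    clipping (saturating) at +/- v_sat. tanh is used as a smooth stand-in."""
    return v_sat * math.tanh(gain * v_in / v_sat)

# A modest-gain analog stage has a usable linear range; a very high-gain
# digital stage saturates for almost any input, i.e. it acts as a switch.
for v_in in (-0.5, -0.01, 0.01, 0.5):
    analog = amplifier_output(v_in, gain=10)
    digital = amplifier_output(v_in, gain=100_000)
    print(f"v_in={v_in:+.2f}  analog={analog:+.3f}  digital={digital:+.3f}")
```

With the high gain, even a 10 mV input drives the output essentially to the saturation rail, which is exactly the "transistor becomes a switch" point.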
 
What constitutes "0" and "1" is defined as part of the specification of different logic circuit types (CMOS, TTL, etc). It's then the chip designer's job to make sure the chip responds properly to input signals and generates valid outputs.

See http://www.interfacebus.com/voltage_threshold.html
 
Most commonly, logic 1 is defined as any voltage above some threshold; logic 0 is defined as any voltage below some threshold. (There are some kinds of exotic logic that use currents or other signalling mechanisms, but you can safely ignore pretty much all of them.)
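The threshold rule above can be sketched in a few lines. The two thresholds here are the classic TTL input levels (V_IL = 0.8 V, V_IH = 2.0 V); other logic families use different values, so treat these numbers as one example rather than a universal spec.

```python
# Example TTL-style input thresholds (values differ by logic family):
V_IL_MAX = 0.8   # at or below this, the input is guaranteed to read as logic 0
V_IH_MIN = 2.0   # at or above this, the input is guaranteed to read as logic 1

def read_logic_level(voltage):
    """Classify an input voltage; between the thresholds the result is undefined."""
    if voltage <= V_IL_MAX:
        return 0
    if voltage >= V_IH_MIN:
        return 1
    return None  # forbidden zone: the gate's behavior is not guaranteed

print(read_logic_level(0.3))   # 0
print(read_logic_level(3.1))   # 1
print(read_logic_level(1.4))   # None (indeterminate)
```

The gap between the thresholds is the noise margin: a valid output must stay well clear of it so that ordinary noise cannot push a signal into the indeterminate band.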

In CMOS static digital logic, the output of each gate is connected to either the positive supply (VDD) or to ground, via a conducting (turned-on) transistor, at all times. Thus, the gate is producing a clear, unambiguous logic 1 or logic 0 signal.

- Warren
 
Has anyone seen the probability of a "false zero" (1 read as 0) or a "false one" (0 read as 1) being calculated on any type of circuitry?
 
EnumaElish said:
Has anyone seen the probability of a "false zero" (1 read as 0) or a "false one" (0 read as 1) being calculated on any type of circuitry?

Yes. For example: if you are connecting two digital devices by a cable, what is the maximum length of cable you can use for a given error rate in the transmitted signal?
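One standard way to frame this calculation: if the received signal sits some voltage margin above the decision threshold and the noise is Gaussian, the bit error probability is the tail of a normal distribution. The attenuation model below (signal margin shrinking exponentially with cable length, fixed noise floor) is purely illustrative, with made-up constants.

```python
import math

def bit_error_probability(signal_margin_v, noise_rms_v):
    """Probability that Gaussian noise of the given RMS pushes the received
    voltage across the decision threshold (one-sided normal tail)."""
    return 0.5 * math.erfc(signal_margin_v / (noise_rms_v * math.sqrt(2)))

# Illustrative only: 1 V initial margin decaying with length, 0.15 V RMS noise.
for length_m in (1, 10, 50, 100):
    margin = 1.0 * math.exp(-0.02 * length_m)
    print(f"{length_m:4d} m: BER ~ {bit_error_probability(margin, 0.15):.2e}")
```

Given a target error rate, you can invert this relationship to find the maximum usable cable length, which is exactly the kind of calculation being described.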
 
EnumaElish said:
Has anyone seen the probability of a "false zero" (1 read as 0) or a "false one" (0 read as 1) being calculated on any type of circuitry?

There are enormous branches of electrical engineering devoted to exactly this possibility. Every piece of digital logic ever designed includes many such considerations.

In the real world, supplies have noise and surges. Power wires on chips have resistance and thus lose voltage over their length. Transistors take time to turn on and turn off. Wires and transistors have unavoidable parasitic capacitances that must be charged and discharged. Cosmic rays can strike memory cells and change their contents. Clocks can reach flip-flops at the wrong time and put the flip-flop into indeterminate "metastable" states. There are dozens and dozens of failure modes that can cause poorly-designed digital circuits to malfunction because, at some point, a logic low is confused with a logic high, or vice versa.

- Warren
 
It's interesting that something which is variably random at the most basic layer becomes something fairly deterministic at the top.
Maybe randomness & determinism aren't mutually exclusive after all.
 
  • #10
-Job- said:
It's interesting that something which is variably random at the most basic layer becomes something fairly deterministic at the top.
Maybe randomness & determinism aren't mutually exclusive after all.
Is this because of some kind of averaging algorithm (execute an operation many times, then take the average [or some other summary statistic]), or is there some other explanation?
 
  • #11
In a sense, yes, it's an averaging. If you try to send just one electron down a wire, you'll find that its motion is almost completely random -- moving at hundreds of thousands of meters per second in random directions due to its own thermal energy and collisions with the metal atoms. Its motion is almost entirely dominated by thermal energy, and it just barely drifts down the wire at all, at a leisurely couple of centimeters per hour.

On the other hand, if you observe not just one electron, but billions, you can make a very accurate calculation of the number of electrons passing a specific point in the wire every second, or of the average velocity of those electrons.
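The averaging argument can be demonstrated with a quick Monte Carlo sketch. The numbers here are toy units chosen so the effect is visible in a million samples, not the physical values quoted above (where the thermal-to-drift ratio is vastly larger).

```python
import random

random.seed(42)

DRIFT = 0.05      # tiny systematic drift (toy units, not physical values)
THERMAL = 1.0     # random thermal component, much larger than the drift

def electron_velocity():
    """One electron's velocity: a small drift buried under random thermal motion."""
    return DRIFT + random.gauss(0.0, THERMAL)

single = electron_velocity()
mean = sum(electron_velocity() for _ in range(1_000_000)) / 1_000_000
print(f"one electron:  {single:+.4f}  (sign is essentially random)")
print(f"mean of 10^6:  {mean:+.4f}  (the drift emerges from averaging)")
```

Any single sample is dominated by the random term, but the sample mean converges on the drift, which is the micro-randomness-to-macro-determinism point in miniature.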

- Warren
 
  • #12
One time we had a memory that would flip its state once every few weeks. After much money and investigation, it turned out that the packaging we were using was emitting alpha particles, of all things. You never know what you are going to find.
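Bit flips like this are why memories commonly carry redundancy. The thread doesn't name a specific scheme, so as an illustration, here is the simplest one: an even-parity bit, which detects (but cannot correct) any single flipped bit in a word.

```python
def with_parity(bits):
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_ok(word):
    """A single flipped bit makes the 1s count odd, exposing the error."""
    return sum(word) % 2 == 0

word = with_parity([1, 0, 1, 1])
print(parity_ok(word))          # True: no corruption
word[2] ^= 1                    # an alpha particle flips one bit
print(parity_ok(word))          # False: the flip is detected
```

Real memories typically go further and use ECC codes (e.g. Hamming codes) that can correct single-bit errors rather than merely detect them.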
 
