Since a CPU is essentially a chip built from transistors (or capacitors?), it must be operating on a continuous scale. How does it translate a continuous scale into a binary one?
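One common way to picture the answer is that a logic family defines voltage thresholds: anything below a low threshold reads as 0, anything above a high threshold reads as 1, and the band in between is undefined. Here is a minimal sketch of that idea; the threshold values `V_IL` and `V_IH` are illustrative numbers loosely modeled on 5 V TTL levels, not figures from the thread:

```python
# Model of a logic gate's input stage: voltages at or below V_IL read
# as logic 0, voltages at or above V_IH read as logic 1, and the band
# in between is a "forbidden zone" where behavior is not guaranteed.
# The thresholds are illustrative, TTL-like values (an assumption).

V_IL = 0.8   # assumed maximum voltage guaranteed to read as logic 0
V_IH = 2.0   # assumed minimum voltage guaranteed to read as logic 1

def read_logic_level(voltage: float) -> str:
    """Map a continuous input voltage to a digital reading."""
    if voltage <= V_IL:
        return "0"
    if voltage >= V_IH:
        return "1"
    return "undefined"  # forbidden zone between the two thresholds

for v in (0.2, 0.7, 1.4, 2.5, 4.9):
    print(f"{v:.1f} V -> {read_logic_level(v)}")
```

The gap between the two thresholds is the noise margin: as long as noise cannot push a driven 0 above `V_IL` or a driven 1 below `V_IH`, the continuous voltage is read unambiguously.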
The discussion revolves around how computers differentiate between binary values of 0 and 1, focusing on the underlying hardware mechanisms, particularly transistors, and the implications of noise and error in digital circuits. The scope includes theoretical aspects of digital logic, technical explanations of circuitry, and considerations of error rates in practical applications.
Participants express a range of views on how binary values are represented in digital circuitry: most agree that transistors act as switches, while some raise questions about error rates and the effects of physical phenomena such as noise. The discussion remains unresolved on what this implies about randomness and determinism.
Limitations include the dependence on specific definitions of logic levels and the unresolved nature of how various physical factors contribute to errors in digital circuits.
EnumaElish said: Has anyone seen the probability of a "false zero" (1 read as 0) or a "false one" (0 read as 1) being calculated on any type of circuitry?
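For a concrete sense of how such a probability can be estimated, here is a back-of-the-envelope sketch under a simple additive-Gaussian-noise model: the misread probability is the Gaussian tail beyond the decision threshold. All voltages and the noise level below are invented for illustration, not taken from the thread:

```python
# If a "0" is driven at V0 and a "1" at V1, with additive Gaussian
# noise of standard deviation sigma and a single decision threshold
# V_th halfway between them, the probability of misreading a bit is
# the upper-tail probability (Q-function) beyond the threshold.
import math

def q_function(x: float) -> float:
    """Upper-tail probability of a standard normal variable."""
    return 0.5 * math.erfc(x / math.sqrt(2))

V0, V1 = 0.0, 3.3          # assumed drive voltages for logic 0 and 1
sigma = 0.15               # assumed RMS noise voltage
V_th = (V0 + V1) / 2       # decision threshold halfway between levels

p_false_one = q_function((V_th - V0) / sigma)   # 0 read as 1
p_false_zero = q_function((V1 - V_th) / sigma)  # 1 read as 0

print(f"P(false one)  ~ {p_false_one:.3e}")
print(f"P(false zero) ~ {p_false_zero:.3e}")
```

With these made-up numbers the threshold sits eleven standard deviations from each level, so the per-bit error probability comes out astronomically small, which is why ordinary logic looks deterministic in practice.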
Is this because of some kind of averaging algorithm (execute an operation many times, then take the average [or some other summary statistic]), or is there some other explanation?

-Job- said: It's interesting that something which is variably random at the most basic layer becomes something fairly deterministic at the top.
Maybe randomness & determinism aren't mutually exclusive after all.
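One concrete version of the "averaging" idea raised above is majority voting over repeated reads (a repetition code). The simulation below is only an illustration of that statistical effect, not a claim about how real CPUs work; real logic relies mainly on wide noise margins and signal restoration at each gate rather than repeated reads. The per-read flip probability is deliberately exaggerated so the effect is visible:

```python
# Even if a single read of a bit flips with some small probability
# p_flip, taking a majority vote over n independent reads drives the
# effective error rate down rapidly with n.
import random

def noisy_read(true_bit: int, p_flip: float) -> int:
    """Return the true bit, flipped with probability p_flip."""
    return true_bit ^ (random.random() < p_flip)

def majority_read(true_bit: int, p_flip: float, n: int) -> int:
    """Read the bit n times (n odd) and return the majority value."""
    ones = sum(noisy_read(true_bit, p_flip) for _ in range(n))
    return 1 if ones > n // 2 else 0

trials, p_flip = 100_000, 0.1
for n in (1, 3, 5, 9):
    errors = sum(majority_read(1, p_flip, n) != 1 for _ in range(trials))
    print(f"n={n}: effective error rate ~ {errors / trials:.5f}")
```

Running this shows the error rate falling from about 0.1 at n=1 to well under 0.001 at n=9: a tiny, very reliable "deterministic" behavior emerging from individually noisy reads, which is one way to reconcile the randomness-versus-determinism point above.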