Jamin2112
I'm thinking about how to do a proof that a computer cannot generate a truly random number.
Attempt. Let Ω = {ω1, ω2, ..., ωn}, a finite subset of ℝ, be the set of all numbers representable on a certain machine. A random number generator rand(), because its output depends on how many times it has been called, is analogous to a function f: ℕ → Ω (where ℕ denotes the natural numbers). We need f to have the following properties:
(1) The long-term frequency with which any ωi appears is 1/n. That is, for any i ∈ {1, ..., n}, let gi(f(k)) = 1 if f(k) = ωi and gi(f(k)) = 0 if f(k) ≠ ωi; then the fraction of the first N calls that return ωi must tend to 1/n:
lim_{N→∞} (1/N) Ʃ_{k=1..N} gi(f(k)) = 1/n.
(2) We need independence: informally, knowing the values f(1), ..., f(k) should tell us nothing about f(k+1). I'm still trying to work out how to formalize this statement.
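To see why a function f can satisfy (1) while being entirely deterministic, here is a small sketch in Python. It is my own toy example, not the actual rand() of any system: a linear congruential generator (using the well-known Numerical Recipes constants) reduced to Ω = {0, ..., 7}, so n = 8.

```python
from collections import Counter

# A toy linear congruential generator standing in for the function f: N -> Omega.
# Here Omega = {0, ..., 7}, so n = 8. This is an illustrative sketch, not how
# any particular rand() is actually implemented.
def make_f(seed):
    state = seed
    def f():
        nonlocal state
        state = (1664525 * state + 1013904223) % 2**32
        return state % 8
    return f

# Determinism: two copies of f with the same seed agree call-for-call, so the
# output is a fixed function of the call count -- nothing about it is random.
a, b = make_f(42), make_f(42)
seq = [a() for _ in range(1000)]
assert seq == [b() for _ in range(1000)]

# Property (1): each omega_i shows up with long-run frequency 1/n. For this
# generator the low 3 bits cycle with period 8, visiting each value once per
# cycle, so over 1000 calls every frequency is exactly 1/8. Equidistribution
# holds, yet the sequence is blatantly periodic -- exactly the kind of hidden
# structure that property (2), independence, is meant to rule out.
freqs = {w: c / len(seq) for w, c in Counter(seq).items()}
```

So property (1) alone is far too weak to capture randomness: a short periodic cycle satisfies it perfectly, which is why pinning down (2) is the real work.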