rjbeery
In information theory, entropy is defined as the unpredictability of information content and, as such, the entropy of the output from so-called pseudo-random number generators (PRNGs) is often measured as a test of their "randomness". An interesting paradox arises with this definition...
Start with a suitable seed, S_n, consisting of an array of n previously generated pseudo-random numbers from the function PRNG(seed). In order to maximize the randomness of PRNG we want PRNG(S_n) to return a result such that the entropy of S_{n+1} is also maximized. For a suitable seed, that result can be uniquely determined. However, in the effort to maximize the randomness of PRNG we have now created a process which is deterministic and completely predictable, which directly contradicts the claim that high entropy means unpredictable information content.
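To make the argument concrete, here is a minimal Python sketch of the process described above, under the assumption that "entropy" means the empirical Shannon entropy of the symbol frequencies in the generated sequence, and using a small hypothetical output alphabet; the function names are invented for illustration only.

```python
import math
from collections import Counter

def shannon_entropy(seq):
    """Empirical Shannon entropy (in bits) of the symbol frequencies in seq."""
    counts = Counter(seq)
    total = len(seq)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def next_value(history, alphabet):
    """Return the symbol that maximizes the entropy of history + [symbol].

    Ties are broken by the ordering of `alphabet`, so the output is fully
    determined by the history -- the deterministic behavior described above.
    """
    return max(alphabet, key=lambda s: shannon_entropy(history + [s]))

alphabet = list(range(4))             # hypothetical 2-bit output alphabet
history = [0, 1, 1, 2]                # S_n: previously generated values
print(next_value(history, alphabet))  # always prints 3 for this history
```

In this sketch the "entropy-maximizing" choice always turns out to be the least frequent symbol so far, so anyone who knows S_n can predict the next output exactly, which is the tension the post is pointing at.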
