rjbeery
In information theory, entropy is defined as the unpredictability of information content and, as such, the entropy of the output from pseudo-random number generators (PRNGs) is often measured as a test of their "randomness". An interesting paradox arises with this definition...
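(To make the "entropy as a randomness test" idea concrete, here's a rough Python sketch of my own; the sample size and the use of Python's built-in Mersenne Twister are just illustrative choices. It estimates the empirical Shannon entropy of a PRNG's byte output:)

```python
import math
import random
from collections import Counter

def shannon_entropy(samples):
    # Empirical Shannon entropy in bits per symbol.
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Measure the output of Python's default PRNG as a toy "randomness test".
random.seed(0)
stream = [random.randrange(256) for _ in range(100_000)]
print(f"{shannon_entropy(stream):.4f} bits/byte (theoretical max is 8)")
```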
Start with a suitable seed, S_n, consisting of an array of n previously generated pseudo-random numbers from the function PRNG(seed). To maximize the randomness of PRNG, we want PRNG(S_n) to return a result such that the entropy of S_{n+1} is also maximized. For a suitable seed, that entropy-maximizing result can be unique. But in the effort to maximize the randomness of PRNG, we have now created a process that is deterministic and completely predictable, which directly contradicts the claim that high entropy implies unpredictability of information content.
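(A rough sketch of the generator described above, again my own: a hypothetical prng() that greedily returns whichever symbol maximizes the entropy of S_{n+1}. The tiny alphabet and the tie-breaking rule are assumptions I've added purely to make the result unique:)

```python
import math
from collections import Counter

def entropy(counts, n):
    # Shannon entropy (bits/symbol) of the empirical distribution.
    return -sum((c / n) * math.log2(c / n) for c in counts.values() if c)

def prng(seed, alphabet=range(4)):
    # Return the symbol that maximizes the entropy of S_{n+1} = seed + [symbol].
    # Ties are broken by the smallest symbol, so the output is unique and
    # therefore completely predictable from the seed.
    counts = Counter(seed)
    n = len(seed) + 1
    return max(alphabet, key=lambda s: (entropy(counts + Counter([s]), n), -s))

seed = [0, 1, 2]  # S_3: three previously generated numbers
for _ in range(8):
    seed.append(prng(seed))
print(seed)  # identical on every run: [0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2]
```

Under these assumptions the generator immediately settles into the cycle 0, 1, 2, 3, 0, 1, 2, 3, ...: maximally "entropic" by this measure, yet perfectly predictable.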
After typing that, it occurred to me that the proposed RNG would be predictable right up to the point where the most recent seed's entropy could no longer be increased (as in your example). This probably actually demonstrates the link between entropy and unpredictability, but I'm currently too hungry to think it through; time for lunch.