A proof that a computer cannot generate a truly random number?

In summary: a deterministic computer cannot generate a truly random number, though that is something one would need to show.
  • #1
Jamin2112
I'm thinking about how to do a proof that a computer cannot generate a truly random number.

Attempt. Let Ω = {ω1, ω2, ..., ωn}, a subset of ℝ, be all the numbers represented on a certain machine. A random number generator rand(), because its output depends on how many times it has been called, is analogous to a function f: N → Ω (where I'm letting N denote the natural numbers). We need f to have the following properties:

(1) The long-term frequency of any ωi appearing is 1/n. That is, for any i ∈ {1, ..., n}, if we let gi(f(k)) = 1 if f(k) = ωi, and gi(f(k)) = 0 if f(k) ≠ ωi, then

[itex]\lim_{N \to \infty} \frac{1}{N} \sum_{k=1}^{N} g_i(f(k)) = \frac{1}{n}.[/itex]

(2) We need independence ... I'm trying to think how to formalize this statement.
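As a quick empirical illustration of property (1), here is a minimal sketch in Python (illustrative only; random.randrange stands in for rand(), and Ω is taken to be {0, 1, ..., n-1}):

[code]
# Estimate the long-term frequency of each element of Omega.
import random
from collections import Counter

n = 8           # size of Omega
N = 1_000_000   # number of calls to the generator

counts = Counter(random.randrange(n) for _ in range(N))

# Property (1) says each empirical frequency should approach 1/n.
for i in range(n):
    print(i, counts[i] / N, "target:", 1 / n)
[/code]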
 
  • #2
Hey Jamin2112.

The first thing you have to do is define random.

The best way that I can think of is to assume that the distribution over all possible events (i.e., all the subsets of the power set that correspond to events) has maximum entropy. This is pure randomness.

If you can prove that there exists a situation that does not have this property, then you have proved that a computer cannot generate a purely random process.

If you are looking at a particular algorithm or class of generators, you need to take that process and prove that the entropy characteristics of the process are lower than the maximum bounds.
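For a concrete picture of "entropy characteristics below the maximum bound", here is a minimal sketch in Python (illustrative only) that compares the empirical Shannon entropy of a generator's outputs with the maximum-entropy bound log2(n) attained by the uniform distribution:

[code]
# Compare empirical Shannon entropy against the maximum-entropy bound.
import math
import random
from collections import Counter

def entropy_bits(samples):
    """Shannon entropy (in bits) of the empirical distribution."""
    counts = Counter(samples)
    total = len(samples)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

n = 16
samples = [random.randrange(n) for _ in range(100_000)]
print("empirical entropy:", entropy_bits(samples))  # close to the bound
print("maximum entropy  :", math.log2(n))           # log2(n) bits
[/code]

A generator whose outputs fall measurably below the bound cannot be purely random in this sense.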
 
  • #3
It perhaps doesn't need saying, but in addition to chiro's suggestion, you would also need to define what you mean by "computer" -- a deterministic Turing machine (without an oracle)?
 
  • #4
Jamin2112 said:
I'm thinking about how to do a proof that a computer cannot generate a truly random number.

Attempt. Let Ω = {ω1, ω2, ..., ωn}, a subset of ℝ, be all the numbers represented on a certain machine.

It isn't really the machine that determines what numbers it can represent. For example, mathematical software that does symbolic manipulations can represent numbers like [itex] \sqrt{2} [/itex] and [itex] \pi [/itex] using strings for their symbols. The machine's processors have a system for representing particular subsets of the real numbers (e.g., floating point), but programs aren't limited to using that system. You can refer to the set of numbers that can be generated on a particular machine by a particular program.

Perhaps you should begin by trying to prove that a computer cannot randomly select a number from the set of two numbers {0,1} or select a random symbol from the set of two symbols {H,T}.

because its output depends on how many times it has been called,

That limits the scope of your proof to particular types of random number generators. It's true that recursive random number generators are the ones most commonly used.
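For instance, a linear congruential generator makes that recursive structure explicit. A minimal sketch in Python (the constants are the familiar "Numerical Recipes" parameters and are purely illustrative):

[code]
# A linear congruential generator: x_{k+1} = (a*x_k + c) mod m.
# Once the seed is fixed, the k-th output is a fixed function f(k),
# exactly as in the original post's formalization.
class LCG:
    def __init__(self, seed):
        self.state = seed

    def rand(self):
        self.state = (1664525 * self.state + 1013904223) % 2**32
        return self.state

g1, g2 = LCG(seed=42), LCG(seed=42)
# Same seed and same call count give the same output, every time.
assert [g1.rand() for _ in range(10)] == [g2.rand() for _ in range(10)]
[/code]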

Should you define what you mean by "random"? It's amusing that we often don't make this demand of people who post about problems in probability theory. For example, we don't cross-examine a poster who makes a statement like "Let X be a random number chosen uniformly in the interval [0,1]". For questions in probability theory, all we demand is that an event and its associated probability distribution be clearly stated.

However, in probability theory, what we say about "probability" is essentially "how it behaves" and not "what it is". That is, there are certain definitions for events (like "independence") that are stated in terms of how the probability of other events (like their joint occurrence) behaves as a function of the individual probabilities of the events. However, there is no definition of probability that lets us look at a situation where no probabilities are defined and conclude that certain probabilities exist and have certain values. All problems in the conventional theory of mathematical probability must begin by stating that certain probabilities are "given". So if you want to deal with a situation where no probabilities are given and either conclude that some probabilities exist or prove that none do, you need to look for other definitions of "random" that would give you some hope of doing this.
 
  • #5
Stephen Tashi said:
Perhaps you should begin by trying to prove that a computer cannot randomly select a number from the set of two numbers {0,1} or select a random symbol from the set of two symbols {H,T}.

I think that's what I'll do.

Should you define what you mean by "random"? It's amusing that we often don't make this demand of people who post about problems in probability theory. For example, we don't cross-examine a poster who makes a statement like "Let X be a random number chosen uniformly in the interval [0,1]".

I think that one component of the definition is what I said in the original post: the limiting average of the gi, as defined there, is 1/n. But I need to make that more general, because in fact every ordered arrangement of elements from Ω needs to have the same limiting value for the corresponding average of g, where g equals 1 if that arrangement appears and 0 otherwise.
 
  • #6
Also, I'm not convinced that it is relevant whether the random number generator uses a recursive algorithm. Either way it's analogous to a function f(n), because you have a numbered sequence of trials.
 
  • #7
Generating random (read: unpredictable) numbers in a conventional computer is not possible, because it behaves deterministically; i.e., the numbers can be predicted if you read the state of the computer (what the program is, what is in the memory).

This reminds me that von Neumann once said "tell me what a computer cannot do and I will build one which will do it". In fact, you can enhance your computer with a hardware device which produces numbers not based on some clever algorithm, but based on the transformation of a signal from a sensitive measurement of, say, the electric current across a resistor. The small fluctuations of the current do not follow any obvious deterministic algorithm, so they can be used to produce random numbers.
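Operating systems already expose sources of this kind: for example, Python's os.urandom reads from an entropy pool that most platforms feed with physical noise (interrupt timing, device jitter, hardware RNG instructions). A minimal sketch:

[code]
# Draw bytes from the OS entropy pool rather than from an algorithm.
import os

raw = os.urandom(8)                  # 8 bytes from the OS pool
value = int.from_bytes(raw, "big")   # interpret as an unsigned integer
print(value)
[/code]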
 
  • #8
Random, as I understand it, means "all possibilities with equal probability".

So, flipping a coin is considered a random event, because in the scope of our problem we recognize two possibilities, heads and tails, and the probabilities of these possible events are equal. I am essentially a subjectivist/Bayesian in my attitude toward statistics. Probabilities are not objective features of the world; they are features of our expectations. A pseudo-random algorithm that humans don't/can't predict IS random.

It's kind of a weird idea to prove a computer can't do something. Any proof you do will be regarding mathematical notions, not physical objects. There would have to be an independent understanding that those notions have something to do with computers. You can't incorporate a physical object into your proof.
 
  • #9
Meh, first of all, there is no reason a computer can't generate random sequences. There is no requirement that a computer be fully deterministic. If its operation contains a random component, then it can extract that random component and produce a random sequence.

If we are restricted to deterministic computers, then the future states of the computer are fully determined by the current state. Therefore the future billion gigabytes of random numbers that will be generated can be specified by simply providing the current state of the computer. This amounts to a massive amount of compression of the sequence. Since a random sequence cannot be compressed, we know that the sequence could not have been random.
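To make that concrete, here is a minimal sketch in Python, where random.getstate stands in for "the current state of the computer":

[code]
# The generator's current state fully determines its future output, so
# storing the (small) state "compresses" any amount of future output.
import random

random.seed(2112)
state = random.getstate()   # the generator's entire internal state

run1 = [random.random() for _ in range(1000)]

random.setstate(state)      # restore the saved state...
run2 = [random.random() for _ in range(1000)]

assert run1 == run2         # ...and the "future" replays exactly
[/code]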
 
  • #10
Since a random sequence cannot be compressed...
That is not true. If you produce a sequence of 100,000 coin flips, which is random by the common definition, write it down into a binary file, and run a compression program, sometimes you can compress it to a smaller size.

True, a conventional program will achieve compression only for a certain small subset of the possible sequences. But which subset that is depends on the method of compression; there is no sequence that can be called "absolutely incompressible" (unless we require that the program be shorter than some maximum number of bytes).

The root of the problem is the fact that "randomness of a sequence" is not a property deducible from its numerical content; the numbers themselves cannot be "random".

Randomness rather means that we are not able to predict the next values of the sequence from the preceding ones. It thus refers to the human's knowledge about the production of the sequence, not to the sequence itself.
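Both halves of this are easy to check with a general-purpose compressor. A minimal sketch in Python using zlib, with os.urandom standing in for the coin:

[code]
# A structured sequence compresses dramatically; a typical coin-flip
# sequence does not compress at all.
import os
import zlib

n_bytes = 100_000 // 8           # 100,000 flips packed into bytes

all_heads = b"\xff" * n_bytes    # the 111...1 sequence
coin_flips = os.urandom(n_bytes)

print(len(zlib.compress(all_heads)))    # tiny: highly compressible
print(len(zlib.compress(coin_flips)))   # about n_bytes or a bit more
[/code]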
 
  • #11
Uvohtufo said:
Its kind of a weird idea to prove a computer can't do something.
Here are some things that a computer (a Turing machine) cannot be programmed to do:
  • Compute a non-computable number. Note well: Almost all of the real numbers are not computable.
  • Given an arbitrary computer program and a finite set of input data, decide whether or not that program will halt.
  • Given an arbitrary Diophantine equation, decide whether or not it has a solution.
 
  • #12
D H said:
Here are some things that a computer (a Turing machine) cannot be programmed to do:
  • Compute a non-computable number. Note well: Almost all of the real numbers are not computable.
  • Given an arbitrary computer program and a finite set of input data, decide whether or not that program will halt.
  • Given an arbitrary Diophantine equation, decide whether or not it has a solution.

I believe you when you tell me a computer cannot be programmed to do those things, but it's another thing entirely to say that a mathematical proof alone tells us what a computer can or cannot do.
 
  • #13
All of those are mathematical proofs.

The first is a consequence of Cantor's proof that the reals are uncountable. The number of possible computer programs is only countably large, so almost all of the reals are not computable. Google "computable number" for more.

The second is the "halting problem." Google that term, along with the "Church-Turing thesis." That the halting problem is unsolvable was an important discovery regarding the limits of computation.

The third problem is "Hilbert's tenth problem." That this was proven to be unsolvable in general was an important discovery regarding the limits of mathematics.
 
  • #14
D H said:
All of those are mathematical proofs.

Yeah, I don't disagree, but I am getting at something else entirely. There is nothing about those proofs that tells you they say anything about real physical objects like computers. I don't disagree that we can meaningfully demonstrate things about computers with math, but that's because I am the one who can see the connection between a computer and a proof. There are proofs which don't have any kind of connection, and we wouldn't say they meant anything about the world.
 
  • #15
Uvohtufo said:
Yeah, I don't disagree, but I am getting at something else entirely. There is nothing about those proofs that tells you they say anything about real physical objects like computers. I don't disagree that we can meaningfully demonstrate things about computers with math, but that's because I am the one who can see the connection between a computer and a proof.

Unless Jamin2112 comes back with a different definition, the word "computer" means "Turing machine". This is a math forum; if you want to talk about physical machines, we have a forum for that.

There are proofs which don't have any kind of connection, and we wouldn't say they meant anything about the world.

Which is completely irrelevant to this thread.
 
  • #16
Sorry, I didn't realize there was a formal definition of a computer. I shouldn't have derailed the thread.
 
  • #17
Jano L. said:
That is not true. If you produce a sequence of 100,000 coin flips, which is random by the common definition, write it down into a binary file, and run a compression program, sometimes you can compress it to a smaller size.

True, a conventional program will achieve compression only for a certain small subset of the possible sequences. But which subset that is depends on the method of compression; there is no sequence that can be called "absolutely incompressible" (unless we require that the program be shorter than some maximum number of bytes).

The root of the problem is the fact that "randomness of a sequence" is not a property deducible from its numerical content; the numbers themselves cannot be "random".

Randomness rather means that we are not able to predict the next values of the sequence from the preceding ones. It thus refers to the human's knowledge about the production of the sequence, not to the sequence itself.

No.

Choose a compression algorithm. Then you can spend your entire life flipping coins and never find a 100,000-bit sequence that will compress appreciably. The number of compressible sequences is so small that you will never see them by accident. And every bit you add to the sequence you want to compress makes it exponentially worse.

It is true that, given any sequence, even a random one, you could design a compression algorithm that will compress it. But that is cheating. And besides, the size of your program will exceed the size of the data being compressed, so in reality there is no compression anyway. All you are doing is front-loading the information from the file into your algorithm. A cheat with no practical use.

There is no free lunch. A compression algorithm that achieves substantial compression does so by restricting itself to a very small subset of possible inputs. You will never see one of those compressible inputs of any size by accident.
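To make the count precise, for any fixed lossless compressor: there are [itex]2^n[/itex] binary strings of length [itex]n[/itex], but only [itex]2^0 + 2^1 + \dots + 2^{n-k} = 2^{n-k+1} - 1[/itex] strings of length at most [itex]n-k[/itex]. Since a lossless compressor must map distinct inputs to distinct outputs, fewer than [itex]2^{n-k+1}[/itex] of the length-[itex]n[/itex] inputs can be shortened by [itex]k[/itex] or more bits - a fraction below [itex]2^{1-k}[/itex]. With [itex]k = 50[/itex], that is less than one string in [itex]2^{49}[/itex].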
 
  • #18
ppnl, I think you misunderstood me.

I do not say that a given program can compress a significant percentage of all possible sequences. I also do not suggest that transferring the data from the sequence to the program is the way to compress it.

What I say is this: whether a sequence can be compressed or not is not an absolute property of the sequence; it also depends on the algorithm we use to compress it. Do you agree with this?

If we can agree on this, then we cannot base a universal definition of randomness on whether the sequence is compressible or not. Such a definition refers to a particular program.

Let me give a concrete example of what I mean.

Let us flip a coin 100,000 times and write 1 for heads, 0 for tails. We may obtain any sequence of 1's and 0's, and they are all random, because the process that led to them is unpredictable.

We may obtain a sequence

1111...

of 100,000 heads. It does not matter that it is highly unlikely; it is possible, and exactly as likely as any other particular sequence.

This sequence is random, since it has been obtained by an unpredictable process, but it can be compressed by the simplest current programs.

I think what you mean is that we cannot compress all, or a great percentage, of such random sequences, and I agree, but that does not allow us to say that a particular random sequence cannot be compressed.
 
  • #19
Jano L. said:
ppnl, I think you misunderstood me.
It's the other way around. You misunderstood ppnl. More below.

Let me give a concrete example of what I mean.

Let us flip a coin 100,000 times and write 1 for heads, 0 for tails. We may obtain any sequence of 1's and 0's, and they are all random, because the process that led to them is unpredictable.

We may obtain a sequence

1111...

of 100,000 heads. It does not matter that it is highly unlikely; it is possible, and exactly as likely as any other particular sequence.
It matters very much that this all-heads sequence is highly unlikely. Yes, there is a chance of a sequence of all 100,000 heads. The probability is smaller than small. Suppose you had a supercomputer capable of generating a billion of these 100,000-element random coin-toss sequences every second. This supercomputer would have to run for [itex]10^{30,000}[/itex] centuries before the expected number of appearances of this all-heads sequence rises to over 1/2.

ppnl covered this all-heads scenario, along with every other compressible random sequence, in the opening lines of his last post; emphasis mine.

ppnl said:
Choose a compression algorithm. Then you can spend your entire life flipping coins and never find a 100,000-bit sequence that will compress appreciably. The number of compressible sequences is so small that you will never see them by accident.
In particular, the number of compressible sequences is small in comparison to the [itex]2^{100,000}[/itex] possible sequences. We're getting close to "almost surely" territory with 100,000 coin tosses. All but a tiny, tiny, tiny fraction of those [itex]2^{100,000}[/itex] possible sequences are not compressible.
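A quick order-of-magnitude check of that figure, as a minimal Python sketch (back-of-the-envelope only):

[code]
# Seeing the all-heads sequence with expected count 1/2 takes about
# 2**99,999 draws; at a billion draws per second, the wait in centuries:
import math

log10_draws = 99_999 * math.log10(2)             # about 30102.7
seconds_per_century = 100 * 365.25 * 24 * 3600   # about 3.16e9
draws_per_century = 1e9 * seconds_per_century
log10_centuries = log10_draws - math.log10(draws_per_century)

print(f"about 10**{log10_centuries:.0f} centuries")   # ~ 10**30084
[/code]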
 
  • #20
D H,

I do not understand how your post shows that I misunderstood ppnl. I agree with him and you on his point - most sequences cannot be compressed by a given program.

Let's agree we agree on this.

What I am saying is something different. I object to adopting the statement "a random sequence cannot be compressed" as true, and I gave a counter-example: the sequence 1111..., obtained by coin flipping, can be compressed. Its low probability is immaterial, as any other example would have exactly the same probability, and surely one counter-example is enough to convince us that the statement is not true. If we change the statement into

"most sequences long 100 000 digits cannot be compressed by a given program"

then it is true. However, it then also has nothing to do with randomness. For a given compression program:

There are random sequences that are compressible, and other random sequences that are incompressible.

There are non-random sequences that are compressible, and other non-random sequences that are incompressible.

EDIT:
Clearly, compressibility of a sequence by a given program has no direct relation to its randomness.
 
  • #21
Jano L. said:
What I am saying is something different. I object to adoption of statement "random sequence cannot be compressed" as true
Who said that?
Random sequences cannot be compressed on average. For every sequence where you save some bits, there are many sequences where you have to add bits (or need some way to say "message terminated", which is additional data as well).

If we change the statement into

"most sequences long 100 000 digits cannot be compressed by a given program"

then it is true. However, it then also has nothing to do with randomness.
It has. For non-random processes, there are algorithms which can compress the message on average - at least if the process (but not its specific result) is known.
 
  • #22
ppnl proposed the following criterion of randomness:

If we are restricted to deterministic computers, then the future states of the computer are fully determined by the current state. Therefore the future billion gigabytes of random numbers that will be generated can be specified by simply providing the current state of the computer. This amounts to a massive amount of compression of the sequence. Since a random sequence cannot be compressed, we know that the sequence could not have been random.

This deals with one sequence and its compression based on a statement of the program that generated it.

In my posts, I only objected to the literal adoption of (sequence is random => sequence is incompressible), and thus also to (sequence is compressible => sequence is not random).


It has. For non-random processes, there are algorithms which can compress the message on average - at least if the process (but not its specific result) is known.

I am not sure what you mean. However, I think it might help to point out that a given program cannot compress all non-random sequences, because the rule governing the sequence may be too complicated for a simple program to discover.
 
  • #23
Ok, forget about the coin flip. Let's say you have a box that contains 100,000 molecules of a gas bouncing around randomly. At any point in time there is a 50% chance that any given molecule will be on the left side of the box rather than the right side.

Would you say that, if you looked and found all the molecules on one side, it was still a random distribution of molecules? No, it is a reduction of entropy. It would make energy available in violation of the second law, and it would be a compressible sequence of ones and zeros. The reduction of entropy may have happened randomly, but the distribution itself is not random.

It also wouldn't happen in the life span of the universe. That's how the second law works.

You see, we can define randomness in terms of available energy. We can also define randomness in terms of compressibility.

And finally, deciding whether a sequence is really random is not computable. If you find a pattern in the sequence, then you know that it is not random. But not finding a pattern can never be a proof that there is no pattern.
 
  • #24
Jano L, you should try thinking of things in terms of the intrinsic structure of the information.

This is actually how the great majority of compression techniques that achieve good compression relative to the information being encoded work.

Things like MP3, MPEG, and JPEG are examples of this: they use domain knowledge from a variety of fields (including psycho-acoustics for MP3) to compress the data.

Now, I know these are examples of lossy compression schemes, but there are analogues for lossless ones as well; one example is the FLAC audio codec.

Instead of thinking necessarily about purely unstructured data, try adding little bits of structure where appropriate and start making connections between a variety of different structures and domains.

This will give you a better way to compress data, and although the technique above won't let you solve the general problem right away, you can aggregate the results to find more patterns and important attributes in general classes of data as you become more aware through your investigations.

Compression requires you to have some kind of structure, because without it you revert to the worst possible case, and that just means you might as well not compress at all.
 
  • #25
We can also define randomness in terms of compressibility.

This is what I have objected to from the very beginning, and so far I have not read any relevant response. How do you make such a definition independent of a specific compression program?

chiro, I think what you are saying makes sense, but I do not see a connection to the question of how to define a random sequence.
 
  • #26
Just a quick (ignorant) question: wouldn't compressing a set of values/sequences/etc. in a sense decrease the randomness? Couldn't we apply compression enough times so that we are left with only one value and thus no random variable?
 
  • #27
You need to measure it relative to the other possible variation.

Entropy is always measured relative to other sources of variation and the likelihood (probability) of that variation occurring.
 

What is a truly random number?

A truly random number is a number that is generated without any predictable pattern or sequence. It is completely unpredictable and cannot be reproduced.

Why can't a computer generate a truly random number?

A computer is a deterministic machine, meaning that its actions are based on a set of programmed instructions. Therefore, it is not capable of generating a truly random number, as it can only produce numbers based on its algorithm.

What methods do computers use to generate random numbers?

Computers use various algorithms and methods, such as pseudorandom number generators, to generate numbers that appear random but are actually deterministic and can be reproduced.
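For example, a minimal sketch in Python (illustrative only): re-seeding a pseudorandom generator reproduces its "random" output exactly.

[code]
# A pseudorandom generator is deterministic: same seed, same numbers.
import random

random.seed(123)
first = [random.randint(0, 99) for _ in range(5)]

random.seed(123)                 # re-seed with the same value...
second = [random.randint(0, 99) for _ in range(5)]

assert first == second           # ...and the sequence repeats exactly
[/code]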

Can a computer ever generate a truly random number?

No, a deterministic computer cannot generate a truly random number, as it is limited by its deterministic nature. However, it can generate numbers that are close enough to random for practical purposes.

What are the practical implications of not being able to generate a truly random number?

This limitation can have implications in areas such as cryptography, where truly random numbers are essential for secure encryption. It can also affect simulations and scientific experiments that require truly random inputs.
