# What is randomness in QM?

Aufbauwerk 2045
Consider this a layman's question. I am not an expert on deeper aspects of probability. I simply rely on Kolmogorov's axioms and the calculation methods I learned as a student. To me it's just a bit of mathematics and it makes perfect sense as mathematics.

When it comes to actually understanding nature, I find this whole topic of probability and determinism quite mysterious. I ask the following question, and would love to know if someone has a good answer. Perhaps there is a good reference to an answer specifically to the following question? I do mean specific, not a general discourse on probability.

Consider the following thought experiment. I am presented with two black boxes. I am told that one contains some natural emitter of random numbers. I am told that this process is therefore "non-deterministic." The other contains a small computer that is running a random number generator. I am told that this process is therefore "deterministic." I understand what is meant by "deterministic" because I understand how a random number generator program works. But I do not understand "non-deterministic." What does it mean? Does it mean there is no so-called "causal relationship?" Of course this means we must define "causality." Another riddle.

Continuing with the thought experiment, the output of each box is a sequence of numbers. My job is to determine which box contains the natural process and which contains the computer.

I am only allowed to examine the output. I am not allowed to examine the boxes themselves in any way. Looking only at the output, and using only mathematical tests for randomness, how can I distinguish between the so-called "truly random" and the so-called "pseudo-random" processes?

My understanding is that if there is no such mathematical test, which can distinguish between the two outputs, then we would naturally fall back on Occam's Razor, namely "do not multiply entities without necessity." I know how random number generators work. Why should I believe that whatever is going on in nature's black boxes is something other than this? In other words, why do we introduce this idea of a "non-deterministic" black box in nature? Can we even define "non-deterministic?"

Is there a good explanation of this in the scientific literature? Please provide the reference if possible. Thanks!

P.S. I took several QM courses at university some years ago, and this question never came up. Maybe it's different these days. Maybe it's somewhere in a popular textbook? Or would it be considered a fringe question?


stevendaryl
Staff Emeritus
My understanding is that if there is no such mathematical test, which can distinguish between the two outputs, then we would naturally fall back on Occam's Razor, namely "do not multiply entities without necessity." I know how random number generators work. Why should I believe that whatever is going on in nature's black boxes is something other than this? In other words, why do we introduce this idea of a "non-deterministic" black box in nature? Can we even define "non-deterministic?"

Well, that's the question that John Bell was considering when he developed Bell's Theorem. He showed that the probabilistic predictions of QM in an experiment such as the EPR experiment cannot be reproduced by any "pseudo-random" process without one of two equally unappealing features:
1. Faster-than-light influences
2. Superdeterminism
It's a little hard to explain the difference between ordinary determinism and superdeterminism. I could try to explain it, if you're interested.

Aufbauwerk 2045
Well, that's the question that John Bell was considering when he developed Bell's Theorem. He showed that the probabilistic predictions of QM in an experiment such as the EPR experiment cannot be reproduced by any "pseudo-random" process without one of two equally unappealing features:
1. Faster-than-light influences
2. Superdeterminism
It's a little hard to explain the difference between ordinary determinism and superdeterminism. I could try to explain it, if you're interested.

Thanks for responding. Of course I am interested in having this explained. The reason I ask for references to standard literature is because I don't want the thread to be shut down for venturing into fringe areas. So I would be happy if you could explain what is the standard answer to my question, assuming there is one, along with references to the "gold standard" sort of literature that is accepted by PF. Thanks again.

stevendaryl
Staff Emeritus
Thanks for responding. Of course I am interested in having this explained. The reason I ask for references to standard literature is because I don't want the thread to be shut down for venturing into fringe areas. So I would be happy if you could explain what is the standard answer to my question, assuming there is one, along with references to the "gold standard" sort of literature that is accepted by PF. Thanks again.

In an EPR experiment, you have some source of electron-positron pairs. You have two experimenters, Alice and Bob. For each particle, Alice chooses a direction ##\vec{a}## and measures the spin of one particle relative to that direction. QM predicts that she always gets +1/2 (spin-up) or -1/2 (spin-down). Bob chooses a direction ##\vec{b}## and measures the spin of the other particle relative to that direction. For EPR pairs, the prediction is that if Alice and Bob measure relative to the same direction, they always get opposite results.

So the way to explain this using determinism is this: Assume that the results are deterministic functions of:
1. Some unknown property ##\lambda## of the pair.
2. The settings ##\vec{a}## and ##\vec{b}## of the two detectors.
So we assume that Alice's result is ##F_A(\vec{a}, \vec{b}, \lambda)## and Bob's result is ##F_B(\vec{a}, \vec{b}, \lambda)##.

(Because integers are easier to work with, let's scale the spin results to ##\pm 1## rather than ##\pm \frac{1}{2}##. So ##F_A## and ##F_B## are assumed to always return ##\pm 1##. Plus 1 means spin-up and minus 1 means spin-down, relative to a direction.)

Here's where the speed of light comes in: if the two measurements take place far enough apart, then, assuming that no effects can travel faster than light, it should be impossible for Alice's settings to affect Bob's result, or vice versa. In other words,

Alice's result is some function ##F_A(\vec{a}, \lambda)##. Bob's result is some function ##F_B(\vec{b}, \lambda)##. The requirement that if Alice and Bob choose the same direction, they always get opposite results implies that ##F_B(\vec{x},\lambda) = -F_A(\vec{x}, \lambda)## for any direction ##\vec{x}##. So we don't have independent functions.

So this deterministic model of the EPR correlations works like this:
1. Initially, a pair is produced with some unknown property ##\lambda## that determines the spin measurements. Since we don't know what the value of ##\lambda## is (it's a hidden variable), we use a probability distribution ##P(\lambda)## to express our ignorance of the value of ##\lambda##.
2. Later, Alice chooses a direction ##\vec{a}## to measure her particle's spin, and gets the result ##F_A(\vec{a}, \lambda)##
3. Later, Bob chooses another direction ##\vec{b}## to measure his particle's spin, and gets the result ##F_B(\vec{b}, \lambda)##
Now, what Bell did next was to calculate a quantity that measures the relationship between Bob's result and Alice's result. Multiply the two results together, and you get another quantity that is always ##\pm 1##. If Alice always got the same result as Bob, its average would be +1. If Alice always got the opposite result, its average would be -1. We want to compute ##E(\vec{a}, \vec{b})##, the average value of the product of their results over many trials (which presumably means averaging over different values of ##\lambda##). Mathematically,

##E(\vec{a}, \vec{b}) = -\sum_\lambda P(\lambda) F_A(\vec{a}, \lambda) F_A(\vec{b}, \lambda)##

Bell showed that there is no choice of function ##F_A## and probability distribution ##P(\lambda)## that makes ##E(\vec{a}, \vec{b})## equal to the quantum-mechanical prediction, which is ##E(\vec{a}, \vec{b}) = -\vec{a} \cdot \vec{b}##.
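To see concretely what goes wrong, here is a quick numerical check. The particular ##F_A## (the sign of ##\vec{a} \cdot \lambda## for a hidden unit vector ##\lambda##) and the uniform ##P(\lambda)## are just one illustrative choice, not anything from Bell's paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unit_vectors(n):
    # Uniform directions on the sphere, playing the role of the hidden variable lambda
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def E_local(a, b, n=200_000):
    """Correlation for the toy local model F_A = sign(a.lam), F_B = -sign(b.lam)."""
    lam = random_unit_vectors(n)
    FA = np.sign(lam @ a)
    FB = -np.sign(lam @ b)
    return np.mean(FA * FB)

theta = np.pi / 4
a = np.array([0.0, 0.0, 1.0])
b = np.array([np.sin(theta), 0.0, np.cos(theta)])

print(E_local(a, b))   # ≈ -1 + 2*theta/pi ≈ -0.5
print(-np.cos(theta))  # QM prediction ≈ -0.707
```

For ##\theta = \pi/4## this model gives about -0.5, while QM demands about -0.707; Bell's theorem says no choice of ##F_A## and ##P(\lambda)## can close that gap at every angle.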

A loophole is to allow ##F_A## to depend on both Alice's setting and Bob's setting. That's only possible if:
1. Bob's setting affects Alice's result (which implies FTL influences)
2. Bob's setting is not a free parameter, but is actually determined by ##\lambda##.
The second possibility is superdeterminism.

Boing3000
Gold Member
So I would be happy if you could explain what is the standard answer to my question, assuming there is one.
Half the question is answerable, because the box with a random number generator will "loop" once the full span of its internal state has been visited. That span can be huge, but it is still finite and measurable.
The people who claim to have given you a non-deterministic box are the ones who have to give you the answer to that question. As far as I know, some generators based on "quantum" randomness (like radioactive decay) also have a limited time-span. So it seems that even if "pure," the quantity of randomness is still limited quite precisely (which I consider to be a deterministic feature).
Let's not talk about the third box, with some people inside who are going to spew out numbers based on their "free will."

Consider this a layman's question. I am not an expert on deeper aspects of probability. I simply rely on Kolmogorov's axioms and the calculation methods I learned as a student. To me it's just a bit of mathematics and it makes perfect sense as mathematics.
[..]
I am only allowed to examine the output. I am not allowed to examine the boxes themselves in any way. Looking only at the output, and using only mathematical tests for randomness, how can I distinguish between the so-called "truly random" and the so-called "pseudo-random" processes?

My understanding is that if there is no such mathematical test, which can distinguish between the two outputs, then we would naturally fall back on Occam's Razor, namely "do not multiply entities without necessity." I know how random number generators work. Why should I believe that whatever is going on in nature's black boxes is something other than this? In other words, why do we introduce this idea of a "non-deterministic" black box in nature? Can we even define "non-deterministic?"

Is there a good explanation of this in the scientific literature? Please provide the reference if possible. Thanks!

P.S. I took several QM courses at university some years ago, and this question never came up. Maybe it's different these days. Maybe it's somewhere in a popular textbook? Or would it be considered a fringe question?
A practical example is Malus's law, which only gives the probability of light passing a linear polarizer. Even if we know the alignment of the polarizer and the alignment of the incoming light, we still cannot be certain whether the photon will pass (unless the alignments are parallel or perpendicular).
One explanation is that the path of the photon depends on the state of the birefringent material at the time of the interaction. This involves about ##10^9## crystal sites and their phonon population. This is not something we can know, so in this case one could argue that limited information forces us to use probability. In this case the initial conditions form a natural (FAPP, "for all practical purposes") random number generator.

stevendaryl
Staff Emeritus
A practical example is Malus's law, which only gives the probability of light passing a linear polarizer. Even if we know the alignment of the polarizer and the alignment of the incoming light, we still cannot be certain whether the photon will pass (unless the alignments are parallel or perpendicular).
One explanation is that the path of the photon depends on the state of the birefringent material at the time of the interaction. This involves about ##10^9## crystal sites and their phonon population. This is not something we can know, so in this case one could argue that limited information forces us to use probability. In this case the initial conditions form a natural (FAPP) random number generator.

I know that this wasn't your point, but EPR shows that this explanation can't be correct (at least, not without making some highly implausible assumptions). If you create a correlated pair of photons, and one passes through one polarizing filter, then the other will definitely pass through the other filter (if their alignments are the same, if we disregard defects in the crystals). So if the explanation for the apparent randomness is the details of the state of the filter, then that would imply that every other filter in the universe with the same orientation must be the same in those details.

Stephen Tashi
So the way to explain this using determinism is this: Assume that the results are deterministic functions of:
1. Some unknown property ##\lambda## of the pair.
2. The settings ##\vec{a}## and ##\vec{b}## of the two detectors.

However, the original post asks about distinguishing between a pseudo-random generator of values (e.g. for ##\lambda##) versus a "truly" random generator of values for the same variable.

The conclusion of the example is that there is no ##P(\lambda)## that explains the results of the experiment. Likewise, the results can't be explained by a pseudo-random process that simulates ##P(\lambda)##. As far as ##\lambda## goes, the example rules out both of the situations that the original post wishes to distinguish.

Is there a variable other than ##\lambda## in the example that can be used to distinguish "true" randomness versus simulated randomness?

stevendaryl
Staff Emeritus
However, the original post asks about distinguishing between a pseudo-random generator of values (e.g. for ##\lambda##) versus a "truly" random generator of values for the same variable.

The conclusion of the example is that there is no ##P(\lambda)## that explains the results of the experiment. Likewise, the results can't be explained by a pseudo-random process that simulates ##P(\lambda)##. As far as ##\lambda## goes, the example rules out both of the situations that the original post wishes to distinguish.

Is there a variable other than ##\lambda## in the example that can be used to distinguish "true" randomness versus simulated randomness?

I'm only saying that a pseudo-random process can't reproduce the nondeterminism of quantum mechanics.

Stephen Tashi
Half the question is answerable, because the box with a random number generator will "loop" once the full span of its internal state has been visited.

Yes, the usual type of random number generator (linear congruential) is periodic. Is there a theorem saying that an arbitrary type of random number generator must be periodic? (To formulate such a theorem, we'd need a definition of "random number generator" that wasn't tied to a specific class of functions.)
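For generators with finite internal state the answer is yes, by the pigeonhole principle: if the state can take only ##m## values, some state must recur within ##m## steps, and the evolution repeats exactly from there. (A "generator" with unbounded state, say one emitting successive digits of ##\pi##, escapes the theorem but is still deterministic.) A toy sketch, with deliberately tiny made-up parameters:

```python
def lcg(seed, a=5, c=3, m=16):
    """Tiny linear congruential generator with a deliberately small modulus."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

# The generator's whole state is x, which lives in a set of m = 16 values,
# so pigeonhole guarantees a repeated state (hence a cycle) within 16 steps.
g = lcg(seed=7)
seen = {}
for i in range(100):
    x = next(g)
    if x in seen:
        period = i - seen[x]
        break
    seen[x] = i

print(period)  # 16 (the full period for these parameters)
```

Real generators such as the Mersenne Twister have astronomically long periods (##2^{19937}-1##), but the same pigeonhole argument applies to them all.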

Stephen Tashi
I'm only saying that a pseudo-random process can't reproduce the nondeterminism of quantum mechanics.

It seems to me that if people do "successful" computer simulations of QM then they have simulated its stochastic aspects using pseudo-random procedures. Perhaps there are phenomena that nobody can simulate on non-quantum computers.

I know that this wasn't your point, but EPR shows that this explanation can't be correct (at least, not without making some highly implausible assumptions). If you create a correlated pair of photons, and one passes through one polarizing filter, then the other will definitely pass through the other filter (if their alignments are the same, if we disregard defects in the crystals). So if the explanation for the apparent randomness is the details of the state of the filter, then that would imply that every other filter in the universe with the same orientation must be the same in those details.
I don't think Bell's theorem has anything to say about randomness.
The 100% correlation or anticorrelation case always has the filters set to the same alignment, so there is no randomness at all. Any other setting combination appears to be random.

stevendaryl
Staff Emeritus
I don't think Bell's theorem has anything to say about randomness.

Maybe not, but it implies, as I said, that you can't explain the randomness in terms of the details of the polarizing filter, unless those details are the same in every polarizing filter.

The 100% correlation or anticorrelation case always has the filters set to the same alignment so there is no randomness at all.

That's not true. If Alice and Bob both have aligned filters, and they measure photons from a correlated EPR pair, then there is a 50% chance that both photons will pass, and there is a 50% chance that neither will pass. That's randomness.

stevendaryl
Staff Emeritus
It seems to me that if people do "successful" computer simulations of QM then they have simulated its stochastic aspects using pseudo-random procedures.

Inside a computer, there are no locality restrictions.

Bell's assumption is that Alice's result depends only on the random parameter ##\lambda## and her detector's setting, and Bob's result depends only on the random parameter ##\lambda## and his detector's setting. To simulate the spin-1/2 EPR case with a computer, you can do the following:
1. Randomly assign Alice spin-up or spin-down, with 50% probability of each.
2. Then assign Bob the same result with probability ##\sin^2(\frac{\theta}{2})## and the opposite result with probability ##\cos^2(\frac{\theta}{2})##, where ##\theta## is the angle between Alice's setting and Bob's setting.
This gives the quantum-mechanical statistics, but it violates "locality", since the parameter ##\theta## that determines Bob's result depends on Alice's setting.
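Here is a sketch of that two-step recipe (the function and variable names are mine, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def E(alpha, beta, n=200_000):
    """Nonlocal simulation of spin-1/2 EPR statistics (results in {+1, -1})."""
    theta = alpha - beta                           # Bob's rule sees Alice's setting: nonlocal
    alice = rng.choice([+1, -1], size=n)           # step 1: fair coin for Alice
    same = rng.random(n) < np.sin(theta / 2) ** 2  # step 2: match with prob sin^2(theta/2)
    bob = np.where(same, alice, -alice)
    return np.mean(alice * bob)

print(E(0.0, np.pi / 3))   # ≈ -cos(pi/3) = -0.5
```

The product averages to ##\sin^2(\frac{\theta}{2}) - \cos^2(\frac{\theta}{2}) = -\cos\theta##, the QM prediction, but only because ##\theta## couples the two settings.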

Maybe not, but it implies, as I said, that you can't explain the randomness in terms of the details of the polarizing filter, unless those details are the same in every polarizing filter.
I don't see why not. The explanation in terms of initial conditions does not require a cosmic conspiracy.

Stephen Tashi
This gives the quantum-mechanical statistics, but it violates "locality", since the parameter ##\theta## that determines Bob's result depends on Alice's setting.

Can we make locality relevant to the question in the original post?

Assuming locality, if we have several black boxes that are either all "truly" random generators of values or all pseudo-random generators of values, is there some way to hook them up experimentally and distinguish which is the case?

stevendaryl
Staff Emeritus
I don't see why not. The explanation in terms of initial conditions does not require a cosmic conspiracy.

You measure a photon's polarization relative to a filter, and you either get H or V. If you want to explain this in deterministic terms, then that means that the result is some function ##F(\alpha, \lambda, h)##, where ##\alpha## is the orientation of the filter, ##\lambda## is some unknown property of the photon, and ##h## is some unknown property of the filter.

Now, create a twin pair of correlated photons, and let Alice and Bob measure them using different filters. So let ##F_A(\alpha, \lambda, a)## be the function determining Alice's outcome as a function of her filter's orientation, ##\alpha##, the photon's state, ##\lambda##, and the state of her filter, ##a##. Let ##F_B(\beta, \lambda, b)## be the function determining Bob's outcome as a function of his orientation, ##\beta##, the photon state, ##\lambda##, and his filter state, ##b##.

Empirically, if ##\alpha = \beta##, then ##F_A(\alpha, \lambda, a) = F_B(\alpha, \lambda, b)##. When the filters are aligned, the filter states ##a## and ##b## make no difference. It's hard to see how the filter state could make a difference when the filters are not aligned, yet none at all when they are aligned. (I actually think it's possible to prove that it's impossible, but that would require more work than I'm willing to do right now.)

stevendaryl
Staff Emeritus
Can we make locality relevant to the question in the original post?

Assuming locality, If we have several black boxes that are either all "truly" random generators of values or all pseudo-random generators of values, is there some way to hook them up experimentally and distinguish which is the case?

I don't think so. Actually, I'll go further: the answer is definitely no. Any truly nondeterministic system that has only local evolution is empirically equivalent to a deterministic system in which the only randomness is from the initial conditions.

You measure a photon's polarization relative to a filter, and you either get H or V. If you want to explain this in deterministic terms, then that means that the result is some function ##F(\alpha, \lambda, h)##, where ##\alpha## is the orientation of the filter, ##\lambda## is some unknown property of the photon, and ##h## is some unknown property of the filter.

Now, create a twin pair of correlated photons, and let Alice and Bob measure them using different filters. So let ##F_A(\alpha, \lambda, a)## be the function determining Alice's outcome as a function of her filter's orientation, ##\alpha##, the photon's state, ##\lambda##, and the state of her filter, ##a##. Let ##F_B(\beta, \lambda, b)## be the function determining Bob's outcome as a function of his orientation, ##\beta##, the photon state, ##\lambda##, and his filter state, ##b##.

Empirically, if ##\alpha = \beta##, then ##F_A(\alpha, \lambda, a) = F_B(\alpha, \lambda, b)##. When the filters are aligned, the filter states ##a## and ##b## make no difference. It's hard to see how the filter state could make a difference when the filters are not aligned, yet none at all when they are aligned. (I actually think it's possible to prove that it's impossible, but that would require more work than I'm willing to do right now.)
I will study this, but I don't see the relevance to anything but closing off a "settings-conspiracy" loophole in an EPR experiment!
Nothing to do with Malus's law as I see it.

@stevendaryl
I apologise for the late edit. To clarify my original post and to point out a misunderstanding.
You are conflating "setting" with "state". The settings ##\alpha, \beta## are determined by the experiment, but the state is the unknown internal state of the polarizer, ##\psi(t, ...)##, which changes continuously.

I don't think so. Actually, I'll go further--the answer is definitely no. Any truly nondeterministic system that has only local evolution is empirically equivalent to a deterministic system in which the only randomness is from the initial conditions.
I think this is what I said!

Stephen Tashi
If there is a way to use a QM scenario to distinguish between random versus pseudo-random black boxes, I think that time would be a critical aspect. We could specify that a random black box produces a value x(t) at time t, and this value does not exist before that time. A pseudo-random box can be implemented by a device that generates the values x(t) in advance and stores them. Can we rig up an experiment that distinguishes between a physical quantity that has a definite but unknown value before time t and one that has no definite value until time t?
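The "stored in advance" box is easy to sketch; the point is that its output stream is bit-for-bit identical to one generated on demand, so nothing in the output reveals when the values came into existence (the class names here are made up for illustration):

```python
import random

class LiveBox:
    """Produces x(t) only at time t (stands in for the on-demand box)."""
    def __init__(self, seed):
        self._rng = random.Random(seed)
    def emit(self):
        return self._rng.random()

class StoredBox:
    """Generates the whole sequence in advance and merely replays it."""
    def __init__(self, seed, n):
        rng = random.Random(seed)
        self._tape = [rng.random() for _ in range(n)]
        self._i = 0
    def emit(self):
        x = self._tape[self._i]
        self._i += 1
        return x

a, b = LiveBox(42), StoredBox(42, 10)
print([a.emit() == b.emit() for _ in range(10)])  # ten Trues: the streams match exactly
```

An observer restricted to the outputs sees the same sequence either way, which is exactly the distinction the experiment would have to probe by other means.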

stevendaryl
Staff Emeritus
@stevendaryl
I apologise for the late edit. To clarify my original post and to point out a misunderstanding.
You are conflating "setting" with "state". The settings ##\alpha, \beta## are determined by the experiment, but the state is the unknown internal state of the polarizer, ##\psi(t, ...)##, which changes continuously.

No, I specifically was not doing that. That's why my functions ##F_A(\alpha, \lambda, a)## and ##F_B(\beta, \lambda, b)## depend on three variables: ##a## and ##b## are the unknown internal states.

The fact that in an EPR experiment, ##F_A(\alpha, \lambda, a) = F_B(\alpha, \lambda, b)## strongly suggests that the function doesn't actually depend on the internal state.

stevendaryl
Staff Emeritus
If there is a way to use a QM scenario to distinguish between random versus pseudo-random black boxes.

There definitely is not. On the other hand, there is a way to distinguish between pseudo-random behavior with only local dynamics and the predictions of QM. The predictions of EPR don't rule out pseudo-randomness, but they rule out local pseudo-randomness.

Stephen Tashi
The predictions of EPR don't rule out pseudo-randomness, but they rule out local pseudo-randomness.

Are you using EPR to refer to the experiment described in post #4? That experiment rules out both local pseudo-randomness and genuine randomness in setting the value of ##\lambda##.

An experiment relevant to the original post needs to involve quantities whose values can be set by either a random or pseudo-random process.

stevendaryl
Staff Emeritus
An experiment relevant to the original post needs to involve quantities whose values can be set by either a random or pseudo-random process.

Well, I gave my opinion several times already: there is no empirical way to distinguish pseudo-randomness from true randomness. But pseudo-randomness doesn't help explain the weirdness of QM.

Nugatory
Mentor
Well, I gave my opinion several times already: there is no empirical way to distinguish pseudo-randomness from true randomness.
If presented with a black box generating a sequence of outputs, and asked the question "Is the output random (as opposed to pseudo-random)?", the possible answers are "no" or "we don't know". If we can demonstrate an algorithm that successfully predicts the next bit, we know it's pseudo-random; but inability to do that doesn't tell us anything except that we might not have figured it out yet. In particular, there is no way of excluding the possibility that the entire sequence of outputs is going to repeat from the beginning if we wait long enough.
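The "demonstrate an algorithm" branch is easy to illustrate with a deliberately weak generator whose output is its entire internal state. The constants below are the classic glibc-style LCG parameters, used here purely as a toy:

```python
def weak_prng(seed, n, a=1103515245, c=12345, m=2**31):
    """LCG that leaks its full state: each output IS the next state."""
    x = seed
    out = []
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x)
    return out

def predict_next(x, a=1103515245, c=12345, m=2**31):
    # Because the state is visible, the next value is computable exactly.
    return (a * x + c) % m

seq = weak_prng(seed=2024, n=10)
hits = sum(predict_next(seq[i]) == seq[i + 1] for i in range(9))
print(hits)  # 9 of 9: we know this box is pseudo-random
```

A cryptographic generator hides its state, so this attack fails and we are back to the "we don't know" answer.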
But pseudo-randomness doesn't help explain the weirdness of QM.
Yes indeed. @Aufbauwerk 2045's original question may be more relevant to cryptographers than to physicists; it is a very big deal if the bad guys can figure out the PRNG you're using for key generation.

stevendaryl
Staff Emeritus
If presented with a black box generating a sequence of outputs, and asked the question "Is the output random (as opposed to pseudo-random)?", the possible answers are "no" or "we don't know".

Right. There should be a name for the type of yes/no question that can only be definitively answered one way. "Are you asleep?" is another example.

I will study this but I do't see the relevance to anything but closing off a 'settings-conspiracy' loophole in an EPR experiment !
Nothing to do with Malus law as I see it.
No, I specifically was not doing that. That's why my functions ##F_A(\alpha, \lambda, a)## and ##F_B(\beta, \lambda, b)## depends on three variables: ##a## and ##b## are the unknown internal states.

The fact that in an EPR experiment, ##F_A(\alpha, \lambda, a) = F_B(\alpha, \lambda, b)## strongly suggests that the function doesn't actually depend on the internal state.
OK, I apologise for my misunderstanding.
I'm still unconvinced by your logic, because it assumes that a(t) and b(t) are not functions of time and are independent. Thus I think your statement "The predictions of EPR don't rule out pseudo-randomness, but they rule out local pseudo-randomness." is wrong without more restrictions on your definitions of ##a## and ##b##.

PeterDonis
Mentor
I'm still unconvinced by your logic because it assumes that a(t) and b(t) are not functions of time

He made no such assumption. The values of ##a## and ##b## that go into the formulas are whatever the internal states of the polarizers are at the times the measurements are made. That in no way rules out the possibility that those internal states change with time; it just says that the states which are relevant to the measurements are the states at the times of the measurements.

stevendaryl
Staff Emeritus