What is randomness in QM?

  • #1
Aufbauwerk 2045
Consider this a layman's question. I am not an expert on deeper aspects of probability. I simply rely on Kolmogorov's axioms and the calculation methods I learned as a student. To me it's just a bit of mathematics and it makes perfect sense as mathematics.

When it comes to actually understanding nature, I find this whole topic of probability and determinism quite mysterious. I ask the following question, and would love to know if someone has a good answer. Perhaps there is a good reference to an answer specifically to the following question? I do mean specific, not a general discourse on probability.

Consider the following thought experiment. I am presented with two black boxes. I am told that one contains some natural emitter of random numbers. I am told that this process is therefore "non-deterministic." The other contains a small computer that is running a random number generator. I am told that this process is therefore "deterministic." I understand what is meant by "deterministic" because I understand how a random number generator program works. But I do not understand "non-deterministic." What does it mean? Does it mean there is no so-called "causal relationship?" Of course this means we must define "causality." Another riddle.

Continuing with the thought experiment, the output of each box is a sequence of numbers. My job is to determine which box contains the natural process and which contains the computer.

I am only allowed to examine the output. I am not allowed to examine the boxes themselves in any way. Looking only at the output, and using only mathematical tests for randomness, how can I distinguish between the so-called "truly random" and the so-called "pseudo-random" processes?

My understanding is that if there is no such mathematical test, which can distinguish between the two outputs, then we would naturally fall back on Occam's Razor, namely "do not multiply entities without necessity." I know how random number generators work. Why should I believe that whatever is going on in nature's black boxes is something other than this? In other words, why do we introduce this idea of a "non-deterministic" black box in nature? Can we even define "non-deterministic?"
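To make "mathematical tests for randomness" concrete, here is a minimal sketch (my own illustration in Python, using a standard monobit frequency test): a decent pseudo-random stream passes it just as comfortably as a "truly random" one would.

```python
# My own toy example: a monobit frequency test (in the style of the NIST
# statistical test suite) applied to a pseudo-random bit stream.  A "truly
# random" source would be scored by exactly the same formula, and an unbiased
# one passes just as easily, so this kind of test cannot tell them apart.
import random
from math import erfc, sqrt

def monobit_p_value(bits):
    """p-value for the hypothesis that the bits are fair, independent coin flips."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)   # map 1 -> +1, 0 -> -1 and sum
    return erfc(abs(s) / sqrt(2.0 * n))     # large p-value: consistent with randomness

pseudo_stream = [random.getrandbits(1) for _ in range(100_000)]  # deterministic PRNG
print("monobit p-value for the pseudo-random stream:", monobit_p_value(pseudo_stream))
```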

Is there a good explanation of this in the scientific literature? Please provide the reference if possible. Thanks!

P.S. I took several QM courses at university some years ago, and this question never came up. Maybe it's different these days. Maybe it's somewhere in a popular textbook? Or would it be considered a fringe question?
 

Answers and Replies

  • #2
stevendaryl
Staff Emeritus
Science Advisor
Insights Author
My understanding is that if there is no such mathematical test, which can distinguish between the two outputs, then we would naturally fall back on Occam's Razor, namely "do not multiply entities without necessity." I know how random number generators work. Why should I believe that whatever is going on in nature's black boxes is something other than this? In other words, why do we introduce this idea of a "non-deterministic" black box in nature? Can we even define "non-deterministic?"
Well, that's the question that John Bell was considering when he developed Bell's Theorem. He showed that the probabilistic predictions of QM in an experiment such as the EPR experiment cannot be reproduced by any "pseudo-random" process without one of two equally unappealing features:
  1. Faster-than-light influences
  2. Superdeterminism
It's a little hard to explain the difference between ordinary determinism and superdeterminism. I could try to explain it, if you're interested.
 
  • #3
Aufbauwerk 2045
Well, that's the question that John Bell was considering when he developed Bell's Theorem. He showed that the probabilistic predictions of QM in an experiment such as the EPR experiment cannot be reproduced by any "pseudo-random" process without one of two equally unappealing features:
  1. Faster-than-light influences
  2. Superdeterminism
It's a little hard to explain the difference between ordinary determinism and superdeterminism. I could try to explain it, if you're interested.
Thanks for responding. Of course I am interested in having this explained. The reason I ask for references to standard literature is because I don't want the thread to be shut down for venturing into fringe areas. So I would be happy if you could explain what is the standard answer to my question, assuming there is one, along with references to the "gold standard" sort of literature that is accepted by PF. Thanks again.
 
  • #4
stevendaryl
Staff Emeritus
Science Advisor
Insights Author
Thanks for responding. Of course I am interested in having this explained. The reason I ask for references to standard literature is because I don't want the thread to be shut down for venturing into fringe areas. So I would be happy if you could explain what is the standard answer to my question, assuming there is one, along with references to the "gold standard" sort of literature that is accepted by PF. Thanks again.
In an EPR experiment, you have some source of electron-positron pairs. You have two experimenters, Alice and Bob. For each pair, Alice chooses a direction ##\vec{a}## and measures the spin of one particle relative to that direction. QM predicts that she always gets +1/2 (spin-up) or -1/2 (spin-down). Bob chooses a direction ##\vec{b}## and measures the spin of the other particle relative to that direction. For EPR pairs, the prediction is that if Alice and Bob measure relative to the same direction, they always get opposite results.

So the way to explain this using determinism is this: Assume that the results are deterministic functions of:
  1. Some unknown property ##\lambda## of the pair.
  2. The settings ##\vec{a}## and ##\vec{b}## of the two detectors.
So we assume that Alice's result is ##F_A(\vec{a}, \vec{b}, \lambda)## and Bob's result is ##F_B(\vec{a}, \vec{b}, \lambda)##.

(Because integers are easier to work with, let's scale the spin results to ##\pm 1## rather than ##\pm \frac{1}{2}##. So ##F_A## and ##F_B## are assumed to always return ##\pm 1##. Plus 1 means spin-up and minus 1 means spin-down, relative to a direction.)

Here's where the speed of light comes in: if the two measurements take place far enough apart, then, assuming that no effects can travel faster than light, it should be impossible for Alice's settings to affect Bob's result, or vice versa. In other words,

Alice's result is some function ##F_A(\vec{a}, \lambda)##. Bob's result is some function ##F_B(\vec{b}, \lambda)##. The requirement that if Alice and Bob choose the same direction, they always get opposite results implies that ##F_B(\vec{x},\lambda) = -F_A(\vec{x}, \lambda)## for any direction ##\vec{x}##. So we don't have independent functions.

So this deterministic model of the EPR correlations works like this:
  1. Initially, a pair is produced with some unknown property ##\lambda## that determines the spin measurements. Since we don't know what the value of ##\lambda## is (it's a hidden variable), we use a probability distribution ##P(\lambda)## to express our ignorance of the value of ##\lambda##.
  2. Later, Alice chooses a direction ##\vec{a}## to measure her particle's spin, and gets the result ##F_A(\vec{a}, \lambda)##
  3. Later, Bob chooses another direction ##\vec{b}## to measure his particle's spin, and gets the result ##F_B(\vec{b}, \lambda)##
Now, what Bell did next was to calculate a quantity that measures the relationship between Bob's result and Alice's result. Multiply the two results together, and you get another result that is always ##\pm 1##. If Alice always got the same result as Bob, then this number would be +1. If Alice always got the opposite result, then this number would be -1. We want to compute ##E(\vec{a}, \vec{b})##, which is the average value of the product of their results, averaged over many trials (which presumably means averaging over different values of ##\lambda##). Mathematically,

##E(\vec{a}, \vec{b}) = -\sum_\lambda P(\lambda) F_A(\vec{a}, \lambda) F_A(\vec{b}, \lambda)##

Bell showed that there is no mathematical function ##F_A## and probability distribution ##P(\lambda)## that makes ##E(\vec{a}, \vec{b})## turn out to be the quantum-mechanical prediction, which is ##E(\vec{a}, \vec{b}) = -\vec{a} \cdot \vec{b}##

A loophole is to allow ##F_A## to depend on both Alice's setting and Bob's setting. That's only possible if one of the following holds:
  1. Bob's setting affects Alice's result (which implies FTL influences)
  2. Bob's setting is not a free parameter, but is actually determined by ##\lambda##.
The second possibility is superdeterminism.
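As a concrete illustration (a sketch of my own, not part of Bell's argument), here is a small Monte Carlo comparison in Python between one candidate local model, ##F_A(\vec{a}, \lambda) = \mathrm{sign}(\vec{a} \cdot \lambda)## with ##\lambda## uniform on the unit sphere and ##F_B = -F_A##, and the quantum prediction ##-\vec{a} \cdot \vec{b}##. The model reproduces the perfect anti-correlation at equal settings, but not the full correlation curve:

```python
# Hedged sketch (my own illustration): compare one local deterministic model
# against the QM prediction E(a, b) = -a.b for spin-1/2 EPR pairs.
import numpy as np

rng = np.random.default_rng(0)

def random_unit_vectors(n):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def correlation_local_model(a, b, n=200_000):
    """E(a,b) for F_A(a, lam) = sign(a.lam), F_B(b, lam) = -sign(b.lam), lam uniform on the sphere."""
    lam = random_unit_vectors(n)
    alice = np.sign(lam @ a)
    bob = -np.sign(lam @ b)
    return np.mean(alice * bob)

def direction(theta):
    """Unit vector at angle theta from the z-axis, in the x-z plane."""
    return np.array([np.sin(theta), 0.0, np.cos(theta)])

a = direction(0.0)
for theta in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4, np.pi):
    b = direction(theta)
    print(f"theta = {theta:4.2f}   local model E = {correlation_local_model(a, b):+.3f}"
          f"   QM E = {-np.dot(a, b):+.3f}")
# The two agree at theta = 0, pi/2, pi but differ in between:
# the local model gives -1 + 2*theta/pi, QM gives -cos(theta).
```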
 
  • #5
Boing3000
Gold Member
So I would be happy if you could explain what is the standard answer to my question, assuming there is one.
Half the question is answerable, because the box with a random number generator will "loop" once the full span of its internal state has been visited. That span can be huge, but it is still finite and measurable.
The people who claim to have given you a non-deterministic box are the ones who have to answer that question. As far as I know, some generators based on "quantum" randomness (like radioactive decay) also have a limited time span. So it seems that even if "pure", the quantity of randomness is still limited quite precisely (which I consider to be a deterministic feature).
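To illustrate the looping with a deliberately tiny state (a toy of my own, with made-up constants):

```python
# Toy illustration (my own, deliberately tiny): a linear congruential generator
# must revisit a state eventually, and from then on its output repeats exactly.
def lcg(seed, a=5, c=3, m=16):
    """Yield an endless LCG output stream with modulus m (so at most m distinct states)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

gen = lcg(seed=7)
print([next(gen) for _ in range(40)])
# With m = 16 the output repeats with period at most 16.  Real generators use an
# enormous internal state, so the period is astronomically long, but still finite.
```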
Let's not talk about the third box, with some people inside who are going to spew out numbers based on their "free will" :rolleyes:
 
  • #6
Consider this a layman's question. I am not an expert on deeper aspects of probability. I simply rely on Kolmogorov's axioms and the calculation methods I learned as a student. To me it's just a bit of mathematics and it makes perfect sense as mathematics.
[..]
I am only allowed to examine the output. I am not allowed to examine the boxes themselves in any way. Looking only at the output, and using only mathematical tests for randomness, how can I distinguish between the so-called "truly random" and the so-called "pseudo-random" processes?

My understanding is that if there is no such mathematical test, which can distinguish between the two outputs, then we would naturally fall back on Occam's Razor, namely "do not multiply entities without necessity." I know how random number generators work. Why should I believe that whatever is going on in nature's black boxes is something other than this? In other words, why do we introduce this idea of a "non-deterministic" black box in nature? Can we even define "non-deterministic?"

Is there a good explanation of this in the scientific literature? Please provide the reference if possible. Thanks!

P.S. I took several QM courses at university some years ago, and this question never came up. Maybe it's different these days. Maybe it's somewhere in a popular textbook? Or would it be considered a fringe question?
A practical example is Malus's law, which gives only the probability of light passing a linear polarizer. Even if we know the alignment of the polarizer and the alignment of the incoming light, we still cannot be certain whether the photon will pass (unless the alignments are parallel or perpendicular).
One explanation is that the path of the photon depends on the state of the birefringent material at the time of the interaction. This involves about ##10^9## crystal sites and their phonon population. This is not something we can know, so in this case one could argue that limited information forces us to use probability. In this case the initial conditions form a natural (for all practical purposes, FAPP) random number generator.
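As a rough sketch of what I mean (my own toy code, nothing more): driving Malus's law with an ordinary pseudo-random generator already reproduces the single-photon statistics we actually observe:

```python
# My own toy sketch: Malus's law gives only a probability, cos^2(theta), that a
# photon passes a linear polarizer set at angle theta to its polarization.
# Feeding that probability with a pseudo-random generator reproduces the
# observed single-photon statistics.
import math
import random

def photon_passes(theta):
    """True with probability cos^2(theta), i.e. Malus's law applied photon by photon."""
    return random.random() < math.cos(theta) ** 2

theta = math.radians(30)
trials = 100_000
passed = sum(photon_passes(theta) for _ in range(trials))
print(f"observed fraction passed: {passed / trials:.3f}"
      f"   Malus prediction: {math.cos(theta) ** 2:.3f}")
```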
 
  • #7
stevendaryl
Staff Emeritus
Science Advisor
Insights Author
A practical example is Malus's law, which gives only the probability of light passing a linear polarizer. Even if we know the alignment of the polarizer and the alignment of the incoming light, we still cannot be certain whether the photon will pass (unless the alignments are parallel or perpendicular).
One explanation is that the path of the photon depends on the state of the birefringent material at the time of the interaction. This involves about ##10^9## crystal sites and their phonon population. This is not something we can know, so in this case one could argue that limited information forces us to use probability. In this case the initial conditions form a natural (for all practical purposes, FAPP) random number generator.
I know that this wasn't your point, but EPR shows that this explanation can't be correct (at least, not without making some highly implausible assumptions). If you create a correlated pair of photons, and one passes through one polarizing filter, then the other will definitely pass through the other filter (if their alignments are the same and we disregard defects in the crystals). So if the explanation for the apparent randomness is the details of the state of the filter, then that would imply that every other filter in the universe with the same orientation must be the same in those details.
 
  • #8
Stephen Tashi
Science Advisor
So the way to explain this using determinism is this: Assume that the results are deterministic functions of:
  1. Some unknown property ##\lambda## of the pair.
  2. The settings ##\vec{a}## and ##\vec{b}## of the two detectors.
However, the original post asks about distinguishing between a pseudo-random generator of values (e.g. for ##\lambda##) versus a "truly" random generator of values for the same variable.

The conclusion of the example is that there is no ##P(\lambda)## that explains the results of the experiment. Likewise the results can't be explained by a pseudo-random process that simulates ##P(\lambda)##. As far as ##\lambda## goes, the example rules out both of the situations that the original post wishes to distinguish.

Is there a variable other than ##\lambda## in the example that can be used to distinguish "true" randomness versus simulated randomness?
 
  • #9
stevendaryl
Staff Emeritus
Science Advisor
Insights Author
However, the original post asks about distinguishing between a pseudo-random generator of values (e.g. for ##\lambda##) versus a "truly" random generator of values for the same variable.

The conclusion of the example is that there is no ##P(\lambda)## that explains the results of the experiment. Likewise the results can't be explained by a pseudo-random process that simulates ##P(\lambda)##. As far as ##\lambda## goes, the example rules out both of the situations that the original post wishes to distinguish.

Is there a variable other than ##\lambda## in the example that can be used to distinguish "true" randomness versus simulated randomness?
I'm only saying that a pseudo-random process can't reproduce the nondeterminism of quantum mechanics.
 
  • #10
Stephen Tashi
Science Advisor
Half the question is answerable, because the box with a random number generator will "loop" once the full span of its internal state has been visited.
Yes, the usual type of random number generator (linear congruential) is periodic. Is there a theorem saying that an arbitrary type of random number generator must be periodic? (To formulate such a theorem, we'd need a definition of "random number generator" that wasn't tied to a specific class of functions.)
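To make this concrete (a sketch of my own): if "random number generator" means a deterministic update on a finite state set, then the pigeonhole principle forces eventual periodicity; Floyd's cycle-finding algorithm exhibits the pre-period and period without storing the whole history:

```python
# Sketch (my own): any deterministic generator with a finite state set is
# eventually periodic, by the pigeonhole principle.  Floyd's tortoise-and-hare
# algorithm finds the pre-period (mu) and period (lam) for an arbitrary update f.
def find_cycle(f, x0):
    """Return (mu, lam) for the sequence x0, f(x0), f(f(x0)), ..."""
    tortoise, hare = f(x0), f(f(x0))          # phase 1: meet inside the cycle
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(f(hare))
    mu, tortoise = 0, x0                      # phase 2: find the start of the cycle
    while tortoise != hare:
        tortoise, hare, mu = f(tortoise), f(hare), mu + 1
    lam, hare = 1, f(tortoise)                # phase 3: measure the cycle length
    while tortoise != hare:
        hare, lam = f(hare), lam + 1
    return mu, lam

# A deliberately opaque update on a finite (here: 1,000,003-element) state space.
f = lambda x: (x * x + 1) % 1_000_003
print(find_cycle(f, x0=42))   # some finite pre-period and period, whatever f is
```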
 
  • #11
Stephen Tashi
Science Advisor
I'm only saying that a pseudo-random process can't reproduce the nondeterminism of quantum mechanics.
It seems to me that if people do "successful" computer simulations of QM then they have simulated its stochastic aspects using pseudo-random procedures. Perhaps there are phenomena that nobody can simulate on non-quantum computers.
 
  • #12
I know that this wasn't your point, but EPR shows that this explanation can't be correct (at least, not without making some highly implausible assumptions). If you create a correlated pair of photons, and one passes through one polarizing filter, then the other will definitely pass through the other filter (if their alignments are the same and we disregard defects in the crystals). So if the explanation for the apparent randomness is the details of the state of the filter, then that would imply that every other filter in the universe with the same orientation must be the same in those details.
I don't think Bell's theorem has anything to say about randomness.
The 100% correlation or anticorrelation case always has the filters set to the same alignment so there is no randomness at all. Any other combination of settings appears to be random.
 
  • #13
stevendaryl
Staff Emeritus
Science Advisor
Insights Author
I don't think Bell's theorem has anything to say about randomness.
Maybe not, but it implies, as I said, that you can't explain the randomness in terms of the details of the polarizing filter, unless those details are the same in every polarizing filter.

The 100% correlation or anticorrelation case always has the filters set to the same alignment so there is no randomness at all.
That's not true. If Alice and Bob both have aligned filters, and they measure photons from a correlated EPR pair, then there is a 50% chance that both photons will pass, and there is a 50% chance that neither will pass. That's randomness.
 
  • #14
stevendaryl
Staff Emeritus
Science Advisor
Insights Author
It seems to me that if people do "successful" computer simulations of QM then they have simulated its stochastic aspects using pseudo-random procedures.
Inside a computer, there are no locality restrictions.

Bell's assumption is that Alice's result depends only on the random parameter ##\lambda## and her detector's setting, and Bob's result depends only on the random parameter ##\lambda## and his detector's setting. To simulate the spin-1/2 EPR case with a computer, you can do the following:
  1. Randomly assign Alice spin-up or spin-down, with 50% probability of each.
  2. Then assign Bob the same result with probability ##\sin^2(\frac{\theta}{2})## and the opposite result with probability ##\cos^2(\frac{\theta}{2})##, where ##\theta## is the angle between Alice's and Bob's detector settings.
This gives the quantum-mechanical statistics, but it violates "locality", since the parameter ##\theta## that determines Bob's result depends on Alice's setting.
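Here is a quick numerical check of that recipe (my own sketch, with ##\theta## the angle between the two settings as above):

```python
# Hedged sketch: Monte Carlo check that the two-step (nonlocal) recipe above
# reproduces E(a, b) = -cos(theta) for spin-1/2 EPR pairs, theta being the
# angle between Alice's and Bob's settings.
import math
import random

def simulate_pair(theta):
    alice = random.choice((+1, -1))                     # step 1: 50/50 for Alice
    same = random.random() < math.sin(theta / 2) ** 2   # step 2: same result with prob sin^2(theta/2)
    bob = alice if same else -alice
    return alice, bob

def correlation(theta, n=200_000):
    return sum(a * b for a, b in (simulate_pair(theta) for _ in range(n))) / n

for deg in (0, 45, 90, 135, 180):
    theta = math.radians(deg)
    print(f"theta = {deg:3d} deg   simulated E = {correlation(theta):+.3f}"
          f"   QM E = {-math.cos(theta):+.3f}")
# The statistics match QM, but only because step 2 uses theta,
# which depends on *both* settings at once.
```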
 
  • #15
Maybe not, but it implies, as I said, that you can't explain the randomness in terms of the details of the polarizing filter, unless those details are the same in every polarizing filter.
I don't see why not. The explanation in terms of initial conditions does not require a cosmic conspiracy.
 
  • #16
Stephen Tashi
Science Advisor
This gives the quantum-mechanical statistics, but it violates "locality", since the parameter ##\theta## that determines Bob's result depends on Alice's setting.
Can we make locality relevant to the question in the original post?

Assuming locality, if we have several black boxes that are either all "truly" random generators of values or all pseudo-random generators of values, is there some way to hook them up experimentally and distinguish which is the case?
 
  • #17
stevendaryl
Staff Emeritus
Science Advisor
Insights Author
I don't see why not. The explanation in terms of initial conditions does not require a cosmic conspiracy.
You measure a photon's polarization relative to a filter, and you either get H or V. If you want to explain this in deterministic terms, then that means that the result is some function ##F(\alpha, \lambda, h)##, where ##\alpha## is the orientation of the filter, ##\lambda## is some unknown property of the photon, and ##h## is some unknown property of the filter.

Now, create a twin pair of correlated photons, and let Alice and Bob measure them using different filters. So let ##F_A(\alpha, \lambda, a)## be the function determining Alice's outcome as a function of her filter's orientation, ##\alpha##, the photon's state, ##\lambda##, and the state of her filter, ##a##. Let ##F_B(\beta, \lambda, b)## be the function determining Bob's outcome as a function of his orientation, ##\beta##, the photon state, ##\lambda##, and his filter state, ##b##.

Empirically, if ##\alpha = \beta##, then ##F_A(\alpha, \lambda, a) = F_B(\alpha, \lambda, b)##. When the filters are aligned, the filter states ##a## and ##b## make no difference. It's hard to see how the filter state could make a difference when the filters are not aligned, but not if they are aligned. (I actually think that it's possible to prove that it's impossible, but that would require more work than I'm willing to do right now.)
 
  • #18
stevendaryl
Staff Emeritus
Science Advisor
Insights Author
Can we make locality relevant to the question in the original post?

Assuming locality, if we have several black boxes that are either all "truly" random generators of values or all pseudo-random generators of values, is there some way to hook them up experimentally and distinguish which is the case?
I don't think so. Actually, I'll go further--the answer is definitely no. Any truly nondeterministic system that has only local evolution is empirically equivalent to a deterministic system in which the only randomness is from the initial conditions.
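A toy illustration of that equivalence (my own sketch): one box draws a fresh random bit at every step, the other evolves deterministically with all of its randomness fixed in the initial condition, and nothing in their outputs tells them apart:

```python
# Toy sketch (my own): a local box that "flips a coin" at every step versus a
# deterministic box whose only randomness is a tape fixed in its initial
# condition.  Their output statistics are the same.
import random

def runtime_box(n):
    """Stand-in for a box that draws a fresh random bit at every step."""
    return [random.randint(0, 1) for _ in range(n)]

def initial_condition_box(n, seed=12345):
    """Deterministic evolution; all randomness lives in the pre-committed seed."""
    tape = random.Random(seed)          # the 'initial condition'
    return [tape.randint(0, 1) for _ in range(n)]

n = 100_000
print("run-time box mean:          ", sum(runtime_box(n)) / n)
print("initial-condition box mean: ", sum(initial_condition_box(n)) / n)
```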
 
  • #19
You measure a photon's polarization relative to a filter, and you either get H or V. If you want to explain this in deterministic terms, then that means that the result is some function ##F(\alpha, \lambda, h)##, where ##\alpha## is the orientation of the filter, ##\lambda## is some unknown property of the photon, and ##h## is some unknown property of the filter.

Now, create a twin pair of correlated photons, and let Alice and Bob measure them using different filters. So let ##F_A(\alpha, \lambda, a)## be the function determining Alice's outcome as a function of her filter's orientation, ##\alpha##, the photon's state, ##\lambda##, and the state of her filter, ##a##. Let ##F_B(\beta, \lambda, b)## be the function determining Bob's outcome as a function of his orientation, ##\beta##, the photon state, ##\lambda##, and his filter state, ##b##.

Empirically, if ##\alpha = \beta##, then ##F_A(\alpha, \lambda, a) = F_B(\alpha, \lambda, b)##. When the filters are aligned, the filter states ##a## and ##b## make no difference. It's hard to see how the filter state could make a difference when the filters are not aligned, but not if they are aligned. (I actually think that it's possible to prove that it's impossible, but that would require more work than I'm willing to do right now.)
I will study this but I don't see the relevance to anything but closing off a 'settings-conspiracy' loophole in an EPR experiment!
Nothing to do with Malus's law as I see it.

[Edit]
@stevendaryl
I apologise for the late edit. To clarify my original post and to point out a misunderstanding.
You are conflating 'setting' with 'state'. The settings ##\alpha, \beta## are determined by the experiment, but the state is the unknown internal state of the polarizer, ##\psi(t, ...)##, which changes continuously.
 
  • #20
I don't think so. Actually, I'll go further--the answer is definitely no. Any truly nondeterministic system that has only local evolution is empirically equivalent to a deterministic system in which the only randomness is from the initial conditions.
My bold. I think this is what I said!
 
  • #21
Stephen Tashi
Science Advisor
If there is a way to use a QM scenario to distinguish between random versus pseudo-random black boxes, I think that time would be a critical aspect. We could specify that a random black box produces a value x(t) at time t and this value does not exist before that time. A pseudo-random box can be implemented by a device that generates the values x(t) in advance and stores them. Can we rig up an experiment that distinguishes between a physical quantity that has a "definite but unknown value" before time t versus one that has no definite value until time t?
 
  • #22
stevendaryl
Staff Emeritus
Science Advisor
Insights Author
@stevendaryl
I apologise for the late edit. To clarify my original post and to point out a misunderstanding.
You are conflating 'setting' with 'state'. The settings ##\alpha, \beta## are determined by the experiment, but the state is the unknown internal state of the polarizer, ##\psi(t, ...)##, which changes continuously.
No, I specifically was not doing that. That's why my functions ##F_A(\alpha, \lambda, a)## and ##F_B(\beta, \lambda, b)## depend on three variables: ##a## and ##b## are the unknown internal states.

The fact that in an EPR experiment, ##F_A(\alpha, \lambda, a) = F_B(\alpha, \lambda, b)## strongly suggests that the function doesn't actually depend on the internal state.
 
  • #23
stevendaryl
Staff Emeritus
Science Advisor
Insights Author
If there is a way to use a QM scenario to distinguish between random versus pseudo-random black boxes.
There definitely is not. On the other hand, there is a way to distinguish between pseudo-random behavior with only local dynamics and the predictions of QM. The predictions of EPR don't rule out pseudo-randomness, but they rule out local pseudo-randomness.
 
  • #24
Stephen Tashi
Science Advisor
The predictions of EPR don't rule out pseudo-randomness, but they rule out local pseudo-randomness.
Are you using EPR to refer to the experiment described in post #4? That experiment rules out both local pseudo-randomness and genuine randomness in setting the value of ##\lambda##.

An experiment relevant to the original post needs to involve quantities whose values can be set by either a random or a pseudo-random process.
 
  • #25
stevendaryl
Staff Emeritus
Science Advisor
Insights Author
An experiment relevant to the original post needs to involve quantities whose values can be set by either a random or a pseudo-random process.
Well, I gave my opinion several times already: there is no empirical way to distinguish pseudo-randomness from true randomness. But pseudo-randomness doesn't help explain the weirdness of QM.
 
