Is Randomness in Quantum Mechanics Truly Non-Deterministic?

In summary, the conversation touches on determinism and non-determinism in nature, specifically in relation to probability and randomness. It explores a thought experiment involving two black boxes, one containing a natural emitter of random numbers and the other a computer running a random number generator, and raises the question of whether any mathematical test can distinguish between truly random and pseudo-random processes. It also touches on superdeterminism and John Bell's theorem, and concludes with a detailed explanation of how determinism can be used to explain the results of an EPR experiment, with the caveat that the two measurements must take place far enough apart that no slower-than-light influence could connect them.
  • #36
Mentz114 said:
Please explain what you mean by this? Are you saying 'one could say' or 'that Mentz114 could say'?

If you are suggesting that one has to assume that polarizing filters behave differently in EPR than in single photon experiments - you are wrong.

I thought you were saying that the internal state of the filter is relevant for single photons but not for the perfect-correlation case. That sure sounds to me like you're saying it behaves differently in the two cases.
 
  • #37
stevendaryl said:
I thought you were saying that the internal state of the filter is relevant for single photons but not for the perfect-correlation case. That sure sounds to me like you're saying it behaves differently in the two cases.

Okay, I guess it works if you allow nonlocal interactions between the photons.
  • Assume that initially the photon pair has a polarization in some random direction ##\lambda##; both photons have the same random polarization.
  • Alice's photon reaches her filter, which is oriented at some angle ##\alpha##. We assume that whether the photon passes through is a deterministic function ##P_A(\alpha, \lambda, a)## of the orientation ##\alpha##, the polarization ##\lambda##, and the filter state ##a##. But the average of ##P_A(\alpha, \lambda, a)## over all possible ##a## gives ##\cos^2(\alpha - \lambda)##.
  • If Alice's photon passes through, then Bob's photon instantaneously changes its polarization to ##\alpha##.
  • If Alice's photon is absorbed, then Bob's photon instantaneously changes its polarization to ##\lambda' = \alpha + \frac{\pi}{2}##.
  • Bob's photon reaches his filter, which is oriented at angle ##\beta##. Whether his photon passes through is determined by another function, ##P_B(\beta, \lambda', b)##, whose average over all ##b## analogously gives ##\cos^2(\beta - \lambda')##.
This model has the same statistics as predicted by QM.

So, I back off. If you have the instantaneous change of photon state, then this model works.
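A minimal Monte Carlo sketch of this model (a paraphrase, not stevendaryl's actual code: the filter's internal state ##a## is folded into a single random draw whose average realizes the ##\cos^2## rule, and all names are illustrative):

Python:
import math
import random

def run_pair(alpha, beta):
    """One photon pair in the nonlocal model above."""
    lam = random.uniform(0, math.pi)  # shared random polarization
    # Alice's filter: averaging the deterministic P_A over the internal
    # state a is modeled as one draw with pass probability cos^2(alpha - lam).
    a_pass = random.random() < math.cos(alpha - lam) ** 2
    # Nonlocal step: Bob's photon snaps to alpha, or perpendicular to it.
    lam2 = alpha if a_pass else alpha + math.pi / 2
    b_pass = random.random() < math.cos(beta - lam2) ** 2
    return a_pass, b_pass

alpha, beta, n = 0.0, math.pi / 8, 200_000
both = sum(run_pair(alpha, beta) == (True, True) for _ in range(n))
print(both / n)                           # simulated coincidence rate
print(0.5 * math.cos(alpha - beta) ** 2)  # QM prediction; the two agree closely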
 
  • Like
Likes zonde
  • #38
stevendaryl said:
Okay, I guess it works if you allow nonlocal interactions between the photons.
This model has the same statistics as predicted by QM.
If you have the instantaneous change of photon state, then this model works.
Thanks.
I like your reasoning here because it shows that the mode of operation of the filter and its internal state do not affect the outcome of EPR, as long as all the filters are interchangeably identical (I hope this is not contentious).
(Because it is not possible to tell the difference between instantaneous inter-pair communication and other 'mechanisms', I always assume the former.)
 
  • #39
Aufbauwerk 2045 said:
Consider this a layman's question. I am not an expert on deeper aspects of probability. I simply rely on Kolmogorov's axioms and the calculation methods I learned as a student. To me it's just a bit of mathematics and it makes perfect sense as mathematics.

When it comes to actually understanding nature, I find this whole topic of probability and determinism quite mysterious. I ask the following question, and would love to know if someone has a good answer. Perhaps there is a good reference to an answer specifically to the following question? I do mean specific, not a general discourse on probability.

Consider the following thought experiment. I am presented with two black boxes. I am told that one contains some natural emitter of random numbers. I am told that this process is therefore "non-deterministic." The other contains a small computer that is running a random number generator. I am told that this process is therefore "deterministic." I understand what is meant by "deterministic" because I understand how a random number generator program works. But I do not understand "non-deterministic." What does it mean? Does it mean there is no so-called "causal relationship?" Of course this means we must define "causality." Another riddle.

Continuing with the thought experiment, the output of each box is a sequence of numbers. My job is to determine which box contains the natural process and which contains the computer.

I am only allowed to examine the output. I am not allowed to examine the boxes themselves in any way. Looking only at the output, and using only mathematical tests for randomness, how can I distinguish between the so-called "truly random" and the so-called "pseudo-random" processes?

My understanding is that if there is no such mathematical test that can distinguish between the two outputs, then we would naturally fall back on Occam's Razor, namely "do not multiply entities without necessity." I know how random number generators work. Why should I believe that whatever is going on in nature's black boxes is something other than this? In other words, why do we introduce this idea of a "non-deterministic" black box in nature? Can we even define "non-deterministic"?

Is there a good explanation of this in the scientific literature? Please provide the reference if possible. Thanks!

P.S. I took several QM courses at university some years ago, and this question never came up. Maybe it's different these days. Maybe it's somewhere in a popular textbook? Or would it be considered a fringe question?

In classical probability, we can interpret the formalism as something definite really going on, even though we don't know exactly which case is going on. In the bare quantum formalism, there is more than one possibility as to what the "something definite" is, which means it is not that definite. Mathematically, this corresponds to classical probability containing a structure called a simplex, which is absent in quantum probability.
Holevo, Statistical Structure of Quantum Theory https://books.google.com.sg/books/a...antum_Theory.html?id=CX4-064Rao8C&redir_esc=y
Bengtsson and Zyczkowski, Geometry of Quantum States https://books.google.com.sg/books?id=aA4vXMbuOTUC&source=gbs_navlinks_s

Can we add structure to quantum theory so that quantum theory becomes classical probability? In some cases, like Bohmian Mechanics, we know how to do this. Bell's theorem says that these hidden variables are non-local. These hidden variables would be like a hidden reality, so we can say they are "ontological". There are several valid interpretations of Bell's theorem, and some are more about "operational" senses of quantum mechanics, rather than about "ontology". One of them says that if we believe that no one can communicate faster than light, then quantum mechanics can be used to certify true randomness in an operational sense. In short: if nature uses deterministic black boxes, the boxes are nonlocal, and if no one can access the nonlocality, then they are operationally random in a way that we know cannot be broken.
Acin and Masanes, Certified randomness in quantum physics, https://arxiv.org/abs/1708.00265
Wiseman and Cavalcanti, Causarum Investigatio and the Two Bell's Theorems of John Bell https://arxiv.org/abs/1503.06413
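(A standard illustration of that missing simplex structure, not taken from the references above: the maximally mixed qubit state has two distinct decompositions into pure states, ##\frac{1}{2}I = \frac{1}{2}|0\rangle\langle 0| + \frac{1}{2}|1\rangle\langle 1| = \frac{1}{2}|+\rangle\langle +| + \frac{1}{2}|-\rangle\langle -|## with ##|\pm\rangle = \frac{1}{\sqrt{2}}(|0\rangle \pm |1\rangle)##, whereas a point in a classical simplex has exactly one decomposition into the vertices.)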
 
Last edited:
  • Like
Likes Aufbauwerk 2045
  • #40
atyy said:
In classical probability, we can interpret the formalism as something definite really going on, even though we don't know exactly which case is going on. In the bare quantum formalism, there is more than one possibility as to what the "something definite" is, which means it is not that definite. Mathematically, this corresponds to classical probability containing a structure called a simplex, which is absent in quantum probability.
Holevo, Statistical Structure of Quantum Theory https://books.google.com.sg/books/a...antum_Theory.html?id=CX4-064Rao8C&redir_esc=y
Bengtsson and Zyczkowski, Geometry of Quantum States https://books.google.com.sg/books?id=aA4vXMbuOTUC&source=gbs_navlinks_s

Can we add structure to quantum theory so that quantum theory becomes classical probability? In some cases, like Bohmian Mechanics, we know how to do this. Bell's theorem says that these hidden variables are non-local. These hidden variables would be like a hidden reality, so we can say they are "ontological". There are several valid interpretations of Bell's theorem, and some are more about "operational" senses of quantum mechanics, rather than about "ontology". One of them says that if we believe that no one can communicate faster than light, then quantum mechanics can be used to certify true randomness in an operational sense. In short: if nature uses deterministic black boxes, the boxes are nonlocal, and if no one can access the nonlocality, then they are operationally random in a way that we know cannot be broken.
Acin and Masanes, Certified randomness in quantum physics, https://arxiv.org/abs/1708.00265
Wiseman and Cavalcanti, Causarum Investigatio and the Two Bell's Theorems of John Bell https://arxiv.org/abs/1503.06413

Thanks! I started reading the Acin and Masanes paper first. Maybe I will have comments or questions about it later on. This is not applicable to my work. It's just an area that has fascinated me for the last few years, and I am trying to learn more about it.
 
  • #41
Aufbauwerk 2045 said:
Consider this a layman's question. I am not an expert on deeper aspects of probability. I simply rely on Kolmogorov's axioms and the calculation methods I learned as a student. To me it's just a bit of mathematics and it makes perfect sense as mathematics.

Well, in that vein, i.e. as a part of probability theory, see the following:
https://arxiv.org/abs/1402.6562

The 64 million dollar question, of course, is why nature chose a probability framework. Nobody knows. My best guess is that determinism is a subset of ordinary probability. So if you want to build a general model of some sort, you start out assuming it's probabilistic, because determinism is part of that anyway. Then you have the issues of the above paper - you would like to have continuous transformations between pure states so you can apply calculus, for example. But then you are led to QM. Ugly reality then rears its head in the form of the Kochen-Specker theorem, which says determinism is not really part of that probability model. So basically we don't really know - it just seems to be how nature is. We have some guesses like Bohemian Mechanics that restore determinism - but, due to the uncertainty relations of QM, in a rather strange way: it's deterministic, but since you can't know the initial conditions you can't predict exact outcomes - only probabilities. So we are back where we started in practical terms.

Thanks
Bill
 
  • Like
Likes Aufbauwerk 2045
  • #42
bhobba said:
We have some guesses like Bohemian Mechanics

Was that misspelling on purpose?

Is this the real life?
Is this just fantasy?
Caught in a landslide
No escape from reality...
 
  • Like
Likes DennisN, bhobba, Asymptotic and 2 others
  • #43
bhobba said:
Well, in that vein, i.e. as a part of probability theory, see the following:
https://arxiv.org/abs/1402.6562

The 64 million dollar question, of course, is why nature chose a probability framework. Nobody knows. My best guess is that determinism is a subset of ordinary probability. So if you want to build a general model of some sort, you start out assuming it's probabilistic, because determinism is part of that anyway. Then you have the issues of the above paper - you would like to have continuous transformations between pure states so you can apply calculus, for example. But then you are led to QM. Ugly reality then rears its head in the form of the Kochen-Specker theorem, which says determinism is not really part of that probability model. So basically we don't really know - it just seems to be how nature is. We have some guesses like Bohemian Mechanics that restore determinism - but, due to the uncertainty relations of QM, in a rather strange way: it's deterministic, but since you can't know the initial conditions you can't predict exact outcomes - only probabilities. So we are back where we started in practical terms.

Thanks
Bill

Interesting. But I am not discussing matters of opinion or speculation any more on this forum. If I participate again, it will be to help students with math or physics problems in a way that hopefully will not cause any disputes. No opening for argument, I hope. 2+2=4, etc.

As for the question in this thread, I can't get involved any more. I find satisfaction in solving problems. I just solved one I have been working on for some time. No debate about the meaning of human words, no speculation, just imagination, logic, pure mathematics and programming. There is no doubt involved, and nothing humans can muck up with their disputes. Of course I am human and I include myself in that group of pathetic creatures. But my work is beautiful. Just equations, code, and the quiet whirring of my computer fan, and the beautiful flash of numbers on the screen. Beautiful. Related to physics? Perhaps. Not directly, but one never knows. But there is practical importance.

Perhaps one day we can solve deeper problems of determinism. My goals now are more modest. I love to ponder and speculate on the ultimate questions, but I know I can't solve them, and I think no one can, at this stage of human development. I think we need advanced AI in order to make the next great leap of understanding. I hope our human brains are adequate to develop advanced AI, using the primitive AI we have developed to date to help in that effort. That is our only hope. Even the greatest geniuses are inadequate if they rely on their pathetically limited intelligence.

I am not interested in discussion any more on any topic. I may read, but I will not engage. But if I do help students, something I have experience of as a former mathematics tutor and teaching assistant, then I will answer questions in that context.

Cheers.

:)
 
Last edited by a moderator:
  • Like
Likes Mentz114
  • #44
stevendaryl said:
Well, I gave my opinion several times already: there is no empirical way to distinguish pseudo-randomness from true randomness.
Hi stevendaryl:

I find this quite puzzling. If one can guess the algorithm used to produce the pseudo-random numbers then one can predict the sequence of pseudo-random numbers, but there is no way to predict the sequence of true random numbers. I am guessing you have some concept in mind related to "distinguish" that I am not understanding.

ADDED
I suppose that one might argue that the likelihood of guessing the algorithm is infinitesimally tiny, but it is not theoretically impossible. Does your concept of "distinguish" depend on this being "practically" impossible?

Regards,
Buzz
 
Last edited:
  • #45
Buzz Bloom said:
Hi stevendaryl:

I find this quite puzzling. If one can guess the algorithm used to produce the pseudo-random numbers then one can predict the sequence of pseudo-random numbers, but there is no way to predict the sequence of true random numbers. I am guessing you have some concept in mind related to "distinguish" that I am not understanding.

This was pointed out by @Nugatory: whether a random variable is pseudo-random or not is "semi-decidable". If it's pseudo-random, you can eventually figure that out (by finding the pattern). If it's not pseudo-random, then you will never know. You can never prove that it's not pseudo-random.
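A toy version of that one-sided test, restricted to a deliberately tiny hypothesis class (the LCG family, the bound, and all names here are illustrative choices, not anyone's actual procedure):

Python:
from itertools import product

def lcg_explains(bits, a, c, m, seed):
    """Does the low bit of x -> (a*x + c) % m, from this seed, reproduce bits?"""
    x = seed
    for b in bits:
        x = (a * x + c) % m
        if (x & 1) != b:
            return False
    return True

def search(bits, bound):
    """Semi-decision sketch: halts with a certificate if the stream is one of
    these small LCGs.  The genuine procedure would grow bound forever, so on a
    truly random stream it never halts: 'not pseudo-random' is never certified."""
    for a, c, m, seed in product(range(1, bound), range(bound), range(2, bound), range(bound)):
        if seed < m and lcg_explains(bits, a, c, m, seed):
            return (a, c, m, seed)  # pattern found: provably pseudo-random
    return None                     # merely "no pattern found yet"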
 
  • Like
Likes bhobba
  • #46
Hi stevendaryl:

Thanks for your reply. I apologize for carelessly missing the Nugatory post.

Regards,
Buzz
 
  • #47
stevendaryl said:
Any truly nondeterministic system that has only local evolution is empirically equivalent to a deterministic system in which the only randomness is from the initial conditions.
I would say that a "truly non-deterministic system" is inherently non-local. Either its output states are generated entirely locally or they are dependent on something not local. If according to physics there is a choice, then something makes that choice - local, non-local, or you split the universe.
 
  • Like
Likes akvadrako
  • #48
Mentz114 said:
I don't think Bell's theorem has anything to say about randomness.
The 100% correlation or anticorrelation case always has the filters set to the same alignment, so there is no randomness at all. Any other combination of settings appears to be random.
Hmmm. That's interesting.

Let's see what the Bell experiment looks like if things are not random: with two filters, each at three possible positions, you have 9 combinations. Then you have 18 random functions, 9 for each filter (A and B): each filter has one for every A/B configuration.

The problem is that if A can predict how B's filter will react, then A will be able to telegraph a message to B at faster than the speed of light.

So the math in the Bell experiment works fine for both random and pseudo-random filter behavior. But the Bell experiment as a whole does require that the filter output be inherently unpredictable - else FTL information transfer is demonstrated and the Bell analysis is no longer needed.
 
  • #49
.Scott said:
[..]
The problem is that if A can predict how B's filter will react, then A will be able to telegraph a message to B at faster than the speed of light.

So the math in the Bell experiment works fine for both random and pseudo-random filter behavior. But the Bell experiment as a whole does require that the filter output be inherently unpredictable - else FTL information transfer is demonstrated and the Bell analysis is no longer needed.
Exactly (if I understand you). Every time a filter acts it must be an independent random event, otherwise the probabilities cannot be averaged out and the QM (singlet state) statistics are not predicted.
 
  • #50
.Scott said:
So the math in the Bell experiment works fine for both random and pseudo-random filter behavior. But the Bell experiment as a whole does require that the filter output be inherently unpredictable - else FTL information transfer is demonstrated and the Bell analysis is no longer needed.

Well, in the model for EPR that I sketched out, based on @Mentz114's points, there are two different random processes involved:
  1. There is a random process for determining the initial photon polarization ##\lambda##.
  2. For each filter, if the photon has polarization ##\lambda##, and the filter has orientation ##\alpha##, then there is a random process for determining whether the photon with that polarization will pass that filter.
I think that the second process could be pseudo-random and it still wouldn't allow FTL communication. If both processes were pseudo-random, you could in theory communicate FTL.
 
  • #51
stevendaryl said:
Well, in the model for EPR that I sketched out, based on @Mentz114's points, there are two different random processes involved:
  1. There is a random process for determining the initial photon polarization ##\lambda##.
  2. For each filter, if the photon has polarization ##\lambda##, and the filter has orientation ##\alpha##, then there is a random process for determining whether the photon with that polarization will pass that filter.
I think that the second process could be pseudo-random and it still wouldn't allow FTL communication. If both processes were pseudo-random, you could in theory communicate FTL.

Or if it were possible to determine ##\lambda## without disturbing the photon (or its partner), then you could communicate FTL.
 
  • #52
Nugatory said:
@Aufbauwerk 2045 's original question may be more relevant to the cryptographers than the physicists; it is a very big deal if the bad guys can figure out the PRNG you're using for key generation.
I'd like to chime in on this... :smile:
Aufbauwerk 2045 said:
I am only allowed to examine the output. I am not allowed to examine the boxes themselves in any way. Looking only at the output, and using only mathematical tests for randomness, how can I distinguish between the so-called "truly random" and the so-called "pseudo-random" processes?
I like this question! :smile:

Without venturing into quantum mechanics, I think this can be distinguished if the mathematical/statistical capabilities of the machine analyzing the data from the two processes are good enough.

How could the analysis be done? By using Information Theory.

Let's say we have a sufficiently long message, preferably very long, for statistical reasons.
And let's say we encode this message two times, using two different random generators.
The first encoding is made by modifying the message using a "true" random quantum process.
The second encoding is made by modifying the message using a pseudorandom[1] process.

Then we can analyze the so-called information entropy[2] of the two encodings with respect to the original message.
If our hypothesis is correct that the quantum mechanical process is truly random and the pseudorandom generator "less random", this should be visible in the values of the information entropies of the two encoded messages. The QM message entropy should be at the maximum, and the pseudorandom message entropy should be less than the QM value.
(see e.g. Entropy as information content)

Edit:
On second thought, I may have been a little too quick here; it has been a long time since I used information theory. Maybe we could use mutual information as well... I have to think about it... :smile:

Edit 2:
Aufbauwerk 2045 said:
how can I distinguish between the so-called "truly random" and the so-called "pseudo-random" processes?
I just remembered a thing from my time studying cryptography... a pseudo-random process can be identified by analyzing sufficiently long output sequences of the process. A pseudo-random process will, at one point or another, repeat itself, that is, start over.
So, the quality of the pseudo-random process, the "randomness", if you like, can be judged by how long it takes for the process to start repeating itself.

Edit 3: Footnotes:

[1] With pseudorandom I mean a sequence generated by a digital machine only, like a computer.
[2] This is not physical entropy; it is a purely information-theoretical concept.
 
Last edited:
  • #53
stevendaryl said:
Well, in the model for EPR that I sketched out, based on @Mentz114's points, there are two different random processes involved:
  1. There is a random process for determining the initial photon polarization ##\lambda##.
  2. For each filter, if the photon has polarization ##\lambda##, and the filter has orientation ##\alpha##, then there is a random process for determining whether the photon with that polarization will pass that filter.
I think that the second process could be pseudo-random and it still wouldn't allow FTL communication. If both processes were pseudo-random, you could in theory communicate FTL.

Could you describe the communication method explicitly? ( I'm losing track of whether we are or are-not assuming instantaneous communication between entangled electrons and whether we are using "pseudo-random" as a synonym for "predictable". )
 
  • #54
Stephen Tashi said:
Could you describe the communication method explicitly? ( I'm losing track of whether we are or are-not assuming instantaneous communication between entangled electrons and whether we are using "pseudo-random" as a synonym for "predictable". )

The idea for a model that matches the statistics of EPR:
  1. The source of twin pairs randomly picks a polarization ##\lambda##. It can be any number between 0 (Vertical polarization in the x-y plane) and ##\pi/2## (Horizontal polarization). Two photons with these polarizations are created, one going to Alice and the other going to Bob.
  2. When Alice (or whoever measures the polarization first) measures her photon's polarization, she picks an orientation ##\alpha## for her filter. The way her filter works is: if the photon's polarization is aligned with the filter, the photon passes through. If its polarization is at right angles to that, it definitely does not pass through. For intermediate cases, it passes through with probability ##\cos^2(\lambda - \alpha)## (Malus' law).
  3. Immediately after Alice's measurement, Bob's photon's polarization switches to be ##\alpha## (if Alice's photon passed through her filter) or perpendicular to ##\alpha## (otherwise).
  4. Bob chooses an orientation ##\beta## and his photon passes or not following the same rules as Alice's photon in 2, except with the polarization determined in 3.
For clarification, step 3 is FTL. So this is a nonlocal model, not a violation of Bell's theorem.

The issue is whether the random choices made in steps 1, 2 and 4 could be pseudorandom (predictable) without allowing FTL communication.

What I said earlier was that I thought that if step 1 was truly random/unpredictable, then FTL communication would be impossible, even if steps 2 and 4 were predictable. Now I'm not sure about that. I don't immediately see a strategy for Alice to communicate with Bob.
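As a quick numerical check on that last point, here is a sketch of the model with all the choices made by ordinary pseudorandom draws (an illustrative encoding; the angles are arbitrary). Bob's marginal pass rate stays at 1/2 whatever ##\alpha## Alice picks, so nothing is signaled at the level of his local statistics:

Python:
import math
import random

def bob_passes(alpha, beta):
    """Steps 1-4 of the model above; returns Bob's outcome only."""
    lam = random.uniform(0, math.pi)                       # step 1
    a_pass = random.random() < math.cos(lam - alpha) ** 2  # step 2 (Malus' law)
    lam2 = alpha if a_pass else alpha + math.pi / 2        # step 3 (nonlocal)
    return random.random() < math.cos(lam2 - beta) ** 2    # step 4

beta, n = 0.3, 100_000
for alpha in (0.0, math.pi / 8, math.pi / 4):
    rate = sum(bob_passes(alpha, beta) for _ in range(n)) / n
    print(f"alpha = {alpha:.3f}: Bob's pass rate = {rate:.3f}")  # ~0.500 each time

Whether anything changes when these draws come from a generator that Alice and Bob can predict is exactly the question left open above.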
 
  • Like
Likes Mentz114
  • #55
stevendaryl said:
For clarification, step 3 is FTL. So this is a nonlocal model, not a violation of Bell's theorem.
I don't know how these EPR posts are relevant to the OP, but in your example there is only one λ and there is no way to describe what FTL/immediate means. Non-locality implies there is a unique value for λ at all times for every possible observer.

stevendaryl said:
I don't immediately see a strategy for Alice to communicate with Bob.
You need at least a triplet of particles sharing λ (which is physically impossible). Alice needs to have two particles on her side, and arrange for Bob to measure between her two measurements.
 
  • #56
Boing3000 said:
I don't know how these EPR posts are relevant to the OP

It's what prompted splitting this thread off from the original.

But in your example there is only one λ and there is no way to describe what FTL/immediate means. Non-locality implies there is a unique value for λ at all times for every possible observer.

I don't know what you mean. Alice's measurement instantaneously changes Bob's photon state, far away. That's FTL.

It's not FTL communication, though. For Alice to communicate FTL, there would have to be two choices that Alice can make on her end that affect the statistics seen by Bob.
 
  • #57
stevendaryl said:
I don't know what you mean. Alice's measurement instantaneously changes Bob's photon state, far away. That's FTL.
I mean there is only one state for the two photons. That's not FTL in the sense that there is no speed of change involved.

stevendaryl said:
It's not FTL communication, though. For Alice to communicate FTL, there would have to be two choices that Alice can make on her end that affect the statistics seen by Bob.
Precisely; if she had two photons on her side she could make two choices of polarization angle.
 
  • #58
stevendaryl said:
The issue is whether the random choices made in steps 1, 2 and 4 could be pseudorandom (predictable) without allowing FTL communication.
On the one hand, the idea that data generated by a pseudorandom process would be predictable is interesting. But if we use "predictable" as a synonym for "pseudorandom" then, in the context of communication, we have to answer the question "Predictable by who?".

The way I think of predicting a pseudorandom process is that I'd know some history of its outputs and then be able to predict the next output. That's no problem if I play the role of an omniscient observer. But does saying step 1 is a predictable process imply that Alice in step 2 can predict the value of ##\lambda## generated by step 1?
 
  • #59
Boing3000 said:
I mean there is only one state for the two photons. That's not FTL in the sense that there is no speed of change involved.

Sorry, I still don't know what you mean. Something happens to Alice's photon. It's either transmitted or absorbed. Then some time later, something happens to Bob's photon---its polarization changes to be either aligned with Alice's filter orientation, or perpendicular to it. If the change in Bob's photon takes place instantaneously, then that's an FTL interaction.
 
  • #60
Stephen Tashi said:
On the one hand, the idea that data generated by a pseudorandom process would be predictable is interesting. But if we use "predictable" as a synonym for "pseudorandom" then, in the context of communication, we have to answer the question "Predictable by who?".

I'm talking about predictable by Alice and Bob.
 
  • #61
stevendaryl said:
Then some time later, something happens to Bob's photon---
You keep writing "some time later", as if it were possible for observers to agree whose time (frame of reference) it is. The simpler situation is to consider that, as far as the entangled values are concerned, there is one photon. So whether Bob's photon has already gone through its filter before Alice's or not depends only on the path lengths.

stevendaryl said:
If the change in Bob's photon takes place instantaneously, then that's an FTL interaction.
FTL implies an "action at a distance" between two distinct space-time events. This also implies breaking Lorentz covariance, and you'll have to propose some way to compute this FTL speed (because infinite is not a number).
"Instantaneous" is different from FTL in the sense that those are the same event; they just happen to be non-local, that is, spanning space. That way all observers can agree on what instantaneous means.
 
  • #62
Boing3000 said:
the same event; they just happen to be non-local, that is, spanning space

This is a contradiction. An event is a single point in spacetime. It can't be non-local.
 
  • Like
Likes bhobba
  • #63
Boing3000 said:
You keep writing "some time later", as if it were possible for observers to agree whose time (frame of reference) it is.

I'm assuming that from the point of view of Alice's frame of reference, the change in Bob's photon happens instantaneously (or quicker than light could travel from Alice to Bob).

The simpler situation is to consider that, as far as the entangled values are concerned, there is one photon. So whether Bob's photon has already gone through its filter before Alice's or not depends only on the path lengths.

I'm not talking about entanglement. I'm talking about a hidden-variables explanation for EPR statistics.

FTL implies an "action at a distance" between two distinct space-time events. This also implies breaking Lorentz covariance

Yes, that's what I'm talking about.
 
  • #64
DennisN said:
How could the analysis be done? By using Information Theory.
DennisN said:
a pseudo-random process can be identified by analyzing sufficiently long output sequences of the process.

This was fun to think about, so I've thought some more, and this is how I think it could be done:

Setup

We have two processes which will function as number generators, each producing a long sequence of numbers on which we will later perform statistical and information-theoretical analyses. The first sequence is produced by a quantum mechanical process; the hypothesis Q is that this process is purely random due to quantum mechanics, so I call this sequence R. The second sequence is produced by a digital computer and is thus a pseudorandom process; I call this sequence P.

For simplicity I choose the base 2 (binary) for the two sequences, which means we don't have to convert back and forth between bases when we talk about them.

R can be produced by any quantum mechanical process that has two outcomes with equal probability 0.5.
As an example, assume we have a Stern–Gerlach apparatus which detects the spin of an atom in a vertical direction. If spin down is detected, this generates a 0 in the R sequence, and if spin up is detected, this instead generates a 1 in the R sequence. When n atoms have been analyzed we have an R sequence of length n.

P is produced by a reasonably good pseudorandom generator which outputs 0 or 1 with equal probability 0.5, and we generate n bits in the P sequence.

Analysis

1. Repetition detection.

An algorithm which detects repetitive sequences of length r << n analyzes the two sequences R and P. If our hypothesis Q is correct, the algorithm will eventually detect a repetitive sequence of length r in P but never detect one in R.
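A brute-force sketch of such a detector (the deliberately weak toy LCG and every parameter below are illustrative only):

Python:
import os

def smallest_period(seq):
    """Smallest p with seq[i] == seq[i+p] for all i, or None if none is visible."""
    for p in range(1, len(seq) // 2):
        if all(seq[i] == seq[i + p] for i in range(len(seq) - p)):
            return p
    return None

def lcg_bits(n, a=5, c=3, m=16, seed=1):
    """Low bits of a deliberately tiny LCG, guaranteed to cycle quickly."""
    out, x = [], seed
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x & 1)
    return out

P_seq = lcg_bits(200)
R_seq = [b & 1 for b in os.urandom(200)]  # OS entropy standing in for the quantum source
print(smallest_period(P_seq))  # a small period: P is caught
print(smallest_period(R_seq))  # None: nothing found, which by itself proves nothing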

2. Information entropy

The maximum information entropy [itex]H[/itex] of a random (or pseudorandom) variable of base 2 is 1, so

[itex]0 < H(R) ≤ 1[/itex]

and

[itex]0 < H(P) ≤ 1[/itex]

(see e.g. Information entropy - example)

Furthermore, if our hypothesis Q is correct, the information entropy of sequence P will be smaller than the information entropy of sequence R, so

[itex]H(P) < H(R)[/itex]

and together we get

[itex]0 < H(P) < H(R) ≤ 1[/itex]

Edit:

Also, if our hypothesis Q is correct, the information entropy of a sufficiently long sequence R should be very close to 1. More accurately, [itex]H(R)[/itex] should approach 1 when longer and longer sequences are analyzed.
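A plug-in estimate of [itex]H[/itex] along these lines (a sketch only, using block frequencies, which is far weaker than a full battery of randomness tests):

Python:
import math
import random
from collections import Counter

def entropy_per_bit(bits, block=1):
    """Plug-in Shannon entropy in bits per symbol, estimated from the
    frequencies of non-overlapping blocks of the given length."""
    chunks = [tuple(bits[i:i + block]) for i in range(0, len(bits) - block + 1, block)]
    n = len(chunks)
    return -sum((c / n) * math.log2(c / n) for c in Counter(chunks).values()) / block

bits = [random.getrandbits(1) for _ in range(100_000)]
print(entropy_per_bit(bits), entropy_per_bit(bits, block=8))  # both close to 1.0

The caveat from earlier in the thread applies: any decent pseudorandom generator also scores close to 1 at every feasible block length, so a high estimate is consistent with hypothesis Q but does not prove it.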
 
Last edited:
  • #65
PeterDonis said:
This is a contradiction. An event is a single point in spacetime. It can't be non-local.
There is no contradiction. A local event is different from a non-local event. All quantities are perfectly defined and Lorentz covariant.
Furthermore, that simple logic is vindicated by experiment, and does not need fuzzy/bizarre logic.
On the other hand, FTL and the need to connect distinct events does need new (unnecessary in Occam's sense) physics.

Don't you agree that non-local is by definition something that happens once (thus an event) but across a wide range of places?
 
  • #66
Boing3000 said:
There is no contradiction. A local event is different from a non-local event. All quantities are perfectly defined and Lorentz covariant.
Not a contradiction, it just makes no sense.
Boing3000 said:
Don't you agree that non-local is by definition something that happens once (thus an event) but across a wide range of places
This doesn't make sense either.
 
  • #67
Boing3000 said:
Don't you agree that non-local is by definition something that happens once (thus an event) but across a wide range of places?

It's not the way I would explain it.

In QM it's the cluster decomposition property:
https://www.physicsforums.com/threads/cluster-decomposition-in-qft.547574/

But what has this got to do with randomness - that's another issue.

I gave a subtle view before - I now think that may not have been the best approach in a B level thread. We have had discussions on this before and they tended to just meander, and were eventually shut down by the mentors. But the outcome is really the same - we do not know. It passes all the current tests we have for randomness. But deterministic processes exist that pass all those tests, and other more subtle views (eg BM) also explain it. So the answer is again: we do not know. If you wish to discuss EPR, locality etc, it should be in another thread. Now I do not like wielding a big stick, but I would ask contributors to consider whether a different thread is more appropriate for discussing such things.

One thing non-mentors may not know is that mentors generally do not take action without a discussion among the other mentors; however, a thread that goes off track is something the mentors generally agree is at least best split.

Thanks
Bill
 
Last edited:
  • #68
I am still thinking about this because it's fun, and I want to say I wrote my two previous posts before reading the entire thread in detail. I've seen that others have been thinking along the same lines previously: @Boing3000 (posts 5 and 32), @Stephen Tashi (posts 8 and 10) and of course @Nugatory in post 26. I'm sorry if I missed mentioning anyone.

Even though I think Bell tests and entanglement experiments are very interesting, I've bypassed commenting on them since as far as I know they are designed specifically to test locality and realism, and not the inherent randomness in quantum mechanical processes.
Let me be more specific about what I mean:

In Bell tests, streams of photon pairs are analyzed two by two, and when detected the photons are destroyed.
This means Bell tests are not measuring any random variable of one system/pair of particles only; they are measuring the statistics of many pairs of particles, where one pair of particles is not dependent upon any other pair. I would love to be corrected if I am wrong here. :smile:

And this means that my initial setup:
DennisN said:
As an example, assume we have a Stern–Gerlach apparatus which detects the spin of an atom in a vertical direction. If spin down is detected, this generates a 0 in the R sequence, and if spin up is detected, this instead generates a 1 in the R sequence. When n atoms have been analyzed we have an R sequence of length n.

is also flawed, since it involves measuring the spin of different atoms. But the spins of different atoms should ideally be completely independent of each other, so this will not be a good randomness test.

To do the randomness test, that is, to test the hypothesis that a quantum mechanical process is random, we need to make repeated measurements on the same particle/system. The generated sequence would then be the variation of a single random variable, which could then be analyzed statistically and information-theoretically.

One suggestion could be repeated measurements of the spin of one particle in various directions:
  1. Measure the spin in the vertical direction. Spin down will generate a 0 in the R sequence and spin up will generate a 1 in the R sequence. This will also leave the horizontal spin undetermined.
  2. Measure the spin in the horizontal direction. Spin left will generate a 0 in the R sequence and spin right will generate a 1 in the R sequence. This will also leave the vertical spin undetermined.
  3. Go to 1 and repeat this many times.
  4. Then do the analyses as I described in my previous post.

Maybe there's a better way to experimentally do what I am thinking of, and if so, I would love to hear it. :smile:
 
Last edited:
  • #69
bhobba said:
But the outcome is really the same - we do not know. It passes all the current tests we have for randomness. But deterministic processes exist that pass all those tests, and other more subtle views (eg BM) also explain it.

A basic problem with discussing how to detect the non-randomness of a pseudorandom process is that "pseudorandom" does not define a particular type of process. For example, if we assume a source of pseudorandom numbers is the output of a linear congruential random number algorithm, then "pseudorandom" is something specific. By contrast, if the meaning of "pseudorandom process" only says it is a process whose outputs obey all the statistical properties of the outputs of a random process, then we have defined the possibility of distinguishing it by statistics out of existence.

If @bhobba's above remark, "deterministic processes exist that pass all those tests", is taken as the definition of a pseudorandom process, then his conclusion, "But the outcome is really the same - we do not know", is correct, as far as statistical analysis goes.

So far in this thread, attempts to make progress involve attributing specific properties to a pseudorandom process.

1. Assume a pseudorandom process is predictable. The exact nature of this predictability has not been defined. Presumably, the general idea is that we reject the assumption that a pseudorandom process will pass all statistical tests for randomness - or perhaps some are using the strong assumption that the output of a pseudorandom process can be exactly predicted by someone who knows its history - or perhaps the definition of a pseudorandom process is that there exists an observer of the process who knows what its output will be, even if this knowledge is not shared with the rest of the world.

2. An embellishment of the above idea is considering thought experiments where random and pseudorandom processes are hooked up together. We try to conclude that the results imply specific physical consequences (e.g. faster-than-light communication). The paper linked by @atyy (Acin and Masanes, Certified randomness in quantum physics, https://arxiv.org/abs/1708.00265) has a title suggesting it accomplishes this. (I don't see how - can someone summarize?) The discussions in this thread of Bell-type experiments seem to have similar goals.

3. We can consider the physics of implementing deterministic versus random processes (as hinted in post #32). A completely naive question is "Which takes less energy to implement, a random process or a deterministic pseudorandom process that produces outputs in the same format?" If we were to straighten out someone who asked this question, what would we say?

The notion of a physically implemented pseudorandom process suggests the existence of hidden variables that describe its state. Discussions of hidden variables tend to focus on whether they exist, but there is also a question of where we find the machinery that generates or responds to them - and how much energy it takes to run that machinery.
 
Last edited:
  • Like
Likes DennisN
  • #70
DennisN said:
[..]
Even though I think Bell tests and entanglement experiments are very interesting, I've bypassed commenting on them since as far as I know they are designed specifically to test locality and realism, and not the inherent randomness in quantum mechanical processes.
Let me be more specific about what I mean:

I have to agree with that. Some people find the following fact interesting and significant: the raw data from a two-channel EPR experiment (i.e. using PBSs) is a list of Alice's and Bob's detector readings, each of which will be either (0,1) or (1,0). These may be grouped by the settings ##\alpha=(a,a')## and ##\beta=(b,b')##. Any counting of relative frequencies of detector clicks will give apparently random results, 50/50 odds.
But if we count the number of times Alice and Bob both got (1,0) or (0,1), there are huge deviations from 50/50 once we divide into the four settings categories. The deviations are large enough to be impossible without assuming entanglement.
So from one point of view the results contain no information - from another we find non-random results.
The reason is that when we do the coincidence counting we are extracting the only information there is in the singlet-state wave function and averaging out non-existent degrees of freedom.
For me this is the only connection between EPR and randomness - it's all random except that ##P(\text{coincidence}) = \cos^2(\alpha-\beta)##.
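A small sketch of that double bookkeeping (illustrative only), sampling outcome pairs directly from the quoted statistics and counting both ways:

Python:
import math
import random

def click_pair(alpha, beta):
    """One (Alice, Bob) outcome pair with P(coincidence) = cos^2(alpha - beta)
    and an unbiased 50/50 marginal on each side."""
    a = random.random() < 0.5
    same = random.random() < math.cos(alpha - beta) ** 2
    return a, a == same

n = 100_000
for alpha, beta in [(0.0, 0.0), (0.0, math.pi / 8), (0.0, math.pi / 4)]:
    pairs = [click_pair(alpha, beta) for _ in range(n)]
    singles = sum(a for a, _ in pairs) / n      # ~0.5: each side alone looks random
    coinc = sum(a == b for a, b in pairs) / n   # tracks cos^2(alpha - beta)
    print(f"({alpha:.2f}, {beta:.2f}): singles {singles:.3f}, coincidences {coinc:.3f}")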
 
