Is Randomness in Quantum Mechanics Truly Non-Deterministic?

In summary, the conversation concerns determinism and non-determinism in nature, specifically in relation to probability and randomness. It explores a thought experiment involving two black boxes, one containing a natural emitter of random numbers and the other a computer running a random number generator, and asks whether any mathematical test can distinguish a truly random process from a pseudo-random one. The discussion also touches on superdeterminism and John Bell's theorem, and concludes with a detailed explanation of how determinism might account for the results of an EPR experiment, with the caveat that the two measurements must take place far enough apart to rule out faster-than-light influences.
  • #1
Aufbauwerk 2045
Consider this a layman's question. I am not an expert on deeper aspects of probability. I simply rely on Kolmogorov's axioms and the calculation methods I learned as a student. To me it's just a bit of mathematics and it makes perfect sense as mathematics.

When it comes to actually understanding nature, I find this whole topic of probability and determinism quite mysterious. I ask the following question, and would love to know if someone has a good answer. Perhaps there is a good reference to an answer specifically to the following question? I do mean specific, not a general discourse on probability.

Consider the following thought experiment. I am presented with two black boxes. I am told that one contains some natural emitter of random numbers. I am told that this process is therefore "non-deterministic." The other contains a small computer that is running a random number generator. I am told that this process is therefore "deterministic." I understand what is meant by "deterministic" because I understand how a random number generator program works. But I do not understand "non-deterministic." What does it mean? Does it mean there is no so-called "causal relationship?" Of course this means we must define "causality." Another riddle.

Continuing with the thought experiment, the output of each box is a sequence of numbers. My job is to determine which box contains the natural process and which contains the computer.

I am only allowed to examine the output. I am not allowed to examine the boxes themselves in any way. Looking only at the output, and using only mathematical tests for randomness, how can I distinguish between the so-called "truly random" and the so-called "pseudo-random" processes?
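To make the question concrete, here is a minimal sketch (an illustration I am adding, not part of the original post) of one such mathematical test, the "monobit" frequency test, applied both to a standard pseudo-random generator and to the operating system's entropy pool standing in for the "natural" box. Both pass, so this test alone cannot tell the boxes apart:

```python
import math
import os
import random

def monobit_statistic(bits):
    """Normalized excess of 1s over 0s. For genuinely uniform bits this is
    approximately standard normal, so values far from 0 indicate bias."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return s / math.sqrt(n)

n = 100_000
pseudo = [random.getrandbits(1) for _ in range(n)]   # deterministic PRNG (Mersenne Twister)
natural = [byte & 1 for byte in os.urandom(n)]       # OS entropy pool, the closest stand-in for the "natural" box

print(monobit_statistic(pseudo))    # small in magnitude: passes
print(monobit_statistic(natural))   # small in magnitude: also passes
```

Real test suites (e.g. the NIST battery) apply many such statistics, but a well-designed PRNG passes them all, which is exactly the point of the question.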

My understanding is that if there is no such mathematical test, which can distinguish between the two outputs, then we would naturally fall back on Occam's Razor, namely "do not multiply entities without necessity." I know how random number generators work. Why should I believe that whatever is going on in nature's black boxes is something other than this? In other words, why do we introduce this idea of a "non-deterministic" black box in nature? Can we even define "non-deterministic?"

Is there a good explanation of this in the scientific literature? Please provide the reference if possible. Thanks!

P.S. I took several QM courses at university some years ago, and this question never came up. Maybe it's different these days. Maybe it's somewhere in a popular textbook? Or would it be considered a fringe question?
 
Last edited by a moderator:
  • #2
Aufbauwerk 2045 said:
My understanding is that if there is no such mathematical test, which can distinguish between the two outputs, then we would naturally fall back on Occam's Razor, namely "do not multiply entities without necessity." I know how random number generators work. Why should I believe that whatever is going on in nature's black boxes is something other than this? In other words, why do we introduce this idea of a "non-deterministic" black box in nature? Can we even define "non-deterministic?"

Well, that's the question that John Bell was considering when he developed Bell's Theorem. He showed that the probabilistic predictions of QM in an experiment such as the EPR experiment cannot be reproduced by any "pseudo-random" process without one of two equally unappealing features:
  1. Faster-than-light influences
  2. Superdeterminism
It's a little hard to explain the difference between ordinary determinism and superdeterminism. I could try to explain it, if you're interested.
 
  • #3
stevendaryl said:
Well, that's the question that John Bell was considering when he developed Bell's Theorem. He showed that the probabilistic predictions of QM in an experiment such as the EPR experiment cannot be reproduced by any "pseudo-random" process without one of two equally unappealing features:
  1. Faster-than-light influences
  2. Superdeterminism
It's a little hard to explain the difference between ordinary determinism and superdeterminism. I could try to explain it, if you're interested.

Thanks for responding. Of course I am interested in having this explained. The reason I ask for references to standard literature is because I don't want the thread to be shut down for venturing into fringe areas. So I would be happy if you could explain what is the standard answer to my question, assuming there is one, along with references to the "gold standard" sort of literature that is accepted by PF. Thanks again.
 
  • #4
Aufbauwerk 2045 said:
Thanks for responding. Of course I am interested in having this explained. The reason I ask for references to standard literature is because I don't want the thread to be shut down for venturing into fringe areas. So I would be happy if you could explain what is the standard answer to my question, assuming there is one, along with references to the "gold standard" sort of literature that is accepted by PF. Thanks again.

In an EPR experiment, you have some source of electron-positron pairs. You have two experimenters, Alice and Bob. For each particle, Alice chooses a direction ##\vec{a}## and measures the spin of one particle relative to that direction. QM predicts that she always gets +1/2 (spin-up) or -1/2 (spin-down). Bob chooses a direction ##\vec{b}## and measures the spin of the other particle relative to that direction. For EPR pairs, the prediction is that if Alice and Bob measure relative to the same direction, they always get opposite results.

So the way to explain this using determinism is this: Assume that the results are deterministic functions of:
  1. Some unknown property ##\lambda## of the pair.
  2. The settings ##\vec{a}## and ##\vec{b}## of the two detectors.
So we assume that Alice's result is ##F_A(\vec{a}, \vec{b}, \lambda)## and Bob's result is ##F_B(\vec{a}, \vec{b}, \lambda)##.

(Because integers are easier to work with, let's scale the spin results to ##\pm 1## rather than ##\pm \frac{1}{2}##. So ##F_A## and ##F_B## are assumed to always return ##\pm 1##. Plus 1 means spin-up and minus 1 means spin-down, relative to a direction.)

Here's where the speed of light comes in: if the two measurements take place far enough apart then, assuming that no effects can travel faster than light, it should be impossible for Alice's settings to affect Bob's result, or vice-versa. In other words,

Alice's result is some function ##F_A(\vec{a}, \lambda)##. Bob's result is some function ##F_B(\vec{b}, \lambda)##. The requirement that if Alice and Bob choose the same direction, they always get opposite results implies that ##F_B(\vec{x},\lambda) = -F_A(\vec{x}, \lambda)## for any direction ##\vec{x}##. So we don't have independent functions.

So this deterministic model of the EPR correlations works like this:
  1. Initially, a pair is produced with some unknown property ##\lambda## that determines the spin measurements. Since we don't know what the value of ##\lambda## is (it's a hidden variable), we use a probability distribution ##P(\lambda)## to express our ignorance of the value of ##\lambda##.
  2. Later, Alice chooses a direction ##\vec{a}## to measure her particle's spin, and gets the result ##F_A(\vec{a}, \lambda)##
  3. Later, Bob chooses another direction ##\vec{b}## to measure his particle's spin, and gets the result ##F_B(\vec{b}, \lambda)##
Now, what Bell did next was to calculate a quantity that measures the relationship between Bob's result and Alice's result. Multiply the two results together, and you get another quantity that is always ##\pm 1##. If Alice always got the same result as Bob, this number would be +1. If Alice always got the opposite result, this number would be -1. We want to compute ##E(\vec{a}, \vec{b})##, the average value of the product of their results over many trials (which presumably means averaging over different values of ##\lambda##). Mathematically,

##E(\vec{a}, \vec{b}) = -\sum_\lambda P(\lambda) F_A(\vec{a}, \lambda) F_A(\vec{b}, \lambda)##

Bell showed that no choice of function ##F_A## and probability distribution ##P(\lambda)## makes ##E(\vec{a}, \vec{b})## equal the quantum-mechanical prediction, which is ##E(\vec{a}, \vec{b}) = -\vec{a} \cdot \vec{b}##.
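To see numerically how a local model of this form falls short, here is a hedged sketch (my illustration, not Bell's proof; it tests only one particular choice of ##F_A## and ##P(\lambda)##): take ##\lambda## to be a unit vector uniform on the sphere and ##F_A(\vec{a}, \lambda) = \mathrm{sign}(\vec{a} \cdot \lambda)##. The resulting correlation is linear in the angle between the settings rather than ##-\cos\theta##:

```python
import math
import random

def random_unit_vector(rng):
    """Uniform direction on the sphere via normalized Gaussian components."""
    while True:
        v = [rng.gauss(0.0, 1.0) for _ in range(3)]
        norm = math.sqrt(sum(x * x for x in v))
        if norm > 1e-12:
            return [x / norm for x in v]

def sign(x):
    return 1 if x >= 0 else -1

def E_model(theta, trials=200_000, seed=1):
    """Correlation E(a, b) for the local model F_A = sign(a . lam), F_B = -F_A."""
    rng = random.Random(seed)
    a = (1.0, 0.0, 0.0)
    b = (math.cos(theta), math.sin(theta), 0.0)
    total = 0
    for _ in range(trials):
        lam = random_unit_vector(rng)
        f_alice = sign(sum(x * y for x, y in zip(a, lam)))
        f_bob = -sign(sum(x * y for x, y in zip(b, lam)))
        total += f_alice * f_bob
    return total / trials

theta = math.pi / 4
print(E_model(theta))      # near -(1 - 2*theta/pi) = -0.5 for this model
print(-math.cos(theta))    # QM prediction: about -0.707
```

This model does reproduce the perfect anticorrelation at ##\theta = 0##; the discrepancy only shows up at intermediate angles, which is where Bell's inequality bites.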

A loophole is to allow ##F_A## to depend on both Alice's setting and Bob's setting. That's only possible if:
  1. Bob's setting affects Alice's result (which implies FTL influences)
  2. Bob's setting is not a free parameter, but is actually determined by ##\lambda##.
The second possibility is superdeterminism.
 
  • #5
Aufbauwerk 2045 said:
So I would be happy if you could explain what is the standard answer to my question, assuming there is one.
Half the question is answerable, because the box with a random number generator will "loop" once its entire internal state space has been visited. That space can be huge, but it is still finite and in principle measurable.
The people who claim to have given you a non-deterministic box are the ones who have to answer that question. As far as I know, some generators based on "quantum" randomness (like radioactive decay) also have a limited time-span. So it seems that even if "pure", the quantity of randomness is still quite precisely limited (which I consider to be a deterministic feature).
Let's not talk about the third box, with some people inside who are going to spew out numbers based on their "free will" :rolleyes:
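The "looping" point can be sketched with a deliberately tiny linear congruential generator (the parameters below are illustrative, chosen only to keep the state space small, not those of any real generator):

```python
def find_period(seed=1, a=21, c=7, m=64):
    """Iterate x -> (a*x + c) mod m until some state repeats.
    With only m possible states, a repeat is guaranteed within m steps."""
    x = seed
    seen = {}
    step = 0
    while x not in seen:
        seen[x] = step
        x = (a * x + c) % m
        step += 1
    return step - seen[x]

print(find_period())   # at most 64, since there are only 64 possible states
```

Real generators use a far larger modulus, so the period can vastly exceed any observation time, which is why "wait for the loop" is measurable only in principle.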
 
  • #6
Aufbauwerk 2045 said:
Consider this a layman's question. I am not an expert on deeper aspects of probability. I simply rely on Kolmogorov's axioms and the calculation methods I learned as a student. To me it's just a bit of mathematics and it makes perfect sense as mathematics.
[..]
I am only allowed to examine the output. I am not allowed to examine the boxes themselves in any way. Looking only at the output, and using only mathematical tests for randomness, how can I distinguish between the so-called "truly random" and the so-called "pseudo-random" processes?

My understanding is that if there is no such mathematical test, which can distinguish between the two outputs, then we would naturally fall back on Occam's Razor, namely "do not multiply entities without necessity." I know how random number generators work. Why should I believe that whatever is going on in nature's black boxes is something other than this? In other words, why do we introduce this idea of a "non-deterministic" black box in nature? Can we even define "non-deterministic?"

Is there a good explanation of this in the scientific literature? Please provide the reference if possible. Thanks!

P.S. I took several QM courses at university some years ago, and this question never came up. Maybe it's different these days. Maybe it's somewhere in a popular textbook? Or would it be considered a fringe question?
A practical example is Malus's law, which gives only the probability of light passing a linear polarizer. Even if we know the alignment of the polarizer and the alignment of the incoming light, we still cannot be certain whether the photon will pass (unless the alignments are parallel or orthogonal).
One explanation is that the path of the photon depends on the state of the birefringent material at the time of the interaction. This involves about ##10^9## crystal sites and their phonon population. This is not something we can know, so in this case one could argue that limited information forces us to use probability. The initial conditions then form a natural (FAPP) random number generator.
 
  • #7
Mentz114 said:
A practical example is Malus's law, which gives only the probability of light passing a linear polarizer. Even if we know the alignment of the polarizer and the alignment of the incoming light, we still cannot be certain whether the photon will pass (unless the alignments are parallel or orthogonal).
One explanation is that the path of the photon depends on the state of the birefringent material at the time of the interaction. This involves about ##10^9## crystal sites and their phonon population. This is not something we can know, so in this case one could argue that limited information forces us to use probability. The initial conditions then form a natural (FAPP) random number generator.

I know that this wasn't your point, but EPR shows that this explanation can't be correct (at least, not without making some highly implausible assumptions). If you create a correlated pair of photons, and one passes through one polarizing filter, then the other will definitely pass through the other filter (if their alignments are the same, if we disregard defects in the crystals). So if the explanation for the apparent randomness is the details of the state of the filter, then that would imply that every other filter in the universe with the same orientation must be the same in those details.
 
  • #8
stevendaryl said:
So the way to explain this using determinism is this: Assume that the results are deterministic functions of:
  1. Some unknown property ##\lambda## of the pair.
  2. The settings ##\vec{a}## and ##\vec{b}## of the two detectors.

However, the original post asks about distinguishing between a pseudo-random generator of values (e.g. for ##\lambda##) versus a "truly" random generator of values for the same variable.

The conclusion of the example is that there is no ##P(\lambda)## that explains the results of the experiment. Likewise the results can't be explained by a pseudo-random process that simulates ##P(\lambda)##. As far as ##\lambda## goes, the example rules out both of the situations that the original post wishes to distinguish.

Is there a variable other than ##\lambda## in the example that can be used to distinguish "true" randomness versus simulated randomness?
 
  • #9
Stephen Tashi said:
However, the original post asks about distinguishing between a pseudo-random generator of values (e.g. for ##\lambda##) versus a "truly" random generator of values for the same variable.

The conclusion of the example is that there is no ##P(\lambda)## that explains the results of the experiment. Likewise the results can't be explained by a pseudo-random process that simulates ##P(\lambda)##. As far as ##\lambda## goes, the example rules out both of the situations that the original post wishes to distinguish.

Is there a variable other than ##\lambda## in the example that can be used to distinguish "true" randomness versus simulated randomness?

I'm only saying that a pseudo-random process can't reproduce the nondeterminism of quantum mechanics.
 
  • #10
Boing3000 said:
Half the question is answerable, because the box with a random number generator will "loop" once its entire internal state space has been visited.

Yes, the usual type of random number generator (linear congruential) is periodic. Is there a theorem saying that an arbitrary type of random number generator must be periodic? (To formulate such a theorem, we'd need a definition of "random number generator" that wasn't tied to a specific class of functions.)
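For any generator with a finite internal state, a pigeonhole argument gives a qualified "yes": whatever the update function, the output must be eventually periodic (possibly after a non-repeating tail). A sketch, using an arbitrary nonlinear map chosen purely for illustration:

```python
def eventual_period(step, x0):
    """Follow a deterministic map on a finite state space until a state
    repeats; return (tail length, cycle length). Termination is guaranteed
    by pigeonhole: there are only finitely many states."""
    seen = {}
    x, i = x0, 0
    while x not in seen:
        seen[x] = i
        x = step(x)
        i += 1
    return seen[x], i - seen[x]

m = 1 << 16   # 65536 possible states
tail, cycle = eventual_period(lambda x: (x * x + 12345) % m, x0=42)
print(tail, cycle)   # the cycle always exists; its length depends on the map
```

The argument covers any function on a finite state space, which answers the "arbitrary class of functions" worry; a generator with unbounded state (say, a spigot emitting digits of π) escapes it, since its state never has to repeat.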
 
  • #11
stevendaryl said:
I'm only saying that a pseudo-random process can't reproduce the nondeterminism of quantum mechanics.

It seems to me that if people do "successful" computer simulations of QM then they have simulated its stochastic aspects using pseudo-random procedures. Perhaps there are phenomena that nobody can simulate on non-quantum computers.
 
  • #12
stevendaryl said:
I know that this wasn't your point, but EPR shows that this explanation can't be correct (at least, not without making some highly implausible assumptions). If you create a correlated pair of photons, and one passes through one polarizing filter, then the other will definitely pass through the other filter (if their alignments are the same, if we disregard defects in the crystals). So if the explanation for the apparent randomness is the details of the state of the filter, then that would imply that every other filter in the universe with the same orientation must be the same in those details.
I don't think Bell's theorem has anything to say about randomness.
The 100% correlation or anticorrelation case always has the filters set to the same alignment, so there is no randomness at all. Any other combination of settings appears random.
 
Last edited:
  • #13
Mentz114 said:
I don't think Bell's theorem has anything to say about randomness.

Maybe not, but it implies, as I said, that you can't explain the randomness in terms of the details of the polarizing filter, unless those details are the same in every polarizing filter.

The 100% correlation or anticorrelation case always has the filters set to the same alignment so there is no randomness at all.

That's not true. If Alice and Bob both have aligned filters, and they measure photons from a correlated EPR pair, then there is a 50% chance that both photons will pass, and there is a 50% chance that neither will pass. That's randomness.
 
  • #14
Stephen Tashi said:
It seems to me that if people do "successful" computer simulations of QM then they have simulated its stochastic aspects using pseudo-random procedures.

Inside a computer, there are no locality restrictions.

Bell's assumption is that Alice's result depends only on the random parameter ##\lambda## and her detector's setting, and Bob's result depends only on the random parameter ##\lambda## and his detector's setting. To simulate the spin-1/2 EPR case with a computer, you can do the following:
  1. Randomly assign Alice spin-up or spin-down, with 50% probability of each.
  2. Then assign Bob the same result with probability ##\sin^2(\frac{\theta}{2})## and the opposite result with probability ##\cos^2(\frac{\theta}{2})##, where ##\theta## is the angle between ##\vec{a}## and ##\vec{b}##.
This gives the quantum-mechanical statistics, but it violates "locality", since the parameter ##\theta## that determines Bob's result depends on Alice's setting.
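The two-step recipe above can be sketched directly (a toy simulation I am adding for illustration; ##\theta##, the angle between both detector settings, is exactly the nonlocal ingredient):

```python
import math
import random

def simulate_pair(theta, rng):
    """Steps 1 and 2 above: Alice's result is a fair coin flip; Bob's result
    is then set using theta, the angle between BOTH detector settings,
    which no purely local process at Bob's site could know."""
    alice = rng.choice((+1, -1))
    same = rng.random() < math.sin(theta / 2) ** 2
    bob = alice if same else -alice
    return alice, bob

def E(theta, trials=200_000, seed=1):
    """Average product of the two results over many simulated pairs."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        a, b = simulate_pair(theta, rng)
        total += a * b
    return total / trials

theta = math.pi / 3
print(E(theta))   # near -cos(theta) = -0.5, the QM prediction
```

Analytically, ##E = \sin^2(\frac{\theta}{2}) - \cos^2(\frac{\theta}{2}) = -\cos\theta##, so the simulation matches QM; the cheat is that `simulate_pair` receives both settings at once.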
 
  • #15
stevendaryl said:
Maybe not, but it implies, as I said, that you can't explain the randomness in terms of the details of the polarizing filter, unless those details are the same in every polarizing filter.
I don't see why not. The explanation in terms of initial conditions does not require a cosmic conspiracy.
 
  • #16
stevendaryl said:
This gives the quantum-mechanical statistics, but it violates "locality", since the parameter ##\theta## that determines Bob's result depends on Alice's setting.

Can we make locality relevant to the question in the original post?

Assuming locality, if we have several black boxes that are either all "truly" random generators of values or all pseudo-random generators of values, is there some way to hook them up experimentally and distinguish which is the case?
 
  • #17
Mentz114 said:
I don't see why not. The explanation in terms of initial conditions does not require a cosmic conspiracy.

You measure a photon's polarization relative to a filter, and you either get H or V. If you want to explain this in deterministic terms, then that means that the result is some function ##F(\alpha, \lambda, h)##, where ##\alpha## is the orientation of the filter, ##\lambda## is some unknown property of the photon, and ##h## is some unknown property of the filter.

Now, create a twin pair of correlated photons, and let Alice and Bob measure them using different filters. So let ##F_A(\alpha, \lambda, a)## be the function determining Alice's outcome as a function of her filter's orientation, ##\alpha##, the photon's state, ##\lambda##, and the state of her filter, ##a##. Let ##F_B(\beta, \lambda, b)## be the function determining Bob's outcome as a function of his orientation, ##\beta##, the photon state, ##\lambda##, and his filter state, ##b##.

Empirically, if ##\alpha = \beta##, then ##F_A(\alpha, \lambda, a) = F_B(\alpha, \lambda, b)##. When the filters are aligned, the filter states ##a## and ##b## make no difference. It's hard to see how the filter state could make a difference when the filters are not aligned, but not if they are aligned. (I actually think that it's possible to prove that it's impossible, but that would require more work than I'm willing to do right now.)
 
  • #18
Stephen Tashi said:
Can we make locality relevant to the question in the original post?

Assuming locality, if we have several black boxes that are either all "truly" random generators of values or all pseudo-random generators of values, is there some way to hook them up experimentally and distinguish which is the case?

I don't think so. Actually, I'll go further--the answer is definitely no. Any truly nondeterministic system that has only local evolution is empirically equivalent to a deterministic system in which the only randomness is from the initial conditions.
 
  • #19
stevendaryl said:
You measure a photon's polarization relative to a filter, and you either get H or V. If you want to explain this in deterministic terms, then that means that the result is some function ##F(\alpha, \lambda, h)##, where ##\alpha## is the orientation of the filter, ##\lambda## is some unknown property of the photon, and ##h## is some unknown property of the filter.

Now, create a twin pair of correlated photons, and let Alice and Bob measure them using different filters. So let ##F_A(\alpha, \lambda, a)## be the function determining Alice's outcome as a function of her filter's orientation, ##\alpha##, the photon's state, ##\lambda##, and the state of her filter, ##a##. Let ##F_B(\beta, \lambda, b)## be the function determining Bob's outcome as a function of his orientation, ##\beta##, the photon state, ##\lambda##, and his filter state, ##b##.

Empirically, if ##\alpha = \beta##, then ##F_A(\alpha, \lambda, a) = F_B(\alpha, \lambda, b)##. When the filters are aligned, the filter states ##a## and ##b## make no difference. It's hard to see how the filter state could make a difference when the filters are not aligned, but not if they are aligned. (I actually think that it's possible to prove that it's impossible, but that would require more work than I'm willing to do right now.)
I will study this but I don't see the relevance to anything but closing off a 'settings-conspiracy' loophole in an EPR experiment!
Nothing to do with Malus's law as I see it.

[Edit]
@stevendaryl
I apologise for the late edit. To clarify my original post and to point out a misunderstanding.
You are conflating 'setting' with 'state'. The settings ##\alpha, \beta## are determined by the experiment, but the state is the unknown internal state of the polarizer, ##\psi(t, ...)##, which changes continuously.
 
Last edited:
  • #20
stevendaryl said:
I don't think so. Actually, I'll go further--the answer is definitely no. Any truly nondeterministic system that has only local evolution is empirically equivalent to a deterministic system in which the only randomness is from the initial conditions.
My bold. I think this is what I said !
 
  • #21
If there is a way to use a QM scenario to distinguish between random versus pseudo-random black boxes, I think that time would be a critical aspect. We could specify that a random black box produces a value x(t) at time t and this value does not exist before that time. A pseudo-random box can be implemented by a device that generates the values x(t) in advance and stores them. Can we rig up an experiment that distinguishes between a physical quantity that has a "definite but unknown value" before time t and one that has no definite value until time t?
 
  • #22
Mentz114 said:
@stevendaryl
I apologise for the late edit. To clarify my original post and to point out a misunderstanding.
You are conflating 'setting' with 'state'. The settings ##\alpha, \beta## are determined by the experiment, but the state is the unknown internal state of the polarizer, ##\psi(t, ...)##, which changes continuously.

No, I specifically was not doing that. That's why my functions ##F_A(\alpha, \lambda, a)## and ##F_B(\beta, \lambda, b)## depend on three variables: ##a## and ##b## are the unknown internal states.

The fact that in an EPR experiment, ##F_A(\alpha, \lambda, a) = F_B(\alpha, \lambda, b)## strongly suggests that the function doesn't actually depend on the internal state.
 
Last edited:
  • #23
Stephen Tashi said:
If there is a way to use a QM scenario to distinguish between random versus pseudo-random black boxes.

There definitely is not. On the other hand, there is a way to distinguish between pseudo-random behavior with only local dynamics and the predictions of QM. The predictions of EPR don't rule out pseudo-randomness, but they rule out local pseudo-randomness.
 
  • #24
stevendaryl said:
The predictions of EPR don't rule out pseudo-randomness, but they rule out local pseudo-randomness.

Are you using EPR to refer to the experiment described in post #4? That experiment rules out both local pseudo-randomness and genuine randomness in setting the value of ##\lambda##.

An experiment relevant to the original post needs to involve quantities whose values can be set by either a random or a pseudo-random process.
 
  • #25
Stephen Tashi said:
An experiment relevant to the original post needs to involve quantities whose values can be set by either a random or a pseudo-random process.

Well, I gave my opinion several times already: there is no empirical way to distinguish pseudo-randomness from true randomness. But pseudo-randomness doesn't help explain the weirdness of QM.
 
  • #26
stevendaryl said:
Well, I gave my opinion several times already: there is no empirical way to distinguish pseudo-randomness from true randomness.
If presented with a black box generating a sequence of outputs, and asked the question "Is the output random (as opposed to pseudo-random)?", the possible answers are "no" or "we don't know". If we can demonstrate an algorithm that successfully predicts the next bit, we know it's pseudo-random; but inability to do that doesn't tell us anything except that we might not have figured it out yet. In particular, there is no way of excluding the possibility that the entire sequence of outputs is going to repeat from the beginning if we wait long enough.
But pseudo-randomness doesn't help explain the weirdness of QM.
Yes indeed. @Aufbauwerk 2045 's original question may be more relevant to the cryptographers than the physicists; it is a very big deal if the bad guys can figure out the PRNG you're using for key generation.
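One toy illustration of that one-sided asymmetry (the parameters below are hypothetical, and real cryptographic generators are designed to resist exactly this attack): if the box is a linear congruential generator with a known prime modulus, three consecutive outputs already determine its multiplier and increment, and hence every future output.

```python
M = 2_147_483_647   # a known prime modulus (an assumption of this toy attack)

def lcg_next(x, a=48271, c=12345, m=M):
    """The box's hidden update rule (unknown to the attacker except for m)."""
    return (a * x + c) % m

# The box emits four outputs; the attacker sees only the first three.
outputs = [123_456_789]
for _ in range(3):
    outputs.append(lcg_next(outputs[-1]))
x0, x1, x2, x3 = outputs

# Solve x1 = a*x0 + c and x2 = a*x1 + c (mod M) for a and c.
a_rec = ((x2 - x1) * pow(x1 - x0, -1, M)) % M
c_rec = (x1 - a_rec * x0) % M
prediction = (a_rec * x2 + c_rec) % M

print(prediction == x3)   # True: the next output is predicted exactly
```

Success here proves the box is pseudo-random; a failure would prove nothing, which mirrors the "no" versus "we don't know" asymmetry above.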
 
  • #27
Nugatory said:
If presented with a black box generating a sequence of outputs, and asked the question "Is the output random (as opposed to pseudo-random)?", the possible answers are "no" or "we don't know".

Right. There should be a name for the type of yes/no question that can only be definitively answered one way. "Are you asleep?" is another example.
 
  • #28
Mentz114 said:
I will study this but I don't see the relevance to anything but closing off a 'settings-conspiracy' loophole in an EPR experiment!
Nothing to do with Malus's law as I see it.
stevendaryl said:
No, I specifically was not doing that. That's why my functions ##F_A(\alpha, \lambda, a)## and ##F_B(\beta, \lambda, b)## depends on three variables: ##a## and ##b## are the unknown internal states.

The fact that in an EPR experiment, ##F_A(\alpha, \lambda, a) = F_B(\alpha, \lambda, b)## strongly suggests that the function doesn't actually depend on the internal state.
OK, I apologise for my misunderstanding.
I'm still unconvinced by your logic because it assumes that a(t) and b(t) are not functions of time and are independent. Thus I think your statement "The predictions of EPR don't rule out pseudo-randomness, but they rule out local pseudo-randomness." is wrong without more restrictions on your definitions of a and b.
 
  • #29
Mentz114 said:
I'm still unconvinced by your logic because it assumes that a(t) and b(t) are not functions of time

He made no such assumption. The values of ##a## and ##b## that go into the formulas are whatever the internal states of the polarizers are at the times the measurements are made. That in no way rules out the possibility that those internal states change with time; it just says that the states which are relevant to the measurements are the states at the times of the measurements.
 
  • #30
Mentz114 said:
OK, I apologise for my misunderstanding. I'm still unconvinced by your logic because it assumes that a(t) and b(t) are not functions of time and are independent.

It doesn't help to let them be functions of time. The important thing is that at the time the measurement is performed, the perfect correlation implies that the filter state had no effect. I don't see how this is escapable: You always get the same result for the same orientation, so how could the filter state come into play? Unless, as I said, all filters share exactly the same filter state, which I guess is possible, but seems like it defeats the purpose of attributing the randomness to details of the filter.
 
  • #31
stevendaryl said:
It doesn't help to let them be functions of time.
OK, my point about the time dependency is stupid and irrelevant.
The important thing is that at the time the measurement is performed, the perfect correlation implies that the filter state had no effect.
I don't see how this is escapable: You always get the same result for the same orientation, so how could the filter state come into play?
Please read my original post. I specifically do not include the perfect correlation because the internal state IS irrelevant in that case.
Unless, as I said, all filters share exactly the same filter state, which I guess is possible, but seems like it defeats the purpose of attributing the randomness to details of the filter.
It is not necessary in the perfect correlation case !

In my original post I talk about the action of a linear polarizing filter. Why do you keep talking about Bell's theorem(s)?
How can it possibly impinge on my (proposed) scenario? Surely the way a polarizer works is independent of Bell's theorems?
 
  • #32
Stephen Tashi said:
Is there a theorem saying that an arbitrary type of random number generator must be periodic?
I think if such a theorem exists, it follows from the definition of "determinism". Since computer states are discrete, there is a finite number of configurations/states, and any given state always evolves into the same following state.

Stephen Tashi said:
(To formulate such a theorem, we'd need a definition of "random number generator" that wasn't tied to a specific class of functions.)
Indeed. If I include in the "class of functions" a simple double pendulum (simulated in code, or why not in true analog form), the result will still depend on the precision (state size) and the initial conditions.

I have yet to encounter (in thought experiment or in reality) some "true" randomness. The closest "pure" object for generating random numbers would be the digits of π. But this is clearly deterministic, and as far as I know it is not even settled whether π's digits contain runs of zeros of arbitrary length. I suppose a "pure" random source could...

But to stick to the OP question, I think the only way to differentiate the boxes is to wait long enough and observe their behavior over time.
Chaotic algorithms must be sustained by external power; that's important. They have intrinsic precision and will repeat (or, if the state is programmed to grow by a bit every loop so as not to repeat, they will slow down progressively).
Chaotic "real" processes must also be sustained in some way, because of the second law of thermodynamics.

I have never heard of anything truly random, but I know what it would look like: a box that doesn't exchange entropy with its surroundings but just spews out numbers...
 
  • #33
Mentz114 said:
Please read my original post. I specifically do not include the perfect correlation because the internal state IS irrelevant in that case.

But in the perfect correlation case, there is still nondeterminism: You either have both photons passing the filter, or neither. So I'm just saying that details of the filter state can't be relevant to explain that nondeterminism.

I suppose you could say that there is a different mechanism in the correlated pair case and in the single photon case, but that's kind of weird. If you just look at one photon, there is no statistical difference between the case where it's just a random photon, and where it's one half of a pair.

In my original post I talk about the action of a linear polarizing filter. Why do you keep talking about Bell's theorem(s)?

It seems relevant to showing that that explanation for polarizing filters can't possibly be correct. Unless you also suppose that the filter can figure out whether the photon passing through it comes from a correlated pair, or not.

How can it possibly impinge on my (proposed) scenario? Surely the way a polarizer works is independent of Bell's theorems?

On the contrary, it seems that Bell's theorem rules out a particular model for how polarizing filters work.
 
  • #34
stevendaryl said:
On the contrary, it seems that Bell's theorem rules out a particular model for how polarizing filters work.

I suppose, as I said, you could say that a polarizing filter behaves one way when an unpaired photon enters it, and a different way when a photon from a twin pair enters it.
 
  • #35
stevendaryl said:
I suppose, as I said, you could say that a polarizing filter behaves one way when an unpaired photon enters it, and a different way when a photon from a twin pair enters it.
Please explain what you mean by this? Are you saying 'one could say' or 'that mentz114 could say'?

If you are suggesting that one has to assume that polarizing filters behave differently in EPR than in single photon experiments - you are wrong.

If you think that is what I'm saying - you are still wrong.

The actual outcome of a filter experiment can depend on the initial state of the filter (and obviously the alignment) without breaking any physical laws. I'm still baffled but I appreciate the effort you've made to persuade me otherwise.
 
