Is Randomness in Quantum Mechanics Truly Non-Deterministic?

  • Thread starter: Aufbauwerk 2045
  • Tags: QM, Randomness
Summary
The discussion revolves around the distinction between "non-deterministic" processes in quantum mechanics and "deterministic" processes such as those generated by computers. A thought experiment is presented involving two black boxes, one containing a natural emitter of random numbers and the other a pseudo-random number generator, with the challenge of distinguishing their outputs using only mathematical tests for randomness. The conversation references Bell's theorem, which shows that the predictions of quantum mechanics cannot be reproduced by any local deterministic model without invoking concepts like superdeterminism or faster-than-light influences. Participants ask for clear explanations and references to the established literature, emphasizing the ongoing debate about the nature of randomness and determinism in quantum phenomena.
  • #31
stevendaryl said:
It doesn't help to let them be functions of time.
OK, my point about the time dependency is stupid and irrelevant.
The important thing is that at the time the measurement is performed, the perfect correlation implies that the filter state had no effect.
I don't see how this is escapable: You always get the same result for the same orientation, so how could the filter state come into play?
Please read my original post. I specifically do not include the perfect correlation because the internal state IS irrelevant in that case.
Unless, as I said, all filters share exactly the same filter state, which I guess is possible, but seems like it defeats the purpose of attributing the randomness to details of the filter.
It is not necessary in the perfect correlation case!

In my original post I talk about the action of a linear polarizing filter. Why do you keep talking about Bell's theorem(s)?
How can it possibly impinge on my (proposed) scenario? Surely the way a polarizer works is independent of Bell's theorems?
 
  • #32
Stephen Tashi said:
Is there a theorem saying that an arbitrary type of random number generator must be periodic?
I think if such a theorem exists, it follows from the definition of "determinism". Since computer states are discrete, there is only a finite number of configurations/states, and any given state always evolves into the same next state.

Stephen Tashi said:
(To formulate such a theorem, we'd need a definition of "random number generator" that wasn't tied to a specific class of functions.)
Indeed. If I include in the "class of functions" a simple double pendulum (simulated in code, or why not in true analog form), the result will still depend on the precision (state size) and on the initial conditions (also known only to finite precision).

I have yet to encounter (in a thought experiment or in reality) some "true" randomness. The closest thing to a "pure" generator of random numbers would be the digits of pi. But that is clearly deterministic, and I think a theorem exists showing that, in pi's case, it cannot contain a run of zeros of arbitrary length. I suppose that a "pure" random source could...

But to stick to the OP's question, I think the only way to differentiate the boxes is to wait long enough and observe their behavior over time.
A chaotic algorithm must be sustained by external power; that's important. It has finite precision and will eventually repeat (or, if the state is programmed to grow by a bit every loop so that it never repeats, it will slow down progressively).
Chaotic "real" processes must also be sustained in some way, because of the second law of thermodynamics.

I have never heard of anything truly random, but I know what it would look like: a box that doesn't exchange entropy with its surroundings but just spews out numbers...
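As an aside, the finite-state point is easy to see in code: any generator whose whole state fits in a fixed number of bits must, by the pigeonhole principle, revisit a state and then cycle. A minimal sketch (a toy 16-bit linear congruential generator with constants chosen purely for illustration, not any particular library's PRNG):

```python
# Toy illustration: a PRNG with a finite state space must eventually cycle.
# The 16-bit LCG constants below are arbitrary choices for this sketch.

def lcg16(state):
    """One step of a toy 16-bit linear congruential generator."""
    return (25173 * state + 13849) % 65536

def find_period(seed):
    """Count steps until the generator first revisits a state, then return the cycle length."""
    seen = {}
    state, step = seed, 0
    while state not in seen:
        seen[state] = step
        state = lcg16(state)
        step += 1
    return step - seen[state]

print(find_period(1))  # some period <= 65536, guaranteed by the finite state space
```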
 
  • #33
Mentz114 said:
Please read my original post. I specifically do not include the perfect correlation because the internal state IS irrelevant in that case.

But in the perfect correlation case, there is still nondeterminism: You either have both photons passing the filter, or neither. So I'm just saying that details of the filter state can't be relevant to explain that nondeterminism.

I suppose you could say that there is a different mechanism in the correlated pair case and in the single photon case, but that's kind of weird. If you just look at one photon, there is no statistical difference between the case where it's just a random photon, and where it's one half of a pair.

In my original post I talk about the action of a linear polarizing filter. Why do you keep talking about Bell's theorem(s)?

It seems relevant because it shows that that explanation of how polarizing filters work can't possibly be correct. Unless you also suppose that the filter can figure out whether the photon passing through it comes from a correlated pair or not.

How can it possibly impinge on my (proposed) scenario? Surely the way a polarizer works is independent of Bell's theorems?

On the contrary, it seems that Bell's theorem rules out a particular model for how polarizing filters work.
 
  • #34
stevendaryl said:
On the contrary, it seems that Bell's theorem rules out a particular model for how polarizing filters work.

I suppose, as I said, you could say that a polarizing filter behaves one way when an unpaired photon enters it, and a different way when a photon from a twin pair enters it.
 
  • #35
stevendaryl said:
I suppose, as I said, you could say that a polarizing filter behaves one way when an unpaired photon enters it, and a different way when a photon from a twin pair enters it.
Please explain what you mean by this? Are you saying 'one could say' or 'that Mentz114 could say'?

If you are suggesting that one has to assume that polarizing filters behave differently in EPR than in single photon experiments - you are wrong.

If you think that is what I'm saying - you are still wrong.

The actual outcome of a filter experiment can depend on the initial state of the filter (and obviously on the alignment) without breaking any physical laws. I'm still baffled, but I appreciate the effort you've made to persuade me otherwise.
 
  • #36
Mentz114 said:
Please explain what you mean by this? Are you saying 'one could say' or 'that Mentz114 could say'?

If you are suggesting that one has to assume that polarizing filters behave differently in EPR than in single photon experiments - you are wrong.

I thought you were saying that the internal state of the filter is relevant for single photons but not for the perfect correlation case. That sure sounds to me like you're saying it behaves differently in the two cases.
 
  • #37
stevendaryl said:
I thought you were saying that the internal state of the filter is relevant for single photons but not for the perfect correlation case. That sure sounds to me like you're saying it behaves differently in the two cases.

Okay, I guess it works if you allow nonlocal interactions between the photons.
  • Assume that initially the photon pair have a polarization in some random direction ##\lambda##, but they both have the same random polarization.
  • Alice's photon reaches her filter, which is oriented at some angle ##\alpha##. We assume that whether the photon passes through is a deterministic function ##P_A(\alpha, \lambda, a)## of the orientation ##\alpha##, the polarization ##\lambda## and the filter state ##a##, but that the average of ##P_A(\alpha, \lambda, a)## over all possible ##a## gives ##\cos^2(\alpha - \lambda)## (Malus' law).
  • If Alice's photon passes through, then Bob's photon instantaneously changes its polarization to ##\alpha##.
  • If Alice's photon is absorbed, then Bob's photon instantaneously changes its polarization to ##\lambda' = \alpha + \frac{\pi}{2}##, perpendicular to ##\alpha##.
  • Bob's photon reaches his filter, which is oriented at angle ##\beta##. Whether his photon passes through is determined by another function, ##P_B(\beta, \lambda', b)##, whose average over all ##b## gives ##\cos^2(\beta - \lambda')##.
This model has the same statistics as predicted by QM.

So, I back off. If you have the instantaneous change of photon state, then this model works.
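For anyone who wants to check the claim numerically, here is a minimal Monte Carlo sketch of the nonlocal model above (my own sketch: ##\lambda## is drawn uniformly, and the dependence on the filter states ##a, b## is replaced by ordinary pseudo-random draws with the stated Malus-law averages):

```python
# Minimal Monte Carlo sketch of the nonlocal model described above.
# Assumptions: lambda uniform on [0, pi); filter-state dependence replaced by
# pseudo-random draws whose averages obey Malus' law.
import numpy as np

rng = np.random.default_rng(0)

def coincidence_rate(alpha, beta, n=200_000):
    """Estimate P(both photons pass) for filter angles alpha, beta."""
    lam = rng.uniform(0.0, np.pi, n)                 # shared hidden polarization
    alice_pass = rng.random(n) < np.cos(alpha - lam) ** 2
    # Nonlocal update: Bob's photon becomes alpha (pass) or alpha + pi/2 (absorb)
    lam_prime = np.where(alice_pass, alpha, alpha + np.pi / 2)
    bob_pass = rng.random(n) < np.cos(beta - lam_prime) ** 2
    return np.mean(alice_pass & bob_pass)

for alpha, beta in [(0.0, 0.0), (0.0, np.pi / 8), (0.0, np.pi / 4)]:
    qm = 0.5 * np.cos(alpha - beta) ** 2             # QM prediction
    print(f"a={alpha:.2f} b={beta:.2f}  model={coincidence_rate(alpha, beta):.3f}  QM={qm:.3f}")
```

The model column should agree with the QM column up to Monte Carlo noise.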
 
  • Like
Likes zonde
  • #38
stevendaryl said:
Okay, I guess it works if you allow nonlocal interactions between the photons.
This model has the same statistics as predicted by QM.
If you have the instantaneous change of photon state, then this model works.
Thanks.
I like your reasoning here because it shows that the mode of operation of the filter and its internal state do not affect the outcome of EPR, as long as all the filters are interchangeably identical (I hope this is not contentious).
(Because it is not possible to tell the difference between instantaneous inter-pair communication and other 'mechanisms' I always assume the former).
 
  • #39
Aufbauwerk 2045 said:
Consider this a layman's question. I am not an expert on deeper aspects of probability. I simply rely on Kolmogorov's axioms and the calculation methods I learned as a student. To me it's just a bit of mathematics and it makes perfect sense as mathematics.

When it comes to actually understanding nature, I find this whole topic of probability and determinism quite mysterious. I ask the following question, and would love to know if someone has a good answer. Perhaps there is a good reference to an answer specifically to the following question? I do mean specific, not a general discourse on probability.

Consider the following thought experiment. I am presented with two black boxes. I am told that one contains some natural emitter of random numbers. I am told that this process is therefore "non-deterministic." The other contains a small computer that is running a random number generator. I am told that this process is therefore "deterministic." I understand what is meant by "deterministic" because I understand how a random number generator program works. But I do not understand "non-deterministic." What does it mean? Does it mean there is no so-called "causal relationship?" Of course this means we must define "causality." Another riddle.

Continuing with the thought experiment, the output of each box is a sequence of numbers. My job is to determine which box contains the natural process and which contains the computer.

I am only allowed to examine the output. I am not allowed to examine the boxes themselves in any way. Looking only at the output, and using only mathematical tests for randomness, how can I distinguish between the so-called "truly random" and the so-called "pseudo-random" processes?

My understanding is that if there is no such mathematical test, which can distinguish between the two outputs, then we would naturally fall back on Occam's Razor, namely "do not multiply entities without necessity." I know how random number generators work. Why should I believe that whatever is going on in nature's black boxes is something other than this? In other words, why do we introduce this idea of a "non-deterministic" black box in nature? Can we even define "non-deterministic?"

Is there a good explanation of this in the scientific literature? Please provide the reference if possible. Thanks!

P.S. I took several QM courses at university some years ago, and this question never came up. Maybe it's different these days. Maybe it's somewhere in a popular textbook? Or would it be considered a fringe question?

In classical probability, we can interpret the formalism as something definitely really going on, but we don't know which case exactly is going on. In the bare quantum formalism, there is more than one possibility as to what the "something definite" is, which means it is not that definite. Mathematically, this corresponds to classical probability containing a structure called a simplex, which is absent in quantum probability.
Holevo, Statistical Structure of Quantum Theory https://books.google.com.sg/books/a...antum_Theory.html?id=CX4-064Rao8C&redir_esc=y
Bengtsson and Zyczkowski, Geometry of Quantum States https://books.google.com.sg/books?id=aA4vXMbuOTUC&source=gbs_navlinks_s

Can we add structure to quantum theory so that quantum theory becomes classical probability? In some cases, like Bohmian Mechanics, we know how to do this. Bell's theorem says that these hidden variables are non-local. These hidden variables would be like a hidden reality, so we can say they are "ontological". There are several valid interpretations of Bell's theorem, and some are more about "operational" senses of quantum mechanics than about "ontology". One of them says that if we believe that no one can communicate faster than light, then quantum mechanics can be used to certify true randomness in an operational sense. In short: if nature uses deterministic black boxes, the boxes are nonlocal, and if no one can access the nonlocality, then they are operationally random in a way that we know cannot be broken.
Acin and Masanes, Certified randomness in quantum physics, https://arxiv.org/abs/1708.00265
Wiseman and Cavalcanti, Causarum Investigatio and the Two Bell's Theorems of John Bell https://arxiv.org/abs/1503.06413
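To make the simplex remark concrete, here is a small sketch of my own (not taken from the books above): in classical probability every mixed state has a unique decomposition into the extreme points of the simplex, whereas the maximally mixed qubit state can be written as an equal mixture of ##|0\rangle, |1\rangle## or equally well of ##|+\rangle, |-\rangle##.

```python
# My own illustration: the maximally mixed qubit state has more than one
# convex decomposition into pure states, so the quantum state space is not a simplex.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
ketp = (ket0 + ket1) / np.sqrt(2)
ketm = (ket0 - ket1) / np.sqrt(2)

def proj(v):
    """Projector |v><v| onto the (normalized) state v."""
    return np.outer(v, v.conj())

rho_a = 0.5 * proj(ket0) + 0.5 * proj(ket1)   # equal mix of |0>, |1>
rho_b = 0.5 * proj(ketp) + 0.5 * proj(ketm)   # equal mix of |+>, |->

print(np.allclose(rho_a, rho_b))  # True: one density matrix, two different "stories"
```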
 
Last edited:
  • Like
Likes Aufbauwerk 2045
  • #40
atyy said:
In classical probability, we can interpret the formalism as something definitely really going on, but we don't know which case exactly is going on. In the bare quantum formalism, there is more than one possibility as to what the "something definite" is, which means it is not that definite. Mathematically, this corresponds to classical probability containing a structure called a simplex, which is absent in quantum probability.
Holevo, Statistical Structure of Quantum Theory https://books.google.com.sg/books/a...antum_Theory.html?id=CX4-064Rao8C&redir_esc=y
Bengtsson and Zyczkowski, Geometry of Quantum States https://books.google.com.sg/books?id=aA4vXMbuOTUC&source=gbs_navlinks_s

Can we add structure to quantum theory so that quantum theory becomes classical probability? In some cases, like Bohmian Mechanics, we know how to do this. Bell's theorem says that these hidden variables are non-local. These hidden variables would be like a hidden reality, so we can say they are "ontological". There are several valid interpretations of Bell's theorem, and some are more about "operational" senses of quantum mechanics than about "ontology". One of them says that if we believe that no one can communicate faster than light, then quantum mechanics can be used to certify true randomness in an operational sense. In short: if nature uses deterministic black boxes, the boxes are nonlocal, and if no one can access the nonlocality, then they are operationally random in a way that we know cannot be broken.
Acin and Masanes, Certified randomness in quantum physics, https://arxiv.org/abs/1708.00265
Wiseman and Cavalcanti, Causarum Investigatio and the Two Bell's Theorems of John Bell https://arxiv.org/abs/1503.06413

Thanks! I started reading the Acin and Masanes paper first. Maybe I will have comments or questions about it later on. This is not applicable to my work. It's just an area that has fascinated me for the last few years, and I am trying to learn more about it.
 
  • #41
Aufbauwerk 2045 said:
Consider this a layman's question. I am not an expert on deeper aspects of probability. I simply rely on Kolmogorov's axioms and the calculation methods I learned as a student. To me it's just a bit of mathematics and it makes perfect sense as mathematics.

Well, in that vein, i.e. as part of probability theory, see the following:
https://arxiv.org/abs/1402.6562

The 64 million dollar question of course is: why did nature choose a probability framework? Nobody knows. My best guess is that determinism is a subset of ordinary probability. So if you want to build a general model of some sort, you start out assuming it's probabilistic, because determinism is part of that anyway. Then you have the issues of the above paper - you would like to have continuous transformations between pure states so you can apply calculus, for example. But then you are led to QM. Ugly reality then rears its head in the form of the Kochen-Specker theorem, which says determinism is not really part of that probability model. So basically we don't really know - it just seems to be how nature is. We have some guesses like Bohemian Mechanics that restore determinism - but, due to the uncertainty relations of QM, in a rather strange way: it's deterministic, but since you can't know the initial conditions you can't predict exact outcomes - only probabilities. So we are back where we started in practical terms.

Thanks
Bill
 
  • Like
Likes Aufbauwerk 2045
  • #42
bhobba said:
We have some guesses like Bohemian Mechanics

Was that misspelling on purpose?

Is this the real life?
Is this just fantasy?
Caught in a landslide
No escape from reality...
 
  • Like
Likes DennisN, bhobba, Asymptotic and 2 others
  • #43
bhobba said:
Well, in that vein, i.e. as part of probability theory, see the following:
https://arxiv.org/abs/1402.6562

The 64 million dollar question of course is: why did nature choose a probability framework? Nobody knows. My best guess is that determinism is a subset of ordinary probability. So if you want to build a general model of some sort, you start out assuming it's probabilistic, because determinism is part of that anyway. Then you have the issues of the above paper - you would like to have continuous transformations between pure states so you can apply calculus, for example. But then you are led to QM. Ugly reality then rears its head in the form of the Kochen-Specker theorem, which says determinism is not really part of that probability model. So basically we don't really know - it just seems to be how nature is. We have some guesses like Bohemian Mechanics that restore determinism - but, due to the uncertainty relations of QM, in a rather strange way: it's deterministic, but since you can't know the initial conditions you can't predict exact outcomes - only probabilities. So we are back where we started in practical terms.

Thanks
Bill

Interesting. But I am not discussing matters of opinion or speculation any more on this forum. If I participate again, it will be to help students with math or physics problems in a way that hopefully will not cause any disputes. No opening for argument, I hope. 2+2=4, etc.

As for the question in this thread, I can't get involved any more. I find satisfaction in solving problems. I just solved one I have been working on for some time. No debate about the meaning of human words, no speculation, just imagination, logic, pure mathematics and programming. There is no doubt involved, and nothing humans can muck up with their disputes. Of course I am human and I include myself in that group of pathetic creatures. But my work is beautiful. Just equations, code, and the quiet whirring of my computer fan, and the beautiful flash of numbers on the screen. Beautiful. Related to physics? Perhaps. Not directly, but one never knows. But there is practical importance.

Perhaps one day we can solve deeper problems of determinism. My goals now are more modest. I love to ponder and speculate on the ultimate questions, but I know I can't solve them, and I think no one can, at this stage of human development. I think we need advanced AI in order to make the next great leap of understanding. I hope our human brains are adequate to develop advanced AI, using the primitive AI we have developed to date to help in that effort. That is our only hope. Even the greatest geniuses are inadequate if they rely on their pathetically limited intelligence.

I am not interested in discussion any more on any topic. I may read, but I will not engage. But if I do help students, something I have experience of as a former mathematics tutor and teaching assistant, then I will answer questions in that context.

Cheers.

:)
 
Last edited by a moderator:
  • Like
Likes Mentz114
  • #44
stevendaryl said:
Well, I gave my opinion several times already: there is no empirical way to distinguish pseudo-randomness from true randomness.
Hi stevendaryl:

I find this quite puzzling. If one can guess the algorithm used to produce the pseudo-random numbers then one can predict the sequence of pseudo-random numbers, but there is no way to predict the sequence of true random numbers. I am guessing you have some concept in mind related to "distinguish" that I am not understanding.

ADDED
I suppose that one might argue that the likelihood of guessing the algorithm is infinitesimally tiny, but it is not theoretically impossible. Does your concept of "distinguish" depend on this being "practically" impossible?

Regards,
Buzz
 
Last edited:
  • #45
Buzz Bloom said:
Hi stevendaryl:

I find this quite puzzling. If one can guess the algorithm used to produce the pseudo-random numbers then one can predict the sequence of pseudo-random numbers, but there is no way to predict the sequence of true random numbers. I am guessing you have some concept in mind related to "distinguish" that I am not understanding.

This was pointed out by @Nugatory: whether a random variable is pseudo-random or not is "semi-decidable". If it's pseudo-random, you can eventually figure that out (by finding the pattern). If it's not pseudo-random, then you will never know. You can never prove that it's not pseudo-random.
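A toy way to picture the "semi-decidable" point (my own sketch, using a hypothetical small generator): search over candidate seeds, and if some seed reproduces the observed outputs you have positively identified pseudo-randomness; if the search fails, nothing is settled, because the true generator might simply lie outside the class you searched.

```python
# Toy sketch of semi-decidability: finding a seed that reproduces the data
# proves pseudo-randomness; failing to find one proves nothing.

def toy_prng(seed, n):
    """Hypothetical 16-bit generator used only for this illustration."""
    out, state = [], seed
    for _ in range(n):
        state = (25173 * state + 13849) % 65536
        out.append(state & 1)          # emit one bit per step
    return out

def find_seed(observed_bits):
    """Brute-force search over all 16-bit seeds of the toy generator."""
    for seed in range(65536):
        if toy_prng(seed, len(observed_bits)) == observed_bits:
            return seed                # pattern found: definitely pseudo-random
    return None                        # no verdict: the source may still be pseudo-random

observed = toy_prng(12345, 64)         # pretend these bits came out of a black box
print(find_seed(observed))             # 12345 (or another seed producing the same bits)
```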
 
  • Like
Likes bhobba
  • #46
Hi stevendaryl:

Thanks for your reply. I apologize for carelessly missing the Nugatory post.

Regards,
Buzz
 
  • #47
stevendaryl said:
Any truly nondeterministic system that has only local evolution is empirically equivalent to a deterministic system in which the only randomness is from the initial conditions.
I would say that a "truly non-deterministic system" is inherently non-local. Either its output states are generated entirely locally or they are dependent on something not local. If according to physics there is a choice, then something makes that choice - local, non-local, or you split the universe.
 
  • Like
Likes akvadrako
  • #48
Mentz114 said:
I don't think Bell's theorem has anything to say about randomness.
The 100% correlation or anticorrelation case always has the filters set to the same alignment, so there is no randomness at all. Any other combination of settings appears to be random.
Hmmm. That's interesting.

Let's see what the Bell experiment looks like if things are not random: with two filters, each at three positions, you have 9 combinations. Then you have 18 random functions, 9 for each filter (A and B), one for every A/B configuration.

The problem is that if A can predict how B's filter will react, then A will be able to telegraph a message to B faster than the speed of light.

So the math in the Bell experiment works fine for both random and pseudo-random filter behavior. But the Bell experiment as a whole does require that the filter output be inherently unpredictable - otherwise FTL information transfer is demonstrated and the Bell analysis is no longer needed.
 
  • #49
.Scott said:
[..]
The problem is that if A can predict how B's filter will react, then A will be able to telegraph a message to B faster than the speed of light.

So the math in the Bell experiment works fine for both random and pseudo-random filter behavior. But the Bell experiment as a whole does require that the filter output be inherently unpredictable - otherwise FTL information transfer is demonstrated and the Bell analysis is no longer needed.
Exactly (if I understand you). Every time a filter acts, it must be an independent random event; otherwise the probabilities cannot be averaged out and the QM (singlet-state) statistics are not reproduced.
 
  • #50
.Scott said:
So the math in the Bell experiment works fine for both random and pseudo-random filter behavior. But the Bell experiment as a whole does require that the filter output be inherently unpredictable - otherwise FTL information transfer is demonstrated and the Bell analysis is no longer needed.

Well, in the model for EPR that I sketched out, based on @Mentz114's points, there are two different random processes involved:
  1. There is a random process for determining the initial photon polarization ##\lambda##.
  2. For each filter, if the photon has polarization ##\lambda##, and the filter has orientation ##\alpha##, then there is a random process for determining whether the photon with that polarization will pass that filter.
I think that the second process could be pseudo-random and it still wouldn't allow FTL communication. If both processes were pseudo-random, you could in theory communicate FTL.
 
  • #51
stevendaryl said:
Well, in the model for EPR that I sketched out, based on @Mentz114's points, there are two different random processes involved:
  1. There is a random process for determining the initial photon polarization ##\lambda##.
  2. For each filter, if the photon has polarization ##\lambda##, and the filter has orientation ##\alpha##, then there is a random process for determining whether the photon with that polarization will pass that filter.
I think that the second process could be pseudo-random and it still wouldn't allow FTL communication. If both processes were pseudo-random, you could in theory communicate FTL.

Or if it were possible to determine ##\lambda## without disturbing the photon (or its partner), then you could communicate FTL.
 
  • #52
Nugatory said:
@Aufbauwerk 2045 's original question may be more relevant to the cryptographers than the physicists; it is a very big deal if the bad guys can figure out the PRNG you're using for key generation.
I'd like to chime in on this... :smile:
Aufbauwerk 2045 said:
I am only allowed to examine the output. I am not allowed to examine the boxes themselves in any way. Looking only at the output, and using only mathematical tests for randomness, how can I distinguish between the so-called "truly random" and the so-called "pseudo-random" processes?
I like this question! :smile:

Without venturing into quantum mechanics, I think this can be distinguished if the mathematical/statistical capabilities of the machine analyzing the data from the two processes are good enough.

How could the analysis be done? By using information theory.

Let's say we have a sufficiently long message, preferably very long, for statistical reasons.
And let's say we encode this message two times, using two different random generators.
The first encoding is made by modifying the message using a "true" random quantum process.
The second encoding is made by modifying the message using a pseudorandom1 process.

Then we can analyze the so-called information entropy2 of the two encodings with respect to the original message.
If our hypothesis is that the quantum mechanical process is truly random and the pseudorandom generator is "less random", this should show up in the values of the information entropies of the two encoded messages. The QM message entropy should be at its maximum, and the pseudorandom message entropy should be less than the QM value.
(see e.g. Entropy as information content)

Edit:
On a second thought I may have been a little too quick here, it was a long time ago since I used information theory. Maybe we could use mutual information as well... I have to think about it... :smile:

Edit 2:
Aufbauwerk 2045 said:
how can I distinguish between the so-called "truly random" and the so-called "pseudo-random" processes?
I just remembered a thing from my time studying cryptography... a pseudo-random process can be identified by analyzing sufficiently long output sequences of the process. A pseudo-random process will, at one point or another, repeat itself, that is, start over.
So, the quality of the pseudo-random process, the "randomness", if you like, can be judged by how long it takes for the process to start repeating itself.

Edit 3: Footnotes:

1 With pseudorandom I mean a sequence generated by a digital machine only, like a computer.
2 This is not physical entropy, it is a purely information theoretical concept.
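For what it's worth, here is a minimal sketch of the kind of entropy estimate I have in mind (my own toy example, and note the caveat from Edit 1: a good pseudorandom generator will also score close to the maximum by this measure, so in practice this mainly exposes weak or short-period generators):

```python
# Toy empirical (Shannon) entropy estimate over bytes; my own illustration.
# Caveat: a good PRNG also scores close to 8 bits/byte, so this mainly
# exposes weak or strongly patterned sources, not pseudo-randomness as such.
import math
import os
import random
from collections import Counter

def bits_per_byte(data: bytes) -> float:
    """Empirical Shannon entropy of a byte sequence, in bits per byte (max 8)."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

print(bits_per_byte(os.urandom(100_000)))                    # OS randomness: ~8.0
print(bits_per_byte(random.randbytes(100_000)))              # Mersenne Twister: also ~8.0
print(bits_per_byte(bytes(i % 16 for i in range(100_000))))  # patterned data: 4.0
```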
 
Last edited:
  • #53
stevendaryl said:
Well, in the model for EPR that I sketched out, based on @Mentz114's points, there are two different random processes involved:
  1. There is a random process for determining the initial photon polarization ##\lambda##.
  2. For each filter, if the photon has polarization ##\lambda##, and the filter has orientation ##\alpha##, then there is a random process for determining whether the photon with that polarization will pass that filter.
I think that the second process could be pseudo-random and it still wouldn't allow FTL communication. If both processes were pseudo-random, you could in theory communicate FTL.

Could you describe the communication method explicitly? ( I'm losing track of whether we are or are-not assuming instantaneous communication between entangled electrons and whether we are using "pseudo-random" as a synonym for "predictable". )
 
  • #54
Stephen Tashi said:
Could you describe the communication method explicitly? ( I'm losing track of whether we are or are-not assuming instantaneous communication between entangled electrons and whether we are using "pseudo-random" as a synonym for "predictable". )

The idea for a model that matches the statistics of EPR:
  1. The source of twin pairs randomly picks a polarization ##\lambda##. It can be any number between 0 (Vertical polarization in the x-y plane) and ##\pi/2## (Horizontal polarization). Two photons with these polarizations are created, one going to Alice and the other going to Bob.
  2. When Alice (or whoever measures the polarization first) measures her photon's polarization, she picks an orientation ##\alpha## for her filter. The way her filter works is: if the photon's polarization is aligned with the filter, the photon passes through. If its polarization is at right angles to that, it definitely does not pass through. For intermediate cases, it passes through with probability ##\cos^2(\lambda - \alpha)## (Malus' law).
  3. Immediately after Alice's measurement, Bob's photon's polarization switches to be ##\alpha## (if Alice's photon passed through her filter) or perpendicular to ##\alpha## (otherwise).
  4. Bob chooses an orientation ##\beta## and his photon passes or not following the same rules as Alice's photon in 2, except with the polarization determined in 3.
For clarification, step 3 is FTL. So this is a nonlocal model, not a violation of Bell's theorem.

The issue is whether the random choices made in steps 1, 2 and 4 could be pseudorandom (predictable) without allowing FTL communication.

What I said earlier was that I thought that if step 1 was truly random/unpredictable, then FTL communication would be impossible, even if steps 2 and 4 were predictable. Now I'm not sure about that. I don't immediately see a strategy for Alice to communicate with Bob.
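As a sanity check on steps 1-4 (a quick symbolic verification of my own, averaging step 2 over a uniform ##\lambda## and applying step 3 before step 4), the model does reproduce the QM coincidence probability ##\frac{1}{2}\cos^2(\alpha - \beta)##:

```python
# Quick symbolic check (my own) that steps 1-4 reproduce the QM coincidence
# probability (1/2) cos^2(alpha - beta) when lambda is uniform on [0, pi).
import sympy as sp

alpha, beta, lam = sp.symbols('alpha beta lam', real=True)

# P(Alice passes | lambda) * P(Bob passes | his photon was reset to alpha)
p_both = sp.cos(alpha - lam)**2 * sp.cos(beta - alpha)**2

# Average over the uniformly distributed hidden polarization lambda
avg = sp.integrate(p_both, (lam, 0, sp.pi)) / sp.pi

print(sp.simplify(avg - sp.Rational(1, 2) * sp.cos(alpha - beta)**2))  # 0
```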
 
  • Like
Likes Mentz114
  • #55
stevendaryl said:
For clarification, step 3 is FTL. So this is a nonlocal model, not a violation of Bell's theorem.
I don't know how these EPR posts are relevant to the OP, but in your example there is only one λ and there is no way to describe what FTL/immediate means. Non-locality implies there is a unique value of λ at all times for every possible observer.

stevendaryl said:
I don't immediately see a strategy for Alice to communicate with Bob.
You need at least a triplet of particles sharing λ (which is physically impossible). Alice would need to have two particles on her side, and arrange for Bob to measure between her two measurements.
 
  • #56
Boing3000 said:
I don't know how these EPR posts are relevant to the OP

It's what prompted splitting this thread off from the original.

But in your example there is only one λ and there is no way to describe what FTL/immediate means. Non-locality implies there is a unique value of λ at all times for every possible observer.

I don't know what you mean. Alice's measurement instantaneously changes Bob's photon state, far away. That's FTL.

It's not FTL communication, though. For Alice to communicate FTL, there would have to be two choices that Alice can make on her end that affect the statistics seen by Bob.
 
  • #57
stevendaryl said:
I don't know what you mean. Alice's measurement instantaneously changes Bob's photon state, far away. That's FTL.
I mean there is only one state for the two photons. That's not FTL in the sense that there is no speed of change involved.

stevendaryl said:
It's not FTL communication, though. For Alice to communicate FTL, there would have to be two choices that Alice can make on her end that affect the statistics seen by Bob.
Precisely: if she had two photons on her side, she could make two choices of polarization angle.
 
  • #58
stevendaryl said:
The issue is whether the random choices made in steps 1, 2 and 4 could be pseudorandom (predictable) without allowing FTL communication.
On the one hand, the idea that data generated by a pseudorandom process would be predictable is interesting. But if we use "predictable" as a synonym for "pseudorandom" then, in the context of communication, we have to answer the question "Predictable by who?".

The way I think of predicting a pseudorandom process is that I'd know some history of its outputs and then be able to predict the next output. That's no problem if I play the role of an omniscient observer. But does saying step 1 is a predictable process imply that Alice, in step 2, can predict the value of ##\lambda## generated by step 1?
 
  • #59
Boing3000 said:
I mean there is only one state for the two photons. That's not FTL in the sense that there is no speed of change involved.

Sorry, I still don't know what you mean. Something happens to Alice's photon. It's either transmitted or absorbed. Then some time later, something happens to Bob's photon---its polarization changes to be either aligned with Alice's filter orientation, or perpendicular to it. If the change in Bob's photon takes place instantaneously, then that's an FTL interaction.
 
  • #60
Stephen Tashi said:
On the one hand, the idea that data generated by a pseudorandom process would be predictable is interesting. But if we use "predictable" as a synonym for "pseudorandom" then, in the context of communication, we have to answer the question "Predictable by who?".

I'm talking about predictable by Alice and Bob.
 
