Does the Bell experiment prove non-locality and FTL information transfer?

The Bell experiment illustrates quantum entanglement, where two particles created together exhibit correlated properties, such as spin, regardless of the distance separating them. Observing one particle determines the state of the other, leading to interpretations involving non-locality or faster-than-light information transfer. However, some argue that the particles' states are predetermined at creation, and that the act of measurement merely reveals these states without invoking non-locality. The discussion also touches on Bell's theorem, which shows that local realism cannot reproduce all the predictions of quantum mechanics, implying that local hidden-variable theories are insufficient. Ultimately, the debate centers on the nature of reality and measurement in quantum mechanics, emphasizing that the observed correlations do not imply any mysterious influence or faster-than-light communication.
  • #91
DrChinese said:
I don't get it, heusdens. Your position - as expressed above - is as similar to the non-realistic position as I have seen. The HUP, if taken as representing accurately underlying mechanics, is non-realistic. So where is there a point of disagreement?

Those above weren't my words; it was a quote in my post (I had not paired the quote and unquote tags well, but I have re-edited it; see the actual post now).
 
  • #93
heusdens, did you see the question I posted right before you responded to DrChinese above? In your older post #76 which you just linked to, when you say "I don't see why this would not be a good explanation of the above mentioned experiment", are you claiming that you have a classical-style explanation for the experiment?
 
  • #94
JesseM said:
heusdens, did you see the question I posted right before you responded to DrChinese above? In your older post #76 which you just linked to, when you say "I don't see why this would not be a good explanation of the above mentioned experiment", are you claiming that you have a classical-style explanation for the experiment?

Whether you label the explanation "classical" or not is somewhat irrelevant to me; I just claimed that there appears to be a simple explanation for the results of that specific experiment.

I cannot claim it is the whole truth, since, as I already explained in that post, one would have to actually perform the experiment again with polarization filters of different widths.

Yet the question is: given what the experiment appears to be, is the explanation I gave approximately correct or not? And where can the explanation be shown to be wrong? For instance, the assumption that the gap width of the polarization filter has something to do with it could be demonstrated false if changing the gap width makes no difference to the outcomes.

So I don't make rigorous claims, especially since I never studied quantum mechanics much and never performed any experiments.

And yet another remark, which is my main issue here: the inappropriateness of formal concepts for speaking about the world. When we define objects, properties and values in the formal sense, we always, or most of the time, run into problems.
For instance, the outcome of a distinguishable observable can show us total randomness, and yet it can also show us strong correlation. Formal logic has problems with such features, since they break the law of the excluded middle: either a property of an object is A, or it is not A, but never both. Dialectics has no problem with that, however.

For dialectics there is the distinction between appearance and essence. What appears to be random and what is random are two separate notions, which need not coincide and can often be shown to contradict each other. So dialectics does not dictate that appearance and essence must coincide.

I will post a short primer on dialectics shortly; it might explain some aspects of dialectics, how it differs from formal logic, and how it might serve to give a broader picture of reality than formal logic can.
 
  • #95
heusdens said:
Whether you label the explanation "classical" or not is somewhat irrelevant to me; I just claimed that there appears to be a simple explanation for the results of that specific experiment.
But the problem is that your "explanation" does not refer to anything specifically quantum, so it must be incomplete if you agree that no classical setup could replicate the results (and by 'classical' I basically just mean a system which is in a definite measurable state at every moment)--how do you account for the fact that your experiment cannot be replicated using some classical source of randomness like dice?
heusdens said:
For dialectics there is the distinction between appearance and essence. What appears to be random and what is random are two separate notions, which need not coincide and can often be shown to contradict each other.
So dialectics does not dictate that appearance and essence must coincide.

I will post a short primer on dialectics shortly; it might explain some aspects of dialectics and how it differs from formal logic.
You seem to be discussing philosophical ideas rather than the sort of clearly-defined concepts used in physics, so maybe you should post your discussion on dialectics in the philosophy forum and just post a link here.
 
  • #96
JesseM said:
But the problem is that your "explanation" does not refer to anything specifically quantum, so it must be incomplete if you agree that no classical setup could replicate the results--how do you account for the fact that your experiment cannot be replicated using some classical source of randomness like dice? You seem to be discussing philosophical ideas rather than the sort of clearly-defined concepts used in physics, so maybe you should post your discussion on dialectics in the philosophy forum and just post a link here.

We are talking here about quantum mechanics in terms suitable for understanding. I have not yet read a clear and self-consistent, strictly physical explanation of the outcomes of such experiments; that is exactly why we discuss them here. If it were 'clear', why would we discuss it so extensively?

I don't hold to the idea that the quantum mechanical part of nature is totally separate from the classical part, although it is correct to say that the attributes we use in the macroscopic world cannot be applied in the quantum world.

Who says the experiment cannot be reproduced using what are called "classical" concepts? Although the question is of course what we mean by 'reproduce', since our experiment will likely involve a totally different setup, objects, properties and range of values, as well as different methods of detection.

If you can formalize that into something that is also applicable to the macroscopic world, then maybe we can proceed.
 
  • #97
Here is an example of a superposition of two macroscopic states:

http://www.anti-thesis.net/child.html

:wink:
 
  • #98
heusdens said:
We are talking here about quantum mechanics in terms suitable for understanding. I have not yet read a clear and self-consistent, strictly physical explanation of the outcomes of such experiments; that is exactly why we discuss them here. If it were 'clear', why would we discuss it so extensively?
What kind of "physical explanation" are you looking for, though? A verbal one? Physicists usually try to focus on finding mathematical models in which the verbal terms they used can be translated into elements of the model, rather than just relying on words alone.
heusdens said:
I don't hold to the idea that the quantum mechanical part of nature is totally separate from the classical part
I didn't say it was. My point about the impossibility of finding a "classical" explanation is just that we can come up with a model of what a classical universe would be like--one ruled by classical laws which obey locality such as Maxwell's laws of electromagnetism, for example--and show that in this imaginary universe, you could never reproduce the same results we see in EPR-type experiments. You could even perform a simulation of a classical universe on a computer if you wished. And remember the comment I made in parentheses in my last post--"by 'classical' I basically just mean a system which is in a definite measurable state at every moment". We don't have to assume the classical laws are the laws known to 19th century physicists, we could even invent some new "classical" laws which didn't resemble our universe at all, I'd still call them classical as long as the universe had a single well-defined state at each moment and the results of measurements followed from this state.
 
  • #99
JesseM said:
And remember the comment I made in parentheses in my last post--"by 'classical' I basically just mean a system which is in a definite measurable state at every moment". We don't have to assume the classical laws are the laws known to 19th century physicists, we could even invent some new "classical" laws which didn't resemble our universe at all, I'd still call them classical as long as the universe had a single well-defined state at each moment and the results of measurements followed from this state.

Bell's theorem isn't quite that strong. Bell's theorem does not apply to models where the coincidence of non-commuting measurement results is undefined. This is probably possible using non-standard notions of probability, and certainly possible with strong determinism.
 
  • #100
JesseM said:
What kind of "physical explanation" are you looking for, though? A verbal one? Physicists usually try to focus on finding mathematical models in which the verbal terms they used can be translated into elements of the model, rather than just relying on words alone.

Right, because that is how physics reflects on the world, using the language of mathematics. This has its merits, but also brings its own demerits.
I didn't say it was. My point about the impossibility of finding a "classical" explanation is just that we can come up with a model of what a classical universe would be like--one ruled by classical laws which obey locality such as Maxwell's laws of electromagnetism, for example--and show that in this imaginary universe, you could never reproduce the same results we see in EPR-type experiments. You could even perform a simulation of a classical universe on a computer if you wished. And remember the comment I made in parentheses in my last post--"by 'classical' I basically just mean a system which is in a definite measurable state at every moment". We don't have to assume the classical laws are the laws known to 19th century physicists, we could even invent some new "classical" laws which didn't resemble our universe at all, I'd still call them classical as long as the universe had a single well-defined state at each moment and the results of measurements followed from this state.

I certainly could not produce a universe to which the classical laws of physics apply, so I hope you will forgive me for not doing that.

The whole point, again, is: what do you define as a "well-defined state"?

A signal that is by all measures random cannot, by mere logic, also be non-random, yet this can easily be shown to be the case.

I just have to create a clear signal, split it into two correlated signals, and add random noise to both signals (the same random noise, that is, so that after subtraction it can be eliminated).

Each of the signals is now random. Yet I can manage to recreate the clear signal from the two random signals.

So how is this possible, even in the classical case, if I am to assume the signal was really random and could not contain any information at all?
How does random + random become a clear signal? It does not make sense using only formal descriptions (a random signal is something that by definition carries no information), yet it is the case.

This being the case does not make it a QM event, nor have I stated that it beats the Bell inequality.

However, if you give me a clear formal description of an experiment and setup which can in principle be made using only the "classical" aspects of physics, I am fairly sure one can show a deviation from the Bell inequality in the non-QM case too.

By the way, I think I almost described a rather classical analogy already. If we use the previously mentioned signal, use some device to spread the signals around some frequency peak, have both observers take the data, and give them the ability to "tune in" on different frequencies and add different random noise for different frequencies, we are able to show that:
- when both observers use the same frequency, they can extract a perfect signal;
- when their frequencies deviate somewhat, they get a less perfect signal;
- when their frequencies deviate beyond a certain range, all they can get is random noise.

(But if we really designed this thing using electronics, that would raise the objection that electronic devices are based on QM phenomena, not classical phenomena, and neither can I use a computer for the same reason. A setup using dice to create a stream of data works the same way in my example, although carrying it out in a real experiment would be rather dreadful...)
 
  • #101
NateTG said:
Bell's theorem isn't quite that strong. Bell's theorem does not apply to models where the coincidence of non-commuting measurement results is undefined.
What do you mean by "undefined"? What would happen when you measured the non-commuting observables? Can you give an example of the sort of model you're talking about?
NateTG said:
This is probably possible using non-standard notions of probability, and certainly possible with strong determinism.
Could you have a non-standard notion of probability that applies to a deterministic computer simulation, for example? If so, what aspects of the program's output would fail to obey the standard laws of probability?
 
  • #102
heusdens said:
We are talking here about quantum mechanics in terms suitable for understanding. I have not yet read a clear and self-consistent, strictly physical explanation of the outcomes of such experiments; that is exactly why we discuss them here. If it were 'clear', why would we discuss it so extensively?

I don't hold to the idea that the quantum mechanical part of nature is totally separate from the classical part, although it is correct to say that the attributes we use in the macroscopic world cannot be applied in the quantum world.

Who says the experiment cannot be reproduced using what are called "classical" concepts? Although the question is of course what we mean by 'reproduce', since our experiment will likely involve a totally different setup, objects, properties and range of values, as well as different methods of detection.

If you can formalize that into something that is also applicable to the macroscopic world, then maybe we can proceed.

Dear heusdens

I sent you a personal email on this point: ''I have not yet read a clear and self-consistent, strictly physical explanation of the outcomes of such experiments.'' Did you receive it?

It describes a classical class-room demonstration that refutes Bell's inequality. If it is not clear, just let me know off-thread.

In summary: It is ''Bellian realism'' that is false, not Einstein locality.

NB: It is no insult to Einstein to reject such naive realism from both Bell and EPR: EPR was written by Podolsky; Einstein did not see the submitted version and was not happy with it. (Who could be!? Since ''measurement'' perturbation of the pristine (''measured'') system was known from classical mechanics, and certainly in QM from its beginnings.)

Regards, wm
 
  • #103
heusdens said:
Notice that for a photon the polarization filter is a big gap, which gives some tolerance for photons whose polarization directions are not perfectly lined up. This means we find fewer correlated photons and introduce more randomness.

However, above a certain range the correlation gets completely lost; that is, we get total randomness.

I don't see why this would not be a good explanation of the above mentioned experiment. It could be tested by using different-sized polarization filters.

A different analogy would be to see this as the broadcasting of a radio signal between a receiver and a sender. If sender and receiver have the same frequency, we get a clear signal. If one or both have a different frequency, the signal gets less clear (more random), until at a certain frequency difference we get only noise (a totally random signal).

The above represents an improper understanding of polarization and how it is measured. The "gap" has nothing WHATSOEVER to do with the cos^2 relationship. In fact, such filters are sometimes used in Bell tests, but often they are not. Instead, polarizing beam splitters (birefringent prisms) are used, and these have no gap.

You seem to keep missing the idea that the setup is tuned initially so that "perfect" correlations are seen (0 degrees of difference). There is very little noise to speak of when the angles are the same. So this is not an issue in any sense. All reputable experiments have a small amount of noise and this is considered when the margin of error is calculated. This is on the order of magnitude of 50+ standard deviations in modern Bell tests.

If you like, I can provide several references for Bell tests to assist in seeing that it is not an experimental issue.

Bell test results agree with the basic predictions of ordinary QM, without the need for adding a non-local component. My conclusion is that the HUP is fundamental, and there is no observation independent layer of reality for quantum observables. (But that is merely one possible interpretation. MWI and BM are others.)
 
  • #104
wm said:
NB: It is no insult to Einstein to reject such naive realism from both Bell and EPR: EPR was written by Podolsky; Einstein did not see the submitted version and was not happy with it.

Einstein's view of realism was repeated by him long after EPR. He may not have liked the paper, but not because he thought it was erroneous. He was not happy with the focus on certain specifics of QM.

Einstein never disavowed "naive" realism: "I think that a particle must have a separate reality independent of the measurements. That is: an electron has spin, location and so forth even when it is not being measured." Personally, I don't think this is a naive statement. But that does not make it correct, either.
 
  • #105
wm said:
Dear heusdens

I sent you a personal email on this point: ''I have not yet read a clear and self-consistent, strictly physical explanation of the outcomes of such experiments.'' Did you receive it?

No, or I accidentally deleted it while removing all the spam that keeps filling my mailbox. Sorry if that happened.

It describes a classical class-room demonstration that refutes Bell's inequality. If it is not clear, just let me know off-thread.

Why don't you post it here so it can be discussed?

In summary: It is ''Bellian realism'' that is false, not Einstein locality.

NB: It is no insult to Einstein to reject such naive realism from both Bell and EPR: EPR was written by Podolsky; Einstein did not see the submitted version and was not happy with it. (Who could be!? Since ''measurement'' perturbation of the pristine (''measured'') system was known from classical mechanics, and certainly in QM from its beginnings.)

Regards, wm

 
  • #106
heusdens said:
The whole point here again, is what do you define as a "well defined state"?
I'd have to think about that more to have something really precise, but as a first try, you could say that every possible physical state of the universe can be represented by an element of some mathematically-defined set, with the universe's state corresponding to a single element at every moment. There is some mathematical function for the time-evolution that tells you what future states the universe will be in given its past states (the function could be either deterministic or stochastic). And knowing which element of the set corresponds to the current state gives you the maximum possible information about the physical universe; there are no other variables which could affect your measurements or your predictions about the future, and which could differ even between states that correspond to the same element of the set.
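To make that a bit more concrete, here is a toy sketch of such a "classical universe"; the state set, update rule and measurement function are invented purely for illustration, not taken from any physical theory:
Code:
# A toy "classical universe" in the sense defined above: the complete state
# is one element of a set (here, 8-bit tuples), evolution is a function of
# the state, and a measurement is just a function of the state too.

def evolve(state):
    # Deterministic time evolution: each cell becomes the XOR of its two
    # neighbours (an invented rule; any function of the state would do).
    n = len(state)
    return tuple(state[(i - 1) % n] ^ state[(i + 1) % n] for i in range(n))

def measure(state, i):
    # Nothing outside the state can influence a measurement outcome.
    return state[i]

state = (1, 0, 0, 1, 0, 1, 1, 0)   # the universe's current state
for t in range(3):
    print(t, state, "cell 2 reads:", measure(state, 2))
    state = evolve(state)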
heusdens said:
A signal that is by all measures random cannot, by mere logic, also be non-random, yet this can easily be shown to be the case.
But you're not really violating the laws of logic, you're just using the word "random" in a poorly-defined linguistic way, as opposed to a precise mathematical definition. Similarly, if I say "putting one rabbit and one rabbit together can give a lot more than two rabbits, since they could have babies", I'm not really violating the laws of arithmetic, I'm just using the phrase "putting one and one together" in a way that doesn't really correspond to addition in arithmetic.
heusdens said:
I just have to create a clear signal, split it into two correlated signals, and add random noise to both signals.

Each of the signals is now random. Yet I can manage to recreate the clear signal from the two random signals.
But how are you defining "random"? Without a clear definition this is just vague verbal reasoning. There might indeed be some definition where two strings of digits could individually be maximally random, but taken together they are not (I think this would be true if you define randomness in terms of algorithmic incompressibility, for example)--this need not be any more of a contradiction than the fact that two objects can individually weigh less than five pounds while together they weigh more than five pounds.
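As a rough illustration of that last point (using zlib compression as a crude stand-in for algorithmic compressibility; true Kolmogorov complexity is uncomputable, so this is only suggestive):
Code:
import os, zlib

# Two streams that are each (nearly) incompressible, while their
# combination compresses to about half of its length.
s = os.urandom(1000)   # first stream: 1000 random bytes
t = s                  # second stream: an exact copy of the first

print(len(zlib.compress(s)))      # slightly over 1000: random data won't compress
print(len(zlib.compress(t)))      # same
print(len(zlib.compress(s + t)))  # roughly 1000, not 2000: the pair has
                                  # structure that neither stream has alone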
heusdens said:
However, if you give me a clear formal description of an experiment and setup which can in principle be made using only the "classical" aspects of physics, I am fairly sure one can show a deviation from the Bell inequality in the non-QM case too.
Just think of the experiment with Alice and Bob at the computer monitors which I described earlier, and try to think of a way to get the Bell inequality violations in such a way that a third-party observer can see exactly how the trick is being done--what procedure the computer uses to decide whether to display a + or - depending on what letter Alice and Bob type, based on some sort of signal or object sent to each computer from a common source, with the signal or object not containing any "hidden" information which can't be seen by this third-party observer but which helps the computer decide its output. This description might be a little vague, but as long as you avoid having each computer measure one member of a pair of entangled particles in order to choose its answer, it should be sufficiently "classical" for the purposes of this discussion.
 
  • #107
DrChinese said:
The above represents an improper understanding of polarization and how it is measured. The "gap" has nothing WHATSOEVER to do with the cos^2 relationship. In fact, such filters are sometimes used in Bell tests, but often they are not. Instead, polarizing beam splitters (birefringent prisms) are used, and these have no gap.

Possibly, because I'm not too familiar with these kinds of things.

Perhaps I'm confusing this with other kinds of filters from other experiments.

You seem to keep missing the idea that the setup is tuned initially so that "perfect" correlations are seen (0 degrees of difference). There is very little noise to speak of when the angles are the same. So this is not an issue in any sense. All reputable experiments have a small amount of noise and this is considered when the margin of error is calculated. This is on the order of magnitude of 50+ standard deviations in modern Bell tests.

What makes you think I didn't catch that?

If you like, I can provide several references for Bell tests to assist in seeing that it is not an experimental issue.

You can post them; I would be glad to read them.

Bell test results agree with the basic predictions of ordinary QM, without the need for adding a non-local component. My conclusion is that the HUP is fundamental, and there is no observation independent layer of reality for quantum observables. (But that is merely one possible interpretation. MWI and BM are others.)

Sorry, what does HUP stand for?

I have in the course of this and other threads heard so many different explanations, each having their own demerits (and merits), but all rather one-sided and revealing only partial truths.

I do not exactly conform to any of these explanations because, for one thing, they basically shift the problem to some other department of physics without resolving it (in some of these explanations we would, for example, have to reconsider relativity, since its basic premises are undermined, or otherwise undermine other basic premises of our understanding of the world, or introduce arbitrary new phenomena, like many worlds, etc.).

So actually I am trying to figure things out in a more substantial way.

The references to dialectics were meant to give a clue to this, because dialectics tries to escape from the one-sidedness of these formal mathematical explanations and instead give a full picture of what can be regarded as truth.

[ Perhaps not everyone is happy with that, because dialectics is not specifically related to quantum physics and such discussions are meant to occur in the philosophy forums; yet most of such threads are rather worthless, since most topics there are rather unconcrete. ]

One thing is clear: in regard to dialectics, we can distinguish between appearance and essence. Formal logic does not make that distinction and therefore ends up in contradictions; dialectics does not insist that the appearance of something coincide with its essence.
 
  • #108
JesseM said:
I'd have to think about that more to have something really precise, but as a first try, you could say that every possible physical state of the universe can be represented by an element of some mathematically-defined set, with the universe's state corresponding to a single element at every moment. There is some mathematical function for the time-evolution that tells you what future states the universe will be in given its past states (the function could be either deterministic or stochastic). And knowing which element of the set corresponds to the current state gives you the maximum possible information about the physical universe; there are no other variables which could affect your measurements or your predictions about the future, and which could differ even between states that correspond to the same element of the set.

But you're not really violating the laws of logic, you're just using the word "random" in a poorly-defined linguistic way, as opposed to a precise mathematical definition. Similarly, if I say "putting one rabbit and one rabbit together can give a lot more than two rabbits, since they could have babies", I'm not really violating the laws of arithmetic, I'm just using the phrase "putting one and one together" in a way that doesn't really correspond to addition in arithmetic.

But how are you defining "random"? Without a clear definition this is just vague verbal reasoning. There might indeed be some definition where two strings of digits could individually be maximally random, but taken together they are not (I think this would be true if you define randomness in terms of algorithmic incompressibility, for example)--this need not be any more of a contradiction than the fact that two objects can individually weigh less than five pounds while together they weigh more than five pounds.

Just think of the experiment with Alice and Bob at the computer monitors which I described earlier, and try to think of a way to get the Bell inequality violations in such a way that a third-party observer can see exactly how the trick is being done--what procedure the computer uses to decide whether to display a + or - depending on what letter Alice and Bob type, based on some sort of signal or object sent to each computer from a common source, with the signal or object not containing any "hidden" information which can't be seen by this third-party observer but which helps the computer decide its output. This description might be a little vague, but as long as you avoid having each computer measure one member of a pair of entangled particles in order to choose its answer, it should be sufficiently "classical" for the purposes of this discussion.

I am not an expert in information science, but surely "random" has a precise mathematical definition by which it can be judged whether a stream of data is random or not.
Part of that definition will of course entail that from any part of the data stream we are not able to tell what data will come next, nor can we discover any meaningful pattern in the data.

Now, supposing this definition: if I use such a random stream of data (the same random data) for two signals and add to one of them a non-random stream of data, both data streams individually are still random, although they DO have a correlation.

That is the sort of correlation we are in fact looking for.

Just as a small example: I throw a die and note the outcome every time.
From that I create two streams of data. Onto one stream of data I add a message, encoded in some form [ for example, I could encode the message or signal as a stream of numbers in base 6, add that to the random values (modulo 6), and so produce another random signal ].

The resulting data stream remains random, since by no means do we know the random stream of data, and we can't detect the embedded data from either stream alone.
Yet we know from how we created these two data streams that they do have a correlation. We only have to subtract each value of one stream from the other to get back the data that was embedded in the stream.
This of course only works because we use the *same* random stream for *both* signals.
If this correlation were lost (for instance, if we used different random streams for the data signals), we would not be able to extract the original stream of data.

Now in this case the resulting two streams are totally correlated, because we use the exact same random stream. But we can of course create other streams of data by using different (independent) streams of random data, in such a way that the original (meaningful) stream of data gets more and more lost.

Whether we get the signal from both data streams would then depend, for instance, on some setting at the end of the data stream where we measure the output.

So my thesis is that in such a way the QM correlations might be reproduced, just by using random data streams which contain correlated data (to be discovered only by combining the two signals), and whose correlation depends on some setting at both sides of the measuring device.

I know the above is a rather loosely defined "system" and is not written in strong mathematical terms, but I hope you get the picture.
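To make the picture a bit more concrete, here is a minimal sketch of the construction (the message digits are made up, and I use 0-5 where real dice give 1-6):
Code:
import random

# The dice construction above: a base-6 message masked with a shared random
# stream (in effect a one-time pad). Each resulting stream is individually
# random, yet subtracting one from the other (mod 6) recovers the message.
message = [3, 1, 4, 1, 5, 2, 0, 5]                     # meaningful data, digits 0-5
pad = [random.randrange(6) for _ in message]           # the recorded dice throws

stream1 = pad                                          # first stream: the raw pad
stream2 = [(m + r) % 6 for m, r in zip(message, pad)]  # second: message + pad, mod 6

recovered = [(s2 - s1) % 6 for s1, s2 in zip(stream1, stream2)]
assert recovered == message   # the correlation is visible only by combining both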
 
  • #109
JesseM said:
What do you mean by "undefined"? What would happen when you measured the non-commuting observables?
Undefined is like \frac{0}{0} or maybe like 0^0 or \lim_{x \rightarrow 0} \sin{\frac{1}{x}}.

Can you give an example of the sort of model you're talking about?
Strong determinism is present, for example, in Bohmian mechanics in a universe with a zero-diameter big bang, or in MWI's branch-both-ways.
Something like “Deterministic Model of Spin and Statistics”, Physical Review D 27, 2316-2326 (1983), http://edelstein.huji.ac.il/staff/pitowsky/papers/Paper%2004.pdf
(This is certainly not mainstream, but it is mathematically sound.)
Even DrChinese's 'negative probabilities' apply (although I expect that that approach will run into some problems if it's taken further).

Could you have a non-standard notion of probability that applies to a deterministic computer simulation, for example? If so, what aspects of the program's output would fail to obey the standard laws of probability?

There is no terminating deterministic Turing machine that has any states with undefined probabilities.

I'll elaborate a little:

Kolmogorov probability requires that if A has a probability of occurring, and B has a probability of occurring, then (A and B) must also have a probability (possibly 0) of occurring.

Now, if A and B are commuting observables, then this probability can be experimentally determined, so this notion holds in any classical setting where measurements are non-perturbing and thus always commute.

However, in the quantum setting A and B may not commute, so in order for the usual notion of probability to apply it is necessary to assume that the expression (A and B) has a well-defined probability.

In order to construct the inequality, Bell's theorem adds and subtracts expressions with untestable probabilities. Without the assumption that these scenarios have well-defined probabilities, it's like 'simplifying' \frac{0}{0}-\frac{0}{0} to 0.
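To spell out where that assumption enters, here is a sketch of the standard CHSH-type algebra (notation mine; all outcomes are valued \pm 1). For a given hidden state \lambda, if the four values A(a,\lambda), A(a',\lambda), B(b,\lambda), B(b',\lambda) are all defined on the same \lambda, then

A(a,\lambda)\left[B(b,\lambda)+B(b',\lambda)\right] + A(a',\lambda)\left[B(b,\lambda)-B(b',\lambda)\right] = \pm 2

because one bracket equals \pm 2 while the other equals 0. Averaging over \lambda then yields

\left|E(a,b) + E(a,b') + E(a',b) - E(a',b')\right| \leq 2

Remove the assumption that all four values are defined on the same \lambda, and the left-hand side of the first line becomes exactly the kind of \frac{0}{0}-\frac{0}{0} expression above.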
 
  • #110
heusdens said:
I am not an expert in information science, but surely "random" has a precise mathematical definition by which it can be judged whether a stream of data is random or not.
I think there are various definitions, but like I said, even if two 10-digit strings are maximally random for their length, there's no reason the 20-digit string created by combining them would also have to be maximally random for its length.
heusdens said:
Now, supposing this definition: if I use such a random stream of data (the same random data) for two signals and add to one of them a non-random stream of data
What do you mean by "adding" two streams, one random and the other nonrandom? If one stream was 10010101101 and the other was 11111111111, what would the sum be?
heusdens said:
both data streams individually are still random, although they DO have a correlation.

That is the sort of correlation we are in fact looking for.
Is it? Please state a particular form of the Bell inequality, and show how your example violates it--I promise, you won't be able to get any such violation unless the streams are generated using entangled particles.

Alternately, you could explain how this idea of adding random and nonrandom data streams can be used to reproduce the results seen in the experiment with the computer monitors I brought up earlier in this post, where whenever Alice and Bob type the same letter they always get opposite responses from their computers, and yet one or both of these inequalities are violated:
* Number(Alice types A, gets +; Bob types B, gets +) plus Number(Alice types B, gets +; Bob types C, gets +) is greater than or equal to Number(Alice types A, gets +; Bob types C, gets +).

* when Alice and Bob pick different letters, the probability of them getting opposite results (one sees a + and the other sees a -) must be greater than or equal to 1/3.
(If you'd like some additional explanation of where these inequalities come from I can provide it.) When I asked you about this before, you did say "I am about sure one can show a deviation from the Bell Inequality in the non-QM case too." Well, are you willing to try to come up with a specific example?
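In the meantime, here is a quick brute-force check of those two inequalities against every local deterministic strategy (a sketch: modelling a strategy as a predetermined triple of ±1 answers is exactly the hidden-variable assumption, and any probabilistic local strategy is just a mixture of these eight):
Code:
from itertools import product

# Enumerate every local deterministic strategy for the experiment above.
# Anticorrelation at equal letters forces Bob's predetermined answers to be
# the negation of Alice's, so a strategy is just Alice's triple for A, B, C.
for alice in product([+1, -1], repeat=3):
    bob = tuple(-x for x in alice)        # opposite result at the same letter

    # Inequality 2: P(opposite results | different letters) >= 1/3.
    pairs = [(i, j) for i in range(3) for j in range(3) if i != j]
    n_opp = sum(alice[i] != bob[j] for i, j in pairs)
    assert 3 * n_opp >= len(pairs)

    # Inequality 1, checked pair by pair:
    # [Alice A+, Bob B+] + [Alice B+, Bob C+] >= [Alice A+, Bob C+].
    n_ab = (alice[0] == +1) and (bob[1] == +1)
    n_bc = (alice[1] == +1) and (bob[2] == +1)
    n_ac = (alice[0] == +1) and (bob[2] == +1)
    assert n_ab + n_bc >= n_ac

print("both inequalities hold for all 8 deterministic strategies")
Violating either inequality therefore requires something outside this family of strategies.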
heusdens said:
Just as a small example: I throw a die and note the outcome every time.
From that I create two streams of data. Onto one stream of data I add a message, encoded in some form. The resulting data stream remains random, since by no means do we know the random stream of data, and we can't detect the embedded data from it.
Yet we know from how we created these two data streams that they do have a correlation. We only have to subtract each value of one stream from the other to get back the data that was embedded in the stream.
This of course only works because we use the *same* random stream for *both* signals.
If this correlation were lost, we would not be able to extract the original stream of data.
Again, what does this have to do with the Bell inequalities? The Bell inequalities are all specific quantitative statements about the number of measurements with some outcomes vs. some other outcomes, not just some broad statement about the measurements being "correlated" when they measure along the same axis and "uncorrelated" along another. Again, would it help to go over the specific reasoning behind the two inequalities I mentioned above?
heusdens said:
So, my thesis is, that in such a way the QM correlations might be reproduced, just by using random data streams which contain correlated data (to be discovered only by combining the two signals), and which correlation is dependend on some setting at both sides of the measuring device.
OK, so to have a "non-quantum" case let's just say Alice and Bob are both being sent a stream of signals which their computers are using as a basis for deciding whether to display a + or - each time they type one of the three letters, and that I am a third-party observer who sees every digit of both streams, how the streams are being generated, and what algorithm the computer uses to choose its output based on its input from the streams. In this case I promise you it will be impossible to reproduce the results described, where on each trial where they both type the same letter, they always get opposite symbols on their display, yet one or both of the inequalities I gave above is violated. If you think this is wrong, can you try to give the specifics of a counterexample?
 
  • #112
DrChinese said:
Einstein's view of realism was repeated by him long after EPR. He may not have liked the paper, but not because he thought it was erroneous. He was not happy with the focus on certain specifics of QM.

Einstein never disavowed "naive" realism: "I think that a particle must have a separate reality independent of the measurements. That is: an electron has spin, location and so forth even when it is not being measured." Personally, I don't think this is a naive statement. But that does not make it correct, either.

To bring clarity to the discussion, let's allow that there is: naive realism, strong realism, EPR realism, Bell realism, Einstein realism, ... (In my view: naive realism = strong realism = EPR realism = Bell realism = silliness.)

Now I am not aware that Einstein ever endorsed the other versions, so could you let me have the full quotes and sources that you rely on?

Note: We are not looking for Einstein's support of ''pre-measurement values'' BUT for the idea that measurement does NOT perturb the measured system (for that is the implicit silliness with EPR, Bell, etc). Einstein (1940, 1954) understood that the wave-function plus Born-formula related to the statistical prediction of ''measurement outcomes'' and NOT pre-measurement values.
 
  • #113
Like I said, there are probably a number of possible definitions--one used in information theory is algorithmic randomness, which says that a random string is "incompressible", meaning it's impossible to find a program that can generate it which is shorter than the string itself. The definition given in your link is more like statistical randomness, which is probably related--if there is some way of having better-than-even chances of guessing the next digit, then that could help in finding a shorter program to generate the string. But the definition the guy in the link was using doesn't seem quite identical to statistical randomness, because a string could be "statistically random" in the sense that there's no pattern in the string itself that would help you guess the next digit, but knowledge of some external information would allow you to predict it (as might be the case for a deterministic pseudorandom algorithm).
 
  • #114
JesseM said:
I think there are various definitions, but like I said, even if two 10-digit strings are maximally random for their length, there's no reason the 20-digit string created by combining them would also have to be maximally random for its length.

No. It is even more subtle than that: whatever test we have for examining whether a data stream is random, it is in theory possible that a data stream passes this test while not being random, and can instead be decoded, using an algorithm and a key.

What do you mean by "adding" two streams, one random and the other nonrandom? If one stream was 10010101101 and the other was 11111111111, what would the sum be?

One form would be to encode it in base X and compute, for the random stream R and the meaningful data stream D: ( R(i) + D(i) ) modulo X.

It does not matter exactly how we do it, as long as we can decompose the stream back: if we have the random signal R, we can extract signal D from it.

Is it? Please state a particular form of the Bell inequality, and show how your example violates it--I promise, you won't be able to get any such violation unless the streams are generated using entangled particles.

I am working on a good formulation of it.

Alternately, you could explain how this idea of adding random and nonrandom data streams can be used to reproduce the results seen in the experiment with the computer monitors I brought up earlier in this post, where whenever Alice and Bob type the same letter they always get opposite responses from their computers, and yet one or both of these inequalities are violated. (If you'd like some additional explanation of where these inequalities come from, I can provide it.) When I asked you about this before, you did say "I am fairly sure one can show a deviation from the Bell inequality in the non-QM case too." Well, are you willing to try to come up with a specific example?

I will do my best.

Again, what does this have to do with the Bell inequalities? The Bell inequalities are all specific quantitative statements about the number of measurements with some outcomes vs. some other outcomes, not just some broad statement about the measurements being "correlated" when they measure along the same axis and "uncorrelated" along another. Again, would it help to go over the specific reasoning behind the two inequalities I mentioned above? OK, so to have a "non-quantum" case let's just say Alice and Bob are both being sent a stream of signals which their computers are using as a basis for deciding whether to display a + or - each time they type one of the three letters, and that I am a third-party observer who sees every digit of both streams, how the streams are being generated, and what algorithm the computer uses to choose its output based on its input from the streams. In this case I promise you it will be impossible to reproduce the results described, where on each trial where they both type the same letter, they always get opposite symbols on their display, yet one or both of the inequalities I gave above is violated. If you think this is wrong, can you try to give the specifics of a counterexample?

I said: I will try!
 
  • #115
wm said:
Note: We are not looking for Einstein's support of ''pre-measurement values'' BUT for the idea that measurement does NOT perturb the measured system
You can certainly assume measurement perturbs the system, but if you want to explain the perfect correlation between results when the experimenters measure along the same axis, in terms of local hidden variables, you'd have to assume it perturbs the state of the system in an entirely predictable way which does not vary between trials (i.e. if the particle was in state X and you make measurement Y, then if you get result Z once, you should get that result every time it was in state X and you made measurement Y).
 
  • #116
DrChinese said:
Bell test results agree with the basic predictions of ordinary QM, without the need for adding a non-local component. My conclusion is that the HUP is fundamental, and there is no observation independent layer of reality for quantum observables. (But that is merely one possible interpretation. MWI and BM are others.)

DrC, could you please expand on this interesting position? As I read it, you have a local comprehension of Bell-test results? (I agree that there is such, but did not realize that you had such.)

However, your next sentence is not so clear: By definition, an observable is observation dependent, so you seem to be saying that there are no underlying quantum beables?? There is no ''thing-in-itself''??

Personally: I reject BM on the grounds of its non-locality; and incline to endorse MWI with its locality, while rejecting its need for ''many worlds''.
 
  • #117
JesseM said:
Like I said, there are probably a number of possible definitions--one used in information theory is algorithmic randomness, which says that a random string is "incompressible", meaning it's impossible to find a program that can generate it which is shorter than the string itself. The definition given in your link is more like statistical randomness, which is probably related--if there is some way of having better-than-even chances of guessing the next digit, then that could help in finding a shorter program to generate the string. But the definition the guy in the link was using doesn't seem quite identical to statistical randomness, because a string could be "statistically random" in the sense that there's no pattern in the string itself that would help you guess the next digit, but knowledge of some external information would allow you to predict it (as might be the case for a deterministic pseudorandom algorithm).

Although it is somewhat of a *side* issue, I think that even in theory there is no possibility of creating a truly random stream. It could always be data that can be decoded back, using some algorithm and key, into meaningful data.

So this makes randomness a very problematic feature. It would mean, for instance, that random and not-random fail the law of the excluded middle: the same stream can be random (from some point of view, or for some observer) and not random (from some other point of view, or for some other observer).
 
  • #118
JesseM said:
You can certainly assume measurement perturbs the system, but if you want to explain the perfect correlation between results when the experimenters measure along the same axis, in terms of local hidden variables, you'd have to assume it perturbs the state of the system in an entirely predictable way which does not vary between trials (i.e. if the particle was in state X and you make measurement Y, then if you get result Z once, you should get that result every time it was in state X and you made measurement Y).

Yes; it's called determinism. That is why, in a Bell test, when the detectors have (say) the same settings, the outcomes are identical (++, ++, --, ++, --, ...) (with no evidence of DrC's HUP). NEVERTHELESS, each perturbed particle (with its revealed observable) now differs from its pre-measurement state (with its often-hidden beable).

AND NOTE: Prior to one or other measurement, our knowledge of the state is generally insufficient for us to avoid a probabilistic prediction; here 50/50 ++ XOR --. So, given locality, determinism is the underlying mechanism that delivers such beautifully correlated results from randomly delivered twins; HUP notwithstanding.
 
  • #119
heusdens said:
Although it is somewhat of a *side* issue, I think that even in theory there is no possibility of creating a truly random stream. It could always be data that can be decoded back, using some algorithm and key, into meaningful data.
Well, neither definition says that this would preclude a string from being random. The "algorithmic incompressibility" definition just tells you that the algorithm for encoding the message, plus the most compressed algorithm for generating the message on its own (and if the message is meaningful, it might have a short algorithm to generate it), must be longer than the string itself. And the "statistical randomness" definition says that despite the fact that the string is an encoded message, that won't help you predict what each successive digit will be (or if it does help, then the string is not statistically random).
heusdens said:
So this makes randomness a very problematic feature. It would mean, for instance, that random and not-random fail the law of the excluded middle: the same stream can be random (from some point of view, or for some observer) and not random (from some other point of view, or for some other observer).
With any precise mathematical definition of randomness, there will be no violation of logic.
 
  • #120
Maybe this is a very naive attempt, but what if we just create three random streams of data, labelled a, b, c, corresponding to detector settings A, B, C of Alice and Bob?

(That is: each stream is random in itself and in relation to the others, so that no one can predict any of the data of its own stream or of another stream, nor can one extract any useful data by combining any two streams or even all three.)

If Alice picks A and Bob picks A, they get the same data; likewise for B and for C. However, if Alice picks a different setting than Bob, they get random outcomes. Does that match the criteria for breaking the inequality, or not?
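A quick simulation sketch of this proposal (modelling each stream as independent fair ±1 digits, which is my own extra assumption), compared against the second inequality from post #110:
Code:
import random

# Three independent random streams a, b, c; equal settings read the same
# stream, different settings read independent streams.
N = 100_000
opposite = 0
for _ in range(N):
    streams = {s: random.choice([+1, -1]) for s in "ABC"}
    alice_setting, bob_setting = random.sample("ABC", 2)  # always different here
    if streams[alice_setting] != streams[bob_setting]:
        opposite += 1

print(opposite / N)   # comes out near 1/2, which is >= 1/3
If I am not mistaken, the different-settings rate comes out near 1/2, which satisfies rather than violates the 1/3 bound, and at equal settings this construction gives the same symbols rather than opposite ones; so as it stands it does not yet break the inequality.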
 
