Are entanglement correlations truly random?

Summary
The discussion centers on the nature of correlations between measurements from entangled particles versus truly random sources. While two random sources yield independent results with no correlation, measurements from entangled photons often exhibit non-zero correlations influenced by filter angles. The conversation explores whether these correlations imply that the detections are not purely random, suggesting a deeper connection between the particles. The concept of "truly random" is debated, with participants questioning the ability to definitively prove randomness in quantum systems. Ultimately, the dialogue emphasizes the complexity of defining randomness and correlation in quantum mechanics.
  • #61
PeterDonis said:
because measurements on entangled quantum bits can violate the Bell inequalities.
Even in the case of parallel bases/full correlation?
 
  • #62
entropy1 said:
Even in the case of parallel bases/full correlation?

"Violate the Bell inequalities" means over the full range of possible combinations of measurement settings. Obviously if you only pick the one case where both measurements are parallel, you won't violate the inequalities. So what?
 
  • #63
PeterDonis said:
Of entropy, yes.
Ok. I think that is important. Are you willing and able to suggest a Google search term for this, I guess entanglement entropy? (on that term I find only very advanced articles) :smile:
 
  • #64
entropy1 said:
entanglement-entropy

Yes, that's a good search term.

entropy1 said:
on that word I find only very advanced articles

Yes, that's because it is an advanced topic. :wink:
 
  • #65
Simon Phoenix said:
Yes - basically think of putting your binary string on a wheel. Now copy it and put this on an 'inner' wheel. Rotate the inner wheel. With overwhelming probability there's only one position of the two wheels where there is perfect correlation between the bits in the same positions on the wheels.

Now do the same but with 2 independently produced random strings - now, with overwhelming probability, there will be no position we can rotate to for which there's a perfect correlation.

But so what? I really don't see where you're going with this, or how it is helpful. There may well be something useful in this perspective - it's just I'm not seeing it yet.
Suppose we have (1) a random string A with bits ##a_0 \ldots a_{n-1}## and a random string B with bits ##b_0 \ldots b_{n-1}##. Suppose they are not correlated, so that P(A,B) = P(A)P(B).

Now suppose (2) we compare ##a_0 \ldots a_{n-1}## with the rotated string ##b_1 \ldots b_{n-1} b_0##, and find total correlation.

Now are A and B correlated or not? If we take the alignment of A and B in (1), we would say they are not correlated. However, A and B contain a correlation that reveals itself in (2). We would say that such a correlation 'arising' just by chance is very unlikely: independent sources would almost certainly not generate it.

So the correlation must be the result of something. You could get the same result if you took all the ones from each string and paired them together, and similarly for the zeros. In any case there is a non-random cause for the resulting correlation (a physical cause).

Maybe that is what I mean by 'not truly random'.
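The two-wheels picture from Simon Phoenix's post can be sketched numerically. A minimal Python sketch (the string length, shift amount, and RNG seed are arbitrary choices for illustration):

```python
import random

def correlation(a, b):
    # (2c - n)/n, where c is the number of coinciding bits; ranges over [-1, 1]
    n = len(a)
    c = sum(x == y for x, y in zip(a, b))
    return (2 * c - n) / n

random.seed(0)
n = 1000
a = [random.randint(0, 1) for _ in range(n)]

# (1) the same string on an "inner wheel", rotated by k positions:
# exactly one rotation realigns it and gives perfect correlation
k = 137
b = a[k:] + a[:k]
rotations = [correlation(a, b[r:] + b[:r]) for r in range(n)]
assert max(rotations) == 1.0

# (2) an independently generated random string: with overwhelming
# probability no rotation comes anywhere near perfect correlation
c = [random.randint(0, 1) for _ in range(n)]
best = max(correlation(a, c[r:] + c[:r]) for r in range(n))
print(best)
```

For independent 1000-bit strings the best correlation over all rotations typically stays around 0.1, far from the value 1 found at the matching rotation of a copied string.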
 
Last edited:
  • #66
entropy1 said:
Suppose they are not correlated, so that P(A,B) = P(A)P(B).

What are these probabilities? What is P(A, B) and what are P(A) and P(B)?

entropy1 said:
we compare ##a_0 \ldots a_{n-1}## with the rotated string ##b_1 \ldots b_{n-1} b_0##, and find total correlation.

What does "total correlation" mean? How would it be expressed in terms of probabilities like P(A, B) or P(A) or P(B)?
 
  • #67
entropy1 said:
Suppose we have (1) a random string A with bits ##a_0 \ldots a_{n-1}## and a random string B with bits ##b_0 \ldots b_{n-1}##. Suppose they are not correlated, so that P(A,B) = P(A)P(B).

Now suppose (2) we compare ##a_0 \ldots a_{n-1}## with the rotated string ##b_1 \ldots b_{n-1} b_0##, and find total correlation.

Now are A and B correlated or not? If we take the alignment of A and B in (1), we would say they are not correlated. However, A and B contain a correlation that reveals itself in (2). We would say that such a correlation 'arising' just by chance is very unlikely: independent sources would almost certainly not generate it.

So the correlation must be the result of something. You could get the same result if you took all the ones from each string and paired them together, and similarly for the zeros. In any case there is a non-random cause for the resulting correlation (a physical cause).

Maybe that is what I mean by 'not truly random'.
The correlation between your strings is found by lining them up and counting coincidences. There is no correlation of a single bit; coincidences are all you have.
 
  • #68
entropy1 said:
You could get the same result if you took all the ones from each string and paired them together, and similarly for the zeros.

No, that's called cherry-picking your data.
 
  • #69
entropy1 said:
Suppose we have (1) a random string A with bits ##a_0 \ldots a_{n-1}## and a random string B with bits ##b_0 \ldots b_{n-1}##. Suppose they are not correlated, so that P(A,B) = P(A)P(B).

Now suppose (2) we compare ##a_0 \ldots a_{n-1}## with the rotated string ##b_1 \ldots b_{n-1} b_0##, and find total correlation.

Now are A and B correlated or not? If we take the alignment of A and B in (1), we would say they are not correlated. However, A and B contain a correlation that reveals itself in (2). We would say that such a correlation 'arising' just by chance is very unlikely: independent sources would almost certainly not generate it.

So the correlation must be the result of something. You could get the same result if you took all the ones from each string and paired them together, and similarly for the zeros. In any case there is a non-random cause for the resulting correlation (a physical cause).

Maybe that is what I mean by 'not truly random'.
You are floundering. Correlation is a statistical property. The expectation of the correlation between two random bit strings is zero by definition!

The correlation between streams A and B, ##-1 \le \rho_{ab} = (2c-n)/n \le 1## (where ##c## is the number of coincidences), has expectation zero because the number of coincidences tends to ##n/2##. But the expectation of ##\rho^2## is not zero, so you get fluctuations.

The way to reproduce an EPR dataset is to have a machine that produces two random streams and an EPR demon that inserts anti-correlated pairs at random intervals. If the demon tells you which pairs are its work, you can pick them out and get perfect anti-correlation. The remaining data will have ##\langle\rho\rangle = 0##. If all the data is used, then ##\langle\rho\rangle < 0##, i.e. the expectation of the correlation is negative, not zero.

So a good EPR experiment could comprise a hundred-million-bit string and give a result like ##\hat{\rho} = -0.14325378 \pm 0.00001##, which would show something strange was happening. Time to call Rosencrantz & Guildenstern.
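The demon scenario can be sketched as a toy simulation (the insertion rate of 0.3 and the stream length are arbitrary illustrative choices, not values from the post):

```python
import random

def rho(a, b):
    # correlation (2c - n)/n, where c is the number of coincidences
    n = len(a)
    c = sum(x == y for x, y in zip(a, b))
    return (2 * c - n) / n

random.seed(1)
n = 100_000
a, b, demon = [], [], set()
for i in range(n):
    if random.random() < 0.3:          # demon inserts an anti-correlated pair
        x = random.randint(0, 1)
        a.append(x)
        b.append(1 - x)
        demon.add(i)
    else:                              # two independent random bits
        a.append(random.randint(0, 1))
        b.append(random.randint(0, 1))

rho_all = rho(a, b)                    # expectation -0.3: negative, not zero
rest_a = [a[i] for i in range(n) if i not in demon]
rest_b = [b[i] for i in range(n) if i not in demon]
rho_rest = rho(rest_a, rest_b)         # expectation 0 once the demon's pairs are removed
print(rho_all, rho_rest)
```

With all the data included the estimated correlation sits near -0.3; with the demon's pairs removed it fluctuates around zero, matching the description above.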
 
Last edited:
  • #70
PeterDonis said:
What are these probabilities? What is P(A, B) and what are P(A) and P(B)?
P(X=1) in the binary case is the ratio of the number of bits equal to 1 to the total number of samples.
P(A=1,B=1) is the ratio of the number of pairs of bits that are both 1 to the total number of samples (pairs).
PeterDonis said:
What does "total correlation" mean? How would it be expressed in terms of probabilities like P(A, B) or P(A) or P(B)?
P(A=1,B=1) = P(A=1|B=1)P(B=1), where P(A=1|B=1) = 1 in the case of total correlation. That is: P(A=1,B=1) = P(A=1) = P(B=1).

@Mentz114: It could be my limited mastery of the English language, but, with all due respect, I am afraid I don't understand what you mean.
PeterDonis said:
No, that's called cherry-picking your data.
Yes. I meant it as an example of a non-random cause.

You can claim that the entropy of the random content decreases in fractional bits, but that could also mean that the amount of randomness decreases in favor of non-randomness.
 
Last edited:
  • #71
entropy1 said:
P(X=1) in the binary case is the ratio of the number of bits equal to 1 to the total number of samples.
P(A=1,B=1) is the ratio of the number of pairs of bits that are both 1 to the total number of samples (pairs).

Ok. But "pairs of bits" here means (or should mean--if you are defining it differently, you are doing it wrong) "pairs of bits measured in the same run of the experiment". So these probabilities, to be meaningful, require a certain way of "lining up" the two bit sequences next to each other: bits 0 and 0, bits 1 and 1, bits 2 and 2, etc., of each sample. Otherwise you are making meaningless comparisons; there is no physical meaning to comparing bit 0 from one sample and bit 1 of the other, because they are from different runs of the experiment.

Similarly, if you pick only the "1" bits out of each sample and match them up with each other, you are making a meaningless comparison.

entropy1 said:
I meant it as an example of a non-random cause.

How you meant it doesn't change the fact that it's meaningless. See above.

entropy1 said:
You can claim that the entropy of the random content decreases in fractional bits

I have said no such thing. You are confused.

What I said was that if you have a pair of bits that are correlated, the entropy of that pair of bits, as a system, will be less than the entropy of two random uncorrelated bits; and if the correlation is only partial, the entropy of the two-bit system will not be an integral number of bits. But that doesn't mean we made the correlated pair of bits out of the two random uncorrelated bits and thereby decreased their entropy.
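The entropy point can be checked with a short numerical sketch (the partial-correlation value P(agree) = 0.9 is an arbitrary illustrative choice):

```python
from math import log2

def H(ps):
    # Shannon entropy in bits of a discrete probability distribution
    return -sum(p * log2(p) for p in ps if p > 0)

# two uncorrelated random bits: four equally likely outcomes -> 2 bits
h_uncorr = H([0.25, 0.25, 0.25, 0.25])

# perfectly correlated pair: only 00 and 11 occur -> 1 bit
h_corr = H([0.5, 0.5])

# partial correlation, e.g. P(agree) = 0.9: entropy strictly between
# 1 and 2, i.e. not an integral number of bits
p = 0.9
h_partial = H([p / 2, p / 2, (1 - p) / 2, (1 - p) / 2])
print(h_uncorr, h_corr, h_partial)
```

The correlated pair, as a system, carries less entropy than two uncorrelated bits, and the partially correlated case gives a fractional bit count, exactly as stated above.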
 
  • #72
PeterDonis said:
Ok. But "pairs of bits" here means (or should mean--if you are defining it differently, you are doing it wrong) "pairs of bits measured in the same run of the experiment". So these probabilities, to be meaningful, require a certain way of "lining up" the two bit sequences next to each other: bits 0 and 0, bits 1 and 1, bits 2 and 2, etc., of each sample. Otherwise you are making meaningless comparisons; there is no physical meaning to comparing bit 0 from one sample and bit 1 of the other, because they are from different runs of the experiment.

Similarly, if you pick only the "1" bits out of each sample and match them up with each other, you are making a meaningless comparison.
If a string ##x_0 \ldots x_{n-1}## is random, then the rotated string ##x_1 \ldots x_{n-1} x_0## is random too, right? So what I meant to illustrate is that, since rotating a string does not change its randomness, finding a (strong) correlation with another random string could call its total randomness into question.

I understand that in an experimental setting the correlation is linked to the physical setup, which has restrictions to its operation. That is what I mean by 'non-random elements'.

My approach may be seen as more theoretical, abstract, and for that case I hold the reasoning valid (not inflating it needlessly :biggrin: ).
PeterDonis said:
What I said was that if you have a pair of bits that are correlated, the entropy of that pair of bits, as a system, will be less than the entropy of two random uncorrelated bits; and if the correlation is only partial, the entropy of the two-bit system will not be an integral number of bits. But that doesn't mean we made the correlated pair of bits out of the two random uncorrelated bits and thereby decreased their entropy.
AFAICS I agree. :biggrin:
 
Last edited:
  • #73
entropy1 said:
If a string ##x_0 \ldots x_{n-1}## is random, then the rotated string ##x_1 \ldots x_{n-1} x_0## is random too, right?

It is true that P(A) and P(B) are unchanged by reordering the string. That's obvious, because those probabilities only depend on the total numbers of 0 or 1 bits, not on their order. However, just knowing P(A) and P(B), by itself, does not tell you whether a string is "random". In fact, you have not given, anywhere in this thread that I can see, a definition of what you mean by "random".

Also, if we have two strings, and we reorder only one of them, that will, in general, change P(A, B), since that probability relies on comparing corresponding bits of each string. But only one such comparison is actually meaningful: the one that compares bits from each string that came from the same run of the experiment. Any other comparison is meaningless.

entropy1 said:
I understand that in an experimental setting the correlation is linked to the physical setup, which has restrictions to its operation. That is what I mean by 'non-random elements'.

What "restrictions to its operation" are you talking about? And why do you think such restrictions would be appropriately called "non-random elements"? (Note that this depends on what you mean by "random", which, as I noted above, you have not specified.)
 
  • #74
By random I mean that, in the binary case, in the limit of ##l \to \infty##, the probability of getting a fragment of ##n## identical bits in a random string of length ##l## is ##(\frac{1}{2})^{n-1}##. There are probably statistical tests one could run on this. My own knowledge of mathematics is too limited for that.
PeterDonis said:
What "restrictions to its operation" are you talking about? And why do you think such restrictions would be appropriately called "non-random elements"?
This is probably an example of my limited knowledge of English. Maybe "operating conditions" is a better term?
PeterDonis said:
But only one such comparison is actually meaningful: the one that compares bits from each string that came from the same run of the experiment. Any other comparison is meaningless.
Good point. But on what grounds would you call the results 'random'?
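The proposed criterion can be tested empirically, taking a 'fragment' to mean a contiguous window (only one possible reading of the word; the string length and fragment size below are arbitrary):

```python
import random

random.seed(2)
l, n = 200_000, 5
s = [random.randint(0, 1) for _ in range(l)]

# frequency of length-n windows whose bits are all identical
windows = l - n + 1
identical = sum(len(set(s[i:i + n])) == 1 for i in range(windows))
freq = identical / windows
print(freq, 0.5 ** (n - 1))   # observed frequency vs 2^-(n-1) = 0.0625
```

For a pseudo-random string the observed frequency of all-identical length-5 windows comes out close to ##(\frac{1}{2})^{4} = 0.0625##, as the definition requires.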
 
  • #75
entropy1 said:
P(X=1) in the binary case is the ratio of the number of bits equal to 1 to the total number of samples.
P(A=1,B=1) is the ratio of the number of pairs of bits that are both 1 to the total number of samples (pairs).

P(A=1,B=1) = P(A=1|B=1)P(B=1), where P(A=1|B=1) = 1 in the case of total correlation. That is: P(A=1,B=1) = P(A=1) = P(B=1).

@Mentz114: It could be my limited mastery of the English language, but, with all due respect, I am afraid I don't understand what you mean.

Yes. I meant it as an example of a non-random cause.

You can claim that the entropy of the random content decreases in fractional bits, but that could also mean that the amount of randomness decreases in favor of non-randomness.
There are many criteria one can use to define the randomness of a bit string. The most important is that P(1) = P(0) = 1/2. There is also the auto-correlation, which measures how often a 1 is followed by a 1 and a 0 by a 0. The formula is the same as the correlation between two strings and also has expectation 0.

I have to say that I don't follow what point you are trying to make so I'll leave it there.
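Both criteria just mentioned can be estimated from a sample string (seed and length are arbitrary choices):

```python
import random

random.seed(3)
s = [random.randint(0, 1) for _ in range(100_000)]

# first criterion: P(1) = P(0) = 1/2
p1 = sum(s) / len(s)

# auto-correlation at lag 1: how often a bit is followed by an equal bit;
# same formula as the correlation between two strings, expectation 0
m = len(s) - 1
c = sum(s[i] == s[i + 1] for i in range(m))
auto = (2 * c - m) / m
print(p1, auto)
```

For a good pseudo-random generator both numbers come out close to 1/2 and 0 respectively, with fluctuations of order ##1/\sqrt{n}##.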
 
  • Like
Likes entropy1
  • #76
Mentz114 said:
I have to say that I don't follow what point you are trying to make so I'll leave it there.
To put it simply: I am asking if the sources of a correlation are 100% random.

You are more than welcome to participate, which I would like; however, I respect whatever decision you make.
 
Last edited:
  • #77
PeterDonis said:
Similarly, if you pick only the "1" bits out of each sample and match them up with each other, you are making a meaningless comparison.

How you meant it doesn't change the fact that it's meaningless. See above.
What I mean is that the experiment(al setup) is the "cherry-picker" in this case, in my view.
 
  • #78
entropy1 said:
To put it simply: I am asking if the sources of a correlation are 100% random.

You are more than welcome to participate, which I would like; however, I respect whatever decision you make.
I assume you mean 'deviate from randomness'. This big topic is part of standard statistical theory, and the Wiki articles are a good introduction.
https://en.wikipedia.org/wiki/Statistical_randomness
This one is also useful and mentions more advanced concepts like spectral decompositions and Hadamard transforms.
https://en.wikipedia.org/wiki/Randomness_tests

This is not part of quantum theory. Correlations in QT play a different but very important role.
 
  • #79
entropy1 said:
By random I mean that, in the binary case, in the limit of ##l \to \infty##, the probability of getting a fragment of ##n## identical bits in a random string of length ##l## is ##(\frac{1}{2})^{n-1}##.

Where are you getting this definition of "random" from?

entropy1 said:
I am asking if the sources of a correlation are 100% random.

Obviously it depends on the sources.

entropy1 said:
What I mean is that the experiment(al setup) is the "cherrypicker" in this case, in my consideration.

The experimental setup certainly tells you what comparison between two bit strings is meaningful. But you seemed to be saying that any such comparison was meaningful, because you were talking about rearranging how the two bit strings are compared with each other (by, for example, shifting one bit string relative to the other and then comparing). If you do that, you aren't doing what the experimental setup tells you to do; you're doing something different, and meaningless. That's what I meant by cherry-picking the data.
 
  • #80
entropy1 said:
This is probably an example of my limited knowledge of English.

I think the entire topic of this thread might be an artifact of your limited knowledge of English. That's why I keep asking what you mean by the word "random"; I don't think you mean what that word usually means in English.

Perhaps it would help to ask the question a different way: why do you care whether "the sources of a correlation are 100% random"? What would it tell you if the answer was yes? What would it tell you if the answer was no?
 
  • #81
PeterDonis said:
Where are you getting this definition of "random" from?
From the data of my computer code :biggrin:
PeterDonis said:
The experimental setup certainly tells you what comparison between two bit strings is meaningful. But you seemed to be saying that any such comparison was meaningful, because you were talking about rearranging how the two bit strings are compared with each other (by, for example, shifting one bit string relative to the other and then comparing). If you do that, you aren't doing what the experimental setup tells you to do; you're doing something different, and meaningless. That's what I meant by cherry-picking the data.
Well, I can reassure you that was not my angle of approach. I introduced the cherry-picking part to illustrate how improbable it is to get a correlation out of pure randomness.
PeterDonis said:
I think the entire topic of this thread might be an artifact of your limited knowledge of English. That's why I keep asking what you mean by the word "random"; I don't think you mean what that word usually means in English.
That's not entirely fair - I think it is a matter of starting point.
 
  • #82
entropy1 said:
how improbable it is to get a correlation out of pure randomness.

But you still haven't really explained what you mean by this.
 
  • Like
Likes Zafa Pi
  • #83
PeterDonis said:
But you still haven't really explained what you mean by this.
Well, I am afraid I can't do better than this currently. I will ponder some more.
 
  • Like
Likes Zafa Pi and Mentz114
  • #84
entropy1 said:
Well, I am afraid I can't do better than this currently. I will ponder some more.
You should start by finding out the customary meanings of randomness and also how to calculate a correlation.
 
  • Like
Likes Zafa Pi
  • #85
entropy1 said:
Suppose we have two truly random sources A and B that generate bits ('0' or '1') synchronously. If we measure the correlation between the respective bits generated, we find a random, i.e. no, correlation.

Now suppose A and B are two detectors that register polarization-entangled photons passing respective polarization filters. We can define bits as 'detection'='1' and 'no detection'='0'. A and B individually yield random results. However, there is in almost every case a non-zero correlation, depending on the angle of the filters.

So my question would then be: since the detections of the entangled particles often exhibit a different correlation than truly random sources, are the detections purely random in case of entanglement? (or do they only seem random?)
A major problem in getting your question answered is that your terminology is sloppy, in fact truly sloppy.
Demystifier said:
Define "truly random"!
PeterDonis said:
Where are you getting this definition of "random" from?
You failed to make a definition. The terms "random" and "truly random" are neither used nor defined in probability texts. And after reading more of your posts it is not clear to me what you mean.

Let me give a simple concrete QM example:
Given an entangled pair from the state ##\sqrt{\frac{1}{2}}(|00\rangle + |11\rangle)##, we let A measure one of the pair at angle 0°, i.e. with measurement operator/observable ##Z = \begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}##.
We let B measure the other at 30°, i.e. with observable ##\frac{1}{2}Z + \frac{\sqrt{3}}{2}X = \begin{pmatrix}\frac{1}{2} & \frac{\sqrt{3}}{2}\\ \frac{\sqrt{3}}{2} & -\frac{1}{2}\end{pmatrix}##.

The joint probability density of (A,B) is (1,1) with prob ⅜, (1,-1) with prob ⅛, (-1,1) with prob ⅛, (-1,-1) with prob ⅜. (1 & -1 are eigenvalues of the observables)
We see A and B agree with prob = ¾ = cos²30º, as usual.
The correlation coefficient is ½.
The marginal density of A is 1 with prob ½, -1 with prob ½. Same for B. A and B are not independent.

All of this is justified by repeated trials in the lab.

Can you ask your question from the above formulation?
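These numbers can be reproduced directly from the state and observables given above. A plain-Python sketch, using the standard fact that ##\frac{1}{2}Z + \frac{\sqrt{3}}{2}X## is the ±1-valued observable at Bloch-sphere angle 60°:

```python
from math import cos, sin, sqrt, radians

def eigvec(theta, outcome):
    # eigenvector of cos(theta) Z + sin(theta) X for eigenvalue outcome = +1 or -1
    if outcome == +1:
        return (cos(theta / 2), sin(theta / 2))
    return (-sin(theta / 2), cos(theta / 2))

def joint_prob(theta_a, theta_b, ea, eb):
    # P(A=ea, B=eb) on the Bell state (|00> + |11>)/sqrt(2):
    # squared overlap of the state with the product of eigenvectors
    u, v = eigvec(theta_a, ea), eigvec(theta_b, eb)
    amp = (u[0] * v[0] + u[1] * v[1]) / sqrt(2)
    return amp ** 2

ta, tb = 0.0, radians(60)   # Z, and (1/2)Z + (sqrt(3)/2)X
probs = {(ea, eb): joint_prob(ta, tb, ea, eb)
         for ea in (+1, -1) for eb in (+1, -1)}
print(probs)        # approximately 3/8, 1/8, 1/8, 3/8, as in the post

agree = probs[(1, 1)] + probs[(-1, -1)]
corr = sum(ea * eb * p for (ea, eb), p in probs.items())
print(agree, corr)  # approximately 3/4 = cos^2(30 deg), and 1/2
```

The computed joint distribution, agreement probability, and correlation coefficient match the values ⅜/⅛/⅛/⅜, ¾ and ½ quoted above.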
 
  • #86
Zafa Pi said:
Can you ask your question from the above formulation?
I thought I remembered the basics, but this morning my meds demand my brain, so I would have to look it up.
 
  • #87
Zafa Pi said:
You failed to make a definition. The terms "random" and "truly random" are neither used nor defined in probability texts.
If the term is not defined in scientific literature, then why are you asking me, a layman, to define it? By the way, I gave one:
entropy1 said:
By random I mean that, in the binary case, in the limit of ##l \to \infty##, the probability of getting a fragment of ##n## identical bits in a random string of length ##l## is ##(\frac{1}{2})^{n-1}##. There are probably statistical tests one could run on this. My own knowledge of mathematics is too limited for that.
Anyway, I will think about what I mean by 'truly' random.
 
Last edited:
  • #88
entropy1 said:
If the term is not defined in scientific literature, then why are you asking me, a layman, to define it?

Because you used it. We need to know what you meant by it.
 
  • Like
Likes Zafa Pi
  • #89
entropy1 said:
By random I mean that, in the binary case, in the limit of ##l \to \infty##, the probability of getting a fragment of ##n## identical bits in a random string of length ##l## is ##(\frac{1}{2})^{n-1}##. There are probably statistical tests one could run on this. My own knowledge of mathematics is too limited for that.
I don't know what you mean by fragment. Do you have a link?
If ##a_1, a_2, \ldots, a_l## with ##l = 20## is a binary sequence, what is a fragment of 10 bits? Is it a subset of size 10? Is it a contiguous subset like ##a_7, a_8, \ldots, a_{16}##? Or what?
 
  • #90
entropy1 said:
By random I mean that, in the binary case, in the limit of ##l \to \infty##, the probability of getting a fragment of ##n## identical bits in a random string of length ##l## is ##(\frac{1}{2})^{n-1}##. There are probably statistical tests one could run on this. My own knowledge of mathematics is too limited for that.
I now think you were trying to define a binary normal sequence, but failed.

"A sequence of bits is random if there exists no Program shorter than it which can produce the same sequence." ~ Kolmogorov
So obviously it is impossible to exhibit a Kolmogorov random sequence.

Neither normality nor K-randomness implies the other. But all of this should be in the Probability section of PF. And none of this is relevant to QM.
 
