Are entanglement correlations truly random?

Chris Miller

DrChinese has given the example of full entanglement swapping, but there's a kind of 'half-way house' that might help to shed some light.

Imagine a perfect optical cavity (the experiments were done in very high-Q microwave cavities). Now take a 2 level atom in its excited state (Rydberg atoms can be used as a reasonable approximation to a 2 level atom). Fire this atom through the cavity with a specific transit time such that the field and atom become perfectly entangled.

Now suppose we live in an ideal world and we can maintain the entanglement between the cavity field and the atom. Go make a cup of tea. Ship the atom off to the outer moons of Saturn.

Now take a second atom prepared in its ground state and fire it through the cavity with a different, tailored transit time. Tailor this time just right and, after this second atom has gone through the cavity, the two atoms are entangled and the cavity field is 'decoupled'.

The two atoms have never directly interacted - and (if we can maintain the entanglement long enough) the two atoms can be fired through the cavity years apart.

OK that's wildly fanciful in terms of shipping things off to Saturn and maintaining entanglement for years - but the experiments have been performed (although with more modest parameters).
Thanks for the great example. It seems more than ever to me as though "entangled" isn't quite the right word for the phenomenon. There is no co-dependence. The two atoms have just been configured/seeded to perform with some predictable similarity. I.e., only their states are correlated.

Question: Does it have to be the same "perfect optical cavity," or could these two atoms be "entangled" via two identical cavities in distant locations?

entropy1

Gold Member
Yes - basically think of putting your binary string on a wheel. Now copy it and put this on an 'inner' wheel. Rotate the inner wheel. With overwhelming probability there's only one position of the two wheels where there is perfect correlation between the bits in the same positions on the wheels.

Now do the same but with 2 independently produced random strings - now, with overwhelming probability, there will be no position we can rotate to for which there's a perfect correlation.

But so what? I really don't see where you're going with this, or how it is helpful. There may well be something useful in this perspective - it's just I'm not seeing it yet.
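The two-wheel comparison can be sketched numerically; a minimal Python illustration (string length and seed are arbitrary choices):

```python
import random

random.seed(0)
n = 256
a = [random.randint(0, 1) for _ in range(n)]

def perfect_rotations(outer, inner):
    """Rotations r at which every bit of the inner wheel matches the outer wheel."""
    m = len(outer)
    return [r for r in range(m)
            if all(outer[i] == inner[(i + r) % m] for i in range(m))]

# Inner wheel is an exact copy of the outer: with overwhelming probability
# only one position (no rotation) gives perfect bit-for-bit correlation.
copy_matches = perfect_rotations(a, a[:])

# Inner wheel is an independent random string: with overwhelming probability
# no rotation gives perfect correlation.
b = [random.randint(0, 1) for _ in range(n)]
indep_matches = perfect_rotations(a, b)
```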
What I noticed is that the case of two truly independent sources differs from the case of pairs of entangled particles, in the sense that the latter case 'contains' a correlation. So if the latter case differs from 'true randomness', you have to either conclude that it is 'not truly' random, or extend the definition of 'true randomness' to include correlation.

The difference between the two is that in the first case the sources are independent, and in the second they are dependent. If you define correlation = dependence, then you can just observe that there is a correlation and leave it at that. It would be a tautology. But the correlation has a cause, and the extension with this cause is what, in my eyes, characterizes the difference. But maybe that's a tautology too.

Last edited:

Simon Phoenix

Gold Member
So if the latter case differs from 'true randomness', you have to either conclude that it is 'not truly' random, or extend the definition of 'true randomness' to include correlation.
Seems to me like you're going round in circles a bit here. Let's focus on just binary variables. I'm assuming by truly random you actually mean uniformly at random so that the probability of obtaining 1 is 1/2.

You're conflating the notion of randomness with the notion of dependence, I think. Suppose we have two random processes $A$ and $B$, each spitting out binary strings. If they're independent processes then the total entropy is simply the sum of the entropy of process $A$ and the entropy of process $B$. If the processes are dependent then the total entropy is less than this. Really, what's your issue here?

If we do have $S(A,B) \lt S(A) + S(B)$ then $A$ and $B$ are both random processes, just not independently so. This is all just standard probability theory.

OK, it's usually expressed in terms of conditional probabilities so that for independent random processes we would write $P(A|B) = P(A)$ and $P(B|A) = P(B)$ or, equivalently, we would write the joint distribution $P(A,B) = P(A)P(B)$

For dependent processes we would have $P(A,B) = P(A)P(B|A) = P(B)P(A|B)$

There's no need at all to redefine anything. There's nothing that you're saying here other than "two random processes can be dependent".
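The entropy accounting above can be checked with a small sketch, assuming an illustrative joint distribution in which $B$ agrees with $A$ 90% of the time:

```python
from math import log2

def H(dist):
    """Shannon entropy in bits of a probability distribution (dict of probabilities)."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# Illustrative joint distribution of two dependent binary processes:
# B agrees with A with probability 0.9.
joint = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}
pA = {a: sum(p for (x, _), p in joint.items() if x == a) for a in (0, 1)}
pB = {b: sum(p for (_, y), p in joint.items() if y == b) for b in (0, 1)}

S_A, S_B, S_AB = H(pA), H(pB), H(joint)
# Each marginal is uniformly random (1 bit), yet S(A,B) < S(A) + S(B):
# both processes are random, just not independently so.
```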

entropy1

Gold Member
If there is no correlation between $A$ and $B$, we have $P(A,B)=P(A)P(B)$: factorization. If there is a correlation between $A$ and $B$, then $P(A,B) \neq P(A)P(B)$. Instead, we have $P(A,B)=P(A|B)P(B)$ with $P(A) \neq P(A|B)$. We can write $P(A,B=0)=P(A|B=0)P(B=0)$ and $P(A,B=1)=P(A|B=1)P(B=1)$. So the difference from the factorizing $P(A,B=x)=P(A)P(B=x)$ is that $P(A)$ has been replaced by $P(A|B=0)$ in one case and $P(A|B=1)$ in the other, with $P(A) \neq P(A|B=0)$ and $P(A) \neq P(A|B=1)$. So you could see it as the probability of $A$ in the correlated case having two different values, namely $P(A|B=0)$ and $P(A|B=1)$, depending on the outcome of $B$. So this would be a reason for me to call this "not truly random". Does this make any sense?

* With 'A being truly random' I mean that the probability of A doesn't vary.
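A quick numerical check of this point, using an illustrative correlated joint distribution (the 0.4/0.1 split is an arbitrary choice):

```python
# Illustrative correlated joint distribution P(A, B) for two bits.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

pB0 = joint[(0, 0)] + joint[(1, 0)]      # P(B=0) = 0.5
pB1 = joint[(0, 1)] + joint[(1, 1)]      # P(B=1) = 0.5
pA1 = joint[(1, 0)] + joint[(1, 1)]      # P(A=1) = 0.5

pA1_given_B0 = joint[(1, 0)] / pB0       # P(A=1|B=0) = 0.2
pA1_given_B1 = joint[(1, 1)] / pB1       # P(A=1|B=1) = 0.8
# The probability of A takes two different values depending on the outcome
# of B, and neither equals the unconditional P(A=1).
```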

Last edited:

entropy1

Gold Member
The correlation encodes the (relative) measurement bases, so there is more information in ensembles A and B than just two strings of purely random bits.

And I think the entropy of correlated values is lower than that of uncorrelated values. To get all the ones and zeros aligned takes more than pure randomness.

Last edited:

jerromyjon

To get all the ones and zeros aligned takes more than pure randomness.
So that's like saying you have a random number generator that has an output of 0 or 1 and places it into variable A, then you use a not function to place the opposite of A into B, and somehow those anticorrelated values are less random because of it?

PeterDonis

Mentor
So that's like saying you have a random number generator that has an output of 0 or 1 and places it into variable A, then you use a not function to place the opposite of A into B, and somehow those anticorrelated values are less random because of it?
He's saying that if you randomly generate one bit, you have one bit of randomness; using a not function to generate a second (anti)correlated bit doesn't generate any additional randomness. You don't have two bits of randomness just because you have two bits. You would have to randomly generate both bits to get two bits of randomness, but if you did that, they wouldn't be correlated.
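That counting argument can be sketched empirically (sample size and seed are arbitrary choices):

```python
import random
from math import log2

random.seed(1)
pairs = []
for _ in range(10_000):
    a = random.randint(0, 1)   # one randomly generated bit
    b = 1 - a                  # its NOT: perfectly anticorrelated, no new randomness
    pairs.append((a, b))

# Empirical joint entropy of the pair.
counts = {}
for p in pairs:
    counts[p] = counts.get(p, 0) + 1
total = len(pairs)
H_joint = -sum(c / total * log2(c / total) for c in counts.values())
# Only (0, 1) and (1, 0) ever occur: two bits, but only ~1 bit of randomness.
```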

jerromyjon

You would have to randomly generate both bits to get two bits of randomness, but if you did that, they wouldn't be correlated.
Exactly.
So you could see it as that the probability of A in the correlating case has two different values, namely $P(A|B=0)$ and $P(A|B=1)$, depending on the outcome of B. So this would be a reason for me to call this "not truly random".
The correlation encodes the (relative) measurement bases, so there is more information in ensembles A and B than just two strings of purely random bits.
Maybe I'm misinterpreting, maybe I'm missing something... but it seems to me the debate centers around 2 separate values with a single random base... or more specifically 2 correlated values.

entropy1

Gold Member
So, if we have a certain amount of correlation, but not complete, do we have a fractional number of bits?

Moreover, if we have a single bit of randomness distributed over two measurement results, isn't that bit a hidden variable?

PeterDonis

Mentor
if we have a certain amount of correlation, but not complete, do we have a fractional number of bits?
Of entropy, yes.

if we have a single bit of randomness distributed over two measurement results, isn't that bit a hidden variable?
If the joint state of the two bits is a "hidden variable", then yes. If we're talking about classical bits, then it's fine to look at it that way, because classical bits can't violate the Bell inequalities. If we're talking about entangled quantum bits, then you can consider their joint quantum state as a "hidden variable", but it can't be a local hidden variable in the Bell sense because measurements on entangled quantum bits can violate the Bell inequalities.
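A quick check of that last claim, using the singlet-state correlation $E(a,b) = -\cos(a-b)$ at the standard CHSH angles:

```python
from math import cos, pi, sqrt

def E(a, b):
    """Quantum singlet-state correlation for analyzer angles a and b."""
    return -cos(a - b)

# Standard CHSH settings: a = 0, a' = 90 deg, b = 45 deg, b' = 135 deg.
a, ap, b, bp = 0.0, pi / 2, pi / 4, 3 * pi / 4
S = abs(E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp))
# Any local hidden variable model obeys S <= 2; the quantum prediction
# reaches S = 2*sqrt(2) ~ 2.83 at these settings.
```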

entropy1

Gold Member
because measurements on entangled quantum bits can violate the Bell inequalities.
Even in the case of parallel bases/full correlation?

PeterDonis

Mentor
Even in the case of parallel bases/full correlation?
"Violate the Bell inequalities" means over the full range of possible combinations of measurement settings. Obviously if you only pick the one case where both measurements are parallel, you won't violate the inequalities. So what?

entropy1

Gold Member
Of entropy, yes.
Ok. I think that is important. Are you willing and able to suggest a Google search term for this, I guess, entanglement-entropy? (on that word I find only very advanced articles)

PeterDonis

Mentor
entanglement-entropy
Yes, that's a good search term.

on that word I find only very advanced articles
Yes, that's because it is an advanced topic.

entropy1

Gold Member
Yes - basically think of putting your binary string on a wheel. Now copy it and put this on an 'inner' wheel. Rotate the inner wheel. With overwhelming probability there's only one position of the two wheels where there is perfect correlation between the bits in the same positions on the wheels.

Now do the same but with 2 independently produced random strings - now, with overwhelming probability, there will be no position we can rotate to for which there's a perfect correlation.

But so what? I really don't see where you're going with this, or how it is helpful. There may well be something useful in this perspective - it's just I'm not seeing it yet.
Suppose we have (1) a random string A with bits $a_0 \ldots a_{n-1}$ and a random string B with bits $b_0 \ldots b_{n-1}$. Suppose they are not correlated, and we have $P(A,B)=P(A)P(B)$.

Now suppose (2) we compare $a_0 \ldots a_{n-1}$ with the rotated string $b_1 \ldots b_{n-1} b_0$, and find total correlation.

Now are A and B correlated or not? If we take the alignment of A and B in (1), we would say they are not correlated. However, A and B contain a correlation that reveals itself in (2). The probability of such a correlation arising just by chance is vanishingly small: independent sources would very probably not generate such a correlation.

So the correlation must be the result of something. You could get the same result if you took all the ones and put them together, and similarly for the zeros. In any case there is a non-random cause for the resulting correlation (a physical cause).

Maybe that is what I mean by 'not truly random'.

Last edited:

PeterDonis

Mentor
Suppose they are not correlated, and we have P(A,B)=P(A)P(B).
What are these probabilities? What is P(A, B) and what are P(A) and P(B)?

we compare $a_0 \ldots a_{n-1}$ with $b_1 \ldots b_{n-1} b_0$, and find total correlation.
What does "total correlation" mean? How would it be expressed in terms of probabilities like P(A, B) or P(A) or P(B)?

Mentz114

Gold Member
Suppose we have (1) a random string A with bits $a_0 \ldots a_{n-1}$ and a random string B with bits $b_0 \ldots b_{n-1}$. Suppose they are not correlated, and we have $P(A,B)=P(A)P(B)$.

Now suppose (2) we compare $a_0 \ldots a_{n-1}$ with the rotated string $b_1 \ldots b_{n-1} b_0$, and find total correlation.

Now are A and B correlated or not? If we take the alignment of A and B in (1), we would say they are not correlated. However, A and B contain a correlation that reveals itself in (2). The probability of such a correlation arising just by chance is vanishingly small: independent sources would very probably not generate such a correlation.

So the correlation must be the result of something. You could get the same result if you took all the ones and put them together, and similarly for the zeros. In any case there is a non-random cause for the resulting correlation (a physical cause).

Maybe that is what I mean by 'not truly random'.
The correlation between your strings is found by lining them up and counting coincidences. There is no correlation of a single bit. Coincidences are all you've got.

PeterDonis

Mentor
You could have the same result if you take all the ones and put them together with the other ones, and similarly for the zeros.
No, that's called cherry-picking your data.

Mentz114

Gold Member
Suppose we have (1) a random string A with bits $a_0 \ldots a_{n-1}$ and a random string B with bits $b_0 \ldots b_{n-1}$. Suppose they are not correlated, and we have $P(A,B)=P(A)P(B)$.

Now suppose (2) we compare $a_0 \ldots a_{n-1}$ with the rotated string $b_1 \ldots b_{n-1} b_0$, and find total correlation.

Now are A and B correlated or not? If we take the alignment of A and B in (1), we would say they are not correlated. However, A and B contain a correlation that reveals itself in (2). The probability of such a correlation arising just by chance is vanishingly small: independent sources would very probably not generate such a correlation.

So the correlation must be the result of something. You could get the same result if you took all the ones and put them together, and similarly for the zeros. In any case there is a non-random cause for the resulting correlation (a physical cause).

Maybe that is what I mean by 'not truly random'.
You are floundering. Correlation is a statistical property. The expectation of the correlation between two random bit strings is zero by definition!

The correlation between streams A and B, $-1 \le \rho_{ab} = (2c-n)/n \le 1$ (where $c$ is the number of coincidences), has expectation zero because the number of coincidences tends to $n/2$. But the expectation of $\rho^2$ is not zero, so you get fluctuations.

The way to reproduce an EPR dataset is to have a machine that produces two random streams and an EPR demon that inserts anti-correlated pairs at random intervals. If the demon tells you which pairs are its work, you can pick them out and get perfect anti-correlation. The remaining data will have $\langle\rho\rangle = 0$. If all the data is used, then $\langle\rho\rangle \lt 0$, i.e. the expectation of the correlation is negative, not zero.

So a good EPR experiment could comprise a hundred-million-bit string and give a result like $\hat{\rho} = -0.14325378 \pm 0.00001$, which would show something strange was happening. Time to call Rosencrantz & Guildenstern.
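The recipe above can be sketched in code (stream length, demon fraction, and seed are arbitrary choices):

```python
import random

random.seed(2)

def rho(a, b):
    """Correlation (2c - n)/n, with c the number of coincidences a[i] == b[i]."""
    n = len(a)
    c = sum(x == y for x, y in zip(a, b))
    return (2 * c - n) / n

n = 100_000
a = [random.randint(0, 1) for _ in range(n)]
b = [random.randint(0, 1) for _ in range(n)]
rho_indep = rho(a, b)            # fluctuates around 0, size ~ 1/sqrt(n)

# The 'EPR demon' overwrites a random 20% of positions with
# anti-correlated pairs, dragging the overall correlation negative.
for i in random.sample(range(n), n // 5):
    a[i] = random.randint(0, 1)
    b[i] = 1 - a[i]
rho_mixed = rho(a, b)            # expectation roughly -0.2
```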

Last edited:

entropy1

Gold Member
What are these probabilities? What is P(A, B) and what are P(A) and P(B)?
P(X=1) in the binary case is the fraction of bits equal to 1 out of the total number of samples.
P(A=1,B=1) is the fraction of bit pairs that are both 1 out of the total number of samples (pairs).
What does "total correlation" mean? How would it be expressed in terms of probabilities like P(A, B) or P(A) or P(B)?
P(A=1,B=1)=P(A=1|B=1)P(B=1), where P(A=1|B=1)=1 in case of total correlation. That is: P(A=1,B=1)=P(A=1)=P(B=1).

@Mentz114: It could be my limited mastery of the English language, but, with all due respect, I am afraid I don't understand what you mean.
No, that's called cherry-picking your data.
Yes. I meant it as an example of a non-random cause.

You can claim that the entropy of the random content decreases in fractional bits, but that could also mean that the amount of randomness decreases in favor of non-randomness.

Last edited:

PeterDonis

Mentor
P(X=1) in the binary case is the ratio of the #bits equal to 1 relative to the total #samples.
P(A=1,B=1) is the ratio of pairs of bits that are both 1 compared to the total #samples (pairs).
Ok. But "pairs of bits" here means (or should mean--if you are defining it differently, you are doing it wrong) "pairs of bits measured in the same run of the experiment". So these probabilities, to be meaningful, require a certain way of "lining up" the two bit sequences next to each other: bits 0 and 0, bits 1 and 1, bits 2 and 2, etc., of each sample. Otherwise you are making meaningless comparisons; there is no physical meaning to comparing bit 0 from one sample and bit 1 of the other, because they are from different runs of the experiment.

Similarly, if you pick only the "1" bits out of each sample and match them up with each other, you are making a meaningless comparison.

I meant it as an example of a non-random cause.
How you meant it doesn't change the fact that it's meaningless. See above.

You can claim that the entropy of the random content decreases in fractional bits
I have said no such thing. You are confused.

What I said was that if you have a pair of bits that are correlated, the entropy of that pair of bits, as a system, will be less than the entropy of two random uncorrelated bits; and if the correlation is only partial, the entropy of the two-bit system will not be an integral number of bits. But that doesn't mean we made the correlated pair of bits out of the two random uncorrelated bits and thereby decreased their entropy.
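This point can be made concrete; a minimal sketch, assuming a pair whose bits agree 75% of the time (an illustrative figure):

```python
from math import log2

def joint_entropy(joint):
    """Shannon entropy in bits of a joint distribution (dict of probabilities)."""
    return -sum(p * log2(p) for p in joint.values() if p > 0)

# Two uncorrelated random bits: 2 bits of entropy.
uncorr = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
# A perfectly correlated pair: 1 bit.
full = {(0, 0): 0.5, (1, 1): 0.5}
# A partially correlated pair (bits agree 75% of the time): fractional bits.
partial = {(0, 0): 0.375, (0, 1): 0.125, (1, 0): 0.125, (1, 1): 0.375}

H_uncorr = joint_entropy(uncorr)    # 2.0
H_full = joint_entropy(full)        # 1.0
H_partial = joint_entropy(partial)  # ~1.81, strictly between 1 and 2
```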

entropy1

Gold Member
Ok. But "pairs of bits" here means (or should mean--if you are defining it differently, you are doing it wrong) "pairs of bits measured in the same run of the experiment". So these probabilities, to be meaningful, require a certain way of "lining up" the two bit sequences next to each other: bits 0 and 0, bits 1 and 1, bits 2 and 2, etc., of each sample. Otherwise you are making meaningless comparisons; there is no physical meaning to comparing bit 0 from one sample and bit 1 of the other, because they are from different runs of the experiment.

Similarly, if you pick only the "1" bits out of each sample and match them up with each other, you are making a meaningless comparison.
If a string $x_0 \ldots x_{n-1}$ is random, then the rotated string $x_1 \ldots x_{n-1} x_0$ is random too, right? So what I meant to illustrate is that, since rotating a string does not change its randomness, finding a (strong) correlation with another random string could deny its total randomness.

I understand that in an experimental setting the correlation is linked to the physical setup, which has restrictions to its operation. That is what I mean by 'non-random elements'.

My approach may be seen as more theoretical, more abstract, but I hold the reasoning to be valid (without inflating it needlessly).
What I said was that if you have a pair of bits that are correlated, the entropy of that pair of bits, as a system, will be less than the entropy of two random uncorrelated bits; and if the correlation is only partial, the entropy of the two-bit system will not be an integral number of bits. But that doesn't mean we made the correlated pair of bits out of the two random uncorrelated bits and thereby decreased their entropy.
AFAICS I agree.

Last edited:

PeterDonis

Mentor
If a string $x_0 \ldots x_{n-1}$ is random, then the rotated string $x_1 \ldots x_{n-1} x_0$ is random too, right?
It is true that P(A) and P(B) are unchanged by reordering the string. That's obvious, because those probabilities only depend on the total numbers of 0 or 1 bits, not on their order. However, just knowing P(A) and P(B), by itself, does not tell you whether a string is "random". In fact, you have not given, anywhere in this thread that I can see, a definition of what you mean by "random".

Also, if we have two strings, and we reorder only one of them, that will, in general, change P(A, B), since that probability relies on comparing corresponding bits of each string. But only one such comparison is actually meaningful: the one that compares bits from each string that came from the same run of the experiment. Any other comparison is meaningless.

I understand that in an experimental setting the correlation is linked to the physical setup, which has restrictions to its operation. That is what I mean by 'non-random elements'.
What "restrictions to its operation" are you talking about? And why do you think such restrictions would be appropriately called "non-random elements"? (Note that this depends on what you mean by "random", which, as I noted above, you have not specified.)

entropy1

Gold Member
By random I mean that, in the binary case, the limit as $l \to \infty$ of the probability of getting a fragment of $n$ identical bits in a random string of length $l$ is $(\frac{1}{2})^{n-1}$. There are probably standard deviations one could compute for this; my own knowledge of mathematics is too limited for that.
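This definition can be tested empirically; a sketch with arbitrary string length, fragment size, and seed:

```python
import random

random.seed(3)
l, n = 200_000, 5
bits = [random.randint(0, 1) for _ in range(l)]

# Fraction of length-n fragments whose bits are all identical; for a
# uniform random string this tends to (1/2)**(n-1) as l grows.
windows = l - n + 1
identical = sum(len(set(bits[i:i + n])) == 1 for i in range(windows))
frac = identical / windows
expected = 0.5 ** (n - 1)   # 1/16 for n = 5
```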
What "restrictions to its operation" are you talking about? And why do you think such restrictions would be appropriately called "non-random elements"?
This is probably an example of my limited knowledge of English. Maybe "operating conditions" is a better term?
But only one such comparison is actually meaningful: the one that compares bits from each string that came from the same run of the experiment. Any other comparison is meaningless.
Good point. But on what grounds would you call the results 'random'?

Mentz114

Gold Member
P(X=1) in the binary case is the fraction of bits equal to 1 out of the total number of samples.
P(A=1,B=1) is the fraction of bit pairs that are both 1 out of the total number of samples (pairs).

P(A=1,B=1)=P(A=1|B=1)P(B=1), where P(A=1|B=1)=1 in case of total correlation. That is: P(A=1,B=1)=P(A=1)=P(B=1).

@Mentz114: It could be my limited mastery of the English language, but, with all due respect, I am afraid I don't understand what you mean.

Yes. I meant it as an example of a non-random cause.

You can claim that the entropy of the random content decreases in fractional bits, but that could also mean that the amount of randomness decreases in favor of non-randomness.
There are many criteria one can use to define the randomness of a bit string. The most important is that P(1) = P(0) = 1/2. There is also the autocorrelation, which measures how many times a 1 is followed by a 1 and a 0 is followed by a 0. The formula is the same as the correlation between two strings and also has expectation 0.
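The autocorrelation described above can be estimated with the same $(2c-n)/n$ form; a sketch with arbitrary length and seed:

```python
import random

random.seed(4)
n = 100_000
bits = [random.randint(0, 1) for _ in range(n)]

# Lag-1 autocorrelation: c counts positions where a bit is followed by an
# identical bit; the estimate is (2c - m)/m over the m = n - 1 adjacent pairs.
m = n - 1
c = sum(bits[i] == bits[i + 1] for i in range(m))
auto = (2 * c - m) / m
# For an independent uniform bit string this fluctuates around 0.
```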

I have to say that I don't follow what point you are trying to make so I'll leave it there.


"Are entanglement correlations truly random?"

Physics Forums Values

We Value Quality
• Topics based on mainstream science
• Proper English grammar and spelling
We Value Civility
• Positive and compassionate attitudes
• Patience while debating
We Value Productivity
• Disciplined to remain on-topic
• Recognition of own weaknesses
• Solo and co-op problem solving