entropy1
Even in the case of parallel bases/full correlation?

PeterDonis said: because measurements on entangled quantum bits can violate the Bell inequalities.
entropy1 said: Even in the case of parallel bases/full correlation?
Ok. I think that is important. Are you willing and able to suggest a Google search term for this, I guess, entanglement-entropy? (on that word I find only very advanced articles)

PeterDonis said: Of entropy, yes.
entropy1 said: entanglement-entropy

entropy1 said: on that word I find only very advanced articles
Suppose we have (1) a random string A with bits a0..an-1 and a random string B with bits b0..bn-1. Suppose they are not correlated, and we have P(A,B) = P(A)P(B).

Simon Phoenix said: Yes - basically think of putting your binary string on a wheel. Now copy it and put this on an 'inner' wheel. Rotate the inner wheel. With overwhelming probability there's only one position of the two wheels where there is perfect correlation between the bits in the same positions on the wheels.
Now do the same but with 2 independently produced random strings - now, with overwhelming probability, there will be no position we can rotate to for which there's a perfect correlation.
But so what? I really don't see where you're going with this, or how it is helpful. There may well be something useful in this perspective - it's just I'm not seeing it yet.
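Simon Phoenix's wheel experiment is easy to simulate. A minimal Python sketch (the function names, string length, and fixed seed are my own illustrative choices, not from the thread):

```python
import random

def match_fraction(a, b):
    """Fraction of positions where two equal-length bit strings agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def rotations_with_full_match(a, b):
    """Number of rotations of b that agree with a in every position."""
    n = len(b)
    return sum(match_fraction(a, b[k:] + b[:k]) == 1.0 for k in range(n))

random.seed(0)
n = 64
a = [random.randint(0, 1) for _ in range(n)]
b = [random.randint(0, 1) for _ in range(n)]

# A copy of a lines up perfectly in at least one rotation (k = 0).
print(rotations_with_full_match(a, a[:]))

# Two independent strings almost surely line up perfectly in no rotation.
print(rotations_with_full_match(a, b))
```

For independent strings the chance that any of the n rotations matches perfectly is on the order of n·2⁻ⁿ, which for n = 64 is negligible - exactly the "overwhelming probability" in the wheel picture.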
entropy1 said: Suppose they are not correlated, and we have P(A,B) = P(A)P(B).

entropy1 said: we compare a0..an-1 with b1..bn-1b0, and find total correlation.
The correlation between your strings is found by lining them up and counting coincidences. There is no correlation of a single bit. Coincidences are all you've got.

entropy1 said: Suppose we have (1) a random string A with bits a0..an-1 and a random string B with bits b0..bn-1. Suppose they are not correlated, and we have P(A,B) = P(A)P(B).
Now suppose (2) we compare a0..an-1 with b1..bn-1b0, and find total correlation.
Now are A and B correlated or not? If we take the alignment of A and B in (1), we would say they are not correlated. However, A and B contain a correlation that reveals itself in (2). We would say that a correlation 'arising' just by chance is unlikely. Independent sources would very probably not generate such a correlation by chance.
So the correlation must be the result of something. You could have the same result if you take all the ones and put them together with the other ones, and similarly for the zeros. In any case there is a non-random cause for the resulting correlation (a physical cause).
Maybe that is what I mean by 'not truly random'.
entropy1 said: You could have the same result if you take all the ones and put them together with the other ones, and similarly for the zeros.
You are floundering. Correlation is a statistical property. The expectation of the correlation between two random bit strings is zero by definition!

entropy1 said: Suppose we have (1) a random string A with bits a0..an-1 and a random string B with bits b0..bn-1. Suppose they are not correlated, and we have P(A,B) = P(A)P(B).
Now suppose (2) we compare a0..an-1 with b1..bn-1b0, and find total correlation.
Now are A and B correlated or not? If we take the alignment of A and B in (1), we would say they are not correlated. However, A and B contain a correlation that reveals itself in (2). We would say that a correlation 'arising' just by chance is unlikely. Independent sources would very probably not generate such a correlation by chance.
So the correlation must be the result of something. You could have the same result if you take all the ones and put them together with the other ones, and similarly for the zeros. In any case there is a non-random cause for the resulting correlation (a physical cause).
Maybe that is what I mean by 'not truly random'.
P(X=1) in the binary case is the ratio of the #bits equal to 1 relative to the total #samples.

PeterDonis said: What are these probabilities? What is P(A, B) and what are P(A) and P(B)?
P(A=1,B=1) = P(A=1|B=1)P(B=1), where P(A=1|B=1) = 1 in the case of total correlation. That is: P(A=1,B=1) = P(A=1) = P(B=1).

PeterDonis said: What does "total correlation" mean? How would it be expressed in terms of probabilities like P(A, B) or P(A) or P(B)?
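These empirical ratios are easy to check numerically. A short Python sketch (the function names, sample size, and seed are illustrative choices of mine):

```python
import random

def p1(bits):
    """Empirical P(X=1): fraction of bits equal to 1."""
    return sum(bits) / len(bits)

def p11(a, b):
    """Empirical P(A=1, B=1): fraction of aligned pairs where both bits are 1."""
    return sum(x == 1 and y == 1 for x, y in zip(a, b)) / len(a)

random.seed(1)
n = 100_000
a = [random.randint(0, 1) for _ in range(n)]
b = [random.randint(0, 1) for _ in range(n)]

# Independent strings: P(A=1, B=1) is approximately P(A=1) * P(B=1), about 0.25.
print(round(p11(a, b), 2))

# Total correlation (B is a copy of A): P(A=1, B=1) = P(A=1) = P(B=1) exactly.
print(p11(a, a) == p1(a))
```

The second print is exactly True because when the strings are identical, every pair that is (1, 1) is just a 1 in A, so the two ratios coincide, which is the P(A=1|B=1) = 1 case above.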
Yes. I meant it as an example of a non-random cause.

PeterDonis said: No, that's called cherry-picking your data.
entropy1 said: P(X=1) in the binary case is the ratio of the #bits equal to 1 relative to the total #samples.

P(A=1,B=1) is the ratio of pairs of bits that are both 1 compared to the total #samples (pairs).

entropy1 said: I meant it as an example of a non-random cause.

entropy1 said: You can claim that the entropy of the random content decreases in fractional bits
If a random string x0..xn-1 is random, then the string x1..xn-1x0 is random too, right? So, what I meant to illustrate is that, since rotating a string does not change its randomness, finding a (strong) correlation with another random string could deny its total randomness.

PeterDonis said: Ok. But "pairs of bits" here means (or should mean--if you are defining it differently, you are doing it wrong) "pairs of bits measured in the same run of the experiment". So these probabilities, to be meaningful, require a certain way of "lining up" the two bit sequences next to each other: bits 0 and 0, bits 1 and 1, bits 2 and 2, etc., of each sample. Otherwise you are making meaningless comparisons; there is no physical meaning to comparing bit 0 from one sample and bit 1 of the other, because they are from different runs of the experiment.
Similarly, if you pick only the "1" bits out of each sample and match them up with each other, you are making a meaningless comparison.
AFAICS I agree.

PeterDonis said: What I said was that if you have a pair of bits that are correlated, the entropy of that pair of bits, as a system, will be less than the entropy of two random uncorrelated bits; and if the correlation is only partial, the entropy of the two-bit system will not be an integral number of bits. But that doesn't mean we made the correlated pair of bits out of the two random uncorrelated bits and thereby decreased their entropy.
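PeterDonis's point about non-integral entropy can be made concrete with the Shannon entropy of the two-bit system. A small Python sketch (the 90% agreement figure is my own illustrative choice):

```python
from math import log2

def entropy(dist):
    """Shannon entropy in bits of a probability distribution (outcome -> probability)."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# Two independent fair bits: four equally likely outcomes -> 2 bits.
print(entropy({"00": 0.25, "01": 0.25, "10": 0.25, "11": 0.25}))  # 2.0

# Perfect correlation: only 00 and 11 occur -> 1 bit.
print(entropy({"00": 0.5, "11": 0.5}))  # 1.0

# Partial correlation (bits agree 90% of the time) -> a non-integral entropy.
h = entropy({"00": 0.45, "11": 0.45, "01": 0.05, "10": 0.05})
print(round(h, 3))  # 1.469
```

So a partially correlated pair carries a fractional number of bits of entropy, between the 1 bit of the fully correlated case and the 2 bits of the uncorrelated case, exactly as described above.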
entropy1 said: If a random string x0..xn-1 is random, then the string x1..xn-1x0 is random too, right?

entropy1 said: I understand that in an experimental setting the correlation is linked to the physical setup, which has restrictions to its operation. That is what I mean by 'non-random elements'.
This is probably an example of my limited knowledge of English. Maybe "operating conditions" is a better term?

PeterDonis said: What "restrictions to its operation" are you talking about? And why do you think such restrictions would be appropriately called "non-random elements"?
Good point. But on what grounds would you call the results 'random'?

PeterDonis said: But only one such comparison is actually meaningful: the one that compares bits from each string that came from the same run of the experiment. Any other comparison is meaningless.
There are many criteria one can use to define the randomness of a bit string. The most important is that P(1) = P(0) = 1/2. There is also the auto-correlation, which measures how many times a 1 is followed by a 1, and a 0 is followed by a 0. The formula is the same as the correlation between two strings and also has expectation 0.

entropy1 said: P(X=1) in the binary case is the ratio of the #bits equal to 1 relative to the total #samples.

P(A=1,B=1) is the ratio of pairs of bits that are both 1 compared to the total #samples (pairs).
P(A=1,B=1) = P(A=1|B=1)P(B=1), where P(A=1|B=1) = 1 in the case of total correlation. That is: P(A=1,B=1) = P(A=1) = P(B=1).
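The correlation and auto-correlation measures Mentz114 describes can be sketched in Python. The ±1 mapping of bits (so that the expectation is 0 for independent strings), the sample size, and the seed are my own choices, since the thread does not give the formula explicitly:

```python
import random

def correlation(a, b):
    """Correlation of two aligned bit strings, with bits mapped to -1/+1.
    Independent random strings have expected correlation 0."""
    return sum((2 * x - 1) * (2 * y - 1) for x, y in zip(a, b)) / len(a)

def autocorrelation(a):
    """Lag-1 auto-correlation: correlate the string with itself shifted by one.
    This counts how often a 1 follows a 1 or a 0 follows a 0, versus mismatches."""
    return correlation(a[:-1], a[1:])

random.seed(2)
n = 100_000
a = [random.randint(0, 1) for _ in range(n)]
b = [random.randint(0, 1) for _ in range(n)]

print(abs(correlation(a, b)) < 0.02)   # near zero for independent strings
print(correlation(a, a))               # exactly 1.0: a string fully correlates with itself
print(abs(autocorrelation(a)) < 0.02)  # near zero for a random string
```

The fluctuations around 0 are of order 1/√n, which is why a long sample is used before declaring the correlation "zero".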
@Mentz114: It could be my limited mastery of the English language, but, with all due respect, I am afraid I don't understand what you mean.
Yes. I meant it as an example of a non-random cause.
You can claim that the entropy of the random content decreases in fractional bits, but that could also mean that the amount of randomness decreases in favor of non-randomness.
To put it simply: I am asking if the sources of a correlation are 100% random.

Mentz114 said: I have to say that I don't follow what point you are trying to make, so I'll leave it there.
What I mean is that the experiment(al setup) is the "cherry-picker" in this case, in my view.

PeterDonis said: Similarly, if you pick only the "1" bits out of each sample and match them up with each other, you are making a meaningless comparison.
How you meant it doesn't change the fact that it's meaningless. See above.
I assume you mean 'deviate from randomness'. This big topic is part of standard statistical theory, and the Wiki articles are a good introduction.

entropy1 said: To put it simply: I am asking if the sources of a correlation are 100% random.
You are more than welcome to participate, which I would like; however, I respect whatever you decide.
entropy1 said: By random I mean that, in the binary case, the limit as l → ∞ of the probability of getting a fragment of n identical bits in a random string of length l is ##(\frac{1}{2})^{n-1}##.
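entropy1's proposed definition can be tested empirically: in a fair random string, a window of n bits is all-identical with probability (1/2)^(n-1) (the first bit is free; each of the remaining n-1 bits must match it). A rough Python sketch (the window-counting approach, string length, and seed are my own choices):

```python
import random

def equal_run_fraction(bits, n):
    """Fraction of length-n windows whose bits are all identical.
    For a fair random string this approaches (1/2)**(n-1) as the string grows."""
    total = len(bits) - n + 1
    hits = sum(len(set(bits[i:i + n])) == 1 for i in range(total))
    return hits / total

random.seed(3)
bits = [random.randint(0, 1) for _ in range(200_000)]

# Compare the empirical fraction with the predicted (1/2)**(n-1).
for n in (2, 3, 4):
    print(n, round(equal_run_fraction(bits, n), 3), (1 / 2) ** (n - 1))
```

For n = 2, 3, 4 the empirical fractions come out close to 0.5, 0.25, and 0.125, matching the quoted formula.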
entropy1 said: I am asking if the sources of a correlation are 100% random.

entropy1 said: What I mean is that the experiment(al setup) is the "cherry-picker" in this case, in my view.

entropy1 said: This is probably an example of my limited knowledge of English.
From the data of my computer code.

PeterDonis said: Where are you getting this definition of "random" from?
Well, I can reassure you that was not my angle of approach. I introduced the cherry-picking part to illustrate how improbable it is to get a correlation out of pure randomness.

PeterDonis said: The experimental setup certainly tells you what comparison between two bit strings is meaningful. But you seemed to be saying that any such comparison was meaningful, because you were talking about rearranging how the two bit strings are compared with each other (by, for example, shifting one bit string relative to the other and then comparing). If you do that, you aren't doing what the experimental setup tells you to do; you're doing something different, and meaningless. That's what I meant by cherry-picking the data.
That's not entirely fair - I think it is a matter of starting point.

PeterDonis said: I think the entire topic of this thread might be an artifact of your limited knowledge of English. That's why I keep asking what you mean by the word "random"; I don't think you mean what that word usually means in English.
entropy1 said: how improbable it is to get a correlation out of pure randomness.
Well, I am afraid I can't do better than this currently. I will ponder some more.

PeterDonis said: But you still haven't really explained what you mean by this.
You should start by finding out the customary meanings of randomness and also how to calculate a correlation.

entropy1 said: Well, I am afraid I can't do better than this currently. I will ponder some more.
A major problem in getting your question answered is that your terminology is sloppy, in fact truly sloppy.

entropy1 said: Suppose we have two truly random sources A and B that generate bits ('0' or '1') synchronously. If we measure the correlation between the respective bits generated, we find a random, i.e. no, correlation.
Now suppose A and B are two detectors that register polarization-entangled photons passing respective polarization filters. We can define bits as 'detection'='1' and 'no detection'='0'. A and B individually yield random results. However, there is in almost every case a non-zero correlation, depending on the angle of the filters.
So my question would then be: since the detections of the entangled particles often exhibit a different correlation than truly random sources, are the detections purely random in the case of entanglement? (Or do they only seem random?)
Demystifier said: Define "truly random"!
You failed to make a definition. The terms "random" and "truly random" are neither used nor defined in probability texts. And after reading more of your posts, it is not clear to me what you mean.

PeterDonis said: Where are you getting this definition of "random" from?
I thought I remembered the basics, but this morning my meds demand my brain, so I would have to look it up.

Zafa Pi said: Can you ask your question from the above formulation?
If the term is not defined in scientific literature, then why are you asking me, a layman, to define it? By the way, I gave one:

Zafa Pi said: You failed to make a definition. The terms "random" and "truly random" are neither used nor defined in probability texts.
Anyway, I will think about what I mean by 'truly' random.

entropy1 said: By random I mean that, in the binary case, the limit as l → ∞ of the probability of getting a fragment of n identical bits in a random string of length l is ##(\frac{1}{2})^{n-1}##. There are probably standard deviations one could run on this. My own knowledge of mathematics is too limited for that.
entropy1 said: If the term is not defined in scientific literature, then why are you asking me, a layman, to define it?
I don't know what you mean by fragment. Do you have a link?

entropy1 said: By random I mean that, in the binary case, the limit as l → ∞ of the probability of getting a fragment of n identical bits in a random string of length l is ##(\frac{1}{2})^{n-1}##. There are probably standard deviations one could run on this. My own knowledge of mathematics is too limited for that.
I now think you were trying to define a binary normal sequence, but failed.

entropy1 said: By random I mean that, in the binary case, the limit as l → ∞ of the probability of getting a fragment of n identical bits in a random string of length l is ##(\frac{1}{2})^{n-1}##. There are probably standard deviations one could run on this. My own knowledge of mathematics is too limited for that.