Are entanglement correlations truly random?

  • #51
Simon Phoenix said:
DrChinese has given the example of full entanglement swapping, but there's a kind of 'half-way house' that might help to shed some light.

Imagine a perfect optical cavity (the experiments were done in very high-Q microwave cavities). Now take a two-level atom in its excited state (Rydberg atoms can be used as a reasonable approximation to a two-level atom). Fire this atom through the cavity with a specific transit time such that the field and atom become perfectly entangled.

Now suppose we live in an ideal world and we can maintain the entanglement between the cavity field and the atom. Go make a cup of tea. Ship the atom off to the outer moons of Saturn.

Now take a second atom prepared in its ground state and fire it through the cavity with a different, tailored transit time. Tailor this time just right and, after this second atom has gone through, the two atoms are entangled and the cavity field is 'decoupled'.

The two atoms have never directly interacted - and (if we can maintain the entanglement long enough) the two atoms can be fired through the cavity years apart.

OK that's wildly fanciful in terms of shipping things off to Saturn and maintaining entanglement for years - but the experiments have been performed (although with more modest parameters).

Thanks for the great example. It seems more than ever to me as though "entangled" isn't quite the right word for the phenomenon. There is no co-dependence. The two atoms have just been configured/seeded to perform with some predictable similarity. I.e., only their states are correlated.

Question: Does it have to be the same "perfect optical cavity," or could these two atoms be "entangled" via two identical cavities in distant locations?
 
  • #52
Simon Phoenix said:
Yes - basically think of putting your binary string on a wheel. Now copy it and put this on an 'inner' wheel. Rotate the inner wheel. With overwhelming probability there's only one position of the two wheels where there is perfect correlation between the bits in the same positions on the wheels.

Now do the same but with 2 independently produced random strings - now, with overwhelming probability, there will be no position we can rotate to for which there's a perfect correlation.

But so what? I really don't see where you're going with this, or how it is helpful. There may well be something useful in this perspective - it's just I'm not seeing it yet.
What I noticed is that the case of two truly independent sources differs from the case of pairs of entangled particles, in the sense that the latter case 'contains' a correlation. So if the latter case differs from 'true randomness', you either have to conclude that it is 'not truly' random, or you have to extend the definition of 'true randomness' to include correlation.

The difference between the two is that in the first case the sources are independent, and in the second they are dependent. If you define correlation = dependence, then you can just observe that there is a correlation and leave it at that. It would be a tautology. But the correlation has a cause, and the extension with this cause is what, in my eyes, characterizes the difference. But maybe that's a tautology too. :wink:
 
Last edited:
  • #53
entropy1 said:
So if the latter case differs from 'true randomness', you either have to conclude that it is 'not truly' random, or you have to extend the definition of 'true randomness' to include correlation.

Seems to me like you're going round in circles a bit here. Let's focus on just binary variables. I'm assuming by truly random you actually mean uniformly at random so that the probability of obtaining 1 is 1/2.

You're conflating the notion of randomness with the notion of dependence, I think. Suppose we have two random processes ##A## and ##B##, each spitting out binary strings. If they're independent processes then the total entropy is simply the sum of the entropy of process ##A## and the entropy of process ##B##. If the processes are dependent then the total entropy is less than this. Really, what's your issue here?

If we do have ##S(A,B) \lt S(A) + S(B)## then ##A## and ##B## are both random processes, just not independently so. This is all just standard probability theory.

OK, it's usually expressed in terms of conditional probabilities, so that for independent random processes we would write ##P(A|B) = P(A)## and ##P(B|A) = P(B)## or, equivalently, we would write the joint distribution ##P(A,B) = P(A)P(B)##.

For dependent processes we would have ##P(A,B) = P(A)P(B|A) = P(B)P(A|B)##.

There's no need at all to redefine anything. There's nothing that you're saying here other than "two random processes can be dependent".
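To make the entropy statement concrete, here is a minimal Python sketch (my own illustration, not from any textbook; the 90% copying rule is an arbitrary choice of dependence) that estimates ##S(A)##, ##S(B)## and ##S(A,B)## empirically:

```python
# Minimal sketch: empirical check that S(A,B) = S(A) + S(B) for independent
# binary processes and S(A,B) < S(A) + S(B) for dependent ones.
import random
from collections import Counter
from math import log2

def entropy(samples):
    """Empirical Shannon entropy, in bits, of a list of outcomes."""
    n = len(samples)
    return -sum(c / n * log2(c / n) for c in Counter(samples).values())

N = 100_000
a = [random.randint(0, 1) for _ in range(N)]
b_indep = [random.randint(0, 1) for _ in range(N)]          # independent of a
b_dep = [x if random.random() < 0.9 else 1 - x for x in a]  # mostly copies a

for label, b in (("independent", b_indep), ("dependent", b_dep)):
    print(f"{label}: S(A)+S(B) = {entropy(a) + entropy(b):.3f} bits, "
          f"S(A,B) = {entropy(list(zip(a, b))):.3f} bits")
```

The dependent case prints roughly 1.47 bits for ##S(A,B)## (namely ##1 + H(0.9)##) instead of 2.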
 
  • #54
If there is no correlation between ##A## and ##B##, we have ##P(A,B)=P(A)P(B)##: factorization. If there is a correlation between ##A## and ##B##, then ##P(A,B) \neq P(A)P(B)##. Instead, we have ##P(A,B)=P(A|B)P(B)## with ##P(A) \neq P(A|B)##. We can write ##P(A,B=0)=P(A|B=0)P(B=0)## and ##P(A,B=1)=P(A|B=1)P(B=1)##. So the difference with the factorizing ##P(A,B=x)=P(A)P(B=x)## is that ##P(A)## has been replaced by ##P(A|B=0)## in one case and ##P(A|B=1)## in the other, with ##P(A) \neq P(A|B=0)## and ##P(A) \neq P(A|B=1)##. So you could see it as the probability of ##A## in the correlated case having two different values, namely ##P(A|B=0)## and ##P(A|B=1)##, depending on the outcome of ##B##. This would be a reason for me to call it "not truly random". Does this make any sense?

* With 'A being truly random' I mean that the probability of A doesn't vary.
 
Last edited:
  • #55
The correlation encodes the (relative) measurement bases, so there is more information in ensembles ##A## and ##B## than just two sets of purely random bits.

And I think the entropy of correlated values is lower than that of uncorrelated values. To get all the ones and zeros aligned takes more than pure randomness.
 
Last edited:
  • #56
entropy1 said:
To get all the ones and zeros aligned takes more than pure randomness.
So that's like saying you have a random number generator that outputs 0 or 1 into variable ##A##, then you use a NOT function to place the opposite of ##A## into ##B##, and somehow those anticorrelated values are less random because of it?
 
  • #57
jerromyjon said:
So that's like saying you have a random number generator that outputs 0 or 1 into variable ##A##, then you use a NOT function to place the opposite of ##A## into ##B##, and somehow those anticorrelated values are less random because of it?

He's saying that if you randomly generate one bit, you have one bit of randomness; using a NOT function to generate a second (anti)correlated bit doesn't generate any additional randomness. You don't have two bits of randomness just because you have two bits. You would have to randomly generate both bits to get two bits of randomness, but if you did that, they wouldn't be correlated.
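A toy Python sketch of that point (an illustration only):

```python
# Toy illustration: B is a deterministic NOT of A. The pair is perfectly
# anticorrelated, yet only one random bit was generated per pair.
import random

a = [random.randint(0, 1) for _ in range(10)]
b = [1 - x for x in a]  # the NOT adds no randomness of its own
print(a)
print(b)  # always the bitwise opposite of a
```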
 
  • #58
PeterDonis said:
You would have to randomly generate both bits to get two bits of randomness, but if you did that, they wouldn't be correlated.
Exactly.
entropy1 said:
So you could see it as the probability of ##A## in the correlated case having two different values, namely ##P(A|B=0)## and ##P(A|B=1)##, depending on the outcome of ##B##. This would be a reason for me to call it "not truly random".
entropy1 said:
The correlation encodes the (relative) measurement bases, so there is more information in ensembles ##A## and ##B## than just two sets of purely random bits.
Maybe I'm misinterpreting, maybe I'm missing something... but it seems to me the debate centers around 2 separate values with a single random base... or more specifically 2 correlated values.
 
  • #59
So, if we have a certain amount of correlation, but not complete, do we have a fractional number of bits?

Moreover, if we have a single bit of randomness distributed over two measurement results, isn't that bit a hidden variable?
 
  • #60
entropy1 said:
if we have a certain amount of correlation, but not complete, do we have a fractional number of bits?

Of entropy, yes.

entropy1 said:
if we have a single bit of randomness distributed over two measurement results, isn't that bit a hidden variable?

If the joint state of the two bits is a "hidden variable", then yes. If we're talking about classical bits, then it's fine to look at it that way, because classical bits can't violate the Bell inequalities. If we're talking about entangled quantum bits, then you can consider their joint quantum state as a "hidden variable", but it can't be a local hidden variable in the Bell sense because measurements on entangled quantum bits can violate the Bell inequalities.
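For concreteness, here is a minimal Python sketch (my own illustration; the angles are the standard textbook CHSH settings) of what "violate the Bell inequalities" means quantitatively: local hidden variable models require ##|S| \le 2## in the CHSH form, while the singlet-state correlation ##E(a,b) = -\cos(a-b)## reaches ##2\sqrt{2}##:

```python
# Minimal numerical check of the CHSH quantity S for a spin singlet, where
# quantum mechanics predicts E(a, b) = -cos(a - b). Local hidden variable
# models must satisfy |S| <= 2; the settings below give |S| = 2*sqrt(2).
from math import cos, pi, sqrt

def E(a, b):
    """Singlet correlation of measurements at analyzer angles a and b."""
    return -cos(a - b)

a1, a2 = 0, pi / 2           # Alice's two settings
b1, b2 = pi / 4, 3 * pi / 4  # Bob's two settings

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S), 2 * sqrt(2))   # both ~2.828, above the classical bound of 2
```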
 
  • #61
PeterDonis said:
because measurements on entangled quantum bits can violate the Bell inequalities.
Even in the case of parallel bases/full correlation?
 
  • #62
entropy1 said:
Even in the case of parallel bases/full correlation?

"Violate the Bell inequalities" means over the full range of possible combinations of measurement settings. Obviously if you only pick the one case where both measurements are parallel, you won't violate the inequalities. So what?
 
  • #63
PeterDonis said:
Of entropy, yes.
OK, I think that is important. Are you willing and able to suggest a Google search term for this, I guess, entanglement-entropy? (on that word I find only very advanced articles) :smile:
 
  • #64
entropy1 said:
entanglement-entropy

Yes, that's a good search term.

entropy1 said:
on that word I find only very advanced articles

Yes, that's because it is an advanced topic. :wink:
 
  • #65
Simon Phoenix said:
Yes - basically think of putting your binary string on a wheel. Now copy it and put this on an 'inner' wheel. Rotate the inner wheel. With overwhelming probability there's only one position of the two wheels where there is perfect correlation between the bits in the same positions on the wheels.

Now do the same but with 2 independently produced random strings - now, with overwhelming probability, there will be no position we can rotate to for which there's a perfect correlation.

But so what? I really don't see where you're going with this, or how it is helpful. There may well be something useful in this perspective - it's just I'm not seeing it yet.
Suppose we have (1) a random string ##A## with bits ##a_0 \ldots a_{n-1}## and a random string ##B## with bits ##b_0 \ldots b_{n-1}##. Suppose they are not correlated, and we have ##P(A,B)=P(A)P(B)##.

Now suppose (2) we compare ##a_0 \ldots a_{n-1}## with ##b_1 \ldots b_{n-1} b_0##, and find total correlation.

Now are ##A## and ##B## correlated or not? If we take the alignment of ##A## and ##B## in (1), we would say they are not correlated. However, ##A## and ##B## contain a correlation that reveals itself in (2). Such a correlation is very unlikely to arise just by chance: independent sources would very probably not generate it.

So the correlation must be the result of something. You could get the same result if you took all the ones of one string and put them together with the ones of the other, and similarly for the zeros. In any case there is a non-random cause for the resulting correlation (a physical cause).

Maybe that is what I mean by 'not truly random'.
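To see the point numerically, here is a small Python sketch (with made-up data: ##B## is deliberately constructed as a rotation of ##A##, which independent sources would essentially never give you):

```python
# B is A rotated by one position: the correlation rho = (2c - n)/n is ~0 at
# shift 0 but exactly 1 at shift 1.
import random

n = 1000
A = [random.randint(0, 1) for _ in range(n)]
B = A[-1:] + A[:-1]  # B is A rotated right by one place

def rho(x, y):
    """Correlation (2c - n)/n, with c the number of coinciding bits."""
    c = sum(xi == yi for xi, yi in zip(x, y))
    return (2 * c - len(x)) / len(x)

for shift in range(3):
    print(shift, rho(A, B[shift:] + B[:shift]))  # ~0, then 1.0 at shift 1
```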
 
Last edited:
  • #66
entropy1 said:
Suppose they are not correlated, and we have ##P(A,B)=P(A)P(B)##.

What are these probabilities? What is P(A, B) and what are P(A) and P(B)?

entropy1 said:
we compare ##a_0 \ldots a_{n-1}## with ##b_1 \ldots b_{n-1} b_0##, and find total correlation.

What does "total correlation" mean? How would it be expressed in terms of probabilities like P(A, B) or P(A) or P(B)?
 
  • #67
entropy1 said:
Suppose we have (1) a random string ##A## with bits ##a_0 \ldots a_{n-1}## and a random string ##B## with bits ##b_0 \ldots b_{n-1}##. Suppose they are not correlated, and we have ##P(A,B)=P(A)P(B)##.

Now suppose (2) we compare ##a_0 \ldots a_{n-1}## with ##b_1 \ldots b_{n-1} b_0##, and find total correlation.

Now are ##A## and ##B## correlated or not? If we take the alignment of ##A## and ##B## in (1), we would say they are not correlated. However, ##A## and ##B## contain a correlation that reveals itself in (2). Such a correlation is very unlikely to arise just by chance: independent sources would very probably not generate it.

So the correlation must be the result of something. You could get the same result if you took all the ones of one string and put them together with the ones of the other, and similarly for the zeros. In any case there is a non-random cause for the resulting correlation (a physical cause).

Maybe that is what I mean by 'not truly random'.
The correlation between your strings is found by lining them up and counting coincidences. There is no correlation of a single bit. Coincidences are all you've got.
 
  • #68
entropy1 said:
You could get the same result if you took all the ones of one string and put them together with the ones of the other.

No, that's called cherry-picking your data.
 
  • #69
entropy1 said:
Suppose we have (1) a random string ##A## with bits ##a_0 \ldots a_{n-1}## and a random string ##B## with bits ##b_0 \ldots b_{n-1}##. Suppose they are not correlated, and we have ##P(A,B)=P(A)P(B)##.

Now suppose (2) we compare ##a_0 \ldots a_{n-1}## with ##b_1 \ldots b_{n-1} b_0##, and find total correlation.

Now are ##A## and ##B## correlated or not? If we take the alignment of ##A## and ##B## in (1), we would say they are not correlated. However, ##A## and ##B## contain a correlation that reveals itself in (2). Such a correlation is very unlikely to arise just by chance: independent sources would very probably not generate it.

So the correlation must be the result of something. You could get the same result if you took all the ones of one string and put them together with the ones of the other, and similarly for the zeros. In any case there is a non-random cause for the resulting correlation (a physical cause).

Maybe that is what I mean by 'not truly random'.
You are floundering. Correlation is a statistical property. The expectation of the correlation between two random bit strings is zero by definition!

The correlation between streams ##A## and ##B##, ##-1 \le \rho_{ab} = (2c-n)/n \le 1## (where ##c## is the number of coincidences), has expectation zero because the number of coincidences tends to ##n/2##. But the expectation of ##\rho^2## is not zero, so you get fluctuations.

The way to reproduce an EPR dataset is to have a machine that produces two random streams and an EPR demon that inserts anti-correlated pairs at random intervals. If the demon tells you which are its work, you can pick them out and get perfect anti-correlation. The remaining data will have ##\langle\rho\rangle = 0##. If all the data is used, then ##\langle\rho\rangle < 0##, i.e. the expectation of the correlation is negative, not zero.

So a good EPR experiment could comprise a hundred-million-bit string and give a result like ##\hat{\rho} = -0.14325378 \pm 0.00001##, which would show something strange was happening. Time to call Rosencrantz & Guildenstern.
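A rough simulation of that demon picture (my sketch; the 30% insertion rate is an arbitrary choice):

```python
# Two random streams with anti-correlated pairs inserted at random by a
# "demon"; the overall correlation rho = (2c - n)/n comes out negative.
import random

n = 1_000_000
a, b = [], []
for _ in range(n):
    if random.random() < 0.3:          # demon inserts an anti-correlated pair
        bit = random.randint(0, 1)
        a.append(bit)
        b.append(1 - bit)
    else:                              # otherwise two independent random bits
        a.append(random.randint(0, 1))
        b.append(random.randint(0, 1))

c = sum(x == y for x, y in zip(a, b))
print((2 * c - n) / n)  # about -0.3: negative, as described above
```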
 
Last edited:
  • #70
PeterDonis said:
What are these probabilities? What is P(A, B) and what are P(A) and P(B)?
##P(X=1)## in the binary case is the ratio of the number of bits equal to 1 to the total number of samples.
##P(A=1,B=1)## is the ratio of pairs of bits that are both 1 to the total number of samples (pairs).
PeterDonis said:
What does "total correlation" mean? How would it be expressed in terms of probabilities like P(A, B) or P(A) or P(B)?
##P(A=1,B=1)=P(A=1|B=1)P(B=1)##, where ##P(A=1|B=1)=1## in the case of total correlation. That is: ##P(A=1,B=1)=P(A=1)=P(B=1)##.

@Mentz114: It could be my limited mastery of the English language, but, with all due respect, I am afraid I don't understand what you mean.
PeterDonis said:
No, that's called cherry-picking your data.
Yes. I meant it as an example of a non-random cause.

You can claim that the entropy of the random content decreases in fractional bits, but that could also mean that the amount of randomness decreases in favor of non-randomness.
 
Last edited:
  • #71
entropy1 said:
##P(X=1)## in the binary case is the ratio of the number of bits equal to 1 to the total number of samples.
##P(A=1,B=1)## is the ratio of pairs of bits that are both 1 to the total number of samples (pairs).

Ok. But "pairs of bits" here means (or should mean--if you are defining it differently, you are doing it wrong) "pairs of bits measured in the same run of the experiment". So these probabilities, to be meaningful, require a certain way of "lining up" the two bit sequences next to each other: bits 0 and 0, bits 1 and 1, bits 2 and 2, etc., of each sample. Otherwise you are making meaningless comparisons; there is no physical meaning to comparing bit 0 from one sample and bit 1 of the other, because they are from different runs of the experiment.

Similarly, if you pick only the "1" bits out of each sample and match them up with each other, you are making a meaningless comparison.

entropy1 said:
I meant it as an example of a non-random cause.

How you meant it doesn't change the fact that it's meaningless. See above.

entropy1 said:
You can claim that the entropy of the random content decreases in fractional bits

I have said no such thing. You are confused.

What I said was that if you have a pair of bits that are correlated, the entropy of that pair of bits, as a system, will be less than the entropy of two random uncorrelated bits; and if the correlation is only partial, the entropy of the two-bit system will not be an integral number of bits. But that doesn't mean we made the correlated pair of bits out of the two random uncorrelated bits and thereby decreased their entropy.
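A short sketch of that last claim, under one convenient parametrization (mine, chosen for illustration): take two marginally uniform bits that agree with probability ##p##; their joint entropy works out to ##1 + H(p)## bits, with ##H## the binary entropy function.

```python
# Joint entropy of a pair of uniformly random bits that agree with
# probability p: P(00) = P(11) = p/2 and P(01) = P(10) = (1-p)/2,
# which gives H(pair) = 1 + H2(p) bits.
from math import log2

def H2(p):
    """Binary entropy function, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

for p in (0.5, 0.75, 0.9, 1.0):
    print(f"P(agree) = {p}: joint entropy = {1 + H2(p):.3f} bits")
# prints 2.000, 1.811, 1.469, 1.000; integral only at the two extremes
```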
 
  • #72
PeterDonis said:
Ok. But "pairs of bits" here means (or should mean--if you are defining it differently, you are doing it wrong) "pairs of bits measured in the same run of the experiment". So these probabilities, to be meaningful, require a certain way of "lining up" the two bit sequences next to each other: bits 0 and 0, bits 1 and 1, bits 2 and 2, etc., of each sample. Otherwise you are making meaningless comparisons; there is no physical meaning to comparing bit 0 from one sample and bit 1 of the other, because they are from different runs of the experiment.

Similarly, if you pick only the "1" bits out of each sample and match them up with each other, you are making a meaningless comparison.
If a string ##x_0 \ldots x_{n-1}## is random, then the string ##x_1 \ldots x_{n-1} x_0## is random too, right? So what I meant to illustrate is that, since rotating a string does not change its randomness, finding a (strong) correlation with another random string could call its total randomness into question.

I understand that in an experimental setting the correlation is linked to the physical setup, which has restrictions to its operation. That is what I mean by 'non-random elements'.

My approach may be seen as more theoretical and abstract, but I hold the reasoning valid for it (not inflating it needlessly :biggrin: ).
PeterDonis said:
What I said was that if you have a pair of bits that are correlated, the entropy of that pair of bits, as a system, will be less than the entropy of two random uncorrelated bits; and if the correlation is only partial, the entropy of the two-bit system will not be an integral number of bits. But that doesn't mean we made the correlated pair of bits out of the two random uncorrelated bits and thereby decreased their entropy.
AFAICS I agree. :biggrin:
 
Last edited:
  • #73
entropy1 said:
If a string ##x_0 \ldots x_{n-1}## is random, then the string ##x_1 \ldots x_{n-1} x_0## is random too, right?

It is true that P(A) and P(B) are unchanged by reordering the string. That's obvious, because those probabilities only depend on the total numbers of 0 or 1 bits, not on their order. However, just knowing P(A) and P(B), by itself, does not tell you whether a string is "random". In fact, you have not given, anywhere in this thread that I can see, a definition of what you mean by "random".

Also, if we have two strings, and we reorder only one of them, that will, in general, change P(A, B), since that probability relies on comparing corresponding bits of each string. But only one such comparison is actually meaningful: the one that compares bits from each string that came from the same run of the experiment. Any other comparison is meaningless.

entropy1 said:
I understand that in an experimental setting the correlation is linked to the physical setup, which has restrictions to its operation. That is what I mean by 'non-random elements'.

What "restrictions to its operation" are you talking about? And why do you think such restrictions would be appropriately called "non-random elements"? (Note that this depends on what you mean by "random", which, as I noted above, you have not specified.)
 
  • #74
By random I mean that, in the binary case, in the limit as ##l \to \infty##, the probability of getting a fragment of ##n## identical bits in a random string of length ##l## is ##(\frac{1}{2})^{n-1}##. There are probably statistical tests one could run on this. My own knowledge of mathematics is too limited for that.
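For what it's worth, a Python sketch of that criterion (reading "fragment" as ##n## consecutive equal bits is my assumption about the intended definition):

```python
# Estimate the frequency of "fragments" of n consecutive identical bits in a
# pseudorandom string and compare with (1/2)**(n-1). Reading "fragment" as
# n consecutive equal bits is an assumption about the intended definition.
import random

l = 1_000_000
s = [random.randint(0, 1) for _ in range(l)]

for n in (2, 3, 4, 5):
    hits = sum(all(s[i + j] == s[i] for j in range(n)) for i in range(l - n))
    print(n, hits / (l - n), (1 / 2) ** (n - 1))  # empirical vs. predicted
```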
PeterDonis said:
What "restrictions to its operation" are you talking about? And why do you think such restrictions would be appropriately called "non-random elements"?
This is probably an example of my limited knowledge of English. Maybe "operating conditions" is a better term?
PeterDonis said:
But only one such comparison is actually meaningful: the one that compares bits from each string that came from the same run of the experiment. Any other comparison is meaningless.
Good point. But on what grounds would you call the results 'random'?
 
  • #75
entropy1 said:
##P(X=1)## in the binary case is the ratio of the number of bits equal to 1 to the total number of samples.
##P(A=1,B=1)## is the ratio of pairs of bits that are both 1 to the total number of samples (pairs).

##P(A=1,B=1)=P(A=1|B=1)P(B=1)##, where ##P(A=1|B=1)=1## in the case of total correlation. That is: ##P(A=1,B=1)=P(A=1)=P(B=1)##.

@Mentz114: It could be my limited mastery of the English language, but, with all due respect, I am afraid I don't understand what you mean.

Yes. I meant it as an example of a non-random cause.

You can claim that the entropy of the random content decreases in fractional bits, but that could also mean that the amount of randomness decreases in favor of non-randomness.
There are many criteria one can use to define the randomness of a bit string. The most important is that ##P(1)=P(0)=1/2##. There is also the autocorrelation, which measures how many times a 1 is followed by a 1, and a 0 by a 0. The formula is the same as the correlation between two strings and also has expectation 0.

I have to say that I don't follow what point you are trying to make so I'll leave it there.
 
  • Like
Likes entropy1
  • #76
Mentz114 said:
I have to say that I don't follow what point you are trying to make so I'll leave it there.
To put it simply: I am asking if the sources of a correlation are 100% random.

You are more than welcome to participate, which I would like; however, I respect any decision you make about that.
 
Last edited:
  • #77
PeterDonis said:
Similarly, if you pick only the "1" bits out of each sample and match them up with each other, you are making a meaningless comparison.

How you meant it doesn't change the fact that it's meaningless. See above.
What I mean is that the experiment(al setup) is the "cherry-picker" in this case, in my view.
 
  • #78
entropy1 said:
To put it simply: I am asking if the sources of a correlation are 100% random.

You are more than welcome to participate, which I would like; however, I respect any decision you make about that.
I assume you mean 'deviate from randomness'. This big topic is part of standard statistical theory, and the Wiki articles are a good introduction.
https://en.wikipedia.org/wiki/Statistical_randomness
This one is also useful and mentions more advanced concepts like spectral decompositions and Hadamard transformations.
https://en.wikipedia.org/wiki/Randomness_tests

This is not part of quantum theory. Correlations in QT play a different but very important role.
 
  • #79
entropy1 said:
By random I mean that, in the binary case, in the limit as ##l \to \infty##, the probability of getting a fragment of ##n## identical bits in a random string of length ##l## is ##(\frac{1}{2})^{n-1}##.

Where are you getting this definition of "random" from?

entropy1 said:
I am asking if the sources of a correlation are 100% random.

Obviously it depends on the sources.

entropy1 said:
What I mean is that the experiment(al setup) is the "cherry-picker" in this case, in my view.

The experimental setup certainly tells you what comparison between two bit strings is meaningful. But you seemed to be saying that any such comparison was meaningful, because you were talking about rearranging how the two bit strings are compared with each other (by, for example, shifting one bit string relative to the other and then comparing). If you do that, you aren't doing what the experimental setup tells you to do; you're doing something different, and meaningless. That's what I meant by cherry-picking the data.
 
  • #80
entropy1 said:
This is probably an example of my limited knowledge of English.

I think the entire topic of this thread might be an artifact of your limited knowledge of English. That's why I keep asking what you mean by the word "random"; I don't think you mean what that word usually means in English.

Perhaps it would help to ask the question a different way: why do you care whether "the sources of a correlation are 100% random"? What would it tell you if the answer was yes? What would it tell you if the answer was no?
 
  • #81
PeterDonis said:
Where are you getting this definition of "random" from?
From the data of my computer code :biggrin:
PeterDonis said:
The experimental setup certainly tells you what comparison between two bit strings is meaningful. But you seemed to be saying that any such comparison was meaningful, because you were talking about rearranging how the two bit strings are compared with each other (by, for example, shifting one bit string relative to the other and then comparing). If you do that, you aren't doing what the experimental setup tells you to do; you're doing something different, and meaningless. That's what I meant by cherry-picking the data.
Well, I can reassure you that was not my angle of approach. The cherry-picking part I introduced to illustrate how improbable it is to get a correlation out of pure randomness.
PeterDonis said:
I think the entire topic of this thread might be an artifact of your limited knowledge of English. That's why I keep asking what you mean by the word "random"; I don't think you mean what that word usually means in English.
That's not entirely fair - I think it is a matter of starting point.
 
  • #82
entropy1 said:
how improbable it is to get a correlation out of pure randomness.

But you still haven't really explained what you mean by this.
 
  • Like
Likes Zafa Pi
  • #83
PeterDonis said:
But you still haven't really explained what you mean by this.
Well, I am afraid I can't do better than this currently. I will ponder some more.
 
  • Like
Likes Zafa Pi and Mentz114
  • #84
entropy1 said:
Well, I am afraid I can't do better than this currently. I will ponder some more.
You should start by finding out the customary meanings of randomness and also how to calculate a correlation.
 
  • Like
Likes Zafa Pi
  • #85
entropy1 said:
Suppose we have two truly random sources A and B that generate bits ('0' or '1') synchronously. If we measure the correlation between the respective bits generated, we find a random, i.e. no, correlation.

Now suppose A and B are two detectors that register polarization-entangled photons passing respective polarization filters. We can define bits as 'detection'='1' and 'no detection'='0'. A and B individually yield random results. However, there is in almost every case a non-zero correlation, depending on the angle of the filters.

So my question would then be: since the detections of the entangled particles often exhibit a different correlation than truly random sources do, are the detections purely random in the case of entanglement? (Or do they only seem random?)
A major problem in getting your question answered is that your terminology is sloppy, in fact truly sloppy.
Demystifier said:
Define "truly random"!
PeterDonis said:
Where are you getting this definition of "random" from?
You failed to give a definition. The terms "random" and "truly random" are neither used nor defined in probability texts. And after reading more of your posts it is not clear to me what you mean.

Let me give a simple concrete QM example:
Given an entangled pair in the state ##\frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)##, we let A measure one of the pair at angle 0°, i.e. with measurement operator/observable ##Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}##.
We let B measure the other at 30°, i.e. with observable ##\frac{1}{2}Z + \frac{\sqrt{3}}{2}X = \begin{pmatrix} \frac{1}{2} & \frac{\sqrt{3}}{2} \\ \frac{\sqrt{3}}{2} & -\frac{1}{2} \end{pmatrix}##.

The joint probability distribution of (A,B) is (1,1) with prob ⅜, (1,-1) with prob ⅛, (-1,1) with prob ⅛, (-1,-1) with prob ⅜. (1 and -1 are the eigenvalues of the observables.)
We see A and B agree with prob = ¾ = cos²30°, as usual.
The correlation coefficient is ½.
The marginal density of A is 1 with prob ½, -1 with prob ½. Same for B. A and B are not independent.

All of this is justified by repeated trials in the lab.
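As a quick numerical cross-check of those numbers (my own Python/numpy sketch, not part of the argument above):

```python
# Joint outcome probabilities for (|00> + |11>)/sqrt(2) with A measuring Z
# and B measuring (1/2)Z + (sqrt(3)/2)X, via spectral projectors.
import numpy as np

psi = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
A_op = Z                                    # A measures at 0 degrees
B_op = 0.5 * Z + np.sqrt(3) / 2 * X         # B measures at 30 degrees

def projectors(M):
    """Map each eigenvalue (+1/-1) of a 2x2 observable to its projector."""
    vals, vecs = np.linalg.eigh(M)
    return {int(round(v)): np.outer(vecs[:, i], vecs[:, i])
            for i, v in enumerate(vals)}

PA, PB = projectors(A_op), projectors(B_op)
corr = 0.0
for a in (1, -1):
    for b in (1, -1):
        p = float(psi @ np.kron(PA[a], PB[b]) @ psi)
        corr += a * b * p
        print(f"P(A={a}, B={b}) = {p:.4f}")   # 3/8, 1/8, 1/8, 3/8
print("correlation coefficient:", round(corr, 4))  # 0.5, as stated
```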

Can you ask your question from the above formulation?
 
  • #86
Zafa Pi said:
Can you ask your question from the above formulation?
I thought I remembered the basics, but this morning my meds demand my brain, so I would have to look it up.
 
  • #87
Zafa Pi said:
You failed to make a definition. The terms "random" and "truly random" are neither used nor defined in probability texts.
If the term is not defined in scientific literature, then why are you asking me, a layman, to define it? By the way, I gave one:
entropy1 said:
By random I mean that, in the binary case, in the limit as ##l \to \infty##, the probability of getting a fragment of ##n## identical bits in a random string of length ##l## is ##(\frac{1}{2})^{n-1}##. There are probably statistical tests one could run on this. My own knowledge of mathematics is too limited for that.
Anyway, I will think about what I mean by 'truly' random.
 
Last edited:
  • #88
entropy1 said:
If the term is not defined in scientific literature, then why are you asking me, a layman, to define it?

Because you used it. We need to know what you meant by it.
 
  • Like
Likes Zafa Pi
  • #89
entropy1 said:
By random I mean that, in the binary case, in the limit as ##l \to \infty##, the probability of getting a fragment of ##n## identical bits in a random string of length ##l## is ##(\frac{1}{2})^{n-1}##. There are probably statistical tests one could run on this. My own knowledge of mathematics is too limited for that.
I don't know what you mean by fragment. Do you have a link?
If ##a_1, a_2, \ldots, a_l## with ##l=20## is a binary sequence, what is a fragment of 10 bits? Is it a subset of size 10? Is it a contiguous subset like ##a_7, a_8, \ldots, a_{16}##? Or what?
 
  • #90
entropy1 said:
By random I mean that, in the binary case, in the limit as ##l \to \infty##, the probability of getting a fragment of ##n## identical bits in a random string of length ##l## is ##(\frac{1}{2})^{n-1}##. There are probably statistical tests one could run on this. My own knowledge of mathematics is too limited for that.
I now think you were trying to define a binary normal sequence, but failed.

"A sequence of bits is random if there exists no Program shorter than it which can produce the same sequence." ~ Kolmogorov
So obviously it is impossible to exhibit a Kolmogorov random sequence.
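Kolmogorov complexity is uncomputable, but a general-purpose compressor gives a crude upper bound on description length, which at least makes the idea tangible (an illustration only, not a randomness test):

```python
# A patterned string has a short "program" (here: its compressed form), while
# random bytes barely compress at all. Compressed length is only a crude
# upper bound on Kolmogorov complexity.
import random
import zlib

random_bytes = bytes(random.randint(0, 255) for _ in range(10_000))
patterned = bytes([0, 1] * 5_000)            # "repeat 01 five thousand times"

print(len(zlib.compress(random_bytes)))      # ~10,000: essentially incompressible
print(len(zlib.compress(patterned)))         # collapses to a few dozen bytes
```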

Neither normality nor K-randomness implies the other. But all of this belongs in the Probability section of PF. And none of it is relevant to QM.
 
  • #91
Suppose I walk down the street, and each time I look to my right, a red car is passing. If I don't look, I don't know which color the passing cars have.

So the correlation between me looking and a red car passing is 100%.

So I assume the moments I look are random (A) and the cars passing have FAPP random colors (B).

So, in this case, with the correlation manifesting, are (A) and (B) "truly" random?

Since we generally do not see correlations like this always and everywhere, it should be improbable, though not impossible, to see this. So I cannot determine whether there is a red-car convention in town or not, since I don't know the counterfactual measurements (looking). Would a string of red cars passing me still be random? After all, it would require a red-car convention. And if there is NO red-car convention, would the string of passing cars still be truly random if the correlation with my looking were 100% red cars? (Or, for that matter, would my peeking be random?)

The problem I see is that if (A) and (B) are truly random, the measurements should be typical of reality. For example, based on my perceptions, I might say that in this street probably only red cars are allowed, while the counterfactual data contradicts that.

You could also see it the other way round: I see typical cars passing, while when I'm not looking only red cars pass, which I wouldn't know about. My assessment of the data might lead me to faulty conclusions.

So I think "randomness" is required to accurately assess reality.
 
Last edited:
  • #92
entropy1 said:
Suppose I walk down the street, and each time I look to my right, a red car is passing. If I don't look, I don't know which color the passing cars have.

So the correlation between me looking and a red car passing is 100%.

So I assume the moments I look are random (A) and the cars passing have FAPP random colors (B).

So, in this case, with the correlation manifesting, are (A) and (B) "truly" random?

There was a different thread in the "Set Theory, Logic, Probability and Statistics" forum on this topic. Random is relative to a model or theory. You can't know whether something is "truly" random unless you know what theory is correct. Which, of course, you can never know.

According to QM, the results of certain types of measurements are random, in the sense that QM doesn't propose any means of determining the values ahead of time. According to a different theory (maybe Bohmian mechanics), the results may not be random.

The facts you describe above are consistent with multiple explanations:
  1. All the cars are red.
  2. There are cars of other colors, but for whatever reason, you only have an impulse to look at a car when the car is red.
  3. There are cars of other colors, but just by coincidence, you happened to look at the moments a red car is passing.
  4. Etc.
 
  • #93
bahamagreen said:
I flip a coin and it lands heads. Does it still make sense to describe the probability of a heads for that flip as p=.5, ten minutes after the fact? Does probability even exist for events in the past?

Both of these thoughts go to the time relationship of randomness... does the standard treatment not take time into account?

The standard mathematical treatment of probability (which uses measure theory) says nothing about events actually happening. It doesn't have any axioms that say you can take random samples. It does not have a model of time as that notion is used in physics. So the standard mathematical theory does not deal with questions about a probability "before" or "after" some time, or a probability that changes with the "actual" occurrence of an event.

The standard techniques for applying probability theory to real life problems do assume that it is possible to take random samples and that events actually happen (or don't happen). In applications of probability theory the indexing set used in the abstract definition of "stochastic process" is often interpreted to be time in the physical sense.

The distinction between mathematical probability theory and the interpretations that people make when applying it is blurred by the fact that only the most advanced texts on mathematical probability theory confine themselves to discussing that theory. The typical textbook on probability theory tries to be helpful by teaching both probability theory and its useful applications. For example, the "conditional probability" P(A|B) has a very abstract mathematical definition. However, typical textbooks present P(A|B) by interpreting it to mean "the probability of event A given that the event B has (actually) happened".

In mathematical probability theory, a specific sequence of numbers can be assigned a probability and it can be a member of a "sample space" on which a probability measure is defined. But there is no definition for a particular sequence of numbers being "random" or "not random". In mathematical probability theory, there is a definition for two random variables to be correlated. However, there is no definition for two specific sequences of numbers to be correlated. In this thread, there is the usual confusion between numerical calculations done on specific sets of numbers to estimate correlation and the mathematical definition of correlation.

Attempts have been made to create mathematical notions of randomness for specific sequences of numbers. These attempts are not "standard" mathematical probability theory.

When discussing physics, people are making their own interpretations of mathematical probability theory.
 
  • Like
Likes DrChinese