B Is there a definition of randomness?

The discussion on the definition of randomness reveals that there is no formal definition applicable to philosophical inquiries about randomness, despite established concepts in probability theory. A random process is characterized as unpredictable but can be described probabilistically, and its predictability may vary depending on the theoretical framework used. The conversation touches on the idea that correlation between variables can influence perceptions of randomness, though this does not necessarily diminish the inherent randomness of each variable. The concept of randomness is further complicated by quantum mechanics, where certain processes are fundamentally unpredictable, challenging deterministic interpretations. Ultimately, the nature of randomness remains a complex interplay between predictability, theory, and philosophical considerations.
  • #31
entropy1 said:
Do you mean sample_A=f(human_A) and sample_B=f(human_B), with human_A=g(general_human, random_value) and human_B=g(general_human, random_value)?
Perhaps what you are looking for is this:
Suppose X and Y are correlated and P(X∈A | Y∈B) ≠ P(X∈A), so the events A and B are not independent events. Then knowing Y∈B has changed the probabilities of X. There are examples where knowing Y∈B can increase or decrease the probabilities of X∈A. That is, there are examples where P(X∈A | Y∈B) > P(X∈A) and other examples where P(X∈A | Y∈B) < P(X∈A).

Furthermore, if X and Y are correlated real valued random variables, there are examples where var( X | Y∈B ) < var( X ) and other examples where var( X | Y∈B ) > var( X ) . So knowing Y∈ B can either decrease or increase the random variability of X, depending on whether knowing Y∈B increased or decreased the predictability of X.

The bottom line is that it is not possible to make a general rule about how "random" a variable is simply because you know that it is correlated with another variable.
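A quick Monte Carlo sketch illustrates both directions. The mixture model below is my own assumed example, not from the thread: Y is a fair coin, and X's mean and spread depend on Y, so X and Y are correlated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Assumed model: Y is a fair coin; X's mean and spread depend on Y,
# which makes X and Y correlated (cov(X, Y) = 0.5 for this model).
y = rng.integers(0, 2, size=n)
x = np.where(y == 0,
             rng.normal(0.0, 1.0, size=n),   # X | Y=0 ~ N(0, 1)
             rng.normal(2.0, 3.0, size=n))   # X | Y=1 ~ N(2, 9)

print(np.var(x))          # total variance, about 6 for this model
print(np.var(x[y == 0]))  # about 1: conditioning on Y=0 decreases var(X)
print(np.var(x[y == 1]))  # about 9: conditioning on Y=1 increases var(X)
```

Conditioning on the same pair of variables can thus either shrink or inflate the remaining variability, depending on which event you condition on.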
 
  • #32
Is it true that if X and Y are random variables, that X given Y, P(X=1|Y=1), is random too?
 
  • #33
entropy1 said:
Is it correct that if X and Y are random variables, that X given Y, P(X=1|Y=1) is random too?
P(X=1|Y=1) is a specific probability number, not a random variable. (If P(Y=1)=0, then P(X=1|Y=1) is not even a valid probability.)
Suppose X and Y are discrete random variables where (X,Y) are the results of an experiment, E, and P(Y=1) > 0. Then you could define a related experiment, E_{till Y=1}, where experiment E is repeated till the result has Y=1. The X value result of E_{till Y=1} would be a random variable, Z = X_{till Y=1}.
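A small simulation makes this concrete. The joint distribution below is an assumed example of mine (X agrees with Y 80% of the time), chosen only so that the conditional probability is easy to check:

```python
import random

def run_experiment(rng):
    """One run of E: returns (X, Y), two dependent binary variables."""
    y = rng.randint(0, 1)
    x = y if rng.random() < 0.8 else 1 - y  # X agrees with Y with prob 0.8
    return x, y

def x_till_y_equals_1(rng):
    """Repeat E until Y = 1; return that run's X (the random variable Z)."""
    while True:
        x, y = run_experiment(rng)
        if y == 1:
            return x

rng = random.Random(42)
samples = [x_till_y_equals_1(rng) for _ in range(100_000)]
print(sum(samples) / len(samples))  # estimates P(X=1 | Y=1), about 0.8 here
```

So Z is a genuine random variable, and its distribution is exactly the conditional distribution of X given Y=1.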
 
  • Like
Likes entropy1
  • #34
As the original question was about the definition (or lack thereof) of randomness, I am surprised no one has mentioned the work of Gregory Chaitin; e.g., Part III of https://www.cs.auckland.ac.nz/~chaitin/ait/index.html
 
  • Like
Likes entropy1
  • #35
In passing, random data is incompressible. Sometimes, though, incompressibility just means that the compression algorithm isn't clever enough to exploit the redundancy present.

So: If the data is compressible, it's not random. If it's incompressible it *might* be random.

You may be able to come up with a calculable degree of randomness using entropy/enthalpy calculations.
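As a rough illustration (using zlib as the not-especially-clever compressor), a patterned byte string collapses under compression while OS-entropy bytes do not:

```python
import os
import zlib

patterned = b"01" * 5000          # highly regular 10,000-byte string
random_ish = os.urandom(10_000)   # OS entropy; incompressible in practice

for name, data in [("patterned", patterned), ("random", random_ish)]:
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(name, round(ratio, 3))
# The patterned data compresses to a tiny fraction of its size;
# the random bytes stay at roughly 100% (often slightly above, from overhead).
```

Of course this only demonstrates the one-way implication from the post: compressible implies not random, while incompressible-by-zlib proves nothing.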
 
  • Like
Likes FactChecker and entropy1
  • #36
Has the use of Chaitin-Kolmogorov-Solomonoff complexity for the definition of randomness been considered here?
 
  • Like
Likes nomadreid and entropy1
  • #37
@nomadreid: sorry, I had not seen yours when I posted concerning algorithmic complexity. Thus, you would hold that Wolfram's Rule #30 is not random? Just askin'.
 
  • Like
Likes nomadreid
  • #38
jimfarned said:
@nomadreid: sorry, I had not seen yours when I posted concerning algorithmic complexity. Thus, you would hold that Wolfram's Rule #30 is not random? Just askin'.
Excellent question. No, I would not consider it random, but rather deterministic chaos. (However, I would not be too dogmatic about it.) The question forces at least a partial definition of randomness, highlighting the differences between non-predictable deterministic behavior (chaos) and non-deterministic behavior (randomness). Chaos is a state where knowledge of the present determines the future but knowledge of the approximate present does not. To put it in terms of algorithmic complexity applied to Wolfram 30, there can exist an algorithmic computer program which can produce the result given the initial conditions, and it will be the same result each time (unlike, say, certain experiments on the quantum level). However, perhaps for a specific computer program we could bring up a concept of "relative randomness" or "randomness for this program" (or "axiom system", to take it back out from IT). Wiser people than I can take it from here...
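A minimal sketch of Rule 30 (my own implementation, with periodic boundary conditions assumed) makes the determinism concrete: the same starting row always yields the same center column, however random-looking that column is.

```python
def rule30_row(cells):
    """One step of Wolfram's Rule 30: new cell = left XOR (center OR right)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

# Start from a single black cell in the middle and record the center column.
row = [0] * 31
row[15] = 1
center = []
for _ in range(20):
    center.append(row[15])
    row = rule30_row(row)
print(center)  # same list on every run: deterministic, yet pattern-free
```

Running this twice always produces identical output, which is exactly the distinction drawn above between deterministic chaos and genuine non-determinism.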
 
  • #39
If we have A∈{0,1} and P(A)=0.5, and we do 45 throws of A and get 45 times '1', it would be compatible with probability theory, right?

So, if we have B∈{0,1} and P(B)=0.75, and we do 40 throws of B and get 20 times '1', it would also be compatible with probability theory, right?

So, finally, we have C∈{0,1} and P(C)=0.5, and we do 40 throws of C and get 20 times '1'.

Then experiments say nothing about probability, since P(B)≠P(C), while the outcomes are identical in my example. We can measure N(B)/N and get a ratio of 0.5 while we say P(B)=0.75. And we say P(A)=0.5 while we get all 1's.

I can imagine that this would also be possible with large numbers. So shouldn't we say that P(B)=0.5 instead? And P(A)=1.0? However the values given in my examples are perfectly reasonable!

So I can imagine that outcomes do not necessarily tell us anything about the probability.
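The point can be made quantitative: each of these outcomes has nonzero probability under its stated model, just with wildly different likelihoods. A sketch with the exact binomial pmf (the models and numbers are the ones from the post):

```python
from math import comb

def binom_pmf(k, n, p):
    """P(exactly k successes in n independent trials with success prob p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(binom_pmf(45, 45, 0.5))   # 45 ones in 45 throws, p=0.5: about 2.8e-14
print(binom_pmf(20, 40, 0.75))  # 20 ones in 40 throws, p=0.75: about 4e-4
print(binom_pmf(20, 40, 0.5))   # same outcome with p=0.5: about 0.125
```

All three are "compatible with probability theory" in the sense of being possible; what differs is how surprising each outcome is under its assumed model.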
 
  • #40
entropy1 said:
So I can imagine that outcomes do not necessarily tell us anything about the probability.

If "tells us" refers to logical deduction, you are correct. The mathematical theory of probability doesn't make definite predictions about actual outcomes. It only makes statements about the probabilities of actual outcomes. So the theory of probability is circular in that respect. Probability theory tells us about the probabilities of things. Likewise, actual outcomes do not provide exact information about probabilities. However, in applied math, real life problems are often approached by assuming data from actual outcomes gives us their probabilities. (Most real life problems are "solved" by making various assumptions.)
 
  • Like
Likes Zafa Pi and nomadreid
  • #41
Stephen Tashi said:
The mathematical theory of probability doesn't make definite predictions about actual outcomes. It only makes statements about the probabilities of actual outcomes. [..] real life problems are often approached by assuming data from actual outcomes gives us their probabilities.
So outcomes define probabilities, and probabilities predict (in a way) (averages of) outcomes? And in practice that goes well?
 
  • #42
Suppose that A,B∈{0,1} P(A)=0.5 and P(B)=0.5, while P(A|B)=1. We could write: P(A|anything)=0.5 and P(A|B)=1. So perhaps we have (at least) two different values for P(A), depending on some condition. So perhaps any probability for a particular variable is variable depending on how you look at that variable? For instance: we could be limited to experiments in which B=1.

Also, the probability of correlation in this example is 1. So we have a phenomenon, correlation, that has a probability on its own, right?
 
  • Like
Likes Stephen Tashi
  • #43
entropy1 said:
So outcomes define probabilities,
That is not a conclusion you can draw from the theory of probability. In applications, people often assume the data about outcomes gives their probabilities.

and probabilities predict (in a way) (averages of) outcomes?
It depends on what you mean by "in a way" and "predict". Probability theory gives the probabilities of outcomes.

And in practice that goes well?
It "probably" does, but there is no absolute guarantee.
 
  • #44
entropy1 said:
Suppose that A,B∈{0,1} P(A)=0.5 and P(B)=0.5, while P(A|B)=1. We could write: P(A|anything)=0.5 and P(A|B)=1. So perhaps we have (at least) two different values for P(A), depending on some condition.
This is a matter of notation. In the notation you are using "P(A)" denotes the probability of A without any other conditions. So the fact that P(A|B) = 1 does not show that there are two different values of P(A). The event "A|B" is a different event than the event "A".

So perhaps any probability for a particular variable is variable depending on how you look at that variable?
You should distinguish between an "event" and (random) "variable". It is correct that one may write different notations involving a random variable or an event and these notations may represent different probabilities.

The words describing an event such as "I get a haircut" do not, by themselves, define a particular probability. So when people ask questions like "What is the probability I get a haircut?" they are not asking a question that has a unique answer - even though the event they describe may be clear.

Probability theory takes place on a "probability space". To describe a probability space, you must describe the set of possible events and assign each event a probability. For example the set of haircut or non-haircut events might be defined on sets of days such as "On a randomly selected day in 2018" or "On a randomly selected Tuesday before my 18th birthday" etc. The terminology "randomly selected" is shorthand for the fact that we assign each member of the set an equal probability.
entropy1 said:
Also, the probability of correlation in this example is 1. So we have a phenomenon, correlation, that has a probability on its own, right?

You need to study the details of probability and statistics to avoid mishaps in terminology. "Correlation" has a technical meaning in probability theory. In your example, the correlation coefficient is 1, but there is no need to speak of a "probability" of it being 1. It is definitely 1.

When we have (X,Y) data generated by a probability model, the sample correlation coefficient (which estimates the population correlation coefficient) can take different values with different probabilities, so it would make sense to talk about the sample correlation coefficient having a probability taking various values.
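A quick simulation shows the sample correlation coefficient behaving as a random variable in its own right. The bivariate normal model and the population correlation of 0.7 are assumed values for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
rho = 0.7  # assumed population correlation
cov = [[1.0, rho], [rho, 1.0]]

# The population correlation is a fixed number, but the *sample*
# correlation computed from n=30 draws varies from sample to sample.
r_values = []
for _ in range(2000):
    xy = rng.multivariate_normal([0.0, 0.0], cov, size=30)
    r_values.append(np.corrcoef(xy[:, 0], xy[:, 1])[0, 1])

print(np.mean(r_values), np.std(r_values))  # scattered around 0.7
```

So it is the estimator, not the population coefficient, that has a probability distribution.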
 
  • Like
Likes nomadreid
  • #45
nomadreid said:
As the original question was about the definition (or lack thereof) of randomness, I am surprised no one has mentioned the work of Gregory Chaitin; e.g., Part III of https://www.cs.auckland.ac.nz/~chaitin/ait/index.html
Chaitin's definition is:

"something is random if it is algorithmically incompressible or irreducible"

That sounds very similar to the definition of a 'normal' sequence referred to above. It doesn't match the folk notion of randomness because it is a property that applies to a list of outcomes from a process, rather than to the process itself.

It reminds me of a joke somebody told me a very long time ago. A man was having his bathroom refurbished and hired a tiler to lay the floor tiles. The man bought a bunch of tiles, some black and some white, and asked the tiler to use those. 'What pattern would you like?' asked the tiler. 'Oh, just lay them down at random,' said the man.

Being very literal-minded, the tiler put all the tiles in a cloth bag and selected them one at a time to lay down, without looking.

By pure chance, the floor ended up with a chessboard pattern of diagonals in the middle of it.

The owner complained 'that's not random' and refused to pay the bill. The tiler was cross and said 'I did exactly what you asked me to!'.

The point of the joke, according to the person who told it to me, was that the owner was an idiot who didn't think hard enough about the meaning of his instructions. But taking a more charitable interpretation, the owner was a Chaitinist, while the tiler took an epistemological definition of random (i.e. a process is random from the point of view of person A if person A cannot predict what will happen next).
 
  • #46
A philosophical comment:

andrewkirk said:
A process may be unpredictable in one theory but predictable in a more sophisticated theory. There's no way we can know that there isn't some currently unknown, more sophisticated theory that can predict outcomes that currently seem random to us. So what we can say is that a given process is random with respect to theory T. That is, predictability depends on what theory we are using to predict.

From the point of view of applying this to the foundations of physics I tend to use a similar definition. In the physical sense I think of it so that randomness is "observer dependent". And an observer then includes the whole inference machinery of memory and processing capacity. There may be physical limits to what can be "resolved", and this is then observer dependent. Here we can associate an observer with a theory T, if you understand the theory as a result of inference. Then in principle each observer encodes his own theory. But the possible theories are constrained by the complexity.

In this way, applied to say random walks, it allows for "explanatory models" in terms of random walks, where the non-random patterns emerge at other observational scales. The observer-dependent randomness may explain why some interactions decouple at high energy, as the causal rules can no longer be coded by the smaller and smaller coherent interacting agents.

/Fredrik
 
  • #47
PeroK said:
A measurement of the spin of an electron will return a value which is not predictable.
Indeed, QM says measurements are random variables. It is impossible to test that in the real world: I can produce binary sequences that satisfy all known randomness tests but are produced by an algorithm.
PeroK said:
There is, in fact, an interesting quotation from Griffith's book on QM: "even God doesn't know which electron is which".
Like Bohr with Einstein, I told David several times to stop telling God what he can't do.
 
  • #48
Zafa Pi said:
Indeed, QM says measurements are random variables. It is impossible to test that in the real world: I can produce binary sequences that satisfy all known randomness tests but are produced by an algorithm.

Like Bohr with Einstein, I told David several times to stop telling God what he can't do.

Tests for randomness are a little strange. If you have some random process for generating a sequence of 0s and 1s, then the sequence

01010101010101010101010101...

is just as possible as the sequence

001001000011111101101010100...

Daryl
 
  • #49
stevendaryl said:
Tests for randomness are a little strange. If you have some random process for generating a sequence of 0s and 1s, then the sequence

01010101010101010101010101...

is just as possible as the sequence

001001000011111101101010100...

Daryl
That is what I mean.
 
  • #50
Zafa Pi said:
Like Bohr with Einstein, I told David several times to stop telling God what he can't do.
Zafa Pi said:
Indeed, QM says measurements are random variables. It is impossible to test that in the real world: I can produce binary sequences that satisfy all known randomness tests but are produced by an algorithm.

It's certainly not impossible to test for randomness, although it is impossible to prove it mathematically.

When it comes to physics, you would have to propose an algorithm for quantum measurements, say, that could then be tested.

The inability, theoretically or practically, to predict the outcome of an experiment is essentially what is meant by randomness - in QM at least.
 
  • Like
Likes Zafa Pi
  • #51
entropy1 said:
That is what I mean.

Finding a pattern in a sequence of 0s and 1s doesn't prove that it's not random. The real test is this:
  • Find a pattern
  • Generate some more bits
  • If the bits are truly random, eventually the pattern will break
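A toy sketch of that test, with an assumed fair-coin source and the strictly alternating 0101... pattern as the "pattern found":

```python
import random

rng = random.Random(7)

def steps_until_pattern_breaks():
    """Draw fair coin bits; count draws until strict 0/1 alternation fails."""
    prev = rng.randint(0, 1)
    steps = 1
    while True:
        b = rng.randint(0, 1)
        steps += 1
        if b == prev:        # alternation broken
            return steps
        prev = b

runs = [steps_until_pattern_breaks() for _ in range(10_000)]
print(sum(runs) / len(runs))  # for a fair coin, the pattern breaks after ~3 bits on average
```

For a genuinely random source any fixed pattern fails quickly on average, though, as the thread notes, no finite run ever settles the question conclusively.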
 
  • Like
Likes PeroK
  • #52
stevendaryl said:
Finding a pattern in a sequence of 0s and 1s doesn't prove that it's not random. The real test is this:
  • Find a pattern
  • Generate some more bits
  • If the bits are truly random, eventually the pattern will break

So there is never a point where you conclusively show that the sequence is or is not random, although in a Bayesian sense, you can become more and more confident, one way or the other.
 
  • #53
stevendaryl said:
So there is never a point where you conclusively show that the sequence is or is not random, although in a Bayesian sense, you can become more and more confident, one way or the other.

And that allows you to model a physical process using the theory of random variables. In fact, at least in this case there is a chance that it really is a random variable. Whereas, for example, modelling the Earth as a test particle in its solar orbit is known not to be exactly the case!
 
  • Like
Likes stevendaryl
  • #54
stevendaryl said:
Tests for randomness are a little strange. If you have some random process for generating a sequence of 0s and 1s, then the sequence

01010101010101010101010101...

is just as possible as the sequence

001001000011111101101010100...

Daryl
Indeed, but they are both unlikely.
 
  • #55
PeroK said:
The inability, theoretically or practically, to predict the outcome of an experiment is essentially what is meant by randomness - in QM at least.
Would you say the results of coin flipping in a wind tunnel satisfies randomness in QM? Many don't.
I think it is a paradigm for randomness, and perhaps could be used as a definition of random.
 
  • #56
stevendaryl said:
If you have some random process for generating a sequence of 0s and 1s, then the sequence

01010101010101010101010101...

is just as possible as the sequence

001001000011111101101010100...
Wouldn't you need a process for random(ly) generating a sequence... ?
No... that's wrong...

Wouldn't you need a process for generating a random sequence... ?

A random process won't necessarily generate any sequence, will it ?
What I mean is... the process itself should not be random, only the numbers it generates.

I am probably wrong, though...
stevendaryl said:
Tests for randomness are a little strange.
Zafa Pi said:
I told David several times to stop...
You really think I pay heed to David??
 
  • #57
OCR said:
You really think I pay heed to David ??
My bad. I should have trusted that you know every electron by name.
 
  • #58
Zafa Pi said:
Would you say the results of coin flipping in a wind tunnel satisfies randomness in QM? Many don't.
I think it is a paradigm for randomness, and perhaps could be used as a definition of random.

Randomness in QM is different, because you have perfect information. You have an ensemble of electrons that are spin-up in the z-direction; you measure their spin in the x-direction and you get spin-up 50% and spin-down 50%.

The theory of QM predicts this and suggests that there is no further information that could possibly be available to you (hidden variables) that would allow you to predict when an electron will be spin-up and spin-down.

Tossing a coin is random because you have inexact information about the experiment. Let's take a concrete example. In a month's time or so someone will toss the coin at the Superbowl. How could you predict the outcome of the coin toss? Assume you know who will do it and which physical coin will be used. How do you predict the variables of the coin toss itself? You would, somehow, have to study every cell of that person's body to determine how high they will throw the coin etc. In fact, it may also depend on all their interactions and activities in the intervening month. It may depend on the weather and exactly when the toss takes place - to the nearest second at least.

I know there are those who cling to determinism in these cases, but I'm sceptical that any theory, measuring process (to get all the necessary initial conditions) and computer power could ever predict such a thing. Let alone, say, the coin toss for Superbowl 2019 or 2029.

That, in my view, is a different sort of randomness, caused by the nature of complex, dynamical systems. And I don't see how any theory of everything could ever predict outcomes like these.
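The 50/50 statistics described above follow from the Born rule. A minimal numerical sketch (standard spin-1/2 basis vectors; the code is mine, not from the thread):

```python
import numpy as np

# Born rule for a z-up electron measured along x: the amplitudes are the
# overlaps of |+z> with the x-basis states; probabilities are their
# squared moduli.
up_z = np.array([1.0, 0.0])
plus_x = np.array([1.0, 1.0]) / np.sqrt(2)
minus_x = np.array([1.0, -1.0]) / np.sqrt(2)

p_up = abs(plus_x @ up_z) ** 2
p_down = abs(minus_x @ up_z) ** 2
print(p_up, p_down)  # 0.5 and 0.5: QM fixes the statistics, not the outcomes
```

The theory pins down the probabilities exactly, while remaining silent on which outcome any single measurement produces.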
 
  • #59
PeroK said:
The theory of QM predicts this and suggests that there is no further information that could possibly be available to you (hidden variables) that would allow you to predict when an electron will be spin-up and spin-down.
I am guessing you are drawing on Bell's Theorem here.

If you are then the argument for Nondeterminism doesn't work, as Bell requires an assumption of Counterfactual Definiteness, which is not compatible with Determinism (It assumes that the experimenter could have made a different measurement from the one they made). That is, the argument assumes its conclusion.
 
  • #60
PeroK said:
Randomness in QM is different, because you have perfect information. You have an ensemble of electrons that are spin-up in the z-direction; you measure their spin in the x-direction and you get spin-up 50% and spin-down 50%.

The theory of QM predicts this and suggests that there is no further information that could possibly be available to you (hidden variables) that would allow you to predict when an electron will be spin-up and spin-down.
I acknowledged this about QM (a mathematical theory) in post #47, and some label this intrinsically random. But QM doesn't actually produce the spin values (up or down); for that you need a S/G apparatus. And I asked how you check that the values produced by the device are "random".
PeroK said:
Tossing a coin is random because you have inexact information about the experiment.
The outcome of the coin toss is affected by the neurons in the tosser's brain and the position and momentum of a zillion air molecules. If you accept QM then these are all subject to quantum effects. No hidden variables, right?
 
