Is there a definition of randomness?

AI Thread Summary
The discussion on the definition of randomness reveals that there is no formal definition applicable to philosophical inquiries about randomness, despite established concepts in probability theory. A random process is characterized as unpredictable but can be described probabilistically, and its predictability may vary depending on the theoretical framework used. The conversation touches on the idea that correlation between variables can influence perceptions of randomness, though this does not necessarily diminish the inherent randomness of each variable. The concept of randomness is further complicated by quantum mechanics, where certain processes are fundamentally unpredictable, challenging deterministic interpretations. Ultimately, the nature of randomness remains a complex interplay between predictability, theory, and philosophical considerations.
entropy1
Is there a definition of "random(ness)"? Is it defined?
 
There is no formal definition. The items used in probability theory, such as random variables and stochastic processes, have formal definitions, but these do not help with questions such as 'what does random mean', which are philosophical rather than mathematical or scientific.

There is also a formal definition applying to infinite sequences of digits, that of being 'normal', which has some similarities to the folk notion of 'randomness'. But again, it does not help in philosophical discussions on topics such as 'Is the world deterministic or random?'
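A quick way to get a feel for normality is a digit-frequency check, though no finite check can establish it. Below is a minimal illustrative sketch in Python (the sequences and names are mine, not from the discussion):

Python:
# Sketch of the frequency idea behind "normality": in a normal sequence,
# every block of k digits appears with limiting frequency 10^-k. Here we
# only check single-digit frequencies for two finite sequences; a finite
# sample can suggest, but never prove, normality.
from collections import Counter
import random

random.seed(0)
pseudo = [random.randrange(10) for _ in range(100_000)]  # pseudorandom digits
patterned = [1, 2, 3] * 33_334                           # obviously non-normal

for name, seq in [("pseudorandom", pseudo), ("patterned", patterned)]:
    counts = Counter(seq)
    freqs = {d: round(counts.get(d, 0) / len(seq), 3) for d in range(10)}
    print(name, freqs)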
 
  • Like
Likes Demystifier, QuantumQuest, StoneTemplePython and 1 other person
As I understand it, a random process is one that cannot be predicted but can be described probabilistically.
 
WWGD said:
As I understand it, a random process is one that cannot be predicted but can be described probabilistically.
That's the common understanding of it. The twist comes in the meaning of 'can be predicted'. A process may be unpredictable in one theory but predictable in a more sophisticated theory. There's no way we can know that there isn't some currently unknown, more sophisticated theory that can predict outcomes that currently seem random to us. So what we can say is that a given process is random with respect to theory T. That is, predictability depends on what theory we are using to predict.
 
  • Like
Likes Fra and WWGD
It's often good to think of "random" in terms of the information available to make a guess at the outcome, rather than in terms of what the outcome will be. It's a theory of guessing a result rather than a theory of the process itself. That avoids the complaint about calling an outcome that has already occurred "random" (such as a coin toss that has happened but has not yet been seen). It also makes Bayesian theory, where probabilities are adjusted as more information is obtained, more natural. And it allows us to call something "random" if we know that it is deterministic but do not know enough to determine the outcome and must guess.
 
  • Like
Likes QuantumQuest, entropy1 and StoneTemplePython
If you can't predict the next outcome, how come you can predict the average outcome?
 
entropy1 said:
If you can't predict the next outcome, how come you can predict the average outcome?
Are you talking about the population average or a sample average?
For population average:
It's just common experience, like predicting a coin toss or the roll of dice. We have seen enough to estimate the probabilities.

For a sample average:
You cannot predict it exactly. You can calculate the expected mean and the variance of a sample average; the answers will depend on the sample size.
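To illustrate the point, here is a small simulation (a sketch; the sample sizes are arbitrary). The sample average of n fair-coin tosses has expected value p and variance p(1-p)/n, so it becomes more predictable as n grows even though each individual toss does not:

Python:
import random
import statistics

random.seed(1)
p = 0.5
for n in (10, 100, 10_000):
    # 1000 independent sample averages, each over n tosses
    sample_means = [sum(random.random() < p for _ in range(n)) / n
                    for _ in range(1000)]
    print(n,
          round(statistics.mean(sample_means), 4),
          round(statistics.variance(sample_means), 6),
          round(p * (1 - p) / n, 6))  # theoretical variance, for comparison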
 
  • Like
Likes nomadreid
andrewkirk said:
That's the common understanding of it. The twist comes in the meaning of 'can be predicted'. A process may be unpredictable in one theory but predictable in a more sophisticated theory. There's no way we can know that there isn't some currently unknown, more sophisticated theory that can predict outcomes that currently seem random to us. So what we can say is that a given process is random with respect to theory T. That is, predictability depends on what theory we are using to predict.
True. I should have said that it depends on the tools and information available at a given point, and on the theory one is working within. A good question is whether there are phenomena that are somehow intrinsically unpredictable, i.e., not predictable within any system.
 
  • Like
Likes StoneTemplePython
WWGD said:
True. I should have said that it depends on the tools and information available at a given point, and on the theory one is working within. A good question is whether there are phenomena that are somehow intrinsically unpredictable, i.e., not predictable within any system.

Your original post was awfully close to the (Frank) Knight definition that:

risk is where a future outcome is unknown but the distribution is known or knowable; uncertainty is where neither the future outcome nor the distribution is known.


Economists are fond of bringing this up. Perhaps a touch too simple but it's worth thinking on.
 
  • Like
Likes WWGD
  • #10
Could one say that if we have two variables A and B that are, say, 50% correlated, that A (or B) is less random, because knowledge of the outcome of B (or A) increases the likelihood of a correct prediction of the outcome of A (or B)?
 
  • #11
entropy1 said:
Could one say that if we have two variables A and B that are, say, 50% correlated, that A (or B) is less random, because knowledge of the outcome of B (or A) increases the likelihood of a correct prediction of the outcome of A (or B)?
That line of thought would open a can of worms. Every coin flip has an outcome that is completely determined by the down-side of the coin. It would not be correct to say that that makes the up-side outcome less random.
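A tiny simulation of this point (a sketch, nothing more): the up-side completely determines the down-side, yet each face on its own is as unpredictable as a fair coin can be.

Python:
import random

random.seed(2)
up = [random.randint(0, 1) for _ in range(100_000)]
down = [1 - u for u in up]  # completely determined by the up-side

print("P(up=1)   ≈", sum(up) / len(up))
print("P(down=1) ≈", sum(down) / len(down))
print("up is always 1 - down:", all(u == 1 - d for u, d in zip(up, down)))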
 
  • Like
Likes Sean Nelson, QuantumQuest and StoneTemplePython
  • #12
FactChecker said:
That line of thought would open a can of worms. Every coin flip has an outcome that is completely determined by the down-side of the coin. It would not be correct to say that that makes the up-side outcome less random.
That seems to me comparable to predicting the outcome after observing it, which would not do justice to the notion of prediction.

Is there a definition of 'prediction' in this context?
 
  • #13
Would an intrinsically random process necessarily have correlation 0 with any other process?
 
  • #14
WWGD said:
Would an intrinsically random process necessarily have correlation 0 with any other process?
That's a good question. I think so, yes. If there is dependence (correlation), I would suggest the randomness has been limited. That is the issue I was getting at. :smile:
 
  • #15
entropy1 said:
That's a good question. I think so, yes. If there is dependence (correlation), I would suggest the randomness has been limited. That is the issue I was getting at. :smile:
I wonder how this would pan out mathematically and physically.
 
  • #16
entropy1 said:
That's a good question. I think so, yes. If there is dependence (correlation), I would suggest the randomness has been limited. That is the issue I was getting at. :smile:
I think you are taking this in a direction that will not pay off. There are too many things that occur together, where you would not want to say that either one makes the other less random.

Example: Pick a random person out of a crowd. His height is related to the length of his left arm, right arm, left leg, right leg, weight, belt size, sex, age, etc., etc., etc. None of this makes any one of them more or less random.
Yet they are correlated, so knowing one does help to make the others more predictable. But the one you need to know is itself random.
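Here is a sketch of that example in Python (the coefficients are invented purely for illustration): knowing the height narrows down the arm length considerably, yet the arm length remains a random variable.

Python:
import random
import statistics

random.seed(3)
heights = [random.gauss(170, 10) for _ in range(50_000)]   # cm, made-up model
arms = [0.45 * h + random.gauss(0, 2) for h in heights]    # cm, made-up model

print("sd(arm), unconditional:", round(statistics.stdev(arms), 2))
# Condition on the height being near 180 cm: the spread shrinks a lot,
# but the conditional arm length is still random.
subset = [a for h, a in zip(heights, arms) if 179 < h < 181]
print("sd(arm | height ≈ 180):", round(statistics.stdev(subset), 2))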
 
  • #17
WWGD said:
A good question is whether there are phenomena that are somehow intrinsically unpredictable, i.e., not predictable within any system.
I wrote an essay about this a few years back, which you may find interesting:

https://wordpress.com/post/sageandonions.wordpress.com/75

My conclusion was that, unless we put artificial constraints on what counts as a theory, there is no such thing as an intrinsically unpredictable event, since we can imagine a theory, which I call the 'M-law', that simply lists every event that happens anywhere in spacetime. No event is unpredictable under that theory. Such a theory would be unknowable by humans, but that's beside the point.

For fans of MWI, knowing the M-law is equivalent to knowing which of Everett's infinite set of parallel universes we are in. But I was not aware of that equivalence at the time of writing the essay.

I was unable to think of any constraint that excluded the M-law that wasn't obviously constructed just for the purpose of excluding it. Any more general constraint I tried ended up excluding theories we would like to include, even down to Newton's laws of motion.
 
  • #18
In quantum theory there is a definition of randomness that is inconsistent with certain types of determinism.

If we impose the condition that the deterministic theory is "local" (no faster-than-light propagation), then one can show that the randomness of quantum mechanics is incompatible with that type of determinism. Because, in an operational sense, we believe that no one yet has technology permitting faster-than-light communication, quantum theory can guarantee randomness.

However, in a more general sense, if one allows the deterministic theory to be nonlocal, then the randomness of quantum theory is compatible with determinism combined with ignorance of the initial conditions.

https://arxiv.org/abs/1708.00265
Certified randomness in quantum physics
Antonio Acín, Lluis Masanes
(Submitted on 1 Aug 2017)
The concept of randomness plays an important role in many disciplines. On one hand, the question of whether random processes exist is fundamental for our understanding of nature. On the other hand, randomness is a resource for cryptography, algorithms and simulations. Standard methods for generating randomness rely on assumptions on the devices that are difficult to meet in practice. However, quantum technologies allow for new methods for generating certified randomness. These methods are known as device-independent because they do not rely on any modeling of the devices. Here we review the efforts and challenges to design device-independent randomness generators.
 
  • Like
Likes FactChecker
  • #19
FactChecker said:
I think you are taking this in a direction that will not pay off. There are too many things that occur together, where you would not want to say that either one makes the other less random.

Example: Pick a random person out of a crowd. His height is related to the length of his left arm, right arm, left leg, right leg, weight, belt size, sex, age, etc., etc., etc. None of this makes any one of them more or less random.
Yet they are correlated, so knowing one does help to make the others more predictable. But the one you need to know is itself random.
Then, perhaps, the non-random factor lies in picking a person rather than a cat, dog or snake? The properties are inherent to the person. If we take 'properties' as 'outcome', they are correlated. It would be like measuring circles on both sides and finding that they are both round.
 
  • #20
andrewkirk said:
I wrote an essay about this a few years back, which you may find interesting:

https://wordpress.com/post/sageandonions.wordpress.com/75

My conclusion was that, unless we put artificial constraints on what counts as a theory, there is no such thing as an intrinsically unpredictable event, since we can imagine a theory, which I call the 'M-law', that simply lists every event that happens anywhere in spacetime. No event is unpredictable under that theory. Such a theory would be unknowable by humans, but that's beside the point.

Your M-Law isn't compatible with Quantum Mechanics. A measurement of the spin of an electron will return a value which is not predictable. You would need to impose a specific interpretation of QM, like the MWI, and the existence of some meta-universe where the M-Law could operate across all worlds. But MWI is only an interpretation and may not have any physical validity.

I think there is also a problem if the universe is spatially infinite, in that the M-Law would have to process an infinite amount of information. That can only be done if there is already a known pattern. I think there are computability issues in handling - dare I say it - a random set of initial conditions! Again, you would need to impose the theory that the universe's initial conditions could be predicted - precisely at every point in infinite space - by some prior law. And, even that appears not to work if the universe had no beginning. In that case, your M-Law needs to gather an infinite amount of data at some arbitrary initial time to get started.

I would say that the M-Law is roughly equivalent to God and I would question the existence of either.

PS the universe may be finite, have a defined beginning with a well-defined set of initial conditions and QM might be amenable to some sort of intrinsic predictability argument, but I don't believe we can assume any of those to be the case.

PPS: In fact, in QM, talking about the precise position and momentum of every particle is not possible. Initial conditions in QM are intrinsically probabilistic.
 
  • Like
Likes nomadreid
  • #21
entropy1 said:
Then, perhaps, the non-random factor lies in picking a person rather than a cat, dog or snake? The properties are inherent to the person. If we take 'properties' as 'outcome', they are correlated. It would be like measuring circles on both sides and finding that they are both round.
Now you are trying to isolate the cause of the random behavior of the selected person's right arm length (for example). That is possible but it does not change the fact that the selected arm length is random. A non-constant function of a random variable is a random variable.

Similarly, I could attempt to isolate the random behavior of a coin toss to the motion of the hand that flips the coin. That does not make the result of the coin toss any less random.
 
  • #22
FactChecker said:
Now you are trying to isolate the cause of the random behavior of the selected person's right arm length (for example). That is possible but it does not change the fact that the selected arm length is random. A non-constant function of a random variable is a random variable.

Similarly, I could attempt to isolate the random behavior of a coin toss to the motion of the hand that flips the coin. That does not make the result of the coin toss any less random.
Sorry, I see there is a misunderstanding: I was talking, here and here, about a correlation between two strings of data, A and B. I see now that I never made explicit that that was what I meant. Sorry.
 
  • #23
entropy1 said:
If you can't predict the next outcome, how come you can predict the average outcome?

We can't (with certainty) predict the average outcome. The "expected value" of a random variable has a mathematical definition. There is nothing in probability theory that guarantees that an actual set of outcomes will have an average equal to the expected value.

Attempts to connect the concept of probability in a definite way with actual events have (so far) been unsuccessful. To guarantee that some outcome will actually happen contradicts the notion that there is something probabilistic about its actually happening.
 
  • Like
Likes Zafa Pi and entropy1
  • #24
entropy1 said:
Sorry, I see there is a misunderstanding: I was talking, here and here, about a correlation between two strings of data, A and B. I see now that I never made explicit that that was what I meant. Sorry.
No, I understood that. I only brought up the example of a function because that can be the strongest correlation possible. The relationship between correlated variables is usually weaker than a functional relationship. If a function of a random variable is random, then we have to conclude that a correlated variable with a weaker relationship than a function is random.
 
  • #25
FactChecker said:
No, I understood that. I only brought up the example of a function because that can be the strongest correlation possible. The relationship between correlated variables is usually weaker than a functional relationship. If a function of a random variable is random, then we have to conclude that a correlated variable with a weaker relationship than a function is random.
Would you be willing to illustrate that mathematically a bit? I can't seem to see what you mean by text only.
 
  • #26
Do you mean sample_A=f(human_A) and sample_B=f(human_B), with human_A=g(general_human, random_value) and human_B=g(general_human, random_value)?
 
  • #27
entropy1 said:
Would you be willing to illustrate that mathematically a bit? I can't seem to see what you mean by text only.
Suppose we have random variables X, Y = 2X, and Z = Y + ε = 2X + ε, where ε is an independent random variable. All three variables X, Y, Z are correlated. Y is a function of X, but it is still considered a random variable. Z is related to X but is not a deterministic function of X; it is still considered a random variable. Its connection (correlation) to X is weaker than Y's. There is no reason to say that Y or Z is less random than X. We could just as easily have started with Y, with X = 0.5Y and Z = Y + ε.
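Simulating that example directly (a sketch; statistics.correlation requires Python 3.10+):

Python:
import random
import statistics

random.seed(4)
X = [random.gauss(0, 1) for _ in range(100_000)]
Y = [2 * x for x in X]                        # deterministic function of X
Z = [2 * x + random.gauss(0, 1) for x in X]   # correlated, but not a function

print("corr(X, Y) =", round(statistics.correlation(X, Y), 3))  # exactly 1
print("corr(X, Z) =", round(statistics.correlation(X, Z), 3))  # 2/sqrt(5) ≈ 0.894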
 
  • #28
PeroK said:
A measurement of the spin of an electron will return a value which is not predictable.
It is predictable by the M-law.

Later on you say that the M-law is roughly equivalent to [some mainstream concepts of] God. It has one similarity, which is omniscience. But there is a lot more to mainstream concepts of God than just omniscience. The M-law could just be a list written down by some 25-dimensional being that is observing our spacetime, multiverse or whatever from the outside, transcending our space and time - but presumably having its own meta-time. It doesn't need to be supreme. It could in turn be observed by a 52-dimensional being, and that one by a 104-dimensional being, and so on ad infinitum.
PeroK said:
I would question the existence of either.
So do I. But the question is not whether there exists a theory or being that could predict the spin of the electron, but rather, is it in principle impossible that such a theory or being could exist? Of course we can't say, even though we may strongly suspect that no such theory or being exists in fact.

We could avoid the difficulty by limiting the allowable means of prediction to those that we currently know to exist and to be accessible to humans. But if we go down that path, what is predictable changes over time. As I point out in the essay, that would mean that a thousand years ago the year of the next appearance of Halley's comet was unpredictable. And in practice, that's what people mean by predictable. That definition is purely epistemological. It's only when people try to talk about fundamental or ontological unpredictability that they run into trouble.
 
  • #29
andrewkirk said:
It is predictable by the M-law.

You can't sweep away the evidence of QM and will a deterministic theory of everything into existence by writing those six words.

There is, in fact, an interesting quotation from Griffiths' book on QM: "even God doesn't know which electron is which". If God doesn't know, then neither do your hypothetical 25-dimensional beings.
 
  • Like
Likes nomadreid
  • #30
PeroK said:
There is, in fact, an interesting quotation from Griffiths' book on QM: "even God doesn't know which electron is which". If God doesn't know, then neither do your hypothetical 25-dimensional beings.
Indistinguishability of particles has nothing to do with the predictability of future observations.

Predictions under QM are probabilistic. But there is nothing in QM - including Bell's Theorem - that says QM can't be part of, and consistent with, some larger theory that is not probabilistic.
 
  • Like
Likes Zafa Pi
  • #31
entropy1 said:
Do you mean sample_A=f(human_A) and sample_B=f(human_B), with human_A=g(general_human, random_value) and human_B=g(general_human, random_value)?
Perhaps what you are looking for is this:
Suppose X and Y are correlated and P(X∈A | Y∈B) ≠ P(X∈A), so the events X∈A and Y∈B are not independent. Then knowing Y∈B has changed the probabilities of X. Knowing Y∈B can either increase or decrease the probability of X∈A: that is, there are examples where P(X∈A | Y∈B) > P(X∈A) and other examples where P(X∈A | Y∈B) < P(X∈A).

Furthermore, if X and Y are correlated real-valued random variables, there are examples where var( X | Y∈B ) < var( X ) and other examples where var( X | Y∈B ) > var( X ). So knowing Y∈B can either decrease or increase the random variability of X, depending on whether knowing Y∈B increases or decreases the predictability of X.

The bottom line is that it is not possible to make a general rule about how "random" a variable is simply because you know that it is correlated with another variable.
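For a concrete instance of both directions, here is a sketch with an invented joint distribution: Y=0 forces X=0, while Y=1 makes X a fair choice between 0 and 2, so conditioning can either shrink or grow the variance.

Python:
import random
import statistics

random.seed(5)
xs, ys = [], []
for _ in range(100_000):
    y = random.randint(0, 1)
    x = 0.0 if y == 0 else random.choice([0.0, 2.0])
    xs.append(x)
    ys.append(y)

print("corr(X, Y) ≈", round(statistics.correlation(xs, ys), 2))  # ≈ 0.58
print("var(X)     ≈", round(statistics.pvariance(xs), 2))        # theory: 0.75
print("var(X|Y=1) ≈", round(statistics.pvariance(
    [x for x, y in zip(xs, ys) if y == 1]), 2))                  # theory: 1.0
print("var(X|Y=0) ≈", round(statistics.pvariance(
    [x for x, y in zip(xs, ys) if y == 0]), 2))                  # exactly 0.0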
 
  • #32
Is it true that if X and Y are random variables, then "X given Y", P(X=1|Y=1), is random too?
 
  • #33
entropy1 said:
Is it true that if X and Y are random variables, then "X given Y", P(X=1|Y=1), is random too?
P(X=1|Y=1) is a specific probability number, not a random variable. (If P(Y=1)=0, then P(X=1|Y=1) is not even a valid probability.)
Suppose X and Y are discrete random variables where (X,Y) is the result of an experiment E, and P(Y=1) > 0. Then you could define a related experiment, E_{till Y=1}, in which experiment E is repeated until the result has Y=1. The X value resulting from E_{till Y=1} would be a random variable, Z = X_{till Y=1}.
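A sketch of that construction (the toy joint experiment is invented): repeating E until Y=1 and keeping the X of that run samples X from its conditional distribution given Y=1.

Python:
import random

random.seed(6)

def experiment():
    # Toy joint experiment: X uniform on {0, 1, 2}; Y=1 is likelier when X is large.
    x = random.randint(0, 2)
    y = 1 if random.random() < x / 2 else 0
    return x, y

def x_till_y_equals_1():
    while True:
        x, y = experiment()
        if y == 1:
            return x  # the Z = X_{till Y=1} of the post

draws = [x_till_y_equals_1() for _ in range(30_000)]
for v in (0, 1, 2):
    print(f"P(Z={v}) ≈", round(draws.count(v) / len(draws), 3))
# X=0 can never produce Y=1 here, so Z=0 has probability 0; Z is skewed toward 2.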
 
  • Like
Likes entropy1
  • #34
As the original question was about the definition (or lack thereof) of randomness, I am surprised no one has mentioned the work of Gregory Chaitin; e.g., Part III of https://www.cs.auckland.ac.nz/~chaitin/ait/index.html
 
  • Like
Likes entropy1
  • #35
In passing: random data is incompressible. Sometimes, though, incompressibility just means that the compression algorithm isn't clever enough to exploit the redundancy present.

So: if the data is compressible, it's not random. If it's incompressible, it *might* be random.

You may be able to come up with a calculable degree of randomness using entropy calculations.
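A sketch of the compressibility heuristic using a general-purpose compressor (zlib; the data here are invented). Note that the "incompressible" stream below is itself produced by a small algorithm plus a seed, which is exactly why incompressibility is necessary but not sufficient:

Python:
import random
import zlib

random.seed(7)
patterned = b"01" * 50_000                                     # highly redundant
pseudo = bytes(random.getrandbits(8) for _ in range(100_000))  # pseudorandom bytes

for name, data in [("patterned", patterned), ("pseudorandom", pseudo)]:
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(name, "compresses to", round(ratio, 3), "of its original size")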
 
  • Like
Likes FactChecker and entropy1
  • #36
Has the use of Chaitin-Kolmogorov-Solomonoff complexity for the definition of randomness been considered here?
 
  • Like
Likes nomadreid and entropy1
  • #37
@nomadreid: Sorry, I had not seen yours when I posted concerning algorithmic complexity. Thus, you would hold that Wolfram's Rule 30 is not random? Just asking.
 
  • Like
Likes nomadreid
  • #38
jimfarned said:
@nomadreid: Sorry, I had not seen yours when I posted concerning algorithmic complexity. Thus, you would hold that Wolfram's Rule 30 is not random? Just asking.
Excellent question. No, I would not consider it random, but rather deterministic chaos. (However, I would not be too dogmatic about it.) The question forces at least a partial definition of randomness, highlighting the difference between non-predictable deterministic behavior (chaos) and non-deterministic behavior (randomness). Chaos is a state where knowledge of the present determines the future, but knowledge of the approximate present does not. To put it in terms of algorithmic complexity applied to Rule 30: there exists a computer program which can produce the result given the initial conditions, and it will produce the same result each time (unlike, say, certain experiments at the quantum level). However, perhaps for a specific computer program we could bring up a concept of "relative randomness" or "randomness for this program" (or "axiom system", to take it back out of IT). Wiser people than I can take it from here...
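For reference, a minimal sketch of Rule 30: fully deterministic (the same seed always yields the same bits), yet its centre column looks statistically random.

Python:
# Rule 30: new cell = left XOR (centre OR right).
def rule30_centre_column(steps):
    width = 2 * steps + 3               # wide enough that the edges never matter
    cells = [0] * width
    cells[width // 2] = 1               # single black cell as initial condition
    bits = []
    for _ in range(steps):
        bits.append(cells[width // 2])
        cells = [cells[i - 1] ^ (cells[i] | cells[(i + 1) % width])
                 for i in range(width)]
    return bits

print("".join(map(str, rule30_centre_column(64))))
print("identical on re-run:", rule30_centre_column(64) == rule30_centre_column(64))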
 
  • #39
If we have A∈{0,1} and P(A)=0.5, and we do 45 throws of A and get 45 times '1', it would be compatible with probability theory, right?

So, if we have B∈{0,1} and P(B)=0.75, and we do 40 throws of B and get 20 times '1', it would also be compatible with probability theory, right?

So, finally, we have C∈{0,1} and P(C)=0.5, and we do 40 throws of C and get 20 times '1'.

Then experiments say nothing about probability, since P(B)≠P(C) while the outcomes of B and C are identical in my example. We can measure N(B)/N and get a ratio of 0.5 while we say P(B)=0.75, and we say P(A)=0.5 while we get all 1s.

I can imagine that this would also be possible with large numbers. So shouldn't we say that P(B)=0.5 instead? And P(A)=1.0? Yet the values given in my examples are perfectly reasonable!

So I can imagine that outcomes do not necessarily tell us anything about the probability.
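The binomial formula makes the asymmetry concrete: every one of these outcomes is possible under either probability, but they are not equally probable, and that difference is all that inference has to work with. A quick sketch:

Python:
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k ones in n independent throws with P(1) = p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

print("P(20 ones in 40 | p=0.50) =", binom_pmf(20, 40, 0.50))  # ≈ 0.125
print("P(20 ones in 40 | p=0.75) =", binom_pmf(20, 40, 0.75))  # ≈ 4e-4, tiny but nonzero
print("P(45 ones in 45 | p=0.50) =", binom_pmf(45, 45, 0.50))  # ≈ 2.8e-14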
 
  • #40
entropy1 said:
So I can imagine that outcomes do not necessarily tell us anything about the probability.

If "tells us" refers to logical deduction, you are correct. The mathematical theory of probability doesn't make definite predictions about actual outcomes. It only makes statements about the probabilities of actual outcomes. So the theory of probability is circular in that respect. Probability theory tells us about the probabilities of things. Likewise, actual outcomes do not provide exact information about probabilities. However, in applied math, real life problems are often approached by assuming data from actual outcomes gives us their probabilities. (Most real life problems are "solved" by making various assumptions.)
 
  • Like
Likes Zafa Pi and nomadreid
  • #41
Stephen Tashi said:
The mathematical theory of probability doesn't make definite predictions about actual outcomes. It only makes statements about the probabilities of actual outcomes. [..] real life problems are often approached by assuming data from actual outcomes gives us their probabilities.
So outcomes define probabilities, and probabilities predict (in a way) (averages of) outcomes? And in practice that goes well?
 
  • #42
Suppose that A,B∈{0,1} P(A)=0.5 and P(B)=0.5, while P(A|B)=1. We could write: P(A|anything)=0.5 and P(A|B)=1. So perhaps we have (at least) two different values for P(A), depending on some condition. So perhaps any probability for a particular variable is variable depending on how you look at that variable? For instance: we could be limited to experiments in which B=1.

Also, the probability of correlation in this example is 1. So we have a phenomenon, correlation, that has a probability on its own, right?
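As an aside on the setup itself: P(A|B)=1 with P(B)=0.5 forces P(A∩B)=0.5=P(A), so A and B must coincide except on a set of probability zero. A minimal sketch of the only kind of joint distribution satisfying the constraints:

Python:
import random

random.seed(8)
trials = 100_000
a = b = ab = 0
for _ in range(trials):
    outcome = random.randint(0, 1)
    A = B = (outcome == 1)   # the constraints force A and B to be the same event
    a += A; b += B; ab += (A and B)

print("P(A) ≈", a / trials, " P(B) ≈", b / trials, " P(A|B) ≈", ab / b)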
 
  • Like
Likes Stephen Tashi
  • #43
entropy1 said:
So outcomes define probabilities,
That is not a conclusion you can draw from the theory of probability. In applications, people often assume the data about outcomes gives their probabilities.

entropy1 said:
and probabilities predict (in a way) (averages of) outcomes?
It depends on what you mean by "in a way" and "predict". Probability theory gives the probabilities of outcomes.

entropy1 said:
And in practice that goes well?
It "probably" does, but there is no absolute guarantee.
 
  • #44
entropy1 said:
Suppose that A,B∈{0,1} P(A)=0.5 and P(B)=0.5, while P(A|B)=1. We could write: P(A|anything)=0.5 and P(A|B)=1. So perhaps we have (at least) two different values for P(A), depending on some condition.
This is a matter of notation. In the notation you are using, "P(A)" denotes the probability of A without any other conditions. So the fact that P(A|B) = 1 does not show that there are two different values of P(A). "A given B" refers to a different situation from the event "A" alone.

entropy1 said:
So perhaps any probability for a particular variable is variable depending on how you look at that variable?
You should distinguish between an "event" and a (random) "variable". It is correct that one may write different notations involving a random variable or an event, and these notations may represent different probabilities.

The words describing an event such as "I get a haircut" do not, by themselves, define a particular probability. So when people ask questions like "What is the probability I get a haircut?" they are not asking a question that has a unique answer - even though the event they describe may be clear.

Probability theory takes place on a "probability space". To describe a probability space, you must describe the set of possible events and assign each event a probability. For example, the set of haircut or non-haircut events might be defined on sets of days, such as "on a randomly selected day in 2018" or "on a randomly selected Tuesday before my 18th birthday", etc. The terminology "randomly selected" is shorthand for the fact that we assign each member of the set an equal probability.
entropy1 said:
Also, the probability of correlation in this example is 1. So we have a phenomenon, correlation, that has a probability on its own, right?

You need to study the details of probability and statistics to avoid mishaps in terminology. "Correlation" has a technical meaning in probability theory. In your example, the correlation coefficient is 1, but there is no need to speak of a "probability" of it being 1: it is definitely 1.

When we have (X,Y) data generated by a probability model, the sample correlation coefficient (which estimates the population correlation coefficient) can take different values with different probabilities, so it does make sense to talk about the sample correlation coefficient taking various values with various probabilities.
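A sketch of that distinction: the population correlation below is fixed (1/√2 for this invented model), but the sample correlation computed from 20 draws scatters around it.

Python:
import random
import statistics

random.seed(9)

def sample_corr(n):
    xs = [random.gauss(0, 1) for _ in range(n)]
    ys = [x + random.gauss(0, 1) for x in xs]  # population correlation = 1/sqrt(2)
    return statistics.correlation(xs, ys)      # requires Python 3.10+

estimates = [sample_corr(20) for _ in range(2000)]
print("mean of sample correlations ≈", round(statistics.mean(estimates), 3))
print("sd of sample correlations   ≈", round(statistics.stdev(estimates), 3))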
 
  • Like
Likes nomadreid
  • #45
nomadreid said:
As the original question was about the definition (or lack thereof) of randomness, I am surprised no one has mentioned the work of Gregory Chaitin; e.g., Part III of https://www.cs.auckland.ac.nz/~chaitin/ait/index.html
Chaitin's definition is:

"something is random if it is algorithmically incompressible or irreducible"

That sounds very similar to the definition of a 'normal' sequence referred to above. It doesn't match the folk notion of randomness, because it is a property that applies to a list of outcomes from a process rather than to the process itself.

It reminds me of a joke somebody told me a very long time ago. A man was having his bathroom refurbished and hired a tiler to lay the floor tiles. The man bought a bunch of tiles, some black and some white, and asked the tiler to use those. 'What pattern would you like?' asked the tiler. 'Oh, just lay them down at random,' said the man.

Being very literal-minded, the tiler put all the tiles in a cloth bag and selected them one at a time to lay down, without looking.

By pure chance, the floor ended up with a chessboard pattern of diagonals in the middle of it.

The owner complained 'that's not random' and refused to pay the bill. The tiler was cross and said 'I did exactly what you asked me to!'.

The point of the joke, according to the person who told it to me, was that the owner was an idiot who didn't think hard enough about the meaning of his instructions. But on a more charitable interpretation, the owner was a Chaitinist, while the tiler took an epistemological definition of random (i.e., a process is random from the point of view of person A if person A cannot predict what will happen next).
 
  • #46
A philosophical comment:

andrewkirk said:
A process may be unpredictable in one theory but predictable in a more sophisticated theory. There's no way we can know that there isn't some currently unknown, more sophisticated theory that can predict outcomes that currently seem random to us. So what we can say is that a given process is random with respect to theory T. That is, predictability depends on what theory we are using to predict.

From the point of view of applying this to the foundations of physics, I tend to use a similar definition. In the physical sense, I think of randomness as "observer dependent", where "observer" includes the whole inference machinery of memory and processing capacity. There may be physical limits to what can be "resolved", and this is then observer dependent. Here we can associate an observer with a theory T, if you understand the theory as a result of inference. Then, in principle, each observer encodes his own theory. But the possible theories are constrained by their complexity.

In this way, applied to, say, random walks, it allows for "explanatory models" in terms of random walks, where the non-random patterns emerge at other observational scales. The observer-dependent randomness may explain why some interactions decouple at high energy, as the causal rules can no longer be coded by the smaller and smaller coherent interacting agents.

/Fredrik
 
  • #47
PeroK said:
A measurement of the spin of an electron will return a value which is not predictable.
Indeed, QM says measurement outcomes are random variables. It is impossible to test that in the real world: I can produce binary sequences that satisfy all known randomness tests but are produced by an algorithm.
PeroK said:
There is, in fact, an interesting quotation from Griffiths' book on QM: "even God doesn't know which electron is which".
Like Bohr with Einstein, I told David several times to stop telling God what he can't do.
 
  • #48
Zafa Pi said:
Indeed, QM says measurement outcomes are random variables. It is impossible to test that in the real world: I can produce binary sequences that satisfy all known randomness tests but are produced by an algorithm.

Like Bohr with Einstein, I told David several times to stop telling God what he can't do.

Tests for randomness are a little strange. If you have some random process for generating a sequence of 0s and 1s, then the sequence

01010101010101010101010101...

is just as likely as the sequence

001001000011111101101010100...
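A sketch of why: under a fair coin, every specific 26-bit string has probability 2^-26, the alternating one included. What a randomness test examines is not the string's probability but a statistic, such as the number of runs, that takes extreme values only for a small minority of strings.

Python:
s1 = "01010101010101010101010101"   # first 26 bits of the first sequence
s2 = "00100100001111110110101010"   # first 26 bits of the second sequence

def runs(s):
    """Number of maximal blocks of equal symbols."""
    return 1 + sum(a != b for a, b in zip(s, s[1:]))

print("probability of each specific string:", 2 ** -len(s1))
print("runs in s1:", runs(s1), "| runs in s2:", runs(s2))
# s1 attains the maximum possible 26 runs, which a runs test flags as
# wildly atypical; s2 looks unremarkable.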

Daryl
 
  • #49
stevendaryl said:
Tests for randomness are a little strange. If you have some random process for generating a sequence of 0s and 1s, then the sequence

01010101010101010101010101...

is just as likely as the sequence

001001000011111101101010100...

Daryl
That is what I mean.
 
  • #50
Zafa Pi said:
Like Bohr with Einstein, I told David several times to stop telling God what he can't do.
Zafa Pi said:
Indeed, QM says measurement outcomes are random variables. It is impossible to test that in the real world: I can produce binary sequences that satisfy all known randomness tests but are produced by an algorithm.

It's certainly not impossible to test for randomness, although it is impossible to prove it mathematically.

When it comes to physics, you would have to propose an algorithm for quantum measurements, say, that could then be tested.

The inability, theoretically or practically, to predict the outcome of an experiment is essentially what is meant by randomness - in QM at least.
 
  • Like
Likes Zafa Pi