What is the definition of randomness in mathematics and physics?

  • Thread starter Aidyan
  • Tags
    Randomness
In summary, the conversation discusses the definitions of and distinctions between randomness, probabilistic processes, and non-deterministic processes, as well as the challenges of defining randomness scientifically. Examples such as human decision-making and coin flips are used to illustrate the differences between these concepts. It is ultimately concluded that while pseudo-randomness can sometimes be identified, true randomness cannot be scientifically observed or defined.
  • #1
Aidyan
The Oxford English Dictionary defines 'random' as: "Having no definite aim or purpose; not sent or guided in a particular direction; made, done, occurring, etc., without method or conscious choice". However, if we interpret randomness as events occurring with equal frequency, this cannot be right. Think, for example, of the symbol frequencies in an encrypted text file. So I'm wondering whether there exists a rigorous definition of randomness in mathematics and/or physics that can be read in the sense of the dictionary definition above.
 
  • #2
The symbol frequencies are pseudorandom, not truly random.
 
  • #3
bpet said:
The symbol frequencies are pseudorandom, not truly random.

Yes, and the question could be rephrased as: "how does someone distinguish between pseudorandomness and randomness?"
 
  • #4
In CS, it is common to distinguish between probabilistic and non-deterministic processes.

A probabilistic process usually generates events which satisfy the law of large numbers: if you run it a large number of times, the observed frequencies approach the probabilities of the individual events.

A non-deterministic process can generate events at will; there is only choice, and no probability is involved.

The difference is best explained with a probabilistic 50/50 coin flip versus a non-deterministic coin flip. The first satisfies the law of large numbers (LLN): with high probability, after ten flips you end up close to five heads. The latter doesn't abide by the LLN, and anything can happen in ten coin flips.
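(A minimal sketch of the probabilistic side, in Python with only the standard library; the function name and parameters are just illustrative. The running fraction of heads drifts toward 1/2, as the LLN says it should, while the non-deterministic flip has no distribution to simulate at all.)

Code:
import random

def running_fraction_of_heads(n_flips, p=0.5, seed=0):
    """Simulate a probabilistic coin with P(heads) = p; return the running fraction of heads."""
    rng = random.Random(seed)
    heads = 0
    fractions = []
    for i in range(1, n_flips + 1):
        heads += rng.random() < p   # True counts as 1
        fractions.append(heads / i)
    return fractions

fracs = running_fraction_of_heads(10_000)
print(fracs[9], fracs[99], fracs[999], fracs[9999])  # tends toward 0.5 as n grows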

The mathematical underlying models often used are probabilistic and non-deterministic automata.

(Non-deterministic automata are interesting in CS since, in general, software isn't probabilistic but non-deterministic. Checking for bugs with probabilistic means, for example, makes sense but cannot prove their absence.)

[I wish I could explain it better, but even on the Wikipedia pages non-determinism and probability are conflated, which is a no-go in CS theory.]
 
  • #5
MarcoD said:
The difference is best explained with a probabilistic 50/50 coin flip versus a non-deterministic coin flip. The first satisfies the law of large numbers (LLN): with high probability, after ten flips you end up close to five heads. The latter doesn't abide by the LLN, and anything can happen in ten coin flips.

What is a "non-deterministic coin flip"? As I understand it, coin flips always satisfy the LLN. Apart from this, what are the non-deterministic processes in the physical world? Some examples? (Preferably non-quantum-mechanical ones.)
 
  • #6
Aidyan said:
What is a "non-deterministic coin flip"? As I understand it, coin flips always satisfy the LLN. Apart from this, what are the non-deterministic processes in the physical world? Some examples? (Preferably non-quantum-mechanical ones.)

Probabilism and non-determinism are the mathematically precise notions of mechanisms involving probability and possibility, respectively.

The best example I know of to make the distinction apparent is a human. Say you have a switch that lights either a red or a green bulb, and a human operating that switch. Would you assign probabilities to that, like 50/50? No: a human has a choice in operating it and, for instance, might just always choose red.

I don't know whether non-deterministic processes really exist in the real world. To be honest, it is something which cannot be observed; non-determinism is just a technical notion which comes in handy in CS.

(To really observe non-determinism you would need to find a process to which it just looks infeasible to assign probabilities, i.e., something which fluctuates so wildly that your best guess is that it is non-deterministic. It is noteworthy that QM, for instance, doesn't observe non-determinism, which makes it very likely that physicists are observing not even a probabilistic process but an entirely causal/mechanical system. Uh, IMO.)
 
  • #7
MarcoD said:
The best example I know of to make the distinction apparent is a human. Say you have a switch that lights either a red or a green bulb, and a human operating that switch. Would you assign probabilities to that, like 50/50? No: a human has a choice in operating it and, for instance, might just always choose red.

But a human can somehow simulate randomness, and if I see only the light, without knowing whether it is a human or a natural source modulating it, I won't be able to distinguish deterministic from non-deterministic, or a random from a pseudo-random symbol sequence.

MarcoD said:
I don't know whether non-deterministic processes really exist in the real world. To be honest, it is something which cannot be observed; non-determinism is just a technical notion which comes in handy in CS.

Yes, precisely. That's why I'm wondering whether there is any scientific method to conceive of randomness as "having no definite aim or purpose; not sent or guided in a particular direction". Despite what most believe, there is none.
 
  • #8
Aidyan said:
But a human can somehow simulate randomness, and if I see only the light, without knowing whether it is a human or a natural source modulating it, I won't be able to distinguish deterministic from non-deterministic, or a random from a pseudo-random symbol sequence.

Pseudo-randomness can sometimes be discovered, as in those famous 2D plots of pseudo-random number generators. Apart from that, yeah, you're correct.
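(A rough illustration of that kind of plot, in plain Python with made-up, deliberately weak LCG parameters: consecutive pairs (x_n, x_{n+1}) of a bad linear congruential generator line up on a few lines when drawn as a crude text grid, which is exactly the sort of structure those 2D plots expose.)

Code:
def lcg(seed, a=37, c=1, m=256):
    """A deliberately weak linear congruential generator (illustrative parameters only)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

def pair_plot(n_points=256, size=32):
    """Mark consecutive output pairs on a size x size text grid."""
    grid = [[' '] * size for _ in range(size)]
    gen = lcg(seed=1)
    prev = next(gen)
    for _ in range(n_points):
        cur = next(gen)
        grid[prev * size // 256][cur * size // 256] = '*'
        prev = cur
    return '\n'.join(''.join(row) for row in grid)

print(pair_plot())  # the stars fall on a small number of straight lines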

Aidyan said:
Yes, precisely. That's why I'm wondering whether there is any scientific method to conceive of randomness as "having no definite aim or purpose; not sent or guided in a particular direction". Despite what most believe, there is none.

I don't follow what you're saying here. Randomness is usually associated with probabilism; therefore your definition, to me, reads more as a definition of non-determinism. And that cannot be observed, I think. (Except by finding a process where events occur in such a manner that there doesn't seem to be a feasible way of assigning probabilities. That would be difficult; well, on the other hand, maybe life is exactly like that, and it isn't that hard.)
 
  • #9
MarcoD said:
I don't follow what you're saying here. Randomness is usually associated with probabilism; therefore your definition, to me, reads more as a definition of non-determinism. And that cannot be observed, I think.

Yes, the question was whether it is possible to distinguish between the two kinds of processes. If the answer is no, then I think it is sufficiently clear now. Thanks.
 
  • #10
Aidyan said:
The Oxford English Dictionary defines 'random' as: "Having no definite aim or purpose; not sent or guided in a particular direction; made, done, occurring, etc., without method or conscious choice". However, if we interpret randomness as events occurring with equal frequency, this cannot be right. Think, for example, of the symbol frequencies in an encrypted text file. So I'm wondering whether there exists a rigorous definition of randomness in mathematics and/or physics that can be read in the sense of the dictionary definition above.

A process is random if the available information is useless in predicting the next outcome.
 
  • #11
lavinia said:
A process is random if the available information is useless in predicting the next outcome.

Well, this could be a good working definition in many cases, but it makes randomness knowledge-dependent. My capacity to make use of the information in predicting the evolution of a system depends on my understanding of it and the laws that rule it. If today I'm not able to do this and the process "appears" random, tomorrow I may have a better theoretical background and the randomness "disappears". So it would be a subjective category, not an objective one as science requires.
 
  • #12
lavinia said:
A process is random if the available information is useless in predicting the next outcome.

That's not true. Randomness does not mean the behavior of a system cannot be modeled. Stochastic processes can be modeled if we can assign probabilities. These processes are still random even though we have information which allows us to model them probabilistically.

The next outcome x will have a probability p(x). The uncertainty associated with predicting that outcome can be assigned a measure U = p(x)(1 - p(x)). U is maximal when p(x) = 0.5 and approaches 0 as p(x) approaches either 1 or 0. Often U is normalized: U = 4p(x)(1 - p(x)).
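(A small numerical sketch of that measure in Python; the function name is just for illustration.)

Code:
def uncertainty(p, normalized=False):
    """U = p(1 - p); the normalized form 4p(1 - p) runs from 0 to 1."""
    u = p * (1.0 - p)
    return 4.0 * u if normalized else u

for p in (0.01, 0.25, 0.5, 0.75, 0.99):
    print(p, uncertainty(p), uncertainty(p, normalized=True))
# U is largest at p = 0.5 and goes to 0 as p approaches 0 or 1.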
 
  • #13
lavinia said:
A process is random if the available information is useless in predicting the next outcome.
Just piling on, but there are at least two problems here:
- Useless is far too strong a term.
- "Next outcome" rules out continuous random processes.

A process is "random" if the future evolution of the process is not uniquely determined by any (knowable) set of initial data.

To be picky, there is no reason to exclude processes whose evolution over time is uniquely determined by initial data. For example, given the probability space {4}, I guarantee that every time you randomly select from this space you will get a four.
 
  • #14
SW VandeCarr said:
That's not true. Randomness does not mean the behavior of a system cannot be modeled. Stochastic processes can be modeled if we can assign probabilities. These processes are still random even though we have information which allows us to model them probabilistically.

The next outcome x will have a probability p(x). The uncertainty associated with predicting that outcome can be assigned a measure U = p(x)(1 - p(x)). U is maximal when p(x) = 0.5 and approaches 0 as p(x) approaches either 1 or 0. Often U is normalized: U = 4p(x)(1 - p(x)).

But the question was what randomness is, not what a stochastic process is. And it all depends on what we mean by "useful". If U nears 0, it is because the event is very likely or very unlikely. The former case can't be taken as a definition of randomness; the latter perhaps only if p(x) = 1/N with N → ∞ the number of possible outcomes. But then it can hardly be said to be "useful" for predicting the next outcome.
 
  • #15
D H said:
A process is "random" if the future evolution of the process is not uniquely determined by any (knowable) set of initial data.

If you include the parenthetical "(knowable)", then it again makes the notion of randomness knowledge- and observer-dependent, not an intrinsic behavior of phenomena. If you exclude it, then I wonder what such a process might be. The only process I can think of is QM without hidden variables. But there, too, we won't find any definition of randomness.

I think it all boils down to the conclusion that randomness isn't a universally defined scientific concept. Despite widespread belief, "randomness" is not a scientific but a subjective category, like "beauty", which only expresses our ignorance, not an intrinsic property of processes or things.
 
  • #16
Aidyan said:
But the question was what randomness is, not what a stochastic process is. And it all depends on what we mean by "useful". If U nears 0, it is because the event is very likely or very unlikely. The former case can't be taken as a definition of randomness; the latter perhaps only if p(x) = 1/N with N → ∞ the number of possible outcomes. But then it can hardly be said to be "useful" for predicting the next outcome.

I was responding specifically to the post I quoted, which described randomness incorrectly. A stochastic process is a process that can be described by a random variable. Look up the definition of a random variable. As for the broader definition of randomness, look up the Kolmogorov definition. It's considered the most rigorous generally accepted definition, as far as I know.
 
  • #17
If you are after a formal definition, the wiki article on random variables is pretty good:
http://en.wikipedia.org/wiki/Random_variable#Formal_definition

You are going to have to understand measure theory before you can make sense of that definition. Knowing the axiom of choice won't hurt.

Without that knowledge, descriptions of randomness are going to look like handwaving. Just because the lay description is a bit loosey-goosey doesn't mean that a formal definition doesn't exist.
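(For reference, the measure-theoretic definition the linked article gives, in rough form: a random variable is a measurable function from a probability space to a measurable space.)

$$
X : (\Omega, \mathcal{F}, P) \to (E, \mathcal{E}), \qquad
X^{-1}(B) = \{\omega \in \Omega : X(\omega) \in B\} \in \mathcal{F}
\quad \text{for every } B \in \mathcal{E}.
$$

The "randomness" then lives entirely in the probability measure P; the function X itself is just required to be measurable.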
 
  • #18
Aidyan said:
Well, this could be a good working definition in many cases, but it makes randomness knowledge-dependent. My capacity to make use of the information in predicting the evolution of a system depends on my understanding of it and the laws that rule it. If today I'm not able to do this and the process "appears" random, tomorrow I may have a better theoretical background and the randomness "disappears". So it would be a subjective category, not an objective one as science requires.

That is correct. But in some physical phenomena there is no information set that improves predictability. I find it wrong, though, to call information dependence subjective. It is not subjective but lawfully determined.
 
  • #19
Well, my understanding of a random variable/process is one in which individual values cannot be predicted; the outcomes (their values) can only be described probabilistically, and this is supposed to be an intrinsic issue and not just knowledge-dependent.
 
  • #20
Bacle said:
Well, my understanding of a random variable/process is one in which individual values cannot be predicted; the outcomes (their values) can only be described probabilistically, and this is supposed to be an intrinsic issue and not just knowledge-dependent.

Stochastic processes always involve knowledge since they have a history.
 
  • #21
But the knowledge is not deterministic, it is probabilistic: we cannot predict any individual/specific outcome; the best we can do, knowledge-wise, is to have a distribution.

What I meant to say is that there may be a distinction between intrinsic and extrinsic randomness. A variable may be extrinsically random to someone without knowledge of how the process works, but not random to someone else who does. One would then say that the process is intrinsically random if specific outcomes or values of the variable cannot be predicted, but the behavior of the variable can be described probabilistically.

A coin toss may be the standard example. The history may tell us that P(heads) = a and P(tails) = 1 - a, but we cannot predict, for any given throw, whether we will get heads or tails; our knowledge goes only so far as to tell us the long-term limiting proportion of heads and tails to the total.
 
  • #22
Aidyan said:
The Oxford English Dictionary defines 'random' as: "Having no definite aim or purpose; not sent or guided in a particular direction; made, done, occurring, etc., without method or conscious choice". However, if we interpret randomness as events occurring with equal frequency, this cannot be right. Think, for example, of the symbol frequencies in an encrypted text file. So I'm wondering whether there exists a rigorous definition of randomness in mathematics and/or physics that can be read in the sense of the dictionary definition above.

No, there is no single rigorous definition for randomness based on the Oxford English Dictionary definition; like any natural-language definition, it admits multiple interpretations, based on the multiple interpretations of each of its component clauses and of the words making up those clauses in differing contexts.

Here is the definition for Kolmogorov randomness: "a string of bits is random if and only if it is shorter than any computer program that can produce that string."
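(Kolmogorov complexity itself is uncomputable, but any real compressor gives an upper bound on it, so here is a rough, hedged sketch in Python using zlib as a stand-in for "the shortest program". It can flag a string as clearly non-random, but it can never certify true randomness, which is rather the point of this thread.)

Code:
import os
import zlib

def looks_incompressible(data: bytes) -> bool:
    """Upper-bound test only: True means zlib found no shorter description."""
    return len(zlib.compress(data, 9)) >= len(data)

structured = b"abab" * 1000        # highly regular, compresses a lot
noisy = os.urandom(4000)           # OS entropy, should not compress

print(looks_incompressible(structured))  # False
print(looks_incompressible(noisy))       # True (almost certainly)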

"
 
  • #23
xxxx0xxxx said:
Here is the definition for Kolmogorov randomness: "a string of bits is random if and only if it is shorter than any computer program that can produce that string."

Yes, the Kolmogorov definition is probably the most rigorous one. However, as far as I understand it, it isn't very useful, since there is no way to determine that shortest program. Chaitin showed that a given symbol sequence can in general not be proved to be random, because of the limits imposed by Gödel's incompleteness theorem. In other words, there will never be a method which distinguishes between truly random and pseudo-random events. The distinction between random and pseudo-random is unwarranted. "True randomness" is a meaningless notion which humans use only to hide their ignorance.
 
  • #24
Aidyan said:
Yes, the Kolmogorov definition is probably the most rigorous one. However, as far as I understand it, it isn't very useful, since there is no way to determine that shortest program. Chaitin showed that a given symbol sequence can in general not be proved to be random, because of the limits imposed by Gödel's incompleteness theorem. In other words, there will never be a method which distinguishes between truly random and pseudo-random events. The distinction between random and pseudo-random is unwarranted. "True randomness" is a meaningless notion which humans use only to hide their ignorance.

Well, I kinda agree with that as well. Common sense tells you that you can't tell something is random just by looking at it, even if it has high entropy.

Cryptographers believe that the one-time pad is the only completely secure enciphering technique, since you use the pad once and throw it away. It's only random if you don't use it again.
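(A minimal sketch of the one-time pad in Python: XOR with a key as long as the message, used once. If the key bytes are uniformly random and never reused, the ciphertext carries no information about the message.)

Code:
import os

def otp_xor(message: bytes, key: bytes) -> bytes:
    """XOR the message with a one-time key of equal length; XOR is its own inverse."""
    assert len(key) == len(message)
    return bytes(m ^ k for m, k in zip(message, key))

message = b"attack at dawn"
key = os.urandom(len(message))       # use once, then destroy
ciphertext = otp_xor(message, key)
assert otp_xor(ciphertext, key) == message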

But you didn't ask for a theorem, only a definition :)
 
  • #25
"No, there is no single rigorous definition for randomness based on the Oxford English Dictionary definition"

But why use the definition in an English dictionary, which almost necessarily gives a more colloquial and less accurate definition than you would find, say, in a stochastic calculus book?
 
  • #26
Bacle said:
A variable may be extrinsically random to someone without knowledge of how the process works, but not random to someone else who does.

Yes, indeed. You describe more rigorously what I mean by randomness being a "knowledge-dependent" notion.

Bacle said:
One would then say that the process is intrinsically random if specific outcomes or values of the variable cannot be predicted, but the behavior of the variable can be described probabilistically.

But with this you have only exchanged the observer's position: from someone with knowledge to someone without it. Randomness can at best be taken as a measure of our ignorance (much like entropy), but I don't think anything like "intrinsic randomness" exists in the real world out there; it exists only in our minds. The no-hidden-variables interpretation of QM might be the one example of real intrinsic randomness. But, as far as I know, there is no rigorous definition of randomness which distinguishes between intrinsic and extrinsic, pseudo- and true randomness. It remains an intuitive category.

Bacle said:
A coin toss may be the standard example. The history may tell us that P(heads) = a and P(tails) = 1 - a, but we cannot predict, for any given throw, whether we will get heads or tails; our knowledge goes only so far as to tell us the long-term limiting proportion of heads and tails to the total.

If we knew everything down to the molecular level about the forces involved, and had sufficient computing power to trace the chain of cause and effect, it would be predictable.
 
  • #27
Bacle said:
"No, there is no single rigorous definition for randomness based on the Oxford English Dictionary definition". But, why use the definition in an English dictionary, which , almost necessarily gives a more colloquial and less accurate definition that you would find, say, in a stochastic calculus book?

That's why I asked for an accurate definition.

I was interested in the idea of distinguishing between random and pseudo-random events. For example, suppose you have a radio telescope and receive a signal which could be interpreted as coming from an extraterrestrial intelligence. But you don't know the language; those beings might communicate with a set of symbols we interpret as random when in reality they are only pseudo-random. Is there any way to distinguish between the two? I don't think so, because it is knowledge- and code-dependent. Therefore "randomness" is a subjective concept, like "ignorance" or "beauty", not an objective or "intrinsic" property of things or processes.
 
  • #28
Here is something that might interest you, go to episode 5.
http://www.bbc.co.uk/podcasts/series/iots
 
  • #29
Jobrag said:
Here is something that might interest you, go to episode 5.
http://www.bbc.co.uk/podcasts/series/iots

Thanks, an interesting podcast. It confirms that randomness is a 'slippery' thing: there are sequences of symbols that are produced deterministically (the examples of pi and the prime numbers were nice), and yet they pass all the definitions of randomness statisticians could think of. Obviously this is because randomness is not an intrinsic, objective property of things or processes; it has no concrete existence in itself, but is a relative, subjective mental category. And the idea of connecting "lack of purpose", "lack of will", or "lack of conscious choice" to random events is an unwarranted logical inference. In the end, good old Democritus was right.
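(A rough sketch of the pi point, assuming the mpmath package is available for the digits: the digits come from a completely deterministic rule, yet a simple chi-square test for digit-frequency uniformity treats them like any "random" sequence. Compare the statistic against the 5% critical value of about 16.9 for 9 degrees of freedom.)

Code:
from collections import Counter
from mpmath import mp

mp.dps = 10010                     # working precision in decimal places
digits = str(+mp.pi)[2:]           # drop the leading "3." to keep only digits

counts = Counter(digits)
n = len(digits)
expected = n / 10
chi2 = sum((counts[str(d)] - expected) ** 2 / expected for d in range(10))
print(n, chi2)   # 9 degrees of freedom; the 5% critical value is roughly 16.9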
 
  • #30
Aidyan said:
That's why I asked for an accurate definition.

I was interested in the idea of distinguishing between random and pseudo-random events. For example, suppose you have a radio telescope and receive a signal which could be interpreted as coming from an extraterrestrial intelligence. But you don't know the language; those beings might communicate with a set of symbols we interpret as random when in reality they are only pseudo-random. Is there any way to distinguish between the two? I don't think so, because it is knowledge- and code-dependent. Therefore "randomness" is a subjective concept, like "ignorance" or "beauty", not an objective or "intrinsic" property of things or processes.

But wouldn't the probabilistic definition, i.e., having a distribution, or seeing that its values converge, in some precise sense, to a distribution, make the signal non-random by this definition? Maybe we can then define noise as a collection of signals that do not reveal any probabilistic pattern.

I think the problem with the symbols may not be so much whether they are random or not, but more in interpreting and assigning some meaning to them. I don't know, though, whether we can assume that a probabilistic pattern (i.e., convergence to a distribution) reveals or suggests that there is some meaning attached to the signal.
 
  • #31
Aidyan said:
That's why I asked for an accurate definition.

I was interested in the idea of distinguishing between random and pseudo-random events. For example, suppose you have a radio telescope and receive a signal which could be interpreted as coming from an extraterrestrial intelligence. But you don't know the language; those beings might communicate with a set of symbols we interpret as random when in reality they are only pseudo-random. Is there any way to distinguish between the two? I don't think so, because it is knowledge- and code-dependent. Therefore "randomness" is a subjective concept, like "ignorance" or "beauty", not an objective or "intrinsic" property of things or processes.

Ah well, we're talking about a different kind of animal than randomness. Signals always contain extra information which we call "noise". In this case you must be able to distinguish the part of the signal containing useful information from the part that is just noise (another word for the part we're not interested in detecting). The way it is done is that you have to know something about the information in the signal a priori, e.g. its modulation scheme.

It is a peculiar property of signals that the more noisy, i.e. random, they seem to be, the more information they contain.
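(A small sketch of that property in Python, using Shannon entropy as the measure of information per symbol: a very regular signal is far below the 8 bits/byte ceiling, while a noisy-looking one sits close to it.)

Code:
import math
import os
from collections import Counter

def entropy_bits_per_byte(data: bytes) -> float:
    """Empirical Shannon entropy of the byte distribution, in bits per byte."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

print(entropy_bits_per_byte(b"abab" * 1000))    # 1.0 bit/byte
print(entropy_bits_per_byte(os.urandom(4000)))  # close to 8 bits/byte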
 
  • #32
Still, regarding recovering noisy signals without knowing the original, I have heard of techniques using maximum-likelihood estimation (with the two parameters being the mean and the variance, though cheating in that the variance is ultimately held fixed), in which the parameter value (the mean) that maximizes the likelihood of the received signal is the average of a collection of samples of the signal taken at different receivers. This technique is used at a lower level with cell phones, where noisy signals are received and an approximation to the original signal is made by averaging the signals received at different towers. I don't know how well this would extend to signals coming from outside, but it seems like something to start with.
 
  • #33
Bacle said:
Still, regarding recovering noisy signals without knowing the original, I have heard of techniques using maximum-likelihood estimation (with the two parameters being the mean and the variance, though cheating in that the variance is ultimately held fixed), in which the parameter value (the mean) that maximizes the likelihood of the received signal is the average of a collection of samples of the signal taken at different receivers. This technique is used at a lower level with cell phones, where noisy signals are received and an approximation to the original signal is made by averaging the signals received at different towers. I don't know how well this would extend to signals coming from outside, but it seems like something to start with.

Yes, I think that's a power-management technique, but sometimes the actual information is modulated using a so-called "spread spectrum" technique; the modulation scheme depends on some pseudo-random sequencing of power, frequency, phase, or amplitude. Provided the receiver knows the sequencing scheme, the signal can be detected in the noise by averaging. GPS, for instance, works along these lines, as does CDMA (3G) cell-phone service (although the power on cell phones is high enough that a rake receiver can pull the signal out immediately; that is primarily for combating fading due to multipath propagation).
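(A rough numpy sketch of the direct-sequence idea, illustrative only and not tied to any particular standard: each data bit is spread by a known pseudo-random ±1 chip sequence, heavy noise is added, and the receiver recovers the bits by correlating against the same sequence and taking the sign.)

Code:
import numpy as np

rng = np.random.default_rng(0)

chips_per_bit = 128
code = rng.choice([-1.0, 1.0], size=chips_per_bit)   # shared pseudo-random code
bits = rng.choice([-1.0, 1.0], size=20)              # data bits encoded as +/-1

transmitted = np.concatenate([b * code for b in bits])
received = transmitted + rng.normal(scale=4.0, size=transmitted.size)  # heavy noise

recovered = np.array([
    np.sign(received[i * chips_per_bit:(i + 1) * chips_per_bit] @ code)
    for i in range(bits.size)
])
print(np.mean(recovered == bits))   # close to 1.0 despite the noise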
 
  • #34
Bacle said:
I think the problem with the symbols may not be so much whether they are random or not, but more in interpreting and assigning some meaning to them.

Indeed. Think of the decimal digit sequence of pi, which looks completely random and yet has a clearly defined meaning; but only if you know what that strange number pi is. And still there is that deeply ingrained preconception that randomness = no meaning, no purpose, no conscious choice, etc.
 
  • #35
xxxx0xxxx said:
Ah well, we're talking about a different kind of animal than randomness. Signals always contain extra information which we call "noise". In this case you must be able to distinguish the part of the signal containing useful information from the part that is just noise (another word for the part we're not interested in detecting). The way it is done is that you have to know something about the information in the signal a priori, e.g. its modulation scheme.

As I understand it, "noise" is considered a random process. The separation of noise from "useful information" is knowledge/observer-dependent. It is the a priori knowledge which makes the difference. If we know nothing, everything might appear noisy, i.e., random.
 
