What's the meaning of "random" in Mathematics?

  • B
  • Thread starter fbs7
  • #1

Main Question or Discussion Point

Physicists, economists, biologists, astronomers, and my brother all love the word "random", as it allows them to get out of clockwork processes and allow for variations due to unknowns or whatever else.

But how do mathematicians reconcile themselves with the idea of random? There's no axiom for "choice", no function for "random value", no explanation of what "chance" is.

Meanwhile, I heard that someone spent 500 pages of logic to prove that 1+1 = 2 (or something like that), so how is it possible that mathematicians and logicians go to all that trouble to prove some really basic stuff, while at the same time just accepting theories around probabilities and random numbers without (at least from my untrained point of view) an axiomatic foundation for choice?
 
  • Like
Likes FactChecker, TachyonLord, Demystifier and 1 other person

Answers and Replies

  • #2
Physicists, economists, biologists, astronomers, and my brother all love the word "random", as it allows them to get out of clockwork processes and allow for variations due to unknowns or whatever else.

But how do mathematicians reconcile themselves with the idea of random? There's no axiom for "choice", no function for "random value", no explanation of what "chance" is.

Meanwhile, I heard that someone spent 500 pages of logic to prove that 1+1 = 2 (or something like that), so how is it possible that mathematicians and logicians go to all that trouble to prove some really basic stuff ...
As a programmer you should know that ##1+1=0##, so basic stuff is quite relative here.
... while at the same time just accept theories around probabilities and random numbers without (as least from my untrained point of view) an axiomatic foundation for choice?
"Random" are the possible values of a measurable function from a probability space ##(\Omega,\Sigma,P)## to a measurable space ##(\Omega',\Sigma')##.

Axiomatic enough?
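
For concreteness, here is a minimal sketch (Python; the die example and all names are purely illustrative) of such a setup: a finite probability space ##(\Omega,\Sigma,P)## and a "random variable" that is nothing more than an ordinary function on ##\Omega##.

```python
from fractions import Fraction
from itertools import chain, combinations

omega = {1, 2, 3, 4, 5, 6}                       # Omega: outcomes of one die roll

def powerset(s):
    """For a finite Omega we may take the full power set as the sigma-algebra Sigma."""
    return [frozenset(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

sigma = powerset(omega)

def P(event):
    """Probability measure: uniform weight 1/6 on each outcome."""
    return Fraction(len(event), len(omega))

def X(outcome):
    """A 'random variable': a plain, deterministic function on Omega."""
    return outcome % 2                           # 1 if the roll is odd, 0 if even

# "P(X = 1)" is just P applied to the preimage X^{-1}({1}); nothing in the calculation is random.
preimage = frozenset(w for w in omega if X(w) == 1)
assert preimage in sigma                         # measurability is trivial here
print(P(preimage))                               # 1/2
```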
 
  • Like
Likes Matt Benesi and suremarc
  • #3
As a programmer you should know that ##1+1=0##, so basic stuff is quite relative here.

"Random" are the possible values of a measurable function from a probability space ##(\Omega,\Sigma,P)## to a measurable space ##(\Omega',\Sigma')##.

Axiomatic enough?

Well, f(x) = 1/(1+x) has possible values in [0,1] on a space defined by [0,∞)... how is that different from random?

Also, if that "measurable function" is a random value, then can someone write a formula for a truly random function (as opposed to pseudo-random)? How would it define a value, if the value is random?

I know one can write a formula to express probability, and that's fine. Say, probability = 1/(2+x) or something like that. But that doesn't define a random value; it just gives an average, over a large enough number of samples, for the sampled values to fall within a range.

Like, the probability of a duck being male, if you count 1 million ducks, is I guess 50%. But then the "choice" of whether the duck is male or female is not random, it's defined by a well-known biological process.
 
  • #4
Well, f(x) = 1/(1+x) has possible values in [0,1] on a space defined by [0,∞)... how is that different from random?
You need a bit more than just the intervals: sigma algebras (the measurable sets) on ##[0,1]## and ##[0,\infty)##, plus a probability measure to get the usual properties on ##[0,1]##: ##P:\Sigma \longrightarrow [0,1],\ P(\emptyset)=0,\ P(\Omega)=1,\ P(\dot{\cup} A_i)=\sum P(A_i)##.
Also, if that "measurable function" is a random value, then what is the function?
The function isn't random, it's a random variable ##X\, : \,(\Omega, \Sigma ,P) \longrightarrow (\Omega',\Sigma')\,.##
How would it define a value, if the value is random?
##P## does, by relating a certain real value to each element of ##\Sigma \subseteq \mathbb{P}(\Omega)##. The values themselves aren't random; they are determined. They represent a probability, and this is fixed by the function, a.k.a. the random variable. Randomness is just the common word for probability. It is a confusing word, as the probability value itself is not random.

You can handle the subject as ordinary real analysis. Randomness is its interpretation, not the method.
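
A quick sanity check of those properties on a toy space (Python; illustrative only), showing that ##P## is a perfectly deterministic map from events to numbers in ##[0,1]##:

```python
from fractions import Fraction

omega = frozenset({1, 2, 3, 4, 5, 6})

def P(event):
    """Uniform probability measure on the power set of omega."""
    return Fraction(len(event), len(omega))

A = frozenset({1, 2})
B = frozenset({5, 6})                            # disjoint from A

assert P(frozenset()) == 0                       # P(emptyset) = 0
assert P(omega) == 1                             # P(Omega) = 1
assert P(A | B) == P(A) + P(B)                   # additivity for disjoint events
print(P(A), P(B), P(A | B))                      # 1/3 1/3 2/3
```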
 
  • Like
Likes fbs7
  • #5
Oh... randomness is interpretation...

So a "random" variable is really just another variable, just like "time" is just a variable without anything different than say a "mass" variable. Our interpretation of "time" makes it special, but that meaning of something flowing over.. whatever... is not mathematical, it's just human interpretation.

Likewise, the "random" nature of a variable is just human interpretation, right?

Does that mean that, because "random" is a semantic thing that we use to interpret the variable, "random" is not really a mathematical concept (any more than "time" is)? After all, a variable is just a variable, and mathematically nothing makes one any different from another?
 
  • #6
Well, I would agree here. Of course the setup is made for probabilistic calculations and to grasp random processes. But the machinery is real analysis, although on special domains and with certain restrictions. The art of it all, and this cannot be solved by the machinery, is to determine which specific setup describes a given random outcome. You can calculate your motions relativistically or classically; the equations cannot decide for you which one to use.

I think the separation of a given experiment from the calculus necessary to describe it is actually an achievement which wasn't obvious from the start. It helps to avoid confusion and to encapsulate the discussion. And of course we won't need to talk about sigma algebras if, as at school, some colored balls are selected from some pots. But in the end it has led to a false understanding of randomness by equating it with uncertainty. It's a bit like in physics, where old metaphors from a century ago are still in people's heads.
 
  • Like
Likes fbs7
  • #7
Once again thank you so much for the kind and patient explanations to this old bloke here.

I see that many people here cherish teaching and explaining as the highest form of discourse, and for my part I try to understand as much as I can, to not waste even a word from such wonderful mentoring -- for which you have my dear thanks!

I guess that many people here are teachers in their areas; is that a reasonable assumption?
 
  • Like
Likes morix
  • #8
I guess that many people here are teachers in their areas; is that a reasonable assumption?
We have all kinds of members, including teachers and current, future, or former professors. Many others have simply studied the fields they answer in. So yes, the general level of competence is significantly higher than on other internet forums. That does not mean that people don't make mistakes; they happen to me at least, usually due to careless reading, but they are normally quickly corrected. I, too, find it refreshing to read about things I have no expertise in while being sure the discussion is on a scientific level.
 
  • #9
As a programmer you should know that 1+1=0, so basic stuff is quite relative here.
In a one-bit adder, this is true, but only because we've lost the carry digit. In any other context, 1 + 1 = 2 in any base higher than 2, and 1 + 1 = 10 in base 2.
 
  • #10
In a one-bit adder, this is true, but only because we've lost the carry digit. In any other context, 1 + 1 = 2 in any base higher than 2, and 1 + 1 = 10 in base 2.
I thought of a Boolean variable. In many languages "IF X = true" can also be written "IF X = 1". In any case, I wanted to stress that even ##1+1=2## isn't automatically a given truth.
 
  • #11
I thought of a Boolean variable. In many languages "IF X = true" can also be written "IF X = 1". In any case, I wanted to stress that even ##1+1=2## isn't automatically a given truth.
But strictly speaking, the operation on Boolean variables that corresponds to addition is OR, in which case true OR true = true. Using 1 for true, we have 1 + 1 = 1. If we take this further to include AND, we would have true AND true = true, as well. In neither case do we have 1 + 1 = 0.
 
  • #12
You're right. My only excuse is: far too many COBOL and RPG switches ...
 
  • Like
Likes jedishrfu and Mark44
  • #13
FactChecker
Science Advisor
Gold Member
Oh... randomness is interpretation...

So a "random" variable is really just another variable, just like "time" is just a variable without anything different than say a "mass" variable. Our interpretation of "time" makes it special, but that meaning of something flowing over.. whatever... is not mathematical, it's just human interpretation.
I wouldn't say that. Whether a variable is truly random or is determined by unknown factors is a physics concern, not a mathematical concern. Mathematically, once a variable is assumed to be random, it is treated rigorously as such. It has a probability distribution, etc. The subject is well established and there are many good texts on probability theory.
 
  • Like
Likes StoneTemplePython
  • #14
I wouldn't say that. Whether a variable is truly random or is determined by unknown factors is a physics concern, not a mathematical concern. Mathematically, once a variable is assumed to be random, it is treated rigorously as such. It has a probability distribution, etc. The subject is well established and there are many good texts on probability theory.
Hmm... my mental gears were more in place with the idea that "random" is an interpretation thing... :-(

If I say x ∈ X, how do I know whether this is a random variable or not? What makes this set X different from all the other sets that can be defined in mathematics, what makes it a "set of random values" (or whatever the proper terminology is), if not the human semantics that attach the word "random" to X?
 
  • #15
FactChecker
Science Advisor
Gold Member
My point is that mathematics does not know or care why something has been identified as random, as long as it has been and a probability distribution of its values is defined. It may be "random" due to a human decision to rely on probabilities rather than attempt to figure out all the necessary physics to make it deterministic. We treat a coin toss as random because the idea of making it deterministic is inconceivable. There are innumerable examples like that. The location of raindrops on a square foot of ground would be random for all practical purposes.
 
  • #16
StoneTemplePython
Science Advisor
Gold Member
2019 Award
As is often the case, I look to Feller volume 1 for inspiration.

Feller said:
A function defined on a sample space is called a random variable... The term random variable is somewhat confusing; random function would be more appropriate (the independent variable being a point in the sample space, that is, the outcome of an experiment).
which directly contradicts this:
Oh... randomness is interpretation...
So a "random" variable is really just another variable, just like "time" is just a variable without anything different than say a "mass" variable...
Likewise, the "random" nature of a variable is just human interpretation, right?
The issue is: I don't think there is a satisfying B level answer to this thread.
 
  • Like
Likes FactChecker
  • #17
You're right. My only excuse is: far too many COBOL and RPG switches ...
There is one Boolean operation that works like what you described: XOR. true XOR true = false. In symbols, 1 ⊕ 1 = 0.
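
In code, that reading of ##1+1=0## looks like this (Python; a half-adder sketch, not tied to any particular hardware): the sum bit is XOR and the dropped carry is AND, which is also just addition mod 2, i.e. in ##\mathbb{F}_2##.

```python
def half_adder(a, b):
    """One-bit half adder: the sum bit is XOR, the carry bit is AND."""
    s = a ^ b          # 1 XOR 1 = 0
    carry = a & b      # 1 AND 1 = 1
    return s, carry

print(half_adder(1, 1))    # (0, 1): "1 + 1 = 0" once the carry bit is dropped
print((1 + 1) % 2)         # 0: the same statement read as addition in GF(2)
```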
 
  • #18
So a "random" variable is really just another variable, just like "time" is just a variable without anything different than say a "mass" variable.
I wouldn't say that.
I do agree. My point of view is
In general, I think the connection between probability theory and measure theory is typically underemphasised in introductory courses on probability (at least for non-mathematicians). Also, just for the OP's reference: https://en.wikipedia.org/wiki/Measure_(mathematics)
Once we set up the mathematical framework, we are in the middle of measure theory and the word random is obsolete. Distributions and other probability-specific quantities are merely more properties and well-defined functions.

Transporting the term randomness from the experiment into the math does, in my opinion, more harm than good. It is neither necessary nor does it provide additional insight. Randomness is encoded by the choice of specific measure spaces, a probability measure, resp. a distribution function. Once we have arrived there, it is only used to interpret the results in terms of the experiment again, but it cannot affect the calculations themselves. Within mathematics, the randomness we are discussing here as axiomatically defined is, in my opinion, simply a synonym for a specific and deterministic calculus. Thus randomness is left behind as a property of the experiment only, and in this regard it is a property of a variable just as time or mass would be.
 
  • #19
FactChecker
Science Advisor
Gold Member
I do agree. My point of view is

Once we set up the mathematical framework, we are in the middle of measure theory and the word random is obsolete. Distributions and other probability-specific quantities are merely more properties and well-defined functions.
I see your point. But general measure theory does not include the requirement that the total measure remain 1. That is the essential property that allows one to interpret it as the probability of a random variable. The rescaling in Bayes' rule is there to retain it as a probability.
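
A toy illustration of that rescaling (Python; illustrative only): conditioning on an event ##B## divides by ##P(B)##, so the conditional measure again has total mass 1.

```python
from fractions import Fraction

omega = frozenset(range(1, 7))                   # one die roll

def P(event):
    return Fraction(len(event), len(omega))      # uniform probability measure

B = frozenset({2, 4, 6})                         # the event "the roll is even"

def P_given_B(event):
    """Conditional probability: intersect with B, then rescale by P(B)."""
    return P(event & B) / P(B)

assert P_given_B(omega) == 1                     # total measure is 1 again after rescaling
print(P_given_B(frozenset({2})))                 # 1/3
```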
 
  • #21
As is often the case, I look to Feller volume 1 for inspiration.

which directly contradicts this:

The issue is: I don't think there is a satisfying B level answer to this thread.
Fair enough. So, in the world beyond B-level, if I have an independent variable x ∈ A and a function f(x) = ## \frac 1 { \sqrt { 2 \pi }} e ^ { - \frac { x^2 } 2 } ##, and I have another variable y ∈ B and a second function g(y) = ## \frac 1 { \sqrt { 2 \pi }} e ^ { - \frac { y^2 } 2 } ##, and if I don't attach some human interpretation to the variables and formulas, how would I know that f(x) is a random function (one that, for example, expresses a normal probability distribution for the number of customers in a shop based on the amount of rain) and that g(y) is a regular explicit formula (one that, for example, describes the exact, non-random number of items some clockwork machine will build in 1 hour based on the hardness of the raw materials fed to it)?

That is, what are the mathematical qualities of the domains A and B (or of the functions f(x) and g(y)) that make one related to "random" and "probability" and the other just another explicit formula?
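
For reference, here is the same formula used in both readings as a sketch (Python; the numbers and names are purely illustrative): evaluated pointwise as an ordinary explicit formula, and integrated over an interval as a probability density.

```python
import math

def phi(x):
    """The formula 1/sqrt(2*pi) * exp(-x^2/2), used in both readings below."""
    return math.exp(-x**2 / 2) / math.sqrt(2 * math.pi)

# Reading 1: a deterministic formula, e.g. "the machine's output at hardness y = 0.7"
print(phi(0.7))

# Reading 2: a probability density, where P(a <= X <= b) is the integral of phi over [a, b]
def prob_between(a, b, n=10_000):
    """Approximate the integral of phi over [a, b] with the midpoint rule."""
    h = (b - a) / n
    return sum(phi(a + (i + 0.5) * h) for i in range(n)) * h

print(prob_between(-1.0, 1.0))   # ~0.6827, the familiar one-sigma probability
```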
 
  • #22
FactChecker
Science Advisor
Gold Member
That is, what are the mathematical qualities of the domains A and B (or of the functions f(x) and g(y)) that make one related to "random" and "probability" and the other just another explicit formula?
A random variable does not have a representation as a deterministic function. There is no "y=f(x)" giving a value of the variable y. The probability density function of a random variable does not give you the value of the variable; it gives the probability (or, for a continuous variable, the probability density) of the variable taking the value x.
 
  • Like
Likes Delta2 and Klystron
  • #23
As the discussion meanwhile reflects more the time- and school-dependent interpretations, though still the axiomatics, I will kindly ignore the "B" level, all the more as the OP has received his answers. However, I think the debate itself is a fruitful one, as it appears that even those who know the subject differ in their interpretations. If so, then the dispute should take place. I'm almost certain the OP agrees with this hijack, especially as the subject has not been hijacked, merely the level.

A random variable does not have a representation as a deterministic function.
It has according to Wikipedia:
A random variable ##f \colon \Omega \to \Omega'## is a measurable function from a set of possible outcomes ##\Omega## to a measurable space ##\Omega'##. ##^*)##
Let ##(\Omega ,\Sigma ,P)## be a probability space and ##(\Omega ',\Sigma ')## a measurable space. A ##(\Sigma ,\Sigma ')##-measurable function ##X\colon \Omega \to \Omega '## is then an ##\Omega '##-random variable on ##\Omega##.
and nLab:
The formalization of this idea in modern probability theory (Kolmogorov 33, III) is to take a random variable to be a measurable function ##f## on a probability space ##(\Omega,P)## (e.g. Grigoryan 08, 3.2, Dembo 12, 1.2.1). ...##^*)##
So the random variable is a function on a configuration space and as such it is deterministic.

However
One thinks of ##\Omega## as the space of all possible configurations (all the “possible worlds” with respect to the idealized situation under consideration), thinks of the measure ##P(A)## of any subset of it as the probability that one of the configurations ##x\in A \subseteq \Omega ## is randomly realized, and thinks of ##f(x)## as the value of the given random variable in the situation of that configuration [##A##].
##^*)##
*) Variable names changed in accordance with previous posts. Emphasis mine.

Personally, I appreciate this modern view very much and wish I had learnt it this way. An analytical approach would have been far easier for me to understand than the mumbo jumbo probability gibberish about ##X## which I actually did encounter, happily mixed up with combinatorics. In this sense I admit that there are different views around, especially historically and when distribution (probability measure ##P\,##), random variable (measurable function ##f\, : \,\Omega \longrightarrow \Omega'\,##) and randomness (so to say the sigma algebra ##\Sigma## over ##\Omega\,##) are not properly defined or distinguished. But I definitely like the deterministic approach within a once set up calculus. ##f(A)## is different from ##P(A)##. So whether a random variable ##X## is considered to be ##X=f## or ##X=P## makes a difference here. I stay with Kolmogoroff and consider ##X=f## and ##P## the evaluation of ##A \in \Sigma##.
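
To make that last distinction concrete, a small sketch (Python; illustrative only): ##f(A)## is a set of values in ##\Omega'##, ##P(A)## is a number, and the distribution of ##X=f## is the pushforward ##P_X(A') = P(f^{-1}(A'))##.

```python
from fractions import Fraction

omega = frozenset(range(1, 7))                   # Omega: one die roll

def P(event):
    return Fraction(len(event), len(omega))      # probability measure on Omega

def f(outcome):
    return outcome % 2                           # the measurable function, X = f

A = frozenset({1, 2, 3})
image_fA = frozenset(f(w) for w in A)            # f(A) = {0, 1}, a subset of Omega'
prob_A = P(A)                                    # P(A) = 1/2, a number in [0, 1]

def P_X(A_prime):
    """Distribution of X: the pushforward P(f^{-1}(A'))."""
    return P(frozenset(w for w in omega if f(w) in A_prime))

print(image_fA, prob_A, P_X(frozenset({1})))     # frozenset({0, 1}) 1/2 1/2
```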
 
  • #24
FactChecker
Science Advisor
Gold Member
I stand corrected. If one talks about functions on a very specialized space, then a random variable can be defined as a function on the set of possible outcomes. But I think this is a very specific setup designed to make it a deterministic function and is not at all what the OP would consider a general deterministic function. So IMHO, to imply that mathematics does not consider it a special case is misleading.

EDIT: Actually @fresh_42 's answer may, indeed, be what the OP was looking for. I may have underestimated the sophistication of his question since I have never thought of it this way.
 
  • #25
StoneTemplePython
Science Advisor
Gold Member
2019 Award
Fair enough. So, in the world beyond B-level, if I have an independent variable x ∈ A and a function f(x) = ## \frac 1 { \sqrt { 2 \pi }} e ^ { - \frac { x^2 } 2 } ##, and I have another variable y ∈ B and a second function g(y) = ## \frac 1 { \sqrt { 2 \pi }} e ^ { - \frac { y^2 } 2 } ##
My view is that it's inappropriate for you to jump straight into continuous random variables. Start with coin tossing / Bernoulli trials. You can achieve remarkably sophisticated results with 0s and 1s. Moreover, if you don't know what a Dedekind cut is (adjacent thread), you can't possibly understand what's going on with general random variables.

Speaking of coin tossing, there's probably a joke in here, given the earlier discussion of bits, XORs, etc. and some of the comments made by @fresh_42 (or should that be @fresh_##\mathbb F_2##?).
- - - -
As for the rest of the posts here, I think introducing measures right away is a mistake. Start with a discrete sample space and tease out information. Don't introduce random variables even in this setting until much later. Focus on the sample space and events, over and over. Really this is the core of the OP's question -- to understand the mathematical treatment of "randomness" you need to get your head around what's going on with these idealized experiments that are defined by sample space(s) -- that's where the "randomness" is modeled.
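
In that spirit, a minimal discrete sketch (Python; illustrative only), keeping the emphasis on the sample space and its events rather than on random variables:

```python
from fractions import Fraction
from itertools import product

omega = [''.join(t) for t in product("HT", repeat=3)]   # sample space: 8 outcomes of 3 tosses

def P(event):
    return Fraction(len(event), len(omega))              # equally likely outcomes

first_is_H = [w for w in omega if w[0] == "H"]           # an event is just a subset of omega
last_is_H  = [w for w in omega if w[2] == "H"]
both       = [w for w in omega if w[0] == "H" and w[2] == "H"]

print(P(first_is_H), P(last_is_H), P(both))              # 1/2 1/2 1/4
assert P(both) == P(first_is_H) * P(last_is_H)           # independence, a genuinely probabilistic concept
```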

- - - -
A common theme in my posts is to use basic lightweight machinery, and only use heavier machinery if absolutely needed. It's part of the reason I use ##\text{GM}\leq \text{AM}## over and over. There's a similar idea with Feller vol 1.

So the random variable is a function on a configuration space and as such it is deterministic.
Fair, but I already said this... I'll restate it with different emphasis for others' benefit:

Feller said:
A function defined on a sample space is called a random variable... The term random variable is somewhat confusing; random function would be more appropriate (the independent variable being a point in the sample space, that is, the outcome of an experiment).
Again, the 'randomness' lurks in the sample space.

There are a lot of people on PF who seem to say and think that probability is merely a special case of measure theory. (I'm not sure whether Fresh is one per se, but a forum search will turn up many others.) I find this humorous, as it seems to miss the point. Here's a nice zinger from a favorite blogger:

Tao said:
At a purely formal level, one could call probability theory the study of measure spaces with total measure one, but that would be like calling number theory the study of strings of digits which terminate. At a practical level, the opposite is true: just as number theorists study concepts (e.g. primality) that have the same meaning in every numeral system that models the natural numbers, we shall see that probability theorists study concepts (e.g. independence) that have the same meaning in every measure space that models a family of events or random variables. And indeed, just as the natural numbers can be defined abstractly without reference to any numeral system (e.g. by the Peano axioms), core concepts of probability theory, such as random variables, can also be defined abstractly, without explicit mention of a measure space; we will return to this point when we discuss free probability later in this course.
https://terrytao.wordpress.com/2010/01/01/254a-notes-0-a-review-of-probability-theory/

So, for starting out: why not focus on probabilistic concepts as opposed to their representation in terms of measures? If we have a discrete sample space, we do have this choice, and this is exactly where Feller vol. 1 fits in.

(Outside-the-scope thought: even in a discrete setting, dominated convergence can help streamline an awful lot of arguments with stochastic processes... I just don't want to put the cart before the horse here.)
 
  • Like
Likes Auto-Didact
