B What's the meaning of "random" in Mathematics?

fbs
As is often the case, I look to Feller volume 1 for inspiration.

which directly contradicts this:

The issue is: I don't think there is a satisfying B level answer to this thread.
Fair enough. So, in the world beyond B-level: suppose I have an independent variable x ∈ A and a function f(x) = ##\frac{1}{\sqrt{2\pi}} e^{-x^2/2}##, and another variable y ∈ B with a second function g(y) = ##\frac{1}{\sqrt{2\pi}} e^{-y^2/2}##. If I don't attach some human interpretation to the variables and formulas, how would I know that f(x) is a random function (one that, for example, expresses a normal probability distribution for the number of customers in a shop based on the amount of rain), while g(y) is a regular explicit formula (one that, for example, describes the exact, non-random number of items some clockwork machine will build in 1 hour based on the hardness of the raw materials fed to it)?

That is, what are the mathematical qualities of the domains A and B (or the functions f(x) and g(y)) that make one related to "random" and "probability" and the other just another explicit formula?
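To make the puzzle concrete, here is a minimal Python sketch (the names are mine, purely for illustration): the very same formula can be evaluated as a deterministic function, or used as the density of a distribution we sample from -- nothing in the formula itself says which.

Code:
import math
import random

def phi(t):
    # The formula from above: (1/sqrt(2*pi)) * exp(-t^2/2).
    return math.exp(-t * t / 2) / math.sqrt(2 * math.pi)

# Reading 1: a plain deterministic function -- same input, same output, always.
print(phi(0.0))            # 0.3989... every time

# Reading 2: the same formula taken as a probability density, i.e. we
# draw samples from the standard normal distribution it describes.
print(random.gauss(0, 1))  # a different value on every run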
 

FactChecker

That is, what are the mathematical qualities of the domains A and B (or the functions f(x) and g(y)) that make one related to "random" and "probability" and the other just another explicit formula?
A random variable does not have a representation as a deterministic function. There is no "y=f(x)" giving a value of the variable y. The probability density function of a random variable does not give you the value of the variable; it only tells you how probable it is that the variable takes values near x.
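One way to put this in symbols, using the standard normal density from the earlier posts: probabilities come only from integrating the density over a range, never from evaluating it at a point,
$$P(a \le X \le b) \;=\; \int_a^b \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}\,dx, \qquad P(X = x) = 0 \text{ for every single value } x.$$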
 

fresh_42

As the discussion meanwhile reflects more the time- and school-dependent interpretations, though still the axiomatics, I will kindly ignore the "B" level, the more so as the OP has received his answers. However, I think the debate itself is a fruitful one, as it appears that even those in the know differ on the interpretations. If so, then the dispute should take place. I'm almost certain the OP agrees with this hijack, especially as it is not a distracting subject but merely a distracted level.

A random variable does not have a representation as a deterministic function.
It has according to Wikipedia:
A random variable ##f \colon \Omega \to \Omega'## is a measurable function from a set of possible outcomes ##\Omega## to a measurable space ##\Omega'##. ##^*)##
Let ##(\Omega ,\Sigma ,P)## be a probability space and ##(\Omega',\Sigma')## a measurable space. A ##(\Sigma ,\Sigma')##-measurable function ##X\colon \Omega \to \Omega'## is then called an ##\Omega'##-valued random variable on ##\Omega##.
and nLab:
The formalization of this idea in modern probability theory (Kolmogorov 33, III) is to take a random variable to be a measurable function ##f## on a probability space ##(\Omega,P)## (e.g. Grigoryan 08, 3.2, Dembo 12, 1.2.1). ...##^*)##
So the random variable is a function on a configuration space and as such it is deterministic.

However
One thinks of ##\Omega## as the space of all possible configurations (all the “possible worlds” with respect to the idealized situation under consideration), thinks of the measure ##P(A)## of any subset of it as the probability that one of the configurations ##x\in A \subseteq \Omega ## is randomly realized, and thinks of ##f(x)## as the value of the given random variable in the situation of that configuration [##A##].
##^*)##
*) Variable names changed in accordance with previous posts. Emphasis mine.

Personally, I appreciate this modern view very much and wish I had learnt it this way. An analytical approach would have been far easier for me to understand than this mumbo jumbo probability gibberish about ##X##, which I actually had encountered - cheerfully confused with combinatorics. In this sense I admit that there are different views around, especially historically, and if distribution (probability measure ##P\,##), random variable (measurable function ##f\, : \,\Omega \longrightarrow \Omega'\,##) and randomness (so to say the sigma algebra ##\Sigma## over ##\Omega\,##) are not properly defined or distinguished. But I definitely like the deterministic approach within a once set up calculus. ##f(A)## is different from ##P(A)##. So whether a random variable ##X## is considered to be ##X=f## or ##X=P## makes a difference here. I stay with Kolmogoroff and consider ##X=f##, with ##P## the evaluation of ##A \in \Sigma##.
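On a finite space this Kolmogorov picture can be spelled out in a few lines of Python. A toy sketch (the names Omega, P, f are ad hoc), for two fair coin tosses with ##f## = number of heads; note that everything below is deterministic once ##(\Omega ,\Sigma ,P)## and ##f## are fixed:

Code:
from itertools import product

# The probability space: all outcomes of two fair coin tosses, uniform P.
Omega = list(product("HT", repeat=2))           # ('H','H'), ('H','T'), ...
P = {omega: 1 / len(Omega) for omega in Omega}  # total mass 1

# The "random variable" is a perfectly deterministic function f : Omega -> R.
def f(omega):
    return omega.count("H")  # number of heads in the outcome

# Its distribution is the pushforward of P: P(f = k) = P({omega : f(omega) = k}).
for k in range(3):
    print(k, sum(p for omega, p in P.items() if f(omega) == k))  # 0.25, 0.5, 0.25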
 

FactChecker

I stand corrected. If one talks about functions on a very specialized space, then a random variable can be defined as a function on the set of possible outcomes. But I think this is a very specific setup designed to make it a deterministic function and is not at all what the OP would consider a general deterministic function. So IMHO, to imply that mathematics does not consider it a special case is misleading.

EDIT: Actually @fresh_42 's answer may, indeed, be what the OP was looking for. I may have underestimated the sophistication of his question since I have never thought of it this way.
 

StoneTemplePython

Fair enough. So, in the world beyond B-level: suppose I have an independent variable x ∈ A and a function f(x) = ##\frac{1}{\sqrt{2\pi}} e^{-x^2/2}##, and another variable y ∈ B with a second function g(y) = ##\frac{1}{\sqrt{2\pi}} e^{-y^2/2}## ...
my view is that it's inappropriate for you to jump straight into continuous random variables. Start with coin tossing / Bernoulli trials. You can achieve remarkably sophisticated results with 0s and 1s. Moreover, if you don't know what a Dedekind cut is (adjacent thread), you can't possibly understand what's going on with general random variables.

Speaking of coin tossing, there's probably a joke in here given the earlier discussion of bits, XORs, etc. and some of the comments made by @fresh_42 @fresh_##\mathbb F_2##
- - - -
As for the rest of the posts here, I think introducing measures right away is a mistake. Start with a discrete sample space and tease out information. Don't introduce random variables even in this setting until much later. Focus on the sample space and events, over and over. Really, this is the core of the OP's question -- to understand the mathematical treatment of "randomness" you need to get your head around what's going on with these idealized experiments defined by sample space(s) -- that's where the "randomness" is modeled.
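In that spirit, a throwaway sketch (nothing beyond counting is assumed) of a discrete sample space and its events, for three fair tosses -- no random variables, no measures:

Code:
from itertools import product
from fractions import Fraction

Omega = set(product("HT", repeat=3))  # sample space: 8 equally likely outcomes

def P(event):
    # Classical probability: favourable outcomes over total outcomes.
    return Fraction(len(event), len(Omega))

at_least_two_heads = {w for w in Omega if w.count("H") >= 2}
first_is_head = {w for w in Omega if w[0] == "H"}

print(P(at_least_two_heads))                  # 1/2
print(P(at_least_two_heads & first_is_head))  # 3/8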

- - - -
A common theme in my posts is to use basic lightweight machinery, and only use heavier machinery if absolutely needed. It's part of the reason I use ##\text{GM}\leq \text{AM}## over and over. There's a similar idea with Feller vol 1.

So the random variable is a function on a configuration space and as such it is deterministic.
Fair, but I already said this... I'll restate it with different underlining for others' benefit:

Feller said:
A function defined on a sample space is called a random variable... The term random variable is somewhat confusing; random function would be more appropriate (the independent variable being a point in the sample space, that is, the outcome of an experiment).
again the 'randomness' lurks in the sample space.

There are a lot of people on PF who seem to say and think that probability is merely a special case of measure theory. (I'm not sure whether Fresh is one per se, but a forum search will turn up many others.) I find this humorous as it seems to miss the point. Here's a nice zinger from a favorite blogger:

Tao said:
At a purely formal level, one could call probability theory the study of measure spaces with total measure one, but that would be like calling number theory the study of strings of digits which terminate. At a practical level, the opposite is true: just as number theorists study concepts (e.g. primality) that have the same meaning in every numeral system that models the natural numbers, we shall see that probability theorists study concepts (e.g. independence) that have the same meaning in every measure space that models a family of events or random variables. And indeed, just as the natural numbers can be defined abstractly without reference to any numeral system (e.g. by the Peano axioms), core concepts of probability theory, such as random variables, can also be defined abstractly, without explicit mention of a measure space; we will return to this point when we discuss free probability later in this course.
https://terrytao.wordpress.com/2010/01/01/254a-notes-0-a-review-of-probability-theory/

So for starting out: why not focus on probabilistic concepts as opposed to their representation in terms of measures? If we have a discrete sample space we do have this choice, and this is exactly where Feller vol 1 fits in.

(outside-the-scope thought: even in a discrete setting, dominated convergence can help streamline an awful lot of arguments with stochastic processes... I just don't want to put the cart before the horse here)
 

fresh_42

There are a lot of people on PF who seem to say and think that probability is merely a special case of measure theory. (I'm not sure whether Fresh is one per se, but a forum search will turn up many others.)
Meanwhile, I am. I find it far more transparent than counting colored balls! I had to learn stochastics in my second year by ...
As for the rest of the posts here, I think introducing measures right away is a mistake. Start with a discrete sample space and tease out information.
... with the result that it was incredibly tough to form a calculus. The terms likelihood, probability and random always remained foggy, badly defined terms, and those in the know appeared to me like Merlin.
An analytical approach would have been far easier for me to understand than this mumbo jumbo probability gibberish about ##X##, which I actually had encountered - cheerfully confused with combinatorics.
So in my opinion, this ...
I think introducing measures right away is a mistake. Start with a discrete sample space and tease out information.
... is the mistake. However, it hits a notorious weakness of mine: I'm no friend of the common pedagogic concepts which proceed along the lines:
  1. It <insert a content of your choice, e.g. calculating 3-5 or introducing partial differentials, or complex numbers etc.> is impossible.
  2. It is too difficult for you.
  3. We will deal with it later.
  4. Btw., it is now possible, not difficult at all, and now is the time. :-p
I really hate this approach. It is based on the assumption of stupidity, and it fools students. In my opinion we should start to teach actual mathematics instead of procrastinating content over and over again. No wonder people think ##17 - 25 \cdot 0## is mathematics!

I do not see any difficulties in the introduction of sigma algebras. Group theory and topology, too, are second-year stuff and of comparable difficulty in my mind. Language may have to be adapted, e.g. by easy examples, but not content. As far as I can see, discrete and continuous random variables do not require a different treatment. The separation of randomness as part of the experiment, and not part of the calculus, is rather appealing to me. I know that this might not be the consensus, but as far as I'm concerned, it should be. The old-fashioned methods didn't work well enough.
 
fbs
Wow, so much brilliant and deep discussion here! Thank you! Either I'm going to learn something, or my brain will fry away! :biggrin:

I want to learn, so I went to read about Kolmogorov, de Finetti and Cox; Kolmogorov is way out of my league -- "measures" and "σ-algebras" are way too abstract for me.

But I found Cox's postulates charming! They sound rather intuitive!! Gotta love them 5 Cox postulates (as described in https://itschancy.wordpress.com/2013/11/02/a-foundation-for-bayesian-statistics-part-two-coxs-postulates/)

(a) Cox-plausibilities are real numbers
(b) If two claims are equal in Boolean algebra, they have the same Cox-plausibility
(c) Given two claims A and B and prior information X, there exists a conjunction function f such that

##(A \wedge B \mid X) = f(\, A \mid X,\ B \mid A \wedge X \,)##

(d) etc...

That's brilliant! So this Cox guy didn't worry about the "random" nature of anything; he instead assumed "claims" that have a "plausibility". I can dig that! For example, if I assume the claim "a 6-sided die can roll a 1" is "equal" (whatever that means) to the claim "a 6-sided die can roll a 2", etc., then from (b) it follows that the plausibility of a 6-sided die rolling a 1 will be 1/6. No random dimgby-domgby or jingty-jumpty! I'll see if I can prove from (c) that the claim that two die rolls will add to 8 has a plausibility of 5/36!
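For what it's worth, the 5/36 can at least be checked by brute counting (a throwaway sketch; whether that counts as a proof from (c) is another story):

Code:
from itertools import product
from fractions import Fraction

rolls = list(product(range(1, 7), repeat=2))    # all 36 equally plausible pairs
favourable = [r for r in rolls if sum(r) == 8]  # (2,6),(3,5),(4,4),(5,3),(6,2)
print(Fraction(len(favourable), len(rolls)))    # 5/36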

Cox seemed to me a more accessible theory of probability than Kolmogorov, but from what I read most people seem to love Kolmogorov more than Cox -- is that right?
 

FactChecker

- - - -
A common theme in my posts is to use basic lightweight machinery, and only use heavier machinery if absolutely needed. It's part of the reason I use ##\text{GM}\leq \text{AM}## over and over. There's a similar idea with Feller vol 1.
Regardless of the technicalities of this thread, reading Feller is a good recommendation in all cases.
 
fbs
.. Moreover, if you don't know what a Dedekind cut is (adjacent thread), you can't possibly understand what's going on with general random variables. ...
Yes, them Dedekind cuts got me good! And they're still getting me. I kinda got (more or less) the example in Wikipedia on why √2 ∈ ℝ using Dedekind cuts -- that is, to prove that for any rational a/b with ##(a/b)^2 < 2## there exists a larger rational p/q with ##(p/q)^2 < 2##, therefore the set of rationals whose square is below 2 has no largest element, and that cut is the real number √2.

I can read the formulas in the Dedekind thingie, but understanding them is a different level. Like, when I read them, I immediately thought: for any r ∈ ℝ and any ε > 0, there will be a p/q ∈ ℚ such that ##|p/q - r| \leq \varepsilon## (because that's the construction of the Dedekind cut thingie)... so if I take the limit... ##\lim_{\varepsilon \to 0} |p/q - r| \leq \lim_{\varepsilon \to 0} \varepsilon = 0##... therefore in the limit p/q = r... which is obviously wrong... brain blows up :frown:
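The "there is always a bigger rational whose square is still below 2" step can be made completely mechanical. A small sketch, using one standard choice of map, x → (2x+2)/(x+2) (not necessarily the one Wikipedia uses):

Code:
from fractions import Fraction

def next_rational(x):
    # If x^2 < 2 then x < (2x+2)/(x+2) and ((2x+2)/(x+2))^2 < 2 still holds,
    # so the lower half of the cut for sqrt(2) has no largest element.
    return (2 * x + 2) / (x + 2)

x = Fraction(1)
for _ in range(5):
    x = next_rational(x)
    print(x, float(x * x))  # squares creep up toward 2 without ever reaching it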
 

fresh_42

Yes, them Dedekind cuts got me good!
You might want to look at the alternative construction. It requires Cauchy sequences and equivalence classes. At least those are useful anyway, whereas Dedekind cuts are just that. Google "real numbers as Cauchy limits" or so.
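A quick sketch of what that construction looks like in practice: the Babylonian iteration below is one convenient Cauchy sequence of rationals, and the real number √2 is (the equivalence class of) such a sequence.

Code:
from fractions import Fraction

# Every iterate is a rational; the sequence is Cauchy and "is" sqrt(2).
x = Fraction(2)
for n in range(5):
    x = (x + 2 / x) / 2  # Babylonian / Newton step, stays inside Q
    print(x, float(x))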
 

StoneTemplePython

... is the mistake. However, it hits a notorious weakness of mine: I'm no friend of the common pedagogic concepts which proceed along the lines:
  1. It <insert a content of your choice, e.g. calculating 3-5 or introducing partial differentials, or complex numbers etc.> is impossible.
  2. It is too difficult for you.
  3. We will deal with it later.
  4. Btw., it is now possible, not difficult at all, and now is the time. :-p
I really hate this approach. It is based on the assumption of stupidity, and it fools students. In my opinion we should start to teach actual mathematics instead of procrastinating content over and over again. No wonder people think ##17 - 25 \cdot 0## is mathematics!

I do not see any difficulties in the introduction of sigma algebras...
Yes, I picked up on this in a spat over prime numbers in the pre-calc forum. I actually am (semi) sympathetic to this in general.

I don't think it applies here though, in particular the underlined part.

Feller vol 1 does not assume stupidity on the part of the reader. It is rigorous; it is the book that got probability accepted by mathematicians outside the USSR; it explicitly constrains itself to denumerable sample spaces in order to focus on probabilistic, not analytic, challenges (it even tells us that the sequel, vol 2, introduces measures to generalize the setting); it contains an awful lot of analysis; and it includes original, difficult results (e.g. Feller-Erdos-Pollard). It also spawned real research -- one example I like: after the first edition of vol 1 was published, KL Chung pointed out that countable-state Markov chain results from Kolmogorov, applied to a very well chosen chain, imply Feller-Erdos-Pollard -- which was incorporated into editions 2 and 3 of vol 1, either directly or as footnotes.

My approach of starting simple and building up really is close to how Polya would proceed (I think).
 
fbs
I was just rolling in my bed, unable to sleep, and now I know the reason: the concept of "random" is finally falling into place in my head. What a trip this was!! What makes sense to me is that there are really 3 concepts of "random", and people use the same word for completely different things:

(a) One is a well-structured, axiomatic, abstract mathematical structure that defines and studies "probability". This is not based on any actual dice rolling or some ghost taking cards out of a deck; it is instead a logical system built around abstract concepts like a probability space and a measurable space. As such, it's rather beautiful. Here "random" doesn't really mean anything, as it's axiomatic, and we could just as well say "bananility" instead of "probability" and the mathematical structure would be exactly the same.

(b) Another is the mysterious realm of quantum mechanics, where for some crazy odd reason real objects do seem to exactly follow laws derived from the abstractions above. Here "random" really means random; there's no other way to describe it. Why quantum objects behave so is a mind-blowing question, and I suspect it's one of the greatest mysteries of physics, but thankfully everyday dudes like me don't have to worry about it and have no use for it; we can just have faith in physicists to get stuff to work by using those rules, and hopefully not blow the planet to pieces while doing that.

(c) Another is the macroscopic realm that we all handle every day. Here "random" really means unknown. There's no real randomness in the macroscopic world. We think of the cards in the deck as random just because they are turned face down, and if we could calculate exactly all the forces acting on the dice we could deterministically predict which number would be rolled. One could despair over the unknown, but by making assumptions (like: the deck is not missing cards, and the cards are equally probable) and by applying that mathematical framework, we can make guesses and estimates of outcomes which, if we assumed right, will in large numbers be close to the mathematical predictions.
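That last sentence is the law of large numbers in street clothes. A throwaway simulation (pseudo-random, fittingly enough) of the two-dice-sum-to-8 example from earlier in the thread:

Code:
import random

N = 200_000  # many repetitions of the "experiment"
hits = sum(1 for _ in range(N)
           if random.randint(1, 6) + random.randint(1, 6) == 8)
print(hits / N, 5 / 36)  # empirical frequency vs. the prediction 5/36 = 0.1388...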

What's awesome is that (c) is routinely used by billions of people. It's actually very amazing if one thinks about it - regular joes use it every day, for example saying "wow, what a hail-mary pass -- he'll never be able to repeat that!" to express how unlikely ##p(x)^2## is, without really knowing why that's correct. We joes don't care about, and do not use, anything about tensors or Hilbert matrices or the 350-millionth digit of pi, but we use probability as commonly as we use algebra to check the bills.

For that reason I now have renewed respect for probability theory, due to its widespread use, and now I think that field is one of the Titans of mathematics, with the same practical utility as algebra and geometry!!! Once again thanks all for this very inspiring discussion!
 

FactChecker

That is a good overview of the situation. One comment I would make is that the different ways of looking at it are really all compatible and complement each other. The axiomatic view really does describe the view of (c), even though the translation between the two may appear difficult. In a famous disagreement between Einstein and Niels Bohr, Einstein contended that (c) was the only situation and there was always a hidden, unknown cause for every "random" event. (I hope that I am not butchering this.) A famous Einstein quote is "God does not play dice." At the quantum level, Bohr is considered the winner of that disagreement because of the inequalities of John Stewart Bell and the experiments they made possible. I highly recommend that you look at the volumes "An Introduction to Probability Theory and its Applications" by Feller. They may be expensive and hard to find, but they are classics.

In the axiomatic view, it is irrelevant whether there is fundamental randomness or just unknown deterministic causes; the mathematics doesn't care. But a great strength of mathematics is that its logic and validity hold across many different applications.
 

Ray Vickson

I was just rolling in my bed, unable to sleep, and now I know the reason: the concept of "random" is finally falling into place in my head. What a trip this was!! What makes sense to me is that there are really 3 concepts of "random", and people use the same word for completely different things:

(a) One is a well-structured, axiomatic, abstract mathematical structure that defines and studies "probability". This is not based on any actual dice rolling or some ghost taking cards out of a deck; it is instead a logical system built around abstract concepts like a probability space and a measurable space. As such, it's rather beautiful. Here "random" doesn't really mean anything, as it's axiomatic, and we could just as well say "bananility" instead of "probability" and the mathematical structure would be exactly the same.

(b) Another is the mysterious realm of quantum mechanics, where for some crazy odd reason real objects do seem to exactly follow laws derived from the abstractions above. Here "random" really means random; there's no other way to describe it. Why quantum objects behave so is a mind-blowing question, and I suspect it's one of the greatest mysteries of physics, but thankfully everyday dudes like me don't have to worry about it and have no use for it; we can just have faith in physicists to get stuff to work by using those rules, and hopefully not blow the planet to pieces while doing that.

(c) Another is the macroscopic realm that we all handle every day. Here "random" really means unknown. There's no real randomness in the macroscopic world. We think of the cards in the deck as random just because they are turned face down, and if we could calculate exactly all the forces acting on the dice we could deterministically predict which number would be rolled. One could despair over the unknown, but by making assumptions (like: the deck is not missing cards, and the cards are equally probable) and by applying that mathematical framework, we can make guesses and estimates of outcomes which, if we assumed right, will in large numbers be close to the mathematical predictions.

What's awesome is that (c) is routinely used by billions of people. It's actually very amazing if one thinks about it - regular joes use it every day, for example saying "wow, what a hail-mary pass -- he'll never be able to repeat that!" to express how unlikely ##p(x)^2## is, without really knowing why that's correct. We joes don't care about, and do not use, anything about tensors or Hilbert matrices or the 350-millionth digit of pi, but we use probability as commonly as we use algebra to check the bills.

For that reason I now have renewed respect for probability theory, due to its widespread use, and now I think that field is one of the Titans of mathematics, with the same practical utility as algebra and geometry!!! Once again thanks all for this very inspiring discussion!
The late physicist E. T. Jaynes wrote a provocative book, "Probability Theory: The Logic of Science" (Cambridge University Press, 2003), in which he essentially rejects the very idea of "randomness". That's right: a large probability book by somebody who does not believe in randomness! For Jaynes (and several others -- maybe mostly physicists), probability is associated with a "degree of plausibility". He shows that, using some reasonable axioms about how plausibilities combine, you end up with multiplication laws like P(A & B) = P(A) P(B|A), etc. His book essentially tries to stay away from the whole "Kolmogorov" measure-theoretic way of doing probability, and so can only treat problems that do not involve things like ##P(\lim_{n \to \infty} A_n)## (but can certainly deal with things like ##\lim_{n \to \infty} P(A_n)##).
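For readers who haven't seen it, the multiplication law quoted above is the ordinary product rule; one worked instance (two cards drawn without replacement):
$$P(\text{two aces}) = P(A_1)\,P(A_2 \mid A_1) = \frac{4}{52}\cdot\frac{3}{51} = \frac{1}{221}.$$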

In his third chapter entitled "Elementary Sampling Theory", he says on pp. 73-74 (after developing the basic probability distributions):
"In the case of sampling with replacement, we apply this strategy as follows.
(1) Suppose that, after tossing the ball in, we shake up the urn. However complicated the problem was initially, it now becomes many orders of magnitude more complicated, because the solution now depends on every detail of the precise way we shake it, in addition to all the factors mentioned above.
(2) We now assert that the shaking has somehow made all these details irrelevant, so that the problem reverts back to the simple one where the Bernoulli urn rule applies.
(3) We invent the dignified-sounding word randomization to describe what we have done. This term is, evidently, a euphemism whose real meaning is: deliberately throwing away relevant information when it becomes too complicated for us to handle."

"We have described this procedure in laconic terms, because an antidote is needed for the impression created by some writers on probability theory, who attach a kind of mystical significance to it. For some, declaring a problem to be "randomized" is an incantation with the same purpose and effect as those uttered by an exorcist to drive out evil spirits; i.e., it cleanses the subsequent calculations and renders them immune to criticism. We Agnostics often envy the True Believer, who thus acquires so easily that sense of security which is forever denied to us."

Jaynes goes on some more about this issue, often revisiting it in subsequent chapters. Lest you think that his book is just "hand-waving", be assured that it is satisfyingly technical, presenting most of the usual equations that you will find in other books at the senior undergraduate and perhaps beginning graduate level (at least in "applied" courses). The man is highly opinionated and I do not subscribe to all he posits, but I find the approach interesting and refreshing, even though it is one I, personally, would not embrace. He does end the book with a long appendix outlining other approaches to probability, including the usual measure-theoretic edifice.
 

Buzz Bloom

If I say x ∈ X, how do I know if this is a random variable or not?
Hi fbs:

This seems like a very strange question to me. If you say x ∈ X, you know something about x and X. Presumably you would know that x is a random variable if someone you believe to be knowledgeable tells you it is. What someone with the appropriate knowledge needs to determine is whether the process for obtaining values of x is a random process. So the randomness of a variable is determined by whether the process for obtaining its values is a random process.

I am guessing you have some uncertainty about what it means for a process to be random. A random process is a process for which it is impossible by any means to know in advance what a particular value will be. This is the distinction between a random process and a pseudo-random process. If the process is pseudo-random, and you know the nature of this process and its initial conditions, in principle you can calculate the next value it will generate.
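A two-line demonstration of that pseudo-random point, using Python's default generator (Mersenne Twister): identical seed, identical "random" output, so anyone who knows the process and the seed can predict every value.

Code:
import random

a = random.Random(42)  # same deterministic process...
b = random.Random(42)  # ...same initial condition
print([a.randint(1, 6) for _ in range(5)])
print([b.randint(1, 6) for _ in range(5)])  # exactly the same five "rolls"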

I hope this is helpful.

Regards,
Buzz
 
fbs
Hi fbs:

This seems like a very strange question to me. If you say x ∈ X, you know something about x and X. Presumably you would know that x is a random variable if someone you believe to be knowledgeable tells you it is. What someone with the appropriate knowledge needs to determine is whether the process for obtaining values of x is a random process. So the randomness of a variable is determined by whether the process for obtaining its values is a random process.

I am guessing you have some uncertainty about what it means for a process to be random. A random process is a process for which it is impossible by any means to know in advance what a particular value will be. This is the distinction between a random process and a pseudo-random process. If the process is pseudo-random, and you know the nature of this process and its initial conditions, in principle you can calculate the next value it will generate.

I hope this is helpful.

Regards,
Buzz
Appreciate it. I was stuck on the fact that logic is completely deterministic. If you have propositions A, B, C that are either true or false, then you'll always get the same propositions D, E, F, true or false, as consequences of them. No changes, ever. So if ##f(x) = x^2## and x = 2, then always f(x) = 4. If so, then how could a "random" value ever be the result of a logical sequence of true/false propositions?

The Cox formulation untied that knot for me, through that abstract concept called "plausibility", which isn't mathematically defined -- it's axiomatic. From what I understood, you don't have to define the process of rolling a die; we just have to assume that plausibility(rolled-a-3) exists and is in [0,1]. Similarly, with Cox you don't need to define a process through which a ghostly hand will "choose" a fruit from a bag of fruits -- what's the hand? what's choosing? There's no need for that; you just assume that picked-an-orange exists and that plausibility(picked-an-orange) = plausibility(picked-an-apple) = plausibility(picked-a-lemon), and you get all kinds of useful calculations from that. There's no violation of the determinism of logic that way.

I'm probably murdering poor Cox here, but that's how I untangled that knot, in my mind :biggrin:
 
fbs
The late physicist E. T. Jaynes wrote a provocative book, "Probability Theory: The Logic of Science" (Cambridge University Press, 2003), in which he essentially rejects the very idea of "randomness". That's right: a large probability book by somebody who does not believe in randomness! For Jaynes (and several others -- maybe mostly physicists), probability is associated with a "degree of plausibility". He shows that, using some reasonable axioms about how plausibilities combine, you end up with multiplication laws like P(A & B) = P(A) P(B|A), etc.
Yay! Gotta love Cox & Jaynes!! Hooray to them! I somehow suspect that Kolmogorov and Cox/Jaynes are equivalent, as (I suspect) they come to the same conclusions through (I suspect) different axiomatic routes, but I did find Cox infinitely easier to grasp.
 

Math_QED

You might want to look at the alternative construction. It requires Cauchy sequences and equivalence classes. At least those are useful anyway, whereas Dedekind cuts are just that. Google "real numbers as Cauchy limits" or so.
Imo the better approach. It introduces the student to sequences, something fundamental in analysis.
 

andrewkirk

You're right. My only excuse is: far too many COBOL and RPG switches ...
Yes, 1+1=0 is not true in a Boolean Algebra. But it is true in the field ##\mathbb Z_2##.
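(A two-line illustration: in ##\mathbb Z_2## addition is addition mod 2, which on single bits is exactly bitwise XOR.)

Code:
print((1 + 1) % 2)  # 0 -- addition in the field Z_2
print(1 ^ 1)        # 0 -- the same fact, phrased as XOR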
 

andrewkirk

my mind screws were more in place with the idea that "random" is an interpretation thing
I suggest you hold on to that idea. The meaning of 'random' in the everyday world is a philosophical issue. There have been countless millions of words written in philosophical journals and the like about whether the universe is 'random', but few of them make sense because the definition of 'random' is not specified with sufficient clarity.

Even in mathematics there is no definition of 'random'. The word is only used in conjunction with another word, usually 'variable'. We have 'random variables' and 'stochastic processes' that are precisely defined terms, but there is no adjective 'random' in probability theory.
 
