fbs7 said:
Fair enough. So, in the world beyond B-level, if I have an independent variable x ∈ A and a function f(x) = ##\frac{1}{\sqrt{2\pi}} e^{-x^2/2}##, and I have another variable y ∈ B and a second function g(y) = ##\frac{1}{\sqrt{2\pi}} e^{-y^2/2}##
My view is that it's inappropriate for you to jump straight into continuous random variables. Start with coin tossing / Bernoulli trials. You can achieve remarkably sophisticated results with nothing but 0s and 1s. Moreover, if you don't know what a Dedekind cut is (see the adjacent thread), you can't possibly understand what's going on with general random variables.
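To make that concrete, here's a minimal sketch of the kind of thing I mean -- a finite sample space built from nothing but 0s and 1s. (Python; all the names are my own toy choices, not anything from the thread.)

```python
from itertools import product
from fractions import Fraction

# Sample space for n tosses of a fair coin: all n-tuples of 0s and 1s.
n = 3
omega = list(product([0, 1], repeat=n))  # 2**n equally likely outcomes

def prob(event):
    """Probability of an event, i.e. a subset of the sample space."""
    return Fraction(len(event), len(omega))

# Event: "at least two heads" (coding heads as 1).
at_least_two_heads = {w for w in omega if sum(w) >= 2}
print(prob(at_least_two_heads))  # prints 1/2
```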
Speaking of coin tossing, there's probably a joke in here given the earlier discussion of bits, XORs, etc., and some of the comments made by @fresh_42 (or should that be @fresh_##\mathbb{F}_2##, the field with two elements?).
- - - -
As for the rest of the posts here, I think introducing measures right away is a mistake. Start with a discrete sample space and tease out information. Don't introduce random variables, even in this setting, until much later. Focus on the sample space and events, over and over. Really, this is the core of the OP's question -- to understand the mathematical treatment of "randomness", you need to get your head around what's going on with these idealized experiments defined by sample space(s); that's where the "randomness" is modeled.
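A quick sketch of what "focus on the sample space and events" looks like in practice, again with my own toy names -- a two-dice experiment, where conditioning on partial information is just re-counting over a smaller set of sample points:

```python
from itertools import product
from fractions import Fraction

# Sample space: ordered pairs of fair die faces -- 36 equally likely points.
omega = list(product(range(1, 7), repeat=2))

def prob(event, given=None):
    """P(event), or P(event | given) when a conditioning predicate is supplied."""
    space = omega if given is None else [w for w in omega if given(w)]
    return Fraction(sum(1 for w in space if event(w)), len(space))

sum_is_seven   = lambda w: w[0] + w[1] == 7
first_is_three = lambda w: w[0] == 3

print(prob(sum_is_seven))                        # 1/6
print(prob(sum_is_seven, given=first_is_three))  # still 1/6
```

Note that the conditional probability is computed by literally shrinking the sample space and counting again -- no new machinery needed.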
- - - -
A common theme in my posts is to use basic lightweight machinery, and only use heavier machinery if absolutely needed. It's part of the reason I use ##\text{GM}\leq \text{AM}## over and over. There's a similar idea with Feller vol 1.
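(For anyone who hasn't seen it: that's the AM-GM inequality. For nonnegative reals ##x_1, \dots, x_n##,
$$\sqrt[n]{x_1 x_2 \cdots x_n} \;\leq\; \frac{x_1 + x_2 + \cdots + x_n}{n},$$
with equality iff all the ##x_i## are equal. Lightweight, but it does a remarkable amount of work.)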
fresh_42 said:
So the random variable is a function on a configuration space and as such it is deterministic.
Fair, but I already said this... I'll restate it with different emphasis for others' benefit:
Feller said:
A function defined on a sample space is called a random variable... The term random variable is somewhat confusing; random function would be more appropriate (the independent variable being a point in the sample space, that is, the outcome of an experiment).
Again, the 'randomness' lurks in the sample space.
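In code, the Feller point is almost anticlimactic. A minimal sketch (Python, my names): the "random variable" is an ordinary deterministic function, and the only randomness is which sample point it gets handed.

```python
import random
from itertools import product

# Sample space: outcomes of three fair coin tosses.
omega = list(product([0, 1], repeat=3))

# A "random variable" is nothing but a deterministic function on omega.
def X(w):
    """Number of heads in the outcome w."""
    return sum(w)

# The randomness lives entirely in which sample point is drawn...
w = random.choice(omega)
# ...after that, X is as deterministic as any other function.
print(w, X(w))
```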
There are a lot of people on PF who seem to say and think that probability is merely a special case of measure theory. (I'm not sure whether Fresh is one per se, but a forum search will turn up many others.) I find this humorous, as it seems to miss the point. Here's a nice zinger from a favorite blogger:
Tao said:
At a purely formal level, one could call probability theory the study of measure spaces with total measure one, but that would be like calling number theory the study of strings of digits which terminate. At a practical level, the opposite is true: just as number theorists study concepts (e.g. primality) that have the same meaning in every numeral system that models the natural numbers, we shall see that probability theorists study concepts (e.g. independence) that have the same meaning in every measure space that models a family of events or random variables. And indeed, just as the natural numbers can be defined abstractly without reference to any numeral system (e.g. by the Peano axioms), core concepts of probability theory, such as random variables, can also be defined abstractly, without explicit mention of a measure space; we will return to this point when we discuss free probability later in this course.
https://terrytao.wordpress.com/2010/01/01/254a-notes-0-a-review-of-probability-theory/
So, for starting out: why not focus on probabilistic concepts as opposed to their representation in terms of measures? With a discrete sample space we do have this choice, and this is exactly where Feller vol 1 fits in.
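To echo the example Tao singles out: independence is a statement you can make and check on a bare discrete sample space by pure counting -- no measure-theoretic apparatus required. A sketch (my names again):

```python
from itertools import product
from fractions import Fraction

# Two fair coin tosses: four equally likely sample points.
omega = list(product([0, 1], repeat=2))

def prob(event):
    return Fraction(len(event), len(omega))

A = {w for w in omega if w[0] == 1}  # first toss is heads
B = {w for w in omega if w[1] == 1}  # second toss is heads

# Independence is just P(A & B) == P(A) * P(B) -- pure counting, no measures.
print(prob(A & B) == prob(A) * prob(B))  # True
```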
(An outside-the-scope thought: even in a discrete setting, dominated convergence can help streamline an awful lot of arguments about stochastic processes... I just don't want to put the cart before the horse here.)