Motivation behind random variables?

  1. Sep 10, 2010 #1
    What is the motivation behind random variables in probability theory?

    The definition is easy to understand. Given a probability space (Ω, μ), a random variable on that space is a measurable function X:Ω→R (often required to be integrable so that its expectation exists). So essentially, it allows you to work in the concrete representation R instead of the abstract Ω.

    But why is that useful?

    Take a simple example of flipping a pair of coins in order. Our sample space is Ω = {HH, HT, TH, TT}, and the probability measure is μ({HH}) = μ({HT}) = μ({TH}) = μ({TT}) = 1/4.

    A random variable on this space can be a simple numeric labelling: X(HH) = 1, X(HT) = 2, X(TH) = 3, X(TT) = 4. Or the labels can come in a different order. Or it can be non-injective, such as the constant random variable X(s) = 1, or the "outcome same" indicator, X(HH) = X(TT) = 1, X(HT) = X(TH) = 0.
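
    To fix ideas, here is how I would model this example in a few lines of Python (the names and the helper function are just my own choices, not anything from a textbook):

    Code:
    omega = ["HH", "HT", "TH", "TT"]
    mu = {s: 0.25 for s in omega}                           # the uniform probability measure

    X_label = {"HH": 1, "HT": 2, "TH": 3, "TT": 4}          # an injective numeric labelling
    X_const = {s: 1 for s in omega}                         # the constant random variable
    X_same = {s: 1 if s[0] == s[1] else 0 for s in omega}   # the "outcome same" indicator

    def expectation(X):
        # E[X] = sum of X(s) * mu({s}) over all outcomes s
        return sum(X[s] * mu[s] for s in omega)

    print(expectation(X_same))   # 0.5, the probability that both flips agree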

    How is that useful for analysis? Does it have to do with the random variable's role in defining the moments of a distribution? Or perhaps random variables simply aren't useful in the finite case?

    The reason I'm trying to understand this is that I am (trying) to learn a little about non-commutative probability theory (and ultimately, a little about quantum mechanics).

    The source I'm working off of is a paper by Mitchener (http://www.uni-math.gwdg.de/mitch/free.pdf [Broken]) which has a very succinct introduction, but after chapter 2, becomes much more abstract than I care to deal with. I'm looking for simple applications that can be modeled in software, not the hardcore theory.

    Ideally, I think I'd be satisfied if I could find a concrete non-commutative probability model for the game found in Sigfpe's blog on negative probabilities (http://blog.sigfpe.com/2008/04/negative-probabilities.html).

    But the first step is to understand why random variables are so important in probability theory, since non-commutative probability is described exclusively in terms of them.

    Any help would be greatly appreciated.
  2. Sep 10, 2010 #2
    One example is where you have several random variables representing various measurements of a single system; the event space formulation describes the dependence between those measurements in the most general possible way. For example with coin flipping, the measurements "number of heads" and "time of first head" relate in a nontrivial way but can be described in terms of their underlying events.
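
    To make that concrete, here is a small sketch of my own (three ordered flips of a fair coin, which is an assumption I'm adding): two random variables defined on the same finite sample space, with their joint behaviour read off directly from the underlying events.

    Code:
    from itertools import product
    from collections import Counter

    omega = list(product("HT", repeat=3))       # sample space: 8 equally likely outcomes
    p = 1.0 / len(omega)                        # uniform probability measure

    def num_heads(w):                           # first measurement
        return w.count("H")

    def first_head_time(w):                     # second measurement (0 means no head at all)
        return w.index("H") + 1 if "H" in w else 0

    joint = Counter((num_heads(w), first_head_time(w)) for w in omega)
    for (h, t), count in sorted(joint.items()):
        print(f"P(heads = {h}, first head at flip {t}) = {count * p}")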

    The event space formulation is most useful because it unifies probability with the theory of integration. One important consequence is that every random variable has a cdf, regardless of whether it has a pmf, a pdf, or neither (e.g. the Cantor distribution), and other quantities such as moments (when they exist) can be expressed as a Stieltjes integral with respect to the cdf.
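
    To spell out that last point (this is a standard fact, the wording is mine): for a random variable X with cdf F_X, the moments, when they exist, can be written without any reference to a pmf or pdf as

    $$\operatorname{E}[X^n] = \int_{-\infty}^{\infty} x^n \, dF_X(x),$$

    which covers discrete, continuous, and mixed or singular distributions in one formula.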

    Unfortunately the above paper's succinct introduction puts the rest of it on shaky foundations. Theorem 1.9, which seems to be central to the arguments of the rest of the paper, states incorrectly and without proof that "random variables with the same moments have the same probability laws". A famous counterexample is the lognormal distribution; in fact there is an infinite family of distributions with all the same moments as the lognormal (Heyde 1963; see also the moment problem). Fallacies like Theorem 1.9 tend to arise from misapplication of Taylor's theorem. Also, there is no mention of random variables with undefined or infinite moments, such as the Cauchy and Pareto distributions.
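
    For anyone who wants to see the lognormal counterexample concretely, here is a quick numerical check I put together (the standard lognormal with μ = 0, σ = 1 and the value a = 0.5 are my own choices; the perturbation follows Heyde's construction): the densities f_a(x) = f_LN(x)·[1 + a·sin(2π ln x)], for |a| ≤ 1, are all genuine probability densities with exactly the same moments as the lognormal.

    Code:
    import numpy as np
    from scipy.integrate import quad
    from scipy.stats import norm

    def perturbed_moment(n, a):
        # Work in the log variable y = ln(x): the integrand x**n * f_a(x) becomes
        # exp(n*y) * phi(y) * (1 + a*sin(2*pi*y)), with phi the standard normal pdf.
        integrand = lambda y: np.exp(n * y) * norm.pdf(y) * (1.0 + a * np.sin(2.0 * np.pi * y))
        value, _ = quad(integrand, -10.0, 15.0)   # effectively the whole real line here
        return value

    a = 0.5
    for n in range(1, 5):
        lognormal_moment = np.exp(n * n / 2.0)    # E[X^n] for the standard lognormal
        print(n, lognormal_moment, perturbed_moment(n, a))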