# Measurability of random variables

1. Apr 15, 2012

### the4thamigo_uk

I've been working with random variables for a while, and only today have I come up with a basic question that undermines what I thought I knew...

If I have two random variables X and Y, when am I allowed to multiply them? i.e. Z=XY

Let S_1 and S_2 be sigma algebras such that S_1 is contained in S_2.

Cases

i) X and Y are both S_1 measurable

It seems clear that Z=XY exists and is also S_1 measurable

ii) X is S_1 measurable and Y is S_2 measurable

In this case X is also S_2 measurable, but Y is not S_1 measurable. (Am I correct to say this?)

Can we form Z=XY and if so does Z simply become S_2 measurable?

iii) Assume S_3 is not a subset of either S_1 or S_2

Can we write Z=XY?
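The cases above can be made concrete on a finite sample space, where a sigma algebra is just a finite collection of subsets and a function is measurable iff every preimage lies in that collection. A toy sketch (the particular sets and functions here are invented for illustration, not taken from the thread):

```python
# Toy sample space and two nested sigma algebras on it.
omega = {1, 2, 3, 4}
S1 = {frozenset(), frozenset({1, 2}), frozenset({3, 4}), frozenset(omega)}
# S2 refines S1: here it is the full power set of omega.
S2 = {frozenset(s) for s in
      [set(), {1}, {2}, {3}, {4}, {1, 2}, {1, 3}, {1, 4}, {2, 3}, {2, 4},
       {3, 4}, {1, 2, 3}, {1, 2, 4}, {1, 3, 4}, {2, 3, 4}, omega]}

def is_measurable(f, sigma):
    """f (a dict omega -> reals) is sigma-measurable iff every preimage
    f^{-1}({v}) lies in sigma (sufficient on a finite space)."""
    return all(frozenset(w for w in omega if f[w] == v) in sigma
               for v in set(f.values()))

X = {1: 5.0, 2: 5.0, 3: 7.0, 4: 7.0}   # constant on S1's atoms -> S1-measurable
Y = {1: 1.0, 2: 2.0, 3: 3.0, 4: 4.0}   # S2-measurable but not S1-measurable
Z = {w: X[w] * Y[w] for w in omega}

print(is_measurable(X, S1), is_measurable(Y, S1))  # True False  (case ii)
print(is_measurable(Z, S2), is_measurable(Z, S1))  # True False
```

This matches the intuition in case (ii): the product Z picks up the finer sigma algebra S_2, and is in general not measurable with respect to the coarser S_1.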

2. Apr 15, 2012

### SW VandeCarr

Probabilities are by definition measures. If X and Y are independent RVs, then P(X)P(Y) is the product of the two probabilities. If the sets X and Y are disjoint, then P(X)P(Y)=0. If the sets X and Y are dependent, then the product depends on the degree of dependence as measured by the intersection of the sets X and Y within the probability space. Probabilities are measures on sets in a probability space.

Last edited: Apr 15, 2012
3. Apr 16, 2012

### Stephen Tashi

I think the question entails what to do if the random variables are not defined on the same probability space since it mentions two different sigma algebras.

4. Apr 16, 2012

### SW VandeCarr

In which case the answer is......?

5. Apr 16, 2012

### mathman

In all my courses in probability theory I have never encountered a situation involving more than one random variable where they were not defined on the same probability space, with the same sigma algebra.

6. Apr 17, 2012

### Stephen Tashi

Let S_1 and S_2 be sigma algebras such that S_1 is "contained" in S_2.

"Contained" is an ambiguous word in many contexts, but perhaps it's the right one here, since a sigma algebra is a collection of sets rather than a set of points.

You didn't mention the probability measure or measures we are using. I assume you mean that there is some probability measure $\mu$ defined on S_1.

I agree, but this is from vague memory of measure theory.

Specifying a sigma algebra S_2 (even one containing S_1) doesn't specify a measure for it. The usual terminology for functions is something like $\mu$-measurable, where $\mu$ is the probability measure. You aren't specifying any measure for S_2. This brings up the interesting question of whether we could say "Let S_2 have the same probability measure as S_1". I don't think that is technically correct if S_1 $\neq$ S_2. The measures would be functions with different domains, so they would not be "the same" function. Let's say that there is a measure $\mu_2$ on S_2 that agrees with $\mu$ on the sets that are common to both sigma algebras.


I think so, in spirit, but in addition to the technicality that you should be talking about measurability with respect to measures instead of sigma algebras, there is the technicality that X restricted to a smaller domain is not the same function as X. The function Y isn't measurable "on S_1" merely because it isn't defined there. This raises the interesting question of whether there is a unique extension of Y that is. I don't know the answer to that.

I think Z = XY is $\mu_2$-measurable, where X denotes the restriction of X to S_2.

This doesn't make sense as a question. Where does S_3 enter the picture? Is it to be the domain of Y?

I think you can define a "product measure" on tuples of sets, each taken from a different sigma algebra. So if X is $\mu$-measurable on S_1 and Y is $\mu_3$-measurable on S_3, then you can implement the idea of an independent realization of X and Y by taking - what should I say? - the Cartesian product of S_1 with S_3? That terminology may only apply to sets, but you get the idea. You can define a product measure $\mu \times \mu_3$ on that collection of ordered pairs of sets.
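On finite spaces the product-measure construction just described is a few lines of code. A minimal sketch, with a coin and a three-sided die made up for the purpose:

```python
from itertools import product

# Two toy probability spaces, with measures given on the atoms;
# every set's measure is the sum over its atoms.
omega1, mu1 = ["H", "T"], {"H": 0.5, "T": 0.5}
omega3, mu3 = [1, 2, 3], {1: 1/3, 2: 1/3, 3: 1/3}

# Product sample space and product measure on its atoms.
omega_prod = list(product(omega1, omega3))
mu_prod = {(a, b): mu1[a] * mu3[b] for a, b in omega_prod}

# On a "rectangle" A x B, the product measure factors: (mu x mu3)(A x B)
# = mu(A) * mu3(B), which is exactly independence of the two coordinates.
A, B = {"H"}, {1, 2}
lhs = sum(mu_prod[(a, b)] for a in A for b in B)
rhs = sum(mu1[a] for a in A) * sum(mu3[b] for b in B)
print(abs(lhs - rhs) < 1e-12, abs(sum(mu_prod.values()) - 1) < 1e-12)
```

The same recipe works for any two finite spaces; the measure-theoretic work is in extending it beyond rectangles and beyond finite sample spaces.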

If you are trying to deal with a situation where there is a dependence between X and Y then you have to say more about what relates them before we can make progress.

7. Apr 17, 2012

### SW VandeCarr

Random variables are specifically defined in terms of mappings from an event space to the interval [0,1]. That is, within the context of probability theory. I don't know how you define independence or the lack of independence between two random variables in terms of more than one probability space. Arguably, two random variables could be disjoint, with each defined in its own probability space, such that the addition of the two associated probabilities is not defined. However, this is not something I have ever encountered. Could you give an appropriate context where two random variables are defined in such a way that the addition of the two associated probabilities is not defined? ("Appropriate" meaning within the context of probability theory where random variables are defined.)

Last edited: Apr 17, 2012
8. Apr 17, 2012

### Stephen Tashi

If you roll a fair die and then toss a fair coin, it is perfectly ordinary to say that the result of the coin toss is independent of the die roll.

You can hardly state any statistical problem of moderate complexity without getting into several different probability spaces. For example, the probability space for 5 independent random draws from a normal distribution is not the same as the probability space for 1 random draw from a normal distribution.

9. Apr 17, 2012

### SW VandeCarr

First, with coin tossing, you're talking about mutually exclusive events. This is different from the usual situation of two independent events A, B, where the probability of their union is P(A)+P(B)-P(A)P(B) in a common probability space. Clearly the intersection must be defined in terms of a common probability space. Disjoint events cannot occur at the same time, which limits the kinds of events that can occur.

A probability space consists of a triple: 1) a sample space of outcomes, 2) a sigma algebra over an event space, where zero or more outcomes are associated with each event, and 3) a probability measure assigned to each event.

Outcomes can be complex. The sequence HHTHTTHHTH is an outcome. In fact, any set of events associated with any number of coin tosses will have a total probability which cannot exceed one in a single experiment.

You can define multiple probability spaces to correspond to multiple experiments, but you can also partition a single probability space to correspond to a combined set of experiments when such aggregation makes sense.

Last edited: Apr 17, 2012
10. Apr 17, 2012

### chiro

Intuitively, I can't see why you couldn't define Z in terms of X and Y if they have valid probability density functions, no matter what the measure.

The only catch is if one variable has a non-zero probability of being +infinity or -infinity while the other has a non-zero probability of being non-zero.

That is the only case that wouldn't make sense, since we usually need something finite unless you specifically define characteristics that make sense of non-finite realizations of your random variables.

Also I'm assuming that X and Y are just real numbers when they are realized. The probability space for Z will simply be the Cartesian product for the spaces X and Y in the way that an event for Z will depend on each realization of X with a realization of Y in the same way that we generate [0,1]x[0,1] for the Cartesian product. Note that I am talking about event generation and not probability generation for the events: this will depend on the distribution itself and things like whether there are any dependencies between X and Y.

11. Apr 18, 2012

### Stephen Tashi

I don't know how this impacts anything that I wrote or whether it was intended to.

That's not controversial. It amounts to saying that a probability space is a triple consisting of a set, a sigma algebra of subsets of that set, and a function that defines a probability measure on the sigma algebra.

I don't think you can partition the real numbers in any way to turn them into 5-tuples.

You can form a set that is the product of 5 sets. You can form the product sigma algebra of 5 sigma algebras and you can form the product measure of 5 measures. (Most practical probability books don't treat the measure theory aspects of probability rigorously so they don't bother with such things.)
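For a finite example, the 5-fold product of a single coin-toss space can be built explicitly: the product sample space is the set of 5-tuples, and the product measure multiplies coordinate-wise (a toy sketch, not from the thread):

```python
from itertools import product

# One coin-toss space, given on its atoms.
coin = {"H": 0.5, "T": 0.5}

# 5-fold product: atoms are 5-tuples, measure is the product of coordinates.
five_tosses = {seq: 1.0 for seq in product(coin, repeat=5)}
for seq in five_tosses:
    for outcome in seq:
        five_tosses[seq] *= coin[outcome]

print(len(five_tosses))             # 32 atoms
print(five_tosses[("H",) * 5])      # 0.03125 = (1/2)^5
print(abs(sum(five_tosses.values()) - 1) < 1e-12)
```

This is the sense in which the space for 5 independent draws is a genuinely different probability space from the space for 1 draw, even though it is manufactured from it.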

12. Apr 18, 2012

### Stephen Tashi

It's an interesting task to try to translate between the terminology of intermediate probability theory and the terminology of measure theory.

A "random variable" in measure theory is a function from some set to the real numbers. There is some sigma algebra you care about on the real numbers. The random variable has the property that the inverse image of any set in that sigma algebra under the random variable is a set in another sigma algebra, on the domain of the random variable, that you also care about, and that set will be measurable by the measure defined on that sigma algebra. So a "random variable" in measure theory can't be defined without reference to a measure (and the other things).
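In symbols, the definition just described is the standard one (the names $\Omega$, $\mathcal{F}$, $\mathcal{B}$ are the usual textbook choices, not taken from the thread):

```latex
X : (\Omega, \mathcal{F}, \mu) \to (\mathbb{R}, \mathcal{B})
\quad \text{is a random variable iff} \quad
X^{-1}(B) \in \mathcal{F} \ \text{ for every } B \in \mathcal{B}.
```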

(It's ironic that I'm playing the role of measure theory person. It isn't the way I think about probability theory and I was never any good at measure theory. Maybe this is penance.)

So what plays the role of a "probability density function" in measure theory? I wish a real measure theory expert would tell us this. Basically, a "measure" is a function that defines a type of abstract integration. A density (in ordinary probability theory) is the derivative of a particular kind of integral. So I think the analog of a probability density function would be a function that is, in some sense, a derivative of a measure. This is called a "Radon-Nikodym derivative".
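That analogy can be written down precisely. If a probability measure $P$ on $(\mathbb{R}, \mathcal{B})$ is absolutely continuous with respect to Lebesgue measure $\lambda$, the Radon-Nikodym theorem gives a function $f$ with

```latex
P(A) = \int_A f \, d\lambda \quad \text{for all } A \in \mathcal{B},
\qquad f = \frac{dP}{d\lambda},
```

and this $f$ is exactly the ordinary probability density function.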

I don't know whether the4thamigo_uk is interested in this or whether he wants to step back from the measure theory cliff.

13. Apr 18, 2012

### chiro

Provided that you have the right measures and that the values for each 'distribution' are finite, wouldn't you still get a final 'distribution' that satisfies the axioms and produces finite results?

14. Apr 18, 2012

### Stephen Tashi

I think the answer to that is yes, in practical terms. From a rigorous point of view, we would have to define what "distribution" is in measure theoretic terms to sort it out.

The way ordinary probability texts sidestep measure theory is to use specific methods of integration. They use Riemann (or similar) integrals for continuous random variates and for discrete distributions they use summation. From the point of view of measure theory, both of these methods are the beginnings of measures.

It is easy to invent examples of random variates that aren't purely continuous or discrete. For example, define the random variable X (in practical terms) as follows. Flip a fair coin. If the coin lands heads then X = 1. If the coin lands tails then let X be the result of a draw from a uniform random variable u on the interval [0,2]. Practical people know how to handle the distribution of X through a mixture of Riemann integration and summation, but you can't write a simple exposition of a theory of distribution functions and densities that handles this type of situation unless you get into forms of integration and differentiation that are more general than Riemann integration and summation.
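The mixed random variable just described is easy to simulate, and the simulation makes the point visible: the sample mean needs both a point mass (summation) and a density (integration) to compute analytically, and a genuine atom sits at X = 1. A minimal sketch:

```python
import random

def draw_X(rng):
    """One realization of X: heads -> X = 1, tails -> X ~ Uniform[0, 2]."""
    return 1.0 if rng.random() < 0.5 else rng.uniform(0.0, 2.0)

rng = random.Random(42)
n = 200_000
samples = [draw_X(rng) for _ in range(n)]

# Analytically: E[X] = 0.5 * 1 + 0.5 * E[Uniform[0,2]] = 0.5 + 0.5 = 1,
# one term from the discrete part, one from the continuous part.
mean = sum(samples) / n
atom = sum(1 for x in samples if x == 1.0) / n  # P(X = 1): a genuine atom
print(mean, atom)  # roughly 1.0 and 0.5
```

No purely continuous variable has P(X = 1) near 0.5, and no purely discrete one fills the interval [0, 2]; that is exactly why neither Riemann integration nor summation alone suffices.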

If we look at a simple definite integral from calculus $\int_a^b f(x) dx$, we can pretend f(x) is "given", and the definite integral can be regarded as a function whose domain is the collection of sets of the form [a,b] and whose range is the real numbers. The reason it isn't a measure on the real numbers is that it doesn't produce an answer on all the sets in a sigma algebra on the real numbers. You have to struggle to extend the definition of the integral in order to get results on all the weird sets that can crop up in a sigma algebra.

My education went from Riemann integration to measure theory with only a brief stop at the Riemann-Stieltjes integral, but I think that type of integration is one way of handling the mixture of continuous and discrete random variates.
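For the coin-and-uniform example above, the Riemann-Stieltjes integral against the CDF does handle both pieces at once: the CDF has a continuous ramp from the uniform part and a jump of 1/2 at x = 1 from the coin,

```latex
F(x) = \tfrac{x}{4} \;\; (0 \le x < 1), \qquad
F(x) = \tfrac{x}{4} + \tfrac{1}{2} \;\; (1 \le x \le 2),
```

and a single integral recovers the mean:

```latex
E[X] = \int x \, dF(x)
     = \int_0^2 x \cdot \tfrac{1}{4} \, dx + 1 \cdot \tfrac{1}{2}
     = \tfrac{1}{2} + \tfrac{1}{2} = 1.
```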

The outlook of measure theory is "Let's assume I've solved all the integration theory. We aren't going to worry about how I did it, or whether there is any underlying function f(x) that I'm integrating over this collection of sets, or whether I'm using a mixture of Riemann integration and summation. We'll assume I have a measure, so if you give me a set in the sigma algebra then I can assign it a number and the way this function behaves on the sets resembles the way that simple theories of integration behave on the sets they can deal with."

If you want to go from measure theory to probability measures to something resembling probablity densities or cumulative distributions, you need more theoretical machinery. My point is that densities and distributions are not "built-in" to the basics of measure theory. A measure is like a "black box" process. You can speculate that it comes from integrating a specific function by using a specific method of integration, but nothing in the definition of measure guarantees that this is how it operates.