Measurability of random variables

  • #1
the4thamigo_uk
I've been working with random variables for a while and only today have I come up with a basic question that undermines what I thought I knew...

If I have two random variables X and Y, when am I allowed to multiply them? i.e. Z=XY

Let S_1 and S_2 be sigma algebras such that S_1 is contained in S_2

Cases

i) X and Y are both S_1 measurable

It seems clear that Z=XY exists and is also S_1 measurable

ii) X is S_1 measurable and Y is S_2 measurable

In this case X is also S_2 measurable, but Y is not S_1 measurable. (Am I correct to say this?)

Can we form Z=XY and if so does Z simply become S_2 measurable?

iii) Assume S_3 is not a subset of either S_1 or S_2

Can we write Z=XY?

Thanks for your help
 
  • #2
the4thamigo_uk said:
Can we write Z=XY? Thanks for your help

Probabilities are by definition measures. If X and Y are independent RVs, then P(X)P(Y) is the product of the two probabilities. If the sets X and Y are disjoint, then P(X)P(Y)=0. If the sets X and Y are dependent, then the product depends on the degree of dependence as measured by the intersection of the sets X and Y within the probability space. Probabilities are measures on sets in a probability space.
 
  • #3
SW VandeCarr said:
If the sets X and Y are dependent, then the product depends on the degree of dependence as measured by the intersection of the sets X and Y within the probability space.

I think the question entails what to do if the random variables are not defined on the same probability space since it mentions two different sigma algebras.
 
  • #4
Stephen Tashi said:
I think the question entails what to do if the random variables are not defined on the same probability space since it mentions two different sigma algebras.

In which case the answer is...?
 
  • #5
In all my courses in probability theory I have never encountered a situation involving more than one random variable where they were not defined on the same probability space, with the same sigma algebra.
 
  • #6
the4thamigo_uk said:
I've been working with random variables for a while and only today have I come up with a basic question that undermines what I thought I knew...

If I have two random variables X and Y, when am I allowed to multiply them? i.e. Z=XY

Let S_1 and S_2 be sigma algebras such that S_1 is contained in S_2

Let S_1 and S_2 be sigma algebras such that S_1 is "contained" in S_2.

"Contained" is an ambiguous word in many contexts, but perhaps it is the right one here, since a sigma algebra is a collection of sets rather than a set of points.

Cases

i) X and Y are both S_1 measurable

You didn't mention the probability measure or measures we are using. I assume you mean that there is some probability measure [itex] \mu [/itex] defined on S_1.

It seems clear that Z=XY exists and is also S_1 measurable

I agree, but this is from vague memory of measure theory.
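On a finite sample space, case (i) can be checked concretely: with respect to the sigma algebra generated by a partition, a function is measurable exactly when it is constant on each atom, and the pointwise product of two such functions is again constant on each atom. A minimal Python sketch of this (my own illustration, not from the thread; the helper `is_measurable` and the specific sample space are made up):

```python
# Finite-space sketch: measurability w.r.t. a partition-generated sigma
# algebra means "constant on each atom of the partition".

def is_measurable(f, atoms):
    """f: dict mapping outcomes to reals; atoms: a partition of the outcomes."""
    return all(len({f[w] for w in atom}) == 1 for atom in atoms)

omega = ["a", "b", "c", "d"]
atoms_S1 = [{"a", "b"}, {"c", "d"}]           # atoms generating S_1

X = {"a": 2.0, "b": 2.0, "c": 5.0, "d": 5.0}  # constant on each atom
Y = {"a": 1.0, "b": 1.0, "c": 3.0, "d": 3.0}  # constant on each atom

Z = {w: X[w] * Y[w] for w in omega}           # Z = XY, defined pointwise

print(is_measurable(X, atoms_S1))  # True
print(is_measurable(Y, atoms_S1))  # True
print(is_measurable(Z, atoms_S1))  # True: the product is again S_1-measurable

W = {"a": 0.0, "b": 1.0, "c": 0.0, "d": 1.0}  # NOT constant on {"a","b"}
print(is_measurable(W, atoms_S1))  # False: W fails S_1-measurability
```

The same check with a finer partition (more atoms) models the larger sigma algebra S_2: any function constant on the coarse atoms is automatically constant on the finer ones.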

ii) X is S_1 measurable and Y is S_2 measurable
Specifying a sigma algebra S_2 (even as a superset of S_1) doesn't specify a measure for it. The usual terminology for functions is something like [itex] \mu[/itex]-measurable where [itex] \mu [/itex] is the probability measure. You aren't specifying any measure for S_2. This brings up the interesting question of whether we could say "Let S_2 have the same probability measure as S_1". I don't think that is technically correct if S_1 [itex] \neq [/itex] S_2. The measures would be functions with different domains, so they would not be "the same" function. Let's say that there is a measure [itex] \mu_2 [/itex] on S_2 that agrees with [itex] \mu [/itex] on the sets that are common to both sigma algebras.

In this case X is also S_2 measurable, but Y is not S_1 measurable. (Am I correct to say this?)

I think so, in spirit, but in addition to the technicality that you should be talking about measurability with respect to measures instead of sigma algebras, there is the technicality that X restricted to a smaller domain is not the same function as X. The function Y isn't measurable "on S_1" merely because it isn't defined there. This raises the interesting question of whether there is a unique extension of Y that is measurable. I don't know the answer to that.

Can we form Z=XY and if so does Z simply become S_2 measurable?

I think Z = XY is [itex] \mu_2 [/itex] measurable, where X denotes the restriction of X to S_2.

iii) Assume S_3 is not a subset of either S_1 or S_2

Can we write Z=XY?

This doesn't make sense as a question. Where does S_3 enter the picture? Is it to be the domain of Y?

I think you can define a "product measure" on tuples of sets, each taken from a different sigma algebra. So if X is [itex] \mu [/itex] measurable on S_1 and Y is [itex] \mu_3 [/itex] measurable on S_3 then you can implement the idea of an independent realization of X and Y by taking - what should I say? The Cartesian product of S_1 with S_3? That terminology may only apply to sets, but you get the idea. You can define a product measure [itex] \mu\mu_3 \ [/itex] on that collection of ordered pairs of sets.
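For finite sigma algebras the product measure idea can be written down exactly. A small sketch using exact rational arithmetic (my own illustration, not from the thread): the measure of an ordered pair is the product of the two coordinate measures, which makes the coordinates independent by construction.

```python
# Product measure on a finite product space, computed exactly with Fractions.
from fractions import Fraction
from itertools import product

# Two separate finite "probability spaces": a fair coin and a fair 3-sided die.
mu1 = {"H": Fraction(1, 2), "T": Fraction(1, 2)}
mu3 = {1: Fraction(1, 3), 2: Fraction(1, 3), 3: Fraction(1, 3)}

# Product measure on singletons of the product space: mu(a, b) = mu1(a) * mu3(b).
mu = {(a, b): mu1[a] * mu3[b] for a, b in product(mu1, mu3)}

# Independence is built in: P(coin = H and die = 2) = P(coin = H) * P(die = 2).
assert mu[("H", 2)] == mu1["H"] * mu3[2]

# The product measure is again a probability measure: total mass exactly 1.
assert sum(mu.values()) == 1

print(mu[("H", 2)])  # 1/6
```

The point of the exact arithmetic is only to show the bookkeeping; for infinite spaces the same construction needs the full product-sigma-algebra machinery.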

If you are trying to deal with a situation where there is a dependence between X and Y then you have to say more about what relates them before we can make progress.
 
  • #7
Stephen Tashi said:
I think the question entails what to do if the random variables are not defined on the same probability space since it mentions two different sigma algebras.

Random variables are specifically defined in terms of mappings from the interval [0,1] to an event space. That is, within the context of probability theory. I don't know how you define the independence or the lack of independence between two random variables in terms of more than one probability space. Arguably, two random variables could be disjoint, with each defined in its own probability space, such that the addition of the associated two probabilities is not defined. However, this is not something I have ever encountered. Could you give an appropriate context where two random variables are defined in such a way that the addition of the two associated probabilities is not defined? ("Appropriate" meaning within the context of probability theory where random variables are defined.)
 
  • #8
SW VandeCarr said:
I don't know how you define the independence or the lack of independence between two random variables in terms of more than one probability space.

If you roll a fair die and then toss a fair coin, it is perfectly ordinary to say that the result of the coin toss is independent of the die roll.

You can hardly state any statistical problem of moderate complexity without getting into several different probability spaces. For example, the probability space for 5 independent random draws from a normal distribution is not the same as probability space for 1 random draw from a normal distribution.
 
  • #9
Stephen Tashi said:
If you roll a fair die and then toss a fair coin, it is perfectly ordinary to say that the result of the coin toss is independent of the die roll.

You can hardly state any statistical problem of moderate complexity without getting into several different probability spaces. For example, the probability space for 5 independent random draws from a normal distribution is not the same as probability space for 1 random draw from a normal distribution.

First, with coin tossing, you're talking about mutually exclusive events. This is different from the usual definition of two iid RVs A, B such that the probability of their union is P(A)+P(B)-P(A)P(B) in a common probability space. Clearly the intersection must be defined in terms of a common probability space. Disjoint events cannot occur at the same time, which limits the kinds of events that can occur.

A probability space consists of the triple of 1) a sample space of outcomes, 2) a sigma algebra over an event space where zero or more outcomes are associated with each event, and 3) a probability measure assigned to each event.

Outcomes can be complex. The sequence HHTHTTHHTH is an outcome. In fact, any set of events associated with any number of coin tosses will have a total probability which cannot exceed one in a single experiment.

You can define multiple probability spaces to correspond to multiple experiments, but you can also partition a single probability space to correspond to a combined set of experiments when such aggregation makes sense.
 
  • #10
the4thamigo_uk said:
I've been working with random variables for a while and only today have I come up with a basic question that undermines what I thought I knew...

If I have two random variables X and Y, when am I allowed to multiply them? i.e. Z=XY

Let S_1 and S_2 be sigma algebras such that S_1 is contained in S_2

Cases

i) X and Y are both S_1 measurable

It seems clear that Z=XY exists and is also S_1 measurable

ii) X is S_1 measurable and Y is S_2 measurable

In this case X is also S_2 measurable, but Y is not S_1 measurable. (Am I correct to say this?)

Can we form Z=XY and if so does Z simply become S_2 measurable?

iii) Assume S_3 is not a subset of either S_1 or S_2

Can we write Z=XY?

Thanks for your help

Intuitively, I can't see why you couldn't define Z in terms of X and Y if they have valid probability density functions, no matter what the measure.

The only problem is if one variable has a non-zero probability of taking the value +infinity or -infinity where the other variable has a non-zero probability of being non-zero.

This is the only case that wouldn't make sense, since we usually require finite values unless you specifically define characteristics that make sense of non-finite realizations of your random variables.

Also, I'm assuming that X and Y are just real numbers when they are realized. The probability space for Z will simply be the Cartesian product of the spaces for X and Y: an event for Z pairs each realization of X with a realization of Y, in the same way that we generate [0,1]x[0,1] as a Cartesian product. Note that I am talking about event generation and not probability generation for the events: the latter will depend on the distribution itself and things like whether there are any dependencies between X and Y.
 
  • #11
SW VandeCarr said:
First, with coin tossing, you're talking about mutually exclusive events.
I don't know how this impacts anything that I wrote or whether it was intended to.

A probability space consists of the triple of 1) a sample space of outcomes, 2) a sigma algebra over an event space where zero or more outcomes are associated with each event, and 3) a probability measure assigned to each event.

That's not controversial. It amounts to saying that a probability space is a triple consisting of a set, a sigma algebra of subsets of that set, and a function that defines a probability measure on the sigma algebra.


you can also partition a single probability space to correspond to a combined set of experiments when such aggregation makes sense.

I don't think you can partition the real numbers in any way to turn them into 5-tuples.

You can form a set that is the product of 5 sets. You can form the product sigma algebra of 5 sigma algebras and you can form the product measure of 5 measures. (Most practical probability books don't treat the measure theory aspects of probability rigorously so they don't bother with such things.)
 
  • #12
chiro said:
Intuitively, I can't see what you couldn't define Z in terms of X and Y if they have valid probability density functions, no matter what the measure.

It's an interesting task to try to translate between the terminology of intermediate probability theory and the terminology of measure theory.

A "random variable" in measure theory is a function from some set to the real numbers. There is some sigma algebra you care about on the real numbers. The random variable has the property that if you take the inverse image of any set in that sigma algebra under the random variable, it will be a set in another sigma algebra in the domain of the random variable that you also care about, and that set will be measurable by the measure defined on that sigma algebra. So a "random variable" in measure theory can't be defined without reference to a measure (and the other things).
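The inverse-image property above can be checked mechanically on a finite space. A hedged Python sketch (my own illustration; `sigma_algebra_from_atoms` and `preimage` are made-up helper names): enumerate the sigma algebra generated by a partition, then verify that the preimage of every relevant value set lands inside it.

```python
# Inverse-image check of measurability on a finite sample space.
from itertools import combinations

def sigma_algebra_from_atoms(atoms):
    """All unions of atoms (plus the empty set): the generated sigma algebra."""
    sets = {frozenset()}
    for r in range(1, len(atoms) + 1):
        for combo in combinations(atoms, r):
            sets.add(frozenset().union(*combo))
    return sets

def preimage(f, values):
    """Inverse image under f of a set of real values."""
    return frozenset(w for w in f if f[w] in values)

atoms = [frozenset({"a", "b"}), frozenset({"c", "d"})]
F = sigma_algebra_from_atoms(atoms)   # {∅, {a,b}, {c,d}, {a,b,c,d}}

X = {"a": 2.0, "b": 2.0, "c": 5.0, "d": 5.0}

# For a finite-valued X it suffices to check preimages of each set of values:
for vals in [{2.0}, {5.0}, {2.0, 5.0}, set()]:
    assert preimage(X, vals) in F     # every inverse image is an event in F

print("X is measurable with respect to F")
```

A function taking a different value on "a" and "b" would fail the check, since its preimages would split an atom and fall outside F.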

(It's ironic that I'm playing the role of measure theory person. It isn't the way I think about probability theory and I was never any good at measure theory. Maybe this is penance.)

So what plays the role of a "probability density function" in measure theory? I wish a real measure theory expert would tell us this. Basically, a "measure" is a function that defines a type of abstract integration. A density (in ordinary probability theory) is the derivative of a particular kind of integral. So I think the analog of a probability density function would be a function that was, in some sense, a derivative of a measure. This is called a "Radon-Nikodym derivative".

I don't know whether the4thamigo_uk is interested in this or whether he wants to step back from the measure theory cliff.
 
  • #13
Provided that you have the right measures and that the values for each 'distribution' are finite, then wouldn't you still get a final 'distribution' that satisfies the axioms and produces finite results?
 
  • #14
chiro said:
Provided that you have the right measures and that the values for each 'distribution' are finite, then wouldn't you still get a final 'distribution' that satisfies the axioms and produces finite results?

I think the answer to that is yes, in practical terms. From a rigorous point of view, we would have to define what "distribution" is in measure theoretic terms to sort it out.

The way ordinary probability texts sidestep measure theory is to use specific methods of integration. They use Riemann (or similar) integrals for continuous random variates and for discrete distributions they use summation. From the point of view of measure theory, both of these methods are the beginnings of measures.

It is easy to invent examples of random variates that aren't purely continuous or discrete. For example, define the random variable X (in practical terms) as follows. Flip a fair coin. If the coin lands heads then X = 1. If the coin lands tails then let X be the result of a draw from a uniform random variable u on the interval [0,2]. Practical people know how to handle the distribution of X through a mixture of Riemann integration and summation, but you can't write a simple exposition of a theory of distribution functions and densities that handles this type of situation unless you get into forms of integration and differentiation that are more general than Riemann integration and summation.
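A quick Monte Carlo sketch of this mixed random variable (assuming the construction above: a fair coin, heads gives X = 1, tails gives a uniform draw on [0, 2]); the simulation makes the point mass at 1 visible, which no density alone can represent.

```python
# Simulate the mixed discrete/continuous random variable from the example:
# heads -> X = 1 exactly; tails -> X uniform on [0, 2].
import random

random.seed(0)

def draw_X():
    return 1.0 if random.random() < 0.5 else random.uniform(0.0, 2.0)

samples = [draw_X() for _ in range(200_000)]

# The atom at 1 carries probability exactly 1/2; the uniform part puts
# (essentially) zero probability on the single point 1.0.
p_atom = sum(1 for x in samples if x == 1.0) / len(samples)

# E[X] = (1/2)*1 + (1/2)*E[Uniform(0,2)] = (1/2)*1 + (1/2)*1 = 1.
mean = sum(samples) / len(samples)

print("P(X = 1) estimate:", round(p_atom, 3))  # close to 0.5
print("E[X] estimate:", round(mean, 3))        # close to 1.0
```

The estimate of P(X = 1) staying near 1/2 is exactly the feature a pure density would assign probability zero, which is why the mixture needs either summation plus integration or a more general integral.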

If we look at a simple definite integral from calculus [itex] \int_a^b f(x) dx [/itex], we can pretend f(x) is "given" and the definite integral can be regarded as a function whose domain is the collection of sets of the form [a,b] and whose range is the real numbers. The reason it isn't a measure on the real numbers is that it doesn't produce an answer on all the sets in a sigma algebra on the real numbers. You have to struggle to extend the definition of integral in order to get results on all the weird sets that can crop up in a sigma algebra.

My education went from Riemann integration to measure theory with only a brief stop at the Riemann-Stieltjes integral, but I think that type of integration is one way of handling the mixture of continuous and discrete random variates.

The outlook of measure theory is "Let's assume I've solved all the integration theory. We aren't going to worry about how I did it, or whether there is any underlying function f(x) that I'm integrating over this collection of sets, or whether I'm using a mixture of Riemann integration and summation. We'll assume I have a measure, so if you give me a set in the sigma algebra then I can assign it a number and the way this function behaves on the sets resembles the way that simple theories of integration behave on the sets they can deal with."

If you want to go from measure theory to probability measures to something resembling probability densities or cumulative distributions, you need more theoretical machinery. My point is that densities and distributions are not "built-in" to the basics of measure theory. A measure is like a "black box" process. You can speculate that it comes from integrating a specific function by using a specific method of integration, but nothing in the definition of measure guarantees that this is how it operates.
 

1. What is a random variable?

A random variable is a mathematical concept used in probability theory and statistics to represent a numerical outcome from a random event or experiment. It can take on different values with certain probabilities associated with each value.

2. How is the measurability of random variables determined?

A random variable is measurable with respect to a sigma-algebra if the preimage of every Borel set of real numbers under the random variable belongs to that sigma-algebra. In practice this is checked by defining a sample space, a sigma-algebra of events on it, and then verifying that the mapping from outcomes to numerical values respects that structure.

3. What is the significance of measurability in probability theory?

In probability theory, measurability is important because it allows us to calculate and analyze the likelihood of different outcomes in a random experiment. It also allows us to calculate important measures such as expected values and variances.

4. Can all random variables be measured?

No, not all random variables are measurable. In order for a random variable to be measurable, it must satisfy certain mathematical properties and be defined on a measurable space. Otherwise, it is considered non-measurable.

5. How is measurability related to the concept of a sigma-algebra?

A sigma-algebra is a mathematical structure that contains a collection of events or outcomes that are measurable. Measurability of a random variable is closely related to the sigma-algebra it is defined on, as it must be measurable with respect to this sigma-algebra in order for it to be a valid random variable.
