Measure theoretic definition of conditional expectation

logarithmic
#1
Apr8-12, 03:14 AM
P: 108
I've been looking at the measure theoretic definition of a conditional expectation and it doesn't make too much sense to me.

Consider the definition given here: https://en.wikipedia.org/wiki/Condit...mal_definition

It says for a probability space [itex](\Omega,\mathcal{A},P)[/itex] and a sigma field [itex]\mathcal{B}\subset\mathcal{A}[/itex], the random variable [itex]Y=E(X|\mathcal{B})[/itex] is the conditional expectation if it satisfies

[itex]\int_{B}YdP = \int_{B} X dP[/itex] for all [itex]B\in\mathcal{B}[/itex] (*).

But clearly setting [itex]Y=X[/itex] satisfies (*). And it goes on to say that conditional expectations are almost surely unique. So this means that [itex]E(X|\mathcal{B})=Y=X[/itex] almost surely?

If we consider the following example: [itex]\Omega=\{1,2,3\}[/itex], [itex]\mathcal{A}[/itex] is the power set of [itex]\Omega[/itex], [itex]\mathcal{B}=\{\{1,2\},\{3\},\varnothing,\Omega\}[/itex], [itex]X(\omega)=\omega[/itex] and [itex]P(1) = .25, P(2)=.65, P(3)=.1[/itex], then if you write out (*) for all the elements of [itex]\mathcal{B}[/itex], you'll get [itex]E(X|\mathcal{B})=X[/itex]. But clearly this isn't correct: given {3}, the conditional expectation should be 3, and given {1,2} it should be [itex]1\frac{.25}{.9} + 2\frac{.65}{.9}[/itex].

It's usually said that sigma fields model information. I also don't see what sort of information [itex]\mathcal{B}=\{\{1,2\},\{3\},\varnothing,\Omega\}[/itex] gives.

Can someone explain where my understanding is wrong, and how this relates to the more intuitive definition of conditional expectations for random variables:
[itex]E(X|Y)=\int_{\mathbb{R}}xf_{X|Y}(x,y)dx[/itex].
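
For concreteness, here's a quick Python sketch (my own check, purely illustrative, not taken from the wiki page) of the partition-wise averages I'm claiming the answer should be:

[code]
# Quick check of the values above for Omega = {1, 2, 3}
P = {1: 0.25, 2: 0.65, 3: 0.10}   # P({omega}) for each outcome
X = lambda w: w                    # X(omega) = omega
partition = [{1, 2}, {3}]          # the partition generating B

for B in partition:
    prob_B = sum(P[w] for w in B)
    # average of X over the event B, weighted by P(. | B)
    cond_exp = sum(X(w) * P[w] for w in B) / prob_B
    print(sorted(B), cond_exp)
# [1, 2] 1.7222...  (= 1*(.25/.9) + 2*(.65/.9)),  [3] 3.0
[/code]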
chiro
#2
Apr8-12, 05:27 AM
P: 4,572
Quote by logarithmic:
I've been looking at the measure theoretic definition of a conditional expectation and it doesn't make too much sense to me.

Consider the definition given here: https://en.wikipedia.org/wiki/Condit...mal_definition

It says for a probability space [itex](\Omega,\mathcal{A},P)[/itex] and a sigma field [itex]\mathcal{B}\subset\mathcal{A}[/itex], the random variable [itex]Y=E(X|\mathcal{B})[/itex] is the conditional expectation if it satisfies

[itex]\int_{B}YdP = \int_{B} X dP[/itex] for all [itex]B\in\mathcal{B}[/itex] (*).

But clearly setting [itex]Y=X[/itex] satisfies (*). And it goes on to say that conditional expectations are almost surely unique. So this means that [itex]E(X|\mathcal{B})=Y=X[/itex] almost surely?

If we consider the following example: [itex]\Omega=\{1,2,3\}[/itex], [itex]\mathcal{A}[/itex] is the power set of [itex]\Omega[/itex], [itex]\mathcal{B}=\{\{1,2\},\{3\},\varnothing,\Omega\}[/itex], [itex]X(\omega)=\omega[/itex] and [itex]P(1) = .25, P(2)=.65, P(3)=.1[/itex], then if you write out (*) for all the elements of [itex]\mathcal{B}[/itex], you'll get [itex]E(X|\mathcal{B})=X[/itex]. But clearly this isn't correct: given {3}, the conditional expectation should be 3, and given {1,2} it should be [itex]1\frac{.25}{.9} + 2\frac{.65}{.9}[/itex].

It's usually said that sigma fields model information. I also don't see what sort of information [itex]\mathcal{B}=\{\{1,2\},\{3\},\varnothing,\Omega\}[/itex] gives.

Can someone explain where my understanding is wrong, and how this relates to the more intuitive definition of conditional expectations for random variables:
[itex]E(X|Y)=\int_{\mathbb{R}}xf_{X|Y}(x,y)dx[/itex].
Hey logarithmic.

The easiest way to think about conditional expectation in the intuitive way (not rigorous measure theoretic way although it's equivalent) is that you want to find the mean for the variable X given some conditional information present in Y.

If you want to get an actual number, then Y needs to be 'realized': in other words, you will need specific information about what realization Y actually takes to get a number. If you don't have this, then you will get something in terms of the random variable Y itself which will give you a function and not a number that has been evaluated.

One way to think of this visually: suppose Z = f(X,Y) is a joint distribution. If you take a 'slice' of this PDF at a specific Y = y (the realization of Y is y), that slice of the bivariate distribution gives you an ordinary univariate distribution for X given Y = y. Taking its expectation is then, intuitively, the same as taking the expectation of any univariate distribution.

Now X and Y may not be just single random variables and they may represent something more complex, but the idea is the same.
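
If it helps, here's a rough numerical sketch of that slicing idea (just a toy joint density I made up for illustration, nothing special about it):

[code]
import numpy as np

# Toy joint density (unnormalised; the constant cancels when we
# renormalise the slice below)
def f(x, y):
    return np.exp(-(x**2 + y**2 - x*y))

x = np.linspace(-6, 6, 2001)     # grid for X
dx = x[1] - x[0]
y0 = 1.0                         # the realised value Y = y

slice_vals = f(x, y0)                            # the slice f(x, y0)
f_cond = slice_vals / (slice_vals.sum() * dx)    # approx f_{X|Y}(x | y0)

E_X_given_y0 = (x * f_cond).sum() * dx           # approx E[X | Y = y0]
print(E_X_given_y0)                              # ~0.5 for this toy density
[/code]

The point is just that the slice, once renormalised, is itself an ordinary distribution for X.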
logarithmic
#3
Apr8-12, 05:40 AM
P: 108
Quote by chiro:
Hey logarithmic.

The easiest way to think about conditional expectation in the intuitive way (not rigorous measure theoretic way although it's equivalent) is that you want to find the mean for the variable X given some conditional information present in Y.

If you want to get an actual number, then Y needs to be 'realized': in other words, you will need specific information about what realization Y actually takes to get a number. If you don't have this, then you will get something in terms of the random variable Y itself which will give you a function and not a number that has been evaluated.

One way to think of this visually: suppose Z = f(X,Y) is a joint distribution. If you take a 'slice' of this PDF at a specific Y = y (the realization of Y is y), that slice of the bivariate distribution gives you an ordinary univariate distribution for X given Y = y. Taking its expectation is then, intuitively, the same as taking the expectation of any univariate distribution.

Now X and Y may not be just single random variables and they may represent something more complex, but the idea is the same.
Yes that makes perfect sense to me. I already fully understand that definition of the conditional expectation.

It's the measure theoretic definition I don't understand.

chiro
#4
Apr8-12, 06:00 AM
P: 4,572
Measure theoretic definition of conditional expectation

For the measure theoretic definition, from what you've said it seems that you are integrating X over the region B with respect to the probability measure P.

The measure you are integrating with respect to will depend on the nature of P which if it is a Borel measure, should satisfy all the probability axioms for that measure.

Again, think about what the slice is with respect to X (for B) in the context of the probability space, in terms of the visual description above.
logarithmic
#5
Apr8-12, 06:11 AM
P: 108
Quote by chiro:
For the measure theoretic definition, from what you've said it seems that you are integrating X over the region B with respect to the probability measure P.

The measure you are integrating with respect to will depend on the nature of P which if it is a Borel measure, should satisfy all the probability axioms for that measure.

Again, think about what the slice is with respect to X (for B) in the context of the probability space, in terms of the visual description above.
When you integrate X over the region B wrt the probability measure P, what you get is not the conditional expectation.

The conditional expectation is a random variable [itex]Y:=E(X|\mathcal{B})[/itex] such that, if you do the above integration to Y, you get the same result as for X, i.e.

[itex]\int_{B}YdP = \int_{B} X dP[/itex] for all [itex]B\in\mathcal{B}[/itex]

This leads to the statement [itex]E(X|\mathcal{B}) = X[/itex] a.s. for all sigma fields [itex]\mathcal{B}[/itex], which makes no sense.

How would you calculate the conditional expectation in the example in the original post? And how is conditioning on a sigma field related to conditioning on a random variable? The latter makes sense to me, the former doesn't.
chiro
#6
Apr8-12, 06:25 AM
P: 4,572
Quote by logarithmic:
When you integrate X over the region B wrt the probability measure P, what you get is not the conditional expectation.

The conditional expectation is a random variable [itex]Y:=E(X|\mathcal{B})[/itex] such that, if you do the above integration to Y, you get the same result as for X, i.e.

[itex]\int_{B}YdP = \int_{B} X dP[/itex] for all [itex]B\in\mathcal{B}[/itex]

This leads to the statement [itex]E(X|\mathcal{B}) = X[/itex] a.s. for all sigma fields [itex]\mathcal{B}[/itex], which makes no sense.

How would you calculate the conditional expectation in the example in the original post? And how is conditioning on a sigma field related to conditioning on a random variable? The latter makes sense to me, the former doesn't.
I just took a look at this:

http://en.wikipedia.org/wiki/Conditi...mal_definition

If we use this definition then the definition of the slice(s) comes from realizing that we only integrate over the right region (note the Beta in the wiki definition).

Maybe something is ambiguous in the definition you have been given.
logarithmic
#7
Apr8-12, 06:31 AM
P: 108
Quote by chiro:
I just took a look at this:

http://en.wikipedia.org/wiki/Conditi...mal_definition

If we use this definition then the definition of the slice(s) comes from realizing that we only integrate over the right region (note the Beta in the wiki definition).

Maybe something is ambiguous in the definition you have been given.
The definition I'm using is the wiki definition, and I'm still not understanding what you're saying. What's the right region, and why should that expression for the integration be true? And how does that not imply that E(X|B) = X almost surely?
chiro
#8
Apr8-12, 06:47 AM
P: 4,572
Quote by logarithmic:
The definition I'm using is the wiki definition, and I'm still not understanding what you're saying. What's the right region, and why should that expression for the integration be true? And how does that not imply that E(X|B) = X almost surely?
In the above definition we integrate over every element of β. It's very subtle, but if you read the wiki definition, the region of integration B ranges over the elements of β (in other words, we are finding E[X|β], and the region B is an element of β).

I think (but I'm not sure) that the probability space must include all events, not only those from X but also those from β, which means that you should be treating everything as if it's one giant distribution. In other words, the thing you are integrating with respect to (the dP(w)) is the entire probability space that has X and β as its subsets.

Does this make sense?
logarithmic
#9
Apr8-12, 07:03 AM
P: 108
Quote by chiro:
In the above definition we integrate over every element of β. It's very subtle, but if you read the wiki definition, the region of integration B ranges over the elements of β (in other words, we are finding E[X|β], and the region B is an element of β).

I think (but I'm not sure) that the probability space must include all events, not only those from X but also those from β, which means that you should be treating everything as if it's one giant distribution. In other words, the thing you are integrating with respect to (the dP(w)) is the entire probability space that has X and β as its subsets.

Does this make sense?
It says:
[itex]\int_{B}E(X|\mathcal{B})dP = \int_{B} X dP[/itex] for each [itex]B \in \mathcal{B}[/itex]. (*)

So the region of integration isn't the union of all the [itex]B \in \mathcal{B}[/itex], otherwise it would say to integrate over [itex]\mathcal{B}[/itex] instead of [itex]B[/itex]. Nor does integrating over [itex]\mathcal{B}[/itex] make sense, as the region of integration for Lebesgue integrals is always a subset of the sample space [itex]\Omega[/itex], and never a sigma field like [itex]\mathcal{B}[/itex].

In my example where [itex]\mathcal{B}=\{\{1,2\},\{3\},\varnothing,\Omega\}[/itex], (*) is saying, the following 4 equalities hold:
[itex]\int_{\{1,2\}}E(X|\mathcal{B})dP = \int_{\{1,2\}} X dP[/itex]
[itex]\int_{\{3\}}E(X|\mathcal{B})dP = \int_{\{3\}}X dP[/itex]
[itex]\int_{\varnothing}E(X|\mathcal{B})dP = \int_{\varnothing} X dP[/itex]
[itex]\int_{\Omega}E(X|\mathcal{B})dP = \int_{\Omega} X dP[/itex].

And no, your last paragraph doesn't really make sense. I'm already integrating wrt the entire sample space [itex]\Omega[/itex].

In my example on the original post where [itex]X(\omega) = \omega[/itex], we have [itex]X(\omega) = 1I_{\{1\}}(\omega)+2I_{\{2\}}(\omega)+3I_{\{3\}}(\omega)[/itex], where I is an indicator function.

Then the integral
[itex]\int_{\{1,2\}}X(\omega)dP(\omega) = \int X(\omega)I_{\{1,2\}}(\omega)dP(\omega)=\int \left(1I_{\{1\}}(\omega)+2I_{\{2\}}(\omega)\right)dP(\omega)[/itex],
i.e. the last integral is over the whole sample space, and since the integrand is a simple function, the result is
[itex]1P(\{1\})+2P(\{2\})[/itex].

And somehow, this should be equal to
[itex]\int_{\{1,2\}}E(X|\mathcal{B})(\omega)dP(\omega)[/itex].
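
To make that completely explicit, here's a small Python check of the simple-function integrals over each element of [itex]\mathcal{B}[/itex] (my own script, same example as post 1):

[code]
# int_B X dP for each B in the sigma field B = {{1,2}, {3}, {}, Omega}
P = {1: 0.25, 2: 0.65, 3: 0.10}
X = lambda w: w

def integral_over(B, f):
    # Lebesgue integral of a simple function f over the event B wrt P:
    # int_B f dP = sum over omega in B of f(omega) * P({omega})
    return sum(f(w) * P[w] for w in B)

for B in [{1, 2}, {3}, set(), {1, 2, 3}]:
    print(sorted(B), integral_over(B, X))
# [1, 2] 1.55,  [3] 0.3,  [] 0,  [1, 2, 3] 1.85  (up to floating point)
[/code]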
chiro
#10
Apr8-12, 07:09 AM
P: 4,572
Quote by logarithmic:
It says:
[itex]\int_{B}E(X|\mathcal{B})dP = \int_{B} X dP[/itex] for each [itex]B \in \mathcal{B}[/itex]. (*)

So the region of integration isn't the union of all the [itex]B \in \mathcal{B}[/itex], otherwise it would say to integrate over [itex]\mathcal{B}[/itex] instead of [itex]B[/itex]. Nor does integrating over [itex]\mathcal{B}[/itex] make sense, as the region of integration for Lebesgue integrals is always a subset of the sample space [itex]\Omega[/itex], and never a sigma field like [itex]\mathcal{B}[/itex].

In my example where [itex]\mathcal{B}=\{\{1,2\},\{3\},\varnothing,\Omega\}[/itex], (*) is saying, the following 4 equalities hold:
[itex]\int_{\{1,2\}}E(X|\mathcal{B})dP = \int_{\{1,2\}} X dP[/itex]
[itex]\int_{\{3\}}E(X|\mathcal{B})dP = \int_{\{3\}}X dP[/itex]
[itex]\int_{\varnothing}E(X|\mathcal{B})dP = \int_{\varnothing} X dP[/itex]
[itex]\int_{\Omega}E(X|\mathcal{B})dP = \int_{\Omega} X dP[/itex].
I see what you're saying and I agree with you, but I think what we are integrating with respect to is the entire probability space (the P(w) in dP(w)), and β represents the constraint with respect to the entire space of P(w).

What I think β is referring to is the actual subset of the space that represents the events corresponding to X|B. In other words, we get the subset of these events that is a subset of P(w) and integrate over these events only.

In the visual sense we might have say a Y|X scenario in which we get a slice but in this measure theoretic viewpoint, we consider Y and X in the context of the probability measure space P(w).
logarithmic
#11
Apr8-12, 07:18 AM
P: 108
Quote by chiro:
I see what you're saying and I agree with you, but I think what we are integrating with respect to is the entire probability space (the P(w) in dP(w)), and β represents the constraint with respect to the entire space of P(w).

What I think β is referring to is the actual subset of the space that represents the events corresponding to X|B. In other words, we get the subset of these events that is a subset of P(w) and integrate over these events only.

In the visual sense we might have say a Y|X scenario in which we get a slice but in this measure theoretic viewpoint, we consider Y and X in the context of the probability measure space P(w).
I don't think any of this makes sense in the context of Lebesgue integration. Can you give a concrete example?

The dP(w) term doesn't say anything about what we're integrating over; it's just notation to remind us of the measure used to evaluate the integral, i.e. in Lebesgue integration it is defined that
[itex]\int I_{S}(\omega)dP(\omega)=P(S)[/itex].
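
By linearity this extends to simple functions, which is all I'm using in post 9: [itex]\int \left(\sum_i a_i I_{S_i}(\omega)\right)dP(\omega)=\sum_i a_i P(S_i)[/itex].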
chiro
#12
Apr8-12, 07:27 AM
P: 4,572
Quote by logarithmic:
I don't think any of this makes sense in the context of Lebesgue integration. Can you give a concrete example?
I'm not sure why this doesn't make sense.

If the measure was the standard infinitesimal measure used in Riemann integration for a one-dimensional integral, then the measure would be dx (or whatever dummy variable x is).

In this context the measure refers to any general measure, but the space is not over the real line but instead over a set corresponding to the exhaustable probability space P with the corresponding measure that satisfies the Kolmogorov probability axioms. In other words, instead of the measure referring to the set R and the measure being infinitesimal, the set being referred to is the probability space P, which is a set like R, with a measure satisfying the probability axioms.

If I were going to use an example of a Lebesgue measure instead of an infinitesimal one I would create a very simple space satisfying the Kolmogorov axioms and use the characteristic function for defining the integration.

Did you have a specific set P in mind?
logarithmic
#13
Apr8-12, 07:30 AM
P: 108
Quote by chiro:
I'm not sure why this doesn't make sense.

If the measure was the standard infinitesimal measure used in Riemann integration for a one-dimensional integral, then the measure would be dx (or whatever dummy variable x is).

In this context the measure refers to any general measure, but the space is not over the real line but instead over a set corresponding to the exhaustable probability space P with the corresponding measure that satisfies the Kolmogorov probability axioms. In other words, instead of the measure referring to the set R and the measure being infinitesimal, the set being referred to is the probability space P, which is a set like R, with a measure satisfying the probability axioms.

If I were going to use an example of a Lebesgue measure instead of an infinitesimal one I would create a very simple space satisfying the Kolmogorov axioms and use the characteristic function for defining the integration.

Did you have a specific set P in mind?
Yes, I set out an example in the original post.

And I worked through it, and I always get [itex]E(X|\mathcal{B}) = X[/itex].

"but the space is not over the real line but instead over a set corresponding to the exhaustable probability space P"

P is not a probability space, it's a probability measure, i.e. it's not a set, but a function [itex]P:\mathcal{A}\to[0,1][/itex]. The integration is over the set [itex]B\subset\Omega[/itex], the triplet [itex](\Omega, \mathcal{A}, P)[/itex] is the probability space.

I have no problem with computing the integral like
[itex]\int_{\{1,2\}} X dP[/itex],
which the definition requires (see post 9 for my working out).

But I see no reason why this must equal
[itex]\int_{\{1,2\}} E(X|\mathcal{B}) dP[/itex].

And saying that these 2 integrals are equal, together with a.s. uniqueness implies that [itex]E(X|\mathcal{B}) = X[/itex] a.s.. This doesn't make sense because the random variable analogue, i.e. [itex]E(X|Y) = X[/itex], is wrong for random variables Y.
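
Just to pin down the notation I'm using, here's how I'd encode the triple [itex](\Omega, \mathcal{A}, P)[/itex] from my example (a toy encoding of my own, only to make the point that P is a set function):

[code]
from itertools import chain, combinations

# The probability space (Omega, A, P) from post 1:
# Omega is a set, A is the power set of Omega, and P is a *function*
# A -> [0, 1], not a set.
Omega = {1, 2, 3}
p = {1: 0.25, 2: 0.65, 3: 0.10}    # probabilities of the singletons

def power_set(s):
    s = list(s)
    return [set(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

A = power_set(Omega)                # the sigma field A = 2^Omega

def P(event):
    # additive over the finitely many outcomes in the event
    return sum(p[w] for w in event)

print(len(A), P(Omega), P({1, 2}))  # 8, 1.0, 0.9 (up to floating point)
[/code]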
chiro
#14
Apr8-12, 07:53 AM
P: 4,572
Quote by logarithmic:
Yes, I set out an example in the original post.

And I worked through it, and I always get [itex]E(X|\mathcal{B}) = X[/itex].

"but the space is not over the real line but instead over a set corresponding to the exhaustable probability space P"

P is not a probability space, it's a probability measure, i.e. it's not a set, but a function [itex]P:\mathcal{A}\to[0,1][/itex]. The integration is over the set [itex]B\subset\Omega[/itex], the triplet [itex](\Omega, \mathcal{A}, P)[/itex] is the probability space.

I have no problem with computing the integral like
[itex]\int_{\{1,2\}} X dP[/itex],
which the definition requires (see post 9 for my working out).

But I see no reason why this must equal
[itex]\int_{\{1,2\}} E(X|\mathcal{B}) dP[/itex].

And saying that these 2 integrals are equal, together with a.s. uniqueness implies that [itex]E(X|\mathcal{B}) = X[/itex] a.s.. This doesn't make sense because the random variable analogue, i.e. [itex]E(X|Y) = X[/itex], is wrong for random variables Y.
Yeah, sorry, P is a measure in this context, but it is defined relative to the probability space itself.

What I think is happening is that B is generated so that it corresponds to the actual events for X|B, and not just to either X or B independently. So instead of the {1,2} you used for unconditional X, you would get a different set for the conditional problem, one that corresponds to the appropriate events.

So with regards to your examples, I have a feeling that you are going to get completely different things that you integrate over for each different example (in other words it won't be {1,2} for all the cases but a set that reflects the events described by the conditional representation).
logarithmic
#15
Apr8-12, 08:05 AM
P: 108
Quote by chiro:
Yeah, sorry, P is a measure in this context, but it is defined relative to the probability space itself.

What I think is happening is that B is generated so that it corresponds to the actual events for X|B, and not just to either X or B independently. So instead of the {1,2} you used for unconditional X, you would get a different set for the conditional problem, one that corresponds to the appropriate events.

So with regards to your examples, I have a feeling that you are going to get completely different things that you integrate over for each different example (in other words it won't be {1,2} for all the cases but a set that reflects the events described by the conditional representation).
Your last 2 paragraphs don't make sense to me. The problem is that your language is very loose and not explicit or mathematical.

What do you mean that B is generated? What's the actual event corresponding to X|B? What do you mean by getting different sets of conditional problems? etc.

{1,2} is just one element of [itex]\mathcal{B}[/itex] that I used as an example. But (*) needs to hold for all 4 elements of [itex]\mathcal{B}[/itex].
chiro
#16
Apr8-12, 08:17 AM
P: 4,572
Quote by logarithmic:
Your last 2 paragraphs don't make sense to me. The problem is that your language is very loose and not explicit or mathematical.

What do you mean that B is generated? What's the actual event corresponding to X|B? What do you mean by getting different sets of conditional problems? etc.

{1,2} is just one element of [itex]\mathcal{B}[/itex] that I used as an example. But (*) needs to hold for all 4 elements of [itex]\mathcal{B}[/itex].
Well according to the wiki definition our B is an element of β.

To me this implies a valid B that is an element of the set β, but it is not constant in the way that you are describing. I am interpreting the theorem to say that, for it to hold, an element must exist in the set β that satisfies this identity.

It wouldn't make sense if it were the way you wrote it and I'm sure you agree. As long as a valid B exists, then the formula should hold.

You may have missed this crucial statement (from the wiki site)

This is not a constructive definition; we are merely given the required property that a conditional expectation must satisfy.
You have to find the B to get the actual answer; this theorem just says that if the right B exists, then the conditional expectation exists for general X and β.
logarithmic
#17
Apr8-12, 08:37 AM
P: 108
Quote by chiro:
Well according to the wiki definition our B is an element of β.

To me this implies a valid B that is an element of the set β, but it is not constant in the way that you are describing. I am interpreting the theorem to say that, for it to hold, an element must exist in the set β that satisfies this identity.

It wouldn't make sense if it were the way you wrote it and I'm sure you agree. As long as a valid B exists, then the formula should hold.

You may have missed this crucial statement (from the wiki site)



You have to find the B to get the actual answer; this theorem just says that if the right B exists, then the conditional expectation exists for general X and β.
Yes, [itex]B\in\mathcal{B}[/itex], but [itex]\mathcal{B}[/itex] is a sigma field, i.e. a family of subsets of [itex]\Omega[/itex]. So [itex]B[/itex] being an element of [itex]\mathcal{B}[/itex] means that [itex]B[/itex] is a subset of [itex]\Omega[/itex].

The definition isn't that there exists an element [itex]B\in\mathcal{B}[/itex] such that (*) holds; it says (*) is satisfied for all [itex]B\in\mathcal{B}[/itex]. Here (*) is the integral equality in the original post, which must hold by definition.

I'm not sure how the nonconstructive part is relevant. Sure, it doesn't give an explicit formula for [itex]E(X|\mathcal{B})[/itex], but we can find [itex]E(X|\mathcal{B})[/itex] easily: just set [itex]E(X|\mathcal{B})=X[/itex], and then in (*) we have LHS = RHS.
chiro
#18
Apr8-12, 08:41 AM
P: 4,572
It's getting late here, so I'll reply sometime tomorrow.

