I've been looking at the measure theoretic definition of a conditional expectation and it doesn't make too much sense to me.
Consider the definition given here: https://en.wikipedia.org/wiki/Conditional_expectation#Formal_definition
It says that for a probability space (\Omega,\mathcal{A},P) and a sigma-field \mathcal{B}\subset\mathcal{A}, a random variable Y=E(X|\mathcal{B}) is the conditional expectation of X given \mathcal{B} if it satisfies
\int_{B}Y\,dP = \int_{B}X\,dP for all B\in\mathcal{B}. (*)
But clearly setting Y=X satisfies (*), and the article goes on to say that conditional expectations are almost surely unique. Does this mean that E(X|\mathcal{B})=Y=X almost surely?
Consider the example \Omega=\{1,2,3\}, \mathcal{A} the power set of \Omega, \mathcal{B}=\{\{1,2\},\{3\},\varnothing,\Omega\}, X(\omega)=\omega, and P(1)=.25, P(2)=.65, P(3)=.1. If you write out (*) for all the elements of \mathcal{B}, you'll get E(X|\mathcal{B})=X. But this clearly isn't correct: given \{3\}, the conditional expectation should be 3, and given \{1,2\} it should be 1\cdot\frac{.25}{.9} + 2\cdot\frac{.65}{.9}.
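To make the numbers concrete, here is a minimal Python sketch I used to check the example (the dictionaries P, X, Y and the atoms list are just names for this check, nothing standard): both the atom-averaged candidate and Y=X satisfy (*) for every set in \mathcal{B}.

```python
# Quick numerical check of the example above (P, X, Y, atoms are my own
# names for this sketch, not from any library).

P = {1: 0.25, 2: 0.65, 3: 0.10}   # P({omega}) for each outcome
X = {1: 1.0, 2: 2.0, 3: 3.0}      # X(omega) = omega
atoms = [{1, 2}, {3}]             # atoms generating B = {{1,2}, {3}, {}, Omega}

# The "intuitively correct" candidate: constant on each atom, equal to the
# P-weighted average of X over that atom.
Y = {}
for A in atoms:
    pA = sum(P[w] for w in A)
    avg = sum(X[w] * P[w] for w in A) / pA
    for w in A:
        Y[w] = avg

print(Y)  # {1: 1.7222..., 2: 1.7222..., 3: 3.0}, i.e. .25/.9 + 2*.65/.9 on {1,2}

# Partial averaging (*): the integral of Y over B equals the integral of X
# over B for every B in the sigma-field (atoms, empty set, and Omega).
for B in atoms + [set(), {1, 2, 3}]:
    int_Y = sum(Y[w] * P[w] for w in B)
    int_X = sum(X[w] * P[w] for w in B)
    assert abs(int_Y - int_X) < 1e-12

# The puzzle: Y = X passes exactly the same checks, since (*) only tests
# sets in B, yet this Y and X disagree on {1, 2}.
```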
It's usually said that sigma-fields model information, but I also don't see what sort of information \mathcal{B}=\{\{1,2\},\{3\},\varnothing,\Omega\} encodes.
Can someone explain where my understanding is wrong, and how this relates to the more intuitive definition of conditional expectation via densities:
E(X|Y=y)=\int_{\mathbb{R}} x\, f_{X|Y}(x|y)\,dx?
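In the finite example above, I would expect the analogue of this formula to be the atom-wise average (writing B_\omega for the atom of \mathcal{B} containing \omega; this notation is mine, not from the article):
E(X|\mathcal{B})(\omega)=\sum_{x} x\, P(X=x \,|\, B_\omega),
which gives 3 on \{3\} and 1\cdot\frac{.25}{.9}+2\cdot\frac{.65}{.9} on \{1,2\}, but I don't see how (*) singles this out.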