Help understanding conditional expectation identity

AI Thread Summary
The discussion focuses on understanding the conditional expectation identity within a probability space defined by integrable random variables and sub-sigma-algebras. It establishes that the conditional expectation of a random variable given a sigma-algebra is a measurable and integrable function that satisfies specific integral properties. The confusion arises regarding the equality involving three random variables and how it relates to the definitions provided. The clarification involves applying the law of iterated expectation and recognizing the measurability of the indicator function, allowing the conditional expectation to be "pulled out" of the integral. This leads to a clearer understanding of the relationship between the conditional expectations and the original random variables.
psie
TL;DR Summary
I'm reading a proof about conditional probabilities, and there is an identity involving conditional expectation that I'm stuck on.
Let ##(\Omega,\mathcal{F},P)## be a probability space, and let us define the conditional expectation ##{\rm E}[X\mid\mathcal{G}]## for integrable random variables ##X:\Omega\to\mathbb{R}##, i.e. ##X\in L^1(P)##, and sub-sigma-algebras ##\mathcal{G}\subseteq\mathcal{F}##.

Definition 1: The conditional expectation ##{\rm E}[X\mid\mathcal{G}]## of ##X## given ##\mathcal{G}## is the random variable ##Z## having the following properties:
(i) ##Z## is integrable, i.e. ##Z\in L^1(P)##.
(ii) ##Z## is ##(\mathcal{G},\mathcal{B}(\mathbb{R}))##-measurable.
(iii) For any ##A\in\mathcal{G}## we have $$\int_A Z\,\mathrm dP=\int_A X\,\mathrm dP.$$
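For concreteness, here is a toy example of (iii) (a standard illustration, not part of the proof I'm reading): let ##\Omega=\{HH,HT,TH,TT\}## with the uniform measure model two fair coin tosses, let ##X## be the number of heads, and let ##\mathcal G## be the sigma-algebra generated by the first toss. Then ##Z={\rm E}[X\mid\mathcal G]## equals ##3/2## on ##\{HH,HT\}## and ##1/2## on ##\{TH,TT\}##, and for ##A=\{HH,HT\}\in\mathcal G## one checks $$\int_A Z\,\mathrm dP=\tfrac32\cdot\tfrac12=\tfrac34=2\cdot\tfrac14+1\cdot\tfrac14=\int_A X\,\mathrm dP.$$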

Definition 2: If ##X\in L^1(P)## and ##Y:\Omega\to\mathbb{R}## is any random variable, then the conditional expectation of ##X## given ##Y## is defined as $${\rm E}[X\mid Y]:={\rm E}[X\mid\sigma(Y)],$$ where ##\sigma(Y)=\{Y^{-1}(B)\mid B\in\mathcal{B}(\mathbb{R})\}## is the sigma-algebra generated by ##Y##.
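For instance, if ##Y## takes only finitely many distinct values ##y_1,\dots,y_n##, then $$\sigma(Y)=\Big\{\bigcup_{i\in I}\{Y=y_i\}\;\Big|\;I\subseteq\{1,\dots,n\}\Big\},$$ so conditioning on ##Y## amounts to conditioning on which level set ##\{Y=y_i\}## has occurred.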

If ##\mathcal{G}=\sigma(Y)##, then (iii) in definition 1 says that $${\rm E}[\mathbf{1}_A{\rm E}[X\mid Y]]={\rm E}[\mathbf{1}_AX],\quad \forall A\in\sigma(Y).\tag1$$
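Note that every ##A\in\sigma(Y)## is of the form ##A=Y^{-1}(B)## for some ##B\in\mathcal B(\mathbb R)##, and ##\mathbf 1_{Y^{-1}(B)}=\mathbf 1_B(Y)## (i.e. ##\mathbf 1_B\circ Y##), so identity ##(1)## can equivalently be written as $${\rm E}[\mathbf 1_B(Y)\,{\rm E}[X\mid Y]]={\rm E}[\mathbf 1_B(Y)X],\quad\forall B\in\mathcal B(\mathbb R),$$ which is the notation used in the computation below.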

Now, in a proof I'm currently reading, there are three random variables ##U,S,T## and the following computation appears: $$\int_{T^{-1}(B)} U\,\mathrm dP={\rm E}[\mathbf{1}_B(T)U]={\rm E}[\mathbf{1}_B(T){\rm E}[U\mid S,T]].$$
I simply do not understand the last equality, that is, ##{\rm E}[\mathbf{1}_B(T)U]={\rm E}[\mathbf{1}_B(T){\rm E}[U\mid S,T]]##. How does this follow from the definitions above and the identity ##(1)##? I'm grateful for any help on this.
 
I think I understand the identity in question now. First, from Durrett's book, we have

If ##X## is ##\mathcal G##-measurable and ##E|Y|,E|XY|<\infty##, then $$E[XY|\mathcal{G}]=XE[Y|\mathcal{G}].$$
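In particular, taking ##X=\mathbf 1_A## for ##A\in\mathcal G## (so that ##X## is bounded and ##\mathcal G##-measurable) gives $$E[\mathbf 1_A Y\mid\mathcal G]=\mathbf 1_A\,E[Y\mid\mathcal G],$$ which is exactly the form needed below.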

Second, we need the tower property, or the law of iterated expectations: ##E[Y\mid\mathcal{H}]= E\big[E[Y\mid \mathcal G]\mid \mathcal H\big]## whenever ##\mathcal H\subseteq\mathcal G##.
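Taking ##\mathcal H=\{\emptyset,\Omega\}## to be the trivial sigma-algebra, so that ##E[\,\cdot\mid\mathcal H]=E[\,\cdot\,]##, this reduces to $$E[Y]=E\big[E[Y\mid\mathcal G]\big],$$ which is the special case used in the next step.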

Applying this with ##\mathcal G=\sigma(S,T)##, we have $$E[\mathbf1_B(T)U]=E[E[\mathbf1_B(T)U\mid S,T]].$$ Now, ##\mathbf1_B(T)=\mathbf1_{T^{-1}(B)}## is ##\sigma(T)##-measurable, and ##\sigma(T)## is a sub-sigma-algebra of ##\sigma(S,T)=\sigma(\sigma(S)\cup\sigma(T))##. Moreover ##|\mathbf1_B(T)U|\le|U|##, so the integrability hypotheses of the theorem in Durrett hold (given ##U\in L^1(P)##). We can therefore "pull it out", and we are left with $$E[\mathbf1_B(T)U]=E[\mathbf1_B(T)E[U\mid S,T]],$$which is the desired identity.
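Putting everything together in one chain, with each equality justified as above: $$\int_{T^{-1}(B)}U\,\mathrm dP={\rm E}[\mathbf 1_B(T)U]={\rm E}\big[{\rm E}[\mathbf 1_B(T)U\mid S,T]\big]={\rm E}\big[\mathbf 1_B(T)\,{\rm E}[U\mid S,T]\big],$$ where the second equality is the tower property with trivial ##\mathcal H## and the third is the pull-out property applied to the ##\sigma(S,T)##-measurable factor ##\mathbf 1_B(T)##.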
 