My book tries to illustrate conditional expectation for a random variable [itex]X(\omega)[/itex] on a probability space [itex](\Omega,\mathscr F,P)[/itex] by asking me to consider the trivial sigma-algebra [itex]\mathscr G = \{ \emptyset, \Omega \}[/itex], [itex]\mathscr G \subset \mathscr F[/itex]. It then argues that [itex]E[X|\mathscr G] = E[X][/itex] (I'm fine with that). But it claims this should make sense because [itex]\mathscr G[/itex] "gives us no information." How is this supposed to make sense? In what regard does the sigma-algebra [itex]\mathscr G[/itex] give us "no information" about [itex]X[/itex]? I mean, if you know the values [itex]X[/itex] takes on the sets of [itex]\mathscr G[/itex], you know [itex]X(\omega)[/itex] everywhere, right?! So this is obviously the wrong interpretation (in fact, every sigma-algebra contains [itex]\Omega[/itex], so this interpretation would make conditional expectation useless), but I can't think of what the right one is...
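One way to make the claim concrete is to check the defining property of conditional expectation, [itex]\int_A E[X|\mathscr G]\,dP = \int_A X\,dP[/itex] for every [itex]A \in \mathscr G[/itex], on a small finite space. Here is a minimal Python sketch; the fair-die example is my own choice, not from the book. Since a [itex]\mathscr G[/itex]-measurable function must be constant on the atoms of [itex]\mathscr G[/itex], and the trivial sigma-algebra's only atom is [itex]\Omega[/itex] itself, [itex]E[X|\mathscr G][/itex] is forced to be a single constant, and taking [itex]A = \Omega[/itex] forces that constant to equal [itex]E[X][/itex]:

```python
from fractions import Fraction

# Illustrative finite probability space (my example, not the book's):
# a fair six-sided die.  Omega = {1,...,6}, P uniform, X(w) = w.
omega = [1, 2, 3, 4, 5, 6]
P = {w: Fraction(1, 6) for w in omega}
X = {w: w for w in omega}

# Trivial sigma-algebra G = {emptyset, Omega}.  A G-measurable random
# variable is constant on each atom of G; the only atom is Omega, so
# E[X|G] is one constant c.  Taking A = Omega in the defining property
# gives c * P(Omega) = E[X], i.e. c = E[X].
EX = sum(X[w] * P[w] for w in omega)   # E[X] = 7/2 for the fair die

# E[X|G] as a function on Omega: the constant E[X] everywhere.
cond_exp = {w: EX for w in omega}

# Verify the defining property on every set A in G.
for A in ([], omega):                  # emptyset and Omega
    lhs = sum(cond_exp[w] * P[w] for w in A)
    rhs = sum(X[w] * P[w] for w in A)
    assert lhs == rhs

print(EX)
```

The point the sketch makes is that the "information" in [itex]\mathscr G[/itex] is the partition it induces: [itex]E[X|\mathscr G][/itex] averages [itex]X[/itex] over each atom, and the trivial sigma-algebra's single atom [itex]\Omega[/itex] forces one global average.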
