My book tries to illustrate the conditional expectation of a random variable [itex]X(\omega)[/itex] on a probability space [itex](\Omega,\mathscr F,P)[/itex] by asking me to consider the sigma-algebra [itex]\mathscr G = \{ \emptyset, \Omega \}[/itex], [itex]\mathscr G \subset \mathscr F[/itex]. It then argues that [itex]E[X|\mathscr G] = E[X][/itex] (I'm fine with that). But it claims this should make sense because [itex]\mathscr G[/itex] "gives us no information." How is this supposed to make sense? In what regard does the sigma-algebra [itex]\mathscr G[/itex] give us "no information" about [itex]X[/itex]? I mean, if you know the values [itex]X[/itex] takes on [itex]\mathscr G[/itex], then you know [itex]X(\omega)[/itex] everywhere, right? So this is obviously the wrong interpretation (in fact, every sigma-algebra contains [itex]\Omega[/itex], so this interpretation would make conditional expectation useless), but I can't think of what the right one is...
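To make the "no information" idea concrete, here is a small sketch of my own (not from the book, and only for a finite [itex]\Omega[/itex]): on a finite probability space, a sub-sigma-algebra [itex]\mathscr G[/itex] corresponds to a partition of [itex]\Omega[/itex], and [itex]E[X|\mathscr G][/itex] replaces [itex]X[/itex] by its average over each partition block. The trivial sigma-algebra [itex]\{\emptyset,\Omega\}[/itex] corresponds to the one-block partition [itex]\{\Omega\}[/itex], so the conditional expectation collapses to the constant [itex]E[X][/itex] — the sigma-algebra cannot distinguish any outcomes, which is the sense in which it carries no information.

```python
def cond_exp(X, P, partition):
    """E[X|G] as a function of omega, where G is the sigma-algebra
    generated by `partition` (a list of disjoint blocks covering Omega).
    On each block, E[X|G] equals the P-weighted average of X over it."""
    result = {}
    for block in partition:
        p_block = sum(P[w] for w in block)
        avg = sum(X[w] * P[w] for w in block) / p_block
        for w in block:
            result[w] = avg
    return result

# A toy four-point space with uniform probability (my own example).
Omega = [0, 1, 2, 3]
P = {w: 0.25 for w in Omega}
X = {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0}

# Trivial sigma-algebra {∅, Ω}: its partition is the single block Ω,
# so E[X|G](ω) is the constant E[X] = 2.5 for every ω.
trivial = cond_exp(X, P, [Omega])

# A finer sigma-algebra, generated by the partition {{0,1}, {2,3}},
# does "know" which half ω landed in: E[X|G] is 1.5 on {0,1} and 3.5 on {2,3}.
finer = cond_exp(X, P, [[0, 1], [2, 3]])
```

The point of the contrast: refining the sigma-algebra lets [itex]E[X|\mathscr G][/itex] vary from block to block, while the trivial one forces a single constant. Knowing the value of [itex]X[/itex] "on [itex]\Omega[/itex]" here only means knowing the single average 2.5, not the individual values [itex]X(\omega)[/itex].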

**Physics Forums | Science Articles, Homework Help, Discussion**

# Weird statement in my book about (measure theoretic) conditional expectation