# Weird statement in my book about (measure theoretic) conditional expectation

1. Oct 8, 2011

### AxiomOfChoice

My book tries to illustrate the conditional expectation for a random variable $X(\omega)$ on a probability space $(\Omega,\mathscr F,P)$ by asking me to consider the sigma-algebra $\mathscr G = \{ \emptyset, \Omega \}$, $\mathscr G \subset \mathscr F$. It then argues that $E[X|\mathscr G] = E[X]$ (I'm fine with that). But it claims this should make sense, since $\mathscr G$ "gives us no information." How is this supposed to make sense? In what regard does the sigma-algebra $\mathscr G$ give us "no information" about $X$? I mean, if you know the values $X$ takes on $\mathscr G$, you know $X(\omega)$ everywhere, right?! So this obviously is the wrong interpretation (in fact, any sigma-algebra necessarily contains $\Omega$, so this interpretation would make conditional expectation useless) but I can't think of what the right one is...

Last edited: Oct 8, 2011
2. Oct 12, 2011

### Eynstone

Think of a sigma algebra as 'containing information'. Since G is the trivial sigma algebra, it contains no intrinsic information & doesn't affect the expectation.
I must admit that this terminology is vague & nearly metaphorical. It's perfectly fine to set this terminology aside if it doesn't suit your intuition.

3. Oct 12, 2011

### Cant or Wont

I don't know how the book you're following sets it out.

But consider discrete random variables X, Z, the conditional expectations E(X|Z=z) for distinct z's, and how the sigma algebra generated by Z partitions Omega. So consider first the functions measurable with respect to the trivial sigma algebra, then a richer sigma algebra, and you might get more of a feel for the idea of "information" in the sigma algebra.

Even defining your random variables, Omega etc. and doing the calculations may make the idea clearer to you.
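A minimal sketch of that calculation in Python (the die example and the parity variable Z are my own choices, not from the thread): X is a fair die roll and Z = X mod 2, so the sigma algebra generated by Z partitions Omega into the even and odd faces, and E(X|Z=z) is just the average of X over the corresponding cell.

```python
# Hypothetical worked example: fair die, X = face, Z = X mod 2 (parity).
from fractions import Fraction

faces = [1, 2, 3, 4, 5, 6]
P = Fraction(1, 6)  # uniform probability of each face

def E_X_given_Z(z):
    """E[X | Z = z]: average X over the cell {Z = z} of the partition
    that sigma(Z) induces on Omega."""
    cell = [x for x in faces if x % 2 == z]
    return sum(Fraction(x) * P for x in cell) / (len(cell) * P)

print(E_X_given_Z(0))  # even faces {2,4,6}: mean is 4
print(E_X_given_Z(1))  # odd faces {1,3,5}: mean is 3
```

So E[X|Z] is a 2-valued random variable, constant on each cell of the partition, exactly the "feel" described above.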

4. Oct 12, 2011

### disregardthat

But what information is hidden if G is the trivial sigma algebra?

5. Oct 12, 2011

### Cant or Wont

A lot? Potentially none - X might itself be G-measurable.

6. Oct 12, 2011

### micromass

What $E[X\vert \mathcal{G}]$ means is the expected value of X given only the information contained in $\mathcal{G}$.

So the clue is that $E[X\vert \mathcal{G}]$ is $\mathcal{G}$-measurable. In fact, it is the $\mathcal{G}$-measurable random variable that approximates X best (and this can be made rigorous).

So $E[X\vert\mathcal{G}]$ is an approximation of X that is $\mathcal{G}$-measurable. So for any open interval $]a,b[$, we know that

$$\{E[X\vert\mathcal{G}]\in\, ]a,b[\,\}\in \mathcal{G}$$

What happens if $\mathcal{G}=\{\emptyset,\Omega\}$? Then we know that

$$\{E[X\vert\mathcal{G}]\in ]a,b[\}\in \{\emptyset,\Omega\}$$

But this places severe restrictions on $E[X\vert\mathcal{G}]$: the preimage of every interval must be either $\emptyset$ or $\Omega$. In fact, it forces this random variable to be constant!!

If we take $\mathcal{G}$ to be finer (thus to contain more sets), then we allow $E[X\vert \mathcal{G}]$ to take on more values. Specifically, we allow it to approximate X better.

For example, if $\mathcal{G}=\{\emptyset,\Omega, G,G^c\}$, then we must have

$$\{E[X\vert\mathcal{G}]\in ]a,b[\}\in \{\emptyset,\Omega,G,G^c\}$$

This does not force our random variable to be constant. Indeed, we now allow $E[X\vert\mathcal{G}]$ to take different values on $G$ and $G^c$. So our random variable is now 2-valued!

The finer we make $\mathcal{G}$, the more variable the $E[X\vert \mathcal{G}]$ can be. And the better the approximation can be!!
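This coarse-to-fine picture can be checked numerically. Here is a small sketch (the four-point sample space, the uniform measure, and all names are my own invented example, not from the thread): on each cell A of the partition generating $\mathcal{G}$, the conditional expectation is the constant $E[X\,1_A]/P(A)$.

```python
# Hypothetical finite example: Omega = {0,1,2,3}, uniform measure, X(w) = w.
from fractions import Fraction

omega = [0, 1, 2, 3]
p = {w: Fraction(1, 4) for w in omega}   # uniform probability measure
X = {w: w for w in omega}                # the random variable X(w) = w

def cond_exp(partition):
    """E[X | G], where G is the sigma-algebra generated by `partition`.
    On each cell A the conditional expectation equals E[X * 1_A] / P(A)."""
    out = {}
    for cell in partition:
        pa = sum(p[w] for w in cell)
        avg = sum(X[w] * p[w] for w in cell) / pa
        for w in cell:
            out[w] = avg
    return out

# Trivial sigma-algebra {emptyset, Omega}: one cell, so E[X|G] is the
# constant E[X] = 3/2 everywhere.
print(cond_exp([omega]))

# G generated by G0 = {0,1} and its complement: two cells, so E[X|G] is
# 2-valued: 1/2 on {0,1} and 5/2 on {2,3} -- a better approximation of X.
print(cond_exp([[0, 1], [2, 3]]))
```

Refining the partition further (down to singletons, if they are in $\mathscr F$) recovers X itself, matching the "finer $\mathcal{G}$, better approximation" point above.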

I hope this helped.