Weird statement in my book about (measure theoretic) conditional expectation

  • #1
My book tries to illustrate the conditional expectation for a random variable [itex]X(\omega)[/itex] on a probability space [itex](\Omega,\mathscr F,P)[/itex] by asking me to consider the sigma-algebra [itex]\mathscr G = \{ \emptyset, \Omega \}[/itex], [itex]\mathscr G \subset \mathscr F[/itex]. It then argues that [itex]E[X|\mathscr G] = E[X][/itex] (I'm fine with that). But it claims this should make sense, since [itex]\mathscr G[/itex] "gives us no information." How is this supposed to make sense? In what regard does the sigma-algebra [itex]\mathscr G[/itex] give us "no information" about [itex]X[/itex]? I mean, if you know the values [itex]X[/itex] takes on [itex]\mathscr G[/itex], you know [itex]X(\omega)[/itex] everywhere, right?! So this obviously is the wrong interpretation (in fact, any sigma-algebra necessarily contains [itex]\Omega[/itex], so this interpretation would make conditional expectation useless) but I can't think of what the right one is...
 

Answers and Replies

  • #2
Think of a sigma-algebra as "containing information". Since G is the trivial sigma-algebra, it contains no information, so conditioning on it doesn't change the expectation.
I must admit that this terminology is vague and nearly metaphorical. It's perfectly fine to set it aside if it doesn't suit your intuition.
 
  • #3
I don't know how the book you're following sets it out.

But consider discrete random variables X, Z and E(X|Z=z) for distinct z's, and how the sigma-algebra generated by Z partitions Omega. So consider first the functions measurable with respect to the trivial sigma-algebra, then a richer sigma-algebra, and you might get more of a feel for the idea of "information" in the sigma-algebra.

Even defining your random variables, Omega etc. and doing the calculations may make the idea clearer to you.
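A minimal Python sketch of that suggestion, using a toy sample space of my own choosing (a fair die; X is the face value and Z its parity), not anything from the book:

[code]
# Fair die: sigma(Z) partitions Omega into the odd faces {1,3,5} and the even faces {2,4,6}.
omega = [1, 2, 3, 4, 5, 6]
p = {w: 1/6 for w in omega}      # uniform probability measure
X = {w: w for w in omega}        # X(omega) = face value
Z = {w: w % 2 for w in omega}    # Z(omega) = parity; it generates the partition

def cond_exp(z):
    """E[X | Z = z]: the P-weighted average of X over the cell {Z = z}."""
    cell = [w for w in omega if Z[w] == z]
    return sum(X[w] * p[w] for w in cell) / sum(p[w] for w in cell)

print(cond_exp(0))  # 4.0 = E[X | roll is even]
print(cond_exp(1))  # 3.0 = E[X | roll is odd]

# With the trivial sigma-algebra the only cell is Omega itself, so the
# conditional expectation collapses to the plain expectation E[X] = 3.5.
print(sum(X[w] * p[w] for w in omega))
[/code]

The richer sigma-algebra sigma(Z) lets the conditional expectation distinguish even rolls from odd ones; the trivial sigma-algebra cannot distinguish anything, so it returns the single number E[X].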
 
  • #4
disregardthat
But what information is hidden if G is the trivial sigma algebra?
 
  • #5
A lot, potentially. Or none at all: X might be G-measurable (which for the trivial sigma-algebra means X is constant).
 
  • #6
What [itex]E[X\vert \mathcal{G}][/itex] means is the expected value of X given only the information contained in [itex]\mathcal{G}[/itex].

So the key point is that [itex]E[X\vert \mathcal{G}][/itex] is [itex]\mathcal{G}[/itex]-measurable. In fact, it is the [itex]\mathcal{G}[/itex]-measurable random variable that approximates X best (and this can be made rigorous).

So [itex]E[X\vert\mathcal{G}][/itex] is an approximation of X that is [itex]\mathcal{G}[/itex]-measurable. So for any ]a,b[, we know that

[tex]\{E[X\vert\mathcal{G}]\in ]a,b[\}\in \mathcal{G}[/tex]

What happens if we have [itex]\mathcal{G}=\{\emptyset,\Omega\}[/itex]? Then we know that

[tex]\{E[X\vert\mathcal{G}]\in ]a,b[\}\in \{\emptyset,\Omega\}[/tex]

But this places severe restrictions on [itex]E[X\vert\mathcal{G}][/itex]. In fact, it forces this random variable to be constant!!

If we take [itex]\mathcal{G}[/itex] to be finer (thus to contain more sets), then we allow [itex]E[X\vert \mathcal{G}][/itex] to take on more values. Specifically, we allow it to approximate X better.

For example, if [itex]\mathcal{G}=\{\emptyset,\Omega, G,G^c\}[/itex], then we must have


[tex]\{E[X\vert\mathcal{G}]\in ]a,b[\}\in \{\emptyset,\Omega,G,G^c\}[/tex]

This does not force our random variable to be constant. Indeed, we now allow [itex]E[X\vert\mathcal{G}][/itex] to take different values on G and [itex]G^c[/itex], so it can now be 2-valued!

The finer we make [itex]\mathcal{G}[/itex], the more variable the [itex]E[X\vert \mathcal{G}][/itex] can be. And the better the approximation can be!!

I hope this helped.
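To make the two-valued case concrete, here is a small sketch in the same spirit (the die, X, and the event G are my own toy choices, not from the post above):

[code]
# Fair die; take G = {1, 2}, so the sigma-algebra is {emptyset, Omega, G, G^c}.
omega = [1, 2, 3, 4, 5, 6]
p = {w: 1/6 for w in omega}
X = {w: w for w in omega}
G = {1, 2}
Gc = set(omega) - G

def avg_over(A):
    """P-weighted average of X over the event A."""
    return sum(X[w] * p[w] for w in A) / sum(p[w] for w in A)

# E[X | sigma(G)] as a function of omega: constant on G and constant on G^c.
cond = {w: avg_over(G) if w in G else avg_over(Gc) for w in omega}
print(cond)  # 1.5 on {1, 2} and 4.5 on {3, 4, 5, 6}

# Defining property: integrating E[X | sigma(G)] over G reproduces the integral of X over G.
assert abs(sum(cond[w] * p[w] for w in G) - sum(X[w] * p[w] for w in G)) < 1e-12
[/code]

With only the trivial sigma-algebra the same construction would average over all of Omega and return the constant 3.5, which matches the book's claim that [itex]E[X\vert\{\emptyset,\Omega\}] = E[X][/itex].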
 
