Question on problem 7 of the July Challenge

AI Thread Summary
The discussion centers around the application of Fubini's theorem and Jensen's inequality in the context of integrating expectations involving Brownian motion. Participants explore why Jensen's inequality does not prevent the interchange of expectation and integration when moving from the expectation of a function to the function of an expectation. The key point is that while Jensen's inequality applies to non-linear functions, the linearity of integration allows for the expectation to be moved in or out of the integral without violating the inequality. The conversation also touches on convergence issues and the conditions under which these mathematical operations are valid, emphasizing the importance of understanding the underlying properties of the functions involved. Overall, the exchange highlights the nuances of probability theory and integration in stochastic calculus.
BWV
TL;DR Summary
Question on problem 7 of the July Challenge
Trying to follow and learn from the solution; I did not want to clutter up the original thread.

nuuskur said:
Fubini allows us to change order of integration, so we get
<br /> \mathbb EX = \mathbb E \left ( \int _0^t W_s^2ds\right ) = \int _0^t \mathbb E(W_s^2)ds = \int_0^t sds = \frac{t^2}{2}<br />

My naive question is why doesn't Jensen's Inequality prevent this step?

View attachment 246277


You are swapping the expectation of a function for the function applied to the expectation. According to the inequality, the two expressions are equal only if the function is linear, which ##W^2## is not.
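For reference, the claimed value can be checked by simulation. A minimal sketch (using numpy; the step and path counts are arbitrary choices):

Python:
import numpy as np

rng = np.random.default_rng(0)
t, n_steps, n_paths = 1.0, 1000, 20000
dt = t / n_steps

# Brownian paths: W_0 = 0, independent N(0, dt) increments
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)

# X = \int_0^t W_s^2 ds, approximated by a Riemann sum along each path
X = np.sum(W**2, axis=1) * dt

print(np.mean(X))  # lands near t^2 / 2 = 0.5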
 
It's @Math_QED's question. Let's ask him.
 
BWV said:
Summary: Question on problem 7 of the July Challenge

Trying to follow and learn from the solution and did not want to clutter up the original thread
My naive question is why doesn't Jensen's Inequality prevent this step?

View attachment 246277

You are swapping the expectation of a function for the function applied to the expectation. According to the inequality, the two expressions are equal only if the function is linear, which ##W^2## is not.

I don't really understand your question. What is ##\varphi## here? Is it ##\varphi(x) = x^2##? In that case, Jensen says that

$$E\left[\left(\int_0^t W_s^2\, ds\right)^2\right] = E[X^2] \geq (EX)^2 = \left(E\left[\int_0^t W_s^2\, ds\right]\right)^2$$

and I don't see how that contradicts the statement you mention.
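If it helps to see the inequality concretely, here is a minimal standalone simulation sketch (numpy; the discretization and sample counts are arbitrary). For ##t = 1## the first printed value comes out around ##0.58## and the second around ##(t^2/2)^2 = 0.25##, so the inequality is strict, and perfectly consistent with the Fubini step.

Python:
import numpy as np

rng = np.random.default_rng(1)
t, n_steps, n_paths = 1.0, 1000, 20000
dt = t / n_steps

# Brownian paths via cumulative sums of N(0, dt) increments
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)
X = np.sum(W**2, axis=1) * dt  # X = \int_0^t W_s^2 ds per path

print(np.mean(X**2))  # E[X^2]
print(np.mean(X)**2)  # (EX)^2, strictly smaller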
 
Math_QED said:
Are you sure this is correct? By definition, if ##Y## is a random variable on a probability space ##(\Omega, \mathcal{F}, P)##, then

$$E(Y) := \int_\Omega Y(\omega)\, P(d\omega)$$
Agreed, my initial post didn't make much sense at all. Post-edit it should be ok.

Math_QED said:
I don't really understand your question. What is ##\varphi## here?
My guess is he assumed I used Jensen's inequality to arrive at whatever I arrived at; the ##\varphi## is supposed to be a convex map, though which one I can't say.
 
Math_QED said:
Now it is correct, though I'm not sure if this integral exists if and only if ##\int_\Omega Y dP## exists. It is definitely true that if the expectation exists and is finite, then your integral exists and is equal to the expectation.
Right right. The equality holds precisely when Y has a probability density function. My probability theory has several layers of rust on it by now :(
 
nuuskur said:
Right right. The equality holds precisely when Y has a probability density function. My probability theory has several layers of rust on it by now :(

Now that I think about it, my previous reply was incorrect. Isn't the correct formula

##E(Y) = \int_{\mathbb{R}} yf_Y(y) dy##

where ##f_Y## is a density function of ##Y## (assuming it exists!). The formula you listed in post #3 can't be correct for the simple reason that ##Y(s)## with ##s \in \mathbb{R}## does not make sense because ##Y## is not necessarily defined on the real numbers. I suggest you edit the post with the correct definition ##EY = \int_\Omega Y dP## and then also the rest of the answer.
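For completeness: the two formulas are connected through the law (pushforward measure) of ##Y##. In general

$$E[Y] = \int_\Omega Y \, dP = \int_{\mathbb{R}} y \, P_Y(dy), \qquad \text{where } P_Y(B) := P(Y \in B),$$

and when ##P_Y## admits a density ##f_Y##, the right-hand integral becomes ##\int_{\mathbb{R}} y f_Y(y)\, dy##.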

Indeed, using this definition

$$E(X) = \int_\Omega X dP = \int_\Omega \left(\int_0^t W_s^2 ds\right) dP = \int_0^t \left( \int_\Omega W_s^2 dP\right) ds = \int_0^t E[W_s^2] ds$$

and the third equality follows by Fubini (really Tonelli here: the integrand ##W_s^2## is non-negative and jointly measurable, so the interchange requires no separate integrability check).
 
@Math_QED Agreed again. My technique is garbage and sloppy. Will try to mend my ways.
 
Math_QED said:
I don't really understand your question. What is ##\varphi## here? Is it ##\varphi(x) = x^2##? In that case, Jensen says that

$$E\left[\left(\int_0^t W_s^2\, ds\right)^2\right] = E[X^2] \geq (EX)^2 = \left(E\left[\int_0^t W_s^2\, ds\right]\right)^2$$

and I don't see how that contradicts the statement you mention.
Ok, forget it then; I thought I could learn something, but it's too far outside my pay grade.
 
BWV said:
Ok, forget it then; I thought I could learn something, but it's too far outside my pay grade.

Please do ask. I'm just trying to understand your confusion so I can give you my best answer :).
 
  • #10
The question was basic - why doesn't Jensen apply when moving the expectation inside / outside of the integral?

I'm sure my confusion stems from what is getting integrated over what - the 'function' being moved here is an integral over time (which, for ##W_s##, is also the variance).

Not sure how to interpret integrating ##X## over ##t##; I have some experience with stochastic calculus, but it's always derivatives of ##X##, as in Ito's Lemma.
 
  • #11
BWV said:
Ok, forget it then; I thought I could learn something, but it's too far outside my pay grade.

Maybe the way to think of it is to define ##g## by
##g(W_s) = W_s^2##

so ##g## is strictly convex, but

##h_t(X) = \int_0^t X_s\, ds##
is linear -- i.e. additive and homogeneous with respect to scaling. Linear maps are both convex and concave. So the equality in question is

##E\big[h_t(g(W_s))\big] = h_t\big(E\big[g(W_s)\big]\big)##
which holds by linearity (subject to convergence considerations -- I'm fairly confident you could also justify the interchange via dominated convergence).

- - - -
What Jensen says is
## h_t\Big(E\big[g(W_s)\big]\Big) \gt h_t\Big(g(E\big[W_s\big])\Big)##
because ##g## is strictly convex (and ##W_s## isn't constant with probability 1)

or more simply, for any ##s \gt 0##
##E\big[g(W_s)\big] \gt g(E\big[W_s\big])##
we then sum or integrate this pointwise bound over ##s## to recover the statement above.
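To see this pointwise gap at a single time ##s##, a minimal simulation sketch (numpy; ##s = 0.7## is an arbitrary choice):

Python:
import numpy as np

rng = np.random.default_rng(2)
s, n_paths = 0.7, 200000  # any fixed s > 0 works

W_s = rng.normal(0.0, np.sqrt(s), n_paths)  # W_s ~ N(0, s)

# Jensen with the strictly convex g(x) = x^2:
print(np.mean(W_s**2))  # E[g(W_s)] ~ s = 0.7
print(np.mean(W_s)**2)  # g(E[W_s]) ~ 0, strictly smaller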
 
  • #12
Thanks - so because integration is linear, it meets the equality condition of Jensen's inequality, and it can be moved inside or outside of the expectation operator.
 
  • #13
BWV said:
Thanks - so because integration is linear, it meets the equality condition of Jensen's inequality, and it can be moved inside or outside of the expectation operator.

If we don't worry too much about convergence issues, then I suppose you can say this. The more basic idea is linearity of expectations. (Then bring in Jensen for functions that aren't linear/affine but are known to be convex or concave.)

I'm told that Brownian motion has almost surely continuous paths, so it is Riemann integrable on ##[0,t]##; in any case, an important way to think about very complicated objects is with countable sets when possible (not always possible, but you should try), and then to replace countable sets with finite ones wherever possible.

So write out a Riemann sum for the integral (which I assume exists):

##S_n := \frac{t}{n}\sum_{i=1}^n g(W_{\frac{it}{n}}) \approx \int_{0}^t g(W_s)\, ds##

and by linearity of expectations, for any natural number ##n##, you always have

##E\big[S_n\big] = E\big[\frac{t}{n}\sum_{i=1}^n g(W_{\frac{it}{n}}) \big] = \frac{t}{n}\sum_{i=1}^n E\big[g(W_{\frac{it}{n}}) \big]##
(This always holds, irrespective of whether the summands are independent -- something a lot of people stumble on.)
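In fact, here the right-hand side can be evaluated in closed form, since ##E\big[g(W_{it/n})\big] = \operatorname{Var}(W_{it/n}) = \frac{it}{n}##:

$$E\big[S_n\big] = \frac{t}{n}\sum_{i=1}^n \frac{it}{n} = \frac{t^2}{n^2} \cdot \frac{n(n+1)}{2} = \frac{t^2(n+1)}{2n} \longrightarrow \frac{t^2}{2},$$

which matches the value obtained via Fubini earlier.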

The convergence issue that can come up is that when you pass limits, in general
##\lim_{n \to \infty} E\big[X_n\big] \neq E\big[\lim_{n \to \infty} X_n\big]##
though dominated convergence can frequently be used to resolve situations like this.
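A quick simulation sketch of exactly this setup (numpy; the path count is arbitrary) shows ##E[S_n]## settling toward ##t^2/2 = 0.5##:

Python:
import numpy as np

rng = np.random.default_rng(3)
t, n_paths = 1.0, 50000

# E[S_n] = t^2 (n+1) / (2n), so the printed means should approach 0.5
for n in (10, 100, 1000):
    dt = t / n
    W = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n)), axis=1)
    S_n = (t / n) * np.sum(W**2, axis=1)
    print(n, np.mean(S_n))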
 
