Proving Difficult Integral: \alpha(t) Monotonically Increasing

sparkster
Let \alpha(t) be monotonically increasing on [0,1]. Prove that
\lim_{n \rightarrow \infty} \int_0^1 t^n d\alpha(t)=\alpha(1)-\alpha(1-) where \alpha(1-)=\lim_{t \rightarrow 1^{-}} \alpha(t).

Here's what I have so far. I know that \alpha(t) is monotonically increasing, so it has at most countably many points of discontinuity. So it is continuous almost everywhere, which implies that it is Riemann integrable. That means that \int_0^1 t^n d\alpha(t)=\int_0^1 t^n \alpha'(t) dt, where the second integral is just a plain Riemann integral.

Then integrating by parts with u=t^n and dv=\alpha'(t)dt, I get that \int_0^1 t^n d\alpha(t)=\int_0^1 t^n \alpha'(t) dt= \alpha(1) - \int_0^1 \alpha(t) n t^{n-1} dt. This is where I'm stuck. I can't show that \lim_{n \rightarrow \infty} \int_0^1 \alpha(t) n t^{n-1} dt = \alpha(1-). In fact, it looks like it should blow up to me.

Any help would be appreciated.
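As a numerical sanity check of the statement (my own illustration, not from the thread): take a hypothetical monotone \alpha with a jump only at t = 1 and approximate the Stieltjes integral by left-tagged Riemann-Stieltjes sums. The particular \alpha, partition size, and exponents below are arbitrary choices.

```python
import numpy as np

# Hypothetical example: alpha(t) = t for t < 1 with a jump to alpha(1) = 2,
# so alpha(1) - alpha(1-) = 2 - 1 = 1.
def alpha(t):
    return np.where(t < 1.0, t, 2.0)

def stieltjes_sum(n, m=200_000):
    """Left-tagged Riemann-Stieltjes sum of t^n d(alpha(t)) over [0, 1]."""
    t = np.linspace(0.0, 1.0, m + 1)
    tags = t[:-1]  # left endpoint of each subinterval
    return float(np.sum(tags ** n * np.diff(alpha(t))))

for n in (1, 10, 100, 1000):
    print(n, stieltjes_sum(n))  # tends to alpha(1) - alpha(1-) = 1 as n grows
```

For n = 1 the sum is about 1.5 (the continuous part contributes 1/2 on top of the unit jump), and by n = 1000 it is within about half a percent of the predicted limit \alpha(1) - \alpha(1-) = 1.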
 
You can't assume d\alpha(t)=\alpha'(t)dt. For example, if \alpha(t) is a Devil's staircase function (http://en.wikipedia.org/wiki/Devil%27s_staircase), its derivative is zero wherever it's defined (almost everywhere). However, it is true that:

\int_a^b f(x) dg(x)=f(b)g(b)-f(a)g(a)-\int_a^b g(x) df(x)

This gives the same result, but you should be more careful. You can write:

\int_0^1 \alpha(t) n t^{n-1} dt = \int_0^a \alpha(t) n t^{n-1} dt + \int_a^1 \alpha(t) n t^{n-1} dt

where 0<a<1. Since addition is continuous (and assuming everything converges), you can take the limits of each integral on the RHS separately. The limit of the first integral is zero by uniform continuity. So try to bound the second one, taking a limit as a -> 1 at the end.
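For what it's worth, here is a sketch of how the bounding argument can be completed (my own reconstruction of the suggestion above, with M denoting a bound on |\alpha|; not from the original thread):

```latex
% alpha monotone on [0,1] => bounded: |\alpha(t)| \le M := \max(|\alpha(0)|, |\alpha(1)|).
% Fix 0 < a < 1. For the first piece:
\left| \int_0^a \alpha(t)\, n t^{n-1}\, dt \right|
  \le M \int_0^a n t^{n-1}\, dt = M a^n \xrightarrow[n \to \infty]{} 0.
% For the second piece, monotonicity gives \alpha(a) \le \alpha(t) \le \alpha(1-)
% for a \le t < 1 (the single point t = 1 does not change the integral),
% and \int_a^1 n t^{n-1}\, dt = 1 - a^n, so:
\alpha(a)\,(1 - a^n) \le \int_a^1 \alpha(t)\, n t^{n-1}\, dt \le \alpha(1-)\,(1 - a^n).
% Letting n \to \infty squeezes every limit point into [\alpha(a), \alpha(1-)].
% Then letting a \to 1^- gives \alpha(a) \to \alpha(1-), so the limit is \alpha(1-),
% and hence \lim_n \int_0^1 t^n\, d\alpha(t) = \alpha(1) - \alpha(1-).
```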
 
I see. The theorem I was trying to use requires that \alpha' is Riemann integrable. What are the conditions for the first equation you gave? Either I don't remember it, or it's not in baby Rudin (and I'm guessing the former).

Also, and I'm sorry for asking, how does uniform continuity give the first one 0? For the second one, I should try to bound it above and below and apply the squeeze theorem?

Analysis has always been my weakest area, so thanks for trying to help me.
 
According to Wikipedia, that identity is true whenever either of the integrals exists, but I'll try to find a more reliable source.

Next, I meant uniform convergence, sorry. This can be proved using the fact that alpha is bounded and a<1. Finally, yes, the squeeze theorem should work for the other one.
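Concretely, the uniform convergence here is that \sup_{0 \le t \le a} n t^{n-1} = n a^{n-1} \to 0 when a < 1, so the bounded factor \alpha cannot rescue the first integral. A quick numerical check (the choice a = 0.9 is my own arbitrary example, not from the thread):

```python
# sup of n * t**(n-1) over [0, a] is attained at t = a (the function is increasing in t);
# for a < 1 it decays geometrically in n despite the growing factor of n
a = 0.9  # arbitrary choice of a in (0, 1)
sups = {n: n * a ** (n - 1) for n in (10, 100, 1000)}
for n, s in sups.items():
    print(n, s)  # shrinks rapidly as n grows
```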
 
StatusX said:
According to Wikipedia, that identity is true whenever either of the integrals exists, but I'll try to find a more reliable source.

It's in Foundations of Mathematical Analysis, Johnsonbaugh and Pfaffenberger theorem 53.3 (it's in google book search if you don't have it). It requires f and g to be bounded, and f to be integrable with respect to g (g integrable w.r.t. f follows).

Baby Rudin has a couple of integration-by-parts results, but I don't think one as general as this.
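One note on how the general identity applies here (my own spelling-out, not from the thread): because f(t) = t^n is continuously differentiable, the Stieltjes integral of \alpha against t^n reduces to an ordinary Riemann integral, with no smoothness needed from \alpha:

```latex
% t^n is C^1, so d(t^n) = n t^{n-1}\, dt is legitimate on this side:
\int_0^1 \alpha(t)\, d(t^n) = \int_0^1 \alpha(t)\, n t^{n-1}\, dt.
% Plugging into the parts identity with f(t) = t^n and g = \alpha:
\int_0^1 t^n\, d\alpha(t)
  = 1^n \alpha(1) - 0^n \alpha(0) - \int_0^1 \alpha(t)\, n t^{n-1}\, dt
  = \alpha(1) - \int_0^1 \alpha(t)\, n t^{n-1}\, dt,
% the same expression as before, now justified without assuming \alpha' exists.
```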
 
StatusX said:
Next, I meant uniform convergence...
Now that makes sense. Thanks.

shmoe said:
It's in Foundations of Mathematical Analysis, Johnsonbaugh and Pfaffenberger theorem 53.3 (it's in google book search if you don't have it). It requires f and g to be bounded, and f to be integrable with respect to g (g integrable w.r.t. f follows).

Baby Rudin has a couple of integration-by-parts results, but I don't think one as general as this.
Thanks!
 
shmoe said:
It's in Foundations of Mathematical Analysis, Johnsonbaugh and Pfaffenberger theorem 53.3 (it's in google book search if you don't have it). It requires f and g to be bounded, and f to be integrable with respect to g (g integrable w.r.t. f follows).

Baby Rudin has a couple of integration-by-parts results, but I don't think one as general as this.

Yea, that sounds right. Thanks.
 