Uniform convergence and improper integration

boombaby

Homework Statement


Suppose g and f_n are defined on [1, +\infty) and are Riemann-integrable on [t, T] whenever 1 \le t < T < +\infty, |f_n| \le g, f_n \to f uniformly on every compact subset of [1, +\infty), and
\int_{1}^{\infty} g(x)\,dx < \infty.
Prove that
\lim_{n\to\infty} \int_{1}^{\infty} f_{n}(x)\,dx = \int_{1}^{\infty} f(x)\,dx


Homework Equations





The Attempt at a Solution


If I let h_{n}(u) = \int_{1}^{u} f_{n}(x)\,dx, then \lim_{n\to\infty} h_{n}(u) = \int_{1}^{u} f(x)\,dx = h(u) for each u in [1, +\infty). It is then equivalent to prove that \lim_{n\to\infty}\lim_{u\to\infty} h_{n}(u) = \lim_{u\to\infty}\lim_{n\to\infty} h_{n}(u). This holds if h_n converges uniformly to h on [1, +\infty), and this is where I got stuck. Actually, I'm not sure whether h_n really does converge uniformly... Or is there another way to prove it? Any hint? Thanks a lot!
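Side note on why the hypothesis \int_{1}^{\infty} g(x)\,dx < \infty matters here: uniform convergence alone does not justify swapping the two limits. The classic escaping-mass family f_n = (1/n)\chi_{[1,n+1]} converges to 0 uniformly on all of [1, \infty), yet \int_{1}^{\infty} f_n(x)\,dx = 1 for every n; and no integrable g can dominate the whole family, since \sup_n f_n(x) is comparable to 1/x. A minimal numerical sketch (the names `f_n` and `integral_f_n` are chosen just for this illustration):

```python
# Escaping-mass counterexample: f_n = (1/n) * indicator([1, n+1]).
# f_n -> 0 uniformly on [1, infinity), but every integral equals 1,
# so the conclusion of the theorem fails without a dominating integrable g.

def f_n(x, n):
    """f_n(x) = 1/n on [1, n+1], and 0 elsewhere."""
    return 1.0 / n if 1.0 <= x <= n + 1.0 else 0.0

def integral_f_n(n):
    """Exact value of the integral: (1/n) * length([1, n+1]) = 1."""
    return (1.0 / n) * n

for n in (1, 10, 100, 1000):
    sup_norm = 1.0 / n  # sup over x of |f_n(x) - 0| -> 0: uniform convergence
    print(n, sup_norm, integral_f_n(n))
```

The sup norm shrinks to 0 while the integrals stay pinned at 1, which is exactly the behaviour the dominating g rules out.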
 
Well, I think you can prove uniform convergence of h_n directly.
You want to show: \forall \varepsilon > 0 \ \exists N > 0 such that |h_n(u) - h(u)| \le \varepsilon for all n > N and all u \in [1, \infty), right?
First pick u_0 \in (1, \infty) such that \int_{u_0}^{\infty} g(x)\,dx \le \varepsilon/4. Then pick an N with |f_n(x) - f(x)| \le \frac{\varepsilon}{2(u_0 - 1)} for all n > N and x \in [1, u_0]. This N should do the job.

You can write for any u
|h_n(u) - h(u)| = \left|\int_1^u [f_n(x) - f(x)]\,dx\right| \leq \left|\int_1^{u_0 \wedge u} [f_n(x) - f(x)]\,dx\right| + \chi_{\{u_0 < u\}} \left|\int_{u_0}^{u} [f_n(x) - f(x)]\,dx\right|

For n > N the first term is bounded by \varepsilon/2, since the integrand is at most \frac{\varepsilon}{2(u_0 - 1)} on an interval of length at most u_0 - 1. The second term is only there if u > u_0. In this case you can use |f_n(x)| \le g(x) for all x (which implies |f(x)| \le g(x)), so
\left|\int_{u_0}^{u} [f_n(x) - f(x)]\,dx\right| \leq 2\int_{u_0}^{u} g(x)\,dx \leq 2\int_{u_0}^{\infty} g(x)\,dx \leq \varepsilon/2
where the last inequality holds by the choice of u_0.
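As a concrete sanity check of the result, one can take f_n(x) = \cos(x/n)/x^2, which is dominated by g(x) = 1/x^2 (integrable on [1, \infty)) and converges to f(x) = 1/x^2 uniformly on compact subsets, with \int_{1}^{\infty} f(x)\,dx = 1. A rough numerical sketch, truncating the improper integral at R = 1000 (so the neglected tail is at most \int_R^\infty g = 1/R) and using a homemade composite trapezoid rule; all names here are just for illustration:

```python
# Numerical check of lim_n int_1^inf f_n = int_1^inf f for the dominated
# family f_n(x) = cos(x/n)/x^2, with g(x) = 1/x^2 and limit f(x) = 1/x^2.
import math

def trapezoid(func, a, b, steps):
    """Composite trapezoid rule for func on [a, b] with the given step count."""
    h = (b - a) / steps
    total = 0.5 * (func(a) + func(b))
    for i in range(1, steps):
        total += func(a + i * h)
    return total * h

R, STEPS = 1000.0, 100_000  # truncation point; the tail beyond R is < 1/R
limit = 1.0                 # int_1^inf f(x) dx = int_1^inf x^-2 dx = 1

errors = []
for n in (1, 10, 100, 1000):
    I_n = trapezoid(lambda x: math.cos(x / n) / x**2, 1.0, R, STEPS)
    errors.append(abs(I_n - limit))
    print(n, I_n)
```

The successive errors shrink toward 0 (up to the ~1/R truncation of the tail), consistent with the theorem.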
 
This is brilliant, thanks!
u_0 is exactly what I didn't get! Thanks again!:)
 
boombaby said:
This is brilliant, thanks!
Don't be exaggerating :smile:
 