Proof involving sequences of functions and uniform convergence


Homework Statement


Let \phi_n(x) be positive-valued and continuous for all x in [-1,1] with:

\lim_{n\rightarrow\infty} \int_{-1}^{1} \phi_n(x)dx = 1

Suppose further that \{\phi_n(x)\} converges to 0 uniformly on the intervals [-1,-c] and [-c,1] for any c > 0.

Let g be any function which is continuous on [-1,1].

Show that:

\lim_{n\rightarrow\infty} \int_{-1}^{1} \phi_n(x) g(x)dx = g(0)


Homework Equations



Theorem:

If the functions \{f_n\} and F are integrable on a bounded closed set E, \{f_n\} converges to F pointwise on E, and \Vert f_n \Vert < M for some M and all n = 1,2,3,\ldots, then:

\lim_{n\rightarrow\infty} \int_{E} f_n = \int_{E} F
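
(A note on how I expect to use this here, writing M_g for \max_{[-1,1]}|g|, which is my own shorthand: on each of the two intervals E where \phi_n converges uniformly, take f_n = \phi_n g and F = 0. Each \phi_n is continuous on the compact set E and \max_{E} \phi_n \rightarrow 0 by the uniform convergence, so \sup_n \max_{E} \phi_n is finite. Hence

\Vert f_n \Vert \le M_g \sup_n \max_{E} \phi_n < \infty

and the theorem gives \lim_{n\rightarrow\infty} \int_{E} \phi_n(x) g(x)dx = \int_{E} 0 = 0.)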



The Attempt at a Solution



Alrighty, so the first thing I did was apply the theorem to:

f_n = \phi_n on the intervals [-1,-c] and [-c,1]

and came up with the following integrals (since uniform convergence implies pointwise convergence):

\lim_{n\rightarrow\infty} \int_{-1}^{-c} \phi_n(x)dx = 0
and
\lim_{n\rightarrow\infty} \int_{-c}^{1} \phi_n(x)dx = 0

If we add these integrals together, we know what happens as c goes to 0.

Looking at our new sequence:

f_n = \phi_n(x) g(x)

This sequence is uniformly bounded on [-1,-c] and [-c,1] and converges pointwise to 0.

Thus I think because of the continuity of \phi_n and g, and the uniform convergence of \phi_n:

\lim_{n\rightarrow\infty} \int_{-1}^{-c} \phi_n(x) g(x)dx = 0
\lim_{n\rightarrow\infty} \int_{-c}^{1} \phi_n(x) g(x)dx = 0

Again, if we add these integrals together we know what happens as c goes to 0. But I'm not entirely sure where to go from here, or whether I'm going in the right direction. Any help is appreciated.
 
Second attempt at a solution

Okay so I've tried a little bit more now:

Just adding and subtracting a term from the limit we want to prove:

\lim_{n\rightarrow\infty} \int_{-1}^{1} \phi_n(x) g(x)dx = \lim_{n\rightarrow\infty} \int_{-1}^{1} \phi_n(x)(g(x)-g(0))dx + \lim_{n\rightarrow\infty} \int_{-1}^{1} \phi_n(x) g(0)dx

but, since g(0) is a constant,

\lim_{n\rightarrow\infty} \int_{-1}^{1} \phi_n(x) g(0)dx = g(0)\lim_{n\rightarrow\infty} \int_{-1}^{1} \phi_n(x)dx = g(0)

and by the mean value theorem for integrals we know there exists a t in [-1,1] such that:

\lim_{n\rightarrow\infty} \int_{-1}^{1} \phi_n(x)(g(x)-g(0))dx = (g(t)-g(0))\lim_{n\rightarrow\infty} \int_{-1}^{1} \phi_n(x)dx = g(t) - g(0)

and now I want to make an argument that states g(t) must be g(0).

So I started by breaking my integral up:
\lim_{n\rightarrow\infty} \int_{-1}^{-c} \phi_n(x) g(x)dx + \lim_{n\rightarrow\infty} \int_{-c}^{1} \phi_n(x) g(x)dx = 0

as c goes to zero.

But then g(t) - g(0) = 0, so g(t) = g(0), and we would have:

<br /> \lim_{n\rightarrow\infty} \int_{-1}^{1} \phi_n(x) g(x)dx = \lim_{n\rightarrow\infty} \int_{-1}^{1} \phi_n(x)(g(x)-g(0))dx + \lim_{n\rightarrow\infty} \int_{-1}^{1} \phi_n(x) g(0)dx = g(t) - g(0) + g(0) = g(0)

The very last step seems a little hand-wavy; I hope someone can help me.

Edit: (I also realize some of the stuff in my first post is wrong, but it won't let me edit it)
 
You MUST mean \phi_n converges uniformly to zero on [-1,-c] and [c,1] for any c > 0, not on [-1,-c] and [-c,1]. If it were the latter, the integral of \phi_n over [-1,1] would go to zero as n \rightarrow \infty, contradicting the hypothesis that it tends to 1.
 
Right, thanks for pointing that out.
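
(Just to have a picture in mind, and not something from the problem itself: with the corrected intervals, one concrete family satisfying all the hypotheses would be

\phi_n(x) = \frac{n}{\pi(1+n^2x^2)}

It is positive and continuous, \int_{-1}^{1} \phi_n(x)dx = \frac{2}{\pi}\arctan(n) \rightarrow 1, and for c \le |x| \le 1 we have \phi_n(x) \le \frac{1}{\pi n c^2} \rightarrow 0 uniformly.)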

Then my integrals at the bottom of my second post change to:

\lim_{n\rightarrow\infty} \int_{-1}^{-c} \phi_n(x) g(x)dx + \lim_{n\rightarrow\infty} \int_{c}^{1} \phi_n(x) g(x)dx = 0

and then as c goes to zero:
0 = \lim_{n\rightarrow\infty} \int_{-1}^{-c} \phi_n(x) g(x)dx + \lim_{n\rightarrow\infty} \int_{c}^{1} \phi_n(x) g(x)dx = \lim_{n\rightarrow\infty} \int_{-1}^{1} \phi_n(x)(g(x)-g(0))dx = (g(t)-g(0))\lim_{n\rightarrow\infty} \int_{-1}^{1} \phi_n(x)dx = g(t) - g(0)

Edit: Ya this is wrong.

I just don't feel right about this though.
 
Ah I'm all messed up now lol
 
g(x) is continuous at 0. So if c is small enough, g(x) is 'almost' g(0) on [-c,c]. \phi_n is 'almost' zero outside of [-c,c] for large enough n. Now put in some epsilons to quantify 'almost'.
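
Trying to follow that hint, here is how I would organize the estimate (just a sketch, with M as my own shorthand for \max_{[-1,1]}|g|):

Given \epsilon > 0, continuity of g at 0 gives a c > 0 with |g(x) - g(0)| < \epsilon whenever |x| \le c. Since \phi_n \ge 0,

\left| \int_{-1}^{1} \phi_n(x) g(x)dx - g(0) \right| \le \int_{-1}^{1} \phi_n(x)|g(x)-g(0)|dx + |g(0)| \left| \int_{-1}^{1} \phi_n(x)dx - 1 \right|

The last term goes to 0 by hypothesis. Splitting the remaining integral at \pm c:

\int_{-c}^{c} \phi_n(x)|g(x)-g(0)|dx \le \epsilon \int_{-1}^{1} \phi_n(x)dx

which is at most \epsilon(1+\epsilon) once n is large enough that \int_{-1}^{1} \phi_n(x)dx \le 1+\epsilon, while

\int_{[-1,-c]\cup[c,1]} \phi_n(x)|g(x)-g(0)|dx \le 4M \sup_{c \le |x| \le 1} \phi_n(x) \rightarrow 0

by the uniform convergence. So \limsup_{n} \left| \int_{-1}^{1} \phi_n(x) g(x)dx - g(0) \right| \le \epsilon(1+\epsilon), and since \epsilon was arbitrary the limit is g(0).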
 
