Uniform convergence, mean convergence, mean-square convergence

In summary, the conversation discusses the relationships between different modes of convergence ([itex]L^1[/itex], [itex]L^2[/itex], and [itex]L^{\infty}[/itex]) on both finite and infinite intervals. The original poster has proved some of these implications and found counter-examples for others, and is seeking help with the remaining cases on infinite intervals. The conversation also touches on the convergence of a sequence of narrowing Gaussians and the definition of the delta function: one participant suggests counter-examples converging to a step function, and the limit of the Gaussians is identified as the Dirac delta in the sense of distributions.
  • #1
Lajka
Hi,

I have always been troubled by the relationships between these modes of convergence ([itex]L^1, L^2[/itex], and [itex]L^{\infty}[/itex] convergence, to be precise), so I took some books and decided to establish some relations between them. For some I succeeded, for others I did not. Here's what I have so far:

If I is a finite interval:
[itex]L^{\infty} [/itex] implies [itex]L^{1} [/itex] (proven)
[itex]L^{\infty} [/itex] implies [itex]L^{2} [/itex] (proven)
[itex]L^{2} [/itex] implies [itex]L^{1} [/itex] (proven)

none of the converses are true (found counter-examples for each)
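(For the record, on a finite interval [itex]I[/itex] these implications follow from estimates of the form

[tex]\|f\|_1 \leq |I|^{1/2}\,\|f\|_2 \quad \text{(Cauchy-Schwarz)}, \qquad \|f\|_2 \leq |I|^{1/2}\,\|f\|_{\infty}, \qquad \|f\|_1 \leq |I|\,\|f\|_{\infty},[/tex]

applied to [itex]f_n - f[/itex]; they all fail once [itex]|I| = \infty[/itex], which is why the infinite-interval case needs separate treatment.)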

If I is an infinite interval:
[itex]L^{\infty} [/itex] doesn't imply [itex]L^{1} [/itex] (counter-example)
[itex]L^{\infty} [/itex] doesn't imply [itex]L^{2} [/itex] (counter-example)
[itex]L^{2} [/itex] doesn't imply [itex]L^{1} [/itex] (counter-example)
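(For concreteness, counter-examples of the usual kind: [itex]f_n = \frac{1}{n}\chi_{[0,n^2]}[/itex] converges to 0 uniformly, yet [itex]\|f_n\|_1 = n \to \infty[/itex] and [itex]\|f_n\|_2 = 1[/itex], which rules out the first two implications; and [itex]g_n = \frac{1}{n}\chi_{[0,n]}[/itex] has [itex]\|g_n\|_2 = n^{-1/2} \to 0[/itex] but [itex]\|g_n\|_1 = 1[/itex] for every [itex]n[/itex], which rules out the third.)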

However, I'm yet to see if, for infinite intervals:
[itex]L^{1} [/itex] implies [itex]L^{\infty} [/itex]
[itex]L^{2} [/itex] implies [itex]L^{\infty} [/itex]
[itex]L^{1} [/itex] implies [itex]L^{2} [/itex]

Intuitively, I believe that [itex]L^{1} [/itex] (or [itex]L^{2} [/itex]) convergence cannot imply uniform convergence (I just imagine a Gaussian which shrinks while keeping its height constant). However, I cannot think of an example to test whether [itex]L^{1} [/itex] implies [itex]L^{2} [/itex].
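A quick numerical sanity check of this shrinking-Gaussian intuition (my own script, not from the thread; I parametrize the narrowing Gaussian as [itex]g_n(x) = e^{-(nx)^2}[/itex], which keeps the peak height at 1):

```python
import numpy as np

# Fine grid over a wide interval, symmetric about 0 (so x = 0 is a grid point)
x = np.linspace(-50.0, 50.0, 2_000_001)
dx = x[1] - x[0]

for n in (1, 10, 100):
    g = np.exp(-(n * x) ** 2)               # Gaussian narrowed by factor n, height 1
    sup_norm = g.max()                       # L^inf distance from the zero function
    l1_norm = g.sum() * dx                   # ~ integral of |g|, exactly sqrt(pi)/n
    l2_norm = np.sqrt((g ** 2).sum() * dx)   # ~ (pi/2)^(1/4) / sqrt(n)
    print(n, sup_norm, round(l1_norm, 4), round(l2_norm, 4))
```

The [itex]L^1[/itex] and [itex]L^2[/itex] norms shrink like [itex]1/n[/itex] and [itex]1/\sqrt{n}[/itex] while the sup norm stays pinned at 1, matching the intuition that [itex]L^1[/itex] or [itex]L^2[/itex] convergence cannot force uniform convergence on an infinite interval.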

Also, I should say that I don't know how to prove any of these conclusions for infinite intervals, so I'm just trying to find counter-examples (for finite intervals I used the Cauchy-Schwarz inequality and the mean value theorem, but neither seems to apply here).

So, any help is appreciated, thanks.

P.S. Just one more thing that came to mind while I was writing this: if I take a Gaussian and make new functions by letting it shrink (but increasing its height this time), what does that sequence converge to? I know that's one of the ways to define a delta function, but in light of this discussion, I would probably say it converges pointwise to [itex]f(x) \equiv 0[/itex].
 
  • #2
Hi Lajka! :smile:

Lajka said:
However, I'm yet to see if, for infinite intervals: [itex]L^{1}[/itex] implies [itex]L^{\infty}[/itex], [itex]L^{2}[/itex] implies [itex]L^{\infty}[/itex], [itex]L^{1}[/itex] implies [itex]L^{2}[/itex]. Intuitively, I believe that [itex]L^{1}[/itex] (or [itex]L^{2}[/itex]) cannot imply uniform convergence.

For the first two, consider this:

[tex]f_n:[0,1]\rightarrow \mathbb{R}:t\rightarrow \left\{\begin{array}{ccc}
0 & \text{if} & 0\leq t\leq \frac{1}{2}\\
nt-\frac{1}{2}n & \text{if} & \frac{1}{2}\leq t\leq \frac{1}{2}+\frac{1}{n}\\
1 & \text{if} & \frac{1}{2}+\frac{1}{n}\leq t\leq 1\\
\end{array}\right.[/tex]

Lajka said:
However, I cannot think of an example to disprove if [itex]L^{1} [/itex] implies [itex]L^{2} [/itex].

Try the function

[tex]f_n:[0,1]\rightarrow \mathbb{R}:t\rightarrow \left\{\begin{array}{ccc}
0 & \text{if} & 0\leq t\leq \frac{1}{n}\\
t^{-2/3} & \text{if} & \frac{1}{n}\leq t\leq 1\\
\end{array}\right.[/tex]
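(Spelling out why this works: the pointwise limit is [itex]f(t) = t^{-2/3}[/itex], and

[tex]\|f_n - f\|_1 = \int_0^{1/n} t^{-2/3}\,dt = 3n^{-1/3} \to 0, \qquad \|f_n\|_2^2 = \int_{1/n}^1 t^{-4/3}\,dt = 3\left(n^{1/3} - 1\right) \to \infty,[/tex]

so the sequence converges in [itex]L^1[/itex] but is unbounded in [itex]L^2[/itex] and hence cannot converge there; indeed [itex]f \notin L^2(0,1)[/itex].)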


Lajka said:
If I take a Gaussian and make new functions by letting it shrink (but increasing its height this time), that sequence would converge to what? [...] I would probably say it would converge pointwise to [itex]f(x) \equiv 0[/itex].

It doesn't converge pointwise, since it doesn't converge at x = 0. It does converge to f(x) = 0 almost everywhere, but that's the best you can do.
If you allow distributions, then the functions converge to the Dirac delta distribution.
 
  • #3
Hey micromass! :)

First of all, thanks for resolving my dilemma about the delta function. :)

Now, as for your examples, I have only one problem: they're defined on finite intervals, and I needed examples of functions which converge (or don't) on infinite intervals. Or maybe I'm missing something here?

micromass said:
For the first two, consider this:

[tex]f_n:[0,1]\rightarrow \mathbb{R}:t\rightarrow \left\{\begin{array}{ccc}
0 & \text{if} & 0\leq t\leq \frac{1}{2}\\
nt-\frac{1}{2}n & \text{if} & \frac{1}{2}\leq t\leq \frac{1}{2}+\frac{1}{n}\\
1 & \text{if} & \frac{1}{2}+\frac{1}{n}\leq t\leq 1\\
\end{array}\right.[/tex]

This converges to the Heaviside function, right? Great example, but the fact that it's on a finite domain bothers me. However, while reading your examples, I thought of this. Consider the function
[tex]f_n:\mathbb{R}\rightarrow \mathbb{R}:t\rightarrow \left\{\begin{array}{ccc}
-1 & \text{if} & - \infty < t < - \frac{1}{n}\\
nt & \text{if} & -\frac{1}{n}\leq t\leq \frac{1}{n}\\
1 & \text{if} & \frac{1}{n} < t < + \infty\\
\end{array}\right.[/tex]
which converges to the sign function (essentially a shifted and rescaled unit step), if I'm not mistaken, and it also converges in [itex]L^1[/itex] and in [itex]L^2[/itex], but not in [itex]L^{\infty}[/itex]. What do you think, is this an okay example, does it make sense?
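(Checking the norms myself: with [itex]f(t) = \operatorname{sgn}(t)[/itex],

[tex]\|f_n - f\|_1 = 2\int_0^{1/n} (1 - nt)\,dt = \frac{1}{n}, \qquad \|f_n - f\|_2^2 = 2\int_0^{1/n} (1 - nt)^2\,dt = \frac{2}{3n}, \qquad \|f_n - f\|_{\infty} = 1,[/tex]

so the convergence holds in [itex]L^1[/itex] and [itex]L^2[/itex] but fails in [itex]L^{\infty}[/itex].)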

micromass said:
Try the function

[tex]f_n:[0,1]\rightarrow \mathbb{R}:t\rightarrow \left\{\begin{array}{ccc}
0 & \text{if} & 0\leq t\leq \frac{1}{n}\\
t^{-2/3} & \text{if} & \frac{1}{n}\leq t\leq 1\\
\end{array}\right.[/tex]

If not for the fact that it's defined on a finite domain, this would be an awesome example. I mean, it still is :D it's just that I needed it on an infinite interval.

Thanks again for your help micromass, and if my reasoning isn't right here, please let me know!
 

1. What is uniform convergence?

Uniform convergence means that a sequence of functions [itex]f_n[/itex] approaches a limit function [itex]f[/itex] at a rate that can be controlled simultaneously at every point: [itex]\sup_x |f_n(x) - f(x)| \to 0[/itex]. Equivalently, for every [itex]\varepsilon > 0[/itex] there is an [itex]N[/itex] such that [itex]|f_n(x) - f(x)| < \varepsilon[/itex] for all [itex]n \geq N[/itex] and all [itex]x[/itex] at once.

2. What is the difference between uniform convergence and mean convergence?

The main difference is which quantity must vanish. Uniform convergence requires [itex]\sup_x |f_n(x) - f(x)|[/itex] to tend to zero, so the difference is small at every point simultaneously. Mean convergence ([itex]L^1[/itex] convergence) only requires the integrated absolute difference, [itex]\int |f_n - f|[/itex], to tend to zero; the functions may still differ substantially on sets of small measure, as long as the average error vanishes.

3. How is mean-square convergence different from uniform and mean convergence?

Mean-square convergence ([itex]L^2[/itex] convergence) is similar to mean convergence, but instead of the integral of the absolute difference it requires the integral of the squared difference, [itex]\int |f_n - f|^2[/itex], to tend to zero. This mode of convergence is especially common when dealing with Fourier series, stochastic processes, and random variables.

4. What are some applications of uniform and mean convergence in mathematics?

Uniform and mean convergence are important concepts in mathematical analysis and are used in various applications, such as in the study of series and sequences, differential equations, and functional analysis. They also play a crucial role in the theory of approximation, where they are used to determine the convergence of approximations to a given function.

5. How can we determine if a sequence of functions converges uniformly, in mean, or in mean-square?

For uniform convergence one typically estimates [itex]\sup_x |f_n(x) - f(x)|[/itex] directly, or uses the Weierstrass M-test when the sequence arises from a series of functions. For mean and mean-square convergence one computes [itex]\int |f_n - f|[/itex] or [itex]\int |f_n - f|^2[/itex] and checks that they tend to zero; tools such as the dominated convergence theorem, Hölder's inequality, and the completeness of the [itex]L^p[/itex] spaces (the Riesz-Fischer theorem) are often used to justify these limits.
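As a concrete illustration (my own script, not part of the original thread), here is how one can numerically compare the three norms for the ramp functions [itex]f_n[/itex] from post #2, measured against their almost-everywhere limit, the step at [itex]t = 1/2[/itex]:

```python
import numpy as np

# Grid on [0, 1] fine enough to resolve the ramp of width 1/n for n up to 1000
t = np.linspace(0.0, 1.0, 1_000_001)
dt = t[1] - t[0]
step = (t > 0.5).astype(float)  # a.e. limit of f_n: 0 on [0, 1/2], 1 on (1/2, 1]

for n in (10, 100, 1000):
    # f_n: 0 up to 1/2, linear ramp n*t - n/2 on [1/2, 1/2 + 1/n], then 1
    f_n = np.clip(n * t - 0.5 * n, 0.0, 1.0)
    diff = np.abs(f_n - step)
    sup_dist = diff.max()                       # stays near 1: no uniform convergence
    l1_dist = diff.sum() * dt                   # ~ 1/(2n): mean (L^1) convergence
    l2_dist = np.sqrt((diff ** 2).sum() * dt)   # ~ 1/sqrt(3n): mean-square convergence
    print(n, sup_dist, l1_dist, l2_dist)
```

The sup distance stays essentially at 1 while the [itex]L^1[/itex] and [itex]L^2[/itex] distances decay like [itex]1/(2n)[/itex] and [itex]1/\sqrt{3n}[/itex], showing mean and mean-square convergence without uniform convergence.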
