Let ##f \in L_2(\mathbb{R})\setminus\{0\}## be non-negative almost everywhere. Then ##w\text{-}\lim_{n \to \infty} f_n = 0##, where ##f_n(x):=f(x-n)##. Why?
##f\in L_2(\mathbb{R})## means ##f## is Lebesgue square integrable, i.e. ##\int_\mathbb{R} |f(x)|^2 \,dx< \infty ##. Weak convergence towards zero means ##\int_\mathbb{R} f(x-n)g(x)\,dx \rightarrow 0 ## for all ##g\in L_2(\mathbb{R})##.
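For what it's worth, a quick numerical check with one concrete pair (a Gaussian ##f## and a wider Gaussian ##g##, chosen purely for illustration, not taken from the book) makes the claimed decay of ##\langle f_n, g\rangle## plausible:

```python
import numpy as np

# Numerical sanity check of  <f_n, g> = integral of f(x - n) g(x) dx  for one
# concrete pair. f and g are Gaussians, chosen only for illustration.
x = np.linspace(-20.0, 80.0, 100001)
dx = x[1] - x[0]

f = lambda t: np.exp(-t**2)           # f >= 0 a.e., f in L2(R), f != 0
g = lambda t: np.exp(-t**2 / 4.0)     # one fixed g in L2(R)

for n in range(0, 31, 5):
    inner = np.sum(f(x - n) * g(x)) * dx   # Riemann sum approximating <f_n, g>
    print(f"n = {n:2d}   <f_n, g> ~ {inner:.3e}")
```

Of course this only illustrates the statement for a single pair; it is no substitute for a proof.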
I've tried to solve this with Hölder's inequality, but that leads to
$$
\left|\int_\mathbb{R} f(x-n)g(x)\,dx\right| \le \int_\mathbb{R} |f(x-n)g(x)|\,dx \le \|g\|_2\cdot \|f_n\|_2,
$$
and since ##\|f_n\|_2 = \|f\|_2## by translation invariance of the Lebesgue measure, this bound is constant in ##n## and does not help by itself. The approach would be fine if the limit and the integral in the middle term could be interchanged. Unfortunately, this is exactly the standard example where such an interchange is not allowed:
$$ \lim_{n \to \infty} \int_\mathbb{R} e^{-(x-n)^2}\,dx = \lim_{n \to \infty} \sqrt{\pi} = \sqrt{\pi} \neq \int_\mathbb{R} \lim_{n \to \infty} e^{-(x-n)^2}\,dx = \int_\mathbb{R} 0 = 0 $$
The bulk of the function ##e^{-(x-n)^2}## vanishes as it is transported to infinity, yet the integral over all of ##\mathbb{R}## stays the same for every fixed ##n##. So the square integrability of ##f_n## and ##g## has to be used in the sense that, for large ##n##, the bulk of ##f_n## lands in a region where ##g## is close to zero, and vice versa. I was looking for a nice little lemma which deals with this situation, but couldn't find one.

I also didn't manage to see where the non-negativity of ##f## comes into play. The way it is presented in the book makes me think it's not very difficult, but I simply don't see the trick, i.e. the theorem which allows me to conclude that for large ##n## the product ##f_n \cdot g## is close enough to zero at the critical locations where ##g##, respectively ##f_n##, is not. My suspicion is that this is the reason for requiring ##f(x) \ge 0## a.e.: to rule out situations where negative function values could compensate.
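To make this escaping-mass picture concrete, here is a small numerical sketch (the grid, the fixed window ##[-10,10]##, and the choice of Gaussian are mine, not from the book): the total integral stays at ##\sqrt{\pi}## for every ##n##, while the mass remaining on the fixed window dies out.

```python
import numpy as np

# Illustration of the failed interchange: the total integral of e^{-(x-n)^2}
# stays sqrt(pi) for every n, while the mass left on a fixed window [-10, 10]
# (where a nice g would "live") goes to zero as n grows.
x = np.linspace(-20.0, 120.0, 200001)
dx = x[1] - x[0]
window = np.abs(x) <= 10.0              # a fixed bounded region

for n in (0, 10, 20, 40, 80):
    fn = np.exp(-(x - n)**2)
    total = np.sum(fn) * dx              # ~ sqrt(pi), independent of n
    on_window = np.sum(fn[window]) * dx  # contribution near the origin
    print(f"n = {n:2d}   total ~ {total:.5f}   mass on [-10,10] ~ {on_window:.2e}")
```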
Is there a theorem which quantifies this intuition?