Another question about Fourier series convergence

In summary, the conversation is about proving a theorem on the convergence of Fourier series. The proof is given first for a simple case and then extended to integrable functions that are monotonic on an interval. The question is whether the argument still applies to functions that are not monotonic on any interval around 0, such as √|x|·sin(1/x). The poster doubts that the proof can hold for all non-monotonic functions and asks where it breaks down.
  • #1
Boorglar
I am trying to prove a theorem related to the convergence of Fourier series. I will post my proof below, so first check it and then my question will make sense.

Is there any flaw in my proof? Also, here I proved it for integrable functions that are monotonic on an interval to the left of 0. But what if the function is not monotonic on any interval around 0, and is not Lipschitz continuous either? For example, f(x) = √|x|·sin(1/x). Can I still use step functions to approximate it, as in the proof for monotonic functions? I know it oscillates wildly, but by taking sufficiently small intervals it would seem that the limit still goes to f(0-)/2. And yet I know this cannot be right in general, since there are continuous functions whose Fourier series diverge at some points of continuity, and my proof would lead to a contradiction if it held for all non-monotonic functions. So is my proof flawed, and if not, what prevents it from working for functions like √|x|·sin(1/x)?
 
  • #2
I am trying to prove the following, under appropriate conditions for f which are mentioned:
[tex] \lim_{n \rightarrow \infty} \frac{1}{\pi}\int_{-\pi}^{0}f(x)\frac{\sin(nx)}{x}dx = f(0-)\frac{1}{\pi}\int_{-\infty}^{0}\frac{\sin(x)}{x}dx = \frac{f(0-)}{2} [/tex]

First I proved it for f(x) = 1:

[tex] \lim_{n \rightarrow \infty} \frac{1}{\pi}\int_{-\pi}^{0}\frac{\sin(nx)}{x}dx = \lim_{n \rightarrow \infty} \frac{1}{\pi}\int_{-n\pi}^{0}\frac{\sin(u)}{u}du = \frac{1}{\pi}\int_{-\infty}^{0}\frac{\sin(x)}{x}dx = \frac{1}{2} [/tex]
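As a quick numerical sanity check of this base case (a sketch, not part of the proof), the integrals can be estimated with a midpoint rule in Python; the values equal Si(nπ)/π and drift toward 1/2 as n grows:

```python
import numpy as np

def base_case(n, num=400_000):
    """Midpoint-rule estimate of (1/pi) * integral_{-pi}^{0} sin(n x)/x dx."""
    h = np.pi / num
    x = -np.pi + (np.arange(num) + 0.5) * h  # midpoints, so x never hits 0
    return np.sum(np.sin(n * x) / x) * h / np.pi

for n in (10, 100, 1000):
    print(n, base_case(n))  # drifts toward 0.5
```

The midpoints avoid the removable singularity at x = 0, and 400,000 points are enough to resolve the oscillation even for n = 1000.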

Then for any step function s(x) taking values [itex] c_1, c_2, \dots, c_m [/itex] on the intervals [itex] (x_1, 0), (x_2, x_1), \dots, (x_m, x_{m-1}) [/itex] of a partition of [itex] (-\pi, 0) [/itex] (I write m for the number of steps to avoid clashing with the frequency index n). The only interval of interest is [itex] (x_1, 0) [/itex], since on the other intervals the limit vanishes by the Riemann-Lebesgue lemma. But on the interval of interest s(x) is the constant [itex] c_1 [/itex], so by the f(x) = 1 case the limit is [itex] \frac{c_1}{2} = \frac{s(0-)}{2}[/itex].
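The Riemann-Lebesgue step can likewise be checked numerically on an interval bounded away from 0, where 1/x is bounded and continuous (again a sketch, with the interval [-π, -1] chosen arbitrarily):

```python
import numpy as np

def away_from_zero(n, a=-np.pi, b=-1.0, num=200_000):
    """Midpoint-rule estimate of (1/pi) * integral_a^b sin(n x)/x dx, with 0 outside [a, b]."""
    h = (b - a) / num
    x = a + (np.arange(num) + 0.5) * h
    return np.sum(np.sin(n * x) / x) * h / np.pi

for n in (5, 50, 500):
    print(n, away_from_zero(n))  # tends to 0 as n grows
```

Integration by parts bounds the integral by 2/(nπ) here, consistent with the decay the lemma predicts.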

Now consider an integrable function, monotonic on [itex] [-\pi, 0] [/itex]. By a standard approximation property of integrable functions, for any [itex] \epsilon > 0 [/itex] there exist step functions with [itex] s_1 \leq f \leq s_2 [/itex], [itex] f - s_1 < \epsilon [/itex] and [itex] s_2 - f < \epsilon [/itex] on [itex] [-\pi, 0] [/itex].

We also choose the step functions so that their constant value on the interval adjacent to x = 0 is f(0-). If f is monotonic near 0 we can certainly do this, since the maximum error on that interval occurs at its other endpoint, so we can shrink the interval until this error is smaller than [itex] \epsilon [/itex].

Then [tex] \frac{1}{\pi}\int_{-\pi}^{0}s_2(x)\frac{\sin(nx)}{x}dx - \epsilon \frac{1}{\pi}\int_{-\pi}^{0}\frac{\sin(nx)}{x}dx \leq \frac{1}{\pi}\int_{-\pi}^{0}f(x)\frac{\sin(nx)}{x}dx \leq \frac{1}{\pi}\int_{-\pi}^{0}s_1(x)\frac{\sin(nx)}{x}dx + \epsilon \frac{1}{\pi}\int_{-\pi}^{0}\frac{\sin(nx)}{x}dx [/tex]

And by taking the limit as n goes to infinity,
[tex] \frac{f(0-)}{2} - \frac{\epsilon}{2} \leq \lim_{n \rightarrow \infty} \frac{1}{\pi}\int_{-\pi}^{0}f(x)\frac{\sin(nx)}{x}dx \leq \frac{f(0-)}{2} + \frac{\epsilon}{2} [/tex]

This inequality holds for every positive epsilon (choosing the appropriate step functions), so the limit must actually equal [itex] \frac{f(0-)}{2} [/itex]. QED.
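A numerical illustration of the conclusion (a sketch only; f(x) = eˣ is chosen arbitrarily as a monotone example, so f(0-) = 1 and the limit should be 1/2):

```python
import numpy as np

def dirichlet_limit(f, n, num=400_000):
    """Midpoint-rule estimate of (1/pi) * integral_{-pi}^{0} f(x) sin(n x)/x dx."""
    h = np.pi / num
    x = -np.pi + (np.arange(num) + 0.5) * h  # midpoints avoid x = 0
    return np.sum(f(x) * np.sin(n * x) / x) * h / np.pi

for n in (20, 200):
    print(n, dirichlet_limit(np.exp, n))  # approaches f(0-)/2 = 0.5
```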
 
  • #3
Actually the last line was imprecise: what I meant is that by first choosing the step functions so that epsilon is small enough, and then choosing n large enough, I can bring the integral within any prescribed error of f(0-)/2.
 

1. What is a Fourier series?

A Fourier series is a mathematical representation of a periodic function as a sum of sinusoidal functions with different frequencies and amplitudes. It is named after the French mathematician Joseph Fourier, who first introduced the concept in the early 19th century.

2. How does a Fourier series converge?

A Fourier series converges when its partial sums approach the original periodic function as a limit. This means that as more terms are added to the series, the approximation of the original function becomes more accurate.
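For instance, f(x) = x² on (-π, π) has the standard Fourier series x² = π²/3 + 4Σₙ(-1)ⁿcos(nx)/n², and its partial sums can be watched converging at a point (a sketch; the series is textbook-standard, the evaluation point x = 1 is arbitrary):

```python
import numpy as np

def partial_sum(x, N):
    """N-th partial sum of the Fourier series of f(x) = x^2 on (-pi, pi)."""
    n = np.arange(1, N + 1)
    return np.pi**2 / 3 + 4 * np.sum((-1.0)**n * np.cos(n * x) / n**2)

for N in (5, 50, 500):
    print(N, partial_sum(1.0, N))  # approaches f(1) = 1
```

Because the coefficients decay like 1/n², the error after N terms is bounded by roughly 4/N, so convergence here is fast and uniform.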

3. What is the importance of Fourier series convergence?

Fourier series convergence is important because it allows us to approximate complex periodic functions with simpler sinusoidal functions, making it easier to analyze and understand these functions. It also has applications in many areas of science and engineering, such as signal processing, image compression, and heat transfer.

4. How do you determine if a Fourier series converges?

There are various tests for the convergence of a Fourier series, such as Dini's test, Jordan's test for functions of bounded variation, and the Dirichlet-Jordan theorem; the Weierstrass M-test gives uniform convergence when the coefficients are absolutely summable. These tests involve checking properties of the coefficients and of the periodic function itself.
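As one concrete illustration (a toy series chosen for this sketch, not one from the thread): if the coefficients satisfy |bₙ| ≤ 1/n², the Weierstrass M-test gives uniform convergence, and the uniform gap between partial sums is bounded by the coefficient tail:

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 1001)

def S(N):
    """Partial sum of sum_n sin(n x)/n^2, dominated termwise by sum 1/n^2 (M-test)."""
    n = np.arange(1, N + 1)
    return (np.sin(np.outer(x, n)) / n**2).sum(axis=1)

gap = np.abs(S(200) - S(100)).max()
print(gap)  # below the tail bound sum_{101}^{200} 1/n^2 < 0.005
```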

5. Can a Fourier series diverge?

Yes, a Fourier series can diverge. Even a continuous function can have a Fourier series that diverges at some points (du Bois-Reymond's example), and Kolmogorov constructed an integrable function whose Fourier series diverges everywhere. More generally, if the coefficients do not decay fast enough, the series may fail to converge at some or all points; at such points the series is said to be divergent.
