Help needed regarding proof of definite integral problem

rudra

Homework Statement



f(x) is a bounded, integrable function on [a,b]; a and b are real constants. We have to prove that

i) ##A_n = \int_a^b f(x)\cos(nx)\,dx \to 0## as ##n \to \infty##
ii) ##B_n = \int_a^b f(x)\sin(nx)\,dx \to 0## as ##n \to \infty##
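
Not part of the proof, but a quick numerical sanity check of the claim (a minimal sketch; the sample function and interval here are arbitrary illustrative choices, not from the problem):

Code:
import numpy as np

def a_n(f, a, b, n, samples=200001):
    """Approximate the integral of f(x)*cos(n*x) over [a, b] by the trapezoid rule."""
    x = np.linspace(a, b, samples)
    y = f(x) * np.cos(n * x)
    dx = (b - a) / (samples - 1)
    return dx * (y.sum() - 0.5 * (y[0] + y[-1]))

# A sample bounded, integrable f on [a, b] = [0, 2] (arbitrary choice).
f = lambda x: np.exp(-x) + x**2
for n in (1, 10, 100, 1000):
    print(n, a_n(f, 0.0, 2.0, n))
# The printed values shrink as n grows, consistent with A_n -> 0.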


Homework Equations



Parseval's formula: if the Fourier series of f(x) converges uniformly to f(x) on [a,b], then

##\int_a^b f^2(x)\,dx = L\left(\frac{a_0^2}{2} + \sum_{n=1}^{\infty}\left(a_n^2 + b_n^2\right)\right)##

where ##a_0, a_n, b_n## are the Fourier coefficients and ##L = (b-a)/2##.
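
For reference, the coefficients here follow the standard convention on [a,b] (the same one used for ##A_n## further below):

##a_n = \frac{1}{L}\int_a^b f(x)\cos\frac{n\pi x}{L}\,dx, \qquad b_n = \frac{1}{L}\int_a^b f(x)\sin\frac{n\pi x}{L}\,dx.##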

The Attempt at a Solution



If f(x) is bounded and integrable on [-π,π], then by Parseval's theorem

##\int_{-\pi}^{\pi} f^2(x)\,dx = L\left(\frac{A_0^2}{2} + \sum_{n=1}^{\infty}\left(A_n^2 + B_n^2\right)\right)## (here ##L = \pi##)

Hence ##\sum_{n=1}^{\infty}\left(A_n^2 + B_n^2\right) \le \frac{1}{L}\int_{-\pi}^{\pi} f^2(x)\,dx##

Hence ##\sum\left(A_n^2 + B_n^2\right)## converges to a finite value, so its terms must tend to zero: ##A_n^2 + B_n^2 \to 0##.

Hence ##A_n \to 0## and ##B_n \to 0## as ##n \to \infty##.

The above proof assumes the interval is [-π,π].

But the problem is when the interval is [a,b]: there the Fourier coefficient ##A_n## is given by

##A_n = \frac{1}{L}\int_a^b f(x)\cos\frac{n\pi x}{L}\,dx,##

which involves ##\cos(n\pi x/L)## rather than the ##\cos(nx)## in the integrals I need.

Any idea how to proceed further?
 
rudra said:
Any idea how to proceed further?

Just apply a scale-location transformation of the form ##x = Au + B## to transform the x-interval [a,b] into the u-interval [-π,π].
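
Spelling that substitution out (one natural choice of the constants, as a sketch): take ##A = \frac{b-a}{2\pi}## and ##B = \frac{a+b}{2}##, so that ##u = -\pi## gives ##x = a## and ##u = \pi## gives ##x = b##. Then

##\int_a^b f(x)\cos(nx)\,dx = A\int_{-\pi}^{\pi} f(Au+B)\cos(nAu + nB)\,du,##

and the angle-addition formula ##\cos(nAu + nB) = \cos(nB)\cos(nAu) - \sin(nB)\sin(nAu)## reduces the problem to integrating the bounded, integrable function ##g(u) = f(Au+B)## against ##\cos(nAu)## and ##\sin(nAu)## over ##[-\pi,\pi]##.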
 