ognik said:
Not sure I get that; they seem to replace ALL the x's in the equation with t? While Fourier series can be applied to discontinuous functions, in this equation that is not known; the equation would apply to continuous functions as well...
All I mean is that in Fourier Analysis, or really in general, sometimes there are very good reasons to change the variable of integration so as not to confuse it with other variables. For example, in PDEs, if you're doing Fourier Analysis and you want to write your final answer down in all its glory, you're really going to want your Fourier integrals not to be using a variable of integration that is one of the ultimate independent variables. In Problem of the Week #182 (http://mathhelpboards.com/potw-university-students-34/problem-week-182-september-22-2015-a-16370.html), the solution to the PDE is
$$u(x,t)=\sum_{n=1}^{\infty}\left(a_n \sin\left[(n\pi)^2 t\right]+b_n \cos\left[(n\pi)^2t\right] \right) \sin(n\pi x),$$
where
\begin{align*}
a_n&=\frac{2}{(n\pi)^2} \int_0^1 g(x) \, \sin(n\pi x) \, dx \\
b_n&=2\int_0^1 f(x) \, \sin(n\pi x) \, dx.
\end{align*}
Now really, if I want to write this as one glorious equation, I'm not going to use $x$'s in the $a_n$ and $b_n$ equations. Nor, really, am I going to use $t$'s. Those are the two independent variables in the original PDE. I might use a Greek letter like $\xi$. So the final answer would be
$$u(x,t)=\sum_{n=1}^{\infty}\left(\frac{2}{(n\pi)^2} \int_0^1 g(\xi) \, \sin(n\pi \xi) \, d\xi \cdot \sin\left[(n\pi)^2 t\right]+2\int_0^1 f(\xi) \, \sin(n\pi \xi) \, d\xi \cdot \cos\left[(n\pi)^2t\right] \right) \sin(n\pi x).$$
Now, you see, if I decided I wanted, for some reason, to pull the $\sin(n\pi x)$ into the integrals, I could do that with no confusion. Dummy variables are dummy variables. You can use whichever one you want. Just don't let it get mixed up with any other variables outside the little world of the integral.
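For instance, with the $\sin(n\pi x)$ pulled inside, the same answer would read
$$u(x,t)=\sum_{n=1}^{\infty}\int_0^1 \left(\frac{2}{(n\pi)^2}\, g(\xi) \sin\left[(n\pi)^2 t\right]+2 f(\xi)\cos\left[(n\pi)^2 t\right] \right) \sin(n\pi \xi)\,\sin(n\pi x)\, d\xi,$$
and the dummy $\xi$ never collides with the outer $x$; only the grouping has changed.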
Good link, thanks. I'll have to go back and check my book's usage when I have time, but to confirm: this implies we should use this technique when there are, for example, discontinuities at $ \pm \infty $, hence we use limits to get an answer?
Or there are a couple other cases where you might need it, but yes.
--------------
I have an example using what I think you mean, which you might like to comment on: $ f(x)=\left\{ -\frac{1}{2}(\pi + x), -\pi \le x \lt 0 \right\}, \left\{ \frac{1}{2}(\pi - x), 0 \lt x \le \pi \right\} $
(BTW, how do I have more than 1 line inside the {}?)
Plot this function.
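In case it helps, here is a minimal plotting sketch (assuming NumPy and Matplotlib are available; this is not part of the original exchange, just the function defined above drawn on $[-\pi,\pi]$):

```python
import numpy as np
import matplotlib.pyplot as plt

# Piecewise function from the example above:
#   f(x) = -(1/2)(pi + x)  for  -pi <= x < 0
#   f(x) =  (1/2)(pi - x)  for   0 <  x <= pi
x = np.linspace(-np.pi, np.pi, 1000)
f = np.where(x < 0, -0.5 * (np.pi + x), 0.5 * (np.pi - x))

plt.plot(x, f)
plt.axhline(0, color="gray", linewidth=0.5)
plt.axvline(0, color="gray", linewidth=0.5)
plt.title("Odd sawtooth-like function with a jump at x = 0")
plt.xlabel("x")
plt.ylabel("f(x)")
plt.show()
```

The plot makes the single jump at $x=0$ (from $-\pi/2$ up to $\pi/2$) easy to see, which is the discontinuity discussed below.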
This is an odd function, so $ a_n = 0 $ and $ b_n = \frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\sin(nx) \,dx $.
The discontinuity at 0 means I need an intermediate variable, so $ b_n = \lim_{t\to 0} \frac{1}{\pi} \left[\int_{-\pi}^{t}f(x)\sin(nx) \,dx + \int_{t}^{\pi}f(x)\sin(nx) \,dx\right] $
$ = \lim_{t\to 0} \frac{1}{2\pi} \left[-\int_{-\pi}^{t}(\pi + x)\sin(nx) \,dx + \int_{t}^{\pi} (\pi-x) \sin(nx) \,dx\right] $
Is that correct use of the intermediate variable t? (also is my Fourier correct so far?)
Not all discontinuities require this sort of treatment. Yours is a simple jump discontinuity. You do need to break up the integral into two regions, but using variables in the limits isn't necessary. You know that because the integrals you get when you simply break it up into two regions converge: they exist. Hence, there's no need for variables in the limits.
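If you want to see that explicitly, here is a small SymPy check (SymPy assumed installed; not part of the original thread). Each piece is an ordinary proper integral of a bounded function, so no limiting variable is needed, and together they give $b_n = \tfrac{1}{n}$:

```python
import sympy as sp

x = sp.Symbol('x')
n = sp.Symbol('n', integer=True, positive=True)

# Split the integral at the jump at x = 0; each piece is an ordinary
# (proper) integral of a bounded function, so no limit is required.
left  = sp.integrate(-sp.Rational(1, 2) * (sp.pi + x) * sp.sin(n * x), (x, -sp.pi, 0))
right = sp.integrate( sp.Rational(1, 2) * (sp.pi - x) * sp.sin(n * x), (x, 0, sp.pi))

print(sp.simplify(left), sp.simplify(right))   # each piece evaluates to pi/(2*n)
print(sp.simplify((left + right) / sp.pi))     # b_n = 1/n
```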
------
Looking at this example further: the book gives an answer of $ f(x)= \sum_{n=1}^{\infty} \sin\left(\frac{nx}{n}\right) $... that must be a typo; I get $ f(x)= \sum_{n=1}^{\infty} \frac{1}{n} \sin\left({nx}\right) $?
Probably. You can confirm with Wolfram|Alpha.
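For what it's worth, a quick numerical check of the partial sums (plain NumPy; a sketch added here, not from the original thread) also points to $\frac{1}{n}\sin(nx)$ rather than $\sin\left(\frac{nx}{n}\right)$:

```python
import numpy as np

# Compare a truncated series sum_{n=1}^{N} (1/n) sin(n x) with f(x)
# at a few sample points away from the jump at x = 0.
def f(x):
    return np.where(x < 0, -0.5 * (np.pi + x), 0.5 * (np.pi - x))

def partial_sum(x, N=50000):
    n = np.arange(1, N + 1)
    return np.sum(np.sin(np.outer(x, n)) / n, axis=1)

xs = np.array([-2.0, -0.5, 0.5, 1.0, 2.5])
print(np.round(f(xs), 4))
print(np.round(partial_sum(xs), 4))  # should agree to roughly three or four decimals
```

Away from the jump the truncated series matches $f(x)$ closely, consistent with $b_n = \frac{1}{n}$ and a typo in the book.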