# Creating intuition about Laplace & Fourier transforms

Hey everyone,

I've been reading up on control systems theory, and needed to brush up on my Laplace transforms. I know how to transform and invert the transform for pretty much every reasonable function, so I don't have any technical issue there. My only problem is that some of the theory is not entirely intuitive to me.

This is my current understanding of the Fourier transform: we have a function, a member of an infinite-dimensional space, and we want to decompose it in terms of the basis functions $e^{j\omega t}$. To do so, we project our function onto the basis functions (analogously to taking a dot product), using the following inner product on the function space: $\langle f, g\rangle = \int_{-\infty}^{+\infty}f(t)\, g^*(t)\,dt$, which gives rise to the Fourier transform: $F(j\omega) = \int_{-\infty}^{+\infty} f(t)\ e^{-j\omega t}\,dt$. Reconstructing the original function from the coefficients times the basis functions gives $f(t) = \frac{1}{2\pi}\int_{-\infty}^{+\infty} F(j\omega)\ e^{j\omega t}\,d\omega$.
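To convince myself of this picture, I wrote a small numerical sketch: compute $F(j\omega)$ as the inner product with each basis function, then rebuild $f$ from the synthesis integral. I used a Gaussian because its transform, $\sqrt{\pi}\,e^{-\omega^2/4}$, is known in closed form; the grid limits and tolerances are just ad-hoc choices that happened to work.

```python
import numpy as np

# Test function f(t) = exp(-t^2), whose transform sqrt(pi)*exp(-w^2/4)
# is known in closed form. Grid wide enough that f has decayed to ~0.
t = np.linspace(-10, 10, 4001)
dt = t[1] - t[0]
f = np.exp(-t**2)

# Analysis: F(jw) = <f, e^{j w t}> = int f(t) e^{-j w t} dt  (Riemann sum)
w = np.linspace(-10, 10, 2001)
F = np.array([np.sum(f * np.exp(-1j * wk * t)) for wk in w]) * dt
assert np.allclose(F.real, np.sqrt(np.pi) * np.exp(-w**2 / 4), atol=1e-6)

# Synthesis: f(t) = (1/2pi) int F(jw) e^{j w t} dw, checked at a few points
dw = w[1] - w[0]
t_check = np.array([-1.0, 0.0, 0.5, 2.0])
f_rec = np.array([np.sum(F * np.exp(1j * w * tk)) for tk in t_check]) * dw / (2 * np.pi)
assert np.allclose(f_rec.real, np.exp(-t_check**2), atol=1e-6)
```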

Now, looking at the definition of the Laplace transform, it looks like it's trying to do the same thing using exponentially damped or growing sinusoids of the form $e^{\sigma t}e^{j\omega t}$, instead of the pure sinusoids of the FT. This enables representing functions which don't vanish at infinity, but can instead diverge exponentially. However, it doesn't quite fit the framework I laid out above. First of all, the inversion integral confuses me: $\mathcal{L}^{-1} \{F(s)\}(t) = f(t) = \frac{1}{2\pi j}\int_{\sigma-j\infty}^{\sigma+j\infty}F(s)e^{st}\,ds$. Why am I free to do the integration for any value of $\sigma$, as long as it's bigger than the real part of the rightmost pole? If I substitute $s = \sigma + j \omega$, $ds = j\, d\omega$, I can pull $e^{\sigma t}$ out of the integral to get $\mathcal{L}^{-1} \{F(s)\}(t) = f(t) = \frac{e^{\sigma t}}{2\pi}\int_{-\infty}^{+\infty}F(\sigma+j\omega)e^{j\omega t}\,d\omega$. It seems strange to me that the $e^{\sigma t}$ factor has to be exactly compensated by the fact that the transform is being evaluated further to the right or to the left. Also, when analyzing systems using the Laplace transform, the main thing we're interested in is the location of its zeros and poles. Intuitively, I would expect that, since the function "blows up" at the poles, there would be large contributions from those amplitudes in the inversion integral if the contour passed near the poles. But if the contour can pass arbitrarily far from the poles, that doesn't make as much sense...
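I even checked numerically that the contour really can be moved. This is a sketch using $F(s) = 1/(s+1)^2$, whose inverse transform is $t e^{-t}$; the single pole is at $s = -1$, and the frequency cutoff and step size are arbitrary choices that gave enough accuracy:

```python
import numpy as np

# Bromwich inversion of F(s) = 1/(s+1)^2, whose inverse is f(t) = t*e^{-t}.
# The only pole is at s = -1, so any sigma > -1 should give the same answer.
def bromwich_inverse(F, sigma, t, omega_max=300.0, d_omega=0.01):
    """f(t) = e^{sigma t}/(2 pi) * int F(sigma + j w) e^{j w t} dw (Riemann sum)."""
    w = np.arange(-omega_max, omega_max, d_omega)
    integrand = F(sigma + 1j * w) * np.exp(1j * w * t)
    return (np.exp(sigma * t) / (2 * np.pi)) * np.sum(integrand).real * d_omega

F = lambda s: 1.0 / (s + 1.0) ** 2

for t in (0.5, 1.0, 2.0):
    exact = t * np.exp(-t)
    # Two different contour lines, both to the right of the pole at s = -1:
    via_sigma_a = bromwich_inverse(F, sigma=0.2, t=t)
    via_sigma_b = bromwich_inverse(F, sigma=1.0, t=t)
    assert abs(via_sigma_a - exact) < 1e-3
    assert abs(via_sigma_b - exact) < 1e-3
```

Both contours agree with $t e^{-t}$, so the compensation I described really does happen; I just don't see *why*.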

I'm sorry if I'm not explaining myself as well as I wanted to, but this isn't really that easy to express...

Cheers

jbunniii
Homework Helper
Gold Member
I don't know much about Laplace transforms, so I will limit my remarks to the Fourier transform.

Your intuitive explanation of the Fourier transform is fine, but there are some technicalities.

A complex exponential, say ##g(t) = e^{i\omega t}##, is not a basis function for your inner product space. Indeed, it is not even an element of the space, because
$$\langle g,g\rangle = \int_{-\infty}^{\infty} e^{i\omega t}e^{-i\omega t} dt = \int_{-\infty}^{\infty} (1) dt = \infty$$
This problem disappears if we integrate over a finite interval, as with Fourier series. In that case, the complex exponentials are members of the inner product space. You still have to be careful about calling them a "basis" in that case.
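As a quick numerical sanity check of the finite-interval case (a sketch; the normalization ##1/\sqrt{2\pi}## is my choice so that the norms come out to 1):

```python
import numpy as np

# On the finite interval [0, 2pi] the exponentials DO have finite norm:
# with e_n(t) = e^{j n t}/sqrt(2 pi), <e_m, e_n> = 1 if m == n, else 0.
t = np.linspace(0, 2 * np.pi, 20001)
dt = t[1] - t[0]

def inner(m, n):
    # <e_m, e_n> = int_0^{2 pi} e_m(t) conj(e_n(t)) dt, via the trapezoid rule
    vals = np.exp(1j * (m - n) * t) / (2 * np.pi)
    return (np.sum(vals) - 0.5 * (vals[0] + vals[-1])) * dt

assert abs(inner(3, 3) - 1.0) < 1e-8   # finite (unit) norm
assert abs(inner(3, 5)) < 1e-8         # distinct exponentials are orthogonal
```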

First, you have to specify what kind of basis you mean. For infinite-dimensional spaces, there are (at least) two notions of basis.

A Hamel basis is a basis in the linear algebra sense: every element of the space can be expressed as a linear combination of FINITELY MANY elements of the basis. The complex exponentials are clearly not a Hamel basis. (However, Zorn's lemma does guarantee that every vector space has a Hamel basis, even if we cannot construct one explicitly!)

On the other hand, a Hilbert basis is a set of orthonormal elements of the space, the span of which is DENSE in the space. This means that you can approximate any element of the space arbitrarily closely by taking finite sums of basis elements. Of course, we have to define what "closeness" means. What it means here is that, given a Hilbert basis ##\{g_n\}##, the approximation error
$$\left\| f - \sum_{n=1}^{N} a_n g_n \right\|$$
can be made arbitrarily small by judicious choice of ##N## and ##a_n##. The norm is defined in terms of the inner product: ##\| \cdot \| = \sqrt{\langle \cdot, \cdot \rangle}##.

Note that this does NOT guarantee that ##f(x)## and ##\sum_{n=1}^{N} a_n g_n(x)## can be made arbitrarily close simultaneously for all points ##x## (pointwise convergence), only that the norm of their difference can be made as small as we like (convergence in norm).
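The square wave illustrates the distinction nicely: its partial Fourier sums converge to it in norm, but the worst-case pointwise error near the jump never goes away (the Gibbs phenomenon). A rough numerical sketch:

```python
import numpy as np

# Square wave f(t) = sign(t) on [-pi, pi]; its Fourier series is
# (4/pi) * sum over odd k of sin(k t)/k.
t = np.linspace(-np.pi, np.pi, 20001)
dt = t[1] - t[0]
f = np.sign(t)

def partial_sum(N):
    S = np.zeros_like(t)
    for k in range(1, N + 1, 2):   # only odd harmonics are present
        S += (4 / np.pi) * np.sin(k * t) / k
    return S

Ns = (9, 99, 999)
# Convergence in norm: the L2 error keeps shrinking as N grows...
errs = [np.sqrt(np.sum((f - partial_sum(N)) ** 2) * dt) for N in Ns]
assert errs[0] > errs[1] > errs[2]

# ...but the worst-case pointwise error near the jumps does not vanish
peaks = [np.max(np.abs(f - partial_sum(N))) for N in Ns]
assert all(p > 0.5 for p in peaks)
```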

Again, the above is valid for Fourier SERIES: the key distinction versus the Fourier TRANSFORM is that the integration interval is finite.

I had thought about the fact that the norm of the basis elements isn't really defined, since the integral doesn't converge, but I didn't reach any conclusion... How can you justify integrating over a finite interval?

You mention that a Hilbert basis won't guarantee pointwise convergence, but the Fourier series does converge pointwise, provided the function is sufficiently well behaved, right?