# Why should a Fourier transform not be a change of basis?


## Summary:

Fourier transform seems easily understandable as a change of basis operator. However, it is said that time and frequency domains are not bases but vector spaces. Why and what would be the consequences?
I was content with the understanding of the Fourier transform (FT) as a change of basis, from the time to the frequency basis or vice versa, an approach that I have often seen reflected in texts.

It makes sense, since it is the usual trick so often done in Physics: you have a problem that is not easily solved from the usual perspective, so you change perspective to simplify the operations; once you have solved your problem, if needed, you shift back to the original perspective. A possible example, among many others: a convolution in the time basis is simply a multiplication in the frequency basis.

Certainly, the usual terms are “time domain” and “frequency domain”. But I thought that this is because the analyzed vector is in this case also a function (a signal), whose input (the domain) comprises the dimensions (time instants in the time basis; frequencies in the frequency basis) and whose output is the magnitude of the signal along each dimension, although nothing should prevent the use of the term “basis” if one takes the vector rather than the function stance.

Furthermore, the meaning of the FT expression…

$$X(f) = \int_{-\infty}^{\infty} x(t)\, e^{-i 2\pi f t}\, dt$$

seems to shine out under that change of basis approach:

- The object is a periodic signal (even if only in the limit, at infinity), so it looks logical that the basis vector (here the basis function) is a rotating disk, represented for example by a complex exponential.

- In particular, a basis function in the frequency domain would be a disk rotating at a fixed frequency, and in order to obtain its coordinates in the time basis, one should project it over each time basis vector, i.e. each time instant.

- The next step would be carrying out a dot product (here called inner product) between the analyzing basis vector thus obtained and the analyzed signal, so as to obtain the amount or coordinate of the latter in the relevant frequency.

- There are some peculiarities, but they can be explained as logical adjustments: there is an integral in the inner product instead of a summation because we are adding an infinite, continuous set of elements; the direct and inverse transforms have opposite signs in the exponent because that is how the operation is inverted (after all, phases add in the exponent)…

- It is true that a vector space with only two bases would look weird, but I have heard that there are fractional Fourier transforms and that the time and frequency bases are the “pure” examples (separated BTW by a right angle), but there are many other intermediate bases, which pick up part of time and part of frequency.
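Under this reading, the defining integral can be checked numerically as an inner product with complex exponentials. A minimal sketch (assuming Python with NumPy; the Gaussian test signal ##e^{-\pi t^2}##, which under this convention is its own transform, is my choice):

```python
import numpy as np

# Sample x(t) = exp(-pi t^2), whose transform under the convention
# X(f) = ∫ x(t) exp(-i 2 pi f t) dt is again exp(-pi f^2).
dt = 0.001
t = np.arange(-20.0, 20.0, dt)
x = np.exp(-np.pi * t**2)

def ft(x, t, f):
    """Riemann-sum approximation of the Fourier integral at frequency f."""
    return np.sum(x * np.exp(-2j * np.pi * f * t)) * (t[1] - t[0])

for f in (0.0, 0.5, 1.0):
    print(f, ft(x, t, f).real, np.exp(-np.pi * f**2))
```

The sum over ##t## is exactly the “project onto each time instant, then accumulate” picture described in the bullets above.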

However, I am puzzled to see it categorically stated that the FT is NOT a “change of basis” operator. See for example this discussion: https://math.stackexchange.com/ques...basis-or-is-it-a-linear-transformation/123553.

I gather that this warning does not spoil the idea that the FT is a trick to carry out an operation in an easier way since it would still be a transformation from one vector space to a more convenient one.

But I do feel frustrated if this means that the easy understanding of the meaning of the equation is not valid.

So my questions are:
• Is the understanding that the FT is not a change of basis but a transformation between vector spaces a settled idea, or is there some debate about it?
• Even if the answer is that it’s a transformation between vector spaces, can we still use the idea that it is an inner product? An inner product with what, then, if not with a “basis” function?
• The link above seems to identify the reason as there being no canonical isomorphism between the two “spaces”. If so, why is that? Really, I can hardly imagine things having more structural similarity than time and frequency, which are two sides of the same coin…


Isaac0427
Summary:: Fourier transform seems easily understandable as a change of basis operator. However, it is said that time and frequency domains are not bases but vector spaces. Why and what would be the consequences?

However, I am puzzled to hear that it is categorically said that FT is NOT a “change of basis” operator. See here for example in this sense.
I have always thought of the Fourier transform the way you are thinking of it. The link to the source saying it is not did not come through for me, and I would likely be equally puzzled by that assertion.

One thought I have, without seeing the source, is that there may be some mathematical semantics at play here-- semantics a physicist probably would not care about.

jasonRF
My hunch is that you can think about it either way (edit: I'm sure you can think about it either way). I am not a mathematician, so I am not interested in arguing that one way is best, but perhaps my background in electrical engineering makes it natural to think of Fourier transforms as a change of basis.

It is easiest to think this way in discrete time. The DFT matrix ##\mathbf{D}## diagonalizes circulant matrices. What is a circulant matrix? These notes show how they are used to represent discrete-time circular convolution (convolution of periodic discrete-time functions)
http://web.mit.edu/18.06/www/Spring17/Circulant-Matrices.pdf
Basically, a discrete circular convolution ##h = g \ast f## can be written in matrix form by assembling the samples of ##g[n]## into a circulant matrix ##\mathbf{C}_g## and the elements of ##h[n]## and ##f[n]## into column vectors ##\mathbf{h}## and ##\mathbf{f}##, so that ##\mathbf{h} = \mathbf{C}_g \mathbf{f}##. However, the DFT matrix diagonalizes any circulant matrix, so that we have ##\mathbf{h} = \mathbf{D}^{-1} \mathbf{\Lambda}_g \mathbf{D}\mathbf{f}##. Of course, in real life we can save computation by using the clever FFT algorithm to compute ##\mathbf{D}\mathbf{f} = \mathrm{FFT}(\mathbf{f})## and ##\mathbf{\Lambda}_g = \mathrm{diag}(\mathrm{FFT}(\mathbf{g}))##, and finally an inverse FFT instead of ##\mathbf{D}^{-1}##. EDIT: this exact operation is at the heart of a lot of digital signal processing.
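This diagonalization can be checked directly; a minimal sketch (assuming Python with NumPy and SciPy; the random test vectors and ##N = 8## are arbitrary):

```python
import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(0)
N = 8
f = rng.standard_normal(N)
g = rng.standard_normal(N)

# Circular convolution as a matrix product h = C_g f:
# circulant(g)[n, m] = g[(n - m) mod N], exactly the kernel
# of discrete circular convolution.
C_g = circulant(g)
h_matrix = C_g @ f

# The same convolution via the DFT: D diagonalizes C_g, so
# h = D^{-1} (Lambda_g (D f)) with Lambda_g = diag(FFT(g)).
h_fft = np.fft.ifft(np.fft.fft(g) * np.fft.fft(f)).real

print(np.allclose(h_matrix, h_fft))  # True
```

The two computations agree to round-off, which is the matrix identity ##\mathbf{C}_g = \mathbf{D}^{-1}\mathbf{\Lambda}_g\mathbf{D}## in action.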

In continuous time it is similar. Instead we now have an integral convolution operator that is diagonalized by the continuous Fourier transform: ##h(t) = L_g f = \int_{-\infty}^\infty d\tau \, g(t-\tau) f(\tau) = \mathcal{F}^{-1} \left[ G(\omega) \mathcal{F}f \right] ##, where ##\mathcal{F}## is the Fourier transform and ##G(\omega) = \mathcal{F} g## now plays the role of the diagonal matrix of eigenvalues. The analogy between continuous and discrete time is striking:$$\begin{eqnarray*} \mathbf{h} = \mathbf{C}_g \mathbf{f} = \mathbf{D}^{-1}\left[ \mathbf{\Lambda}_g \mathbf{D}\mathbf{f}\right] \\ h = L_g f = \mathcal{F}^{-1} \left[ G(\omega) \mathcal{F}f \right] \end{eqnarray*}$$
I would say both look like a Fourier transform is used as a change of basis transformation to diagonalize a convolution operator.

Of course, both the DFT and the continuous Fourier transform are also linear transformations (aren't all change-of-basis matrices?).

jason

martinbn
I don't see how you can think of it as a change of basis, even on an intuitive level. What are the two bases?

It is probably better to think of it as finding the coefficients of a vector with respect to a basis.

FactChecker
It changes a lot more than the basis. It is a mapping from one space to an entirely different space.

jasonRF
It changes a lot more than the basis. It is a mapping from one space to an entirely different space.
Good question... In the discrete ##N## dimensional case it is easy, since the Fourier basis vectors are column vectors of complex sinusoids with an integer number of periods in ##N##. In this case, if you have a complex sinusoid ##\mathbf{s}## with an integer number of periods in ##N##, then ##\mathbf{D} \mathbf{s} = a \mathbf{e}_k## where ##\mathbf{e}_k## is a vector with a single non-zero element.
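The discrete statement ##\mathbf{D}\mathbf{s} = a\mathbf{e}_k## is easy to verify numerically; a quick sketch (assuming Python with NumPy, whose `np.fft.fft` applies the standard unnormalized DFT matrix; ##N = 16## and ##k = 3## are arbitrary choices):

```python
import numpy as np

N = 16
n = np.arange(N)
k = 3  # an integer number of periods in N samples
s = np.exp(2j * np.pi * k * n / N)   # complex sinusoid

S = np.fft.fft(s)  # apply the DFT matrix D to s
# All of the energy lands in bin k: S = N * e_k (up to round-off)
print(np.round(np.abs(S), 6))
```

So in the Fourier basis the sinusoid has a single non-zero coordinate, which is the sense in which the DFT is a change of basis here.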

For continuous time, at least for me, it is natural to think about these things for tempered distributions. In this case the Fourier basis is the collection of complex exponentials with arbitrary frequency, and the Fourier transform is then ##\mathcal{F} e^{i \omega_k t} = 2\pi \delta(\omega-\omega_k)##. But then we are technically no longer dealing with a Fourier integral (edit: although we are still thinking about Fourier transforms).

One way to get around this is to think of functions ##m(t) e^{i\omega_k t}##, where the 'windowing' function ##m(t)## is a member of ##\mathcal{S}##, which is the space of continuously differentiable functions with derivatives of all orders that decay faster than any power of ##1/|t|## as ##|t|\rightarrow \infty## (this is called the Schwartz space). An example is ##e^{- b \, t^2}##. I chose this space because the Fourier transform is a one-to-one and onto mapping from ##\mathcal{S}## to ##\mathcal{S}##, and because it is a natural way to construct an integrable sinusoid that has very nice properties (edit: I should have used the word 'integrable' from the start; 'finite-length' isn't correct). If we let ##M(\omega)=\mathcal{F} m##, then ##\mathcal{F}\left[ m(t) e^{i\omega_k t}\right] = M(\omega-\omega_k)##. Due to the properties of the Fourier transform, if ##m## is 'wide', then ##M## is 'narrow' (uncertainty relation). So the Fourier basis is made up of the windowed exponentials. This is what an analog spectrum analyzer is doing under the hood, and the user gets to select the width of the functions ##m##; the wider we make ##m## the narrower the frequency resolution (and of course the measurement then takes more time...).
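The trade-off between the width of ##m## and the width of ##M## can be illustrated numerically; a rough sketch (assuming Python with NumPy; the Gaussian window ##e^{-b t^2}## and the frequency ##f_k = 2## are my choices):

```python
import numpy as np

dt = 0.01
t = np.arange(-200.0, 200.0, dt)
f_k = 2.0  # frequency of the complex exponential (an arbitrary choice)

def spectral_width(b):
    """RMS width of the spectrum of m(t) e^{i 2 pi f_k t}, m(t) = exp(-b t^2)."""
    x = np.exp(-b * t**2) * np.exp(2j * np.pi * f_k * t)
    X = np.abs(np.fft.fftshift(np.fft.fft(x)))
    f = np.fft.fftshift(np.fft.fftfreq(t.size, dt))
    p = X**2 / np.sum(X**2)          # normalized power spectrum
    mu = np.sum(f * p)               # centroid, sits near f_k
    return mu, np.sqrt(np.sum((f - mu)**2 * p))

for b in (1.0, 0.1, 0.01):           # smaller b -> wider window m
    mu, w = spectral_width(b)
    print(b, round(mu, 3), w)
```

Smaller ##b## means a wider window in time and, as the output shows, a proportionally narrower spectral peak around ##f_k##, just like narrowing the resolution bandwidth on a spectrum analyzer.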

jason

jasonRF
It changes a lot more than the basis. It is a mapping from one space to an entirely different space.
I didn't see this before I posted my last post (EDIT: I obviously quoted this above, so I did see it, but I was clearly thinking about the prior post). I now understand the objection (my lack of math knowledge has caught up with me here!). Does restricting the space to ##\mathcal{S}## as I did in the previous post address this? If not, then I need to retract my assertion that it is a mathematical change of basis, although the notion can help with the intuition (at least for me).

edit: even if it does address the problem, the restriction to ##\mathcal{S}## throws out a lot of functions, so I can understand why it may not be an interesting case to many folks.

jason

jasonRF
I am clearly in over my head here - I have the urge to continuously edit my posts....

If I had had a better understanding before I posted, I should have either 1) not posted at all, or 2) only stated the finite-dimensional case, and then mentioned that technically the continuous case deals with infinite-dimensional spaces, so it is not obvious (at least to me) whether it is truly a change of basis.

Along those lines, should this thread be in the analysis forum?

Jason

martinbn
For the finite-dimensional case maybe it should stay here. And I think you might have a good point: in this case you can think of it as a change of basis. More generally, it may still be a good intuition.

Gold Member
Thanks to all for the comments!

One first thing: I have also found this other thread in this very same forum, on the same subject, which is quite helpful.

Putting all clues together, I tend to think that:

- It is true that FT (infinite frequencies, continuous) seems to be strictly speaking a transformation from one Hilbert space to another, not a change of basis. Instead, change of basis would be a valid approach for the discrete FT (finite, discrete). I am not sure about Fourier series (infinite, discrete), though.
- It is not clear to me why this happens, i.e. what is specific to the FT, as opposed to the DFT, that invalidates the “change of basis” approach for the former. Some of you ask: which basis would you be changing from? Well, I would tend to answer: the basis composed of either infinite frequencies or infinite time (or space) points, the same that you use in the context of the DFT, only with infinitely many continuous basis vectors/functions; what is the problem?
- I do see however that, if you put the direct FT and the inverse FT in matrices (filled with dots, as they have infinite entries) and multiply them, you don't get the equivalent of an identity matrix.
- In this sense, as noted in this post of the above thread, the inner product of one of the apparent basis functions with another is not 0 (it diverges) and the inner product of one with itself is not 1 (it is infinite).
- In any case, I also think that the intuition of a change of basis is better than nothing for the FT as well, since after all the “spaces” play the role of “perspectives” and the mathematical expressions are almost identical. Maybe in the FT you don't project one frequency over the time basis vectors, but it is said that you project over eigenfunctions, so it is a similar method...
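The normalization problem in those bullets is easy to see numerically: truncate the inner product of two complex exponentials to ##[-T, T]## and let ##T## grow. A sketch (assuming Python with NumPy; the frequencies 1.0 and 1.5 are arbitrary):

```python
import numpy as np

def inner(f1, f2, T, dt=0.01):
    """<e_{f1}, e_{f2}> restricted to [-T, T], as a Riemann sum."""
    t = np.arange(-T, T, dt)
    return np.sum(np.exp(2j * np.pi * (f1 - f2) * t)) * dt

for T in (10, 100, 1000):
    same = inner(1.0, 1.0, T)    # grows like 2T: the "norm" diverges
    diff = inner(1.0, 1.5, T)    # stays bounded as T grows
    print(T, round(same.real, 2), round(abs(diff), 4))
```

The self-inner product grows without bound instead of converging to a finite number that could be normalized to 1, which is exactly the obstruction to an orthonormal basis of exponentials on the infinite interval.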

Conclusion: let us accept that, technically speaking, the FT is different even if it roughly shares the same intuitive approach, but I am really eager to learn a clear and straightforward reason for that technical difference.

Gold Member
- It is true that FT (infinite frequencies, continuous) seems to be strictly speaking a transformation from one Hilbert space to another, not a change of basis. Instead, change of basis would be a valid approach for the discrete FT (finite, discrete). I am not sure about Fourier series (infinite, discrete), though.
...
- I do see however that, if you put the direct FT and the inverse FT in matrices (filled with dots, as they have infinite entries) and multiply them, you don't get the equivalent of an identity matrix.
- In this sense, as noted in this post of the above thread, the inner product of one of the apparent basis functions with another is not 0 (it diverges) and the inner product of one with itself is not 1 (it is infinite).
I have checked that in the context of Fourier series the term "basis function" is customarily used and in fact, in this context, the basis functions can be proved to be orthogonal. And although it is true that the specific terms "time basis function" and "frequency basis function" are scarcely used, I think that this is because:
- Strictly speaking, the complex exponential ##e^{i 2\pi f t}## is not the basis function. The basis function is a time point or a frequency. You only get that expression when the time point is projected over a number of frequencies, or a frequency over a number of time points. It then happens that the same expression is valid in both directions, just by swapping the role of the variable (when you project over time, the variable is time; when you project over frequencies, it is frequency).
- Actually this is not so different from what happens in any transformation between bases; take for example a 2D rotation. Basis vectors have dull names like i-hat or primed i-hat, until you project one from basis A over those of basis B, in which case you get mathematical expressions like the cosine or sine of the angle separating the two bases. Also, the sines and cosines of the rotation matrix are the same in both directions; they just have to be rearranged, as one matrix is the transpose of the other.
- The peculiarity of the Fourier case is that you get a synthetic expression encompassing all the projections of each basis vector over the vectors of the other basis and that is probably why people get attached to the over-arching expression "Fourier basis function".
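The orthogonality claimed above for Fourier series can be checked directly on one period; a sketch (assuming Python with SciPy; the period ##T = 2## and the indices 3 and 5 are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

T = 2.0  # an arbitrary period

def inner(n, m):
    """<e_n, e_m> = ∫_0^T e^{i 2 pi n t / T} conj(e^{i 2 pi m t / T}) dt."""
    re, _ = quad(lambda t: np.cos(2 * np.pi * (n - m) * t / T), 0, T)
    im, _ = quad(lambda t: np.sin(2 * np.pi * (n - m) * t / T), 0, T)
    return re + 1j * im

print(inner(3, 3))  # equals T: a finite norm, so we can normalize
print(inner(3, 5))  # ~0: distinct basis functions are orthogonal
```

On a finite interval the self-inner product is the finite number ##T##, so the basis functions can be normalized; this is exactly what fails when the interval becomes infinite.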

So the change of basis approach seems to be valid for the DFT and for Fourier series, but not for the FT. Hence the problem seems to stem from the fact that the FT does not address a naturally periodic function, and only works inasmuch as it uses the artifact of making the period (and hence also the integration interval) “infinite”. Thus it would not be a problem of discreteness, nor even of the infiniteness of the dimensions, but of the infiniteness of the interval...

I wonder if I am on the right track and if so what the reason is for this specificity (infinity of the period = interval) kicking the FT out of the "change of basis" structure.

atyy
It's usually considered a change of basis in QM, where the change from representation of the quantum state in the position basis to representation of the quantum state in the momentum basis is via a Fourier transform.
https://quantummechanics.ucsd.edu/ph130a/130_notes/node82.html
http://www.physics.umd.edu/courses/Phys622/ji/lecture9.pdf

There's the technicality that the quantum state of ordinary QM is a vector in Hilbert space, but the basis functions are not in the Hilbert space. The Hilbert space is the space of square-integrable functions, but the position and momentum basis vectors are not square-integrable functions. For this reason, one refers to rigged Hilbert space:
https://en.wikipedia.org/wiki/Rigged_Hilbert_space
https://arxiv.org/abs/quant-ph/0502053

FactChecker
I have checked that in the context of Fourier series the term "basis function" is customarily used and in fact, in this context, the basis functions can be proved to be orthogonal.
...
So the change of basis approach seems to be valid for DFT, for Fourier series and not for the FT and hence the ...
I wonder if I am on the right track and if so what the reason is for this specificity (infinity of the period = interval) kicking the FT out of the "change of basis" structure.
Usually, the phrase "change of basis" is used in reference to a mapping between two sets of basis vectors in the same vector space. I don't think that is true of the FT. The FT is a mapping from functions of time to functions of frequency, so I do not like the use of that term in that context. But I can't think of any serious problem with misusing that phrase or with thinking of it that way as long as you are aware of the issues and do not take it too literally.

Let's extend jasonRF's comments. It might be more precise to consider the space here as a Hilbert space.

Gold Member
There's the technicality that the quantum state of ordinary QM is a vector in Hilbert space, but the basis functions are not in the Hilbert space. The Hilbert space is the space of square-integrable functions, but the position and momentum basis vectors are not square-integrable functions.
Thanks atyy, very well documented as always.

What you say here, is it also applicable to time and frequency? Are the time and frequency basis vectors not square-integrable either?

Yes, those are good points.
My point about Hilbert spaces was not that I was certain about them, but rather that people were discussing functions and vector spaces and I suspected that all of the writers might be correct in a more general mathematical structure.

Sometimes physicists can make a lot of progress understanding concepts if a mathematician gives a quick comment. Physicists are not always precise enough, and sometimes don't realize that they all really agree. A mathematician can add that precision so we can more clearly understand the situation.

I am not a mathematician. So if a Hilbert space is not the concept that helps us understand, just ignore my Hilbert space comment.

I am sure only that there is a better mathematical way of answering the question of whether a Fourier transform is or is not a change of basis.
The original question is important and we all would like to find a good answer to it. I hope that this longer comment clears up why I made the possibly unhelpful suggestion about Hilbert spaces.

Gold Member
Usually, the phrase "change of basis" is used in reference to a mapping between two sets of basis vectors in the same vector space. I don't think that is true of the FT. The FT is a mapping from functions of time to functions of frequency
You also say above that FT is a "mapping from one vector space to another vector space".

I understand that here you are pointing out what is technically correct, strictly speaking, even if you accept that the change of basis intuition may be helpful/didactic. But I wonder what the technical issue is.

I thought that one speaks of a different vector space when the object itself changes. But here we have a signal, which is a unique object being analyzed from different perspectives, different representation systems. Why do we need to identify the perspectives with the spaces and not with the bases?

Does the technical issue have to do with the difficulty of identifying orthonormal bases?

Certainly, in the case of the FT (continuous, infinite interval) finding an orthonormal basis is actually an issue, because multiplying a basis function by itself does not give 1 but infinity. And then we cannot in principle divide by infinity, because that is undefined..., but even here I wonder if you can actually divide by the same infinity (the same eternity, if the differential is time) that you had multiplied by, which would be perfectly defined.

But if that were the problem, there would be no issue with Fourier series or DFT, right? Or is the problem another one?

atyy
Thanks atyy, very well documented as always.

What you say here, is it also applicable to time and frequency? Are the time and frequency basis vectors not square-integrable either?
Yes, position-momentum of QM are exactly analogous to time-frequency of the traditional Fourier transform (so the position-momentum uncertainty principle of QM is the same as the time-frequency uncertainty of traditional Fourier analysis). So the (representation of the) basis vectors are not square-integrable either.

... Certainly, in the case of the FT (continuous, infinite interval) finding an orthonormal basis is actually an issue, because multiplying a basis function by itself does not give 1 but infinity. And then we cannot in principle divide by infinity, because that is undefined..., but even here I wonder if you can actually divide by the same infinity (the same eternity, if the differential is time) that you had multiplied by, which would be perfectly defined. ...
Sometimes multiplying and dividing by infinity is valid. One method that might work is to take the integral over a finite interval and take the limit as the interval expands to infinity. Another might be to use L'Hôpital's rule before taking the limit. One of Feynman's tricks was to take the infinite Fourier integral, rotate it into the complex plane, apply a damping exponential factor to make it converge, and then rotate the finite answer back into the real plane. If you haven't taken relativistic quantum mechanics, you might not be familiar with that trick. As always, if this comment doesn't help, just ignore it. Ed
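The damping idea can be sketched numerically: with a factor ##e^{-\varepsilon |t|}## the divergent integral ##\int e^{i 2\pi f t} dt## converges to the Lorentzian ##2\varepsilon/(\varepsilon^2 + (2\pi f)^2)##, which concentrates into a delta-like spike as ##\varepsilon \to 0##. A sketch (assuming Python with NumPy; the truncation ##T## and step are my choices, large/small enough that the error is negligible):

```python
import numpy as np

def damped(f, eps, T=1000.0, dt=0.01):
    """∫ e^{i 2 pi f t} e^{-eps |t|} dt, truncated to [-T, T]."""
    t = np.arange(-T, T, dt)
    return np.sum(np.exp(2j * np.pi * f * t) * np.exp(-eps * np.abs(t))).real * dt

for eps in (1.0, 0.1):
    for f in (0.0, 0.5):
        # closed form of the damped integral: 2 eps / (eps^2 + (2 pi f)^2)
        exact = 2 * eps / (eps**2 + (2 * np.pi * f)**2)
        print(eps, f, damped(f, eps), exact)
```

As ##\varepsilon## shrinks, the value at ##f = 0## blows up like ##2/\varepsilon## while the off-peak values vanish, which is the regularized version of the ##2\pi\delta## result quoted earlier in the thread.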

FactChecker
You also say above that FT is a "mapping from one vector space to another vector space".
Yes.
I understand that here you are pointing out what is technically correct, strictly speaking, even if you accept that the change of basis intuition may be helpful/didactic.
I agree, as long as it is not taken too literally in advanced operations.
But I wonder what the technical issue is.
That is a good question which I don't feel qualified to answer authoritatively. But suppose a person tried to use ##\vec{x} - A\vec{x}##, which can be done if ##A## is a change of basis transformation. In the case of the Fourier transformation, ##A\vec{x}## is in a different vector space than the one ##\vec{x}## is in. The subtraction would not make sense.

etotheipi
But suppose a person was interested in the difference ##\vec{x} - A\vec{x}##, where ##A## is a change of basis transformation. It would not make sense if the vector spaces were different.
Isn't the change of basis transformation the identity linear transformation on ##V##, and is usually presented as its matrix representation ##[\text{id}_V]_{\beta}^{\beta'}## w.r.t. the two different bases of ##V##? So it is an automorphism ##\text{id}_V : V \rightarrow V##. And ##\vec{x} - A\vec{x} = \vec{x} - \text{id}_V \vec{x} = \vec{0}##.

FactChecker
Isn't the change of basis transformation the identity linear transformation on ##V##, and is usually presented as its matrix representation ##[\text{id}_V]_{\beta}^{\beta'}## w.r.t. the two different bases of ##V##? So it is an automorphism ##\text{id}_V : V \rightarrow V##. And ##\vec{x} - A\vec{x} = \vec{x} - \text{id}_V \vec{x} = \vec{0}##.
Yes. I misspoke. I meant that A mapped to a different vector space. So it is not a change of basis transformation. I will correct my sloppy post.

atyy
##A## could be the identity linear transformation with a matrix representation ##[A]^{\beta}_{\beta'}## with respect to the bases ##\beta## and ##\beta'## (##[A]^{\beta}_{\beta'}## is the change of basis matrix). Then you have$$A\vec{x} = \vec{x}$$$$[A]^{\beta}_{\beta'} [\vec{x}]_{\beta} = [\vec{x}]_{\beta'}$$and ##\vec{x} \in V##, ##A\vec{x} \in V##, so you can definitely subtract them.
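A small finite-dimensional sketch of this last point (assuming Python with NumPy; the rotated basis is an arbitrary example): the change-of-basis matrix maps coordinate vectors between the two bases while the underlying vector stays the same.

```python
import numpy as np

theta = np.pi / 6
beta = np.eye(2)                                      # standard basis (columns)
beta_p = np.array([[np.cos(theta), -np.sin(theta)],   # rotated basis (columns)
                   [np.sin(theta),  np.cos(theta)]])

x = np.array([2.0, 1.0])                              # the vector itself

x_beta = np.linalg.solve(beta, x)                     # coordinates w.r.t. beta
x_beta_p = np.linalg.solve(beta_p, x)                 # coordinates w.r.t. beta'

# The change-of-basis matrix maps beta-coordinates to beta'-coordinates
P = np.linalg.solve(beta_p, beta)
print(np.allclose(P @ x_beta, x_beta_p))              # same vector, new coordinates
print(np.allclose(beta @ x_beta, beta_p @ x_beta_p))  # both reconstruct the same x
```

Both bases reconstruct one and the same ##\vec{x}##, which is the sense in which ##A\vec{x} = \vec{x}## even though the coordinate columns differ.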