Why should a Fourier transform not be a change of basis?

Saw
TL;DR Summary
Fourier transform seems easily understandable as a change of basis operator. However, it is said that time and frequency domains are not bases but vector spaces. Why and what would be the consequences?
I was content with the understanding of the Fourier transform (FT) as a change of basis, from the time to the frequency basis or vice versa, an approach that I have often seen reflected in texts.

It makes sense, since it is the usual trick so often used in Physics: you have a problem that is not easily solved from the usual perspective, so you change perspective to simplify the operations; once you have solved your problem, you shift back, if needed, to the original perspective. A possible example, among many others: a convolution in the time basis is simply a multiplication in the frequency basis.

Certainly, the usual terms are “time domain” and “frequency domain”. But I thought that this is because the analyzed vector is in this case also a function (a signal), whose input (the domain) is the dimensions (time instants in the time basis; frequencies in the frequency basis) and whose output is the magnitude of the signal in each dimension. Nothing should prevent the use of the term “basis” if one takes the vector stance instead of the function stance.

Furthermore, the meaning of the FT expression…

$$X(f) = \int_{-\infty}^{\infty} x(t)\, e^{-i 2\pi f t}\, dt$$

seems to shine out under that change of basis approach:

- The object is a periodic signal (even if it is only so in the limit, at infinity), so it looks logical that the basis vector (here a basis function) is a rotating phasor, represented for example by a complex exponential.

- In particular, a basis function in the frequency domain would be a phasor rotating at a fixed frequency, and in order to obtain its coordinates in the time basis, one should project it onto each time basis vector, i.e. each time instant.

- The next step would be carrying out a dot product (here called inner product) between the analyzing basis vector thus obtained and the analyzed signal, so as to obtain the amount or coordinate of the latter in the relevant frequency.

- There are some peculiarities, but they can be explained as logical adjustments: there is an integral in the inner product instead of a summation because we are adding an infinite, continuous set of elements; the direct and inverse transforms have opposite signs in the exponent because that is the way to invert the operation (after all, phases add in the exponent)…

- It is true that a vector space with only two distinguished bases would look weird, but I have heard that there are fractional Fourier transforms and that the time and frequency bases are the “pure” examples (separated, by the way, by a right angle), while there are many other intermediate bases, which pick up part time and part frequency.
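As a sanity check on this inner-product reading, the integral above can be approximated by a Riemann sum, i.e. a literal dot product of the sampled signal with a sampled complex exponential. A rough numerical sketch in Python (the Gaussian test signal and the grid are arbitrary choices; under this convention the transform of ##e^{-\pi t^2}## is known to be ##e^{-\pi f^2}##):

```python
import numpy as np

# Sketch: approximate X(f) = integral of x(t) e^{-i 2 pi f t} dt by a Riemann
# sum, i.e. an inner product of the sampled signal with a sampled "basis
# function". Test signal: the Gaussian x(t) = exp(-pi t^2), whose transform
# under this convention is exp(-pi f^2) (a self-dual pair).
t = np.linspace(-10, 10, 20001)   # truncated, discretized "time axis"
dt = t[1] - t[0]
x = np.exp(-np.pi * t**2)

def ft(f):
    # dot product of x with the conjugated complex exponential at frequency f
    return np.sum(x * np.exp(-2j * np.pi * f * t)) * dt

print(ft(0.5).real, np.exp(-np.pi * 0.25))  # the two agree closely
```

The sum only stands in for the integral because the Gaussian decays fast; for signals that do not decay, the truncation itself already hints at the normalization troubles that come with an infinite interval.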

However, I am puzzled to hear that it is categorically said that FT is NOT a “change of basis” operator. See here for example in this sense: https://math.stackexchange.com/ques...basis-or-is-it-a-linear-transformation/123553.

I gather that this warning does not spoil the idea that the FT is a trick to carry out an operation in an easier way since it would still be a transformation from one vector space to a more convenient one.

But I do feel frustrated if this means that the easy understanding of the meaning of the equation is not valid.

So my questions are:
  • Is the understanding that the FT is not a change of basis but a transformation between vector spaces a settled idea, or is there some debate about it?
  • Even if the answer is that it is a transformation between vector spaces, can we still use the idea that it is an inner product? An inner product with what, then, if not with a “basis” function?
  • The link above seems to identify the reason as there being no canonical isomorphism between the two “spaces”. If so, why? Really, I can hardly imagine things with more structural similarity than time and frequency, which are two sides of the same coin…
 
Saw said:
Summary:: Fourier transform seems easily understandable as a change of basis operator. However, it is said that time and frequency domains are not bases but vector spaces. Why and what would be the consequences?

However, I am puzzled to hear that it is categorically said that FT is NOT a “change of basis” operator. See here for example in this sense.
I have always thought of the Fourier transform the way you are thinking of it. There is no link in your post to the source that says it is not, and I would likely be equally puzzled by that assertion.

One thought I have, without seeing the source, is that there may be some mathematical semantics at play here, semantics a physicist probably would not care about.
 
Here are a couple of links in that sense.

This first one is the one I forgot to link to:

https://math.stackexchange.com/ques...basis-or-is-it-a-linear-transformation/123553

In this second one you can find the statement in the comments:

https://math.stackexchange.com/questions/3800610/how-does-orthogonality-between-frequency-and-time-bases-condition-the-expression3
 
My hunch is you can think about it either way (edit: I'm sure you can think about it either way). I am not a mathematician so am not interested in arguing that one way is best, but perhaps my background in electrical engineering makes it natural to think of Fourier transforms as a change of basis.

It is easiest to think this way in discrete time. The DFT matrix ##\mathbf{D}## diagonalizes circulant matrices. What is a circulant matrix? These notes show how they are used to represent discrete-time circular convolution (convolution of periodic discrete-time functions)
http://web.mit.edu/18.06/www/Spring17/Circulant-Matrices.pdf
Basically, a discrete circular convolution of ##h = g \ast f## can be written in matrix form by assembling the samples of ##g[n]## into a circulant matrix ##\mathbf{C}_g##, the elements of ##h[n]## and ##f[n]## into column vectors ##\mathbf{h}## and ##\mathbf{f}##, so that ##\mathbf{h} = \mathbf{C}_g \mathbf{f}##. However, the DFT matrix diagonalizes any circulant matrix, so that we have ##\mathbf{h} = \mathbf{D}^{-1} \mathbf{\Lambda}_g \mathbf{D}\mathbf{f}##. Of course, in real life we can save computation by using the clever FFT algorithm to compute ##\mathbf{D}\mathbf{f} = FFT(\mathbf{f}) ##, and ##\Lambda_g = diag( FFT(\mathbf{g}))##, and finally an inverse FFT instead of ##\mathbf{D}^{-1}##. EDIT: this exact operation is at the heart of a lot of digital signal processing.
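This matrix identity can be checked numerically; a rough sketch in Python (##N## and the signals are arbitrary test data):

```python
import numpy as np

# Sketch: circular convolution as a circulant matrix-vector product,
# and the same result via the FFT route h = D^{-1} Lambda_g D f.
rng = np.random.default_rng(0)
N = 8
g = rng.standard_normal(N)
f = rng.standard_normal(N)

# Circulant matrix built from g: C[i, j] = g[(i - j) mod N]
C = np.array([[g[(i - j) % N] for j in range(N)] for i in range(N)])

h_matrix = C @ f                                         # h = C_g f
h_fft = np.fft.ifft(np.fft.fft(g) * np.fft.fft(f)).real  # h = D^{-1} Lambda_g D f

# The DFT matrix D diagonalizes C, with eigenvalues FFT(g)
n = np.arange(N)
D = np.exp(-2j * np.pi * np.outer(n, n) / N)
Lam = D @ C @ np.linalg.inv(D)

print(np.allclose(h_matrix, h_fft), np.allclose(Lam, np.diag(np.fft.fft(g))))
```

Both checks pass: the convolution computed two ways agrees, and conjugating the circulant matrix by the DFT matrix leaves a diagonal matrix whose entries are exactly ##FFT(\mathbf{g})##.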

In continuous time it is similar. Instead we now have an integral convolution operator that is diagonalized by the continuous Fourier transform: ##h(t) = L_g f = \int_{-\infty}^\infty d\tau \, g(t-\tau) f(\tau) = \mathcal{F}^{-1} \left[ G(\omega) \mathcal{F}f \right] ##, where ##\mathcal{F}## is the Fourier transform and ##G(\omega) = \mathcal{F} g## now plays the role of the diagonal matrix of eigenvalues. The analogy between continuous and discrete time is striking:$$
\begin{eqnarray*}
\mathbf{h} = \mathbf{C}_g \mathbf{f} = \mathbf{D}^{-1}\left[ \mathbf{\Lambda}_g \mathbf{D}\mathbf{f}\right] \\
h = L_g f = \mathcal{F}^{-1} \left[ G(\omega) \mathcal{F}f \right]
\end{eqnarray*}$$
I would say both look like a Fourier transform is used as a change of basis transformation to diagonalize a convolution operator.

Of course, both the DFT and the continuous Fourier transform are also linear transformations (aren't all change-of-basis matrices?).

jason
 
I don't see how you can think of it as a change of basis, even on an intuitive level. What are the two bases?

It is probably better to think of it as finding the coefficients of a vector with respect to a basis.
 
It changes a lot more than the basis. It is a mapping from one space to an entirely different space.
 
FactChecker said:
It changes a lot more than the basis. It is a mapping from one space to an entirely different space.

Good question... In the discrete ##N## dimensional case it is easy, since the Fourier basis vectors are column vectors of complex sinusoids with an integer number of periods in ##N##. In this case, if you have a complex sinusoid ##\mathbf{s}## with an integer number of periods in ##N##, then ##\mathbf{D} \mathbf{s} = a \mathbf{e}_k## where ##\mathbf{e}_k## is a vector with a single non-zero element.
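A quick numerical sketch of that statement (##N## and ##k## are arbitrary choices):

```python
import numpy as np

# A complex sinusoid with an integer number of periods k in N samples
# is mapped by the DFT to a single non-zero coordinate: D s = a e_k,
# with a = N here.
N, k = 16, 3
n = np.arange(N)
s = np.exp(2j * np.pi * k * n / N)
S = np.fft.fft(s)

print(np.argmax(np.abs(S)))   # all the "energy" sits in bin k
print(np.allclose(S[k], N))   # the scalar a equals N for this normalization
```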

For continuous time, at least for me, it is natural to think about these things for tempered distributions. In this case the Fourier basis is the collection of complex exponentials with arbitrary frequency, and the Fourier transform is then ##\mathcal{F} e^{i \omega_k t} = 2\pi \delta(\omega-\omega_k)##. But technically we are no longer thinking about a Fourier integral (edit: although we are still thinking about Fourier transforms).

One way to get around this is to think of functions ##m(t) e^{i\omega_k t}##, where the 'windowing' function ##m(t)## is a member of ##\mathcal{S}##, which is the space of continuously differentiable functions with derivatives of all orders that decay faster than any power of ##1/|t|## as ##|t|\rightarrow \infty## (this is called the Schwartz space). An example is ##e^{- b \, t^2}##. I chose this space because the Fourier transform is a one-to-one and onto mapping from ##\mathcal{S}## to ##\mathcal{S}##, and because it is a natural way to construct a finite-length sinusoid that has very nice properties (edit: I should have used the word 'integrable'; 'finite-length' isn't correct). If we let ##M(\omega)=\mathcal{F} m##, then ##\mathcal{F}\left[ m(t) e^{i\omega_k t}\right] = M(\omega-\omega_k)##. Due to the properties of the Fourier transform, if ##m## is 'wide', then ##M## is 'narrow' (uncertainty relation). So the Fourier basis is made up of the windowed exponentials. This is what an analog spectrum analyzer is doing under the hood, and the user gets to select the width of the functions ##m##; the wider we make ##m## the narrower the frequency resolution (and of course the measurement then takes more time...).
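The modulation property used above, ##\mathcal{F}\left[ m(t) e^{i\omega_k t}\right] = M(\omega-\omega_k)##, can be sketched numerically for the Gaussian window (the values of ##b## and ##\omega_k## are arbitrary; the angular-frequency convention ##\mathcal{F}x(\omega)=\int x(t)e^{-i\omega t}dt## is assumed):

```python
import numpy as np

# Sketch: F[m(t) exp(i wk t)](w) = M(w - wk) for a Gaussian window
# m(t) = exp(-b t^2), whose transform is M(w) = sqrt(pi/b) exp(-w^2/(4b)).
b, wk = 0.5, 3.0
t = np.linspace(-30, 30, 60001)
dt = t[1] - t[0]

def ft(x, w):
    # F x(w) = integral of x(t) e^{-i w t} dt, approximated by a Riemann sum
    return np.sum(x * np.exp(-1j * w * t)) * dt

m = np.exp(-b * t**2)
M = lambda w: np.sqrt(np.pi / b) * np.exp(-w**2 / (4 * b))

# The transform of the modulated window, evaluated at w = wk + 1, should
# equal the unmodulated transform M evaluated at 1: a shifted copy of M.
val = ft(m * np.exp(1j * wk * t), wk + 1.0)
print(np.isclose(val.real, M(1.0)))
```

Making ##b## smaller (a wider window) visibly narrows the bump ##M##, which is the uncertainty trade-off mentioned above.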

jason
 
FactChecker said:
It changes a lot more than the basis. It is a mapping from one space to an entirely different space.
I didn't see this before I posted my last post (EDIT: I obviously quoted this above, so I did see it, but I was clearly thinking about the prior post). I now understand the objection (my lack of math knowledge has caught up with me here!). Does restricting the space to ##\mathcal{S}## like I did in the previous post address this? If not, then I need to retract my assertion that it is a mathematical change of basis, although the notion can help with the intuition (at least for me).

edit: even if it does address the problem, the restriction to ##\mathcal{S}## throws out a lot of functions, so I can understand why it may not be an interesting case to many folks.

jason
 
I am clearly in over my head here - I have the urge to continuously edit my posts...

If I had a better understanding before I posted, I should have either 1) not posted at all, or 2) only stated the finite-dimensional case, and then mentioned that technically the continuous case deals with infinite-dimensional spaces, so it is not obvious (at least to me) whether it is truly a change of basis.

Along those lines, should this thread be in the analysis forum?

Jason
 
  • #10
For the finite-dimensional case, maybe it should stay here. And I think you may have a good point that in this case you can think of it as a change of basis. More generally, it may still be a good intuition.
 
  • #11
Thanks to all for the comments!

One first thing: I have also found this other thread in this very same forum, on the same subject, which is quite helpful.

Putting all clues together, I tend to think that:

- It is true that FT (infinite frequencies, continuous) seems to be strictly speaking a transformation from one Hilbert space to another, not a change of basis. Instead, change of basis would be a valid approach for the discrete FT (finite, discrete). I am not sure about Fourier series (infinite, discrete), though.
- It is not clear to me why this happens: what is specific to the FT, compared with the DFT, that invalidates the "change of basis" approach for the former? Some of you ask: which basis would you be changing from? Well, I would tend to answer: the basis composed of the infinitely many frequencies or the infinitely many time (or space) points, the same that you use in the context of the DFT, only with infinitely many continuous basis vectors/functions. What is the problem?
- I do see however that, if you put the direct FT and the inverse FT in matrices (filled with dots, as they have infinite entries) and multiply them, you don't get the equivalent of an identity matrix.
- In this sense, as noted in this post of the above thread, the inner product of one of the apparent basis functions with another is not 0 (but divergent) and the inner product of one with itself is not 1 (but infinite).
- In any case, I also think that the intuition of a change of basis is better than nothing for the FT as well, since after all the "spaces" play the role of "perspectives" and the mathematical expressions are almost identical. Maybe in the FT you don't project one frequency onto the time basis vectors, but it is said that you project onto eigenfunctions, so it is a similar method...
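The divergence mentioned in the bullet about inner products is easy to see numerically: over a finite interval ##[-T,T]## the "norm" of a complex exponential grows like ##2T##, while the cross products oscillate without settling to 0. A rough sketch (the frequencies 2 and 3 are arbitrary choices):

```python
import numpy as np

# "Inner products" of the would-be continuous basis functions e^{i w t},
# truncated to [-T, T]: <e_w1, e_w2> = integral of e^{i (w1 - w2) t} dt.
def inner(w1, w2, T, n=200001):
    t = np.linspace(-T, T, n)
    dt = t[1] - t[0]
    return np.sum(np.exp(1j * (w1 - w2) * t)) * dt

for T in (10, 100, 1000):
    # self product grows like 2T; cross product stays bounded but never -> 0
    print(T, round(inner(2.0, 2.0, T).real, 2), round(inner(2.0, 3.0, T).real, 2))
```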

Conclusion: let us accept that, technically speaking, the FT is different even if it roughly shares the same intuitive approach, but I am really eager to learn a clear and straightforward reason for such a technical difference.
 
  • #12
Saw said:
- It is true that FT (infinite frequencies, continuous) seems to be strictly speaking a transformation from one Hilbert space to another, not a change of basis. Instead, change of basis would be a valid approach for the discrete FT (finite, discrete). I am not sure about Fourier series (infinite, discrete), though.
...
- I do see however that, if you put the direct FT and the inverse FT in matrices (filled with dots, as they have infinite entries) and multiply them, you don't get the equivalent of an identity matrix.
- In this sense, as noted in this post of the above thread, the inner product of one of the apparent basis functions with another is not 0 (but divergent) and the inner product of one with itself is not 1 (but infinite).

I have checked that in the context of Fourier series the term "basis function" is customarily used and in fact, in this context, the basis functions can be proved to be orthogonal. And although it is true that the specific terms "time basis function" and "frequency basis function" are scarcely used, I think that this is because:
- Strictly speaking, the complex exponential exp(i*2pi*t*f) is not the basis function. The basis function is a time point or a frequency. You only get the said expression when the time point is projected onto a number of frequencies, or a frequency onto a number of time points. It then happens that the same expression is valid in the two directions, just by swapping the role of the variable (when you project onto time, the variable is time; when you project onto frequencies, it is frequency).
- Actually this is not so different from what happens in any transformation between bases; take for example a 2D rotation. Basis vectors have dull names like i-hat or primed i-hat, until you project one from basis A onto those of basis B, in which case you get mathematical expressions like the cosine or sine of the angle of separation between the two bases. Also, the sines and cosines of the rotation matrix are the same in both directions; they just have to be rearranged, as one matrix is the transpose of the other.
- The peculiarity of the Fourier case is that you get a synthetic expression encompassing all the projections of each basis vector onto the vectors of the other basis, which is probably why people get attached to the overarching expression "Fourier basis function".

So the change of basis approach seems to be valid for the DFT and for Fourier series, but not for the FT. Hence the problem seems to stem from the fact that the FT does not address a naturally periodic function and only works by the artifice of making the period (and hence also the integration interval) "infinite". Thus it would not be a problem of discreteness, nor even of the infiniteness of the dimensions, but of the infiniteness of the interval...

I wonder if I am on the right track and if so what the reason is for this specificity (infinity of the period = interval) kicking the FT out of the "change of basis" structure.
 
  • #13
It's usually considered a change of basis in QM, where the change from representation of the quantum state in the position basis to representation of the quantum state in the momentum basis is via a Fourier transform.
https://quantummechanics.ucsd.edu/ph130a/130_notes/node82.html
http://www.physics.umd.edu/courses/Phys622/ji/lecture9.pdf

There's the technicality that the quantum state of ordinary QM is a vector in Hilbert space, but the basis functions are not in the Hilbert space. The Hilbert space is the space of square-integrable functions, but the position and momentum basis vectors are not square-integrable functions. For this reason, one refers to rigged Hilbert space:
https://en.wikipedia.org/wiki/Rigged_Hilbert_space
https://arxiv.org/abs/quant-ph/0502053
 
  • #14
Saw said:
I have checked that in the context of Fourier series the term "basis function" is customarily used and in fact, in this context, the basis functions can be proved to be orthogonal.
...
So the change of basis approach seems to be valid for DFT, for Fourier series and not for the FT and hence the ...
I wonder if I am on the right track and if so what the reason is for this specificity (infinity of the period = interval) kicking the FT out of the "change of basis" structure.
Usually, the phrase "change of basis" is used in reference to a mapping between two sets of basis vectors in the same vector space. I don't think that is true of the FT. The FT is a mapping from functions of time to functions of frequency, so I do not like the use of that term in that context. But I can't think of any serious problem with misusing that phrase or with thinking of it that way as long as you are aware of the issues and do not take it too literally.
 
  • #15
Let's extend JasonRF's comments. Perhaps it might be more precise to consider the space here as a Hilbert space.
 
  • #16
atyy said:
There's the technicality that the quantum state of ordinary QM is a vector in Hilbert space, but the basis functions are not in the Hilbert space. The Hilbert space is the space of square-integrable functions, but the position and momentum basis vectors are not square-integrable functions.

Thanks atyy, very well documented as always.

What you say here, is it also applicable to time and frequency? Are the time and frequency basis vectors not square-integrable either?
 
  • #17
Yes, those are good points.
My point about Hilbert spaces was not that I was certain about them, but rather that people were discussing functions and vector spaces and I suspected that all of the writers might be correct in a more general mathematical structure.

Sometimes physicists can make a lot of progress understanding concepts if a mathematician gives a quick comment. Physicists are not always precise enough and sometimes don't realize that they all really agree. A mathematician can add that precision so we can more clearly understand the situation.

I am not a mathematician. So if a Hilbert space is not the concept that helps us understand, just ignore my Hilbert space comment.

I am sure only that there is a better mathematical way of answering the question of whether a Fourier transform is or is not a change of basis.
The original question is important and we all would like to find a good answer to it. I hope that this longer comment clears up why I made the possibly unhelpful suggestion about Hilbert spaces.
 
  • #18
FactChecker said:
Usually, the phrase "change of basis" is used in reference to a mapping between two sets of basis vectors in the same vector space. I don't think that is true of the FT. The FT is a mapping from functions of time to functions of frequency

You also say above that FT is a "mapping from one vector space to another vector space".

I understand that here you are pointing out what is technically correct, strictly speaking, even if you accept that the change of basis intuition may be helpful/didactic. But I wonder what the technical issue is.

I thought that you talk about a different vector space when the object changes. But here we have a signal which is a unique object, which is being analyzed from different perspectives, different representation systems. Why do we need to identify the perspectives with the spaces and not the bases?

Does the technical issue have to do with the difficulty of identifying orthonormal bases?

Certainly, in the case of the FT (continuous, infinite interval) finding an orthonormal basis is actually an issue, because multiplying a basis function by itself does not give 1 but infinity. And then we cannot in principle divide by infinity, because that is undefined..., but even here I wonder if you can actually divide by the same infinity (the same eternity, if the differential is time) that you had multiplied by, which would be perfectly defined.

But if that were the problem, there would be no issue with Fourier series or DFT, right? Or is the problem another one?
 
  • #19
Saw said:
Thanks atty, very documented as always.

What you say here, is it also applicable to time and frequency? Are the time and frequency basis vectors not square-integrable either?

Yes, position-momentum of QM are exactly analogous to time-frequency of the traditional Fourier transform (so the position-momentum uncertainty principle of QM is the same as the time-frequency uncertainty of traditional Fourier analysis). So the (representation of the) basis vectors are not square-integrable either.
 
  • #20
Saw said:
... Certainly, in the case of the FT (continuous, infinite interval) finding an orthonormal basis is actually an issue, because multiplying a basis function by itself does not give 1 but infinity. And then we cannot in principle divide by infinity, because that is undefined..., but even here I wonder if you can actually divide by the same infinity (the same eternity, if the differential is time) that you had multiplied by, which would be perfectly defined. ...
Sometimes multiplying and dividing by infinity is valid. One method that might work is to take the integral over a finite interval and take the limit as the interval expands to infinity. Another might be to use L'Hôpital's rule before taking the limit. One of Feynman's tricks was to take the infinite Fourier integral, rotate it into the complex plane, apply a damping exponential factor to make it converge, and then rotate the finite answer back to the real axis. If you haven't taken relativistic quantum mechanics, you might not be familiar with that trick. As always, if this comment doesn't help, just ignore it. Ed
 
  • #21
Saw said:
You also say above that FT is a "mapping from one vector space to another vector space".
Yes.
I understand that here you are pointing out what is technically correct, strictly speaking, even if you accept that the change of basis intuition may be helpful/didactic.
I agree, as long as it is not taken too literally in advanced operations.
But I wonder what the technical issue is.
That is a good question which I don't feel qualified to answer authoritatively. But suppose a person tried to use ##\vec{x} - A\vec{x}##, which can be done if ##A## is a change of basis transformation. In the case of the Fourier transformation, ##A\vec{x}## is in a different vector space than the one ##\vec{x}## is in. The subtraction would not make sense.
 
  • #22
FactChecker said:
But suppose a person was interested in the difference ##\vec{x} - A\vec{x}##, where ##A## is a change of basis transformation. It would not make sense if the vector spaces were different.

Isn't the change of basis transformation the identity linear transformation on ##V##, and is usually presented as its matrix representation ##[\text{id}_V]_{\beta}^{\beta'}## w.r.t. the two different bases of ##V##? So it is an automorphism ##\text{id}_V : V \rightarrow V##. And ##\vec{x} - A\vec{x} = \vec{x} - \text{id}_V \vec{x} = \vec{0}##.
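In a concrete low-dimensional sketch (Python/NumPy; the 45° rotation is an arbitrary choice of second basis):

```python
import numpy as np

# Sketch: a change-of-basis matrix represents the identity map id_V;
# only the coordinates change, not the vector itself.
theta = np.pi / 4
B = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # columns: basis beta' in beta coords

x = np.array([1.0, 2.0])   # coordinates of x in basis beta
c = B.T @ x                # coordinates of the SAME vector in beta' (B is orthogonal)

# reconstructing from the beta' coordinates recovers x, so x - id_V(x) = 0
print(np.allclose(B @ c, x))
```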
 
  • #23
etotheipi said:
Isn't the change of basis transformation the identity linear transformation on ##V##, and is usually presented as its matrix representation ##[\text{id}_V]_{\beta}^{\beta'}## w.r.t. the two different bases of ##V##? So it is an automorphism ##\text{id}_V : V \rightarrow V##. And ##\vec{x} - A\vec{x} = \vec{x} - \text{id}_V \vec{x} = \vec{0}##.
Yes. I misspoke. I meant that A mapped to a different vector space. So it is not a change of basis transformation. I will correct my sloppy post.
 
  • #24
FactChecker said:
Yes. I agree, as long as it is not taken too literally in advanced operations. That is a good question which I don't feel qualified to answer authoritatively. But suppose a person tried to use ##\vec{x} - A\vec{x}##, which can be done if ##A## is a change of basis transformation. In the case of the Fourier transformation, ##A\vec{x}## is in a different vector space than the one ##\vec{x}## is in. The subtraction would not make sense.

It doesn't make sense to do ##\vec{x} - A\vec{x}## if ##A## is a change of basis matrix. It only makes sense if ##A## is a linear operator. In that expression, one must use the same basis for ##\vec{x}##, ##A## and ##A\vec{x}##.
 
  • #25
atyy said:
It doesn't make sense to do ##\vec{x} - A\vec{x}## if ##A## is a change of basis matrix. It only makes sense if ##A## is a linear operator. In that expression, one must use the same basis for ##\vec{x}##, ##A## and ##A\vec{x}##.

##A## could be the identity linear transformation with a matrix representation ##[A]^{\beta}_{\beta'}## with respect to the bases ##\beta## and ##\beta'## (##[A]^{\beta}_{\beta'}## is the change of basis matrix). Then you have$$A\vec{x} = \vec{x}$$$$[A]^{\beta}_{\beta'} [\vec{x}]_{\beta} = [\vec{x}]_{\beta'} $$and ##\vec{x} \in V##, ##A\vec{x} \in V##, so you can definitely subtract them.
 
  • #26
atyy said:
It doesn't make sense to do ##\vec{x} - A\vec{x}## if ##A## is a change of basis matrix. It only makes sense if ##A## is a linear operator. In that expression, one must use the same basis for ##\vec{x}##, ##A## and ##A\vec{x}##.
Every matrix is a linear operator.
 
  • #27
FactChecker said:
Every matrix is a linear operator.

No. It has to transform correctly under the change of basis.
 
  • #28
FactChecker said:
Every matrix is a linear operator.

I think we should perhaps say that every linear transformation ##T## has a matrix representation ##\mathcal{M}(T)## (assuming we deal with finite-dimensional vector spaces only). Just like a vector is not equal to its coordinate matrix w.r.t. some basis, a linear transformation is not equal to its coordinate matrix w.r.t. two bases.
 
  • #29
FactChecker said:
It changes a lot more than the basis. It is a mapping from one space to an entirely different space.

The way to see the Fourier transform heuristically as a change of basis is to treat the function ##\phi(x)## that is being Fourier transformed as the representation of the vector, ie. ##\phi(x)## are the coordinates of a vector, with ##x## being the index for the coordinates. Its Fourier transform ##\tilde{\phi}(k)## is the representation of the same vector in different coordinates.

The technical issue is that the representation of the basis vectors is not normalizable, but heuristically it works out as follows. Suppose one were working in 2D, then orthonormal basis vectors would have representation ##[1,0]## and ##[0,1]##, ie. each basis vector should have a peak for one index, and zero for other indices. For a continuous index, one would get a Dirac delta with a peak at only one ##x## (since ##x## is the index). In the position representation, the position basis vectors are ##\delta(x-y)## (where ##y## is a continuous index) and the momentum basis vectors are ##\text{exp}(ikx)## (where k is a continuous index).

So suppose the vector we want to represent is ##|\Phi\rangle##, then in the position representation, we can write it heuristically (without being careful about normalization and signs) using either a sum of position basis vectors or momentum basis vectors as follows:
##\langle x |\Phi\rangle = \phi(x) = \int{\phi(y) \delta(x-y)dy} = \int{\tilde{\phi}(k) \text{exp}(ikx)} dk##
 
  • #30
This was already said, but because the thread was revived I will repeat it. To illustrate the problem with the fact that there are two different spaces, one needs to look at the simplest case, periodic functions in one variable. The Fourier transform maps ##L^2(\mathbb R/\mathbb Z)## to ##l^2(\mathbb Z)##. So it would be hard to view this as a change of basis.
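To make the two spaces tangible, a rough numerical sketch (Python; the test function ##\cos(2\pi t)## is an arbitrary choice, with nonzero coefficients only at ##n=\pm 1##):

```python
import numpy as np

# Sketch: the Fourier-series map sends a 1-periodic function (an element
# of L^2(R/Z)) to a sequence of coefficients (an element of l^2(Z)):
# c_n = integral over [0,1) of x(t) e^{-2 pi i n t} dt, via a Riemann sum.
t = np.linspace(0, 1, 10000, endpoint=False)
dt = t[1] - t[0]
x = np.cos(2 * np.pi * t)

def c(n):
    return np.sum(x * np.exp(-2j * np.pi * n * t)) * dt

print([round(abs(c(n)), 6) for n in (-2, -1, 0, 1, 2)])  # [0.0, 0.5, 0.0, 0.5, 0.0]
```

The input lives in a function space, the output in a sequence space, which is the asymmetry being pointed out here.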
 
  • #31
atyy said:
The way to see the Fourier transform heuristically as a change of basis is to treat the function ##\phi(x)## that is being Fourier transformed as the representation of the vector, ie. ##\phi(x)## are the coordinates of a vector, with ##x## being the index for the coordinates. Its Fourier transform ##\tilde{\phi}(k)## is the representation of the same vector in different coordinates.
##\langle x |\Phi\rangle = \phi(x) = \int{\phi(y) \delta(x-y)dy} = \int{\tilde{\phi}(k) \text{exp}(ikx)} dk##

I am glad you used this term, "heuristically", because that was one of the objectives of the OP: to confirm that the view of the FT as something akin to a change of basis can help to solve the problem of better understanding its mechanics, mechanics which consist precisely of solving problems by changing "perspective" or "representation system".

It is fine for me if the FT is not, strictly speaking, a change of basis. As FactChecker also said, the analogy may fly

FactChecker said:
as long as it is not taken too literally in advanced operations.

Yes, analogies cannot be taken too far. They only work within their domain of applicability, which is given by their practical purpose. In this case, it is solving problems from another "angle". (By the way, it is said that the angle between time and frequency domains is 90 degrees...) If this other angle is another "space" instead of a "basis", that simply means that we need a generalized term that, at least for this purpose, covers the two things: "representation system", for example?

A different thing is that the second scope is, yes, understanding well why "change of basis" is not technically accurate. Here I still have questions:

* Does this apply only to the continuous case with infinite interval?

* The basis functions are not square-integrable, they don't belong to the same vector space. I see, although I still have to assimilate it...

*
atyy said:
The technical issue is that the representation of the basis vectors is not normalizable,

This is linked, I understand, to the fact that the product of one basis vector (function) with itself is not 1 but infinity. But it would be 1 if you could normalize by dividing by the same infinite time interval over which you have integrated... Physics4Fun gave some solutions by means of which you could divide by infinity. In any case, I tend to think that this should not be an insurmountable problem.

* Then there is the issue of the subtraction not being possible, but I did not grasp if in the end this objection was maintained.

* Latest objection:

martinbn said:
The Fourier transform maps ##L^2(\mathbb R/\mathbb Z)## to ##l^2(\mathbb Z)##.

Could you elaborate on that?
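As a side note on the normalization bullet above, here is a small numerical sketch (numpy; the function name is mine) of the idea of dividing by the window length: the windowed, normalized product of a complex exponential with itself is 1 for any window, while the cross-product between distinct frequencies tends to 0 as the window grows.

```python
import numpy as np

def normalized_inner(f1, f2, T, n=200001):
    """Windowed, normalized product of e^{i 2 pi f1 t} and e^{i 2 pi f2 t}:
    (1/T) * integral over [-T/2, T/2] of e1 * conj(e2) dt, as a Riemann sum."""
    t = np.linspace(-T/2, T/2, n)
    dt = t[1] - t[0]
    return np.sum(np.exp(2j*np.pi*(f1 - f2)*t)) * dt / T

# Equal frequencies: the normalized self-product is ~1 for any window length T.
print(abs(normalized_inner(3.0, 3.0, T=10.0)))   # ~1.0
# Distinct frequencies: the cross-product tends to 0 as T grows;
# without the 1/T the self-product would instead diverge with T.
print(abs(normalized_inner(3.0, 5.0, T=10.0)))   # ~0.0
```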
 
  • #32
Is the terminology "Fourier transform" unambiguous? Is it to be a map from a vector space to a possibly different vector space of coefficients, or is it to be a map from a vector space of coefficients into itself?

On the one hand we can define a transformation ##c_f(v)## from a vector space ##V## into the (in general different) vector space ##C## whose elements are coefficients of vectors in ##V## in some basis ##B_f##. (e.g. ##V## can be an n-dimensional vector space of real-valued functions defined on [0,1] and ##C## can be the vector space of n-tuples of real numbers)

On the other hand we can consider a transformation ##T## from ##C## into itself defined as follows: To find ##T(c)## let ##v## be the vector in ##V## whose coefficients in the basis ##B_t## are the n-tuple ##c##. Then ##T(c)## is defined as ##c_f(v)##, which is the n-tuple of coefficients for ##v## in a different basis ##B_f##.

Does "Fourier transform" refer to a function like ##c_f(v)## or does it refer to a function like ##T##?
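In the finite-dimensional analogue the distinction is easy to make concrete: taking the (unitary) DFT matrix as the change-of-basis matrix, ##T## is a map from n-tuples to n-tuples. A sketch in numpy, with my own names:

```python
import numpy as np

n = 4
# C: the space of n-tuples of coefficients. B_t is the standard basis;
# B_f is the discrete Fourier basis, whose change-of-basis matrix is the
# DFT matrix (unitary because of the 1/sqrt(n) normalization).
F = np.array([[np.exp(-2j*np.pi*j*k/n) for k in range(n)]
              for j in range(n)]) / np.sqrt(n)

def T(c):
    """Map the B_t coefficients of a vector to its B_f coefficients."""
    return F @ c

c = np.array([1.0, 2.0, 0.0, -1.0])   # coefficients in B_t
c_f = T(c)                            # coefficients of the same vector in B_f

# T maps C into itself, and being unitary it preserves norms (Parseval).
print(np.allclose(np.linalg.norm(c), np.linalg.norm(c_f)))   # True
```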
 
  • #33
martinbn said:
The Fourier transform maps ##L^2(\mathbb R/\mathbb Z)## to ##l^2(\mathbb Z)##. So it would be hard to view this as a change of basis.

I think you're thinking of Fourier series. A function doesn't need to be periodic to have a Fourier transform.
 
  • #34
Saw said:
This is linked, I understand, to the fact that the product of one basis vector (function) with itself is not 1 but infinity. But it would be 1 if you could normalize by dividing by the same infinite time interval over which you have integrated... Physics4Fun gave some solutions by means of which you could divide by infinity. In any case, I tend to think that this should not be an insurmountable problem.

If you extend the notion of "basis", you may not need the property that "basis functions" be square-integrable. The Dirac delta function, which is a "basis function", is not even a proper function, and cannot be multiplied by itself. In quantum mechanics, the quantum state must be square-integrable so that we can calculate probabilities. Since not all "basis functions" are square integrable, they can be "basis functions" but not quantum states. Take a look at rigged Hilbert spaces to see how it may be possible to extend the notion of "basis". Here is one of the previously linked references and a couple of others about rigged Hilbert spaces.

http://galaxy.cs.lamar.edu/~rafaelm/webdis.pdf
Quantum Mechanics in Rigged Hilbert Space Language
Rafael de la Madrid Modino

https://arxiv.org/abs/quant-ph/0502053
The role of the rigged Hilbert space in Quantum Mechanics
R. de la Madrid

https://arxiv.org/abs/1411.3263
Quantum Physics and Signal Processing in Rigged Hilbert Spaces by means of Special Functions, Lie Algebras and Fourier and Fourier-like Transforms
Enrico Celeghini, Mariano A. del Olmo

Celeghini and del Olmo state the interesting point that for a rigged Hilbert space, some bases may be countably infinite, whereas others may be uncountably infinite. This is in contrast to Hilbert space where all bases have the same cardinality.

Karen Smith's notes are also interesting as she discusses ideas that even for Hilbert spaces (without going to rigged Hilbert spaces), there are different notions of what a basis is.

http://www.math.lsa.umich.edu/~kesmith/infinite.pdf
Bases for Infinite Dimensional Vector Spaces
Karen E. Smith
 
  • #35
Infrared said:
I think you're thinking of Fourier series. A function doesn't need to be periodic to have a Fourier transform.
I am talking about the Fourier transform in general. Fourier series is a special case. If you have a locally compact abelian group ##G##, its dual ##\widehat{G}## is as well, and the Fourier transform takes a function with domain the group, ##f:G\rightarrow\mathbb C##, to a function on the dual group, ##\widehat{f}:\widehat{G}\rightarrow\mathbb C##. In the case of ##G=\mathbb R##, the dual is (noncanonically) isomorphic to itself. In the case of ##G=\mathbb R/\mathbb Z##, the dual is ##\widehat{G}=\mathbb Z##.

https://en.wikipedia.org/wiki/Pontryagin_duality
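Heuristically, the ##G=\mathbb R/\mathbb Z## case can be checked numerically: a 1-periodic function is sent to a sequence of coefficients indexed by ##\mathbb Z##. A sketch in numpy (the names are mine):

```python
import numpy as np

def fourier_coefficient(func, k, samples=100000):
    """k-th Fourier coefficient of a 1-periodic function: the integral of
    func(t) e^{-i 2 pi k t} over one period, approximated by a Riemann sum."""
    t = np.arange(samples) / samples
    return np.mean(func(t) * np.exp(-2j*np.pi*k*t))

def g(t):
    return np.cos(2*np.pi*t)   # an element of L^2(R/Z)

# The transform of g lives on Z: c_1 = c_{-1} = 1/2, every other c_k = 0.
print(fourier_coefficient(g, 1))    # ~0.5
print(fourier_coefficient(g, -1))   # ~0.5
print(fourier_coefficient(g, 2))    # ~0.0
```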
 
  • #36
I agree that is true, but it looks to me like the locally compact group considered in the OP and throughout the thread is ##(\mathbb{R},+),## so I'm not sure why the Fourier transform on a circle is directly relevant.

Working over ##\mathbb{R}##, you can definitely view the Fourier transform as map from a space to itself, e.g. use the space of Schwartz functions. I still agree it doesn't look like a change of basis.
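One concrete instance of the "space to itself" statement: the Gaussian ##e^{-\pi t^2}## is a Schwartz function that the Fourier transform (with the ##e^{-i2\pi ft}## convention from the OP) maps to itself. A numerical sketch, names mine:

```python
import numpy as np

def ft(x, f, T=20.0, n=200001):
    """Approximate X(f) = integral x(t) e^{-i 2 pi f t} dt by a Riemann
    sum over a window [-T/2, T/2] wide enough for x to have decayed."""
    t = np.linspace(-T/2, T/2, n)
    dt = t[1] - t[0]
    return np.sum(x(t) * np.exp(-2j*np.pi*f*t)) * dt

def gauss(t):
    return np.exp(-np.pi*t**2)   # a Schwartz function

# The transform of exp(-pi t^2) is again exp(-pi f^2): same space, same function.
for f in [0.0, 0.5, 1.0]:
    print(ft(gauss, f).real, np.exp(-np.pi*f**2))
```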
 
  • #37
The other heuristic that suggests looking at the Fourier transform as something like a change of basis is the fact that it diagonalizes differentiation. Wikipedia's article on Spectral Theory makes the interesting comment:

"There have been three main ways to formulate spectral theory, each of which find use in different domains. After Hilbert's initial formulation, the later development of abstract Hilbert spaces and the spectral theory of single normal operators on them were well suited to the requirements of physics, exemplified by the work of von Neumann.[5] The further theory built on this to address Banach algebras in general. This development leads to the Gelfand representation, which covers the commutative case, and further into non-commutative harmonic analysis.

The difference can be seen in making the connection with Fourier analysis. The Fourier transform on the real line is in one sense the spectral theory of differentiation qua differential operator. But for that to cover the phenomena one has already to deal with generalized eigenfunctions (for example, by means of a rigged Hilbert space). On the other hand it is simple to construct a group algebra, the spectrum of which captures the Fourier transform's basic properties, and this is carried out by means of Pontryagin duality."
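That diagonalization is easy to see numerically: under the discrete transform, differentiating a smooth periodic function becomes pointwise multiplication by ##i2\pi f##. A numpy sketch (the discretization choices are mine):

```python
import numpy as np

# Sample a smooth 1-periodic function.
n = 1024
t = np.arange(n) / n
x = np.sin(2*np.pi*t) + 0.5*np.cos(6*np.pi*t)

# In the Fourier basis, d/dt is the diagonal operator "multiply mode f by i 2 pi f".
f = np.fft.fftfreq(n, d=1/n)                       # integer frequencies
dx = np.fft.ifft(1j*2*np.pi*f * np.fft.fft(x)).real

exact = 2*np.pi*np.cos(2*np.pi*t) - 3*np.pi*np.sin(6*np.pi*t)
print(np.max(np.abs(dx - exact)))                  # ~0 (machine precision)
```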
 
  • #38
Infrared said:
I agree that is true, but it looks to me like the locally compact group considered in the OP and throughout the throughout the thread is ##(\mathbb{R},+),## so I'm not sure why the Fourier transform on a circle is directly relevant.

Working over ##\mathbb{R}##, you can definitely view the Fourier transform as map from a space to itself, e.g. use the space of Schwartz functions. I still agree it doesn't look like a change of basis.
Well, I was thinking about Fourier in general, and the circle case makes the point obvious. I also think that it doesn't look like a change of basis. Even more, I am not sure how standard the terminology about position and momentum bases is, but it is somewhat misleading. The spaces in question have countable bases, while these position and momentum "bases" are uncountable.
 
  • #39
In some cases, the energy basis is a true basis that is countable, while position and momentum "bases" are generalized "bases" that are uncountable. See the rigged Hilbert spaces references above. It is interesting that even for vector spaces, whether the basis is countable or not depends on the definition, and may differ between a Fourier basis and a Hamel basis.
 
  • #40
atyy said:
In some cases, the energy basis is a true basis that is countable, while position and momentum "bases" are generalized "bases" that are uncountable. See the rigged Hilbert spaces references above. It is interesting that even for vector spaces, whether the basis is countable or not depends on the definition, and may differ between a Fourier basis and a Hamel basis.
The rigged Hilbert spaces do not change anything about the fact that the position "basis" is uncountable. I didn't see where in the references it is called a basis.
 
  • #43
martinbn said:
I didn't see where in the references it is called a basis.

Take a look at post #34.
 
  • #44
atyy said:
Take a look at post #34.
I did; it doesn't change anything. References to rigged Hilbert spaces will not change the fact that the exponentials are uncountable and the spaces are separable.
 