Momentum operator as the generator of translations.

alexepascual
The explanations I have found in quantum mechanics books as to why the momentum operator is considered to be the "generator of translations" are a little difficult and not very intuitive.
Could someone help me on this?
What I am looking for is some explanation in terms of pictures, or at least in terms of states and what you do to them (change them, measure them, change basis, etc.)
I'll appreciate it.
--Alex--
 
Try http://www.math.ohio-state.edu/~gerlach/math/BVtypset/node111.html
 
It might be appropriate to look into why angular momentum is the generator of rotations first, because this is slightly easier to understand. Once you have the basic idea, the connection between ordinary momentum and translations becomes easier to understand.
 
Robphy and Slyboy,
Thanks for your answers, that'll get me going on this topic. I'll have to think about it, read the article, think some more...
I am sure I'll have more questions about it later. Now I'll have to do my homework (the one you guys gave me).
Thanks again,
--Alex--
 
Do you have a problem with the classical mechanics version, too?
 
Turin,
I guess I do have a problem with this concept in classical mechanics too, as I don't remember studying it at that time. I now looked for it in Thornton, and I don't see it.
However, I think the link posted by Robphy talks about translations of a classical function.
From all I have seen until now, it appears that the clue is understanding the expression of the translation as a Taylor series, and then converting it to an exponential.
I'll keep thinking about it. If you have any clues, I'll appreciate it.
 
I am hoping more to get a clue from your thread here than give clues.

The only things I understand about it are that the canonical momentum is the generator of generalized position translations (which covers linear as well as angular, and whatever non-physical abstractions you decide to dream up). The fact that this is true for linear momentum and Cartesian coordinates of a free particle I believe to be the best starting point. And the issue is not trivialized by considering only one-dimensional motion, so I [humbly] suggest looking into that.

The Hamiltonian is just ~p^2. What the Hamiltonian means in classical mechanics is something for which I suppose you must come to your own personal level of acceptance. I personally like the "total" energy interpretation, with the one exception that it is, in general, only the "generalized" or "canonical" energy, and may not necessarily represent the physical ability of the system to do work. It has a conjugate, the time, and the Hamiltonian is, in fact, the generator of time translations of the system.

So, anyway, the partial derivative with respect to the momentum gives you the rate of change of position. In this way I understand it as the generator of position translation. However, there is this issue of infinitesimal vs. finite, and the need to go to exponentiation to get the finite translation. I am not at all sure about this, but I think that may be one big difference between Classical and Quantum.
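
For what it's worth, here is the compact Hamiltonian-mechanics statement of all this (my own addition, standard textbook material rather than anything from this post): an infinitesimal canonical transformation with generator G changes any phase-space function f by delta f = epsilon {f, G}, so

\delta f = \epsilon \, \{ f, G \} \quad\Longrightarrow\quad
G = p :\;\; \delta x = \epsilon \{x, p\} = \epsilon ,\quad \delta p = \epsilon \{p, p\} = 0 ;
\qquad
G = H :\;\; \frac{df}{dt} = \{ f, H \} .

Taking G = p shifts x by epsilon and leaves p alone, which is exactly the sense in which momentum generates position translations; taking G = H reproduces the equations of motion, i.e. the Hamiltonian generates time translations.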

I will try to think about this a little tonight.
 
slyboy said:
It might be appropriate to look into why angular momentum is the generator of rotations first, because this is slightly easier to understand. Once you have the basic idea, the connection between ordinary momentum and translations becomes easier to understand.

I have been reading your responses and have been contemplating orientation.


http://focus.aps.org/stories/v10/st3/pic-v10-st3-1.jpg

Seeing double. Researchers have caught glimpses of a rare event in which a single photon splits in two. This calorimeter, which contains 400 kg of liquid krypton, detected the photon pairs.

http://focus.aps.org/story/v10/st3


To be able to create this "diversion", could you not set up spin orientations in cryptography? This would be the gist of Penrose's quanglement issue, displayed?

Also, understanding that a "medium" is essential between cryptography pairs, how could we move the understanding of the http://superstringtheory.com/forum/stringboard/messages22/66.html ( matrices ) through Feynman toy models and understand that there must be a way in which to interpret dynamical movement in the spacetime fabric?

Do you understand this? Any correction from others appreciated as well.

A strange thought just crossed my mind about Alice. Imagine, a http://wc0.worldcrossing.com/WebX?14@134.wWtJbPzXbda.6@.1dde7e87/2 . Turn on your speakers if you like :smile: Don't forget to scroll down right away to the post in question.
 
Do you understand this? Any correction from others appreciated as well.

No. Unfortunately, I haven't really got a clue what you are talking about. What do you mean by cryptography requires a "medium"?
 
  • #10
slyboy said:
What do you mean by cryptography requires a "medium"?

Fiber optics.

I am more interested in teleportation, and spin orientations being affected.

The quantum world is a complex issue here, but if we are to speak about quantum geometry and gravity, these issues must be addressed?

I believe this subject in terms of teleportation has potential. That we move from computerization to teleportation raises an interesting consideration for me.

If the graviton has dimensional significance, then the interpretation of that quantum world must be based on teleportation principles?
 
  • #11
Quantum crypto doesn't necessarily require a "medium". See the experiments on free-space quantum cryptography, for example.

I don't really see what teleportation has to do with gravity.
 
  • #12
slyboy said:
Quantum crypto doesn't necessarily require a "medium". See the experiments on free-space quantum cryptography, for example.

On the second part, for sure.


http://www.esi-topics.com/enc/interviews/Prof-Nicolas-Gisin.jpg

Quantum physics is in an especially interesting phase. Until recently, the conceptual difficulties were considered of limited importance, since they had no practical effects. This led John Bell to his famous declaration: QM is fine FAPP (For All Practical Purposes)! But today technology and the discovery of the power of quantum information processing have dramatically changed the picture. Today, conceptual questions have potential applications, and technological breakthroughs open the way to new fundamental tests of the theory. My goal is to be an active player in this exciting dialog between fundamental and applied physics.

http://www.esi-topics.com/enc/interviews/Prof-Nicolas-Gisin.html

I was thinking of the Professor's early work with entanglement and the link he forged. His experiment with fiber optics was over ten miles and attracted great interest from the telephone companies in Switzerland.


I don't really see what teleportation has to do with gravity

Spin orientations can be affected by magnetic fields? Orientations can be affected by gravitational fields? GR on a classical level in regards to Gravity Probe B, while at the quantum level how so? No?

So there is this "distance", and no medium ( this is not about the aether question so often brought up, but is about the significance of dimensions)? :smile:
 
  • #13
From generator of translations to teleportation, wow!
Answering the original question: try working the operations out with the generator of infinitesimal translation. Express it as a Taylor expansion to first order, then group the derivatives with respect to the three dimensions and you get a gradient operator, which is proportional to the momentum operator in the spatial representation.
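
Spelling that recipe out (my own addition; p-hat = -i hbar nabla is the usual position-representation momentum operator):

f(\mathbf{x} + \boldsymbol{\epsilon}) \approx f(\mathbf{x}) + \boldsymbol{\epsilon} \cdot \nabla f(\mathbf{x})
= \left( 1 + \tfrac{i}{\hbar}\, \boldsymbol{\epsilon} \cdot \hat{\mathbf{p}} \right) f(\mathbf{x}) ,
\qquad \hat{\mathbf{p}} = -i\hbar \nabla .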
 
  • #14
Thank you Tavi. This week and part of last week I have been kind of busy working on other stuff. In my free time I have been doing a little more reading from Feynman. These last few days I have been a little too tired at night to work on derivations. But as soon as I can I'll go back to it and I'll consider using the first two terms in a Taylor series.
Thanks again,
Alex
 
  • #15
Hey alexepascual, I vigorously recommend Sakurai's Modern Quantum Mechanics. It is a very readable and approachable introduction. I haven't studied the collisions part, but the rest is great!
 
  • #16
By the same token, I am a bit puzzled about angular momentum. From Newton, it seems that any central force field will preserve it, even if not spherically symmetric. But for Noether's theorem, rotational symmetry seems a prerequisite. I am missing something.
 
  • #17
Isn't a central force requirement the same thing as rotational symmetry?
 
  • #18
Ok guys, I am back.
I read the link suggested by Robphy. But I still have some questions.
I will try for the moment to stay away from rotations and concentrate on translations in one spatial dimension.
It appears that the math allows you to treat translations using an operator without having to plug in the momentum. From this, I would assume that this operator is a mathematical beast that is more general than the specific applications we may find in quantum mechanics. I am trying to narrow the field as much as possible. So I will try to understand the generator of translations as a mathematical instrument without involving momentum.

If I have a function f(x), and I move this function to the left by a distance a, the new function f_a(x) will be equivalent to f(x+a).
f(x+a) can be expanded into a Taylor series, and it can be proven to be equivalent to exp(a d/dx)f(x).
I haven't read anything that would indicate that this is only true for certain functions. It appears it would work for any function. Is that right?
Now, d/dx is called the generator of translations.
If we can plug in any finite number for "a" and let the operator do its work for all values of x of the original function to obtain the new function, that would be great, and I would understand why d/dx could be called "the generator of translations".
Now, if we think of the operator as a matrix, and divide the motion in small steps, then for each step we would apply the operator with a/N instead of a.
Each time we apply the operator, the function will move a little bit, so the result of d/dx for the same value of x the next time won't be the same. This shows that we can't just calculate the derivative for each value of x for the initial position and use that in the diagonal of the operator.
If I lost you here, I think I am just getting to the point (by a different route) that this operator does not have the position kets as its eigenvectors (using the language of QM.) So the matrix can't be diagonal when in the x-basis.
Now, I understand that exp(a d/dx)f(x) can be expanded into a Taylor series to carry out the calculations, but that doesn't seem to make it any easier.
What I am getting at, is that if I wanted to use the beautifully simple expression exp( a d/dx) to write a computer program to move f(x) by a finite distance, I doubt it would be of much help.
If on the other hand I had been able to write a diagonal matrix where the diagonal elements are a*exp(d/dx) and apply that matrix once, that would have been great. I call it a matrix because that's the way I am visualizing it, although I realize it would be more of a "machine" than a matrix, and that's why in this case it is better to call it an operator.
I hope I didn't confuse you too much. Maybe I was expecting too much from this "generator of translations". If you understand my confusing post, maybe you can point out mistakes in my reasoning that may be causing my difficulties.
I'll appreciate your input.
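
As a quick sanity check of the exp(a d/dx) idea, here is a minimal Python sketch (my own, not from the thread); for f(x) = x^2 the Taylor series terminates after the second derivative, so the shift comes out exact:

Code:
import math
import numpy as np

x = np.linspace(-5.0, 5.0, 11)
a = 2.0

f = x**2                      # f(x) = x^2
df = 2 * x                    # f'(x)
d2f = np.full_like(x, 2.0)    # f''(x); all higher derivatives vanish

# [exp(a d/dx) f](x) = f + a f' + (a^2/2!) f'' -- the series terminates here
shifted = f + a * df + (a**2 / math.factorial(2)) * d2f

print(np.allclose(shifted, (x + a)**2))   # True: the function was moved left by a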
 
  • #19
arivero said:
By the same token, I am a bit puzzled about angular momentum. From Newton, it seems that any central force field will preserve it, even if not spherically symmetric. But for Noether's theorem, rotational symmetry seems a prerequisite. I am missing something.


This is a subtle remark !
I guess you think of a force F = f(r, θ, φ) 1_r; so it is a central force (only a 1_r component) but it is not spherically symmetric. Well, I guess that the clue is that you cannot derive such a force from a potential, and hence it cannot be formulated in a Lagrangian fashion, so Noether's theorem (which only applies to Lagrangian systems) doesn't work...

cheers,
Patrick.
 
  • #20
alex,
Two brief comments (I will refer to Shankar for more detail later; I remember this translation operator being mentioned in there, but I will need to refresh.):
For some reason, I think that there should be a √(-1) in the exp, but I'm not sure.
(Why) do you think that the translation operator should be diagonal in the position basis?
 
  • #21
Turin:
I think the i (imaginary unit) only appears when you use momentum as the generator. If you look at the momentum operator in the x-basis, you'll see that it has an i, so the i you are mentioning cancels that i.
I am pretty sure you don't need the i unless you use the momentum operator. For the moment I am trying to keep things as simple as possible and not use momentum.
I don't think the matrix should be diagonal in the x-basis. I only said that if it were diagonal, it would be nice, and the spatial translation operator in its exponential form would be computationally more useful.
I understand that the formalism does not necessarily seek computational efficiency, but it does seek simplicity (in its form) and elegance.
The only method that I can think of to carry out the operation of the translation operator is to convert it back to a Taylor series.
Of course it would be easier to just shift the x, but I guess in Hilbert space that's not the way to do things.
 
  • #22
alexepascual said:
If you look at the momentum operator in the x-basis, you'll see that it has an i.
Of course :redface: . I was thinking about it with the momentum in there, and I was thinking that d/dx was the momentum. Whoops. I should have also realized that the i was not needed just by considering the Taylor expansion.




alexepascual said:
I only said that if it were diagonal, it would be nice, and the spatial translation operator in its exponential form would be computationally more useful.
By its very nature it cannot be diagonal in the position basis, as it takes a position to another position. If it were diagonal, then it would be useless, because it would represent a trivial translation of a position to itself.




alexepascual said:
... it would be easier to just shift the x, but I guess in Hilbert space that's not the way to do things.
I don't quite understand what you are saying here.
 
  • #23
alexepascual said:
The explanations I have found in quantum mechanics books as to why the momentum operator is considered to be the "generator of translations" are a little difficult and not very intuitive.
Could someone help me on this?
What I am looking for is some explanation in terms of pictures, or at least in terms of states and what you do to them (change them, measure them, change basis, etc.)
I'll appreciate it.
What I am about to discuss is something which I would have thought is discussed in the references you are using. But just in case ...

Let X and P be the position and momentum observables for a particle moving in one dimension. Suppose that we take the physical measuring device which measures "x" and translate it by an amount "a" in the positive x-direction. Then, the new observable for position is given by

X' = X - a .

This is so because any measurement which would have yielded the "result" (eigenvalue) x, relative to the original observable X, now yields the "result" (new eigenvalue) x' = x -a, relative to the new observable X'.

As it turns out, the linear operator U(a) defined by

U(a) = e^{-iPa/ħ}

is a unitary operator, and it has the property

U(a) X U(a)† = X - a = X' .

-----------------

Instead of considering a translation of the measuring device, we can alternatively consider a translation of the apparatus which prepares the quantum system to be in a state |f>. A translation of that apparatus through a distance "a" in the positive x-direction results in a new state |f'> which is given by

|f'> = U(a)|f> .

In position representation, it turns out that

f'(x) = <x|f'> = <x|(U(a)|f>) = (<x|U(a))|f> = <x-a|f> = f(x-a) .

-----------------

Next, consider the translation of an entire experimental arrangement through a distance "a" in the positive x-direction. That is, we move all instruments - those pertaining to both preparation and measurement of the quantum system. We then find that for any observable A measured in the experiment, the "matrix elements" of A are invariant; i.e.

<g'|A'|f'> = <g|A|f> .

This is so because

|f'> = U(a)|f>, |g'> = U(a)|g>, and A' = U(a)AU(a)† ,

and it happens that U(a) is a unitary operator. What we have, in effect, done is performed a "change of basis".

In relation to this scenario, the following four statements are equivalent:

(1) all matrix elements are invariant;
(2) U(a) is unitary;
(3) a translation of the entire experimental arrangement results in a "change of basis";
(4) the space we are working in has "translational symmetry".

-----------------

Finally, note that, in the above discussion, the active perspective has been taken. That is, we have imagined that various "instruments" of an experimental arrangement have been "actively" moved from one location in space to another. In a scenario like the last one, in which all of the instruments are moved together as a unit, we can, in virtue of "translational symmetry", take the passive perspective, whereby the "coordinate system" itself is translated by the amount "-a".
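
One quick way to verify the property U(a)XU(a)† = X - a stated above (my own addition, via the standard operator identity):

e^{A} X e^{-A} = X + [A, X] + \tfrac{1}{2!} [A, [A, X]] + \cdots ,
\qquad A = -\frac{ia}{\hbar} P .

Since [X, P] = iħ, the first commutator is [A, X] = (-ia/ħ)[P, X] = -a, a pure number; every higher commutator therefore vanishes, leaving U(a)XU(a)† = X - a.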
 
  • #24
alexepascual said:
It appears that the math allows you to treat translations using an operator without having to plug-in the momentum.
The translation operator uses "d/dx" ... and "d/dx" is(!) precisely the operator "iP/ħ" in x-representation; i.e.

(Pf)(x) = <x|P|f> = -iħ (df/dx) .

Nevertheless, you can still try to understand it without any explicit mention of "momentum"; i.e. just by talking about "d/dx".

alexepascual said:
So I will try to understand the generator of translations as a mathematical instrument without involving momentum.

If I have a function f(x), and I move this function to the left by a distance a, the new function f_a(x) will be equivalent to f(x+a).
f(x+a) can be expanded into a Taylor series, and it can be proven to be equivalent to exp(a d/dx)f(x).
I haven't read anything that would indicate that this is only true for certain functions. It appears it would work for any function. Is that right?
Yes, what you say is (essentially) correct. (However, since we are talking about functions which are equal to their Taylor series expansions, we need to restrict to analytic functions.)

alexepascual said:
Now, if we think of the operator as a matrix, and divide the motion in small steps, then for each step we would apply the operator with a/N instead of a.
Each time we apply the operator, the function will move a little bit, so the result of d/dx for the same value of x the next time won't be the same. This shows that we can't just calculate the derivative for each value of x for the initial position and use that in the diagonal of the operator.
If I lost you here, I think I am just getting to the point (by a different route) that this operator does not have the position kets as its eigenvectors (using the language of QM.) So the matrix can't be diagonal when in the x-basis.
Exactly! ... And Turin offers a very simple argument for why this is so:

turin said:
By its very nature it cannot be diagonal in the position basis, as it takes a position to another position. If it were diagonal, then it would be useless, because it would represent a trivial translation of a position to itself.
Here is another way of saying the same thing. If an operator is diagonal in the "x-basis", then it must commute with the x-operator. But (for the infinitesimal case (and, therefore, for the finite case)) this is obviously not so:

[x, d/dx]f = x(d/dx)f - (d/dx)(xf) = -f ;

that is,

[x, d/dx] = -1 .

Now ... if we look at all those "crazy" Taylor expansions and try to see what makes a linear operator L a suitable "candidate" for being a "generator" of translations, represented by e^{-iLa} for the finite case, then we find that the necessary and sufficient condition is just

[X, L] = i .

This condition, in turn, uniquely defines L up to the addition of an arbitrary function of X (i.e. L and L + f(X) are the "same" in this regard). We simply "throw away" all those possible "extra" functions of X (since they commute with X), leaving us with the unique choice:

L = P/ħ .

And so ... "P/ħ" is the generator of translations.

Next:

alexepascual said:
If we can plug in any finite number for "a" and let the operator do its work for all values of x of the original function to obtain the new function, that would be great, and I would understand why d/dx could be called "the generator of translations".
... Now, I understand that exp(a d/dx)f(x) can be expanded into a Taylor series to carry out the calculations, but that doesn't seem to make it any easier.
... The only method that I can think of to carry out the operation of the translation operator is to convert it back to a Taylor series.
Of course it would be easier to just shift the x, but I guess in Hilbert space that's not the way to do things.
The expansion into a Taylor series is not intended to be a tool for calculational purposes! It is the manner in which we prove(!) that

[e^{a(d/dx)}f](x) = f(x + a)

within any domain over which the function f is analytic.

Finally:

alexepascual said:
What I am getting at, is that if I wanted to use the beautifully simple expression exp( a d/dx) to write a computer program to move f(x) by a finite distance, I doubt it would be of much help.
If on the other hand I had been able to write a diagonal matrix where the diagonal elements are a*exp(d/dx) and apply that matrix once, that would have been great. I call it a matrix because that's the way I am visualizing it, although I realize it would be more of a "machine" than a matrix, and that's why in this case it is better to call it an operator.
I hope I didn't confuse you too much. Maybe I was expecting too much from this "generator of translations". If you understand my confusing post, maybe you can point out mistakes in my reasoning that may be causing my difficulties.
I'll appreciate your input.
At this juncture, Alex, I must compliment you on your exceptional powers of visualization.

... No, you didn't confuse me. The "machine" you describe is one for which you have effectively taken, so to speak, the "operator" out of the "eigenprojections" and put(!) it into the "eigenvalues". I doubt, however, that this sort of construction will offer any improvements over the "standard" Theory of Linear Operators.

As it turns out, the diagonal form of the operator you seek, according to the definition U(a) = e^{-iPa/ħ}, is just

U(a) = Integral { e^{-ipa/ħ} |p><p| dp } .
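
As a quick consistency check of that diagonal form (my own addition, using the standard normalization <x|p> = e^{ipx/ħ}/√(2πħ), which is not spelled out in the post):

\langle x | U(a) | f \rangle
= \int e^{-ipa/\hbar} \langle x | p \rangle \langle p | f \rangle \, dp
= \frac{1}{\sqrt{2\pi\hbar}} \int e^{ip(x-a)/\hbar} \langle p | f \rangle \, dp
= f(x-a) ,

which reproduces the result f'(x) = f(x - a) from the earlier post.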
 
  • #25
Eye_in_the_Sky:
I have read your second post, as well as your private message.
I think your explanation is quite detailed and now I understand this topic better.
I should point out that at the beginning of this thread I was focusing more on momentum and looking at the "physics". Later on, after reading a little more, I was under the impression that there was a more general mathematical mechanism worth exploring.
To be more specific, I thought that exp(a d/dx) should be able to displace any function (analytic, as you point out) along the x-axis (this would be a mathematical fact not necessarily connected with physics). I wanted to see how this exponential could do this.
You mention in one of your posts that a Taylor series is not intended as a calculation tool. Probably you mean that it is not very efficient.
In my exploration of this topic (before your post), I took the function y = x^2 and tried to move it using the exponential. I could not see any other way the exponential could do its work but with a Taylor series. As the Taylor series for x^2 is very short, it was easy to use. This may seem trivial to you, but I enjoyed seeing how the function had been moved a distance "a".
I understand your suggestion that if we make a change to the momentum basis, then the operation can be done with a single diagonal matrix.
I was thinking how this idea could be interpreted if we are moving a function that doesn't have anything to do with physics. I realize that some of these mathematical methods become useful only when applied to solve problems in quantum mechanics, but if the same technique can be applied outside of physics, even if it is not efficient for that purpose, this could provide (at least for me) a better understanding of the underlying math.
I haven't worked this thing out, but I was thinking that outside of physics there is no momentum (well, you could argue against this, but you know what I mean), so we can't use momentum. But what you do when you change from the position basis to the momentum basis is a Fourier transform, right?
So, if I take my x^2 function and do a Fourier transform on it, I should be able to use whatever I get to construct my diagonal matrix and move the function (in a basis of complex sinusoidal functions) the required distance, right? Then I would have to convert back to the position basis. I'll try this unless you warn me that it won't work.
Looking at the effect of the exponential exp(a d/dx) in the position basis, I noticed that when it acts on each component of the vector it is applied to, the result of that partial operation is placed in the same component of the result vector as it was in the vector that the operator acted on. In other words, you take the function at a particular value of x, say x=3, calculate all the derivatives at that point, and the result you get from the Taylor series is the new value of the function at x=3. This would give the appearance of certain "diagonality". But then I realize that when we talk about diagonality, normally we are talking about matrices, and in that case (matrices) we are multiplying the component or components of a vector by a number/numbers. And I see that in the case of the differential operator in the Taylor series form, we are multiplying times 1, and then adding the other terms that contain the derivatives, which is very different from the action of a matrix.
On the other hand, I guess if we wanted to use a matrix in the position basis to move the function, one way to do it would be with a matrix with all zeroes except for a line parallel to the diagonal which would be full of Dirac deltas.
This scheme could be discretized (getting a finite delta) for calculation with the computer.
Is that right?
Thinking outside the box (forgive me for this, I am just thinking aloud), we could build a diagonal matrix that does the moving, but it would be ugly, conceptually and computationally, as there would be an infinity in the matrix anywhere the value of the original function is zero, and the computation errors would be large when the value of the original function is very small. So this would prove that most of the time thinking outside the box doesn't pay. But I still like doing it.
Except for the experiment (exercise) I mention above that I still want to do, I am kind of satisfied with my present understanding (thanks to your careful explanations). I think I feel comfortable enough to continue studying. Now I am starting chapter 2 of Sakurai.
By the way, towards the end of chapter 1, I found a reference to the dimensionality of the wave function, a question you answered for me on another thread.

Eye_in_the_Sky and Turin,
I know I can be kind of irritating sometimes. Thanks for being patient with me.
Oh! by the way, I am curious to know (I don't remember if I asked you before) in what country/state you guys live. (I am in California, USA)

-Alex-
 
  • #26
alexepascual said:
... what you do when you change from the position basis to the momentum basis is a Fourier transform, right?
So, if I take my x^2 function and do a Fourier transform on it, I should be able to use whatever I get to construct my diagonal matrix and move the function (in a basis of complex sinusoidal functions) the required distance, right?
I hope I don't reveal too much of my ignorance, but what is the FT of x^2? Off the top of my head, I would assume that it is not defined. Certainly the DC component would be undefined, but I suspect that the whole thing is undefined. I actually went through the calculation explicitly and got infinity. I could not find the transform in the few tables to which I have access at the moment.
 
  • #27
alexepascual said:
... what you do when you change from the position basis to the momentum basis is a Fourier transform, right?
Yes, that is right.


So, if I take my x^2 function and do a Fourier transform on it, I should be able to use whatever I get to construct my diagonal matrix and move the function (in a basis of complex sinusoidal functions) the required distance, right? Then I would have to convert back to the position basis. I'll try this unless you warn me that it won't work.
The function x^2 isn't a good choice. The Fourier integral is going to "bust"(!) since

x^2 --> infinity, as x --> +/- infinity .

But instead of looking at a specific example, why not just look at the general situation? We have:

[1] f(x) = 1/sqrt{2 pi} Integral { e^{ikx} F(k) dk } ,

[2] F(k) = 1/sqrt{2 pi} Integral { e^{-ikx} f(x) dx } .

Now, apply d/dx to equation [1], and get the correspondence

[3] [(d/dx)f](x) <--> ik F(k) .

Next, consider the Fourier transform G(k) of f(x+a). It's just

G(k) = 1/sqrt{2 pi} Integral { e^{-ikx} f(x+a) dx }

= 1/sqrt{2 pi} Integral { e^{-ik(x-a)} f(x) dx }

= e^{ika} F(k)

from which we get the correspondence

[4] f(x+a) <--> e^{ika} F(k) .

And this is just what you wanted to see: multiplying the Fourier transform
F(k) by e^{ika} will "shift" the original function f(x) by the amount "a", as required, to give f(x+a).

Finally, if you want to go one more step, compare [4] to [3] and get confirmation that

f(x+a) = [e^{a(d/dx)}f](x) .


Looking at the effect of the exponential exp(a d/dx) in the position basis, I noticed that when it acts on each component of the vector it is applied to, the result of that partial operation is placed in the same component of the result vector as it was in the vector that the operator acted on. In other words, you take the function at a particular value of x, say x=3, calculate all the derivatives at that point, and the result you get from the Taylor series is the new value of the function at x=3. This would give the appearance of certain "diagonality".
There appears to be a "linguistic" problem here. Look at the phrase:

"... is the new value of the function at x=3" .

This phrase should read:

"... is the value of a new function at x=3" .

What you get from the Taylor series is a new function g(x) = f(x+a), and
g(3) = f(3+a).

If I have understood you correctly, then what I have just said should cause you to want to retract your concluding statement:

This would give the appearance of certain "diagonality".
Next, regarding:

... I see that in the case of the differential operator in the Taylor series form, we are multiplying times 1, and then adding the other terms that contain the derivatives, which is very different from the action of a matrix.
Observe that this "operation" is in fact represented by a linear operator, say L, where

L = Sigma_[n = 0 to infinity] { (1/n!) [a(d/dx)]^n } .

But, any linear operator will have a matrix representation relative to the basis of your choice.

So, if we choose, for example, the discrete basis of "simple" polynomials

p_n(x) = x^n

(note that these are not in our Hilbert space (since they are not square integrable)), then relative to this basis we will have a matrix representation for L. (... Do you know how to do this? (If not, and you'd like to, please ask.))

On the other hand, we could instead choose a continuous basis, indexed by a continuous parameter x', say

D_x'(x) = Dirac_delta(x - x') .

This is just the usual |x> basis (in its own x'-space); i.e. <x'|x>.

Or, we could choose a different continuous basis, indexed by a continuous parameter k, say

h_k(x) = 1/sqrt{2 pi} e^{ikx} .

Relative to this basis, our operator L turns out to be diagonal.

(... Again, if you are unsure as to how to get the matrix representation relative to a given basis for a linear operator, and would like to know more, feel free to ask.)
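
Here is a minimal numerical illustration of correspondence [4] (my own sketch; the Gaussian test function and the grid are arbitrary choices, and NumPy's discrete FFT stands in for the continuous transform):

Code:
import numpy as np

N, width = 1024, 40.0
x = np.linspace(-width / 2, width / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=width / N)
a = 3.0

f = np.exp(-x**2)                 # test function f(x)
F = np.fft.fft(f)                 # discrete stand-in for F(k)

# multiply by e^{ika} in k-space, transform back: should give f(x + a)
shifted = np.fft.ifft(np.exp(1j * k * a) * F).real

print(np.allclose(shifted, np.exp(-(x + a)**2)))   # True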
 
  • #28
Eye in the Sky has it right; the exponential of a derivative operator is a Taylor series. This idea has its major home in the theory of Lie groups and/or continuous groups, where one talks about generators of infinitesimal displacements. In physics, it used to be, at least, that the initial introduction to the ideas of generalized translations came from the study of rigid body mechanics and the theory of contact transformations. These studies helped to build familiarity with some of the more sophisticated ideas that often were reintroduced in QM in the study of angular momenta. In particular, the theory and representations of finite rotations are of signal importance in both nuclear and particle physics -- one example is the great practicality of the Jacob and Wick helicity representations -- spin is quantized along the particle's direction of motion.
It gets worse: the Poincaré group has ten generators, 3 momenta, one energy, 3 angular momenta, and 3 Lorentz boosts -- there's a good discussion of this in Chapter II of Weinberg's The Quantum Theory of Fields.

I'm very curious about the backgrounds of those having difficulties, or providing episodic essays on the generator and exponential form of spatial translations. I say this with all due respect, but given the "right background" -- i.e. knowing something about continuous groups, knowing about rotations in advanced mechanics, particularly involving contact transformations -- the idea of a derivative as the generator of translations becomes quite straightforward.

I'm an old guy, trained in the '50s. I see things as I learned them and taught them. Do students these days do Hamiltonian mechanics, rigid body mechanics, E&M radiation theory, elementary group theory, etc., prior to a first course in QM?

Thanks and regards,
Reilly
 
  • #29
Typically it goes like so: elementary mechanics/EM, then

Mechanics (this includes both Hamiltonian and rigid bodies)
EM (including circuits)
Optics (sometimes optional)
Thermodynamics
then quantum mechanics
and special/general relativity

Two problems with the standard curriculum.

One, very often very important stuff is left out for lack of time. Rigid body mechanics, Maxwell wave mechanics, Poynting vectors, etc. were all taught to me in a rush, and I had to relearn them later in my career.

The other problem is the math.

Unfortunately, elementary group theory is usually not a prerequisite or even taught outside of the math divisions. Nor is some of the other stuff, like linear algebra and, more importantly, differential geometry and functional analysis (much less real analysis).

It's actually, IMO, a problem for undergrad classes; you really, really want to have the math background first. I hated learning physics math on the fly. It made little sense to me at the time, was ad hoc, often wrong or grossly simplified. In the math divisions, though, you often spend too much time 'proving' things, which is more or less irrelevant for an undergrad.

Every once in a while I would have a good teacher, who would focus on calculating examples, without skipping too many key background definitions, and giving a good feel to the students. Differential geometry was such an example. It made GR a breeze for me. Ditto with statistics; stat mech was then completely trivial.

Unfortunately, I did not have functional analysis before quantum mechanics, and frankly the whole experience was really painful the first time around. I got a good grade, but I didn't learn anything really.
 
  • #30
Response to Turin and Eye_in_the_Sky

Turin and Eye,
I haven't done anything with Fourier transforms in many years. It was silly of me to think you can get a FT of x^2.
I just checked the book and of course, the function has to obey the Dirichlet conditions and have a finite integral, which just makes sense.
So with this, Turin, you have not revealed any ignorance (but I have). And that's OK with me. I am not claiming a certain level of knowledge or intelligence. I am in this forum to learn and to help if someone asks an easy question.

Eye,
With respect to one of your previous posts, I don't think of myself as having any out-of-the-ordinary skills in visualization. I do try to look at problems from a different angle, but that is just because of my deficiencies in memory and symbol-processing. I have these handicaps, and I have to look for ways to absorb and interpret the subject that will produce long-lasting neuronal connections.

Now with respect to post #27:
First I would just like to comment that I noticed that even though you can't get a FT of x^2, you can still move it by using the Taylor series.
With respect to your derivation of the generator of translations, I did understand well how you got this:

f(x + a) <--> G(k) = e^{ika} F(k)

What I don't understand is how you get [3] and how you compare [4] to [3].

With respect to my statement:
This would give the appearance of certain "diagonality"
I would drop this, not necessarily retract it, as I think the reason for the apparent inconsistency is that I was not able to communicate clearly. But this was just something I mentioned in passing and it doesn't bother me.
On the other hand, if it is not too much trouble, I would like to hear more about the "simple polynomials". Are we talking about polynomials of the form
a_0 x^0 + a_1 x^1 + a_2 x^2 + ... + a_n x^n + ...
where each x^n is a basis vector for the space?
Would in that case the operator d/dx consist of a line of numbers n-1 parallel to the diagonal? Am I on the right track?
 
  • #31
Response to Reilly and Haelfix

Reilly,
I would agree with most of what Haelfix has said.
With respect to your position, I get the impression you propose that very complex and abstract math should be learned first. I see two problems with that.
In the first place, different people may have different modes of learning. The sequence you might propose (which would probably be close to the sequence of classes and topics you took) might work for some people and not for others. The very abstract concepts may not be digested very well by people who prefer to see concrete examples and then abstract from them.
For these people, the math might be understood better within the context of physics. Probably that's why they usually have a course entitled "mathematical methods".
The other problem I see with the approach of learning all the math very well first is that if you don't apply it you forget it, and there might be quite a gap in time from learning the mathematical concepts to applying them to physical problems.
On the other hand, I think physics courses in most schools are not very well organized, and they don't guarantee that a minimum of simple math is learned before it is needed. For example, when I took quantum mechanics, there was no prerequisite to study linear algebra before taking this class. I had studied linear algebra, but we had not covered the part that uses complex numbers. So I had to study this on my own. I was never taught group theory, but I think most QM courses at the undergraduate level don't require it.
Now I'll be taking graduate-level courses, so I'll make sure I do understand these concepts.
Reilly and Haelfix,
Your discussion has helped me see some of the areas in mathematics where I may need more knowledge. I thank you for that.
 
  • #32
Response to Alex

alexepascual said:
I just checked the book and of course, the function has to obey ...
A sufficient, but not necessary, condition for the Fourier transform of a function f(x) to exist is

(i) Integral[all space] { |f(x)| dx } converges ;

(ii) f(x) has a finite number of discontinuities .

------------------------

... you can't get a FT of x^2, you can still move it by using the Taylor series.
Yes, absolutely.

------------------------

What I don't understand is how you get [3] and how you compare [4] to [3].
Recall:
[1] f(x) = 1/sqrt{2 pi} Integral { e^{ikx} F(k) dk } ,

[2] F(k) = 1/sqrt{2 pi} Integral { e^{-ikx} f(x) dx } .

Now, apply d/dx to equation [1], and get the correspondence

[3] [(d/dx)f](x) <--> ik F(k) .
To get [3], apply d/dx to both sides of [1], and on the right-hand side, push d/dx "through" and "under" the integral; the only function under the integral there which depends on "x" is e^{ikx}, and d/dx of that is just the same thing multiplied by "ik"; thus, [1] becomes

df(x)/dx = 1/sqrt{2 pi} Integral { e^{ikx} ik F(k) dk } .

From this, [3] follows.

Now, [3] tells us that the "image" in k-space of d/dx is just multiplication by ik. It then follows that for any (analytic) function h(x), the "image" in k-space of h(d/dx) is just h(ik). We therefore have the correspondence

[e^{a(d/dx)}f](x) <--> e^{ika} F(k) .

Now, take [4] and compare it to this. We had

[4] f(x+a) <--> e^{ika} F(k) .

The left-hand sides of the last two correspondences must be equal; i.e.

f(x+a) = [e^{a(d/dx)}f](x) .

------------------------

Finally, regarding:

... I would like to hear more about the "simple polynomials". Are we talking about polynomials of the form
a_0 x^0 + a_1 x^1 + a_2 x^2 + ... + a_n x^n + ...
where each x^n is a basis vector for the space?
Would in that case the operator d/dx consist of a line of numbers n-1 parallel to the diagonal? Am I on the right track?
"Yes" (where the full set {xn|n=0,1,2,...} is a basis), "yes", and "yes". At the next opportunity, I will attempt to post more on this matter.
 
  • #33
I think I see the source of my difficulty.
You derive [3] by differentiating both sides of [1] (which is an equation, not a correspondence). Don't you get an equation when you take the derivative with respect to the same variable of both sides of an equation? Shouldn't [3] be an equality instead of a correspondence?
I hope I understood correctly the meaning of correspondence in this context. I would have interpreted it as "is a Fourier transform of", which would make:
f(x) <--> F(k)
f(x+a) <--> G(k) = e^{ika} F(k)
On the other hand, your argument in your last post ends up in a compelling way when you compare [3] to [4], so at this point I am a little confused.
 
  • #34
alexepascual said:
Don't you get an equation when you take the derivative with respect to the same variable of both sides of an equation?
Yes. In our case, the equation was just:

df(x)/dx = 1/sqrt{2 pi} Integral { e^{ikx} ik F(k) dk }
Call this equation [5].

------

Shouldn't [3] be an equality instead of a correspondence?
No. And you have given the reason why this is so:


I hope I understood correctly the meaning of correspondence in this context. I would have interpreted it as "is a Fourier transform of" which would make:
f(x) <--> F(k)
f(x+a) <--> G(k) = e^{ika} F(k)
You understood correctly:

"-->" means "is the Fourier transform of" ;

"<--" means "is the inverse Fourier transform of" .

Looking at equation [5] above with this understanding of "<-->" gives us the correspondence:


[3] [(d/dx)f](x) <--> ik F(k)

Does this help to clear up the confusion?
 
  • #35
I guess I'll have to think about it.
Thanks a lot Eye. I won't be home today, probably tomorrow I'll be able to look into this.
 
  • #36
alexepascual,
If you have access to such resources, you may find a systems engineering text helpful for understanding these Fourier relationships. Specifically, what I have in mind is a junior- or senior-level electrical engineering major's understanding of the topic. The book I have is called "Circuits, Signals, and Systems," and I'm sure there are hundreds of other good books as well. I just think that considering a concrete system as what is doing the transform may help.

A good systems engineering text should derive the time and frequency shifts, as well as many other usual relationships. In engineering, this is done in such a concrete and straightforward way that it is worth looking into, even from the theoretical standpoint as an aspiring physicist. Basically, it should go through what Eye has done, but with diagrams and things to go along with it in a very symbolic way.
 
  • #37
alexepascual said:
... I would like to hear more about the "simple polynomials". Are we talking about polynomials of the form
a_0 x^0 + a_1 x^1 + a_2 x^2 + ... + a_n x^n + ...
where each x^n is a basis vector for the space?
Would in that case the operator d/dx consist of a line of numbers n-1 parallel to the diagonal? Am I on the right track?

Eye_in_the_Sky said:
"Yes" (where the full set {xn|n=0,1,2,...} is a basis), "yes", and "yes". At the next opportunity, I will attempt to post more on this matter.
Let {b_i} be a basis. Then (using the "summation convention" for repeated indices) any vector v can be written as

v = v_i b_i .

In this way, we can think of the v_i as the components of a column matrix v which represents v in the b_i basis. For example, in particular, the vector b_k relative to its own basis is represented by a column matrix which has a 1 in the kth position and 0's everywhere else.

Now, let L be a linear operator. Let L act on one of the basis vectors b_j; the result is another vector in the space which itself is a linear combination of the b_i's. That is, for each b_j, we have

[1] L b_j = L_ij b_i .

In a moment, we shall see that this definition of the "components" L_ij is precisely what we need to define the matrix L corresponding to L in the b_i basis.

Let us apply L to an arbitrary vector v = v_j b_j, and let the result be w = w_i b_i. We then have

w_i b_i

= w

= Lv

= L(v_j b_j)

= v_j (L b_j)

= v_j (L_ij b_i) ... (from [1])

= (L_ij v_j) b_i .

If we compare the first and last lines of this sequence of equalities, we are forced to conclude that

[2] w_i = L_ij v_j ,

where L_ij was, of course, given by [1].

Now, relation [2] is precisely what we want for the component form of a matrix equation

w = L v .

We therefore conclude that [1] is the correct "rule" for giving us the matrix representation of a linear operator L relative to a basis b_i.

-----------------------------------------

Now, let L = d/dx, and b_n = x^{n-1}, n = 1, 2, 3, ... .

In this context, rule [1] above becomes

[1'] L x^{n-1} = L_mn x^{m-1} .

But

L x^{n-1}

= (d/dx) x^{n-1}

= (n-1) x^{n-2}

= (n-1) delta_{m,n-1} x^{m-1} ,

so that

L_mn = (n-1) delta_{m,n-1} .

This is equivalent to (no summation)

L_{n,n+1} = n , with all other components equal to 0.
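
To make the rule L_{n,n+1} = n concrete, here is a minimal NumPy/SciPy sketch (my own, not from the post; 0-based indexing, basis truncated to degree < 6). Exponentiating a·L then shifts a polynomial's coefficients, i.e. it implements exp(a d/dx):

Code:
import numpy as np
from scipy.linalg import expm

N = 6
D = np.zeros((N, N))
for n in range(1, N):
    D[n - 1, n] = n        # (d/dx) x^n = n x^(n-1): entries on the superdiagonal

a = 2.0
c = np.zeros(N)
c[2] = 1.0                 # coefficients of f(x) = x^2 in the basis {1, x, x^2, ...}

c_shifted = expm(a * D) @ c
print(c_shifted[:3])       # [4. 4. 1.] -- the coefficients of (x+2)^2 = 4 + 4x + x^2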
 
  • #38
I have been away from the forum the last three days and haven't had time to think about the topic.
Turin:
Thanks for your suggestion. I'll try to get hold of the systems engineering book you mention. On another occasion I was having trouble with a topic in thermodynamics and found that an engineering book explained things in a way that was clearer to me.
Eye:
I'll be printing out your last two posts and probably will have a chance to read them and think about what you say sometime today or tomorrow.
I'll let you know as soon as I do so.
 
  • #39
Turin,
I got the book you suggested from a library. Thanks for your suggestion, I'll tell you later if it helped.
Eye,
First I would like to apologize for my lack of familiarity with the Fourier transform.
Yesterday I thought I had understood your derivation. But today I looked at it again, and found out that the way I was making it work (the intermediate steps I was filling in) makes an assumption that may not be warranted.
The way I chose to demonstrate that (d/dx)f(x) <--> ik F(k) is to multiply both sides of correspondence [1] (my equation number) by ik. If ik were a constant, I guess this would be legal.

The sequence would be: (my equation numbers)
By definition:
[1] f(x) <--> F(k)
[2] f(x) = 1/sqrt{2 pi} integral {e^{ikx} F(k) dk}
[3] F(k) = 1/sqrt{2 pi} integral {e^{ikx} f(x) dx}

Now make:
[4] f'(x) = ik f(x)
Then there is some F'(k) such that:
[5] f'(x) <--> F'(k)
where (by [3]): F'(k) = 1/sqrt{2 pi} integral {e^{ikx} f'(x) dx}
Using [4]: F'(k) = 1/sqrt{2 pi} integral {e^{ikx} ik f(x) dx}
Pulling ik out: F'(k) = ik 1/sqrt{2 pi} integral {e^{ikx} f(x) dx}
By [2]: f'(x) <--> F'(k) = ik F(k)
Which can also be expressed:
[5] ik f(x) <--> ik F(k)
Taking the derivative of [2]:
[6] (d/dx)f(x) = ik f(x)
Substituting [6] in [5]:
[7] (d/dx)f(x) <--> ik F(k) (your equation [3])

The problem I see is that f(x) <--> F(k) makes a correspondence between two functions. This means that all values of k are used in the correspondence. But which value of k do we use on the left side of [5]?
In order to see this better, I thought that the Fourier transform should be amenable to being visualized as a matrix. I looked for "Fourier transform in matrix form" on Google and found a few results. Interestingly, two of those entries had to do with image processing. One was a description of a book: "Image Processing: The Fundamentals". I looked for it on Amazon.com and, besides having a positive review, the table of contents appeared very interesting. I have ordered this book as a loan from another university.
I'll be waiting for your comments on the above equations.
 
  • #40
I will let Eye take care of the other details, but for now:

alexepascual said:
... I thought that the Fourier transform should be amenable to being visualized as a matrix.
ABSOLUTELY! In fact, there is a discrete Fourier transform (DFT) that is exactly a numerical matrix and a fast Fourier transform (FFT) that is a radix for further algorithmic optimization. You may find the DFT to hold the specific explanation that you're looking for. Even the straight-up continuous time Fourier transform is basically a matrix, T, in the sense that you could find the ω,t component as:

T_{ω,t} = <ω|T|t> = <ω|F{t}[ω]> = integral of ω F{t}[ω] dω

Basically, the components turn out to be:

T_{ω,t} = e^{-iωt}
(the kernel of the transformation).
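
A minimal sketch of that DFT-as-a-matrix remark (my own; the matrix entries e^{-2πimn/N} are the discrete analogue of the kernel e^{-iωt}):

Code:
import numpy as np

N = 8
n = np.arange(N)
T = np.exp(-2j * np.pi * np.outer(n, n) / N)   # T[m, n] = e^{-2 pi i m n / N}

f = np.random.default_rng(0).standard_normal(N)
print(np.allclose(T @ f, np.fft.fft(f)))        # True: the FFT is this matrix product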
 
  • #41
I have kept thinking about this problem and can't find a solution.
But I noticed the following:
I arrived (through a dubious path) at the following:
[5] ik f(x) <--> ik F(k)
Now, if I start off with { ik F(k) } and plug that into the definition of the inverse FT, I should get f'(x) = ik f(x):
f'(x) = 1/sqrt(2pi) integral {e^{ikx} F'(k) dk}
f'(x) = 1/sqrt(2pi) integral {e^{ikx} ik F(k) dk}
But here is where the problem shows, because I can't pull ik out of the integral sign, because k is a variable, not a constant, in this case.
 
  • #42
Turin,
I had not read yours when I sent my last post (7 minutes later).
Thanks for your info about the discrete Fourier transform. I'll try to learn more about it.
I was browsing this morning through the Circuits, Signals and Systems book. I found it very interesting, but it'll involve learning a lot about electronics. Although I have read about the subject in the past on my own, I have never taken an electronics course.
In the short term, I'll try to get the most out of this book by reading some of the chapter introductions and by looking directly at the sections on the different transforms.
Thanks again,
Alex
 
  • #43
Concerning the required knowledge of electronics:
Perhaps there is another book by that same name. The one that I have (though not with me right now) has an entire chapter almost entirely devoted to the Fourier transform (and called "The Fourier Transform" if I remember correctly).


alexepascual said:
I arrived (through a dubious path) at the following:
[5] ik f(x) <--> ik F(k)
I suppose I may be unclear on the meaning of "<-->". But, if it is supposed to mean "transforms into," then I don't agree with this statement. The way to get the transformation of the derivative is to simply take the derivative of the inverse transform of some function and then infer the transform. To start out with some definitions:

f(t) = Tinv{F(ω)}[t]
= (√(2π))^{-1} integral of {dω e^{iωt} F(ω)}

=>

df/dt = (d/dt) Tinv{F(ω)}[t]
= (√(2π))^{-1} (d/dt) integral of {dω e^{iωt} F(ω)}

Since the integration is over ω, the derivative wrt t can be taken inside the integral (the integration and differentiation commute):

= (√(2π))^{-1} integral of {dω (d/dt)(e^{iωt} F(ω))}

Then, since only the kernel depends on t, the differentiation only operates on the kernel:

= (√(2π))^{-1} integral of {dω (d/dt)(e^{iωt}) F(ω)}
= (√(2π))^{-1} integral of {dω (iω e^{iωt}) F(ω)}
= (√(2π))^{-1} integral of {dω e^{iωt} (iω F(ω))}

Let (iω F(ω)) = G(ω) (some other function of ω):

= (√(2π))^{-1} integral of {dω e^{iωt} G(ω)}
= Tinv{G(ω)}[t]

Taking the Fourier transform of both sides:

T{df/dt}[ω] = T{Tinv{G(ω)}[t]}[ω]
= G(ω)
= (iω F(ω))
= (iω) F(ω)

The result:

T{df/dt}[ω] = (iω) F(ω)

This shows that the Fourier transform of the time derivative of a function is equal to (iω) times the transform of the function. In other words, differentiation in the position basis is multiplication in the momentum basis.
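
A quick numerical check of this result (my own sketch; the Gaussian signal and grid are arbitrary choices, and the discrete FFT stands in for the continuous transform):

Code:
import numpy as np

N, span = 1024, 20.0
t = np.linspace(-span / 2, span / 2, N, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(N, d=span / N)   # angular frequencies

f = np.exp(-t**2)
dfdt = -2 * t * np.exp(-t**2)                    # exact derivative of f

# the transform of df/dt should equal (i w) times the transform of f
print(np.allclose(np.fft.fft(dfdt), 1j * w * np.fft.fft(f)))   # True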
 
  • #44
Thanks Turin,
I'll have to go over your post and think about it. But I think it'll probably answer my question.
With respect to the book, I think it is the only one with that title. The author is Siebert and it is published by MIT Press / McGraw-Hill. Chapter 13 is titled "Fourier Transforms and Fourier's Theorem."
When I posted my comment, I had just browsed through the book, and I got the impression that most of it was intimately related to electronic systems. Now that I have examined some of the chapters in more detail, I see that I can probably go directly to the chapter on the Fourier transform and understand it with my present (little) knowledge of electronics. On the other hand, there is always a little of a culture shock when going from physics books to electronics books, which is in a way good because it forces you to look at the same topics from a different angle.
I see that one of the differences with the physics books is the inclusion of discrete transforms, which I don't remember seeing in physics. Probably part of the reason for this is their use in digital systems, digital signal processing, etc. Probably discrete math in physics becomes more important at the Planck scale, but that is just my speculation (I don't know anything about loop quantum gravity) (a long way to go before I get there).
 
  • #45
Turin,
Your notation is a little different from what I am used to.

In your first post:
(1) When you write F{t}[ω], I wonder what you mean. What is the difference between the square brackets and the curly brackets?
(2) I looked at my linear algebra book and it says that the kernel of a transformation is the portion of the domain that is mapped to zero. Is this a different use of the word "kernel" or is it connected with your comment that e^{-iωt} is the kernel of the transformation?

In your second post:
(3) Now you write {F(ω)}[t], slightly different notation, but the square bracket seems to have the same function. Probably if you just tell me how you read it aloud I'll understand.
 
  • #46
OK, I ignored the square brackets and was able to follow your reasoning.
I still would appreciate your explanation about the notation and the "kernel".
Thanks again Turin,
Alex
 
  • #47
That nasty double arrow

The intended meaning of "f(x) <--> F(k)" is "f(x) and F(k) are a Fourier transform pair". The x-space function is written on the left and the k-space function on the right [1]. So, reading the arrow from left to right gives

f(x) --> F(k) ..... f(x) goes to F(k) via a Fourier transform ,

and reading from right to left gives

f(x) <-- F(k) ..... F(k) goes to f(x) via an inverse Fourier transform .

If you know that the arrow holds true for one direction, then it must also hold for the other.

The corresponding Fourier integrals are [2]

f(x) --> F(k) ..... F(k) = 1/sqrt{2 pi} Integral { e^{-ikx} f(x) dx } ,

f(x) <-- F(k) ..... f(x) = 1/sqrt{2 pi} Integral { e^{ikx} F(k) dk } .

Note that one of the transforms has a "+ikx" in the exponential (the inverse transform, according to my definition), while the other has a "-ikx" [3],[4].
_________________________
[1] Thus it would not make sense to multiply both sides of the double arrow by the same thing ... like, for example, ik.

[2] Some books use an asymmetrical convention with regard to the numerical constant in front of the integral, putting a 1/(2 pi) at the front of one of the transforms and just a 1 at the front of the other.

[3] Some books use the opposite convention with regard to (+/-)ikx in the exponential.

[4] In an earlier post, such a sign was missed out. Both transforms were written with a "+" sign.
By definition:
[1] f(x) <--> F(k)
[2] f(x) = 1/sqrt{2 pi} integral {e^{ikx} F(k) dk}
[3] F(k) = 1/sqrt{2 pi} integral {e^{ikx} f(x) dx}
-------------------------------------------------------------
-------------------------------------------------------------

The above explains what I meant by the double arrow. Instead of providing a means to express Fourier-type relationships between objects in a clear and compact way, it only added confusion to an already uncertain situation. ... Sorry about that.
 
  • #48
Eye:
I didn't have trouble with the double arrow. Turin asked to be sure he was interpreting correctly (which he was). But he preferred to use the operator notation. My only difficulty was in deriving your correspondence [3].
You said: "...and from this [3] follows" (I didn't see how it followed and tried to explain it using wrong arguments). Turin made me see my error and now everything is clear.
I did have a little trouble with Turin's notation but managed to follow his argument anyway. So now I think the whole topic of momentum as the generator of translations is quite clear to me.

Eye and Turin:
Thanks a lot for all your help.
Alex
 
  • #49
Regarding what 'kernel' means: it actually is all related, but let's stay clear of the linear algebra meaning for the moment, and keep it simple.

In signal processing, and in other theories (like that of Sturm-Liouville functions), the idea is to find a set of functions that form a sort of basis. But I digress...

Let's say we are interested in defining what an integral transform is. Generically it is a map that takes one function, say g(t), into another function, say f(x).

So
f(x) = integral (a..b) K(x,t) g(t) dt

The function K(x,t) is called the kernel of the transformation and, along with the limits of integration, uniquely defines the properties of how one maps into the other.

In the case of Fourier analysis, that kernel is exp(-ixt); for other transforms it's something else (there are many transforms with interesting properties, like the Laplace transform, the wavelet transform, etc.).
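
A minimal sketch of that definition (my own; here the kernel is the Laplace-transform kernel, evaluated by crude quadrature with an arbitrary cutoff standing in for the infinite upper limit):

Code:
import numpy as np

t = np.linspace(0.0, 30.0, 30001)
dt = t[1] - t[0]
g = np.exp(-2.0 * t)                  # the function being transformed

def laplace(s):
    K = np.exp(-s * t)                # kernel K(s, t) = e^{-s t}
    return float(np.sum(K * g) * dt)  # approximates integral_0^inf K(s,t) g(t) dt

print(laplace(1.0))                   # ~0.333 = 1/(s + 2) at s = 1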
 
  • #50
Alex,
You should probably ignore my posts if you are comfortable with your understanding. They are very confusing. And shame on me for not defining anything; that was sure sloppy. And, out of habit, I used the frequency-time transformation rather than the momentum-position transformation (the same, but different notation).

If you're interested, though, in my most recent post:

T{f(t)}[ω] represents the transformation from the function f(t) in the time domain to a function in the frequency domain. To complete the statement:

F(ω) = T{f(t)}[ω]

The "T" is usually a capital cursive "F" that denotes "Fourier transform." I used a "T" to spare some confusion that would have arisen per the rest of my notational scheme.

The curly braces contain the function to be transformed (as well as the variable of integration implied by the argument of that function).

The square braces display the domain into which the transformation is to take the function. Quite often this is left out, as it is obvious from the context or more explicit expression in terms of the integral.

f(t) is the function in the time domain. F(ω) is the corresponding function in the frequency domain (the transform of f(t)).

Haelfix provides an explanation of my use of "kernel." Though, I am unclear how one can say this is not a linear algebraic application. I was always under the impression that that is exactly what the kernel is: the elements of the transformation matrix.
 
