Momentum operator as the generator of translations.

Thread summary
The discussion centers on the concept of the momentum operator as the generator of translations in quantum mechanics, with participants seeking clearer explanations and intuitive understanding. Initial confusion arises from the mathematical complexities involved, particularly the relationship between infinitesimal translations and the Taylor series expansion. The conversation suggests that understanding angular momentum as the generator of rotations may provide a more accessible entry point to grasping momentum's role in translations. Participants also explore the implications of these concepts in classical mechanics and the potential for broader applications, including computational modeling. Overall, the thread emphasizes the need for a deeper understanding of the mathematical foundations underlying these quantum mechanical principles.
  • #31
Response to Reilly and Haelfix

Reilly,
I would agree with most of what Haelfix has said.
With respect to your position, I get the impression you propose that very complex and abstract math should be learned first. I see two problems with that.
In the first place, different people have different modes of learning. The sequence you might propose (probably close to the sequence of classes and topics you took yourself) might work for some people and not for others. Very abstract concepts may not be digested well by people who prefer to see concrete examples first and then abstract from them.
For these people, the math might be understood better within the context of physics. That is probably why there is usually a course entitled "mathematical methods".
The other problem I see with learning all the math very well first is that if you don't apply it you forget it, and there can be quite a gap in time between learning the mathematical concepts and applying them to physical problems.
On the other hand, I think physics courses in most schools are not very well organized, and they don't guarantee that a minimum of simple math is learned before it is needed. For example, when I took quantum mechanics, there was no prerequisite to study linear algebra before taking the class. I had studied linear algebra, but we had not covered the part that uses complex numbers, so I had to study that on my own. I was never taught group theory, but I think most undergraduate QM courses don't require it.
Now I'll be taking graduate-level courses, so I'll make sure I do understand these concepts.
Reilly and Haelfix,
Your discussion has helped me see some of the areas in mathematics where I may need more knowledge. I thank you for that.
 
  • #32
Response to Alex

alexepascual said:
I just checked the book and of course, the function has to obey ...
A sufficient, but not necessary, condition for the Fourier transform of a function f(x) to exist is

(i) Integral[all space] { |f(x)| dx } converges ;

(ii) f(x) has a finite number of discontinuities .

------------------------

... you can't get a FT of x^2, you can still move it by using the Taylor series.
Yes, absolutely.

------------------------

What I don't understand is how you get [3] and how you compare [4] to [3].
Recall:
[1] f(x) = 1/sqrt{2 pi} Integral { e^{ikx} F(k) dk } ,

[2] F(k) = 1/sqrt{2 pi} Integral { e^{-ikx} f(x) dx } .

Now, apply d/dx to equation [1], and get the correspondence

[3] [(d/dx)f](x) <--> ik F(k) .
To get [3], apply d/dx to both sides of [1], and on the right-hand-side, push d/dx "through" and "under" the integral; the only function under the integral there which depends on "x" is e^{ikx}, and d/dx of that is just the same thing multiplied by "ik"; thus, [1] becomes

df(x)/dx = 1/sqrt{2 pi} Integral { e^{ikx} ik F(k) dk } .

From this, [3] follows.

Now, [3] tells us that the "image" in k-space of d/dx is just multiplication by ik. It then follows that for any (analytic) function h(x), the "image" in k-space of h(d/dx) is just h(ik). We therefore have the correspondence

[e^{a(d/dx)}f](x) <--> e^{ika} F(k) .

Now, take [4] and compare it to this. We had

[4] f(x+a) <--> e^{ika} F(k) .

The left-hand-sides of the last two correspondences must be equal; i.e.

f(x+a) = [e^{a(d/dx)}f](x) .
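For a concrete check of that last identity, here is a minimal Python sketch (not part of the original exchange; the test function, the shift a, and the evaluation point x0 are arbitrary illustrative choices): truncating the exponential series and summing the derivatives reproduces the shifted function.

```python
# Sketch: check f(x + a) = sum_n (a^n / n!) (d^n f / dx^n)(x) for an
# analytic test function.  All specific choices below are illustrative.
import math
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x) * sp.exp(-x**2 / 4)     # any analytic test function
a, x0 = 0.7, 1.3                      # shift and evaluation point

approx = 0.0
for n in range(25):                   # partial sum of the exponential series
    approx += (a**n / math.factorial(n)) * float(sp.diff(f, x, n).subs(x, x0))

exact = float(f.subs(x, x0 + a))
print(approx, exact)                  # the two values agree to high precision
```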

------------------------

Finally, regarding:

... I would like to hear more about the "simple polynomials". Are we talking about polynomials of the form
a_0 x^0 + a_1 x^1 + a_2 x^2 + ... + a_n x^n + ...
where each x^n is a basis element for the space?
Would in that case the operator d/dx consist of a line of numbers n-1 parallel to the diagonal? Am I on the right track?
"Yes" (where the full set {xn|n=0,1,2,...} is a basis), "yes", and "yes". At the next opportunity, I will attempt to post more on this matter.
 
  • #33
I think I see the source of my difficulty.
You derive [3] by differentiating both sides of [1] (which is an equation, not a correspondence). Don't you get an equation when you take the derivative with respect to the same variable of both sides of an equation? Shouldn't [3] be an equality instead of a correspondence?
I hope I understood correctly the meaning of correspondence in this context. I would have interpreted it as "is a Fourier transform of" which would make:
f(x) <--> F(k)
f(x+a) <--> G(k) = e^{ika} F(k)
On the other hand your argument on your last post ends up in a compelling way when you compare [3] to [4], so at this point I am a little confused.
 
  • #34
alexepascual said:
Don't you get an equation when you take the derivative with respect to the same variable of both sides of an equation?
Yes. In our case, the equation was just:

df(x)/dx = 1/sqrt{2 pi} Integral { e^{ikx} ik F(k) dk }
Call this equation [5].

------

Shouldn't [3] be an equality instead of a correspondence?
No. And you have given the reason why this is so:


I hope I understood correctly the meaning of correspondence in this context. I would have interpreted it as "is a Fourier transform of" which would make:
f(x) <--> F(k)
f(x+a) <--> G(k) = e^{ika} F(k)
You understood correctly:

"-->" means "is the Fourier transform of" ;

"<--" means "is the inverse Fourier transform of" .

Looking at equation [5] above with this understanding of "<-->" gives us the correspondence:


[3] [(d/dx)f](x) <--> ik F(k)

Does this help to clear up the confusion?
 
  • #35
I guess I'll have to think about it.
Thanks a lot Eye. I won't be home today, probably tomorrow I'll be able to look into this.
 
  • #36
alexepascual,
If you have access to such resources, you may find a systems engineering text helpful for understanding these Fourier relationships. Specifically, what I have in mind is a junior- or senior-level electrical engineering treatment of the topic. The book I have is called "Circuits, Signals, and Systems," and I'm sure there are hundreds of other good books as well. I just think that considering a concrete system as the thing doing the transform may help.

A good systems engineering text should derive the time and frequency shifts, as well as many other usual relationships. In engineering, this is done in such a concrete and straightforward way that it is worth looking into, even from the theoretical standpoint as an aspiring physicist. Basically, it should go through what Eye has done, but with diagrams and things to go along with it in a very symbolic way.
 
  • #37
alexepascual said:
... I would like to hear more about the "simple polynomials". Are we talking about polynomials of the form
a_0 x^0 + a_1 x^1 + a_2 x^2 + ... + a_n x^n + ...
where each x^n is a basis element for the space?
Would in that case the operator d/dx consist of a line of numbers n-1 parallel to the diagonal? Am I on the right track?

Eye_in_the_Sky said:
"Yes" (where the full set {xn|n=0,1,2,...} is a basis), "yes", and "yes". At the next opportunity, I will attempt to post more on this matter.
Let b_i be a basis. Then (using the "summation convention" for repeated indices), any vector v can be written as

v = v_i b_i .

In this way, we can think of the v_i as the components of a column matrix v which represents v in the b_i basis. For example, the vector b_k relative to its own basis is represented by a column matrix which has a 1 in the k-th position and 0's everywhere else.

Now, let L be a linear operator. Let L act on one of the basis vectors b_j; the result is another vector in the space, which is itself a linear combination of the b_i's. That is, for each b_j, we have

[1] L b_j = L_{ij} b_i .

In a moment, we shall see that this definition of the "components" L_{ij} is precisely what we need to define the matrix L corresponding to L in the b_i basis.

Let us apply L to an arbitrary vector v = v_j b_j, and let the result be w = w_i b_i. We then have

w_i b_i

= w

= Lv

= L(v_j b_j)

= v_j (L b_j)

= v_j (L_{ij} b_i) ... (from [1])

= (L_{ij} v_j) b_i .

If we compare the first and last lines of this sequence of equalities, we are forced to conclude that

[2] w_i = L_{ij} v_j ,

where L_{ij} was, of course, given by [1].

Now, relation [2] is precisely what we want for the component form of a matrix equation

w = L v .

We therefore conclude that [1] is the correct "rule" for giving us the matrix representation of a linear operator L relative to a basis b_i.

-----------------------------------------

Now, let L = d/dx, and b_n = x^{n-1}, n = 1, 2, 3, ... .

In this context, rule [1] above becomes

[1'] L x^{n-1} = L_{mn} x^{m-1} .

But

L x^{n-1}

= (d/dx) x^{n-1}

= (n-1) x^{n-2}

= (n-1) δ_{m,n-1} x^{m-1} ,

so that

L_{mn} = (n-1) δ_{m,n-1} .

This is equivalent to (no summation)

L_{n,n+1} = n , with all other components equal to 0.
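For concreteness, here is a small Python sketch of this matrix (my own illustration; the truncation size N and the test polynomial are arbitrary). With 0-based indexing the basis is {x^0, x^1, x^2, ...}, and the nonzero entries sit on the superdiagonal, D[n, n+1] = n+1, which is the same statement as L_{n,n+1} = n in the 1-based indexing above.

```python
# Sketch: the matrix of d/dx in the (truncated) basis {1, x, x^2, ...}.
import numpy as np
from numpy.polynomial import polynomial as P

N = 6                                   # keep basis elements x^0 .. x^(N-1)
D = np.zeros((N, N))
for n in range(N - 1):
    D[n, n + 1] = n + 1                 # (d/dx) x^(n+1) = (n+1) x^n

c = np.array([4.0, 0.0, 3.0, 0.0, 2.0, 1.0])   # 4 + 3x^2 + 2x^4 + x^5
print(D @ c)          # derivative coefficients (with a trailing 0 from truncation)
print(P.polyder(c))   # numpy's derivative of the same polynomial, for comparison
```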
 
  • #38
I have been away from the forum the last three days and haven't had time to think about the topic.
Turin:
Thanks for your suggestion. I'll try to get hold of the systems engineering book you mention. On another occasion I was having trouble with a topic in thermodynamics and found that an engineering book explained things in a way that was clearer to me.
Eye:
I'll be printing out your last two posts and probably will have a chance to read them and think about what you say sometime today or tomorrow.
I'll let you know as soon as I do so.
 
  • #39
Turin,
I got the book you suggested from a library. Thanks for your suggestion, I'll tell you later if it helped.
Eye,
First I would like to apologize for my lack of familiarity with the Fourier transform.
Yesterday I thought I had understood your derivation. But today I looked at it again and found that the way I was making it work (the intermediate steps I was filling in) makes an assumption that may not be warranted.
The way I chose to demonstrate that (d/dx)f(x) <--> ik F(k) is to multiply both sides of the correspondence [1] (my equation number) by ik. If ik were a constant, I guess this would be legal.

The sequence would be: (my equation numbers)
By definition:
[1] f(x) <--> F(k)
[2] f(x) = 1/sqrt{2 pi} integral { e^{ikx} F(k) dk }
[3] F(k) = 1/sqrt{2 pi} integral { e^{ikx} f(x) dx }

Now make:
[4] f'(x) = ik f(x)
Then there is some F'(k) such that:
[5] f'(x) <--> F'(k)
where (by [3]): F'(k) = 1/sqrt{2 pi} integral { e^{ikx} f'(x) dx }
Using [4]: F'(k) = 1/sqrt{2 pi} integral { e^{ikx} ik f(x) dx }
Pulling ik out: F'(k) = ik 1/sqrt{2 pi} integral { e^{ikx} f(x) dx }
By [2]: f'(x) <--> F'(k) = ik F(k)
Which can also be expressed:
[5] ik f(x) <--> ik F(k)
Taking the derivative of [2]:
[6] (d/dx)f(x) = ik f(x)
Substituting [6] in [5]:
[7] (d/dx)f(x) <--> ik F(k) (your equation [3])

The problem I see is that f(x) <--> F(k) makes a correspondence between two functions. This means that all values of k are used in the correspondence. But which value of k do we use on the left side of [5]?
In order to see this better, I thought that the Fourier transform should be amenable to being visualized as a matrix. I searched Google for "Fourier transform in matrix form" and found a few results. Interestingly, two of those entries had to do with image processing. One was a description of a book, "Image Processing: The Fundamentals". I looked it up on Amazon.com and, besides having a positive review, the table of contents appeared very interesting. I have ordered this book as a loan from another university.
I'll be waiting for your comments on the above equations.
 
  • #40
I will let Eye take care of the other details, but for now:

alexepascual said:
... I thought that the Fourier transform should be amenable to being visualized as a matrix.
ABSOLUTELY! In fact, there is a discrete Fourier transform (DFT) that is exactly a numerical matrix, and a fast Fourier transform (FFT), which is an efficient (radix-based) algorithm for computing it. You may find the DFT to hold the specific explanation that you're looking for. Even the straight-up continuous-time Fourier transform is basically a matrix, T, in the sense that you could find the (ω, t) component as:

T_{ω,t} = <ω|T|t> = <ω|F{t}[ω]> = integral of ω F{t}[ω] dω

Basically, the components turn out to be:

T_{ω,t} = e^{-iωt}
(the Kernel of the transformation).
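To make "the transform is basically a matrix" fully concrete, here is a short sketch (mine, with an arbitrary size N and test signal) of the discrete Fourier transform written as an explicit N×N matrix whose entries are the sampled kernel; it agrees with numpy's FFT under the unitary normalization.

```python
# Sketch: the DFT as an explicit matrix with entries exp(-2*pi*i*k*n/N),
# normalized by 1/sqrt(N) so the matrix is unitary.
import numpy as np

N = 8
k = np.arange(N).reshape(-1, 1)        # "frequency" index (rows)
n = np.arange(N).reshape(1, -1)        # "time" index (columns)
T = np.exp(-2j * np.pi * k * n / N) / np.sqrt(N)

x = np.random.default_rng(0).standard_normal(N)          # arbitrary test signal
print(np.allclose(T @ x, np.fft.fft(x, norm="ortho")))   # True
```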
 
  • #41
I have kept thinking about this problem and can't find a solution.
But I noticed the following:
I arrived (through a dubious path) to the following
[5] ik f(x) <--> ik F(k)
Now, if I start off with { ik F(k) } and plug that into the definition of the inverse FT, I should get f'(x) = ik f(x):
f'(x) = 1/sqrt(2 pi) integral { e^{ikx} F'(k) dk }
f'(x) = 1/sqrt(2 pi) integral { e^{ikx} ik F(k) dk }
But here is where the problem shows up, because I can't pull ik out of the integral sign; k is the variable of integration, not a constant.
 
  • #42
Turin
I had not read yours when I sent my last post (7 minutes later)
Thanks for your info about the discrete Fourier transform. I'll try to learn more about it.
I was browsing this morning through the Circuits, Signals and Systems book. I found it very interesting, but it'll involve learning a lot about electronics. Although I have read about the subject in the past on my own, I have never taken an electronics course.
In the short term, I'll try to get the most out of this book by reading some of the chapter introductions and by looking directly at the sections on the different transforms.
Thanks again,
Alex
 
  • #43
Concerning the required knowledge of electronics:
Perhaps there is another book by that same name. The one that I have (though not with me right now) has an entire chapter almost entirely devoted to the Fourier transform (and called "The Fourier Transform" if I remember correctly).


alexepascual said:
I arrived (through a dubious path) to the following
[5] ik f(x) <--> ik F(k)
I suppose I may be unclear on the meaning of "<-->". But, if it is supposed to mean "transforms into," then I don't agree with this statement. The way to get the transform of the derivative is simply to take the derivative of the inverse transform of some function and then infer the transform. To start out with some definitions:

f(t) = T^{-1}{F(ω)}[t]
= (√(2π))^{-1} integral of { dω e^{iωt} F(ω) }

=>

df/dt = (d/dt) T^{-1}{F(ω)}[t]
= (√(2π))^{-1} (d/dt) integral of { dω e^{iωt} F(ω) }

Since the integration is over ω, the derivative wrt t can be taken inside the integral (the integration and differentiation commute):

= (√(2π))^{-1} integral of { dω (d/dt)(e^{iωt} F(ω)) }

Then, since only the Kernel depends on t, the differentiation only operates on the Kernel:

= (√(2π))^{-1} integral of { dω (d/dt)(e^{iωt}) F(ω) }
= (√(2π))^{-1} integral of { dω (iω e^{iωt}) F(ω) }
= (√(2π))^{-1} integral of { dω e^{iωt} (iω F(ω)) }

Let (iωF(ω)) = G(ω) (some other function of ω):

= (√(2π))^{-1} integral of { dω e^{iωt} G(ω) }
= T^{-1}{G(ω)}[t]

Taking the Fourier transform of both sides:

T{df/dt}[ω] = T{T^{-1}{G(ω)}[t]}[ω]
= G(ω)
= (iωF(ω))
= (iω)F(ω)

The result:

T{df/dt}[ω] = (iω)F(ω)

This shows that the Fourier transform of the time derivative of a function is equal to (iω) times the transform of the function. In other words, differentiation in the position basis is multiplication in the momentum basis.
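A quick numerical sanity check of this result (my own sketch, using the discrete FFT as a stand-in for the continuous transform; the Gaussian and the grid are arbitrary choices that decay well inside the sampling window):

```python
# Sketch: verify T{df/dt}[w] ~ (i*w) F(w) on a sampled Gaussian.
import numpy as np

N, L = 1024, 40.0
t = np.linspace(-L / 2, L / 2, N, endpoint=False)
dt = t[1] - t[0]
w = 2 * np.pi * np.fft.fftfreq(N, d=dt)       # angular frequencies

f = np.exp(-t**2)                             # test function
dfdt = -2 * t * np.exp(-t**2)                 # its exact derivative

lhs = np.fft.fft(dfdt)                        # transform of the derivative
rhs = 1j * w * np.fft.fft(f)                  # (i*w) times the transform of f
print(np.max(np.abs(lhs - rhs)))              # should be at machine-precision level
```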
 
  • #44
Thanks Turin,
I'll have to go over your post and think about it. But I think it'll probably answer my question.
With respect to the book, I think it is the only one with that title. The author is Siebert and it is published by MIT Press / McGraw-Hill. Chapter 13 is titled "Fourier Transforms and Fourier's Theorem".
When I posted my comment, I had just browsed through the book, and I got the impression that most of it was intimately tied to electronic systems. Now that I have examined some of the chapters in more detail, I see that I can probably go directly to the chapter on the Fourier transform and understand it with my present (little) knowledge of electronics. On the other hand, there is always a little culture shock when going from physics books to electronics books, which is in a way good because it forces you to look at the same topics from a different angle.
I see that one of the differences from the physics books is the inclusion of discrete transforms, which I don't remember seeing in physics. Probably part of the reason for this is their use in digital systems, digital signal processing, etc. Perhaps discrete math in physics becomes more important at the Planck scale, but that is just my speculation (I don't know anything about loop quantum gravity) (a long way to go before I get there).
 
  • #45
Turin,
Your notation is a little different from what I am used to.

In your first post:
(1) When you write F{t}[ω], I wonder what you mean. What is the difference between the square brackets and the curly brackets?
(2) I looked at my linear algebra book and it says that the kernel of a transformation is the portion of the domain that is mapped to zero. Is this a different use of the word "kernel", or is it connected with your comment that e^{-iωt} is the kernel of the transformation?

In your second post:
(3) Now you write {F(ω)}[t], slightly different notation, but the square bracket seems to have the same function. Probably if you just tell me how you read it aloud I'll understand.
 
  • #46
OK, I ignored the square brackets and was able to follow your reasoning.
I still would appreciate your explanation about the notation and the "kernel".
Thanks again Turin,
Alex
 
  • #47
That nasty double arrow

The intended meaning of "f(x) <--> F(k)" is "f(x) and F(k) are a Fourier transform pair". The x-space function is written on the left and the k-space function on the right [1]. So, reading the arrow from left to right gives

f(x) --> F(k) ..... f(x) goes to F(k) via a Fourier transform ,

and reading from right to left gives

f(x) <-- F(k) ..... F(k) goes to f(x) via an inverse Fourier transform .

If you know that the arrow holds true for one direction, then it must also hold for the other.

The corresponding Fourier integrals are [2]

f(x) --> F(k) ..... F(k) = 1/sqrt{2 pi} Integral { e^{-ikx} f(x) dx } ,

f(x) <-- F(k) ..... f(x) = 1/sqrt{2 pi} Integral { e^{ikx} F(k) dk } .

Note that one of the transforms has a "+ikx" in the exponential (the inverse transform, according to my definition), while the other has a "-ikx" [3],[4].
_________________________
[1] Thus it would not make sense to multiply both sides of the double arrow by the same thing ... like, for example, ik.

[2] Some books use an asymmetrical convention with regard to the numerical constant in front of the integral, putting a 1/(2 pi) at the front of one of the transforms and just a 1 at the front of the other.

[3] Some books use the opposite convention with regard to (+/-)ikx in the exponential.

[4] In an earlier post, such a sign was missed out; both transforms were written with a "+" sign:
By definition:
[1] f(x) <--> F(k)
[2] f(x) = 1/sqrt{2 pi} integral { e^{ikx} F(k) dk }
[3] F(k) = 1/sqrt{2 pi} integral { e^{ikx} f(x) dx }
-------------------------------------------------------------
-------------------------------------------------------------

The above explains what I meant by the double arrow. Instead of providing a means to express Fourier-type relationships between objects in a clear and compact way, it only added confusion to an already uncertain situation. ... Sorry about that.
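As a concrete check of the sign convention spelled out above (a sketch of mine; the Gaussian and the grid are arbitrary choices): with F(k) = 1/sqrt{2 pi} Integral { e^{-ikx} f(x) dx }, the Gaussian f(x) = e^{-x^2/2} should transform into F(k) = e^{-k^2/2}.

```python
# Sketch: evaluate F(k) = (2*pi)^(-1/2) * Integral{ exp(-i*k*x) f(x) dx }
# by a simple Riemann sum and compare with the known Gaussian transform.
import numpy as np

x = np.linspace(-20.0, 20.0, 4001)       # wide grid; f is negligible at the ends
dx = x[1] - x[0]
f = np.exp(-x**2 / 2)

for k in (0.0, 0.5, 1.0, 2.0):
    F_k = np.sum(np.exp(-1j * k * x) * f) * dx / np.sqrt(2 * np.pi)
    print(k, F_k.real, np.exp(-k**2 / 2))   # numerical vs. exact transform
```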
 
  • #48
Eye:
I didn't have trouble with the double arrow. Turin asked to be sure he was interpreting correctly (which he was). But he preferred to use the operator notation. My only difficulty was in deriving your correspondence [3].
You said: "...and from this [3] follows" ( I didn't see how it followed and tried to explain it using wrong arguments). Turin made me see my error and now everything is clear.
I did have a little trouble with Turin's notation but managed to follow his argument anyway. So now I think the whole topic of momentum as the generator of translations is quite clear to me.

Eye and Turin:
Thanks a lot for all your help.
Alex
 
  • #49
Regarding what 'kernel' means. It actually is all related, but let's stay clear of the linear algebra meaning for the moment, and keep it simple.

In signal processing, and in other theories (like Sturm-Liouville theory), the idea is to find a set of functions that form a sort of basis. But I digress...

Let's say we are interested in defining what an integral transform is. Generically, it is a map that takes one function, say g(t), into another function f(x).

So
f(x) = integral (a..b) K(x,t) g(t) dt

The function K(x,t) is called the kernel of the transformation, and along with the limits of integration, uniquely define the properties of how one maps into the other.

In the case of Fourier analysis, that kernel is exp(-i x t); for other transforms it's something else (there are many transforms with interesting properties, like the Laplace transform, the wavelet transform, etc.).
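A generic integral transform in exactly this sense can be sketched in a few lines (my illustration; the kernel, the limits, and the test function are arbitrary, and scipy's quad is called twice because it only integrates real-valued integrands):

```python
# Sketch: (Tg)(x) = Integral(a..b) { K(x, t) g(t) dt }, with the kernel passed in.
import numpy as np
from scipy.integrate import quad

def integral_transform(g, kernel, a, b):
    """Return the function x -> Integral(a..b) kernel(x, t) g(t) dt."""
    def Tg(x):
        re, _ = quad(lambda t: np.real(kernel(x, t) * g(t)), a, b)
        im, _ = quad(lambda t: np.imag(kernel(x, t) * g(t)), a, b)
        return re + 1j * im
    return Tg

# Example: a Fourier-type kernel exp(-i*x*t) applied to a Gaussian.
F = integral_transform(lambda t: np.exp(-t**2 / 2),
                       lambda x, t: np.exp(-1j * x * t),
                       -np.inf, np.inf)
print(F(1.0))   # ~ sqrt(2*pi) * exp(-1/2); no 1/sqrt(2*pi) prefactor included here
```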
 
  • #50
Alex,
You should probably ignore my posts if you are comfortable with your understanding. They are very confusing. And shame on me for not defining anything; that was sure sloppy. And, out of habit, I used the frequency-time transformation rather than the momentum-position transformation (the same, but different notation).

If you're interested, though, in my most recent post:

T{f(t)}[ω] represents the transformation from the function f(t) in the time domain to a function in the frequency domain. To complete the statement:

F(ω) = T{f(t)}[ω]

The "T" is usually a capital cursive "F" that denotes "Fourier transform." I used a "T" to spare some confusion that would have arisen per the rest of my notational scheme.

The curly braces contain the function to be transformed (as well as the variable of integration implied by the argument of that function).

The square braces display the domain into which the transformation takes the function. Quite often this is left out, as it is obvious from the context or from a more explicit expression in terms of the integral.

f(t) is the function in the time domain. F(ω) is the corresponding function in the frequency domain (the transform of f(t)).

Haelfix provides an explanation of my use of "kernel." Though, I am unclear how one can say this is not a linear algebraic application. I was always under the impression that that is exactly what the kernel is: the elements of the transformation matrix.
 
  • #51
Turin,
Your posts are not confusing; I was just unfamiliar with the notation you used. In fact, the only thing I didn't understand was the square brackets. But your post made very clear where I was erring and how to fix it. The fact that you used the time domain instead of the space domain didn't confuse me either. Your explanation of the notation in your last post makes everything absolutely clear.
With respect to the kernel, I understand your definition very well. It coincides with the one given by Haelfix, but I can't make a connection with the one given in my linear algebra book. I did a Google search on the "kernel of a transformation" and the definitions I saw coincided with the linear algebra book.
According to this definition, the kernel is the subset of the domain that is mapped by the transformation to zero. If we think of the transformation as represented by a matrix, with the domain and range composed of vectors (functions), then this other definition would define the kernel as a set of vectors (functions), while your definition would consider it an element of the transformation matrix. Maybe these are two completely different meanings of the same word, or maybe they are somehow connected or equivalent, but I can't see the relationship.
 
  • #52
turin said:
Haelfix provides an explanation of my use of "kernel." Though, I am unclear how one can say this is not a linear algebraic application. I was always under the impression that that is exactly what the kernel is: the elements of the transformation matrix.
Haelfix isn't saying that T is not a linear transformation. Rather, there is a concept in linear algebra which is designated by the same term "kernel" (also "null space") but refers to something else. For that context, the definition is:

Definition: Let A be a linear transformation from a vector space V into a vector space W. Then the "kernel" (or "null space") of A is the set
K_A = {v ∈ V | A(v) = 0}.

It turns out that K_A is a linear subspace of V, and A is invertible iff A is "onto" and K_A = {0}.

Thus, in your sense of "kernel", the "kernel" of T is e^{-iωt}, whereas in the other sense, the "kernel" ("null space") is the set consisting of [the equivalence class of functions corresponding to] the zero vector. (Note: You can ignore the preceding square-bracket remark if it troubles you (or, better yet, ask (if it troubles you)).)
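For anyone who wants to see the null-space sense of "kernel" in an actual computation, here is a tiny sketch (mine; the matrices are arbitrary examples):

```python
# Sketch: the kernel (null space) of a linear map, i.e. the vectors sent to 0.
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])    # singular: second row is twice the first
B = np.array([[1.0, 2.0],
              [0.0, 1.0]])    # invertible

print(null_space(A))          # one basis vector, proportional to (-2, 1): nontrivial kernel
print(null_space(B))          # empty (2, 0) array: the kernel is just {0}
```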
 
  • #53
OK, I think I get it. The other (null space) version of "kernel" must be the set consisting of only the zero vector, since the Fourier transform has an inverse? It doesn't "trouble" me, but I don't know what an "equivalence class" is. Care to enlighten us?
 
  • #54
turin said:
OK, I think I get it.
You got it.

----------

turin said:
... I don't know what an "equivalence class" is.
When we choose to represent the "vectors" of our Hilbert space by (square-integrable) "functions", a potential difficulty may arise.

Suppose we have two functions, f and g, which are equal "almost everywhere" (i.e. the set of points for which f(x) ≠ g(x) has "measure zero"). Then, as far as all [of the "tame"] integrals are concerned, f(x) and g(x), under the integral, will give the same result. Thus, for all practical purposes, f and g are considered to represent the same "vector" ... yet, at the same time, they may be distinct "functions" (i.e. we may not have f(x) = g(x) at every single point x).

In simple terms, the condition "f is almost everywhere equal to g" can be expressed as

[1] Integral { |f(x) - g(x)| dx } = 0 .

This condition puts our "functions" into "groups", or "classes", of "equivalent functions". These "groups" are the "equivalence classes" corresponding to the "equivalence relation" defined by [1]. In this way, a formal "vector" of the Hilbert space is represented by a particular "equivalence class", and when we wish to do a calculation, we can simply pick any "function" in the given "equivalence class" (... and it doesn't matter which one we pick).

[Note: In the above, a formal definition of "equivalence relation" has not been given. Nor has a formal demonstration been given that condition [1] satisfies such a definition. Neither has a formal definition of "measure" and "measure zero" been given, nor a formal demonstration that condition [1] is equivalent to "the set of points for which f(x) ≠ g(x) has measure zero".
... But these things are basically trivial to do, and yet, may be the cause for a "small headache" to some, as well as, being construed (by some) to be "a complete waste of time".]
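(For completeness, the omitted check that condition [1] does define an "equivalence relation" is indeed short; the triangle inequality does all the work:

Reflexive: Integral { |f(x) - f(x)| dx } = 0, so f ~ f.
Symmetric: |f(x) - g(x)| = |g(x) - f(x)|, so f ~ g implies g ~ f.
Transitive: |f(x) - h(x)| ≤ |f(x) - g(x)| + |g(x) - h(x)|, so if both integrals on the right vanish, then Integral { |f(x) - h(x)| dx } = 0, i.e. f ~ h.)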
 
  • #55
I don't want to deem the issue of equivalence classes to be "a waste of time," but it seems to me that there can only be one of the functions out of the equivalence class that could survive the further restriction of being physically meaningful. Am I incorrect to interpret the distinctions as removable discontinuities?
 
  • #56
turin said:
Am I incorrect to interpret the distinctions as removable discontinuities?
It would be more accurate to describe the distinctions as occurring at "isolated points". Suppose that x = x_0 is such a point of distinction. Then, if f(x_0^-) = f(x_0^+), yes, we are dealing with a "removable discontinuity". If, however, f(x_0^-) ≠ f(x_0^+), then, even though we have a rule for removing the "distinction", we cannot apply the term "removable discontinuity".


... it seems to me that there can only be one of the functions out of the equivalence class that could survive the further restriction of being physically meaningful.
Yes, I think that your statement is basically true. In a physical situation which demands that a function be continuous on some interval, all members of the corresponding "equivalence class" will differ on that interval only by "removable discontinuities". On the other hand, if a physical situation (probably involving an idealization of some kind) calls for some function to have a "step discontinuity", the physical context would probably tell us that the value of the function at the "step", say x = x_0, is quite irrelevant; we would probably just make f(x_0) = f(x_0^-) or f(x_0^+).

So ... it sounds to me like you may be suggesting that, in a general sort of way, the "physically distinct solutions" are, so to speak, in a one-to-one correspondence with the "equivalence classes". Yeah, this makes sense ... never thought of it, though.
 
  • #57
Perhaps this issue comes to its climactic import when decomposing a function that is physical into a basis of functions that may not themselves be physical (or the other way around).
 
  • #58
Quite independently of whether or not such a climax - or one of similar import - can/cannot or will/will not occur, the real question (I would say) is: Why does the mathematician feel compelled to speak of "equivalence classes" instead of the "functions" themselves?

... Any idea?
 
  • #59
Excuse me, Eye, but it seemed to me that you yourself answered this question in post #54, earlier in this thread. You have ONE thing to represent, and several functions do it, and you want to ignore the differences between them, so you form equivalence classes. You were so clear back there, I can't understand what your difficulty is here.
 
  • #60
I apologize. I didn't realize that my mention of "equivalence classes" and initial egging-on was going to be the spearhead of such a lengthy tangent. I'm not having a difficulty here. I posed the question to Turin, because I thought, somehow, the main point was being lost. So, in order to return, once again, to the main point I posed the question:

Why does the mathematician feel compelled to speak of "equivalence classes" instead of the "functions" themselves?

I posed the question, not because I didn't have an answer, but rather, to give Turin something more to think about.

As for this question having already been answered in post #54, I see only two (distinct) statements there, each of which (only) appears to provide an answer for why the mathematician feels "compelled":

(i) without equivalence classes "a potential difficulty may arise";

(ii) "for all practical purposes" a pair of almost-everywhere equal functions are considered to be the same.

But (i) only hints at an answer, while (ii) opts out of giving that answer.

That answer is: If we speak only of "functions", then a linear transformation, such as the Fourier transform, when construed as a mapping of "functions" to "functions" will be MANY-to-ONE, and therefore, have no well-defined inverse. By speaking of "equivalence classes" instead, this difficulty is removed.

This then explains why I felt compelled to [parenthetically] mention "equivalence classes" in the context of the "kernel" (i.e. "null space") of the Fourier transform back in post #52.
 
