What's the deal with infinitesimal operators?

In summary: the conversation asks whether there is a rigorous treatment of "infinitesimal operators" from an "advanced calculus" (epsilon-delta) point of view, rather than one phrased in terms of infinitesimals. The concept is connected to Lie groups and Lie algebras, and the participants want a practical, concrete approach to it. The conversation also examines the use of Taylor series in proving results about infinitesimal operators.
  • #1
Stephen Tashi
Science Advisor
Is there a treatment of "infinitesimal operators" that is rigorous from the epsilon-delta point of view?

In looking for material on the infinitesimal transformations of Lie groups, I find many things online about infinitesimal operators. Most seem to be written by people who take the idea of infinitesimals seriously, and I don't think they are talking about the rigorous approach to infinitesimals (a la Abraham Robinson and "nonstandard analysis").

I suppose people who work on manifolds and various morphisms can also deal with infinitesimal operators via some abstraction. However, I'd like to know if there is an approach to infinitesimal operators (in general, not simply Lie group operators) that is essentially from the "advanced calculus" point of view.

(If not, I suppose I'll have to think about them like a physicist.)
 
  • #2
http://en.wikipedia.org/wiki/Infinitesimal_transformation

Any book on differential geometry or Lie algebras should be rigorous. You can just go through and add epsilons and deltas.

Maybe you are talking about things like

$$e^{t \, D} \mathop{f}(x)=\mathop{f}(x+t) \\
e^{t \, D} \, \sim \, 1+t \, D $$

That is entirely rigorous; it is just that the equivalence holds to first order.
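
For instance, here is a quick Python/sympy sketch (just an illustration of mine, with f = sin as an arbitrary analytic test function) checking both statements: the partial sums of the operator exponential reproduce the Taylor series of f(x+t), and 1 + tD agrees with the shift to first order.

[code]
import sympy as sp

x, t = sp.symbols('x t')
f = sp.sin(x)  # any analytic test function will do

# Partial sum of e^{t D} f = sum_n t^n f^(n)(x) / n!
N = 6
etD_f = sum(t**n / sp.factorial(n) * sp.diff(f, x, n) for n in range(N))

# Taylor expansion of f(x+t) in t to the same order
shift = sp.series(sp.sin(x + t), t, 0, N).removeO()
print(sp.simplify(etD_f - shift))  # 0: they agree term by term

# First-order statement: (1 + t D) f matches f(x+t) up to O(t^2)
first_order = f + t * sp.diff(f, x)
print(sp.series(sp.sin(x + t) - first_order, t, 0, 3))  # starts at t**2
[/code]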
 
  • #3
lurflurf said:

That's the correct subject, but it isn't the correct approach. (I find it amusing that calculus students are discouraged from proving things in terms of infinitesimals, but when we get to some advanced topics, reasoning by way of infinitesimals suddenly becomes respectable.)

Any book on differential geometry or Lie algebras should be rigorous. You can just go through and add epsilons and deltas.

I have yet to see such a book that was compatible with a rigorous advanced calculus point of view. The abstract approaches are rigorous, but that's not my interest. The down-to-earth books speak in terms of infinitesimals. Do you know of a book with a practical orientation that treats infinitesimal transformations without using infinitesimals?

Maybe you are talking about things like
[itex]
e^{t \, D} \mathop{f}(x)=\mathop{f}(x+t) \\
[/itex]
(I first saw that result in a volume of Oliver Heaviside's Electrical Papers.)


[itex]
e^{t \, D} \, \sim \, 1+t \, D
[/itex]
That is entirely rigorous; it is just that the equivalence holds to first order.

As far as I can interpret the materials I've seen on applications of Lie groups, the arguments that begin with showing things are true "to the first order" proceed to claim things that are true with "=" meaning equal, not "equal to the first order". Perhaps there is a rigorous way to phrase these arguments, but I have not seen it done.





 
  • #4
I'm not sure what you mean by "rigorous from the epsilon-delta point of view" - the whole point of "epsilon-delta limit arguments" is to avoid "infinitesimals", which, in order to be introduced rigorously, require an extended number system.
 
  • #5
Stephen Tashi said:
As far as I can interpret the materials I've seen on applications of Lie groups, the arguments that begin with showing things are true "to the first order" proceed to claim things that are true with "=" meaning equal, not "equal to the first order". Perhaps there is a rigorous way to phrase these arguments, but I have not seen it done.

I am positively surprised to find that I'm not the only one who's annoyed by this.

I never did become an expert on this stuff, so I cannot answer the original question. But I am under the impression that, for example, "Lie Groups Beyond an Introduction" by Anthony W. Knapp is rigorous in the manner of real mathematics. What I do not know is how much it actually helps those who are puzzled by theoretical physics.
 
  • #6
HallsofIvy said:
I'm not sure what you mean by "rigorous from the epsilon-delta point of view" - the whole point of "epsilon-delta limit arguments" is to avoid "infinitesimals", which, in order to be introduced rigorously, require an extended number system.

I do mean that I would like to see a treatment of "infinitesimal operators" that did not make reference to "infinitesimals" but rather used the more familiar (to me) concepts of limits (in the epsilon-delta sense) of functions, sequences of functions etc. My web search on "infinitesimal operators" didn't even turn up a source that defined "infinitesimal operators" except in terms of infinitesimals.

Perhaps I'm using terminology that only turns up things written by physicists. Or perhaps "infinitesimal operators" are one of these areas where physicists have gotten ahead of mathematicians and no non-infinitesimal treatment of them has been written.

An example that I've given in another thread (on using Lie Groups to solve differential equations) is the treatment of the "infinitesimal operator" of a 1-parameter Lie group.

Avoiding the traditional zoo of Greek letters, I prefer the terminology:

Let [itex] T(x,y,\alpha) = ( \ T_x(x,y,\alpha), T_y(x,y,\alpha) \ ) [/itex] denote an element of a Lie group of 1-parameter transformations of the xy-plane onto itself.

Let [itex] f(x,y) [/itex] be a real valued function whose domain is the xy-plane.

Let [itex] u_x(x,y) = D_\alpha \ T_x(x,y,\alpha)_{\ |\alpha=0} [/itex]
Let [itex] u_y(x,y) = D_\alpha \ T_y(x,y,\alpha)_{\ |\alpha =0} [/itex]

([itex] u_x [/itex] and [itex] u_y [/itex] are the "infinitesimal elements".)

Let [itex] U [/itex] be the differential operator defined by the operation on the function [itex]g(x,y)[/itex] by:

[itex] U g(x,y) = u_x(x,y) \frac{\partial}{\partial x} g(x,y) + u_y(x,y)\frac{\partial}{\partial y} g(x,y) [/itex]

(The operator [itex] U [/itex] is "the symbol of the infinitesimal transformation" .)

Every book that takes a concrete approach to Lie Groups proves a result that says

[tex]f(x_1,y_1) = f(x,y) + U f\ \alpha + \frac{1}{2!}U^2 f \ \alpha^2 + \frac{1}{3!}U^3 f \ \alpha^3 + \cdots[/tex]

by using Taylor series.

However, the function they are expanding is (to me) unclear.

If I try to expand [itex] f(x_1,y_1) = f(T_x(x,y,\alpha),T_y(x,y,\alpha)) [/itex] in Taylor series, only the first two terms of that result work.

If I expand [itex] f(x_1,y_1) = f( x + \alpha \ u_x(x,y), y + \alpha \ u_y(x,y) ) [/itex] then I get the desired result. So I think this is equivalent to what the books do, because they do not give an elaborate proof of the result; they present it as being "just calculus", and expanding [itex] f(x_1,y_1) = f( x + \alpha \ u_x(x,y), y + \alpha \ u_y(x,y) ) [/itex] is indeed just calculus.

The books then proceed to give examples where the above result is applied to expand [itex] f(x_1,y_1) = f(T_x(x,y,\alpha),T_y(x,y,\alpha)) [/itex] I haven't found any source that justifies this expansion except by using the concept of infinitesimals.
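
For concreteness, here is a Python/sympy sketch of the definitions above (the rotation group is my own choice of example; none of this is from a particular book):

[code]
import sympy as sp

x, y, a = sp.symbols('x y alpha')

# A concrete 1-parameter group: rotations of the plane
Tx = sp.cos(a)*x - sp.sin(a)*y
Ty = sp.sin(a)*x + sp.cos(a)*y

# The "infinitesimal elements": derivatives of the transformation at alpha = 0
ux = sp.diff(Tx, a).subs(a, 0)   # -y
uy = sp.diff(Ty, a).subs(a, 0)   #  x

# The symbol U of the infinitesimal transformation, applied to a function g
U = lambda g: ux*sp.diff(g, x) + uy*sp.diff(g, y)

f = sp.Function('f')(x, y)
print(ux, uy)   # -y  x
print(U(f))     # -y*Derivative(f(x, y), x) + x*Derivative(f(x, y), y)
[/code]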
 
Last edited:
  • #7
I am not sure I understand the question. But it seems to me that epsilon-delta proofs are avoided for simplicity of exposition.
 
  • #8
lavinia said:
I am not sure I understand the question.

I'd be happy to have answers to either of the following questions.

1. If you are familiar with books about Lie groups that do a Taylor expansion in terms of the operator [itex]U [/itex], what is the definition of the function [itex] f(x_1,y_1) [/itex] that they are expanding?

2. What justifies using that expansion for the function [itex] f(T_x(x,y,\alpha),T_y(x,y,\alpha) ) [/itex]?
But it seems to me that epsilon-delta proofs are avoided for simplicity of exposition.
I suppose lack of clarity is a type of simplicity.
 
  • #9
The only contexts in which I've ever seen the phrases "infinitesimal operators" and "infinitesimal transformations" are in QM and GR texts respectively, with the latter being related to one-parameter Lie groups (specifically one-parameter diffeomorphism groups of space-times). I've never seen a math book that uses the terminology.

In the case of "infinitesimal transformations", one starts with a one-parameter group of diffeomorphisms ##\varphi_t## on a smooth manifold ##M## generated by a vector field ##v^{a}## and defines the Lie derivative of a tensor field ##T^{a_1...a_k}{}{}_{b_1...b_l}## on ##M## as [tex]\mathcal{L}_{v}T^{a_1...a_k}{}{}_{b_1...b_l} = \lim_{t\rightarrow 0}\frac{\varphi^{*}_{-t}T^{a_1...a_k}{}{}_{b_1...b_l} - T^{a_1...a_k}{}{}_{b_1...b_l}}{t}[/tex] I guess in that sense the "infinitesimal transformations" are codified by regular ##\epsilon-\delta## limits.
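
For a scalar field the definition reduces to an ordinary ##t \rightarrow 0## limit. A Python/sympy sketch (my own, using the rotation flow as the example and the sign convention that for scalars ##\mathcal{L}_v f = \frac{d}{dt} f(\varphi_t)|_{t=0}##):

[code]
import sympy as sp

x, y, t = sp.symbols('x y t')

# Flow of the rotation field v = (-y, x)
phi = (sp.cos(t)*x - sp.sin(t)*y, sp.sin(t)*x + sp.cos(t)*y)

f = x**2 + 3*y  # a scalar field; for scalars the pullback is composition with the flow
pullback = f.subs([(x, phi[0]), (y, phi[1])], simultaneous=True)

# Lie derivative as an ordinary t -> 0 limit, no infinitesimals involved
L_v_f = sp.limit((pullback - f)/t, t, 0)
print(sp.simplify(L_v_f))                  # 3*x - 2*x*y
print(-y*sp.diff(f, x) + x*sp.diff(f, y))  # v(f): the same thing
[/code]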
 
Last edited:
  • #10
WannabeNewton said:
In the case of "infinitesimal transformations", one starts with a one-parameter group of diffeomorphisms ##\varphi_t## on a smooth manifold ##M## generated by a vector field ##v^{a}## and defines the Lie derivative of a tensor field ##T^{a_1...a_k}{}{}_{b_1...b_l}## on ##M## as [tex]\mathcal{L}_{v}T^{a_1...a_k}{}{}_{b_1...b_l} = \lim_{t\rightarrow 0}\{\frac{\varphi^{*}_{-t}T^{a_1...a_k}{}{}_{b_1...b_l} - T^{a_1...a_k}{}{}_{b_1...b_l}}{t}\}[/tex] I guess in that sense the "infinitesimal transformations" are codified by regular ##\epsilon-\delta## limits.

My simplistic view of the above is that the vector field in my example is [itex] (u_x,u_y) [/itex] and one can define a directional derivative of a function at each point (x,y) that is taken with respect to the field vector at that point. Those concepts are defined using the ordinary notion of limit. However, I don't understand how Taylor expansions of functions in terms of the operator [itex] U [/itex] are proven without resorting to arguments using infinitesimals.
 
  • #11
Ah yes. The book "Modern Quantum Mechanics" by Sakurai does that over and over, and the justifications are, by mathematical standards, abysmal. All he does is ignore the terms of second order and higher in the infinitesimals. To give an example: http://postimg.org/image/fxhr3sgmf/

This is one of the reasons I absolutely hate that book. Apparently the author had some kind of grudge against mathematical rigor. I don't get how anyone can settle for such hand-waviness. The book by Ballentine is supposed to be more rigorous mathematically but: http://postimg.org/image/h29z9wlzb/

You might be interested in Stone's theorem: http://en.wikipedia.org/wiki/Stone's_theorem_on_one-parameter_unitary_groups and this thread: http://physics.stackexchange.com/questions/62876/translator-operator

I'm sure there is a QM for mathematicians book out there that treats these "infinitesimal operators" with more care. I agree with you that the notion of infinitesimal generators in the context of vector fields is much simpler to make mathematically rigorous than in the context of operators.
 
Last edited by a moderator:
  • #12
Stephen Tashi said:
I'd be happy to have answers to either of the following questions.

1. If you are familiar with books about Lie groups that do a Taylor expansion in terms of the operator [itex]U [/itex], what is the definition of the function [itex] f(x_1,y_1) [/itex] that they are expanding?

I suppose lack of clarity is a type of simplicity.

Since I don't have your book let's start with an example to see if this is the right track.

SO(2) parameterized as

[tex] G(x) = \begin{pmatrix} \cos x & -\sin x \\ \sin x & \cos x \end{pmatrix} [/tex]

G can be expressed as a Taylor series around the identity matrix. This series can be computed directly.

On the other hand, since G(x) is a homomorphism from the real line under addition to the group of rotations under matrix multiplication, one can derive the equation

dG/dx = VG, where VG is the matrix product of V and G, and V is the matrix

[tex] V = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} [/tex]

V is the derivative of G at zero and the iterated derivatives of G at zero are just the iterated powers of V.

So in this case G(x) = exp(xV).

V is called the infinitesimal generator of G because every element of G is the exponential of V times the parameter, x. This simplifies G since now it can be derived from a single matrix.

Another way to look at infinitesimal generators of SO(2) is to look at the infinitesimal effect of a rotation of the plane upon a differentiable function.

The function F(x,y) under rotation becomes a function of θ. That is, F = F(x(θ), y(θ)),

where x(θ) = cosθ x[itex]_{0}[/itex] - sinθ y[itex]_{0}[/itex], y(θ) = sinθ x[itex]_{0}[/itex] + cosθ y[itex]_{0}[/itex], and (x[itex]_{0}[/itex],y[itex]_{0}[/itex]) is the point at which F is being differentiated and θ is the angle of rotation.

So dF/dθ = (∂F/∂x)(dx/dθ) + (∂F/∂y)(dy/dθ) = (∂F/∂x)(-x[itex]_{0}[/itex]sinθ - y[itex]_{0}[/itex]cosθ) + (∂F/∂y)(x[itex]_{0}[/itex]cosθ - y[itex]_{0}[/itex]sinθ),

evaluated at θ = 0. This is -y[itex]_{0}[/itex]∂F/∂x + x[itex]_{0}[/itex]∂F/∂y.

F(x,y) = F(x[itex]_{0}[/itex],y[itex]_{0}[/itex]) + (-y[itex]_{0}[/itex]∂F/∂x + x[itex]_{0}[/itex]∂F/∂y)dθ up to first order.

The infinitesimal generator of the rotation is defined as the operator

-y∂/∂x + x∂/∂y
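
Here is a quick Python/sympy sanity check of the matrix-exponential description (just a sketch of mine):

[code]
import sympy as sp

x = sp.symbols('x')
V = sp.Matrix([[0, -1], [1, 0]])   # derivative of G at 0

G = (x*V).exp()                    # matrix exponential, computed by sympy
print(sp.simplify(G))              # Matrix([[cos(x), -sin(x)], [sin(x), cos(x)]])

# The differential equation dG/dx = VG that drives the argument:
print(sp.simplify(G.diff(x) - V*G))  # zero matrix
[/code]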
 
  • #13
lavinia said:
V is the derivative of G at zero and the iterated derivatives of G at zero are just the iterated powers of V.

The question that I've asked (I think) is why, in general, does such a derivative turn out to be iterated powers of a single operator? Matrix groups are a special case, and ordinary calculus may suffice to show this. Perhaps that gives insight into the general situation.
Another way to look at infinitesimal generators of SO(2) is to look at the infinitesimal effect of a rotation of the plane upon a differentiable function.

The original post asks how to look at it without using arguments that assume existence of "infinitesimals".
 
  • #14
Stephen Tashi said:
The question that I've asked (I think) is why, in general, does such a derivative turn out to be iterated powers of a single operator? Matrix groups are a special case, and ordinary calculus may suffice to show this. Perhaps that gives insight into the general situation.

The generalization is the 1 parameter subgroup whose derivative at the identity is a particular element of the Lie algebra. In the case of SO(2), the rotation matrix of sines and cosines is a 1 parameter group.

All of the properties of the exponential follow because it is a homomorphism of the real numbers under addition into the Lie group.



The original post asks how to look at it without using arguments that assume existence of "infinitesimals".

Not sure about your problem here. You can think of dθ as the differential of the function θ. For this first-order approximation it just means that for small enough increments in theta the approximation is close and higher-order terms become insignificant. This is just your epsilon-delta proof. But the important observation is that (the higher-order terms)/Δθ all go to zero, so eventually the first-order approximation is arbitrarily accurate.
 
Last edited:
  • #15
lavinia said:
The generalization is the 1 parameter subgroup whose derivative at the identity is a particular element of the Lie algebra. In the case of SO(2), the rotation matrix of sines and cosines is a 1 parameter group.

I think that's the problem I have described, if we assume [itex] T_x , T_y [/itex] are infinitely differentiable functions.


All of the properties of the exponential follow because it is a homomorphism of the real numbers under addition into the Lie group.

To justify the exponential in the first place, you need to establish that the expansion can be done using powers of the same operator, which, as I said, is what I want to see demonstrated.



Not sure about your problem here. You can think of dθ as the differential of the function θ.

I don't have a problem with differentials provided a "differential" can be defined without resorting to the terminology of "infinitesimals". The idea that a first-order approximation can be defined is not a problem. The problem is how that implies that the Taylor expansion can be written with an "=" (instead of a [itex]\approx [/itex]) using operators that are first-order approximations in the variable [itex] \alpha [/itex] that is used in the expansions.
 
  • #16
Stephen Tashi said:
To justify the exponential in the first place, you need to establish that the expansion can be done using powers of the same operator, which, as I said, is what I want to see demonstrated.

For matrix groups this is just a Taylor series expansion. You are solving the differential equation dH/dt = XH, or X = (dH/dt)H[itex]^{-1}[/itex], which shows that the tangent vector to H is right invariant.

On an abstract Lie group the differential equation is

dR[itex]_{g^{-1}}[/itex]dH/dt = X, where X is an element of the tangent space at the identity.
 
Last edited:
  • #17
lavinia said:
For matrix groups this is just a Taylor series expansion.

You are solving the differential equation dH/dt = XH, or X = (dH/dt)H[itex]^{-1}[/itex], which shows that the tangent vector to H is right invariant.

On an abstract Lie group the differential equation is

dR[itex]_{g^{-1}}[/itex]dH/dt = X, where X is an element of the tangent space at the identity.

In the books that take a concrete approach to "abstract" Lie groups (i.e. don't restrict themselves to matrix groups) the differential equation approach is mentioned. However, as an independent demonstration of the operator expansion, they purport to do the Taylor expansion directly. When I try this, the 3rd term of the expansion

[itex] f(T_x(x,y,\alpha),T_y(x,y,\alpha)) = f(x,y) + Uf\ \alpha + \frac{1}{2!} U^2 f\ \alpha^2 + \cdots [/itex]

does not come out to be [itex] \frac{1}{2!} U^2f\ \alpha^2 [/itex].

So a particular instance of my original question is whether there is a way to argue that the 3rd term is [itex] \frac{1}{2!} U^2f\ \alpha^2 [/itex] using the special properties of the transformation.

Those special properties being

[itex] T_x(x,y,0) = x,\ \ T_y(x,y,0) = y [/itex]

and the property that allows the homomorphism you mentioned:

[tex] T_x(x,y,\alpha + \beta)= T_x(T_x(x,y,\alpha),\beta), T_y(T_y(x,y,\alpha),\beta) [/tex]
[tex] T_y(x,y,\alpha + \beta)= T_x(T_x(x,y,\alpha),\beta), T_y(T_y(x,y,\alpha),\beta) [/tex]

Or is the claim that the result can be established by doing Taylor expansion directly false?
 
  • #18
Wow, make sure you link this thread to our one in your next post, good stuff!

Stephen Tashi said:
When I try this, the 3rd term of the expansion

[itex] f(T_x(x,y,\alpha),T_y(x,y,\alpha)) = f(x,y) + Uf\ \alpha + \frac{1}{2!} U^2 f\ \alpha^2 + \cdots [/itex]

does not come out to be [itex] \frac{1}{2!} U^2f\ \alpha^2 [/itex].

So a particular instance of my original question is whether there is a way to argue that the 3rd term is [itex] \frac{1}{2!} U^2f\ \alpha^2 [/itex] using the special properties of the transformation.

Emanuel derives it on page 13, explicitly showing how you get the third term; is there something wrong with what he does?

WannabeNewton said:
Ah yes. The book "Modern Quantum Mechanics" by Sakurai does that over and over, and the justifications are, by mathematical standards, abysmal. All he does is ignore the terms of second order and higher in the infinitesimals.

I was reading Gelfand's proof of Noether's theorem yesterday (Page 168) & level to which terms of second order & higher are abused in that proof are stunning. I love the book but in that proof there's just too much of it going on, & not coincidentally it's intimately related to the topic of infinitesimal transformations.

WannabeNewton said:
The only contexts in which I've ever seen the phrases "infinitesimal operators" and "infinitesimal transformations" are in QM and GR texts respectively

Another place I've seen infinitesimal transformations come up is in the derivation of the Euler angles in classical mechanics, so now you'll have three subjects you've seen this stuff in :tongue:
 
  • #19
bolbteppa said:
Wow, make sure you link this thread to our one in your next post, good stuff!

Only if this thread enlightens me.

Emanuel derives it on page 13, explicitly showing how you get the third term; is there something wrong with what he does?

He just claims it's true. And he's correct if [itex] f_1(x,y) [/itex] is defined to be [itex] f(x + \alpha\ u_x(x,y), y+ \alpha\ u_y(x,y)) [/itex], which isn't the function I want to expand.

Try to derive it using [itex] f_1(x,y) = f(T_x(x,y,\alpha), T_y(x,y,\alpha)) [/itex] and ordinary calculus - the chain rule, the product rule. It doesn't work, because [itex] \frac{d^2 f_1}{d\alpha^2} [/itex] has terms involving factors like [itex] \frac{d^2T_x(x,y,\alpha)}{d\alpha^2} [/itex]. When you set [itex] \alpha = 0 [/itex] that factor becomes [itex]\frac{d^2 T_x(x,y,\alpha)}{d\alpha^2}_{\ |\alpha = 0} [/itex], not [itex] (u_x(x,y))^2 = \big( \frac{dT_x(x,y,\alpha)}{d\alpha}_{\ |\alpha = 0} \big)^2 [/itex]. So the derivation (if there is one) requires something besides straightforward differentiation.
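
For reference, here is a Python/sympy sketch (my own, with rotations as the concrete group) showing what happens in a specific case: the second derivative does come out equal to [itex]U^2 f[/itex], because the [itex]\frac{d^2T_x}{d\alpha^2}[/itex] terms combine with the chain-rule terms; the question is why this works in general.

[code]
import sympy as sp

x, y, a = sp.symbols('x y alpha')
f = sp.Function('f')

# Rotations as the concrete 1-parameter group
Tx = sp.cos(a)*x - sp.sin(a)*y
Ty = sp.sin(a)*x + sp.cos(a)*y
ux, uy = -y, x                       # the infinitesimal elements of this group

U = lambda g: ux*sp.diff(g, x) + uy*sp.diff(g, y)

# Direct second alpha-derivative of f(T_x, T_y) at alpha = 0 ...
d2 = sp.diff(f(Tx, Ty), a, 2).subs(a, 0).doit()

# ... equals U^2 f, even though d^2 T_x/d alpha^2 at 0 is -x, not (u_x)^2
print(sp.simplify(d2 - U(U(f(x, y)))))  # 0
print(sp.diff(Tx, a, 2).subs(a, 0))     # -x
[/code]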
 
  • #20
From the differential equation

dH/dt = XH

one gets d[itex]^{2}[/itex]H/dt[itex]^{2}[/itex] = XdH/dt = X[itex]^{2}[/itex]H

Inductively, d[itex]^{n}[/itex]H/dt[itex]^{n}[/itex] = X[itex]^{n}[/itex]H

H(0) = Identity matrix

so the Taylor series at the identity follows.
The differential equation derives from the homomorphism equation.

H(x+y) = H(x)H(y)

for instance,

dH(x+y)/dx = (dH(x)/dx)H(y). At x = 0 the right-hand side is just XH(y), where X is the derivative of H at 0.

The left hand side can be rewritten as

[dH(x+y)/d(x+y)] d(x+y)/dx, using the Chain Rule. This is just the derivative of H.

In sum dH/dt = XH
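
For the SO(2) example, all three facts (the differential equation, the iterated derivatives as powers of X, and the homomorphism property) can be checked with a short Python/sympy sketch (my own illustration):

[code]
import sympy as sp

s, t = sp.symbols('s t')
X = sp.Matrix([[0, -1], [1, 0]])
H = (t*X).exp()

print(sp.simplify(H.diff(t) - X*H))                  # zero matrix: dH/dt = XH
print(sp.simplify(H.diff(t, 3).subs(t, 0) - X**3))   # zero matrix: H'''(0) = X^3
print(sp.simplify(((s + t)*X).exp() - (s*X).exp()*(t*X).exp()))  # zero: H(s+t) = H(s)H(t)
[/code]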
 
Last edited:
  • #21
lavinia said:
From the differential equation

dH/dt = XH

one gets d[itex]^{2}[/itex]H/dt[itex]^{2}[/itex] = XdH/dt = X[itex]^{2}[/itex]H

Inductively, d[itex]^{n}[/itex]H/dt[itex]^{n}[/itex] = X[itex]^{n}[/itex]H

H(0) = Identity matrix

so the Taylor series at the identity follows.


The differential equation derives from the homomorphism equation.

H(x+y) = H(x)H(y)

for instance,

dH(x+y)/dx = (dH(x)/dx)H(y). At x = 0 the right-hand side is just XH(y), where X is the derivative of H at 0.

The left hand side can be rewritten as

[dH(x+y)/d(x+y)] d(x+y)/dx, using the Chain Rule. This is just the derivative of H.

In sum dH/dt = XH


Thank you very much for that explanation. The part that justifies the differential equation works for a matrix group because composition of the functions in the group becomes multiplication of matrices.
 
  • #22
Stephen Tashi said:
Most seem to be written by people who take the idea of infinitesimals seriously, and I don't think they are talking about the rigorous approach to infinitesimals (a la Abraham Robinson and "nonstandard analysis").
Actually, you can define Lie algebras rigorously using nonstandard analysis; see Abraham Robinson's original book.
 
  • #23
A long time ago I was wondering how to prove that the total charge is invariant under Lorentz boosts, under the assumption that the total charge is obtained by a spatial integral over a charge density which is the first component of a four-current, also assuming that the four-current satisfies a conservation equation.

https://www.physicsforums.com/showthread.php?t=180779

The thing ended in a situation where I was trying to prove that one mapping [itex]Q:[0,1]\to\mathbb{R}[/itex] would be constant, by showing its derivative to be zero, but was unable to accomplish it (trying this: [itex]Q'(x)=0\;\forall x[/itex]). Then samalkhaiat (now a science advisor) tried to convince me that I could prove [itex]Q[/itex] to be constant by proving that its derivative is zero at one point (as if [itex]Q'(0)=0[/itex] would be sufficient).

The original problem was left as a mystery to me.
 
  • #24
jostpuur, I'm not seeing the relevance of that to the question of the thread at hand, to be honest. As an aside, Lorentz invariance of charge is a consequence of Stokes' theorem and the conserved Noetherian current coming from ##\partial_{a}j^{a} = 0##; that's all there is to it. See e.g. Franklin section 14.10.2.
 
Last edited:
  • #25
Are you sure that the result is really proven in that book? Or could it be that it only proves that the charge is conserved under infinitesimal boosts, and then assumes that this implies the full result?
 
  • #26
"Charge is conserved" is different from "charge is invariant", but yes, it proves that charge is a Lorentz invariant in the usual special relativistic sense under Lorentz boosts. The proof is very simple (it is identical to the one samalkhaiat gave in the thread you linked).
 
  • #27
OK, my choice of words "conserved in boosts" was a mistake, but anyway people understand what I meant.

Your answer seems contradictory to me because samalkhaiat gave the proof only for infinitesimal boosts!

Besides, I myself proved the result for infinitesimal boosts too. I was patiently trying to explain that that's how far I got on my own and that I was trying to find a way to complete the proof, and then I just got bombarded with repetition of the infinitesimal-boost proof.
 
  • #28
I was speaking of his second method in post #24. This result holds for global Lorentz boosts in Minkowski spacetime. As I said, it is simply a consequence of Stokes' theorem and the fact that ##\partial_{a}j^{a} = 0##, for ##j^{a}## that are compactly supported in a worldtube.
 
  • #29
Can we please keep this thread on topic? Jostpuur, your question is very interesting, but can you ask it in a different thread?
 
  • #30
I am on topic, IMO. The topic is rather broad, because physics is filled with this infinitesimal stuff.

I'm going to show you a magic trick. I'll prove that a function

[tex]
f:[0,1]\to [0,1],\quad\quad f(x)=x
[/tex]

is constant! :wink:

The proof proceeds by showing that its derivative is zero everywhere. (In other words, given an arbitrary point [itex]x\in [0,1][/itex], the function is constant in its infinitesimal environment.)

Let [itex]x\in [0,1][/itex] be fixed. Let's define

[tex]
\phi_x(u) = x + (u- x)^3.
[/tex]

Now we prove that the derivative of [itex]f\circ\phi_x[/itex] is zero at [itex]x[/itex].

[tex]
\big(D_u (f\circ\phi_x )\big)(u) = f'(\phi_x(u)) \;3 (u - x)^2
[/tex]

It is zero when you substitute [itex]u=x[/itex]!
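
In Python/sympy the trick is two lines (my rendering of it):

[code]
import sympy as sp

u, x = sp.symbols('u x')
phi = x + (u - x)**3   # the point-dependent reparametrization phi_x(u)
f_of_phi = phi         # since f(z) = z, f(phi_x(u)) is just phi_x(u)

# Derivative of f(phi_x(u)) at u = x: zero, although f' = 1 everywhere
print(sp.diff(f_of_phi, u).subs(u, x))  # 0
[/code]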

So the lesson is this: If you want to prove that [itex]f[/itex] is constant, it is fully OK to carry out some transformation and prove that [itex]f\circ \phi[/itex] is constant, if you find it convenient for some reason. However, it is not OK to first fix some point [itex]x[/itex], then carry out a transformation that depends on that point, and then conclude that the derivative looks zero there.

That's the problem with the infinitesimal Lorentz boost thing. If you have parametrized some set of coordinates so that they depend on a parameter [itex]\alpha[/itex], and then have solved a formula for a total charge [itex]Q(\alpha)[/itex], you must actually prove [itex]D_{\alpha}Q(\alpha)=0[/itex] for all [itex]\alpha[/itex] with this particular representation of the function [itex]Q[/itex]. It is not OK to move into a local representation for each fixed [itex]\alpha[/itex], and then investigate infinitesimal boosts there.
 
  • #31
Sorry, I don't think you're on topic. So please, make a new thread.
 
  • #32
Stephen Tashi said:
Every book that takes a concrete approach to Lie Groups proves a result that says

[tex]f(x_1,y_1) = f(x,y) + U f\ \alpha + \frac{1}{2!}U^2 f \ \alpha^2 + \frac{1}{3!}U^3 f \ \alpha^3 + \cdots[/tex]

by using Taylor series.

However, the function they are expanding is (to me) unclear.

If I try to expand [itex] f(x_1,y_1) = f(T_x(x,y,\alpha),T_y(x,y,\alpha)) [/itex] in Taylor series, only the first two terms of that result work.

I would like to denote the series as

[tex]
(e^{\alpha U}f)(x,y) = f(x,y) + \alpha (Uf)(x,y) + \frac{1}{2!}\alpha^2 (U^2f)(x,y) +\cdots
[/tex]

Are we speaking about the same thing?

If I expand [itex] f(x_1,y_1) = f( x + \alpha \ u_x(x,y), y + \alpha \ u_y(x,y) ) [/itex] then I get the desired result.

This looks like a mistake to me. The operator [itex]U^2[/itex] behaves like this:

[tex]
U^2f = u_x\frac{\partial}{\partial x}(Uf) + u_y\frac{\partial}{\partial y}(Uf)
[/tex]
[tex]
= u_x\Big(\frac{\partial u_x}{\partial x}\frac{\partial f}{\partial x}
+ u_x\frac{\partial^2 f}{\partial x^2}
+ \frac{\partial u_y}{\partial x}\frac{\partial f}{\partial y}
+ u_y\frac{\partial^2 f}{\partial x\partial y}\Big)
[/tex]
[tex]
+ u_y\Big(\frac{\partial u_x}{\partial y}\frac{\partial f}{\partial x}
+ u_x\frac{\partial^2 f}{\partial x \partial y}
+ \frac{\partial u_y}{\partial y} \frac{\partial f}{\partial y}
+ u_y\frac{\partial^2 f}{\partial y^2}\Big)
[/tex]

If you compute derivatives of [itex]f(x+\alpha u_x,y+\alpha u_y)[/itex] with respect to [itex]\alpha[/itex], the partial derivatives of [itex]u_x[/itex] and [itex]u_y[/itex] will never appear.
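
A Python/sympy sketch of this comparison (my own; u_x and u_y are left as unspecified functions so the leftover terms are visible):

[code]
import sympy as sp

x, y, a = sp.symbols('x y alpha')
f = sp.Function('f')
ux = sp.Function('u_x')(x, y)
uy = sp.Function('u_y')(x, y)

U = lambda g: ux*sp.diff(g, x) + uy*sp.diff(g, y)
U2f = sp.expand(U(U(f(x, y))))   # contains partial derivatives of u_x and u_y

# Second alpha-derivative of f(x + alpha u_x, y + alpha u_y) at alpha = 0
d2 = sp.diff(f(x + a*ux, y + a*uy), a, 2).subs(a, 0).doit()

# Nonzero in general: the difference carries the du/dx, du/dy terms
print(sp.simplify(U2f - sp.expand(d2)))
[/code]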
 
  • #33
Stephen Tashi said:
and the property that allows the homomorphism you mentioned:

[tex] T_x(x,y,\alpha + \beta)= T_x(T_x(x,y,\alpha),\beta), T_y(T_y(x,y,\alpha),\beta) [/tex]
[tex] T_y(x,y,\alpha + \beta)= T_x(T_x(x,y,\alpha),\beta), T_y(T_y(x,y,\alpha),\beta) [/tex]

This could be a minor thing, but these formulas probably should have been

[tex]
T_x(x,y,\alpha + \beta) = T_x(T_x(x,y,\alpha),T_y(x,y,\alpha),\beta)
[/tex]
[tex]
T_y(x,y,\alpha + \beta) = T_y(T_x(x,y,\alpha),T_y(x,y,\alpha),\beta)
[/tex]

?
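
For what it's worth, a quick Python/sympy check (my own, using rotations as the example group) confirms the corrected formulas:

[code]
import sympy as sp

x, y, a, b = sp.symbols('x y alpha beta')

Tx = lambda X, Y, A: sp.cos(A)*X - sp.sin(A)*Y
Ty = lambda X, Y, A: sp.sin(A)*X + sp.cos(A)*Y

# Corrected group law: feed BOTH transformed coordinates into the second map
print(sp.simplify(Tx(x, y, a + b) - Tx(Tx(x, y, a), Ty(x, y, a), b)))  # 0
print(sp.simplify(Ty(x, y, a + b) - Ty(Tx(x, y, a), Ty(x, y, a), b)))  # 0
[/code]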
 
  • #34
I don't remember the precise conditions for uniqueness of solutions of such DEs, but anyway I think the sought result comes when some uniqueness result is used.

We want to prove

[tex]
(e^{\alpha U}f)(x,y) = f(T_x(x,y,\alpha),T_y(x,y,\alpha))
[/tex]

The left side satisfies the DE

[tex]
D_{\alpha} (e^{\alpha U}f)(x,y) = (U e^{\alpha U}f)(x,y)
[/tex]

so the proof should be reasonably close, if we succeed in proving that the right side satisfies the same DE; that means

[tex]
D_{\alpha} f(T_x(x,y,\alpha),T_y(x,y,\alpha)) = (Uf)(T_x(x,y,\alpha),T_y(x,y,\alpha))
[/tex]

(update: I think I made a mistake here. A comment in #36.)

Direct computation gives

[tex]
D_{\alpha} f(T_x(x,y,\alpha),T_y(x,y,\alpha))
[/tex]
[tex]
= (D_{\alpha}T_x(x,y,\alpha))\frac{\partial f}{\partial x}(T_x(x,y,\alpha),T_y(x,y,\alpha))
[/tex]
[tex]
+ (D_{\alpha}T_y(x,y,\alpha))\frac{\partial f}{\partial y}(T_x(x,y,\alpha),T_y(x,y,\alpha)) = \cdots
[/tex]

We assume that the transformation satisfies

jostpuur said:
[tex]
T_x(x,y,\alpha + \beta) = T_x(T_x(x,y,\alpha),T_y(x,y,\alpha),\beta)
[/tex]
[tex]
T_y(x,y,\alpha + \beta) = T_y(T_x(x,y,\alpha),T_y(x,y,\alpha),\beta)
[/tex]

Hence it also satisfies

[tex]
D_{\alpha}T_x(x,y,\alpha) = \Big[D_{\beta}\,T_x(T_x(x,y,\alpha),T_y(x,y,\alpha),\beta)\Big]_{\beta=0}
[/tex]
[tex]
D_{\alpha}T_y(x,y,\alpha) = \Big[D_{\beta}\,T_y(T_x(x,y,\alpha),T_y(x,y,\alpha),\beta)\Big]_{\beta=0}
[/tex]

(differentiating the group law with respect to [itex]\beta[/itex] at [itex]\beta=0[/itex]).

So we get

[tex]
\cdots
= \Big[D_{\beta}\,T_x(T_x(x,y,\alpha),T_y(x,y,\alpha),\beta)\Big]_{\beta=0}
\frac{\partial f}{\partial x}(T_x(x,y,\alpha),T_y(x,y,\alpha))
[/tex]
[tex]
+ \Big[D_{\beta}\,T_y(T_x(x,y,\alpha),T_y(x,y,\alpha),\beta)\Big]_{\beta=0}
\frac{\partial f}{\partial y}(T_x(x,y,\alpha),T_y(x,y,\alpha))
[/tex]
[tex]
= (Uf)(T_x(x,y,\alpha),T_y(x,y,\alpha))
[/tex]

since the bracketed factors are precisely [itex]u_x[/itex] and [itex]u_y[/itex] evaluated at the transformed point.
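
As a sanity check of the conclusion, here is a Python/sympy sketch (my own, with rotations and an arbitrary test function) comparing the two sides of [itex](e^{\alpha U}f)(x,y) = f(T_x(x,y,\alpha),T_y(x,y,\alpha))[/itex] order by order:

[code]
import sympy as sp

x, y, a = sp.symbols('x y alpha')
f = sp.exp(x)*sp.sin(y)        # an arbitrary smooth test function

Tx = sp.cos(a)*x - sp.sin(a)*y
Ty = sp.sin(a)*x + sp.cos(a)*y
ux, uy = -y, x
U = lambda g: ux*sp.diff(g, x) + uy*sp.diff(g, y)

# Left side: Taylor series of f(T_x, T_y) in alpha
lhs = sp.series(f.subs([(x, Tx), (y, Ty)], simultaneous=True), a, 0, 4).removeO()

# Right side: partial sum of (e^{alpha U} f)(x, y)
g, rhs = f, f
for n in range(1, 4):
    g = U(g)
    rhs += a**n / sp.factorial(n) * g

print(sp.simplify(sp.expand(lhs - rhs)))  # 0 through order 3
[/code]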
 
Last edited:
  • #35
After struggling with physicists' infinitesimal stuff myself during my studies, I eventually arrived at the enlightened conclusion that the infinitesimal arguments can often be transformed into differential equations. I think this settles a lot of problems, but as I pointed out in my off-topic posts, there are also some infinitesimal arguments which I have been unable to transform into rigorous form.

When I have attempted to discuss those problems, the physicists are usually unable to understand my complaints, since they have already been indoctrinated into believing anything that has the magic word "infinitesimal" in it. Then the physicists usually attempt to convince me to abandon the differential equations or other rigorous concepts, and to accept the infinitesimal arguments for the sake of making life easier.

So, to answer the question in the topic, I would say that the "situation with infinitesimal operators" is serious.
 
