Can a Scalar Equation Be Transformed into Lagrangian Form?

wrobel
There is a problem from a Russian textbook in classical mechanics.

Consider a scalar equation $$\ddot x=F(t,x,\dot x),\quad x\in\mathbb{R}.$$ Show that this equation can be multiplied by a function ##\mu(t,x,\dot x)\ne 0## such that the resulting equation
$$\mu\ddot x=\mu F(t,x,\dot x)$$ has the Lagrangian form
$$\frac{d}{dt}\frac{\partial L}{\partial \dot x}-\frac{\partial L}{\partial x}=0.$$
I can only say that by the Cauchy-Kowalewski theorem it can be done locally provided ##F## is an analytic function.

But presumably some relatively elementary argument is intended. To be sure, the assertion is local.

What do you think?
 
I've only seen the Lagrangian written like that when trying to maximize an integral. How is ##L## defined here? My brief googling didn't find anything useful.
 
Office_Shredder said:
How is ##L## defined here?
From the Cauchy-Kowalewski theorem. I have no other idea.
 
Sorry, I think I misunderstood the question. Is it supposed to be: show there exists ##\mu## such that ##\mu\ddot x - \mu F## can be written as ##\frac{d}{dt} \frac{\partial L}{\partial \dot x} -\frac{\partial L}{\partial x}## for some ##L## which is presumably formed by ##F## and ##\mu##?
 
Office_Shredder said:
Sorry, I think I misunderstood the question. Is it supposed to be: show there exists ##\mu## such that ##\mu\ddot x - \mu F## can be written as ##\frac{d}{dt} \frac{\partial L}{\partial \dot x} -\frac{\partial L}{\partial x}## for some ##L## which is presumably formed by ##F## and ##\mu##?
Yes, and ##\mu(t,x,\dot x)\ne 0##.
 
The obvious course is to expand the total time derivative and substitute ##\ddot x = F##, which yields $$F\frac{\partial^2 L}{\partial \dot x^2} + \dot x\,\frac{\partial^2 L}{\partial x \,\partial \dot x} + \frac{\partial^2 L}{\partial t\,\partial \dot x} - \frac{\partial L}{\partial x} = 0$$ as an equation for ##L##; ##\mu## is then equal to ##\dfrac{\partial^2 L}{\partial \dot x^2}##. I haven't attempted to solve this.
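In more detail: since ##L = L(t,x,\dot x)##, the chain rule gives $$\frac{d}{dt}\frac{\partial L}{\partial \dot x} = \frac{\partial^2 L}{\partial t\,\partial \dot x} + \dot x\,\frac{\partial^2 L}{\partial x\,\partial \dot x} + \ddot x\,\frac{\partial^2 L}{\partial \dot x^2},$$ so the Euler-Lagrange equation reads $$\ddot x\,\frac{\partial^2 L}{\partial \dot x^2} + \dot x\,\frac{\partial^2 L}{\partial x\,\partial \dot x} + \frac{\partial^2 L}{\partial t\,\partial \dot x} - \frac{\partial L}{\partial x} = 0,$$ and comparing the coefficient of ##\ddot x## with ##\mu\ddot x = \mu F## is what identifies ##\mu = \partial^2 L/\partial \dot x^2##.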

Another approach is to write ##\mu \ddot x = \frac{d}{dt}(\mu \dot x) - \frac{d\mu}{dt}\dot x##, so that $$\frac{d}{dt}(\mu\dot x) - \frac{d\mu}{dt}\dot x - \mu F = 0,$$ and try to solve $$\begin{split}\frac{\partial L}{\partial \dot x} &= \mu\dot x, \\ \frac{\partial L}{\partial x} &= \frac{d\mu}{dt}\dot x + \mu F.\end{split}$$
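As a quick symbolic check of this system on a hand-picked example (my own choice, not from the thread): for ##F = \dot x^2/x##, the pair ##\mu = 1/x^2## and ##L = \dot x^2/(2x^2)## satisfies both equations. A minimal sympy sketch, assuming sympy is available:

```python
# Check that mu = 1/x**2, L = xdot**2/(2*x**2) solve the system above
# for the hand-picked right-hand side F = xdot**2/x (illustrative only).
import sympy as sp

t = sp.Symbol('t')
X, V = sp.symbols('X V')          # placeholders for x and xdot
x = sp.Function('x')(t)
xd = x.diff(t)

F  = V**2 / X                     # chosen right-hand side
mu = 1 / X**2                     # candidate multiplier
L  = V**2 / (2 * X**2)            # candidate Lagrangian

on_path = {X: x, V: xd}           # evaluate along a trajectory x(t)

# Equation 1: dL/dxdot == mu*xdot
eq1 = sp.simplify(L.diff(V).subs(on_path) - (mu * V).subs(on_path))

# Equation 2: dL/dx == (dmu/dt)*xdot + mu*F, with d/dt taken along x(t)
dmu_dt = mu.subs(on_path).diff(t)
eq2 = sp.simplify(L.diff(X).subs(on_path) - (dmu_dt * xd + (mu * F).subs(on_path)))

# Euler-Lagrange expression should equal mu*(xddot - F)
EL = L.diff(V).subs(on_path).diff(t) - L.diff(X).subs(on_path)
eq3 = sp.simplify(EL - mu.subs(on_path) * (x.diff(t, 2) - F.subs(on_path)))

print(eq1, eq2, eq3)              # expected: 0 0 0
```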
 
pasmith said:
The obvious course is to expand the total time derivative and substitute ##\ddot x = F##, which yields $$F\frac{\partial^2 L}{\partial \dot x^2} + \dot x\,\frac{\partial^2 L}{\partial x \,\partial \dot x} + \frac{\partial^2 L}{\partial t\,\partial \dot x} - \frac{\partial L}{\partial x} = 0$$ as an equation for ##L##;
Yes, and that is why I referred to Cauchy-Kowalewski.
 
wrobel said:
There is a problem from a Russian textbook in classical mechanics.
Does the textbook mention the "Helmholtz conditions" in the inverse problem of Lagrangian mechanics (or similar)?

In brief, the Helmholtz conditions on a given equation of motion (EoM) are necessary and sufficient for a corresponding Lagrangian to exist.

For 1D problems, 2 of the 3 Helmholtz conditions are trivially satisfied.

Write the EoM as ##\,G(x,\dot x, \ddot x, t) = 0\,##. Then the 3rd Helmholtz condition boils down to$$\frac{\partial G}{\partial \dot x} ~=~ \frac{d}{dt}\, \frac{\partial G}{\partial \ddot x}~.$$

If this is not immediately satisfied, one can use the technique of "Jacobi's Last Multiplier" and write ##H := \mu(\dot x, x,t)\, G##, which obviously has the same solutions as the EoM (since ##\mu \ne 0##). The Helmholtz condition on ##H## says $$\frac{\partial \mu}{\partial \dot x} \, G ~+~ \mu \, \frac{\partial G}{\partial \dot x} ~=~ \frac{d\mu}{dt}$$ (since we're assuming ##G## is 1st order in ##\ddot x##).

Then the idea is to choose ##\mu## to be independent of ##\dot x## and solve what's left to get something like $$\mu ~=~ \exp\left(\int\! \frac{\partial G}{\partial \dot x} \, dt\right) ~.$$
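As a concrete illustration of this recipe (a standard example of my own choosing, not from the textbook, with constants ##\gamma,\omega##): take the damped oscillator $$G ~=~ \ddot x + 2\gamma\dot x + \omega^2 x ~,$$ so ##\partial G/\partial \dot x = 2\gamma## and the recipe gives $$\mu ~=~ \exp\left(\int\! 2\gamma\, dt\right) ~=~ e^{2\gamma t}~.$$ Indeed, $$L ~=~ \frac{e^{2\gamma t}}{2}\left(\dot x^2 - \omega^2 x^2\right)$$ yields $$\frac{d}{dt}\frac{\partial L}{\partial \dot x} - \frac{\partial L}{\partial x} ~=~ e^{2\gamma t}\left(\ddot x + 2\gamma\dot x + \omega^2 x\right) ~=~ \mu\, G ~,$$ and ##\partial^2 L/\partial \dot x^2 = e^{2\gamma t} = \mu##, consistent with the earlier observation that ##\mu = \partial^2 L/\partial \dot x^2##.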
wrobel said:
[...] But some relatively elementary way is supposed.
Elementary, yes -- but relying on a very nontrivial theorem about the Helmholtz conditions. :oldwink:

References:

1) Anton Almen's paper: https://jfuchs.hotell.kau.se/kurs/amek/prst/19_heco.pdf
which contains the nontrivial proof, but also works through the 1D harmonic oscillator as an example. It is, in part, a shorter version of the next reference:

2) Nigam & Banerjee, "A Brief Review of Helmholtz Conditions", arXiv:1602.01563

Google gives many references about Helmholtz conditions and the technique of "Jacobi's Last Multiplier".

I hope that helps. :oldsmile:
 
Thanks a lot. That is interesting indeed. I had never heard of the Helmholtz conditions.
 