# Derivation of the Euler-Lagrange Equation


## Main Question or Discussion Point

I have a question about a very specific step in the derivation of the Euler-Lagrange equation. Sorry if it seems simple or trivial. I present the question in the course of the derivation.

Given:

\begin{split}

F &=\int_{x_a}^{x_b} g(f,f_x,x) dx

\end{split}

Thus, where $\delta f$ denotes an infinitesimal variation in the function $f$:

\begin{split}

\delta F &= F[f+\delta f]-F[f]

\end{split}

Where:

\begin{split}

f &\rightarrow f+ \delta f \Rightarrow f_x \rightarrow f_x + (\delta f)'

\end{split}

Thus:

\begin{split}

\delta F &= \int_{x_a}^{x_b} g(f+\delta f,f_x+ (\delta f)',x) dx - \int_{x_a}^{x_b} g(f,f_x,x) dx

\end{split}

Where:

\begin{split}

g(f+\delta f,f_x+ (\delta f)',x) \rightarrow g(f,f_x,x) + \frac{\partial g}{\partial f} \partial f + \frac{\partial g}{\partial f_x} (\partial f)'

\end{split}

Such that:

\begin{split}

\partial F &= \int_{x_a}^{x_b} \frac{\partial g}{\partial f} \partial f dx + \int_{x_a}^{x_b} \frac{\partial g}{\partial f_x} (\partial f)' dx

\end{split}

Integration by parts:

\begin{split}

\int u dv &= u v - \int v du

\end{split}

Setting $u = \frac{\partial g}{\partial f_x}$ and $dv = (\delta f)' dx$, such that:

\begin{split}

\int_{x_a}^{x_b} \frac{\partial g}{\partial f_x} (\partial f)' dx &= \frac{\partial g}{\partial f_x} (\delta f) \bigg\rvert_{x_a}^{x_b} - \int_{x_a}^{x_b} (\partial f) \frac{d}{dx}\frac{\partial g}{\partial f_x} dx

\end{split}

Thus:

\begin{split}

\partial F &= \int_{x_a}^{x_b} \frac{\partial g}{\partial f} \partial f dx + \frac{\partial g}{\partial f_x} (\delta f) \bigg\rvert_{x_a}^{x_b} - \int_{x_a}^{x_b} (\partial f) \frac{d}{dx}\frac{\partial g}{\partial f_x} dx

\\

&=\int_{x_a}^{x_b} \left( \frac{\partial g}{\partial f} \partial f - (\partial f) \frac{d}{dx}\frac{\partial g}{\partial f_x}\right) dx + \frac{\partial g}{\partial f_x} (\delta f) \bigg\rvert_{x_a}^{x_b}

\\

&=\int_{x_a}^{x_b} \left( \frac{\partial g}{\partial f} - \frac{d}{dx}\frac{\partial g}{\partial f_x}\right) \partial f dx + \frac{\partial g}{\partial f_x} (\delta f) \bigg\rvert_{x_a}^{x_b}

\\

&=\int_{x_a}^{x_b} \left( \frac{\partial g}{\partial f} - \frac{d}{dx}\frac{\partial g}{\partial f_x}\right) \partial f dx + \frac{\partial g}{\partial f_x} (\delta f) \bigg\rvert_{x_b} -\frac{\partial g}{\partial f_x} (\delta f) \bigg\rvert_{x_a}

\end{split}

At an extremum, $\partial F=0$, such that:

\begin{split}

\int_{x_a}^{x_b} \left( \frac{\partial g}{\partial f} - \frac{d}{dx}\frac{\partial g}{\partial f_x}\right) \partial f dx + \frac{\partial g}{\partial f_x} (\delta f) \bigg\rvert_{x_b} -\frac{\partial g}{\partial f_x} (\delta f) \bigg\rvert_{x_a} &= 0

\end{split}

**This is the step I don't understand. Why is this true?**

\begin{split}

\frac{\partial g}{\partial f_x} (\delta f) \bigg\rvert_{x_b} -\frac{\partial g}{\partial f_x} (\delta f) \bigg\rvert_{x_a} &= 0

\end{split}

Such that:

\begin{split}

\int_{x_a}^{x_b} \left( \frac{\partial g}{\partial f} - \frac{d}{dx}\frac{\partial g}{\partial f_x}\right) \partial f dx &= 0

\end{split}

Thus, since $\delta f$ is arbitrary in the interval (by the fundamental lemma of the calculus of variations):

\begin{split}

\frac{\partial g}{\partial f} - \frac{d}{dx}\frac{\partial g}{\partial f_x} &= 0

\end{split}
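As a quick check of this final equation, a standard worked example (illustrative, with an assumed integrand, not part of the original question): take $g(f,f_x,x) = \sqrt{1+f_x^2}$, the arc-length integrand. Then:

\begin{split}

\frac{\partial g}{\partial f} = 0, \qquad \frac{\partial g}{\partial f_x} = \frac{f_x}{\sqrt{1+f_x^2}}

\end{split}

so the Euler-Lagrange equation reduces to:

\begin{split}

\frac{d}{dx}\frac{f_x}{\sqrt{1+f_x^2}} = 0 \;\Rightarrow\; f_x = \text{const}

\end{split}

i.e. the extremal is a straight line, the shortest path between the two endpoints.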

The function is generally assumed to be fixed at the boundaries such that the variation is always zero there.

Orodruin (Staff Emeritus, Homework Helper, Gold Member)
First, please make sure to use the correct symbols when you write LaTeX. It becomes very confusing when you alternate between $\delta f$ and $\partial f$.

Regarding your question: You want $\delta F$ to be zero for all variations. In particular, it should be zero for variations that fix the value of the function at the boundary. For such variations $\delta f(x_a) = \delta f(x_b) = 0$, and the boundary terms vanish automatically. Hence, you arrive at the Euler-Lagrange equation, since $\delta f$ is arbitrary in the interior of the interval.
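This can also be checked numerically. A minimal sketch (my own construction, not from the thread), assuming the concrete integrand $g = f_x^2/2$ and a variation that vanishes at both endpoints:

```python
import numpy as np

# Assumed concrete case: g(f, f_x, x) = f_x**2 / 2, so
# F[f] = ∫ f_x^2/2 dx  and  ∂g/∂f - d/dx ∂g/∂f_x = -f''.
x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]

def integrate(y):
    # trapezoidal rule on the uniform grid
    return (y.sum() - 0.5 * (y[0] + y[-1])) * dx

def F(f):
    fx = np.gradient(f, x)
    return integrate(0.5 * fx**2)

f = np.sin(np.pi * x)        # an arbitrary trial function
delta = x * (1.0 - x)        # variation with delta(x_a) = delta(x_b) = 0
eps = 1e-6

# First variation computed directly from the definition of delta F ...
dF_direct = (F(f + eps * delta) - F(f)) / eps

# ... and from the Euler-Lagrange integrand alone; the boundary term
# drops because the variation vanishes at both endpoints.
fxx = np.gradient(np.gradient(f, x), x)
dF_el = integrate(-fxx * delta)

assert abs(dF_direct - dF_el) < 1e-3
```

For this particular $f$ and $\delta f$ both numbers come out near $4/\pi \approx 1.27$; the agreement shows that once $\delta f$ vanishes at the boundary, the first variation is carried entirely by the Euler-Lagrange integrand.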

Now, not all variations necessarily have $\delta f(x_a) = \delta f(x_b) = 0$ unless you impose boundary conditions that fix the values of $f$ at the boundaries. If you do that, then you are done: you have a differential equation with appropriate boundary conditions. However, if the values at the boundaries are free, then you must also check what happens for variations that are non-zero at the boundaries. Since you already know that an on-shell solution must satisfy the Euler-Lagrange equation, the boundary terms are all that is left of the variation $\delta F$, and so they must be zero. This will give you boundary conditions on the function to complement your differential equation. There are some different options here:
• You can vary the values at the boundary separately: If this is the case, then regardless of how you vary each boundary value, the expression must evaluate to zero. This means that both $\partial g/\partial f_x$ terms must vanish independently. This gives you the two boundary conditions that are necessary.
• You can vary the value at only one boundary while the other boundary is fixed: The fixed boundary is a boundary condition of its own, and that term is identically zero since the variation vanishes there. At the other boundary you get a boundary condition by requiring that the corresponding term be zero regardless of the variation. Either way, you end up with two boundary conditions (the one from the fixed boundary and the natural boundary condition at the other boundary).
• You can have periodic boundary conditions: This means that the variation is free, but equal at both ends. You get one boundary condition from the requirement that the values at the two ends be equal, and another from requiring that the two boundary terms from the integral cancel.
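For a concrete illustration of the first option (my example, not from the post): if the value of $f$ at $x_b$ is free, the corresponding boundary term must vanish for arbitrary $\delta f(x_b)$, giving the natural boundary condition:

\begin{split}

\frac{\partial g}{\partial f_x} \bigg\rvert_{x_b} &= 0

\end{split}

For instance, with $g = \tfrac{1}{2} f_x^2$ this reads $f_x(x_b) = 0$: the extremal meets a free boundary with zero slope.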

Orodruin
> The function is generally assumed to be fixed at the boundaries such that the variation is always zero there.
This is not really correct, although it is a common special case and usually the case that is taught first. The Euler-Lagrange equations are anyway satisfied whenever $\delta F$ is zero for all possible variations $\delta f$ since you can always consider the particular case when $\delta f = 0$ on the boundaries. This does not mean that $f$ needs to be generally assumed to be fixed at the boundaries.

I didn't write that it needs to be assumed, and you yourself say that it is "often taught" this way, which is not so far from it being "generally assumed".

Orodruin
To me, "generally assumed" gives the impression that everybody does it and that it is something that needs to be done to arrive at the end result, which is not the case. Either way, I prefer not to have to make unnecessary assumptions to argue the point.

I apologize but I don't see how you have answered the question.

"Fixing the function at the boundaries" seems exactly the same as assuming $\delta f(x_a) = \delta f(x_b) = 0$. Stating that the solution must be "on-shell" implies that the result must be the Euler-Lagrange equation, which requires $\delta f(x_a) = \delta f(x_b) = 0$. From a purely mathematical perspective, I still don't see why it must be true that $\delta f(x_a) = \delta f(x_b) = 0$.

I am trying to work forward from the mathematics, not backward from the physics. Again, I appreciate your help.

Orodruin
Please use quotes to make it clear exactly what point you are referring to.

I never assumed that the general variation was fixed at the boundaries (although in some problems that is the case, such as finding the minimum distance between two points). What I said was that $\delta F$ must be zero for any variation $\delta f$, so in particular it must be true for those variations that satisfy $\delta f(x_a) = \delta f(x_b) = 0$. Later on, I went on to discuss what happens when other variations are also allowed.

> Stating that the solution must be "on-shell" implies that the result must be the Euler-Lagrange equation, which requires $\delta f(x_a) = \delta f(x_b) = 0$.
No. That it is on-shell means that $\delta F = 0$.

> I am trying to work forward from the mathematics, not backward from the physics.
I did not use a single piece of physics here.

Orodruin
To make an analogue to regular calculus: When you find the stationary points of a function $f(x,y)$ you need to look at both partial derivatives $f_x$ and $f_y$. A constraint would be to look only at the function values along $y = 1$, which would give you only the condition $f_x(x,1) = 0$. You then only need to consider the $x$ derivative, i.e., displacements in the $x$ direction, because $y$ displacements are not allowed. However, if you do not restrict yourself to $y = 1$ and allow all values of $y$, then you must still have $f_x = 0$, since you are still allowing displacements that are purely in the $x$ direction, but additionally you get $f_y = 0$ because you are also allowing displacements in the $y$ direction.
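This analogy can be made concrete with a toy function (my choice, purely illustrative):

```python
# Finite-dimensional analogue: a toy function f(x, y) = (x-1)^2 + (y-2)^2.
def f(x, y):
    return (x - 1)**2 + (y - 2)**2

h = 1e-6
def fx(x, y): return (f(x + h, y) - f(x - h, y)) / (2 * h)  # partial wrt x
def fy(x, y): return (f(x, y + h) - f(x, y - h)) / (2 * h)  # partial wrt y

# Constrained to the line y = 1: only displacements in x are allowed,
# so the stationarity condition is just f_x(x, 1) = 0, satisfied at x = 1.
assert abs(fx(1.0, 1.0)) < 1e-6

# But (1, 1) is not stationary for the unconstrained problem:
assert abs(fy(1.0, 1.0) + 2.0) < 1e-6   # f_y(1, 1) = -2, not 0

# Allowing y to vary as well adds the condition f_y = 0; the full
# stationary point is (1, 2).
assert abs(fx(1.0, 2.0)) < 1e-6 and abs(fy(1.0, 2.0)) < 1e-6
```

The constrained case plays the role of fixed boundary values (only some displacements allowed); dropping the constraint adds new stationarity conditions, just as free boundaries add the natural boundary conditions.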

> $\delta F$ must be zero for any variation $\delta f$ so in particular it must be true for those variations that satisfy $\delta f(x_a) = \delta f(x_b) = 0$.
I get it. That makes sense. I appreciate the insight.

vanhees71 (Gold Member, 2019 Award)
> To me, "generally assumed" gives the impression that everybody does it and that it is something that needs to be done to arrive at the end result, which is not the case. Either way, I prefer not to have to make unnecessary assumptions to argue the point.
Usually, in Hamilton's principle in the Lagrangian formulation, you consider all paths in configuration space connecting two fixed points. In the (equivalent) Hamiltonian formulation of Hamilton's principle, you consider all phase-space paths with the endpoints fixed in configuration space and free boundary conditions in momentum space.

The reason why Hamilton's principle works like this in both formulations is really understood only from quantum mechanics, where the classical trajectory is derived as the saddle-point approximation of Feynman's path integral.

Orodruin
> Usually, in Hamilton's principle in the Lagrangian formulation, you consider all paths in configuration space connecting two fixed points. In the (equivalent) Hamiltonian formulation of Hamilton's principle, you consider all phase-space paths with the endpoints fixed in configuration space and free boundary conditions in momentum space.
>
> The reason why Hamilton's principle works like this in both formulations is really understood only from quantum mechanics, where the classical trajectory is derived as the saddle-point approximation of Feynman's path integral.
As I understand the OP's question, he wants to have the argument without any physics reference. As such, I believe the problem here is purely one of variational calculus, where one can either assume fixed boundary conditions or not. My post #2 described the different possibilities for the case of a single independent variable (which is what the OP asked about).

I would also say that the more common thing to do in the Lagrangian formulation is to start with initial conditions (i.e., fixing the configuration and the time derivative of the configuration at the initial time). Of course, this can be shown to be equivalent to fixing the end-point configurations (although it is not always a one-to-one correspondence - the initial conditions necessarily imply an end configuration, but an end configuration does not necessarily mean that there is a unique stationary path).

fresh_42 (Mentor)
> As I understand the OP's question, he wants to have the argument without any physics reference.
It might have helped if the thread had been posted in Differential Geometry rather than Classical Physics!
"Hello, physicists! I have a question, but do not talk about physics to me!"
