
What is a Lagrangian?

  1. Jul 17, 2015 #1
    So there are many terms that I know of and am familiar with, such as:

    The Hamiltonian Operator
    The Hermitian Operator
    The Lagrangian
    Eigen Values/States

    However, I am struggling with how these things work, when to apply them, and what they actually mean. Many of the physics lectures I watch tend to gloss over them.

    For simplicity's sake, let's just start with the Lagrangian. What exactly is it? When is it applied, and can someone offer an example of its use?
     
  3. Jul 17, 2015 #2
    I very highly recommend Leonard Susskind's Stanford lectures. I watched the Classical Mechanics lectures (2008, I think). They give a perfect explanation of the Lagrangian. If you know calculus, definitely watch them.
     
  4. Jul 17, 2015 #3
    Ah yeah, it's funny you say that; that's where I picked up most of those terms, from Susskind's lectures online. Sometimes he just throws those words around and I don't have a clue what they mean.

    I totally skipped his classical mechanics lectures, though; I went straight to GR and QM, lol! So I'll definitely take your advice to heart and watch those.
     
  5. Jul 18, 2015 #4

    HallsofIvy

    Staff Emeritus
    Science Advisor

    The "Hamiltonian" is the total energy operator. The "Lagrangian" is the operator that gives the difference between potential energy and kinetic energy (rather than the sum like the "Hamiltonian").
    There is no operator called "the Hermitian" operator. Any operator is an "Hermitian" operator if it is its own conjugate transpose. In particular, its eigenvalues are always real numbers.

    An "eigenvalue" for an operator, A, is a number, [itex]\lambda[/itex] such that there exist non-zero vectors, v, such that [itex]Av= \lambda v[/itex]. Those vectors are called "eigenvectors". Eigenvectors have the nice property that the operator, restricted to an eigenvector or multiple of an eigenvector is just "multiplying by a number" so very simple. If you can find a basis for your vector space, consisting entirely of eigenvectors, (i.e. an Hermitian operator) the operator can be written in a very simple form.

    You can't just "jump" to quantum theory and relativity, ignoring all of the preliminary physics (and mathematics)!
     
  6. Jul 20, 2015 #5
    A Lagrangian is the function you get when you take Newtonian mechanics, formulated in terms of vectors and forces, and reformulate it as an analytical method; defining precisely what it means takes a few lectures' worth of advanced classical mechanics. In general, it is the kinetic energy of a system minus its potential energy. When you integrate this function over time you get a scalar called the action. From this point on it gets pretty technical: it is taken as a postulate that the first-order variation of the action vanishes along the actual trajectory of motion.

    You should read the first book of Landau's series on physics; he derives very beautifully, from symmetry principles, why the Lagrangian is what it is. The Hamiltonian is again a reformulation: this time you go from the Lagrangian to the Hamiltonian via a Legendre transform. Note that the Lagrangian does not always have to "look" like mechanical kinetic energy minus potential energy, and the Hamiltonian does not always look like the ordinary mechanical energy of a system.

    You really should get a firm grasp of this subject before moving on. All modern theories are formulated in terms of action and Lagrangian formalisms; this includes quantum field theory and string theory, and general relativity also has an action formulation.
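
    To make that concrete, here is a short sketch (my own example, not from the post above) that uses SymPy's euler_equations helper to turn the Lagrangian of a mass on a spring, ##L = \tfrac{1}{2} m \dot{x}^2 - \tfrac{1}{2} k x^2##, into its equation of motion:

    import sympy as sp
    from sympy.calculus.euler import euler_equations

    t = sp.symbols('t')
    m, k = sp.symbols('m k', positive=True)
    x = sp.Function('x')

    # Lagrangian L = T - V for a mass on a spring.
    L = sp.Rational(1, 2) * m * sp.diff(x(t), t)**2 - sp.Rational(1, 2) * k * x(t)**2

    # The Euler-Lagrange equation reproduces Newton's second law: m x'' = -k x.
    print(euler_equations(L, x(t), t))
    # -> a result equivalent to m*x''(t) + k*x(t) = 0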
     
  7. Jul 20, 2015 #6
    The Lagrangian is best understood through objects called functionals. A functional is a device that returns a number for any given function; a definite integral, for instance, is a simple example of a functional. We also have the functional derivative, which describes how the value of a functional changes when the function fed into it is changed. It is defined:

    ##\frac{\delta F[f]}{\delta f(x)} = \lim_{\epsilon \to 0} \frac{1}{\epsilon}\Big( F[f(y) + \epsilon\,\delta(x - y)] - F[f(y)] \Big)##

    where ##F## is some functional, ##f## is some function, ##y## is a dummy variable, and ##\delta(x - y)## is a Dirac delta function. The subject of the calculus of variations deals with operations on functionals.
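
    As a quick sanity check of that definition (my own added example): take ##F[f] = \int f(y)^2 \, dy##. Substituting ##f(y) + \epsilon\,\delta(x - y)## and keeping only the term linear in ##\epsilon## (the formally second-order ##\epsilon^2\,\delta(0)## piece is discarded, as usual in these formal manipulations) gives ##F[f + \epsilon\delta] - F[f] = 2\epsilon f(x) + \mathcal{O}(\epsilon^2)##, so ##\delta F[f]/\delta f(x) = 2 f(x)##, just as if we had differentiated ##f^2## with respect to the value of ##f## at the point ##x##.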

    When you're considering the trajectory of a particle, there are two functionals you want to think of: the averages, over some duration ##\tau##, of the potential energy ##V## and kinetic energy ##T## of a particle following a trajectory ##x(t)## connecting two points ##a## and ##b##. These are defined:

    ##T_{\rm ave} = \tau^{-1} \int_0^{\tau} \tfrac{1}{2} m\, [\dot{x}(t)]^2 \, dt##

    ##V_{\rm ave} = \tau^{-1} \int_0^{\tau} V[x(t)] \, dt##

    We can take the functional derivatives of ##V_{\rm ave}## and ##T_{\rm ave}## and find that, for the actual physical trajectory,

    ##\delta V_{\rm ave}[x(t)]/\delta x(t) = \delta T_{\rm ave}[x(t)]/\delta x(t)##

    meaning that if we vary the trajectory ##x(t)## slightly about the true one, the average kinetic energy and the average potential energy over the duration each change by the same amount to first order, or equivalently that the difference between the average kinetic energy and the average potential energy does not change, to first order, as the trajectory is varied:

    ##\frac{\delta}{\delta x(t)}\big(T_{\rm ave}[x(t)] - V_{\rm ave}[x(t)]\big) = 0##

    That observation motivates us to define the Lagrangian, ##L = T - V##. The Lagrangian is special because its time integral along the actual trajectory is stationary under small variations of that trajectory. We define this time integral, from ##0## to ##\tau##, to be a quantity ##S## called the action, measured in joule-seconds (if you've taken an elementary QM course, reflect on the fact that Planck's constant ##h## is also measured in joule-seconds). We then have Hamilton's principle of least action,

    ##\delta S/\delta x(t) = 0##

    which tells us that the path taken by a particle between points ##a## and ##b## is the one for which the action is stationary (a maximum, minimum, or saddle point, analogous to stationary points in ordinary calculus). By using the Lagrangian and the principle of least action, we can derive the Euler-Lagrange equation, and from there we can solve the Euler-Lagrange equation to determine the equations of motion for the system. Intuitively, the Lagrangian and the action contain the dynamics of the system.

    Here's an example: https://en.wikipedia.org/wiki/Lagrangian#An_example_from_classical_mechanics
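
    As a rough numerical illustration of the idea (my own sketch, not part of the reply above): for a ball thrown straight up under gravity, the true parabolic trajectory gives a smaller action ##S = \int (T - V)\, dt## than, say, the path that just sits at the starting height between the same two endpoint times:

    import numpy as np

    m, g = 1.0, 9.8
    t1, t2 = 0.0, 1.0
    t = np.linspace(t1, t2, 2001)
    dt = t[1] - t[0]

    def action(x):
        """Trapezoidal estimate of S = integral of (T - V) dt along the path x(t)."""
        xdot = np.gradient(x, t)
        lagrangian = 0.5 * m * xdot**2 - m * g * x   # T - V with V = m g x
        return np.sum(0.5 * (lagrangian[1:] + lagrangian[:-1])) * dt

    x_true = 0.5 * g * t * (t2 - t)       # solves x'' = -g with x(t1) = x(t2) = 0
    x_flat = np.zeros_like(t)             # stays at x = 0 the whole time

    print(action(x_true))                 # about -4.0
    print(action(x_flat))                 # 0.0: larger than the true path's action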
     
  8. Jul 23, 2015 #7
    Thanks so much for the detailed response! I think I understand now. Thank you!
     
  9. Jul 23, 2015 #8

    julian

    Gold Member

    Here is a simple derivation of an action principle. It is fairly easy to prove that the solution of Newton's equation for particle motion,

    ##m {d^2 x \over dt^2} = F##

    minimizes the action:

    ##S = \int_{t_1}^{t_2} \big[ {1 \over 2} m \dot{x}^2 - V (x) \big] dt##

    where the integrand ##\mathcal{L} = {1 \over 2} m \dot{x}^2 - V (x)## is called the Lagrangian.

    Denote by ##x_c (t)## the motion which minimizes the action. We consider an alternative `motion' given by ##x (t) = x_c (t) + \alpha \eta (t)## where ##\alpha## is a parameter we are free to vary and ##\eta (t)## is an arbitrary function (called a `test' function). The only condition we impose is that the initial and final positions are fixed, in other words ##\eta (t_1) = \eta (t_2) = 0##.

    We can calculate the action of this alternative motion ##x (t) = x_c (t) + \alpha \eta (t)## - making it a function of ##\alpha##:

    ##S(\alpha) = \int_{t_1}^{t_2} \big[ {1 \over 2} m (\dot{x}_c + \alpha \dot{\eta})^2 - V (x_c + \alpha \eta) \big] dt##.

    As ##x_c (t)## minimizes the action, ##S(\alpha)## will have a minimum at ##\alpha = 0## for any test function ##\eta (t)##. This can be expressed:

    ##{\partial S (\alpha = 0) \over \partial \alpha} = 0##.

    Let us expand ##S(\alpha)## in powers of ##\alpha##:

    ##S(\alpha) = \int_{t_1}^{t_2} \big[ {1 \over 2} m (\dot{x}_c + \alpha \dot{\eta})^2 - V (x_c + \alpha \eta) \big] dt =##
    ##= \int_{t_1}^{t_2} \big[ {1 \over 2} m \dot{x}_c^2 - V(x_c) + \alpha m \dot{x}_c \dot{\eta} - \alpha {\partial V \over \partial x} \eta + \mathcal{O} (\alpha^2) \big] dt##

    (where ## \mathcal{O} (\alpha^2)## denotes terms of order ##\alpha^2## and greater). The requirement ##\partial S (\alpha = 0) / \partial \alpha = 0## gives:

    ##\int_{t_1}^{t_2} \big[ m \dot{x}_c \dot{\eta} (t) - {\partial V \over \partial x} \eta (t) \big] dt = 0##

    Upon integrating by parts, we obtain:

    ##[m \dot{x}_c \eta (t)]_{t_1}^{t_2} - \int_{t_1}^{t_2} \big[ m \ddot{x}_c + {\partial V \over \partial x} \big] \eta (t) dt = 0##

    Using ##\eta (t_1) = \eta (t_2) = 0##, this reduces to:

    ##\int_{t_1}^{t_2} \big[ m \ddot{x}_c + {\partial V (x_c) \over \partial x} \big] \eta (t) dt = 0##.

    Since this is true for all test functions ##\eta (t)##, we can imagine test functions `concentrated' at each particular time between ##t_1## and ##t_2## and conclude that the expression in the square brackets vanishes, giving:

    ##m {d^2 x_c \over dt^2} = - {\partial V (x_c) \over \partial x} = F##,

    the required result.
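
    Here is a small numerical check of this derivation (my own sketch, not part of the post above): take ##m = k = 1## with ##V(x) = \tfrac{1}{2} k x^2##, use the known solution ##x_c(t) = \sin t## on ##[0, \pi]##, and a test function ##\eta(t) = \sin 2t## that vanishes at the endpoints. Evaluating ##S(\alpha)## numerically shows it is smallest at ##\alpha = 0##, so ##\partial S / \partial \alpha = 0## there, as the argument requires:

    import numpy as np

    m, k = 1.0, 1.0
    t = np.linspace(0.0, np.pi, 4001)
    dt = t[1] - t[0]

    x_c = np.sin(t)          # solves m x'' = -k x
    eta = np.sin(2 * t)      # test function with eta(t1) = eta(t2) = 0

    def action(alpha):
        """Trapezoidal estimate of S(alpha) for the varied path x_c + alpha * eta."""
        x = x_c + alpha * eta
        xdot = np.gradient(x, t)
        lagrangian = 0.5 * m * xdot**2 - 0.5 * k * x**2
        return np.sum(0.5 * (lagrangian[1:] + lagrangian[:-1])) * dt

    for a in (-0.2, -0.1, 0.0, 0.1, 0.2):
        print(f"alpha = {a:+.1f}   S = {action(a): .4f}")
    # S grows quadratically away from alpha = 0, so dS/dalpha vanishes there.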
     
    Last edited: Jul 23, 2015