
Homework Help: How to find the minimum of an integral with calculus of variations

  1. Nov 15, 2011 #1
    I need to find the minimum of this integral

    [tex]F = \int \left( \alpha y^{-1} + \beta y^3 + \delta x y \right) dx[/tex]

    where α, β and δ are constant; y is a function of x

    the integral is calculated over the interval [0,L], where L is constant

    I need to find the function y that minimizes the above-mentioned integral

    The integral is subject to the following constraint

    [tex]N = \int y \, dx[/tex]

    where N is a constant and the integral interval is again [0,L]


    Can anyone help?
    Is it possible to find an analytical solution?
    Thanks

    PS: Sorry for the bad formatting; it's my first post.
     
    Last edited: Nov 15, 2011
  3. Nov 15, 2011 #2

    Ray Vickson

    Science Advisor
    Homework Helper

    This does not look anything like a calculus of variations problem, because dy/dx is not involved in the integrand. Instead, you can just minimize the integrand for each x (to get a function y(x)). More precisely, you can look at the "Lagrangian"-type problem, where you want to minimize ∫ f(x,y) dx + r ∫ y dx with no constraints; here r is a Lagrange multiplier, and note that it is a constant, not a function of x. So your integrand is of the form
    [tex]f(x,y) = \frac{a}{y} + b y^3 + c x y + r y,[/tex]
    where I have used 'a' instead of α, 'b' instead of β, and 'c' instead of δ. If a > 0 and b > 0, we can minimize f by setting [itex] \partial f/\partial y = 0 [/itex] for each x and solving for y. There are four roots, but for b > 0 it seems there are only two relevant roots, both of which contain the parameter r. Determine r by requiring that ∫ y dx = N. (This will be a nasty problem that almost certainly needs a numerical approach for given a, b, c and L.)
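
    Spelling this out (it follows directly from the f above), the stationarity condition is
    [tex]\frac{\partial f}{\partial y} = -\frac{a}{y^2} + 3b y^2 + cx + r = 0 \quad\Longrightarrow\quad 3b\,y^4 + (cx+r)\,y^2 - a = 0,[/tex]
    which is a quadratic in [itex]u = y^2[/itex]. For a, b > 0 only the root
    [tex]u = \frac{-(cx+r) + \sqrt{(cx+r)^2 + 12ab}}{6b}[/tex]
    is positive, and the two relevant solutions are [itex]y(x) = \pm\sqrt{u}[/itex].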

    If a and/or b is negative, there may not be a minimum at all: we may be able to find a sequence y_n(x) giving ∫ f(x, y_n(x)) dx → −∞ while keeping ∫ y_n(x) dx = N for each n. (I am not absolutely sure about this, but I think it is true.)
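
    For what it's worth, here is a minimal numerical sketch of that recipe in Python (assuming SciPy is available; the values of a, b, c, L and N below are made up for illustration, not taken from the problem):

    [code]
    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import brentq

    # Illustrative parameters (assumptions, not from the original problem)
    a, b, c = 1.0, 1.0, 1.0   # alpha, beta, delta, all taken positive
    L, N = 1.0, 2.0           # integration interval [0, L] and constraint value

    def y_of_x(x, r):
        """Positive root of 3b y^4 + (c x + r) y^2 - a = 0, the pointwise minimizer for y > 0."""
        p = c * x + r
        u = (-p + np.sqrt(p * p + 12.0 * a * b)) / (6.0 * b)  # u = y^2 > 0 when a, b > 0
        return np.sqrt(u)

    def constraint(r):
        """int_0^L y(x; r) dx - N; choose r so this vanishes."""
        integral, _ = quad(lambda x: y_of_x(x, r), 0.0, L)
        return integral - N

    # y(x; r) decreases as r grows, so constraint(r) is monotone: bracket and solve.
    # The bracket [-50, 50] works for these illustrative numbers; adjust as needed.
    r_star = brentq(constraint, -50.0, 50.0)
    print("Lagrange multiplier r =", r_star)
    print("check: int y dx =", quad(lambda x: y_of_x(x, r_star), 0.0, L)[0])
    [/code]

    Since constraint(r) is monotone in r, a simple bracketing root-finder on r is enough; the bracket would need adjusting for other parameter values.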

    RGV
     
  4. Nov 16, 2011 #3
    thanks a lot!
     