Calculus of variations: multiple variables, functions of one variable

In summary: to extremize
[tex]J[f]=\iint L\left(x,y,f(x,y),g(x,y),f_x(x,y),f_y(x,y),g_x(x,y),g_y(x,y)\right)\,dx\,dy,[/tex]
treating ##f## and ##g## as different functions, the standard result gives
[tex]\frac{\partial L}{\partial f}-\frac{\partial}{\partial x}\left(\frac{\partial L}{\partial f_x}\right)-\frac{\partial}{\partial y}\left(\frac{\partial L}{\partial f_y}\right)=0[/tex]
and the analogous equation with ##g## in place of ##f##.
  • #1
phi1123
Simply put, can you find the function which extremizes the integral
[tex]J[f]=\iint L\left(x,y,f(x),f(y),f'(x),f'(y)\right) \,dx \,dy[/tex]
where ##f## is the function to be extremized, and ##x## and ##y## are independent variables? A result seems possible by using the usual calculus of variations technique of finding
[tex]\left.\frac{d}{d\epsilon}J[f+\epsilon \eta]\right|_{\epsilon=0}[/tex]
treating ##f(x)## and ##f(y)## as two different functions, and setting this to zero since ##J## is extremized. My (rather sketchy) calculations seem to give the differential equations:
[itex]\frac{\partial L}{\partial f(x)}-\frac{\partial}{\partial x}\left(\frac{\partial L}{\partial f'(x)}\right)=0\text{ and}[/itex]
[itex]\frac{\partial L}{\partial f(y)}-\frac{\partial}{\partial y}\left(\frac{\partial L}{\partial f'(y)}\right)=0[/itex]
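In outline (assuming the variation ##\eta## vanishes on the boundary of the integration region), the computation behind these is
[tex]\left.\frac{d}{d\epsilon}J[f+\epsilon \eta]\right|_{\epsilon=0}=\iint\left[\frac{\partial L}{\partial f(x)}\,\eta(x)+\frac{\partial L}{\partial f(y)}\,\eta(y)+\frac{\partial L}{\partial f'(x)}\,\eta'(x)+\frac{\partial L}{\partial f'(y)}\,\eta'(y)\right]dx\,dy,[/tex]
and integrating the last two terms by parts in ##x## and ##y## respectively leaves
[tex]\iint\left[\left(\frac{\partial L}{\partial f(x)}-\frac{\partial}{\partial x}\frac{\partial L}{\partial f'(x)}\right)\eta(x)+\left(\frac{\partial L}{\partial f(y)}-\frac{\partial}{\partial y}\frac{\partial L}{\partial f'(y)}\right)\eta(y)\right]dx\,dy=0.[/tex]
The two equations above make each bracket vanish separately, so they are at least sufficient; since ##\eta(x)## and ##\eta(y)## come from the same function, the brackets can also be combined by relabeling the dummy variables, giving a single integrated condition.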
Can anyone with more experience than me in calculus of variations (or has maybe encountered this scenario before) confirm my result?

Also maybe help me out with how Lagrange multipliers would work under such a situation? Basically the same as with two independent variables?
 
  • #2
I'm sorry you are not finding help at the moment. Is there any additional information you can share with us?
 
  • #3
phi1123 said:
Simply put, can you find the function which extremizes the integral
[tex]J[f]=\iint L\left(x,y,f(x),f(y),f'(x),f'(y)\right) \,dx \,dy[/tex]
where ##f## is the function to be extremized, and ##x## and ##y## are independent variables? A result seems possible by using the usual calculus of variations technique of finding
[tex]\left.\frac{d}{d\epsilon}J[f+\epsilon \eta]\right|_{\epsilon=0}[/tex]
treating ##f(x)## and ##f(y)## as two different functions, and setting this to zero since ##J## is extremized. My (rather sketchy) calculations seem to give the differential equations:
[itex]\frac{\partial L}{\partial f(x)}-\frac{\partial}{\partial x}\left(\frac{\partial L}{\partial f'(x)}\right)=0\text{ and}[/itex]
[itex]\frac{\partial L}{\partial f(y)}-\frac{\partial}{\partial y}\left(\frac{\partial L}{\partial f'(y)}\right)=0[/itex]
Can anyone with more experience than me in calculus of variations (or has maybe encountered this scenario before) confirm my result?

Also maybe help me out with how Lagrange multipliers would work under such a situation? Basically the same as with two independent variables?

If the problem were to extremize

[tex]J[f]=\iint L\left(x,y,f(x,y),g(x,y),f_x(x,y),f_y(x,y),g_x(x,y),g_y(x,y) \right) \,dx \,dy[/tex]

here, treating [itex]f[/itex] and [itex]g[/itex] as different functions, the standard result gives:

[itex]{\partial L \over \partial f} - {\partial \over \partial x} \Big( {\partial L \over \partial f_x} \Big) - {\partial \over \partial y} \Big( {\partial L \over \partial f_y} \Big) = 0[/itex]

and

[itex]{\partial L \over \partial g} - {\partial \over \partial x} \Big( {\partial L \over \partial g_x} \Big) - {\partial \over \partial y} \Big( {\partial L \over \partial g_y} \Big) = 0[/itex]

But in your case ([itex]f(x,y) = f(x)[/itex] and [itex]g(x,y) = g(y)[/itex]) we have [itex]f_y = 0[/itex] and [itex]g_x = 0[/itex], so [itex]L[/itex] is not a function of them, and what you should have is:


[itex]{\partial L \over \partial f} - {\partial \over \partial x} \Big( {\partial L \over \partial f_x} \Big) = 0[/itex]

and

[itex]{\partial L \over \partial g} - {\partial \over \partial y} \Big( {\partial L \over \partial g_y} \Big) = 0[/itex]
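
As a quick illustrative check (an example of my own choosing, not from the original question): with [itex]L = \tfrac{1}{2} f'(x)^2 - f(x) + \tfrac{1}{2} g'(y)^2 - g(y)[/itex], the two reduced equations read [itex]-1 - f''(x) = 0[/itex] and [itex]-1 - g''(y) = 0[/itex], so the extremals are the parabolas [itex]f(x) = -\tfrac{1}{2}x^2 + ax + b[/itex], and likewise for [itex]g(y)[/itex].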
 
  • #4
Tired now...will try to give proper answer tomorrow.
 
  • #5
Haven't thought about it much; interesting question. Have you tried including the constraint [itex]f=g[/itex] via

[itex]L \mapsto L + \lambda \iint \delta(x-y)\,[f(x) - g(y)]^2 \,dx\,dy[/itex]?
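
A rough sketch of what this quadratic penalty does (my reading, taking the extra term as added to the functional being extremized): its variation with respect to ##f## is
[tex]\frac{\delta}{\delta f(x)}\,\lambda\iint \delta(x'-y)\,[f(x')-g(y)]^2\,dx'\,dy = 2\lambda\,[f(x)-g(x)],[/tex]
so the ##f## equation picks up a term ##2\lambda\,(f-g)## and the ##g## equation a term ##-2\lambda\,(f-g)##; heuristically, the hard constraint ##f=g## is recovered in the limit ##\lambda \to \infty##, as in standard penalty methods.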
 

1. What is the calculus of variations?

The calculus of variations is the branch of mathematics concerned with optimizing quantities that depend on entire functions rather than on finitely many variables. The central object is a functional, a mapping that assigns a real number to each function in some admissible set; one seeks the functions that extremize it, subject to any given constraints.
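
For instance, the expression
[tex]J[y]=\int_0^1 y(x)^2\,dx[/tex]
is a functional: it assigns a single real number to every continuous function ##y## on ##[0,1]##, and the calculus of variations asks which admissible ##y## makes such a number as large or as small as possible.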

2. How is the calculus of variations used in real-world applications?

The calculus of variations has many practical applications in fields such as physics, engineering, economics, and biology. It is used to find optimal solutions in problems involving motion, heat transfer, control systems, and more. For example, it can be used to find the path that a particle will take to minimize the time or energy required to travel between two points in space.

3. What is the difference between functions of one variable and multiple variables in the calculus of variations?

In the calculus of variations, functions of one variable involve finding the optimal value of a single function. On the other hand, functions of multiple variables involve finding the optimal values of multiple functions simultaneously. This adds an extra level of complexity to the problem, as the functions may be interdependent and affect each other's optimal values.

4. What is the Euler-Lagrange equation and how is it used in the calculus of variations?

The Euler-Lagrange equation is the fundamental equation of the calculus of variations, used to find candidate extremizers of a functional. It is derived by requiring that the first variation of the functional vanish: at an extremum, the functional is stationary (unchanged to first order) under small variations of the functions. The Euler-Lagrange equation is therefore a necessary, though not by itself sufficient, condition for an extremum.
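
A classic worked example: for the arc-length functional
[tex]J[y]=\int_a^b \sqrt{1+y'(x)^2}\,dx,[/tex]
the Euler-Lagrange equation
[tex]\frac{\partial L}{\partial y}-\frac{d}{dx}\left(\frac{\partial L}{\partial y'}\right)=0[/tex]
reduces to
[tex]\frac{d}{dx}\left(\frac{y'}{\sqrt{1+y'^2}}\right)=0,[/tex]
so ##y'## is constant and the extremals are straight lines, recovering the familiar fact that the shortest path between two points is a line segment.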

5. Is it necessary to use calculus of variations to solve optimization problems?

No, there are other methods for solving optimization problems such as linear programming or gradient descent. However, the calculus of variations is particularly useful for problems that involve continuous functions and can provide more precise and elegant solutions. It also has applications in more complex optimization problems that cannot be solved using other methods.
