Calculus of variations: multiple variables, functions of one variable


Discussion Overview

The discussion centers on the calculus of variations, specifically the extremization of integrals whose integrand involves a single unknown function evaluated at two independent variables. Participants explore the differential equations arising from this extremization and the application of Lagrange multipliers in such a setting.

Discussion Character

  • Exploratory
  • Technical explanation
  • Mathematical reasoning

Main Points Raised

  • One participant proposes a method to find the function that extremizes the integral J[f] by treating f(x) and f(y) as separate functions, leading to specific differential equations.
  • Another participant expresses a need for more information to assist with the inquiry.
  • A later reply quotes the original post and recasts the problem with two functions f(x,y) and g(x,y); treating f and g as different functions, the standard Euler-Lagrange equations reduce to the proposed form once f_y = 0 and g_x = 0.
  • One participant mentions a potential approach using constraints via Lagrange multipliers, indicating a modification to the Lagrangian to include a delta function for the constraint f=g.

Areas of Agreement / Disagreement

The discussion reflects multiple competing views and approaches regarding the extremization of the integral and the application of Lagrange multipliers. No consensus is reached on the correctness of the proposed methods or results.

Contextual Notes

Participants express uncertainty about the application of Lagrange multipliers in this context and the implications of treating f and g as different functions. There are unresolved aspects regarding the assumptions made in the calculations and the dependencies on specific definitions.

phi1123
Simply put, can you find the function which extremizes the integral
J[f]=\iint L\left(x,y,f(x),f(y),f'(x),f'(y)\right) \,dx \,dy
where ##f## is the function to be extremized, and ##x## and ##y## are independent variables? A result seems possible by using the usual calculus of variations technique of finding
\left.\frac{d}{d\epsilon}J[f+\epsilon \eta]\right|_{\epsilon=0}
treating ##f(x)## and ##f(y)## as two different functions, and setting this to zero since ##J## is extremized. My (rather sketchy) calculations seem to give the differential equations:
\frac{\partial L}{\partial f(x)}-\frac{\partial}{\partial x}\left(\frac{\partial L}{\partial f'(x)}\right)=0\text{ and}
\frac{\partial L}{\partial f(y)}-\frac{\partial}{\partial y}\left(\frac{\partial L}{\partial f'(y)}\right)=0
Can anyone with more experience in calculus of variations (or who has perhaps encountered this scenario before) confirm my result?
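
For reference, here is the sketch of my computation, assuming ##\eta## vanishes on the boundary and the domain of integration is symmetric under swapping ##x## and ##y##. Writing out the first variation and integrating the ##\eta'## terms by parts gives
\left.\frac{d}{d\epsilon}J[f+\epsilon \eta]\right|_{\epsilon=0}=\iint\left\{\left[\frac{\partial L}{\partial f(x)}-\frac{\partial}{\partial x}\left(\frac{\partial L}{\partial f'(x)}\right)\right]\eta(x)+\left[\frac{\partial L}{\partial f(y)}-\frac{\partial}{\partial y}\left(\frac{\partial L}{\partial f'(y)}\right)\right]\eta(y)\right\}dx\,dy
Relabelling ##x \leftrightarrow y## in the second bracket so that everything multiplies ##\eta(x)##, stationarity for arbitrary ##\eta## gives, for each ##x##,
\int\left[\frac{\partial L}{\partial f(x)}-\frac{\partial}{\partial x}\left(\frac{\partial L}{\partial f'(x)}\right)\right]dy+\int\left[\frac{\partial L}{\partial f(y)}-\frac{\partial}{\partial y}\left(\frac{\partial L}{\partial f'(y)}\right)\right]_{x\leftrightarrow y}dy=0
where the subscript ##x\leftrightarrow y## means the arguments are swapped after the derivatives are taken. If I haven't slipped up, my two equations above are therefore sufficient for stationarity (each bracket vanishing separately), but the variation itself only forces this integrated sum to vanish.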

Also, maybe help me out with how Lagrange multipliers would work in such a situation? Basically the same as with two independent variables?
 
I'm sorry you are not finding help at the moment. Is there any additional information you can share with us?
 
phi1123 said:
Simply put, can you find the function which extremizes the integral [...] Can anyone with more experience in calculus of variations (or who has perhaps encountered this scenario before) confirm my result?

If the problem were to extremize

J[f]=\iint L\left(x,y,f(x,y),g(x,y),f_x(x,y),f_y(x,y),g_x(x,y),g_y(x,y) \right) \,dx \,dy

then, treating f and g as different functions, the standard result gives:

{\partial L \over \partial f} - {\partial \over \partial x} \Big( {\partial L \over \partial f_x} \Big) - {\partial \over \partial y} \Big( {\partial L \over \partial f_y} \Big) = 0

and

{\partial L \over \partial g} - {\partial \over \partial x} \Big( {\partial L \over \partial g_x} \Big) - {\partial \over \partial y} \Big( {\partial L \over \partial g_y} \Big) = 0

But in your case ##f_y = 0## and ##g_x = 0## (since ##f(x,y) = f(x)## and ##g(x,y) = g(y)##), so ##L## is not a function of them, and what you should have is:


{\partial L \over \partial f} - {\partial \over \partial x} \Big( {\partial L \over \partial f_x} \Big) = 0

and

{\partial L \over \partial g} - {\partial \over \partial y} \Big( {\partial L \over \partial g_y} \Big) = 0
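
To see this reduction on a concrete example, here is a quick SymPy check; the integrand ##L = f(x)\,g(y) + f'(x)^2 + g'(y)^2## and all the names in the script are made up purely for illustration:

import sympy as sp

x, y = sp.symbols('x y')
F, Fp, G, Gp = sp.symbols('F Fp G Gp')  # stand-ins for f(x), f'(x), g(y), g'(y)

# Illustrative integrand (my own choice, not from the thread):
# L = f(x)*g(y) + f'(x)**2 + g'(y)**2
L = F*G + Fp**2 + Gp**2

f = sp.Function('f')(x)
g = sp.Function('g')(y)
back = {F: f, Fp: f.diff(x), G: g, Gp: g.diff(y)}

# Euler-Lagrange expression for f: dL/df - d/dx (dL/df').
# The dL/df_y term is absent because f depends on x alone.
EL_f = sp.diff(L, F).subs(back) - sp.diff(sp.diff(L, Fp).subs(back), x)

# Euler-Lagrange expression for g: dL/dg - d/dy (dL/dg').
# The dL/dg_x term is absent because g depends on y alone.
EL_g = sp.diff(L, G).subs(back) - sp.diff(sp.diff(L, Gp).subs(back), y)

print(EL_f)  # g(y) - 2 f''(x)
print(EL_g)  # f(x) - 2 g''(y)

The placeholder symbols F, Fp, G, Gp just let SymPy differentiate with respect to the function values and derivatives before the actual f(x) and g(y) are substituted back in.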
 
Tired now... will try to give a proper answer tomorrow.
 
Haven't thought about it much. Interesting question. Have you tried including the constraint ##f=g## via

L \mapsto L + \lambda \iint \delta(x-y)\,[f(x) - g(y)]^2 \,dx\,dy\,?
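
Note that doing the ##y##-integral against the delta function collapses the added term to
\lambda \int [f(x) - g(x)]^2 \,dx
which is nonnegative and vanishes exactly when ##f = g##, so varying with respect to ##\lambda## would enforce the constraint.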
 
