Calculus of variations: multiple variables, functions of one variable

phi1123
Simply put, can you find the function which extremizes the integral
$$J[f]=\iint L\left(x,y,f(x),f(y),f'(x),f'(y)\right)\,dx\,dy,$$
where ##f## is the function to be extremized and ##x## and ##y## are independent variables? A result seems possible by using the usual calculus of variations technique of computing
$$\left.\frac{d}{d\epsilon}J[f+\epsilon\eta]\right|_{\epsilon=0},$$
treating ##f(x)## and ##f(y)## as two different functions, and setting this to zero since ##J## is extremized. My (rather sketchy) calculations seem to give the differential equations:
$$\frac{\partial L}{\partial f(x)}-\frac{\partial}{\partial x}\left(\frac{\partial L}{\partial f'(x)}\right)=0\quad\text{and}\quad\frac{\partial L}{\partial f(y)}-\frac{\partial}{\partial y}\left(\frac{\partial L}{\partial f'(y)}\right)=0.$$
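To spell out the sketchy part: assuming the variation ##\eta## vanishes on the boundary of the integration region, the first variation works out to
$$\left.\frac{d}{d\epsilon}J[f+\epsilon\eta]\right|_{\epsilon=0}=\iint\left[\frac{\partial L}{\partial f(x)}\,\eta(x)+\frac{\partial L}{\partial f(y)}\,\eta(y)+\frac{\partial L}{\partial f'(x)}\,\eta'(x)+\frac{\partial L}{\partial f'(y)}\,\eta'(y)\right]dx\,dy.$$
Integrating the ##\eta'(x)## term by parts in ##x## and the ##\eta'(y)## term by parts in ##y##, then setting the coefficients of ##\eta(x)## and ##\eta(y)## to zero separately, gives the two equations above. That last step is the one I'm least sure of, since ##\eta(x)## and ##\eta(y)## come from the same function ##\eta##.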
Can anyone with more experience than me in calculus of variations (or who has maybe encountered this scenario before) confirm my result?

Also, could you maybe help me out with how Lagrange multipliers would work in such a situation? Would it be basically the same as with two independent variables?
 
phi1123 said:
Simply put, can you find the function which extremizes the integral
$$J[f]=\iint L\left(x,y,f(x),f(y),f'(x),f'(y)\right)\,dx\,dy,$$
where ##f## is the function to be extremized and ##x## and ##y## are independent variables? A result seems possible by using the usual calculus of variations technique of computing
$$\left.\frac{d}{d\epsilon}J[f+\epsilon\eta]\right|_{\epsilon=0},$$
treating ##f(x)## and ##f(y)## as two different functions, and setting this to zero since ##J## is extremized. My (rather sketchy) calculations seem to give the differential equations:
$$\frac{\partial L}{\partial f(x)}-\frac{\partial}{\partial x}\left(\frac{\partial L}{\partial f'(x)}\right)=0\quad\text{and}\quad\frac{\partial L}{\partial f(y)}-\frac{\partial}{\partial y}\left(\frac{\partial L}{\partial f'(y)}\right)=0.$$
Can anyone with more experience than me in calculus of variations (or who has maybe encountered this scenario before) confirm my result?

Also, could you maybe help me out with how Lagrange multipliers would work in such a situation? Would it be basically the same as with two independent variables?

If the problem were to extremize

$$J[f,g]=\iint L\left(x,y,f(x,y),g(x,y),f_x(x,y),f_y(x,y),g_x(x,y),g_y(x,y)\right)\,dx\,dy,$$

here treating ##f## and ##g## as different functions, the standard result gives:
$$\frac{\partial L}{\partial f} - \frac{\partial}{\partial x}\left(\frac{\partial L}{\partial f_x}\right) - \frac{\partial}{\partial y}\left(\frac{\partial L}{\partial f_y}\right) = 0$$

and

$$\frac{\partial L}{\partial g} - \frac{\partial}{\partial x}\left(\frac{\partial L}{\partial g_x}\right) - \frac{\partial}{\partial y}\left(\frac{\partial L}{\partial g_y}\right) = 0.$$

But ##f_y = 0## and ##g_x = 0## in your case (##f(x,y) = f(x)## and ##g(x,y) = g(y)##), so ##L## is not a function of them, and what you should have is:


$$\frac{\partial L}{\partial f} - \frac{\partial}{\partial x}\left(\frac{\partial L}{\partial f_x}\right) = 0$$

and

$$\frac{\partial L}{\partial g} - \frac{\partial}{\partial y}\left(\frac{\partial L}{\partial g_y}\right) = 0.$$
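As a quick sanity check (a toy example of mine, not from the original post): take ##L = \tfrac{1}{2}f_x^{\,2} + \tfrac{1}{2}g_y^{\,2}##. Then the two equations above reduce to
$$-f''(x) = 0 \quad\text{and}\quad -g''(y) = 0,$$
so both extremals are affine functions, exactly what you get by extremizing the two single-variable integrals separately.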
 
Tired now... will try to give a proper answer tomorrow.
 
Haven't thought about it much. Interesting question. Tried including the constraint ##f=g## via

$$L \mapsto L + \lambda \iint dx\,dy\,\delta(x-y)\,[f(x)-g(y)]^2\,?$$
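For what it's worth, a sketch of what that term would contribute (my own computation, taking ##\lambda## constant): the functional derivatives of the added piece are
$$\frac{\delta}{\delta f(x)}\,\lambda\iint dx'\,dy'\,\delta(x'-y')\,[f(x')-g(y')]^2 = 2\lambda\,[f(x)-g(x)],$$
$$\frac{\delta}{\delta g(y)}\,\lambda\iint dx'\,dy'\,\delta(x'-y')\,[f(x')-g(y')]^2 = -2\lambda\,[f(y)-g(y)],$$
which would be appended to the respective Euler-Lagrange equations; note both extra terms vanish once the constraint ##f=g## actually holds.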
 