Maximization problem using Euler Lagrange

In summary, the thread discusses maximizing an integral functional with a constraint using the calculus of variations. The problem resembles a standard calculus-of-variations problem, except that one boundary of integration, ##f^{-1}(0)##, is itself unknown. There is also discussion of the constraints on the function and of the upper bound on the integral, and suggestions are made for approaching the problem, such as restricting attention to piecewise-linear functions.
  • #1
petterson
Hi,

I'm trying to solve the following problem

##\max_{f(x)} \int_{f^{-1}(0)}^0 (kx- \int_0^x f(u)du) f'(x) dx##.

I have only little experience with calculus of variations - the problem resembles something like

## I(x) = \int_0^1 F(t, x(t), x'(t),x''(t))dt##

but I don't know about the boundary ## f^{-1}(0) ##.

Is there a way to solve this with an Euler–Lagrange equation?

Thanks very much for your help!
 
  • #2
petterson said:
Hi,

I'm trying to solve the following problem

##\max_{f(x)} \int_{f^{-1}(0)}^0 (kx- \int_0^x f(u)du) f'(x) dx##.

If this comes from an applied math problem, is there a reason to think that integral has an upper bound?

Must a solution have ##f^{-1}(0) < 0## or do we also consider the case ##f^{-1}(0) > 0 ##?
but I don't know about the boundary ## f^{-1}(0) ##.

Can you solve the problem with the constraint that ## f^{-1}(0)## is equal to a given constant ##C## ?

Let ##f_C## be a solution to the problem: maximize ##I_C(f) = \int_C^0 \left( kx - \int_0^x f(u)\,du \right) f'(x)\, dx## subject to ##f^{-1}(0) = C##. Then the set of solutions is a family of functions parameterized by ##C##, and you might be able to solve the further problem of maximizing ##I_C(f_C)## with respect to ##C##.
 
  • #3
Stephen Tashi said:
If this comes from an applied math problem, is there a reason to think that integral has an upper bound?

Must a solution have ##f^{-1}(0) < 0## or do we also consider the case ##f^{-1}(0) > 0 ##?

Can you solve the problem with the constraint that ## f^{-1}(0)## is equal to a given constant ##C## ?

Let ##f_C## be a solution to the problem: maximize ##I_C(f) = \int_C^0 \left( kx - \int_0^x f(u)\,du \right) f'(x)\, dx## subject to ##f^{-1}(0) = C##. Then the set of solutions is a family of functions parameterized by ##C##, and you might be able to solve the further problem of maximizing ##I_C(f_C)## with respect to ##C##.

We consider only the case ##f^{-1}(0) > 0 ##. Typically, the solution would be such that ##f'(x) < 0##. The problem is also such that ##k## is always an upper bound on the integral.

If I choose ##f^{-1}(0)## equal to a given constant ##C##, I think I can find a stationary point with the Euler–Lagrange equation. It gives me ##f'(x) = 0##, which I know should be part of a valid solution, but I then can't find out anything about the constant ##C##, because the integrand is zero.
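For reference, the stationarity condition ##f'(x) = 0## can be checked with the substitution ##g(x) = \int_0^x f(u)\,du## (a sketch, assuming fixed endpoints; the helper ##g## is my own notation, not from the thread):

```latex
% Substitute g(x) = \int_0^x f(u)\,du, so g' = f and g'' = f'.
% The fixed-boundary functional becomes a second-order Lagrangian:
I_C = \int_C^0 \underbrace{(kx - g)\,g''}_{L(x,\,g,\,g',\,g'')}\, dx
% Euler--Lagrange equation for a second-order Lagrangian:
\frac{\partial L}{\partial g}
  - \frac{d}{dx}\frac{\partial L}{\partial g'}
  + \frac{d^2}{dx^2}\frac{\partial L}{\partial g''} = 0
% Here \partial L/\partial g = -g'', \quad \partial L/\partial g' = 0,
% \partial L/\partial g'' = kx - g, \quad \frac{d^2}{dx^2}(kx - g) = -g''.
% Hence -g'' - g'' = -2g'' = 0, i.e. f'(x) = g''(x) = 0.
```

This reproduces the ##f'(x)=0## stationary point, and indeed gives no information about ##C##.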
 
  • #4
petterson said:
We consider only the case ##f^{-1}(0) > 0 ##.

In that case, it's easier for me to visualize the problem as:

##I(f) = \int_0^{f^{-1}(0)} \left( \int_0^x f(u)\,du - kx \right) f'(x)\, dx##
Find ##M = \max_f I(f)##.

Typically, the solution would be such that ##f'(x) < 0##. The problem is also such that ##k## is always an upper bound on the integral.

Are we insisting on the constraint ##f'(x) < 0##? If not, I don't understand why there is any upper bound on ##I(f)##. To make ##I(f)## big, use a function ##f## that is big at ##x = 0## and increases more steeply than the graph of ##y = kx##.

Edit: No, the above idea isn't correct. ##f## must eventually cross the positive x-axis in order for ##f^{-1}(0)## to be positive. So ##f## can't increase indefinitely.
 
  • #5
Stephen Tashi said:
In that case, it's easier for me to visualize the problem as:

##I(f) = \int_0^{f^{-1}(0)} \left( \int_0^x f(u)\,du - kx \right) f'(x)\, dx##
Find ##M = \max_f I(f)##.

Are we insisting on the constraint ##f'(x) < 0##? If not, I don't understand why there is any upper bound on ##I(f)##. To make ##I(f)## big, use a function ##f## that is big at ##x = 0## and increases more steeply than the graph of ##y = kx##.

Edit: No, the above idea isn't correct. ##f## must eventually cross the positive x-axis in order for ##f^{-1}(0)## to be positive. So ##f## can't increase indefinitely.

Sorry, I wasn't quite clear about ##f'(x)## and the upper bound: from the setup of the problem it is given that ##k## is an upper bound on the integral. Typically we find ##f'(x) < 0## in related problems. It would be nicer not to impose it as a constraint, but we may easily do so if it makes the problem more tractable.
It would probably be fine to write the problem as
##I(f) = \int_0^{f^{-1}(0)} \left( kx - \int_0^x f(u)\,du \right) |f'(x)|\, dx##
Find ##M = \max_f I(f)##.
 
  • #6
petterson said:
from the setup of the problem it is a given that ## k ## is an upper bound on the integral.
Do you mean ##I(f) \le k##? Or is ##k## a bound for ##\int_0^x f(u)\,du##?

It would probably be fine to write the problem as
##I(f) = \int_0^{f^{-1}(0)} \left( kx - \int_0^x f(u)\,du \right) |f'(x)|\, dx##

In that case, do we use the constraint ##f(x) \ge 0##?
 
  • #7
Stephen Tashi said:
Do you mean ##I(f) \le k##? Or is ##k## a bound for ##\int_0^x f(u)\,du##?

In that case, do we use the constraint ##f(x) \ge 0##?

Yes, I mean ##I(f) \le k##. In the original problem ##x## is between 0 and 1; the boundary ##f^{-1}(0)## comes from a substitution. We also restrict to ##f(x) \ge 0##. Sorry for all the confusion - I thought there might be a more standard technique that could help me. Below is the original problem with all constraints:

Assuming ## x \in [0,1], a \in [0,1], k > 0 ##

$$\max_{f} I(f) = \int_{0}^{f(0)} \left( k f^{-1}(a) - \int_0^{f^{-1}(a)} f(t)\,dt \right) da$$
$$\text{s.t. } f(x) \geq 0, \quad f'(x) \leq 0$$
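As a quick numerical sanity check of this objective (a sketch in Python; the trial function ##f(x) = 1 - x##, the value ##k = 2##, and the helper names `trap` and `I` are my own illustrative choices, not part of the problem):

```python
import numpy as np

def trap(y, x):
    """Trapezoidal rule for samples y on grid x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def I(f, f_inv, k, n=2001):
    """Evaluate I(f) = int_0^{f(0)} ( k f^{-1}(a) - int_0^{f^{-1}(a)} f(t) dt ) da
    for a decreasing f on [0, 1] whose inverse f_inv is known explicitly."""
    a = np.linspace(0.0, f(0.0), n)
    x_a = f_inv(a)                       # f^{-1}(a) for each a on the grid
    inner = []
    for xa in x_a:                       # inner integral int_0^{f^{-1}(a)} f(t) dt
        t = np.linspace(0.0, xa, n)
        inner.append(trap(f(t), t))
    return trap(k * x_a - np.array(inner), a)

# Hypothetical trial function f(x) = 1 - x, with inverse f^{-1}(a) = 1 - a.
# For this particular f the integral works out in closed form to k/2 - 1/3.
k = 2.0
val = I(lambda x: 1.0 - x, lambda a: 1.0 - a, k)
print(val)   # ~ 0.6667 for k = 2
```

The quadrature reproduces the closed form ##k/2 - 1/3## for this trial function, which is a useful check on any later discretized approach.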
 
  • #8
I don't see how to formulate that problem as a typical Calculus Of Variations problem.

I have some vague thoughts about approaching it in the spirit of the calculus of variations (or perhaps "dynamic programming").

Suppose we only consider functions that are piecewise linear. Let ##[0,1]## on the x-axis be partitioned into given intervals so that on ##[x_i, x_{i+1}]##, ##f(x) = A_i x + B_i##. Assume ##f_m(x)## is a particular function that maximizes ##I(f)##. Can we develop simultaneous equations that would allow us to solve for the ##f_m(x_i)##?

We get some equations by requiring that each linear segment of the graph of ##f_m## meets the next linear segment at a common point: ##A_i x_{i+1} + B_i = A_{i+1} x_{i+1} + B_{i+1}##. ( Each of the ##A_i## and ##B_i## can be expressed in terms of values of ##f_m##.)

It seems possible to get other equations by using the idea that the value of ##f_m## at the point ##x_{i+1}## must be the optimum value, given the values of ##f_m## at all the other ##x_j##.

Momentarily take the viewpoint that the ##f(x_j)## are known except that we don't know ##f(x_{i+1})## (i.e. we don't know what to do about ##f_m## between ##(x_i, f(x_i))## and ##(x_{i+2}, f(x_{i+2}))##). Solve the problem of maximizing ##I(f_m)## with respect to choosing ##f_m(x_{i+1})##.
We can consider ##I(f)## as the difference of two integrals. They will be evaluated as finite sums.

Pretending we know ordered pairs ##(x_j, f_m(x_j))## of ##f_m## implies we also know ordered pairs ##(f_m(x_j),x_j)## of ##f_m^{-1}##.

For ##I_1(f) = \int_0^{f(0)} k f^{-1}(a)\, da##, the formula for ##I_1(f)## will be a summation of areas of trapezoids. We are pretending these areas are known except for the two trapezoids with common vertex ##(x_{i+1}, f_m(x_{i+1}))##.

For ##I_2(f) = \int_0^{f(0)}\ [ \int_0^{f^{-1}(a)} f(t)dt ]\ da ## , thinking about it makes my head spin!

My intuition is that ##I_2(f)## can be expressed as a function of the unknown ##f_m(x_{i+1})## and the other known ##f_m(x_j)## in an explicit manner.

Assuming it can, solving the 1-variable optimization problem

##\max_{f_m(x_{i+1})} \left( I_1(f_m) - I_2(f_m) \right)##

should give an equation relating ##f_m(x_{i+1})## to the other values of ##f_m##.
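The piecewise-linear idea can be prototyped numerically (a sketch; instead of the explicit trapezoid bookkeeping for ##I_1## and ##I_2##, it evaluates ##I(f)## by nested quadrature and replaces the stationarity equation with a brute-force grid search over one node. The three-node grid, the frozen endpoint values ##f(0)=1##, ##f(1)=0##, ##k = 2##, and the helper names `trap` and `I_pl` are all my own assumptions):

```python
import numpy as np

def trap(y, x):
    """Trapezoidal rule for samples y on grid x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def I_pl(xs, ys, k, n=201):
    """I(f) for the piecewise-linear f through nodes (xs, ys),
    with ys decreasing and ys[-1] = 0 so the inverse covers [0, f(0)]."""
    f = lambda x: np.interp(x, xs, ys)
    f_inv = lambda a: np.interp(a, ys[::-1], xs[::-1])   # inverse from reversed pairs
    a = np.linspace(0.0, ys[0], n)
    x_a = f_inv(a)
    inner = []
    for xa in x_a:                        # inner integral int_0^{f^{-1}(a)} f(t) dt
        t = np.linspace(0.0, xa, n)
        inner.append(trap(f(t), t))
    return trap(k * x_a - np.array(inner), a)

# One-variable optimization, as described above: freeze f(0) = 1 and f(1) = 0,
# then grid-search the middle node value f(0.5). The grid search stands in for
# the equation relating f_m(x_{i+1}) to the neighbouring node values.
k = 2.0
xs = np.array([0.0, 0.5, 1.0])
cands = np.linspace(1e-3, 1.0 - 1e-3, 101)
vals = [I_pl(xs, np.array([1.0, y1, 0.0]), k) for y1 in cands]
best_y1 = float(cands[int(np.argmax(vals))])
```

Iterating this one-node update over all interior nodes would give a coordinate-ascent version of the scheme sketched in the post.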
 
  • #9
Thanks very much for this idea, Stephen - I'll have to think about this for a little bit.
 

1. What is the Euler Lagrange equation?

The Euler–Lagrange equation is a differential equation whose solutions are the stationary points of a functional. In the calculus of variations it provides the first-order necessary condition for a function to maximize or minimize an integral functional, possibly subject to constraints.

2. How is the Euler Lagrange equation used in optimization problems?

The Euler Lagrange equation is used to find the optimal value of a function, which maximizes or minimizes the function subject to certain constraints. It is typically used in physics, engineering, and economics to solve complex optimization problems.

3. What is the difference between a maximization problem and a minimization problem?

A maximization problem involves finding the largest possible value of a function, while a minimization problem involves finding the smallest possible value of a function. Both types of problems can be solved using the Euler Lagrange equation.

4. When should the Euler Lagrange equation be used in problem-solving?

The Euler–Lagrange equation should be used when trying to optimize a functional of an unknown function and its derivatives, subject to boundary conditions or constraints. It is a powerful tool for solving complex optimization problems and, combined with the boundary conditions, can pin down the extremal precisely.

5. Are there any limitations to the use of the Euler Lagrange equation?

While the Euler Lagrange equation is a powerful tool for solving optimization problems, it does have its limitations. It may not always provide a unique solution, and it can be challenging to apply in certain situations. Additionally, it may not be suitable for solving problems with discontinuous or non-differentiable functions.
