Graduate Maximization problem using Euler Lagrange

Summary
The discussion revolves around solving a maximization problem using the Euler-Lagrange equation, specifically focusing on the integral involving a function f(x) and its derivative f'(x). Participants explore the implications of the boundary condition f^{-1}(0) and whether it should be constrained to a specific constant or allowed to vary. There is a consensus that the solution typically requires f'(x) to be negative, and k serves as an upper bound for the integral. Suggestions include reformulating the problem with piecewise linear functions to facilitate finding optimal solutions. The conversation emphasizes the complexity of the problem and the need for further exploration of potential mathematical techniques.
petterson
Hi,

I'm trying to solve the following problem

##\max_{f(x)} \int_{f^{-1}(0)}^0 (kx- \int_0^x f(u)du) f'(x) dx##.

I have only a little experience with calculus of variations - the problem resembles something like

## I(x) = \int_0^1 F(t, x(t), x'(t),x''(t))dt##

but I don't know about the boundary ## f^{-1}(0) ##.

Is there a way to solve this with an Euler-Lagrange equation?

Thanks very much for your help!
 
petterson said:
Hi,

I'm trying to solve the following problem

##\max_{f(x)} \int_{f^{-1}(0)}^0 (kx- \int_0^x f(u)du) f'(x) dx##.

If this comes from an applied math problem, is there a reason to think that integral has an upper bound?

Must a solution have ##f^{-1}(0) < 0## or do we also consider the case ##f^{-1}(0) > 0 ##?

Can you solve the problem with the constraint that ## f^{-1}(0)## is equal to a given constant ##C## ?

Let ##f_C## be a solution of: maximize ##I_C(f) = \int_{C}^{0} \left( kx - \int_0^x f(u)\,du \right) f'(x)\, dx## subject to ##f^{-1}(0) = C##. Then the set of solutions is a family of functions parameterized by ##C##. You might be able to solve the further problem of maximizing ##I_C(f_C)## with respect to ##C##.
 
Stephen Tashi said:
If this comes from an applied math problem, is there a reason to think that integral has an upper bound?

Must a solution have ##f^{-1}(0) < 0## or do we also consider the case ##f^{-1}(0) > 0 ##?

Can you solve the problem with the constraint that ## f^{-1}(0)## is equal to a given constant ##C## ?

Let ##f_C## be a solution of: maximize ##I_C(f) = \int_{C}^{0} \left( kx - \int_0^x f(u)\,du \right) f'(x)\, dx## subject to ##f^{-1}(0) = C##. Then the set of solutions is a family of functions parameterized by ##C##. You might be able to solve the further problem of maximizing ##I_C(f_C)## with respect to ##C##.

We consider only the case ##f^{-1}(0) > 0 ##. Typically, the solution would be such that ##f'(x) < 0##. The problem is also such that ##k## is always an upper bound on the integral.

If I choose ##f^{-1}(0)## equal to a given constant ##C##, I think I can find a stationary point with the Euler-Lagrange equation. It gives me ##f'(x) = 0##, which I know should be part of a valid solution, but then I can't find out anything about the constant ##C##, because the integrand vanishes.
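The degenerate stationary point described above can be checked symbolically. One way (my reformulation, not from the thread) is to substitute ##F(x) = \int_0^x f(u)\,du##, so that ##f = F'##, ##f' = F''##, and the nonlocal integrand ##(kx - \int_0^x f)\,f'## becomes the local second-order Lagrangian ##(kx - F)F''##. A minimal sketch with SymPy:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x, k = sp.symbols('x k', positive=True)
F = sp.Function('F')

# Local second-order Lagrangian obtained from the substitution
# F(x) = integral_0^x f(u) du, so f = F' and f' = F''.
L = (k * x - F(x)) * F(x).diff(x, 2)

# Euler-Lagrange equation for a Lagrangian containing F''
eq, = euler_equations(L, F(x), x)
print(eq)  # reduces to F''(x) = 0, i.e. f'(x) = 0
```

This confirms the observation: stationarity forces ##f' = 0## everywhere, so the Euler-Lagrange equation alone gives no information about ##C##.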
 
petterson said:
We consider only the case ##f^{-1}(0) > 0 ##.

In that case, it's easier for me to visualize the problem as:

##I(f) = \int_0^{f^{-1}(0)} \left( \int_0^x f(u)\,du - kx \right) f'(x)\, dx##
Find ##M = \max_f I(f)##.

Typically, the solution would be such that ##f'(x) < 0##. The problem is also such that ##k## is always an upper bound on the integral.

Are we insisting on the constraint ##f'(x) < 0##? If not, I don't understand why there is any upper bound on ##I(f)##. To make ##I(f)## big, use a function ##f## that is big at ##x = 0## and increases more steeply than the graph of ##y = kx##.

Edit: No, the above idea isn't correct. ##f## must eventually cross the positive x-axis in order for ##f^{-1}(0)## to be positive. So ##f## can't increase indefinitely.
 
Stephen Tashi said:
In that case, it's easier for me to visualize the problem as:

##I(f) = \int_0^{f^{-1}(0)} \left( \int_0^x f(u)\,du - kx \right) f'(x)\, dx##
Find ##M = \max_f I(f)##.

Are we insisting on the constraint ##f'(x) < 0##? If not, I don't understand why there is any upper bound on ##I(f)##. To make ##I(f)## big, use a function ##f## that is big at ##x = 0## and increases more steeply than the graph of ##y = kx##.

Edit: No, the above idea isn't correct. ##f## must eventually cross the positive x-axis in order for ##f^{-1}(0)## to be positive. So ##f## can't increase indefinitely.

Sorry, I wasn't quite clear about ##f'(x)## and the upper bound - from the setup of the problem it is given that ##k## is an upper bound on the integral. Typically we find ##f'(x) < 0## (in related problems) - it would be nicer not to impose it as a constraint, but we may easily do so if it makes the problem mathematically more tractable.
It would probably be fine to write the problem as
##I(f) = \int_0^{f^{-1}(0)} \left( kx - \int_0^x f(u)\,du \right) |f'(x)|\, dx##
Find ##M = \max_f I(f)##.
 
petterson said:
from the setup of the problem it is given that ##k## is an upper bound on the integral.
Do you mean ##I(f) \le k##? - or is k a bound for ##\int_0^x f(u)du## ?

It would probably be fine to write the problem as
##I(f) = \int_0^{f^{-1}(0)} \left( kx - \int_0^x f(u)\,du \right) |f'(x)|\, dx##

In that case, do we use the constraint ##f(x) \ge 0##?
 
Stephen Tashi said:
Do you mean ##I(f) \le k##? - or is k a bound for ##\int_0^x f(u)du## ?

In that case, do we use the constraint ##f(x) \ge 0##?

Yes, I mean ##I(f) \le k##. In the original problem ##x## is between 0 and 1. The boundary ##f^{-1}(0)## comes from a substitution. We also restrict to ##f(x) \ge 0##. I'm sorry for all the confusion (I thought there might be more of a standard technique that could help me). Below is the original problem with all constraints:

Assuming ## x \in [0,1], a \in [0,1], k > 0 ##

$$\max_{f} I(f) = \int_{0}^{f(0)} \left( k f^{-1}(a) - \int_0^{f^{-1}(a)} f(t)\,dt \right) da \\
\text{s.t. } f(x) \geq 0,\ f'(x) \leq 0$$
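As a numerical sanity check of this formulation, one can evaluate ##I(f)## for a concrete feasible trial function. The sketch below uses the family ##f(x) = c(1-x)## (an arbitrary choice of mine, not from the thread), for which ##f(0) = c##, ##f^{-1}(a) = 1 - a/c##, and ##I(f)## works out in closed form to ##ck/2 - c^2/3##:

```python
import numpy as np

# Sanity check of I(f) = ∫_0^{f(0)} ( k f^{-1}(a) - ∫_0^{f^{-1}(a)} f(t) dt ) da
# for the trial family f(x) = c(1 - x), which satisfies f >= 0 and f' <= 0
# on [0, 1]; then f(0) = c and f^{-1}(a) = 1 - a/c.

def I_numeric(k, c, n=100_000):
    a = (np.arange(n) + 0.5) * (c / n)   # midpoint rule in the outer variable a
    s = 1.0 - a / c                      # s = f^{-1}(a)
    inner = c * (s - s**2 / 2)           # ∫_0^s c(1 - t) dt, exact
    return np.sum(k * s - inner) * (c / n)

def I_exact(k, c):
    return c * k / 2 - c**2 / 3          # closed form for this family

k, c = 1.0, 0.75                         # c = 3k/4 maximizes within the family
print(I_numeric(k, c), I_exact(k, c))   # both ≈ 0.1875
```

Within this one-parameter family the best value is ##3k^2/16##, attained at ##c = 3k/4##; the maximum over all feasible ##f## could of course be larger.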
 
I don't see how to formulate that problem as a typical Calculus Of Variations problem.

I have some vague thoughts about approaching it in the spirit of the Calculus Of Variations (- or perhaps "dynamic programming").

Suppose we only consider functions that are piecewise linear. Let [0,1] on the x-axis be partitioned into given intervals so that on ##[x_i, x_{i+1}]##, ##f(x) = A_i x + B_i##. Assume ##f_m(x)## is a particular function that maximizes ##I(f)##. Can we develop simultaneous equations that would allow us to solve for the ##f_m(x_i)##?

We get some equations by requiring that each linear segment of the graph of ##f_m## meets the next linear segment at a common point: ##A_i x_{i+1} + B_i = A_{i+1} x_{i+1} + B_{i+1}##. ( Each of the ##A_i## and ##B_i## can be expressed in terms of values of ##f_m##.)

It seems possible to get other equations by using the idea that the value of ##f_m## at the point ##x_{i+1}## must be optimal given the values of ##f_m## at all the other ##x_j##.

Momentarily take the viewpoint that the ##f(x_j)## are known except that we don't know ##f(x_{i+1})## (i.e. we don't know what to do about ##f_m## between ##(x_i, f(x_i))## and ##(x_{i+2}, f(x_{i+2}))##). Solve the problem of maximizing ##I(f_m)## with respect to choosing ##f_m(x_{i+1})##.
We can consider ##I(f)## as the difference of two integrals. They will be evaluated as finite sums.

Pretending we know ordered pairs ##(x_j, f_m(x_j))## of ##f_m## implies we also know ordered pairs ##(f_m(x_j),x_j)## of ##f_m^{-1}##.

For ##I_1(f) = \int_0^{f(0)} k f^{-1}(a)\, da##, the formula for ##I_1(f)## will be a summation of areas of trapezoids. We are pretending these areas are known except for the two trapezoids with common vertex ##(x_{i+1}, f_m(x_{i+1}))##.

For ##I_2(f) = \int_0^{f(0)}\ [ \int_0^{f^{-1}(a)} f(t)dt ]\ da ## , thinking about it makes my head spin!

My intuition is that ##I_2(f)## can be expressed as a function of the unknown ##f_m(x_{i+1})## and the other known ##f_m(x_j)## in an explicit manner.

Assuming it can, solving the 1-variable optimization problem

##\max_{f_m(x_{i+1})} \left( I_1(f_m) - I_2(f_m) \right)##

should give an equation relating ##f_m(x_{i+1})## to the other values of ##f_m##.
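A rough numerical sketch of this coordinate-wise scheme: represent ##f## by its values ##y_i## at fixed nodes ##x_i## (piecewise linear, nonincreasing, ##\ge 0##), evaluate ##I(f)## by quadrature, and repeatedly re-optimize one node value at a time holding the others fixed. The grid size, ##k##, the starting guess, and the interpolation conventions for ##f^{-1}## are my arbitrary choices, not part of the original problem:

```python
import numpy as np
from scipy.optimize import minimize_scalar

K = 1.0                          # the constant k from the problem (arbitrary)
X = np.linspace(0.0, 1.0, 21)    # fixed nodes x_0 = 0, ..., x_n = 1

def I_of(y):
    """I(f) = ∫_0^{f(0)} ( K f^{-1}(a) - ∫_0^{f^{-1}(a)} f ) da by quadrature,
    for the piecewise-linear, nonincreasing f through the points (X, y)."""
    a = np.linspace(0.0, y[0], 400)
    # f is nonincreasing, so f^{-1} interpolates the reversed graph
    # (values of a below f(1) are clamped to f^{-1}(a) = 1 by convention)
    finv = np.interp(a, y[::-1], X[::-1])
    # cumulative ∫_0^x f at the nodes via the trapezoid rule, then interpolated
    cum = np.concatenate(([0.0], np.cumsum(np.diff(X) * (y[1:] + y[:-1]) / 2)))
    inner = np.interp(finv, X, cum)
    vals = K * finv - inner
    return np.sum((vals[1:] + vals[:-1]) / 2 * np.diff(a))

y0 = 1.0 - X                 # a feasible decreasing starting guess
y = y0.copy()
for sweep in range(15):      # a few coordinate sweeps
    for i in range(len(y)):
        lo = y[i + 1] if i + 1 < len(y) else 0.0   # keep y nonincreasing...
        hi = y[i - 1] if i > 0 else 2.0            # ...with an arbitrary cap at x = 0
        if hi - lo < 1e-9:
            continue
        res = minimize_scalar(lambda v: -I_of(np.r_[y[:i], v, y[i + 1:]]),
                              bounds=(lo, hi), method='bounded')
        if -res.fun > I_of(y):                     # accept only improvements
            y[i] = res.x

print(I_of(y0), I_of(y))     # I for the starting guess vs. after the sweeps
```

The proposal above would derive the node equations analytically; this sketch simply performs each one-variable maximization numerically, which can at least suggest the shape of a maximizer.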
 
Thanks very much for this idea, Stephen - I'll have to think about this for a little bit.
 
