# How Can I Maximize a Functional with a Bounded Integral Constraint?

saminator910
Can anyone tell me straightforward information about a way to maximize a functional $I[f]=\displaystyle\int_{X} L(f,x)dx$ subject to a bound on an integral, $T≥\displaystyle\int_{X}f(x)h(x)dx$? I know only a minimal amount about functional analysis and the calculus of variations; I've looked at things like Hamiltonians, but I don't know whether they apply to this problem. Intuitively I see this as a problem that would be solved with Lagrange multipliers if we weren't talking about functions and functionals, i.e. $\vec{f}\cdot \vec{h}=T$ is a linear constraint, $∇I(x_{1},...,x_{n})=\lambda \vec{h}$. We would then solve the system of equations $\lambda h_{i}=\frac{\partial I}{\partial x_{i}}$. Any help is greatly appreciated.
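In the finite-dimensional analogue described above, the multiplier can often be eliminated in closed form. A minimal sketch, with an illustrative objective $\sum_i \log f_i$ chosen here for the example (it is not from the thread):

```python
# Finite-dimensional analogue of the constrained problem:
# maximize I(f) = sum(log(f_i)) subject to sum(f_i * h_i) = T.
# Stationarity: dI/df_i = 1/f_i = lam * h_i  =>  f_i = 1/(lam * h_i).
# Constraint: sum(f_i * h_i) = n/lam = T    =>  lam = n/T.

def solve(h, T):
    n = len(h)
    lam = n / T                              # fix the multiplier from the constraint
    f = [1.0 / (lam * hi) for hi in h]
    return f, lam

h = [1.0, 2.0, 4.0]
T = 6.0
f, lam = solve(h, T)
print(lam)                                   # 0.5
print(sum(fi * hi for fi, hi in zip(f, h)))  # 6.0, the constraint is satisfied
```

The key point, which carries over to the functional case, is that stationarity gives the solution in terms of λ, and the constraint then fixes λ.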

How would you find the minimum of any (continuous and differentiable) function f(x) subject to the condition ##x^2 \leq 1##? Can you do something similar for functionals?

My guess would be a functional derivative equal to 0, but all the problems I've looked at don't have a constraint.

saminator910 said:
My guess would be a functional derivative equal to 0

Well, for a function you would put the derivative equal to zero. When considering functionals you would put the functional derivative equal to zero. The question is what you would do with the constraints? Is the derivative equal to zero anywhere if the function is ##f(x) = x##? Where is the corresponding minimum?
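The toy question above can be checked directly: for ##f(x) = x## on the region ##x^2 \leq 1##, the derivative is never zero, so the minimum must sit on the boundary. A quick sketch:

```python
# For f(x) = x on the region x^2 <= 1:
# f'(x) = 1 is never zero, so there is no interior critical point;
# the minimum must be on the boundary, x = -1 or x = 1.
def f(x):
    return x

candidates = [-1.0, 1.0]     # boundary points of x^2 <= 1
xmin = min(candidates, key=f)
print(xmin, f(xmin))         # -1.0 -1.0
```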

So, the extreme values are at the boundary?

Not necessarily, but they might be. Hence, you need to minimise the functional both in the interior as well as on the boundary. The minimum value is the minimal value of these.

Ok, so from what I've read I set $\frac{\delta I}{\delta f(x)}=\frac{\partial L}{\partial f}=0$, the derivative is in this specific form because $L$ is not a function of $f'$. Mathematically I don't know how to incorporate the constraint. So, if I don't arrive at a max that satisfies the constraint, then I check the boundary cases?

You need to check the boundary in all cases. Even if you find a maximum inside the region, it might be a local maximum. You need to check that any maximum you find (a) is inside the region and (b) is a maximum and not another kind of extremum. In addition, you need to check the boundary of the region of functions which fulfill your criterion. The boundary is given by "integral = T". Do you know how to use Lagrange multipliers? The problem constrained to the boundary is exactly equivalent to extremisation with constraints in a finite dimensional vector space.
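The checklist above (interior critical points, then the boundary) can be sketched on a one-dimensional example of my choosing, ##g(x) = 1 - (x-2)^2## on the region ##x^2 \leq 1##, where the unconstrained maximiser lies outside the region:

```python
# g(x) = 1 - (x - 2)^2 has its unconstrained maximum at x = 2,
# which lies outside the region x^2 <= 1, so only boundary
# candidates survive and the constrained maximum is at x = 1.
def g(x):
    return 1.0 - (x - 2.0) ** 2

interior_candidates = [x for x in [2.0] if x * x <= 1.0]  # critical points inside region
boundary_candidates = [-1.0, 1.0]                         # boundary of x^2 <= 1
best = max(interior_candidates + boundary_candidates, key=g)
print(best, g(best))                                      # 1.0 0.0
```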

Yeah, take a look at the bottom of my original question: I thought about using Lagrange multipliers, I just have no idea how to get rid of the lambda. Once I set the functional derivative of the objective equal to the functional derivative of the constraint multiplied by lambda, and get a solution in terms of lambda, what do I do next?

Just the same as you would do in a finite vector space. You fix lambda such that the constraint is satisfied.

It strikes me now that you may not have to check the interior separately. You will find a lambda-dependent solution and then find the maximum value of the functional such that the constraint is fulfilled. In other words, you will have reduced the problem to maximising a function of lambda subject to some constraints.

Ok, that makes sense. Now what I'm realizing is my functionals and constraint are such that the functional for the region will be maximized on the boundary.

saminator910 said:
Ok, that makes sense. Now what I'm realizing is my functionals and constraint are such that the functional for the region will be maximized on the boundary.

This actually does not matter much. The method just described will reduce the problem to maximising a function of lambda subject to constraints, and we already know how to do that for functions.

So, I have an expression $0=A (f,x,\lambda)$ after setting $\frac{\delta I}{\delta f}=\lambda \frac{\delta J}{\delta f}$ (where J is the functional that is set equal to constraint T), so I maximize what w.r.t. what? I really appreciate the help, I'm back from college for the summer so I don't have easy access to teachers.

edit:

Ok, so I have found $f(x,\lambda)$, so now I just find $\lambda$ such that $T=\displaystyle \int_{X} f(x,\lambda)h(x)dx$.

Once you have f(x,lambda), you can insert it into the functionals to define ##I(\lambda) = I[f(*,\lambda)]## and ##J(\lambda) = J[f(*,\lambda)]##. Your problem is now to maximise the function ##I(\lambda)## under the constraint ##J(\lambda) \leq T##.
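The substitution just described can be sketched numerically. The concrete choices below are mine, not from the thread: ##L(f,x) = 2\sqrt{f}## and ##h(x) = 1## on ##[0,1]##, for which stationarity ##1/\sqrt{f} = \lambda h## gives ##f(x,\lambda) = 1/(\lambda h(x))^2##, and the true answer is ##\lambda = 1/\sqrt{T}## with ##I = 2\sqrt{T}##:

```python
# Reduce the constrained functional problem to a 1-variable problem in lambda:
# with L(f, x) = 2*sqrt(f), h(x) = 1 on [0, 1], stationarity gives
# f(x, lam) = 1/(lam*h(x))**2; then maximise I(lam) subject to J(lam) <= T.
import math

def h(x):
    return 1.0

def f(x, lam):
    return 1.0 / (lam * h(x)) ** 2

def trapz(fun, a=0.0, b=1.0, n=200):
    # simple trapezoidal rule
    dx = (b - a) / n
    s = 0.5 * (fun(a) + fun(b)) + sum(fun(a + i * dx) for i in range(1, n))
    return s * dx

def I(lam):                          # I(lam) = I[f(*, lam)]
    return trapz(lambda x: 2.0 * math.sqrt(f(x, lam)))

def J(lam):                          # J(lam) = J[f(*, lam)]
    return trapz(lambda x: f(x, lam) * h(x))

T = 4.0
lams = [k / 1000 for k in range(300, 1300)]          # coarse grid search over lambda
feasible = [l for l in lams if J(l) <= T + 1e-9]     # enforce J(lam) <= T
best = max(feasible, key=I)
print(round(best, 3), round(I(best), 3))             # close to 0.5 and 4.0
```

Here the maximum sits on the boundary ##J(\lambda) = T##, matching the earlier discussion.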

## 1. What is a function optimization problem?

A function optimization problem is a mathematical problem that aims to find the maximum or minimum value of a given function. The goal is to find the input value or set of input values that will result in the optimal output value.
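A minimal illustration (function chosen here for the example): the quadratic ##f(x) = (x-3)^2 + 1## attains its minimum value 1 at the input ##x = 3##.

```python
# Grid search for the minimiser of f(x) = (x - 3)^2 + 1 on [0, 6].
def f(x):
    return (x - 3.0) ** 2 + 1.0

best_x = min((i / 100 for i in range(0, 601)), key=f)
print(best_x, f(best_x))     # 3.0 1.0
```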

## 2. What are some common applications of function optimization?

Function optimization is commonly used in various fields such as engineering, economics, physics, and computer science. It can be used to optimize the design of structures, determine the most efficient production process, or find the best solution to a complex problem.

## 3. What are the different types of function optimization methods?

There are several types of function optimization methods, including gradient descent, genetic algorithms, simulated annealing, and particle swarm optimization. Each method has its own advantages and can be more suitable for certain types of problems.
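A sketch of the first method named above, gradient descent, on an example function chosen here for illustration, ##f(x) = (x-3)^2 + 1##:

```python
# Gradient descent: repeatedly step against the gradient of f.
def grad(x):                 # f(x) = (x - 3)^2 + 1, so f'(x) = 2*(x - 3)
    return 2.0 * (x - 3.0)

x = 0.0                      # starting guess
lr = 0.1                     # step size (learning rate)
for _ in range(200):
    x -= lr * grad(x)
print(round(x, 6))           # converges to 3.0, the minimiser
```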

## 4. How do you know when you have found the optimal solution?

In function optimization, the optimal solution is typically identified when the algorithm reaches a convergence point, where further iterations do not significantly improve the output value. This can also be confirmed by checking if the gradient of the function is close to zero at the solution point.

## 5. What are some challenges in solving function optimization problems?

Solving function optimization problems can be challenging due to the complexity of the functions involved and the large number of possible solutions. It can also be difficult to determine the best optimization method to use for a specific problem. Additionally, the presence of multiple local optima can make it challenging to find the global optimal solution.
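The local-optima issue can be seen on a small example of my choosing: ##f(x) = x^4 - 3x^2 + x## has two local minima, and gradient descent lands in a different one depending on the starting point.

```python
# Gradient descent on f(x) = x^4 - 3x^2 + x, which has two local minima;
# the basin reached depends on the starting point.
def grad(x):                 # f'(x) = 4x^3 - 6x + 1
    return 4.0 * x ** 3 - 6.0 * x + 1.0

def descend(x, lr=0.01, steps=2000):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

a = descend(-2.0)            # falls into the left basin (x < 0)
b = descend(2.0)             # falls into the right basin (x > 0)
print(round(a, 3), round(b, 3))
```

Multi-start strategies (as above) are one common way to guard against being trapped in a single local optimum.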
