Pontryagin minimum principle with control constraints

In summary, the conversation discusses a control problem aimed at minimizing the fuel consumption of a vehicle. The problem involves dynamic constraints on the state and inequality constraints on the control variable. The original poster has been able to solve the unconstrained problem by minimizing the Hamiltonian, but is struggling to incorporate the inequality constraints without losing the constant co-state variable that ensures the final state equals the initial state. Suggestions are made to use a potential well or Lagrange multipliers to address the constraints.
  • #1
synMehdi
Hi, I am trying to solve a control problem where I have to minimize the fuel consumption of a vehicle:
$$J=\int_{0}^{T} L(x(t), u(t),t)\,dt + g(x(T),T)$$
##L(u(t),v(t))=\sum\limits_{i,j=0}^{2} K_{i,j} u(t)^i v(t)^j ## is convex (quadratic), and the terminal term ##g(x(T),T)## enforces a constraint on the value of the final state - in my case ##x(T)=x(0)##.
Subject to some dynamic constraints on the state:
$$\dot{x}(t)=f(x(t),u(t))$$
and some inequality constraints on the control variable (which is what I'm having trouble with)
$$U_{min}<u(t)<U_{max}$$
So far I've been able to solve the unconstrained problem by minimizing the Hamiltonian
$$H(x(t),u(t),p(t))=L(x(t), u(t),t)+p(t)\,\dot{x}(t)$$
The co-state ##p(t)=p_0## is constant and is calculated so that ##x(T)=x(0)##. Also, ##u(t)## is a function of ##p_0## and of some of the quadratic parameters that come from the derivative of ##L(x(t),u(t),t)## with respect to ##u(t)## (##\frac{\partial H}{\partial u}=0##).
This works fine and I have the results I want. My question is: how can I incorporate the inequality constraints on the control input ##u(t)## without losing the ##p_0## that makes ##x(T)=x(0)##?
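As a minimal numerical sketch of this shooting idea (on a hypothetical toy problem, not my actual vehicle model): take ##\dot{x}=u## so that ##\dot{p}=-\partial H/\partial x=0## and ##p(t)=p_0## is constant, with ##L(u,t)=u^2-s(t)u## for a known signal ##s(t)##. Then ##\partial H/\partial u=0## gives ##u^*(t)=(s(t)-p_0)/2##, and ##p_0## is found by bisection so that ##x(T)=x(0)##:

```python
# Hypothetical toy problem (not the real vehicle model): x' = u, so the
# co-state is constant, and L(u, t) = u**2 - s(t)*u for a known signal s(t).
# Stationarity dH/du = 2u - s(t) + p0 = 0 gives u*(t) = (s(t) - p0) / 2.
# We shoot on p0 by bisection until x(T) - x(0), the integral of u*, vanishes.
import math

T = 1.0
N = 1000  # trapezoidal integration steps

def s(t):
    # assumed known exogenous signal (e.g. road slope along the trip)
    return math.sin(2 * math.pi * t / T) + 0.3

def x_shift(p0):
    # x(T) - x(0) = trapezoidal integral of u*(t) = (s(t) - p0)/2 over [0, T]
    h = T / N
    u = lambda t: (s(t) - p0) / 2
    return sum(0.5 * h * (u(k * h) + u((k + 1) * h)) for k in range(N))

def shoot_for_p0(lo=-10.0, hi=10.0, tol=1e-9):
    # x_shift is strictly decreasing in p0, so plain bisection finds the root
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if x_shift(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

p0 = shoot_for_p0()
```

For ##s(t)=\sin(2\pi t/T)+0.3## the shooting converges to ##p_0\approx 0.3##, the mean of ##s## over the horizon.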
 
  • #2
Could you create a potential well so that the potential energy rises steeply at ##u = U_{max}## and ##u = U_{min}##? Something like ##P(u) =\left(\frac {2u- (U_{max}+U_{min})} {U_{max}-U_{min}}\right)^{2n}##. Then you could look at the result in the limit as ##n \rightarrow \infty##.
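Numerically the behaviour of that penalty is easy to check (with arbitrary illustrative bounds):

```python
# Behaviour of the suggested penalty P(u) = ((2u - (Umax + Umin)) /
# (Umax - Umin))**(2n) for illustrative bounds: inside the band it decays
# to 0 as n grows, at the bounds it equals 1, and outside it blows up.
Umin, Umax = -1.0, 2.0   # arbitrary illustrative bounds

def P(u, n):
    return ((2 * u - (Umax + Umin)) / (Umax - Umin)) ** (2 * n)

inside = P(1.0, 50)     # strictly inside the band: vanishes for large n
at_bound = P(Umax, 50)  # exactly at a bound: always 1
outside = P(2.5, 50)    # outside the band: diverges as n grows
```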
 
  • #3
Or it might be easier to use Heaviside step functions to construct the potential well.
 
  • #4
Thanks for the answer.
The solution that you propose seems difficult to incorporate analytically.
My question is more about how the Pontryagin minimum principle works when we have constraints. I suppose that ##\frac{\partial H}{\partial u} = 0## is no longer valid, so I will have to look at the boundaries (##H(U_{max})## and ##H(U_{min})##) and check whether the Hamiltonian there is smaller than at the ##u## given by ##\frac{\partial H}{\partial u} = 0##.
In my case, as ##H## is quadratic in ##u##, it will simply be ##\min(\max(\operatorname{argmin}_u H, U_{min}), U_{max})##.
But then how would my co-state variable ##p## change (maybe it won't be constant anymore), and how do I calculate it in this case so that I don't lose the property ##x(T)=x(0)##?
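To make the clipping concrete, a small sketch, assuming ##H## is convex quadratic in ##u## and written generically as ##H(u)=au^2+bu+c## with ##a>0##:

```python
# Sketch of the clipped PMP control, assuming H is convex quadratic in u:
# H(u) = a*u**2 + b*u + c with a > 0.  Convexity means the constrained
# minimizer over [Umin, Umax] is the unconstrained stationary point
# projected (clipped) onto the box.
def u_star(a, b, Umin, Umax):
    u_free = -b / (2 * a)               # from dH/du = 2*a*u + b = 0
    return min(max(u_free, Umin), Umax)
```

For example, ##u\_star(1, -10, -2, 2)## returns ##2## (saturated at the upper bound), while ##u\_star(1, 2, -2, 2)## returns ##-1## (interior stationary point).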
 
  • #5
OK. I was trying to give you a term that you could add to the Hamiltonian that would express the constraint and be differentiable.
 
  • #6
Have you tried using Lagrange multipliers?
 
  • #7
tnich said:
Have you tried using Lagrange multipliers?
Well, I thought multipliers were for equality constraints, not inequalities. Can they be used in this case?
tnich said:
OK. I was trying to give you a term that you could add to the Hamiltonian that would express the constraint and be differentiable.
I now understand what you were proposing and it is a good idea, like an infinite penalty for going outside the bounds of ##u(t)##, right? I will look at the maths of it, try it, and get back.
 
  • #8
synMehdi said:
Well, I thought multipliers were for equality constraints, not inequalities. Can they be used in this case?
You can use Lagrange multipliers with inequalities, but you need to constrain the multiplier to be non-negative so that the product ##λ(u-u_{min})## (for example) is ##\geq 0##. That is, since ##u \geq u_{min}## and ##λ\geq 0##, ##λ(u-u_{min}) \geq 0##. Then you add ##λ(u-u_{min})## to the function you are trying to minimize. I am not sure that really helps you.
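As a tiny illustration of the sign and complementary-slackness conditions (on a made-up one-variable problem, not your Hamiltonian):

```python
# Made-up one-variable example of the multiplier sign and complementary
# slackness: minimize f(u) = (u - 3)**2 subject to u <= Umax = 2.
# Lagrangian f(u) + lam*(u - Umax); stationarity f'(u) + lam = 0,
# with lam >= 0 and lam*(u - Umax) = 0 at the optimum.
Umax = 2.0

def fprime(u):
    return 2 * (u - 3)

u_opt = min(3.0, Umax)   # unconstrained minimizer 3 is cut off by the bound
lam = -fprime(u_opt)     # stationarity recovers the multiplier
```

Here the bound is active, so the multiplier comes out positive (lam = 2); for an inactive bound it would be 0 and the constrained and unconstrained answers would coincide.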

synMehdi said:
I now understand what you were proposing and it is a good idea, like an infinite penalty for going outside the bounds of ##u(t)##, right? I will look at the maths of it, try it, and get back.
Yes, that is the idea.
 
  • #9
I have been imagining the kinds of problems you have been running into in implementing this approach. I want to suggest an alternative algorithm:
1) Solve the unconstrained problem.
2) If constraint is violated, then re-solve the problem under the assumption that the system operates at the constraint from time ##t_1## to time ##t_2## (e.g. for ##t \in [t_1,t_2]##, ##u(t)=u_{min}##). You will have to do three separate optimizations (again assuming the ##u_{min}## constraint is violated):
a) For ##t \in [0,t_1]## with the constraint that ##u(t_1)=u_{min}##
b) For ##t \in [t_1,t_2]## with the constraint that ##u(t) = u_{min}## is constant
c) For ##t \in [t_2,T]## with the constraint that ##u(t_2)=u_{min}##
3) Take the result of these three optimizations and optimize over ##t_1## and ##t_2##.

If the result violates the other constraint, you may need to use a similar method to deal with both constraints simultaneously.

The complexity of this approach grows exponentially in the number of constraints, so while it might work in a simple case, it would not be very good with a lot of constraints.
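A structural sketch of step 3 alone (the outer search over the switch times), with a hypothetical stand-in for the segment costs; in a real implementation `total_cost` would run the three segment optimizations from step 2:

```python
# Structural sketch of step 3 only: a grid search over the switch times
# t1 <= t2.  total_cost is a hypothetical stand-in for the sum of the three
# segment optimizations in step 2; here it is a toy surface whose minimum
# sits at (0.2, 0.7) so the search has something to find.
T = 1.0

def total_cost(t1, t2):
    if t1 > t2:
        return float("inf")            # infeasible ordering of switch times
    return (t1 - 0.2) ** 2 + (t2 - 0.7) ** 2

def best_switch_times(n=100):
    grid = [k * T / n for k in range(n + 1)]
    return min(((t1, t2) for t1 in grid for t2 in grid),
               key=lambda pair: total_cost(*pair))

t1_opt, t2_opt = best_switch_times()
```

A coarse grid like this would normally only seed a local refinement of ##(t_1, t_2)##, but it shows the shape of the outer loop.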
 

1. What is the Pontryagin minimum principle with control constraints?

The Pontryagin minimum principle with control constraints is a mathematical principle used in optimal control theory to find the optimal control trajectory for a dynamical system subject to constraints. It is named after Russian mathematician Lev Pontryagin, who developed the principle in the 1950s.

2. How does the Pontryagin minimum principle work?

The principle states that for a given system, the optimal control trajectory can be found by minimizing a certain function, called the Hamiltonian, over a set of feasible control inputs. This Hamiltonian function takes into account both the system dynamics and the control constraints, and the optimal control trajectory is the one that minimizes this function.

3. What are control constraints and why are they important?

Control constraints are limitations on the control inputs that can be applied to a system. These constraints can be physical limitations, such as the maximum thrust of an engine, or operational limitations, such as a speed limit. They are important because they affect the behavior of the system and must be taken into account when finding the optimal control trajectory.

4. What are some applications of the Pontryagin minimum principle with control constraints?

The principle has various applications in engineering, economics, and biology. It can be used to design optimal control strategies for systems such as spacecraft, vehicles, and chemical processes. It is also used in economic models to determine the optimal allocation of resources, and in biology to study optimal behavior of organisms.

5. What are the limitations of the Pontryagin minimum principle with control constraints?

One limitation is that the principle does not always guarantee a unique solution. In some cases, there may be multiple optimal control trajectories that satisfy the principle. Additionally, the principle assumes that the system dynamics and control constraints are known and can be modeled accurately, which may not always be the case in real-world scenarios.
