
Optimal Control Problem - LQR

  1. Oct 5, 2011 #1
I'm trying to pick up optimal control by self-study. At the moment I'm working on the linear quadratic regulator and trying to reproduce the result published in this paper:
Curtis and Beard, Successive collocation: An approximation to optimal nonlinear control, Proceedings of the American Control Conference 2001.

    The problem is:
    Minimize [tex]J(x)=\int_0^{10} x^Tx + u^Tu dt[/tex]
    subject to
    [tex]\dot{x}=Ax+Bu; \ \ x_0^T=(-12,20)[/tex]
    where
    [tex]A=\left(\begin{array}{cc}0&1\\-1&2\end{array}\right) [/tex]
    [tex]B=\left(\begin{array}{cc}0\\1\end{array}\right) [/tex]

The answer given for the optimal cost is J*(x)=2221.

However, I have tried a few times but cannot reproduce this answer. I obtain 2346.5 instead, using either Pontryagin's Minimum Principle or the Riccati equation. Probably I have misunderstood some concept here.

Using Pontryagin's Minimum Principle, I let the Hamiltonian be
    [tex]H=x_1^2 + x_2^2 + u^2 + \lambda_1x_2 + \lambda_2(-x_1+2x_2+u)[/tex]

From this I can obtain five equations:

    [tex]\dot{x}=Ax+Bu [/tex]
    [tex]\dot{\lambda}_1 = -\frac{\partial H}{\partial x_1}[/tex]
    [tex]\dot{\lambda}_2 = -\frac{\partial H}{\partial x_2}[/tex]
    [tex]\frac{\partial H}{\partial u}=0[/tex]
This linear system can be solved subject to the conditions
    [itex]x_1(0)=-12, x_2(0)=20, \lambda_1(10)=0 , \lambda_2(10)=0.[/itex]

The solutions are then plugged into
[tex]J(x)=\int_0^{10} (x^Tx + u^Tu)\, dt.[/tex]
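For anyone wanting to check this, the two-point boundary value problem above can be solved numerically. A sketch in Python with SciPy's solve_bvp (my own sketch, not from the paper; it eliminates u via [itex]\partial H/\partial u = 2u + \lambda_2 = 0[/itex], so [itex]u = -\lambda_2/2[/itex]):

```python
import numpy as np
from scipy.integrate import solve_bvp, trapezoid

# State/costate dynamics from the five PMP equations above,
# with u eliminated via dH/du = 2u + lambda_2 = 0  =>  u = -lambda_2/2
def rhs(t, y):
    x1, x2, l1, l2 = y
    u = -l2 / 2.0
    return np.vstack((
        x2,                               # x1' = x2
        -x1 + 2.0 * x2 + u,               # x2' = -x1 + 2 x2 + u
        -(2.0 * x1 - l2),                 # l1' = -dH/dx1
        -(2.0 * x2 + l1 + 2.0 * l2),      # l2' = -dH/dx2
    ))

# Boundary conditions: x(0) = (-12, 20), lambda(10) = (0, 0)
def bc(ya, yb):
    return np.array([ya[0] + 12.0, ya[1] - 20.0, yb[2], yb[3]])

t = np.linspace(0.0, 10.0, 200)
sol = solve_bvp(rhs, bc, t, np.zeros((4, t.size)), tol=1e-6, max_nodes=20000)

# Evaluate the cost J = int (x'x + u'u) dt on a fine grid
tt = np.linspace(0.0, 10.0, 20001)
x1, x2, _, l2 = sol.sol(tt)
u = -l2 / 2.0
J = trapezoid(x1**2 + x2**2 + u**2, tt)
print(J)  # ≈ 2346.5
```

This reproduces the 2346.5 figure mentioned above rather than the paper's 2221.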

Any clue where I went wrong? Or does anybody know a program that can compute the answer? I know there is a MATLAB command lqr, but it only gives the feedback gain, not the value of the optimal cost.
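One option for computing the cost numerically (my own sketch, not from the paper) is to integrate the finite-horizon Riccati differential equation backward from P(10) = 0 and evaluate J* = x_0^T P(0) x_0, assuming the standard LQR formulation with Q = R = I as in the cost above. In Python with NumPy/SciPy:

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-1.0, 2.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)          # weight on x^T x
R = np.eye(1)          # weight on u^T u
T = 10.0
x0 = np.array([-12.0, 20.0])

# Riccati ODE  -dP/dt = A'P + PA - P B R^-1 B' P + Q, integrated
# backward via s = T - t so that the terminal condition P(T) = 0
# (free endpoint, no terminal cost) becomes an initial condition.
def rhs(s, p):
    P = p.reshape(2, 2)
    dP = A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T) @ P + Q
    return dP.ravel()

sol = solve_ivp(rhs, (0.0, T), np.zeros(4), rtol=1e-10, atol=1e-10)
P0 = sol.y[:, -1].reshape(2, 2)       # P at t = 0
J = x0 @ P0 @ x0                      # optimal cost x0' P(0) x0
print(J)  # ≈ 2346.5
```

This agrees with the 2346.5 obtained by hand, not with the paper's 2221.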
     
  3. Oct 6, 2011 #2

    Pyrrhus

    Homework Helper

How do you know u is a scalar? From the problem statement, u, the control variable, is a vector. Thus, your Hamiltonian is wrong.

    [itex] H = x_{1}^{2} + x_{2}^{2} + u_{1}^{2} + u_{2}^2 + \vec{\lambda}^{T} (Ax + Bu) [/itex]

    Also you forgot to say anything about the initial and terminal conditions...
     
  4. Oct 6, 2011 #3
Thanks Pyrrhus. Probably that's my mistake.

My argument for why the control u is a scalar is that in the equation [itex]\dot{x}=Ax+Bu[/itex], B is a column vector. The only way Bu can be computed is if u is a scalar.

    [tex]Bu=\left(\begin{array}{cc}0\\u\end{array}\right).[/tex]

    The initial condition x(0) is specified as x1(0)=-12, x2(0)=20,
    but the terminal point x(T) is not given.
     
  5. Oct 6, 2011 #4

    Pyrrhus

    Homework Helper

    Ok, it makes sense.

Did you try solving it as a free-endpoint problem?

It looks like you solved it as a fixed-endpoint problem.
     
  6. Oct 7, 2011 #5

This is the part that really confuses me, the terminal point, because so far I have just been following examples.

Some problems have a specific fixed endpoint, while others are free, and some have an infinite time horizon.
So I don't fully understand whether what I'm doing here is fixed-endpoint, free-endpoint, or infinite-time.

I guess I'm solving it as a free-terminal-point problem, because I'm taking the costate values at the terminal point to be zero, [itex]\lambda_1(T)=\lambda_2(T)=0[/itex].
     
  7. Oct 10, 2011 #6

    Pyrrhus

    Homework Helper

    This is important. I'd recommend reading the paper and identifying the initial and final conditions.
     
  8. Oct 12, 2011 #7
I have gone through the paper again but cannot extract any new information about the terminal point beyond what I have already written.

But I see there is a sentence claiming that this example is a linear unstable system.
Why is it unstable? Will it affect the computation?
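One way to check is to look at the eigenvalues of A directly. A quick sketch in Python (assuming NumPy):

```python
import numpy as np

# System matrix from the problem statement
A = np.array([[0.0, 1.0], [-1.0, 2.0]])

# Characteristic polynomial is s^2 - 2s + 1 = (s - 1)^2,
# so both eigenvalues sit at s = 1, in the right half-plane.
eigs = np.linalg.eigvals(A)
print(eigs)
```

Since both eigenvalues have positive real part, the uncontrolled system [itex]\dot{x}=Ax[/itex] grows without bound, which is presumably what the paper means by "linear unstable system".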
     
  9. Oct 12, 2011 #8

    Pyrrhus

    Homework Helper

    That's a good question. I am not sure what "linear unstable system" means.

I know Dynamic Optimization, because economists use the theory. I am not an Electronic/Electrical Engineer, so I am not sure.

    Perhaps a search on Google Scholar will help you?
     



