
Discrete LQR regulator and Bellman's principle

  1. Jun 29, 2017 #1
    Hello everyone.

    I am currently studying the LQR regulator in optimal control theory, but I am having some trouble understanding the approach based on Bellman's principle.

    As far as I have understood, in Bellman's dynamic programming approach one goes backward in time to find the optimal gains K and inputs u for every state x, so that the cost J is minimized.
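    To make the backward pass concrete, here is a minimal sketch of the finite-horizon discrete Riccati recursion (the system matrices A, B and weights Q, R below are an assumed double-integrator toy example, not from the post). Note that the gains K_k are computed offline from A, B, Q, R alone:

    ```python
    import numpy as np

    # Hypothetical discretized double integrator (assumed for illustration).
    A = np.array([[1.0, 0.1],
                  [0.0, 1.0]])
    B = np.array([[0.005],
                  [0.1]])
    Q = np.eye(2)          # state weighting
    R = np.array([[1.0]])  # input weighting
    N = 50                 # horizon length

    # Backward pass: start from the terminal cost P_N = Q and sweep k = N-1, ..., 0.
    P = Q.copy()
    gains = []
    for k in range(N - 1, -1, -1):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # K_k
        P = Q + A.T @ P @ (A - B @ K)                      # Riccati recursion
        gains.append(K)
    gains.reverse()  # gains[k] is K_k, to be applied online as u_k = -K_k x_k
    ```

    The resulting schedule of gains is precomputed once; at run time the controller simply looks up K_k and multiplies by whatever state x_k it actually measures.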

    Here are some questions that came to mind:

    Do I have to precompute the gains before I "start" my controller? And if so, what happens if my states differ from the ones I assumed beforehand?

    Furthermore, I stumbled upon the infinite-horizon version of the LQR regulator. From what I understand, my gain K will at some point reach a steady state, which means a time-varying gain is not necessary; I could set the gain to its steady-state value instead. That makes sense, because it would be much easier to implement than a time-varying gain.
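    The steady-state gain mentioned above can be obtained by iterating the same Riccati recursion until it converges; the limit P solves the discrete algebraic Riccati equation and yields one constant gain K. A minimal sketch, using the same assumed toy system as before:

    ```python
    import numpy as np

    # Hypothetical discretized double integrator (assumed for illustration).
    A = np.array([[1.0, 0.1],
                  [0.0, 1.0]])
    B = np.array([[0.005],
                  [0.1]])
    Q = np.eye(2)
    R = np.array([[1.0]])

    # Iterate the Riccati recursion until P stops changing.
    P = Q.copy()
    for _ in range(10000):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P_next = Q + A.T @ P @ (A - B @ K)
        if np.max(np.abs(P_next - P)) < 1e-10:
            P = P_next
            break
        P = P_next

    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # constant steady-state gain
    # With u_k = -K x_k, the closed loop A - B K should have all
    # eigenvalues strictly inside the unit circle.
    eigs = np.linalg.eigvals(A - B @ K)
    ```

    In practice one would use a dedicated solver such as `scipy.linalg.solve_discrete_are` instead of fixed-point iteration, but the iteration makes the connection to the finite-horizon backward pass explicit.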

    What I don't get: in the infinite-horizon approach, do I use the same constant gain all the time?

    I hope I was able to describe my issue clearly, and that someone can give me some advice.


  3. Jul 4, 2017 #2
    Thanks for the thread! This is an automated courtesy bump. Sorry you aren't generating responses at the moment. Do you have any further information, come to any new conclusions or is it possible to reword the post? The more details the better.