Hello everyone.

I'm studying the LQR regulator in optimal control theory right now, but I'm having some trouble understanding Bellman's principle.

As far as I understand, in Bellman's dynamic-programming approach one goes backward in time to find the optimal gains K (and thus the optimal input u for every state x) so that the cost J is minimized.
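To make the backward recursion concrete, here is a minimal sketch in Python/NumPy. The system matrices (a hypothetical double integrator) and the weights Q, R, Qf are made up for illustration; only the recursion itself is the point.

```python
import numpy as np

# Hypothetical discrete-time double integrator: x_{k+1} = A x_k + B u_k,
# cost J = sum_{k=0}^{N-1} (x_k' Q x_k + u_k' R u_k) + x_N' Qf x_N
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])
Qf = np.eye(2)
N = 50

# Bellman / Riccati recursion, backward in time: start from P_N = Qf,
# then for k = N-1 .. 0:
#   K_k = (R + B' P_{k+1} B)^{-1} B' P_{k+1} A
#   P_k = Q + A' P_{k+1} (A - B K_k)
P = Qf
gains = [None] * N
for k in range(N - 1, -1, -1):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
    gains[k] = K
```

Note that the gains depend only on (A, B, Q, R, Qf, N), not on any particular trajectory: at run time the controller simply applies u_k = -gains[k] @ x_k to whatever state x_k it measures.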

Here are some questions that came to mind:

Do I have to precompute the gains before I "start" my controller? And if so, what happens if my states turn out to be different from the ones I used before?

Furthermore, I stumbled upon the infinite-horizon version of the LQR regulator. From what I understand, the gain K will at some point reach a steady state, which means it is not necessary to have a gain that changes with time, so I would set the gain to its steady-state value instead. That makes sense, because it would be much easier to implement than a time-varying gain.
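The steady-state behaviour can be seen by just iterating the same Riccati recursion until P stops changing. A sketch, again with made-up matrices for a hypothetical double integrator:

```python
import numpy as np

# Same hypothetical system as before (matrices are illustrative only).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])

# Iterate the backward Riccati recursion until P converges; the
# corresponding gain K = (R + B'PB)^{-1} B'PA then converges too.
P = Q.copy()
for _ in range(10000):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P_next = Q + A.T @ P @ (A - B @ K)
    if np.max(np.abs(P_next - P)) < 1e-10:
        P = P_next
        break
    P = P_next

# Constant infinite-horizon gain
K_inf = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Sanity check: the closed loop A - B K_inf should be stable,
# i.e. have spectral radius below 1.
rho = np.max(np.abs(np.linalg.eigvals(A - B @ K_inf)))
print(rho)
```

In practice one would solve the discrete algebraic Riccati equation directly (e.g. SciPy's `solve_discrete_are`) instead of iterating, but the iteration shows why the finite-horizon gains settle down far from the final time.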

What I don't get: do I use the same constant gain all the time in the infinite-horizon approach?

I hope I was able to describe my issue clearly, and that someone can give me some advice.

Cheers,

Mike

**Physics Forums | Science Articles, Homework Help, Discussion**


# Discrete LQR regulator and Bellman's principle

