Discrete LQR regulator and Bellman's principle

In summary: The LQR regulator in optimal control theory pre-calculates its feedback gains using Bellman's principle (a backward Riccati recursion). In the finite-horizon problem the gain is time-varying; in the infinite-horizon version the gain converges to a steady-state value, and that single constant gain is used throughout.
  • #1
MikeSv
Hello everyone.

I am studying the LQR regulator in optimal control theory right now, but I am having some issues understanding the approach of Bellman's principle.

As far as I have understood, in Bellman's dynamic programming approach, one goes backward in time to find the optimal gains K and inputs u for every state x so that my cost J is minimized.

And here are some questions that came to my mind:

Do I have to pre-calculate the gains before I "start" my controller? And if so, what happens if my states are different from the ones I used before?

Furthermore, I stumbled upon the infinite-horizon version of the LQR regulator. From what I understand, my gain will at some point reach a steady state. That means it is not necessary to have a gain that changes with time, so I would set my gain to the steady-state value instead. That makes sense, because it would be much easier to implement than a time-varying gain.

What I don't get: do I use the same constant gain all the time in the infinite-horizon approach?

I hope I was able to describe my issue and that someone can give me some advice.

Cheers,

Mike
 
  • #2
Yes, you would pre-calculate the gains before you start your controller, although it is also possible to compute them in real time. One thing to note, though: the gains do not depend on the state. The backward recursion produces a linear feedback law u = -Kx, and the gain K depends only on the system matrices and the cost weights, so the same law is optimal for every state. If your states turn out different from the ones you expected, nothing needs to be recalculated; you would only recompute the gains if the system model or the cost weights change.

In the infinite-horizon version of the LQR regulator, the gain from the backward recursion converges to a steady state. You can use this steady-state value as your gain instead of a time-varying gain. This is simpler to implement, since you don't have to store or schedule a gain sequence, and the single constant gain is then used for all time.
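To make this concrete, here is a minimal sketch of the backward Riccati recursion in Python (numpy only). The system matrices A and B, the weights Q and R, and the horizon N below are purely illustrative choices for this example, not taken from the thread:

import numpy as np

# Illustrative discrete-time double integrator: x_{k+1} = A x_k + B u_k
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
Q = np.eye(2)          # state cost weight
R = np.array([[1.0]])  # input cost weight
N = 50                 # horizon length

# Backward Riccati recursion (Bellman's principle): start from the terminal
# cost P_N = Q and step backward in time, storing the time-varying gains K_k.
P = Q.copy()
gains = []
for k in reversed(range(N)):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
    gains.append(K)
gains.reverse()  # gains[k] is the gain to apply at time step k

# Far from the terminal time the gains have converged, so the first gain
# is (nearly) the steady-state infinite-horizon gain.
print("K_0     =", gains[0])
print("K_{N-1} =", gains[-1])

# The feedback law u_k = -K_k x_k is optimal for *any* state x_k, so the
# pre-calculated gains never need to be recomputed when the state changes.
x = np.array([[1.0], [0.0]])
for k in range(N):
    u = -gains[k] @ x
    x = A @ x + B @ u
print("final state:", x.ravel())

Note that the recursion runs once, offline; at run time the controller only performs the matrix-vector multiply u = -Kx.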
 

Related to Discrete LQR regulator and Bellman's principle

What is a Discrete LQR regulator?

A Discrete LQR (Linear Quadratic Regulator) regulator is a control algorithm used in systems theory to find the optimal control input for a linear system with quadratic cost functions. It is a widely used technique in control engineering for designing controllers that can stabilize and optimize the performance of a system.

How does a Discrete LQR regulator work?

A Discrete LQR regulator uses the principles of optimal control theory to minimize a cost function defined as a sum of quadratic terms in the states and control inputs. It computes the optimal control law by solving a discrete-time Riccati equation built from the system dynamics and the cost weights. The result is a linear state-feedback law that minimizes the cost function and stabilizes the system.
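For the infinite-horizon case, the steady-state solution can be obtained directly rather than by iterating, for example with SciPy's discrete algebraic Riccati equation (DARE) solver. A small sketch, reusing the same illustrative matrices as in the reply above:

import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

# Solve the DARE for the steady-state cost-to-go matrix P
P = solve_discrete_are(A, B, Q, R)

# Steady-state feedback gain: u = -K x with K = (R + B^T P B)^{-1} B^T P A
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
print("steady-state gain K =", K)

# Sanity check: the closed loop A - B K should be stable, i.e. all
# eigenvalues inside the unit circle.
print("closed-loop |eigenvalues|:", np.abs(np.linalg.eigvals(A - B @ K)))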

What is Bellman's principle in the context of Discrete LQR regulator?

Bellman's principle, also known as the principle of optimality, states that an optimal policy has the property that, whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision. In the context of the Discrete LQR regulator, this is what justifies computing the gains by a backward recursion: the optimal cost-to-go from any state at each step is quadratic, and the optimal input at each stage follows from the cost-to-go of the remaining stages.
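Concretely, for the discrete LQR problem this principle yields a backward recursion (a standard derivation, stated here without proof; $Q_f$ denotes the terminal cost weight): if the optimal cost-to-go at step $k+1$ is the quadratic $V_{k+1}(x) = x^\top P_{k+1}\, x$, then minimizing the stage cost plus the cost-to-go over $u_k$ gives

$$K_k = \left(R + B^\top P_{k+1} B\right)^{-1} B^\top P_{k+1} A, \qquad P_k = Q + A^\top P_{k+1}\left(A - B K_k\right), \qquad P_N = Q_f,$$

and the optimal input at every step is the linear state feedback $u_k^* = -K_k x_k$.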

What are the advantages of using a Discrete LQR regulator?

A Discrete LQR regulator has several advantages, including its ease of implementation, robustness to system uncertainties, and ability to handle complex systems with multiple inputs and outputs. It also provides a systematic approach for designing controllers that can achieve optimal performance and stability.

What are the limitations of a Discrete LQR regulator?

One of the main limitations of a Discrete LQR regulator is that it is only applicable to linear systems with quadratic cost functions. It also requires accurate knowledge of the system dynamics and cost function, which may not always be available. Additionally, it may not be able to handle systems with significant nonlinearities or constraints.
