How do I numerically compute a state-space equation?

  • Thread starter: hkBattousai
  • Tags: State-space
SUMMARY

This discussion focuses on numerically computing state-space equations for linear systems defined by the equations dx/dt = Ax + Bu and y = Cx + Du. The recommended approach for solving the differential equation for x is a Runge-Kutta method such as the classical fourth-order RK4, combined with adaptive step size control to balance truncation and rounding errors. The conversation also addresses the need to interpolate the discrete input signal so that it can be treated as continuous in the numerical solution; interpolation with Legendre polynomials is suggested for estimating the input between discrete samples.

PREREQUISITES
  • Understanding of state-space representation in control systems
  • Familiarity with numerical methods, particularly the Runge-Kutta method
  • Knowledge of adaptive step size control techniques
  • Basic concepts of interpolation methods, such as Legendre polynomials
NEXT STEPS
  • Study the implementation of the Runge-Kutta-Fehlberg 7/8 method for adaptive step size control
  • Explore numerical methods for solving ordinary differential equations (ODEs) in detail
  • Research interpolation techniques for discrete signals, focusing on Legendre polynomials
  • Read "Numerical Methods for Ordinary Differential Systems" by Lambert for foundational knowledge
USEFUL FOR

Control engineers, software developers working on simulation tools, and researchers in numerical analysis will benefit from this discussion on computing state-space equations and numerical methods.

hkBattousai
Suppose I have a linear system which is analytically defined as below:

\frac{dx}{dt} = Ax + Bu \quad (I)
y = Cx + Du \quad (II)

A, B, C, D are matrices defining the system, u is the input, and y is the output.

I want to simulate this system on a computer (without using Matlab or any other libraries/tools) by sending it input values and receiving the corresponding output values. How do I do this? What is the basic idea?

Do I have to iterate equation (I) by calculating dx/dt and using it to update the value of x, and then use x in equation (II) to find the output?

What is the x vector for in the first place? Why do we calculate it?
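A minimal sketch of the iterate-then-output idea described above, using the simplest possible update (forward Euler); the function and variable names are illustrative, not something prescribed in the thread:

Code:
import numpy as np

def simulate_euler(A, B, C, D, x0, u_samples, h):
    # Forward-Euler sketch: use (I) to update x, then (II) to compute y.
    x = x0.copy()
    outputs = []
    for u in u_samples:          # one input vector per time step, held constant over h
        y = C @ x + D @ u        # equation (II): current output
        outputs.append(y)
        dxdt = A @ x + B @ u     # equation (I): state derivative
        x = x + h * dxdt         # explicit Euler update of the state
    return outputs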
 
Yes, you should use something like a "Runge-Kutta" method to solve the differential equation for x, then use it in the equation for y.

As to what the x vector is for, that would depend entirely upon the problem the system of equations comes from!
 
HallsofIvy said:
Yes, you should use something like a "Runge-Kutta" method to solve the differential equation for x, then use it in the equation for y.

As to what the x vector is for, that would depend entirely upon the problem the system of equations comes from!

Is this scenario correct?:

Suppose the input sample rate is 1 sample/second. I use the Runge-Kutta method with a time step (h) which is very small compared to the sampling period and an integer divisor of it; for example, h = 0.1 seconds. I receive an input sample, apply the Runge-Kutta method to it 10 times (10 = 1/0.1), then receive the next input, and so on...

In this case, can we guarantee that the accumulated error of Runge-Kutta (or any other numerical method) stays bounded? In other words, will the error remain small, or will it grow over time?
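A sketch of that scenario, assuming a zero-order hold (the input is kept constant between samples) and that h divides the sampling period T exactly; rk4_step is the same kind of illustrative helper as sketched earlier:

Code:
import numpy as np

def rk4_step(A, B, x, u, h):
    # One RK4 step for dx/dt = A x + B u with u held constant.
    f = lambda xs: A @ xs + B @ u
    k1 = f(x); k2 = f(x + 0.5 * h * k1); k3 = f(x + 0.5 * h * k2); k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def simulate(A, B, C, D, x0, u_samples, T=1.0, h=0.1):
    # Advance the state over each sampling period T with N = T/h RK4 sub-steps,
    # holding the input constant (zero-order hold) between samples.
    N = int(round(T / h))        # h is assumed to be an integer divisor of T
    x = x0.copy()
    outputs = []
    for u in u_samples:
        for _ in range(N):
            x = rk4_step(A, B, x, u, h)
        outputs.append(C @ x + D @ u)   # equation (II) at the sample instants
    return outputs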
 
Could it be done by solving it with the equation below?

x(t) = e^{AT}x(t-T) + \int_{t-T}^{t} e^{A(t - \tau)}Bu(\tau) \, d\tau

"T" is the sampling period of the input signal. To calculate the state vector between any two input samples, I will just solve this equation. Would this method work? Even if it works, the integration term requires input signal to be a continuous function; but it originally is discreet. How do I use it as a continuous function? Would interpolation with Legendre polynomials work for estimating the input signal "u "between u(t-T) and u(t)?
 
hkBattousai said:
Suppose the input sample rate is 1 sample/second. I use the Runge-Kutta method with a time step (h) which is very small compared to the sampling period and an integer divisor of it; for example, h = 0.1 seconds. I receive an input sample, apply the Runge-Kutta method to it 10 times (10 = 1/0.1), then receive the next input, and so on...

In this case, can we guarantee that the accumulated error of Runge-Kutta (or any other numerical method) stays bounded? In other words, will the error remain small, or will it grow over time?

All numerical methods must produce a numerical solution that converges to the real solution as the step size goes towards zero. However, finding a fixed step size that balances truncation error, rounding error and stability is often difficult unless the system being integrated is very smooth. For simple integration methods, like Euler, the requirement for a stable solution may easily drive the step size so small that the solution globally suffers from severe rounding errors and requires much more computational effort than if done using methods of higher order.

In order to find the "right" step size, you would in practice normally employ so-called adaptive step size control when using Runge-Kutta and similar methods, which works by estimating and monitoring the truncation error in the numerical solution. For instance, using RK4, a popular fourth-order method, the truncation error should be proportional to h^4, so from the same initial state you can calculate the next state using different h (say, h and h/2) and compare the normed difference in state to a set limit. If the error is above the limit you reduce h; if it is well below, you can increase h. Most ODE solver libraries implement something like this principle and allow you to specify the relative or absolute error the solver should strive to maintain.
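A rough sketch of that step-doubling control, where step(x, h) stands for any single-step method such as the RK4 step above; the names and the specific grow/shrink factors are illustrative assumptions, not prescribed by the thread:

Code:
import numpy as np

def adaptive_step(step, x, h, tol):
    # Compare one step of size h against two steps of size h/2 and adapt h accordingly.
    x_full = step(x, h)                       # one step of size h
    x_half = step(step(x, 0.5 * h), 0.5 * h)  # two steps of size h/2
    err = np.linalg.norm(x_full - x_half)     # estimate of the local truncation error
    if err > tol:
        return x, 0.5 * h, False              # reject the step; retry from x with a smaller h
    h_next = 2.0 * h if err < 0.1 * tol else h  # grow h when the error is well below the limit
    return x_half, h_next, True               # accept the more accurate estimate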

As a further example, one RK variant, called embedded RK, allows you to calculate a step from the same state using two approximations of different orders with only a minimum of additional computational effort, making adaptive step size control particularly attractive for those methods. One common example of such a method is the Runge-Kutta-Fehlberg 7/8 (aka RKF78) method.
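RKF78 has too many coefficients to write out here, but the same embedded idea can be illustrated with the much smaller Bogacki-Shampine 3(2) pair, where a third- and a second-order estimate share the same stage evaluations and their difference gives the error estimate (the function name is illustrative):

Code:
import numpy as np

def bs23_step(f, x, h):
    # One Bogacki-Shampine 3(2) step for an autonomous system dx/dt = f(x),
    # e.g. f = lambda xs: A @ xs + B @ u with u held constant over the step.
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.75 * h * k2)
    x3 = x + h * (2.0 / 9.0 * k1 + 1.0 / 3.0 * k2 + 4.0 / 9.0 * k3)          # 3rd-order solution
    k4 = f(x3)                                                                # extra stage for the error estimate
    x2 = x + h * (7.0 / 24.0 * k1 + 0.25 * k2 + 1.0 / 3.0 * k3 + 0.125 * k4)  # 2nd-order solution
    return x3, np.linalg.norm(x3 - x2)        # advance with x3, use the difference to control h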

There are plenty of good textbooks on the subject if you would like to read more. I can personally recommend [1], although it is a bit old. There may be more appropriate references if you are mostly interested in the state-space solution of a given control system.

[1] Lambert, J. D., Numerical Methods for Ordinary Differential Systems, Wiley, 1991.
 