# Control theory: Laplace versus state space representation

#### Mårten

I'm taking a course in control theory, and have been wondering for a while what the benefits are when you describe a system based on the Laplace method with transfer functions, compared to when you use the state space representation method. In particular, when using the Laplace method you are limited to a system where the initial conditions all have to be equal to zero. So here are my questions:

1) Is it correct that with the state space representation, the initial conditions can be whatever you like, i.e., you are not limited to a system where they equal zero?

2) If so, why all this fuss about using the Laplace method? Why not always use the state space representation?

I'm most eager for an answer to question number 1 above...


#### rbj

I'm taking a course in control theory, and have been wondering for a while what the benefits are when you describe a system based on the Laplace method with transfer functions, compared to when you use the state space representation method. In particular, when using the Laplace method you are limited to a system where the initial conditions all have to be equal to zero.
i do not think that is true. just as when you use the Laplace Transform (which is what i think you mean by the "Laplace method") to represent a linear differential equation with initial conditions, the same applies to representing a linear, time-invariant system.

the main difference between the transfer function representation of a linear system and the state-space representation is that the former is concerned only with the input-output relationship while the latter is also concerned with what is going on inside the box. the state-space representation is more general and you can have all sorts of different state-space representations that appear to have the same input-output relationship.

a consequence of this is what happens with pole-zero cancellation. you might think that you have a nice 2nd order, stable linear system judging from the input-output relationship, but internally there might be an internal pole that is unstable but canceled by a zero sitting right on top of it. so before things inside blow up (and become non-linear), everything looks nice on the outside while things are going to hell on the inside. you find out that things aren't so nice on the inside when some internal state (that is not observable from the outside) hits the rails.

#### Mårten

i do not think that is true. just as when you use the Laplace Transform (which is what i think you mean by the "Laplace method") to represent a linear differential equation with initial conditions, the same applies to representing a linear, time-invariant system.
Hm... I don't quite understand...

For instance, when using the Laplace transform on the following equation, we get:

$$y'' + 4y' + 2y = u, y(0) = 0, y'(0) = 0,$$

$$s^2Y(s) + 4sY(s) + 2Y(s) = U(s),$$

$$Y(s) = \frac{1}{s^2+4s+2}U(s).$$

Now this simple result wouldn't be possible, if it wasn't for the initial conditions y(0) = 0 and y'(0) = 0.

But when we have a state space representation, like the following

$$X'(t) = AX(t) + BU(t),$$
$$Y(t) = CX(t) + DU(t),$$
$$X(t_0) = E,$$

then I thought it was possible to choose the initial conditions X(t_0) to be whatever you like, e.g.

$$X(t_0) = E = \{x_1(0),x_2(0),x_3(0)\}^T = \{2,7,6\}^T,$$

so I am not limited to initial conditions where all x_i(0) = 0 like in the Laplace transform case. Or am I thinking erroneously here?
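To convince myself, I tried a small numerical sketch in Python (the A, B, C below are one realization of the example ODE $y'' + 4y' + 2y = u$ with states $x_1 = y$, $x_2 = y'$; the nonzero initial state is deliberately arbitrary):

```python
import numpy as np
from scipy.integrate import solve_ivp

# One state-space realization of y'' + 4y' + 2y = u,
# with states x1 = y and x2 = y'.
A = np.array([[0.0, 1.0],
              [-2.0, -4.0]])
B = np.array([0.0, 1.0])
C = np.array([1.0, 0.0])

def u(t):
    return 0.0  # zero input: watch only the free response from x(0)

def f(t, x):
    return A @ x + B * u(t)

x0 = np.array([2.0, 7.0])  # arbitrary nonzero initial state -- no restriction
sol = solve_ivp(f, (0.0, 10.0), x0, rtol=1e-8, atol=1e-10)

y_start = C @ sol.y[:, 0]   # output at t = 0 is just C x(0)
y_end = C @ sol.y[:, -1]    # both poles are in the left half-plane, so y decays
```

So the solver happily accepts any x(0); nothing forces the initial state to zero.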

#### rbj

Hm... I don't quite understand...

For instance, when using the Laplace transform on the following equation, we get:

$$y'' + 4y' + 2y = u, y(0) = 0, y'(0) = 0,$$

$$s^2Y(s) + 4sY(s) + 2Y(s) = U(s),$$

$$Y(s) = \frac{1}{s^2+4s+2}U(s).$$

Now this simple result wouldn't be possible, if it wasn't for the initial conditions y(0) = 0 and y'(0) = 0.
but this

$$s^2 Y(s) + 4s Y(s) + 2 Y(s) = U(s)$$

is not quite complete, in the general case. to go from

$$y'' + 4y' + 2y = u$$

it should be

$$(s^2 Y(s) - s y(0) - y'(0)) + 4(s Y(s) - y(0)) + 2 Y(s) = U(s)$$

that's the "Laplace method".

#### Mårten

but this

$$s^2 Y(s) + 4s Y(s) + 2 Y(s) = U(s)$$

is not quite complete, in the general case. to go from

$$y'' + 4y' + 2y = u$$

it should be

$$(s^2 Y(s) - s y(0) - y'(0)) + 4(s Y(s) - y(0)) + 2 Y(s) = U(s)$$

that's the "Laplace method".
But, hm... In my control theory textbook, it says that when using Laplace transforms to more easily handle ODEs, you always assume that $y^{(n)}(0)=0$ for all n. This is in order to get the manageable form $Y(s) = G(s)U(s)$. All the calculations in my book assume this. As I understood it, if you do not assume this, the whole point of using Laplace transforms is lost, because then you cannot easily multiply the different boxes around the feedback loop, for instance.

But what I haven't understood yet, is whether the state space representation method is limited to this as well or not (that all the initial values, each of them, are set to 0)? Are you instead allowed to use an initial value state vector X(t_0), as for instance, the one I used above in an earlier message?

#### rbj

But, hm... In my control theory textbook, it says that when using Laplace transforms to more easily handle ODEs, you always assume that $y^{(n)}(0)=0$ for all n. This is in order to get the manageable form $Y(s) = G(s)U(s)$. All the calculations in my book assume this.
well, sure. otherwise you get $Y(s) = G(s)U(s)$ + some other stuff.

As I understood it, if you do not assume this, the whole point of using Laplace transforms is lost, because then you cannot easily multiply the different boxes around the feedback loop, for instance.

But what I haven't understood yet, is whether the state space representation method is limited to this as well or not (that all the initial values, each of them, are set to 0)? Are you instead allowed to use an initial value state vector X(t_0), as for instance, the one I used above in an earlier message?
sure, if you want to represent the (transformed) output as only the transfer function times a single input, then you have to assume that the system is completely "relaxed" at time 0. but you can model a system in terms of only the input and output (and their initial conditions) with the Laplace Transforms of the input and output. but it won't be a simple transfer function.

the main difference between this and the state-space model is that the state-space model is trying to model what is going on inside the box. it is more general than the simple input-output description of the system.


#### misgfool

but you can model a system in terms of only the input and output (and their initial conditions) with the Laplace Transforms of the input and output. but it won't be a simple transfer function.
Can you make some kind of distinction between simple and complicated transfer function?

#### rbj

Can you make some kind of distinction between simple and complicated transfer function?
in the present context,

$$(s^2 Y(s) - s y(0) - y'(0)) + 4(s Y(s) - y(0)) + 2 Y(s) = U(s)$$

i meant what would happen if you solved for Y(s). it is not only a function of U(s), but also a function of the two initial conditions.

$$Y(s) = \frac{1}{s^2 + 4s + 2}U(s) \ + \ \frac{s+4}{s^2 + 4s + 2} y(0) \ + \ \frac{1}{s^2 + 4s + 2} y'(0)$$

the simple one (assuming a completely relaxed system at t=0) would be

$$Y(s) = \frac{1}{s^2 + 4s + 2}U(s)$$

or

$$\frac{Y(s)}{U(s)} \equiv G(s) = \frac{1}{s^2 + 4s + 2}$$
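(for whoever wants to check that algebra, here's a small sympy sketch of the same computation:)

```python
import sympy as sp

s = sp.symbols('s')
Y, U, y0, yp0 = sp.symbols('Y U y0 yp0')  # y0 = y(0), yp0 = y'(0)

# Laplace transform of y'' + 4y' + 2y = u, keeping the initial-condition terms
eq = sp.Eq((s**2*Y - s*y0 - yp0) + 4*(s*Y - y0) + 2*Y, U)
Y_solved = sp.solve(eq, Y)[0]

# the claimed decomposition: transfer-function part plus two IC terms
d = s**2 + 4*s + 2
Y_claimed = U/d + (s + 4)/d*y0 + yp0/d
residual = sp.simplify(Y_solved - Y_claimed)  # 0 if the two forms agree
```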


#### Mårten

the main difference between this and the state-space model is that the state-space model is trying to model what is going on inside the box. it is more general than the simple input-output description of the system.
Okay, I think I got that: it also models what happens inside the box, as you say.

But, still, I haven't got any answer to the question about the limitations on the initial values in the state space representation. There are no such limitations in the state space representation, are there? You can choose the initial values to be whatever you like, can't you?

#### rbj

But, still, I haven't got any answer to the question about the limitations on the initial values in the state space representation. There are no such limitations in the state space representation, are there? You can choose the initial values to be whatever you like, can't you?
and you can't with the input-output model?

with either model, you can put in whatever initial values you want. but the input-output model only allows you to put in initial values for the output(s) and the various derivatives of the output (up to the order of the system). since the input-output model doesn't even think about the internal states, then i guess you can't choose initial values of internal states.

#### Mårten

with either model, you can put in whatever initial values you want. but the input-output model only allows you to put in initial values for the output(s) and the various derivatives of the output (up to the order of the system). since the input-output model doesn't even think about the internal states, then i guess you can't choose initial values of internal states.
Okey, I got it!

Next thing then is: With the state space model, you have initial values for the internal states, that is X(t_0). But what about for the output, Y(t)? There's no need for initial values there? Sorry, if this is obvious...

#### rbj

With the state space model, you have initial values for the internal states, that is x(t_0). But what about for the output, y(t)? There's no need for initial values there?
try to reserve capital letters for transformed signals and lower case for time-domain signals. y(t), not Y(t), and Y(s) (or, in the discrete-time Z Transform, Y(z)). it just keeps things clear.

those initial conditions (for y(t)) are fully determined by the initial conditions for the states in X(t). they cannot be independently specified. and with the state-space model, with all of those states (an equal number of states to the order of the system), you need not and will not have initial conditions for higher derivatives (like you did for y'(0)). just having initial conditions for every element in the X(0) vector is good enough.

#### Mårten

Before we go on - thank you very much for this personal class I get here in control theory! It helps me a lot!

try to reserve capital letters for transformed signals and lower case for time-domain signals. y(t), not Y(t), and Y(s) (or, in the discrete-time Z Transform, Y(z)). it just keeps things clear.
Okay, sorry. That was to signal that they could be matrices or vectors. Bold face may be better...

those initial conditions (for y(t)) are fully determined by the initial conditions for the states in X(t). they cannot be independently specified. and with the state-space model, with all of those states (an equal number of states to the order of the system), you need not and will not have initial conditions for higher derivatives (like you did for y'(0)). just having initial conditions for every element in the X(0) vector is good enough.
Okay, I think I got it there. y(0) would just have been a transformation (made by the C-matrix) of the vector x(0).

Then comes the ultimate question: Why not always use the state-space model in preference to the Laplace transform model? What benefits does the Laplace transform model have that the state-space model doesn't?


#### rbj

Before we go on - thank you very much for this personal class I get here in control theory! It helps me a lot!

Okay, sorry. That was to signal that they could be matrices or vectors. Bold face may be better...

Okay, I think I got it there. y(0) would just have been a transformation (made by the C-matrix) of the vector x(0).
yeah, and it would be nice if i practice what i preach. i should have said x(t) for the state vector instead of X(t).

Then comes the ultimate question: Why not always use the state-space model in preference to the Laplace transform model? What benefits does the Laplace transform model have that the state-space model doesn't?
simplicity. if a system is known to be linear and time-invariant, and if you don't care about what's going on inside the black box, but only on how the system interacts (via its inputs and outputs) with the rest of the world that it is connected to, the input-output transfer function description is all that you need. if you have an Nth-order, single-input, single-output system, then 2N+1 numbers fully describe your system, from an input-output POV. with the state variable description, an Nth-order system (single input and output) has N^2 numbers, just for the A matrix, and 2N+1 numbers for the B, C, and D matrices. so there are many different state-variable systems that have the same transfer function. all of these different systems behave identically with the outside world until some internal state saturates or goes non-linear, and that's where they are modeled differently in practice. you can even have internal state(s) blow up and not even know it (if the system is "not completely observable") until the state(s) that are unstable hit the "rails", the maximum values they can take before clipping or some other non-linearity. when something like this happens, there is pole-zero cancellation, as far as the input-output description is concerned. so maybe some zero killed the pole and they both disappeared in the transfer function, G(s), but inside that bad pole still exists.
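here's a tiny sympy sketch of that pole-zero cancellation point (the pole at s = +1 is a made-up illustration, not anything from your system):

```python
import sympy as sp

s = sp.symbols('s')

# made-up example: an unstable internal pole at s = +1 sitting under
# a zero at the same place.  from the outside the plant looks like
# the perfectly stable first-order system 1/(s + 2).
G_internal = (s - 1) / ((s - 1)*(s + 2))
G_external = sp.cancel(G_internal)

# the unstable pole has vanished from the input-output description
external_poles = sp.solve(sp.denom(G_external), s)
```

the input-output description only shows the pole at -2; the internal mode at +1 is invisible until something inside saturates.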

#### Mårten

Sorry for late reply, been away for a while...
simplicity. if a system is known to be linear and time-invariant, and if you don't care about what's going on inside the black box, but only on how the system interacts (via its inputs and outputs) with the rest of the world that it is connected to, the input-output transfer function description is all that you need. if you have an Nth-order, single-input, single-output system, then 2N+1 numbers fully describe your system, from an input-output POV. with the state variable description, an Nth-order system (single input and output) has N^2 numbers, just for the A matrix, and 2N+1 numbers for the B, C, and D matrices
Hm... Still something here that confuses me. Imagine that you have a system described by the ODE $y'' + 4y' + 2y = u$. It's possible to describe this system with a transfer function $G(s) = \frac{1}{s^2+4s+2}$. It's also possible to describe this system with a state space representation, constructing the A-matrix and so on. Then the following obscurities:

1) With the state space representation the system is described by N^2 numbers in the A-matrix. But most of the numbers in the A-matrix are just zeros and ones. So it seems that the number of significant numbers in the A-matrix is just 2N+1. So then the state space representation is not so much more complicated?

2) If a system is described by this ODE above, how could there be any internal information that is revealed by putting this system into a state space representation, compared to when using the transfer function to describe it?

#### rbj

Sorry for late reply, been away for a while...
Hm... Still something here that confuses me. Imagine that you have a system described by the ODE $y'' + 4y' + 2y = u$. It's possible to describe this system with a transfer function $G(s) = \frac{1}{s^2+4s+2}$. It's also possible to describe this system with a state space representation, constructing the A-matrix and so on.
this is true, but while there is really only one G(s) that accurately represents what the ODE says, there are many (and infinite number) of state space representations that will have the same transfer function, G(s). some are completely controllable, some are completely observable, some are both, some are neither completely controllable nor completely observable.

Then the following obscurities:

1) With the state space representation the system is described by N^2 numbers in the A-matrix. But most of the numbers in the A-matrix are just zeros and ones. So it seems that the number of significant numbers in the A-matrix is just 2N+1. So then the state space representation is not so much more complicated?
it's that there are many different state space representations (all with different A, B, and C matrices, but i think the D matrix is the same) that have the same effect as far as input and output are concerned. put these different state-variable systems (with identical transfer functions) in a box and draw out the inputs and outputs to the rest of the world and the rest of the world could not tell the difference between them. at least as long as they stayed linear inside.

2) If a system is described by this ODE above, how could there be any internal information that is revealed by putting this system into a state space representation, compared to when using the transfer function to describe it?
if it stays linear inside and you put it in a black box, you can't tell everything about the internal structure. but if some internal state goes non-linear (perhaps because this internal state blew up because it was unstable), then you can tell something is different on the outside. if the state-variable system is completely observable and you have information about the internal structure (essentially the A, B, C, D matrices), you can determine (or "reveal") what the states must be from looking only at the output (and its derivatives). if it's completely controllable, you can manipulate the input so that the states take on whatever values you want.

this "controllable" and "observable" stuff is in sorta advanced control theory (maybe grad level, even though i first heard about it in undergrad when i learned about the state space representation). i dunno if i can do it justice in this forum. i would certainly have to dig out my textbooks and re-learn some of this again so i don't lead you astray with bu11sh1t. so far, i'm pretty sure i'm accurate about what i said, but i don't remember everything else about the topic.
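here's a quick numerical sketch of the "many realizations, one transfer function" point from earlier (the change-of-coordinates matrix T below is arbitrary; anything invertible works):

```python
import numpy as np

# canonical-form realization of y'' + 4y' + 2y = u
A = np.array([[0.0, 1.0], [-2.0, -4.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# new state coordinates z = T x, with any invertible T
T = np.array([[1.0, 2.0], [0.0, 1.0]])
Ti = np.linalg.inv(T)
A2, B2, C2 = T @ A @ Ti, T @ B, C @ Ti  # note: D is unchanged

def G(A, B, C, D, s):
    """evaluate C (sI - A)^{-1} B + D at one complex frequency s"""
    n = A.shape[0]
    return (C @ np.linalg.solve(s*np.eye(n) - A, B) + D)[0, 0]

s0 = 1.0 + 2.0j  # arbitrary test frequency
g1 = G(A, B, C, D, s0)
g2 = G(A2, B2, C2, D, s0)  # same value: the outside world can't tell them apart
```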

#### Mårten

Okay, I think I understand now. Correct me, then, if any of the following is wrong.

The expression of G(s) that I was talking about above could actually be the result of a lot of boxes, so to say. That is, G(s) is a black box, and inside that black box there could be several other boxes with signals going in and out, and even feedback loops. Actually, the number of configurations inside the black box is infinite. That corresponds to the infinite number of state space representations for G(s).

But, let's now say that we know exactly what happens inside the black box. E.g., the black box is the ODE describing a bathtub with water coming in, and water going out. Let's say for simplicity, even though this is probably not physically correct, that this bathtub is described by $G(s) = \frac{1}{s^2+4s+2}$. Then, this bathtub system could just have one state space representation, since we know the system perfectly inside. Is that correct? (I here disregard the fact that you could manipulate the rows in the A-matrix, and make different matrix-operations, but it's still really the same equivalent matrix.)

So in this case, I could just as well solve the matrix equation as solve the Laplace equation; it is as simple. Except that if I do it the matrix way (i.e. the state space representation way), I could have whatever initial values I like, and it won't make things more complicated, as it would in the Laplace way. Is that correct?

#### Mårten

Okay, since I got no reply to my previous post above, I guess that means everything there was correct...?

Then to the following question: One of the benefits of Laplace transforms is that you can just multiply boxes in a row to get the transfer function for all the boxes together. In general, it's pretty simple to handle different configurations with boxes in series and in parallel when doing transfer functions.

Is it possible as well to put up a state space representation when you have several boxes in series and so on, and how is that done then? Maybe that's a drawback of state space representations, that it's not done as easily as with transfer functions?

#### rbj

Marten,

i just now saw this thread was left hanging. i just received a few infractions from Evo (from a totally different thread) and i need some time to think about what you were asking in post #17. there is a straight-forward way to go from state-space representation to transfer function. it's in the text books but it's pretty easy to derive. of course, there is loss of information (since many different state-space representations can turn into a single transfer-function representation). it's maybe not as straight-forward, but you can combine the states of one box and the states of another box (in series or parallel) into a single box where the number of states is the sum of the two and where the states are defined exactly as they were before.
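that straight-forward way, by the way, is just to transform the state equations assuming x(0) = 0:

$$sX(s) = AX(s) + BU(s) \quad \Rightarrow \quad X(s) = (sI - A)^{-1}B\,U(s),$$

$$Y(s) = CX(s) + DU(s) = \left( C(sI - A)^{-1}B + D \right) U(s),$$

so $G(s) = C(sI - A)^{-1}B + D$. the loss of information shows up right there: many different (A, B, C) triples produce the very same product $C(sI - A)^{-1}B$.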

#### Mårten

i just now saw this thread was left hanging. i just received a few infractions from Evo (from a totally different thread) and i need some time to think about what you were asking in post #17.
I'll be happy for any comments!

The reason I'm so eager to stick to state space representations, is that I would like to be able to handle, and to fully understand, systems with multiple inputs/outputs and with arbitrary initial values, and still have pretty simple calculations (for instance, arbitrary initial values are not good for transfer functions because then the simplicity disappears; multiple inputs/outputs are only possible to handle with state space representations).

there is a straight-forward way to go from state-space representation to transfer function. it's in the text books but it's pretty easy to derive. of course, there is loss of information (since many different state-space representations can turn into a single transfer-function representation). it's maybe not as straight-forward, but you can combine the states of one box and the states of another box (in series or parallel) into a single box where the number of states is the sum of the two and where the states are defined exactly as they were before.
Okay. I'll try to do this myself here to see what happens. Imagine two boxes G_1 and G_2 after each other in the following way

u --> $G_1$ --> v --> $G_2$ --> y

where $G_1: a_1v'' + b_1v' + c_1v = u$ and $G_2: a_2y'' + b_2y' + c_2y = v$, i.e. G_1 has u and v as input/output and G_2 has v and y as input/output. For simplicity, $a_1 = a_2 = 1$. The state space representation for this then becomes (as far as I can see)

$$\left( \begin{array}{c} x'_1 = v' \\ x'_2 = v'' \\ x'_3 = y' \\ x'_4 = y'' \\ \end{array} \right) = \left( \begin{array}{cccc} 0 & 1 & 0 & 0 \\ -c_1 & -b_1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & -c_2 & -b_2 \\ \end{array} \right) \left( \begin{array}{c} x_1 \\ x_2 \\ x_3 \\ x_4 \\ \end{array} \right) + \left( \begin{array}{c} 0 \\ u \\ 0 \\ 0 \\ \end{array} \right).$$

Okay, it seems to work! The output from G_1 becomes the input to G_2 through the number 1 in the A-matrix's lower-left corner (element $a_{41}$, sort of the "chain element" which connects the two boxes).

Now comes the question of how to solve this, ehhh... :yuck: Using eigenvalue techniques or $e^{At}$-matrices, and so forth, maybe leading to unsolvable nth degree equations (n>4). Or simply using a computer...
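Meanwhile, here's a quick numerical check of my combined A-matrix above in Python (the coefficients $b_1 = 3, c_1 = 2, b_2 = 4, c_2 = 2$ are just made-up test values, with $a_1 = a_2 = 1$):

```python
import numpy as np

# made-up test coefficients, with a1 = a2 = 1
b1, c1, b2, c2 = 3.0, 2.0, 4.0, 2.0

# the combined 4x4 system: states (v, v', y, y')
A = np.array([[0.0,  1.0,  0.0,  0.0],
              [-c1,  -b1,  0.0,  0.0],
              [0.0,  0.0,  0.0,  1.0],
              [1.0,  0.0,  -c2,  -b2]])  # A[3, 0] = 1 is the "chain element"
B = np.array([[0.0], [1.0], [0.0], [0.0]])  # u enters the v'' equation
C = np.array([[0.0, 0.0, 1.0, 0.0]])        # the output is x3 = y

# transfer function of the combined box at one test frequency...
s0 = 0.5 + 1.0j
G_ss = (C @ np.linalg.solve(s0*np.eye(4) - A, B))[0, 0]

# ...should equal the product of the two individual transfer functions
G_tf = 1.0 / ((s0**2 + b1*s0 + c1) * (s0**2 + b2*s0 + c2))
```

They agree, so stacking the states and adding the chain element really does reproduce $G_2(s)G_1(s)$.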

#### Proton Soup

yeah, it's been a good 12 or 13 years since i modelled state space systems, which is why i haven't tried to help. but yeah, computing the discrete state-space matrix from the continuous time transfer function is done with a computer. something like Matlab or Scilab, maybe Octave, should have the functions available. Matlab i know for sure does, because that's what i used to use.

#### Mårten

Of course, MATLAB would do the trick! I should know that.

Okay, I guess I won't get any more out of this particular area. Thanks for all the help! Don't hesitate to comment further on what I've said above.

I've now started a new thread on the topic of frequency response.


#### Proton Soup

well, it requires computing a matrix exponential, doesn't it? iirc, there's really no good way to do that by hand.
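for what it's worth, in Python it's one call to scipy (the A below is the companion matrix of the thread's example ODE y'' + 4y' + 2y = u; the initial state is made up):

```python
import numpy as np
from scipy.linalg import expm

# free response x(t) = e^{At} x(0); expm computes the matrix exponential
# numerically (by hand you'd diagonalize A or use the Cayley-Hamilton theorem)
A = np.array([[0.0, 1.0], [-2.0, -4.0]])
x0 = np.array([2.0, 7.0])  # made-up initial state
t = 1.0
xt = expm(A * t) @ x0
```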
