
Control theory: Laplace versus state space representation

Mårten
#1
Oct21-08, 06:41 PM
P: 127
I'm taking a course in control theory, and have been wondering for a while what the benefits are when you describe a system with the Laplace method and transfer functions, compared to when you use the state space representation method. In particular, when using the Laplace method you are limited to a system where all of the initial conditions have to be equal to zero. Hence the following questions:

1) Is it correct that with the state space representation, the initial conditions could be whatever, i.e., you are not limited to a system where they equal zero?

2) If so, why all this fuss about using the Laplace method? Why not always use the state space representation?

I'm most eager for an answer on question number 1 above...
rbj
#2
Oct21-08, 10:36 PM
P: 2,251
Quote Quote by Mårten View Post
I'm taking a course in control theory, and have been wondering for a while what the benefits are when you describe a system with the Laplace method and transfer functions, compared to when you use the state space representation method. In particular, when using the Laplace method you are limited to a system where all of the initial conditions have to be equal to zero.
i do not think that is true. just as the Laplace Transform (which is what i think you mean by the "Laplace method") can represent a linear differential equation with initial conditions, it can represent a linear, time-invariant system with initial conditions.

the main difference between the transfer function representation of a linear system and the state-space representation is that the former is concerned only with the input-output relationship while the latter is also concerned with what is going on inside the box. the state-space representation is more general and you can have all sorts of different state-space representations that appear to have the same input-output relationship.

a consequence of this is what happens with pole-zero cancellation. you might think that you have a nice 2nd order, stable linear system judging from the input-output relationship, but internally there might be an internal pole that is unstable but canceled by a zero sitting right on top of it. so before things inside blow up (and become non-linear), everything looks nice on the outside while things are going to hell on the inside. you find out that things aren't so nice on the inside when some internal state (that is not observable at the outside) hits the rails.
Mårten
#3
Oct22-08, 06:56 AM
P: 127
Quote Quote by rbj View Post
i do not think that is true. just as the Laplace Transform (which is what i think you mean by the "Laplace method") can represent a linear differential equation with initial conditions, it can represent a linear, time-invariant system with initial conditions.
Hm... I don't quite understand...

For instance, when using the Laplace transform on the following equation, we get:

[tex]y'' + 4y' + 2y = u, y(0) = 0, y'(0) = 0,[/tex]

[tex]s^2Y(s) + 4sY(s) + 2Y(s) = U(s),[/tex]

[tex]Y(s) = \frac{1}{s^2+4s+2}U(s).[/tex]

Now this simple result wouldn't be possible, if it wasn't for the initial conditions y(0) = 0 and y'(0) = 0.

But when we have a state space representation, like the following

[tex]X'(t) = AX(t) + BU(t),[/tex]
[tex]Y(t) = CX(t) + DU(t),[/tex]
[tex]X(t_0) = E,[/tex]

then I thought it was possible to choose the initial conditions X(t_0) to be whatever, e.g.

[tex]X(t_0) = E = \{x_1(0),x_2(0),x_3(0)\}^T = \{2,7,6\}^T,[/tex]

so I am not limited to initial conditions where all x_i(0) = 0 like in the Laplace transform case. Or am I thinking erroneously here?
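For concreteness, here is a sketch of this in SciPy. The matrices are one assumed (phase-variable) realization of the second-order ODE above, with x1 = y and x2 = y'; note that a 2nd-order ODE gives a two-element state vector, so the initial state here has two entries:

```python
import numpy as np
from scipy.signal import lsim

# one assumed state-space realization of  y'' + 4y' + 2y = u
# (phase-variable form: x1 = y, x2 = y')
A = np.array([[0.0, 1.0],
              [-2.0, -4.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

t = np.linspace(0.0, 10.0, 500)
u = np.ones_like(t)          # unit step input

# relaxed system (x(0) = 0) ...
_, y_relaxed, _ = lsim((A, B, C, D), u, t)
# ... versus an arbitrary nonzero initial state, which lsim accepts directly
_, y_nonzero, _ = lsim((A, B, C, D), u, t, X0=[2.0, 7.0])

print(y_relaxed[0], y_nonzero[0])   # 0.0 vs 2.0: the initial state shows up in y(0)
```

The relaxed run starts from y(0) = 0; the second run starts from the chosen x(0), which shows up immediately in the output.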

rbj
#4
Oct22-08, 10:05 AM
P: 2,251

Quote Quote by Mårten View Post
Hm... I don't quite understand...

For instance, when using the Laplace transform on the following equation, we get:

[tex]y'' + 4y' + 2y = u, y(0) = 0, y'(0) = 0,[/tex]

[tex]s^2Y(s) + 4sY(s) + 2Y(s) = U(s),[/tex]

[tex]Y(s) = \frac{1}{s^2+4s+2}U(s).[/tex]

Now this simple result wouldn't be possible, if it wasn't for the initial conditions y(0) = 0 and y'(0) = 0.
but this

[tex]s^2 Y(s) + 4s Y(s) + 2 Y(s) = U(s)[/tex]

is not quite complete, in the general case. to go from

[tex]y'' + 4y' + 2y = u[/tex]

it should be

[tex](s^2 Y(s) - s y(0) - y'(0)) + 4(s Y(s) - y(0)) + 2 Y(s) = U(s)[/tex]

that's the "Laplace method".
Mårten
#5
Oct22-08, 03:35 PM
P: 127
Quote Quote by rbj View Post
but this

[tex]s^2 Y(s) + 4s Y(s) + 2 Y(s) = U(s)[/tex]

is not quite complete, in the general case. to go from

[tex]y'' + 4y' + 2y = u[/tex]

it should be

[tex](s^2 Y(s) - s y(0) - y'(0)) + 4(s Y(s) - y(0)) + 2 Y(s) = U(s)[/tex]

that's the "Laplace method".
But, hm... In my control theory text book, it says that when using Laplace transforms in order to more easily handle ODEs, you always assume that [itex]y^{(n)}(0)=0[/itex] for all n. This is in order to get the manageable form [itex]Y(s) = G(s)U(s)[/itex]. All the calculations in my book assume this. As I understood it, if you do not assume this, the whole point of using Laplace transforms is lost, because then you cannot easily multiply the different boxes around the feedback loop, for instance.

But what I haven't understood yet is whether the state space representation method is limited in this way as well (that all the initial values, each of them, are set to 0). Are you instead allowed to use an initial state vector X(t_0), such as the one I used in an earlier message above?
rbj
#6
Oct22-08, 11:26 PM
P: 2,251
Quote Quote by Mårten View Post
But, hm... In my control theory text book, it says that when using Laplace transforms in order to more easily handle ODEs, you always assume that [itex]y^{(n)}(0)=0[/itex] for all n. This is in order to get the manageable form [itex]Y(s) = G(s)U(s)[/itex]. All the calculations in my book assume this.
well, sure. otherwise you get [itex]Y(s) = G(s)U(s)[/itex] + some other stuff.

As I understood it, if you do not assume this, the whole point of using Laplace transforms is lost, because then you cannot easily multiply the different boxes around the feedback loop, for instance.

But what I haven't understood yet is whether the state space representation method is limited in this way as well (that all the initial values, each of them, are set to 0). Are you instead allowed to use an initial state vector X(t_0), such as the one I used in an earlier message above?
sure, if you want to represent the (transformed) output as only the transfer function times a single input, then you have to assume that the system is completely "relaxed" at time 0. but you can model a system in terms of only the input and output (and their initial conditions) with the Laplace Transforms of the input and output. but it won't be a simple transfer function.

the main difference between this and the state-space model is that the state-space model is trying to model what is going on inside the box. it is more general than the simple input-output description of the system.
misgfool
#7
Oct23-08, 12:05 PM
P: 95
Quote Quote by rbj View Post
but you can model a system in terms of only the input and output (and their initial conditions) with the Laplace Transforms of the input and output. but it won't be a simple transfer function.
Can you make some kind of distinction between simple and complicated transfer function?
rbj
#8
Oct23-08, 02:35 PM
P: 2,251
Quote Quote by misgfool View Post
Can you make some kind of distinction between simple and complicated transfer function?
in the present context,

[tex](s^2 Y(s) - s y(0) - y'(0)) + 4(s Y(s) - y(0)) + 2 Y(s) = U(s)[/tex]

i meant what would happen if you solved for Y(s). it is not only a function of U(s), but also a function of the two initial conditions.

[tex]Y(s) = \frac{1}{s^2 + 4s + 2}U(s) \ + \ \frac{s+4}{s^2 + 4s + 2} y(0) \ + \ \frac{1}{s^2 + 4s + 2} y'(0) [/tex]

the simple one (assuming a completely relaxed system at t=0) would be

[tex]Y(s) = \frac{1}{s^2 + 4s + 2}U(s) [/tex]

or

[tex]\frac{Y(s)}{U(s)} \equiv G(s) = \frac{1}{s^2 + 4s + 2}[/tex]
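that three-term split can be checked symbolically. a quick SymPy sketch (the variable names are mine, not standard):

```python
import sympy as sp

s, U, y0, yp0 = sp.symbols('s U y0 yp0')
Y = sp.symbols('Y')

# transformed ODE, with the initial-condition terms kept in
eq = sp.Eq((s**2*Y - s*y0 - yp0) + 4*(s*Y - y0) + 2*Y, U)
Ysol = sp.solve(eq, Y)[0]

# the claimed split: forced response plus two initial-condition terms
split = (U + (s + 4)*y0 + yp0) / (s**2 + 4*s + 2)
print(sp.simplify(Ysol - split))   # 0, so the decomposition holds
```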
Mårten
#9
Oct23-08, 07:19 PM
P: 127
Quote Quote by rbj View Post
the main difference between this and the state-space model is that the state-space model is trying to model what is going on inside the box. it is more general than the simple input-output description of the system.
Okay, I think I got that; it also models what happens inside the box, as you say.

But still, I haven't got an answer to the question about the limitations on the initial values in the state space representation. There are no such limitations in the state space representation, are there? You can choose the initial values to be whatever you like, can't you?
rbj
#10
Oct23-08, 11:40 PM
P: 2,251
Quote Quote by Mårten View Post
But still, I haven't got an answer to the question about the limitations on the initial values in the state space representation. There are no such limitations in the state space representation, are there? You can choose the initial values to be whatever you like, can't you?
and you can't with the input-output model?

with either model, you can put in whatever initial values you want. but the input-output model only allows you to put in initial values for the output(s) and the various derivatives of the output (up to the order of the system). since the input-output model doesn't even think about the internal states, then i guess you can't choose initial values of internal states.
Mårten
#11
Oct24-08, 09:31 PM
P: 127
Quote Quote by rbj View Post
with either model, you can put in whatever initial values you want. but the input-output model only allows you to put in initial values for the output(s) and the various derivatives of the output (up to the order of the system). since the input-output model doesn't even think about the internal states, then i guess you can't choose initial values of internal states.
Okay, I got it!

Next thing then: With the state space model, you have initial values for the internal states, that is X(t_0). But what about the output, Y(t)? Is there no need for initial values there? Sorry if this is obvious...
rbj
#12
Oct24-08, 10:22 PM
P: 2,251
Quote Quote by Mårten View Post
With the state space model, you have initial values for the internal states, that is x(t_0). But what about for the output, y(t)? There's no need for initial values there?
try to leave the variables with caps for transformed signals and lower case for time-domain: y(t), not Y(t), and Y(s) (or, in the discrete-time Z Transform, Y(z)). it just keeps things clear.

those initial conditions (for y(t)) are fully determined by the initial conditions for the states in X(t). they cannot be independently specified. and with the state-space model, with all of those states (a number of states equal to the order of the system), you need not and will not have initial conditions for higher derivatives (like you did for y'(0)). just having initial conditions for every element in the X(0) vector is good enough.
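a tiny sketch of that dependence, using a hypothetical phase-variable realization of the earlier example (x1 = y, x2 = y'): both y(0) and y'(0) fall out of x(0), with nothing left to choose.

```python
import numpy as np

# one assumed realization of  y'' + 4y' + 2y = u  (x1 = y, x2 = y')
A = np.array([[0.0, 1.0],
              [-2.0, -4.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

x0 = np.array([2.0, 7.0])   # chosen initial state
u0 = 0.0                    # input at t = 0

y0 = C @ x0                       # y(0)  = C x(0)
xdot0 = A @ x0 + B.ravel() * u0   # x'(0) = A x(0) + B u(0)
yp0 = C @ xdot0                   # y'(0) = C x'(0)   (D = 0 here)
print(y0[0], yp0[0])   # 2.0 7.0 -- fixed by x(0), not independently specifiable
```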
Mårten
#13
Oct26-08, 09:54 PM
P: 127
Before we go on - thank you very much for this personal class I get here in control theory! It helps me a lot!

Quote Quote by rbj View Post
try to leave the variables with caps for transformed signals and lower case for time-domain: y(t), not Y(t), and Y(s) (or, in the discrete-time Z Transform, Y(z)). it just keeps things clear.
Okay, sorry. That was to signal that they could be matrices or vectors. Bold face might be better...

Quote Quote by rbj View Post
those initial conditions (for y(t)) are fully determined by the initial conditions for the states in X(t). they cannot be independently specified. and with the state-space model, with all of those states (a number of states equal to the order of the system), you need not and will not have initial conditions for higher derivatives (like you did for y'(0)). just having initial conditions for every element in the X(0) vector is good enough.
Okay, I think I got it there. y(0) would just have been a transformation (made by the C-matrix) of the vector x(0).

Then comes the ultimate question: Why not always use the state-space model in preference to the Laplace transform model? What benefits does the Laplace transform model have that the state-space model doesn't?
rbj
#14
Oct26-08, 10:54 PM
P: 2,251
Quote Quote by Mårten View Post
Before we go on - thank you very much for this personal class I get here in control theory! It helps me a lot!


Okay, sorry. That was to signal that they could be matrices or vectors. Bold face might be better...


Okay, I think I got it there. y(0) would just have been a transformation (made by the C-matrix) of the vector x(0).
yeah, and it would be nice if i practice what i preach. i should have said x(t) for the state vector instead of X(t).

Then comes the ultimate question: Why not always use the state-space model in preference to the Laplace transform model? What benefits does the Laplace transform model have that the state-space model doesn't?
simplicity. if a system is known to be linear and time-invariant, and if you don't care about what's going on inside the black box, but only how the system interacts (via its inputs and outputs) with the rest of the world it is connected to, the input-output transfer function description is all you need. if you have an Nth-order, single-input, single-output system, then 2N+1 numbers fully describe your system, from an input-output POV. with the state variable description, an Nth-order system (single input and output) has N² numbers just for the A matrix, plus 2N+1 numbers for the B, C, and D matrices.

so there are many different state-variable systems that have the same transfer function. all of these different systems behave identically with the outside world until some internal state saturates or goes non-linear, and that's where they are modeled differently in practice. you can even have internal state(s) blow up and not even know it (if the system is "not completely observable") until the state(s) that are unstable hit the "rails", the maximum values they can take before clipping or some other non-linearity. when something like this happens, there is pole-zero cancellation, as far as the input-output description is concerned. so maybe some zero killed the pole and they both disappeared in the transfer function, G(s), but inside, that bad pole still exists.
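the pole-zero cancellation point can be made concrete with a toy example (a made-up G, not one from this thread): an unstable pole at s = +1 that a zero hides from the input-output view.

```python
import sympy as sp

s = sp.symbols('s')

# hypothetical internal structure: an unstable pole at s = +1 ...
G_internal = (s - 1) / ((s - 1) * (s + 2))

# ... cancelled by a zero in the input-output transfer function
G_io = sp.cancel(G_internal)
print(G_io)   # 1/(s + 2): looks like a stable first-order system from outside
```

the internal mode at s = +1 is invisible in G_io, which is exactly the "going to hell on the inside" situation described above.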
Mårten
#15
Nov10-08, 02:01 PM
P: 127
Sorry for late reply, been away for a while...
Quote Quote by rbj View Post
simplicity. if a system is known to be linear and time-invariant, and if you don't care about what's going on inside the black box, but only how the system interacts (via its inputs and outputs) with the rest of the world it is connected to, the input-output transfer function description is all you need. if you have an Nth-order, single-input, single-output system, then 2N+1 numbers fully describe your system, from an input-output POV. with the state variable description, an Nth-order system (single input and output) has N² numbers just for the A matrix, plus 2N+1 numbers for the B, C, and D matrices.
Hm... There's still something here that confuses me. Imagine that you have a system described by the ODE [itex]y'' + 4y' + 2y = u[/itex]. It's possible to describe this system with the transfer function [itex]G(s) = \frac{1}{s^2+4s+2}[/itex]. It's also possible to describe this system with a state space representation, constructing the A-matrix and so on. Then the following things confuse me:

1) With the state space representation the system is described by N² numbers in the A-matrix. But most of the numbers in the A-matrix are just zeros and ones. So it seems that the number of significant numbers in the A-matrix is just 2N+1. So then the state space representation is not so much more complicated?

2) If a system is described by the ODE above, how could there be any internal information revealed by putting this system into a state space representation, compared to when using the transfer function to describe it?
rbj
#16
Nov10-08, 11:46 PM
P: 2,251
Quote Quote by Mårten View Post
Sorry for late reply, been away for a while...
Hm... There's still something here that confuses me. Imagine that you have a system described by the ODE [itex]y'' + 4y' + 2y = u[/itex]. It's possible to describe this system with the transfer function [itex]G(s) = \frac{1}{s^2+4s+2}[/itex]. It's also possible to describe this system with a state space representation, constructing the A-matrix and so on.
this is true, but while there is really only one G(s) that accurately represents what the ODE says, there are many (an infinite number) of state space representations that will have the same transfer function, G(s). some are completely controllable, some are completely observable, some are both, and some are neither completely controllable nor completely observable.

Then the following things confuse me:

1) With the state space representation the system is described by N² numbers in the A-matrix. But most of the numbers in the A-matrix are just zeros and ones. So it seems that the number of significant numbers in the A-matrix is just 2N+1. So then the state space representation is not so much more complicated?
it's that there are many different state space representations (all with different A, B, and C matrices, but i think the D matrix is the same) that have the same effect as far as input and output are concerned. put these different state-variable systems (with identical transfer functions) in a box and draw out the inputs and outputs to the rest of the world and the rest of the world could not tell the difference between them. at least as long as they stayed linear inside.
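a quick numeric check of that "many realizations, one transfer function" point: apply any invertible change of state coordinates T and the transfer function comes out the same (a sketch; the T here is arbitrary, and the starting matrices are one assumed realization of the example ODE):

```python
import numpy as np
from scipy.signal import ss2tf

# one assumed realization of  y'' + 4y' + 2y = u
A = np.array([[0.0, 1.0], [-2.0, -4.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# any invertible T gives another realization: A' = T A T^-1, B' = T B, C' = C T^-1
T = np.array([[1.0, 1.0], [0.0, 2.0]])
Ti = np.linalg.inv(T)
A2, B2, C2 = T @ A @ Ti, T @ B, C @ Ti

num1, den1 = ss2tf(A, B, C, D)
num2, den2 = ss2tf(A2, B2, C2, D)
print(np.allclose(num1, num2), np.allclose(den1, den2))   # True True
```

the D matrix is indeed unchanged by this kind of coordinate change, consistent with the remark above.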

2) If a system is described by the ODE above, how could there be any internal information revealed by putting this system into a state space representation, compared to when using the transfer function to describe it?
if it stays linear inside and you put it in a black box, you can't tell everything about the internal structure. but if some internal state goes non-linear (perhaps because this internal state blew up because it was unstable), then you can tell something is different on the outside. if the state-variable system is completely observable and you have information about the internal structure (essentially the A, B, C, D matrices), you can determine (or "reveal") what the states must be from looking only at the output (and its derivatives). if it's completely controllable, you can manipulate the input so that the states take on whatever values you want.

this "controllable" and "observable" stuff is sorta advanced control theory (maybe grad level, even though i first heard about it in undergrad when i learned about the state space representation). i dunno if i can do it justice in this forum. i would certainly have to dig out my textbooks and re-learn some of this again so i don't lead you astray with bu11sh1t. so far, i'm pretty sure i'm accurate about what i said, but i don't remember everything else about the topic.
Mårten
#17
Nov11-08, 04:59 PM
P: 127
Okay, I think I understand now. Correct me if something in the following is wrong.

The expression for G(s) that I was talking about above could actually be the result of a lot of boxes, so to speak. That is, G(s) is a black box, and inside that black box there could be several other boxes with signals going in and out, and even feedback loops. Actually, the number of configurations inside the black box is infinite. That corresponds to the infinite number of state space representations for G(s).

But let's now say that we know exactly what happens inside the black box. E.g., the black box is the ODE describing a bathtub with water coming in and water going out. Let's say for simplicity, even though this is probably not physically correct, that this bathtub is described by [itex]G(s) = \frac{1}{s^2+4s+2}[/itex]. Then this bathtub system would have just one state space representation, since we know the system perfectly inside. Is that correct? (I here disregard the fact that you could manipulate the rows in the A-matrix and make different matrix operations, but it's still really the same equivalent matrix.)

So in this case, I could just as well solve the matrix equation as solve the Laplace equation; it is as simple. Except that if I do it the matrix way (i.e., the state space representation way), I could have whatever initial values; it wouldn't make it more complicated, as it would in the Laplace way. Is that correct?
Mårten
#18
Nov14-08, 01:03 PM
P: 127
Okay, since I got no reply to my previous post above, I guess that means everything there was correct...?

Then to the following question: One of the benefits of Laplace transforms is that you can just multiply boxes in a row to get the transfer function for all the boxes together. In general, it's pretty simple to handle different configurations with boxes in series and in parallel when using transfer functions.

Is it possible as well to put up a state space representation when you have several boxes in series and so on, and how is that done? Maybe that's a drawback of state space representations, that it's not done as easily as with transfer functions?
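One way I've seen it sketched: stack the states of the two boxes and feed the first box's output into the second box's input. A sketch with two made-up first-order boxes (the block-matrix formulas are the assumption here, not something from this thread):

```python
import numpy as np
from scipy.signal import ss2tf, tf2ss

# two made-up first-order boxes: G1(s) = 1/(s+1), G2(s) = 1/(s+3)
A1, B1, C1, D1 = tf2ss([1.0], [1.0, 1.0])
A2, B2, C2, D2 = tf2ss([1.0], [1.0, 3.0])

# series connection u -> G1 -> G2 -> y: stack the states and let
# G2 be driven by G1's output  y1 = C1 x1 + D1 u
A = np.block([[A1, np.zeros((A1.shape[0], A2.shape[1]))],
              [B2 @ C1, A2]])
B = np.vstack([B1, B2 @ D1])
C = np.hstack([D2 @ C1, C2])
D = D2 @ D1

num, den = ss2tf(A, B, C, D)
print(den)   # coefficients of (s+1)(s+3) = s^2 + 4s + 3
```

The combined denominator matches the product G1(s)G2(s) = 1/((s+1)(s+3)), so the series state-space box agrees with multiplying the transfer functions.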

