Control theory: Laplace versus state space representation

In summary: Is this correct? When using the Laplace transform on the equation [itex]y'' + 4y' + 2y = u[/itex], with y(0) = 0 and y'(0) = 0, we get [itex]s^2Y(s) + 4sY(s) + 2Y(s) = U(s)[/itex], so [itex]Y(s) = \frac{1}{s^2+4s+2}U(s)[/itex].
  • #1
Mårten
I'm taking a course in control theory, and have been wondering for a while what the benefits are of describing a system with the Laplace method and transfer functions, compared to using the state space representation. In particular, when using the Laplace method you are limited to a system where all of the initial conditions have to be equal to zero. My questions are then:

1) Is it correct that with the state space representation, the initial conditions could be whatever, i.e., you are not limited to a system where they all equal zero?

2) If so, why all this fuss about using the Laplace method? Why not always use the state space representation?

I'm most eager for an answer on question number 1 above... :wink:
 
  • #2
Mårten said:
I'm taking a course in control theory, and have been wondering for a while what the benefits are of describing a system with the Laplace method and transfer functions, compared to using the state space representation. In particular, when using the Laplace method you are limited to a system where all of the initial conditions have to be equal to zero.

i do not think that is true. just as the Laplace Transform (which is what i think you mean by the "Laplace method") can represent a linear differential equation with nonzero initial conditions, it can also represent a linear, time-invariant system with nonzero initial conditions.

the main difference between the transfer function representation of a linear system and the state-space representation is that the former is concerned only with the input-output relationship while the latter is also concerned with what is going on inside the box. the state-space representation is more general and you can have all sorts of different state-space representations that appear to have the same input-output relationship.

a consequence of this is what happens with pole-zero cancellation. you might think that you have a nice 2nd-order, stable linear system judging from the input-output relationship, but internally there might be a pole that is unstable yet canceled by a zero sitting right on top of it. so before things inside blow up (and become non-linear), everything looks nice on the outside while things are going to hell on the inside. you find out that things aren't so nice on the inside when some internal state (that is not observable at the outside) hits the rails.
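As a concrete (made-up) illustration of the kind of cancellation rbj describes: suppose the internal description works out to

[tex]G(s) = \frac{s-1}{(s-1)(s+2)} = \frac{1}{s+2}.[/tex]

From the outside this looks like a perfectly stable first-order system, but the canceled pole at s = 1 corresponds to an internal mode that grows like [itex]e^{t}[/itex] once it is excited, until some saturation or other non-linearity stops it.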
 
  • #3
rbj said:
i do not think that is true. just as the Laplace Transform (which is what i think you mean by the "Laplace method") can represent a linear differential equation with nonzero initial conditions, it can also represent a linear, time-invariant system with nonzero initial conditions.

Hm... I don't quite understand... :confused:

For instance, when using the Laplace transform on the following equation, we get:

[tex]y'' + 4y' + 2y = u, y(0) = 0, y'(0) = 0,[/tex]

[tex]s^2Y(s) + 4sY(s) + 2Y(s) = U(s),[/tex]

[tex]Y(s) = \frac{1}{s^2+4s+2}U(s).[/tex]

Now this simple result wouldn't be possible if it weren't for the initial conditions y(0) = 0 and y'(0) = 0.
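As a quick numerical check of that transfer function (a minimal sketch, assuming SciPy is available), the zero-initial-condition step response of [itex]G(s) = \frac{1}{s^2+4s+2}[/itex] can be computed directly:

[code]
from scipy.signal import TransferFunction, step

# G(s) = 1/(s^2 + 4s + 2), valid only for a fully relaxed system at t = 0
G = TransferFunction([1.0], [1.0, 4.0, 2.0])

t, y = step(G)        # step response starting from zero initial conditions
print(y[0], y[-1])    # starts near 0, settles toward the DC gain G(0) = 1/2
[/code]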

But when we have a state space representation, like the following

[tex]X'(t) = AX(t) + BU(t),[/tex]
[tex]Y(t) = CX(t) + DU(t),[/tex]
[tex]X(t_0) = E,[/tex]

then I thought it was possible to choose the initial conditions X(t_0) to be whatever, e.g.

[tex]X(t_0) = E = \{x_1(0),x_2(0),x_3(0)\}^T = \{2,7,6\}^T,[/tex]

so I am not limited to initial conditions where all x_i(0) = 0 like in the Laplace transform case. Or am I thinking erroneously here?
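Here is a minimal sketch of that idea (assuming NumPy/SciPy, and using a companion-form realization of the same second-order system): the state equation is integrated directly from a nonzero initial state, something the plain transfer-function form Y(s) = G(s)U(s) cannot express.

[code]
import numpy as np
from scipy.integrate import solve_ivp

# companion-form realization of y'' + 4y' + 2y = u, with x = (y, y')^T
A = np.array([[0.0, 1.0], [-2.0, -4.0]])
B = np.array([0.0, 1.0])
C = np.array([1.0, 0.0])

u = lambda t: 1.0      # a unit-step input, just for illustration
x0 = [2.0, 7.0]        # arbitrary nonzero initial state -- no problem here

sol = solve_ivp(lambda t, x: A @ x + B * u(t), (0.0, 10.0), x0)
print(C @ sol.y[:, -1])    # the output y(t) at the final time
[/code]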
 
  • #4
Mårten said:
Hm... I don't quite understand... :confused:

For instance, when using the Laplace transform on the following equation, we get:

[tex]y'' + 4y' + 2y = u, y(0) = 0, y'(0) = 0,[/tex]

[tex]s^2Y(s) + 4sY(s) + 2Y(s) = U(s),[/tex]

[tex]Y(s) = \frac{1}{s^2+4s+2}U(s).[/tex]

Now this simple result wouldn't be possible if it weren't for the initial conditions y(0) = 0 and y'(0) = 0.

but this

[tex]s^2 Y(s) + 4s Y(s) + 2 Y(s) = U(s)[/tex]

is not quite complete, in the general case. to go from

[tex]y'' + 4y' + 2y = u[/tex]

it should be

[tex]\left(s^2 Y(s) - s y(0) - y'(0)\right) + 4\left(s Y(s) - y(0)\right) + 2 Y(s) = U(s)[/tex]

that's the "Laplace method".
 
  • #5
rbj said:
but this

[tex]s^2 Y(s) + 4s Y(s) + 2 Y(s) = U(s)[/tex]

is not quite complete, in the general case. to go from

[tex]y'' + 4y' + 2y = u[/tex]

it should be

[tex]\left(s^2 Y(s) - s y(0) - y'(0)\right) + 4\left(s Y(s) - y(0)\right) + 2 Y(s) = U(s)[/tex]

that's the "Laplace method".

But, hm... In my control theory textbook, it says that when using Laplace transforms in order to more easily handle ODEs, you always assume that [itex]y^{(n)}(0)=0[/itex] for all n, in order to get the manageable form [itex]Y(s) = G(s)U(s)[/itex]. All the calculations in my book assume this. As I understood it, if you do not assume this, the whole point of using Laplace transforms is lost, because then you cannot easily multiply the different boxes around the feedback loop, for instance.

But what I haven't understood yet is whether the state space representation is limited in the same way (i.e., that every one of the initial values must be set to 0), or whether you are instead allowed to use an arbitrary initial state vector X(t_0), such as the one I used in an earlier message above?
 
  • #6
Mårten said:
But, hm... In my control theory textbook, it says that when using Laplace transforms in order to more easily handle ODEs, you always assume that [itex]y^{(n)}(0)=0[/itex] for all n, in order to get the manageable form [itex]Y(s) = G(s)U(s)[/itex]. All the calculations in my book assume this.

well, sure. otherwise you get [itex]Y(s) = G(s)U(s)[/itex] + some other stuff.

As I understood it, if you do not assume this, the whole point of using Laplace transforms is lost, because then you cannot easily multiply the different boxes around the feedback loop, for instance.

But what I haven't understood yet is whether the state space representation is limited in the same way (i.e., that every one of the initial values must be set to 0), or whether you are instead allowed to use an arbitrary initial state vector X(t_0), such as the one I used in an earlier message above?

sure, if you want to represent the (transformed) output as only the transfer function times a single input, then you have to assume that the system is completely "relaxed" at time 0. but you can model a system in terms of only the input and output (and their initial conditions) with the Laplace Transforms of the input and output. but it won't be a simple transfer function.

the main difference between this and the state-space model is that the state-space model is trying to model what is going on inside the box. it is more general than the simple input-output description of the system.
 
  • #7
rbj said:
but you can model a system in terms of only the input and output (and their initial conditions) with the Laplace Transforms of the input and output. but it won't be a simple transfer function.

Can you make some kind of distinction between simple and complicated transfer function?
 
  • #8
misgfool said:
Can you make some kind of distinction between simple and complicated transfer function?

in the present context,

[tex]\left(s^2 Y(s) - s y(0) - y'(0)\right) + 4\left(s Y(s) - y(0)\right) + 2 Y(s) = U(s)[/tex]

i meant what would happen if you solved for Y(s). it is not only a function of U(s), but also a function of the two initial conditions.

[tex]Y(s) = \frac{1}{s^2 + 4s + 2}U(s) \ + \ \frac{s+4}{s^2 + 4s + 2} y(0) \ + \ \frac{1}{s^2 + 4s + 2} y'(0) [/tex]

the simple one (assuming a completely relaxed system at t=0) would be

[tex]Y(s) = \frac{1}{s^2 + 4s + 2}U(s) [/tex]

or

[tex]\frac{Y(s)}{U(s)} \equiv G(s) = \frac{1}{s^2 + 4s + 2}[/tex]
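For what it's worth, the algebra above can be checked mechanically; a small sketch assuming SymPy is installed:

[code]
import sympy as sp

s = sp.symbols('s')
U, Y, y0, yp0 = sp.symbols('U Y y0 yp0')   # y0 = y(0), yp0 = y'(0)

# the transformed ODE with the initial-condition terms kept in place
eq = sp.Eq((s**2*Y - s*y0 - yp0) + 4*(s*Y - y0) + 2*Y, U)

Ysol = sp.solve(eq, Y)[0]
print(sp.simplify(Ysol))
# equals (U + (s + 4)*y0 + yp0) / (s^2 + 4s + 2), matching the three terms above
[/code]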
 
  • #9
rbj said:
the main difference between this and the state-space model is that the state-space model is trying to model what is going on inside the box. it is more general than the simple input-output description of the system.

Okey, I think I got that: it also models what happens inside the box, as you say.

But, still, I haven't gotten any answer to the question about the limitations on the initial values in the state space representation. There are no such limitations in the state space representation, are there? You can choose the initial values to be whatever you like, can't you?
 
  • #10
Mårten said:
But, still, I haven't gotten any answer to the question about the limitations on the initial values in the state space representation. There are no such limitations in the state space representation, are there? You can choose the initial values to be whatever you like, can't you?

and you can't with the input-output model?

with either model, you can put in whatever initial values you want. but the input-output model only allows you to put in initial values for the output(s) and the various derivatives of the output (up to the order of the system). since the input-output model doesn't even think about the internal states, then i guess you can't choose initial values of internal states.
 
  • #11
rbj said:
with either model, you can put in whatever initial values you want. but the input-output model only allows you to put in initial values for the output(s) and the various derivatives of the output (up to the order of the system). since the input-output model doesn't even think about the internal states, then i guess you can't choose initial values of internal states.

Okey, I got it! :smile:

Next thing then is: With the state space model, you have initial values for the internal states, that is X(t_0). But what about for the output, Y(t)? There's no need for initial values there? Sorry, if this is obvious... :blushing:
 
  • #12
Mårten said:
With the state space model, you have initial values for the internal states, that is x(t_0). But what about for the output, y(t)? There's no need for initial values there?

try to reserve capitalized variables for transformed signals and lowercase for time-domain signals: y(t), not Y(t), in the time domain, and Y(s) (or, for the discrete-time Z Transform, Y(z)) in the transform domain. it just keeps things clear.

those initial conditions (for y(t)) are fully determined by the initial conditions for the states in X(t). they cannot be independently specified. and with the state-space model, with all of those states (as many states as the order of the system), you need not and will not have initial conditions for higher derivatives (like you did for y'(0)). just having initial conditions for every element in the X(0) vector is good enough.
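In other words, with the usual output equation the initial output follows directly from the initial state (and the input at t = 0):

[tex]y(0) = C\,x(0) + D\,u(0),[/tex]

so once the initial state vector is specified, there is nothing left to choose for y(0).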
 
  • #13
Before we go on - thank you very much for this personal class in control theory I'm getting here! o:) It helps me a lot!

rbj said:
try to reserve capitalized variables for transformed signals and lowercase for time-domain signals: y(t), not Y(t), in the time domain, and Y(s) (or, for the discrete-time Z Transform, Y(z)) in the transform domain. it just keeps things clear.
Okey, sorry. That was to signal that they could be matrices or vectors. Bold face may be better...

rbj said:
those initial conditions (for y(t)) are fully determined by the initial conditions for the states in X(t). they cannot be independently specified. and with the state-space model, with all of those states (as many states as the order of the system), you need not and will not have initial conditions for higher derivatives (like you did for y'(0)). just having initial conditions for every element in the X(0) vector is good enough.
Okey, I think I got it there. y(0) would just have been a transformation (made by the C-matrix) of the vector x(0).

Then comes the ultimate question: Why not always use the state-space model in preference to the Laplace transform model? What benefits does the Laplace transform model have that the state-space model doesn't?
 
  • #14
Mårten said:
Before we go on - thank you very much for this personal class in control theory I'm getting here! o:) It helps me a lot! Okey, sorry. That was to signal that they could be matrices or vectors. Bold face may be better... Okey, I think I got it there. y(0) would just have been a transformation (made by the C-matrix) of the vector x(0).

yeah, and it would be nice if i practiced what i preach. i should have said x(t) for the state vector instead of X(t).

Then comes the ultimate question: Why not always use the state-space model in preference to the Laplace transform model? What benefits does the Laplace transform model have that the state-space model doesn't?

simplicity. if a system is known to be linear and time-invariant, and if you don't care about what's going on inside the black box, but only about how the system interacts (via its inputs and outputs) with the rest of the world it is connected to, the input-output transfer function description is all that you need. if you have an Nth-order, single-input, single-output system, then 2N+1 numbers fully describe your system, from an input-output POV. with the state variable description, an Nth-order system (single input and output) has [itex]N^2[/itex] numbers just for the A matrix, and 2N+1 numbers for the B, C, and D matrices. so there are many different state-variable systems that have the same transfer function. all of these different systems behave identically with the outside world until some internal state saturates or goes non-linear, and that's where they are modeled differently in practice. you can even have internal state(s) blow up and not even know it (if the system is "not completely observable") until the state(s) that are unstable hit the "rails", the maximum values they can take before clipping or some other non-linearity. when something like this happens, there is pole-zero cancellation, as far as the input-output description is concerned. so maybe some zero killed the pole and they both disappeared in the transfer function, G(s), but inside, that bad pole still exists.
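A small numerical sketch of that non-uniqueness (assuming NumPy and SciPy): take one realization of [itex]G(s) = \frac{1}{s^2+4s+2}[/itex], apply an arbitrary invertible change of state coordinates, and check that the transfer function comes out the same.

[code]
import numpy as np
from scipy.signal import ss2tf

# one realization of G(s) = 1/(s^2 + 4s + 2)
A = np.array([[0.0, 1.0], [-2.0, -4.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# any invertible T gives another realization (T A T^-1, T B, C T^-1, D)
T = np.array([[1.0, 2.0], [3.0, 5.0]])
Ti = np.linalg.inv(T)
A2, B2, C2 = T @ A @ Ti, T @ B, C @ Ti

print(ss2tf(A, B, C, D))      # (numerator, denominator) of G(s)
print(ss2tf(A2, B2, C2, D))   # same G(s), different internal description
[/code]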
 
  • #15
Sorry for late reply, been away for a while...
rbj said:
simplicity. if a system is known to be linear and time-invariant, and if you don't care about what's going on inside the black box, but only about how the system interacts (via its inputs and outputs) with the rest of the world it is connected to, the input-output transfer function description is all that you need. if you have an Nth-order, single-input, single-output system, then 2N+1 numbers fully describe your system, from an input-output POV. with the state variable description, an Nth-order system (single input and output) has [itex]N^2[/itex] numbers just for the A matrix, and 2N+1 numbers for the B, C, and D matrices.
Hm... Still something here that confuses me. Imagine that you have a system described by the ODE [itex]y'' + 4y' + 2y = u[/itex]. It's possible to describe this system with a transfer function [itex]G(s) = \frac{1}{s^2+4s+2}[/itex]. It's also possible to describe this system with a state space representation, constructing the A-matrix and so on. Then the following things are unclear to me:

1) With the state space representation the system is described by [itex]N^2[/itex] numbers in the A-matrix. But most of the numbers in the A-matrix are just zeros and ones. So it seems that the number of significant numbers in the A-matrix is just 2N+1. So then the state space representation is not so much more complicated?

2) If a system is described by the ODE above, how could any internal information be revealed by putting this system into a state space representation, compared to describing it with the transfer function?
 
  • #16
Mårten said:
Sorry for late reply, been away for a while...

Hm... Still something here that confuses me. Imagine that you have a system described by the ODE [itex]y'' + 4y' + 2y = u[/itex]. It's possible to describe this system with a transfer function [itex]G(s) = \frac{1}{s^2+4s+2}[/itex]. It's also possible to describe this system with a state space representation, constructing the A-matrix and so on.

this is true, but while there is really only one G(s) that accurately represents what the ODE says, there are many (an infinite number) of state space representations that will have the same transfer function, G(s). some are completely controllable, some are completely observable, some are both, some are neither completely controllable nor completely observable.

Then the following things are unclear to me:

1) With the state space representation the system is described by [itex]N^2[/itex] numbers in the A-matrix. But most of the numbers in the A-matrix are just zeros and ones. So it seems that the number of significant numbers in the A-matrix is just 2N+1. So then the state space representation is not so much more complicated?

it's that there are many different state space representations (all with different A, B, and C matrices, but i think the D matrix is the same) that have the same effect as far as input and output are concerned. put these different state-variable systems (with identical transfer functions) in a box and draw out the inputs and outputs to the rest of the world and the rest of the world could not tell the difference between them. at least as long as they stayed linear inside.

2) If a system is described by the ODE above, how could any internal information be revealed by putting this system into a state space representation, compared to describing it with the transfer function?

if it stays linear inside and you put it in a black box, you can't tell everything about the internal structure. but if some internal state goes non-linear (perhaps because this internal state blew up because it was unstable), then you can tell something is different on the outside. if the state-variable system is completely observable and you have information about the internal structure (essentially the A, B, C, D matrices), you can determine (or "reveal") what the states must be from looking only at the output (and its derivatives). if it's completely controllable, you can manipulate the input so that the states take on whatever values you want.

this "controllable" and "observable" stuff is in sort of advanced control theory (maybe grad level, even though i first heard about it in undergrad when i learned about the state space representation). i don't know if i can give it justice in this forum. i would certainly have to dig out my textbooks and re-learn some of this again so i don't lead you astray with bu11sh1t. so far, I'm pretty sure I'm accurate about what i said, but i don't remember everything else about the topic.
 
  • #17
Okey, I think I understand now. Correct me if something in the following is wrong.

The expression for G(s) I was talking about above could actually be the result of a lot of boxes, so to speak. That is, G(s) is a black box, and inside that black box there could be several other boxes with signals going in and out, and even feedback loops. Actually, the number of possible configurations inside the black box is infinite. That corresponds to the infinite number of state space representations for G(s).

But let's now say that we know exactly what happens inside the black box. E.g., the black box is the ODE describing a bathtub with water coming in and water going out. Let's say for simplicity, even though this is probably not physically correct, that this bathtub is described by [itex]G(s) = \frac{1}{s^2+4s+2}[/itex]. Then this bathtub system could have just one state space representation, since we know the system perfectly inside. Is that correct? (Here I disregard the fact that you could manipulate the rows of the A-matrix with different matrix operations; it's still really the same, equivalent matrix.)

So in this case, I could just as well solve the matrix equation as solve the Laplace equation; it is just as simple. Except that if I do it the matrix way (i.e., the state space representation way), I could have whatever initial values I like without making things more complicated, which is not the case the Laplace way. Is that correct?
 
  • #18
Okey, since I got no reply to my previous post above, I guess that means everything there was correct...? :rolleyes:

Then to the following question: One of the benefits of Laplace transforms is that you can just multiply boxes in a row to get the transfer function for all the boxes together. In general, it's pretty simple to handle different configurations with boxes in series and in parallel when working with transfer functions.

Is it also possible to put up a state space representation when you have several boxes in series and so on, and how is that done? Maybe that's a drawback of state space representations, that it's not done as easily as with transfer functions?
 
  • #19
Mårten,

i just now saw this thread was left hanging. i just received a few infractions from Evo (from a totally different thread) and i need some time to think about what you were asking in post #17. there is a straight-forward way to go from state-space representation to transfer function. it's in the textbooks but it's pretty easy to derive. of course, there is loss of information (since many different state-space representations can turn into a single transfer-function representation). it's maybe not as straight-forward, but you can combine the states of one box and the states of another box (in series or parallel) into a single box where the number of states is the sum of the two and where the states are defined exactly as they were before.
 
  • #20
rbj said:
i just now saw this thread was left hanging. i just received a few infractions from Evo (from a totally different thread) and i need some time to think about what you were asking in post #17.
I'll be happy for any comments! :smile:

The reason I'm so eager to stick to state space representations is that I would like to be able to handle, and fully understand, systems with multiple inputs/outputs and with arbitrary initial values, and still have pretty simple calculations (for instance, arbitrary initial values are not good for transfer functions because then the simplicity disappears, and multiple inputs/outputs are only possible to handle with state space representations).

rbj said:
there is a straight-forward way to go from state-space representation to transfer function. it's in the textbooks but it's pretty easy to derive. of course, there is loss of information (since many different state-space representations can turn into a single transfer-function representation). it's maybe not as straight-forward, but you can combine the states of one box and the states of another box (in series or parallel) into a single box where the number of states is the sum of the two and where the states are defined exactly as they were before.
Okey. I'll try to do this myself here to see what happens. Imagine two boxes G_1 and G_2 after each other in the following way

u --> [itex]G_1[/itex] --> v --> [itex]G_2[/itex] --> y

where [itex]G_1: a_1v'' + b_1v' + c_1v = u[/itex] and [itex]G_2: a_2y'' + b_2y' + c_2y = v[/itex], i.e. G_1 has u and v as input/output and G_2 has v and y as input/output. For simplicity, [itex]a_1 = a_2 = 1[/itex]. The state space representation for this then becomes (as far as I can see)

[tex]
\left(
\begin{array}{c}
x'_1 = v' \\
x'_2 = v'' \\
x'_3 = y' \\
x'_4 = y'' \\
\end{array}
\right)

=

\left(
\begin{array}{cccc}
0 & 1 & 0 & 0 \\
-c_1 & -b_1 & 0 & 0 \\
0 & 0 & 0 & 1 \\
1 & 0 & -c_2 & -b_2 \\
\end{array}
\right)

\left(
\begin{array}{c}
x_1 \\
x_2 \\
x_3 \\
x_4 \\
\end{array}
\right)

+

\left(
\begin{array}{c}
0 \\
u \\
0 \\
0 \\
\end{array}
\right).
[/tex]

Okey, it seems to work! The output from G_1 becomes the input to G_2 through the number 1 in the lower left corner of the A-matrix (element [itex]a_{41}[/itex], sort of the "chain element" which connects the two boxes).

Now comes the question of how to solve this, ehhh... :yuck: Using eigenvalue techniques or [itex]e^{At}[/itex]-matrices, and so forth, maybe leading to unsolvable nth degree equations (n>4). Or simply using a computer... :confused:
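One way to sanity-check the combined system above (a sketch assuming SciPy, with coefficient values picked arbitrarily just for the test) is to convert the 4-state model back to a transfer function and compare its denominator with the product [itex](s^2 + b_1s + c_1)(s^2 + b_2s + c_2)[/itex]:

[code]
import numpy as np
from scipy.signal import ss2tf

b1, c1 = 4.0, 2.0   # G1: v'' + b1 v' + c1 v = u   (coefficients chosen arbitrarily)
b2, c2 = 3.0, 1.0   # G2: y'' + b2 y' + c2 y = v

# states: x1 = v, x2 = v', x3 = y, x4 = y'
A = np.array([[ 0.0,  1.0,  0.0,  0.0],
              [ -c1,  -b1,  0.0,  0.0],
              [ 0.0,  0.0,  0.0,  1.0],
              [ 1.0,  0.0,  -c2,  -b2]])
B = np.array([[0.0], [1.0], [0.0], [0.0]])
C = np.array([[0.0, 0.0, 1.0, 0.0]])   # the overall output is y = x3
D = np.array([[0.0]])

num, den = ss2tf(A, B, C, D)
print(den)                                        # denominator of the combined G(s)
print(np.convolve([1.0, b1, c1], [1.0, b2, c2]))  # (s^2+b1*s+c1)(s^2+b2*s+c2), should match
[/code]

The numerator comes out as a constant 1, so the series connection has transfer function [itex]G_2(s)\,G_1(s)[/itex], exactly what multiplying the boxes would give.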
 
  • #21
yeah, it's been a good 12 or 13 years since i modeled state space systems, which is why i haven't tried to help. but yeah, computing the discrete state-space matrix from the continuous time transfer function is done with a computer. something like Matlab or Scilab, maybe Octave, should have the functions available. Matlab i know for sure does, because that's what i used to use.
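In Python terms (a rough equivalent of the Matlab workflow rbj mentions, assuming SciPy), going from the continuous-time transfer function to a discrete-time state-space model is only a couple of lines:

[code]
from scipy.signal import tf2ss, cont2discrete

# a state-space realization of G(s) = 1/(s^2 + 4s + 2)
A, B, C, D = tf2ss([1.0], [1.0, 4.0, 2.0])

# zero-order-hold discretization with a (hypothetical) 10 ms sample period
Ad, Bd, Cd, Dd, dt = cont2discrete((A, B, C, D), dt=0.01, method='zoh')
print(Ad, Bd, sep="\n")
[/code]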
 
  • #22
Of course, MATLAB would do the trick! I should know that. :wink:

Okey, I guess I won't get much more out of this particular area. Thanks for all the help! Don't hesitate to comment further on what I've said above.

I've now started a new thread on the topic of frequency response:
https://www.physicsforums.com/showthread.php?t=273010
 
  • #23
well, it requires computing a matrix exponential, doesn't it? iirc, there's really no good way to do that by hand.
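By hand it is indeed painful, but numerically it is routine; a minimal sketch assuming SciPy, using the closed-form response [itex]x(t) = e^{At}x(0) + A^{-1}\left(e^{At} - I\right)Bu_0[/itex] for a constant input [itex]u_0[/itex] (valid here since A is invertible):

[code]
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -4.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
x0 = np.array([[2.0], [7.0]])   # arbitrary nonzero initial state
u0 = 1.0                        # constant (step) input

def x_at(t):
    eAt = expm(A * t)           # the matrix exponential e^{At}
    return eAt @ x0 + np.linalg.solve(A, (eAt - np.eye(2)) @ B) * u0

for t in (0.0, 1.0, 5.0):
    print(t, (C @ x_at(t)).item())   # y(t); tends to the DC gain 1/2 as the ICs die out
[/code]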
 

1. What is control theory?

Control theory is a branch of engineering and mathematics that deals with the analysis and design of systems that can be controlled to achieve a desired behavior. It involves using mathematical models to understand how a system behaves and using this knowledge to design control systems that can manipulate the system's inputs to achieve a desired output.

2. What is the difference between Laplace and state space representation in control theory?

Laplace representation is a mathematical technique that transforms a system's differential equations into the complex frequency domain, making it easier to analyze the system's behavior. State space representation, on the other hand, is a mathematical model that describes a system using a set of first-order differential equations, making it easier to implement and simulate the system on a computer.

3. Which representation is better for control system design?

There is no clear answer to this question as both representations have their strengths and weaknesses. Laplace representation is better for theoretical analysis and design, while state space representation is better for practical implementation and simulation. It ultimately depends on the specific application and the preferences of the designer.

4. Can Laplace and state space representations be used together in control system design?

Yes, it is possible to use both representations together in control system design. In fact, it is common to use Laplace representation for theoretical analysis and state space representation for practical implementation. Many modern control systems use a combination of these two representations for more accurate and efficient design.

5. How do I choose between Laplace and state space representation for my control system?

The choice between Laplace and state space representation depends on the specific requirements and constraints of your control system design. Consider the complexity and nonlinearity of the system, the desired performance and robustness, and the available resources for implementation. Consulting with a control systems expert can also help in making an informed decision.
