I was reading BC Kuo's Automatic Control Systems where I came across a solved problem (page 369 of 7th edition) regarding velocity control. I have a problem understanding how the steady state error has been computed. The original problem and its solution as given in the book are quoted below.

Let the feed-forward transfer function be

[tex]G(s) = \frac{1}{s^2(s+12)}[/tex]

and the feedback transfer function be

[tex]H(s) = K_{t}s[/tex]

where [itex]K_{t}[/itex] is the tachometer constant.
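In case the configuration matters, what I am assuming is the standard single-loop arrangement, so that

[tex]\frac{Y(s)}{R(s)} = \frac{G(s)}{1 + G(s)H(s)}[/tex]

Please correct me if the book actually uses a different arrangement (for example an inner tachometer loop with an additional unity outer feedback), because that might already be the source of my confusion.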

I am not clear about how the steady-state error has been computed here. I understand that the dominating terms in [itex]y(t)[/itex] as [itex]t\rightarrow \infty[/itex] are the linear term and the constant term, but why does the limit of [itex]0.1t - y(t)[/itex] represent the steady-state error? Where does the reference signal [itex]0.1t[/itex] come from?
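Just to make my reading explicit, this is the definition I have been trying to apply, assuming the reference really is the ramp [itex]r(t) = 0.1t[/itex] (which is the part I cannot justify from the problem statement):

[tex]e_{ss} = \lim_{t\rightarrow\infty}\left[r(t) - y(t)\right] = \lim_{t\rightarrow\infty}\left[0.1t - y(t)\right][/tex]

So the limit only makes sense to me if [itex]0.1t[/itex] is indeed the reference signal.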

I haven't been able to figure this out yet, so I would be really grateful if someone could look into it and point out the mistake in my understanding/interpretation.

Also, regarding the later steps of the solution: steady state means that the change in the function goes to zero as [itex]t \rightarrow \infty[/itex], right? So if the function has a steady state (its existence must be checked, of course), its derivative should go to zero, and taking the derivative corresponds to multiplying by [itex]s[/itex] in the Laplace domain, right?
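What I have in mind are the two standard Laplace results (assuming [itex]y(0)[/itex] is the initial value and that the limits exist, i.e. all poles of [itex]sE(s)[/itex] lie in the left half-plane):

[tex]\mathcal{L}\left\{\frac{dy(t)}{dt}\right\} = sY(s) - y(0), \qquad \lim_{t\rightarrow\infty} e(t) = \lim_{s\rightarrow 0} sE(s)[/tex]

Is this the right way to connect "the change goes to zero" with "multiplying by [itex]s[/itex]"?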