Hello, I'm a mechanical engineering major/electrical engineering minor university student. A huge part of mechanical and electrical engineering is control theory, which I enjoy immensely. I have yet to take any ME control electives, but I took a systems/stability course as my last EE elective. The course consisted of a review of Laplace transforms, poles and zeros of transfer functions and how they relate to stability, the Routh criterion, forced and natural response, state space, etc. I understand mathematically why a critically damped step response is the ideal situation, but the one thing that always left me hanging was that I never understood how to interpret all of this theory physically. My professor always made a joke about systems that weren't BIBO stable, saying that "your plane has crashed" due to failure of the system. My question is: what is the significance of poles, stability, step response, damping, etc. in the physical world? I understand how all of the theory works, but I cannot explain how to use it. Thank you! KiltedEngineer
What a 'critically damped' system means depends on context. For a mathematician, 'critically damped' usually means a unity damping factor, i.e. no overshoot in its step response. For a control engineer, it often means a damping factor of ##\zeta = \cos(45^\circ)##. In either case, what's "ideal" depends on the application. You probably wouldn't want the position servo for a surgical robot to have any overshoot.

There are good reasons why we're often concerned with the step response of a system. Foremost, a step signal often corresponds to the type of real-world signals your system will be exposed to, so by considering the step response of your system model, we're really just simulating what will happen if we apply such a signal to your physical system. It's also a mathematically "simple" signal, so it helps to make the analysis manageable.

As an example, consider an industrial robot (let's assume it only has one degree of freedom): The transfer function ##H(s) = \frac{C(s)}{R(s)}## gives the dynamic relationship between a position command input ##R(s)## and the position response of the tool point of the robot ##C(s)##. If you primarily just want the robot to move its tool point from one position to another as quickly as possible, your command signal ##R(s)## would be well modeled as a step signal. Then it's your job as a control engineer to make certain that its step response adheres to whatever performance specifications you've agreed to.

Any help? Maybe you could try to narrow it down to a couple of specific questions. That would make it easier to see where you're struggling.
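To make the ##\zeta = 1## vs. ##\zeta = \cos(45^\circ)## distinction concrete, here is a small pure-Python sketch using the standard closed-form step response of the second-order system ##H(s) = \omega_n^2/(s^2 + 2\zeta\omega_n s + \omega_n^2)## (the value ##\omega_n = 1## rad/s is an arbitrary choice for illustration):

```python
import math

def step_response(zeta, wn, t):
    """Unit-step response of H(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2)."""
    if zeta == 1.0:
        # Critically damped: c(t) = 1 - e^(-wn*t) * (1 + wn*t)
        return 1.0 - math.exp(-wn * t) * (1.0 + wn * t)
    # Underdamped (0 < zeta < 1)
    wd = wn * math.sqrt(1.0 - zeta**2)   # damped natural frequency
    phi = math.acos(zeta)
    return 1.0 - (math.exp(-zeta * wn * t) / math.sqrt(1.0 - zeta**2)) \
               * math.sin(wd * t + phi)

wn = 1.0                               # arbitrary natural frequency [rad/s]
ts = [i * 0.01 for i in range(2001)]   # 0 .. 20 s
peak_crit = max(step_response(1.0, wn, t) for t in ts)
peak_z45  = max(step_response(math.cos(math.radians(45)), wn, t) for t in ts)
print(f"zeta = 1 peak:         {peak_crit:.4f}")  # stays below 1: no overshoot
print(f"zeta = cos(45°) peak:  {peak_z45:.4f}")   # ~1.0432: about 4.3% overshoot
```

The 4.3% figure comes from the standard overshoot formula ##M_p = e^{-\pi\zeta/\sqrt{1-\zeta^2}}##, which equals ##e^{-\pi}## exactly when ##\zeta = \cos(45^\circ)##: a small, fast-settling overshoot that many control engineers consider a good speed/damping trade-off.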
Thanks, you have helped somewhat, but I still have some unanswered questions. Are the functions ##C(s)## and ##R(s)## position functions, one being the desired input while the other is the real output? So if the system were, say, a dental handpiece, the input function would be the foot pedal while the desired output would be the drill activating as quickly as possible and not overshooting the desired speed? As for poles and stability, what exactly does it mean to be unstable? Will the system not respond properly? And how do we know how to derive these control functions for a particular system? Thanks, KiltedEngineer
Sure. ##R(s)## could be the speed command input to your drill and ##C(s)## would be its speed response. Ideally, you'd want ##C(s) = R(s)## such that ##H(s) = 1##, but that's not physically possible due to the dynamics of your dental drill. In other words, the speed response of your drill has some sort of time dependency that makes it impossible for it to perfectly track your command signal.

Technically, instability means that ##c(t)##, the inverse Laplace transform of ##C(s)##, will have a term in it, for any input signal ##r(t)##, that grows exponentially as ##t \rightarrow \infty##. For your drill, it would mean that, the instant it gets any sort of input from your foot pedal (even electrical noise), its speed would start increasing uncontrollably until it reaches its physical limits. That could easily happen, for instance, if you have a large delay in your feedback path, e.g.:

1. You command your controller to increase the speed of the drill to 10000 RPM.
2. The controller starts increasing the voltage across the DC motor powering the drill.
3. Due to the feedback delay, as far as the controller knows, nothing is happening to the drill speed even though the voltage across the DC motor is steadily increasing.
4. The drill speed massively overshoots the 10000 RPM mark since the controller was unaware of the delayed effect of its action.
5. An even worse counter-reaction occurs since the controller now sees this massive overshoot and tries to correct it with a large decrease in DC motor voltage.
6. The drill speed oscillates with ever-increasing amplitude (within the physical limits of the system).

You might be able to compensate for such a delay, but it often spells doom for control system performance.

There are basically two methodologies for deriving ##H(s)## for your system:

1. Start from first principles. Use classical mechanics, circuit analysis, etc. to derive the governing differential equations for your system.
2.
Measure input and output data sets of your system and use system identification to fit the response of a chosen model to your experimental data. You often start with (1) and use (2) to find values for unknown parameters of your derived model. System identification is itself a very large field in applied mathematics.
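The runaway scenario described above can be reproduced with a toy simulation: a proportional controller driving an integrator plant, once with instantaneous feedback and once with a few samples of measurement delay. All numbers here are made up for illustration; this is not a real drill model.

```python
# Toy discrete-time speed loop: proportional controller, integrator plant,
# with an optional measurement delay of `delay` samples.
# (Hypothetical gains and setpoint -- not a real drill model.)

def simulate(delay, gain=0.5, r=10000.0, steps=200):
    x = [0.0]                        # speed history, starts at rest
    for k in range(steps):
        # Controller sees the speed measured `delay` samples ago
        measured = x[k - delay] if k >= delay else 0.0
        u = gain * (r - measured)    # proportional control action
        x.append(x[k] + u)           # integrator plant: speed accumulates
    return x

no_delay   = simulate(delay=0)
with_delay = simulate(delay=5)
print(f"no delay, final error:       {abs(10000.0 - no_delay[-1]):.2e}")
print(f"5-sample delay, final error: {abs(10000.0 - with_delay[-1]):.2e}")
```

With no delay this loop converges cleanly to the 10000 RPM setpoint, but with a five-sample delay the same gain produces an oscillation whose amplitude grows without bound, exactly the "ever-increasing amplitude" behavior in the list above. Mathematically, the delay has pushed a closed-loop pole outside the stable region.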
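Here is a minimal sketch of methodology (2): generate synthetic input/output "measurements" from an assumed discrete first-order model ##y[k+1] = a\,y[k] + b\,u[k]## and recover its parameters by least squares. Real system-identification tools are far more sophisticated (noise models, model-order selection, validation), but the core idea of fitting a model to data looks like this:

```python
# Tiny system-identification sketch: fit y[k+1] = a*y[k] + b*u[k]
# to input/output data via least squares (hypothetical, noiseless data).

# Synthetic "measurements" from a known plant (a = 0.9, b = 0.2), step input.
a_true, b_true = 0.9, 0.2
N = 50
u = [1.0] * N
y = [0.0]
for k in range(N - 1):
    y.append(a_true * y[k] + b_true * u[k])

# Least squares for (a, b): minimize sum_k (y[k+1] - a*y[k] - b*u[k])^2.
# Normal equations:  [Syy Syu] [a]   [ry]
#                    [Syu Suu] [b] = [ru]
Syy = sum(y[k] * y[k] for k in range(N - 1))
Syu = sum(y[k] * u[k] for k in range(N - 1))
Suu = sum(u[k] * u[k] for k in range(N - 1))
ry  = sum(y[k] * y[k + 1] for k in range(N - 1))
ru  = sum(u[k] * y[k + 1] for k in range(N - 1))
det = Syy * Suu - Syu * Syu
a_hat = (ry * Suu - ru * Syu) / det
b_hat = (Syy * ru - Syu * ry) / det
print(f"estimated a = {a_hat:.3f}, b = {b_hat:.3f}")  # recovers a = 0.900, b = 0.200
```

Because the data here is noiseless and generated by the model itself, least squares recovers the true parameters exactly; with real measurements you would get estimates, and you'd validate them against a data set not used for fitting.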