Meaning of the s-domain absolute value function

  1. Sep 4, 2006 #1
    I am trying to understand the meaning of the s-domain absolute value function derived from taking the laplace transform of a t-domain function. I know for sure that the real part of the complex frequency in the time domain is the sinusoidal frequency and the imaginary part of the complex frequency in the time domain is the exponential decay frequency. I was able to prove this to myself.

    Now, in the s-domain, it seems like the opposite occurs. It looks to me like the real part is now the exponential decay and the imaginary part is now the sinusoidal frequency. Is my interpretation correct? I was hoping things would be simple and complex frequency could be interpreted in the same way in both the t-domain and the s-domain....I guess I was wrong.
     
  3. Sep 4, 2006 #2
    Also, it seems like the laplace transform shows the character of the time domain function at different frequencies, or at least this is how I am interpreting the meaning of F(s). I was under the impression that the fourier transform did this. But now, it makes more sense that the laplace transform does this. The fourier transform is the integral of all of the different frequencies in the laplace transform function, and this explains why the fourier transform is the inverse of the laplace transform.

    Sorry I am slow with this...I am just learning this stuff as we speak....
     
  4. Sep 4, 2006 #3

    Astronuc

    Laplace and Fourier transforms are ways of looking at a problem in the 'frequency domain'. The Fourier transform domain is a subset of the Laplace transform domain. (I don't know if that's the right way of saying this :uhh: ).

    See - http://en.wikipedia.org/wiki/Laplace_Transform#Formal_definition

    If s = [itex]\sigma\,+\,i\omega[/itex], then setting [itex]\sigma[/itex] = 0 gives the Fourier transform, with s = [itex]i\omega[/itex].

    The Fourier transform is NOT the inverse of the Laplace transform. Both are in the frequency domain. There is an Inverse Laplace Transform and an Inverse Fourier Transform, which take the problem back into the time domain.
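
    To see the sigma = 0 connection numerically, here is a minimal Python/NumPy sketch using an arbitrary example signal f(t) = e^{-2t}u(t), whose Laplace transform is 1/(s + 2) - the Fourier integral should match that expression evaluated at s = i*omega:

[code]
# numerical check: Fourier transform of f(t) = exp(-a t) u(t) versus its
# Laplace transform F(s) = 1/(s + a) evaluated on the imaginary axis s = i*omega
# (a and omega are arbitrary example values)
import numpy as np

a = 2.0
omega = 3.0

t = np.linspace(0.0, 50.0, 200001)   # u(t) removes the negative-time half
dt = t[1] - t[0]
f = np.exp(-a * t)

# Fourier transform integral, F(i*omega) = integral f(t) exp(-i*omega*t) dt
F_fourier = np.sum(f * np.exp(-1j * omega * t)) * dt

# Laplace transform with sigma = 0, i.e. s = i*omega
F_laplace = 1.0 / (1j * omega + a)

print(F_fourier)   # agrees with F_laplace to ~1e-4 (discretization error)
print(F_laplace)   # (0.1538... - 0.2307...j)
[/code]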
     
  5. Sep 4, 2006 #4
    ok, that is actually what I was thinking....

    so to my other question....It looks to me like in the s-domain the real part is now the exponential decay and the imaginary part is now the sinusoidal frequency. Is my interpretation correct? I was hoping things would be simple and complex frequency could be interpreted in the same way in both the t-domain and the s-domain....I guess I was wrong.

    I am still not clear as to what the s-domain represents.

    It also seems that, as you said, the fourier transform is the laplace transform with s = iw.
     
    Last edited: Sep 4, 2006
  6. Sep 4, 2006 #5
    Just answer this.....what does the s-domain transform of a time function describe??? If I transform a time function to the s-domain and plug in a complex frequency and take the modulus, what is the meaning of the resulting number?

    Better yet, what is the meaning of the resulting complex number??
     
  7. Sep 4, 2006 #6
    I want to better understand the s-domain's meaning so that interpreting pole-zero plots is less a game of applying memorized steps and more one of intuition and understanding.
     
  8. Sep 4, 2006 #7

    Gokul43201

    Let me know when you figure this out!

    The pole-zero plot, as advertised, tells you the locations of poles (where the gain goes to infinity) and zeros (where the gain goes to zero) of some transfer function in the s-domain. If the signal is causal, say f(t) = A*exp(ct)*u(t) where u(t) is the Heaviside step function, its transform is simply A/(s-c), with a pole at s=c, and the region of convergence (i.e., the portion of the complex plane where the Laplace Transform exists, or does not blow up) is the right half-plane bounded by the line Re(s)=c. If there are multiple poles, the region of convergence is the intersection of the individual regions of convergence. [I'm guessing you've covered most of this in your classes and I'm only skimming through it. Feel free to make me walk the walk step by step, though I don't promise I can - I've never learnt this stuff formally.]

    Now, the only thing that I've learned to take away from this is a question of determining stability - are there frequencies where the system is unstable? To answer this, what I do is see if the y-axis (or the imaginary axis, or the frequency axis) in the s-plane is included in the region of convergence. If it is, I have a stable system. And that's as much as I have a feel for - but there are folks here, like RBJ or Berkeman (I imagine), who probably eat and breathe this stuff daily.
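
    A minimal sketch of that test in Python (the transfer function below is an arbitrary example, not from the thread): for a causal system the region of convergence lies to the right of the rightmost pole, so the imaginary axis sits inside it exactly when every pole has a strictly negative real part.

[code]
# stability check for a causal LTI system from its pole locations
# H(s) = (s + 1) / (s^2 + 3s + 2)  -- arbitrary example, poles at s = -1, -2
import numpy as np

num = [1.0, 1.0]
den = [1.0, 3.0, 2.0]

zeros = np.roots(num)
poles = np.roots(den)

rightmost = poles.real.max()
print("zeros:", zeros)
print("poles:", poles)
print("jw axis inside the region of convergence (stable):", rightmost < 0)
[/code]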

    Edit: You might have better luck if this were moved to either EE or to Calc & Analysis.
     
    Last edited: Sep 4, 2006
  9. Sep 4, 2006 #8
    Well, I learned a little in circuit analysis, but I've only had one lecture in my controls class, so I am very new to this stuff too. :tongue:
     
  10. Sep 4, 2006 #9
    You will use poles and zeros to design controllers later in your controls course. Techniques like pole placement, etc, help you design filters and controllers that are stable and meet certain specifications such as percent overshoot, settling time, etc.

    When you have a transfer function G(s), for example, you can write it as ZEROS/POLES (that is, zeros over poles). The roots of the denominator are your poles and the roots of the numerator are your zeros.

    Some important things to remember: poles must ALWAYS be on the left-hand side of the complex plane. If you have right-side poles/zeros, your system will be unstable. There are a lot of methods to fix this.

    You generally design and place your controller's poles based on specifications. You will generally be given specs in the time domain such as settling time and percent overshoot - or specs in the frequency domain such as bandwidth requirements and disturbance rejection.

    Complex poles/zeros are fine, and very common as they are related to your specs, but always make sure they are on the left side of the complex plane.
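
    A short Python/SciPy sketch of exactly that bookkeeping (the coefficients of G(s) below are an arbitrary example): read the zeros and poles off the numerator and denominator and check which half of the complex plane they sit in.

[code]
# G(s) = (2s + 4) / (s^2 + 2s + 5)  -- arbitrary example transfer function
import numpy as np
from scipy import signal

num = [2.0, 4.0]
den = [1.0, 2.0, 5.0]

z, p, k = signal.tf2zpk(num, den)   # zeros, poles, gain
print("zeros:", z)                  # [-2.]
print("poles:", p)                  # [-1.+2.j, -1.-2.j]  (left half-plane)
print("any right half-plane poles?", np.any(p.real > 0))
[/code]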

    I hope this clears up some basic ideas.
     
  11. Sep 4, 2006 #10

    rbj

    it's about the best way to say it. the way i like to say it is that the bilateral Laplace Transform (bottom limit of integral is [itex]-\infty[/itex]) is a generalization of the Fourier Transform or that the F.T. is a "degenerate case" of the bilateral L.T. (Fourier was such a degenerate :rolleyes: ) where the real part of s is zero.


    it's a transform, similar to how the logarithm works on (positive) numbers. the logarithm transforms a multiplication problem into one of addition and transforms an exponential power problem into one of multiplication. the L.T. transforms a linear differential equation problem into an algebraic problem of solving a polynomial equation.
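
    for instance (a minimal worked example, assuming zero initial conditions), the first-order differential equation

    [tex] y'(t) + a\, y(t) = x(t) [/tex]

    becomes the algebra problem

    [tex] s Y(s) + a Y(s) = X(s) \quad \Rightarrow \quad Y(s) = \frac{X(s)}{s + a} [/tex]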

    if you want to understand more deeply the sorta pedagogical train of thought, assuming you know your sines and cosines well, start with Fourier Series (particularly the representation with complex exponentials) and then, for a fixed and truncated function (that gets periodically extended so you can use F.S. on it), let the period (which goes from -T/2 to +T/2) go out to infinity. your F.S. becomes a F.T. then try to (easily) compute the F.T. of the (heaviside) unit step function: you'll see you can't get the integral to converge until you add a little [itex]\sigma[/itex] to the [itex]j \omega[/itex]. generalizing that is the L.T.
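
    here's a small numerical sketch of that convergence point (python, with arbitrary omega and truncation times): with sigma = 0 the truncated integral for the unit step keeps wandering as the upper limit grows, while any positive sigma makes it settle to 1/(sigma + j omega).

[code]
# truncated integral of u(t) exp(-(sigma + j*omega) t) over [0, T]
import numpy as np

omega = 1.0

def truncated_integral(sigma, T, n=200000):
    t = np.linspace(0.0, T, n)
    dt = t[1] - t[0]
    return np.sum(np.exp(-(sigma + 1j * omega) * t)) * dt

for T in (10.0, 100.0, 1000.0):
    print(T, truncated_integral(0.0, T))   # sigma = 0: no limit, keeps oscillating
for T in (10.0, 100.0, 1000.0):
    print(T, truncated_integral(0.5, T))   # sigma = 0.5: settles near 1/(0.5 + 1j) = 0.4 - 0.8j
[/code]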

    that's essentially how i understand it on the fundamental level.



    okay, if you have a linear, time-invariant system (the LTI condition is required, if you want to do L.T. to it), purely from the fact that it is linear and time-invariant (forget about Laplace for the time being) the output [itex] y(t) [/itex] of such a system can be computed, in general, from the input [itex] x(t) [/itex] and the system's "impulse response" [itex] h(t) [/itex] from:

    [tex] y(t) = h(t) * x(t) \equiv \int_{-\infty}^{+\infty} x(u) h(t-u) du = \int_{-\infty}^{+\infty} h(u) x(t-u) du [/tex]

    and when you L.T. both sides, you get:

    [tex] Y(s) = H(s) X(s) [/tex]

    and, in the "degenerate case" of the F.T. it's

    [tex] Y(j \omega) = H(j \omega) X(j \omega) [/tex]
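
    a quick numerical check of that convolution-becomes-multiplication fact in the F.T. case (a discrete approximation in python/numpy, with arbitrary random x and h; zero-padded FFTs stand in for the continuous transform):

[code]
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(128)    # arbitrary input
h = rng.standard_normal(32)     # arbitrary impulse response

y_time = np.convolve(x, h)                     # y(t) = h(t) * x(t)

N = len(x) + len(h) - 1                        # pad so the FFT product gives
Y = np.fft.fft(h, N) * np.fft.fft(x, N)        # linear (not circular) convolution
y_freq = np.fft.ifft(Y).real

print(np.max(np.abs(y_time - y_freq)))         # ~1e-13: same result either way
[/code]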

    now, if you were to drive the input of this LTI system with a sinusoid in the form of a complex exponential

    [tex] x(t) = e^{j \omega t} [/tex]

    then, using the convolution integral above, you will see that the output is:

    [tex] y(t) = h(t) * e^{j \omega t} = H(j \omega) e^{j \omega t} [/tex]

    or

    [tex] y(t) = |H(j \omega)| e^{j \arg(H(j \omega))} e^{j \omega t} [/tex]

    or

    [tex] y(t) = |H(j \omega)| e^{j (\omega t + \phi)} [/tex]

    where [tex] \phi \equiv \arg(H(j \omega)) [/tex].

    so [itex]|H(j \omega)|[/itex] is the "gain" of this system (how much it will boost the input sinusoid) and [itex] \phi \equiv \arg(H(j \omega)) [/itex] is the phase shift (how much it will shift the phase of the input sinusoid).
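
    a small sketch of that gain-and-phase-shift property, using the concrete first-order example h(t) = a e^{-at} u(t) (so H(j omega) = a/(a + j omega)); the values of a, omega and the evaluation time are arbitrary:

[code]
# drive h(t) = a*exp(-a*t)*u(t) with x(t) = exp(j*omega*t) and check that the
# convolution output equals H(j*omega)*exp(j*omega*t)
import numpy as np

a, omega, t0 = 2.0, 3.0, 1.7

# y(t0) = integral_0^inf h(u) x(t0 - u) du, truncated and discretized
u = np.linspace(0.0, 40.0, 400001)
du = u[1] - u[0]
h = a * np.exp(-a * u)
x = np.exp(1j * omega * (t0 - u))
y_numeric = np.sum(h * x) * du

H = a / (a + 1j * omega)
y_exact = H * np.exp(1j * omega * t0)

print(y_numeric, y_exact)                      # agree to ~4 decimal places
print("gain:", abs(H), " phase shift:", np.angle(H))
[/code]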

    now here is where the poles and zeros come in. if your LTI system is one where the output [itex] y(t) [/itex] can be defined by a differential equation that is a sum of various derivatives of the output and the input [itex] x(t) [/itex] (including the 0th derivative of [itex] x(t) [/itex]):

    [tex] y(t) = b_0 x(t) + b_1 x'(t) + b_2 x''(t) + ... + b_M x^{(M)}(t) - a_1 y'(t) - a_2 y''(t) - ... - a_N y^{(N)}(t) [/tex]

    where [itex] M \le N [/itex]. that differential equation can be Laplace Transformed into

    [tex] Y(s)= b_0 X(s) + b_1 s X(s) + b_2 s^2 X(s) + ... + b_M s^M X(s) - a_1 s Y(s) - a_2 s^2 Y(s) - ... - a_N s^N Y(s)[/tex]

    and solved:

    [tex] Y(s)= \frac{b_0 + b_1 s + b_2 s^2 + ... + b_M s^M }{1 + a_1 s + a_2 s^2 ... + a_N s^N} X(s) = H(s) X(s) [/tex]

    and factored:

    [tex] H(s) = \frac{Y(s)}{X(s)} = \frac{b_0 + b_1 s + b_2 s^2 ... + b_M s^M}{1 + a_1 s + a_2 s^2 ... + a_N s^N} = \frac{b_M}{a_N} \ \frac{(s-z_1)(s-z_2)...(s-z_M)}{(s-p_1)(s-p_2)...(s-p_N)} [/tex]

    now the gain:

    [tex] |H(j \omega)| = \frac{|b_M|}{|a_N|} \ \frac{|j \omega-z_1|\ |j \omega-z_2| \ ... |j \omega-z_M|}{|j \omega-p_1|\ |j \omega-p_2|\ ...|j \omega-p_N|} [/tex]

    now, here is what's happening: to determine the "frequency response" of your system (how much gain there is for any general frequency [itex] \omega [/itex]), you are measuring the distance that the point [itex] s = j \omega [/itex] on the imaginary axis is from each zero [itex] z_m [/itex] (and multiplying those distances together) and dividing by the distances that the same point [itex] s = j \omega [/itex] is from all of the poles [itex] p_n [/itex] (dividing by the product of all of those distances). there is also a constant gain factor [itex]\frac{|b_M|}{|a_N|}[/itex] that i don't wanna think about.

    so, as your frequency starts out at zero and you increase it, your [itex] j \omega [/itex] point starts out at the origin [itex] s = 0 [/itex] and moves up on the imaginary axis. as [itex] s = j \omega [/itex] gets close to any zero [itex] z_m [/itex], the gain of your system will decrease (because that distance is decreasing and you are multiplying by it). as [itex] s = j \omega [/itex] gets close to any pole [itex] p_n [/itex], the gain of your system will increase (because that distance is decreasing and you are dividing by it).


    that is one salient meaning of how we think of poles and zeros.
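
    here's a small numerical sketch of that distances picture (the transfer function below is an arbitrary example): evaluate |H(j omega)| once from the polynomials and once from the product of distances to the zeros divided by the product of distances to the poles, and the two agree.

[code]
import numpy as np
from scipy import signal

num = [1.0, 3.0, 2.0]     # (s + 1)(s + 2): zeros at -1, -2
den = [1.0, 2.0, 10.0]    # poles at -1 +/- 3j
z, p, k = signal.tf2zpk(num, den)

omega = 2.5
s = 1j * omega            # a point on the imaginary axis

H_poly = np.polyval(num, s) / np.polyval(den, s)
H_geom = abs(k) * np.prod(np.abs(s - z)) / np.prod(np.abs(s - p))

print(abs(H_poly))        # gain from the rational function
print(H_geom)             # same number, from distances to zeros and poles
[/code]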

    if you express the transfer function in terms of partial fraction expansion,

    [tex] H(s) = \frac{\frac{b_M}{a_N} (s-z_1)(s-z_2)...(s-z_M)}{(s-p_1)(s-p_2)...(s-p_N)} = \frac{A_1}{s-p_1} + \frac{A_2}{s-p_2} + ... + \frac{A_N}{s-p_N} [/tex]

    then the impulse response of the system is:

    [tex] h(t) = \left[A_1 e^{p_1 t} + A_2 e^{p_2 t} +... +A_N e^{p_N t} \right] u(t) [/tex]

    (where [itex]u(t)[/itex] is the unit step function) and you can then figure out that if any of the poles, [itex] p_n [/itex], move into the right half plane, that is:

    [tex] \mbox{Re}\{p_n\} \ge 0 [/tex]

    you will get an exponentially increasing term in the impulse response

    [tex] A_n e^{p_n t} = A_n e^{\mbox{Re}\{p_n\} t} e^{j\mbox{Im}\{p_n\} t} [/tex]

    which blows up and your system is unstable.

    that's the other salient meaning of poles.
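
    a sketch of that partial-fraction view in python/scipy (an arbitrary stable example): expand H(s) into the A_n/(s - p_n) terms, rebuild h(t) as the sum of A_n e^{p_n t}, compare it against scipy.signal.impulse, and check the poles' real parts for stability.

[code]
import numpy as np
from scipy import signal

num = [1.0, 3.0]          # H(s) = (s + 3) / (s^2 + 3s + 2)
den = [1.0, 3.0, 2.0]     # poles at -1 and -2, both in the left half-plane

A, p, _ = signal.residue(num, den)    # residues A_n and poles p_n

t = np.linspace(0.0, 5.0, 500)
h_pfe = sum(An * np.exp(pn * t) for An, pn in zip(A, p)).real

t_imp, h_imp = signal.impulse((num, den), T=t)
print(np.max(np.abs(h_pfe - h_imp)))          # tiny: the two agree
print("any pole with Re >= 0 (unstable)?", np.any(p.real >= 0))
[/code]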
     
    Last edited: Sep 4, 2006
  12. Sep 4, 2006 #11
    Man, thanks a lot rbj. Everything you said before and after this part of your post I knew, but this last insight you provided helped a lot. Thanks.
     
    Last edited: Sep 4, 2006
  13. Sep 4, 2006 #12

    rbj

    edit: looks like our edits "crossed in the mail". so it was the influence of poles and zeros on frequency response you were wondering about. (this is not often well taught in an undergrad linear systems course.) you can also come up with a corresponding tidbit regarding phase. the angles of the same vectors that connect your zeros (and poles) to [itex] j \omega [/itex] also add (and subtract) to give your phase response [itex] \phi( \omega ) [/itex]. same song-and-dance.

    yer welcome. there were two "insights" that you quoted that are different. one is that as [itex] j \omega [/itex] gets close to a pole, you get a resonance at that frequency. the other has to do with why systems go unstable as the poles move into the right half-plane. dunno which you mean. but it doesn't matter.

    BTW, as you get into discrete-time systems ("digital filters") all of this is applicable but you replace convolution integral with convolution summation, s with z, Laplace Transform with Z Transform, and the [itex] j \omega [/itex] axis with the unit circle [itex] e^{j \omega} [/itex]. but all of that other stuff (distances to poles and zeros, partial fraction expansion, etc.) is done just the same and you get the same or corresponding results.
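
    a small sketch of that discrete-time version (arbitrary filter coefficients): evaluate H(z) on the unit circle with scipy.signal.freqz and again from the distances to the poles and zeros of H(z), and the magnitudes come out the same.

[code]
import numpy as np
from scipy import signal

b = [0.2, 0.4, 0.2]       # numerator of H(z), arbitrary example
a = [1.0, -0.6, 0.3]      # denominator (poles inside the unit circle -> stable)
z, p, k = signal.tf2zpk(b, a)

w, H = signal.freqz(b, a, worN=8)     # H(e^{jw}) at a few frequencies

for wi, Hi in zip(w, H):
    zi = np.exp(1j * wi)              # point on the unit circle
    H_geom = abs(k) * np.prod(np.abs(zi - z)) / np.prod(np.abs(zi - p))
    print(abs(Hi), H_geom)            # magnitudes match
[/code]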
     
    Last edited: Sep 4, 2006
  14. Sep 4, 2006 #13
    It seems the biggest problem I had in seeing the connections was the chronology in which the fourier transform and the laplace transform were introduced in my curriculum.

    I was introduced to the laplace transform first in circuit analysis (well, really, first in DE) because it helps in solving the complicated differential equations that arise in DC circuit analysis. It wasn't until quite a while later that I was introduced to fourier series and transforms. Then, all of a sudden, out of nowhere it seems, they tell you that the fourier transform is like a slice of the s-domain without explaining how the fourier transform connects to the big picture in the s-domain world or how the fourier transform relates to the laplace transform. The tools are not introduced in a logical order so that the connections between the tools are clear.

    It seems to me that the fourier series should be introduced first, and then the fourier transform, which is like a fourier series except that instead of discrete line-spectrum frequencies it has a continuous spectrum of frequencies, and the "fourier coefficients" become a continuous function of omega describing the amplitude of the sines and cosines at all frequencies. The fourier series and transform should be presented in trig form first, and then the complex form should be derived.

    Once this is done, then the laplace transform should be presented as a means of avoiding the singularities that arise.

    And, my original statement holds about the role reversal of the real part of complex frequency and the imaginary part when going from the time domain to the s-domain. It seems that in the time domain, the real part of freq is the sinusoidal frequency of the waveform and the imaginary part of the freq is the decay frequency of the waveform. I can prove this. And, by observation, the opposite is the case in the s-domain. Is this observation true? Can someone explain this phenomenon?
     
  15. Sep 4, 2006 #14
    I found this to be a very informative read.
     
  16. Sep 4, 2006 #15
    Maxwell, what is the problem with having a zero in the right half plane? I can understand why it's a problem to have a pole in the right half, because then you have an increasing exponential attached to a sinusoid of a certain frequency, which shows up with very large amplitude in the spectrum, but what's wrong with a zero over there?
     
  17. Sep 4, 2006 #16
    You WANT your zeros in the left half plane. Sorry for not making that clear. If you have right hand zeros, you need to use something called the Diophantine equation to fix that.
     
  18. Sep 4, 2006 #17
    oh, also, is a digital control systems class the same as a digital signal processing class?
     
  19. Sep 4, 2006 #18
    Nope. Digital controls are usually appended onto each controls class. For example, in my Classical Controls class we covered Digital Control systems as the last unit. At the end of my Modern Controls class, we applied all the methods we learned to Digital Control. The reason for this is that in Digital Controls we basically use the same methods except for one major difference -- we use something called a Z-transform. There are other small differences, but those are covered as well.

    You will most likely see digital control systems at the end of the class you are currently taking.

    Digital Signal Processing is a different, although closely related, field. There are a lot of DSP classes, but in an intro class you'd see things like transversal and recursive filters, signal detection in different scenarios, and different types of signal/noise analysis.
     
  20. Sep 4, 2006 #19

    yeah, he said we will do some digital control systems at the end. Hmmm....I don't think my school offers a digital signal processing course. Would this be a good directed study to do for someone interested in controls?
     
  21. Sep 4, 2006 #20
    It definitely would. An intro DSP class would cover a lot of filter design, and that's important for a control systems engineer to know. There is cross-over between the two fields. Intro DSP classes would be a good topic for anyone to study -- not just for someone who wants to go into more advanced fields of DSP.
     