
Laplace transform, sum of dirac delta

  1. Nov 14, 2012 #1
    1. The problem statement, all variables and given/known data

    2. Relevant equations

    I really wish they existed in my notes! *cry*.

All I can think of is that integrating, or in other words summing, the Dirac delta functions over all t would be infinite? Nonetheless the Laplace transform must exist, since it's asked for in the question, and I don't know what formulas to use...

    3. The attempt at a solution

Well the Z transform of the given signal is Z{1, 1, 1, 1, ...} = z/(z − 1). (I think)

But the Z transform and the Laplace transform are two different things.

Also it mentioned poles in the question... I have no idea what it is talking about. I am so frustrated that my university doesn't provide me with enough material to learn what they ask for.

    Thanks a lot for any help it would be much appreciated.
  3. Nov 14, 2012 #2
    The area under a delta function is 1. So if you integrate across a delta function, you will get 1 where the delta occurs. In an integral containing deltas, the deltas will be picking out values of the integrand at specific instants.

The Dirac delta function is defined in terms of a limiting process. One possibility is a rectangle whose width approaches zero but whose area is kept constant at one. This is why the delta function has the area property mentioned above.
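A quick numeric sketch of this limiting picture (the test function, the width w, and the point a below are my own illustrative choices, not from the question):

```python
# Approximate delta(t - a) by a narrow rectangle of width w and height 1/w,
# so its area stays 1 no matter how small w gets.
def rect_delta(t, a, w):
    return 1.0 / w if a <= t < a + w else 0.0

def integrate(f, lo, hi, n=200000):
    # midpoint rule
    h = (hi - lo) / n
    return sum(f(lo + (k + 0.5) * h) for k in range(n)) * h

w, a = 1e-3, 2.0
# area under the approximate delta stays 1
area = integrate(lambda t: rect_delta(t, a, w), 0.0, 5.0)
# sifting property: integrating f(t) * delta(t - a) picks out f(a)
sift = integrate(lambda t: t ** 2 * rect_delta(t, a, w), 0.0, 5.0)
```

As w shrinks, `sift` approaches f(a) = a² exactly, which is the "picking out values of the integrand" behaviour described above.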

There is a connection, just like there is a connection between the Laplace and Fourier transforms and between the Fourier and Z transforms. This question is poking at it, and this relationship will be explored in the course you are taking or in a related one.

    They are probably challenging you a little ahead of covering the material. It's good to exercise the thinking muscle :)

When you integrate that sequence of deltas inside the Laplace integral, you will end up with an infinite sum of exponentials. The hint provided suggests finding a closed form for that sum, and then I think you will see the poles they are talking about :)
  4. Nov 14, 2012 #3
Thanks, that was a great tip. So far I have figured out that the Laplace transform is the sum of e^(-snT) for n = 0 to infinity (hopefully). However, I went to Wikipedia's article on geometric series searching for the formula of this sum but I couldn't find it. Could you possibly help me with the sum? Thanks a lot :)
  5. Nov 14, 2012 #4
I found the final answer to be e^(sT) / (e^(sT) − 1). Is that correct?
  6. Nov 15, 2012 #5
    Yes that looks right. Next you need to locate the poles.
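As a sanity check on that closed form, here is a quick numeric sketch (T and s below are arbitrary choices of mine, with Re(s) > 0 so the geometric series converges):

```python
import cmath

# Partial sum of the Laplace transform of the delta train:
#   sum over n >= 0 of e^(-snT)  ->  1 / (1 - e^(-sT))  =  e^(sT) / (e^(sT) - 1)
T = 0.5
s = 1.0 + 2.0j                     # any s with Re(s) > 0 works
partial = sum(cmath.exp(-s * n * T) for n in range(2000))
closed = cmath.exp(s * T) / (cmath.exp(s * T) - 1)
# partial and closed agree to high precision
```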


    You can see some connections between the Laplace transform and the Z transform now.

A continuous time signal f(t), sampled and Laplace transformed, is equivalent to taking the Laplace transform of f(t)×i_T(t) (i_T is the sum of deltas in the question). This ends up being the infinite summation:

f(0) + f(T)e^(-sT) + f(2T)e^(-2sT) + ... + f(nT)e^(-nsT) + ...

    The Z transform of the same sampled signal is:

f(0) + f(T)z^(-1) + f(2T)z^(-2) + ... + f(nT)z^(-n) + ...

You should notice you can get the Laplace transform of the sampled signal from the Z transform by substituting z = e^(sT).

In fact you did this once already... You found the Z transform of a sampled step function starting at time = 0 in your first post: z/(z − 1). If you were to sample a step function in the continuous time domain, you would get a sequence of deltas. You just found the Laplace transform of that bunch of deltas to be e^(sT)/(e^(sT) − 1).
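The same z = e^(sT) substitution can be sketched numerically for another concrete sampled signal (I'm using f(t) = e^(-at) here, with a, T, s chosen arbitrarily for illustration):

```python
import cmath

# f(t) = e^(-a t) sampled every T gives f(nT) = (e^(-aT))^n, whose
# Z transform is the standard geometric-series result z / (z - e^(-aT)).
a, T = 2.0, 0.1
s = 1.0 + 3.0j                      # any s with Re(s) > -a converges

# Laplace transform of the sampled signal, summed directly:
direct = sum(cmath.exp(-a * n * T) * cmath.exp(-s * n * T)
             for n in range(5000))

# Z transform evaluated at z = e^(sT):
z = cmath.exp(s * T)
via_z = z / (z - cmath.exp(-a * T))
# direct and via_z agree: the substitution recovers the Laplace transform
```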

Given the Laplace transform of a continuous time transfer function, you know that setting s = jw gives you the frequency response of the system at w. Since z = e^(sT) gets you the Laplace transform of a sampled function from the Z transform, you can substitute z = e^(jwT) into a Z transform transfer function directly to find the frequency response to real time sinusoids of frequency w.
    Last edited: Nov 15, 2012
  7. Nov 15, 2012 #6
I see, that was useful to know for the later questions as well, and for my understanding. Thanks again :) Would it be right to say that the poles are those values of s such that e^(sT) = 1?
  8. Nov 15, 2012 #7
    yep! poles
  9. Nov 15, 2012 #8
    Here's another task that I got some questions about. Any help would be appreciated!

    1. The problem statement, all variables and given/known data


    2. Relevant equations

    formula of convolution

    http://math.fullerton.edu/mathews/c2003/LaplaceConvolutionMod.html [Broken]

and the Laplace transform of a convolution equals the product of the Laplace transforms of the two functions involved.

    3. The attempt at a solution


I think this answer should be right based on the definition of the Laplace transform, but the question asks me to use the convolution theorem, which I don't know how to apply, and it also implies that the Laplace transform of f(t) (which would be F(s)) should enter the equation somewhere. Also I don't know how to find the period of this signal.
    Last edited by a moderator: May 6, 2017
  10. Nov 15, 2012 #9
    Now they want you to find the frequency content of the sampled signal by setting s=jw in the Laplace transform of the signal.

    Your result:

L(sampled f) = Σ f(nT) e^(-snT)

    is correct but it is not a convenient form to find the magnitude of the signal as a function of frequency.

    How would you find the magnitude at a specific frequency w using this representation?

M(w) = | (Σ f(nT) e^(-snT)) at s = jw |
    = | Σ f(nT) [cos(wnT) − j sin(wnT)] |
    = | Σ [f(nT) cos(wnT)] − j Σ [f(nT) sin(wnT)] |

    (^ add up the real and imaginary parts separately)

    = √( [Σ f(nT) cos(wnT)]² + [Σ f(nT) sin(wnT)]² )

What do you know about each of these terms in the square root? Consider a Fourier series representation of a periodic signal... it's a sum of harmonically related cosines, starting at the fundamental frequency. Each of those summations is therefore periodic (you will have to do some mental gymnastics here: the role of the continuously varying time variable t in the Fourier series you normally see is played by w in the equations above, so the sums above are periodic in w with period 2π/T). The square of a periodic signal is still periodic. The sum of two periodic signals of the same period is still periodic with the same period. And finally, the square root of a periodic signal is still periodic.
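That periodicity in w can be sketched numerically with a short burst of samples (the sample values, T, and w0 are arbitrary illustrative choices of mine):

```python
import cmath
import math

# The magnitude spectrum of a sampled signal, |sum f(nT) e^(-jwnT)|,
# is unchanged when w is shifted by the sampling frequency 2*pi/T.
T = 0.25
samples = [1.0, 0.5, -0.3, 0.8, 0.1]     # arbitrary f(nT) values

def magnitude(w):
    return abs(sum(f * cmath.exp(-1j * w * n * T)
                   for n, f in enumerate(samples)))

w0 = 3.7
m1 = magnitude(w0)
m2 = magnitude(w0 + 2 * math.pi / T)     # one full period later in w
# m1 == m2 to machine precision: the spectrum repeats every 2*pi/T
```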

You could probably answer the question this way, but it is *not* the way you should do it, because there is another, simpler way to understand this that you need to see and that the question is steering you toward. I only went through this exercise so you would not be too confused. The Laplace representation of the signal you found as an infinite summation is equivalent to the other representation you are about to find.

    So, start again:

    We have a sampled signal whose Laplace representation is:

L[ f(t) Σ δ(t − nT) ]

Look at this time domain multiplication in the frequency domain. A time domain multiplication is the same as a convolution in the frequency domain. So suppose you have F(s) and its Fourier representation F(jw), and the Laplace transform of the deltas (the infinite sum of exponentials) and its Fourier representation (set s = jw; your exponentials appear as ? with phase ?). Convolve them. First assume the bandwidth of F(jw) (+ve and -ve frequency) is smaller than the spacing between the Fourier exponentials of the delta sample train. Then you will see there's a problem if F(jw) has a larger bandwidth, but either way the result will still be periodic with the same fundamental period.

    EDIT: I fixed the fourier series explanation above
    Last edited: Nov 15, 2012
  11. Nov 15, 2012 #10
Thank you for your answer, it was very enlightening. The first part is understood. The only thing I still need to understand is how I am supposed to convolve F(jw) and the Laplace transform of the deltas. Should I do the integral over those two multiplied together? Will there be any shift in the variables in that integral?

I have always had a little trouble understanding convolution in general. What does f*g mean? Is it a product, or does it mean f is a function of g? Or is it just a notation for the integral in the formula?
  12. Nov 15, 2012 #11
Do it graphically on the magnitude spectra. Since f(t) is arbitrary, you can assume F(jw) has some bandwidth centered on 0 Hz (keep in mind real signals have +ve and -ve frequency components). During the graphical convolution, keep the delta spectrum in place while the F(jw) spectrum is moved. Assume a small bandwidth for F(jw) at first, because if it is too large, a bad thing will happen (aliasing). Then try a second pass with a larger bandwidth to see what happens. Both will be periodic with the same period, but one is harder to construct. If the sample rate is increased, the distance between the deltas in the delta spectrum increases, which allows a wider bandwidth in F(jw) without aliasing. This is where the Nyquist sampling theorem comes from.
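A tiny numeric sketch of the aliasing side of this (the frequencies and sample count are arbitrary choices of mine): two sinusoids whose frequencies differ by the sampling frequency produce identical samples, so after sampling they cannot be told apart.

```python
import math

T = 0.2
ws = 2 * math.pi / T                 # sampling frequency in rad/s
w = 5.0
x1 = [math.cos(w * k * T) for k in range(50)]
x2 = [math.cos((w + ws) * k * T) for k in range(50)]
# x1 and x2 are the same sample sequence: w and w + ws alias to each other,
# because cos((w + ws) k T) = cos(w k T + 2*pi*k) = cos(w k T)
```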

    Maybe it's better to read the above after you've figured out graphical convolution.

    f*g is shorthand for the convolution integral. '*' is an overused symbol so you do have to watch context. In terms of signals, it could also mean complex conjugate.

    I spent a half hour trying to find a good explanation of 1D convolution and couldn't find it. For some reason everyone jumps to explaining the integral itself without even trying to explain the motivation or the reason for the integral in the first place. That is a shame because convolution is very easy to understand once you know what it is but it is hard to explain in text.

    I am very short on time so I will try an explanation later tonight if no one else tries.

    Something to chew on though, which may help you to figure it out yourself:

    An LTI system's impulse response h(t) is the output the system generates due to an impulse input at time t=0. If two impulse inputs occur, one at t=0 and another at t=1 then the system's response will be the sum of two impulse responses h(t)+h(t-1). The first h(t) starts at time t=0 and the second h(t-1) starts at t=1.

    An arbitrary input x(t) can be regarded as a summation of densely packed impulses. The system's response to those impulses will be the sum of a bunch of h(t) shifted in time.

    Try a graphical convolution of x(t) with h(t) -- keep h(t) fixed and do the flipping on x(t). See if you can see how the summation of those impulse responses are occurring.
  13. Nov 15, 2012 #12
    Convolution is simple. The explanations on the web make it seem like you need to be a genius to understand it. But it still feels hard to explain in words. Here's a try.

We have an LTI system. This means the system is time invariant -- its impulse response or transfer function does not change with time. If you apply an input now or 20 minutes from now, you will get the same response. It also means the system is linear. If you apply input x1(t) and get y1(t) as output, and you later apply input x2(t) and get y2(t) as output, then the output due to x1+x2 will be y1+y2. This describes most engineering systems, and when it doesn't, we make approximate linear models of nonlinear systems to simplify matters (e.g., the small-signal models of transistors, which approximate the exponential transistor characteristics).

We begin by knowing that the response due to an impulse δ(t) is the impulse response h(t). Because the system is linear, the response to two impulses, say δ(t) + δ(t − 1), is h(t) + h(t − 1). That is, the system begins to respond with h(t) at time t = 0 due to the first impulse and then begins to generate a second impulse response h(t − 1) at time t = 1 due to the second impulse. The total output is the summation of the two.

    We can express an arbitrary signal x(t) as an infinite sum of impulses. You've already done this by sampling x(t) with period T to get a representation as a summation of impulses separated in time by T. If you continue to decrease T, the impulses become closer together until at T=0, x(t) is represented exactly as a solid wall of impulses. The proposal is the system response to x(t) can then be regarded as a summation of impulse responses h(t).
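Here is the same idea in discrete time as a sketch (the input and impulse response values below are arbitrary choices of mine): the response built as a superposition of shifted, scaled impulse responses matches the direct convolution sum.

```python
x = [1.0, 2.0, 0.5]              # arbitrary input samples
h = [1.0, 0.8, 0.6, 0.4]         # arbitrary impulse response

# Superposition: each input sample x[k] launches a copy of h starting at k.
out_len = len(x) + len(h) - 1
y_shift = [0.0] * out_len
for k, xk in enumerate(x):
    for m, hm in enumerate(h):
        y_shift[k + m] += xk * hm

# Direct convolution sum: y[n] = sum over k of x[k] * h[n - k]
y_conv = [sum(x[k] * h[n - k] for k in range(len(x)) if 0 <= n - k < len(h))
          for n in range(out_len)]
# y_shift and y_conv are the same sequence
```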

    The computation of this summation of impulse responses is done with convolution. I've attached a badly drawn diagram.

    On the left side I've drawn x(t), an input, chosen to be a rectangle. On the right is the system impulse response h(t), chosen to be a decreasing ramp.

    The second image down on the left is a graph of x(-t), which is found by reflecting x(t) about the y axis. The important characteristic of this graph is that x(t) is ordered such that later input values appear further to the left. The value of x at the start t=0 is still at the origin.

At t=0 the output of the system begins because x(0) presents itself as an impulse. This is shown in the second diagram on the right, where both x(-t) and h(t) are drawn. The impulse x(0) will cause the output at t=0 to be x(0)×h(0), which is the area of overlap between x(-t) and h(t) at the time shown.

    Next look at the output at time t=0.5. The third image on the left is a graph of x(-t) shifted to the right by 0.5. The third image on the right shows x(0.5-t) superimposed on a graph of h(t).

    That impulse x(0) that occurred at t=0 is still generating its impulse response and at t=0.5, the output due to impulse x(0) only is x(0)×h(0.5). On that third graph on the right, I marked this impulse in red. In fact as time passes, that impulse slides right tracing out the part of the impulse response it is responsible for generating. I've marked another impulse on that graph x(0.25). That input was presented to the system at time t=0.25 and began tracing out its impulse response at that time. At t=0.5, it is responsible for generating the response h(0.25)×x(0.25). In fact, there is a solid wall of impulses to the right of t=0 that are generating a part of their impulse responses at this time. We need to add up all the responses to get the total response of the system at this time. We do that with an integral.

At this snapshot in time (at t = 0.5), the output will then be y(0.5) = ∫₀^0.5 h(t) x(0.5 − t) dt

A moment Δτ later, the x part of the graph on the bottom right slides Δτ to the right, as each impulse selects the part of the impulse response that it is generating. The function graphed is x(0.5 + Δτ − t). The output at this time, y(0.5 + Δτ), can be found by adding up all the parts of the impulse responses each impulse in x is generating: y(0.5 + Δτ) = ∫₀^(0.5+Δτ) h(t) x(0.5 + Δτ − t) dt

    In general the output at any time t due to the impulse wall x(t) is:

y(t) = ∫₀^t h(τ) x(t − τ) dτ, and this is the convolution integral.
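A numeric sketch of that integral for shapes like the ones in the diagram (I'm assuming a unit rectangle on [0, 1) for x and a decreasing ramp 1 − t on [0, 1) for h; those exact shapes are my assumption, not taken from the attachment):

```python
def x(t):
    # unit rectangle on [0, 1)
    return 1.0 if 0.0 <= t < 1.0 else 0.0

def h(t):
    # decreasing ramp on [0, 1)
    return 1.0 - t if 0.0 <= t < 1.0 else 0.0

def y(t, n=20000):
    # y(t) = integral from 0 to t of h(tau) x(t - tau) dtau, midpoint rule
    if t <= 0:
        return 0.0
    dt = t / n
    return sum(h((k + 0.5) * dt) * x(t - (k + 0.5) * dt)
               for k in range(n)) * dt
```

For the snapshot worked above, y(0.5) is the integral of (1 − τ) from 0 to 0.5, i.e. 0.375.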

    We flipped x(t) so that the part of x that occurs soonest would overlap h(t) first as it was slid to the right. This corresponds to earlier impulses in x each generating h(t) before later impulses of x arrive.

    To find the output at a specific time, we shifted the flipped x to the right by that amount of time. The part of the impulse response each impulse in x was generating is then coincident so that multiplying each impulse of x with the impulse response value at the same t would yield the current output due to that impulse. Then to get the total response to all impulses in x currently generating their impulse responses, we need to add them up with an integration.

    A note on the various graphs of x(t).

Suppose x(t) is some function of t, say x(t) = 1 + (t) + (t)^2 + ....

    x(-t) is a reflection around the y axis. To find this function, we replace the 't' in x(t) by -t:

    x(-t) = 1 + (-t) + (-t)^2 + ....

    To shift this last function right by τ seconds, we need to replace the 't' with 't-τ':

    1 + (-(t-τ)) + (-(t-τ))^2 + ....
    = 1 + (τ-t) + (τ-t)^2 + ....

Comparing this to the original x(t), the result of flipping and then shifting right by τ can be found by replacing 't' in x(t) with 'τ − t', i.e. x(τ − t).

    Many people become confused about why it isn't x(-t-τ) to shift x(-t) right by τ seconds.

I hope that helped to explain it. The idea seems hard to articulate, but it is simple once you've grasped it.

    Attached Files:

  14. Nov 16, 2012 #13
    Back to your original question. I would still do a graphical convolution so that it is easy to see what is happening but it's quicker to do the convolution mathematically.

    Any function convolved with an impulse generates a copy of the original function. You can see this by plugging into a convolution integral (make the impulse do the shifting) and integrating to find the output at different times.
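In discrete time the same property is a one-liner to check (the sequence and delay below are arbitrary choices of mine): convolving with a unit impulse delayed by d reproduces the sequence shifted by d.

```python
f = [3.0, 1.0, 4.0, 1.0, 5.0]        # arbitrary sequence
d = 2
delta = [0.0] * d + [1.0]            # unit impulse at index d

# convolution sum: out[n] = sum over k of f[k] * delta[n - k]
out = [sum(f[k] * delta[n - k]
           for k in range(len(f)) if 0 <= n - k < len(delta))
       for n in range(len(f) + len(delta) - 1)]
# out is f shifted right by d samples, with d leading zeros
```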
  15. Nov 17, 2012 #14
Sorry I'm a little late with the feedback, but I only just had time to go through and understand it. I think I understood all of your points. I appreciate all of your explanations; they really help me get a deeper understanding. Thank you for your time and effort, I hope you have a nice weekend!! :)