# Understanding Laplace transforms

1. Jan 31, 2010

### Urmi Roy

Hi, I've been doing Laplace transforms lately... the problems are pretty simple (as far as our syllabus goes), and I've been practising hard... but the trouble is, I don't really understand what I'm doing, or why what I'm doing is justified at all.

I want to know certain things like...

1. The general rule for transforming a function f(t) is to multiply it by e^(-st) and integrate w.r.t. 't' from 0 to infinity, which turns it into a function of 's'... why? How can we take this as a general rule for converting a function of 't' into one of 's'?

2. Does the Laplace transform have any geometrical interpretation? If so, please tell me about it.

3. My book also says that all values of 't' in the function should be greater than zero... why?

4. The condition for existence of the Laplace transform of f(t) is that its magnitude should not exceed Me^(kt) for some constants M and k... please explain this.
Besides, how should we find the values of M and k?

5. I feel that the Laplace transform of a function need not be unique... we could perhaps find the same transform for two functions by using algebraic manipulation... yet my book says this isn't possible... how can we explain this?

6. What is 's'? Could it be any variable at all?

Sorry if I've asked too many questions... I googled for ages trying to find the answers, but I couldn't... besides, my teacher's really no good.

2. Jan 31, 2010

### conway

Never made sense to me either. The Fourier transform makes sense, but it doesn't automatically pick up the initial conditions the way the Laplace transform does. The Laplace transform is closely related to the Fourier transform.

3. Jan 31, 2010

### HallsofIvy

The Laplace transform is, like any transform, a way of changing one function into another. The crucial point about the Laplace transform is that a differential equation in the function f is changed into an algebraic equation in the function L(f), its Laplace transform.

That is, you can take a differential equation, transform it into an algebraic equation, solve the algebraic equation, then transform that solution back to the solution of the differential equation.
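This transform-solve-invert pipeline can be sketched with sympy; the ODE below (y' + 2y = 0, y(0) = 1) is an illustrative choice, not one from the thread:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s', positive=True)
Y = sp.symbols('Y')  # stands for L{y}

# ODE: y'(t) + 2*y(t) = 0 with y(0) = 1.
# Transforming term by term (L{y'} = s*Y - y(0)) gives an algebraic equation:
alg_eq = sp.Eq((s * Y - 1) + 2 * Y, 0)
Y_sol = sp.solve(alg_eq, Y)[0]  # 1/(s + 2)

# Inverting recovers the ODE's solution, exp(-2*t) for t > 0
# (sympy may attach a Heaviside factor marking the t > 0 domain):
y = sp.inverse_laplace_transform(Y_sol, s, t)
print(Y_sol, y)
```

The differential equation never has to be integrated directly: all the work happens in the algebraic step `sp.solve`.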

But, frankly, I have never liked the Laplace transform and never taught it the many years I taught differential equations. The problem is that it is only for "linear equations with constant coefficients" that you can do the transforms. And those equations can, in my opinion, be better solved by the standard methods.

Laplace transforms really have two applications:
1) They give engineers a way of simply "looking up" solutions to simple differential equations.

2) They give theoreticians a way of writing out solutions to really complex differential equations as a specific formula - one involving Laplace transforms that cannot actually be evaluated - so that they can then talk about those solutions.

4. Jan 31, 2010

### Urmi Roy

So does that mean we just can't understand why Laplace transforms work? Laplace himself must have had some logic for doing things the way he did... didn't he? This is really very strange!

5. Jan 31, 2010

### conway

What I'm saying is that with Fourier transforms there are a lot of things that make sense: differentiation in the time domain corresponds to multiplication by jω in the frequency domain, and so on. These things are understandable, and worth understanding before you get into Laplace transforms.

Laplace transforms are some kind of generalization or special case (depending on your perspective) of Fourier transforms, and have a lot of the same identities. If you don't know about Fourier transforms, it would be really hard to understand how Laplace transforms work.

6. Jan 31, 2010

### conway

PS: the reason they don't teach you Fourier transforms first is that nobody cares whether you understand Laplace transforms or not.

7. Jan 31, 2010

### JSuarez

That's a little unfair; I used to work in control theory and signal processing, and I can tell you that many problems described by linear, constant-coefficient ODEs or PDEs (these are usually called LTI systems, from Linear, Time-Invariant) would be almost impossible to solve with the standard methods. Just consider the classical linear control problem: you have an LTI system, described by the Laplace transform of its impulse response (the equation solved for null boundary conditions and a Dirac delta input), H(s), which is called the system's transfer function, in a feedback loop where the controller has a transfer function G(s). The multiplicative property of the transform immediately gives you the transfer function of the whole thing:

$$\frac{H\left(s\right)G\left(s\right)}{1 + H\left(s\right)G\left(s\right)}$$

And, believe me, it would be almost impossible to arrive at this without the Laplace transform. In addition, transfer functions give us concepts that are important in understanding a system's dynamics (poles, zeros, stability, causality, etc.) that are very difficult to see in the original equations.
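The closed-loop formula above is easy to verify symbolically; the plant H and controller G below are hypothetical illustrative choices:

```python
import sympy as sp

s = sp.symbols('s')
H = 1 / (s + 1)      # hypothetical first-order plant
G = sp.Integer(10)   # hypothetical proportional controller, gain 10

# Transfer function of the feedback loop: H*G / (1 + H*G)
closed_loop = sp.simplify(H * G / (1 + H * G))
print(closed_loop)  # 10/(s + 11)
```

The single closed-loop pole at s = -11 (versus the open-loop pole at s = -1) shows how the transfer-function algebra exposes the feedback system's dynamics directly.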

Of course, you can point out that this is a very restricted class of systems; nevertheless, it's a very useful one: even nonlinear systems are studied using LTI approximations. And then there are the analogues in the discrete domain (the discrete Fourier transform and the z-transform).

The Laplace and Fourier transforms are representations of a certain class of functions in terms of complex exponentials, but this can only be properly understood after studying distributions.

This is a causality condition. Classically, the Laplace transform was applied to equations describing physical systems; the condition that the function is defined only for t > 0 corresponds to the fact that these systems have no memory of the future, that is, the output at time t should depend only on the behaviour and input at times t' < t. There is a generalization of the transform you are referring to (the unilateral one), called the bilateral Laplace transform, where f(t) may be defined on all of $$\mathbb R$$, but then you have restrictions on the complex variable s, called regions of convergence, that contain pretty much the same information as the above restriction.

You don't have to: this describes a generic class of functions. At worst, you have to show that the asymptotic behaviour of f(t) falls in this class.

For the unilateral transform, it is, in this sense: if the transform is a rational function, the quotient of two polynomials in s, then you can multiply above and below by the same polynomial and obtain a seemingly different transform, but when you invert (because of cancellation) you get the same function. (For control problems, though, these cancellations are a problem, because they represent dynamic modes of the system you are studying that are invisible in the original ODE.)
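This cancellation point can be illustrated with sympy: two rational expressions that differ only by a common factor are the same function of s, so they invert to the same f(t). The expressions below are illustrative:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s')

F1 = 1 / (s + 2)
F2 = (s + 1) / (s**2 + 3*s + 2)  # same function: (s + 1) cancels top and bottom

f1 = sp.inverse_laplace_transform(F1, s, t)
f2 = sp.inverse_laplace_transform(F2, s, t)
print(sp.simplify(f1 - f2))  # 0: both invert to exp(-2*t) for t > 0
```
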

No, it's just a run-of-the-mill complex variable: s = a + ib.

Last edited: Feb 1, 2010
8. Feb 1, 2010

### Urmi Roy

I don't think I could hunt down any book from the library, since it would be a heck of a hunt for something like this.

So this means, since I'm doing a special case of the Laplace transform, the unilateral one, it's a consideration I have to make... right?

Actually, I was rather asking: suppose I'm given a function f(t), how do I test whether the Laplace transform is applicable to it? Also, does it mean that for a given function f(t), if we manage to find some values of M and k for which this condition is valid, then the Laplace transform is applicable to this function?

9. Feb 1, 2010

### Urmi Roy

Hi everyone,
I was looking up the initial and final value theorems on the net, and I found some websites mention the 'asymptote of s' or something... like what JSuarez was saying... so I wanted even more to find out what this is all about.

I found a website, www.engr.uky.edu/~ymzhang/EE422/EE422-5.doc ... would this be a good site for learning about the relation of Laplace transforms with the complex plane, or is there a better page I could read?

10. Feb 1, 2010

### jambaugh

There is a linear-algebraic interpretation, with geometric analogues, which might help. If you've had a bit of linear algebra, you understand change of basis and eigenvalues?

Think of the set of linear combinations of functions as a vector space (infinite-dimensional).
In linear algebra, if you want to solve a vector equation that involves a specific linear operator A (a matrix), it is best to change to the basis in which that operator is diagonal. This is the eigenbasis: Ax = ax (for x a basis vector, 'a' a number, and A your linear operator).

Well, the differential operator d/dt is just a linear operator on the space of functions. Recall also that d/dt exp(st) = s exp(st). So the functions exp(st) are the 'eigenvectors' of the differential operator d/dt, with eigenvalues 's'. The Laplace transform may be seen as resolving arbitrary functions in terms of the eigenbasis of the differential operator, and so it makes solving, or working with, differential equations simpler.
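This eigenbasis picture can be checked with sympy: exp(s*t) is an eigenfunction of d/dt, and in transform space differentiation becomes multiplication by s (up to the initial-value term). The function f = sin(t) is an illustrative choice:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s', positive=True)

# exp(s*t) is an eigenvector of d/dt with eigenvalue s:
assert sp.diff(sp.exp(s * t), t) == s * sp.exp(s * t)

# Consequence: the Laplace transform turns d/dt into multiplication by s.
f = sp.sin(t)
F = sp.laplace_transform(f, t, s, noconds=True)               # 1/(s**2 + 1)
dF = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True)  # s/(s**2 + 1)
print(sp.simplify(dF - (s * F - f.subs(t, 0))))  # 0, i.e. L{f'} = s*F - f(0)
```
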

The Fourier transform is similar, but works with imaginary eigenvalues of the differential operator.

I don't know if you'll find that helpful or more confusing but for me it put Laplace and other integral transforms in a useful context.

Last edited: Feb 1, 2010
11. Feb 2, 2010

### Urmi Roy

Thanks jambaugh, but I don't think I have an adequate background to understand your post completely.

I suppose I might understand what a Laplace transform has to do with the complex plane, since I've worked with Argand planes quite a lot and I can imagine it... but I don't know much about eigenvectors, except for a few problems that I've done.

Also, JSuarez, please clarify those points I referred to in #8.

Thanks everyone.

12. Feb 2, 2010

### matematikawan

Thanks for the link. It does enlighten me a bit. I saw one unusual formula in the notes.

$$L\left\{ \int_{-\infty}^t f(\tau)\, d\tau \right\} =\frac{1}{s} F(s) + \frac{1}{s} \int_{-\infty}^{0^-} f(\tau)\, d\tau$$

Usually we only consider the Laplace transform of $$\int_0^t f(\tau)\, d\tau$$.
What is the significance of that unusual formula?
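The familiar 0-to-t version of the integral theorem (the special case with no second term) can be checked with sympy for a concrete function; f = sin is an illustrative choice:

```python
import sympy as sp

t, tau = sp.symbols('t tau', positive=True)
s = sp.symbols('s', positive=True)

g = sp.integrate(sp.sin(tau), (tau, 0, t))               # 1 - cos(t)
F = sp.laplace_transform(sp.sin(t), t, s, noconds=True)  # 1/(s**2 + 1)
G = sp.laplace_transform(g, t, s, noconds=True)
print(sp.simplify(G - F / s))  # 0, i.e. L{int_0^t f} = F(s)/s
```
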

13. Feb 2, 2010

### JSuarez

OK, for this I think it's better for you to wait until you have a firmer grasp of the subject and more mathematics. Then there will be time to study distributions, infinite-dimensional spaces, etc.

Yes. From what you wrote, it seems that you're using the Laplace transform to determine the solution of ODEs, for t > 0, with the initial condition at t = 0. This is a special case of a more general problem. This entry doesn't have much, but may give you the general idea:

http://en.wikipedia.org/wiki/Two-sided_Laplace_transform

Yes. And finding them depends on the particular function. But let me tell you that, in most applications, we already know that the function is in this class.

It's just that: s is a complex number; an element of the Argand plane, or whatever you like to call it. The Laplace transform of a function f(t) is a complex function: both its argument and its value are complex numbers.

Of more interest is this: for the unilateral transform, the set of points s where the transform is defined is always a right half-plane in C; for the bilateral one, it's the same if the function f(t) is defined only for t > 0, it's a left half-plane if f(t) is defined only for t < 0, and it's a vertical strip if f(t) is defined on all of R.

That formula is the integral theorem for the unilateral transform. For the bilateral one, there would be no second term.

Last edited by a moderator: May 4, 2017
14. Feb 2, 2010

### jambaugh

Yes, my exposition has a bit of a linear algebra prerequisite. But if you've played with matrices at all, I think you could pick up these concepts pretty quickly, and it's quite worthwhile, as they are useful mathematical "power tools".

I'll leave you with one more analogy which may help. Without speaking of eigenvalues per se, you may have been exposed to the idea of choosing a basis in which a given matrix is diagonal. For example, have you seen the moment of inertia tensor for a rigid object? The principal axes are the basis in which an object's moment of inertia tensor takes diagonal form, so you can simply enumerate the moment of inertia around each axis. Similarly, the exponential functions are the "principal functions" of the differentiation operation, and the Laplace transform resolves general functions in those terms, so that differentiation takes an especially simple form.

Well, that too may be a bit abstract, but at least let me assure you that when you get a little more linear algebra under your belt, you should find many seemingly disjoint operations begin to take on a common context... "we're just diagonalizing the operator", or "we're just choosing a natural basis", or "we're just solving a linear equation by multiplying by the inverse"... then the real fun begins!

15. Feb 3, 2010

### Urmi Roy

Thanks everyone, I think I've got a lot of things cleared up, though, as JSuarez and jambaugh said, I've got to learn a lot more to finally get things clear.
I will do some more reading on this whenever possible.
Also, jambaugh, I still don't have any idea of change of basis of matrices! I must sound very stupid - I don't know anything! Anyway, I'll be sure to refer to your post when I come to know more.

In the meantime, I hope you people won't mind if I sneak in a few more questions once in a while about Laplace transforms, since I haven't finished the chapter yet and might come across more stuff (at my level) that I need help with in the near future.

Thanks again.

16. Feb 3, 2010

### matematikawan

Huh! Now I'm not sure whether I have really understood the Laplace transform. This two-sided Laplace transform business is new to me. And I also see a new term, the Mellin transform. How is this transform related to the inverse Laplace transform?

Last edited by a moderator: May 4, 2017
17. Feb 3, 2010

### JSuarez

Don't fret. The two-sided (or bilateral) Laplace transform is just a generalization of the one you know, used when you have functions that are defined on $\mathbb R$ instead of $\mathbb R^+$. The main difference is the differentiation theorem, when it's applied to ODEs with initial conditions; in the case of the one-sided transform, you have:

$${\cal L}\left\{f'\left(t\right)\right\}=sF\left(s\right)-f\left(0\right)$$

For the two-sided transform, you have simply:

$${\cal L}\left\{f'\left(t\right)\right\}=sF\left(s\right)$$

Which means that the latter doesn't allow you to solve initial value problems directly; but if you extend f(t) by zero to t < 0, its distributional derivative picks up a jump term $f\left(0\right)\delta\left(t\right)$ (δ being Dirac's delta), and applying the two-sided transform then recovers the usual differentiation theorem, with the initial conditions.

See here: http://en.wikipedia.org/wiki/Mellin_transform

Last edited by a moderator: May 4, 2017

18. Feb 4, 2010

### matematikawan

Hope you don't mind that I corrected your LaTeX display. Please check whether I have done it correctly.

Is it OK if I say that the usual Laplace transform is suited to solving IVPs, while the two-sided Laplace transform is suitable for solving boundary value problems?

The Mellin transform is related to the two-sided Laplace transform via $\theta=e^{-t}$. http://en.wikipedia.org/wiki/Laplace_transform

Last edited by a moderator: May 4, 2017
19. Feb 4, 2010

### JSuarez

Thank you, I must have missed one s, and copy+paste took care of the rest. Just one more note: for higher-order derivatives, the same applies, with additional terms involving the initial values of the lower-order derivatives, f(0), f'(0), and so on.

I never thought about it that way, so I really don't know. Maybe someone with more experience in solving BVPs can say more.

20. Feb 6, 2010

### Urmi Roy

Hi everyone,
just a few more points to get cleared....

1. For the Laplace transform of f(t) to exist, it must be at least piecewise continuous... why is piecewise continuity a sufficient condition?

2. If we have F'(s) as the derivative of the Laplace transform of f(t), we integrate F'(s) between the limits 's' and infinity... why this choice of limits?

In fact, these limits are also chosen when finding the Laplace transform of f(t)/t... again, what's so special about these limits?

3. What exactly is the convolution of two functions... why is it defined the way it is?

4. When finding the Laplace transform of the integral of f(t), we define F(s)/s as the transform of f(t) integrated between the limits 0 and t... now, obviously, from the derivation, it is evident that this is not applicable for any limits of integration other than 0 to t... but this is only a special case... is there nothing more general, where the limits don't have to be 0 to t?

5. The Laplace transform of e^(at) is 1/(s-a), provided s > a... what happens if this condition is not satisfied?

6. The initial value theorem basically says that sF(s) tends to f(0+) as s tends to infinity... what does this basically mean? How can we visualise it?
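The initial value theorem, lim sF(s) = f(0+) as s goes to infinity, can be checked symbolically; f(t) = cos(t) is an illustrative choice:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s', positive=True)

f = sp.cos(t)
F = sp.laplace_transform(f, t, s, noconds=True)  # s/(s**2 + 1)
iv = sp.limit(s * F, s, sp.oo)
print(iv, f.subs(t, 0))  # 1 1: the limit of s*F(s) equals f(0+)
```

Intuitively, large s weights the integral toward t near 0, which is why the s-to-infinity limit picks out the initial value.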