Hello. I'm studying signals and systems on my own this summer and I'm trying to get a good grasp of the convolution. I think I understand it mathematically enough to do some problems, but I don't have a firm grasp by any means. I'm studying both discrete and continuous time cases. Before I get to my questions, here is how I understand things in my own words (if any of this is off, please correct me):

Sifting Property

$$x(t) = \int_{-\infty}^{+\infty}x(\tau)\delta(t-\tau)d\tau$$

The value of a signal at a time t can be found by summing the product of the shifted impulse and the signal over all times. Since the impulse $\delta(t-\tau)$ is nonzero only at $\tau = t$ and 0 everywhere else, you will "sift" out only that value of the signal at time t. (1 times the signal at time t will just be the signal)

Impulse Response

Written as $h(t)$ in my texts (Oppenheim and my Schaum's), the impulse response is just the output of a system at a time $t_0$ when the input is the unit impulse $\delta(t-t_0)$.

In other words, I just think of the impulse response as what you get out of a system if you send it a 1 at a certain time.

And finally:

The Convolution

If you know the impulse response of a system at a time t, you just have to scale it by the value of x(t) to get the response of the system to x(t). For the system y(t):

$$y(t) = \int_{-\infty}^{+\infty}x(\tau)h(t-\tau)d\tau$$

I sort of think of this as multiplying a unit area (i.e., 1 m^2) by a scalar, q, to get an area of q m^2. (The unit impulse response is analogous to the unit area, and the scalar is analogous to the input $x(\tau)$.)

Actually... I think I may be more confused about the convolution than I realize. If you have any good tips on how to think about it, please let me know.

Ok. Now my questions:

Question I: Sifting Property

What good is the sifting property??? It seems to be circular in its logic! I mean, you are basically saying you can get x($t_0$) if you know x($t_0$)!! You're just going through the extra step of multiplying all values of x(t) by $\delta(t-t_0)$ to "catch" the x($t_0$)... But that means you already had x($t_0$) in the first place! So what the heck is the point?!?

Question II: The Impulse Response

Is $\delta(t-t_0)$ a 1, or infinity at time $t_0$?? When it's under the integral, I know it is 1, since it has unit area. But when the impulse response is described, it seems to be the response to the impulse, with no integral involved. Here is how my Schaum's defines it:

$$h(t) = \mathbf{T}\{\delta(t)\}$$

where T is the LTI system. And if it is infinity, how can a system respond to an infinite input? This, I think, is my biggest point of confusion, and may be why I'm having trouble understanding the convolution fully.

Question III: The Convolution

Why would you have the response of a system to the unit impulse, but not have its response to the signal x(t)? If you could get the impulse response, why not just get the x(t) response and forget about the convolution altogether?

Conclusion

Well, I think that sums up my confusion for now. I hope my questions made sense! I will be thrilled if someone is nice enough to clear some of these issues up for me! Thanks!

## Answers and Replies

hang on. when i have some time offa work, i'll try to get back to this. just for the meantime, i might suggest that you look at this from a discrete-time signal and convolution POV. here all signals are discretely sampled. it's easier to understand convolution from the discrete-time POV and then extend the concept a little to continuous time.

hang on. when i have some time offa work, i'll try to get back to this.

Thanks. I'll look forward to it. I'm not really in a rush anyway. I plan for my undergrad concentration to be in signals, so I just want to get started in understanding this stuff as thoroughly as possible.

i might suggest that you look at this from a discrete-time signal and convolution POV. here all signals are discretely sampled. it's easier to understand convolution from the discrete-time POV and then extend the concept a little to continuous time.

That's actually what I've been trying to do. I just used continuous time in my post because all the questions I had about discrete time apply to continuous time, but not vice versa. Thanks again.

okay, we're doing this the discrete way first.

$x_m[n]$ are arbitrary inputs, $y_m[n]$ are the corresponding outputs, and n is discrete "time" (or we'll call it that, n can be linearly related to some other physical parameter, like position).

Linear means:

$$\mathbf{L} \left\{ x_1[n] + x_2[n] \right\} = \mathbf{L} \left\{ x_1[n] \right\} + \mathbf{L} \left\{ x_2[n] \right\}$$

which is synonymous with "superposition applies" and this can be extended to:

$$\mathbf{L} \left\{ \sum_m c_m x_m[n] \right\} = \sum_m c_m \mathbf{L} \left\{ x_m[n] \right\}$$

for any rational constant coefficients $c_m$ , and then we just say that, for any physical system that makes sense, we can extend that to any real and constant numbers $c_m$.

Time-Invariant means:

If

$$y[n] = \mathbf{TI} \left\{ x[n] \right\}$$

then

$$y[n-m] = \mathbf{TI} \left\{ x[n-m] \right\}$$

for any delay m. so if you delay your input, all's what happens in a time-invariant system is that you get the same output, but delayed the same amount.

Linear, Time-Invariant means

If

$$y_m[n] = \mathbf{LTI} \left\{ x_m[n] \right\}$$

then

$$\sum_m c_m y_m[n - d_m] = \mathbf{LTI} \left\{ \sum_m c_m x_m[n - d_m] \right\}$$

where $c_m$ can be any set of real numbers and $d_m$ can be any set of integer delays (don't worry, for the time being that for any negative integer delays, $d_m < 0$, it means looking into the future, we don't have to require that the LTI system is "causal" for applying the convolution summation to such an LTI system that, from a purely theoretical POV, can possibly predict the future). and the $x_m[n]$ continue to be any arbitrary inputs.

Here is the discrete impulse function

$$\delta[n] = \begin{cases} 1 & \mbox{if }n=0 \\ 0 & \mbox{if }n \ne 0 \end{cases}$$

and we obviously know that

$$\delta[n-m] = \begin{cases} 1 & \mbox{if } n=m \\ 0 & \mbox{if } n \ne m \end{cases}$$

Now here is the sifting property:

$$x[n] = \sum_m x[m] \delta[n-m]$$ .

That should be obvious, but what this says is that the $x[m]$ are constants like $c_m$ which do not depend on n, so we broke up our input $x[n]$ into a sum of impulse functions $x[m] \delta[n-m]$ with constant coefficients.
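This decomposition is easy to sanity-check numerically. Here is a short NumPy sketch (the signal values are arbitrary choices of mine): build the shifted impulses $\delta[n-m]$, scale each one by the constant $x[m]$, and sum them back up.

```python
import numpy as np

# a short test signal x[n] for n = 0..4 (values chosen arbitrarily)
x = np.array([2.0, -1.0, 3.0, 0.5, 4.0])
n = np.arange(len(x))

# rebuild x[n] as the sum over m of x[m] * delta[n - m]
x_rebuilt = np.zeros_like(x)
for m in range(len(x)):
    delta_shifted = (n == m).astype(float)  # delta[n - m]: 1 at n = m, 0 elsewhere
    x_rebuilt += x[m] * delta_shifted

print(np.array_equal(x_rebuilt, x))  # True
```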

The Impulse Response of a Linear Time-Invariant (LTI) system

$$h[n] = \mathbf{LTI} \left\{ \delta[n] \right\}$$

is sufficient to tell us how this discrete LTI system will respond to any arbitrary input. That is, if we bang the system with a single impulse and measure or determine the response, due to that sole (and simple) input and measurement or derivation of the output, we have a complete description of the behavior of that system, at least from an input/output perspective. (there might be some nasty things going on inside, but that is, in the ideal, hidden from us. this is what state-variable systems are about.)

First, we know that due to linearity,

$$\mathbf{LTI} \left\{ \sum_m c_m \delta[n - d_m] \right\} = \sum_m c_m \mathbf{LTI} \left\{ \delta[n - d_m] \right\}$$

and, due to time-invariancy,

$$h[n-d_m] = \mathbf{LTI} \left\{ \delta[n - d_m] \right\}$$

BTW, if the LTI system is causal, (which means any output must result from inputs from only the present and the past, no effect from the future), then for all negative n:

$$h[n] = 0 \mbox{ for } n<0$$

So then we know that, pumping this input, the sum of impulses,

$$x[n] = \sum_m x[m] \ \delta[n-m]$$

into a discrete-time LTI system, then the output

$$y[n] = \mathbf{LTI} \left\{ x[n] \right\}$$

is

$$y[n] = \mathbf{LTI} \left\{ \sum_m x[m] \ \delta[n-m] \right\}$$

which is

$$y[n] = \sum_m x[m] \ \mathbf{LTI} \left\{ \delta[n-m] \right\}$$

which is

$$y[n] = \sum_m x[m] \ h[n-m]$$

That is convolution for discrete-time signals and LTI systems. Note that we made use of only the axioms of linearity and time-invariancy and made no reference to any Fourier Transform (that's a theorem for later). It says here that, given those axioms, if we know how the system will respond to a single impulse, we know how it will respond to any given input.
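The derivation can be checked against a library implementation. In this NumPy sketch (the input and the truncated decaying-exponential impulse response are arbitrary choices for illustration), the sum $y[n] = \sum_m x[m]\, h[n-m]$ is evaluated directly from the definition and compared with `np.convolve`.

```python
import numpy as np

# arbitrary example signals: a short input and a truncated
# decaying-exponential impulse response h[n] = 0.5**n
x = np.array([1.0, 2.0, 0.0, -1.0])
h = 0.5 ** np.arange(8)

# direct evaluation of y[n] = sum_m x[m] h[n - m]
N = len(x) + len(h) - 1
y_direct = np.zeros(N)
for n in range(N):
    for m in range(len(x)):
        if 0 <= n - m < len(h):
            y_direct[n] += x[m] * h[n - m]

# the library convolution computes the same sum
y_lib = np.convolve(x, h)
print(np.allclose(y_direct, y_lib))  # True
```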

To do this in continuous-time, will require turning your summations into integrals and then, a little bit more nuance or sophistication in thinking about the continuous-time (Dirac) impulse and then it becomes the same song-and-dance above.

Oh my god rbj. That was a fantastic write up!

For some reason, I was like... I kind of understand convolution, but I would love to see an explanation from someone other than my professor and Oppenheim.

Amazing. Absolutely amazing!

it should be how your professor or text says it. they didn't always do that right for me, either, when i was first learning this 3 decades ago.

doing something well should not need to be amazing. it's really just a shame how this stuff is not rigorously (but not bogged down with details that we don't care about) presented in contexts (both textbook and classroom) where you are paying for exactly that service.

i'm gonna try to just copy this, replace some of the sums with integrals, and see if it's nearly verbatim.

That is my number one complaint.

okay, now we're doing this the continuous-time POV. $x_m(t)$ are arbitrary inputs, $y_m(t)$ are the corresponding outputs, and t is continuous "time" (or we'll call it that, t can be linearly related to some other physical parameter, like position).

Linear means:

$$\mathbf{L} \left\{ x_1(t) + x_2(t) \right\} = \mathbf{L} \left\{ x_1(t) \right\} + \mathbf{L} \left\{ x_2(t) \right\}$$

which is synonymous with "superposition applies" and this can be extended to:

$$\mathbf{L} \left\{ \sum_m c_m x_m(t) \right\} = \sum_m c_m \mathbf{L} \left\{ x_m(t) \right\}$$

for any rational constant coefficients $c_m$ , and then we just say that, for any physical system that makes sense, we can extend that to any real and constant numbers $c_m$.

Time-Invariant means:

If

$$y(t) = \mathbf{TI} \left\{ x(t) \right\}$$

then

$$y(t-\tau) = \mathbf{TI} \left\{ x(t-\tau) \right\}$$

for any delay $\tau$. so if you delay your input, all's what happens in a time-invariant system is that you get the same output, but delayed the same amount.

Linear, Time-Invariant means

If

$$y_m(t) = \mathbf{LTI} \left\{ x_m(t) \right\}$$

then

$$\sum_m c_m y_m(t - \tau_m) = \mathbf{LTI} \left\{ \sum_m c_m x_m(t - \tau_m) \right\}$$

where $c_m$ can be any set of real numbers and $\tau_m$ can be any set of real delays (don't worry, for the time being that for any negative delays, $\tau_m < 0$, it means looking into the future, we don't have to require that the LTI system is "causal" for applying the convolution summation to such an LTI system that, from a purely theoretical POV, can possibly predict the future). and the $x_m(t)$ continue to be any arbitrary inputs.

Here is the continuous (dirac) impulse function (formally, this stuff about the dirac delta is disputed by mathematicians who do not like the neanderthal engineering way of looking at it.)

$$\delta(t) = \lim_{a \rightarrow 0} \delta_a(t)$$

where $\delta_a(t)$ is this sorta "nascent" delta function so that two things are true:

$$\int_{-\infty}^{+\infty} \delta_a(t) dt = 1$$

for any positive a parameter, and as a>0 gets real small:

$$\lim_{a \rightarrow 0} \delta_a(t) = 0 \mbox{ for any } t \ne 0$$

that means that

$$\delta(t) = 0 \mbox{ for any } t \ne 0$$

but (and here is where the arguments with the math guys begin),

$$\int_{-\infty}^{+\infty} \delta(t) dt = 1$$

is true because it is true for any approximating nascent delta $\delta_a(t)$, as a>0 gets arbitrarily close to 0. so what we have is a function that is zero everywhere but t = 0, yet still has an area of 1 packed into only the space above 0 on the t axis. infinitely thin, but also infinitely tall such that the area is still 1. (the math guys say that this is not a function, but something else, a "distribution", and will not approve of how this Neanderthal engineer uses it.)
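This limiting picture can be checked numerically with a rectangular nascent delta of width a and height 1/a (an arbitrary choice of nascent delta, sketched in NumPy): on a fine grid, the Riemann-sum area stays at about 1 as a shrinks, while multiplying against a smooth signal picks out its value at 0.

```python
import numpy as np

def delta_a(t, a):
    """Rectangular nascent delta: height 1/a on [-a/2, a/2], so area 1."""
    return np.where(np.abs(t) <= a / 2, 1.0 / a, 0.0)

t = np.linspace(-1.0, 1.0, 200001)  # fine grid on [-1, 1]
dt = t[1] - t[0]

for a in [0.5, 0.1, 0.01]:
    # Riemann-sum area: stays near 1 no matter how small a gets
    area = np.sum(delta_a(t, a)) * dt
    # "sifting" a smooth signal x(t) = cos(t): approaches x(0) = 1
    sifted = np.sum(np.cos(t) * delta_a(t, a)) * dt
    print(f"a={a}: area ~ {area:.4f}, sifted value ~ {sifted:.4f}")
```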

Now here is the sifting property:

$$x(0) = \int_{-\infty}^{+\infty} x(\tau) \delta(\tau) d\tau$$ .

it doesn't matter what the values of $x(\tau)$ are for $\tau \ne 0$, it's only the value of $x(\tau)$ at $\tau = 0$ that counts and scales the delta function. every other value of $x(\tau)$ gets multiplied by zero. you can flip the delta around (it can be even symmetrical) and get the same thing.

$$x(0) = \int_{-\infty}^{+\infty} x(\tau) \delta(-\tau) d\tau$$ .

then offset it (change of variables in integration) and get:

$$x(t) = \int_{-\infty}^{+\infty} x(\tau) \delta(t-\tau) d\tau$$ .

The Impulse Response of a Linear Time-Invariant (LTI) system

$$h(t) = \mathbf{LTI} \left\{ \delta(t) \right\}$$

is sufficient to tell us how this continuous-time LTI system will respond to any arbitrary input. That is, if we bang the system with a single impulse and measure or determine the response, due to that sole (and simple) input and measurement or derivation of the output, we have a complete description of the behavior of that system, at least from an input/output perspective. (there might be some nasty things going on inside, but that is, in the ideal, hidden from us. this is what state-variable systems are about.)

First, we know that due to linearity,

$$\mathbf{LTI} \left\{ \sum_m c_m \delta(t - \tau_m) \right\} = \sum_m c_m \mathbf{LTI} \left\{ \delta(t - \tau_m) \right\}$$

But, since integrals can be expressed as Riemann summations (math guys like the Lebesgue integral better, and that's why we sometimes have fights with them regarding the nature of the Dirac delta), the same linearity extends to an integral of scaled impulses:

$$\mathbf{LTI} \left\{ x(t) \right\} = \mathbf{LTI} \left\{ \int_{-\infty}^{+\infty} x(\tau) \delta(t - \tau) d\tau \right\} = \int_{-\infty}^{+\infty} x(\tau) \mathbf{LTI} \left\{ \delta(t - \tau) \right\} d\tau$$

and, due to time-invariancy,

$$h(t-\tau) = \mathbf{LTI} \left\{ \delta(t - \tau) \right\}$$

BTW, if the LTI system is causal, (which means any output must result from inputs from only the present and the past, no effect from the future), then for all negative t:

$$h(t) = 0 \mbox{ for } t<0$$

Then the output of the continuous-time LTI system

$$y(t) = \mathbf{LTI} \left\{ x(t) \right\}$$

is

$$y(t) = \mathbf{LTI} \left\{ \int_{-\infty}^{+\infty} x(\tau) \delta(t - \tau) d\tau \right\}$$

which is

$$y(t) = \int_{-\infty}^{+\infty} x(\tau) \mathbf{LTI} \left\{ \delta(t - \tau) \right\} d\tau$$

which is

$$y(t) = \int_{-\infty}^{+\infty} x(\tau) h(t - \tau) d\tau$$

That is convolution for continuous-time signals and LTI systems. Note that we made use of only the axioms of linearity and time-invariancy and made no reference to any Fourier Transform (that's a theorem for later). It says here that, given those axioms, if we know how the system will respond to a single impulse, we know how it will respond to any given input.
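As a sanity check of the integral, here is a NumPy sketch with an example of my own choosing: $x(t) = u(t)$ and $h(t) = e^{-t}u(t)$, for which the convolution integral evaluates in closed form to $y(t) = 1 - e^{-t}$ for $t \ge 0$. A Riemann-sum approximation of the integral tracks that closed form.

```python
import numpy as np

# example of my choosing: x(t) = u(t), h(t) = e^{-t} u(t),
# for which the convolution integral gives y(t) = 1 - e^{-t} for t >= 0
dt = 1e-3
t = np.arange(0.0, 10.0, dt)

x = np.ones_like(t)   # unit step, sampled for t >= 0
h = np.exp(-t)        # impulse response e^{-t} u(t)

# Riemann-sum approximation of y(t) = int x(tau) h(t - tau) dtau
y = dt * np.convolve(x, h)[: len(t)]

y_exact = 1.0 - np.exp(-t)
print(np.max(np.abs(y - y_exact)))  # small; shrinks as dt -> 0
```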

Thank you so much rbj! I love the axiomatic explanation. You definitely answered my questions I and III.

However, I am still bothered by my original question II. You say:

...if we bang the system with a single impulse and measure or determine the response, due to that sole (and simple) input and measurement or derivation of the output, we have a complete description of the behavior of that system

I realize you are talking about an actual experiment.... But let's see if what I'm wondering about makes any sense:

I thought the impulse response made perfect practical and theoretical sense in the discrete case, since we are "banging" the system with a 1, but in the continuous case I can't make sense of this. As you showed, the continuous impulse at 0 is "infinitely thin, but also infinitely tall such that the area is still 1".

When we "bang" the LTI system with $\delta(t)$, are we sending it an infinite number? Let's say we know a response to a certain system:

$$y(t) = \mathbf{LTI}\{x(t)\} = 5 + 3\cdot x(t)$$

Now we try to represent the impulse response for $y(t)$. Wouldn't it be written as follows?

$$h(t) = \mathbf{LTI}\{\delta(t)\} = 5 + 3\cdot \delta(t)$$

Now, it's clear that $h(t)$ will be 0 at $t \neq 0$, but what is $h(0)$ if the continuous impulse is infinite? It seems to me that no matter how you define the response $y(t)$ you will have an infinite impulse response, $h(t)$ , as long as the system is linear!

That, I believe, is my last point of remaining confusion.

Thanks again for the time you took to write that great explanation! I've printed it out and placed it in my notebook.

Thank you so much rbj! I love the axiomatic explanation. You definitely answered my questions I and III.

However, I am still bothered by my original question II. You say:

...if we bang the system with a single impulse and measure or determine the response, due to that sole (and simple) input and measurement or derivation of the output, we have a complete description of the behavior of that system

I realize you are talking about an actual experiment....

not only an actual experiment, but also a theoretical determination or derivation of the impulse response (a.k.a. a "thought experiment").

But let's see if what I'm wondering about makes any sense:

I thought the impulse response made perfect practical and theoretical sense in the discrete case, since we are "banging" the system with a 1, but in the continuous case I can't make sense of this. As you showed, the continuous impulse at 0 is "infinitely thin, but also infinitely tall such that the area is still 1".

When we "bang" the LTI system with $\delta(t)$, are we sending it an infinite number?

dirac delta functions don't really exist in nature (or physical reality). there is no such thing as an infinite voltage or whatever that would be when you apply a dirac impulse to the LTI system. but we sorta get close. we apply very thin pulses with a known (and very thin) width in time, and a known area (the same area as the idealized dirac impulse). if the width of the impulse is very small, but not quite zero, and the area is finite, then the height of the physical nascent impulse is also finite.
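That limiting process can be sketched numerically. Assuming (my choice of example, not from the thread) a system with impulse response $h(t) = e^{-t}u(t)$, feed it unit-area rectangular pulses of shrinking width: once the pulse has ended, the measured response hugs $h(t)$ more and more closely.

```python
import numpy as np

# assumed example system: impulse response h(t) = e^{-t} u(t)
dt = 1e-3
t = np.arange(0.0, 5.0, dt)
h = np.exp(-t)

errs = []
for a in [0.5, 0.1, 0.01]:
    # physical "nascent impulse": a pulse of width a and height 1/a, so area 1
    x = np.where(t < a, 1.0 / a, 0.0)
    y = dt * np.convolve(x, h)[: len(t)]
    # after the pulse has ended, compare the response against h(t) itself
    err = np.max(np.abs(y[t > a] - h[t > a]))
    errs.append(err)
    print(f"pulse width a={a}: max deviation from h(t) = {err:.4f}")
```

As the pulse narrows, the deviation from the true impulse response shrinks, which is exactly the sense in which a thin physical spike "measures" h(t).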

Let's say we know a response to a certain system:

$$y(t) = \mathbf{LTI}\{x(t)\} = 5 + 3\cdot x(t)$$

this cannot be the response of a linear system. if x(t) is zero, then the output must also be zero in a linear system. that constant term, 5, is a problem.

Now we try to represent the impulse response for $y(t)$. Wouldn't it be written as follows?

$$h(t) = \mathbf{LTI}\{\delta(t)\} = 5 + 3\cdot \delta(t)$$

Now, it's clear that $h(t)$ will be 0 at $t \neq 0$, but what is $h(0)$ if the continuous impulse is infinite? It seems to me that no matter how you define the response $y(t)$ you will have an infinite impulse response, $h(t)$ , as long as the system is linear!

That, I believe, is my last point of remaining confusion.

Thanks again for the time you took to write that great explanation! I've printed it out and placed it in my notebook.

I just sort of threw the 5 in there arbitrarily on a whim... You're right, though. I remember reading about the "0 in, 0 out" property. I should have been more careful. I guess I put it there because it makes it uglier.

I don't want to test your patience... So if you're tired of the topic by now, read no further. Otherwise, here is what I'm thinking now:

Even if I get rid of the 5, isn't the impulse response still infinite? Say:

$$h(t) = \mathbf{LTI}\{\delta(t)\} = 3 \cdot \delta(t)$$

Is the impulse response 3 times infinity? Something is just really weird here. Does everything just sort of get "cleaned up" once we have h(t) under the integral?

The only system I can think of that makes sense with h(t) outside the integral is the identity:

$$h(t) = \mathbf{LTI}\{\delta(t)\} = \delta(t)$$

since I guess it makes perfect sense to get the infinite pulse out if you send it in.

Or how about thinking about it physically... If you have a "Ohm's Law System" where y(t) is the voltage and x(t) is the current. (in my first example above, the 3 would be the resistance) Would getting the impulse response be done by sending this system some huge (as close to infinite as you can get) current? And if so, wouldn't this mean that the response (the voltage) is infinite?! This just seems really weird, and not very practical...

there aren't truly any dirac impulses in the world. but we approximate them, in the limit, with thin little spikes of not-quite-zero width and tall, but finite height. both of your h(t) impulse responses are pretty much identical looking spikes, but the first one is a spike with 3 times as much area in the spike as the second one. it could be the same width and 3 times higher, or the same height and 3 times wider or a little of both.

Ok. So I guess it's the area of h(t) that matters, not the value... meaning that h(t) really only makes sense under the integral. I think I got it.

Thanks!

Ok. So I guess it's the area of h(t) that matters, not the value... meaning that h(t) really only makes sense under the integral.

if you replace "$h(t)$" with "$\delta(t)$", that statement would be nearly correct.

your two $h(t)$ functions were for an ideal amplifier with a gain of 3 and a wire (gain of 1). $h(t)$ is generally not a delta function but will ring in some manner, and the characteristics of that ringing $h(t)$ are what determine what your filter or system will do to other input signals. the shape (and all of the values) of $h(t)$ matter.

but, strictly from a mathematical POV, it is true that for a Dirac delta function, $\delta(t)$, that it really only makes sense under an integral. but we Neanderthal enjunnears (yoose two b i cudnt even spel "enjunnear", now i are one), do play fast and loose with the Dirac delta function and use it in expressions that are not (yet) inside an integral. but the Cro-Magnon math guys and us Neanderthals agree that:

$$x(t_0) = \int_{-\infty}^{+\infty} x(t) \delta(t - t_0) dt$$

that is fundamental. and even for us Neanderthals, eventually the Dirac delta functions that we play fast and loose with, find their way to an integral which gets evaluated.

One example of this difference in usage, is with what is sometimes called the "Dirac comb" and is used to model ideal sampling in Nyquist/Shannon Sampling and Reconstruction Theorem. what the math guys really hate to see is an expression like:

$$T \sum_{k=-\infty}^{+\infty} \delta(t - kT) = \sum_{n=-\infty}^{+\infty} e^{j 2 \pi n t/T }$$

i consider that to be a true and valid and useful mathematical fact, despite that it is not inside any integral, and many math guys will say it's meaningless.
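For what it's worth, here is the usual engineering sketch of where that identity comes from: the comb on the left is periodic with period T, so (formally) expand it in a Fourier series; by the sifting property every coefficient comes out equal to 1, since only the k = 0 impulse lands inside one period:

$$c_n = \frac{1}{T}\int_{-T/2}^{+T/2} T \sum_{k=-\infty}^{+\infty} \delta(t - kT) \ e^{-j 2 \pi n t/T} \, dt = \int_{-T/2}^{+T/2} \delta(t) \ e^{-j 2 \pi n t/T} \, dt = 1$$

so the Fourier series $\sum_n c_n e^{j 2 \pi n t/T}$ is exactly the right-hand side of the comb identity.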

I think I got it.

Thanks!

better not thank me, yet (for coming away from this with misconceptions).

berkeman
Mentor
This is a great thread -- thanks rbj. I'm going to post a link to this thread in the PF Tutorials forum.

if you replace "$h(t)$" with "$\delta(t)$", that statement would be nearly correct.
...
$h(t)$ is generally not a delta function but will ring in some manner, and the characteristics of that ringing $h(t)$ is what determines what your filter or system will do to other input signals

Thanks for pointing this out...

But could you possibly give an example of such a response, y(t), written in terms of x(t)? All the examples in my book perform the convolution x(t)*h(t) and get a y(t) that is just some function of t with no obvious relation to x(t).

The reason I ask is because if you can write y(t) in terms of x(t), it seems that substituting $\delta(t)$ for all the x(t) would result in an infinite h(t), which would not be understandable outside the integral...

Furthermore, if for the type of systems you are talking about you can't write y(t) in terms of x(t), then how would the input affect the output?

One example of this difference in usage, is with what is sometimes called the "Dirac comb" and is used to model ideal sampling in Nyquist/Shannon Sampling and Reconstruction Theorem. what the math guys really hate to see is an expression like:

$$T \sum_{k=-\infty}^{+\infty} \delta(t - kT) = \sum_{n=-\infty}^{+\infty} e^{j 2 \pi n t/T }$$

i consider that to be a true and valid and useful mathematical fact, despite that it is not inside any integral, and many math guys will say it's meaningless.

I sort of came to EE by way of mathematics, so I find these controversies fascinating. I originally wanted to major in math, but switched to EE because it's more practical. I've kept a minor in math though. I think I will eventually read up on the details of what the math guys have to say about the delta function (distribution?) just for fun...

better not thank me, yet (for coming away from this with misconceptions).

I'm sure I still have some misconceptions, but I only started signals a week ago.... So I guess this is not a bad thing, yet. Anyway, thanks for everything. (all misconceptions are my own )

Thanks for pointing this out...

But could you possibly give an example of such a response, y(t), written in terms of x(t)?

we did that, sorta.

it's not an example, but the general formula of y(t) in terms of x(t) (and the filter characteristic which is fully described by h(t)).

$$y(t) = \int_{-\infty}^{+\infty} x(\tau) h(t - \tau) d\tau$$

which, if you do a little substitution of variable in the integral, is the same as

$$y(t) = \int_{-\infty}^{+\infty} h(\tau) x(t - \tau) d\tau$$

that is what y(t) is in terms of x(t).
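spelling out that substitution of variable: let $\sigma = t - \tau$, so $\tau = t - \sigma$ and $d\tau = -d\sigma$; the minus sign reverses the flipped limits back to the usual order:

$$y(t) = \int_{-\infty}^{+\infty} x(\tau) h(t - \tau) d\tau = \int_{+\infty}^{-\infty} x(t - \sigma) h(\sigma) (-d\sigma) = \int_{-\infty}^{+\infty} h(\sigma) x(t - \sigma) d\sigma$$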

All the examples in my book perform the convolution x(t)*h(t) and get a y(t) that is just some function of t with no obvious relation to x(t).

there's a reason they call it "convolution". it's a little bit convoluted. a "convoluted relationship" is not synonymous with an "obvious relationship".

The reason I ask is because if you can write y(t) in terms of x(t), it seems that substituting $\delta(t)$ for all the x(t) would result in an infinite h(t), which would not be understandable outside the integral...

no. h(t) has its own separate definition. if you substitute $\delta(t)$ for x(t) (a legitimate thing to think about), what comes out for y(t) is h(t). that is (in words), if you input an impulse to the input of an LTI system, what comes out of the output is, by definition, the impulse response. and the convolution integrals above are perfectly consistent with that fact.

Furthermore, if for the type of systems you are talking about you can't write y(t) in terms of x(t), then how would the input effect the output?

of course you can write y(t) in terms of x(t), if you also have a description of the system (linear or not) that defines y(t) in terms of x(t). that is a tautology. if the system is LTI, then the two integral equations above relate y(t) to the input x(t) (or using your words, show how the input affects the output), given the description of the system. not all LTI systems are the same. different LTI systems have different h(t). but if two LTI systems have the same h(t), then we know that they will process the input signal identically and get the same output.

I sort of came to EE by way of mathematics, so I find these controversies fascinating. I originally wanted to major in math, but switched to EE because it's more practical. I've kept a minor in math though. I think I will eventually read up on the details of what the math guys have to say about the delta function (distribution?) just for fun...

rot's o' ruk. if you go to Wikipedia and check out some of the stuff in the Nyquist/Shannon Sampling Theorem, or the Dirac delta function, you'll see some of my discussion there. (i was [[User:Rbj]] and they have recently kicked me out of Wikipedia.)

probably the best way to understand how we view the Dirac delta differently is to understand the difference between the Riemann integral and the Lebesgue integral. for practical physical systems there is no difference, but the two are treated much differently mathematically (though, for functions definable under both, they should give the same result). then go to the Richard Hamming wikipedia page and see what he says about it, it's kinda good.

Okay!! I think I figured the answer to my own question! An example is:

$$y(t) = \int_{-\infty}^t x(\tau) d\tau$$

So h(t) would be the unit step! ie:

$$h(t) = \int_{-\infty}^t \delta(\tau) d\tau = u(t)$$

I think I'm starting to hear things starting to "click" in my brain, and I'm actually starting to feel comfortable with the convolution and impulse response!
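That integrator example is easy to verify numerically: convolving an input with the unit step should reproduce the running integral, computed here with a cumulative sum. This NumPy sketch uses $x(t) = \sin(t)$ as an arbitrary test input (zero before t = 0):

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 5.0, dt)

# the running integrator y(t) = int_{-inf}^{t} x(tau) dtau has
# impulse response u(t), so y should equal the convolution x * u
u = np.ones_like(t)   # unit step u(t), sampled for t >= 0
x = np.sin(t)         # arbitrary test input

y_conv = dt * np.convolve(x, u)[: len(t)]  # convolution with the step
y_int = np.cumsum(x) * dt                  # direct running integral

print(np.max(np.abs(y_conv - y_int)))  # essentially zero
```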

Edit:

I just saw your new post after I posted this. Nothing surprised me in it, so I think I'm good now. And I'm pretty confident that what I say above (in this post) is true and makes sense. If not, you're welcome to correct me, if you have the time. Thanks.

Last edited:
we did that, sorta.

it's not an example, but the general formula of y(t) in terms of x(t) (and the filter characteristic which is fully described by h(t)).

$$y(t) = \int_{-\infty}^{+\infty} x(\tau) h(t - \tau) d\tau$$

which, if you do a little substitution of variable in the integral, is the same as

$$y(t) = \int_{-\infty}^{+\infty} h(\tau) x(t - \tau) d\tau$$

that is what y(t) is in terms of x(t).

there's a reason they call it "convolution". it's a little bit convoluted. a "convoluted relationship" is not synonymous with an "obvious relationship".

no. h(t) has it's own separate definition. if you substitute $\delta(t)$ for x(t) (a legitimate thing to think about), what comes out for y(t) is h(t). that is (in words), if you input an impulse to the input of a LTI system, what comes out of the output is, by definition, the impulse response. and the convolution integrals above are perfectly consistent with that fact.

of course you can write y(t) in terms of x(t), if you also have a description of the system (linear or not) that defines y(t) in terms of x(t). that is a tautology. if the system is LTI, then the two integral equations above relate y(t) to the input x(t) (or using your words, show how the input effects the output), given the description of the system. not all LTI systems are the same. different LTI systems have different h(t). but if two LTI systems have the same h(t), then we know that they will process the input signal identically and get the same output.

rot's o' ruk. if you go to Wikipedia and check out some of the stuff in the Nyquist/Shannon Sampling Theorem, or the Dirac delta function, you'll see some of my discussion there. (i was [[User:Rbj]] and they have recently kicked me out of Wikipedia.)

probably the best way to understand how we view the Dirac delta differently is to understand the difference between the Riemann integral and the Lebesgue integral. for practical physical systems there is no difference, but the two are treated much differently mathematically (though, for functions where both are defined, they should give the same result). then go to the Richard Hamming wikipedia page and see what he says about it; it's kinda good.

Are you a signals instructor of some sort? My god... I wish you would have taught my signals class.

long ago, i used to teach at the U of Southern Maine (1990). but i didn't complete my Ph.D. and with the present glut of Ph.D.s, they felt that they could do better.

i'm the signal processing department at Kurzweil Music Systems (synthesizers and audio effects). i'm also listed on the Review Board of the Journal of the Audio Engineering Society (there's a web page you can find). with my initials, it should be obvious which one i am.

i know i could run circles around a lot of faculty teaching this stuff (because, as a life-long student, i also ask these basic questions until they get answered to my satisfaction) but there is, since the 60's, a different (and false) economy in higher education about this. what matters more to EE departments is a Ph.D. and the quantity of publication.

i'm not advocating much of a change (but a little bit of a reversion). valid credentials are important. Ph.D.s have value. but their value is not absolute, yet are treated as such by institutions of higher education. without a Ph.D., i probably couldn't even teach at a mill like DeVry.

That is my number one complaint.

my signals and systems stuff was explained very rigorously in my circuits 1 and 2 courses.

Sounds like you had good circuits 1 and 2 courses, then.

In circuits-1 we stuck with Kirchhoff's laws, methods to solve circuits (e.g., nodal analysis), and some transient stuff (I'm sure there is more... but I forget).

In circuits-2 we covered basic power systems, Laplace transforms (basically how to apply them), transfer functions, and we just glossed over convolution.

Our signals class followed Oppenheim for the most part. I hated the class because my professor taught it like a toolbox course, i.e. methods for solving a class of problems. She was NOT rigorous in her teaching at all. At one point she said... "ahh... it is too late in the day for a proof"

Anyways, sounds like you had a good prof. leright.

Rbj,

Just curious... Are there any excellent introductory signals/linear systems books you would recommend?

very good prof. he was very thorough and efficient with his teaching. most of the signals and systems stuff was blocked in with the circuits courses in my curriculum. I never took a stand alone signals and systems course.

This is interesting... At my school we have only 1 quarter of circuits, and we have a quarter of signals/systems that is completely separate.

I'm guessing it might be better to teach it in the context of something like circuits, in order to give the students something tangible to latch on to. Oppenheim lays it out almost purely as an abstract subject... (which--being something of a math-oriented fellow--I actually enjoy in a twisted sort of way :)