What Are the Key Challenges in Understanding Convolution in Signal Processing?

  • Thread starter WolfOfTheSteps
  • Tags
    Convolution
In summary: An Impulse Response is the output of an LTI system when the input is the unit impulse: \delta[n] = 1 for n = 0, and 0 otherwise. That is, we send a unit impulse through the system (a single 1 at time 0) and see what the output is. This is usually called h[n] ; it's the system's "natural response" to a sudden pulse. So h[n] = \mathbf{LTI} \left\{ \delta[n] \right\}
  • #1
WolfOfTheSteps
Hello. I'm studying signals and systems on my own this summer and I'm trying to get a good grasp of the convolution. I think I understand it mathematically enough to do some problems, but I don't have a firm grasp by any means. I'm studying both discrete and continuous time cases. Before I get to my questions, here is how I understand things in my own words (if any of this is off, please correct me):

Sifting Property

[tex]x(t) = \int_{-\infty}^{+\infty}x(\tau)\delta(t-\tau)d\tau[/tex]

The value of a signal at a time t can be found by summing the product of the impulse and the signal at all times. Since the impulse will be 1 only at time t and 0 everywhere else, you will "sift" out only that value of the signal at time t. (1 times the signal at time t will just be the signal)

Impulse Response

Written as [itex]h(t)[/itex] in my texts (Oppenheim and my Schaum's), the impulse response is just the output of a system at a time [itex]t_0[/itex] when the input is the unit impulse [itex]\delta(t-t_0)[/itex].

In other words, I just think of the impulse response as what you get out of a system if you send it a 1 at a certain time.

This is where I may be confused. I'll ask a question about this below.

And finally:

The Convolution

If you know the impulse response of a system at a time t, you just have to scale it by the value of x(t) to get the response of the system to x(t). For the output y(t):

[tex]y(t) = \int_{-\infty}^{+\infty}x(\tau)h(t-\tau)d\tau[/tex]

I sort of think of this as multiplying a unit area (i.e., 1 m^2) by a scalar, q, to get an area of (q m^2). (the unit impulse response is analogous to the unit area, and the scalar is analogous to the input [itex]x(\tau)[/itex]).

Actually... I think I may be more confused about the convolution than I realize. If you have any good tips on how to think about it, please let me know.

Ok. Now my questions:

Question I: Sifting Property

What good is the sifting property? It seems to be circular in its logic! I mean, you are basically saying you can get x([itex]t_0[/itex]) if you know x([itex]t_0[/itex])! You're just going through the extra step of multiplying all values of x(t) by [itex]\delta(t-t_0)[/itex] to "catch" the x([itex]t_0[/itex])... But that means you already had x([itex]t_0[/itex]) in the first place! So what the heck is the point?!?

Question II: The Impulse Response

Is [itex]\delta(t-t_0)[/itex] a 1, or infinity at time [itex]t_0[/itex]?? When it's under the integral, I know it is 1, since it has unit area. But when the impulse response is described, it seems to be the response to the impulse, with no integral involved. Here is how my Schaum's defines it:

[tex]h(t) = \textbf{T}\{\delta(t)\} [/tex]

where T is the LTI system.

And if it is infinity, how can a system respond to an infinite input? This, I think, is my biggest point of confusion, and may be why I'm having trouble understanding the convolution fully.

Question III: The Convolution

Why would you have the response of a system to the unit impulse, but not have its response to the signal x(t)? If you could get the impulse response, why not just get the x(t) response and forget about the convolution altogether?
Conclusion

Well, I think that sums up my confusion for now. :smile: I hope my questions made sense! I will be thrilled if someone is nice enough to clear some of these issues up for me!

Thanks!
 
Last edited:
  • #2
hang on. when i have some time offa work, i'll try to get back to this.

just for the meantime, i might suggest that you look at this from a discrete-time signal and convolution POV. here all signals are discretely sampled. it's easier to understand convolution from the discrete-time POV and then extend the concept a little to continuous time.
 
  • #3
rbj said:
hang on. when i have some time offa work, i'll try to get back to this.

Thanks. I'll look forward to it. I'm not really in a rush anyway.

I plan for my undergrad concentration to be in signals, so I just want to get started in understanding this stuff as thoroughly as possible.

rbj said:
i might suggest that you look at this from a discrete-time signal and convolution POV. here all signals are discretely sampled. it's easier to understand convolution from the discrete-time POV and then extend the concept a little to continuous time.

That's actually what I've been trying to do. I just used continuous time in my post because all the questions I had about discrete time apply to continuous time, but not vice versa.

Thanks again.
 
  • #4
okay, we're doing this the discrete way first. [itex]x_m[n][/itex] are arbitrary inputs, [itex]y_m[n][/itex] are the corresponding outputs, and n is discrete "time" (or we'll call it that, n can be linearly related to some other physical parameter, like position).

Linear means:

[tex] \mathbf{L} \left\{ x_1[n] + x_2[n] \right\} = \mathbf{L} \left\{ x_1[n] \right\} + \mathbf{L} \left\{ x_2[n] \right\} [/tex]

which is synonymous with "superposition applies" and this can be extended to:

[tex] \mathbf{L} \left\{ \sum_m c_m x_m[n] \right\} = \sum_m c_m \mathbf{L} \left\{ x_m[n] \right\} [/tex]

for any rational constant coefficients [itex]c_m[/itex] , and then we just say that, for any physical system that makes sense, we can extend that to any real and constant numbers [itex]c_m[/itex].

Time-Invariant means:

If

[tex] y[n] = \mathbf{TI} \left\{ x[n] \right\} [/tex]

then

[tex] y[n-m] = \mathbf{TI} \left\{ x[n-m] \right\} [/tex]

for any delay m. so if you delay your input, all that happens in a time-invariant system is that you get the same output, but delayed the same amount.

Linear, Time-Invariant means

If

[tex] y_m[n] = \mathbf{LTI} \left\{ x_m[n] \right\} [/tex]

then

[tex] \sum_m c_m y_m[n - d_m] = \mathbf{LTI} \left\{ \sum_m c_m x_m[n - d_m] \right\} [/tex]

where [itex]c_m[/itex] can be any set of real numbers and [itex]d_m[/itex] can be any set of integer delays (don't worry, for the time being, that a negative integer delay, [itex]d_m < 0[/itex], means looking into the future; we don't have to require that the LTI system be "causal" to apply the convolution summation to an LTI system that, from a purely theoretical POV, can possibly predict the future). and the [itex]x_m[n][/itex] continue to be any arbitrary inputs.

Here is the discrete impulse function

[tex]
\delta[n] =
\begin{cases}
1 & \mbox{if }n=0 \\
0 & \mbox{if }n \ne 0
\end{cases}
[/tex]

and we obviously know that

[tex]
\delta[n-m] =
\begin{cases}
1 & \mbox{if } n=m \\
0 & \mbox{if } n \ne m
\end{cases}
[/tex]

Now here is the sifting property:

[tex] x[n] = \sum_m x[m] \delta[n-m] [/tex] .

That should be obvious, but what this says is that the [itex]x[m][/itex] are constants like [itex]c_m[/itex] which do not depend on n, so we broke up our input [itex]x[n][/itex] into a sum of impulse functions [itex]x[m] \delta[n-m][/itex] with constant coefficients.

The Impulse Response of a Linear Time-Invariant (LTI) system

[tex] h[n] = \mathbf{LTI} \left\{ \delta[n] \right\} [/tex]

is sufficient to tell us how this discrete LTI system will respond to any arbitrary input. That is, if we bang the system with a single impulse and measure or determine the response, due to that sole (and simple) input and measurement or derivation of the output, we have a complete description of the behavior of that system, at least from an input/output perspective. (there might be some nasty things going on inside, but that is, in the ideal, hidden from us. this is what state-variable systems are about.)
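(if you want to see that concretely, here's a little python sketch. the 3-tap moving average is just a made-up example system, nothing special about it:)

[code]
import numpy as np

def lti_system(x):
    # a made-up example LTI system: 3-tap moving average,
    # y[n] = (x[n] + x[n-1] + x[n-2]) / 3
    return np.array([sum(x[n - k] for k in range(3) if n - k >= 0) / 3.0
                     for n in range(len(x))])

# bang it with a unit impulse and record the output: that is h[n]
delta = np.zeros(8)
delta[0] = 1.0
h = lti_system(delta)
print(h)   # [1/3, 1/3, 1/3, 0, 0, 0, 0, 0]
[/code]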

First, we know that due to linearity,

[tex] \mathbf{LTI} \left\{ \sum_m c_m \delta[n - d_m] \right\} = \sum_m c_m \mathbf{LTI} \left\{ \delta[n - d_m] \right\} [/tex]

and, due to time-invariance,

[tex] h[n-d_m] = \mathbf{LTI} \left\{ \delta[n - d_m] \right\} [/tex]

BTW, if the LTI system is causal (which means any output must result from inputs from only the present and the past, no effect from the future), then for all negative n:

[tex] h[n] = 0 \mbox{ for } n<0 [/tex]

So then we know that, pumping this input, the sum of impulses,

[tex] x[n] = \sum_m x[m] \ \delta[n-m] [/tex]

into a discrete-time LTI system, then the output

[tex] y[n] = \mathbf{LTI} \left\{ x[n] \right\} [/tex]

is

[tex] y[n] = \mathbf{LTI} \left\{ \sum_m x[m] \ \delta[n-m] \right\} [/tex]

which is

[tex] y[n] = \sum_m x[m] \ \mathbf{LTI} \left\{ \delta[n-m] \right\} [/tex]

which is

[tex] y[n] = \sum_m x[m] \ h[n-m] [/tex]

That is convolution for discrete-time signals and LTI systems. Note that we made use of only the axioms of linearity and time-invariance and made no reference to any Fourier Transform (that's a theorem for later). It says here that, given those axioms, if we know how the system will respond to a single impulse, we know how it will respond to any given input.
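(and a quick numerical check of that last summation in python, reusing the made-up lti_system and the h[n] measured in the sketch above; np.convolve computes exactly [itex]\sum_m x[m] h[n-m][/itex]:)

[code]
import numpy as np

x = np.array([1.0, 2.0, -1.0, 0.5, 3.0])   # some arbitrary input
y_direct = lti_system(x)                    # run the system itself (defined above)
y_conv = np.convolve(x, h)[:len(x)]         # convolution sum, trimmed to len(x)
print(np.allclose(y_direct, y_conv))        # True
[/code]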

To do this in continuous time will require turning your summations into integrals and a little more nuance or sophistication in thinking about the continuous-time (Dirac) impulse, and then it becomes the same song-and-dance as above.
 
Last edited:
  • #5
Oh my god rbj. That was a fantastic write up!

For some reason, I was like... I kind of understand convolution, but I would love to see an explanation from someone other than my professor and Oppenheim.

Amazing. Absolutely amazing!
 
  • #6
FrogPad said:
Oh my god rbj. That was a fantastic write up!

For some reason, I was like... I kind of understand convolution, but I would love to see an explanation from someone other than my professor and Oppenheim.

it should be how your professor or text says it. they didn't always do that right for me, either, when i was first learning this 3 decades ago.

doing something well should not need to be amazing. it's really just a shame how this stuff is not rigorously (but not bogged down with details that we don't care about) presented in contexts (both textbook and classroom) where you are paying for exactly that service.
 
Last edited:
  • #7
rbj said:
it should be how your professor or text says it. they didn't always do that right for me, either, when i was first learning this 3 decades ago.

doing something well should not need to be amazing. it's really just a shame how this stuff is not rigorously (but not bogged down with details that we don't care about) presented in contexts (both textbook and classroom) where you are paying for exactly that service.

i'm going to try to just copy this, replace some of the sums with integrals, and see if it's nearly verbatim.

That is my number one complaint.
 
  • #8
okay, now we're doing this from the continuous-time POV. [itex]x_m(t)[/itex] are arbitrary inputs, [itex]y_m(t)[/itex] are the corresponding outputs, and t is continuous "time" (or we'll call it that, t can be linearly related to some other physical parameter, like position).

Linear means:

[tex] \mathbf{L} \left\{ x_1(t) + x_2(t) \right\} = \mathbf{L} \left\{ x_1(t) \right\} + \mathbf{L} \left\{ x_2(t) \right\} [/tex]

which is synonymous with "superposition applies" and this can be extended to:

[tex] \mathbf{L} \left\{ \sum_m c_m x_m(t) \right\} = \sum_m c_m \mathbf{L} \left\{ x_m(t) \right\} [/tex]

for any rational constant coefficients [itex]c_m[/itex] , and then we just say that, for any physical system that makes sense, we can extend that to any real and constant numbers [itex]c_m[/itex].

Time-Invariant means:

If

[tex] y(t) = \mathbf{TI} \left\{ x(t) \right\} [/tex]

then

[tex] y(t-\tau) = \mathbf{TI} \left\{ x(t-\tau) \right\} [/tex]

for any delay [itex]\tau[/itex]. so if you delay your input, all that happens in a time-invariant system is that you get the same output, but delayed the same amount.

Linear, Time-Invariant means

If

[tex] y_m(t) = \mathbf{LTI} \left\{ x_m(t) \right\} [/tex]

then

[tex] \sum_m c_m y_m(t - \tau_m) = \mathbf{LTI} \left\{ \sum_m c_m x_m(t - \tau_m) \right\} [/tex]

where [itex]c_m[/itex] can be any set of real numbers and [itex]\tau_m[/itex] can be any set of real delays (don't worry, for the time being, that a negative delay, [itex]\tau_m < 0[/itex], means looking into the future; we don't have to require that the LTI system be "causal" to apply the convolution integral to an LTI system that, from a purely theoretical POV, can possibly predict the future). and the [itex]x_m(t)[/itex] continue to be any arbitrary inputs.

Here is the continuous (Dirac) impulse function (formally, this stuff about the Dirac delta is disputed by mathematicians who do not like the neanderthal engineering way of looking at it.)

[tex] \delta(t) = \lim_{a \rightarrow 0} \delta_a(t) [/tex]

where [itex] \delta_a(t) [/itex] is this sort of "nascent" delta function so that two things are true:

[tex] \int_{-\infty}^{+\infty} \delta_a(t) dt = 1 [/tex]

for any positive a parameter, and as a>0 gets real small:

[tex] \lim_{a \rightarrow 0} \delta_a(t) = 0 \mbox{ for any } t \ne 0 [/tex]

that means that

[tex] \delta(t) = 0 \mbox{ for any } t \ne 0 [/tex]

but (and here is where the arguments with the math guys begin),

[tex] \int_{-\infty}^{+\infty} \delta(t) dt = 1 [/tex]

is true because it is true for any approximating nascent delta [itex] \delta_a(t) [/itex], as a>0 gets arbitrarily close to 0. so what we have is a function that is zero everywhere but t = 0, yet still has an area of 1 packed into only the space above 0 on the t axis. infinitely thin, but also infinitely tall such that the area is still 1. (the math guys say that this is not a function, but something else, a "distribution", and will not approve of how this Neanderthal engineer uses it.)

Now here is the sifting property:

[tex] x(0) = \int_{-\infty}^{+\infty} x(\tau) \delta(\tau) d\tau [/tex] .

it doesn't matter what the values of [itex]x(\tau)[/itex] are for [itex]\tau \ne 0[/itex], it's only the value of [itex]x(\tau)[/itex] at [itex]\tau = 0[/itex] that counts and scales the delta function. every other value of [itex]x(\tau)[/itex] gets multiplied by zero. you can flip the delta around (it can be even symmetric) and get the same thing.

[tex] x(0) = \int_{-\infty}^{+\infty} x(\tau) \delta(-\tau) d\tau [/tex] .

then offset it (change of variables in integration) and get:

[tex] x(t) = \int_{-\infty}^{+\infty} x(\tau) \delta(t-\tau) d\tau [/tex] .

The Impulse Response of a Linear Time-Invariant (LTI) system

[tex] h(t) = \mathbf{LTI} \left\{ \delta(t) \right\} [/tex]

is sufficient to tell us how this continuous-time LTI system will respond to any arbitrary input. That is, if we bang the system with a single impulse and measure or determine the response, due to that sole (and simple) input and measurement or derivation of the output, we have a complete description of the behavior of that system, at least from an input/output perspective. (there might be some nasty things going on inside, but that is, in the ideal, hidden from us. this is what state-variable systems are about.)

First, we know that due to linearity,

[tex] \mathbf{LTI} \left\{ \sum_m c_m \delta(t - \tau_m) \right\} = \sum_m c_m \mathbf{LTI} \left\{ \delta(t - \tau_m) \right\} [/tex]

But, since integrals can be expressed as a Riemann summation (math guys like the Lebesgue integral better, and that's why we sometimes have fights with them regarding the nature of the Dirac delta), the linearity property extends from sums to integrals:

[tex] \mathbf{LTI} \left\{ x(t) \right\} = \mathbf{LTI} \left\{ \int_{-\infty}^{+\infty} x(\tau) \delta(t - \tau) d\tau \right\} = \int_{-\infty}^{+\infty} x(\tau) \mathbf{LTI} \left\{ \delta(t - \tau) \right\} d\tau [/tex]

and, due to time-invariance,

[tex] h(t-\tau) = \mathbf{LTI} \left\{ \delta(t - \tau) \right\} [/tex]

BTW, if the LTI system is causal (which means any output must result from inputs from only the present and the past, no effect from the future), then for all negative t:

[tex] h(t) = 0 \mbox{ for } t<0 [/tex]

Then the output of the continuous-time LTI system

[tex] y(t) = \mathbf{LTI} \left\{ x(t) \right\} [/tex]

is

[tex] y(t) = \mathbf{LTI} \left\{ \int_{-\infty}^{+\infty} x(\tau) \delta(t - \tau) d\tau \right\} [/tex]

which is

[tex] y(t) = \int_{-\infty}^{+\infty} x(\tau) \mathbf{LTI} \left\{ \delta(t - \tau) \right\} d\tau [/tex]

which is

[tex] y(t) = \int_{-\infty}^{+\infty} x(\tau) h(t - \tau) d\tau [/tex]

That is convolution for continuous-time signals and LTI systems. Note that we made use of only the axioms of linearity and time-invariance and made no reference to any Fourier Transform (that's a theorem for later). It says here that, given those axioms, if we know how the system will respond to a single impulse, we know how it will respond to any given input.
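(numerically, you can check the integral with a Riemann-sum approximation. here's a python sketch; the exponential h(t) = e^{-t} for t ≥ 0 is just a made-up example system:)

[code]
import numpy as np

dt = 0.001
t = np.arange(0.0, 10.0, dt)
h = np.exp(-t)                       # made-up example: h(t) = e^-t for t >= 0
x = np.where(t < 1.0, 1.0, 0.0)      # input: rectangular pulse of width 1

# Riemann-sum approximation of the convolution integral
y = np.convolve(x, h)[:len(t)] * dt

# closed form for this particular pair, for comparison
y_exact = np.where(t < 1.0, 1.0 - np.exp(-t), (np.e - 1.0) * np.exp(-t))
print(np.max(np.abs(y - y_exact)))   # small; shrinks as dt does
[/code]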
 
Last edited:
  • #9
Thank you so much rbj! I love the axiomatic explanation. You definitely answered my questions I and III.

However, I am still bothered by my original question II. You say:

...if we bang the system with a single impulse and measure or determine the response, due to that sole (and simple) input and measurement or derivation of the output, we have a complete description of the behavior of that system

I realize you are talking about an actual experiment... But let's see if what I'm wondering about makes any sense:

I thought the impulse response made perfect practical and theoretical sense in the discrete case, since we are "banging" the system with a 1, but in the continuous case I can't make sense of this. As you showed, the continuous impulse at 0 is "infinitely thin, but also infinitely tall such that the area is still 1".

When we "bang" the LTI system with [itex]\delta(t)[/itex], are we sending it an infinite number? Let's say we know a response to a certain system:

[tex]y(t) = \mathbf{LTI}\{x(t)\} = 5 + 3\cdot x(t)[/tex]

Now we try to represent the impulse response for [itex]y(t)[/itex]. Wouldn't it be written as follows?

[tex]h(t) = \mathbf{LTI}\{\delta(t)\} = 5 + 3\cdot \delta(t)[/tex]

Now, it's clear that [itex]h(t)[/itex] will be 0 at [itex]t \neq 0[/itex], but what is [itex]h(0)[/itex] if the continuous impulse is infinite? It seems to me that no matter how you define the response [itex]y(t)[/itex] you will have an infinite impulse response, [itex]h(t)[/itex] , as long as the system is linear!

That, I believe, is my last point of remaining confusion. :redface:

Thanks again for the time you took to write that great explanation! I've printed it out and placed it in my notebook. :biggrin:
 
Last edited:
  • #10
WolfOfTheSteps said:
Thank you so much rbj! I love the axiomatic explanation. You definitely answered my questions I and III.

However, I am still bothered by my original question II. You say:
...if we bang the system with a single impulse and measure or determine the response, due to that sole (and simple) input and measurement or derivation of the output, we have a complete description of the behavior of that system
I realize you are talking about an actual experiment...

not only an actual experiment, but also a theoretical determination or derivation of the impulse response (a.k.a. a "thought experiment").

But let's see if what I'm wondering about makes any sense:

I thought the impulse response made perfect practical and theoretical sense in the discrete case, since we are "banging" the system with a 1, but in the continuous case I can't make sense of this. As you showed, the continuous impulse at 0 is "infinitely thin, but also infinitely tall such that the area is still 1".

When we "bang" the LTI system with [itex]\delta(t)[/itex], are we sending it an infinite number?

Dirac delta functions don't really exactly exist in nature (or physical reality). there is no such thing as an infinite voltage or whatever that would be when you apply a Dirac impulse to the LTI system. but we sort of get close. we apply very thin pulses with a known (and very thin) width in time, and a known area (which would be the same area as the idealized Dirac impulse). if the width of the impulse is very small, but not quite zero, and the area is finite, then the height of the physical nascent impulse is also finite.
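(you can watch that convergence numerically. python sketch, with a made-up e^{-t} example system: as the pulse width a shrinks while the area stays 1, the output approaches h(t):)

[code]
import numpy as np

dt = 0.001
t = np.arange(0.0, 5.0, dt)
h = np.exp(-t)                         # made-up example system: h(t) = e^-t, t >= 0

for a in [0.5, 0.1, 0.01]:             # width shrinks, height 1/a keeps area = 1
    nascent = np.where(t < a, 1.0 / a, 0.0)
    y = np.convolve(nascent, h)[:len(t)] * dt
    print(a, np.max(np.abs(y[t > 0.5] - h[t > 0.5])))   # error shrinks with a
[/code]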
Let's say we know a response to a certain system:

[tex]y(t) = \mathbf{LTI}\{x(t)\} = 5 + 3\cdot x(t)[/tex]

this cannot be the response of a linear system. if x(t) is zero, then the output must also be zero in a linear system. that constant term, 5, is a problem.

Now we try to represent the impulse response for [itex]y(t)[/itex]. Wouldn't it be written as follows?

[tex]h(t) = \mathbf{LTI}\{\delta(t)\} = 5 + 3\cdot \delta(t)[/tex]

Now, it's clear that [itex]h(t)[/itex] will be 0 at [itex]t \neq 0[/itex], but what is [itex]h(0)[/itex] if the continuous impulse is infinite? It seems to me that no matter how you define the response [itex]y(t)[/itex] you will have an infinite impulse response, [itex]h(t)[/itex] , as long as the system is linear!

That, I believe, is my last point of remaining confusion. :redface:

Thanks again for the time you took to write that great explanation! I've printed it out and placed it in my notebook. :biggrin:

after fixing the problem above, try restating your unanswered question.
 
Last edited:
  • #11
I just sort of threw the 5 in there arbitrarily on a whim... You're right, though. I remember reading about the "0 in, 0 out" property. I should have been more careful. I guess I put it there because it makes it more ugly. :smile:

I don't want to test your patience... So if you're tired of the topic by now, read no further. :biggrin: Otherwise, here is what I'm thinking now:


Even if I get rid of the 5, isn't the impulse response still infinite? Say:

[tex]h(t) = \mathbf{LTI}\{\delta(t)\} = 3 \cdot \delta(t)[/tex]

Is the impulse response 3 times infinity? Something is just really weird here. Does everything just sort of get "cleaned up" once we have h(t) under the integral?

The only system I can think of that makes sense with h(t) outside the integral is the identity:

[tex]h(t) = \mathbf{LTI}\{\delta(t)\} = \delta(t)[/tex]

since I guess it makes perfect sense to get the infinite pulse out if you send it in.

Or how about thinking about it physically... If you have an "Ohm's Law system" where y(t) is the voltage and x(t) is the current (in my first example above, the 3 would be the resistance), would getting the impulse response be done by sending this system some huge (as close to infinite as you can get) current? And if so, wouldn't this mean that the response (the voltage) is infinite?! This just seems really weird, and not very practical...
 
  • #12
WolfOfTheSteps said:
I just sort of threw the 5 in there arbitrarily on a whim... You're right, though. I remember reading about the "0 in, 0 out" property. I should have been more careful. I guess I put it there because it makes it more ugly. :smile:

I don't want to test your patience... So if you're tired of the topic by now, read no further. :biggrin: Otherwise, here is what I'm thinking now:


Even if I get rid of the 5, isn't the impulse response still infinite? Say:

[tex]h(t) = \mathbf{LTI}\{\delta(t)\} = 3 \cdot \delta(t)[/tex]

Is the impulse response 3 times infinity? Something is just really weird here. Does everything just sort of get "cleaned up" once we have h(t) under the integral?

The only system I can think of that makes sense with h(t) outside the integral is the identity:

[tex]h(t) = \mathbf{LTI}\{\delta(t)\} = \delta(t)[/tex]

since I guess it makes perfect sense to get the infinite pulse out if you send it in.

Or how about thinking about it physically... If you have an "Ohm's Law system" where y(t) is the voltage and x(t) is the current (in my first example above, the 3 would be the resistance), would getting the impulse response be done by sending this system some huge (as close to infinite as you can get) current? And if so, wouldn't this mean that the response (the voltage) is infinite?! This just seems really weird, and not very practical...

there aren't truly any dirac impulses in the world. but we approximate them, in the limit, with thin little spikes of not-quite-zero width and tall, but finite height. both of your h(t) impulse responses are pretty much identical looking spikes, but the first one is a spike with 3 times as much area in the spike as the second one. it could be the same width and 3 times higher, or the same height and 3 times wider or a little of both.
 
  • #13
Ok. So I guess it's the area of h(t) that matters, not the value... meaning that h(t) really only makes sense under the integral. I think I got it.

Thanks!
 
  • #14
WolfOfTheSteps said:
Ok. So I guess it's the area of h(t) that matters, not the value... meaning that h(t) really only makes sense under the integral.

if you replace "[itex]h(t)[/itex]" with "[itex]\delta(t)[/itex]", that statement would be nearly correct.

your two [itex]h(t)[/itex] functions were for an ideal amplifier with a gain of 3 and a wire (gain of 1). [itex]h(t)[/itex] is generally not a delta function but will ring in some manner, and the characteristics of that ringing [itex]h(t)[/itex] are what determine what your filter or system will do to other input signals. the shape (and all of the values) of [itex]h(t)[/itex] matter.

but, strictly from a mathematical POV, it is true that a Dirac delta function, [itex]\delta(t)[/itex], really only makes sense under an integral. but we Neanderthal enjunnears (yoose two b i cudnt even spel "enjunnear", now i are one), do play fast and loose with the Dirac delta function and use it in expressions that are not (yet) inside an integral. but the Cro-Magnon math guys and us Neanderthals agree that:

[tex] x(t_0) = \int_{-\infty}^{+\infty} x(t) \delta(t - t_0) dt [/tex]

that is fundamental. and even for us Neanderthals, eventually the Dirac delta functions that we play fast and loose with, find their way to an integral which gets evaluated.

One example of this difference in usage is what is sometimes called the "Dirac comb", which is used to model ideal sampling in the Nyquist/Shannon Sampling and Reconstruction Theorem. what the math guys really hate to see is an expression like:

[tex] T \sum_{k=-\infty}^{+\infty} \delta(t - kT) = \sum_{n=-\infty}^{+\infty} e^{j 2 \pi n t/T } [/tex]

i consider that to be a true and valid and useful mathematical fact, despite that it is not inside any integral, and many math guys will say it's meaningless.
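(the neanderthal reasoning behind it, for the record: the left-hand side is periodic with period T, so expand it in a Fourier series. over one period only the k = 0 impulse lands inside the integral, and the sifting property makes every coefficient come out to 1:

[tex] c_n = \frac{1}{T} \int_{-T/2}^{+T/2} T \sum_{k=-\infty}^{+\infty} \delta(t - kT) \ e^{-j 2 \pi n t / T} \ dt = \int_{-T/2}^{+T/2} \delta(t) \ e^{-j 2 \pi n t / T} \ dt = 1 [/tex]

so [itex]\sum_n c_n e^{j 2 \pi n t / T}[/itex] is exactly the right-hand side.)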

I think I got it.

Thanks!

better not thank me, yet (for coming away from this with misconceptions).
 
  • #15
This is a great thread -- thanks rbj. I'm going to post a link to this thread in the PF Tutorials forum.
 
  • #16
rbj said:
if you replace "[itex]h(t)[/itex]" with "[itex]\delta(t)[/itex]", that statement would be nearly correct.
...
[itex]h(t)[/itex] is generally not a delta function but will ring in some manner, and the characteristics of that ringing [itex]h(t)[/itex] is what determines what your filter or system will do to other input signals

Thanks for pointing this out...

But could you possibly give an example of such a response, y(t), written in terms of x(t)? All the examples in my book perform the convolution x(t)*h(t) and get a y(t) that is just some function of t with no obvious relation to x(t).

The reason I ask is because if you can write y(t) in terms of x(t), it seems that substituting [itex]\delta(t)[/itex] for all the x(t) would result in an infinite h(t), which would not be understandable outside the integral...

Furthermore, if for the type of systems you are talking about you can't write y(t) in terms of x(t), then how would the input affect the output?

One example of this difference in usage, is with what is sometimes called the "Dirac comb" and is used to model ideal sampling in Nyquist/Shannon Sampling and Reconstruction Theorem. what the math guys really hate to see is an expression like:

[tex] T \sum_{k=-\infty}^{+\infty} \delta(t - kT) = \sum_{n=-\infty}^{+\infty} e^{j 2 \pi n t/T } [/tex]

i consider that to be a true and valid and useful mathematical fact, despite that it is not inside any integral, and many math guys will say it's meaningless.

I sort of came to EE by way of mathematics, so I find these controversies fascinating. :smile: I originally wanted to major in math, but switched to EE because it's more practical. I've kept a minor in math though. I think I will eventually read up on the details of what the math guys have to say about the delta function (distribution?) just for fun...

better not thank me, yet (for coming away from this with misconceptions).

I'm sure I still have some misconceptions, but I only started signals a week ago... So I guess this is not a bad thing, yet. Anyway, thanks for everything. (all misconceptions are my own :biggrin:)
 
  • #17
WolfOfTheSteps said:
Thanks for pointing this out...

But could you possibly give an example of such a response, y(t), written in terms of x(t)?

we did that, sorta.

it's not an example, but the general formula of y(t) in terms of x(t) (and the filter characteristic which is fully described by h(t)).

[tex] y(t) = \int_{-\infty}^{+\infty} x(\tau) h(t - \tau) d\tau [/tex]

which, if you do a little substitution of variable in the integral, is the same as

[tex] y(t) = \int_{-\infty}^{+\infty} h(\tau) x(t - \tau) d\tau [/tex]

that is what y(t) is in terms of x(t).
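(the substitution spelled out, in case it's not obvious: let [itex]\sigma = t - \tau[/itex], so [itex]\tau = t - \sigma[/itex] and [itex]d\tau = -d\sigma[/itex], and the minus sign flips the limits back:

[tex] \int_{-\infty}^{+\infty} x(\tau) h(t - \tau) d\tau = \int_{+\infty}^{-\infty} x(t - \sigma) h(\sigma) (-d\sigma) = \int_{-\infty}^{+\infty} h(\sigma) x(t - \sigma) d\sigma [/tex] )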


All the examples in my book perform the convolution x(t)*h(t) and get a y(t) that is just some function of t with no obvious relation to x(t).

there's a reason they call it "convolution". it's a little bit convoluted. a "convoluted relationship" is not synonymous with an "obvious relationship".

The reason I ask is because if you can write y(t) in terms of x(t), it seems that substituting [itex]\delta(t)[/itex] for all the x(t) would result in an infinite h(t), which would not be understandable outside the integral...

no. h(t) has its own separate definition. if you substitute [itex]\delta(t)[/itex] for x(t) (a legitimate thing to think about), what comes out for y(t) is h(t). that is (in words), if you input an impulse to the input of an LTI system, what comes out of the output is, by definition, the impulse response. and the convolution integrals above are perfectly consistent with that fact.

Furthermore, if for the type of systems you are talking about you can't write y(t) in terms of x(t), then how would the input affect the output?

of course you can write y(t) in terms of x(t), if you also have a description of the system (linear or not) that defines y(t) in terms of x(t). that is a tautology. if the system is LTI, then the two integral equations above relate y(t) to the input x(t) (or using your words, show how the input affects the output), given the description of the system. not all LTI systems are the same. different LTI systems have different h(t). but if two LTI systems have the same h(t), then we know that they will process the input signal identically and get the same output.

I sort of came to EE by way of mathematics, so I find these controversies fascinating. :smile: I originally wanted to major in math, but switched to EE because it's more practical. I've kept a minor in math though. I think I will eventually read up on the details of what the math guys have to say about the delta function (distribution?) just for fun...

rot's o' ruk. if you go to Wikipedia and check out some of the stuff in the Nyquist/Shannon Sampling Theorem, or the Dirac delta function, you'll see some of my discussion there. (i was [[User:Rbj]] and they have recently kicked me out of Wikipedia.)

probably the best way to understand how we view the Dirac delta differently is to understand the difference between the Riemann integral and the Lebesgue integral. for practical physical systems there is no difference, but the way these two are treated mathematically is much different (for functions that are definable for both, they should give the same result). then go to the Richard Hamming Wikipedia page and see what he says about it, it's kinda good.
 
  • #18
Okay! I think I figured out the answer to my own question! An example is:

[tex]y(t) = \int_{-\infty}^t x(\tau) d\tau[/tex]

So h(t) would be the unit step! i.e.:

[tex]h(t) = \int_{-\infty}^t \delta(\tau) d\tau = u(t)[/tex]
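I even checked it numerically with a quick python sketch (the running sum is my discrete stand-in for the integrator):

[code]
import numpy as np

dt = 0.01
t = np.arange(-2.0, 5.0, dt)

def integrator(x):
    # discrete stand-in for y(t) = integral of x(tau) from -inf to t
    return np.cumsum(x) * dt

delta_approx = np.zeros_like(t)
delta_approx[np.searchsorted(t, 0.0)] = 1.0 / dt   # unit-area spike at t = 0
h = integrator(delta_approx)
print(h[t < 0].max(), h[t > 1].min())   # 0.0 and (approximately) 1.0, i.e. u(t)
[/code]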

I think things are starting to "click" in my brain, and I'm actually starting to feel comfortable with the convolution and impulse response!

This thread has been awesome. :biggrin:

Edit:

I just saw your new post after I posted this. Nothing surprised me in it, so I think I'm good now. And I'm pretty confident that what I say above (in this post) is true and makes sense. If not, you're welcome to correct me, if you have the time. Thanks.
 
Last edited:
  • #19
rbj said:
we did that, sorta.

it's not an example, but the general formula of y(t) in terms of x(t) (and the filter characteristic which is fully described by h(t)).

...


Are you a signals instructor of some sort? My god... I wish you would have taught my signals class.
 
  • #20
long ago, i used to teach at the U of Southern Maine (1990). but i didn't complete my Ph.D. and with the present glut of Ph.D.s, they felt that they could do better.

i'm the signal processing department at Kurzweil Music Systems (synthesizers and audio effects). I'm also listed on the Review Board of the Journal of the Audio Engineering Society (there's a web page you can find). with my initials, it should be obvious which one i am.

i know i could run circles around a lot of faculty teaching this stuff (because, as a life-long student, i also ask these basic questions until they get answered to my satisfaction) but there is, since the 60's, a different (and false) economy in higher education about this. what matters more to EE departments is a Ph.D. and the quantity of publication.

i'm not advocating much of a change (but a little bit of a reversion). valid credentials are important. Ph.D.s have value. but their value is not absolute, yet are treated as such by institutions of higher education. without a Ph.D., i probably couldn't even teach at a mill like DeVry.
 
  • #21
FrogPad said:
That is my number one complaint.

my signals and systems stuff was explained very rigorously in my circuits 1 and 2 courses.
 
  • #22
leright said:
my signals and systems stuff was explained very rigorously in my circuits 1 and 2 courses.

Sounds like you had a good circuits 1 and 2 course then.

In circuits-1 we stuck with Kirchhoff's laws, methods to solve circuits (e.g. nodal analysis), and some transient stuff (I'm sure there is more... but I forget).

In circuits-2 we covered basic power systems, Laplace transforms (basically how to apply them), transfer functions, and we just glossed over convolution.

Our signals class followed Oppenheim for the most part. I hated the class because my professor taught it like a toolbox course, i.e. methods for solving a class of problems. She was NOT rigorous in her teaching at all. At one point she said... "ahh... it is too late in the day for a proof"

Anyways, sounds like you had a good prof. leright.
 
  • #23
Rbj,

Just curious... Are there any excellent introductory signals/linear systems books you would recommend?
 
  • #24
FrogPad said:
Sounds like you had a good circuits 1 and 2 course then.

In circuits-1 we stuck with Kirchhoff's laws, methods to solve circuits (e.g. nodal analysis), and some transient stuff (I'm sure there is more... but I forget).

In circuits-2 we covered basic power systems, Laplace transforms (basically how to apply them), transfer functions, and we just glossed over convolution.

Our signals class followed Oppenheim for the most part. I hated the class because my professor taught it like a toolbox course, i.e. methods for solving a class of problems. She was NOT rigorous in her teaching at all. At one point she said... "ahh... it is too late in the day for a proof"

Anyways, sounds like you had a good prof. leright.

very good prof. he was very thorough and efficient with his teaching. most of the signals and systems stuff was blocked in with the circuits courses in my curriculum. I never took a stand alone signals and systems course.
 
  • #25
leright said:
very good prof. he was very thorough and efficient with his teaching. most of the signals and systems stuff was blocked in with the circuits courses in my curriculum. I never took a stand alone signals and systems course.

This is interesting... At my school we have only 1 quarter of circuits, and we have a quarter of signals/system that is completely separate.

I'm guessing it might be better to teach it in the context of something like circuits, in order to give the students something tangible to latch on to. Oppenheim lays it out almost purely as an abstract subject... (which--being something of a math oriented fellow--I actually enjoy in a twisted sort of way :)
 
  • #26
WolfOfTheSteps said:
This is interesting... At my school we have only 1 quarter of circuits, and we have a quarter of signals/system that is completely separate.

I'm guessing it might be better to teach it in the context of something like circuits, in order to give the students something tangible to latch on to. Oppenheim lays it out almost purely as an abstract subject... (which--being something of a math oriented fellow--I actually enjoy in a twisted sort of way :)

yeah, I had one 4-credit circuits 1 course and one 3-credit circuits 2 course. These courses collectively covered all of the stuff on DC resistive networks, transient responses, capacitance, inductance, Laplace transforms and s-domain analysis, system theory, Fourier analysis and frequency response analysis, 2-port networks and many other things. But I never had a standalone systems class.
 
  • #27
WolfOfTheSteps said:
Rbj,

Just curious... Are there any excellent introductory signals/linear systems books you would recommend?

i'm on the road at the moment.

i can only think of Oppenheim and Willsky. there is another one by Orfanidis that has a nice connection to audio that i like.

dunno who else at the moment.
 
  • #29
Thanks for the links, FrogPad.

I've read through chapter 4 of Oppenheim so far... It's actually starting to grow on me. I think the Fourier analysis stuff is much easier to understand than the convolution was. (although I find the discrete side a bit more obscure than the continuous for some reason)

Also, I've been using the MIT opencourseware problem sets and solutions, which have been really useful for me. If anyone is interested, you can find them here:

http://ocw.mit.edu/OcwWeb/Electrical-Engineering-and-Computer-Science/6-003Fall-2003/CourseHome/index.htm

The solutions to the homework problems are written very well. Also, the class notes aren't bad.
 
Last edited by a moderator:
  • #30
study the convolution in more detail

I want to study convolution in more detail, so could you provide me with any links to a brief treatment of convolution?

thanks a lot!
 
  • #31
T.Engineer said:
I want to study convolution in more detail, so could you provide me with any links to a brief treatment of convolution?

thanks a lot!

More detail than what rbj posted?
 
  • #32
T.Engineer said:
I want to study convolution in more detail, so could you provide me with any links to a brief treatment of convolution?

Frogpad is right, you probably aren't going to find a better explanation that is as concise and to the point as what rbj posted. But here are some links anyway:

  • An example of computing the convolution of two signals: http://cnx.org/content/m11541/latest/
  • A pretty cool interactive demo for gaining a good visual intuition of the convolution (continuous time): http://www.jhu.edu/~signals/convolve/index.html
  • The same "slider" demo for the discrete-time case: http://www.jhu.edu/~signals/discreteconv2/index.html
  • You can also check out the EE 20 and EE 120 lectures at http://webcast.berkeley.edu/courses.php?semesterid=22 (I'm not sure exactly where in the videos he talks about the convolution, though.)
  • And of course there are the problem sets and solutions at the MIT Open Courseware site: http://ocw.mit.edu/OcwWeb/Electrical-Engineering-and-Computer-Science/6-003Fall-2003/CourseHome/index.htm (The homework solutions are exceptionally well written!)

But for a great derivation, I've found nothing better than what rbj posted here!
 
Last edited by a moderator:
  • #33
this is an amazing tutorial - thanks a lot rbj
wolf of the steps - haven't i seen you somewhere? :P
 
  • #34
trickae said:
this is an amazing tutorial - thanks a lot rbj
wolf of the steps - haven't i seen you somewhere? :P

Who me? You must be thinking of someone else. :biggrin:
 

1. What is convolution and how does it work?

Convolution is a mathematical operation that combines two functions to produce a third function. It involves multiplying one function by a reversed and shifted version of the other function and then integrating the product. This process is used to analyze and manipulate signals and images in various fields such as signal processing, image processing, and physics.
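For illustration, here is a minimal, unoptimized Python sketch of that "reverse, shift, multiply, sum" recipe for finite discrete sequences (NumPy's built-in np.convolve computes the same thing):

[code]
import numpy as np

def convolve_by_definition(x, h):
    # y[n] = sum over m of x[m] * h[n - m]:
    # h is flipped and slid across x; overlapping products are summed
    n_out = len(x) + len(h) - 1
    y = np.zeros(n_out)
    for n in range(n_out):
        for m in range(len(x)):
            if 0 <= n - m < len(h):
                y[n] += x[m] * h[n - m]
    return y

x = np.array([1.0, 2.0, 3.0])
h = np.array([0.0, 1.0, 0.5])
print(np.allclose(convolve_by_definition(x, h), np.convolve(x, h)))  # True
[/code]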

2. What is the purpose of convolution in science?

Convolution is used to extract useful information from signals or images. It helps in analyzing and understanding the characteristics of a signal or image, such as its frequency components or spatial features. This information can then be used for further processing or analysis.

3. Can you provide an example of convolution in action?

One example is using convolution to blur an image. The blurred image is created by convolving the original image with a blur kernel, which is a small matrix of numbers that determines how much each pixel in the original image contributes to the blurred pixel. This process is also used in edge detection and noise reduction in image processing.
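A minimal sketch of such a blur in Python (assuming SciPy is available; the 3x3 box kernel is just one simple choice of blur kernel):

[code]
import numpy as np
from scipy.signal import convolve2d

image = np.random.rand(64, 64)    # stand-in for a grayscale image
kernel = np.ones((3, 3)) / 9.0    # 3x3 box blur: each output pixel averages
                                  # its 3x3 neighborhood in the input
blurred = convolve2d(image, kernel, mode='same', boundary='symm')
[/code]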

4. How is convolution related to the Fourier transform?

The Fourier transform is a mathematical tool used to decompose a signal or image into its frequency components. Convolution is closely related to the Fourier transform because convolution in the time/space domain is equivalent to multiplication in the frequency domain. This relationship allows us to use convolution to manipulate signals or images in the frequency domain, which can be more efficient than working in the time/space domain.
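A quick numerical check of that equivalence in Python (zero-padding both FFTs to the full linear-convolution length so the circular product matches the linear result):

[code]
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([0.5, -0.5, 1.0])

n = len(x) + len(h) - 1                  # full linear-convolution length
direct = np.convolve(x, h)
via_fft = np.fft.ifft(np.fft.fft(x, n) * np.fft.fft(h, n)).real
print(np.allclose(direct, via_fft))      # True
[/code]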

5. Are there any limitations or drawbacks to using convolution?

One limitation of convolution is that it only describes linear, time-invariant behavior: the system's properties must not change over time or space. This may not be the case in some real-world scenarios, leading to inaccurate results. Additionally, convolution can be computationally expensive, especially for large signals or images, which can make it impractical for real-time applications.
