- Thread starter: henry wang

- #1

Physically or mathematically, what does the Convolution integral compute?

- #2

In electrical engineering, every system has an associated impulse response ##h(t)##. It can be shown that, given some input signal ##x(t)## to a linear time invariant system, the system's output ##y(t)## is given by

$$y(t) = x(t) * h(t)$$

i.e. the convolution of the input with the impulse response.

Correspondingly, if you take the Laplace (or Fourier) transform of ##h(t)##, denoted ##H(s)## and called the system's transfer function, then for an input whose transform is ##X(s)## the output is $$Y(s) = X(s) H(s)$$ Multiplication is a lot easier to do than convolution, and once you have the product, you just take the inverse Laplace transform to recover the output signal.
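To make the time-domain formula concrete, here is a minimal numerical sketch. The specific system is an assumption chosen for illustration: a first-order system with impulse response ##h(t) = e^{-t}## driven by a unit step, for which the exact output is ##y(t) = 1 - e^{-t}##:

```python
import numpy as np

# Assumed example system: h(t) = exp(-t) for t >= 0, unit-step input.
dt = 1e-3
t = np.arange(0.0, 5.0, dt)
h = np.exp(-t)                 # sampled impulse response
x = np.ones_like(t)            # unit-step input x(t) = 1, t >= 0

# Riemann-sum approximation of y(t) = integral of x(s) h(t - s) ds
y = np.convolve(x, h)[:len(t)] * dt

# Exact answer for this system: y(t) = 1 - exp(-t)
err = np.max(np.abs(y - (1 - np.exp(-t))))
print(err)   # small discretization error
```

The same output follows from the transform route: ##Y(s) = \frac{1}{s}\cdot\frac{1}{s+1}##, whose inverse transform is ##1 - e^{-t}##.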

- #3

[tex](a_0 + a_1 + a_2)(b_0 + b_1 + b_2) = a_0b_0 + (a_0b_1 + a_1b_0) + (a_0b_2 + a_1b_1 + a_2b_0) + (a_1b_2 + a_2b_1) + a_2b_2[/tex]

where the terms are grouped by the total subscript ##k = i + j##. Hmm, let's generalize this:

[tex]\left(\sum_{n=0}^N a_n\right)\left(\sum_{m=0}^N b_m\right) = \sum_{k=0}^{2N} c_k[/tex]

where

[tex]c_k = \sum_{i=0}^k a_i b_{k-i}[/tex]

with the convention that ##a_i = b_i = 0## for ##i > N##. We can generalize this to series too:

[tex]\left(\sum_{n=0}^\infty a_n\right)\left(\sum_{m=0}^\infty b_m\right) = \sum_{k=0}^\infty c_k[/tex]

with

[tex]c_k = \sum_{i=0}^k a_i b_{k-i}[/tex]

(this is the Cauchy product of the two series). The convolution integral is the continuous generalization of this: the discrete index ##k## becomes the continuous variable ##\tau##, the sum becomes an integral, and the pairing ##i + (k - i) = k## becomes ##t + (\tau - t) = \tau##:

[tex](f * g)(\tau) = \int_{-\infty}^{\infty} f(t)\, g(\tau - t)\, dt[/tex]

So we can see convolution as a generalization of the distributive law: it collects all the products of pieces of ##f## and ##g## whose arguments add up to ##\tau##.
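As a sanity check on the finite-sum version, polynomial multiplication really does convolve the coefficient lists; here is a minimal sketch (the coefficients are arbitrary):

```python
import numpy as np

# (1 + 2x + 3x^2)(4 + 5x + 6x^2): the product's coefficients c_k
# are the convolution c_k = sum_i a_i b_{k-i} of the coefficient lists.
a = [1, 2, 3]
b = [4, 5, 6]

c = np.convolve(a, b)                        # coefficient convolution
p = np.polynomial.polynomial.polymul(a, b)   # direct polynomial product

print(list(c))            # [4, 13, 28, 27, 18]
print(np.allclose(c, p))  # True
```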

- #4

Thank you that's very helpful!


- #7

It's easier for me to think in discrete terms sometimes. The cyclic convolution of two vectors is a vector of the same length, and each number in it is the dot product of one vector with a cyclically shifted copy of the other. So if the inputs are <1,2,3> and <4,5,6>, the output is the three numbers <<1,2,3>·<4,5,6>, <1,2,3>·<6,4,5>, <1,2,3>·<5,6,4>>, where · means the dot product. (Strictly speaking, these plain shifted dot products compute the circular correlation; circular convolution reverses one of the vectors before shifting, but the idea is the same.)

In a sense it's the correlation of the vectors at each possible offset.

By taking the Discrete Fourier Transform (DFT) of the two vectors, multiplying them entry by entry, and taking the inverse DFT of the resulting vector, you get the same result. This "weird property" of sines and cosines is the convolution theorem. Play around and you will see it.

Once you see that, replace the vectors with functions to see the larger picture of the continuous convolution; for me at least, it's clearer that way. I hope that helps.
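A minimal sketch of this picture, using the same <1,2,3> and <4,5,6>; note the direct sum below uses the convolution indexing (one vector effectively reversed relative to the plain shifts), so that it matches what the DFT route computes:

```python
import numpy as np

# Circular convolution by shifted dot products vs. the DFT route
# (multiply the transforms entry by entry, inverse-transform the product).
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
n = len(a)

# Direct circular convolution: c_k = sum_i a_i * b_{(k - i) mod n}
direct = np.array([sum(a[i] * b[(k - i) % n] for i in range(n))
                   for k in range(n)])

# Same result via the DFT -- this is the convolution theorem
via_dft = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

print(direct)                          # [31. 31. 28.]
print(np.allclose(direct, via_dft))    # True
```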

- #8

TheDemx27

Gold Member

Expanding on this: what you typically do to find the coefficients that define a filter is to apply a single impulse of magnitude 1 and measure the values that follow, one per sample. For instance, you may get coefficients like 0.9, 0.6, 0.4, 0.3, 0.25, ... and this is what you convolve the input with. These coefficients cause each impulse to decay slowly. If you imagine that the input is a high-frequency sine wave, the overlapping decaying responses effectively cancel each other out, and you are left with almost nothing. If the sine wave has a low enough frequency, the input passes through minimally altered. This is an example of an elementary low-pass filter.

But that is a really bad low-pass filter. If you want a really good low-pass filter, you sample a sinc(x) function and use that as the impulse response. For some reason (that I would really like to know) this forms a rock-solid low-pass filter.
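A minimal sketch of that elementary filter, using the decaying coefficients from the post (the test frequencies and the normalization are my own assumptions, added purely for illustration):

```python
import numpy as np

# Crude FIR low-pass built from the decaying coefficients described above.
h = np.array([0.9, 0.6, 0.4, 0.3, 0.25])
h = h / h.sum()                       # normalize so the DC gain is 1

n = np.arange(2000)
low = np.sin(2 * np.pi * 0.01 * n)    # low-frequency sine
high = np.sin(2 * np.pi * 0.45 * n)   # near-Nyquist sine

out_low = np.convolve(low, h, mode="same")
out_high = np.convolve(high, h, mode="same")

# The low-frequency tone passes almost unchanged; the high one is attenuated.
print(out_low.max(), out_high.max())
```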

- #9

But that is a really bad low-pass filter. If you want a really good low-pass filter, you sample a sinc(x) function and use that as the impulse response. For some reason (that I would really like to know) this forms a rock-solid low-pass filter.

The Fourier transform of a sinc function is a rectangular pulse: in the frequency domain you've got one band with a magnitude of 1, and it is zero for all other frequencies. That would be an ideal low-pass filter.

Clearly this is not constructable in the real world: if the impulse response were a sinc function, the system would have to start responding before the delta-function input arrives, because sinc is nonzero for negative time. Such a system is called non-causal (needless to say, we cannot construct such systems). There are filters that try to approximate the ideal response as closely as possible, and they all have advantages and disadvantages. See: Butterworth filters, Chebyshev filters, etc.

When you speak of sampling a sinc function, though, that's a whole different ballpark.

- #10

Physically or mathematically, what does the Convolution integral compute?

The convolution integral tells you how one function f is smeared out, or weighted, by another function g: at each point it accumulates the overlap of f with a flipped and shifted copy of g.

For example, in electrical engineering you might have some analog device and you want to know what the output will be for some input signal. The device can be described by its impulse response (its transfer function, in the transform domain) and the input signal by a function of time. The output is given by the convolution of the input signal with the impulse response.

Turning convolution into multiplication is one of the primary motivations for the Laplace transform. Doing the convolution directly may be very cumbersome in the time domain, but when you take the Laplace transforms of the two functions, multiply the two transformed functions, and then take the inverse Laplace transform of the product, you get back the convolution. This is usually much easier, because most of the signals you'll encounter take only a small number of forms (exponentials, polynomials, sinusoids, impulses, and constant functions), so taking the transform is just a matter of looking at a table.
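Here is a minimal sketch of that workflow with SymPy; the particular signals ##h(t) = e^{-t}## and ##x(t) = e^{-2t}## are assumptions chosen so that the table lookups are trivial:

```python
import sympy as sp

# Assumed example: h(t) = exp(-t), x(t) = exp(-2t), both zero for t < 0.
t, s = sp.symbols("t s", positive=True)

H = sp.laplace_transform(sp.exp(-t), t, s, noconds=True)      # 1/(s + 1)
X = sp.laplace_transform(sp.exp(-2 * t), t, s, noconds=True)  # 1/(s + 2)

# Multiply in the s-domain, then invert to recover the convolution x * h.
y = sp.inverse_laplace_transform(H * X, s, t)

print(sp.simplify(y))  # equivalent to exp(-t) - exp(-2*t) for t > 0
```

Computing the same thing directly as ##\int_0^t e^{-\tau} e^{-2(t-\tau)}\,d\tau## gives the identical ##e^{-t} - e^{-2t}##, but the transform route skips the integral entirely.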
