What are the interpretations of Convolution integral?

  • #1
[Attached image: formula_convolution.png]

Physically or mathematically, what does the Convolution integral compute?
 
  • #2
The usefulness of the convolution integral comes from its relation to the Laplace (or Fourier) transform: multiplication in the ##s## domain corresponds to convolution in the time domain, and vice versa.

In electrical engineering, every system has an associated impulse response ##h(t)##. It can be shown that, given some input signal ##x(t)## to a linear time invariant system, the system's output ##y(t)## is given by
$$y(t) = x(t) * h(t)$$
i.e. the convolution of the input with the impulse response.

Correspondingly, that means that if you find the Laplace (or Fourier) transform of ##h(t)##, denoted ##H(s)##, then given some input signal ##X(s)##, the output is $$Y(s) = X(s) H(s)$$ Multiplication is a lot easier to do than convolution, and once you have the product, you can take the inverse Laplace transform to recover the output signal.
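Here is a minimal discrete sketch of that relationship (the sample values are made up purely for illustration): convolving an input with an impulse response in the time domain gives the same result as multiplying their DFTs and transforming back.

[code]
import numpy as np

# Hypothetical discrete example of y = x * h
x = np.array([1.0, 2.0, 0.5, -1.0])   # input samples (made up)
h = np.array([0.5, 0.3, 0.2])         # impulse response samples (made up)

# Time-domain convolution
y_time = np.convolve(x, h)

# Transform route: multiply the DFTs, then invert. Zero-pad to the full
# output length so the circular convolution matches the linear one.
n = len(x) + len(h) - 1
y_freq = np.fft.ifft(np.fft.fft(x, n) * np.fft.fft(h, n)).real

print(np.allclose(y_time, y_freq))    # True
[/code]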
 
  • #3
Let's look at multiplying sums. You have

[tex](a_0 + a_1 + a_2)(b_0 + b_1 + b_2) = a_0b_0 + (a_0b_1 + a_1b_0) + (a_0b_2 + a_1b_1 + a_2b_0) + (a_1b_2 + a_2b_1) + a_2b_2[/tex]

Hmm, let's generalize this:

[tex]\left(\sum_{n=0}^N a_n\right)\left(\sum_{m=0}^N b_m\right) = \sum_{k=0}^{2N} c_k[/tex]

where

[tex]c_k = \sum_{i=0}^k a_i b_{k-i}[/tex]

(with the convention that ##a_i = b_i = 0## whenever ##i > N##).

We can generalize this to series too:

[tex]\left(\sum_{n=0}^\infty a_n\right)\left(\sum_{m=0}^\infty b_m\right) = \sum_{k=0}^\infty c_k[/tex]

with

[tex]c_k = \sum_{i=0}^k a_i b_{k-i}[/tex]

The convolution product is merely the continuous generalization of this: we replace the sum by an integral:

[tex](f * g)(\tau) = \int_{-\infty}^{\infty} f(t)\, g(\tau - t)\, dt[/tex]

So we can simply see the convolution as a generalization of the distributive law.
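To make the analogy concrete, here is a small sketch (coefficient values are made up) showing that the ##c_k## formula is exactly how polynomial coefficients multiply, and that NumPy's discrete convolution computes the same thing:

[code]
import numpy as np

a = [1, 4, 2]   # a_0, a_1, a_2
b = [3, 0, 5]   # b_0, b_1, b_2

# c_k = sum_i a_i * b_{k-i}, treating out-of-range coefficients as zero
K = len(a) + len(b) - 1
c = [sum(a[i] * b[k - i] for i in range(len(a)) if 0 <= k - i < len(b))
     for k in range(K)]

print(c)                           # [3, 12, 11, 20, 10]
print(np.convolve(a, b).tolist())  # same list: discrete convolution is this coefficient product
[/code]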
 
  • #4
Thank you, that's very helpful!
 
  • #5
I've heard that convolution calculates the area of overlap between two functions. Is this true? If so, how does convolution do that?
 
  • #6
You might also want to think about how micromass's answer relates to frequencies (and probabilities) as well. There are connections to probability theory, in particular to finding the distribution of the sum of two independent random variables.
 
  • #7
Regarding the "area of overlap" question: it's easier for me to think in discrete terms sometimes. The cyclic convolution of two vectors is a vector of the same length, and each entry is the dot product of the first vector with a flipped-and-shifted copy of the second. So if the inputs are <1,2,3> and <4,5,6>, cyclically flip the second to get <4,6,5>; the output is then the 3 numbers <<1,2,3>·<4,6,5>, <1,2,3>·<5,4,6>, <1,2,3>·<6,5,4>> = <31,31,28>, where · means dot product.

In a sense it's like correlating the vectors at each possible offset (correlation uses the same dot products, just without the flip).

By taking the Discrete Fourier Transform (DFT) of the two vectors, multiplying the two transforms entry by entry, and taking the inverse DFT of the result, you get the same answer. This comes down to a convenient property of sines and cosines (the convolution theorem for the DFT). Play around and you will see it.

Once you see that, replace the vectors with functions to see the larger picture of the continuous convolution; for me at least, that makes it clearer. I hope that helps.
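A quick sketch of both routes for the vectors above (the DFT route needs the real part rounded off because of floating-point noise):

[code]
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
N = len(a)

# Direct circular convolution: c[k] = sum_i a[i] * b[(k - i) mod N]
c_direct = np.array([sum(a[i] * b[(k - i) % N] for i in range(N)) for k in range(N)])

# Same result via the DFT: multiply the transforms entry by entry, then invert
c_dft = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real.round(6)

print(c_direct)  # [31 31 28]
print(c_dft)     # [31. 31. 28.]
[/code]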
 
  • #8
Expanding on the impulse-response idea from #2: what you typically do to find the coefficients that define a filter is to apply a single impulse of magnitude 1 and measure the values that follow for each sample. For instance, you may get coefficients like 0.9, 0.6, 0.4, 0.3, 0.25, ... and this is what you convolve the input with; it makes the impulse decay slowly. If the input is a high frequency sine wave, the successive samples weighted by these slowly decaying coefficients largely cancel each other out, and you are left with very little. If the sine wave is at a low enough frequency, the input passes through minimally altered. This is an example of an elementary low pass filter (see the sketch below).

But that is a really bad low pass filter. If you want a really good low pass filter, you sample a sinc(x) function and use that for the impulse response. For some reason (that I would really like to know) this forms a rock solid low pass filter.
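As a rough sketch of that elementary filter (the sample rate and test frequencies are assumed purely for illustration): convolving a slow 5 Hz sine and a fast 400 Hz sine with those decaying coefficients shows the slow one passing through while the fast one is noticeably attenuated, though far from eliminated, since this is, as noted, a poor low pass filter.

[code]
import numpy as np

# The decaying impulse response described above
h = np.array([0.9, 0.6, 0.4, 0.3, 0.25])

fs = 1000                              # assumed sample rate, Hz
t = np.arange(0, 1, 1 / fs)
low = np.sin(2 * np.pi * 5 * t)        # slow input
high = np.sin(2 * np.pi * 400 * t)     # fast input

# Applying the filter is just convolving the input with the impulse response
low_out = np.convolve(low, h, mode='same')
high_out = np.convolve(high, h, mode='same')

print(np.abs(low_out).max())   # roughly sum(h) = 2.45: the slow sine passes
print(np.abs(high_out).max())  # several times smaller: the fast samples partly cancel
[/code]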
 
  • #9
The Fourier transform of a sinc function is a rectangular pulse: in the frequency domain you get one band with a magnitude of 1, and zero at all other frequencies. This would be an ideal low pass filter.

Clearly this is not really constructable in the real world: the sinc function extends infinitely in both directions, so if the impulse response were a true sinc, the system would have to start responding before the delta function input arrives! Such a system is called non-causal (needless to say, we cannot build one). There are filters that try to approximate the ideal response as closely as possible, each with its own advantages and disadvantages. See Butterworth filters, Chebyshev filters, etc.

When you speak of sampling a sinc function, though, that's a whole different ballgame: you have to truncate (and usually window) it, which is what makes the filter realizable but only approximately ideal.
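A minimal sketch of that windowed-sinc approach (the sample rate, cutoff, and tap count are assumptions chosen for illustration): the truncated, windowed, sampled sinc gives a frequency response close to 1 below the cutoff and close to 0 above it.

[code]
import numpy as np

fs = 1000    # assumed sample rate, Hz
fc = 100     # assumed cutoff frequency, Hz
M = 101      # assumed number of taps (odd, so the filter is symmetric)

n = np.arange(M) - (M - 1) / 2
# Sampled ideal low pass impulse response, truncated and Hamming-windowed
# to make it finite (and, after a delay of (M-1)/2 samples, causal)
h = (2 * fc / fs) * np.sinc(2 * fc / fs * n) * np.hamming(M)

# Inspect the magnitude response on a fine frequency grid
H = np.abs(np.fft.rfft(h, 4096))
freqs = np.fft.rfftfreq(4096, d=1 / fs)
print(H[freqs < 80].min())    # close to 1: passband
print(H[freqs > 130].max())   # close to 0: stopband
[/code]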
 
  • #10
The convolution integral basically tells you how one function f gets smeared out, or filtered, by another function g.

For example, in electrical engineering you might have some analog device and you want to know what the output will be for some input signal. The device is characterized by its impulse response (whose Laplace transform is the transfer function), and the input signal is a function of time. The output is the convolution of the input signal with the impulse response.

Turning convolution into multiplication is one of the primary motivations for the Laplace transform. Doing the convolution directly may be very cumbersome in the time domain, but if you take the Laplace transforms of the two functions, multiply them, and then take the inverse Laplace transform of the product, you get back the convolution. This is usually much easier because most of the signals you'll encounter take only a small number of forms (exponentials, polynomials, sinusoids, impulses, and constants), so taking the transform is just a matter of looking at a table.
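As a small symbolic sketch of that route (the signals ##x(t) = e^{-t}## and ##h(t) = e^{-2t}## are just convenient examples): computing the convolution integral directly and going through the Laplace domain give the same answer.

[code]
import sympy as sp

t, tau, s = sp.symbols('t tau s', positive=True)

x = sp.exp(-t)      # example input signal
h = sp.exp(-2 * t)  # example impulse response

# Direct time-domain convolution: y(t) = integral_0^t x(tau) h(t - tau) dtau
y_direct = sp.integrate(x.subs(t, tau) * h.subs(t, t - tau), (tau, 0, t))

# Laplace route: multiply the transforms, then invert
X = sp.laplace_transform(x, t, s, noconds=True)   # 1/(s + 1)
H = sp.laplace_transform(h, t, s, noconds=True)   # 1/(s + 2)
y_laplace = sp.inverse_laplace_transform(X * H, s, t)

print(sp.simplify(y_direct))   # exp(-t) - exp(-2*t) (in some equivalent form)
print(sp.simplify(y_laplace))  # same, possibly with a Heaviside(t) factor
[/code]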
 
