Fourier Analysis, definition of convolution

1. Oct 1, 2008

Narcol2000

I'm having a hard time understanding an aspect of the definition of the convolution of two functions. Here is the lead-up to its definition...

It goes on to discuss what the observed distribution h(z) will be if we try to measure f(x) with an apparatus with resolution function g(y), and tries to justify why h(z) is defined as the convolution of the functions f and g. However, I have a problem with one of its statements. The book says:

Here is the part I have an issue with.

Why is the probability of a reading being between x and x + dx equal to f(x) dx? I thought f(x) was an arbitrary function representing a relation between an independent variable x and the observable f(x). Why is it being treated as a probability density function?

If I just accept f(x) dx as being the probability of a true reading lying between x and x + dx, then I can understand how the definition of h(z) as the convolution follows from this. But I just don't understand how an arbitrary function can be treated as a probability density. If f(x) = x (or any function that doesn't have a finite integral between -∞ and +∞), surely this argument doesn't hold, meaning that f(x) can't be arbitrary. Yet if f(x) is an observable quantity, surely it must be possible for it to have an arbitrary dependency on x.

Last edited: Oct 1, 2008
2. Oct 1, 2008

Yeah, that stuff about probability seems kind of hand-wavey and unnecessary. Generally speaking, convolution is defined over a much larger class of functions than pdf's. In the case that both f(x) and g(x) are positive and integrable, though, this derivation is fine, and is an interesting interpretation of convolution.

The derivations of the convolution integral that I'm more familiar with start out with linear, time-invariant systems, then work out impulse response, and then put the whole thing together. This is also kinda hand-wavey, since it typically uses Dirac deltas, but is still more general and explicit than the derivation in the OP (which doesn't even state the linearity assumptions used in the measurement model).
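To make the LTI route concrete, here is a minimal sketch (my own illustration, not from the OP's book): record a system's impulse response, then check that the system's output for any input equals the discrete convolution of that input with the impulse response. The `system` here is a hypothetical 3-tap smoothing filter chosen just for the demo.

```python
def convolve(a, b):
    """Plain discrete convolution: (a * b)[n] = sum_k a[k] * b[n - k]."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def system(x):
    """Hypothetical LTI system: a 3-tap smoothing filter."""
    return convolve(x, [0.25, 0.5, 0.25])

# Impulse response h: the system's output for a unit impulse.
h = system([1.0])  # h is just the filter taps: [0.25, 0.5, 0.25]

# For any input, running the system is the same as convolving with h.
x = [1.0, -2.0, 3.0]
assert system(x) == convolve(x, h)
```

Note that no probabilistic assumptions appear anywhere: only linearity and time-invariance are used, which is why this derivation covers a much wider class of functions than pdf's.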

3. Oct 1, 2008

marcusl

I find the wording in your text quite confusing, but the concept is sound and worth understanding. It might be easier to think of a concrete example--capturing ("measuring") an image with a lens. f is intensity of the scene as a function of position x (actually we need x and y). The lens forms an image with a resolution given by its "point spread function" g(x,y), which is an Airy disk--the little round diffraction pattern that arises from the circular aperture.

If we turn our lens to the skies, then a delta-function or point excitation (starlight) produces exactly g. But an extended object or scene f(x,y) produces an image f*g, where * is a convolution. This is the sense in which instruments introduce a response function.
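A toy version of this imaging picture (my construction, not marcusl's, with a made-up 3x3 blur kernel standing in for the Airy disk): a 2D convolution of a scene with a point-spread function g, where a single point source ("star") reproduces g exactly.

```python
def convolve2d(scene, psf):
    """Full 2D discrete convolution of two row-major grids."""
    H, W = len(scene), len(scene[0])
    h, w = len(psf), len(psf[0])
    out = [[0.0] * (W + w - 1) for _ in range(H + h - 1)]
    for i in range(H):
        for j in range(W):
            for a in range(h):
                for b in range(w):
                    out[i + a][j + b] += scene[i][j] * psf[a][b]
    return out

# Hypothetical normalized 3x3 blur standing in for an Airy disk.
psf = [[0.0625, 0.125, 0.0625],
       [0.125,  0.25,  0.125],
       [0.0625, 0.125, 0.0625]]

# A point source (delta-function scene): the image is exactly the PSF.
star = [[1.0]]
assert convolve2d(star, psf) == psf
```

An extended scene (a larger grid with many nonzero pixels) run through the same `convolve2d` comes out blurred by g, which is the sense in which the instrument introduces its response function.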

4. Oct 1, 2008

Narcol2000

Thanks for clearing that up. Saying that f(x) is an experimental observable, yet giving a definition that restricts f(x) to being a pdf, is a recipe for confusion!

I sought out a derivation http://cnx.org/content/m10085/latest/ along the lines of what quadraphonics suggested, and I must say I prefer that more general derivation. It would have been far better for the book to lead with the more general definition and then provide an example like the one marcusl mentioned.

thx again.

Last edited: Oct 1, 2008
5. Oct 1, 2008

One comment on the $$f(x) \, dx$$ line - in introductory mathematical statistics, we introduce the idea that if $$f(x)$$ is a density for a continuous random variable, the quantity $$f(x) \, dx$$ can be considered to be the probability of finding an observation between $$x$$ and $$x + dx$$. It then leads to a heuristic explanation of why $$\Pr(a \le X \le b)$$ is given by

$$\int_a^b f(x) \, dx$$

If the snippet provided by the OP relates to some probability-related application, that could be the tie-in. I must say, however, that I've never seen it used as a lead-in to convolution; I agree that without greater context it is odd.
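The standard place where convolution does meet densities is the sum of independent random variables: the density of X + Y is the convolution of the two densities. A quick numeric sketch (my own illustration): with X and Y both uniform on [0, 1], the density of the sum is triangular and peaks at h(1) = 1, which a Riemann-sum approximation of the convolution integral confirms.

```python
# Sample both uniform densities on a grid of step dx over [0, 1].
dx = 0.001
n = int(1 / dx)
f = [1.0] * n  # density of X ~ Uniform[0, 1]
g = [1.0] * n  # density of Y ~ Uniform[0, 1]

# Riemann-sum approximation of h(z) = integral f(x) g(z - x) dx at z = 1,
# where the grid index n - 1 - k corresponds to the point z - x.
h_at_1 = sum(f[k] * g[n - 1 - k] for k in range(n)) * dx

# The triangular density of X + Y peaks at h(1) = 1.
assert abs(h_at_1 - 1.0) < 0.01
```

If the OP's book is a data-analysis text, this random-variable interpretation is presumably the tie-in it had in mind: the "true" value and the instrument error are treated as independent random variables, and the observed distribution is the convolution of their densities.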
