I'm having a hard time understanding an aspect of the definition of the convolution of two functions. Here is the lead-up to its definition.

It goes on to discuss what the observed distribution h(z) will be if we try to measure f(x) with an apparatus with resolution function g(y), and tries to justify why h(z) is defined as the convolution of the functions f and g. However, I have a problem with one of its statements. The book says:

> It is apparent that any attempt to measure the value of a physical quantity is limited, to some extent, by the finite resolution of the measuring apparatus used. On the one hand, the physical quantity we wish to measure will in general be a function of an independent variable, x say, i.e. the true function to be measured takes the form f(x). On the other hand, the apparatus we are using does not give the true output value of the function; a resolution function g(y) is involved. By this we mean that the probability that an output value y = 0 will be recorded instead as being between y and y + dy is given by g(y) dy.

Here is the part I have an issue with:

> The probability that a true reading lying between x and x + dx, and so having probability f(x) dx of being selected by the experiment, will be moved by the instrumental resolution by an amount z − x into a small interval of width dz is g(z − x) dz. Hence the combined probability that the interval dx will give rise to an observation appearing in the interval dz is f(x) dx g(z − x) dz. Adding together the contributions from all values of x that can lead to an observation in the range z to z + dz, we find that the observed distribution is given by

[tex]
h(z) = \int^{\infty}_{-\infty} f(x)\,g(z-x)\,dx
[/tex]
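To make the book's argument concrete, here is a small numerical sketch of my own (not from the book): it discretizes the integral above, smearing a narrow "true" distribution f with a Gaussian resolution function g. The particular shapes and widths are my own arbitrary choices for illustration.

```python
import numpy as np

# Grid for the numerical integral h(z) = ∫ f(x) g(z - x) dx
dx = 0.01
x = np.arange(-10.0, 10.0, dx)

# "True" distribution f: a narrow Gaussian spike at x = 1, normalized so ∫ f dx = 1
f = np.exp(-(x - 1.0) ** 2 / (2 * 0.1 ** 2))
f /= f.sum() * dx

# Resolution function g: zero-mean Gaussian of width 0.5, already normalized
def g(y):
    return np.exp(-y ** 2 / (2 * 0.5 ** 2)) / (0.5 * np.sqrt(2 * np.pi))

# Discretized convolution: h(z) ≈ Σ_x f(x) g(z - x) dx
h = np.array([np.sum(f * g(z - x)) * dx for z in x])

# The observed distribution h is broader than f but still normalized
print(round(np.sum(h) * dx, 3))   # ≈ 1.0: total probability is conserved
print(round(x[np.argmax(h)], 2))  # peak still near 1.0, just smeared out
```

Note that this only makes sense because f here is treated as a normalized probability density, which is exactly the point my question below is about.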

Why is the probability of a reading being between x and x + dx equal to f(x) dx? I thought f(x) was an arbitrary function representing a relation between an independent variable x and the observable f(x), so why is it being treated as a probability density function?

If I just accept f(x) dx as being the probability of a true reading lying between x and x + dx, then I can understand how the definition of h(z) as the convolution follows, but I just don't understand how an arbitrary function can be treated as a probability density. If f(x) = x (or any function that doesn't have a finite integral between −∞ and +∞), surely this argument doesn't hold, meaning that f(x) can't be arbitrary; yet if f(x) is an observable quantity, surely it must be possible for it to have an arbitrary dependency on x.

**Physics Forums - The Fusion of Science and Community**


# Fourier Analysis, definition of convolution

