I am having a hard time understanding an aspect of the definition of the convolution of two functions. Here is the lead-up to its definition...

The book goes on to discuss what the observed distribution h(z) will be if we try to measure f(x) with an apparatus that has resolution function g(y), and it tries to justify why h(z) is defined as the convolution of the functions f and g. However, I have a problem with one of its statements. The book then makes the claim I have an issue with: why is the probability of a reading lying between x and x + dx equal to f(x) dx? I thought f(x) was an arbitrary function representing a relation between an independent variable x and the observable f(x), so why is it being treated as a probability density function?

If I just accept f(x) dx as the probability of a true reading lying between x and x + dx, then I can understand how the definition of h(z) as the convolution follows from this, but I just don't understand how an arbitrary function can be treated as a probability density. If f(x) = x (or f is any function that doesn't have a finite integral between - and + infinity), surely this argument doesn't hold, meaning that f(x) can't be arbitrary. Yet if f(x) is an observable quantity, surely it must be possible for it to have an arbitrary dependence on x.
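For reference, the definition I am trying to make sense of (assuming the book uses the standard convention, where z = x + y is the observed value, x the true value, and y the error introduced by the apparatus) is

$$h(z) = (f * g)(z) = \int_{-\infty}^{\infty} f(x)\, g(z - x)\, dx,$$

which I can derive for myself only if I read f(x) dx as the probability that the true value lies in [x, x + dx] and g(y) dy as the probability that the apparatus shifts the reading by an amount in [y, y + dy].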