Narcol2000
I'm having a hard time understanding an aspect of the definition of the convolution of two functions. Here is the lead-up to its definition...
It is apparent that any attempt to measure the value of a physical quantity is limited, to some extent, by the finite resolution of the measuring apparatus used. On the one hand, the physical quantity we wish to measure will be in general a function of an independent variable, x say, i.e. the true function to be measured takes the form f(x). On the other hand, the apparatus we are using does not give the true output value of the function; a resolution function g(y) is involved. By this we mean that the probability that an output value y = 0 will be recorded instead as being between y and y + dy is given by g(y) dy.
It goes on to discuss what the observed distribution h(z) will be if we try to measure f(x) with an apparatus with resolution function g(y), and tries to justify why h(z) is defined as the convolution of the functions f and g. However, I have a problem with one of its statements. The book says:
The probability that a true reading lying between x and x + dx, and so having probability f(x) dx of being selected by the experiment, will be moved by the instrumental resolution by an amount z − x into a small interval of width dz is g(z − x) dz. Hence the combined probability that the interval dx will give rise to an observation appearing in the interval dz is f(x) dx g(z − x) dz. Adding together the contributions from all values of x that can lead to an observation in the range z to z + dz, we find that the observed distribution is given by
[tex]
h(z) = \int_{-\infty}^{\infty} f(x)\,g(z - x)\,dx
[/tex]
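To check this formula for myself I wrote a small numerical sketch (my own, not from the book), where I assume both f and g are normalized Gaussians:
[code]
import numpy as np

# Numerical check of h(z) = integral of f(x) g(z - x) dx from the quote above.
# Assumption (mine, not the book's): take both the true distribution f
# and the resolution function g to be normalized Gaussians.

def gaussian(t, mu, sigma):
    return np.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-20.0, 20.0, 4001)   # grid for the integration variable x
dx = x[1] - x[0]

f = lambda t: gaussian(t, 2.0, 1.0)  # true distribution: mean 2, sigma 1
g = lambda t: gaussian(t, 0.0, 0.5)  # resolution function: sigma 0.5

z = np.linspace(-5.0, 10.0, 301)
# h(z) evaluated as a simple Riemann sum over the x grid for each z
h = np.array([np.sum(f(x) * g(zi - x)) * dx for zi in z])

# Convolving two Gaussians gives a Gaussian whose variances add,
# so h should be a Gaussian with mean 2 and sigma sqrt(1^2 + 0.5^2).
h_exact = gaussian(z, 2.0, np.sqrt(1.0 ** 2 + 0.5 ** 2))
print(np.max(np.abs(h - h_exact)))   # tiny: numerical h matches the exact result
print(np.sum(h) * (z[1] - z[0]))     # about 1.0: h is itself normalized
[/code]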
Here is the part I have an issue with:
The probability that a true reading lying between x and x + dx, and so having probability f(x) dx of being selected by the experiment,
Why is the probability of a true reading lying between x and x + dx equal to f(x) dx? I thought f(x) was an arbitrary function representing a relation between an independent variable x and the observable f(x), so why is it being treated as a probability density function?
If I just accept f(x) dx as being the probability of a true reading lying between x and x + dx, then I can understand how the definition of h(z) as the convolution follows from this. But I just don't understand how an arbitrary function can be treated as a probability density. If f(x) = x (or any function that doesn't have a finite integral between −∞ and +∞), surely this argument doesn't hold, meaning that f(x) can't be arbitrary; yet if f(x) is an observable quantity, surely it must be possible for it to have an arbitrary dependency on x.
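To show what I mean, here is the book's argument as a sampling experiment, as I understand it (again my own sketch, with the same Gaussian choices as above), which only seems to work when f can be normalized:
[code]
import numpy as np

# The book's statement as a sampling experiment. This is only possible
# because this particular f is normalizable. Assumptions are mine:
# f and g are the same Gaussians as in the sketch above.

rng = np.random.default_rng(0)
n = 1_000_000

true_vals = rng.normal(2.0, 1.0, n)  # true reading x, drawn with density f
smear = rng.normal(0.0, 0.5, n)      # instrumental shift z - x, density g
observed = true_vals + smear         # recorded value z

# The observed values are distributed as the convolution h = f * g:
# a Gaussian with mean 2 and sigma sqrt(1^2 + 0.5^2), about 1.118.
print(observed.mean(), observed.std())  # about 2.0 and about 1.118

# For f(x) = x no such first step exists: x cannot be normalized over
# (-inf, inf), so "probability f(x) dx" has no meaning for it.
[/code]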