# Combining probability distribution functions

1. Aug 9, 2013

### hermano

Hi,

I'm comparing different measurement methods. For each method I listed the error components, derived an equation for each one, and computed its probability distribution using the Monte Carlo method (calculating each error 300,000 times, assuming a normal distribution of the input variable). However, the outcome of a Monte Carlo simulation is a probability distribution for each error component under study. I want to combine these separate probability distribution functions per error component, for each measurement method, into one overall probability distribution function, so that I can compare the uncertainty of each measurement method. How can I do this? Does anybody have a good reference?
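For independent error components that add in the measurement model, the overall PDF is the convolution of the component PDFs. A minimal numerical sketch (the two standard deviations below are hypothetical placeholders, not values from this thread):

```python
import math

# Two independent, additive error components, discretised on a grid.
# sigma1 and sigma2 are made-up illustration values.
sigma1, sigma2 = 0.05, 0.02
step = 0.001
half = 400  # grid spans -0.4 ... +0.4

def normal_pdf(x, sigma):
    return math.exp(-0.5 * (x / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

grid = [i * step for i in range(-half, half + 1)]
p1 = [normal_pdf(x, sigma1) * step for x in grid]  # probability masses
p2 = [normal_pdf(x, sigma2) * step for x in grid]

# The PDF of the sum of independent errors is the convolution of the
# individual PDFs.
n = len(p1)
total = [0.0] * (2 * n - 1)
for i, a in enumerate(p1):
    for j, b in enumerate(p2):
        total[i + j] += a * b

# Standard deviation of the combined distribution, read off the grid.
xs = [(k - 2 * half) * step for k in range(len(total))]
mean = sum(x * p for x, p in zip(xs, total))
var = sum((x - mean) ** 2 * p for x, p in zip(xs, total))
print(f"std of convolved PDF: {math.sqrt(var):.4f}")
print(f"sqrt(s1^2 + s2^2):    {math.sqrt(sigma1 ** 2 + sigma2 ** 2):.4f}")
```

The convolved standard deviation should match the root-sum-square of the component standard deviations, which is the usual shortcut for independent additive errors.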

2. Aug 9, 2013

### Stephen Tashi

What do you mean by "error component"? Are you talking about the components of a vector?

What do you mean by "combine"? Are the "components" added together like vectors? - or like scalars? - or are they inputs to some non-linear scalar-valued function?

Does "uncertainty" mean the standard deviation of the measurement? If you simulated the distribution of some errors by Monte-Carlo, why didn't you also simulate the "combination" of these errors?

3. Aug 9, 2013

### hermano

By "error component" I mean the error source. For example, suppose you measure the length of a bar. Different error components/sources (or uncertainty components) then contribute to the total measurement uncertainty, such as the limited resolution of your ruler and the thermal expansion of the ruler under the influence of temperature.

No, each error component is calculated 100,000 times using a Monte Carlo simulation, assuming a normal probability distribution for each error (beforehand, an analytical expression is derived for each error component and the width 'a' of the error interval is given). This gives a vector of 100,000 error values per error component. From these 100,000 values you can calculate the mean error, the standard deviation, the uncertainty, etc. My question is how I can combine these various standard deviations or uncertainties from the different error components into a global (total) uncertainty.
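If the components enter the measurement model additively, the Monte Carlo vectors themselves can be combined sample by sample, which yields a Monte Carlo sample of the *total* error directly. A sketch with standard-library Python (the two sigmas are hypothetical placeholders):

```python
import math
import random
import statistics

random.seed(42)

N = 100_000  # draws per error component, as in the post

# Hypothetical standard deviations for two independent error
# components (e.g. ruler resolution, thermal expansion); the real
# values would come from each component's analytical expression.
sigma_resolution = 0.05
sigma_thermal = 0.02

# One Monte Carlo vector per component, as described above.
res_err = [random.gauss(0.0, sigma_resolution) for _ in range(N)]
thm_err = [random.gauss(0.0, sigma_thermal) for _ in range(N)]

# Combine the components sample by sample; the result is a Monte
# Carlo sample of the total error, from which any statistic follows.
total_err = [r + t for r, t in zip(res_err, thm_err)]

mc_std = statistics.stdev(total_err)
rss_std = math.sqrt(sigma_resolution ** 2 + sigma_thermal ** 2)

print(f"Monte Carlo std of total error: {mc_std:.4f}")
print(f"Root-sum-square prediction:     {rss_std:.4f}")
```

For independent components this sampled standard deviation agrees with the root-sum-square combination, but the sampling approach also works when the components combine non-linearly, where the root-sum-square formula does not apply directly.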

No, the uncertainty is not the standard deviation. You can calculate the uncertainty from the standard deviation, but they are not the same.
Because the errors are independent. I have an analytical expression for each error separately, but no expression for the combination of all the errors together.

4. Aug 9, 2013

### Stephen Tashi

You didn't explain how the error "components" are to be combined. The example of the ruler suggests that they are added.

And you didn't define what you mean by "uncertainty".

5. Aug 9, 2013

### hermano

I want to calculate the total uncertainty. So I think you have to add them, but I am not really sure as they are independent. But that is exactly my whole problem: how do I have to "combine" the probability distributions from the various error components?

Uncertainty is the component of a reported value that characterizes the range of values within which the true value is asserted to lie. An uncertainty estimate should address error from all possible effects (both systematic and random) and, therefore, usually is the most appropriate means of expressing the accuracy of results. This is consistent with ISO guidelines.
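In the ISO/GUM convention this definition is usually made concrete by taking the standard uncertainty u equal to the standard deviation of the error distribution and reporting an expanded uncertainty U = k·u, where the coverage factor k ≈ 2 gives roughly 95 % coverage for a normal distribution. A sketch, assuming a normally distributed error sample with a hypothetical sigma:

```python
import random
import statistics

random.seed(1)

# Hypothetical Monte Carlo sample of a total measurement error
# (sigma = 0.05 is an illustration value only).
errors = [random.gauss(0.0, 0.05) for _ in range(100_000)]

u = statistics.stdev(errors)  # standard uncertainty: u = sample std
k = 2                         # coverage factor, ~95 % for a normal PDF
U = k * u                     # expanded uncertainty

print(f"standard uncertainty u = {u:.4f}")
print(f"expanded uncertainty U = k*u = {U:.4f}")
```

The interval "measured value ± U" is then the reported range within which the true value is asserted to lie, at the stated coverage probability.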

6. Aug 9, 2013

### Stephen Tashi

I think you mean "whether they are independent".

Since you can't describe how the errors "combine", perhaps you should state the details of the problem, and someone may be able to interpret it from that perspective.

If you can estimate the covariance of the errors, you can estimate the standard deviation of their sum, even if the errors are dependent.

That may be fine for ISO guidelines, but it doesn't define "uncertainty" in mathematical terms. You stated that uncertainty can be calculated from the standard deviation of the distribution of a measurement but you didn't specify how it would be calculated. Is "uncertainty" supposed to be some kind of "confidence interval"?