
fluidistic


Say you have measured several quantities and can compute a physical quantity of interest as some function (a sum, a product, etc.) of those measurements, and you want to calculate the uncertainty of the result.

I have just learned about the Monte Carlo method applied to error (or uncertainty) analysis.

To me, it is much more intuitive than the standard propagation formula, which gives the uncertainty of the result as σ_f = sqrt( Σ_i (∂f/∂x_i)² σ_i² ), i.e. the square root of the sum, over the measured variables, of each partial derivative squared times that variable's standard deviation squared. This formula has severe limitations: it assumes the variables are uncorrelated, the uncertainties are small, and the function is approximately linear over that range. Furthermore, from what I have read, Monte Carlo is more accurate when applied well.
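To make the formula concrete, here is a minimal sketch in Python for a hypothetical example of my own choosing (a power P = V²/R computed from a measured voltage and resistance; all numbers are made up for illustration):

```python
import math

# Hypothetical measured values and guessed standard deviations.
V, sigma_V = 5.0, 0.01    # volts
R, sigma_R = 100.0, 0.5   # ohms

# First-order propagation: sigma_P^2 = (dP/dV)^2 sigma_V^2 + (dP/dR)^2 sigma_R^2
dP_dV = 2 * V / R         # partial derivative of P = V**2 / R with respect to V
dP_dR = -V**2 / R**2      # partial derivative with respect to R

P = V**2 / R
sigma_P = math.sqrt((dP_dV * sigma_V)**2 + (dP_dR * sigma_R)**2)
print(P, sigma_P)
```

This is exactly the linearization the limitations above refer to: it only sees the first derivatives at the measured point.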

The "inputs" of the Monte Carlo are the distributions assumed for each variable. For example, if we assume (an educated guess) that a Gaussian represents well the values a voltmeter displays, with the mean being the displayed value and the standard deviation on the order of the last displayed digit, we repeatedly draw values from that distribution and plug them into the relationship between the physical quantity of interest and its associated measured quantities. We do the same for all quantities, and we thus obtain a distribution of values for the quantity of interest. Then it is a matter of reporting, say, the median and the 16th and 84th percentiles (the central 68% interval).
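Sketched in Python for the same kind of hypothetical setup (a quantity P = V²/R, with Gaussian distributions guessed for V and R; the numbers are illustrative, not from any real instrument):

```python
import random

random.seed(0)  # reproducible draws
V_mean, V_sigma = 5.0, 0.01    # volts: displayed value, guessed spread
R_mean, R_sigma = 100.0, 0.5   # ohms

# Draw each input from its assumed distribution and evaluate the expression.
samples = []
for _ in range(100_000):
    V = random.gauss(V_mean, V_sigma)
    R = random.gauss(R_mean, R_sigma)
    samples.append(V**2 / R)

# Report the median and the central 68% interval (16th to 84th percentile).
samples.sort()
n = len(samples)
median = samples[n // 2]
lo = samples[int(0.16 * n)]
hi = samples[int(0.84 * n)]
print(f"P = {median:.5f}  (+{hi - median:.5f} / -{median - lo:.5f})")
```

Nothing here assumes linearity or uncorrelated inputs; correlated variables could be handled by drawing them jointly instead of independently.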

This technique is apparently very powerful and overcomes many of the problems that the formula usually taught in undergraduate physics/engineering runs into on real-life problems.

To me, in terms of difficulty of understanding, it is only one step beyond plugging in the minimum and maximum values each variable can take and looking at the resulting range of the quantity of interest (which is not a very good way to get the uncertainty, but better than nothing).
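That crude min/max bounding, applied to the same hypothetical P = V²/R example, would amount to:

```python
# Worst-case bounds for P = V**2 / R (hypothetical ranges for illustration).
V_lo, V_hi = 4.99, 5.01       # volts
R_lo, R_hi = 99.5, 100.5      # ohms

# P increases with V and decreases with R, so the extremes sit at opposite corners.
P_max = V_hi**2 / R_lo
P_min = V_lo**2 / R_hi
print(P_min, P_max)
```

It gives hard bounds rather than a probable interval, which is why it tends to overstate the uncertainty compared to either propagation method.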

How is this Monte Carlo method not even mentioned in the usual science curriculum? I am totally puzzled. (I hope the premise of my question isn't flawed.)
