# Puzzled about the non-teaching of Monte Carlo method(s) for error analysis

Gold Member
Say you have measured several quantities and you can calculate the value of a physical quantity as a sum/product of the several measured quantities, and you are interested to calculate the uncertainty of that physical quantity.

I have just learned about the Monte Carlo method applied to error (or uncertainty) analysis.
To me, it is much more intuitive than the usual propagation formula, which gives the standard deviation of a derived quantity as the square root of the sum, over all variables, of the squared partial derivatives of the expression with respect to each variable, each multiplied by that variable's squared standard deviation. This formula has severe limitations: it assumes the variables are uncorrelated, that the uncertainties are not too big, that the relationship is approximately linear, etc. Furthermore, as far as I have read, Monte Carlo is more accurate when well applied.

The "inputs" of the Monte Carlo are the distributions of each variable. For example, if we assume (an educated guess) that a Gaussian represents well the values a voltmeter shows, with the mean being the displayed value and the standard deviation set by the last displayed digit, we "try out" plugging values drawn from this distribution into the relationship between the physical quantity of interest and its measured inputs. We do the same for all quantities, and we thus obtain a distribution of values for the one of interest. Then it is a matter of reporting the median and, say, the 16th and 84th percentiles, I believe.
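As a concrete illustration, here is a minimal sketch of this procedure in Python (the voltmeter/ammeter values and uncertainties below are hypothetical, and Gaussian inputs are assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Hypothetical example: R = V / I, with Gaussian "educated guess" inputs.
# Mean = displayed value, sigma = roughly the last displayed digit.
V = rng.normal(5.0, 0.01, N)    # volts
I = rng.normal(0.1, 0.001, N)   # amperes

# Push every sampled (V, I) pair through the model relating R to V and I
R = V / I

# Report the median and a central 68% interval (16th and 84th percentiles)
lo, med, hi = np.percentile(R, [16, 50, 84])
print(f"R = {med:.2f} (+{hi - med:.2f} / -{med - lo:.2f}) ohm")
```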

This technique is apparently very strong and can overcome a lot of problems the usually taught formula in undergraduate physics/engineering meets in real life problems.
To me, in terms of difficulty of understanding, it is only one step beyond plugging in the minimum and maximum values each variable can take and looking at the resulting range of the quantity of interest (which is not a great way to get the uncertainty, but better than nothing).

How is it that this Monte Carlo method is not even mentioned in the usual science curriculum? I am totally puzzled. (I hope the premise of my question isn't flawed.)


hutchphd
Homework Helper
To me, it is much more intuitive than the usual propagation formula, which gives the standard deviation of a derived quantity as the square root of the sum, over all variables, of the squared partial derivatives of the expression with respect to each variable, each multiplied by that variable's squared standard deviation.
As you rightly point out, intuitive (and useful) is in the eye of the beholder.

There is nothing wrong with Monte Carlo methods but in the end, all you get is a number, or a set of numbers.
If in fact you have multiple sources of error that are not independent, it is not sufficient to look at only the mean and one standard deviation for each source; rather, the mesh needs to fill the parameter space with an appropriate density of points and some skill at randomizing. And then you get a set of numbers that are only as good as the process you have used, and that tell you nothing directly about the sources of error.

My experiences where I have been looking seriously at sources of error have usually been aimed at eliminating the variability of measurement from, say, a medical diagnostic system. In this circumstance the Monte Carlo approach is far less useful than trying to orthogonalize the error inputs and look at them analytically using the old math. In fact my most lucrative and perhaps most satisfying contracts involved rescuing production engineers who knew only Monte Carlo methods. But those methods are indeed easy to understand.

The answer is of course that you need to know both methods. For complicated systems, Monte Carlo usually involves a purchased app... how hard is that to use? However, the answer is often found in a more nuanced analytical approach, and that is why you are advised to learn it and understand it.

How is it that this Monte Carlo method is not even mentioned in the usual science curriculum? I am totally puzzled. (I hope the premise of my question isn't flawed.)

In the physics curricula I am familiar with, the student learns mechanics, thermodynamics, electrodynamics, statistical mechanics, quantum mechanics, possibly optics, and electives. The student takes similar courses in mathematics and engineering. I meet many graduates every day. These graduates vary in their readiness to do the job or conduct research. It is easy to find "skills and techniques" that the schools have not taught them, but look at all the important topics the schools did teach. The curricula are already very full. What would you leave out?

Sometimes Monte-Carlo is addressed in computer classes, where large data sets are common.

I have used Monte-Carlo techniques after graduation, and I think it is OK to learn some things after graduating with the Bachelor's degree, or to teach them to pre-baccalaureate interns in summer positions.

Gold Member
In the physics curricula I am familiar with, the student learns mechanics, thermodynamics, electrodynamics, statistical mechanics, quantum mechanics, possibly optics, and electives. The student takes similar courses in mathematics and engineering. I meet many graduates every day. These graduates vary in their readiness to do the job or conduct research. It is easy to find "skills and techniques" that the schools have not taught them, but look at all the important topics the schools did teach. The curricula are already very full. What would you leave out?

Sometimes Monte-Carlo is addressed in computer classes, where large data sets are common.

I have used Monte-Carlo techniques after graduation, and I think it is OK to learn some things after graduating with the Bachelor's degree, or to teach them to pre-baccalaureate interns in summer positions.
In this case, simply replace teaching the analytical formula with teaching the Monte Carlo method.

In this case, simply replace teaching the analytical formula with teaching the Monte Carlo method.

This quote puzzles me. In what case? In all cases? If so, are you suggesting that physics teaching should no longer treat physical systems with equations, but should instead, in some manner, treat each system by running a Monte Carlo for every conceivable calculation? I presume a knowledge of the analytical equation would guide us in setting up the Monte Carlo.
I think a linearized error analysis treatment, although it has limitations, is important for all scientists and engineers to learn. One advantage of linear error analysis is that it can be applied to many areas, in many fields. Most theories in undergraduate physics are, for the most part, linear. I have taken two or three courses that treat non-linear estimation and control, and this is useful; however, it was important to take courses in linear estimation and control as a prerequisite. Shouldn't Monte Carlo techniques come later?

DaveE
Gold Member
I think a key point here, as @hutchphd said, is to distinguish getting an answer from understanding the system. In the engineering world, "what" isn't any more important than "why" or "how". This is why, in my experience, computer simulations are 95% verification and 5% results. An answer that you don't understand, and therefore can't trust, is of questionable value. Likewise, a system model has value even when it gives bad results, because you may then be informed about how to modify the model.

In education, I think it is more important to teach concepts than tools, you can learn tools later. For an extreme example, why teach how to solve integrals when Wolfram can do it for you?

I'm also not sure of your premise that Monte Carlo isn't taught, but you may have to search for it a bit.

Staff Emeritus
In this case, simply replace teaching the analytical formula with teaching the Monte Carlo method.

This strikes me as a bad idea.

(1) There are cases - quite a few - where the analytic method is perfectly fine. Simple cases should require a pen and paper, not a computer.

(2) It is important to understand the output of the Monte Carlo, perhaps by understanding limiting cases that can be solved analytically. This is particularly important since there are non-intuitive aspects to statistics, fitting and uncertainty quantification. (e.g. the Punzi effect and the better-than-zero effect)

(3) If this were a good idea, why stop here? Why ever do an analytic calculation? Just load it into a computer and run a simulation.

Gold Member
This quote puzzles me. In what case? In all cases?
In all cases where you would apply the standard analytical formula.
If so, are you suggesting that physics teaching should no longer treat physical systems with equations, but should instead, in some manner, treat each system by running a Monte Carlo for every conceivable calculation? I presume a knowledge of the analytical equation would guide us in setting up the Monte Carlo.
I do not understand why the first sentence follows from my thoughts.

I think a linearized error analysis treatment, although it has limitations, is important for all scientists and engineers to learn. One advantage of linear error analysis is that it can be applied to many areas, in many fields. Most theories in undergraduate physics are, for the most part, linear. I have taken two or three courses that treat non-linear estimation and control, and this is useful; however, it was important to take courses in linear estimation and control as a prerequisite. Shouldn't Monte Carlo techniques come later?
I do not disagree with your first sentence, but you are the one who mentioned the problem of too little time to cover everything. In this case, since Monte Carlo is more intuitive (subjective, I know, as has been pointed out already) and very easy to implement, it wins first place, ahead of the standard analytical formula, which has limited scope and still requires a computer (or a pocket calculator) anyway.
I think a key point here, as @hutchphd said, is to distinguish getting an answer from understanding the system. In the engineering world, "what" isn't any more important than "why" or "how". This is why, in my experience, computer simulations are 95% verification and 5% results. An answer that you don't understand, and therefore can't trust, is of questionable value. Likewise, a system model has value even when it gives bad results, because you may then be informed about how to modify the model.
In education, I think it is more important to teach concepts than tools, you can learn tools later. For an extreme example, why teach how to solve integrals when Wolfram can do it for you?
Sure, but here again, Monte Carlo wins hands down compared to the analytical formula (in terms of understanding what is going on). You need a computer for both anyway, unless you have learned and still remember how to take the square root of real numbers by hand. (I was never taught an algorithm to do so, and I suspect this is the case for most people.)
I'm also not sure of your premise that Monte Carlo isn't taught, but you may have to search for it a bit.
I meant for uncertainty analysis. But yes, I should search more to see if this is included as a standard in undergraduate classes.
This strikes me as a bad idea.

(1) There are cases - quite a few - where the analytic method is perfectly fine. Simple cases should require a pen and paper, not a computer.
In practice most students will use a computer or a pocket calculator to evaluate the formula ##\sigma_f = \sqrt{\left( \frac{\partial f}{\partial x} \right)^2 \sigma_x^2 + \left( \frac{\partial f}{\partial y} \right)^2 \sigma_y^2 + \cdots}##. Where the analytical formula would shine is when the numbers turn out to yield a result whose square root is well known; this does not happen frequently, to say the least, even in undergraduate classes.

(2) It is important to understand the output of the Monte Carlo, perhaps by understanding limiting cases that can be solved analytically. This is particularly important since there are non-intuitive aspects to statistics, fitting and uncertainty quantification. (e.g. the Punzi effect and the better-than-zero effect)
Yes, I think you are right. By the way, what is the "better-than-zero" effect?

(3) If this were a good idea, why stop here? Why ever do an analytic calculation? Just load it into a computer and run a simulation.
You stop here because in this particular case, a very simple and intuitive algorithm replaces an analytical formula that requires a numerical output from a computer anyway. When you don't need a numerical output, there is no need to push brainlessly further and apply Monte Carlo at all costs like a fanboy.

hutchphd
Homework Helper
Where the analytical formula shines would be where the numbers turn out to yield a number whose square root is well known, this does not happen frequently, to say the least, even in undergraduate classes.

No, this is not where the analytical formula "shines".

It is useful because it allows you to analyze the constituent sources of error without ever plugging in actual numbers. Hence the term analytical formula.

But badly educated engineers made me employable at a handsome rate, so I guess I shouldn't complain. Unless I desire quality results.

Gold Member
No, this is not where the analytical formula "shines".

It is useful because it allows you to analyze the constituent sources of error without ever plugging in actual numbers. Hence the term analytical formula.

But badly educated engineers made me employable at a handsome rate, so I guess I shouldn't complain. Unless I desire quality results.
Let's take Wikipedia's Ohm's law example: ##\sigma_R \approx R \sqrt{\left(\frac{\sigma_V}{V}\right)^2 + \left( \frac{\sigma_I}{I} \right)^2}##. What useful information do you get without plugging in numbers? Is it that you expect ##\sigma_R## to grow as the relative uncertainties of the variables grow? Or... something else?
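For what it's worth, a quick numerical check of this formula against a Monte Carlo with the same Gaussian, independent-input assumptions (the V, I values and uncertainties below are made up for illustration):

```python
import numpy as np

# Made-up numbers for illustration only
V, sV = 10.0, 0.2     # volts, 2% relative uncertainty
I, sI = 0.5, 0.05     # amperes, 10% relative uncertainty
R = V / I

# Linearized propagation, as in the formula above
sR_analytic = R * np.sqrt((sV / V) ** 2 + (sI / I) ** 2)

# Monte Carlo with the same (assumed Gaussian, independent) inputs
rng = np.random.default_rng(1)
samples = rng.normal(V, sV, 200_000) / rng.normal(I, sI, 200_000)
sR_mc = samples.std()

# The two agree to a few percent here; they drift apart as the relative
# uncertainties grow and the linearization degrades.
print(sR_analytic, sR_mc)
```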

Staff Emeritus
what is the "better-than-zero" effect?

You can have a case where a non-zero systematic uncertainty gives a tighter limit than if it were zero. More specifically, in a Poisson counting experiment with a single source of events whose average event count is known exactly, the upper limit when no events are observed is traditionally quoted as 3 events at 95% CL. The "Better Than Zero" effect is the observation that, with some methods, the upper limit goes below 3 when zero events are observed and systematic uncertainties are introduced.
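For the traditional number quoted above: with zero observed events, no background, and the rate known exactly, the calculation is one line, since the 95% CL upper limit s solves P(n = 0 | s) = e^(-s) = 0.05:

```python
import math

# Zero observed events, no background, rate known exactly:
# the 95% CL upper limit s on the mean count solves exp(-s) = 0.05
upper_limit = -math.log(0.05)
print(upper_limit)  # just under 3, the traditional "3 events" limit
```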

hutchphd
Homework Helper
Let's take Wikipedia's Ohm's law example:

hutchphd
Homework Helper
Thanks. This provides an excellent example.

Suppose I wish to design an experiment to optimally measure this resistor.

There are random reading errors and my ammeter has a percent deviation of 10% versus 2% for my voltmeter. I can minimize these by repeated measurements, but I wish to utilize my time wisely: maybe ~100 measurements total.

I know immediately what the formula tells me to do.

How does one proceed in the world of Monte Carlo??

.

Gold Member
Thanks. This provides an excellent example.

Suppose I wish to design an experiment to optimally measure this resistor.

There are random reading errors and my ammeter has a percent deviation of 10% versus 2% for my voltmeter. I can minimize these by repeated measurements, but I wish to utilize my time wisely: maybe ~100 measurements total.

I know immediately what the formula tells me to do.

How does one proceed in the world of Monte Carlo??

.
I think I am slow and I am missing something. In your case, as I understand it, the voltmeter has a 2% deviation regardless of the current used (thus voltage read)? Similarly for the ammeter, except that this is 10%. Then the formula tells me that ##\sigma_R## is fixed, regardless of the current used. Surely I am missing something, or I haven't understood your case well?

hutchphd
Homework Helper
Something else the Monte Carlo gives no direct insight about.
The standard error of the mean of N measurements equals ##\frac \sigma {\sqrt N}## so you can tune error analytically to optimize the number of repeat measurements for each.
This would require a lot of Monte Carlo. Physicists use analytical methods because they have power, not because they are the simplest or most transparent.

Gold Member
I am trying to convince myself that the analytical formula can give a deeper insight than Monte Carlo, but I did not understand your post about the resistor example. Could you clarify it please?
Something else the Monte Carlo gives no direct insight about.
The standard error of the mean of N measurements equals ##\frac \sigma {\sqrt N}## so you can tune error analytically to optimize the number of repeat measurements for each.
This would require a lot of Monte Carlo. Physicists use analytical methods because they have power, not because they are the simplest or most transparent.
I do not really understand, again. For a given set of measurements yielding the value of interest, Monte Carlo provides you with sigma, right? In what way does this differ from the analytical formula yielding sigma? Only in computational cost?

hutchphd
Homework Helper
I am attempting to design the experiment, knowing the single measurement precision of the ammeter and voltmeter. By taking multiple measurements I can produce different errors in each mean. Using the analytic formula I can figure out how to optimize the experimental procedure for a given expenditure of effort.
Such optimizations are often required in the real world with many input error sources. Repeatedly plugging guesses into a Monte Carlo simulation is not a good way to proceed.
The purpose of analytic forms is to allow your mind to symbolically manipulate complicated problems for systems too entangled to otherwise be amenable to such analysis. Asking a computer app for a number is not at all the same.

.

Gold Member
Dear hutchphd,
I am attempting to design the experiment, knowing the single measurement precision of the ammeter and voltmeter.
Ok, so you get a 10% deviation and a 2% deviation for one measurement... or for any measurement, even if you change the current or voltage?
By taking multiple measurements I can produce different errors in each mean.
In each mean of what, exactly? And now the error is not 10% and 2% anymore for any single measurement?

Using the analytic formula I can figure out how to optimize the experimental procedure for a given expenditure of effort.
How, exactly? I do not see it. As you have worded it, I see no way to improve ##\sigma _R## by changing the current or the voltage applied. The only way to reduce the uncertainty would be to brainlessly repeat the measurement to reduce the standard deviation of the mean. Unless of course you throw all that out and focus on the real-world problem, in which case you would use an AC current ("high frequency" to kill off thermoelectric effects, but not so high as to bring in the skin effect; low intensity so as not to heat up the resistor whose resistance you want to measure, but not too low either, or else the uncertainty in the voltage starts to become an important part of the error; etc.). But none of this is determined with formulas, be they numerical or analytical, as far as I can see.
Such optimizations are often required in the real world with many input error sources.
So I really, really, really hope I will be able to understand what you've written so far, because I want to do things in an efficient way.

Repeatedly plugging guesses into a Monte Carlo simulation is not a good way to proceed.
I mean, it is probably less useful than repeating a measurement (since you gain no more information with Monte Carlo), but isn't sensitivity analysis done that way, i.e. fix all inputs of the Monte Carlo but one, and vary that one to see how the output changes? That is a useful thing to do, or not?
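A sketch of that one-at-a-time probe, using a hypothetical R = V/I setup with made-up uncertainties (2% voltmeter, 10% ammeter):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000

# Hypothetical R = V / I setup with made-up uncertainties
V0, sV = 10.0, 0.2    # volts, 2%
I0, sI = 0.5, 0.05    # amperes, 10%

# One-at-a-time: vary a single input, freeze the other, watch the output spread
spread_from_V = (rng.normal(V0, sV, N) / I0).std()   # only V varies
spread_from_I = (V0 / rng.normal(I0, sI, N)).std()   # only I varies

print(spread_from_V, spread_from_I)  # the ammeter dominates the error budget
```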

The purpose of analytic forms is to allow your mind to symbolically manipulate complicated problems for systems too entangled to otherwise be amenable to such analysis. Asking a computer app for a number is not at all the same.

.
Sure, I do agree with you, in general. But in this particular case, I do not see how the analytical formula tells you how to use your time wisely, i.e. how to spend the 100 measurements. Do you do anything different between those 100 experiments? If you change the current or voltage, for instance, will the voltmeter/ammeter still show a 2% and 10% deviation? (In the real world, I know the answer is no, of course, but the way you worded it points, to me, to yes... hence my confusion.) I don't really see how the analytical formula provides any guidance at all. I am not even comparing it to MC yet, just to common sense.

hutchphd
Homework Helper
The only way to reduce the uncertainty would be to brainlessly repeat the measurement to reduce the standard deviation of the mean.
Not brainlessly. At least not when I do it. That is the whole point.

First, in the example, a single measurement means measuring V or measuring I, not measuring both. I know that the imprecision in I is 5 times worse than in V for a single measurement. To make the measured mean value of I have the same precision as a single measurement of V, one needs to repeat the current measurement (jiggle the wires, bump the table, stomp the floor in between, as prudent) 25 times. This equality of errors will be the best solution (I can show this analytically, but not right now) for the stated problem... so I would measure I 100 times and V 4 times to stay near the desired 100 measurements. The RMS measurement error for this case is $$\frac {\sigma _R} R =\sqrt {\frac {(.1)^2} {100} + \frac {(.02)^2} 4 }=\frac {.02} {\sqrt 2}$$ assuming I did the arithmetic OK. Notice this imprecision is smaller than the random imprecision of the better (2%) measurement device, and it optimizes the result at the requested level of effort.
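A quick check of that arithmetic:

```python
import math

# 100 current readings at 10% each, 4 voltage readings at 2% each,
# each mean's error scaled down by sqrt(N) before combining in quadrature
rel_err = math.sqrt(0.1 ** 2 / 100 + 0.02 ** 2 / 4)
print(rel_err, 0.02 / math.sqrt(2))  # both ~0.0141
```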

but isn't sensitivity analysis done that way, i.e. fix all inputs of the Monte Carlo but one, and vary that one to see how the output changes? That is a useful thing to do, or not?
Only if all you know how to do is a mindless Monte Carlo simulation. Again, that's the point. Giving it a name does not make it smart.

Office_Shredder
Staff Emeritus
Gold Member
Based on my experience six years ago, I think numerical simulations were probably undertaught, but this proposal is clearly too extreme.

The proposal here is a bad idea, as shown by the fact that the OP doesn't understand any of the descriptions of why it's a bad idea. I think what probably happened is that you didn't understand standard deviations intuitively, found out you can sort of get a computer to do the work for you, and then wished you never had to learn the formula in the first place. But there are loads of examples where understanding the formula and a bit of algebra can solve a problem that would be mind-numbingly annoying to deal with by just simulating. It also means you're not going to be able to verify your simulator's results: if there's a bug and it outputs bad numbers, many of the posters here would be able to catch it, but if you don't know how these errors propagate, then you can't.

Gold Member
Office_Shredder, I think you may have some cognitive bias against me. If you reread my first post, I am wondering why MC is not mentioned (let alone taught) in the standard undergraduate (physics) curriculum. I have nothing against what is taught (the mentioned analytical formula). It was only after someone stated that there is no time to cover everything that, in this very restricted case, I suggested teaching the most useful real-world way to compute uncertainties (which also happens to be the more intuitive one, to me), which I thought would be MC over the analytical formula. I would argue that this is no more extreme than not mentioning MC at all (and I am not even talking about teaching MC, just mentioning it).

Now hutchphd is teaching me that the analytical formula provides insights that MC does not; it is taking me time to see it, but does that mean I wish I had never been taught the analytical formula and had just pressed a button on a computer? :) Not necessarily. :)
With MC, you also get a mean and a standard deviation (the same things as from the analytical formula); that's the reason I did not see hutchphd's point at first, but now I see exactly what he meant. Thank you (and sorry for your time), hutchphd.

Ok, so, good: the analytical formula can be used to plan future measurements in a wise way that MC can't (or at least not as quickly as one would hope; I am no expert at all, I just learned about it, so I may very well be missing things!). So it should not be thrown away in real-life experiments and replaced entirely by MC. OK.

Then my original post still stands, though. There might be no time to cover Monte Carlo (and thus ask students to use it), but shouldn't it at least be mentioned, possibly covering only its advantages and disadvantages over the standard propagation-of-errors formula? That omission does seem like a big deal to me. It is so easy to implement that... ok, I'll stop here :D.

Office_Shredder
Staff Emeritus
Gold Member
It sounds like we're basically in agreement actually, cool.
