Monte Carlo Simulation vs ML Models

AI Thread Summary
Monte Carlo simulations are preferred over machine learning models in scenarios where the analysis involves complex interactions among multiple variables, particularly when dealing with uncertainties and tolerances in engineering applications, such as analog circuit design. These simulations allow engineers to randomly vary component values within specified tolerances to assess worst-case performance scenarios, which is crucial for ensuring reliability in production. Unlike machine learning, which focuses on prediction or classification, Monte Carlo simulations generate a wide range of outputs from numerous iterations, providing insights into variations and extreme cases rather than simply averaging results. This approach is particularly useful when traditional analytical methods become cumbersome due to added complexity, enabling quicker verification of hypotheses and performance metrics.
fog37
Hello,

I have become familiar with ML and a number of ML models (supervised and unsupervised). I would like to now learn about Monte Carlo simulations since they are so ubiquitous in many fields.

When would we choose to do a Monte Carlo simulation instead of building an ML model (supervised, unsupervised, reinforcement learning)?
What kind of problems are more suitable for an MC simulation?

Thank you!
 
Others will give better answers, but I can give you an example where Monte Carlo simulation is very important. When EEs design analog circuits, you can get some very subtle interactions between slight variations in the values of the resistors, capacitors, inductors, transformers, etc. There are tolerances associated with the parameters for each of those components (and also temperature coefficients for those values), and the worst-case performance of the circuit will not necessarily be when all components are at their upper tolerance limit (or lower) at the same time.

So we use Monte Carlo SPICE simulations to vary the values of the components randomly within their tolerance bands, and look at the families of plots for the simulations to see if the topology we are using and the tolerances we are specifying will meet the overall performance specs for the circuit.

I can tell you from personal experience that it is very non-trivial to design a high-performance analog filter with the specifications you want (filter passband ripple, cutoff frequency, stopband ripple, etc.) when you are varying so many component values all at once.

Good times. :smile:
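To make the idea concrete, here is a minimal sketch in Python (not SPICE) of a Monte Carlo tolerance analysis for a hypothetical first-order RC low-pass filter: every trial draws R and C uniformly within made-up tolerance bands and records the resulting cutoff frequency. The component values, tolerances, and trial count are assumptions for illustration only.

```python
import numpy as np

# Minimal sketch of Monte Carlo tolerance analysis (Python, not SPICE).
# Hypothetical first-order RC low-pass filter; values and tolerances are made up.
rng = np.random.default_rng(0)

R_nom, C_nom = 10e3, 10e-9      # 10 kOhm, 10 nF -> nominal f_c ~ 1.59 kHz
R_tol, C_tol = 0.01, 0.05       # 1% resistor, 5% capacitor tolerance bands
n_trials = 10_000

# Each trial draws every component uniformly within its tolerance band.
R = R_nom * (1 + rng.uniform(-R_tol, R_tol, n_trials))
C = C_nom * (1 + rng.uniform(-C_tol, C_tol, n_trials))

f_c = 1.0 / (2 * np.pi * R * C)  # -3 dB cutoff frequency for each trial

print(f"nominal f_c  : {1/(2*np.pi*R_nom*C_nom):7.1f} Hz")
print(f"min / max    : {f_c.min():7.1f} / {f_c.max():7.1f} Hz")
print(f"mean +/- std : {f_c.mean():7.1f} +/- {f_c.std():5.1f} Hz")
```

The spread of cutoff frequencies across trials plays the role of the "families of plots" in a real MC SPICE run.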
 
berkeman said:
So we use Monte Carlo SPICE simulations to vary the values of the components randomly within their tolerance bands, and look at the families of plots for the simulations to see if the topology we are using and the tolerances we are specifying will meet the overall performance specs for the circuit.
Thank you. I see how you are trying to optimize something (some output variable) that depends on many other input variables, and you use randomization to get there.

I guess ML models are made for prediction (regression or classification) or clustering. Reinforcement learning is about making the right decisions inside a particular environment after some trial and error learning.

So would MC simulations instead be about running a model many, many times, using several inputs to which we randomly assign specific values at each iteration, so that we get many generally different outputs, which we then average together?
 
fog37 said:
So would MC simulations instead be about running a model many, many times, using several inputs to which we randomly assign specific values at each iteration, so that we get many generally different outputs, which we then average together?
We don't average the outputs of the MC SPICE simulations -- we use them to see what the worst-case performance can be. For example, when my first simulations of a particular anti-alias filter for an audio application showed that my passband ripple was too large, I explored other polynomials for the filter and ended up picking one that had fewer terms and was not as sharp at cutoff, but had much better passband ripple performance. There have been other times when I was not happy with what my simulations were showing me about another analog filter circuit, so I opted instead to do simple filtering with the analog front end, and digitize the waveform to implement the rest of the filtering digitally.

Without using MC simulations, I doubt I would have seen how bad the passband ripple could get for that first circuit, and my first prototypes may have worked just fine. But if you are designing a circuit for volume production, all those tolerance variations will show up in some of the production units, which can cause performance problems in the field.
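To illustrate looking at the worst case rather than the average, here is a hedged sketch using a hypothetical second-order (series RLC) low-pass whose nominal values give a roughly flat passband; tolerance variation can push the quality factor Q above the flat-response value and introduce passband peaking. The component values, tolerances, and the 0.05 dB flatness spec are all invented for the example.

```python
import numpy as np

# Hedged sketch: hypothetical second-order (series RLC) low-pass, output across C,
# with nominal values chosen for a roughly maximally-flat passband. Tolerances and
# the 0.05 dB flatness spec are illustrative only.
rng = np.random.default_rng(1)
n_trials = 5_000

R = 1.43e3 * (1 + rng.uniform(-0.01, 0.01, n_trials))   # 1% resistor
L = 100e-3 * (1 + rng.uniform(-0.10, 0.10, n_trials))   # 10% inductor
C = 100e-9 * (1 + rng.uniform(-0.05, 0.05, n_trials))   # 5% capacitor

w0 = 1.0 / np.sqrt(L * C)   # natural frequency per trial (rad/s)
Q = np.sqrt(L / C) / R      # quality factor per trial; peaking appears for Q > ~0.707

# Evaluate |H(jw)| = |1 / (1 - (w/w0)^2 + j*(w/w0)/Q)| over the nominal passband.
w0_nom = 1.0 / np.sqrt(100e-3 * 100e-9)
w = np.linspace(0.01, 1.0, 200) * w0_nom     # frequency grid up to the nominal w0
x = w[None, :] / w0[:, None]                 # normalized frequency, per trial
gain_db = 20 * np.log10(np.abs(1.0 / (1.0 - x**2 + 1j * x / Q[:, None])))

# Passband peaking above 0 dB is the "ripple" we care about: report the worst
# case across trials (not the average) and the yield against a toy spec.
peaking_db = np.maximum(gain_db.max(axis=1), 0.0)
spec_db = 0.05
print(f"worst-case peaking : {peaking_db.max():.3f} dB")
print(f"mean peaking       : {peaking_db.mean():.3f} dB")
print(f"yield vs {spec_db} dB   : {np.mean(peaking_db <= spec_db):.1%}")
```

The worst-case and yield numbers, not the mean, are what decide whether the design survives volume production.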
 
fog37 said:
So would MC simulations instead be about running a model many, many times, using several inputs to which we randomly assign specific values at each iteration, so that we get many generally different outputs, which we then average together?
That is the idea, except the data is not always averaged. A complicated MC simulation can generate a massive amount of detailed data. What you do with that data depends on what question you are asking about the problem. You could be looking for averages, variation, extreme examples, etc.
It is often true that simple, standard analysis problems become much more difficult with the addition of a couple of "if ... then" conditions in the problem statement. The analysis can be a lot trickier than a simple MC model, where all you have to do is add the "if ... then" condition to the simulation code. There have been threads in this forum where the mathematical analysis of a problem required a long, complicated discussion, and the easiest way to determine whether the analysis was correct was to verify it with a relatively simple MC simulation.
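As a toy illustration of that last point (not one of the actual forum problems), the sketch below estimates the probability that two dice sum to more than 8 under a house rule that a first-die roll of 1 is rerolled once: the rule is one line of simulation code, whereas the exact hand calculation takes noticeably more care.

```python
import numpy as np

# Toy example: probability that two dice sum to more than 8, under a house rule
# that a first-die roll of 1 is rerolled once. The rule is a single "if ... then"
# in the simulation, while the closed-form analysis is fiddlier.
rng = np.random.default_rng(42)
n = 1_000_000

first = rng.integers(1, 7, n)                 # first die
reroll = rng.integers(1, 7, n)                # used only if the first die was a 1
first = np.where(first == 1, reroll, first)   # the "if ... then" condition
second = rng.integers(1, 7, n)                # second die

print(f"estimated P(sum > 8) with reroll rule: {np.mean(first + second > 8):.4f}")
```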
 