Let's say I want to simulate a 30-year investment return scenario by running n simulations (e.g. n = 1000) using a normal distribution with mean x% and standard deviation y%.
My first approach was to generate exactly n sets of 30 samples from N(x, y²), but I realized that for any given set of 30 samples the sample average isn't necessarily close to x%. Wouldn't a more valid approach be to run enough scenarios to obtain n sets, each of which has a sample average within a pre-determined tolerance of x? It seems to me the answer should be yes.
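To make the first approach concrete, here is a minimal sketch (the mean and standard deviation values are hypothetical placeholders for x% and y%). It also shows how much the 30-year sample averages spread around the true mean:

```python
import numpy as np

rng = np.random.default_rng(42)

n = 1000       # number of scenarios
years = 30     # investment horizon
mean = 0.07    # hypothetical x: 7% mean annual return
sd = 0.15      # hypothetical y: 15% standard deviation

# n sets of 30 annual returns drawn from N(mean, sd^2)
returns = rng.normal(mean, sd, size=(n, years))

# the sample average of each 30-year path varies around the true mean;
# by the CLT its standard deviation is about sd / sqrt(30) ~ 0.027
path_means = returns.mean(axis=1)
print(path_means.mean(), path_means.std())
```

This is just the unfiltered version of the approach described above; adding the proposed tolerance filter would amount to discarding rows of `returns` whose `path_means` entry falls outside the chosen band around `mean`.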
I've seen the results of financial Monte Carlo simulators offered by well-known financial institutions, but nobody I talk to seems to know the details of how it is done.