# Number of pseudoexperiments

1. Jul 6, 2016

### ChrisVer

How can someone decide on the best number of pseudoexperiments to produce in their setup?
In particular, I have a value $N$ which I want to vary with respect to 2 nuisance parameters with relative uncertainties $\delta_1,\delta_2$. I am producing $n$ pseudoexperiments (samples), and from them I calculate the mean and the standard deviation of
$N_i =N_i^0 \Big[1 + \delta_1 \mathcal{N}_1(0,1) + \delta_2 \mathcal{N}_2(0,1) \Big]$
How can I decide whether the $n$ trials I am choosing is optimal?
After some thought I have reached the following conclusion, but I am not sure: the sample's relative uncertainty, $\sqrt{\text{Var}(N)}/\bar{N}$, should be as small as possible. Is that a correct criterion?
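The setup above can be sketched as follows. This is only an illustration, not the poster's actual code; the parameter values ($N^0 = 100$, $\delta_1 = 0.05$, $\delta_2 = 0.03$) are made up for the example.

```python
import numpy as np

def pseudoexperiments(N0, delta1, delta2, n, seed=0):
    """Generate n pseudoexperiments N_i = N0 * [1 + d1*g1 + d2*g2],
    with g1, g2 independent standard normals, and return the sample
    mean and standard deviation."""
    rng = np.random.default_rng(seed)
    g1 = rng.standard_normal(n)
    g2 = rng.standard_normal(n)
    N = N0 * (1.0 + delta1 * g1 + delta2 * g2)
    return N.mean(), N.std(ddof=1)

# Illustrative values only
mean, std = pseudoexperiments(N0=100.0, delta1=0.05, delta2=0.03, n=10000)
print(f"mean = {mean:.3f}, std = {std:.3f}, relative = {std / mean:.4f}")
```

Note that the relative spread $\sqrt{\text{Var}(N)}/\bar{N}$ converges to $\sqrt{\delta_1^2 + \delta_2^2}$ as $n$ grows; it does not shrink with $n$, only the statistical uncertainty on its estimate does.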

2. Jul 6, 2016

### Staff Emeritus
The answer is "as many as you can". There is no point at which another pseudoexperiment makes things worse rather than better.

3. Jul 6, 2016

### ChrisVer

In general I've seen plots showing the "observed" value (the one varied in each PE) together with the sampled values for increasing $n$ (e.g. 100, 1000, 10000, 1000000). The sampled values get closer to the observed value, and they also become more statistically constrained. I was wondering whether looking at something like this allows one to decide which $n$ to use.
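The behaviour described above can be checked numerically: as $n$ grows, the pseudoexperiment estimate of the mean converges toward the true value, and its statistical uncertainty shrinks like $\sigma/\sqrt{n}$. A minimal sketch, again with made-up parameter values:

```python
import numpy as np

# Illustrative parameters, not from the thread
N0, d1, d2 = 100.0, 0.05, 0.03
rng = np.random.default_rng(42)

for n in (100, 1000, 10000, 1000000):
    # Draw n pseudoexperiments N_i = N0 * [1 + d1*g1 + d2*g2]
    N = N0 * (1.0 + d1 * rng.standard_normal(n) + d2 * rng.standard_normal(n))
    # Monte Carlo uncertainty on the estimated mean: s / sqrt(n)
    err_on_mean = N.std(ddof=1) / np.sqrt(n)
    print(f"n = {n:7d}  mean = {N.mean():8.3f}  +/- {err_on_mean:.4f}")
```

Each factor of 100 in $n$ buys roughly a factor of 10 in precision on the mean, which is one way to decide when more pseudoexperiments stop being worth the CPU time.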

4. Jul 6, 2016

### Staff: Mentor

As many as your computers can reasonably compute. The uncertainty from the limited number of pseudoexperiments goes down (roughly as $1/\sqrt{n}$) as you add more, which is great.