Optimizing Pseudoexperiment Sample Size for Accurate Results

  • Context: Graduate 
  • Thread starter: ChrisVer

Discussion Overview

The discussion focuses on determining the optimal number of pseudoexperiments to produce in a statistical setup involving nuisance parameters and their uncertainties. Participants explore how the number of pseudoexperiments affects the accuracy and reliability of the results derived from them.

Discussion Character

  • Exploratory, Technical explanation, Debate/contested

Main Points Raised

  • One participant inquires about the best number of pseudoexperiments to produce, suggesting that the relative uncertainty of the sample should be minimized as a criterion for optimality.
  • Another participant asserts that the optimal number is simply "as many as you can," indicating that additional pseudoexperiments will not degrade results.
  • A similar viewpoint is reiterated, emphasizing that increasing the number of trials makes the sampled values converge toward the observed value while becoming more tightly statistically constrained.
  • Another contribution suggests that the limit on the number of pseudoexperiments is dictated by computational feasibility, with the uncertainty decreasing as more pseudoexperiments are conducted.

Areas of Agreement / Disagreement

Participants express differing opinions on the optimal number of pseudoexperiments, with some advocating for maximizing the number while others propose a more nuanced approach based on minimizing relative uncertainty. The discussion remains unresolved regarding the best strategy.

Contextual Notes

Participants do not reach a consensus on the criteria for determining the optimal number of pseudoexperiments, and there are assumptions about computational limits and statistical behavior that are not fully explored.

ChrisVer (Science Advisor)
How can someone decide on the best number of pseudoexperiments to produce in their setup?
In particular, I have a value [itex]N[/itex] which I want to vary with respect to two nuisance parameters with relative uncertainties [itex]\delta_1, \delta_2[/itex]. I am producing [itex]n[/itex] pseudoexperiments (samples) and from them calculating the mean and the standard deviation of
[itex]N_i =N_i^0 \Big[1 + \delta_1 \mathcal{N}_1(0,1) + \delta_2 \mathcal{N}_2(0,1) \Big][/itex]
How can I decide whether the number of trials [itex]n[/itex] I am choosing is optimal?
After some thinking I have reached the following conclusion, but I am not sure: the sample's relative uncertainty, that is [itex]\sqrt{\text{Var}(N)}/\bar{N}[/itex], should be as small as possible. Is that a correct criterion?
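A minimal NumPy sketch of this setup, with illustrative values for [itex]N^0, \delta_1, \delta_2[/itex] and [itex]n[/itex] (none of them from the thread):

[code]
import numpy as np

N0 = 100.0            # nominal value N^0 (illustrative)
d1, d2 = 0.05, 0.10   # relative uncertainties delta_1, delta_2 (illustrative)
n = 10_000            # number of pseudoexperiments

rng = np.random.default_rng(1)
# One pseudoexperiment per entry: N_i = N^0 * (1 + d1*g1 + d2*g2),
# with g1, g2 independent standard-normal draws.
N = N0 * (1.0 + d1 * rng.standard_normal(n) + d2 * rng.standard_normal(n))

mean = N.mean()
std = N.std(ddof=1)
print(f"mean            = {mean:.2f}")
print(f"relative width  = {std / mean:.4f}")        # -> sqrt(d1^2 + d2^2) ~ 0.112
print(f"MC error (mean) = {std / np.sqrt(n):.4f}")  # shrinks like 1/sqrt(n)
[/code]

Note that [itex]\sqrt{\text{Var}(N)}/\bar{N}[/itex] converges to [itex]\sqrt{\delta_1^2 + \delta_2^2}[/itex], a property of the smearing model itself rather than of [itex]n[/itex]; what a larger [itex]n[/itex] shrinks is the Monte Carlo error on the estimated mean and width, which is the point taken up in the replies below.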
 
Vanadium 50
The answer is "as many as you can". There is no point where another pseudo-experiment makes things worse and not better.
 
Vanadium 50 said:
The answer is "as many as you can". There is no point where another pseudo-experiment makes things worse and not better.

In general I've seen plots that show the "observed" value, say (the one varied in each PE), together with the sampled values for increasing numbers of trials [itex]n[/itex] (e.g. 100, 1000, 10000, 1000000)... the sampled values get closer to the observed value, and they also become more statistically constrained... I was wondering whether looking at something like this lets someone decide which [itex]n[/itex] to go with.
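A sketch of that kind of check, reusing the illustrative model above: generate pseudoexperiments for increasing [itex]n[/itex] and watch the estimated mean tighten around the nominal value.

[code]
import numpy as np

rng = np.random.default_rng(1)
N0, d1, d2 = 100.0, 0.05, 0.10  # illustrative values, as before

for n in (100, 1_000, 10_000, 1_000_000):
    # n pseudoexperiments of N_i = N^0 * (1 + d1*g1 + d2*g2)
    N = N0 * (1.0 + d1 * rng.standard_normal(n) + d2 * rng.standard_normal(n))
    sem = N.std(ddof=1) / np.sqrt(n)  # Monte Carlo error on the estimated mean
    print(f"n = {n:>9,d}: mean = {N.mean():8.3f} +/- {sem:.3f}")
[/code]

The estimated mean settles onto [itex]N^0[/itex] and its error falls like [itex]1/\sqrt{n}[/itex], which is the convergence such plots display; there is no special [itex]n[/itex] at which it stops improving.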
 
As many as your computers can reasonably compute. The uncertainty from the limited number of pseudoexperiments goes down, which is great.
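To make the scaling behind this explicit: for [itex]n[/itex] independent pseudoexperiments the Monte Carlo error on the estimated mean is the standard error
[itex]\sigma_{\bar{N}} = \sqrt{\text{Var}(N)}/\sqrt{n} \approx N^0\sqrt{\delta_1^2+\delta_2^2}/\sqrt{n},[/itex]
so halving it requires four times as many pseudoexperiments, and the practical stopping criterion is indeed the available computing budget.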
 
