How to implement proper error estimation using MC


Discussion Overview

The discussion revolves around the implementation of error estimation using Monte Carlo (MC) methods in the context of a quantum projection measurement experiment. Participants explore how to extract the variable ##x## from measurements influenced by noise in the nominally fixed variable ##y##, which is sampled from a Gaussian distribution. The focus is on determining the number of measurements necessary to achieve a desired uncertainty in ##x##.

Discussion Character

  • Exploratory
  • Technical explanation
  • Mathematical reasoning

Main Points Raised

  • One participant describes a method for estimating ##x## by sampling ##y## from its Gaussian distribution and using binomial sampling to obtain measurements of ##f##.
  • Another participant questions whether ##y## and its standard deviation ##dy## are known exactly, seeking clarification on the parameters involved.
  • A later reply suggests that one can compute exactly which values of ##x## are compatible with the observed results, while emphasizing that this does not directly measure the uncertainty in ##x## itself.
  • There is a proposal to model the percentage of 1s in the results using a Gaussian distribution, assuming knowledge of ##x##, to assess the likelihood of the observed results.

Areas of Agreement / Disagreement

Participants express differing views on how to properly account for uncertainty in ##x##, with no consensus reached on the best method for estimating this uncertainty. The discussion remains unresolved regarding the appropriate approach to incorporate the spread in ##y## into the uncertainty estimation for ##x##.

Contextual Notes

Participants note potential limitations in the approach, including the need to clarify how the spread in ##y## affects the uncertainty in ##x## and the implications of using Gaussian modeling in the context of the data.

kelly0303
Hello! I have a situation of the following form. I have a function ##f = xy##. In my experiment ##y## is fixed, but it has some noise, such that at each measurement it is effectively sampled from a Gaussian whose mean is the fixed value and whose standard deviation is the known uncertainty on ##y##, call it ##dy##. At each measurement, ##f## is a number between 0 and 1, but what I actually record is either 0 or 1, sampled from a binomial distribution with probability ##f## (it is a quantum projection measurement). What I need is, after a given number of measurements, to extract ##x##, and I want to check how many measurements I need for a given uncertainty on ##x##.

The way I am thinking of doing it is like this: fix ##x## to a certain value (close to what I expect in practice). For each event, sample ##y## from its Gaussian distribution, calculate ##f##, then get 0 or 1 from a binomial draw with probability ##f##. Do this N times, which will give me ##\sim fN## non-zero events. Now I repeat all these steps a large number of times (e.g. 1000) and get an approximately Gaussian distribution over the estimated values of ##f##, with a mean and standard deviation. To estimate ##x##, I can divide the central value of ##f## by the central value of ##y##. But I am not sure how to estimate the uncertainty on ##x##. Should I just divide the standard deviation of ##f## by ##y##? Or do I need to account for the spread in ##y## too? Given that I already used the spread in ##y## in the first step, that feels like double counting, so I am not sure what the right way is. Thank you!
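A minimal sketch of the procedure described in this post, in Python. All the numbers (##x = 0.5##, ##\bar y = 1.2##, ##dy = 0.05##, 1000 measurements, 1000 repetitions) are hypothetical placeholders, not values from the thread:

```python
import random
import statistics

def simulate_f_hat(x, y_mean, dy, n_meas, rng):
    """One experiment: n_meas projective measurements, return the observed fraction of 1s."""
    ones = 0
    for _ in range(n_meas):
        y = rng.gauss(y_mean, dy)          # noisy y for this shot
        f = min(max(x * y, 0.0), 1.0)      # clip so f is a valid probability
        if rng.random() < f:               # Bernoulli draw with probability f
            ones += 1
    return ones / n_meas

# Hypothetical example values -- not from the thread
x_true, y_mean, dy = 0.5, 1.2, 0.05
n_meas, n_trials = 1000, 1000

rng = random.Random(0)
f_hats = [simulate_f_hat(x_true, y_mean, dy, n_meas, rng) for _ in range(n_trials)]

f_mean = statistics.mean(f_hats)           # central value of f over repetitions
f_std = statistics.stdev(f_hats)           # spread of the f estimates
x_est = f_mean / y_mean                    # estimate x by dividing by the known mean of y
print(f"f: {f_mean:.4f} +/- {f_std:.4f}, x estimate: {x_est:.4f}")
```

Note that `f_std` here already mixes both noise sources (the binomial sampling and the spread in ##y##), which is exactly the double-counting question the post raises.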
 
Are ##y## and the standard deviation ##dy## exactly known to you?
 
Office_Shredder said:
Are ##y## and the standard deviation ##dy## exactly known to you?
Yes
 
You can essentially just compute this exactly. For any candidate value of ##x##, you know what fraction of the time your sampling will return a 0 instead of a 1. You can then compute, for example, which values of ##x## would produce a result at least as extreme as the one you observed only 5% of the time (in either direction: ##x## being large and you are surprised by the number of 0s, or ##x## being small and you are surprised by the number of 1s). I would argue this doesn't tell you your uncertainty in ##x##, in the same way that no p-value actually measures your uncertainty in the parameter being estimated, but it does tell you which values of ##x## could plausibly have produced the result you actually measured.
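A sketch of that exact computation. It assumes the marginal probability of recording a 1 is ##p = x\bar y## (which holds for ##f = xy## averaged over independent noise in ##y##, as long as ##f## stays inside ##[0,1]##); the observed counts and the scan range are hypothetical:

```python
from math import comb

def binom_tail_ge(n, k, p):
    """P(K >= k) for K ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def binom_tail_le(n, k, p):
    """P(K <= k) for K ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(0, k + 1))

# Hypothetical observed data: k ones out of n shots
n, k = 1000, 600
y_mean = 1.2  # known central value of y

# Scan candidate x values near the naive estimate (k/n)/y_mean = 0.5 and keep
# those for which the observed count is not "at least as extreme" at the 5%
# level in either tail.
compatible = []
for x in [i / 1000 for i in range(400, 601)]:
    p = x * y_mean                         # success probability implied by this x
    if binom_tail_ge(n, k, p) >= 0.025 and binom_tail_le(n, k, p) >= 0.025:
        compatible.append(x)

print(f"x compatible at the 5% level: [{min(compatible):.3f}, {max(compatible):.3f}]")
```

The resulting interval is a frequentist compatibility range for ##x##, in the spirit of a Clopper-Pearson interval mapped through ##x = p/\bar y##; it is not a posterior uncertainty on ##x##.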

Depending on how much data you have to crunch through, you might want to use an approximation here, for example modeling the fraction of measurements that return 1 with a Gaussian, assuming a given ##x##, to judge how unlikely the observed result is.
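A sketch of that Gaussian approximation, again with hypothetical numbers: for a candidate ##x##, approximate the count of 1s by a normal distribution with mean ##np## and variance ##np(1-p)##, where ##p = x\bar y##, and read off a two-sided tail probability:

```python
from math import sqrt, erf

def gaussian_two_sided_p(k, n, p):
    """Two-sided tail probability under the normal approximation to Binomial(n, p)."""
    mu = n * p
    sigma = sqrt(n * p * (1 - p))
    z = abs(k - mu) / sigma
    # standard normal CDF via erf; two-sided tail beyond |z|
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

# Hypothetical numbers: 600 ones in 1000 shots, known y_mean = 1.2
n, k, y_mean = 1000, 600, 1.2
for x in (0.45, 0.50, 0.55):
    p = x * y_mean
    print(f"x = {x:.2f}: approx two-sided tail probability {gaussian_two_sided_p(k, n, p):.2e}")
```

This is much cheaper than summing exact binomial tails when ##n## is large, at the usual cost of the normal approximation being poor for ##p## near 0 or 1.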
 
