Question on Importance Sampling (Monte Carlo method)

kasraa
Hi,

Suppose I have N iid samples from a distribution q, and I want to estimate another distribution, p, using those samples (importance sampling).

By "standard importance sampling", I mean the case where the samples (prior samples, i.e. samples from q) have equal weights (w_i = 1/N).

In the case of "standard importance sampling", I should perform these steps:

1) compute (unnormalized) weights for those samples according to p(s_i)/q(s_i) (s_{i} is the i'th sample from q)
2) normalize those weights
3) then an estimate of p would be this:
\hat{p}(x) = \sum_{i=1}^N w_{i} \, \delta(x - s_{i})

(w_i are the normalized weights computed at step 2; \delta(x - s_i) is the Dirac delta function centered at s_i.)
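The three steps above can be sketched in a few lines of NumPy. The particular p and q below (a pair of Gaussians) are my own illustrative choice, not from the thread:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative choice (not from the thread): q = N(0, 2^2), p = N(1, 1)
def q_pdf(x):
    return np.exp(-x**2 / 8) / np.sqrt(8 * np.pi)

def p_pdf(x):
    return np.exp(-(x - 1)**2 / 2) / np.sqrt(2 * np.pi)

N = 10_000
s = rng.normal(0.0, 2.0, size=N)   # N iid samples from q

w = p_pdf(s) / q_pdf(s)            # step 1: unnormalized weights p(s_i)/q(s_i)
w /= w.sum()                       # step 2: normalize

# step 3: \hat{p} is the weighted empirical distribution sum_i w_i delta(x - s_i);
# any expectation under p is then estimated by a weighted sum over the samples:
mean_est = np.sum(w * s)           # estimates E_p[X], which is 1 here
```

The delta-mixture \hat{p} is never evaluated pointwise; it is used exactly as in the last line, by integrating test functions against it.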


Now consider the case where the samples (prior samples, i.e. samples from q) are weighted (different weights, already normalized; say u_i).

Is it enough (justified) to change the (unnormalized) weights (computed at step 1) to p(s_i)u_{i}/q(s_i)?
(multiplying prior weights and "standard importance sampling" weights together?)
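For concreteness, here is a sketch of one special case where that combination can be checked directly: if the prior weights u_i themselves came from importance sampling with some proposal r, then the q's cancel in u_i p(s_i)/q(s_i) ∝ p(s_i)/r(s_i), which are valid importance weights for p. All densities below are my own illustrative choices, not from the thread:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: the weighted prior sample (s_i, u_i) came from
# importance sampling with proposal r = N(0, 3^2) targeting q = N(0, 2^2);
# the target of interest is p = N(1, 1).
def pdf(x, mu, sigma):
    return np.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

N = 20_000
s = rng.normal(0.0, 3.0, size=N)
u = pdf(s, 0, 2) / pdf(s, 0, 3)    # prior weights u_i proportional to q(s_i)/r(s_i)
u /= u.sum()

# The proposed combined weights: u_i * p(s_i)/q(s_i), then renormalize.
w = u * pdf(s, 1, 1) / pdf(s, 0, 2)
w /= w.sum()

# Here the q's cancel, so w_i is proportional to p(s_i)/r(s_i): valid
# importance weights for p with respect to the actual sampling density r.
mean_est = np.sum(w * s)           # estimates E_p[X], which is 1 here
```

Whether the combination is justified in general depends on where the u_i came from, which is exactly the point raised in the reply below.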


Thanks in advance.
 
kasraa said:
... Is it enough (justified) to change the (unnormalized) weights (computed at step 1) to p(s_i)u_{i}/q(s_i)?
(multiplying prior weights and "standard importance sampling" weights together?)

Possibly not, because any Monte Carlo simulation (including importance sampling) is essentially based on approximating the (cumulative) distribution by the empirical distribution, i.e.

P(x) = Prob[X\le x] = E_P[I[X\le x]] \approx \frac{1}{N}\sum_{i=1}^N I[X_i\le x]

where I is the Boolean indicator function and the X_i are taken from distribution P. To change the weights from (1/N) to other numbers you'd need to change the sampling method to ensure that the "weighted" empirical distribution remains a good approximation to the CDF.
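A quick numeric sketch of that approximation, with a standard normal as an illustrative P (my choice, not from the post):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)

N = 5_000
x_i = rng.normal(size=N)           # iid draws from P = N(0, 1)

def empirical_cdf(x, samples):
    # (1/N) * sum_i I[X_i <= x] -- the Monte Carlo estimate of P(x)
    return np.mean(samples <= x)

def normal_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# The empirical CDF tracks the true CDF uniformly in x, with error
# shrinking like O(1/sqrt(N)) (Glivenko-Cantelli / DKW behaviour).
err = max(abs(empirical_cdf(x, x_i) - normal_cdf(x))
          for x in np.linspace(-3, 3, 61))
```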

However if you do find a way to overcome that, the new importance sampling formula would easily follow from the change of measure formula, with

P(x) = E_P[I[X\le x]] = E_Q\left[I[X\le x]\frac{dP}{dQ}\right] = E_Q\left[I[X\le x]\frac{p(X)}{q(X)}\right] \approx \sum_{i=1}^N w_i I[X_i\le x]\frac{p(X_i)}{q(X_i)}

where the X_i are samples such that

\sum_{i=1}^N w_i I[X_i\le x]

closely approximates Q(x).
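A minimal sketch of that change-of-measure formula, in the baseline case w_i = 1/N where \sum_i w_i I[X_i \le x] does approximate Q(x) (the Gaussian p and q are illustrative choices, not from the thread):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(3)

# Illustrative densities: q = N(0, 2^2), p = N(1, 1)
def pdf(x, mu, sigma):
    return np.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

N = 20_000
X = rng.normal(0.0, 2.0, size=N)   # plain iid draws from q, so w_i = 1/N
w = np.full(N, 1.0 / N)            # sum_i w_i I[X_i <= x] approximates Q(x)

lik = pdf(X, 1, 1) / pdf(X, 0, 2)  # likelihood ratio p(X_i)/q(X_i)

def p_cdf_est(x):
    # P(x) ~ sum_i w_i I[X_i <= x] p(X_i)/q(X_i)
    return np.sum(w * (X <= x) * lik)

# Check against the exact N(1, 1) CDF at x = 1, which is 0.5
est = p_cdf_est(1.0)
```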
 
Thanks.
 