Validity of replacing X by E[X] in a formula

andrewkirk
Hello all. I am working on proving some theorems about Monte Carlo simulation and have proven a theorem showing that, in a certain formula, it is valid to replace a random variable in the denominator of a fraction by its expected value. I have been wondering whether this result can be generalised for wider application.

A nice generalisation of the theorem would be as follows:

If ##(U_k)_{k\in \mathbb N}## and ##(V_k)_{k\in \mathbb N}## are sequences of random variables, not necessarily independent, and ##\lim_{k\to\infty}\frac{\sqrt{\mathrm{Var}(V_k)}}{E[V_k]}=0##, then
$$\lim_{k\to\infty}E\left[\frac{U_k}{V_k}\right]=\lim_{k\to\infty}\frac{E\left[U_k\right]}{E\left[V_k\right]}$$
provided the limit on the RHS exists. (##k## is the number of Monte Carlo trials)
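To get a feel for the conjecture before attempting a proof, here is a minimal numerical sketch. The specific choice of ##U_k## and ##V_k## (sample means of correlated lognormal draws, so that ##\sqrt{\mathrm{Var}(V_k)}/E[V_k]## shrinks like ##1/\sqrt{k}##) is purely illustrative and not part of the claimed result; if the conjecture holds for this choice, the two printed columns should approach each other as ##k## grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def compare(k, n_outer=5000):
    """Estimate E[U_k/V_k] and E[U_k]/E[V_k] for one illustrative choice:
    U_k and V_k are sample means of k correlated lognormal draws, so
    sqrt(Var(V_k))/E[V_k] shrinks like 1/sqrt(k)."""
    z1 = rng.standard_normal((n_outer, k))
    z2 = rng.standard_normal((n_outer, k))
    x = np.exp(z1)                   # draws feeding U_k
    y = np.exp(0.5 * (z1 + z2))      # draws feeding V_k, correlated with x
    U = x.mean(axis=1)               # one realisation of U_k per row
    V = y.mean(axis=1)               # one realisation of V_k per row
    return (U / V).mean(), U.mean() / V.mean()

for k in (1, 10, 100, 1000):
    lhs, rhs = compare(k)
    print(f"k = {k:4d}   E[U_k/V_k] ~ {lhs:.4f}   E[U_k]/E[V_k] ~ {rhs:.4f}")
```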

Before setting out to try to work out whether this is correct and, if so, to prove it, I'd like to first check if anybody knows of any similar results from analysis or probability theory. While it would be fun to prove it from scratch, it's a bit peripheral to what I'm doing so, if there's a known result that validates it, it would be better to just use that.

There may be some additional premises needed in order to make it work.

Thank you in advance for any suggestions.
 
mathman
When ##A## and ##B## are independent, ##E(A/B)=E(A)\,E(1/B)##. In general, ##E(1/B)\neq 1/E(B)##.
 
mathman said:
When ##A## and ##B## are independent, ##E(A/B)=E(A)\,E(1/B)##. In general, ##E(1/B)\neq 1/E(B)##.
Yes, that's part of the process I went through. A very simple example is when ##B## takes the value 1 or 2, each with probability 50%. Then ##E[1/B]=3/4## while ##1/E[B]=2/3##. However, under certain constraints like those above (and maybe a few more; part of the problem is to work out which ones), that inequality can become an equality in the limit as ##k\to\infty##. In general, the numerator and denominator will not be independent, so we can't necessarily factor the expectation.
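For what it's worth, the toy example is easy to check numerically; the snippet below is just the arithmetic above, nothing more:

```python
import numpy as np

rng = np.random.default_rng(0)
# B takes the value 1 or 2, each with probability 1/2
b = rng.choice([1.0, 2.0], size=1_000_000)

print(np.mean(1 / b))   # estimates E[1/B], close to 3/4
print(1 / np.mean(b))   # estimates 1/E[B], close to 2/3
```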
 
The condition ##\frac{\sigma(V_k)}{E(V_k)}\to 0## seems to lead to a constant distribution, as long as ##E(V_k)## is bounded. I suspect this is what you are looking for.
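One standard way to make that precise (a Chebyshev-style bound, assuming ##E[V_k]\neq 0##): the stated condition forces ##V_k/E[V_k]\to 1## in probability, since for every ##\varepsilon>0##
$$P\left(\left|\frac{V_k}{E[V_k]}-1\right|\geq\varepsilon\right)\leq\frac{\mathrm{Var}(V_k)}{\varepsilon^2\,E[V_k]^2}\longrightarrow 0.$$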
 