Suppose that nX is binomially distributed with mean (n-1)p and variance (n-1)p(1-p), i.e. nX ~ B(n-1, p).

I wish to find the expected value of a function f(x), thus

[itex] E[f(X)] = \sum_{k=0}^{n-1} P(nX = k)\, f(k/n) [/itex]

Assume that f is non-linear, decreasing and continuous, mapping [0,1] to [0, ∞).

I want to show that the above sum converges to f(p) as n → ∞.

Computing the sum numerically for many different functions (even very extreme ones) makes it clear that it does converge to f(p). The proof looks like it should be simple, but I just couldn't figure it out. If f is linear, the sum is exactly f((n-1)p/n), which converges to f(p); if it is not, Jensen's inequality tells us that for finite n we get something else.
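Not part of the original question, but here is a quick numerical sketch of the computation described above; the test function f(x) = e^{-3x} and the value p = 0.3 are my own example choices (f is decreasing, continuous, non-linear, and maps [0,1] into [0, ∞)). The pmf is evaluated in log space so the exact sum stays stable for large n:

```python
import math

def binom_pmf(k, m, p):
    # Binomial(m, p) pmf at k, computed in log space via lgamma
    # to avoid overflow in the binomial coefficient for large m
    logp = (math.lgamma(m + 1) - math.lgamma(k + 1) - math.lgamma(m - k + 1)
            + k * math.log(p) + (m - k) * math.log1p(-p))
    return math.exp(logp)

def expected_f(n, p, f):
    # E[f(X)] with nX ~ Binomial(n-1, p), i.e. the exact finite sum
    # sum_{k=0}^{n-1} P(nX = k) f(k/n)
    return sum(binom_pmf(k, n - 1, p) * f(k / n) for k in range(n))

f = lambda x: math.exp(-3 * x)   # example: decreasing, continuous, non-linear
p = 0.3
for n in (10, 100, 1000, 10000):
    print(n, expected_f(n, p, f))
print("f(p) =", f(p))
```

The printed values drift toward f(p), and the gap shrinks roughly like 1/n, which is consistent with a second-order Taylor expansion of f around the mean.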

If we are not happy with the discrete sum, we can use the normal approximation for large n and study instead

[tex] \int_{0}^{\infty} \phi\!\left(t;\, (n-1)p,\, (n-1)p(1-p)\right) f(t/n)\, dt [/tex]

where [itex]\phi[/itex] is the normal density with the same mean and variance as nX above.
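Again not from the original post, but the Gaussian version of the integral can be sketched the same way; the substitution y = t/n turns it into a normal expectation of f with mean (n-1)p/n and standard deviation sqrt((n-1)p(1-p))/n, which I approximate here with a trapezoidal sum (f(x) = e^{-3x} and p = 0.3 are the same illustrative choices as before):

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def gaussian_expectation(n, p, f, grid=20001):
    # Trapezoidal approximation of ∫ N(y; mu, sigma^2) f(y) dy over mu ± 8 sigma,
    # where Y = (nX)/n has mean (n-1)p/n and variance (n-1)p(1-p)/n^2
    mu = (n - 1) * p / n
    sigma = math.sqrt((n - 1) * p * (1 - p)) / n
    lo, hi = mu - 8 * sigma, mu + 8 * sigma
    h = (hi - lo) / (grid - 1)
    total = 0.0
    for i in range(grid):
        y = lo + i * h
        w = 0.5 if i in (0, grid - 1) else 1.0   # trapezoid endpoint weights
        total += w * normal_pdf(y, mu, sigma) * f(y)
    return total * h

f = lambda x: math.exp(-3 * x)
p = 0.3
for n in (100, 1000, 10000):
    print(n, gaussian_expectation(n, p, f))
print("f(p) =", f(p))
```

As the variance of Y shrinks like 1/n, the normal density concentrates at p and the integral tends to f(p), mirroring what the exact binomial sum does.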

Just wondering if anyone has a tip on how I can approach this. Thanks

**Physics Forums - The Fusion of Science and Community**


# Convergence of the Expected Value of a Function

