
## Main Question or Discussion Point

Suppose that nx is binomially distributed, nx ~ B(n-1, p), so that it has mean (n-1)p and variance (n-1)p(1-p).

I wish to find the expected value of a function f(x), thus

[itex] \sum_{nx=0}^{n-1} \binom{n-1}{nx} p^{nx} (1-p)^{n-1-nx} \, f\!\left(\frac{nx}{n}\right) [/itex]

Assume that f is non-linear, decreasing, and continuous, with f: [0,1] → [0,∞).

I want to show that the above sum converges to f(p) as n → ∞.

Numerical computation with many different functions (even very extreme ones) makes it clear that the sum does converge to f(p). The proof looks like it should be simple, but I couldn't figure it out. If f is linear, the sum is just f((n-1)p/n), which converges to f(p); if f is not linear, Jensen's inequality tells us that for finite n we get something other than f of the mean.
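For concreteness, here is a minimal sketch of the kind of numerical check described above. The choice f(x) = 1/(1+x) is just an illustrative non-linear, decreasing, continuous function on [0,1]; it is not from the original post.

```python
import math

def binom_sum(n, p, f):
    """Compute sum over k = 0..n-1 of C(n-1, k) p^k (1-p)^(n-1-k) * f(k/n),
    i.e. E[f(X/n)] with X ~ Binomial(n-1, p)."""
    return sum(
        math.comb(n - 1, k) * p**k * (1 - p) ** (n - 1 - k) * f(k / n)
        for k in range(n)
    )

# An illustrative non-linear, decreasing, continuous f: [0,1] -> [0, inf)
f = lambda x: 1.0 / (1.0 + x)

p = 0.3
for n in (10, 100, 1000):
    print(n, binom_sum(n, p, f))
print("f(p) =", f(p))
```

As n grows, the printed values approach f(p), matching the observed behaviour; the Jensen gap shrinks because the variance of X/n is of order 1/n.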

If we are not happy with the sum, we can use the normal approximation to the binomial for large n to replace it with:

[tex] \int_{0}^{\infty} \frac{1}{\sqrt{2\pi\sigma^{2}}} \, e^{-\frac{(t-\mu)^{2}}{2\sigma^{2}}} \, f\!\left(\frac{t}{n}\right) dt [/tex]

where t stands for nx, with the same mean μ = (n-1)p and variance σ² = (n-1)p(1-p) as above.
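The normal-approximation integral can be checked numerically in the same way; this is a sketch using a plain trapezoid rule, again with the illustrative choice f(x) = 1/(1+x) (not from the original post).

```python
import math

def normal_integral(n, p, f, steps=20000):
    """Approximate the integral over [0, n] of the normal density with
    mean mu = (n-1)p and variance sigma^2 = (n-1)p(1-p), times f(t/n),
    using the trapezoid rule."""
    mu = (n - 1) * p
    var = (n - 1) * p * (1 - p)
    coef = 1.0 / math.sqrt(2 * math.pi * var)
    a, b = 0.0, float(n)
    h = (b - a) / steps
    total = 0.0
    for i in range(steps + 1):
        t = a + i * h
        weight = 0.5 if i in (0, steps) else 1.0  # trapezoid endpoint weights
        total += weight * coef * math.exp(-((t - mu) ** 2) / (2 * var)) * f(t / n)
    return total * h

f = lambda x: 1.0 / (1.0 + x)
p = 0.3
for n in (100, 1000):
    print(n, normal_integral(n, p, f))
print("f(p) =", f(p))
```

For large n the density concentrates around μ = (n-1)p, so t/n concentrates around p and the integral approaches f(p), in agreement with the binomial sum.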

Just wondering if anyone has a tip on how I can approach this. Thanks.
