MHB Greatest probability - Expected value

mathmari
Hey! :o

The geometric distribution with parameter $p\in (0,1)$ has the probability function \begin{equation*}f_X(x)=p(1-p)^{x-1}, \ \ x=1, 2, 3, \ldots\end{equation*}

I have shown that $f_X$ for each value of $p\in (0,1)$ is strictly monotone decreasing, as follows:
\begin{align*}f_X(x+1)=p(1-p)^{x+1-1}=p(1-p)^{(x-1)+1}=p(1-p)^{x-1}(1-p)\overset{(\star)}{<}p(1-p)^{x-1}=f_X(x)\end{align*} $(\star)$ : Since $p\in (0,1)$ we have that \begin{equation*}0<p<1\Rightarrow -1<-p<0 \Rightarrow 0<1-p<1\end{equation*}
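As a quick sanity check of this monotonicity, here is a minimal Python sketch (not part of the argument above; the values of $p$ are arbitrary illustrative choices) that evaluates $f_X$ on a few points and confirms it is strictly decreasing:

```python
# Geometric pmf f_X(x) = p * (1 - p)**(x - 1), with support x = 1, 2, 3, ...
def f_X(x, p):
    return p * (1 - p) ** (x - 1)

# Check strict monotone decrease for a few illustrative values of p.
for p in (0.1, 0.5, 0.9):
    values = [f_X(x, p) for x in range(1, 11)]
    assert all(a > b for a, b in zip(values, values[1:]))
    print(f"p = {p}: f_X(1..5) = {[round(v, 4) for v in values[:5]]}")
```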

That means that the value $x=1$ has the greatest probability. But the expected value of a random variable $X$ with geometric distribution is $\frac{1}{p}$. Why is it like that and not equal to the value with the greatest probability? (Wondering)
 
Hey mathmari! (Smile)

The value with the greatest probability is known as the Mode, which is:
$$\text{Mode} = \mathop{\mathrm{arg\,max}}_{x\in \mathbb N} f_X(x)$$
It is one of the measures of center, just like the Mean and Median.
However, the Expected Value, also known as the Mean, is the average weighted by probability, or:
$$\text{Expected Value} = \sum_{x\in\mathbb N}xf_X(x)$$
If the distribution is symmetric, they are the same, but otherwise they are not. (Thinking)
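To make the difference concrete, here is a minimal Python sketch (the choice $p=0.25$ and the truncation of the infinite support are arbitrary) that computes both the mode and the mean of a geometric distribution directly from $f_X$:

```python
# Compare the mode (most probable value) with the mean (probability-weighted
# average) of a geometric distribution for an illustrative p.
p = 0.25
xs = range(1, 10_000)                           # truncate the infinite support
pmf = {x: p * (1 - p) ** (x - 1) for x in xs}

mode = max(pmf, key=pmf.get)                    # arg max_x f_X(x)
mean = sum(x * fx for x, fx in pmf.items())     # sum_x x * f_X(x)

print(mode)            # 1
print(round(mean, 4))  # 4.0, i.e. 1/p
```

The mode is $1$, while the mean is $1/p = 4$: the long right tail of the distribution pulls the average well above the most probable value.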
 
I like Serena said:
The value with the greatest probability is known as the Mode, which is:
$$\text{Mode} = \mathop{\mathrm{arg\,max}}_{x\in \mathbb N} f_X(x)$$

What do you mean by arg? I got stuck right now. (Wondering)

I like Serena said:
However, the Expected Value, also known as the Mean, is the average weighted by probability, or:
$$\text{Expected Value} = \sum_{x\in\mathbb N}xf_X(x)$$
If the distribution is symmetric, they are the same, but otherwise they are not. (Thinking)

Ah ok!
 
mathmari said:
What do you mean by arg? I got stuck right now.

I introduced the $\mathrm{arg\,max}$ notation only to illustrate the difference from the expected value.
$\mathrm{arg\,max}$ is the value (the argument) for which the given expression takes its maximum. (Nerd)
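For instance, taking $p = 0.3$ just for illustration:
$$\max_{x\in \mathbb N} f_X(x) = f_X(1) = 0.3, \qquad \mathop{\mathrm{arg\,max}}_{x\in \mathbb N} f_X(x) = 1,$$
that is, the $\max$ is the largest probability itself, while the $\mathrm{arg\,max}$ is the value of $x$ at which that probability is attained.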

In this case we can calculate the expected value with:
$$\text{Expected Value} = \sum_{x\in\mathbb N}xf_X(x) = \sum x\, p(1-p)^{x-1}
= p\sum x (1-p)^{x-1} = p \sum \frac{d}{dp}\Big[-(1-p)^x\Big] \\
= -p\, \frac{d}{dp}\left[ \sum (1-p)^x\right] = -p\cdot \frac{d}{dp}\left[ \frac{1}{1-(1-p)}\right]
= -p\cdot \frac{d}{dp}\left[ \frac 1p\right] = -p \cdot \left(-\frac 1{p^2}\right) = \frac 1p
$$
(Thinking)
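As a cross-check of that closed form, here is a minimal Python sketch (the cutoff used to truncate the infinite series is an arbitrary choice) that sums the series numerically:

```python
# Numerically verify E[X] = 1/p by truncating sum_{x>=1} x * p * (1 - p)**(x - 1).
def expected_value(p, cutoff=100_000):
    return sum(x * p * (1 - p) ** (x - 1) for x in range(1, cutoff + 1))

for p in (0.1, 0.25, 0.5):
    print(p, round(expected_value(p), 6), 1 / p)
```

For each $p$ the truncated sum agrees with $1/p$ to the displayed precision.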
 