MHB Discrete-continuous random variable

AI Thread Summary
The discussion revolves around calculating the variance of a random variable \(X\) that equals one of two normally distributed variables, \(Z_1\) or \(Z_2\), depending on a Bernoulli variable \(B\). The initially proposed formula for the variance of \(X\) is \( \sigma_{X}^{2}= p\ \sigma_{1}^{2} + (1-p)\ \sigma_{2}^{2} \). Participants clarify that this holds when the means of the two distributions are equal. If the means differ, the variance must also account for the squared difference of the means, picking up an extra term \( p(1-p)(\mu_1 - \mu_2)^2 \). The final consensus is that the total variance of \(X\) depends on both the component variances and the component means.
OhMyMarkov
Hello everyone!

I'm looking at the following random variables:

$Z_1$ is normally distributed with zero mean and variance $\sigma _1 ^2$
$Z_2$ is normally distributed with zero mean and variance $\sigma _2 ^2$

$B$ is Bernoulli with probability of success $p$.

$X$ is a random variable that takes $Z_1$ if $B=1$ and $Z_2$ if $B=0$.

What is the variance of $X$?
 
Re: Discrete-continuous random variable!

OhMyMarkov said:
Hello everyone!

I'm looking at the following random variables:

$Z_1$ is normally distributed with zero mean and variance $\sigma _1 ^2$
$Z_2$ is normally distributed with zero mean and variance $\sigma _2 ^2$

$B$ is Bernoulli with probability of success $p$.

$X$ is a random variable that takes $Z_1$ if $B=1$ and $Z_2$ if $B=0$.

What is the variance of $X$?

The 'instinctive' answer should be...

$\displaystyle \sigma_{X}^{2}= p\ \sigma_{1}^{2} + (1-p)\ \sigma_{2}^{2}$ (1)

Kind regards

$\chi$ $\sigma$
 
Re: Discrete-continuous random variable!

My statistics are a bit rusty, but we have:

$$P(B = 1) = p$$
$$P(B = 0) = 1 - p$$

And we're given that:

$$VAR(X | B = 1) = \sigma_1^2$$
$$VAR(X | B = 0) = \sigma_2^2$$

And since $Z_1$ and $Z_2$ are independent, you can just add the variances up:

$$VAR(X) = P(B = 1) VAR(X | B = 1) + P(B = 0) VAR(X | B = 0)$$

Which gives:

$$VAR(X) = p \sigma_1^2 + (1 - p) \sigma_2^2 = \sigma_2^2 + (\sigma_1^2 - \sigma_2^2) p$$

I've checked the result empirically.
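For reference, here is a minimal Monte Carlo sketch of such a check (the parameter values are just illustrative assumptions):

```python
import numpy as np

# Illustrative parameters; any values work for the check
p, sigma1, sigma2 = 0.3, 1.0, 2.0
n = 1_000_000

rng = np.random.default_rng(0)
b = rng.random(n) < p                    # Bernoulli(p) selector
z1 = rng.normal(0.0, sigma1, n)          # Z1 ~ N(0, sigma1^2)
z2 = rng.normal(0.0, sigma2, n)          # Z2 ~ N(0, sigma2^2)
x = np.where(b, z1, z2)                  # X = Z1 if B = 1, else Z2

print("empirical VAR(X):            ", x.var())
print("p*sigma1^2 + (1-p)*sigma2^2: ", p * sigma1**2 + (1 - p) * sigma2**2)
```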
 
Re: Discrete-continuous random variable!

Bacterius said:
[snip]

And since $Z_1$ and $Z_2$ are independent, you can just add the variances up:

$$VAR(X) = P(B = 1) VAR(X | B = 1) + P(B = 0) VAR(X | B = 0)$$

[snip]
Hi Bacterius,

I don't think that is true, in general. Have you thought about what would change if $Z_1$ and $Z_2$ did not have the same mean?

Suggestion: You might start with
$$var[X] = E[X^2] - E[X]^2$$
 
Re: Discrete-continuous random variable!

awkward said:
Hi Bacterius,

I don't think that is true, in general. Have you thought about what would change if $Z_1$ and $Z_2$ did not have the same mean?

Suggestion: You might start with
$$var[X] = E[X^2] - E[X]^2$$

Yes, it only works in this particular case; I didn't consider anything beyond the problem as asked.
 
Re: Discrete-continuous random variable!

Cool problem.

Assign $Z_{i}$ to have mean $\mu_{i}$ and pdf $f_{i}(x)$.

Everything should follow if we find a pdf $f(x)$ for $X$.

$f(x) = f(x\,|\,B=1)\,P(B=1) + f(x\,|\,B=0)\,P(B=0)$
(law of total probability, conditioning on $B$)
$= p\,f_{1}(x) + (1-p)\,f_{2}(x).$
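As a quick sanity check of this density, one can verify numerically that it integrates to 1. A small sketch, assuming SciPy and illustrative parameter values:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

# Illustrative parameters (assumed for the demo)
p, mu1, sigma1, mu2, sigma2 = 0.3, 0.0, 1.0, 2.0, 0.5

# Mixture density f(x) = p f1(x) + (1-p) f2(x)
f = lambda x: p * norm.pdf(x, mu1, sigma1) + (1 - p) * norm.pdf(x, mu2, sigma2)

total, _ = quad(f, -np.inf, np.inf)
print(total)  # ~1.0, so f is a valid density
```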

$E(X)=\int_{\mathbb{R}}x\,f(x)\,dx$
$=\int_{\mathbb{R}}x\,[p\,f_{1}(x)+(1-p)\,f_{2}(x)]\,dx$
$=p\int_{\mathbb{R}}x\,f_{1}(x)\,dx+(1-p)\int_{\mathbb{R}}x\,f_{2}(x)\,dx$
$=p\,E(Z_{1})+(1-p)\,E(Z_{2})$
$=p\,\mu_{1}+(1-p)\,\mu_{2}.$

$E(X^{2})=\int_{\mathbb{R}}x^{2}\,f(x)\,dx$
$=\int_{\mathbb{R}}x^{2}\,[p\,f_{1}(x)+(1-p)\,f_{2}(x)]\,dx$
$=p\int_{\mathbb{R}}x^{2}\,f_{1}(x)\,dx+(1-p)\int_{\mathbb{R}}x^{2}\,f_{2}(x)\,dx$
$=p\,E(Z_{1}^{2})+(1-p)\,E(Z_{2}^{2})$
$=p(\sigma_{1}^{2}+\mu_{1}^{2})+(1-p)(\sigma_{2}^{2}+\mu_{2}^{2})$
(because $var(Z)=E(Z^{2})-E(Z)^{2}$, so $E(Z_{i}^{2})=\sigma_{i}^{2}+\mu_{i}^{2}$)

$var(X)=E(X^{2})-E(X)^2$, for which I am getting
$p\sigma_{1}^{2}+(1-p)\sigma_{2}^{2}+p(1-p)(\mu_{1}-\mu_{2})^{2}$.
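As a cross-check, the same expression drops out of the law of total variance, conditioning on $B$:
$$var(X)=E[var(X\,|\,B)]+var(E[X\,|\,B])$$
$$=\left[p\,\sigma_{1}^{2}+(1-p)\,\sigma_{2}^{2}\right]+\left[p\,\mu_{1}^{2}+(1-p)\,\mu_{2}^{2}-\left(p\,\mu_{1}+(1-p)\,\mu_{2}\right)^{2}\right],$$
and expanding the second bracket gives exactly $p(1-p)(\mu_{1}-\mu_{2})^{2}$.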

This concurs with the earlier answers, but at first it seems strange. Consider what the variance of $X$ would be if the means were very different, as in these two distributions:

... .. . .. ... ... _________________________ . .. . ... . ... .. ... ..

Calculating the total variance involves the distance from each point to the overall mean, and here the overall mean lies far from each component's mean. Yet if the work above is correct, all the information you need about how the total variance changes when the means change is contained in the difference between the means. (The difference, not the sum; I made a mistake typing it up.)
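A numerical check of the full formula with unequal means, in the same spirit as the earlier sketch (parameters again assumed for illustration):

```python
import numpy as np

# Assumed illustrative parameters, now with unequal means
p, mu1, sigma1, mu2, sigma2 = 0.4, -5.0, 1.0, 5.0, 2.0
n = 1_000_000

rng = np.random.default_rng(1)
b = rng.random(n) < p                          # Bernoulli(p) selector
z1 = rng.normal(mu1, sigma1, n)                # Z1 ~ N(mu1, sigma1^2)
z2 = rng.normal(mu2, sigma2, n)                # Z2 ~ N(mu2, sigma2^2)
x = np.where(b, z1, z2)                        # mixture sample

formula = p * sigma1**2 + (1 - p) * sigma2**2 + p * (1 - p) * (mu1 - mu2) ** 2
print("empirical var(X):", x.var())
print("formula:         ", formula)            # both ~26.8 for these values
```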
 