Joint Distribution of Changing Mean

SUMMARY

The discussion centers on the distribution of a random variable X whose mean \mu is itself randomly distributed about 0 with standard deviation 1. Participants confirm that the standard deviation of X is sqrt(2), derived from the conditional formula E(X^2 | E(X) = m) = 1 + m^2 together with E(m^2) = 1. The conversation explores how the variances of the two normal distributions combine to give the overall variance of X. The conclusion is that X can be viewed as the sum of two independent normal random variables, giving variance 2 and hence standard deviation sqrt(2).

PREREQUISITES
  • Understanding of joint distributions in probability theory
  • Familiarity with the properties of normal distributions
  • Knowledge of variance and standard deviation calculations
  • Proficiency in conditional expectation notation and operations
NEXT STEPS
  • Study the derivation of E(X^2 | E(X) = m) in the context of joint distributions
  • Explore the implications of the Central Limit Theorem on the sum of normal distributions
  • Learn about the properties of conditional expectations in probability theory
  • Investigate the relationship between variance and standard deviation in random variables
USEFUL FOR

Statisticians, data scientists, and mathematicians interested in advanced probability theory, particularly those working with joint distributions and normal random variables.

ghotra
Let X be a random variable with mean \mu and standard deviation 1.

Let's add a twist.

Suppose \mu is randomly distributed about 0 with standard deviation 1.

At each iteration, we select a new \mu according to its distribution. This mean is then used in the distribution for X. Then we pick an X according to its distribution.

My question: What is the resulting joint distribution? Given this joint distribution, I should be able to calculate the mean and standard deviation. Clearly, the mean of X will be 0, but what will be the standard deviation of X? It seems that it should, at a minimum, be greater than 1.

Thanks!
 
Let m = mean.

E(X^2 | E(X) = m) = 1 + m^2

E(m^2) = 1, since std dev(m) = 1 and mean(m) = 0.

Therefore E(X^2) = 2, or

std dev(X) = sqrt(2)
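This two-stage sampling is easy to check numerically. A minimal Monte Carlo sketch (the sample size and seed are arbitrary choices, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Stage 1: draw a mean m ~ N(0, 1) for each trial.
m = rng.normal(loc=0.0, scale=1.0, size=n)
# Stage 2: draw X ~ N(m, 1) using that trial's mean.
x = rng.normal(loc=m, scale=1.0)

print(x.mean())  # near 0
print(x.std())   # near sqrt(2)
```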
 
Could you elaborate a bit more? I can confirm that the std deviation is indeed sqrt(2), however, I don't understand where the following formula comes from:

E(X^2 | E(x) = m) = 1 + E(m^2)

From the definition,

\sigma_x^2 = E(x^2) - E(x)^2 = E(x^2) - m^2

Presumably, I stick your formula into the formula I just wrote above... but I'm still confused where your formula comes from. Also, m (\mu in my original post) is not a fixed constant... it is determined by a normal distribution.
 
I think the easiest way to do this is to simplify the description -- X is the sum of two normal distributions. (admittedly, it's good to be able to do it different ways, though)
 
Hurkyl said:
I think the easiest way to do this is to simplify the description -- X is the sum of two normal distributions.

Interesting, I had wondered if that was okay to do... as the variance of X would then be the sum of the variances of the two normal distributions... which gives 2, i.e. a standard deviation of sqrt(2). Could you explain how these are equivalent pictures?

In general, I would like to consider a set of distributions

(\mu_1, \sigma_1), (\mu_2, \sigma_2), (\mu_3, \sigma_3), ...

where the \mu_i are distributed normally with mean \mu and std deviation \Delta \mu

where the \sigma_i are distributed normally with mean \sigma and std deviation \Delta \sigma

For each distribution, we pick x once. What is the expected value of x and what is the standard deviation?
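For what it's worth, the general version can be answered with the law of total variance: E(x) = \mu and Var(x) = E(\sigma_i^2) + Var(\mu_i) = \sigma^2 + \Delta\sigma^2 + \Delta\mu^2, assuming the \sigma_i stay positive (or are taken in absolute value). A simulation sketch with illustrative parameter values (not from the thread):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

mu0, d_mu = 2.0, 0.5    # mean and spread of the mu_i (illustrative)
sig0, d_sig = 1.0, 0.3  # mean and spread of the sigma_i (illustrative)

mu_i = rng.normal(mu0, d_mu, size=n)
sigma_i = np.abs(rng.normal(sig0, d_sig, size=n))  # |.| keeps the scale valid
x = rng.normal(mu_i, sigma_i)

# Law of total variance:
#   Var(x) = E[sigma_i^2] + Var(mu_i) = sig0^2 + d_sig^2 + d_mu^2
predicted_var = sig0**2 + d_sig**2 + d_mu**2
print(x.mean(), mu0)            # sample mean vs. mu
print(x.var(), predicted_var)   # sample variance vs. prediction
```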
 
X is a normal distribution centered about u.

X - u is a normal distribution centered about 0.

X - u is presumably independent of u. (You never actually specified in your problem that the only dependence of X on u is that u is the mean of the distribution of X, but I assume that was meant.)

X = (X - u) + u
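Written out, the variance bookkeeping for this decomposition is:

Var(X) = Var((X - u) + u) = Var(X - u) + Var(u) = 1 + 1 = 2

so std dev(X) = sqrt(2), where the middle step uses the independence of X - u and u.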
 
ghotra said:
Could you elaborate a bit more? I can confirm that the std deviation is indeed sqrt(2), however, I don't understand where the following formula comes from:

E(X^2 | E(x) = m) = 1 + E(m^2)

... Also, m (\mu in my original post) is not the same... it is determined by a normal distribution.

Note: you wrote E(X^2 | E(x) = m) = 1 + E(m^2). If you look carefully at what I wrote, I had m^2 on the right-hand side of that line, and then took the average over m on both sides to get E(X^2).

In your expression for \sigma_x^2, you implicitly defined it for a specific value of m. In the expression I wrote, I just made that explicit and rearranged terms, using the fact that the variance of X given m is 1.

All I assumed about m is that it was random with first and second moments 0 and 1; normal distribution is unnecessary.
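Spelled out with conditional expectations (using only the first two moments of m, as noted above):

E(X^2 | m) = Var(X | m) + [E(X | m)]^2 = 1 + m^2

E(X^2) = E[E(X^2 | m)] = 1 + E(m^2) = 1 + 1 = 2

Var(X) = E(X^2) - [E(X)]^2 = 2 - 0 = 2, so std dev(X) = sqrt(2).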
 