How to view conditional variance intuitively?

Summary
The discussion focuses on understanding the conditional variance of a normalized Gaussian random variable, specifically when the data is divided into positive and negative segments. Each segment has a conditional variance of approximately 0.363, derived from the formula 1−2/π. Participants emphasize the importance of intuition regarding conditional variances, noting that they do not necessarily exceed the variance of the original distribution. A mathematical derivation is sought, with references to specific calculations involving the probability density function of the Gaussian variable. The conversation highlights the surprising behavior of variance in tail segments, which decreases as one moves further away from the mean.
yamata1
We have a sample of ##X##, a normalized Gaussian random variable. We divide the data into positive and negative values.
Each will have a conditional variance of ##1 - \frac{2}{\pi}##.
Can someone show how to get this result?

I found this problem here (page 3) : https://www.dropbox.com/s/18pjy7gmz0hl6q7/Correlation.pdf?dl=0

Thank you.
 
The title of the thread speaks of viewing something intuitively, but your question

Can someone show how to get this result?

seems to ask for a mathematical derivation.

The relevant passage in the document concerns forming a correct intuition about conditional variances: the correct intuition is that they need not be larger (or smaller) than the variance of the original distribution being constrained.

The problem becomes much easier when we consider the behavior in lower dimensions for Gaussian variables. The intuition is as follows. Take a sample of ##X##, a normalized Gaussian random variable. Verify that the variance is 1. Divide the data into positive and negative. Each will have a conditional variance of ##1 - 2/\pi \approx 0.363##. Divide the segments further, and there will be an additional drop in variance. And, although one is programmed to think that the tail should be more volatile, it isn't so; the segments in the tail have an increasingly lower variance as one gets further away, see Fig. 4 of the linked document.
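For what it's worth, the quoted claims are easy to check by simulation. Below is a minimal NumPy sketch (my own illustration, not from the linked document) that estimates the conditional variance of the positive half and of successive unit-width segments:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000_000)  # large sample of a standard normal

# Conditional variance of the positive half; the negative half matches by symmetry.
pos = x[x > 0]
print(pos.var())         # ~0.3634 (empirical)
print(1 - 2 / np.pi)     # 0.36338..., the analytic value

# Unit-width segments further out in the tail have *smaller* variance.
for a in range(4):
    seg = x[(a <= x) & (x < a + 1)]
    print(f"segment [{a}, {a + 1}): variance ~ {seg.var():.4f}")
```

The segment variances should come out around 0.08, 0.07, 0.06, 0.05 for ##[0,1), [1,2), [2,3), [3,4)##, decreasing as one moves into the tail, in line with the passage above.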
 
Stephen Tashi said:
The title of the thread speaks of viewing something intuitively, but your question

seems to ask for a mathematical derivation.

The relevant passage in the document concerns forming a correct intuition about conditional variances: the correct intuition is that they need not be larger (or smaller) than the variance of the original distribution being constrained.
I am failing to understand how to get this answer. Is there a simple computation or property that easily gives it? I tried the formula for ##v_x(Q)## without success.
 
Consider the case when the random variable ##X## has the distribution of a standard normal conditioned to be positive, i.e. the half-normal probability density ##g(x) = 2 \frac{1}{\sqrt{2 \pi} } e^{-{x^2/2}}## for ##x \ge 0##.

The variance of ##X## is ##\sigma^2_X = \int_0^\infty x^2 g(x) dx -( \int_0^{\infty} x g(x) dx)^2##

##\int_0^\infty x^2 g(x) dx = \int_0^\infty x^2 2 \frac{1}{\sqrt{2 \pi}} e^{-{x^2/2}} dx##
## = 2 \int_0^\infty x^2 \frac{1}{\sqrt{2 \pi}} e^{-{x^2/2}} dx ##
##= \int_{-\infty}^{\infty} x^2 \frac{1}{\sqrt{2 \pi}} e^{-{x^2/2}} dx = 1##
since the last integral is the same as computing the variance of a normal distribution that has mean zero and variance 1.

##\int_0^\infty x g(x) dx = \int_0^\infty x (2 \frac{1}{\sqrt{2 \pi}} e^{-{x^2/2}}) dx##
## = ( -2 \frac{1}{\sqrt{2 \pi}} e^{-{x^2/2}}) |_0^\infty ##
##= 0 - ( -2 \frac{1}{\sqrt{2 \pi}})##
## = \frac{\sqrt{2}}{\sqrt{\pi}}##

So ##\sigma^2_X = 1 - \left(\frac{\sqrt{2}}{\sqrt{\pi}}\right)^2 = 1 - \frac{2}{\pi} \approx 0.363##, which is the value quoted in the document.
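If anyone wants to double-check these integrals symbolically, here is a short SymPy sketch (illustrative only; it assumes SymPy is available):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
g = 2 / sp.sqrt(2 * sp.pi) * sp.exp(-x**2 / 2)  # half-normal density

m2 = sp.integrate(x**2 * g, (x, 0, sp.oo))  # second moment: 1
m1 = sp.integrate(x * g, (x, 0, sp.oo))     # mean: sqrt(2/pi)

print(sp.simplify(m2 - m1**2))              # 1 - 2/pi
```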
 
