Undergrad: Demonstration of an inequality between two variance expressions

Summary
The discussion centers on proving the inequality between two variance expressions, namely that ##\sigma_{o,1}^{2} < \sigma_{o,2}^{2}##. The variance ##\sigma_{D,1}^{2}## is a sum over terms in ##(2\ell+1)^{-1}## times a constant factor, while ##\sigma_{D,2}^{2}## involves a similar summation with a different formulation. The goal reduces to showing that the squared sum of the ##C_\ell## exceeds a specific combination of terms in ##X## and ##Y##, where ##X## is an increasing and ##Y## a decreasing function of ##\ell##. The author is seeking suggestions for completing this proof.
fab13
TL;DR
In an astrophysics context, I would like to prove that ##\sigma_{o, 1}^{2}<\sigma_{o, 2}^{2}##, but I am having difficulty deriving this inequality.
As a reminder, ##C_\ell## is the variance of the random variables ##a_{\ell m}## (the coefficients of the spherical harmonic expansion), which follow a Gaussian PDF:

##C_{\ell}=\left\langle a_{\ell m}^{2}\right\rangle=\frac{1}{2 \ell+1} \sum_{m=-\ell}^{\ell} a_{\ell m}^{2}=\operatorname{Var}\left(a_{\ell m}\right)##
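As a quick illustration of this definition, here is a minimal numerical sketch (the values ##\ell = 1000## and a true variance of ##4## are my own toy choices, not from the thread): draw ##2\ell+1## Gaussian ##a_{\ell m}##'s and estimate ##C_\ell## as their empirical variance.

```python
# Toy illustration: C_ell as the empirical variance of the a_{lm}'s.
import numpy as np

rng = np.random.default_rng(0)

ell = 1000                    # hypothetical multipole (my choice)
true_c_ell = 4.0              # hypothetical true variance C_ell (my choice)

# Draw the 2*ell + 1 coefficients a_{lm}, m = -ell..ell, zero-mean Gaussian.
a_lm = rng.normal(0.0, np.sqrt(true_c_ell), size=2 * ell + 1)

# C_ell = (1 / (2*ell + 1)) * sum_m a_{lm}^2  (zero-mean convention)
c_ell_hat = np.sum(a_lm ** 2) / (2 * ell + 1)
print(c_ell_hat)  # close to 4.0 for large ell
```

The estimate fluctuates around the true variance with relative scatter of order ##\sqrt{2/(2\ell+1)}##, which is why large ##\ell## gives a tight estimate.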

1) Second observable:
##
\sigma_{D, 2}^{2}=\dfrac{2 \sum_{\ell_{\min }}^{\ell_{\max }}(2 \ell+1)}{\left(f_{s k y} N_{p}^{2}\right)}
##
so:
##
\sigma_{o, 2}^{2}=\dfrac{\sigma_{D, 2}^{2}}{\left(\sum_{\ell_{\min }}^{\ell_{\max }}(2 \ell+1) C_{\ell}\right)^{2}}
##

2) First observable:
##
\sigma_{D, 1}^{2}=\sum_{\ell_{\min }}^{\ell_{\max }} \dfrac{2}{(2 \ell+1)\left(f_{s k y} N_{p}^{2}\right)}
##
so:
##
\sigma_{o, 1}^{2}=\dfrac{\sigma_{D, 1}^{2}}{\left(\sum_{\ell_{\min }}^{\ell_{\max }} C_{\ell}\right)^{2}}
##
3) Goal:
I would like to prove that ##\sigma_{o, 1}^{2}<\sigma_{o, 2}^{2}##, but I am having difficulty deriving this inequality.
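Before attempting a proof, the claimed inequality can be sanity-checked numerically. The sketch below uses assumed toy values (##f_{sky} = N_p = 1## and a decreasing spectrum ##C_\ell \propto 1/(\ell(\ell+1))## over ##\ell = 2\dots 1000## — my choices, not from the thread) and evaluates both observable variances directly from the definitions above.

```python
# Numerical sanity check of sigma_{o,1}^2 < sigma_{o,2}^2 for a toy spectrum.
# Assumptions (mine, for illustration only): f_sky = 1, N_p = 1,
# and a decreasing C_ell proportional to 1 / (ell * (ell + 1)).
import numpy as np

ell = np.arange(2, 1001)           # hypothetical ell_min .. ell_max
c_ell = 1.0 / (ell * (ell + 1.0))  # toy decreasing spectrum
f_sky, n_p = 1.0, 1.0

two_l_plus_1 = 2.0 * ell + 1.0

# Second observable: sigma_{D,2}^2 = 2 * sum(2l+1) / (f_sky * N_p^2),
# then divide by (sum (2l+1) C_l)^2.
sigma_d2_sq = 2.0 * np.sum(two_l_plus_1) / (f_sky * n_p**2)
sigma_o2_sq = sigma_d2_sq / np.sum(two_l_plus_1 * c_ell) ** 2

# First observable: sigma_{D,1}^2 = sum 2 / ((2l+1) f_sky N_p^2),
# then divide by (sum C_l)^2.
sigma_d1_sq = np.sum(2.0 / (two_l_plus_1 * (f_sky * n_p**2)))
sigma_o1_sq = sigma_d1_sq / np.sum(c_ell) ** 2

print(sigma_o1_sq < sigma_o2_sq)  # prints: True
```

This is only a check for one assumed spectrum, not a proof, but it is consistent with the inequality one wants to establish.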
 
My demonstration is progressing.

All that remains is to prove the following inequality, taking ## X = 2 \ell + 1 ## and ## Y = C_\ell ##:

## \big(\sum Y \big)^{2} > \sum X^{- 1} \sum XY^{2}\quad (1) ##

where ## X ## and ## Y ## are functions of ## \ell ## (see above); ## X ## is increasing in ## \ell ##, while ## Y ## is assumed to be decreasing.

Each sum runs over ## \sum_{\ell =\ell_{min}}^{\ell_{max}} ##; I omitted the limits in ## (1) ## for readability.

Any suggestion, lead, or help is welcome.

Best regards
 