Probability inequality for the sum of independent normal random variables


Discussion Overview

The discussion revolves around the existence of a probability inequality for the sum of independent normal random variables, specifically focusing on bounding the tail probabilities of the normalized sum of these variables. The scope includes theoretical exploration and mathematical reasoning related to probability inequalities.

Discussion Character

  • Exploratory
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • One participant inquires about a probability inequality for the sum of independent normal random variables, suggesting a form involving a function of epsilon, variance, and sample size.
  • Another participant states that there is an exact equality related to the distribution of the normalized sum of these variables, indicating that it follows a normal distribution.
  • A participant references a specific inequality for bounding the tail probability of a standard normal variable, noting its limitations regarding the invertibility of the bounding function.
  • Another participant questions whether the upper bound function can incorporate variance and sample size, suggesting that this could lead to an exact statement of the tail probability.

Areas of Agreement / Disagreement

Participants express differing views on the existence and form of a suitable probability inequality, with no consensus reached on a specific bounding function for the tail probability of normal variables.

Contextual Notes

Participants highlight limitations in existing inequalities, such as the non-invertibility of certain bounding functions and the need for additional parameters like variance and sample size in the bounds.

Dear all,

I wonder whether there exists a probability inequality for the sum of independent normal random variables (the X_i are i.i.d. normal random variables with mean \mu and variance \sigma^2):

P\left(\frac{1}{n}\sum_{i=1}^n X_i - \mu > \epsilon\right) \leq f(\epsilon, \sigma^2, n).

We know that the Bernstein inequality applies to the sum of bounded random variables (with |X_i - \mu| \leq c):

P\left(\frac{1}{n}\sum_{i=1}^n X_i - \mu > \epsilon\right) \leq \exp\left(-\frac{n\epsilon^2}{2\sigma^2 + 2c\epsilon/3}\right).
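As a quick numerical sanity check (my own sketch, not part of the thread; `bernstein_bound` is a name I chose), the bound can be compared against a Monte Carlo estimate for a bounded example, say Uniform(0,1) with \mu = 1/2, \sigma^2 = 1/12, and |X_i - \mu| \leq c = 1/2:

```python
import math
import random

def bernstein_bound(eps, var, c, n):
    # Bernstein tail bound for the mean of n i.i.d. variables with
    # variance `var` and |X_i - mu| <= c.
    return math.exp(-n * eps**2 / (2 * var + 2 * c * eps / 3))

random.seed(0)
n, eps = 200, 0.05
mu, var, c = 0.5, 1.0 / 12.0, 0.5

# Estimate P(mean - mu > eps) by simulation of Uniform(0,1) samples.
trials = 20000
hits = sum(
    (sum(random.random() for _ in range(n)) / n - mu > eps)
    for _ in range(trials)
)
empirical = hits / trials
bound = bernstein_bound(eps, var, c, n)
print(empirical, bound)  # empirical tail frequency vs. Bernstein bound
assert empirical <= bound
```

The bound here is roughly 0.065 while the true tail probability is below 0.01, which illustrates that Bernstein is valid but loose; the thread's question is whether the normality of the X_i buys something sharper.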

I wonder whether there is some similar inequality for normal variables.

Thanks!

Phonic
There is an exact equality; it follows from Σ X/n ~ N(μ, σ^2/n).
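To make the exact statement concrete (a minimal sketch using only the standard library; `norm_tail` and `mean_tail` are names I chose): since the sample mean is distributed N(\mu, \sigma^2/n), the tail probability equals the standard normal tail evaluated at \epsilon\sqrt{n}/\sigma:

```python
import math

def norm_tail(x):
    # P(Z >= x) for standard normal Z, via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def mean_tail(eps, sigma2, n):
    # Exact P( (1/n) sum X_i - mu > eps ) for i.i.d. N(mu, sigma2) samples
    return norm_tail(eps * math.sqrt(n) / math.sqrt(sigma2))

print(mean_tail(0.0, 1.0, 10))   # 0.5: a centered normal exceeds 0 half the time
print(mean_tail(0.1, 1.0, 100))  # ~0.1587: one standard error of the mean
```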
Thanks for your reply. Then the problem is to bound the tail probability of this normal variable. One inequality I know is (R. D. Gordon, The Annals of Mathematical Statistics, 1941(12), pp. 364-366):

P(Z \geq x) = \int_x^\infty \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}z^2}\, dz \leq \frac{1}{x}\frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}x^2} \quad \text{for } x > 0,

where Z is a standard normal variable.

The problem with this inequality is that the function \frac{1}{x} e^{-\frac{1}{2}x^2} is not invertible (it has no closed-form inverse). Do you know some other bound for the tail probability of a normal variable? Thanks a lot!

EnumaElish said:
There is an exact equality; it follows from Σ X/n ~ N(μ, σ^2/n).
 
Haven't you changed the upper bound function? Can the new function not have σ^2 or n as arguments? If it can, then you have an exact statement of the tail probability.
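A sketch of how one might work with this in practice (my own illustration, not from the thread; `gordon_bound` and `invert_bound` are names I chose): verify numerically that the \frac{1}{x}\phi(x) bound dominates the exact tail, and, since the bound has no closed-form inverse, invert it by bisection — it is strictly decreasing on (0, \infty), so bisection always converges:

```python
import math

def norm_tail(x):
    # Exact P(Z >= x) via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def gordon_bound(x):
    # Upper bound (1/x) * phi(x) on the standard normal tail, valid for x > 0
    return math.exp(-0.5 * x * x) / (x * math.sqrt(2.0 * math.pi))

# The bound dominates the exact tail for every x > 0
for x in (0.5, 1.0, 2.0, 3.0):
    assert norm_tail(x) <= gordon_bound(x)

def invert_bound(delta, lo=1e-9, hi=40.0, iters=200):
    # Find x with gordon_bound(x) = delta by bisection; gordon_bound is
    # strictly decreasing on (0, inf), so the root is unique.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if gordon_bound(mid) > delta:
            lo = mid
        else:
            hi = mid
    return hi

x = invert_bound(0.05)
print(x, gordon_bound(x))  # gordon_bound(x) equals 0.05 up to bisection tolerance
```

Numerical inversion like this is a common workaround: the lack of an analytical inverse does not prevent solving for the \epsilon that achieves a prescribed confidence level \delta.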
 
