Sum of independent normal variables


Discussion Overview

The discussion centers on the properties of sums of independent normal random variables, exploring whether independence is a necessary condition for the sum to also be normally distributed. Participants examine various scenarios involving correlated variables, transformations, and the implications of the Central Limit Theorem (CLT).

Discussion Character

  • Debate/contested
  • Technical explanation
  • Conceptual clarification
  • Mathematical reasoning

Main Points Raised

  • Some participants assert that independence is necessary for the sum of normal random variables to be normally distributed, citing examples where dependent variables do not yield a normal sum.
  • Others propose that if random variables are correlated but not perfectly dependent, they could potentially be transformed into independent variables.
  • One participant mentions that the claim holds true if the variables are jointly normally distributed, referencing the relationship between multivariate Gaussian distributions and their marginals.
  • Several participants discuss the Central Limit Theorem, noting that suitably normalized sums of independent random variables from any distribution with finite variance converge in distribution to a normal as the number of terms grows.
  • There is a suggestion that generating functions could be used to prove the normality of sums of independent normal random variables.
  • Some participants explore the concept of vector spaces of random variables and the conditions under which a set of random variables can be transformed into a basis of mutually independent variables.
  • Questions arise regarding transformations that preserve normality, with discussions on linear and affine transformations and their effects on normal random variables.
  • Participants express uncertainty about the implications of covariance and joint distributions for correlated normal random variables.

Areas of Agreement / Disagreement

Participants do not reach a consensus on whether independence is strictly necessary for the sum of normal random variables to be normally distributed. Multiple competing views remain regarding the role of correlation, joint distributions, and the conditions under which normality is preserved.

Contextual Notes

Some discussions involve assumptions about the nature of random variables and their distributions, particularly regarding joint distributions and the implications of covariance. The exploration of vector spaces and transformations introduces additional complexity that remains unresolved.

  • #31
BWV said:
but you have to define normal by some generating function.

A normal random variable ##\eta## can, in many different ways, be represented as the sum of two other independent normal random variables.

I don't understand how you want to restrict the types of representations of ##\eta## that are permitted.
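
For reference, the generating-function route mentioned above is short. A sketch using moment generating functions, assuming ##X \sim N(\mu_1, \sigma_1^2)## and ##Y \sim N(\mu_2, \sigma_2^2)## are independent:

$$M_{X+Y}(t) = M_X(t)\,M_Y(t) = e^{\mu_1 t + \frac12 \sigma_1^2 t^2}\, e^{\mu_2 t + \frac12 \sigma_2^2 t^2} = e^{(\mu_1+\mu_2)t + \frac12(\sigma_1^2+\sigma_2^2)t^2},$$

which is the MGF of ##N(\mu_1+\mu_2,\ \sigma_1^2+\sigma_2^2)##, so the sum is normal. Read the other way, any ##\eta \sim N(\mu, \sigma^2)## can be split as such a sum for any choice of ##\mu_1+\mu_2=\mu## and ##\sigma_1^2+\sigma_2^2=\sigma^2## with both variances positive.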
 
  • #32
  • #33
mathman said:
Added note: These random variables are uncorrelated.
Is that correct? The intention of the η=σξ example is to randomly flip the sign on half of ξ, which does not change the distribution, but half of η is perfectly negatively correlated to ξ, and this information is recovered in P(ξ+η=0)=P(σ=−1)=1/2.

ISTM there are problems with the construction on this, but they are above my pay grade. ξ is a function on ℝ, while σ is discrete. How would you actually flip the sign randomly on half of ℝ? Flipping any countable number of points via σ would not change the distribution, and if σ flips an uncountably infinite set, then how is P(ξ+η=0) not 0?
 
  • #34
BWV said:
How would you actually flip the sign randomly on half of ℝ?
Why would that be necessary? If we grant that you can take a random sample from ##\xi##, then in order to realize a sample of ##\xi \sigma## you only need to decide whether to flip the value of that particular sample, using one realization of ##\sigma##.

The probability of any particular realization of a sample from a normally distributed random variable is zero, so if zero probability events are going to be a conceptual problem, they are already present in the idea of taking a sample from ##\xi##.
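
A minimal simulation of this sampling procedure (a sketch in Python/NumPy; the variable names are illustrative) shows the resulting behaviour: each sample of ##\eta## is the corresponding sample of ##\xi## with its sign flipped or not by one draw of ##\sigma##, the marginal of ##\eta## is still standard normal, the sample correlation with ##\xi## is near zero, and yet roughly half of the realizations of ##\xi + \eta## are exactly zero, so the sum cannot be normal.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

xi = rng.standard_normal(n)               # samples of xi ~ N(0, 1)
sigma = rng.choice([-1.0, 1.0], size=n)   # one +/-1 draw per sample, independent of xi
eta = sigma * xi                          # eta = sigma * xi, still N(0, 1) by symmetry

s = xi + eta                              # equals (1 + sigma) * xi

print("corr(xi, eta) ≈", np.corrcoef(xi, eta)[0, 1])  # near 0: uncorrelated
print("P(xi + eta = 0) ≈", np.mean(s == 0.0))          # near 0.5: an atom at zero
print("Var(xi + eta) ≈", s.var())                      # near 2, yet the sum is not normal
```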
 
  • #35
BWV said:
Is that correct? The intention of the η=σξ example is to randomly flip the sign on half of ξ, which does not change the distribution, but half of η is perfectly negatively correlated to ξ, and this information is recovered in P(ξ+η=0)=P(σ=−1)=1/2.

ISTM there are problems with the construction on this, but they are above my pay grade. ξ is a function on ℝ, while σ is discrete. How would you actually flip the sign randomly on half of ℝ? Flipping any countable number of points via σ would not change the distribution, and if σ flips an uncountably infinite set, then how is P(ξ+η=0) not 0?
Covariance: ##E(\xi\eta)=E(\sigma\xi^2)=E(\sigma)E(\xi^2)=0##, since ##\sigma## is independent of ##\xi## and ##E(\sigma)=0##; and because both means are ##0##, the covariance is just ##E(\xi\eta)##.
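
For completeness, writing the sum out explicitly shows why the zero covariance does not make the sum normal:

$$\xi + \eta = (1+\sigma)\,\xi = \begin{cases} 0 & \text{with probability } \tfrac12 \ (\sigma = -1),\\ 2\xi & \text{with probability } \tfrac12 \ (\sigma = +1),\end{cases}$$

so ##\xi+\eta## is a 50/50 mixture of a point mass at ##0## and ##N(0,4)##. In particular ##P(\xi+\eta=0)=\tfrac12##, which is impossible for a normal distribution.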
 
  • #36
mathman said:
Covariance: ##E(\xi\eta)=E(\sigma\xi^2)=E(\sigma)E(\xi^2)=0##, since ##\sigma## is independent of ##\xi## and ##E(\sigma)=0##; and because both means are ##0##, the covariance is just ##E(\xi\eta)##.
So this is a good example to show that even uncorrelated is not enough to guarantee that the sum is normal. It requires independence.
 
  • #37
FactChecker said:
So this is a good example to show that even uncorrelated is not enough to guarantee that the sum is normal. It requires independence.
Independence is not necessary. If the variables are jointly normal, even if correlated, the sum will be normal.
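
A quick way to see this: one standard characterization of joint normality is that every linear combination ##aX + bY## is (possibly degenerate) univariate normal, so in particular

$$X + Y \sim N\!\left(\mu_X + \mu_Y,\ \sigma_X^2 + \sigma_Y^2 + 2\,\mathrm{Cov}(X, Y)\right)$$

for any correlation. The ##\eta = \sigma\xi## construction above escapes this because ##(\xi, \eta)## has normal marginals but is not jointly normal.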
 
  • #38
mathman said:
Independence is not necessary. If the variables are jointly normal, even if correlated, the sum will be normal.
I stand corrected. I should have just said that uncorrelated is not enough.
 
  • #39
Stephen Tashi said:
Are there theorems that deal with the general question? Given a set of random variables ##\{X_1, X_2, \ldots, X_M\}##, when does there exist some subspace ##S## of the vector space of random variables such that ##S## contains each ##X_i## and ##S## has a basis of mutually independent random variables?
"Linearly independent" presumably implies we are considering a set of random variables to be a vector space under the operations of multiplication by scalars and addition of the random variables.
Suppose we take "transformed" to mean transformed by a linear transformation in a vector space. If the vector space containing the random variables ##\{X_1, X_2, \ldots, X_M\}## has a finite basis ##B_1, B_2, \ldots, B_n## consisting of mutually independent random variables, then (trivially) for each ##X_k## there exists a possibly non-invertible linear transformation ##T## that transforms some linear combination of the ##B_i## into ##X_k##.

If the smallest vector space containing the ##X_i## is infinite dimensional (e.g. the vector space of all measurable functions on the real number line), I don't know what happens.

I don't recall any texts that focus on vector spaces of random variables. Since the product of random variables is also a random variable, the topic for textbooks seems to be the algebra of random variables. But that approach downplays the concept of probability distributions.
It would be interesting to see if there is some way to "decouple" a basis for such a space, so that any two ##X_i, X_j## are independent but the span is preserved.
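
In the jointly Gaussian case such a decoupling does exist, since jointly Gaussian and uncorrelated implies independent: any linear change of basis that diagonalizes the covariance matrix gives a basis of mutually independent normals with the same span. A small sketch (Python/NumPy, illustrative names only; this covers only the Gaussian case, not the general question asked above):

```python
import numpy as np

rng = np.random.default_rng(1)

# Jointly Gaussian X_1, X_2, X_3 with a non-diagonal covariance matrix
cov = np.array([[2.0, 1.0, 0.5],
                [1.0, 1.5, 0.3],
                [0.5, 0.3, 1.0]])
X = rng.multivariate_normal(mean=np.zeros(3), cov=cov, size=50_000)  # rows = samples

# Orthonormal eigenbasis of the covariance matrix
eigvals, V = np.linalg.eigh(cov)

# B = X V is jointly Gaussian with diagonal covariance, hence its columns are
# mutually independent; V is invertible, so the B_i span the same space as the X_i
B = X @ V

print(np.round(np.cov(B, rowvar=False), 2))  # ≈ diag(eigvals): off-diagonals near 0
print(np.round(eigvals, 2))
```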
 
  • #40
WWGD said:
Would be interesting to see if there is something similar to "Decouple" a basis for such space, so that any two ##X_i, X_j## are independent but the span is preserved.
That would require a change of the independent variables to a different set.
 
