Mean Tends to Infinity in the Gaussian Case

SUMMARY

The discussion centers on the behavior of a Gaussian distribution as its mean tends to infinity. Participants note that while the integral of a probability density function (PDF) must always equal one, increasing the mean without adjusting the variance drives the density to zero pointwise, which is not a Dirac-Delta function. A Gaussian approximates a Dirac-Delta only when its standard deviation approaches zero, not when its mean increases indefinitely. The consensus is that shifting the mean merely translates the distribution and does not yield a meaningful Dirac-Delta representation.

PREREQUISITES
  • Understanding of Gaussian distributions and their properties
  • Knowledge of Dirac-Delta functions and their mathematical significance
  • Familiarity with probability density functions (PDFs) and integration
  • Basic concepts of variance and standard deviation in statistics
NEXT STEPS
  • Explore the mathematical properties of Dirac-Delta functions in distribution theory
  • Study the implications of shifting means in Gaussian distributions
  • Investigate the relationship between variance and the shape of Gaussian distributions
  • Learn about the convergence of probability distributions and their limits
USEFUL FOR

Statisticians, mathematicians, and data scientists interested in the theoretical aspects of probability distributions and their applications in statistical modeling.

architect
Hi,

I am wondering what happens to a Gaussian distribution when its mean tends to infinity. Looking at the equation of a Gaussian, one might infer that the limit goes to zero; but does this imply that it reduces to a Dirac-Delta function? More precisely, if we integrate this limit we obtain 0, which suggests that this is a Dirac-Delta function. However, I am not convinced that it actually is, since increasing the mean should not change the standard deviation.

At least we are certain that a Gaussian approximates a Dirac-Delta when its standard deviation \sigma goes to zero.

Thanks

Alex
 
architect said:
Hi,

I am wondering what happens to a Gaussian distribution when its mean tends to infinity. Looking at the equation of a Gaussian, one might infer that the limit goes to zero; but does this imply that it reduces to a Dirac-Delta function? More precisely, if we integrate this limit we obtain 0, which suggests that this is a Dirac-Delta function. However, I am not convinced that it actually is, since increasing the mean should not change the standard deviation.

At least we are certain that a Gaussian approximates a Dirac-Delta when its standard deviation \sigma goes to zero.

Thanks

Alex


First, the Dirac Delta is a distribution, not a function in the strict sense. And yes, it's the limit of the Gaussian distribution as the variance goes to zero. As such the mean must go to infinity at the limit, since the integral of a PDF by definition must be unity. Because of this, the mean cannot go to infinity unless there is a corresponding reduction in the variance.
 
Thanks for your reply.

Sorry, but I did not fully understand what you are trying to say. The original question was about the effect of an increasing mean on the Gaussian distribution. Does this mean that it will approximate a Dirac-Delta distribution or not?

If we take the limit of a Gaussian distribution as the mean approaches infinity, the answer is zero. Similarly, if we take the limit as the standard deviation approaches zero, the answer is also zero. In the second case (sigma approaches zero) one may safely say that a Dirac-Delta distribution provides a good approximation. Can one make a similar claim for the case where the mean goes to infinity? To my understanding, as we increase the mean the standard deviation need not change. Given that a Dirac-Delta arises from a vanishing standard deviation, I cannot see why taking the mean to infinity would yield a Dirac-Delta.
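The two pointwise limits described above are easy to check numerically. The sketch below is illustrative only (the helper `gauss_pdf` is my own, not from the thread): at a fixed point the density vanishes as the mean grows, whereas shrinking the standard deviation sends the peak at the mean to infinity, which is the delta-like case.

```python
import math

def gauss_pdf(x, mu, sigma):
    """Gaussian density: exp(-(x-mu)^2 / (2*sigma^2)) / (sigma*sqrt(2*pi))."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

# At the fixed point x = 0, the density vanishes as the mean grows ...
vals = [gauss_pdf(0.0, mu, 1.0) for mu in (1.0, 10.0, 100.0)]
assert vals[0] > vals[1] > vals[2]  # decreasing towards 0

# ... while shrinking sigma blows up the peak at x = mu (delta-like),
# even though the density still vanishes at every fixed x != mu.
peaks = [gauss_pdf(0.0, 0.0, s) for s in (1.0, 0.1, 0.01)]
assert peaks[0] < peaks[1] < peaks[2]  # growing without bound
```

So both limits are zero at any fixed point away from the mass, but only the sigma-to-zero family concentrates its area at a single point.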

As you mentioned a Dirac-Delta is not a function in the strict sense.

Alex.
 
The key to understanding this is that the integral of the PDF is defined to be 1. Therefore, as the mean increases for a symmetric Gaussian distribution the variance must decrease. As the mean gets very large and variance decreases toward zero, the value of the integral, by definition, remains at one. The convention is that the integral of the PDF is unity at the limit.

http://planetphysics.org/encyclopedia/DiracsDeltaDistribution.html
 
This is surprising to me, as it suggests that as the mean gets very large the symmetric Gaussian approximates a Dirac-Delta. I thought that increasing the mean is equivalent to shifting the Gaussian along the axis of real values (say, the x-axis). In fact, this is evident by simulating the pdf for an increasing mean \mu.

I agree that the integral has to evaluate to 1 always. However, I thought that this was more related to the case where, as the standard deviation decreases, an increase in the peak value of the pdf (not in the mean of the distribution) is required in order to keep the area equal to 1.
 
The normal distribution looks like
f(x) = (1/(σ√(2π))) e^(-(x-m)²/(2σ²)).

σ may remain fixed as m becomes infinite - the limiting distribution looks meaningless.
m may remain fixed as σ -> 0, in which case we end up with a delta function at m.
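The two cases above can also be illustrated with a crude numerical integral. This is only a sketch under my own naming (`gauss_pdf` and `trapezoid_area` are illustrative helpers, not from the thread): for any finite m the area under the curve stays at 1, whereas the pointwise limit as m becomes infinite is 0 everywhere, whose integral is 0 — so that limit is neither a pdf nor a delta.

```python
import math

def gauss_pdf(x, mu, sigma):
    """Gaussian density: exp(-(x-mu)^2 / (2*sigma^2)) / (sigma*sqrt(2*pi))."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

def trapezoid_area(mu, sigma, half_width=10.0, n=10_000):
    """Trapezoidal integral of the pdf over [mu - hw*sigma, mu + hw*sigma]."""
    a, b = mu - half_width * sigma, mu + half_width * sigma
    h = (b - a) / n
    total = 0.5 * (gauss_pdf(a, mu, sigma) + gauss_pdf(b, mu, sigma))
    total += sum(gauss_pdf(a + i * h, mu, sigma) for i in range(1, n))
    return total * h

# For every finite mean the area is 1: shifting mu only translates the bump.
for mu in (0.0, 1e3, 1e6):
    assert abs(trapezoid_area(mu, 1.0) - 1.0) < 1e-6
```

The point is that the limit and the integral do not commute here: every member of the family integrates to 1, but the pointwise limit integrates to 0.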
 
architect said:
This is surprising to me, as it suggests that as the mean gets very large the symmetric Gaussian approximates a Dirac-Delta. I thought that increasing the mean is equivalent to shifting the Gaussian along the axis of real values (say, the x-axis). In fact, this is evident by simulating the pdf for an increasing mean \mu.

I agree that the integral has to evaluate to 1 always. However, I thought that this was more related to the case where, as the standard deviation decreases, an increase in the peak value of the pdf (not in the mean of the distribution) is required in order to keep the area equal to 1.

I guess I wasn't clear here. The y-axis plots the values of the Dirac delta. At x=0, f(x)=infinity; for all other real x, f(x)=0. f has the form of the limit of the Gaussian pdf as the variance goes to 0, but doesn't actually map to a probability space at zero. Distribution theory involves more than probability spaces as far as I know, but I'm not well versed in generalized distribution theory.
 
SW VandeCarr said:
I guess I wasn't clear here. The y-axis plots the values of the Dirac delta. At x=0, f(x)=infinity; for all other real x, f(x)=0. f has the form of the limit of the Gaussian pdf as the variance goes to 0, but doesn't actually map to a probability space at zero. Distribution theory involves more than probability spaces as far as I know, but I'm not well versed in generalized distribution theory.

I think you misunderstood architect. He did not refer to the case where the probability density of the Gaussian distribution at its mean goes to infinity. He was referring to the case, as mathman pointed out, where we shift the mean towards infinity (visualised as shifting along the x-axis). I agree that the limiting distribution is meaningless (I don't know how you managed to derive a Dirac-Delta function from just shifting the mean of the Gaussian distribution, and I'm skeptical).

Architect, did you have a motivation for this question?
 
Thanks for your replies. Perhaps the key word here is "meaningless", and yes, I agree with ych22. It is most likely that the result of shifting the mean to infinity is meaningless, although I am not entirely convinced. For example, suppose we let the mean of a symmetric Gaussian run towards infinity. If we observe the distribution from the origin (say x=0, y=0) as it moves away from this point, intuitively it will become a spike. This was my motivation, ych22. Therefore, what I thought (which might be wrong) is the following: if we observe a Gaussian distribution as its mean tends to infinity, it might resemble a Dirac-Delta (spike). Similarly, if we observe a Gaussian (at any mean) whose standard deviation tends to zero, it is also a Dirac-Delta. To check the first case, I took the limit of the Gaussian function as \mu goes to infinity and then integrated the resulting function. The result was zero. We know that if we integrate a "function" that is infinite at x=0 and zero for all other real x, we get zero. Therefore, this made me think that as we let the mean go to infinity we obtain a Dirac-Delta. My limited knowledge could not support this idea, since, as mentioned earlier, shifting the mean should not in any way affect the standard deviation, whereas a Dirac-Delta requires the standard deviation to vanish.

Still not sure...

Thanks
 
architect said:
Thanks for your replies. Perhaps the key word here is "meaningless", and yes, I agree with ych22. It is most likely that the result of shifting the mean to infinity is meaningless, although I am not entirely convinced. For example, suppose we let the mean of a symmetric Gaussian run towards infinity. If we observe the distribution from the origin (say x=0, y=0) as it moves away from this point, intuitively it will become a spike. This was my motivation, ych22. Therefore, what I thought (which might be wrong) is the following: if we observe a Gaussian distribution as its mean tends to infinity, it might resemble a Dirac-Delta (spike). Similarly, if we observe a Gaussian (at any mean) whose standard deviation tends to zero, it is also a Dirac-Delta. To check the first case, I took the limit of the Gaussian function as \mu goes to infinity and then integrated the resulting function. The result was zero. We know that if we integrate a "function" that is infinite at x=0 and zero for all other real x, we get zero. Therefore, this made me think that as we let the mean go to infinity we obtain a Dirac-Delta. My limited knowledge could not support this idea, since, as mentioned earlier, shifting the mean should not in any way affect the standard deviation, whereas a Dirac-Delta requires the standard deviation to vanish.

Still not sure...

Thanks

Just plot out X1~N(0,1) and X2~N(10000000,1). I think it's obvious that the second distribution's probability density function is simply a translation of the first along the x-axis...

Since the support of every normal distribution is the entire real line, we can always translate a normal distribution along the x-axis by an arbitrary value and expect its shape to remain the same.
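This translation invariance can be made concrete with a short check (the helper `gauss_pdf` below is illustrative, not from the thread): the density of N(m, 1) evaluated at x + m coincides with the density of N(0, 1) at x, so growing m only slides the bump along the axis without changing its shape.

```python
import math

def gauss_pdf(x, mu, sigma):
    """Gaussian density: exp(-(x-mu)^2 / (2*sigma^2)) / (sigma*sqrt(2*pi))."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

# Density of N(m, 1) at x + m equals density of N(0, 1) at x:
# increasing the mean only translates the curve.
m = 10_000_000.0
for x in (-2.0, 0.0, 1.5):
    assert math.isclose(gauss_pdf(x + m, m, 1.0), gauss_pdf(x, 0.0, 1.0))
```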
 
Thanks for all your time in replying and assistance with this.

BR,

Alex
 
