The normal equivalent for a discrete random variable

In summary: the sum is essentially the same as the integral, which equals one for a Gaussian distribution; however, this holds only approximately (up to exponentially small corrections), not exactly, for a discrete variable.
  • #1
Ad VanderVen
TL;DR Summary
A discrete variable ##K## is distributed according to the formula used for the normal distribution. How can the sum for all values of ##k## still be equal to one?
The normal distribution has the following form:
$$f \left(x \right) = \frac{1}{\tau\sqrt{2\pi}}\, e^{-\frac{\left(x -\nu \right)^{2}}{2\tau ^{2}}}$$
and its integral is equal to one:
$$\int_{-\infty }^{\infty } \frac{1}{\tau\sqrt{2\pi}}\, e^{-\frac{\left( x-\nu \right)^{2}}{2\tau^{2}}}\,{\rm d}x = 1.$$
However, I now take ##k## to be an integer:
$$P \left( K=k \right) = \frac{1}{\tau\sqrt{2\pi}}\, e^{-\frac{\left( k-\nu \right)^{2}}{2\tau^{2}}}$$
For example, for ##\nu = 0## and ##\tau = 25.0## I get
$$\sum _{k=-\infty }^{\infty } 0.01595769121\, e^{- 0.0008\,k^{2}} = 1.0,$$
whereas one would expect a value lower than 1. How is that possible?
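(For reference, here is a minimal numerical sketch of this sum, written in Python rather than taken from the original post; the truncation bound ##K = 1000## is an arbitrary choice, since the terms decay extremely fast.)

```python
# Minimal sketch (not from the original post): sum the Gaussian density over the
# integers for nu = 0, tau = 25. K = 1000 is an arbitrary truncation bound.
import math

nu, tau = 0.0, 25.0
K = 1000
total = sum(
    math.exp(-((k - nu) ** 2) / (2.0 * tau ** 2)) / (tau * math.sqrt(2.0 * math.pi))
    for k in range(-K, K + 1)
)
print(total)  # prints 1.0 up to double-precision rounding
```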
 
  • #2
Ad VanderVen said:
Summary:: A discrete variable ##K## is distributed according to the formula used for the normal distribution. How can the sum for all values of ##k## still be equal to one?
It cannot. The normal distribution is a continuous distribution, not a discrete distribution. It doesn't make sense to say that a discrete random variable has a continuous distribution.
 
  • #3
Ad VanderVen said:
Summary:: A discrete variable ##K## is distributed according to the formula used for the normal distribution. How can the sum for all values of ##k## still be equal to one?

However, I now take ##k## to be an integer:
$$P \left( K=k \right) = \frac{1}{\tau\sqrt{2\pi}}\, e^{-\frac{\left( k-\nu \right)^{2}}{2\tau^{2}}}$$
For example, for ##\nu = 0## and ##\tau = 25.0## I get
$$\sum _{k=-\infty }^{\infty } 0.01595769121\, e^{- 0.0008\,k^{2}} = 1.0,$$
whereas one would expect a value lower than 1. How is that possible?
It would never have occurred to me to try something like that. There seems no particular reason for it to equal one. Can you prove that it does? Or is this a numerical approximation?
 
  • #4
The result ##=1## may be a coincidence. Your sum is equivalent to an integral with an integrand that is constant over each interval of unit length.
 
  • #5
It is surely an approximation and the real value is ##<1##. But it may be a very good way to approximate.
 
  • #6
If you have a "Large-Enough" number of data points, your distribution may approximate (i.e., converge in different senses ) to a Normal Distribution. Edit: Further, your formula is one of a density function, not a cdf (Cumulative Distribution), which does not make sense for a discrete variable.
 
  • #7
Dale said:
It cannot. The normal distribution is a continuous distribution, not a discrete distribution. It doesn't make sense to say that a discrete random variable has a continuous distribution.
Read:

Roy, Dilip (2007). "The Discrete Normal Distribution". Communications in Statistics - Theory and Methods, 32(10), pp. 1871-1883. Published online: 15 Feb 2007.
 
  • #8
nuuskur said:
It is surely an approximation and the real value is ##<1##. But it may be a very good way to approximate.
Aha. I'd like to see what the real value is. It seems to me that it wouldn't be that close. If it is that close, then I'm surprised and impressed that the errors cancel out so well.

Also, it isn't obvious that the sum of that series is less than one. It is less than it should be in the convex regions and greater than it should be in the concave regions. There might be an interesting reason that two wrongs are making a right.
 
  • #9
Regardless, it is wrong to use a continuous distribution for a discrete random variable. In a continuous distribution the probability of getting exactly any of the discrete values is 0.

To go from a pdf to a pmf requires either a sampling and a normalization or an integration.
 
  • #10
I haven't fully worked this through but it seems to me that:
Hornbein said:
Also, it isn't obvious that the sum of that series is less than one. It is less than it should be in the convex regions and greater than it should be in the concave regions.
Yes, and because we know that a Gaussian distribution is the limiting case of a binomial as ##n \to \infty##, I would expect it to equal exactly one.

Dale said:
To go from a pdf to a pmf requires either a sampling and a normalization or an integration.
Yes, and the series in the OP is a Riemann sum that does that integration.
 
  • #11
pbuk said:
Yes, and the series in the OP is a Riemann sum that does that integration.
It is in the limit. But it was not taken to the limit.
 
  • #12
Hornbein said:
It is in the limit. But it was not taken to the limit.
I haven't checked the details (exercise for the interested), but I think the RHS of the equation in the OP
$$
P \left( K=k \right) = \frac{1}{\tau\sqrt{2\pi}}\, e^{-\frac{\left( k-\nu \right)^{2}}{2\tau^{2}}}
$$
is equivalent to the RHS of the De Moivre–Laplace theorem,
$$ {n \choose k}\,p^{k}q^{n-k} \sim {\frac {1}{\sqrt {2\pi npq}}}\,e^{-{\frac {(k-np)^{2}}{2npq}}} \quad \text{as } n \to \infty, $$
so yes, I think we are taking it to the limit.
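As a quick illustration of this limit (a minimal Python sketch with assumed values ##n = 100##, ##p = 0.5##, ##k = 55##, not taken from the thread), one can compare the binomial pmf with its De Moivre–Laplace approximation directly:

```python
# Sketch with assumed parameters: compare a binomial pmf with its
# De Moivre-Laplace normal approximation at a single point k.
import math

n, p = 100, 0.5
q = 1.0 - p
k = 55  # one standard deviation above the mean n*p = 50

binom = math.comb(n, k) * p ** k * q ** (n - k)
normal = math.exp(-((k - n * p) ** 2) / (2.0 * n * p * q)) / math.sqrt(2.0 * math.pi * n * p * q)

print(binom, normal)  # the two agree to a few significant figures; the match improves as n grows
```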
 
  • #13
pbuk said:
Yes, and the series in the OP is a Riemann sum that does that integration
Yes, but that series is not the normal distribution. A discrete variable cannot be "distributed according to the formula used for the normal distribution". It is a rather minor and excessively pedantic point, I know.
 
  • #14
Dale said:
Yes, but that series is not the normal distribution. A discrete variable cannot be "distributed according to the formula used for the normal distribution".
Yes, the question in the OP, particularly the summary, was poorly phrased: we should not be talking about a PDF of a discrete variable. Perhaps if we had focused on the CDF it would have been less of a surprise that the CDF of the discrete variable is approximately equal to the CDF of the normal distribution at each integer step, and therefore less of a surprise that the CDF is 1 at ##\infty##.
 
  • #15
An asymptotic formula is obtained in Chapter 9 of the book Concrete Mathematics by Graham, Knuth and Patashnik by applying the Euler-Maclaurin formula to the Gaussian (I have written their end formula in terms of our variables for the purposes of the thread):

$$
\sum_{k = -\infty}^\infty \dfrac{e^{-k^2 / 2 \tau^2}}{\sqrt{2} \tau \sqrt{\pi}} = \int_{-\infty}^\infty \dfrac{e^{-x^2 / 2 \tau^2}}{\sqrt{2} \tau \sqrt{\pi}} dx + \mathcal{O} (\tau^{-(N+2)/2}) \quad \text{as } \tau \rightarrow \infty
$$

where ##2 \tau^2## is an integer (the estimate holds for any fixed ##N##).

The book says the sum ##\sum_k e^{-k^2/n}## is already close to ##\sqrt{n \pi}## even for ##n = 2##: the sum is ##2.506628288## while ##\sqrt{2 \pi} \approx 2.506628275##. They say that for ##n = 100## the sum agrees with ##10 \sqrt{\pi}## to 427 decimal places! (Note ##n=2## corresponds to ##\tau = 1## and ##n=100## corresponds to ##\tau \approx 7##.)

In chapter 9 of the book they say using "advanced methods" you can obtain a rapidly converging series:

$$
\sum_{k = -\infty}^\infty \dfrac{e^{-k^2 / 2 \tau^2}}{\sqrt{2} \tau \sqrt{\pi}} = \int_{-\infty}^\infty \dfrac{e^{-x^2 /2 \tau^2}}{\sqrt{2} \tau \sqrt{\pi}} dx + 2 e^{- 2 \tau^2 \pi^2} + \mathcal{O} (e^{- 2 \cdot 4 \tau^2 \pi^2})
$$

for positive ##\tau## large enough, but where there is no requirement that ##2 \tau^2## is an integer. Let's put ##\tau = 25.0## to find what the leading order correction would be,

$$
2 e^{- 2 \times 25^2 \pi^2} = 2.5563 \times 10^{- 5358} .
$$

So that, @Ad VanderVen, is why you are getting ##1.0## for your particular example of ##\nu = 0## and ##\tau = 25.0##: the correction is far too small to show up numerically.

The "advanced methods" is just the Poisson's summation formula:

As ##\sum_{k=-\infty}^\infty F (k + t)## is a periodic function in ##t## with period 1 we can write it as a Fourier series,

\begin{align*}
\sum_{k=-\infty}^\infty F (k + t) &= \sum_{m=-\infty}^\infty e^{2 \pi imt} \int_0^1 e^{-2 \pi ims} \sum_{k=-\infty}^\infty F (k + s) ds
\nonumber \\
&= \sum_{m=-\infty}^\infty e^{2 \pi imt} \sum_{k=-\infty}^\infty \int_0^1 e^{-2 \pi ims} F (k + s) ds
\nonumber \\
&= \sum_{m=-\infty}^\infty e^{2 \pi imt} \sum_{k=-\infty}^\infty \int_k^{k+1} F (s) e^{-2 \pi ims} ds
\nonumber \\
&= \sum_{m=-\infty}^\infty e^{2 \pi imt} \int_{-\infty}^\infty F (s) e^{-2 \pi ims} ds
\nonumber \\
&= \sum_{m=-\infty}^\infty \tilde{F} (2 \pi m) e^{2 \pi imt}
\end{align*}

where

$$
\tilde{F} (y) = \int_{-\infty}^\infty F (s) e^{-iys} ds
$$

Put ##F(s) = e^{-s^2/n}##, then

\begin{align*}
\tilde{F} (y) &= \int_{-\infty}^\infty e^{-s^2/n} e^{-iys} ds
\nonumber \\
&= \int_{-\infty}^\infty e^{-(s^2 - inys)/n} ds
\nonumber \\
&= \int_{-\infty}^\infty e^{-(s - iny/2)^2/n - ny^2/4} ds
\nonumber \\
&= e^{ - ny^2/4} \int_{-\infty}^\infty e^{-s^2/n} ds
\nonumber \\
&= \sqrt{n \pi} e^{ - ny^2/4} .
\end{align*}

So that

\begin{align*}
\sum_{k=-\infty}^\infty e^{-(k + t)^2/n} &= \sqrt{n \pi} \sum_{m=-\infty}^\infty e^{ - n (2 \pi m)^2/4} e^{2 \pi imt}
\nonumber \\
&= \sqrt{n \pi} \left( 1 + 2 \sum_{m=1}^\infty e^{ - m^2 \pi^2 n} \cos 2 \pi mt
\right)
\nonumber \\
&= \sqrt{n \pi} \left( 1 + 2 e^{ - \pi^2 n} \cos 2 \pi t + 2 e^{ - 4 \pi^2 n} \cos 4 \pi t + 2 e^{ - 9 \pi^2 n} \cos 6 \pi t + \cdots
\right)
\end{align*}

Note there is no requirement that ##n## be an integer, just that it be positive.

As they note in the book, "This formula gives a rapidly convergent series for the sum" for ##t = 0##:

$$
\sum_{k=-\infty}^\infty e^{-k^2/n} = \sqrt{n \pi} \left( 1 + 2 e^{ - \pi^2 n} + 2 e^{ - 4 \pi^2 n} + 2 e^{ - 9 \pi^2 n} + \cdots
\right)
$$
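(A quick numerical check of this series, as a minimal Python sketch rather than anything from the book; the truncation bounds ##K## and ##M## are arbitrary choices.)

```python
# Sketch: compare the direct sum of exp(-k^2/n) over the integers with the
# rapidly convergent series sqrt(n*pi) * (1 + 2*sum_m exp(-m^2*pi^2*n)).
import math

def direct_sum(n, K=200):
    """Left-hand side, truncated at |k| <= K (the omitted tail is negligible)."""
    return sum(math.exp(-k * k / n) for k in range(-K, K + 1))

def poisson_series(n, M=5):
    """Right-hand side, truncated at m <= M."""
    return math.sqrt(n * math.pi) * (
        1.0 + 2.0 * sum(math.exp(-m * m * math.pi ** 2 * n) for m in range(1, M + 1))
    )

for n in (0.5, 1.0, 2.0):
    print(n, direct_sum(n), poisson_series(n))
# For n = 2 both columns give 2.506628..., in line with the values quoted above.
```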
 
  • #16
I think you can use the Poisson summation formula to obtain formulas for the general case and for the moments of the distribution as well, and to show that the sums approximate the integrals very well for ##\tau## large enough. I'm tired today, so I hope I've caught all the typos.

From the Poisson summation formula derived in the previous post,

$$
\sum_{k=-\infty}^\infty e^{-(k + t)^2/n} = \sqrt{n \pi} \left( 1 + 2 \sum_{m=1}^\infty e^{ - m^2 \pi^2 n} \cos 2 \pi m t
\right)
$$

we have by putting ##t = - \nu##,

$$
\sum_{k=-\infty}^\infty \dfrac{e^{-(k - \nu)^2/2 \tau^2}}{\sqrt{2} \tau \sqrt{\pi}} = \int_{-\infty}^\infty \dfrac{e^{-(x - \nu)^2/2 \tau^2}}{\sqrt{2} \tau \sqrt{\pi}} dx + 2 \sum_{m=1}^\infty e^{ - m^2 \pi^2 2 \tau^2} \cos 2 \pi m \nu .
$$

We can use the Poisson summation formula to obtain formulas for the moments.

Put ##F_q(s) = s^q e^{-s^2/n}##, then

\begin{align*}
\tilde{F}_q (y) &= \int_{-\infty}^\infty s^q e^{-s^2/n} e^{-iys} ds
\nonumber \\
&= i^q \dfrac{\partial^q}{\partial y^q} \int_{-\infty}^\infty e^{-s^2/n} e^{-iys} ds
\nonumber \\
&= i^q \dfrac{\partial^q}{\partial y^q} \sqrt{n \pi} e^{ - ny^2/4}
\end{align*}

So that for ##q=1##, ##\tilde{F}_1 (y) = i (-ny/2) \sqrt{n \pi} e^{ - ny^2/4}## and,

\begin{align*}
\sum_{k=-\infty}^\infty (k + t) e^{- (k + t)^2 / n} &= \sum_{k=-\infty}^\infty F_1 (k + t)
\nonumber \\
&= \sum_{m=-\infty}^\infty \tilde{F}_1 (2 \pi m) e^{2 \pi imt}
\nonumber \\
&= -i n \pi \sqrt{n \pi} \sum_{m=-\infty}^\infty m e^{- m^2 \pi^2 n} e^{2 \pi imt}
\nonumber \\
&= 0 + 2 n \pi \sqrt{n \pi} \sum_{m = 1}^\infty m e^{- m^2 \pi^2 n} \sin 2 \pi mt .
\end{align*}

From which we obtain,

$$
\sum_{k=-\infty}^\infty \dfrac{(k - \nu)}{\sqrt{2} \tau \sqrt{\pi}} e^{- (k - \nu)^2 / 2 \tau^2} = \int_{-\infty}^\infty \dfrac{(x - \nu)}{\sqrt{2} \tau \sqrt{\pi}} e^{- (x - \nu)^2 / 2 \tau^2} dx - 4 \tau^2 \pi \sum_{m = 1}^\infty m e^{- m^2 \pi^2 2 \tau^2} \sin 2 \pi m \nu
$$

or if you like,

\begin{align*}
\sum_{k=-\infty}^\infty \dfrac{k}{\sqrt{2} \tau \sqrt{\pi}} e^{- (k - \nu)^2 / 2 \tau^2} &= \int_{-\infty}^\infty \dfrac{x}{\sqrt{2} \tau \sqrt{\pi}} e^{- (x - \nu)^2 / 2 \tau^2} dx - \nu + \nu \sum_{k=-\infty}^\infty \dfrac{1}{\sqrt{2} \tau \sqrt{\pi}} e^{- (k - \nu)^2 / 2 \tau^2}
\nonumber \\
& - 4 \tau^2 \pi \sum_{m = 1}^\infty m e^{- m^2 \pi^2 2 \tau^2} \sin 2 \pi m \nu
\nonumber \\
&= \int_{-\infty}^\infty \dfrac{x}{\sqrt{2} \tau \sqrt{\pi}} e^{- (x - \nu)^2 / 2 \tau^2} dx +
2 \nu \sum_{m=1}^\infty e^{ - m^2 \pi^2 2 \tau^2} \cos 2 \pi m \nu
\nonumber \\
& - 4 \tau^2 \pi \sum_{m = 1}^\infty m e^{- m^2 \pi^2 2 \tau^2} \sin 2 \pi m \nu .
\end{align*}

So that for ##q=2##, ##\tilde{F}_2 (y) = (n/2) \sqrt{n \pi} e^{ - ny^2/4} - (ny/2)^2 \sqrt{n \pi} e^{ - ny^2/4}##, and

\begin{align*}
\sum_{k=-\infty}^\infty (k + t)^2 e^{- (k + t)^2 / n} &= \sum_{k=-\infty}^\infty F_2 (k + t)
\nonumber \\
&= \sum_{m=-\infty}^\infty \tilde{F}_2 (2 \pi m) e^{2 \pi imt}
\nonumber \\
&= \frac{n}{2} \sqrt{n \pi} \sum_{m=-\infty}^\infty e^{- m^2 \pi^2 n} e^{2 \pi imt} - \left( \frac{n}{2} \right)^2 (2 \pi)^2 \sqrt{n \pi} \sum_{m=-\infty}^\infty m^2 e^{- m^2 \pi^2 n} e^{2 \pi imt}
\nonumber \\
&= \frac{1}{2} \sqrt{\pi} n^{3/2} + \sqrt{\pi} n^{3/2} \sum_{m = 1}^\infty e^{- m^2 \pi^2 n} \cos 2 \pi mt
\nonumber \\
& \; - 2 (n \pi)^{3/2} \sum_{m = 1}^\infty m^2 e^{- m^2 \pi^2 n} \cos 2 \pi mt
\end{align*}

From which we obtain,

\begin{align*}
\sum_{k=-\infty}^\infty \dfrac{(k - \nu)^2}{\sqrt{2} \tau \sqrt{\pi}} e^{- (k - \nu)^2 / 2 \tau^2} &= \int_{-\infty}^\infty \dfrac{(x - \nu)^2}{\sqrt{2} \tau \sqrt{\pi}} e^{- (x - \nu)^2 / 2 \tau^2} dx
\nonumber \\
& \quad - \; 2 \tau^2 \sum_{m = 1}^\infty (2 \pi m^2 - 1) e^{- m^2 \pi^2 2 \tau^2} \cos 2 \pi m \nu
\end{align*}
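(As a sanity check on the first-moment formula above, here is a minimal Python sketch with assumed values ##\tau = 1## and ##\nu = 0.3##, not taken from the post.)

```python
# Sketch: compare the direct first-moment sum with the predicted correction term
# for tau = 1, nu = 0.3 (assumed values). The integral term vanishes, so the
# whole sum should equal the correction, which is of order 1e-8 here.
import math

tau, nu = 1.0, 0.3
norm = math.sqrt(2.0) * tau * math.sqrt(math.pi)

direct = sum(
    (k - nu) * math.exp(-((k - nu) ** 2) / (2.0 * tau ** 2)) / norm
    for k in range(-40, 41)
)

predicted = -4.0 * tau ** 2 * math.pi * sum(
    m * math.exp(-2.0 * math.pi ** 2 * tau ** 2 * m * m) * math.sin(2.0 * math.pi * m * nu)
    for m in range(1, 6)
)

print(direct, predicted)  # both of order 1e-8 and in close agreement
```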
 
  • #17
By choosing a large sigma (25) you ensure the step sizes in the summation are small relative to most of the distribution (##\pm 2## standard deviations span 100 unit increments). I doubt this would work well for a normalized sigma.
 
  • #18
BWV said:
By choosing a large sigma (25) you ensure the step sizes in the summation are small relative to most of the distribution (##\pm 2## standard deviations span 100 unit increments). I doubt this would work well for a normalized sigma.

The derivation of the Poisson summation formula here for the Gaussian is straightforward. It involves the Fourier transform of a Gaussian, which gives back a Gaussian but with the ##2 \tau^2## moved from the denominator of the exponent to the numerator (it introduces a factor of ##1/4## in the exponent as well); ##y^2## in the exponent then gets replaced by ##(2 \pi m)^2##, so you end up with corrections of the form ##e^{- m^2 2 \tau^2 \pi^2}##, where ##\pi^2 \approx 10##:

$$
\sum_{k = -\infty}^\infty \dfrac{e^{-k^2 / 2 \tau^2}}{\sqrt{2} \tau \sqrt{\pi}} = \int_{-\infty}^\infty \dfrac{e^{-x^2 /2 \tau^2}}{\sqrt{2} \tau \sqrt{\pi}} dx + 2 e^{- 2 \tau^2 \pi^2} + 2 e^{- 2 \cdot 4 \tau^2 \pi^2}+ \mathcal{O} (e^{- 2 \cdot 9 \tau^2 \pi^2}) ,
$$

from that it is not so surprising that the approximation is so good even for ##\tau = 1##:

\begin{align*}
2 e^{- 2 \times 1^2 \pi^2} &= 5.35057 \times 10^{-9}
\nonumber \\
2 e^{- 2 \cdot 4 \times 1^2 \pi^2} &= 1.02450 \times 10^{-34} .
\end{align*}

It is something of a coincidence that the sum approximates the integral so well here even for low values of ##\tau##. Your argument may help explain why the accuracy gets even better as ##\tau## increases.
 
  • #19
julian said:
The derivation of the Poisson summation formula here for the Gaussian is straightforward. [...] It is something of a coincidence that the sum approximates the integral so well here even for low values of ##\tau##. Your argument may help explain why the accuracy gets even better as ##\tau## increases.
I ran a quick test in MATLAB for the one-sigma two-tailed area, which is 0.68268949 for the continuous distribution. Here are the results for different step sizes (alternatively, the left column can be read as values of ##\tau / \sigma##):

1: 0.882883729439719
10: 0.706483145523855
100: 0.685105166523426
1000: 0.682931422533150
10000: 0.682713688806255
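(For reference, a minimal Python sketch, not the original MATLAB code, that reproduces this experiment: it adds up the mass the discretised density assigns to ##|k| \le \tau##.)

```python
# Sketch: mass assigned to one standard deviation (|k| <= tau) by the density
# sampled at the integers, for increasing tau.
import math

def one_sigma_mass(tau):
    norm = tau * math.sqrt(2.0 * math.pi)
    kmax = int(tau)
    return sum(math.exp(-k * k / (2.0 * tau ** 2)) / norm for k in range(-kmax, kmax + 1))

for tau in (1, 10, 100, 1000, 10000):
    print(tau, one_sigma_mass(tau))
# The values should approach 0.68268949..., the continuous one-sigma two-tailed area.
```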
 
  • #20
But I just put the sum ##\sum_{k=-N}^N \dfrac{e^{-k^2/2}}{\sqrt{2 \pi}}## into Wolfram for, say, ##N=40## and get:

$$
\sum_{k=-40}^{40} \dfrac{e^{-k^2/2}}{\sqrt{2 \pi}} = 1.000000005350575982148479362482248080537...
$$

which seems good to many, many decimal places, in that increasing ##N## past ##40## changes the answer only by minuscule amounts.
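(One way to check this independently of Wolfram or MATLAB is arbitrary-precision arithmetic; here is a minimal sketch using Python's mpmath library, which is an assumption, not a tool used in the thread.)

```python
# Sketch: evaluate the truncated sum at 60-digit precision with mpmath.
from mpmath import mp, mpf, exp, sqrt, pi

mp.dps = 60  # decimal digits of working precision
total = sum(exp(-mpf(k) ** 2 / 2) for k in range(-40, 41)) / sqrt(2 * pi)
print(total)  # should reproduce the Wolfram digits quoted above; the truncation error is ~1e-366
```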
 
  • #21
julian said:
1.000000005350575982148479362482248080537...
For larks I did ##N=10^9## in MATLAB and got

1.0000000053505762043215554513153620064258575439453125
 
  • #23
BWV said:
For larks I did ##N=10^9## in MATLAB and got

1.0000000053505762043215554513153620064258575439453125
You can do a crude estimation of the error you would expect to get from only summing from ##-40## to ##40##:

\begin{align*}
\sum_{k=-\infty}^\infty \dfrac{e^{-k^2/2}}{\sqrt{2 \pi}} - \sum_{k=-40}^{40} \dfrac{e^{-k^2/2}}{\sqrt{2 \pi}} &= 2 \sum_{k=41}^\infty \dfrac{e^{-k^2/2}}{\sqrt{2 \pi}}
\nonumber \\
&= 2 \sum_{k=0}^\infty \dfrac{e^{-(k + 41)^2/2}}{\sqrt{2 \pi}}
\nonumber \\
&= 2 e^{-41^2/2} \sum_{k=0}^\infty \dfrac{e^{-(k^2/2 + 41 k)}}{\sqrt{2 \pi}}
\nonumber \\
&< 2 e^{-41^2/2} \sum_{k=0}^\infty \dfrac{e^{- 41 k}}{\sqrt{2 \pi}}
\nonumber \\
&= \sqrt{\frac{2}{\pi}} \dfrac{e^{-41^2/2}}{1 - e^{-41}}
\nonumber \\
& \approx 7.540984 \times 10^{-366}
\end{align*}

What limits the accuracy of Wolfram's output? And what does MATLAB give for ##\sum_{k=-40}^{40} \dfrac{e^{-k^2/2}}{\sqrt{2 \pi}}##?
 
  • #24
pbuk said:
Given the origin of the sum, values > 1 are distracting. It is probably better to use the standard continuity correction when approximating a discrete distribution with a continuous one:
$$ \sum_{k=-40}^{40} \dfrac{e^{-(k - 0.5)^2/2}}{\sqrt{2 \pi}} $$
https://www.wolframalpha.com/input?i=\sum_{k=-40}^{40}+\dfrac{e^{-(k-0.5)^2/2}}{\sqrt{2+\pi}}
The Poisson summation formula works when the function is in the Schwartz space, ##f \in \mathcal{S}(\mathbb{R})##, and a Gaussian times a polynomial is such a function, which means that the calculations I did in posts #15 and #16 are correct (modulo any typos). So it is most definitely correct that:

$$
\sum_{k=-\infty}^\infty \dfrac{e^{-k^2/2}}{\sqrt{2 \pi}} > 1
$$

As for ##\nu \not= 0## we had that:

$$
\sum_{k=-\infty}^\infty \dfrac{e^{-(k - \nu)^2/2 \tau^2}}{\sqrt{2} \tau \sqrt{\pi}} = \int_{-\infty}^\infty \dfrac{e^{-(x - \nu)^2/2 \tau^2}}{\sqrt{2} \tau \sqrt{\pi}} dx + 2 \sum_{m=1}^\infty e^{ - m^2 \pi^2 2 \tau^2} \cos 2 \pi m \nu .
$$

So we have for ##\tau = 1## and ##\nu = 0.5## that:

$$
\sum_{k=-\infty}^\infty \dfrac{e^{-(k - 0.5)^2/2}}{\sqrt{2 \pi}} = \int_{-\infty}^\infty \dfrac{e^{-(x - 0.5)^2/2}}{\sqrt{2 \pi}} dx + 2 \sum_{m=1}^\infty (-1)^m e^{ - 2 m^2 \pi^2} .
$$

So, including the first-order correction term, we have:

\begin{align*}
\sum_{k = -\infty}^\infty \dfrac{e^{-(k - 0.5)^2 / 2}}{\sqrt{2 \pi}} & \approx \int_{-\infty}^\infty \dfrac{e^{-(x - 0.5)^2 /2}}{\sqrt{2 \pi}} dx - 2 e^{- 2 \pi^2}
\nonumber \\
& \approx 1 - 5.350576 \times 10^{-9}
\nonumber \\
& \approx 0.999999994649424
\end{align*}

We can make a crude estimate of the remaining error:

\begin{align*}
2 \sum_{m=2}^\infty (-1)^m e^{- 2 m^2 \pi^2} &< 2 \sum_{m=2}^\infty e^{- 2 m^2 \pi^2}
\nonumber \\
& = 2 \sum_{m=0}^\infty e^{- 2 (m + 2)^2 \pi^2}
\nonumber \\
& = 2 e^{-8 \pi^2} \sum_{m=0}^\infty e^{- 2 (m^2 + 4m) \pi^2}
\nonumber \\
& < 2 e^{-8 \pi^2} \sum_{m=0}^\infty e^{- 8 m \pi^2}
\nonumber \\
& = 2 \frac{e^{-8 \pi^2} }{1 - e^{-8 \pi^2}}
\nonumber \\
& \approx 1.0245001 \times 10^{-34}
\end{align*}

Meaning that we most definitely have:

$$
\sum_{k = -\infty}^\infty \dfrac{e^{-(k - 0.5)^2 / 2}}{\sqrt{2 \pi}} < 1 .
$$
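(A quick double-precision check of this conclusion, as a minimal Python sketch not taken from the post:)

```python
# Sketch: the continuity-corrected sum and the leading-order prediction, which
# should both come out just below 1.
import math

shifted = sum(math.exp(-((k - 0.5) ** 2) / 2.0) for k in range(-40, 41)) / math.sqrt(2.0 * math.pi)
prediction = 1.0 - 2.0 * math.exp(-2.0 * math.pi ** 2)

print(shifted, prediction)  # both approximately 0.999999994649
```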

EDIT: I did make a typo in post #18 and in this one. Not important typos, though; I think I've corrected them now.
 
  • #25
julian said:
1.000000005350575982148479362482248080537...
What limitation is there on the accuracy of Wolfram's outputted answer? What does MATLAB give for ##\sum_{k=-40}^{40} \dfrac{e^{-k^2/2}}{\sqrt{2 \pi}}##?
1.0000000053505762043215554513153620064258575439453125
vs
1.0000000053505762043215554513153620064258575439453125
so the same number to every displayed decimal place
 
  • #26
BWV said:
1.0000000053505762043215554513153620064258575439453125
vs
1.0000000053505762043215554513153620064258575439453125
so the same number to every displayed decimal place
So MATLAB, working in double precision (roughly 16 significant digits), is just not as accurate as Wolfram!
 

