CDF of a sum of random variables

  • #1
EngWiPy
Hi,

I have this random variable ##\beta=\sum_{k=1}^K\alpha_k##, where ##\{\alpha_k\}_{k=1}^{K}## are i.i.d. random variables with CDF ##F_{\alpha}(\alpha)=1-\frac{1}{\alpha+1}## and PDF ##f_{\alpha}(\alpha)=\frac{1}{(1+\alpha)^2}##. I want to find the CDF of ##\beta##. So I used the Moment Generating Function (MGF), which, since the ##\alpha_k## are independent, factors as:

[tex]\mathcal{M}_{\beta}(s)=E_{\beta}[e^{s\beta}]=\prod_{k=1}^K\mathcal{M}_{\alpha_k}(s)[/tex]

The CDF can then be found as the inverse Laplace transform of ##\frac{\mathcal{M}_{\beta}(-s)}{s}##. After doing all of this, I ended up with

[tex]\frac{\mathcal{M}_{\beta}(-s)}{s}=\sum_{l=0}^K{K\choose l}s^{l-1}\,e^{ls}\text{Ei}^l(-s)[/tex]

where ##\text{Ei}(x)## is the exponential integral function. I have two questions:

  1. Is there a simpler way of computing the CDF?
  2. Can I evaluate the inverse Laplace transform of ##\frac{\mathcal{M}_{\beta}(-s)}{s}## other than numerically?
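For reference, here is a quick Monte Carlo sketch (an added illustration in Python; the choices ##K=4## and ##N=10^6## are arbitrary) that estimates ##F_{\beta}## by inverse-transform sampling; any closed form should match these numbers:

[code]
import numpy as np

rng = np.random.default_rng(1)
K, N = 4, 10**6  # K and the number of draws are arbitrary illustration choices

# F_alpha(a) = 1 - 1/(1+a)  =>  inverse-transform sampling: a = u/(1-u), u ~ U(0,1)
u = rng.random((N, K))
alpha = u / (1.0 - u)
beta = alpha.sum(axis=1)

for z in (1.0, 5.0, 20.0, 100.0):
    print(z, np.mean(beta <= z))  # empirical estimate of F_beta(z)
[/code]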
Thanks in advance
 

Answers and Replies

  • #2
mathman
There is an alternative method using characteristic functions instead of moment generating functions, where the inverse transform is a Fourier transform rather than a Laplace transform. Whether this will be any better I couldn't say. If you are not familiar with characteristic functions: it is like the moment generating function, but with ##it## in the exponent instead of ##s##.
 
  • #3
StoneTemplePython
Is there a simpler way of computing the CDF?
Yes, just do ##X_1 + X_2## (i.e. a convolution) directly. Then once you get comfortable with that, convolve that result with ##X_3##, and so on.
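If it helps, here is a sketch of that first convolution in SymPy (an added illustration; whatever closed form SymPy returns is its own output, not a quoted result):

[code]
import sympy as sp
from mpmath import quad, inf

x = sp.symbols('x', nonnegative=True)
z = sp.symbols('z', positive=True)

# density of one term: f(x) = 1/(1+x)^2 on [0, oo)
# density of X1 + X2 is the convolution f2(z) = integral_0^z f(x) f(z-x) dx
f2 = sp.simplify(sp.integrate(1 / ((1 + x)**2 * (1 + z - x)**2), (x, 0, z)))
print(f2)

# numeric sanity check: f2 should integrate to 1 over (0, oo)
f2n = sp.lambdify(z, f2, 'mpmath')
print(quad(f2n, [0, inf]))
[/code]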
- - - -

There are a lot of problems in your write-up.

Hi,

I have this random variable ##\beta=\sum_{k=1}^K\alpha_k##, where ##\{\alpha_k\}_{k=1}^{K}## are i.i.d. random variables with CDF ##F_{\alpha}(\alpha)=1-\frac{1}{\alpha+1}## and PDF ##f_{\alpha}(\alpha)=\frac{1}{(1+\alpha)^2}##. I want to find the CDF of ##\beta##. So I used the Moment Generating Function (MGF), which, since the ##\alpha_k## are independent, factors as:

[tex]\mathcal{M}_{\beta}(s)=E_{\beta}[e^{s\beta}]=\prod_{k=1}^K\mathcal{M}_{\alpha_k}(s)[/tex]
The convention is to use a capital letter like ##X## to denote the random variable and lower case for the specific values it takes on, as in ##X(\omega) = x## or, in the common shorthand, ##X = x##. You seem to have overloaded ##\alpha##.

You didn't state the domain it takes values in, but from what I can tell it is a real non-negative random variable. The PDF looks close to a Cauchy distribution, but you cannot have negative values here, or else your CDF is illegal (CDFs are bounded in ##[0,1]##).

The CDF only reaches one as ##x \to \infty##, and ##E[X] = \int_{\eta}^{\infty} \big(1-F_X(x)\big)\,dx = \infty##, hence your random variable doesn't even have a first moment.
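To spell that out with your CDF (taking ##\eta = 0##):

[tex]E[X] = \int_{0}^{\infty} \Big(1-F_X(x)\Big)\,dx=\int_{0}^{\infty} \frac{dx}{1+x}=\lim_{t\to\infty}\ln(1+t)=\infty[/tex]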

For any real ##r \neq 0##, by Jensen's inequality your MGF satisfies

[tex]E\Big[\exp\big(rX\big)\Big] \geq \exp \Big( E \big[rX\big] \Big) = \exp \Big( r\, E \big[X\big] \Big) = \exp \big( r \cdot \infty \big) = \infty[/tex]

hence the MGF is worthless: it maps your random variable to ##1## at ##r = 0## and to ##\infty## otherwise. There are many non-defective random variables without a first moment that give the same result, hence your transform cannot be invertible.
- - - -
edit: technical nit: it really should say "for any real ##r \gt 0##" in the Jensen step above, since ##\exp(r\cdot\infty)=\infty## only for positive ##r##.
- - - - -
edit 2: the integral exists for any ##r \leq 0##. The transform no longer generates moments, since the derivatives don't exist at ##r = 0##; however, there may be some useful things that can be done with it, including a kind of Chernoff bound, again with the restriction ##r \lt 0##. That is of some interest.
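(For the record, that lower-tail bound works by applying Markov's inequality to the non-negative variable ##e^{rX}##:

[tex]\text{for } r<0:\qquad P(X \le a) = P\big(e^{rX} \ge e^{ra}\big) \le e^{-ra}\,E\big[e^{rX}\big][/tex]

which is finite precisely because the transform exists for ##r<0##.)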

Still, the characteristic function approach seems preferable.
 
  • #4
EngWiPy
There is an alternative method using characteristic functions instead of moment generating functions, where the inverse transform is a Fourier transform rather than a Laplace transform. Whether this will be any better I couldn't say. If you are not familiar with characteristic functions: it is like the moment generating function, but with ##it## in the exponent instead of ##s##.
I am somewhat familiar with it, and as far as I know it follows the same procedure as the MGF, but with the Fourier transform instead of the Laplace transform. I need something that can simplify the result (if possible) so that I can write it in an elegant form.
 
  • #5
EngWiPy
Yes, just do ##X_1 + X_2## (i.e. a convolution) directly. Then once you get comfortable with that, convolve that result with ##X_3##, and so on.
....
It doesn't sound like an efficient way for more than two random variables, unless there is a pattern in the convolution process.

I didn't understand what you mean by ##E[X]=\infty##! What are ##X##, ##\eta##, and the CDF in your analysis?
 
  • #6
StoneTemplePython
It doesn't sound like an efficient way for more than two random variables, unless there is a pattern in the convolution, which I suspect doesn't exist.

I didn't understand what you mean by ##E[X]=\infty##! What are ##X##, ##\eta##, and the CDF in your analysis?
That's right -- sometimes, though, there is a pattern. One tell-tale sign when convolving certain pathological distributions like the Cauchy is that the ##n##-fold convolution looks identical in form to the original distribution. Patiently convolving exponential distributions gives the pattern for the Erlang (which can then be proven with induction), and so on.
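As a quick SymPy sketch of that exponential-to-Erlang pattern (an added illustration, not needed for the argument):

[code]
import sympy as sp

x, t = sp.symbols('x t', positive=True)
f = sp.exp(-x)  # Exp(1) density on [0, oo)

# repeatedly convolve: density of X_1 + ... + X_n
g = f
for n in range(2, 5):
    g = sp.simplify(sp.integrate(g.subs(x, t) * sp.exp(-(x - t)), (t, 0, x)))
    print(n, g)  # x*exp(-x), x**2*exp(-x)/2, x**3*exp(-x)/6: the Erlang pattern
[/code]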

As for the expected value: you need a random variable with capital letters here; I went with ##X## by custom. You didn't specify the domain your random variable ##X## lives in, but based on your CDF formula ##X## can only be real non-negative. So for any ##\eta \geq 0## (i.e. where ##\eta## is the lowest value that ##X## takes on), that integral from ##\eta## to ##\infty## gives you infinity. In general, for real non-negative random variables, integrating one minus the CDF (i.e. integrating the complement of the CDF) gives the expected value of the random variable. When your r.v. doesn't have a first moment, you end up quite limited in what you can do with it and what you can say about it.
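And a one-line numeric sanity check of that complement-CDF identity on a distribution that does have a mean (an added illustration using Exp(1), where ##1-F(x)=e^{-x}##):

[code]
from mpmath import quad, exp, inf

# check E[X] = integral_0^oo (1 - F(x)) dx for Exp(1): 1 - F(x) = e^{-x}
print(quad(lambda x: exp(-x), [0, inf]))  # ~ 1.0, matching E[X] = 1
[/code]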
 
  • #7
EngWiPy
That's right -- sometimes, though, there is a pattern. One tell-tale sign when convolving certain pathological distributions like the Cauchy is that the ##n##-fold convolution looks identical in form to the original distribution. Patiently convolving exponential distributions gives the pattern for the Erlang (which can then be proven with induction), and so on.

As for the expected value: you need a random variable with capital letters here; I went with ##X## by custom. You didn't specify the domain your random variable ##X## lives in, but based on your CDF formula ##X## can only be real non-negative. So for any ##\eta \geq 0## (i.e. where ##\eta## is the lowest value that ##X## takes on), that integral from ##\eta## to ##\infty## gives you infinity. In general, for real non-negative random variables, integrating one minus the CDF (i.e. integrating the complement of the CDF) gives the expected value of the random variable. When your r.v. doesn't have a first moment, you end up quite limited in what you can do with it and what you can say about it.
Yes, the domain is ##[0,\infty)##. I will try the convolution for ##K=3## and see if I get a pattern.

I just looked at the Cauchy distribution on Wikipedia, and it says that

[tex]\sum_{k=1}^Kw_k\frac{X_k}{Y_k}[/tex] follows the standard Cauchy distribution, where ##X_k## and ##Y_k## are Gaussian with mean 0 and variance 1 and ##\sum_kw_k=1##. In my case, actually, ##\alpha_k=\frac{a_k}{b_k}##, where both ##a_k## and ##b_k## are exponential random variables with mean 1 (they are the magnitudes squared of Gaussian RVs with mean 0 and variance 1). It also says that the Cauchy distribution has no mean and no MGF, but it does have a characteristic function. However, the PDF is different in the two cases. For the standard Cauchy distribution the PDF is

[tex]f_X(x)=\frac{1}{\pi}\frac{1}{1+x^2}[/tex]

while in my case the PDF is given by

[tex]f_X(x)=\frac{1}{(1+x)^2}[/tex]
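In fact, conditioning on ##b_k## confirms this CDF for the ratio of two unit-mean exponentials:

[tex]P\left(\frac{a_k}{b_k}\le x\right)=\int_0^\infty \left(1-e^{-bx}\right)e^{-b}\,db=1-\frac{1}{1+x}[/tex]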
 
  • #8
StoneTemplePython
I'd give it a shot. The idea of the characteristic function is to use points on the unit circle to transform your problem without distorting length in the process -- hence your best shot at not having a blow-up-to-infinity type of issue.

- - - - -
I actually think I've seen your PDF before, but I'm drawing a blank right now as to where. To the extent it's (somewhat) well known, you could just look up the known results.

Just remember that when you don't have a finite expected value, the various limit theorems fail to apply. This means there is no law of large numbers for your problem, and a lot of things I personally take for granted won't apply here.
 
  • #9
EngWiPy
I'd give it a shot. The idea of the characteristic function is to use points on the unit circle to transform your problem without distorting length in the process -- hence your best shot at not having a blow-up-to-infinity type of issue.
I think I will try the CF approach (as well as the convolution).

I actually think I've seen your PDF before, but I'm drawing a blank right now as to where. To the extent it's (somewhat) well known, you could just look up the known results.
"known results" like what and where?

Just remember that when you don't have a finite expected value, the various limit theorems fail to apply. This means there is no law of large numbers for your problem, and a lot of things I personally take for granted won't apply here.
What are the implications of not having a finite expected value and not having the law of large numbers be applicable?
 
  • #10
StoneTemplePython
"known results" like what and where?
Basically like what you looked up for Cauchy -- you could look up the result of the convolution for ##n## Cauchy r.v.'s -- people have already done this (and posted findings on the Internet). If I could only remember the name of the distribution -- it may or may not come to me.

What are the implications of not having a finite expected value and not having the law of large numbers be applicable?
I'm not really sure what to say here. The Expected Value is, I'd suggest, the single most important calculation in all of probability theory. You don't have an expected value.

Most people have a notion that when you add up lots of observations of random variables and average them, you tend to see 'long-run' behavior. But that is out -- it requires the Weak (or Strong) Law of Large Numbers to justify, and you don't have an expected value, so no go. In general, convolving lots of random variables is messy and difficult, and it's much easier to apply various useful bounds, but they pretty much all require a first moment -- you can't even use Markov's inequality here. I'm not aware of much work being done on random variables like this, but maybe there is literature on dealing with them.
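To see that failure empirically, here is a small simulation sketch (an added illustration; the seed and sample sizes are arbitrary) in which the running average never settles down:

[code]
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
x = rng.exponential(1.0, n) / rng.exponential(1.0, n)  # samples of a/b

running_mean = np.cumsum(x) / np.arange(1, n + 1)
for m in (10**2, 10**3, 10**4, 10**5, 10**6):
    print(m, running_mean[m - 1])  # no convergence: huge samples keep kicking it up
[/code]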
- - -

while in my case the PDF is given by

[tex]f_X(x)=\frac{1}{(1+x)^2}[/tex]
Btw, I think your PDF integrates to ##\frac{\pi}{2} \neq 1##, hence you need to rescale the PDF so that it integrates to one, and thus becomes a valid probability density function.
 
  • #11
EngWiPy
I myself suspect it must have been done, but I am not sure where to look.

As for the PDF scaling: I derived the PDF from the CDF ##1-\frac{1}{1+x}##, which goes to 1 as ##x\to\infty##, and that should be the same as integrating the PDF out to infinity, right?
 
  • #12
StoneTemplePython
I myself suspect it must have been done, but I am not sure where to look.

As for the PDF scaling: I derived the PDF from the CDF ##1-\frac{1}{1+x}##, which goes to 1 as ##x\to\infty##, and that should be the same as integrating the PDF out to infinity, right?
whoops. Ignore my PDF comment. You wrote ##f_X(x)=\frac{1}{(1+x)^2}##, but for some reason I read it as ##f_X(x)=\frac{1}{1+x^2}##, which is not the same thing, of course.
 
  • #13
EngWiPy
No worries.

I am trying to follow the CF approach, and got stuck at this integral:

[tex]\int_0^{\infty}\frac{e^{itx}}{(1+x)^2}\,dx[/tex]

I searched some Fourier transform tables, and all of them include the expression ##1/(1+x^2)## (which corresponds to the Cauchy distribution) but not ##1/(1+x)^2##! I didn't find such an expression in the tables of integrals either. I tried integration by parts, but I got ##\infty##! Is this integral even integrable?
 
  • #14
mathman
No worries.

I am trying to follow the CF approach, and got stuck at this integral:

[tex]\int_0^{\infty}\frac{e^{itx}}{(1+x)^2}\,dx[/tex]

I searched some Fourier transform tables, and all of them include the expression ##1/(1+x^2)## (which corresponds to the Cauchy distribution) but not ##1/(1+x)^2##! I didn't find such an expression in the tables of integrals either. I tried integration by parts, but I got ##\infty##! Is this integral even integrable?
It is integrable, but I don't know if there is an explicit expression.
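For what it's worth, here is a numerical sketch (an added illustration, not a known closed form): it evaluates the integral with SciPy's Fourier-weighted quadrature, and the last line cross-checks against ##1 + it\,e^{-it}E_1(-it)##, which integration by parts suggests but which should be treated as an unverified assumption (branch conventions for ##E_1## at imaginary arguments are easy to get wrong):

[code]
import numpy as np
from scipy.integrate import quad
from mpmath import e1, mpc

def phi(t):
    # integral_0^oo e^{itx}/(1+x)^2 dx, split into cos and sin parts;
    # weight='cos'/'sin' invokes SciPy's Fourier-integral routine on [0, oo)
    f = lambda x: 1.0 / (1.0 + x)**2
    re, _ = quad(f, 0, np.inf, weight='cos', wvar=t)
    im, _ = quad(f, 0, np.inf, weight='sin', wvar=t)
    return re + 1j * im

t = 1.0
print(phi(t))
# hedged cross-check: integration by parts suggests phi(t) = 1 + i t e^{-i t} E_1(-i t)
print(1 + 1j * t * np.exp(-1j * t) * complex(e1(mpc(0, -t))))
[/code]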
 
  • #15
EngWiPy
Yes, I meant that I want to write it as an expression in compact form, not as an integral, because this is an intermediate step.
 
  • #16
EngWiPy
Is it possible to approximate ##\sum_{i=1}^K\frac{x_i}{y_i}## as ##K\to\infty## in any simple way, where ##\{x_i\}_{i=1}^K## and ##\{y_i\}_{i=1}^K## are all i.i.d. random variables that follow the exponential distribution with mean 1? If not, what about ##\sum_{i=1}^K\frac{1}{y_i}##?
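For what it's worth, a purely exploratory Monte Carlo sketch (an added illustration; the seed and sizes are arbitrary) of how the per-term average of that sum behaves as ##K## grows; the steady growth of ##S_K/K## is consistent with the missing first moment rather than with convergence to a limit:

[code]
import numpy as np

rng = np.random.default_rng(2)
N = 1000  # Monte Carlo replications per K (arbitrary choice)

for K in (10, 100, 1000, 10000):
    a = rng.exponential(1.0, (N, K))
    b = rng.exponential(1.0, (N, K))
    s = (a / b).sum(axis=1)
    print(K, np.median(s) / K)  # median of S_K / K keeps growing with K
[/code]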
 
  • #17
EngWiPy
[tex]\int_0^{\infty}\frac{e^{itx}}{(1+x)^2}\,dx[/tex]
Could anyone comment on whether this integral can be evaluated in compact form, or whether I need to evaluate it numerically?

Thanks in advance
 
