How Do You Find the Density Function of a Sum of Exponential Random Variables?

Summary
The discussion focuses on finding the density function of the sum of independent, identically distributed exponential random variables, which results in a gamma distribution. Participants explore methods such as using characteristic functions and convolution of density functions, emphasizing the importance of induction in the process. The characteristic function of the sum of two exponential variables is shown to relate to the gamma distribution, leading to an induction hypothesis for n variables. There are challenges in evaluating integrals correctly, particularly regarding limits and integration techniques. Ultimately, the conversation highlights the connection between exponential and gamma distributions through their characteristic functions and convolution properties.
glacier302
Let X1,...,Xn be independent, identically distributed random variables with exponential distribution of parameter λ. Find the density function of S = X1+...+Xn. (This distribution is called the gamma distribution of parameters n and λ). Hint: Proceed by induction.
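
For reference, the gamma distribution with parameters n and λ has the standard density (quoted here for convenience; it is not stated in the problem itself):

f_S(x) = \frac{\lambda^n x^{n-1} e^{-\lambda x}}{(n-1)!} \quad \text{for } x \ge 0, \qquad f_S(x) = 0 \text{ for } x < 0.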

At first I tried computing the characteristic function of X1+...+Xn, which is equal to the characteristic function of X1 raised to the nth power since the Xi are independent and identically distributed. But this didn't look like the characteristic function of any probability distribution that I know, so that was a dead end.

We're told to proceed by induction, but I'm not sure how to do that with density functions.

Useful information:

The probability density function of each of the Xi is f(x) = λe^(-λx) for x ≥ 0, and f(x) = 0 for x < 0.

The probability density function of the sum of two independent random variables is the convolution of their density functions. So if the density function of X is f(x) and the density function of Y is g(x), then the density function of X+Y is ∫f(T)g(x-T)dT (integral from -∞ to ∞).
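
As an aside (not part of the original question), here is a minimal numerical sketch of that convolution formula, assuming numpy and scipy are available; the rate λ = 1.5 and the grid are arbitrary choices:

```python
# Quick numerical sanity check of the convolution formula (not a proof):
# convolve two Exp(lam) densities on a grid and compare with the gamma(2, lam)
# density.  Assumes numpy and scipy are installed; lam = 1.5 is arbitrary.
import numpy as np
from scipy.stats import gamma

lam = 1.5
dx = 0.001
x = np.arange(0, 20, dx)
f = lam * np.exp(-lam * x)              # Exp(lam) density sampled on [0, 20)

# Discrete convolution approximates the integral of f(t) * f(x - t) dt.
conv = np.convolve(f, f)[:len(x)] * dx

# Gamma with shape 2 and rate lam (scipy parametrizes by scale = 1/rate).
g2 = gamma(a=2, scale=1/lam).pdf(x)

print(np.max(np.abs(conv - g2)))        # small; only discretization error remains
```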

Any help would be much appreciated! : )
 
We're told to proceed by induction, but I'm not sure how to do that with density functions.

To use induction in evaluating the product of n+1 characteristic functions, you get to assume that the product of the first n of them gives you the characteristic function of a gamma distribution. So the problem will be to show that the characteristic function of a gamma distribution times the characteristic function of an exponential gives you the characteristic function of another gamma distribution.
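
For reference, the standard closed forms (not derived in this thread) make that step explicit:

\varphi_{X_1}(t) = \int_0^\infty e^{itx}\,\lambda e^{-\lambda x}\,dx = \frac{\lambda}{\lambda - it},
\qquad
\varphi_{\Gamma(n,\lambda)}(t) = \left(\frac{\lambda}{\lambda - it}\right)^{n},

so that

\varphi_{\Gamma(n,\lambda)}(t)\,\varphi_{X_{n+1}}(t) = \left(\frac{\lambda}{\lambda - it}\right)^{n+1} = \varphi_{\Gamma(n+1,\lambda)}(t).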

To use induction in evaluating the convolution of n+1 exponentials, you have a big multiple integral in n+1 variables ds1, ds2, etc. Induction lets you say that evaluating the integral with respect to the first n of these variables results in a gamma distribution. So you need to show that the convolution of a gamma distribution with an exponential produces another gamma distribution.

I haven't done the problem, so I don't know which of those methods works. If the integration doesn't seem to work, try applying integration by parts. That's just speculation on my part, because, as I said, I haven't worked the problem.
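
If you take the convolution route described above, here is a small symbolic sketch of that induction step (assuming sympy is available; k = 3 is an arbitrary concrete choice, and the 0-to-x limits reflect that both densities vanish for negative arguments):

```python
# Symbolic sketch of the induction step via sympy (assumed available), for the
# concrete case k = 3: convolving the gamma(k, lam) density with one Exp(lam)
# density over 0 <= y <= x should give the gamma(k + 1, lam) density.
import sympy as sp

x, y, lam = sp.symbols('x y lam', positive=True)
k = 3

gamma_k = lam**k * y**(k - 1) * sp.exp(-lam * y) / sp.factorial(k - 1)  # gamma(k, lam) density at y
expo = lam * sp.exp(-lam * (x - y))                                     # Exp(lam) density at x - y

# The limits are 0 to x because both densities vanish for negative arguments.
conv = sp.integrate(gamma_k * expo, (y, 0, x))
target = lam**(k + 1) * x**k * sp.exp(-lam * x) / sp.factorial(k)       # gamma(k + 1, lam) density

print(sp.simplify(conv - target))                                       # prints 0
```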
 
Okay, so now I'm trying to do induction. Since the Xi are exponentially distributed, X1 has Gamma distribution with parameters 1 and λ. But when I try to find the density of X1 + X2 where X1 and X2 are exponentially distributed, I get this:

density of (X1+X2) = ∫f(x-y)f(y)dy from y = 0 to y = ∞ (since X1, X2 are independent)
= ∫λe^(-λ(x-y))λe^(-λy)dy
= λ^2∫e^(-λx)e^(λy)e^(-λy)dy
= λ^2∫e^(-λx)dy
= λ^2e^(-λx)∫dy
= λ^2e^(-λx)*y (evaluated from y = 0 to y = ∞)
= ∞

When I calculated the density of a random variable with Gamma distribution and parameters 2 and λ, I got λ^2e^(-λx)*x. This looks kind of like the second-to-last step of my work above, except that there's an x instead of y, and it isn't evaluated from 0 to ∞.

Am I making some stupid mistake here with evaluating the integral??
 
The limits of integration aren't from minus infinity to infinity since the quantity (x-y) can't be negative for the exponential distribution. The variable y only goes from 0 to x.
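
With that correction the base case goes through; spelled out:

f_{X_1+X_2}(x) = \int_0^x \lambda e^{-\lambda(x-y)}\,\lambda e^{-\lambda y}\,dy = \lambda^2 e^{-\lambda x}\int_0^x dy = \lambda^2 x\, e^{-\lambda x}, \qquad x \ge 0,

which is exactly the gamma density with parameters 2 and λ computed above.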
 
You are making it far too hard. I assume you know the forms of the characteristic functions of the exponential and gamma distributions. Let

\phi_x(t)

stand for the characteristic function of an exponential distribution (this is the c.f. of X_1). The c.f. of X_1 + X_2 is

\int_0^\infty \int_0^\infty e^{it(x_1 + x_2)} f(x_1) f(x_2) \, dx_1\, dx_2 = \left(\phi_x(t)\right)^2

With a little observation you should be able to show that this is the c.f. of a particular gamma distribution. You've just shown the statement is true for n = 2.

Now make the induction assumption: that for some k \ge 2,

T = X_1 + X_2 + \cdots + X_k

has a gamma distribution. Then you know what the c.f. for this sum is. Now, for k + 1 you have

\int_0^\infty \cdots \int_0^\infty e^{it(x_1 + x_2 + \cdots + x_k + x_{k+1})} f(x_1) f(x_2) \cdots f(x_k) f(x_{k+1}) \, dx_1\, dx_2 \cdots dx_k\, dx_{k+1}

Break this into two factors: one integral over x_1 through x_k, the other over x_{k+1}. Each integration gives a c.f., and you know the form of each one.
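
(As an aside, here is a quick Monte Carlo sketch of the end result, assuming numpy and scipy are available; λ = 2, n = 5, and the sample size are arbitrary choices:)

```python
# Monte Carlo sanity check of the conclusion (numpy and scipy assumed available):
# a sum of n i.i.d. Exp(lam) draws should pass a goodness-of-fit test against
# the gamma(n, lam) distribution.  lam = 2, n = 5, and the sample size are arbitrary.
import numpy as np
from scipy.stats import gamma, kstest

rng = np.random.default_rng(0)
lam, n, size = 2.0, 5, 100_000

sums = rng.exponential(scale=1/lam, size=(size, n)).sum(axis=1)

# Kolmogorov-Smirnov test against gamma with shape n and rate lam (scale = 1/rate).
result = kstest(sums, gamma(a=n, scale=1/lam).cdf)
print(result.statistic, result.pvalue)   # a large p-value is consistent with the gamma fit
```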
 