Why is the Sum of Exponentially Distributed Variables Exponentially Distributed?


Homework Help Overview

The discussion revolves around the properties of sums of independent exponentially distributed random variables and their relationship with geometrically distributed variables. The original poster is tasked with showing that the sum of these variables results in another exponential distribution with a specific parameter.

Discussion Character

  • Conceptual clarification, Mathematical reasoning, Problem interpretation

Approaches and Questions Raised

  • The original poster attempts to understand why the sum of independent exponentially distributed variables, when summed over a geometrically distributed number of terms, would still be exponentially distributed. They express confusion about the transition from Erlang distribution to exponential distribution.
  • Some participants suggest using convolutions to calculate the distribution of the sum, indicating a method to approach the problem.
  • There is a discussion about the conditional probability related to the number of terms in the sum and how it affects the overall distribution.

Discussion Status

Participants are actively engaging with the problem, providing hints and exploring different interpretations of the distribution. The original poster has expressed gratitude for the guidance received and appears to be synthesizing the information shared, indicating a productive exchange of ideas.

Contextual Notes

The original poster mentions potential language barriers affecting their understanding of the problem, which may influence the clarity of their questions and interpretations.

A-fil:

Homework Statement


Let X1, X2, … be independent exponentially distributed stochastic variables with parameter λ. For the sum Y = X1 + X2 + … + XN, where N is a geometrically distributed stochastic variable with parameter p, show that Y is exponentially distributed with parameter pλ.


Homework Equations


The expected value of an exponentially distributed stochastic variable with parameter λ is 1/λ.
The expected value of a geometrically distributed stochastic variable with parameter p is 1/p.


The Attempt at a Solution


What I don’t understand is mainly why Y would be exponentially distributed. According to my notes, for N constant, Y would be Erlang(N,λ).

However, if we assume that the sum is exponentially distributed (how would one prove this?), one should be able to calculate the expected value of this distribution by conditioning on N:
E(Y)=E\left(\sum_{i=1}^{N}X_{i}\right)=E\left(E\left(\sum_{i=1}^{N}X_{i}\,\middle|\,N\right)\right)=E\left(\frac{N}{\lambda}\right)=\frac{E(N)}{\lambda}=\frac{1}{p\lambda}
Implying that Y ~ Exp(pλ). Is this correct?
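Since this argument only matches the mean, a quick Monte Carlo sanity check of E(Y) = 1/(pλ) may help; the parameter values below are arbitrary examples, and this is of course a numerical sketch, not part of a proof.

```python
import random

random.seed(0)
lam, p = 2.0, 0.3        # arbitrary example parameters
trials = 200_000

total = 0.0
for _ in range(trials):
    # N ~ Geometric(p) on {1, 2, ...}: number of trials until the first success
    n = 1
    while random.random() > p:
        n += 1
    # Y = X_1 + ... + X_N with X_i ~ Exp(lam); expovariate takes the rate
    total += sum(random.expovariate(lam) for _ in range(n))

print(total / trials)    # should be close to 1 / (p * lam) ≈ 1.667
```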

So, I would appreciate any comments or hints that anyone here could give me. I got the problem in Japanese and my Japanese isn't perfect, so if the problem seems incorrect, feel free to point that out.

My first time posting here so I hope that I'm not breaking any rules.
 
Hi A-fil! :smile:

Have you seen convolutions? That's how to calculate the distribution of a sum...
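For instance, convolving the Exp(λ) density with itself should reproduce the Erlang(2, λ) density λ²y e^{−λy}. A small numerical sketch, with an arbitrary example value of λ:

```python
import math

lam = 2.0                                   # arbitrary example rate
dy = 0.001
# Exp(lam) density sampled on a grid over [0, 4)
f = [lam * math.exp(-lam * i * dy) for i in range(4000)]

def conv_at(j):
    # (f * f)(y) = integral from 0 to y of f(t) f(y - t) dt, as a Riemann sum
    return sum(f[t] * f[j - t] for t in range(j + 1)) * dy

for j in (500, 1000, 2000):                 # y = 0.5, 1.0, 2.0
    y = j * dy
    erlang2 = lam**2 * y * math.exp(-lam * y)   # Erlang(2, lam) density
    print(y, round(conv_at(j), 4), round(erlang2, 4))
```

The two printed columns agree to within the discretization error of the Riemann sum.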
 
A-fil said:

What I don't understand is mainly why Y would be exponentially distributed. According to my notes, for N constant, Y would be Erlang(N, λ). [...]

You are correct that for fixed N the sum is Erlang, so that P{Y in (y, y+dy) | N = n} = f_n(y) dy, where f_n is the n-Erlang density. However, you want the unconditional P{Y in (y, y+dy)}. Do you see it now?

RGV
 
Thanks both of you for your help! I think I got it, but just to make sure (and for others who might have gotten stuck on something similar) I'll write it down.

f(y)\,dy=P(Y \in (y,y+dy)) = \sum_{k=1}^{\infty} P(Y \in (y,y+dy)\mid N=k)\,P(N=k)

We know that
P(N=k)=(1-p)^{k-1}p

The conditional probability is given by the Erlang(k, λ) density, which can be obtained by repeated convolution:
P(Y \in (y,y+dy)\mid N=k) = \frac{(\lambda y)^{k-1}}{(k-1)!}\lambda e^{-\lambda y}\,dy

This yields

f(y)=\sum_{k=1}^{\infty} \frac{(\lambda y)^{k-1}}{(k-1)!}\lambda e^{-\lambda y}\,(1-p)^{k-1}p=p \lambda e^{-\lambda y} \sum_{k=1}^{\infty} \frac{(\lambda y(1-p))^{k-1}}{(k-1)!}=p \lambda e^{-\lambda y} \sum_{k=0}^{\infty} \frac{(\lambda y(1-p))^{k}}{k!}=p \lambda e^{-\lambda y} e^{\lambda y(1-p)}=p \lambda e^{-p \lambda y}

Thus, Y ~ Exp(pλ).
Q.E.D.
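As an empirical cross-check of this derivation (again with arbitrarily chosen example values of λ and p), the tail of the simulated Y should match the Exp(pλ) survival function e^{−pλy}:

```python
import math
import random

random.seed(1)
lam, p = 2.0, 0.3        # arbitrary example parameters
trials = 200_000

samples = []
for _ in range(trials):
    n = 1
    while random.random() > p:          # N ~ Geometric(p) on {1, 2, ...}
        n += 1
    samples.append(sum(random.expovariate(lam) for _ in range(n)))

# Empirical P(Y > y) versus the Exp(p*lam) tail e^{-p*lam*y}
for y in (0.5, 1.0, 2.0, 4.0):
    empirical = sum(s > y for s in samples) / trials
    print(y, round(empirical, 3), round(math.exp(-p * lam * y), 3))
```

The empirical tail probabilities track e^{−pλy} at every y, not just in the mean, which is exactly what the density calculation above predicts.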
 
