Why was the moment generating function defined and what is its purpose?

Summary: The moment generating function (mgf) is defined as ##\operatorname{mgf}(x) = E[e^{tx}]## to facilitate the recovery of expected values through differentiation at ##t = 0##. It uniquely identifies a probability distribution: if two random variables have the same mgf, they share the same distribution. The mgf is particularly useful for determining the distribution of sums of independent random variables. While not every distribution has an mgf, all have a characteristic function, which serves a similar purpose. The choice of ##e^{tx}## allows moments to be generated through a power series expansion, whose coefficients correspond to the moments ##E[x^n]##.
Avichal:
Expectation of x, i.e. ##E(x) = \sum x \, p(x)##, where ##p(x)## is the probability of x.
Now my book defines another function, mgf(x), the moment generating function of x, which is defined as:
##\operatorname{mgf}(x) = E[e^{tx}]##

I don't understand why this function was defined. Basically, we included ##e^{tx}## in our function because if we differentiate it at ##t = 0##, we get the ##E(x)## formula back. Why take such pain?

I can only see one use: when ##p(x)## is such that ##\operatorname{mgf}(x)## forms a nice Taylor series of some function that can be easily differentiated. Other than that I see no use.
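For a concrete illustration of that recovery-by-differentiation, here is a quick symbolic check in Python (sympy); the exponential distribution is my own choice for the example, nothing special:

```python
import sympy as sp

t, lam = sp.symbols('t lam', positive=True)

# mgf of an Exponential(lam) random variable, valid for t < lam
M = lam / (lam - t)

# first derivative at t = 0 recovers the mean E[x] = 1/lam
EX = sp.diff(M, t).subs(t, 0)
# second derivative at t = 0 recovers the second moment E[x^2] = 2/lam**2
EX2 = sp.diff(M, t, 2).subs(t, 0)

print(sp.simplify(EX))   # 1/lam
print(sp.simplify(EX2))  # 2/lam**2
```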
 
Simplest application: the moment generating function uniquely defines a distribution, so
* If you determine that two random variables have the same mgf, you know they have the same distribution.
* Slightly different: suppose you know the general form of the mgf of (say) the normal distribution, i.e. how it depends on ##\mu## and ##\sigma##. If you find that a new random variable has an mgf following the same pattern, you know the new variable is normally distributed, and you can read off its mean and standard deviation (see the form after this list).
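For concreteness, the normal mgf has the well-known form
$$M_X(t) = \exp\!\left(\mu t + \tfrac{1}{2}\sigma^2 t^2\right),$$
so any mgf of this shape identifies the variable as ##N(\mu, \sigma^2)## and hands you its parameters directly.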

Moment generating functions can also be used to determine the distribution of a sum of several independent random variables (assuming mgfs exist for every summand); see the worked identity below.
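The key identity follows directly from independence:
$$M_{X+Y}(t) = E\left[e^{t(X+Y)}\right] = E\left[e^{tX}\right]\,E\left[e^{tY}\right] = M_X(t)\,M_Y(t).$$
For example, if ##X## and ##Y## are independent Poisson variables with means ##\lambda_1## and ##\lambda_2##, the product of their mgfs is ##\exp\left((\lambda_1+\lambda_2)(e^t-1)\right)##, which is again a Poisson mgf, so ##X+Y## is Poisson with mean ##\lambda_1+\lambda_2##.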

Finally, although not every distribution has an mgf, every distribution does have something known as a characteristic function, defined as
$$\phi_x(s) = E\left[e^{isx}\right]$$

(where ##i^2 = -1##). These also uniquely define distributions, and although the underlying mathematical justifications are deeper, several of the conclusions drawn from c.f.s result from manipulations similar to those done with moment generating functions. If you've seen them for mgfs, they won't be new with characteristic functions.
 
An important application of characteristic functions is for computing the distribution function of the sum of independent random variables. The characteristic function of the sum is simply the product of the individual characteristic functions.
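As a quick numerical sanity check of that product property, here is a Monte Carlo sketch in Python (numpy); the particular distributions are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# two independent samples; these distributions are arbitrary choices
x = rng.exponential(scale=1.0, size=n)
y = rng.normal(loc=0.0, scale=2.0, size=n)

def ecf(sample, s):
    """Empirical characteristic function: sample-mean estimate of E[exp(i*s*X)]."""
    return np.mean(np.exp(1j * s * sample))

for s in (0.3, 1.0, 2.5):
    lhs = ecf(x + y, s)          # c.f. of the sum
    rhs = ecf(x, s) * ecf(y, s)  # product of the individual c.f.s
    print(s, abs(lhs - rhs))     # small, up to Monte Carlo error
```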
 
Why was ##e^{tx}## specifically chosen? What is t here?
They could have easily defined ##\operatorname{mgf}(x)## as ##E[f(x)]## for some function ##f(x)##, right? Why ##e^{tx}##?
 
Think of it this way: the goal is to have a function from which you can generate moments. If you write the mgf (as defined) as a power series in t, the coefficient of ##\frac{t^n}{n!}## is ##E[x^n]##.
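Spelled out (and assuming the interchange of expectation and infinite sum is justified, e.g. when the mgf exists in a neighbourhood of 0):
$$E\left[e^{tx}\right] = E\left[\sum_{n=0}^{\infty} \frac{(tx)^n}{n!}\right] = \sum_{n=0}^{\infty} \frac{t^n}{n!}\,E[x^n],$$
so differentiating n times and setting ##t = 0## picks out exactly ##E[x^n]##.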
 
