How does the mgf relate to the probability distribution?

  • Context: Graduate
  • Thread starter: Josh S Thompson
  • Tags: Moment

Discussion Overview

The discussion revolves around the relationship between moment-generating functions (mgfs) and probability distributions, including their interpretation, properties, and implications for random variables and data samples. Participants explore theoretical aspects, practical applications, and the nuances of using moments to analyze data.

Discussion Character

  • Exploratory
  • Technical explanation
  • Conceptual clarification
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Some participants note that mgfs can simplify finding moments of random variables, but raise questions about the interpretation of mgfs evaluated at different values of t, particularly t=1.
  • One participant explains that the mgf of a random variable X is defined as ##t\mapsto \mathbb{E}[e^{tX}]##, and its evaluation at t=1 yields ##\mathbb{E}[e^{X}]##, which is relevant for exponential functions of X.
  • Another participant questions the property that two distributions with the same mgf are identical, seeking clarification on its implications.
  • Some participants argue that having the same moments implies the same distribution, discussing the role of moments in describing data spread.
  • There is a discussion about the limitations of using sample moments to draw conclusions about the underlying population, with some asserting that sample moments can provide useful estimates.
  • Participants mention the lognormal distribution as an example of a distribution derived from exponential functions of normally distributed random variables, highlighting its relevance in finance.
  • Some express skepticism about the practical applications of certain distributions, questioning their relevance outside of academic contexts.

Areas of Agreement / Disagreement

Participants express differing views on the implications of moment equality for distributions, the interpretation of mgfs, and the utility of moments in analyzing data. The discussion remains unresolved regarding the extent to which moments can be used to differentiate between data samples.

Contextual Notes

Participants highlight the distinction between random variables and sets of numbers, noting that sets require additional structure to be treated as random variables. There is also mention of the limitations of sample moments in providing insights about the population.

Josh S Thompson
mgfs can be used to find the moments of a random variable with relative ease.

But you have to evaluate the function at t=0: the nth moment is the nth derivative of the mgf at t=0. Many times these functions are defined for all real values of t.

How would you interpret an mgf evaluated at t=1? How does t relate to the probability distribution?
 
Since the mgf of X is ##t\mapsto \mathbb{E}[e^{tX}]##, its evaluation at ##t=1## is ##\mathbb{E}[e^{X}]## which will, for instance, be of interest if you are working with exponential functions of the random variable ##X##.
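To make that concrete, here is a quick Monte Carlo sketch (an illustration added here, not part of the original posts): for a standard normal ##X##, the exact value is ##M(1)=\mathbb{E}[e^{X}]=e^{1/2}\approx 1.6487##, and a plain sample average of ##e^{X}## recovers it.

```python
# Illustration (not from the thread): estimating an mgf at t = 1 by Monte Carlo.
# For a standard normal X, M(t) = E[e^{tX}] = exp(t^2 / 2), so M(1) = exp(1/2).
import math
import random

random.seed(0)
n = 200_000
samples = [random.gauss(0.0, 1.0) for _ in range(n)]

def mgf_estimate(t, xs):
    """Sample average of e^(t*x): a Monte Carlo estimate of E[e^{tX}]."""
    return sum(math.exp(t * x) for x in xs) / len(xs)

m1 = mgf_estimate(1.0, samples)
print(m1, math.exp(0.5))  # estimate vs the exact value exp(1/2) ≈ 1.6487
```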

One relationship of mgf to the pdf is (apart from the obvious one that the mgf is obtained by integrating a function that involves the pdf), according to wiki, that:

'An important property of the moment-generating function is that if two distributions have the same moment-generating function [ie giving identical values for all values of ##t##], then they are identical at almost all points.'
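One way to see why matching at every value of ##t## matters (a sketch with assumed distributions, added for illustration): two distributions can agree on their first few moments yet still have different mgfs. A standard normal and a variance-matched uniform share the same mean and variance, but their mgfs separate away from ##t=0##.

```python
# Illustration with assumed distributions (not from the thread): a normal and a
# variance-matched uniform agree on mean and variance, yet their mgfs differ
# away from t = 0, so matching the mgf at every t is the stronger condition.
import math

sigma = 1.0
a = sigma * math.sqrt(3.0)  # Uniform(-a, a) has variance a^2 / 3 = sigma^2

def mgf_normal(t):
    # N(0, sigma^2): M(t) = exp(sigma^2 * t^2 / 2)
    return math.exp(0.5 * sigma**2 * t**2)

def mgf_uniform(t):
    # Uniform(-a, a): M(t) = sinh(a*t) / (a*t) for t != 0, and M(0) = 1
    return math.sinh(a * t) / (a * t) if t != 0 else 1.0

# The second derivative at t = 0 (central difference) recovers the variance
# for both, matching the "moments from derivatives at t = 0" idea:
h = 1e-4
var_n = (mgf_normal(h) - 2.0 * mgf_normal(0.0) + mgf_normal(-h)) / h**2
var_u = (mgf_uniform(h) - 2.0 * mgf_uniform(0.0) + mgf_uniform(-h)) / h**2
print(var_n, var_u)  # both close to 1.0

# ...but at t = 1 the two mgfs visibly disagree:
print(mgf_normal(1.0), mgf_uniform(1.0))
```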
 
andrewkirk said:
'An important property of the moment-generating function is that if two distributions have the same moment-generating function [ie giving identical values for all values of ##t##], then they are identical at almost all points.'
why
 
What do you need t for? I think if two random variables have the same moments then they have the same distribution, because the moments describe the spread of the data (or the spread of the random variable). Just thinking it out: if you have infinitely many moments, why does that not describe the random variable with infinite precision?
 
For example, [2,3,4,3,2,10,11] would have a similar variance to, say, [2,5,6,3,7,3,2], but it is different data. How do you use moments to see the difference in the data?
 
Josh S Thompson said:
For example, [2,3,4,3,2,10,11] would have a similar variance to, say, [2,5,6,3,7,3,2], but it is different data. How do you use moments to see the difference in the data?
Those two things are sets of numbers, not random variables. Random variables have moment generating functions. Sets of numbers do not, without doing the additional work of building structure around them to turn them into a random variable.

The sets of numbers will have 'sample moments', but that's not enough to give a useful moment generating function.
 
andrewkirk said:
Those two things are sets of numbers, not random variables. Random variables have moment generating functions. Sets of numbers do not, without doing the additional work of building structure around them to turn them into a random variable.

The sets of numbers will have 'sample moments', but that's not enough to give a useful moment generating function.

OK, so can you use sample moments to draw conclusions about that sample?
 
Can you explain how to interpret this:
##\mathbb{E}[X^3] - \mathbb{E}[X^2]^{3/2}##
 
Josh S Thompson said:
OK, so can you use sample moments to draw conclusions about that sample?
One doesn't need to draw conclusions about the sample, because all the possible information about the sample is plainly visible right there in the sample. What one typically does is use the sample to draw conclusions (make estimates) about the population from which it has been sampled. The sample moments can be used for that. The simplest example of doing that is where the sample mean is used as an estimate of the population mean.
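A minimal sketch of that simplest example (with an assumed population, added for illustration): draws from an exponential population with mean 2.0, whose sample mean estimates the population mean.

```python
# Illustration (assumed population, not from the thread): the sample mean of
# draws from an Exponential population with mean 2.0 estimates that mean.
import random

random.seed(1)
pop_mean = 2.0
sample = [random.expovariate(1.0 / pop_mean) for _ in range(50_000)]
sample_mean = sum(sample) / len(sample)
print(sample_mean)  # close to the population mean 2.0
```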
 
  • #10
andrewkirk said:
One doesn't need to draw conclusions about the sample, because all the possible information about the sample is plainly visible right there in the sample. What one typically does is use the sample to draw conclusions (make estimates) about the population from which it has been sampled. The sample moments can be used for that. The simplest example of doing that is where the sample mean is used as an estimate of the population mean.

I understand that. This is what I'm trying to do: I'm looking at two samples of data, [2,3,4,3,2,10,11], which has an average difference from the mean of 3.14, and [1,1,8,7,8,2,1], which also has an average difference from the mean of 3.14.

Surely these two data samples are not the same. If you calculate the standard deviation, the first set of numbers has a higher standard deviation because it incorporates the second moment. I'm asking whether higher moments can be used to look at the spread of data, and how you would do this. I think it's just one of those things that you get or don't get. Please don't answer by talking about estimators for a population, because in the real world things aren't distributed with two parameters.
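A quick check of those numbers (a sketch added for illustration, not part of the original posts): both samples do share the same mean absolute deviation, while their second sample central moments differ, which is exactly the extra information a higher moment carries.

```python
# Checking the numbers above: both samples share the same mean absolute
# deviation, but their second (and higher) sample central moments differ.
a = [2, 3, 4, 3, 2, 10, 11]
b = [1, 1, 8, 7, 8, 2, 1]

def mean(xs):
    return sum(xs) / len(xs)

def mean_abs_dev(xs):
    m = mean(xs)
    return mean([abs(x - m) for x in xs])

def central_moment(xs, k):
    m = mean(xs)
    return mean([(x - m) ** k for x in xs])

print(mean_abs_dev(a), mean_abs_dev(b))            # both 22/7 ≈ 3.1429
print(central_moment(a, 2), central_moment(b, 2))  # 88/7 ≈ 12.57 vs 72/7 ≈ 10.29
```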
 
  • #11
Josh S Thompson said:
Surely these two data samples are not the same.
Of course they're not the same. For a start, one has a lower minimum and the other has a higher maximum.

I'm afraid I can't see what this has to do with evaluating MGFs at t=1.
 
  • #12
Why would you be working with exponential functions of a random variable?
 
  • #13
Josh S Thompson said:
Why would you be working with exponential functions of a random variable?
A very common example is the lognormal distribution, which is ubiquitous in finance. It's the distribution of a random variable that is the exponent of a normally distributed RV.
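A minimal sketch of that construction (parameters assumed for illustration): exponentiate normal draws and compare the sample mean with the closed form ##e^{\mu+\sigma^2/2}##, which is exactly the normal mgf evaluated at ##t=1##.

```python
# Sketch with assumed parameters (not from the thread): a lognormal draw is
# just e^Z for a normal Z. If Z ~ N(mu, sigma^2), then E[e^Z] = exp(mu + sigma^2/2),
# which is the normal mgf evaluated at t = 1.
import math
import random

random.seed(2)
mu, sigma = 0.0, 0.5
draws = [math.exp(random.gauss(mu, sigma)) for _ in range(200_000)]

sample_mean = sum(draws) / len(draws)
exact_mean = math.exp(mu + sigma**2 / 2)  # exp(0.125) ≈ 1.1331
print(sample_mean, exact_mean)
print(min(draws) > 0)  # bounded below at zero, as noted in the thread
```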
 
  • #14
How do you derive that distribution?
 
  • #15
And what do you use it for? I'm pretty sure they only use it in academia for some reason.
 
  • #16
Does it have a cdf or something? Because that's not that good of a reason, but what do I know.
 
  • #17
As I said it is used everywhere in finance - and in the business world, not just in academia. Pricing and valuation in insurance and banking would be unrecognisable without it. If you'd like to know more about it, the wikipedia article I linked above is very good.

It's used because it's skew and bounded below at zero, which is a feature of many random variables in finance like insurance claim sizes or investment returns; and also because it's closely related to the normal distribution, which is very well understood and easy to manipulate.
 
  • #18
OK, thank you, I appreciate the help, but I still think you can use moments of a sample to determine the spread of the data.
 
