Relating Moments from one Distribution to the Moments of Another

  • Context: Undergrad
  • Thread starter: Steve Zissou
  • Tags: Stats

Discussion Overview

The discussion revolves around the relationship between the moments of one probability distribution and those of another, particularly in the context of fitting a normal distribution to a gamma distribution. Participants explore how to characterize a random variable by relating its known moments to those of a different distribution.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested

Main Points Raised

  • One participant questions how to relate the moments of a normal distribution to those of a different distribution without specific parameter values.
  • Another participant suggests that defining a "fit" between distributions requires a precise mathematical formulation, including how to handle the tails of the distributions.
  • It is proposed that one could seek a gamma distribution that provides a least squares fit to the normal distribution, though uniqueness of the solution may not be guaranteed.
  • Participants discuss the importance of defining a function that measures the fit between two distributions based on the parameters of interest.
  • There is mention of the potential complexity in defining goodness of fit, especially regarding the treatment of tails in the distributions.
  • A reference to Abramowitz and Stegun is made as a resource for understanding the relationship between moments of distributions.

Areas of Agreement / Disagreement

Participants express varying views on how to define the relationship between the moments of different distributions, and there is no consensus on a specific method or approach to achieve this.

Contextual Notes

Participants highlight the need for precise definitions and criteria when discussing the relationship between distributions, indicating that assumptions about the distributions and their parameters are critical to the discussion.

Steve Zissou
TL;DR Summary: I hope to clarify my question soon, but I'd like to know how we can relate the moments of one sort of distribution to the moments of another.
Ok, I'm sure I can find a smarter way to pose this question, and I will try to define the question more carefully in coming days. That having been said, consider this:
Let's say we have a random variable X (or whatever). I can calculate the moments of this variable with no problem. In fact, let's say X is normal. But I would like to characterize the distribution by a different distribution. In other words, it is useful to model X by describing it with a different distribution. How can I relate the moments of X (which I know) to the moments of a new, different distribution?

Does this question make sense?

Thanks, friends
 
Steve Zissou said:
TL;DR Summary: I hope to clarify my question soon, but I'd like to know how we can relate the moments of one sort of distribution to the moments of another.

I can calculate the moments of this variable with no problem.
To calculate the moments you need to know the probability distribution. So how do you calculate the second moment of X?
 
Steve Zissou said:
In fact let's say X is normal. But I would like to characterize the distribution by a different distribution.

Saying that X is normally distributed doesn't define a particular distribution unless you also state the specific values of the parameters of that normal distribution. Let's say you begin by saying that X is normally distributed with mean 30 and standard deviation 15.8. Suppose you decide to characterize X by a different distribution. Are you asking how to characterize X with a distribution that may have different numerical values for its mean and standard deviation? Are you asking if there is only one way to pick different numerical values for the mean and standard deviation of the new distribution? If so, you haven't defined a precise mathematical problem. What criteria are we to use to judge which particular different numerical values "characterize" X?

If you are thinking of a situation where we have data from a sample that we use to estimate the unknown parameters of a distribution, then there may be different customary estimators (i.e. formulas for making the estimates) for different distributions. The estimators are random variables. The distribution of an estimator depends on the distribution we assume for X. There are various criteria for which estimators are "best" - depending on one's interests. In such cases, if you change the assumed distribution for X, you might change your estimating formula and get different numerical estimates for the parameters of the new distribution. There is no general procedure that applies to all such situations. It depends on the particular distributions and the goals of estimation.
 
hutchphd and Stephen Tashi, thank you for your quick replies. Stephen I think you've already made a great point. But regardless here is my attempt to "tighten up" my question.

Ok, so let's say we have some data that ostensibly "comes from" the normal:

$$f(x)=\frac{1}{\sigma\sqrt{2\pi}}\exp\left[ -\frac{1}{2}\left( \frac{x-\mu}{\sigma} \right)^2 \right]$$

...and has mean ##\mu##, variance ##\sigma^2## and skew 0 (whatever).

Now, for whatever reason, we'd like to "fit" this data with the Gamma distribution:

$$f(x)=\frac{\beta^\alpha}{\Gamma(\alpha)}x^{\alpha-1}\exp\left[ -\beta x \right]$$

with mean ##\frac{\alpha}{\beta}##, variance ##\frac{\alpha}{\beta^2}## and skew ##\frac{2}{\sqrt{\alpha}}.##

Is there a way to think about relating ##\mu## and ##\sigma^2## to ##\alpha## and ##\beta##?

Thanks!
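If "relate" simply means matching the first two moments, this particular question has a closed-form answer: equate the gamma's mean ##\frac{\alpha}{\beta}## and variance ##\frac{\alpha}{\beta^2}## to ##\mu## and ##\sigma^2##, which gives ##\alpha = \mu^2/\sigma^2## and ##\beta = \mu/\sigma^2##; the gamma's skew is then forced to be ##2/\sqrt{\alpha} = 2\sigma/\mu##. A minimal sketch of this method of moments (the numbers 30 and 15.8 are just the illustrative values used earlier in the thread):

```python
import math

def gamma_params_from_moments(mu, sigma2):
    """Method of moments: choose alpha, beta so the gamma's mean (alpha/beta)
    and variance (alpha/beta**2) equal mu and sigma2."""
    if mu <= 0 or sigma2 <= 0:
        raise ValueError("matching a gamma requires mu > 0 and sigma2 > 0")
    beta = mu / sigma2            # solves alpha/beta = mu, alpha/beta**2 = sigma2
    alpha = mu ** 2 / sigma2
    return alpha, beta

# illustrative numbers only
alpha, beta = gamma_params_from_moments(30.0, 15.8 ** 2)
forced_skew = 2.0 / math.sqrt(alpha)   # equals 2*sigma/mu; not free to choose
```

Note that with only two free parameters the gamma cannot also match the normal's skew of 0, which is one concrete illustration of the definitional issues raised above.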
 
Steve Zissou said:
Now, for whatever reason, we'd like to "fit" this data with the Gamma distribution:

Steve Zissou said:
Is there a way to think about relating ##\mu## and ##\sigma^2## to ##\alpha## and ##\beta##?

Yes - provided you define "fit" in some precise way. For example, you might seek to find the gamma distribution that is the "least squares" fit to the normal distribution.

One complication to that approach is that there may not be a unique solution since there are several parameters involved. And a practical consideration is that one may not care about the tails of the distributions involved.
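As a deliberately crude sketch of the least-squares idea, one can define the fit as the integrated squared difference between the two densities and search a parameter grid for the gamma that minimizes it. The grid, the integration range [0, 100], and the example parameters here are all assumptions for illustration, not part of the thread:

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def gamma_pdf(x, alpha, beta):
    if x <= 0.0:
        return 0.0
    return beta ** alpha / math.gamma(alpha) * x ** (alpha - 1.0) * math.exp(-beta * x)

def squared_error(alpha, beta, mu, sigma, lo=0.0, hi=100.0, n=200):
    """Midpoint-rule approximation of the integral of (f_gamma - f_normal)^2."""
    h = (hi - lo) / n
    return h * sum(
        (gamma_pdf(lo + (i + 0.5) * h, alpha, beta)
         - normal_pdf(lo + (i + 0.5) * h, mu, sigma)) ** 2
        for i in range(n)
    )

mu, sigma = 30.0, 15.8
# crude grid search; a real fit would hand squared_error to a numerical optimizer
err, alpha_hat, beta_hat = min(
    (squared_error(a / 10.0, b / 100.0, mu, sigma), a / 10.0, b / 100.0)
    for a in range(10, 80, 2)       # alpha from 1.0 to 7.8
    for b in range(2, 40, 2)        # beta from 0.02 to 0.38
)
```

As noted above, nothing guarantees the minimizer is unique; a grid search like this simply reports one of the near-ties if several (##\alpha##, ##\beta##) pairs fit comparably well.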
 
@Stephen Tashi
I see. So 1) define some best fit, and 2) make some specification for behavior of the tails
 
Steve Zissou said:
@Stephen Tashi
I see. So 1) define some best fit, and 2) make some specification for behavior of the tails

In general, define a function that measures the "fit" between two distributions in some precise way so that the function is a function of the parameters of interest. Treat the parameters of one distribution as given and try to find the parameters of the other distribution that maximize the fit.

Of course defining the task precisely does not guarantee it has a solution or a unique solution.

A function measuring fit could have rules that eliminate the tails from consideration or it could weight what happens on the tails less than what happens on the more probable values.
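One way to make that concrete is to put a weight function inside the fit integral, for instance weighting squared disagreement by the normal density itself, so the tails count for little. The particular weight chosen here is just one illustrative possibility:

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def gamma_pdf(x, alpha, beta):
    if x <= 0.0:
        return 0.0
    return beta ** alpha / math.gamma(alpha) * x ** (alpha - 1.0) * math.exp(-beta * x)

def weighted_misfit(alpha, beta, mu, sigma, lo=0.0, hi=100.0, n=400):
    """Integrate w(x) * (f_gamma - f_normal)^2 dx with w(x) = f_normal(x),
    so disagreement in the tails counts far less than near the mode."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * h
        w = normal_pdf(x, mu, sigma)      # down-weights the tails
        total += w * (gamma_pdf(x, alpha, beta) - normal_pdf(x, mu, sigma)) ** 2 * h
    return total

# a moment-matched gamma should score far better against N(30, 15.8^2)
# than an arbitrarily chosen gamma(1, 1)
good = weighted_misfit(3.6, 0.12, 30.0, 15.8)
bad = weighted_misfit(1.0, 1.0, 30.0, 15.8)
```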

We often see articles along the lines of "The Such-and-Such Approximation to the So-and-So Distribution" (e.g. the normal approximation to the binomial). I tend to get lost in the details of such articles without a clear idea of how the authors have defined the goodness of approximation. If you look up articles of this type, you might find examples relating the moments of two distributions.
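For the normal approximation to the binomial mentioned above, the usual recipe is exactly a moment match: set ##\mu = np## and ##\sigma^2 = np(1-p)##. A quick numerical sanity check (the values n = 100, p = 0.3 are arbitrary):

```python
import math

# Moment-matched normal approximation to Binomial(n, p):
# mu = n*p and sigma^2 = n*p*(1-p)
n, p = 100, 0.3
mu = n * p
sigma = math.sqrt(n * p * (1 - p))

def binom_pmf(k):
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def normal_pdf(x):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

# near the mean the pmf and the matched density agree to a few percent
ratio = binom_pmf(30) / normal_pdf(30.0)
```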
 
I have always found perusing Abramowitz and Stegun chapter 26 to be edifying. It is pretty condensed.
 