# A Joint Used to Show Lack of Correlation?

#### StoneTemplePython

Gold Member
I think we can appeal to a more advanced form of integration and solve that technical problem, but we can also sidestep the question of a joint density. To compute the expected value of a function of a random variable, $g(X)$, we only need the density $f(x)$ for $X$: $E(g(X)) = \int g(x) f(x)\, dx$. The question of whether $X$ is correlated with $X^2$ only requires computing $E(X)$, $E(X^2)$, and $E((X)(X^2)) = E(X^3)$. Those expectations are functions of $X$, so they can be computed using only the 1D density function for $X$.

It would be an interesting exercise in abstract mathematics to say the correct words for defining a joint density for $(X, X^2)$ in 2D and to use that definition to show that computation using the joint density is equivalent to taking the 1D view of things. However, I don't know whether that interests you - or whether I could do it.
This is popularly called the Law of the Unconscious Statistician. It is implied by the Law of Total Expectation.
Or, in more abstract form, it is implied by the fact that $E\Big[g(X)\big \vert X\Big] = g(X)$.
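As a concrete instance of the 1D view, here is a minimal sketch (the three-point distribution below is an assumed example, not from the thread) that computes $E(X)$, $E(X^2)$, and $E(X^3)$ from the pmf of $X$ alone and checks that $X$ and $X^2$ are uncorrelated even though $X^2$ is a function of $X$:

```python
from fractions import Fraction as F

# Assumed example: a symmetric discrete distribution for X,
# P(X = -1) = 1/4, P(X = 0) = 1/2, P(X = 1) = 1/4.
pmf = {-1: F(1, 4), 0: F(1, 2), 1: F(1, 4)}

def E(g):
    """LOTUS: E[g(X)] = sum over x of g(x) * P(X = x)."""
    return sum(g(x) * p for x, p in pmf.items())

EX  = E(lambda x: x)       # E[X]   = 0 (symmetry)
EX2 = E(lambda x: x**2)    # E[X^2] = 1/2
EX3 = E(lambda x: x**3)    # E[(X)(X^2)] = E[X^3] = 0 (symmetry)

# Cov(X, X^2) = E[X^3] - E[X] E[X^2] = 0: uncorrelated, yet dependent.
cov = EX3 - EX * EX2
print(EX, EX2, EX3, cov)
```

Everything above uses only the one-dimensional pmf for $X$; no joint density for $(X, X^2)$ is ever written down.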

#### FactChecker

Gold Member
2018 Award
So there must be something in the context of the problem that either explicitly or implicitly defines the density function of $X$ (the joint density function of $(X,X^2)$ can be derived from the density of $X$).

#### WWGD

Gold Member
> So there must be something in the context of the problem that either explicitly or implicitly defines the density function of $X$ (the joint density function of $(X,X^2)$ can be derived from the density of $X$).

And it only took 27 posts just to get to the right formulation of the question. Serenity now!

#### WWGD

Gold Member
I just thought of another seemingly implicit assumption about distributions: describing the mean as the arithmetic average of numbers, i.e. $(x_1 + x_2 + \dots + x_n)/n$, assumes a uniform distribution. I don't remember this assumption being stated explicitly.

#### StoneTemplePython

Gold Member
> I just thought of another seemingly implicit assumption about distributions: describing the mean as the arithmetic average of numbers, i.e. $(x_1 + x_2 + \dots + x_n)/n$, assumes a uniform distribution. I don't remember this assumption being stated explicitly.

This is completely inaccurate. It's a definition and assumes no such thing. You are also likely mixing up statistics and probability theory.

Among other things, the SLLN tells us that for sums of iid random variables with a finite mean,
$\frac{1}{n}(X_1 + X_2 + \dots + X_n) \to \mu$ with probability one.

Suppose those $X_i$'s are iid standard normal random variables. Then for any natural number $n$,
$\frac{1}{n}(X_1 + X_2 + \dots + X_n)$ is a Gaussian-distributed random variable, not a uniformly distributed one. You should have been able to figure this out yourself by looking at the MGF or CF of uniform random variables and, say, any other convolution of iid random variables.

- - - -
edit:
If you know what you're doing, you can even apply the SLLN (or WLLN) to non-iid random variables with different means (supposing you meet a sufficient condition like the Kolmogorov criterion), which makes the idea that
$\frac{1}{n}(X_1 + X_2 + \dots + X_n)$
is somehow uniformly distributed even more bizarre.
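A quick simulation sketch of this point (the values $n = 100$ and 2000 trials are assumed demo parameters): the average of iid standard normals concentrates near $\mu = 0$ with spread $1/\sqrt{n}$, exactly as a Gaussian rescaling predicts, not as any uniform distribution would:

```python
import random
import statistics

random.seed(0)  # reproducible demo run

n, trials = 100, 2000  # assumed demo values

# Each trial draws n iid N(0, 1) variables and records (1/n)(X_1 + ... + X_n).
averages = [
    statistics.fmean(random.gauss(0.0, 1.0) for _ in range(n))
    for _ in range(trials)
]

# SLLN: the averages concentrate near mu = 0 ...
m = statistics.fmean(averages)

# ... and their spread matches the Gaussian prediction 1/sqrt(n) = 0.1.
s = statistics.stdev(averages)
print(m, s)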

#### WWGD

Gold Member
> This is completely inaccurate. It's a definition and assumes no such thing. You are also likely mixing up statistics and probability theory.
>
> Among other things, the SLLN tells us that for sums of iid random variables with a finite mean,
> $\frac{1}{n}(X_1 + X_2 + \dots + X_n) \to \mu$ with probability one.
>
> Suppose those $X_i$'s are iid standard normal random variables. Then for any natural number $n$,
> $\frac{1}{n}(X_1 + X_2 + \dots + X_n)$ is a Gaussian-distributed random variable, not a uniformly distributed one. You should have been able to figure this out yourself by looking at the MGF or CF of uniform random variables and, say, any other convolution of iid random variables.
>
> If you know what you're doing, you can even apply the SLLN (or WLLN) to non-iid random variables with different means (supposing you meet a sufficient condition like the Kolmogorov criterion), which makes the idea that
> $\frac{1}{n}(X_1 + X_2 + \dots + X_n)$
> is somehow uniformly distributed even more bizarre.
I am not saying that the expression is uniformly distributed. What I mean is that, strictly speaking, the expected value or mean is defined (discrete case) as $\sum_i x_i f(x_i)$, where $f(x)$ is the associated density. But if we define the mean/expected value as $(x_1 + \dots + x_n)/n$, this means we are assuming $f(x_i) = 1/n$ for all $x_i$, or at least it ends up coming down to the same thing as $x_1 \cdot \frac{1}{n} + \dots + x_n \cdot \frac{1}{n}$.

#### StoneTemplePython

Gold Member
> I am not saying that the expression is uniformly distributed. What I mean is that, strictly speaking, the expected value or mean is defined (discrete case) as $\sum_i x_i f(x_i)$, where $f(x)$ is the associated density. But if we define the mean/expected value as $(x_1 + \dots + x_n)/n$, this means we are assuming $f(x_i) = 1/n$ for all $x_i$, or at least it ends up coming down to the same thing as $x_1 \cdot \frac{1}{n} + \dots + x_n \cdot \frac{1}{n}$.

I understand the analogy you're trying to make -- I'm tempted to sign off on "at least it ends up coming down to the same thing..." though I think it creates problems and isn't very helpful.

At this stage I'd suggest not having an interpretation -- just understanding the definition and the inequalities that are deployed. It will also make it easier to understand the CLT -- otherwise what is that -- an implicit 'uniform distribution between $x_i$'s, except they have "extra" mass rescaled by the square root of $n$'? That doesn't make any sense to me.

In both cases (really the WLLN and CLT), whether you divide by $n$ or $\sqrt{n}$, it really has to do with carefully managing how variance grows/contracts/stabilizes as you add random variables. That's really the point.

- - - -
note: You're using the wrong terminology. A discrete random variable doesn't have a probability density -- absolutely continuous ones do. Too much of this thread reads like "Casual" posts in the Math section -- something you've complained about before.
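The distinction between the general definition and the uniform special case can be made concrete. In this sketch (the non-uniform pmf below is an assumed example), $\sum_i x_i f(x_i)$ reduces to the arithmetic average only when every $f(x_i) = 1/n$:

```python
from fractions import Fraction as F

values = [1, 2, 3]

# Assumed example pmf on {1, 2, 3}; general definition E[X] = sum x_i f(x_i).
pmf = {1: F(1, 2), 2: F(1, 3), 3: F(1, 6)}
mean_general = sum(x * pmf[x] for x in values)   # 1/2 + 2/3 + 1/2 = 5/3

# Arithmetic average = the special case f(x_i) = 1/n for every support point.
n = len(values)
mean_uniform = sum(x * F(1, n) for x in values)  # (1 + 2 + 3)/3 = 2

print(mean_general, mean_uniform)
```

With any non-uniform weighting the two notions disagree, which is why the arithmetic average is a special case of the definition rather than the definition itself.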

#### FactChecker

Gold Member
2018 Award
Anything put in terms of the random variables, like $(X_1+X_2+\dots+X_n)/n$, is a random variable with a probability distribution, not a fixed number. Anything put in terms of the sample results, like $(x_1+x_2+\dots+x_n)/n$, is a single number, which estimates the mean but is not exact.

#### WWGD

Gold Member
> Anything put in terms of the random variables, like $(X_1+X_2+\dots+X_n)/n$, is a random variable with a probability distribution, not a fixed number. Anything put in terms of the sample results, like $(x_1+x_2+\dots+x_n)/n$, is a single number, which estimates the mean but is not exact.

Well, I was referring to the $x_i$ as the population itself, so this _is_ the mean as I know it. I would agree if the $x_i$ were sample data.

#### Stephen Tashi

> Well, I was referring to the $x_i$ as the population itself, so this _is_ the mean as I know it. I would agree if the $x_i$ were sample data.

If the $x_1, x_2, \dots, x_n$ are the possible values of the population, the mean of the population is not defined to be $\frac{\sum_{i=1}^n x_i}{n}$.

#### FactChecker

Gold Member
2018 Award
> If the $x_1, x_2, \dots, x_n$ are the possible values of the population, the mean of the population is not defined to be $\frac{\sum_{i=1}^n x_i}{n}$.

Unless they are uniformly distributed, each with probability $1/n$.

#### WWGD

Gold Member
> If the $x_1, x_2, \dots, x_n$ are the possible values of the population, the mean of the population is not defined to be $\frac{\sum_{i=1}^n x_i}{n}$.

Precisely. But this is the definition used in the book I browsed. I understand that the $x_i$ are scaled by $f(x_i)$. I may have missed a section where the author states the assumption that these variables are uniformly distributed. Edit: I remember that a random sample from a population is a collection $\{X_1, \dots, X_n\}$ of independent, identically distributed random variables.

Last edited:

#### FactChecker

Gold Member
2018 Award
> But this is the definition used in the book I browsed.

Just to be careful with words, I wouldn't call that the definition. I would call $\sum_i f(x_i)\, x_i$ the definition. In the case of a uniform distribution, this turns out to be $\sum_{i=1}^n \frac{1}{n} x_i$.

#### Stephen Tashi

> Precisely. But this is the definition used in the book I browsed.

You need to read the book carefully enough to understand the difference between a population mean and a sample mean. (The conceptual structure of mathematical statistics is extremely sophisticated. For example, what is the definition of a "statistic"?)

The fact that the sample mean of observations $x_1,x_2,...x_n$ is defined to be $\frac{ \sum_{i=1}^n x_i}{n}$ has no assumption or implication about the distribution from which the samples are drawn. If the sample values are $\{1,1,2,2,2,2,3\}$ there is nothing in the definition of sample mean that says you treat the values $\{1,2,3\}$ as if they are uniformly distributed.
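The sample from the post above makes the point directly: averaging the observations (repeats included) is not the same as putting weight $1/3$ on each distinct value. A quick sketch:

```python
from fractions import Fraction as F

# The sample from the post above, repeats included.
sample = [1, 1, 2, 2, 2, 2, 3]

# Sample mean: plain average of the observations.
sample_mean = F(sum(sample), len(sample))        # 13/7

# What treating the distinct values {1, 2, 3} as uniform would give instead.
distinct = sorted(set(sample))
uniform_mean = F(sum(distinct), len(distinct))   # 6/3 = 2

print(sample_mean, uniform_mean)
```

The two numbers differ because the sample mean implicitly weights each distinct value by its observed frequency, not by $1/3$.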

#### WWGD

Gold Member
Please see attached, page 5 of Doug C. Montgomery's Applied Stats. Couldn't make the picture clearer, but hopefully it's clear enough.

#### Attachments

• (image attachment, 27.3 KB)

#### Stephen Tashi

It's too blurred. What are you trying to convey by quoting it?

#### WWGD

Gold Member
> Please see attached, page 5 of Doug C. Montgomery's Applied Stats. Couldn't make the picture clearer, but hopefully it's clear enough.

Seriously, I am not making it up; I am using the definition from the reference I cited. I am on my phone now; I will look it up on the web and, if it is there, I will attach it.

#### WWGD

Gold Member
> It's too blurred. What are you trying to convey by quoting it?

I am trying to show that the definition of the population mean used in the book is precisely the same as the arithmetic mean, for finite populations.

#### Stephen Tashi

> I am trying to show that the definition of the population mean used in the book is precisely the same as the arithmetic mean, for finite populations.
There is nothing in those words that assumes the values in the population are uniformly distributed. The expression $\frac{\sum_{i=1}^n x_i }{n}$ makes no assumption that each $x_i$ is a different value.

There is also nothing in those words that assumes the sampling procedure must be to give each member of the population the same probability of being included in the sample.

Where assumptions about the distribution enter the picture is when we want to prove theorems about the behavior of the sample mean as a random variable. A typical theorem would assume a sample is taken from a population in a particular manner (e.g. "random sampling without replacement"). However, the definition of the sample mean is not a theorem about the sample mean.

Last edited:
