MHB Simplifying a Messy Question - Any Help Appreciated!

  • Thread starter: nacho-man
  • Tags: Simplifying
AI Thread Summary
The discussion revolves around simplifying a complex question related to a joint distribution function and its properties. Participants express confusion about integrating the function and understanding how certain values of lambda affect the independence of random variables X and Y. The joint distribution is identified as a specific case of the bivariate normal distribution, which has well-known properties that can aid in solving the problem. Clarifications are sought regarding the marginal distributions and the necessity of showing integration steps versus using properties directly. Overall, the thread emphasizes the importance of understanding bivariate distributions and their implications for independence and marginalization.
nacho-man
Please refer to the attached image.

Is there a way to simplify this question? It looks really messy but I have a feeling there is some nifty way around it. Surely they don't want us to integrate that entire function.

I also have no idea about part d)
how can some values of lambda result in X and Y being independent, and others not?

Any help appreciated.
Thanks in advance
 

Attachments

  • mathq2.jpg (16.9 KB)
nacho said:
Please refer to the attached image.

Is there a way to simplify this question? It looks really messy but I have a feeling there is some nifty way around it. Surely they don't want us to integrate that entire function.

I also have no idea about part d)
how can some values of lambda result in X and Y being independent, and others not?

Any help appreciated.
Thanks in advance

Before trying a 'brute force' approach, it may be useful to observe that the given joint distribution...

$\displaystyle f(x,y) = \frac{1}{2\ \pi\ \lambda^{3}}\ e^{- \frac{1}{2\ \lambda^{4}}\ \{(x-\lambda)^{2} + (y-\lambda)^{2} - 2\ \sqrt{1-\lambda^{2}}\ (x-\lambda)\ (y-\lambda)\}}\ (1)$

... is a particular case of the bivariate normal distribution...

$\displaystyle f(x,y) = \frac{1}{2\ \pi\ \sigma_{x}\ \sigma_{y}\ \sqrt{1- \rho^{2}}}\ e^{- \frac{z}{2\ (1-\rho^{2})}}\ (2)$

... where...

$\displaystyle z = \frac{(x-\mu_{x})^{2}}{\sigma_{x}^{2}} + \frac{(y-\mu_{y})^{2}}{\sigma_{y}^{2}} - 2\ \frac{\rho\ (x - \mu_{x})\ (y-\mu_{y})}{\sigma_{x}\ \sigma_{y}}\ (3) $

... and $\displaystyle \mu_{x}= \mu_{y} = \sigma_{x} = \sigma_{y} = \sqrt{1-\rho^{2}} = \lambda$...
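
In detail, matching the coefficients of (1) with (2) and (3) gives...

$\displaystyle 2\ \pi\ \sigma_{x}\ \sigma_{y}\ \sqrt{1- \rho^{2}} = 2\ \pi\ \lambda \cdot \lambda \cdot \lambda = 2\ \pi\ \lambda^{3}\ , \qquad 2\ (1-\rho^{2})\ \sigma_{x}^{2} = 2\ \lambda^{2} \cdot \lambda^{2} = 2\ \lambda^{4}\ , \qquad \rho = \sqrt{1-\lambda^{2}}$

... which also requires $0 < \lambda \le 1$, so that $\sigma_{x} = \sigma_{y} = \lambda > 0$ and $\rho$ is real...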

Kind regards

$\chi$ $\sigma$
 
thanks chi

I am unsure what to do with this information; how am I supposed to utilise it?

I have a feeling you want me to use the known properties of the bivariate normal distribution, and that this function will share those properties, just with the parameters substituted?

Could you also suggest what area I should study in order to solve questions like these?
 
chisigma said:
Before trying a 'brute force' approach, it may be useful to observe that the given joint distribution...

$\displaystyle f(x,y) = \frac{1}{2\ \pi\ \lambda^{3}}\ e^{- \frac{1}{2\ \lambda^{4}}\ \{(x-\lambda)^{2} + (y-\lambda)^{2} - 2\ \sqrt{1-\lambda^{2}}\ (x-\lambda)\ (y-\lambda)\}}\ (1)$

... is a particular case of the bivariate normal distribution...

$\displaystyle f(x,y) = \frac{1}{2\ \pi\ \sigma_{x}\ \sigma_{y}\ \sqrt{1- \rho^{2}}}\ e^{- \frac{z}{2\ (1-\rho^{2})}}\ (2)$

... where...

$\displaystyle z = \frac{(x-\mu_{x})^{2}}{\sigma_{x}^{2}} + \frac{(y-\mu_{y})^{2}}{\sigma_{y}^{2}} - 2\ \frac{\rho\ (x - \mu_{x})\ (y-\mu_{y})}{\sigma_{x}\ \sigma_{y}}\ (3) $

... and $\displaystyle \mu_{x}= \mu_{y} = \sigma_{x} = \sigma_{y} = \sqrt{1-\rho^{2}} = \lambda$...

The main reason I bring up the bivariate normal distribution is that we can use its very convenient properties...

Bivariate Normal Distribution -- from Wolfram MathWorld

Regarding point a), the marginal density of X is...

$\displaystyle f_{x} (x) = \int_{- \infty}^{+ \infty} f(x,y)\ dy = \frac{1}{\sigma_{x}\ \sqrt{2\ \pi}}\ e^{- \frac{(x - \mu_{x})^{2}}{2\ \sigma_{x}^{2}}} = \frac{1}{\lambda\ \sqrt{2\ \pi}}\ e^{- \frac{(x - \lambda )^{2}}{2\ \lambda^{2}}}\ (1)$
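
If one prefers to double check this numerically rather than carry out the integration by hand, a quick sketch along these lines works (the values $\lambda = 0.6$ and $x = 1.3$ below are arbitrary test choices, not part of the problem)...

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

lam = 0.6   # arbitrary test value in (0, 1]
x0 = 1.3    # arbitrary point at which to compare the two expressions

def joint(x, y):
    """Joint density (1), with rho = sqrt(1 - lam**2)."""
    q = (x - lam)**2 + (y - lam)**2 \
        - 2 * np.sqrt(1 - lam**2) * (x - lam) * (y - lam)
    return np.exp(-q / (2 * lam**4)) / (2 * np.pi * lam**3)

# marginal of X obtained by integrating the joint density over y...
numeric, _ = quad(lambda y: joint(x0, y), -np.inf, np.inf)

# ... and the closed form N(lambda, lambda^2) claimed above
closed_form = norm.pdf(x0, loc=lam, scale=lam)

print(numeric, closed_form)   # the two values agree to numerical precision
```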

Now point b) is a direct consequence of (1)... why?...

Kind regards

$\chi$ $\sigma$
 
chisigma said:
The main reason I bring up the bivariate normal distribution is that we can use its very convenient properties...

Bivariate Normal Distribution -- from Wolfram MathWorld

Regarding point a), the marginal density of X is...

$\displaystyle f_{x} (x) = \int_{- \infty}^{+ \infty} f(x,y)\ dy = \frac{1}{\sigma_{x}\ \sqrt{2\ \pi}}\ e^{- \frac{(x - \mu_{x})^{2}}{2\ \sigma_{x}^{2}}} = \frac{1}{\lambda\ \sqrt{2\ \pi}}\ e^{- \frac{(x - \lambda )^{2}}{2\ \lambda^{2}}}\ (1)$

Now point b) is a direct consequence of (1)... why?...

Knowing $\displaystyle f_{x} (x)$, the following useful article...

http://mpdc.mae.cornell.edu/Courses/MAE714/biv-normal.pdf

... permits us to find the conditional distribution...

$\displaystyle f_{Y|X=x} (y) = \frac{f(x,y)}{f_{x}(x)} = \frac{1}{\sqrt{2\ \pi}\ \sigma_{y}\ \sqrt{1 - \rho^{2}}}\ e^{- \frac{1}{2\ \sigma_{y}^{2}\ (1-\rho^{2})}\ \{y - \mu_{y} - \rho\ \frac{\sigma_{y}}{\sigma_{x}}\ (x-\mu_{x})\}^{2}} = \frac{1}{\sqrt{2\ \pi}\ \lambda^{2}}\ e^{- \frac{1}{2\ \lambda^{4}}\ \{y - \lambda - \sqrt{1-\lambda^{2}}\ (x-\lambda)\}^{2}}\ (1)$

Now (1) is a normal distribution with mean $\displaystyle \mu_{y} + \rho\ \frac{\sigma_{y}}{\sigma_{x}}\ (x - \mu_{x})$ and variance $\displaystyle (1- \rho^{2})\ \sigma_{y}^{2}$, so that... $\displaystyle E \{Y|X=x\} = \lambda + \sqrt{1 - \lambda^{2}}\ (x - \lambda)\ (2)$
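
Substituting $\mu_{x} = \mu_{y} = \lambda$, $\sigma_{x} = \sigma_{y} = \lambda$ and $\rho = \sqrt{1-\lambda^{2}}$ explicitly, so that the powers of $\lambda$ are visible...

$\displaystyle \mu_{Y|X=x} = \lambda + \sqrt{1-\lambda^{2}}\ \frac{\lambda}{\lambda}\ (x - \lambda) = \lambda + \sqrt{1-\lambda^{2}}\ (x - \lambda)\ , \qquad \sigma^{2}_{Y|X=x} = (1-\rho^{2})\ \sigma_{y}^{2} = \lambda^{2}\ \lambda^{2} = \lambda^{4}$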

... and point b) is answered... the remaining points c) and d) shouldn't be too difficult to attack at this point... especially point d)!...

Kind regards

$\chi$ $\sigma$
 
point of clarification - how would you describe the parameters of our joint PDF in terms of the bivariate normal distribution?

also, for part d), are X and Y independent when $\lambda = 1$?
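
(To show what I mean, I tried a rough numerical check of whether $f_{X,Y}(x,y) = f_{X}(x)\, f_{Y}(y)$; the test point and the comparison value $\lambda = 0.6$ are just arbitrary choices on my part.)

```python
import numpy as np

def joint(x, y, lam):
    """Joint density from the attachment, with rho = sqrt(1 - lam**2)."""
    q = (x - lam)**2 + (y - lam)**2 \
        - 2 * np.sqrt(1 - lam**2) * (x - lam) * (y - lam)
    return np.exp(-q / (2 * lam**4)) / (2 * np.pi * lam**3)

def marginal(t, lam):
    """N(lambda, lambda^2) marginal; by symmetry the same for X and Y."""
    return np.exp(-(t - lam)**2 / (2 * lam**2)) / (lam * np.sqrt(2 * np.pi))

x, y = 0.4, 1.7            # arbitrary test point
for lam in (1.0, 0.6):     # lambda = 1 versus an arbitrary lambda < 1
    print(lam, joint(x, y, lam), marginal(x, lam) * marginal(y, lam))
# at lambda = 1 the joint equals the product of the marginals;
# at lambda = 0.6 it clearly does not
```

The two values agree at $\lambda = 1$ but not at $\lambda = 0.6$, which is why I'm guessing $\lambda = 1$, but I don't see how to show it properly.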

Parameters of distributions always confuse me; I lost a big chunk of marks on my mid-term because I didn't understand what was being asked.

Would you be able to explain that?
Thanks!

edit: also, for part a), when finding the marginal distributions, is it sufficient for us to say $ f_{X}(x) = \int...dy = ...$, or do we actually have to show the integration? I.e., is this just a property we can use, or one which we must derive?

For part b) I got a different answer from you, and I don't know what I did wrong.

so $f_{X}(x) = \frac{1}{\lambda \sqrt{2\pi}}\ e^{-\frac{(x-\lambda)^{2}}{2\lambda^{2}}}$

and $f_{X,Y}(x,y) = \frac{1}{2 \pi \lambda^{3}}\ e^{...}$

and
$\frac{f_{X,Y}(x,y)}{f_{X}(x)}$ should, I thought, end up with only a $\lambda^{2}$ in the denominator of the exponent, as opposed to a $\lambda^{4}$?
Additionally, my exponential had a different power from yours; I don't know how you simplified yours.
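
edit 2: writing the division out in full so someone can check my algebra (the only step is pulling the marginal's exponent inside the $\frac{1}{2\lambda^{4}}$ bracket and completing the square):

$\displaystyle \frac{f_{X,Y}(x,y)}{f_{X}(x)} = \frac{\lambda\ \sqrt{2\pi}}{2\ \pi\ \lambda^{3}}\ \exp\left( -\frac{1}{2\lambda^{4}}\left[ (x-\lambda)^{2} + (y-\lambda)^{2} - 2\sqrt{1-\lambda^{2}}\ (x-\lambda)(y-\lambda) \right] + \frac{(x-\lambda)^{2}}{2\lambda^{2}} \right)$

$\displaystyle = \frac{1}{\sqrt{2\pi}\ \lambda^{2}}\ \exp\left( -\frac{1}{2\lambda^{4}}\left[ (1-\lambda^{2})(x-\lambda)^{2} + (y-\lambda)^{2} - 2\sqrt{1-\lambda^{2}}\ (x-\lambda)(y-\lambda) \right] \right) = \frac{1}{\sqrt{2\pi}\ \lambda^{2}}\ \exp\left( -\frac{\left[ y - \lambda - \sqrt{1-\lambda^{2}}\ (x-\lambda) \right]^{2}}{2\lambda^{4}} \right)$

so I do get the $\lambda^{2}$ in front, but the $\lambda^{4}$ stays inside the exponent. Is that the same as what you have?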
 
Last edited: