Correlation coefficient in a conditional distribution

georg gill

[Problem statement attached as images; from the discussion: ##X = Z + \Theta##, where ##Z \sim N(0,1)## and ##\Theta \sim N(\mu, \sigma^2)## are independent.]

Can someone derive: ##\frac{Cov(Z+\Theta,\Theta)}{\sqrt{Var(Z+\Theta)Var(\Theta)}}=\frac{\sigma ^2}{\sqrt{1+\sigma ^2}}##

My attempt:

Numerator:

##Cov(X,Y)=E[(X-E(X))(Y-E(Y))]=E[(Z+\Theta-\mu)(\Theta-\mu)]##

The denominator is pretty simple:

##\sqrt{(1+\sigma ^2)\sigma ^2}##
 
StoneTemplePython
I presume you actually want ##\rho##, which has a value of ##\frac{\sigma}{\sqrt{1 + \sigma^2}}##, not ##\frac{\sigma^2}{\sqrt{1 + \sigma^2}}##.

Key ideas:
1.) Break this up into small, manageable subproblems.

2.) Remember the implications of independence between ##Z## and ##\Theta##: they have zero covariance, which also means ##Var(Z+\Theta) = Var(Z) + Var(\Theta) = 1 + \sigma^2##.

-- numerator --
In general, for the covariance of two random variables ##A## and ##B##, we have
##cov\big(A, B\big) = E[AB] - E[A]E[B]##

Split this up into two lines:

##E[(Z + \Theta)(\Theta)] = E[Z \Theta] + E[\Theta^2]##
##E[Z + \Theta]E[\Theta] = E[Z]E[\Theta]+ E[\Theta]E[\Theta]##

Notice that our actual expression for the numerator is the first line minus the second:

## = \big(E[Z \Theta] + E[\Theta^2]\big)- \big(E[Z]E[\Theta]+ E[\Theta]E[\Theta]\big) =\big(E[Z \Theta]- E[Z]E[\Theta]\big) + \big(E[\Theta^2]- E[\Theta]E[\Theta]\big)##
##= \big(cov(Z,\Theta)\big) + \big(var(\Theta)\big) = \big(0\big) + \big(\sigma^2\big) = \sigma^2##

-- denominator --

##\sqrt{\big(1 + \sigma^2\big)\big(\sigma^2\big)} = \sqrt{1 + \sigma^2}\sqrt{\sigma^2} = \sigma\sqrt{1 + \sigma^2}##

-- combine numerator and denominator --

##\rho = \frac{\sigma^2}{\sigma\sqrt{\big(1 + \sigma^2\big)}} = \frac{\sigma}{\sqrt{\big(1 + \sigma^2\big)}}##
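
For what it's worth, the whole result is easy to sanity check numerically. A minimal sketch in Python/NumPy; ##\mu## and ##\sigma## here are arbitrary example values, not anything given in the problem:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 2.0, 1.5                  # arbitrary example parameters for Theta
n = 1_000_000

theta = rng.normal(mu, sigma, n)      # Theta ~ N(mu, sigma^2)
z = rng.normal(0.0, 1.0, n)           # Z ~ N(0, 1), independent of Theta
x = z + theta                         # X = Z + Theta

print(np.cov(x, theta)[0, 1])         # numerator: close to sigma^2 = 2.25
print(np.corrcoef(x, theta)[0, 1])    # simulated rho
print(sigma / np.sqrt(1 + sigma**2))  # formula: about 0.832
```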
 
StoneTemplePython said:
I presume you actually want ##\rho##, which has a value of ##\frac{\sigma}{\sqrt{1 + \sigma^2}}##, not ##\frac{\sigma^2}{\sqrt{1 + \sigma^2}}##.

Yes, sorry, I meant ##\rho##, which has a value of ##\frac{\sigma}{\sqrt{1 + \sigma^2}}##. Thanks for the answer!
 
How do they arrive at the formulas for the conditional expectation and variance at the end? I am thinking of:

##E[\Theta|X=x]=E[\Theta]+\rho\sqrt{\frac{Var(\Theta)}{Var(X)}}(x-E[X])##

and

##Var(\Theta|X=x)=Var(\Theta)(1-\rho^2)##

Where do these formulas come from? Can someone derive them? I follow the calculations they do with them.
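
Before deriving them, both formulas are easy to check by simulation: generate ##(\Theta, X)## pairs, keep the samples with ##X## in a thin slice around some point ##x_0##, and compare the empirical conditional mean and variance against the formulas. A minimal sketch; ##\mu##, ##\sigma##, and ##x_0## are arbitrary example values:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, x0 = 2.0, 1.5, 3.0        # arbitrary example values
n = 2_000_000

theta = rng.normal(mu, sigma, n)     # Theta ~ N(mu, sigma^2)
x = rng.normal(0.0, 1.0, n) + theta  # X = Z + Theta, so E[X] = mu, Var(X) = 1 + sigma^2

rho = sigma / np.sqrt(1 + sigma**2)
mask = np.abs(x - x0) < 0.05         # thin slice around X = x0

# E[Theta | X = x0] = E[Theta] + rho * sqrt(Var(Theta)/Var(X)) * (x0 - E[X])
mean_formula = mu + rho * (sigma / np.sqrt(1 + sigma**2)) * (x0 - mu)
# Var(Theta | X = x0) = Var(Theta) * (1 - rho^2)
var_formula = sigma**2 * (1 - rho**2)

print(theta[mask].mean(), mean_formula)  # both near 2.69
print(theta[mask].var(), var_formula)    # both near 0.69
```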
 
You should start by stating the relationship between ##\Theta## and ##X##. I didn't think this was directly needed when dealing with ##Z## and ##\Theta##, but it seems vital now, and I don't see it clearly stated anywhere; ##\Theta## and ##X## seem to be introduced in your exercise 8b as if you are already familiar with the relationship from the text, or perhaps from example 8a. Consider this a 'for the avoidance of doubt, the relationship is ____' type of statement.

After stating the relationship, it may be prudent to draw this out as a picture. Then make an attempt at solving it, either using their stated approach or via direct application of Bayes' rule.

The reality is, once you've stated and formulated everything, the expected values should be easy -- and if you get stuck, following the units should be helpful here as well.

I could weigh in after all of this if needed. But you should be able to (a) clearly state the relationship between ##\Theta## and ##X## and (b) make some progress on your own first.

- - - -
edit:

While I still think a lot more details should be provided, I knew this equation looked quite familiar. Your problem is apparently about minimizing the linear least-mean-squares (LLMS) error, i.e. you are trying to minimize

##E\big[(\Theta - a X - b)^2\big]##

Do the calculus on this minimization problem (i.e. optimize with respect to ##a## and ##b##) and you'll recover the abstract form of your conditional expected value.
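
For reference, here is a sketch of that calculus: write ##f(a,b) = E\big[(\Theta - aX - b)^2\big]## and set the partial derivatives to zero.

$$\frac{\partial f}{\partial b} = -2\,E[\Theta - aX - b] = 0 \quad\Rightarrow\quad b = E[\Theta] - a\,E[X]$$
$$\frac{\partial f}{\partial a} = -2\,E\big[X(\Theta - aX - b)\big] = 0 \quad\Rightarrow\quad a = \frac{Cov(\Theta, X)}{Var(X)} = \rho\sqrt{\frac{Var(\Theta)}{Var(X)}}$$

Substituting back, the best linear predictor is ##aX + b = E[\Theta] + \rho\sqrt{\frac{Var(\Theta)}{Var(X)}}\big(X - E[X]\big)##, which is exactly the abstract form above, and the minimized error is ##Var(\Theta) - \frac{Cov(\Theta,X)^2}{Var(X)} = Var(\Theta)(1 - \rho^2)##, which matches the conditional variance. (For jointly normal variables, the LLMS predictor coincides with the true conditional expectation ##E[\Theta \mid X]##.)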
 
