MHB Solving Autocovariance Function $\gamma(t+h,t)$

  • Thread starter: nacho-man
  • Tags: Function
This one is bugging me!

Let $\{Z_t\} \sim \text{WN}(0, \sigma^2)$

and $X_t = Z_t + \theta Z_{t-1}$.

I'm trying to find the autocovariance function $\gamma(t+h,t)$ and nearly have it, but am struggling with some conceptual issues.

$\gamma(t+h,t) = \text{COV}[Z_{t+h} + \theta Z_{t-1+h}, Z_t + \theta Z_{t-1}]$

= $\text{COV}(Z_{t+h}, Z_t) + \theta \text{COV}(Z_{t+h}, Z_{t-1}) + \theta \text{COV}(Z_{t-1+h}, Z_t) + \theta^2 \text{COV}(Z_{t-1+h}, Z_{t-1})$

$\text{COV}(Z_{t+h}, Z_t) = \sigma^2$ (at $h=0$)
$\theta \text{COV}(Z_{t+h}, Z_{t-1}) = \theta \sigma^2$ (at $h=-1$)
$\theta \text{COV}(Z_{t-1+h}, Z_t) = \theta \sigma^2$ (at $h=1$)
$\theta^2 \text{COV}(Z_{t-1+h}, Z_{t-1}) = \theta^2 \sigma^2$ (at $h=0$)

So, to summarise,

for $h = 0$, autocovariance $= \sigma^2 + \theta^2 \sigma^2$
for $|h| = 1$, autocovariance $= \theta \sigma^2 + \theta \sigma^2 = 2 \theta \sigma^2$ <<< textbook disagrees here!
for $|h| > 1$, autocovariance $= 0$

The answers are attached to this post; I have a discrepancy at $|h|=1$ and cannot see why. Is there a typo in the book? I have an additional follow-up question, depending on the response I receive to this initial post!

Any help very much appreciated as always,
thank you in advance.
 

Attachments

  • Untitled.jpg (34.1 KB)

Because we have zero means:

$$\begin{aligned}\gamma_X(t+1,t)&=E(X_{t+1}X_t)\\
&=E\big( (Z_{t+1}+\theta Z_t)(Z_{t}+\theta Z_{t-1})\big)\\
&=E(Z_{t+1}Z_t)+E(Z_{t+1}\theta Z_{t-1})+E(\theta Z_t Z_t)+E(\theta Z_t \theta Z_{t-1})
\end{aligned}$$

Now as the $Z_i$ are uncorrelated with zero mean, all the expectations but the third are zero, so:

$$\gamma_X(t+1,t)=\theta \sigma^2$$

and similarly:

$$\begin{aligned}\gamma_X(t-1,t)&=E(X_{t-1}X_t)\\
&=E\big( (Z_{t-1}+\theta Z_{t-2})(Z_{t}+\theta Z_{t-1})\big)\\
&=E(Z_{t-1}Z_t)+E(Z_{t-1}\theta Z_{t-1})+E(\theta Z_{t-2} Z_t)+E(\theta Z_{t-2} \theta Z_{t-1})
\end{aligned}$$

Now for the same reasons as before all the expectations other than the second are zero, and we have, as before:

$$\gamma_X(t-1,t)=\theta \sigma^2$$

So the two $\theta\sigma^2$ terms in your expansion arise at different lags ($h=1$ and $h=-1$); for any fixed $h$ only one of them is nonzero, so they never add, and the autocovariance at $|h|=1$ is $\theta\sigma^2$, as the textbook says.
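The derivation above is also easy to sanity-check numerically. A minimal sketch (the Gaussian distribution for the noise and the values $\theta = 0.5$, $\sigma = 2$ are illustrative assumptions, chosen here just for the simulation):

```python
import numpy as np

# Numerical check of the MA(1) autocovariances derived above:
#   X_t = Z_t + theta * Z_{t-1},  Z_t ~ WN(0, sigma^2)
# Theory: gamma(0) = (1 + theta^2) sigma^2, gamma(1) = theta sigma^2,
#         gamma(h) = 0 for |h| > 1.
rng = np.random.default_rng(0)
theta, sigma = 0.5, 2.0          # illustrative parameter values
n = 200_000

z = rng.normal(0.0, sigma, size=n + 1)
x = z[1:] + theta * z[:-1]       # X_t = Z_t + theta * Z_{t-1}

def acov(x, h):
    """Sample autocovariance at lag h (the mean is known to be zero)."""
    return np.mean(x[h:] * x[:len(x) - h])

print(acov(x, 0), (1 + theta**2) * sigma**2)   # both close to 5.0
print(acov(x, 1), theta * sigma**2)            # both close to 2.0
print(acov(x, 2))                              # close to 0.0
```

The lag-1 estimate lands near $\theta\sigma^2$, not $2\theta\sigma^2$, consistent with the textbook value.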
 