Time Series - Autoregressive process and Probability Limit

frenchkiki

Homework Statement



Calculate: PLIM (probability limit) ##\frac{1}{T} \sum^T_{t=2} u^2_t Y^2_{t-1}##

Homework Equations



##Y_t = \rho Y_{t-1} + u_t##, ##t=1,\ldots,T##, ##|\rho| < 1##, which is the autoregressive process of order 1

##E(u_t) = 0##, ##\operatorname{Var}(u_t) = \sigma^2## for all ##t##

##\operatorname{cov}(u_j, u_s) = 0## for ##j \neq s##

The Attempt at a Solution



I know that PLIM ##\frac{1}{T} \sum^T_{t=2} u^2_t Y^2_{t-1} = E[u^2_t Y^2_{t-1}]##

I have found ##Y_{t-1} = \sum^{T-1}_{j=0} \rho^j u_{t-1-j}##

Plugging in, I get
$$E[u^2_t Y^2_{t-1}] = E\Big[u^2_t \Big(\sum^{T-1}_{j=0} \rho^j u_{t-1-j}\Big)^2\Big] = E\Big[\Big(u_t \sum^{T-1}_{j=0} \rho^j u_{t-1-j}\Big)^2\Big] = E\Big[\Big(\sum^{T-1}_{j=0} \rho^j u_{t-j} u_{t-1-j}\Big)^2\Big] = \sum^{T-1}_{j=0} \rho^j E[(u_{t-j} u_{t-1-j})^2]$$

And I am stuck here because I don't know what to do with ##E[(u_{t-j} u_{t-1-j})^2]##?

Thank you in advance!
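I also tried a quick simulation to see what the sample average tends to. This is just a sketch assuming Gaussian i.i.d. errors with ##\sigma = 1##, ##\rho = 0.5##, and a stationary starting value (all assumptions on my part, not given in the problem):

```python
import numpy as np

rng = np.random.default_rng(0)
rho, sigma, T = 0.5, 1.0, 200_000

# draw the errors u_1, ..., u_T (u[0] is unused filler)
u = rng.normal(0.0, sigma, size=T + 1)

# start Y_0 from the stationary distribution N(0, sigma^2 / (1 - rho^2))
Y = np.empty(T + 1)
Y[0] = rng.normal(0.0, sigma / np.sqrt(1 - rho**2))
for t in range(1, T + 1):
    Y[t] = rho * Y[t - 1] + u[t]

# sample average of u_t^2 * Y_{t-1}^2 over t = 2, ..., T
avg = np.mean(u[2:] ** 2 * Y[1:-1] ** 2)
print(avg, sigma**4 / (1 - rho**2))
```

For these parameters the average lands close to ##\sigma^4/(1-\rho^2) \approx 1.33##, which is what ##\sigma^2\, E[Y_{t-1}^2]## would be if ##u_t## were independent of ##Y_{t-1}##.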
 
Perhaps they meant to say that ##u_j, u_s## are independent for ##j\neq s##. If that were the case then you could write:

$$E[(u_{t-j} u_{t-1-j})^2]=E[{u_{t-j}}^2 {u_{t-1-j}}^2]=E[{u_{t-j}}^2]\cdot E[{u_{t-1-j}}^2]$$
since ##{u_j}^2,{u_s}^2## will then also be independent.

However they have only given you the weaker condition that ##cov(u_j, u_s) = 0##, which I think is not enough to justify that step.

Indeed, I wonder whether it would be possible to construct a counterexample in which the process ##u_j## is conditionally heteroscedastic, so that its unconditional variance is ##\sigma^2## and its serial correlation is zero, but its conditional variance is a mean-reverting random walk, so that successive values are not independent.

You could ask your teacher whether you are allowed to assume serial independence.
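For concreteness, here is a sketch of the kind of process I have in mind, using an ARCH(1) recursion (my own illustrative choice of construction and parameters): the ##u_t## have constant unconditional variance and zero serial correlation, yet the squares ##{u_t}^2## are autocorrelated, so the ##u_t## are not independent.

```python
import numpy as np

rng = np.random.default_rng(1)
# ARCH(1): u_t = sqrt(h_t) * z_t with h_t = omega + alpha * u_{t-1}^2,
# z_t i.i.d. N(0,1); unconditional variance = omega / (1 - alpha) = 2 here
omega, alpha, T = 1.0, 0.5, 200_000

u = np.empty(T)
h = omega / (1 - alpha)  # start the conditional variance at its mean
for t in range(T):
    u[t] = np.sqrt(h) * rng.normal()
    h = omega + alpha * u[t] ** 2  # conditional variance for the next step

def lag1_corr(x):
    return np.corrcoef(x[:-1], x[1:])[0, 1]

print(lag1_corr(u))       # near 0: no serial correlation
print(lag1_corr(u ** 2))  # clearly positive: the u_t are not independent
```

(##\alpha = 0.5## keeps ##3\alpha^2 < 1##, so the fourth moment of ##u_t## exists and the sample autocorrelation of the squares is well behaved.)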
 
Thanks andrewkirk. I've seen somewhere in my notes that the errors (the ##u_t##'s) are i.i.d. I'll use independence then.
 
frenchkiki said:

I have found ##Y_{t-1} = \sum^{T-1}_{j=0} \rho^j u_{t-1-j}## ...

I get a different expression from yours, and the difference is substantial. By iterating the recurrence relation I get
$$Y_{t-1} = \rho^{t-1} Y_0 + \sum_{j=0}^{t-2} \rho^j u_{t-1-j}.$$
Thus
$$u_t^2 Y_{t-1}^2 = \rho^{2t-2} u_t^2 Y_0^2 + 2 \rho^{t-1} Y_0 \sum_{j=0}^{t-2} \rho^j u_{t-1-j} u_t^2 + \sum_{j=0}^{t-2} \rho^{2j} u_{t-1-j}^2 u_t^2 + 2 \sum_{k=1}^{t-2} \sum_{j=0}^{k-1} \rho^{j+k} u_{t-1-j} u_{t-1-k} u_t^2$$
In order to be able to compute ##E(u_t^2 Y_{t-1}^2)## you need to assume independence of ##u_1, u_2, \ldots u_T##, and you need to make some assumptions about the nature of ##Y_0## and its relation to the ##u_t## sequence.
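(As a quick check of the iterated form, here is a throwaway numerical verification with arbitrary made-up values of ##\rho##, ##Y_0## and the ##u_t##:)

```python
import numpy as np

rng = np.random.default_rng(2)
rho, T = 0.7, 50
Y0 = rng.normal()
u = rng.normal(size=T + 1)  # u[1], ..., u[T] are used; u[0] is filler

# iterate the recurrence Y_t = rho * Y_{t-1} + u_t
Y = np.empty(T + 1)
Y[0] = Y0
for t in range(1, T + 1):
    Y[t] = rho * Y[t - 1] + u[t]

# closed form: Y_{t-1} = rho^(t-1) * Y_0 + sum_{j=0}^{t-2} rho^j * u_{t-1-j}
t = T
closed = rho ** (t - 1) * Y0 + sum(rho**j * u[t - 1 - j] for j in range(t - 1))
print(np.isclose(closed, Y[t - 1]))  # True: the two forms agree
```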
 
As long as the ##u_k## are i.i.d., when we take the expected value, all the terms that have a factor ##u_\alpha## will become zero and any factors of the form ##{u_\alpha}^2## will become ##\sigma^2##. I think that will get rid of the double sum and the sum on the first line. There will still be an ##E[{Y_0}^2]## factor in the first term, but that's OK because it's not entangled with anything else distribution-wise.
 
andrewkirk said:
As long as the ##u_k## are i.i.d., when we take the expected value, all the terms that have a factor ##u_\alpha## will become zero ...

I wanted the OP to deal with these issues, so I deliberately refrained from saying more about them in my message.
 
Ray Vickson said:
I wanted the OP to deal with these issues, so I deliberately refrained from saying more about them in my message.
Oh, fair enough then. I didn't realize that was what your post was aiming at. Sorry for mucking it up.
 
Thanks Ray and andrewkirk.

The terms involving ##Y_0## go to 0 as ##T## goes to infinity because ##|\rho| < 1##. With independence, the cross terms vanish, each ##E[u_{t-1-j}^2 u_t^2]## equals ##\sigma^4##, and the geometric sum gives ##\sigma^4/(1-\rho^2)## in the limit.

Thanks for your help!
 