Time Series - Autoregressive process and Probability Limit


Homework Help Overview

The discussion revolves around calculating the probability limit (PLIM) of a specific expression involving an autoregressive process of order 1, defined by the equation Y_t = ρY_{t-1} + u_t. Participants are exploring the implications of independence and the properties of the error terms in the context of time series analysis.

Discussion Character

  • Exploratory, Assumption checking, Mathematical reasoning

Approaches and Questions Raised

  • Participants are attempting to express the PLIM in terms of expected values and are discussing the implications of independence of the error terms. Questions arise regarding the treatment of covariance and the nature of the error terms, particularly whether they can be assumed to be independent or if they exhibit conditional heteroscedasticity.

Discussion Status

The discussion is active, with participants offering insights and alternative expressions for the problem. Some have suggested that assuming independence of the error terms may simplify the calculations, while others are cautious about this assumption and its implications. There is recognition of the need for further clarification on the nature of the initial conditions and their impact on the expected values.

Contextual Notes

Participants note that the original problem statement includes conditions on the error terms, such as E(u_t) = 0 and Var(u_t) = σ², but there is uncertainty regarding the implications of the covariance condition provided. The discussion also touches on the behavior of terms involving Y_0 as T approaches infinity.

frenchkiki

Homework Statement



Calculate: PLIM (probability limit) \frac{1}{T} \sum^T_{t=2} u^2_t Y^2_{t-1}

Homework Equations



Y_t = \rho Y_{t-1} + u_t, t = 1, \ldots, T, |\rho| < 1, which is the autoregressive process of order 1

E(u_t) = 0, Var(u_t) = \sigma^2 for all t

cov(u_j, u_s) = 0 for j \neq s

The Attempt at a Solution



I know that PLIM \frac{1}{T} \sum^T_{t=2} u^2_t Y^2_{t-1} = E[u^2_t Y^2_{t-1}]

I have found Y_{t-1} = \sum^{T-1}_{j=0} \rho^j u_{t-1-j}

Plugging in, I get E[u^2_t Y^2_{t-1}] = E[u^2_t (\sum^{T-1}_{j=0} \rho^j u_{t-1-j})^2]=E[(u_t (\sum^{T-1}_{j=0} \rho^j u_{t-1-j}))^2]=E[(\sum^{T-1}_{j=0} \rho^j u_{t-j} u_{t-1-j})^2]=\sum^{T-1}_{j=0} \rho^j E[(u_{t-j} u_{t-1-j})^2]

And I am stuck here because I don't know what to do with E[(u_{t-j} u_{t-1-j})^2] ??

Thank you in advance!
 
Physics news on Phys.org
Perhaps they meant to say that ##u_j,u_s## are independent for ##j\neq s##. If that were the case then you could write:

$$E[(u_{t-j} u_{t-1-j})^2]=E[{u_{t-j}}^2 {u_{t-1-j}}^2]=E[{u_{t-j}}^2]\cdot E[{u_{t-1-j}}^2]$$
since ##{u_j}^2,{u_s}^2## will then also be independent.

However they have only given you the weaker condition that ##cov(u_j, u_s) = 0##, which I think is not enough to justify that step.

Indeed, I wonder whether it would be possible to construct a counter-example in which the process ##u_j## is conditionally heteroscedastic, so that its unconditional variance is ##\sigma^2## and serial correlation is zero but its conditional variance is a mean-reverting random walk, so that successive values are not independent.

You could ask your teacher whether you are allowed to assume serial independence.
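The suspicion above can be illustrated numerically. The sketch below (not part of the original problem; parameter values are arbitrary) simulates an ARCH(1)-type process, a standard example of a conditionally heteroscedastic process: the ##u_t## are serially uncorrelated with a finite unconditional variance, yet their squares are clearly dependent, so ##E[u_{t-j}^2 u_{t-1-j}^2]## would not factor:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200_000

# ARCH(1): u_t = sqrt(a0 + a1 * u_{t-1}^2) * z_t, with z_t ~ N(0,1) i.i.d.
# The u_t are serially uncorrelated, but their squares are not independent.
a0, a1 = 0.5, 0.5
z = rng.standard_normal(T)
u = np.zeros(T)
for t in range(1, T):
    u[t] = np.sqrt(a0 + a1 * u[t - 1] ** 2) * z[t]

u = u[1000:]  # drop burn-in

corr_u = np.corrcoef(u[:-1], u[1:])[0, 1]             # near 0: no serial correlation
corr_u2 = np.corrcoef(u[:-1] ** 2, u[1:] ** 2)[0, 1]  # clearly positive: squares dependent
print(f"corr(u_t, u_(t-1))     = {corr_u:.3f}")
print(f"corr(u_t^2, u_(t-1)^2) = {corr_u2:.3f}")
```

With these values the unconditional variance is a0/(1 - a1) = 1, so the process satisfies the stated moment conditions while violating independence.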
 
Thanks andrewkirk. I've seen somewhere in my notes that the errors (u_t's) are i.i.d. I'll use independence then.
 
frenchkiki said:

Calculate: PLIM (probability limit) \frac{1}{T} \sum^T_{t=2} u^2_t Y^2_{t-1}

[...]

I have found Y_{t-1} = \sum^{T-1}_{j=0} \rho^j u_{t-1-j}

[...]

And I am stuck here because I don't know what to do with E[(u_{t-j} u_{t-1-j})^2] ??

I get a different expression from yours, and the difference is substantial. By iterating the recurrence relation I get
Y_{t-1} = \rho^{t-1} Y_0 + \sum_{j=0}^{t-2} \rho^j u_{t-1-j}.
Thus
u_t^2 Y_{t-1}^2 = \rho^{2t-2} u_t^2 Y_0^2 + 2 \rho^{t-1} Y_0 \sum_{j=0}^{t-2} \rho^j u_{t-1-j} u_t^2 + \sum_{j=0}^{t-2} \rho^{2j} u_{t-1-j}^2 u_t^2 + 2 \sum_{k=1}^{t-2} \sum_{j=0}^{k-1} \rho^{j+k} u_{t-1-j} u_{t-1-k} u_t^2
In order to be able to compute ##E(u_t^2 Y_{t-1}^2)## you need to assume independence of ##u_1, u_2, \ldots u_T##, and you need to make some assumptions about the nature of ##Y_0## and its relation to the ##u_t## sequence.
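Under those extra assumptions (i.i.d. errors, and taking ##Y_0 = 0## for simplicity, a choice the problem statement does not fix), the expansion above reduces to ##E[u_t^2 Y_{t-1}^2] = \sigma^4 \sum_{j=0}^{t-2} \rho^{2j}##, which is easy to check by simulation. This is only an illustrative sketch with arbitrary parameter choices:

```python
import numpy as np

rng = np.random.default_rng(1)
rho, sigma = 0.6, 1.5
t, n_sims = 50, 200_000  # fixed time index, Monte Carlo replications

# Simulate Y_0 = 0, Y_s = rho * Y_{s-1} + u_s with u_s ~ N(0, sigma^2) i.i.d.
u = sigma * rng.standard_normal((n_sims, t))
Y = np.zeros((n_sims, t + 1))
for s in range(1, t + 1):
    Y[:, s] = rho * Y[:, s - 1] + u[:, s - 1]

# Monte Carlo estimate of E[u_t^2 Y_{t-1}^2] vs the closed form sigma^4 * sum_j rho^{2j}
mc = np.mean(u[:, t - 1] ** 2 * Y[:, t - 1] ** 2)
closed = sigma**4 * sum(rho ** (2 * j) for j in range(t - 1))
print(mc, closed)
```

The agreement reflects the fact that ##u_t## is independent of ##Y_{t-1}##, so the expectation factors as ##\sigma^2 \, E[Y_{t-1}^2]##.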
 
As long as the ##u_k## are i.i.d, when we take the expected value, all the terms that have a factor ##u_\alpha## will become zero and any factors of the form ##{u_\alpha}^2## will become ##\sigma^2##. I think that will get rid of the double sum and the sum on the first line. There will still be a ##E[{Y_0}^2]## factor in the first term, but that's OK because it's not entangled with anything else distributionwise.
 
andrewkirk said:
As long as the ##u_k## are i.i.d, when we take the expected value, all the terms that have a factor ##u_\alpha## will become zero and any factors of the form ##{u_\alpha}^2## will become ##\sigma^2##. I think that will get rid of the double sum and the sum on the first line. There will still be a ##E[{Y_0}^2]## factor in the first term, but that's OK because it's not entangled with anything else distributionwise.

I wanted the OP to deal with these issues, so I deliberately refrained from saying more about them in my message.
 
Ray Vickson said:
I wanted the OP to deal with these issues, so I deliberately refrained from saying more about them in my message.
Oh, fair enough then. I didn't realize that was what your post was aiming at. Sorry for mucking it up.
 
Thanks Ray and andrewkirk.

The terms involving Y_0 go to 0 as T goes to infinity because |\rho| < 1. With independent errors the cross terms all have expectation zero, and each remaining term contributes \sigma^4 \rho^{2j}, so the whole thing converges to \sigma^4 / (1 - \rho^2).

Thanks for your help!
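For completeness, the limit can also be checked directly: with i.i.d. errors and |\rho| < 1 the stationary variance of Y is \sigma^2 / (1 - \rho^2), so the time average \frac{1}{T} \sum_{t=2}^T u_t^2 Y_{t-1}^2 should settle near \sigma^4 / (1 - \rho^2). A quick sketch with arbitrary parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
rho, sigma, T = 0.6, 1.5, 1_000_000

# AR(1) with i.i.d. normal errors and Y_0 = 0
u = sigma * rng.standard_normal(T)
Y = np.zeros(T)
for t in range(1, T):
    Y[t] = rho * Y[t - 1] + u[t]

# Time average of u_t^2 * Y_{t-1}^2 vs the theoretical limit sigma^4 / (1 - rho^2)
avg = np.mean(u[1:] ** 2 * Y[:-1] ** 2)
limit = sigma**4 / (1 - rho**2)
print(avg, limit)
```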
 
