# Time Series: Residuals of ARMA model

1. Aug 2, 2011

### kingwinner

Time Series: "Residuals" of ARMA model

To check whether the white noise terms {a_t} are uncorrelated, we usually look at the residuals (which are sample estimates of the white noise {a_t}) and residual plots. But I just don't understand the meaning of "residuals" in the context of an ARMA model...

In the above definition, the residual "a_t hat" is expressed in terms of the white noise terms a_{t-1}, a_{t-2}, ..., a_{t-q}. But we know that the white noise terms are unobservable (residuals are observable, white noise terms are not), and there is no way we can know the exact values of a_{t-1}, a_{t-2}, ..., a_{t-q}, right? Now if we don't know every term on the right-hand side, how can we calculate the residual "a_t hat"? I just don't understand how the residuals of an ARMA model can be calculated based on this definition.

I tried searching the internet, but couldn't find much.
Hopefully someone can explain. Thank you!

2. Aug 2, 2011

### Stephen Tashi

Re: Time Series: "Residuals" of ARMA model

It would be easier to comment on material you quoted if we clarify how "residual" is defined in that material.

A problem in reading material on time series is that you must figure out what notation is being used for indexed quantities. For example, if we have a quantity denoted as $q_i$ , are we to regard it as a random variable? Or is it a particular numerical value, which may have come from one realization of a random variable?

The subject of ARMA models isn't fresh in my mind. The general pattern that I recall is this:

Let $x_i$ denote the numerical values of a specific set of time series data. The model for the data expresses each value as a linear combination of random variables. You can't deduce any numerical values for the random variables by looking at an equation for a particular $x_i$ (for example, $x_{20}$). However, you can make progress if you look at equations that involve higher order differences of the $x_i$. Define
$$\triangle x_i = x_{i+1} - x_i$$
$$\triangle^2 x_i = \triangle(\triangle x_i) = (x_{i+2} - x_{i+1}) - (x_{i+1} - x_i)$$
etc.

If you form equations that have a higher order difference on the left hand side, then many of the variables in the right hand side may cancel out. You may be able to solve for particular numerical values of these variables or the numerical values of particular functions of them (for example, enough to give you an estimate of their sample variance).
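As a small illustration of the mechanics (the toy series here is my own, not from the thread), first and second differences of a series are easy to compute:

```python
import numpy as np

x = np.array([2.0, 3.5, 5.1, 7.0, 9.2])  # a toy time series

d1 = np.diff(x)        # first differences:  x[i+1] - x[i]
d2 = np.diff(x, n=2)   # second differences: the difference applied twice
```

Note that `d2` is exactly `np.diff(d1)`, which is the cancellation at work: each second difference involves only three neighboring values of the series.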

So when you say something is "unobservable", in an ARMA model, what do you mean? Do you mean that we can't compute numerical values for it when we are given numerical values of the time series data? Are you saying we can't compute it even if we use tricks involving higher order differences?

3. Aug 2, 2011

### kingwinner

Re: Time Series: "Residuals" of ARMA model

I don't think the tricks involving higher order differences would solve the problem here.

I believe the white noise {a_t} are unobservable (this term is used in all of my textbooks) because the true values of the parameters, the phi's and theta's, are unknown, so we can never calculate the exact values of {a_t}. We can only calculate the (observable) residuals "a_t hat". But based on the above definition of a residual, to calculate "a_t hat" we need to know the values of the (unobservable) white noise terms a_{t-1}, a_{t-2}, ..., a_{t-q}. This seems circular: the residuals are being defined in terms of the white noise terms. What sense does that make?

So how can we actually compute the residuals of an ARMA model?

Thanks!

Last edited: Aug 2, 2011
4. Aug 2, 2011

### Stephen Tashi

Re: Time Series: "Residuals" of ARMA model

If you take the view that the parameters of your model are unknown, I don't think you can calculate residual errors between the data and the predicted values. This is the usual situation, isn't it? For example, if you take the view that data is generated by a linear regression model, but that the slope and intercept of the line are unknown, then you can't calculate the residual errors.
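The regression analogy can be made concrete. This is a minimal sketch (the toy data and the use of `numpy.polyfit` are my own choices, not from the thread): once the slope and intercept have been *estimated*, the residuals become computable, even though the true parameters remain unknown.

```python
import numpy as np

# Toy data generated from the line y = 2x + 1 (a hypothetical example)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0

# Estimate the slope and intercept by least squares
slope_hat, intercept_hat = np.polyfit(x, y, 1)

# Residuals are defined relative to the *estimated* line,
# not the unknown true parameters
residuals = y - (slope_hat * x + intercept_hat)
```

With the true parameters unknown, only these estimated-parameter residuals are available; the true errors would require the true line.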

5. Aug 2, 2011

### kingwinner

Re: Time Series: "Residuals" of ARMA model

The true values of the parameters, the phi's and theta's, are unknown, and we can only estimate them; hence we have "phi hat" and "theta hat". (Aside: this is also the key to the important difference between the terms "residual" and "error" in regression.) Since estimates of the parameters are used, we don't recover the white noise a_t, but only the residual "a_t hat", an estimate of the white noise.

But my problem is not with estimating the parameters (the phi's and theta's). When you look at the definition of "residual" above, they are defining the residual in terms of the white noise terms a_{t-1}, a_{t-2}, ..., a_{t-q}, which doesn't make any sense to me. On the right-hand side of the definition, everything is known or can be calculated except for the white noise terms. But since we don't know the values of the white noise terms, there is no way we can calculate any residual, so I'm really confused as to how the residuals of an ARMA model are calculated in practice.

6. Aug 2, 2011

### Stephen Tashi

Re: Time Series: "Residuals" of ARMA model

OK, to summarize your objection to the definition of residuals, you object to the use of the hat on them since you think that they cannot be numerically computed.

As I recall, you have to stipulate some initial conditions as part of the parameters of the model. For example, to compute $X_0$, we need "fictitious" noise terms $a_{-1}, a_{-2}, \ldots, a_{-q}$ and X values with negative indices $X_{-1}, X_{-2}, \ldots, X_{-p}$. It wouldn't surprise me if all of these are set to zero, but you'll have to consult your materials to see. With that initial data, you can compute the residuals numerically.
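That recursive computation can be sketched in code. This is a hedged illustration, not something from the thread itself: the estimated parameters `phi_hat` and `theta_hat` and the toy series are hypothetical, and the zero initial conditions follow the convention described above. Each residual is the observed value minus the model's one-step prediction, using residuals already computed in place of the unobservable white noise.

```python
import numpy as np

def arma_residuals(x, phi, theta):
    """Recursively compute residuals a_t-hat for an ARMA(p, q) model
        x_t = phi_1 x_{t-1} + ... + phi_p x_{t-p}
              + a_t + theta_1 a_{t-1} + ... + theta_q a_{t-q},
    treating pre-sample x's and a's as zero (one common convention)."""
    p, q = len(phi), len(theta)
    a = np.zeros(len(x))
    for t in range(len(x)):
        # AR part: weighted sum of past observations (skip pre-sample terms)
        ar = sum(phi[i] * x[t - 1 - i] for i in range(p) if t - 1 - i >= 0)
        # MA part: weighted sum of past *residuals*, already computed
        ma = sum(theta[j] * a[t - 1 - j] for j in range(q) if t - 1 - j >= 0)
        a[t] = x[t] - ar - ma  # residual: observed minus one-step prediction
    return a

# Hypothetical estimated parameters for an ARMA(1, 1) and a toy series
phi_hat, theta_hat = [0.5], [0.3]
residuals = arma_residuals(np.array([1.0, 1.0, 0.5]), phi_hat, theta_hat)
```

This resolves the apparent circularity: at step t, the recursion needs only residuals from earlier steps (or the stipulated zero pre-sample values), never the true white noise.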