Time Series: Partial Autocorrelation Function (PACF)

Summary
The discussion centers on finding the partial autocorrelation function (PACF) of a stationary AR(2) process with a constant term. The process with the constant term and the process without it have the same autocorrelation function, because correlation coefficients are based on mean-centered deviations, making the mean irrelevant. The same holds for their PACFs, since the PACF is derived from the covariance matrix, which is likewise mean-corrected.
kingwinner said:
Consider a stationary AR(2) process:
##X_t - X_{t-1} + 0.3X_{t-2} = 6 + a_t##
where ##\{a_t\}## is white noise with mean 0 and variance 1.
Find the partial autocorrelation function (PACF).

I searched a number of time series textbooks, but all of them only described how to find the PACF for an ARMA process with mean 0 (i.e. without the constant term). So if the constant term "6" above wasn't there, then I know how to find the PACF, but how about the case WITH the constant term "6" as shown above?

I'm guessing that (i) and (ii) below would have the same PACF, but I'm just not so sure. So do they have the same PACF? Can someone explain why?
(i) ##X_t - X_{t-1} + 0.3X_{t-2} = 6 + a_t##
(ii) ##X_t - X_{t-1} + 0.3X_{t-2} = a_t##

Any help would be much appreciated! :)
 
Yes, (i) and (ii) have the same autocorrelation functions. Correlation coefficients are defined in terms of mean-centered deviations, so shifting the mean of the process has no effect on the correlations.
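To see this numerically, here is a minimal sketch (Python with numpy and statsmodels assumed; the sample size, seed, and burn-in are arbitrary choices of mine). Taking expectations in (i) gives ##0.3\mu = 6##, i.e. ##\mu = 20##, so a realization of (i) is just a realization of (ii) shifted by 20, and the sample ACF, which subtracts the sample mean before anything else, comes out the same for both:

```python
import numpy as np
from statsmodels.tsa.stattools import acf

rng = np.random.default_rng(0)
n, burn = 5000, 200
a = rng.standard_normal(n + burn)

# Simulate the zero-mean version (ii): X_t = X_{t-1} - 0.3 X_{t-2} + a_t
z = np.zeros(n + burn)
for t in range(2, n + burn):
    z[t] = z[t - 1] - 0.3 * z[t - 2] + a[t]
z = z[burn:]

# Version (i) has mean mu with 0.3*mu = 6, i.e. mu = 20, so it is z shifted by 20.
x = z + 20.0

# The sample ACF demeans the series first, so the two agree (up to rounding).
print(np.allclose(acf(z, nlags=5), acf(x, nlags=5)))  # True
```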
 
kingwinner said:
I see. How about the PARTIAL autocorrelation functions of (i) and (ii)? Are they the same? Why or why not?

http://en.wikipedia.org/wiki/Partial_autocorrelation_function
http://fedc.wiwi.hu-berlin.de/xplore/tutorials/sfehtmlnode59.html
I wasn't familiar with this, but based on those links, the PACFs should be the same too. In particular, if you look at the second link, the correction from the ACF to the PACF is calculated from the covariance matrix, and covariances, like correlations, are mean-corrected.
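To make that concrete, here is a minimal sketch (Python with numpy assumed; variable names are mine) that computes the theoretical PACF of the AR(2) above by running the Durbin-Levinson recursion on the Yule-Walker ACF. The constant term never enters the calculation, and the PACF cuts off after lag 2, as it should for an AR(2):

```python
import numpy as np

phi1, phi2 = 1.0, -0.3   # AR coefficients of X_t = phi1*X_{t-1} + phi2*X_{t-2} + 6 + a_t
K = 6                    # highest lag to compute

# Theoretical ACF from the Yule-Walker recursion; the constant never appears.
rho = np.zeros(K + 1)
rho[0] = 1.0
rho[1] = phi1 / (1.0 - phi2)
for k in range(2, K + 1):
    rho[k] = phi1 * rho[k - 1] + phi2 * rho[k - 2]

# Durbin-Levinson recursion: the PACF at lag k is the last coefficient phi_{kk}.
pacf = np.zeros(K + 1)
pacf[1] = rho[1]
phi = np.array([rho[1]])                 # [phi_{1,1}]
for k in range(2, K + 1):
    num = rho[k] - phi @ rho[k - 1:0:-1]
    den = 1.0 - phi @ rho[1:k]
    pkk = num / den
    phi = np.append(phi - pkk * phi[::-1], pkk)
    pacf[k] = pkk

print(np.round(pacf[1:], 4))  # approx [ 0.7692 -0.3  0.  0.  0.  0. ]
```

The lag-1 value is ##\rho(1) = \phi_1/(1-\phi_2) = 1/1.3 \approx 0.769##, the lag-2 value equals ##\phi_2 = -0.3##, and everything beyond lag 2 is zero, whether or not the constant 6 is present.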
 