Autocorrelation tests
1 Durbin-Watson test
The aim of the Durbin-Watson test is to verify whether a time series exhibits autocorrelation. Specifically, let's consider a time series \(X_t = (x_1, \dots, x_i, \dots, x_t)\); fitting an AR(1) model, i.e. \[ x_t = \phi_1 x_{t-1} + u_t \tag{1}\] we would like to verify whether \(\phi_1\) is significantly different from zero. The test statistic, denoted as \(\text{DW}\), is computed as: \[ \text{DW} = \frac{\sum_{i=2}^{t} (x_{i} - x_{i-1})^{2} }{\sum_{i=2}^{t} x_{i-1}^2} \approx 2(1 - \phi_1) \] The null hypothesis \(H_0\) is the absence of autocorrelation, i.e. \[ H_0: \phi_1 = 0 \quad H_1: \phi_1 \neq 0 \] Under \(H_0\) the Durbin-Watson statistic is approximately \(\text{DW} \approx 2(1-0) = 2\); values near 0 indicate positive autocorrelation and values near 4 negative autocorrelation. The statistic always lies between 0 and 4. However, its exact distribution depends on the data, so there are no universal critical values: to decide whether to reject \(H_0\) when the statistic is far from 2, we must consult the tabulated lower and upper bounds.
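As a minimal sketch (assuming NumPy is available), the statistic above can be computed directly from the series; the helper name `durbin_watson_stat` is illustrative, not part of any library:

```python
import numpy as np

def durbin_watson_stat(x):
    """Durbin-Watson statistic as defined above: the sum of squared
    first differences over the sum of squared lagged values."""
    x = np.asarray(x, dtype=float)
    num = np.sum(np.diff(x) ** 2)   # sum_{i=2}^{t} (x_i - x_{i-1})^2
    den = np.sum(x[:-1] ** 2)       # sum_{i=2}^{t} x_{i-1}^2
    return num / den

rng = np.random.default_rng(42)

# White noise: no autocorrelation, so DW should be close to 2.
noise = rng.standard_normal(1000)
dw_noise = durbin_watson_stat(noise)

# AR(1) with phi = 0.9: strong positive autocorrelation,
# DW should be close to 2 * (1 - 0.9) = 0.2.
ar = np.zeros(1000)
for i in range(1, 1000):
    ar[i] = 0.9 * ar[i - 1] + rng.standard_normal()
dw_ar = durbin_watson_stat(ar)
```

As expected, the white-noise series yields a statistic near 2, while the strongly autocorrelated series yields a value far below 2.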
2 Breusch-Godfrey
The Breusch-Godfrey test is similar to the Durbin-Watson test, but it allows for multiple lags in the regression. To perform the test, let's fit an AR(p) model on a time series \(X_t = (x_1, \dots, x_i, \dots, x_t)\), i.e. \[ x_t = \phi_1 x_{t-1} + \dots + \phi_p x_{t-p} + u_t \tag{2}\] The null hypothesis \(H_0\) is the absence of autocorrelation, i.e. \[ \begin{aligned} {} & H_0: \phi_1 = \dots = \phi_p = 0 \\ & H_1: \phi_i \neq 0 \text{ for at least one } i \end{aligned} \] The null hypothesis \(H_0\) is tested looking at the F statistic, which follows a Fisher–Snedecor distribution, i.e. \(\text{F} \sim F_{p, n-p-1}\). Alternatively, it is possible to use the \(\text{LM}\) statistic, i.e. \(\text{LM} = nR^2 \sim \chi^2_p\), where \(R^2\) is the R squared of the regression in Equation 2.
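A simplified sketch of the LM version (assuming NumPy; the function name `breusch_godfrey_lm` is illustrative): fit the AR(p) regression of Equation 2 by least squares, compute the uncentered \(R^2\) (matching the no-intercept regression), and form \(\text{LM} = nR^2\):

```python
import numpy as np

def breusch_godfrey_lm(x, p):
    """LM statistic n * R^2 from the AR(p) regression of Equation 2.
    Uses the uncentered R^2, consistent with the no-intercept model."""
    x = np.asarray(x, dtype=float)
    y = x[p:]  # x_t
    # Column k holds the k-th lag x_{t-k}, aligned with y.
    X = np.column_stack([x[p - k:-k] for k in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1.0 - (resid @ resid) / (y @ y)
    return len(y) * r2

rng = np.random.default_rng(7)

# Under H0 (white noise) the LM statistic is approximately chi^2_p.
noise = rng.standard_normal(500)
lm_noise = breusch_godfrey_lm(noise, p=2)

# AR(1) with phi = 0.8: the statistic becomes very large and H0 is rejected.
ar = np.zeros(500)
for i in range(1, 500):
    ar[i] = 0.8 * ar[i - 1] + rng.standard_normal()
lm_ar = breusch_godfrey_lm(ar, p=2)
```

The statistic for the white-noise series stays near its \(\chi^2_2\) expectation, while the autocorrelated series produces a value far beyond any reasonable critical value.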
3 Box–Pierce test
Let’s consider a sequence of \(n\) IID observations, i.e. \(u_t \sim \text{IID}(0, \sigma^2)\). Then, the autocorrelation at lag \(k\) can be estimated as: \[ \hat{\rho}_k = \widehat{\text{Corr}}(u_t, u_{t-k}) = \frac{\sum_{t=k+1}^{n} u_{t} u_{t-k}}{\sum_{t=1}^{n} u_{t}^2} \text{.} \]
Moreover, since \(\hat{\rho}_k \sim N\left(0,\frac{1}{n}\right)\), standardizing \(\hat{\rho}_k\) one obtains
\[
\sqrt{n} \hat{\rho}_k \sim N(0,1) \implies n \hat{\rho}_k^2 \sim \chi^{2}_{1}\text{.}
\] It is possible to generalize this result to \(m\) autocorrelations. Specifically, let's define the vector containing the first \(m\) standardized autocorrelations. By the previous result, it converges in distribution to a multivariate standard normal, i.e. \[
\sqrt{n} \begin{bmatrix} \hat{\rho}_1 \\ \vdots \\ \hat{\rho}_k \\ \vdots \\ \hat{\rho}_m \end{bmatrix} \underset{n \to \infty }{\overset{d}{\longrightarrow}} \mathcal{N}(\boldsymbol{0}_{m}, \mathbb{I}_{m \times m}) \text{.}
\] Remembering that the sum of the squares of \(m\) independent standard normal random variables is distributed as a \(\chi^2_m\), one obtains the Box–Pierce test as \[
BP_{m} = n\sum_{k = 1}^{m} \hat{\rho}_{k}^{2} \underset{H_0}{\overset{d}{\longrightarrow}} \chi^2_{m} \text{,}
\] where the null hypothesis and the alternative are \[
\begin{aligned}
{} & H_0: \rho_1 = \dots = \rho_m = 0 \\
& H_1: \rho_i \neq 0 \text{ for at least one } i
\end{aligned}
\] Note that such a test, also known as a portmanteau test, provides an asymptotic result valid only for large samples.
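A minimal sketch of the statistic (assuming NumPy; `box_pierce` is an illustrative name): estimate \(\hat{\rho}_k\) as above and sum the squares:

```python
import numpy as np

def box_pierce(u, m):
    """Box-Pierce statistic BP_m = n * sum_{k=1}^{m} rho_k^2."""
    u = np.asarray(u, dtype=float)
    n = len(u)
    den = np.sum(u ** 2)
    # Sample autocorrelations for lags 1..m.
    rho = np.array([np.sum(u[k:] * u[:-k]) / den for k in range(1, m + 1)])
    return n * np.sum(rho ** 2)

rng = np.random.default_rng(0)

# Under H0 (white noise), BP_5 is approximately chi^2_5, so typically near 5.
noise = rng.standard_normal(1000)
bp_noise = box_pierce(noise, m=5)

# AR(1) with phi = 0.8: the autocorrelations are large and BP_5 explodes.
ar = np.zeros(500)
for i in range(1, 500):
    ar[i] = 0.8 * ar[i - 1] + rng.standard_normal()
bp_ar = box_pierce(ar, m=5)
```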
3.1 Ljung-Box test
Since the Box–Pierce test provides a consistent framework only for large samples, when dealing with small samples it is preferable to use an alternative version, known as the Ljung-Box test, defined with a correction factor, i.e. \[ LB_m = n(n+2)\sum_{k = 1}^{m} \frac{\hat{\rho}_{k}^{2}}{n-k} \underset{H_0}{\overset{d}{\longrightarrow}} \chi^2_{m} \]
Whichever test statistic is used, i.e. \(Q_m = BP_m\) or \(Q_m = LB_m\), the decision rule is the same: \[ \begin{cases} Q_m > \chi^2_{1-\alpha, m} \quad H_0 \text{ rejected} \\ Q_m \le \chi^2_{1-\alpha, m} \quad H_0 \text{ not rejected} \end{cases} \] where \(\chi^2_{1-\alpha, m}\) is the quantile of probability \(1-\alpha\) of the \(\chi^2_{m}\) distribution with \(m\) degrees of freedom. If we reject \(H_0\), the time series presents autocorrelation; otherwise, if \(H_0\) is not rejected, there is no evidence of autocorrelation.
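The Ljung-Box correction and the decision rule can be sketched as follows (assuming NumPy; `ljung_box` is an illustrative name, and the critical value \(\chi^2_{0.95,\,5} \approx 11.07\) is hard-coded to keep the example dependency-free):

```python
import numpy as np

def ljung_box(u, m):
    """Ljung-Box statistic LB_m = n(n+2) * sum_{k=1}^{m} rho_k^2 / (n-k)."""
    u = np.asarray(u, dtype=float)
    n = len(u)
    den = np.sum(u ** 2)
    rho = np.array([np.sum(u[k:] * u[:-k]) / den for k in range(1, m + 1)])
    k = np.arange(1, m + 1)
    return n * (n + 2) * np.sum(rho ** 2 / (n - k))

rng = np.random.default_rng(1)

# Small AR(1) sample with phi = 0.8: clearly autocorrelated.
ar = np.zeros(300)
for i in range(1, 300):
    ar[i] = 0.8 * ar[i - 1] + rng.standard_normal()

m = 5
crit = 11.07              # chi^2 quantile, 1 - alpha = 0.95, m = 5 d.o.f.
lb = ljung_box(ar, m)
reject = lb > crit        # True: H0 rejected, the series is autocorrelated
```

In small samples the \((n+2)/(n-k)\) factor inflates the statistic slightly relative to Box–Pierce, improving the \(\chi^2_m\) approximation.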
Citation
@online{sartini2024,
author = {Sartini, Beniamino},
title = {Autocorrelation Tests},
date = {2024-05-01},
url = {https://greenfin.it/statistics/tests/autocorrelation-tests.html},
langid = {en}
}