A new Laplace second-order autoregressive time-series model - NLAR(2)

A time-series model for Laplace (double-exponential) variables having second-order autoregressive structure (NLAR(2)) is presented. The model is Markovian and extends the second-order process in exponential variables, NEAR(2), to the case where the marginal distribution is Laplace. The properties of the Laplace distribution make it useful for modeling in some cases where the normal distribution is not appropriate. The time-series model has four parameters and is easily simulated. The autocorrelation function for the process is derived as well as third-order moments to further explore dependency in the process. The model can exhibit a broad range of positive and negative correlations and is partially time reversible. Joint distributions and the distribution of differences are presented for the first-order case NLAR(1).


I. INTRODUCTION
IN STANDARD time-series analysis, one assumes the marginal distributions of {X_n} are normal. However, a Gaussian distribution will not always be appropriate. In earlier works stationary non-Gaussian time-series models were developed for variables with positive and highly skewed marginal distributions [1]-[6].
Other situations still remain for which Gaussian marginals are inappropriate, i.e., where the marginal time-series variable being modeled, although not skewed or inherently positive valued, has a large kurtosis or a long-tailed distribution. The position errors in a large navigation system have such a distribution. In particular, Hsu [7] modeled pooled position errors using the double exponential distribution. McGill [8] showed that the Laplace distribution provides a characterization of the error in a timing device under periodic excitation. Again, speech waves are modeled using Laplace variables [9]. In the "speech-like" process given by the linear autoregressive (AR(1)) model

X_n = cX_{n-1} + (1 - c^2)^{1/2} eps_n,    (1.1)

where 0.8 <= c < 0.9, the innovation sequence {eps_n} is independent and identically distributed (i.i.d.) Laplace [10]. In image coding systems using a two-dimensional discrete cosine transform, Reininger and Gibson [11] showed that the Laplace distribution gives the best approximation to the distribution of the non-DC coefficients. Recently Sethia and Anderson [12] required a stationary autoregressive process with Laplace marginals in their research in communications technology. (Manuscript received May 14, 1984; revised March 12, 1985.)
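As an illustrative aside (an editorial sketch, not part of the original development), the "speech-like" model (1.1) is easy to simulate; the value c = 0.85 below is an arbitrary choice from the stated range, and the standard Laplace innovations are generated as differences of two unit exponentials:

```python
import random

def standard_laplace(rng):
    """Standard Laplace variable as the difference of two unit exponentials."""
    return rng.expovariate(1.0) - rng.expovariate(1.0)

def simulate_speech_ar1(n, c=0.85, seed=0):
    """Simulate X_n = c*X_{n-1} + (1 - c^2)**0.5 * eps_n with i.i.d. Laplace eps_n."""
    rng = random.Random(seed)
    scale = (1.0 - c * c) ** 0.5  # keeps the stationary variance equal to var(eps) = 2
    x, path = 0.0, []
    for _ in range(n):
        x = c * x + scale * standard_laplace(rng)
        path.append(x)
    return path

xs = simulate_speech_ar1(100_000)
var = sum(v * v for v in xs) / len(xs)  # stationary variance, should be near 2
```

Note that the marginal distribution of this linear model is not Laplace; only its innovations are, in contrast with the NLAR models of this paper.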
Some of the special properties of the double exponential distribution are discussed briefly in the next section.
The approach taken in this paper to develop a family of Laplacian time-series models (NLAR(2)) follows that of the earlier work for the new exponential autoregressive (NEAR) model in [4] and [6]. In Section III we establish the validity of the four-parameter, Markovian, random-coefficient linear model by analyzing the innovation structure. It is shown that a convex combination of three scaled Laplace variables can be combined with an independent pair of Laplace variables to obtain another Laplace variable. Necessary and sufficient conditions for the existence of the NLAR(2) model are given using results of Nicholls and Quinn [13].
The random-coefficient approach is not the only way to generate Laplace variables with a specified correlation structure. The literature contains numerous articles on the generation of random sequences. One approach put forth in several papers [14]-[18] involves passing white Gaussian noise through a linear filter followed by a zero-memory nonlinear transform. This is a general procedure that produces exactly the required marginal distribution and a good approximation to the autocorrelation structure. However, the scheme lacks the simplicity of the method being proposed, which is just a random-coefficient linear combination of Laplacian random variables. Moreover, the filtering approach produces, for example, in the first-order autoregressive case, only one process. In nonnormal time series, however, there are infinitely many processes with a given marginal distribution and autocorrelation structure, and it is important to be able to model this multiplicity. The NLAR(2) family does this; the differences among the various NLAR(2) processes can, for instance, be explored through third- and fourth-order joint moments.
The NLAR(2) time-series model provides great flexibility to systems modeling because of the broad range of correlations and dependence structure that can be obtained with the use of the four parameters. We demonstrate in Section V that the correlations {rho(l)} satisfy Yule-Walker-type equations as in the AR(2) process. We investigate the parameter space within which the NLAR(2) model is valid.
Finally, in Section VI we demonstrate the high degree of symmetry underlying the NLAR(2) model by showing that E(X_i X_j X_k) = 0 for all i, j, k. This property is useful in model fitting and in the determination of reversibility/directionality in the model. We show that further analysis of the residuals as given by Lawrance and Lewis [19] is necessary to further address directionality in the NLAR(2) model. (U.S. Government work not protected by U.S. copyright.)
Several cases of the NLAR(2) family are analogous to those developed in [6] for the NEAR(2) model and will not be listed here. However, the results for the first-order NLAR(1) model presented in Section IV are new.

II. THE LAPLACE DISTRIBUTION
The Laplace distribution is also known as the double exponential. In general, the density of a Laplace distributed variable L has two parameters: a location parameter -inf < mu < +inf and a scale parameter lambda > 0. The parameter mu is fixed here at zero. For -inf < x < inf we have

f_L(x; lambda) = (1/(2*lambda)) exp(-|x|/lambda).    (2.1)
In what follows we define {L_n} as a sequence of i.i.d. random variables from the Laplace distribution with lambda = 1 (standard Laplace). The characteristic function of the standard Laplace variable is

phi_L(omega) = 1/(1 + omega^2),    (2.2)

and we have

E(L^n) = 0 if n is odd, n! if n is even,    (2.3)

so that E(L) = 0, var(L) = 2, the skewness is zero, and the kurtosis is 3. The value of the kurtosis indicates that the symmetric Laplace distribution has heavier tails than the normal distribution, for which the kurtosis is 0.

The sum Y = sum_{i=1}^{n} L_i of n >= 2 i.i.d. Laplace variables can be written as the difference of two i.i.d. random variables Y_1, Y_2 with Gamma distribution, shape parameter n, and scale parameter 1. This follows immediately from the characteristic function

phi_Y(omega) = (1 + omega^2)^{-n} = (1 + i*omega)^{-n} (1 - i*omega)^{-n}.    (2.6)

For n = 1 we recognize (2.6) as the product of the characteristic functions of two i.i.d. innovation variables e_1 and -e_2 as described in the EAR(1) process in [1]. Also, for |rho| < 1,

phi_L(omega)/phi_L(rho*omega) = (1 + rho^2 omega^2)/(1 + omega^2) = rho^2 + (1 - rho^2)/(1 + omega^2)    (2.7)

is itself a characteristic function, namely that of the innovation

eps_n = 0 with probability rho^2, L_n with probability 1 - rho^2.    (2.8)

Thus, with {eps_n} i.i.d. as in (2.8), the first-order linear autoregressive equation X_n = rho*X_{n-1} + eps_n defines a stationary time series {X_n} with double exponential marginal distribution for all n. We call this the LAR(1) model. It has the same properties as the EAR(1) model in [1], with two important differences. First, if -1 < rho < 0, negative serial correlations at odd lags are obtained. Second, it is partially time reversible in the sense that, for all l and n, both of the following are true: E(X_n^2 X_{n+l}) = 0 and E(X_n X_{n+l}^2) = 0. These results are derived in Sections IV and VI. Note, however, that since LAR(1) is a linear AR(1) model with non-Gaussian innovations {eps_n}, it is not fully time reversible [21]. Finally, note that this LAR(1) model has the zero-defect property: when eps_n = 0, then X_n/X_{n-1} = rho, and rho can be determined exactly in long enough runs of the series {X_n}. This property is generally undesirable, but the broader NLAR(2) model developed in the next section is free of this defect, except for the special parameter values for which it reduces to the LAR(1) model.
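As a computational illustration (an editorial sketch, not from the original analysis), the LAR(1) recursion can be simulated by taking the innovation to be 0 with probability rho^2 and a standard Laplace variable otherwise, the mixture consistent with the zero-defect property described above:

```python
import random

def standard_laplace(rng):
    """Standard Laplace variable as the difference of two unit exponentials."""
    return rng.expovariate(1.0) - rng.expovariate(1.0)

def simulate_lar1(n, rho=0.6, seed=1):
    """Simulate X_n = rho*X_{n-1} + eps_n, eps_n = 0 w.p. rho^2, Laplace w.p. 1-rho^2."""
    rng = random.Random(seed)
    x = standard_laplace(rng)  # start in the stationary Laplace marginal
    path = [x]
    for _ in range(n - 1):
        eps = 0.0 if rng.random() < rho * rho else standard_laplace(rng)
        x = rho * x + eps
        path.append(x)
    return path

xs = simulate_lar1(200_000)
var = sum(v * v for v in xs) / len(xs)                        # should be near 2
lag1 = sum(a * b for a, b in zip(xs, xs[1:])) / (len(xs) - 1)
rho1 = lag1 / var                                             # should be near rho = 0.6
```

The empirical variance and lag-1 serial correlation provide a quick check that the marginal stays standard Laplace (variance 2) and that the serial correlation equals rho.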
III. THE NLAR(2) MODEL

The NLAR(2) process {X_n} is defined by the random-coefficient linear equation

X_n = B_n(1) X_{n-1} + B_n(2) X_{n-2} + eps_n,    (3.1)

where {eps_n} is an i.i.d. innovation sequence independent of past values of the process, and the pair of random coefficients takes the values

(B_n(1), B_n(2)) = (beta_1, 0) with probability alpha_1, (0, beta_2) with probability alpha_2, (0, 0) with probability 1 - alpha_1 - alpha_2,    (3.2)

independently at each n, with 0 <= alpha_1, alpha_2 and alpha_1 + alpha_2 <= 1.

Equations (3.1) and (3.2) have a direct physical interpretation. The observed process at time n, X_n, is one of only three possibilities: 1) X_n is some multiple of its value at time n-1, beta_1 X_{n-1}, plus some independent random noise eps_n; 2) X_n is some multiple (possibly different from beta_1) of its value at time n-2, beta_2 X_{n-2}, plus some independent random noise; or 3) X_n is just random noise eps_n, independent of everything up to time n.
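The three branches above can be sketched directly as a random-coefficient recursion. The following is a structural sketch only: the innovation sampler is passed in as a parameter, since the exact convex-combination-of-Laplaces innovation law comes from Theorem 1 and is not reproduced here; the degenerate call at the end uses a constant "innovation" purely to check the selection logic.

```python
import random

def nlar2_step(x1, x2, alpha1, alpha2, beta1, beta2, eps, rng):
    """One step of (3.1)-(3.2): pick one of the three branches at random."""
    u = rng.random()
    if u < alpha1:                 # branch 1: multiple of X_{n-1} plus noise
        return beta1 * x1 + eps
    if u < alpha1 + alpha2:        # branch 2: multiple of X_{n-2} plus noise
        return beta2 * x2 + eps
    return eps                     # branch 3: pure innovation

def simulate_nlar2(n, alpha1, alpha2, beta1, beta2, eps_sampler, seed=2):
    rng = random.Random(seed)
    x1 = x2 = 0.0
    path = []
    for _ in range(n):
        x = nlar2_step(x1, x2, alpha1, alpha2, beta1, beta2, eps_sampler(rng), rng)
        x2, x1 = x1, x
        path.append(x)
    return path

# Degenerate check: alpha1 = alpha2 = 0 reduces (3.1) to X_n = eps_n (the i.i.d. case).
iid = simulate_nlar2(5, 0.0, 0.0, 0.5, 0.5, lambda rng: 1.0)
```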
The work of Nicholls and Quinn [13] on random-coefficient autoregressive models is relevant to the NLAR(2) process. They have given necessary and sufficient conditions for the existence of the unique covariance stationary solution to the following class of univariate random-coefficient autoregressive (RCA) models of order k, RCA(k):

Z_n = sum_{i=1}^{k} {gamma_i + B_n(i)} Z_{n-i} + eps_n,    n = 0, +-1, +-2, ...,    (3.3)

where the following conditions hold.
1) The gamma_i are real constants.
2) {B_n = (B_n(1), ..., B_n(k))'} is an i.i.d. sequence of zero-mean random vectors with finite second moments, independent of {Z_m, m < n}.
3) {eps_n} is a scalar second-order stationary, independent process, independent of {B_n}, with E(eps_n^2) = sigma^2 for all n.
No marginal distribution is ascribed to solutions of the general RCA(k) models in [13]. It is, in fact, determined by the independent choices of the innovation and the random coefficients. However, by specifying the marginal distribution and the random coefficients, we restrict the innovation more than the RCA(k) model does. If the X_n in (3.1) or Z_n in (3.3) have a standard Laplace marginal distribution, then all their moments are given by (2.3). From (3.1) or (3.3) it follows that for all k = 1, 2, ...,

E(eps_n^{2k}) = (2k)! [ 1 - (alpha_1 beta_1^{2k} + alpha_2 beta_2^{2k}) - sum_{i=1}^{k-1} {(alpha_1 beta_1^{2(k-i)} + alpha_2 beta_2^{2(k-i)}) E(eps_n^{2i})/(2i)!} ],    (3.5)

and for this to be true it is necessary that

alpha_1 beta_1^{2k} + alpha_2 beta_2^{2k} < 1.    (3.6)

Since alpha_1 and alpha_2 are probabilities, it is necessary that |beta_i| <= 1 for i = 1, 2 for (3.6) to hold. If not, there exists for any alpha_1 and alpha_2 an integer m such that alpha_1 beta_1^{2m} or alpha_2 beta_2^{2m} is greater than 1. We have now established the necessary conditions on the innovation {eps_n}, and on beta_1 and beta_2, for the existence of a unique strictly stationary solution to (3.3) with a marginal Laplace distribution and with the random coefficients given by (3.2). In Theorem 1 we show that |beta_i| <= 1 for i = 1, 2 is also a sufficient condition and that such an innovation random variable eps_n exists. We also give its explicit form: a convex combination of Laplace random variables. For simplicity we regard the parameter space as being described by strict inequalities for alpha_i and beta_i. The proof of Theorem 1 closely follows the one in [6] for the NEAR(2) model and is not given here.
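As a numerical aside (an editorial sketch), the even-moment relation above can be evaluated recursively, assuming a symmetric innovation so that odd cross-terms vanish; the parameter values below are arbitrary, and the k = 1 case reduces to E(eps_n^2) = 2{1 - (alpha_1 beta_1^2 + alpha_2 beta_2^2)}:

```python
from math import factorial

def innovation_even_moments(alpha1, alpha2, beta1, beta2, kmax):
    """Return [E(eps^0), E(eps^2), ..., E(eps^{2*kmax})] via the recursion above."""
    m = [1.0]  # E(eps^0) = 1
    for k in range(1, kmax + 1):
        lead = alpha1 * beta1 ** (2 * k) + alpha2 * beta2 ** (2 * k)
        tail = sum(
            (alpha1 * beta1 ** (2 * (k - i)) + alpha2 * beta2 ** (2 * (k - i)))
            * m[i] / factorial(2 * i)
            for i in range(1, k)
        )
        m.append(factorial(2 * k) * (1.0 - lead - tail))
    return m

# Arbitrary admissible parameters: alpha's are probabilities, |beta_i| <= 1.
mom = innovation_even_moments(0.4, 0.3, 0.5, -0.5, 3)
```

All entries must be positive for a valid innovation distribution, which is one way to probe the admissible parameter region numerically.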
Many special cases of the NLAR(2) model could be mentioned. The following have one or more of the parameters at their boundary value and have valid but less complicated results for the distribution of {eps_n} in (3.7). If alpha_1 = alpha_2 = 0, then {eps_n} is the i.i.d. sequence {L_n} and X_n = eps_n. If alpha_1 = 1, then {eps_n} is the innovation of the LAR(1) model derived from (2.7) and (2.8). If |beta_1| = |beta_2| = 1 and alpha_1 + alpha_2 < 1, then each eps_n is distributed as a scaled Laplace random variable, (1 - alpha_1 - alpha_2)^{1/2} L_n. This model is called the TLAR(2) model, which is easily extendable to higher-order autoregressions. If alpha_1 < 1 and alpha_2 = 0 or beta_2 = 0, then {eps_n} is the innovation of the new first-order autoregressive model NLAR(1). This model is the subject of the next section.
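The TLAR(2) case is simple enough to simulate directly, since its innovation is the scaled Laplace variable given above. The sketch below takes beta_1 = beta_2 = 1 with arbitrary alpha's; the lag-1 correlation check assumes the Yule-Walker coefficients are alpha_i * beta_i (the NEAR(2) analog), which gives rho(1) = alpha_1/(1 - alpha_2):

```python
import random

def standard_laplace(rng):
    return rng.expovariate(1.0) - rng.expovariate(1.0)

def simulate_tlar2(n, alpha1=0.5, alpha2=0.3, seed=3):
    """TLAR(2): beta1 = beta2 = 1, innovation sqrt(1 - alpha1 - alpha2) * L_n."""
    rng = random.Random(seed)
    scale = (1.0 - alpha1 - alpha2) ** 0.5
    x1 = x2 = 0.0
    path = []
    for _ in range(n):
        eps = scale * standard_laplace(rng)
        u = rng.random()
        if u < alpha1:
            x = x1 + eps           # branch 1, beta1 = 1
        elif u < alpha1 + alpha2:
            x = x2 + eps           # branch 2, beta2 = 1
        else:
            x = eps                # branch 3
        x2, x1 = x1, x
        path.append(x)
    return path

xs = simulate_tlar2(300_000)
var = sum(v * v for v in xs) / len(xs)                               # near 2
rho1 = sum(a * b for a, b in zip(xs, xs[1:])) / ((len(xs) - 1) * var)
# rho1 expected near alpha1/(1 - alpha2) = 0.5/0.7
```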

V. AUTOCORRELATION STRUCTURE OF THE NLAR(2) MODEL
In this section we show that the autocorrelations rho(l) = Corr(X_n, X_{n-l}), l = 0, +-1, +-2, ..., of the NLAR(2) model satisfy Yule-Walker-type difference equations; thus, the second-moment dependency aspects are indistinguishable in form from those of the AR(2) process. We also compare the admissible regions of an AR(2) with an NLAR(2) with four parameters and an NLAR(2) with only two parameters. All the plots in Fig. 1 were generated from a grid of equally spaced values of alpha_1 and alpha_2.

VI. THIRD-ORDER MOMENTS AND TIME REVERSIBILITY

In Section V we demonstrated that the second-moment dependency aspects of the NLAR(2) model were indistinguishable in form from those of the ordinary AR(2) model. Also, it is well known that if a linear autoregressive model is not Gaussian, then the process is not completely determined by its first and second moments. Thus in model identification it becomes necessary to examine third-order moments to further identify the process. Special third-order moments E(X_n^2 X_{n+l}), for all l, are known as directional moments. If the directional moments for all l are equal, which is necessary for a process to be fully time reversible, we say the process is partially time reversible in the sense of directional moments.
A process is fully time reversible [23] if the joint distribution of X_n, X_{n+1}, ..., X_{n+r} is the same as that of X_{n+r}, X_{n+r-1}, ..., X_n for all r and for all n. Since LAR(1), a special case of NLAR(2), is not fully time reversible, NLAR(2) is in general not time reversible.
In this section we show by induction arguments that all the third-order moments of NLAR(2) are the same as those of the Gaussian AR(2), i.e., E(X_i X_j X_k) = 0 for all i, j, k. This implies in particular that the directional moments of NLAR(2) are equal and therefore that NLAR(2) is always partially time reversible.
The residual analysis in [6] and [19], using cross correlations between the linear autoregressive residuals R_n = X_n - a_1 X_{n-1} - a_2 X_{n-2} and their squares R_n^2, does not shed any new light on the directionality/reversibility of the NLAR(2) model, nor does it help identify the appropriateness of the Laplacian model. This is because all third moments have zero expectation. Thus, we see that E(R_n^2 R_{n+l}) = E(R_n R_{n+l}^2) = 0 for all l.
DEWALD AND LEWIS: LAPLACE AUTOREGRESSIVE TIME SERIES

Note that the basis for the residual analysis using the {R_n} process is that this process is uncorrelated, but not necessarily independent. The moment results show that the R_n have zero skewness. In fact, it is easy to show that the distribution of R_n is the same as the distribution of -R_n. Thus, the R_n are symmetric, although they will, of course, not have Laplacian distributions.
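These symmetry results can be checked empirically. The sketch below (an editorial illustration) simulates the LAR(1) special case, using the zero/Laplace innovation mixture of Section II, and verifies that the lag-1 directional moments E(X_n^2 X_{n+1}) and E(X_n X_{n+1}^2) are both near zero:

```python
import random

def standard_laplace(rng):
    return rng.expovariate(1.0) - rng.expovariate(1.0)

def simulate_lar1(n, rho=0.6, seed=4):
    """LAR(1): X_n = rho*X_{n-1} + eps_n, eps_n = 0 w.p. rho^2, Laplace otherwise."""
    rng = random.Random(seed)
    x = standard_laplace(rng)
    path = [x]
    for _ in range(n - 1):
        eps = 0.0 if rng.random() < rho * rho else standard_laplace(rng)
        x = rho * x + eps
        path.append(x)
    return path

xs = simulate_lar1(500_000)
m21 = sum(a * a * b for a, b in zip(xs, xs[1:])) / (len(xs) - 1)  # E(X_n^2 X_{n+1})
m12 = sum(a * b * b for a, b in zip(xs, xs[1:])) / (len(xs) - 1)  # E(X_n X_{n+1}^2)
```

Both sample moments should be small relative to the scale E(|X|^3) of the process, consistent with E(X_i X_j X_k) = 0.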

VII. CONCLUSIONS AND FURTHER ANALYSIS
We have demonstrated that like the other canonical distributions, the Laplace distribution has several special properties. We presented and justified the use of a very broad Markovian model that has four parameters and the correlation structure and third-order behavior of an AR(2) model. It is easy to simulate on a computer.
There are many other uses of the NLAR(2) construction within the context of time series analysis. A moving average model (NLMA(1)) and a mixed model (NLARMA(1,1)) have been derived. A detailed discussion of these models along with other possibilities will be reported elsewhere.
If the residual analysis for nonlinear autoregressive processes suggested in [6] and [19] is to be useful in modeling with NLAR(2), it must be extended to consider at least some special fourth-order moments, such as E(X_{n-l}^2 R_n^2), E(R_{n-l}^2 R_n^2), or E(X_{n-l}^3 R_n), in order to distinguish the process from other candidates.
Finally, the joint probability density function for the NLAR(l) model will be used elsewhere to investigate the important problem of parameter estimation in the model. A likelihood analysis for the NLAR(2) model appears to be much more difficult, but is also possible.