XLSTAT-Time Series Analysis

The XLSTAT-Time Series Analysis module has been developed to provide XLSTAT users with a powerful solution for time series analysis and forecasting.

XLSTAT-Time Series Analysis functions provide you with outstanding tools to find out the degree of dependence between the values of a time series, to discover trends - seasonal or not, to apply specific pretreatments such as the Autoregressive Moving Average variants and finally to build predictive models.

Features

You can find tutorials that explain how XLSTAT-Time Series Analysis works here.

Demo version

A trial version of XLSTAT-Time Series Analysis is included in the main XLSTAT download.

Prices and ordering

These analyses are included in the XLStat-Forecast and XLStat-Premium packages.

 

 

 

DETAILED DESCRIPTIONS

Fourier transformation

Fourier transformation transforms one complex-valued function of a real variable into another. The domain of the original function is typically time. The domain of the new function is typically called the frequency domain. It describes which frequencies are present in the original function. In effect, the Fourier transform decomposes a function into oscillatory functions. The expression “Fourier transform” refers both to the frequency domain representation of a function, and to the process or formula that "transforms" one function into the other.

Fourier transformation use

Fourier transformation is used to transform a time series or a signal to its Fourier coordinates, or to do the inverse transformation. While the Excel function is limited to powers of two for the length of the time series, XLSTAT is not restricted. Outputs optionally include the amplitude and the phase.
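
For readers who want to experiment outside of Excel, the sketch below shows the same idea with NumPy (an illustration, not XLSTAT code); the series is hypothetical and, like XLSTAT, NumPy's FFT is not restricted to lengths that are powers of two.

```python
# Illustrative sketch (not XLSTAT code): Fourier transform of a series whose
# length is not a power of two, with amplitude, phase and inverse transform.
import numpy as np

t = np.arange(36)   # hypothetical monthly series of length 36
x = 10 + 3 * np.sin(2 * np.pi * t / 12) + np.random.normal(0, 0.5, size=t.size)

coeffs = np.fft.fft(x)             # complex Fourier coordinates
freqs = np.fft.fftfreq(t.size)     # corresponding frequencies (cycles per step)
amplitude = np.abs(coeffs)         # amplitude of each coefficient
phase = np.angle(coeffs)           # phase of each coefficient

x_back = np.fft.ifft(coeffs).real  # inverse transform recovers the series
```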

 

 

Spectral analysis

Spectral analysis transforms a time series into its coordinates in the space of frequencies, and then analyzes its characteristics in this space. The magnitude and phase can be extracted from the coordinates. It is then possible to build representations such as the periodogram or the spectral density, and to test whether the series is stationary. By studying the spectral density, seasonal components and/or noise can be identified. Spectral analysis is a very general method used in a variety of domains.

The spectral representation of a time series Xt, (t=1,…,n), decomposes Xt into a sum of sinusoidal components with uncorrelated random coefficients. From there we can obtain a decomposition of the autocovariance and autocorrelation functions into sinusoids.

Spectral density

The spectral density corresponds to the transform of a continuous time series. However, we usually have only access to a limited number of equally spaced data, and therefore, we need to obtain first the discrete Fourier coordinates (cosine and sine transforms), and then the periodogram. From the periodogram, using a smoothing function, we can obtain a spectral density estimate which is a better estimator of the spectrum.

The spectral density estimate (or discrete spectral average estimator) of the time series Xt uses weights. The weights are either fixed by the user or determined by the choice of a kernel. XLSTAT suggests the use of the following kernels:

  • Parzen
  • Quadratic spectral
  • Tukey-Hanning
  • Truncated
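
As an illustration of these ideas (and not of XLSTAT's exact computations), the sketch below builds a raw periodogram and smooths it with a simple set of fixed weights; XLSTAT instead derives its weights from one of the kernels listed above.

```python
# Illustrative sketch: raw periodogram and a discrete spectral average
# estimate obtained by smoothing with fixed weights (a 5-point window here).
import numpy as np

x = np.random.normal(size=200)             # hypothetical stationary series
n = x.size
x = x - x.mean()

coeffs = np.fft.rfft(x)
freqs = np.fft.rfftfreq(n)                 # Fourier frequencies in [0, 0.5]
periodogram = (np.abs(coeffs) ** 2) / n    # raw periodogram ordinates

weights = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
weights /= weights.sum()                   # symmetric smoothing weights
density_estimate = np.convolve(periodogram, weights, mode="same")
```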

White noise tests

XLSTAT optionally displays two test statistics and the corresponding p-values for white noise: Fisher's Kappa and Bartlett's Kolmogorov-Smirnov statistic.

XLSTAT-Time spectral analysis

Using fast and powerful methods, XLSTAT automatically computes the Fourier cosine and sine transforms of Xt, for each Fourier frequency, and then the various functions that derive from these transforms.

 

 

Descriptive statistics

One of the key issues in time series analysis is to determine whether the value we observe at time t depends on what has been observed in the past or not. If the answer is yes, then the next question is how.

Autocovariances, autocorrelations, and partial autocorrelations

The sample autocovariance function (ACVF) and the autocorrelation function (ACF) give an idea of the degree of dependence between the values of a time series. The visualization of the ACF or of the partial autocorrelation function (PACF) helps to identify the suitable models to explain the past observations and to do predictions. The theory shows that the PACF function of an AR(p) – an autoregressive process of order p - is zero for lags greater than p.

Cross-correlations

The cross-correlation function (CCF) allows you to relate two time series and to determine whether, and to what extent, they co-vary.
The ACVF, the ACF, the PACF and the CCF are all computed by this tool.
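
The sketch below illustrates these descriptive functions with statsmodels (shown only as an illustration; the series and the number of lags are arbitrary).

```python
# Illustrative sketch: sample ACVF, ACF and PACF of a series, and CCF between
# two series, computed with statsmodels.
import numpy as np
from statsmodels.tsa.stattools import acovf, acf, pacf, ccf

rng = np.random.default_rng(0)
x = rng.normal(size=120).cumsum()          # hypothetical series
y = x + rng.normal(scale=0.5, size=120)    # second series that co-varies with x

acvf_x = acovf(x)[:21]      # sample autocovariances, lags 0..20
acf_x = acf(x, nlags=20)    # sample autocorrelations
pacf_x = pacf(x, nlags=20)  # partial autocorrelations
ccf_xy = ccf(x, y)          # cross-correlations between x and y
```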

Normality and white noise tests at different time lags

One important step in time series analysis is the transformation of the series, the goal of which is to obtain white noise. Obtaining white noise means that all deterministic and autocorrelation components have been removed. Several white noise tests, based on the ACF, are available to test whether a time series can be assumed to be white noise or not.
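
The exact tests implemented by XLSTAT are not detailed here; as an illustration of an ACF-based white noise test, the sketch below applies the standard Ljung-Box test with statsmodels.

```python
# Illustrative sketch: Ljung-Box test, a standard ACF-based white noise test.
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(1)
e = rng.normal(size=200)           # hypothetical residual series

result = acorr_ljungbox(e, lags=[10, 20])   # statistics and p-values at lags 10 and 20
print(result)   # large p-values: no evidence against the white noise hypothesis
```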

 

 

Time Series Transformation

XLSTAT offers four different possibilities for transforming a time series Xt into Yt, (t=1,…,n):

Box-Cox transform (fixed or optimised)

Box-Cox transformation to improve the normality of the time series; the Box-Cox transformation is defined by the following equation:

Yt = ( Xt^λ - 1 ) / λ    if (Xt > 0, λ ≠ 0) or (Xt ≥ 0, λ > 0)
Yt = ln( Xt )            if (Xt > 0, λ = 0)

XLSTAT accepts a fixed value of λ, or it can find the value that maximizes the likelihood of the residuals, the model being a simple linear model with the time as sole explanatory variable.
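
For illustration, the Box-Cox transformation can be reproduced with SciPy as sketched below; note that SciPy's optimised λ maximizes the Box-Cox log-likelihood of the data itself, which is not exactly the criterion described above (likelihood of the residuals of a regression on time).

```python
# Illustrative sketch: Box-Cox transformation with a fixed or optimised lambda.
import numpy as np
from scipy import stats

x = np.exp(np.random.normal(size=100))   # hypothetical strictly positive series

y_fixed = stats.boxcox(x, lmbda=0.5)     # transform with a fixed lambda
y_opt, lam = stats.boxcox(x)             # transform with lambda chosen by maximum likelihood
print(lam)
```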

Differencing (1-B)d(1-Bs)D

Differencing removes trends and seasonalities in order to obtain a stationary time series. The differencing equation is:

Yt = (1-B)d (1-Bs)D Xt

where d is the order of the first differencing component, s is the period of the seasonal component, D is the order of the seasonal component, and B is the lag operator defined by:

BXt = Xt-1

The values of (d, D, s) can be chosen in a trial and error process, or guessed by looking at the descriptive functions (ACF, PACF). Typical values are (1,1,s), (2,1,s). s is 12 for monthly data with a yearly seasonality, 0 when there is no seasonality.
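
As an illustration, this differencing can be reproduced with pandas; the values d = 1, D = 1, s = 12 below are just one typical choice for monthly data.

```python
# Illustrative sketch: (1-B)^d (1-B^s)^D differencing with d = 1, D = 1, s = 12.
import numpy as np
import pandas as pd

x = pd.Series(np.random.normal(size=120)).cumsum()   # hypothetical monthly series

y = x.diff(1)      # apply (1-B) once     (d = 1)
y = y.diff(12)     # apply (1-B^12) once  (D = 1)
y = y.dropna()     # the first d + s*D values are undefined after differencing
```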

Detrending and deseasonalizing

Detrending and deseasonalizing use the classical decomposition model, which is written:

Xt = mt + st + et

where mt is the trend component and st the seasonal component, and et is a N(0,1) white noise component.

XLSTAT allows fitting this model in two separate and/or successive steps:

Detrending by polynomial regression

Xt = mt + et = Σi=0..k ai t^i + et

where k is the polynomial degree. The ai parameters are obtained by fitting a linear model to the data. The transformed time series is:

Yt = et = Xt - Σi=0..k ai t^i

Deseasonalization by linear model

Xt = st + et = µ + bi + et, i = t mod p

where p is the period. The bi parameters are obtained by fitting a linear model to the data. The transformed time series is:

Yt = et = Xt - µ - bi
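
The two steps can be illustrated as follows (a sketch with hypothetical data, a degree-1 trend and period 12; not XLSTAT's implementation).

```python
# Illustrative sketch: polynomial detrending followed by removal of seasonal
# means for a hypothetical monthly series (period p = 12, polynomial degree k = 1).
import numpy as np

rng = np.random.default_rng(2)
n, p = 120, 12
t = np.arange(n)
x = 0.05 * t + 2 * np.sin(2 * np.pi * t / p) + rng.normal(size=n)

# Step 1: detrending by polynomial regression
coeffs = np.polyfit(t, x, deg=1)
detrended = x - np.polyval(coeffs, t)

# Step 2: deseasonalization by subtracting the mean of each season i = t mod p
seasonal_means = np.array([detrended[t % p == i].mean() for i in range(p)])
y = detrended - seasonal_means[t % p]
```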

Note: there exist many other possible transformations. Some of them are available in the transformations tool of XLSTAT (see the "Preparing data" section). Linear filters may also be applied. Moving average smoothing methods which are linear filters are available in the "Smoothing" tool of XLSTAT.

 

 

Smoothing of time series

Several smoothing methods are available in the XLSTAT-Time software. They are described below.

Simple exponential smoothing

This model is sometimes referred to as Brown's Simple Exponential Smoothing, or the exponentially weighted moving average model.
Exponential smoothing is useful when one needs to model a value by simply taking into account past observations. It is called "exponential" because the weight of past observations decreases exponentially. This method is not very satisfactory in terms of prediction, as the predictions are constant beyond time n+1.
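
A minimal sketch of the underlying recursion is given below (the initial value and the smoothing parameter alpha are arbitrary choices; XLSTAT estimates the parameter by minimizing the MSE, as noted further down).

```python
# Minimal sketch of the simple exponential smoothing recursion
# S_t = alpha * x_t + (1 - alpha) * S_{t-1}.
import numpy as np

def simple_exponential_smoothing(x, alpha):
    s = np.empty(len(x))
    s[0] = x[0]                              # a simple choice of initial value
    for t in range(1, len(x)):
        s[t] = alpha * x[t] + (1 - alpha) * s[t - 1]
    return s

x = np.random.normal(loc=5.0, size=60)       # hypothetical series
smoothed = simple_exponential_smoothing(x, alpha=0.3)
# Predictions beyond the last observation are constant, equal to smoothed[-1].
```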

Double exponential smoothing

This model is sometimes referred to as Brown's Linear Exponential Smoothing or Brown's Double Exponential Smoothing. It allows taking into account a trend that varies with time. The predictions take into account the trend as it is for the last observed data.

Holt’s linear exponential smoothing

This model is sometimes referred to as the Holt-Winters non-seasonal algorithm. It enables taking into account a permanent component and a trend that varies with time. This model adapts more quickly to the data than double exponential smoothing, and involves a second parameter. The predictions for t > n take into account the permanent component and the trend component.

Holt-Winters seasonal additive model

This method considers a trend that varies with time and a seasonal component with a period p. The predictions take into account the trend and the seasonality. The model is called additive because the seasonality effect is stable and does not grow with time.

Holt-Winters seasonal multiplicative model

This method considers a trend that varies with time and a seasonal component with a period p. The predictions take into account the trend and the seasonality. The model is called multiplicative because the seasonal effect varies with the level of the series: the higher the values of the series, the larger the seasonal fluctuations.

Note 1: for all the above models, XLSTAT estimates the values of the parameters that minimize the mean square error (MSE). However, it is also possible to maximize the likelihood, as, apart from the Holt-Winters multiplicative model, these models can be written as ARIMA models. For example, simple exponential smoothing is equivalent to an ARIMA(0,1,1) model, and the Holt-Winters additive model is equivalent to an ARIMA(0,1,p+1)(0,1,0)p model, where p is the period. If you prefer to maximize the likelihood, we advise you to use the ARIMA procedure of XLSTAT.

Note 2: for all the above models, initial values for S, T and D are required. XLSTAT offers several options, including backcasting, to set these values. When backcasting is selected, the algorithm reverses the series, starts with simple initial values corresponding to the Y(x) option, then computes estimates and uses these estimates as initial values.

Moving average

This model is a simple way to take into account past and, optionally, future observations to predict values. It works as a filter that is able to remove noise. While with the smoothing methods described above an observation influences all future predictions (even if the decay is exponential), in the case of the moving average the memory is limited to q. If the constant l is set to zero, the prediction depends on the past q values and on the current value, and if l is set to one, it also depends on the next q values. Moving averages are often used as filters, and not as a way to make accurate predictions.

Fourier smoothing

The concept of the Fourier smoothing is to transform a time series into its Fourier coordinates, then remove part of the higher frequencies, and then transform the coordinates back to a signal. This new signal is a smoothed series.
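
A minimal sketch of this idea, with an arbitrary cut-off frequency, could look like this:

```python
# Illustrative sketch: Fourier smoothing by zeroing the higher frequencies
# and transforming back; the cut-off of 10 coefficients is arbitrary.
import numpy as np

t = np.arange(200)
x = np.sin(2 * np.pi * t / 50) + np.random.normal(0, 0.3, size=t.size)

coeffs = np.fft.rfft(x)
coeffs[10:] = 0                             # remove part of the higher frequencies
smoothed = np.fft.irfft(coeffs, n=t.size)   # transform back to a smoothed series
```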


 

 

ARIMA

XLSTAT-Time offers a wide selection of ARIMA models such as ARMA (Autoregressive Moving Average), an ARIMA (Autoregressive Integrated Moving Average) or a SARIMA (Seasonal Autoregressive Integrated Moving Average).

ARIMA algebra

The models of the ARIMA family make it possible to represent phenomena that vary with time in a compact way, and to predict future values with a confidence interval around the predictions.

The mathematical notation of ARIMA models differs from one author to another; the differences mostly concern the signs of the coefficients. XLSTAT uses the most common notation, shared by most software.
If we denote by Xt a series with mean µ, then if the series is supposed to follow an ARIMA(p,d,q)(P,D,Q)s model, we can write:

Yt = (1 - B)^d (1 - B^s)^D Xt - µ
Φ(B) Ø(B^s) Yt = θ(B) Θ(B^s) Zt,   where Zt ~ N(0, σ²)

with

Φ(z) = 1 - Σi=1..p Φi z^i,   Ø(z) = 1 - Σi=1..P Øi z^i
θ(z) = 1 + Σi=1..q θi z^i,   Θ(z) = 1 + Σi=1..Q Θi z^i

p is the order of the autoregressive part of the model.
q is the order of the moving average part of the model.
d is the differencing order of the model.
D is the differencing order of the seasonal part of the model.
s is the period of the model (for example 12 if the data are monthly data, and if one noticed a yearly cyclicity in the data).
P is the order of the autoregressive seasonal part of the model.
Q is the order of the moving average seasonal part of the model.

  • Remark 1: the Yt process is causal if and only if, for any z such that |z| ≤ 1, Φ(z) ≠ 0 and θ(z) ≠ 0.
  • Remark 2: if D=0, the model is an ARIMA(p,d,q) model. In that case, P, Q and s are considered as null.
  • Remark 3: if d=0 and D=0, the model simplifies to an ARMA(p,q) model.
  • Remark 4: if d=0, D=0 and q=0, the model simplifies to an AR(p) model.
  • Remark 5: if d=0, D=0 and p=0, the model simplifies to an MA(q) model.
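
For readers working outside of Excel, a seasonal ARIMA of this form can be fitted, for illustration, with statsmodels (the orders below are arbitrary and the data hypothetical; this is not XLSTAT's implementation).

```python
# Illustrative sketch: fitting an ARIMA(1,1,1)(1,1,0)12 model and producing
# forecasts with confidence intervals using statsmodels.
import numpy as np
import statsmodels.api as sm

y = np.random.normal(size=120).cumsum()     # hypothetical series

model = sm.tsa.SARIMAX(y, order=(1, 1, 1), seasonal_order=(1, 1, 0, 12))
fit = model.fit(disp=False)

forecast = fit.get_forecast(steps=12)
predictions = forecast.predicted_mean
conf_int = forecast.conf_int(alpha=0.05)    # 95% confidence intervals
```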

Explanatory variables

XLSTAT allows you to take into account explanatory variables through a linear model. Three different approaches are possible:

  1. OLS: A linear regression model is fitted using the classical linear regression approach, then the residuals are modeled using an (S)ARIMA model.
  2. CO-LS: If d or D and s are not zero, the data (including the explanatory variables) are differenced, then the corresponding ARMA model is fitted at the same time as the linear model coefficients using the Cochrane and Orcutt (1949) approach.
  3. GLS: A linear regression model is fitted, then the residuals are modeled using an (S)ARIMA model, then we loop back to the regression step, in order to improve the likelihood of the model by changing the regression coefficients using a Newton-Raphson approach.

Note: if no differencing is requested (d=0 and D=0), and if there are no explanatory variables in the model, the constant of the model is estimated using CO-LS.

 

 

Mann-Kendall Trend Tests

The Mann-Kendall trend test is a nonparametric test used to determine whether a trend is present in a series, even if there is a seasonal component in the series.

Mann-Kendall test history

This test is the result of the development of the nonparametric trend test first proposed by Mann (1945). It was further studied by Kendall (1975) and improved by Hirsch et al. (1982, 1984), who extended it to take seasonality into account.

Mann-Kendall trend test hypotheses

The null hypothesis H0 for these tests is that there is no trend in the series.

The three alternative hypotheses are that there is a negative, non-null, or positive trend.

The Mann-Kendall tests are based on the calculation of Kendall's tau measure of association between two samples, which is itself based on the ranks within the samples.

Mann-Kendall trend test

In the particular case of the trend test, the first series is an automatically generated, increasing time indicator whose ranks are obvious, which simplifies the calculations.

To calculate the p-value of this test, XLSTAT can calculate, as in the case of the Kendall tau test, an exact p-value if there are no ties in the series and if the sample size is less than 50. If an exact calculation is not possible, a normal approximation is used, for which a correction for continuity is optional but recommended.
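
A minimal sketch of the statistic and its normal approximation is given below (assuming no ties; XLSTAT also handles ties and the exact distribution for small samples).

```python
# Minimal sketch: Mann-Kendall S statistic, normal approximation with
# continuity correction, two-tailed p-value (no ties assumed).
import numpy as np
from scipy import stats

def mann_kendall(x):
    n = len(x)
    # S = sum of the signs of all pairwise differences x[j] - x[i], i < j
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0       # variance of S without ties
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)               # continuity correction
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p_value = 2 * (1 - stats.norm.cdf(abs(z)))     # two-tailed p-value
    return s, z, p_value

x = np.arange(30) + np.random.normal(scale=5, size=30)   # series with a trend
print(mann_kendall(x))
```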

Seasonal Mann-Kendall test

In the case of the seasonal Mann-Kendall test, we take into account the seasonality of the series. This means that for monthly data with a seasonality of 12 months, one will not try to find out whether there is a trend in the overall series, but whether there is a trend from one January to the next, from one February to the next, and so on.

For this test, we first calculate a Kendall's tau for each season, then calculate an average Kendall's tau. The variance of the statistic can be calculated assuming that the series are independent (e.g. the values of January and February are independent) or dependent, which requires the calculation of a covariance. XLSTAT allows both (serial dependence or not).

To calculate the p-value of this test, XLSTAT uses a normal approximation to the distribution of the average Kendall tau. A continuity correction can be used.

 

 

Homogeneity tests for time series


Principles of homogeneity tests for time series

Homogeneity tests enable you to determine whether a series may be considered homogeneous over time, or whether there is a time at which a change occurs.

There is a large number of homogeneity tests; XLSTAT-Time offers four of them (Pettitt, Buishand, SNHT, and von Neumann), for which the null hypothesis is that the time series is homogeneous between two given times.

The variety of the tests comes from the fact that there are many possible alternative hypotheses: changes in distribution, changes in average (one or more times) or presence of trend.

Homogeneity test p-value approximation

The tests presented in this tool correspond to the alternative hypothesis of a single shift. For all tests, XLSTAT provides p-values using Monte Carlo resamplings. Exact calculations are either impossible or too costly in computing time.

Note 1: If you have a clear idea of the time when the shift occurs, you can use the tests available in the parametric or nonparametric tests sections. For example, assuming that the variables follow normal distributions, you can use the z test (known variance) or Student's t test (estimated variance) to test the presence of a change at time t. If you believe that the variance changes, you can use a variance comparison test (an F-test in the normal case, for example, or Kolmogorov-Smirnov in a more general case).

Note 2: The tests presented below are sensitive to a trend (for example a linear trend). Before applying these tests, you need to be sure you want to identify a time at which there is a shift between two homogeneous series.

Pettitt’s test

Pettitt's test is a nonparametric test that requires no assumption about the distribution of the data. It is an adaptation of the rank-based Mann-Whitney test that allows identifying the time at which the shift occurs.
In his article of 1979, Pettitt describes the null hypothesis as being that the T variables follow the same distribution F, and the alternative hypothesis as being that at a time t there is a change of distribution. Nevertheless, the Pettitt test does not detect a change in distribution if there is no change of location. For example, if before time t the variables follow a normal N(0,1) distribution and from time t a N(0,3) distribution, the Pettitt test will not detect a change, just as a Mann-Whitney test would not detect a change of location in such a case. In this case, one should use a Kolmogorov-Smirnov based test or another method able to detect a change in a characteristic other than the location. We thus reformulate the null and alternative hypotheses:

H0: The T variables follow one or more distributions that have the same location parameter.

Two-tailed test: Ha: There exists a time t from which the location parameter of the variables changes.

Left-tailed test: Ha: There exists a time t from which the location of the variables is reduced by D.

Right-tailed test: Ha: There exists a time t from which the location of the variables is augmented by D.
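
A minimal sketch of Pettitt's statistic with a Monte Carlo p-value, in the spirit of the resampling approach mentioned above (an illustration, not XLSTAT's code), is shown below.

```python
# Minimal sketch: Pettitt's statistic K = max_t |U_t| and a Monte Carlo p-value.
import numpy as np

def pettitt_statistic(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    # U_t = sum over i <= t < j of sign(x_i - x_j)
    u = np.array([np.sign(x[:t + 1, None] - x[None, t + 1:]).sum()
                  for t in range(n - 1)])
    return np.abs(u).max(), np.abs(u).argmax() + 1   # statistic and shift time

def pettitt_test(x, n_resamples=2000, seed=0):
    rng = np.random.default_rng(seed)
    k_obs, t_shift = pettitt_statistic(x)
    # Monte Carlo: recompute the statistic on shuffled copies of the series
    k_sim = np.array([pettitt_statistic(rng.permutation(x))[0]
                      for _ in range(n_resamples)])
    return k_obs, t_shift, np.mean(k_sim >= k_obs)   # statistic, time, p-value

x = np.concatenate([np.random.normal(0, 1, 40), np.random.normal(2, 1, 40)])
print(pettitt_test(x))
```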

Alexandersson’s SNHT test

The SNHT test (Standard Normal Homogeneity Test) was developed by Alexandersson (1986) to detect a change in a series of rainfall data. The test is applied to a series of ratios that compare the observations of a measuring station with the average of several stations. The ratios are then standardized. The series of Xi corresponds here to the standardized ratios. The null and alternative hypotheses are determined by:

H0: The T variables Xi follow a N(0,1) distribution.

Ha: Between times 1 and n the variables follow an N(µ1, 1) distribution, and between n+1 and T they follow an N(µ2,1) distribution.

Buishand’s test

Buishand's test (1982) can be used on variables following any type of distribution, but its properties have been particularly studied for the normal case. In his article, Buishand focuses on the case of the two-tailed test, but for the Q statistic presented below the one-sided cases are also possible. Buishand also developed a second statistic, R, for which only a two-tailed hypothesis is possible.

In the case of the Q statistic, the null and alternative hypotheses are given by:

H0: The T variables follow one or more distributions that have the same mean.
Two-tailed test: Ha: There exists a time t from which the mean of the variables changes.

Left-tailed test: Ha: There exists a time t from which the mean of the variables is reduced by D.

Right-tailed test: Ha: There exists a time t from which the mean of the variables is augmented by D.

von Neumann’s ratio test

The von Neumann ratio test is a very powerful test, but it does not allow the time of the change to be detected.

 

Cochrane-Orcutt model

Developed by D. Cochrane and G. Orcutt in 1949, the Cochrane-Orcutt estimation is a well-known econometric approach for accounting for serial correlation in the error term of a linear model. In the presence of serial correlation, the usual linear regression inference is invalid because the standard errors of the coefficient estimates are biased.
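
A minimal sketch of the Cochrane-Orcutt idea is given below (assumptions: a single explanatory variable, AR(1) errors, a fixed number of iterations; XLSTAT's implementation and outputs are richer).

```python
# Minimal sketch of the Cochrane-Orcutt iteration: fit OLS, estimate the AR(1)
# coefficient rho from the residuals, quasi-difference the data, re-fit, repeat.
import numpy as np

def cochrane_orcutt(y, x, n_iter=10):
    n = len(y)
    X = np.column_stack([np.ones(n), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]        # initial OLS estimates
    rho = 0.0
    for _ in range(n_iter):
        e = y - X @ beta                               # current residuals
        rho = (e[1:] @ e[:-1]) / (e[:-1] @ e[:-1])     # AR(1) coefficient of residuals
        y_star = y[1:] - rho * y[:-1]                  # quasi-differenced data
        X_star = X[1:] - rho * X[:-1]
        beta = np.linalg.lstsq(X_star, y_star, rcond=None)[0]
    return beta, rho

# Hypothetical data with AR(1) errors
rng = np.random.default_rng(3)
x = rng.normal(size=100)
e = np.zeros(100)
for t in range(1, 100):
    e[t] = 0.7 * e[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + e
print(cochrane_orcutt(y, x))
```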

Results of the Cochrane-Orcutt estimation in XLSTAT:

Goodness of fit statistics: The statistics related to the fitting are shown in this table:

  • Observations: The number of observations used in the calculations. In the formulas shown below, n is the number of observations.
  • Sum of weights: The sum of the weights of the observations used in the calculations. In the formulas shown below, W is the sum of the weights.
  • DF: The number of degrees of freedom for the chosen model (corresponding to the error part).
  • R²: The determination coefficient for the model. This coefficient, whose value is between 0 and 1, is only displayed if the constant of the model has not been fixed by the user. The R² is interpreted as the proportion of the variability of the dependent variable explained by the model. The nearer R² is to 1, the better is the model. The problem with the R² is that it does not take into account the number of variables used to fit the model.
  • Adjusted R²: The adjusted determination coefficient for the model. The adjusted R² can be negative if the R² is near to zero. This coefficient is only calculated if the constant of the model has not been fixed by the user. The adjusted R² is a correction to the R² which takes into account the number of variables used in the model.
  • MSE: The mean of the squares of the errors (MSE).
  • RMSE: The root mean square of the errors (RMSE) is the square root of the MSE.
  • MAPE: The Mean Absolute Percentage Error.
  • DW: The Durbin-Watson statistic. This coefficient is the order 1 autocorrelation coefficient and is used to check that the residuals of the model are not autocorrelated, given that the independence of the residuals is one of the basic hypotheses of linear regression. The user can refer to a table of Durbin-Watson statistics to check if the independence hypothesis for the residuals is acceptable.
  • Cp: Mallows Cp coefficient. The nearer the Cp coefficient is to p*, the less the model is biased.
  • AIC: Akaike’s Information Criterion. This criterion, proposed by Akaike (1973) is derived from the information theory and uses Kullback and Leibler's measurement (1951). It is a model selection criterion which penalizes models for which adding new explanatory variables does not supply sufficient information to the model, the information being measured through the MSE. The aim is to minimize the AIC criterion.
  • SBC: Schwarz’s Bayesian Criterion. This criterion, proposed by Schwarz (1978) is similar to the AIC, and the aim is to minimize it.
  • PC: Amemiya’s Prediction Criterion. This criterion, proposed by Amemiya (1980) is used, like the adjusted R² to take account of the parsimony of the model.

Analysis of variance table: It is used to evaluate the explanatory power of the explanatory variables. Where the constant of the model is not set to a given value, the explanatory power is evaluated by comparing the fit (as regards least squares) of the final model with the fit of the rudimentary model including only a constant equal to the mean of the dependent variable. Where the constant of the model is set, the comparison is made with respect to the model for which the dependent variable is equal to the constant which has been set.

The parameters of the model table: It displays the estimate of the parameters, the corresponding standard error, the Student’s t, the corresponding probability, as well as the confidence interval.

Model equation: The equation of the model is then displayed to make it easier to read or re-use the model.

Autocorrelation coefficient: The estimated value of the autocorrelation coefficient ρ.

Standardized coefficients table: The table of standardized coefficients is used to compare the relative weights of the variables. The higher the absolute value of a coefficient, the more important the weight of the corresponding variable. When the confidence interval around a standardized coefficient includes 0 (this can easily be seen on the chart of standardized coefficients), the weight of the variable in the model is not significant.

Predictions and residuals table: The predictions and residuals table shows, for each observation, its weight, the value of the qualitative explanatory variable, if there is only one, the observed value of the dependent variable, the model's prediction, the residuals and the confidence intervals together with the fitted prediction. Two types of confidence intervals are displayed: a confidence interval around the mean (corresponding to the case where the prediction would be made for an infinite number of observations with a set of given values for the explanatory variables) and an interval around the isolated prediction (corresponding to the case of an isolated prediction for the values given for the explanatory variables). The second interval is always greater than the first, the random values being larger. If the validation data have been selected, they are displayed at the end of the table.

Graphical results of the Cochrane-Orcutt estimation in XLSTAT:

The charts which follow show the results mentioned above.

If there is only one explanatory variable in the model, the first chart displayed shows the observed values, the regression line and both types of confidence interval around the predictions.

The second chart shows the normalized residuals as a function of the explanatory variable. In principle, the residuals should be distributed randomly around the X-axis. If there is a trend or a shape, this shows a problem with the model.

The three charts displayed next show respectively the evolution of the standardized residuals as a function of the dependent variable, the distance between the predictions and the observations (for an ideal model, the points would all be on the bisector), and the standardized residuals on a bar chart. The last chart quickly shows whether an abnormal number of values are outside the interval ]-2, 2[, given that the latter, assuming that the sample is normally distributed, should contain about 95% of the data.

 

Durbin-Watson Test

Developed by J. Durbin and G. Watson (1950, 1951), the Durbin-Watson test is used to detect autocorrelation in the residuals of a linear regression.

In practice, the errors are often autocorrelated, which leads to undesirable consequences such as sub-optimal least-squares estimates.

Assume that the error terms (epsilon) are stationary and normally distributed with mean 0. The null and alternative hypotheses of the Durbin-Watson test are:

H0: The errors are uncorrelated

H1: The errors are AR(1)

And the test statistic D is:

D = Σt=2..n ( et - et-1 )^2 / Σt=1..n et^2

where et is the residual of the regression at time t.

In the context of the Durbin-Watson test, the main problem is the evaluation of the p-values. In XLSTAT, Imhof's procedure (1961) is used to solve this problem.
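
For illustration, the D statistic itself is straightforward to compute from the residuals (the p-value computation via Imhof's procedure is more involved and is not sketched here).

```python
# Illustrative sketch: computing the Durbin-Watson D statistic from residuals.
import numpy as np
from statsmodels.stats.stattools import durbin_watson

residuals = np.random.normal(size=100)     # hypothetical regression residuals

d_manual = np.sum(np.diff(residuals) ** 2) / np.sum(residuals ** 2)
d_statsmodels = durbin_watson(residuals)   # same value, via statsmodels
print(d_manual, d_statsmodels)
```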

Results of the Durbin-Watson test in XLSTAT:

In XLSTAT, the results of the Durbin-Watson test are the following:

  • The tables of descriptive statistics show the simple statistics for the residuals. The number of observations, missing values, the number of non-missing values, the mean and the standard deviation (unbiased) are displayed.
  • The value of the statistic D and the p-value of the Durbin-Watson test. A short interpretation is also displayed.


