# Dixon test for outliers

## Principle of the test

The Dixon test (1950, 1951, 1953), which actually comprises six tests depending on the chosen statistic and on the number of outliers to identify, was developed to help determine whether the largest value, the smallest value, the two largest values, or the two smallest values of a sample can be considered outliers. The test assumes that the data are a sample drawn from a population that follows a normal distribution.
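As an illustration, the simplest of the six statistics, Dixon's r10 ratio (often called the Q statistic), compares the gap between a suspected extreme value and its nearest neighbor to the range of the sample. A minimal sketch, assuming the usual r10 definition on a sorted sample (the function name and the example data are illustrative):

```python
import numpy as np

def dixon_r10(x):
    """Dixon's r10 ratio for testing whether the smallest or the largest
    value of a sample is an outlier (sketch; assumes the sample is drawn
    from a normal distribution)."""
    x = np.sort(np.asarray(x, dtype=float))
    span = x[-1] - x[0]                  # range of the sample
    r_low = (x[1] - x[0]) / span         # statistic for the smallest value
    r_high = (x[-1] - x[-2]) / span      # statistic for the largest value
    return r_low, r_high

# Illustrative data: 10.2 sits far from the rest of the sample
r_low, r_high = dixon_r10([3.1, 3.3, 3.4, 3.6, 3.7, 10.2])
```

Each statistic is then compared to a critical value for the sample size and significance level; the larger the ratio, the more suspect the corresponding extreme value.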

## Detecting outliers

In statistics, an outlier is a value recorded for a given variable that seems unusual and suspiciously lower or higher than the other observed values. One can distinguish two types of outliers:

• An outlier can simply be due to a reading error (on a measuring instrument), a data-entry error, or a special event that disrupted the observed phenomenon to the point of making it incomparable to the others. In such cases, you must either correct the outlier, if possible, or otherwise remove the observation so that it does not disturb the planned analyses (descriptive analysis, modeling, prediction).
• An outlier can also be due to an atypical event that is nevertheless known or interesting to study. For example, if you study the presence of certain bacteria in river water, some samples may contain no bacteria while others contain aggregates with many bacteria. These data are of course important to keep. The models used should reflect this potential dispersion.

When there are outliers in the data, depending on the stage of the study, we must identify them, possibly with the aid of tests, flag them in reports (in tables or on graphical representations), and either delete them or use methods able to treat them as such.

To identify outliers, there are different approaches. For example, in classical linear regression, we can use Cook's distances, or submit the standardized residuals to a Grubbs test to see whether one or two values are abnormal. The classical Grubbs test can help identify one outlier, while the double Grubbs test allows identifying two. It is not recommended to use these methods repeatedly on the same sample; however, it may be appropriate if you really suspect that there are more than two outliers.
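As a sketch of the single-outlier case, the Grubbs statistic is the largest absolute standardized deviation from the sample mean. The function below (illustrative names and data; the comparison against the critical value, which involves the Student t distribution, is left aside) only computes the statistic and flags the candidate observation:

```python
import numpy as np

def grubbs_statistic(x):
    """Single-outlier Grubbs statistic G = max |x_i - mean| / s (sketch).
    G is then compared to a critical value for the sample size and the
    chosen significance level; here we only compute G and locate the
    most extreme observation."""
    x = np.asarray(x, dtype=float)
    z = np.abs(x - x.mean()) / x.std(ddof=1)  # absolute standardized deviations
    i = int(np.argmax(z))                      # index of the candidate outlier
    return z[i], i

G, idx = grubbs_statistic([3.1, 3.3, 3.4, 3.6, 3.7, 10.2])
```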

## Critical value and p-value for the Dixon test

The literature provides more or less accurate approximations of the critical value beyond which, for a given significance level α, the null hypothesis cannot be retained. XLSTAT instead provides an approximation of the critical values based on Monte Carlo simulations. The number of simulations is set to 1,000,000 by default, which yields critical values more reliable than those provided in the historical articles. On the basis of these simulations, XLSTAT also provides a p-value and the conclusion of the test for the significance level chosen by the user.
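The Monte Carlo approach can be sketched as follows: simulate many normal samples under the null hypothesis (no outlier), compute the Dixon statistic for each, and read the critical value and the p-value off the resulting empirical distribution. This is an illustration of the general idea using the r10 statistic, not XLSTAT's actual implementation (the simulation count is reduced here for speed):

```python
import numpy as np

rng = np.random.default_rng(0)

def dixon_r10_mc(n, n_sim=100_000, alpha=0.05, rng=rng):
    """Monte Carlo approximation of the critical value of Dixon's r10
    statistic for the largest value, under the null hypothesis of a
    normal sample with no outlier (sketch)."""
    sims = np.sort(rng.standard_normal((n_sim, n)), axis=1)
    # r10 statistic for the largest value of each simulated sample
    r = (sims[:, -1] - sims[:, -2]) / (sims[:, -1] - sims[:, 0])
    return np.quantile(r, 1 - alpha), r

crit, null_dist = dixon_r10_mc(n=6)

# p-value of an observed statistic: share of simulated values at least
# as extreme (0.915 is the illustrative value from the earlier example)
observed = 0.915
p_value = np.mean(null_dist >= observed)
```

With enough simulations, the empirical quantile converges to the exact critical value, which is why a large default such as 1,000,000 improves on the historical tables.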

## Results with XLSTAT

The results corresponding to the Dixon test are then displayed. An interpretation of the test is provided if a single iteration of the test was requested, or if no observation was identified as an outlier.
If several iterations were requested, a table is also displayed showing, for each observation removed from the sample, the iteration in which it was removed.

The z-scores are then displayed if they have been requested.
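For reference, the z-score of an observation is its distance from the sample mean expressed in sample standard deviations; a minimal sketch:

```python
import numpy as np

def z_scores(x):
    """Z-scores (standardized values): distance of each observation from
    the sample mean, in units of the sample standard deviation."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std(ddof=1)

z = z_scores([1.0, 2.0, 3.0])
```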

## References

Barnett V. and Lewis T. (1980). Outliers in Statistical Data. John Wiley and Sons, Chichester, New York, Brisbane, Toronto.

Dixon W.J. (1950). Analysis of extreme values. Annals of Math. Stat., 21, 488-506.

Dixon W.J. (1951). Ratios involving extreme values. Annals of Math. Stat., 22, 68-78.

Dixon W.J. (1953). Processing data for outliers. Biometrics, 9, 74-89.

Hawkins D.M. (1980). Identification of Outliers. Chapman and Hall, London.

International Organization for Standardization (1994). ISO 5725-2: Accuracy (trueness and precision) of measurement methods and results—Part 2: Basic method for the determination of repeatability and reproducibility of a standard measurement method, Geneva.

This analysis is available in the XLStat-Basic add-in for Microsoft Excel.
