
The Theil–Sen estimator of a set of sample points with outliers (black line) compared to the non-robust ordinary least squares line for the same set (blue). The dashed green line represents the ground truth from which the samples were generated.

In non-parametric statistics, the Theil–Sen estimator is a method for robustly fitting a line to sample points in the plane (simple linear regression) by choosing the median of the slopes of all lines through pairs of points. It has also been called Sen's slope estimator, [1] [2] slope selection, [3] [4] the single median method, [5] the Kendall robust line-fit method, [6] and the Kendall–Theil robust line. [7] It is named after Henri Theil and Pranab K. Sen, who published papers on this method in 1950 and 1968, respectively, [8] and after Maurice Kendall because of its relation to the Kendall tau rank correlation coefficient. [9]

Theil–Sen regression has several advantages over ordinary least squares regression. It is insensitive to outliers. It can be used for significance tests even when residuals are not normally distributed. [10] It can be significantly more accurate than non-robust simple linear regression (least squares) for skewed and heteroskedastic data, and competes well against least squares even for normally distributed data in terms of statistical power. [11] It has been called "the most popular nonparametric technique for estimating a linear trend". [2] There are fast algorithms for efficiently computing the parameters.

Definition

As defined by Theil (1950), the Theil–Sen estimator of a set of two-dimensional points (xi, yi) is the median m of the slopes (yj − yi)/(xj − xi) determined by all pairs of sample points. Sen (1968) extended this definition to handle the case in which two data points have the same x coordinate. In Sen's definition, one takes the median of the slopes defined only from pairs of points having distinct x coordinates. [8]

Once the slope m has been determined, one may determine a line from the sample points by setting the y-intercept b to be the median of the values yi − mxi. The fit line is then the line y = mx + b with coefficients m and b in slope–intercept form. [12] As Sen observed, this choice of slope makes the Kendall tau rank correlation coefficient become approximately zero, when it is used to compare the values xi with their associated residuals yi − mxi − b. Intuitively, this suggests that how far the fit line passes above or below a data point is not correlated with whether that point is on the left or right side of the data set. The choice of b does not affect the Kendall coefficient, but causes the median residual to become approximately zero; that is, the fit line passes above and below equal numbers of points. [9]
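
The definition above translates directly into a short program. The following Python sketch implements the brute-force estimator (the function name is illustrative, not taken from any library); following Sen's convention, pairs of points with equal x coordinates are skipped.

    import numpy as np

    def theil_sen(x, y):
        """Brute-force Theil–Sen fit: median pairwise slope, then median-residual intercept."""
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        n = len(x)
        slopes = []
        for i in range(n):
            for j in range(i + 1, n):
                if x[j] != x[i]:                  # Sen's rule: ignore pairs with tied x coordinates
                    slopes.append((y[j] - y[i]) / (x[j] - x[i]))
        m = np.median(slopes)                     # slope m: median of all pairwise slopes
        b = np.median(y - m * x)                  # intercept b: median of the values y_i − m x_i
        return m, b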

A confidence interval for the slope estimate may be determined as the interval containing the middle 95% of the slopes of lines determined by pairs of points [13] and may be estimated quickly by sampling pairs of points and determining the 95% interval of the sampled slopes. According to simulations, approximately 600 sample pairs are sufficient to determine an accurate confidence interval. [11]
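
A rough Python sketch of this sampling approach, using the 600-pair rule of thumb mentioned above; for simplicity it discards degenerate pairs with tied x coordinates, which (per note [13]) makes the resulting interval slightly narrower than the procedure described there.

    import numpy as np

    def sampled_slope_interval(x, y, n_pairs=600, level=0.95, seed=None):
        """Approximate confidence interval for the Theil–Sen slope from randomly sampled pairs."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        n = len(x)
        slopes = []
        while len(slopes) < n_pairs:
            i, j = rng.integers(0, n, size=2)     # sample a pair of points with replacement
            if x[i] != x[j]:                      # keep only pairs that determine a slope
                slopes.append((y[j] - y[i]) / (x[j] - x[i]))
        tail = 100 * (1 - level) / 2
        return tuple(np.percentile(slopes, [tail, 100 - tail]))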

Variations

A variation of the Theil–Sen estimator, the repeated median regression of Siegel (1982), determines, for each sample point (xi, yi), the median mi of the slopes (yj − yi)/(xj − xi) of lines through that point, and then determines the overall estimator as the median of these medians. It can tolerate a greater number of outliers than the Theil–Sen estimator, but known algorithms for computing it efficiently are more complicated and less practical. [14]
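
A minimal Python sketch of the repeated median (brute force, quadratic time; the efficient algorithms referred to above are considerably more involved):

    import numpy as np

    def repeated_median_slope(x, y):
        """Siegel's repeated median: the median, over all points, of the median slope through each point."""
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        n = len(x)
        per_point = []
        for i in range(n):
            slopes_i = [(y[j] - y[i]) / (x[j] - x[i])
                        for j in range(n) if j != i and x[j] != x[i]]
            per_point.append(np.median(slopes_i))   # inner median: slopes of lines through point i
        return np.median(per_point)                 # outer median over all points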

A different variant pairs up sample points by the rank of their x-coordinates: the point with the smallest coordinate is paired with the first point above the median coordinate, the second-smallest point is paired with the next point above the median, and so on. It then computes the median of the slopes of the lines determined by these pairs of points, gaining speed by examining significantly fewer pairs than the Theil–Sen estimator. [15]
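
In code, this rank-based pairing might look like the following sketch (assuming distinct x coordinates; for an odd number of points the median point is left unpaired):

    import numpy as np

    def rank_paired_slope(x, y):
        """Pair the k-th smallest x with the k-th point above the median x; return the median slope."""
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        order = np.argsort(x)
        h = len(x) // 2
        lower, upper = order[:h], order[-h:]       # only n/2 pairs instead of all O(n^2) pairs
        slopes = (y[upper] - y[lower]) / (x[upper] - x[lower])
        return np.median(slopes)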

Variations of the Theil–Sen estimator based on weighted medians have also been studied, based on the principle that pairs of samples whose x-coordinates differ more greatly are more likely to have an accurate slope and therefore should receive a higher weight. [16]
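
One way to realize such a scheme is sketched below, with the weight of each pair taken (purely for illustration) to be the difference of its x-coordinates; the published variants cited above use their own specific weightings.

    import numpy as np

    def weighted_median(values, weights):
        """Smallest value whose cumulative weight reaches half of the total weight."""
        order = np.argsort(values)
        values = np.asarray(values, dtype=float)[order]
        weights = np.asarray(weights, dtype=float)[order]
        cum = np.cumsum(weights)
        return values[np.searchsorted(cum, 0.5 * cum[-1])]

    def weighted_slope(x, y):
        """Pairwise slopes weighted by |x_j − x_i|, so widely separated pairs count for more."""
        x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
        i, j = np.triu_indices(len(x), k=1)        # all pairs i < j
        keep = x[j] != x[i]                        # skip pairs with tied x coordinates
        slopes = (y[j][keep] - y[i][keep]) / (x[j][keep] - x[i][keep])
        weights = np.abs(x[j][keep] - x[i][keep])
        return weighted_median(slopes, weights)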

For seasonal data, it may be appropriate to smooth out seasonal variations in the data by considering only pairs of sample points that both belong to the same month or the same season of the year, and finding the median of the slopes of the lines determined by this more restrictive set of pairs. [17]
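
A sketch of this seasonal variant, assuming each observation carries a season label (for example, a month number); only pairs within the same season contribute slopes to the median:

    import numpy as np

    def seasonal_slope(x, y, season):
        """Median of pairwise slopes computed only within each season."""
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        season = np.asarray(season)
        slopes = []
        for s in np.unique(season):
            xs, ys = x[season == s], y[season == s]
            for i in range(len(xs)):
                for j in range(i + 1, len(xs)):
                    if xs[j] != xs[i]:             # within-season pairs with distinct x only
                        slopes.append((ys[j] - ys[i]) / (xs[j] - xs[i]))
        return np.median(slopes)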

Statistical properties

The Theil–Sen estimator is an unbiased estimator of the true slope in simple linear regression. [18] For many distributions of the response error, this estimator has high asymptotic efficiency relative to least-squares estimation. [19] Estimators with low efficiency require more independent observations to attain the same sample variance as efficient unbiased estimators.

The Theil–Sen estimator is more robust than the least-squares estimator because it is much less sensitive to outliers. It has a breakdown point of 1 − 1/√2 ≈ 29.3%, meaning that it can tolerate arbitrary corruption of up to 29.3% of the input data points without degradation of its accuracy. [12] However, the breakdown point decreases for higher-dimensional generalizations of the method. [20] A higher breakdown point, 50%, holds for a different robust line-fitting algorithm, the repeated median estimator of Siegel. [12]

The Theil–Sen estimator is equivariant under every linear transformation of its response variable, meaning that transforming the data first and then fitting a line, or fitting a line first and then transforming it in the same way, both produce the same result. [21] However, it is not equivariant under affine transformations of both the predictor and response variables. [20]

Algorithms

The median slope of a set of n sample points may be computed exactly by computing all O(n²) lines through pairs of points, and then applying a linear time median finding algorithm. Alternatively, it may be estimated by sampling pairs of points. This problem is equivalent, under projective duality, to the problem of finding the crossing point in an arrangement of lines that has the median x-coordinate among all such crossing points. [22]

The problem of performing slope selection exactly but more efficiently than the brute force quadratic time algorithm has been extensively studied in computational geometry. Several different methods are known for computing the Theil–Sen estimator exactly in O(n log n) time, either deterministically [3] or using randomized algorithms. [4] Siegel's repeated median estimator can also be constructed in the same time bound. [23] In models of computation in which the input coordinates are integers and in which bitwise operations on integers take constant time, the Theil–Sen estimator can be constructed even more quickly, in randomized expected time O(n√log n). [24]

An estimator for the slope with approximately median rank, having the same breakdown point as the Theil–Sen estimator, may be maintained in the data stream model (in which the sample points are processed one by one by an algorithm that does not have enough persistent storage to represent the entire data set) using an algorithm based on ε-nets. [25]

Implementations

In the R statistics package, both the Theil–Sen estimator and Siegel's repeated median estimator are available through the mblm library. [26] A free standalone Visual Basic application for Theil–Sen estimation, KTRLine, has been made available by the US Geological Survey. [27] The Theil–Sen estimator has also been implemented in Python as part of the SciPy and scikit-learn libraries. [28]
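
For example, the SciPy and scikit-learn interfaces can be used along the following lines (a brief sketch; the data are made up, and each library's documentation describes further options and the exact estimator it implements):

    import numpy as np
    from scipy import stats
    from sklearn.linear_model import TheilSenRegressor

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    y = np.array([1.1, 1.9, 3.2, 3.9, 20.0, 6.1])  # one gross outlier at x = 5

    # SciPy: slope and intercept plus a confidence interval for the slope
    slope, intercept, low, high = stats.theilslopes(y, x, 0.95)

    # scikit-learn: an estimator object with the usual fit/predict interface
    model = TheilSenRegressor().fit(x.reshape(-1, 1), y)
    print(slope, intercept, model.coef_, model.intercept_)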

Applications

Theil–Sen estimation has been applied to astronomy due to its ability to handle censored regression models. [29] In biophysics, Fernandes & Leblanc (2005) suggest its use for remote sensing applications such as the estimation of leaf area from reflectance data due to its "simplicity in computation, analytical estimates of confidence intervals, robustness to outliers, testable assumptions regarding residuals and ... limited a priori information regarding measurement errors". [30] For measuring seasonal environmental data such as water quality, a seasonally adjusted variant of the Theil–Sen estimator has been proposed as preferable to least squares estimation due to its high precision in the presence of skewed data. [17] In computer science, the Theil–Sen method has been used to estimate trends in software aging. [31] In meteorology and climatology, it has been used to estimate the long-term trends of wind occurrence and speed. [32]

See also

Notes

  1. ^ Gilbert (1987).
  2. ^ a b El-Shaarawi & Piegorsch (2001).
  3. ^ a b Cole et al. (1989); Katz & Sharir (1993); Brönnimann & Chazelle (1998).
  4. ^ a b Dillencourt, Mount & Netanyahu (1992); Matoušek (1991); Blunck & Vahrenhold (2006).
  5. ^ Massart et al. (1997).
  6. ^ Sokal & Rohlf (1995); Dytham (2011).
  7. ^ Granato (2006).
  8. ^ a b Theil (1950); Sen (1968).
  9. ^ a b Sen (1968); Osborne (2008).
  10. ^ Helsel, Dennis R.; Hirsch, Robert M.; Ryberg, Karen R.; Archfield, Stacey A.; Gilroy, Edward J. (2020). Statistical methods in water resources. Techniques and Methods. Reston, VA: U.S. Geological Survey. p. 484. Retrieved 2020-05-22.
  11. ^ a b Wilcox (2001).
  12. ^ a b c Rousseeuw & Leroy (2003), pp. 67, 164.
  13. ^ For determining confidence intervals, pairs of points must be sampled with replacement; this means that the set of pairs used in this calculation includes pairs in which both points are the same as each other. These pairs are always outside the confidence interval, because they do not determine a well-defined slope value, but using them as part of the calculation causes the confidence interval to be wider than it would be without them.
  14. ^ Logan (2010), Section 8.2.7 Robust regression; Matoušek, Mount & Netanyahu (1998).
  15. ^ De Muth (2006).
  16. ^ Jaeckel (1972); Scholz (1978); Sievers (1978); Birkes & Dodge (1993).
  17. ^ a b Hirsch, Slack & Smith (1982).
  18. ^ Sen (1968), Theorem 5.1, p. 1384; Wang & Yu (2005).
  19. ^ Sen (1968), Section 6; Wilcox (1998).
  20. ^ a b Wilcox (2005).
  21. ^ Sen (1968), p. 1383.
  22. ^ Cole et al. (1989).
  23. ^ Matoušek, Mount & Netanyahu (1998).
  24. ^ Chan & Pătrașcu (2010).
  25. ^ Bagchi et al. (2007).
  26. ^ Logan (2010), p. 237; Vannest, Davis & Parker (2013).
  27. ^ Vannest, Davis & Parker (2013); Granato (2006).
  28. ^ SciPy community (2015); Persson & Martins (2016).
  29. ^ Akritas, Murphy & LaValley (1995).
  30. ^ Fernandes & Leblanc (2005).
  31. ^ Vaidyanathan & Trivedi (2005).
  32. ^ Romanić et al. (2014).

References