Iteratively re-weighted least squares

The method of iteratively re-weighted least squares (IRLS) is a numerical algorithm that minimizes certain objective functions by solving a sequence of weighted least squares problems, each of which can be handled by standard methods such as forming and solving the normal equations by Gaussian elimination. Whereas techniques for weighted least squares are widely available and highly optimized, few comparable techniques exist for minimizing other objective functions directly.

IRLS is commonly used to perform robust regression with an M-estimator, as a way of mitigating the influence of outliers in an otherwise normally distributed data set, for example by minimizing the least absolute error rather than the least squared error.

Although not a linear regression problem, Weiszfeld's algorithm for approximating the geometric median can also be viewed as a special case of iteratively re-weighted least squares, in which the objective function is the sum of distances of the estimator from the samples.
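The Weiszfeld iteration just mentioned can be written in a few lines. The following Python sketch is illustrative only: the function name, the centroid starting point, and the `eps` safeguard against a zero distance are choices of this sketch, not part of the algorithm as usually stated.

```python
import numpy as np

def geometric_median(points, iterations=100, eps=1e-9):
    """Weiszfeld's algorithm: approximately minimize sum_i ||y - p_i||
    over y, for sample points p_i given as rows of `points`.
    """
    y = points.mean(axis=0)  # start from the centroid (a common choice)
    for _ in range(iterations):
        d = np.linalg.norm(points - y, axis=1)  # distances ||y - p_i||
        d = np.maximum(d, eps)                  # guard against a zero distance
        w = 1.0 / d                             # IRLS weights f(r) = 1/|r|
        # Each step is a weighted average of the points, i.e. the
        # solution of a weighted least squares problem in y.
        y = (w[:, None] * points).sum(axis=0) / w.sum()
    return y
```

Each update is a weighted average of the samples with weights 1/distance, which is exactly an IRLS step for the sum-of-distances objective.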

The method

Starting with a diagonal weighting matrix equal to the identity matrix, W = I, and a linear problem A x = b, the (weighted) linear equation

: W A x = W b

is formed. The least squares solution of this equation is then found using standard linear algebra methods. The residuals

: r = b − A x

are calculated, and the weighting matrix is updated to some non-negative function f(r) of the residuals r, e.g. f(r) = 1/|r|:

: W = diag( f(r) ).

With these new weights, the weighted least squares equation is re-solved and the residuals are re-calculated. The process can be iterated many times.

The solution to which this iterative process converges is the minimizer of an objective function related to the function f(r). With f(r) = 1/|r|, the objective is the least absolute deviation, Σ |r_i|.
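As a concrete illustration of the iteration described above, here is a minimal Python sketch of IRLS for the least-absolute-deviation objective. The function name, the use of `numpy.linalg.lstsq` for each weighted subproblem, and the `delta` safeguard against dividing by a zero residual are choices of this sketch, not prescribed by the method itself.

```python
import numpy as np

def irls_lad(A, b, iterations=50, delta=1e-8):
    """IRLS for least absolute deviations: approximately minimize
    sum_i |b_i - (A x)_i| by repeated weighted least squares.
    """
    # Start with W = I, i.e. the ordinary least squares solution.
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iterations):
        r = b - A @ x                           # residuals
        w = 1.0 / np.maximum(np.abs(r), delta)  # weights f(r) = 1/|r|
        # Solve the weighted least squares subproblem
        # min_x sum_i w_i * (b_i - (A x)_i)^2
        # by scaling each row of A and b by sqrt(w_i).
        s = np.sqrt(w)
        x = np.linalg.lstsq(A * s[:, None], b * s, rcond=None)[0]
    return x
```

On data with a single gross outlier, this iteration pulls the fit toward the bulk of the points, whereas ordinary least squares is dragged toward the outlier.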

Convergence

Convergence of the method is not guaranteed. For example, choosing f(r) = |r|^p with p < −1 or p ≥ 1 may cause successive solutions to keep oscillating without converging to a limit.

References

* [http://sepwww.stanford.edu/public/docs/sep103/antoine2/paper_html/index.html Stanford Lecture Notes on the IRLS algorithm by Antoine Guitton]
* [http://www.mai.liu.se/~akbjo/LSPbook.html Numerical Methods for Least Squares Problems by Åke Björck] (Chapter 4: Generalized Least Squares Problems.)
* [http://www.nrbook.com/a/bookcpdf/c15-7.pdf Robust Estimation in Numerical Recipes in C by Press et al.] (requires the [http://www.nr.com/plugin/plugin_faq.html FileOpen] plugin to view)


Wikimedia Foundation. 2010.
