The recursive least squares (RLS) algorithm is used in adaptive filters to recursively find the filter coefficients that minimize a least squares cost function (the minimum of the sum of the absolute squared error, where the error is the difference between the desired and the actual signal). This is in contrast to other algorithms, such as the least mean squares (LMS) filter, that aim to reduce the mean square error. The difference is that RLS filters depend on the signals themselves, whereas MSE filters depend on their statistics (specifically, the autocorrelation of the input and the cross-correlation of the input and desired signals). If these statistics are known, an MSE filter with fixed coefficients (i.e., independent of the incoming data) can be built.
Motivation
Suppose that a signal $d(n)$ is transmitted over an echoey, noisy channel that causes it to be received as
: $x(n) = \sum_{k=0}^{q} b_n(k)\, d(n-k) + v(n)$
where $v(n)$ represents additive white noise. We will attempt to recover the desired signal $d(n)$ by use of a $(p+1)$-tap FIR filter, $\mathbf{w}$:
: $\hat{d}(n) = \sum_{k=0}^{p} w(k)\, x(n-k) = \mathbf{w}^{T} \mathbf{x}_n$
where $\mathbf{x}_n = [x(n),\ x(n-1),\ \dots,\ x(n-p)]^{T}$ is the column vector containing the $p+1$ most recent samples of $x(n)$.
Our goal is to estimate the parameters of the filter $\mathbf{w}$, and at each time $n$ we refer to the new least squares estimate as $\mathbf{w}_n$. As time evolves, we would like to avoid completely redoing the least squares algorithm to find the new estimate for $\mathbf{w}_n$ in terms of $\mathbf{w}_{n-1}$.
The benefit of the RLS algorithm is that there is no need to invert matrices at each step, thereby saving computational cost. Another advantage is that it provides intuition behind such results as the Kalman filter.
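The reason no explicit inversion is needed is that the inverse of the input correlation matrix can be maintained directly: adding one new sample is a rank-one change, and the Sherman-Morrison formula updates the inverse in $O(p^2)$ rather than re-inverting. A small sketch with randomly generated data (the initialization $P = 100 I$ and the sample count are arbitrary illustration choices):

```python
import numpy as np

rng = np.random.default_rng(0)
taps = 3                         # p + 1 filter taps
P = np.eye(taps) * 100.0         # running estimate of R^{-1}
R = np.linalg.inv(P)             # the correlation matrix tracked directly

for _ in range(20):
    x = rng.standard_normal(taps)
    R += np.outer(x, x)          # direct accumulation: R <- R + x x^T
    # Sherman-Morrison: (R + x x^T)^{-1} = P - (P x)(P x)^T / (1 + x^T P x)
    Px = P @ x
    P = P - np.outer(Px, Px) / (1.0 + x @ Px)

# P tracks the true inverse of R without ever calling a matrix inverse
# inside the loop (up to floating-point round-off).
assert np.allclose(P, np.linalg.inv(R))
```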
Discussion
The idea behind RLS filters is to minimize a cost function $C$ by appropriately selecting the filter coefficients $\mathbf{w}_n$, updating the filter as new data arrives. The error signal $e(n)$ and desired signal $d(n)$ are defined in the negative feedback diagram below:
The error implicitly depends on the filter coefficients through the estimate $\hat{d}(n)$:
: $e(n) = d(n) - \hat{d}(n)$
The weighted least squares error function $C$, the cost function we desire to minimize, being a function of $e(n)$ is therefore also dependent on the filter coefficients:
: $C(\mathbf{w}_n) = \sum_{i=0}^{n} \lambda^{n-i}\, e^{2}(i)$
where $0 < \lambda \le 1$ is the "forgetting factor", which gives exponentially less weight to older error samples.
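Putting the pieces together, the standard exponentially weighted RLS recursion minimizes this cost by combining the a priori error with the Sherman-Morrison update of the inverse correlation matrix. A sketch in NumPy, where the forgetting factor, initialization constant `delta`, and the synthetic 2-tap test signal are all assumed example choices rather than values from the text:

```python
import numpy as np

def rls(x, d, p, lam=0.99, delta=100.0):
    """Exponentially weighted RLS for a (p+1)-tap FIR filter.
    lam is the forgetting factor lambda; delta scales the initial P."""
    taps = p + 1
    w = np.zeros(taps)
    P = delta * np.eye(taps)     # initial inverse-correlation estimate
    for n in range(len(x)):
        # regressor x_n = [x(n), x(n-1), ..., x(n-p)], zero-padded
        x_n = np.array([x[n - k] if n - k >= 0 else 0.0
                        for k in range(taps)])
        e = d[n] - w @ x_n       # a priori error e(n)
        Px = P @ x_n
        g = Px / (lam + x_n @ Px)        # gain vector
        w = w + g * e                    # coefficient update
        P = (P - np.outer(g, Px)) / lam  # inverse-correlation update
    return w

# Identify a known 2-tap filter from noiseless synthetic data.
rng = np.random.default_rng(1)
x = rng.standard_normal(500)
true_w = np.array([0.7, -0.3])
d = np.convolve(x, true_w)[:len(x)]
w_hat = rls(x, d, p=1)
```

With noiseless data the recovered coefficients `w_hat` converge to `true_w`; in practice `lam` trades tracking speed against steady-state variance.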