Online Recursive Square Root

1 Introduction to the Online Recursive Square Root Method for Kalman filter updates[1]

There are many ways to update a Kalman filter. One method is online recursive least squares (RLS); another is the online recursive square root (RSR) method. The online recursive square root method of updating Kalman filters is considered superior to the RLS method for systems with time-varying parameters. Additionally, the RSR method is generally more accurate, faster to converge, and more robust (when a variable forgetting factor is used).[2]

1.1 Relationship to Least Squares Methods


At its heart, a least squares solution (or polynomial fit) of inputs to outputs is an optimization problem. All optimization problems involve a cost function related to the system being modeled or controlled. That cost function is minimized or maximized (depending on the cost function and the task at hand). Therefore, when studying optimization problems, don't be surprised to find \frac{d}{dt}J = 0, where J is the cost function.

Least squares solutions are all about minimizing the square of the error at each point (or input/output pairing). So in least squares, the error between the actual output and the predicted output is computed at each point, each of those errors is squared, and the squares are summed.
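
As a small worked example (standard least squares algebra, not taken from the references below), suppose a single parameter \theta maps inputs x_{k} to outputs y_{k}. The summed-square cost and the result of setting its derivative to zero are

J\left(\theta\right)=\sum_{k}\left(y_{k}-\theta x_{k}\right)^{2},
\frac{dJ}{d\theta}=-2\sum_{k}x_{k}\left(y_{k}-\theta x_{k}\right)=0\quad\Rightarrow\quad\theta=\frac{\sum_{k}x_{k}y_{k}}{\sum_{k}x_{k}^{2}}.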

Least squares is named for the cost function it minimizes. It is one of the most common means of optimizing solutions because it is easy to understand, easy to implement, and widely known. However, other cost functions exist and, depending on the application, can be better. A quick illustration of this point would be a system where you can tolerate a certain amount of error at each point but the maximum error must be kept to a minimum. Instead of minimizing the sum of the squares of the errors, a better cost function would be the minimization of the maximum error between an actual system output and the predicted output, as in the sketch below.
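
As a rough sketch of the difference between the two cost functions (the data, the straight-line fit, and the helper names here are illustrative assumptions, not part of the cited references):

import numpy as np

# Hypothetical input/output pairs, for illustration only.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 1.2, 1.9, 3.2, 3.9])

def sum_squared_error(y_actual, y_predicted):
    # Least squares cost: square the error at each point, then sum.
    return np.sum((y_actual - y_predicted) ** 2)

def max_abs_error(y_actual, y_predicted):
    # Alternative cost: the single worst-case error over all points.
    return np.max(np.abs(y_actual - y_predicted))

# A polynomial (here, straight-line) fit minimizes the summed-square cost.
theta = np.polyfit(x, y, deg=1)
y_hat = np.polyval(theta, x)

print("sum of squared errors:", sum_squared_error(y, y_hat))
print("maximum absolute error:", max_abs_error(y, y_hat))

A fit chosen to minimize max_abs_error instead would generally trade a larger summed-square cost for a smaller worst-case error.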

Kalman filters attempt to minimize the error between actual system outputs and predicted outputs. But that minimization of the error is based on a cost function. The RLS method is one means, based on least squares, of minimizing the error in the Kalman filter output. The recursive square root method presented here is another means of minimizing the Kalman filter error.

2 Online Recursive Square Root Method for Kalman filter updates[3]

This algorithm comes from a journal article on attitude control of the International Space Station[4].

\hat{\theta}_{i}\left(k\right)=\hat{\theta}_{i}\left(k-1\right)+S_{i}\left(k-1\right)f_{i}\left(k\right)g_{i}\left(k\right)\xi_{i}\left(k\right),


where

\xi_{i}\left(k\right)=x_{di}\left(k\right)-\hat{\theta}_{i}^{T}\left(k-1\right)\phi\left(k\right),
f_{i}\left(k\right)=S_{i}^{T}\left(k-1\right)\phi\left(k\right),
g_{i}\left(k\right)=\left[f_{i}^{T}\left(k\right)f_{i}\left(k\right)+1\right]^{-1},
S_{i}\left(k\right)=\frac{S_{i}\left(k-1\right)\left[I_{n+m}-g_{i}\left(k\right)\alpha_{i}\left(k\right)f_{i}\left(k\right)f_{i}^{T}\left(k\right)\right]}{\sqrt{\beta_{i}\left(k\right)}},
for \mathrm{trace}\left[S_{i}\left(k-1\right)\right]\le c

and

S_{i}\left(k\right)=S_{i}\left(k-1\right)\left[I_{n+m}-g_{i}\left(k\right)\alpha_{i}\left(k\right)f_{i}\left(k\right)f_{i}^{T}\left(k\right)\right],
for \mathrm{trace}\left[S_{i}\left(k-1\right)\right]>c

and

\alpha_{i}\left(k\right)=\frac{1}{1+\sqrt{g_{i}\left(k\right)}}

and

\beta_{i}\left(k\right)=\left|1-\frac{\left|\xi_{i}\left(k\right)\right|\sqrt{g_{i}\left(k\right)}}{\sqrt{\Sigma_{i}}}\right|.
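
A minimal sketch of one update step built directly from the equations above follows; NumPy, the 1-D array conventions, the function name rsr_update, and the way the regressor \phi\left(k\right), measurement x_{di}\left(k\right), threshold c, and \Sigma_{i} are passed in are assumptions for illustration, not part of the cited sources.

import numpy as np

def rsr_update(theta, S, phi, x_d, Sigma, c):
    # theta : prior estimate theta_i(k-1), shape (n+m,)
    # S     : square root matrix S_i(k-1), shape (n+m, n+m)
    # phi   : regressor phi(k), shape (n+m,)
    # x_d   : measured output x_di(k), scalar
    # Sigma : Sigma_i used in the variable forgetting factor, scalar
    # c     : trace threshold that switches the forgetting factor on and off
    xi = x_d - theta @ phi                    # prediction error xi_i(k)
    f = S.T @ phi                             # f_i(k)
    g = 1.0 / (f @ f + 1.0)                   # g_i(k); scalar because phi is a vector
    alpha = 1.0 / (1.0 + np.sqrt(g))          # alpha_i(k)
    beta = abs(1.0 - abs(xi) * np.sqrt(g) / np.sqrt(Sigma))  # beta_i(k)

    theta_new = theta + S @ f * g * xi        # parameter update

    M = np.eye(S.shape[0]) - g * alpha * np.outer(f, f)
    if np.trace(S) <= c:
        S_new = S @ M / np.sqrt(beta)         # variable forgetting factor applied
    else:
        S_new = S @ M                         # forgetting suppressed to bound S
    return theta_new, S_new

Calling rsr_update once per sample with each new regressor/measurement pair propagates \hat{\theta}_{i}\left(k\right) and S_{i}\left(k\right) forward in time.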

3 See Also

4 References

  • Spradlin, Gabriel T. "An Exploration of Parameter Identification Techniques: CMG Temperature Prediction Theory and Results." Master's Thesis, University of Houston, Houston, TX, December 2005.
  • Zhao, X. M., Shieh, L. S., Sunkel, J. W., and Yuan, Z. Z., "Self-tuning Control of Attitude and Momentum Management of the Space Station," AIAA J. of Guidance, Control, and Dynamics, Vol. 15, No. 1, 1992, pp. 17-27.

4.1 Notes

  1. Spradlin, pg. 194
  2. Zhao et al.
  3. Spradlin, pp. 194-198
  4. Zhao et al.