Introduction to Parameter Identification

## 1 Introduction to Parameter Identification

In order to control or properly analyze a complex system, a mathematical model is required. There are two ways to create a model for a given system. The first is to examine each component directly and describe it analytically. The second is to use input and output data from the actual system to construct a mathematical approximation.[1]

When constructing an analytical model, each component must be described mathematically in detail. Once each component is described, the components can be integrated into a complete end-to-end system model. Describing each component is non-trivial, and integrating the components can be significantly more difficult than describing them individually. Additionally, a system may contain components whose internal workings are unknown, or whose internal workings change dramatically with external conditions.

Parameter identification techniques determine an approximate model from measured input and output data. There are two basic methods of parameter identification. The faster is generally off-line identification, which typically uses a least-squares technique to determine a relationship between the input and output data. The slower, more versatile method is on-line, or recursive, identification, which processes the input and output data sequentially to construct a model that adjusts over time to changing conditions. Both methods are linear, but the recursive techniques can often handle some non-linearities: this is akin to making a small-angle assumption around a desired operating point, except that the recursive algorithms allow the operating point to change over time.[2]
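To make the recursive idea concrete, the following is a minimal sketch of a recursive least-squares update, assuming a simple first-order system; the system, parameter values, and function names here are illustrative and not taken from the references above.

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=1.0):
    """One recursive least-squares update.

    theta : current parameter estimate
    P     : current covariance matrix
    phi   : regressor vector for this sample
    y     : measured output for this sample
    lam   : forgetting factor (1.0 = no forgetting)
    """
    K = P @ phi / (lam + phi @ P @ phi)    # gain vector
    theta = theta + K * (y - phi @ theta)  # correct by prediction error
    P = (P - np.outer(K, phi @ P)) / lam   # covariance update
    return theta, P

# Hypothetical system: y(k+1) = -d*y(k) + n*r(k) with d = 0.5, n = 2.0
rng = np.random.default_rng(0)
d_true, n_true = 0.5, 2.0
theta = np.zeros(2)              # estimate of [-d, n]
P = 1e6 * np.eye(2)              # large initial covariance (weak prior)
y = 0.0
for _ in range(200):
    r = rng.standard_normal()
    y_next = -d_true * y + n_true * r
    phi = np.array([y, r])
    theta, P = rls_step(theta, P, phi, y_next)
    y = y_next

print(theta)   # approaches [-0.5, 2.0]
```

A forgetting factor below 1.0 discounts old data, which is what lets the estimate track an operating point that drifts over time.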

## 2 Basic Mathematics of Parameter Identification[3]

This section gives a basic introduction to the mathematics of parameter identification. If the system is assumed to obey a second-order difference equation, then

$$y\left(kT+2T\right)+d_{1}y\left(kT+T\right)+d_{2}y\left(kT\right)=n_{1}r\left(kT\right)$$

where

$r\left(kT\right)$ is the system input,
$y\left(0\right)=\alpha_{1}$ is a boundary condition,
$y\left(T\right)=\alpha_{2}$ is a boundary condition, and
$d_{1}, d_{2}, n_{1}$ are unknown equation coefficients.

Since there are three unknowns, a minimum of three equations is needed:

$$\begin{bmatrix}y\left(kT+2T\right)\\y\left(kT+3T\right)\\y\left(kT+4T\right)\end{bmatrix}=\begin{bmatrix}y\left(kT+T\right) & y\left(kT\right) & r\left(kT\right)\\y\left(kT+2T\right) & y\left(kT+T\right) & r\left(kT+T\right)\\y\left(kT+3T\right) & y\left(kT+2T\right) & r\left(kT+2T\right)\end{bmatrix}\begin{bmatrix}-d_{1}\\-d_{2}\\n_{1}\end{bmatrix}$$

Define

$$F_{3}=\begin{bmatrix}y\left(kT+2T\right)\\y\left(kT+3T\right)\\y\left(kT+4T\right)\end{bmatrix}$$

$$E_{3}=\begin{bmatrix}y\left(kT+T\right) & y\left(kT\right) & r\left(kT\right)\\y\left(kT+2T\right) & y\left(kT+T\right) & r\left(kT+T\right)\\y\left(kT+3T\right) & y\left(kT+2T\right) & r\left(kT+2T\right)\end{bmatrix}$$

$$X_{3}=\begin{bmatrix}-d_{1}\\-d_{2}\\n_{1}\end{bmatrix}$$

Note that

$$X_{3}=E_{3}^{-1}F_{3}$$

if

$$\det\left(E_{3}\right)\ne0$$
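As a numerical check, one can simulate a hypothetical second-order system with known coefficients, stack three shifted copies of the difference equation, and recover the coefficients exactly. The parameter values and boundary conditions below are invented for illustration.

```python
import numpy as np

# Hypothetical true system: y(k+2) = -d1*y(k+1) - d2*y(k) + n1*r(k)
d1, d2, n1 = 0.3, -0.4, 1.5
rng = np.random.default_rng(1)
r = rng.standard_normal(7)          # input samples r(0) .. r(6T)
y = np.zeros(8)
y[0], y[1] = 1.0, 0.5               # boundary conditions alpha_1, alpha_2
for k in range(6):
    y[k + 2] = -d1 * y[k + 1] - d2 * y[k] + n1 * r[k]

# Stack three shifted copies of the difference equation (k = 0, 1, 2):
# each row reads y(k+2) = -d1*y(k+1) - d2*y(k) + n1*r(k)
E3 = np.array([[y[1], y[0], r[0]],
               [y[2], y[1], r[1]],
               [y[3], y[2], r[2]]])
F3 = y[2:5]                         # [y(2T), y(3T), y(4T)]
X3 = np.linalg.solve(E3, F3)        # [-d1, -d2, n1], valid since det(E3) != 0
print(X3)                           # approximately [-0.3, 0.4, 1.5]
```

With noise-free data and a nonsingular $E_{3}$, the recovered coefficients match the true ones to floating-point precision.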

If the data, $y(kT)$ and $r(kT)$, contain measurement errors, then more data points than unknowns should be used to find the optimal parameters. Consider, for example, the first-order difference equation

$$y\left(kT+T\right)=-dy\left(kT\right)+nr\left(kT\right)$$

$$\begin{bmatrix}y\left(T\right)\\y\left(2T\right)\\y\left(3T\right)\\y\left(4T\right)\end{bmatrix}=\begin{bmatrix}y\left(0\right) & r\left(0\right)\\y\left(T\right) & r\left(T\right)\\y\left(2T\right) & r\left(2T\right)\\y\left(3T\right) & r\left(3T\right)\end{bmatrix}\begin{bmatrix}-d\\n\end{bmatrix}$$

where

$y\left(kT+NT\right)$ is abbreviated as $y\left(NT\right)$ (the data are taken starting at $k=0$), and

$$f=\begin{bmatrix}y\left(T\right)\\y\left(2T\right)\\y\left(3T\right)\\y\left(4T\right)\end{bmatrix}, \quad E=\begin{bmatrix}y\left(0\right) & r\left(0\right)\\y\left(T\right) & r\left(T\right)\\y\left(2T\right) & r\left(2T\right)\\y\left(3T\right) & r\left(3T\right)\end{bmatrix}, \quad X=\begin{bmatrix}-d\\n\end{bmatrix}$$

One would then like to write

$$X=E^{-1}f$$

but $E$ is not square, so $E^{-1}$ does not exist.

To find the optimal parameter set, $X^{*}$, use

$$X^{*}=E^{+}f$$

where

$$E^{+}=\left(E^{T}E\right)^{-1}E^{T}$$

is the pseudo-inverse.

It can be seen, from what follows, that $X^{*}$ is the optimal solution. The error vector is defined as

$$e\equiv f-EX$$

and the squared error is defined as

$$J\equiv e^{T}e=\left(f-EX\right)^{T}\left(f-EX\right)=f^{T}f-2X^{T}E^{T}f+X^{T}E^{T}EX$$

where

$f^{T}f$ is a scalar constant.

The necessary condition for optimality is

$$\frac{dJ}{dX}=-2E^{T}f+2E^{T}EX=0$$

Solving for the optimal parameter set gives

$$X^{*}=\left(E^{T}E\right)^{-1}E^{T}f$$
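The normal-equations solution above can be sketched numerically: simulate a hypothetical first-order system, corrupt the measurements with noise, and recover the parameters from more samples than unknowns. All values below are invented for illustration.

```python
import numpy as np

# Hypothetical first-order model y(k+1) = -d*y(k) + n*r(k)
d_true, n_true = 0.5, 2.0
rng = np.random.default_rng(2)
N = 500
r = rng.standard_normal(N)
y = np.zeros(N + 1)
for k in range(N):
    y[k + 1] = -d_true * y[k] + n_true * r[k]
y_meas = y + 0.01 * rng.standard_normal(N + 1)   # add measurement error

E = np.column_stack([y_meas[:N], r])   # overdetermined regressor matrix (N x 2)
f = y_meas[1:]                         # stacked outputs
# X* = (E^T E)^{-1} E^T f, solved via the normal equations
X_star = np.linalg.solve(E.T @ E, E.T @ f)
print(X_star)   # close to [-0.5, 2.0]
```

In practice `np.linalg.lstsq(E, f)` computes the same minimizer more stably than forming $E^{T}E$ explicitly.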

The sufficient condition for a minimum is

$$\frac{d^{2}J}{dX^{2}}>0$$

which holds here, since

$$\frac{d^{2}J}{dX^{2}}=2E^{T}E>0$$

whenever $E$ has full column rank (i.e., $E^{T}E$ is positive definite).

## 3 References

• Franklin, G. F., Powell, J. D., and Workman, M. 1998. Digital Control of Dynamic Systems, 3rd ed. Addison-Wesley Longman Publishing Co., Inc. ISBN 0201331535.
• Spradlin, Gabriel T. "An Exploration of Parameter Identification Techniques: CMG Temperature Prediction Theory and Results." Master's Thesis, University of Houston, Houston, TX, December 2005.

### 3.1 Notes

1. Franklin et al.