Matlab:\begin{itemize}
\item matlab\_\-bdm \end{itemize}
Special cases include: \begin{itemize}
\item estimation of unknown mean and variance of a Gaussian density from independent samples.\end{itemize}
\hypertarget{tut_arx_off}{}\section{Off-line estimation}\label{tut_arx_off}
This particular model belongs to the exponential family, hence it has a conjugate distribution (i.e. both prior and posterior) of the Gauss-inverse-Wishart form. See \mbox{[}ref\mbox{]}

Estimation in this family can be achieved by accumulation of sufficient statistics. The sufficient statistics of the Gauss-inverse-Wishart density are composed of: \begin{description}
\item[Information matrix ]which is a sum of outer products \[ V_t = \sum_{i=0}^{t} \left[\begin{array}{c}y_{i}\\ \psi_{i}\end{array}\right] [y_{i}',\,\psi_{i}'] \] \item[\char`\"{}Degree of freedom\char`\"{} ]which is an accumulator of the number of data records \[ \nu_t = \sum_{i=0}^{t} 1 \] \end{description}
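As a concrete illustration, the batch accumulation above can be sketched in Python. This is an assumption-laden sketch, not the bdm library interface: the function name {\tt giw\_\-statistics} and the toy first-order ARX data are illustrative only.

```python
import numpy as np

# Illustrative sketch only -- giw_statistics and the toy data are not part
# of the bdm library.  Accumulate the Gauss-inverse-Wishart sufficient
# statistics V (information matrix) and nu (degrees of freedom).

def giw_statistics(y, psi):
    """Batch accumulation of V and nu from outputs y and regressors psi."""
    dim = 1 + psi.shape[1]
    V = np.zeros((dim, dim))
    nu = 0
    for y_i, psi_i in zip(y, psi):
        d = np.concatenate(([y_i], psi_i))  # stacked vector [y_i; psi_i]
        V += np.outer(d, d)                 # sum of outer products
        nu += 1                             # one count per data record
    return V, nu

# toy first-order ARX data: y_t = 0.9 y_{t-1} + e_t
rng = np.random.default_rng(0)
y = np.zeros(100)
for t in range(1, 100):
    y[t] = 0.9 * y[t - 1] + 0.1 * rng.standard_normal()

psi = y[:-1].reshape(-1, 1)                 # regressor: previous output
V, nu = giw_statistics(y[1:], psi)
theta_hat = V[0, 1] / V[1, 1]               # point estimate recovered from V
print(nu, theta_hat)
```

Note that a least-squares point estimate is recoverable directly from blocks of $V$, which is one reason $V$ and $\nu$ suffice as statistics for this model.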
\hypertarget{tut_arx_on}{}\section{On-line estimation}\label{tut_arx_on}
Online estimation with stationary parameters can easily be achieved by collecting the sufficient statistics described above recursively.
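A minimal sketch of this recursive accumulation, with illustrative names (not the bdm::ARX interface):

```python
import numpy as np

# Minimal sketch with illustrative names (not the bdm::ARX interface):
# one recursive step of the sufficient-statistics update.

def update(V, nu, y_t, psi_t):
    d = np.concatenate(([y_t], psi_t))  # stacked vector [y_t; psi_t]
    return V + np.outer(d, d), nu + 1   # V_t = V_{t-1} + d d', nu_t = nu_{t-1} + 1

V, nu = np.zeros((2, 2)), 0
for y_t, psi_t in [(1.0, [0.5]), (0.8, [1.0]), (1.2, [0.8])]:
    V, nu = update(V, nu, y_t, np.array(psi_t))

print(nu)  # 3: one increment per processed record
```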

Extension to non-stationary parameters, $ \theta_t , r_t $, can be achieved by an operation called forgetting. This is an approximation of Bayesian filtering, see \mbox{[}Kulhavy\mbox{]}. The resulting algorithm is defined by the following manipulation of the sufficient statistics: \begin{description}
\item[Information matrix ]which is a weighted sum of outer products \[ V_t = \phi V_{t-1} + \left[\begin{array}{c}y_{t}\\ \psi_{t}\end{array}\right] [y_{t}',\,\psi_{t}'] + (1-\phi) V_0 \] \item[\char`\"{}Degree of freedom\char`\"{} ]which is a weighted accumulator of the number of data records \[ \nu_t = \phi \nu_{t-1} + 1 + (1-\phi) \nu_0 \] \end{description}
where $ \phi $ is the forgetting factor, $ \phi \in [0,1]$, roughly corresponding to the effective length of an exponential window via \[ \mathrm{win\_length} = \frac{1}{1-\phi}.\] Hence, $ \phi=0.9 $ corresponds to estimation on an exponential window of effective length 10 samples.
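A hedged numerical sketch of the forgetting update (all names are illustrative assumptions, not the bdm API) confirms both the effective window length and the behaviour when no data arrive:

```python
import numpy as np

# Hedged sketch of the forgetting update (illustrative names, not the bdm API):
#   V_t  = phi * V_{t-1}  + d d' + (1 - phi) * V_0
#   nu_t = phi * nu_{t-1} + 1    + (1 - phi) * nu_0
phi = 0.9
V0 = 1e-3 * np.eye(2)          # alternative statistics (stabilization)
nu0 = 0.1
V, nu = np.zeros((2, 2)), 0.0

rng = np.random.default_rng(1)
for _ in range(500):           # update with data present
    d = rng.standard_normal(2)                     # stacked [y_t; psi_t]
    V = phi * V + np.outer(d, d) + (1 - phi) * V0
    nu = phi * nu + 1 + (1 - phi) * nu0

print(round(nu - nu0, 3))      # 10.0 -- effective window length 1/(1-phi)

for _ in range(500):           # no incoming data: drop the data terms
    V = phi * V + (1 - phi) * V0
    nu = phi * nu + (1 - phi) * nu0

print(np.allclose(V, V0))      # True -- decays back to alternative statistics
```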

Statistics $ V_0 , \nu_0 $ are called alternative statistics; their role is to stabilize the estimation. It is easy to show that, for zero data, the statistics $ V_t , \nu_t $ converge to the alternative statistics.\hypertarget{tut_arx_str}{}\section{Structure estimation}\label{tut_arx_str}
For this model, structure estimation is a form of model selection. Specifically, we compare the hypothesis that the data were generated by the full model with hypotheses that some regressors in vector $\psi$ are redundant. The number of possible hypotheses is then the number of all possible combinations of the regressors.
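The size of this hypothesis space can be illustrated with a short sketch (the toy matrix and its layout are assumptions made for illustration): each hypothesis keeps a subset of regressors, and its statistics are the matching sub-block of $V$.

```python
import itertools
import numpy as np

# Illustrative sketch (toy matrix, hypothetical layout): enumerate the
# hypothesis space for n candidate regressors.  Each hypothesis keeps a
# subset of regressors; its statistics are the matching sub-block of V.
n = 3
A = np.arange(16, dtype=float).reshape(4, 4)
V = A @ A.T + np.eye(4)        # toy posterior information matrix;
                               # row/column 0 is assumed to belong to y

hypotheses = []
for k in range(n + 1):
    for subset in itertools.combinations(range(1, n + 1), k):
        idx = [0] + list(subset)        # keep y plus the chosen regressors
        V_sub = V[np.ix_(idx, idx)]     # sub-statistics of the reduced model
        hypotheses.append((subset, V_sub))

print(len(hypotheses))  # 2**3 = 8 candidate structures
```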

However, due to a property known as nesting in the exponential family, these hypotheses can be tested using only the posterior statistics. (This property does not hold for forgetting $ \phi<1 $.) Hence, for low-dimensional problems, this can be done by a tree search (method \hyperlink{classbdm_1_1ARX_16b02ae03316751664c22d59d90c1e34}{bdm::ARX::structure\_\-est()}), or by a more sophisticated algorithm \mbox{[}ref Ludvik\mbox{]}.\hypertarget{tut_arx_soft}{}\section{Software Image}\label{tut_arx_soft}
Estimation of the ARX model is implemented in class \hyperlink{classbdm_1_1ARX}{bdm::ARX}. \begin{itemize}
\item Models from the exponential family share some properties; these are encoded in class \hyperlink{classbdm_1_1BMEF}{bdm::BMEF}, which is the parent of ARX. \item One of the parameters of \hyperlink{classbdm_1_1BMEF}{bdm::BMEF} is the forgetting factor, which is stored in attribute {\tt frg}. \item The posterior density is stored inside the estimator in the form of \hyperlink{classbdm_1_1egiw}{bdm::egiw}. \item References to the statistics of the internal {\tt egiw} class, i.e. attributes {\tt V} and {\tt nu}, are established for convenience.\end{itemize}
\hypertarget{tut_arx_try}{}\section{How to try}\label{tut_arx_try}
The best way to experiment with this object is to run the matlab script {\tt arx\_\-test.m} located in directory {\tt ./library/tutorial}. See \hyperlink{arx_ui}{Running experiment {\tt estimator} with ARX data fields} for a detailed description.

\begin{itemize}
\item In the default setup, the parameters converge to the true values as expected. \item Try changing the forgetting factor, field {\tt estimator.frg}, to values $<$1. You should see increased lower and upper bounds on the estimates. \item Try a different set of parameters, field {\tt system.theta}; you should note that poles close to zero are harder to identify. \end{itemize}