Using the same logic as before, this is the minimal configuration of an ARX estimator. Optional elements of bdm::ARX::from_setting() were set to their default values:

The first three fields are self-explanatory: they identify which data are predicted (field \c rv) and which are in the regressor (field \c rgr).
The field \c options is a string of options passed to the object. In particular, class \c BM understands only options related to storing results:
- logbounds - store also the lower and upper bounds on the estimates (obtained by calling BM::posterior().qbounds()),
- logll - store also the log-likelihood of each step of the Bayes rule.
These values are stored in the given logger (\ref ug_loggers). By default, only the mean values of the estimate are stored.
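For instance, both logging options can be requested at once via the \c options field. The comma-separated option-string syntax below is a sketch; see bdm::BM::from_setting() for the authoritative format:
\code
% Sketch: store bounds and per-step log-likelihood in addition to
% the default mean values (option-string format is an assumption).
A1.options = 'logbounds,logll';
\endcode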

Storing the log-likelihood is useful, e.g., in a model selection task when two models are compared.

The bounds are useful, e.g., for visualization of the results. A run of the example should produce a result like the following:
\image html arx_basic_example_small.png
\image latex arx_basic_example.png "Typical run of tutorial/userguide/arx_basic_example.m" width=\linewidth

\section ug2_model_sel Model selection

In the Bayesian framework, model selection is done via comparison of the marginal likelihood of the recorded data. See [some theory].
A trivial example of how this can be done is presented in the file bdmtoolbox/tutorial/userguide/arx_selection_example.m. The code extends the basic A1 object as follows:
\code
A2=A1;
A2.constant = 0;

A3=A2;
A3.frg = 0.95;
\endcode
That is, two other ARX estimators are created:
- A2, which is the same as A1 except that it does not model the constant term in the linear regression. Note that if the constant was set to zero, then this is the correct model.
- A3, which is the same as A2, but assumes time-variant parameters with forgetting factor 0.95.

Since all estimators were configured to store the values of the marginal log-likelihood, we can easily compare them by computing the total log-likelihood for each of them and converting these to probabilities. Typically, the results should look like:
\code
Model_probabilities =

    0.0002    0.7318    0.2680
\endcode
Hence, the true model A2 was correctly identified as the most likely to have produced the data.
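The conversion from logged log-likelihoods to model probabilities can be sketched as follows, assuming the per-step log-likelihoods were logged into fields M.A1ll, M.A2ll, M.A3ll and that the three models have equal prior probabilities:
\code
% Sketch: total log-likelihood of each model from the logged
% per-step values (the field names are assumptions).
L = [sum(M.A1ll), sum(M.A2ll), sum(M.A3ll)];
L = L - max(L);              % shift for numerical stability
p = exp(L) / sum(exp(L))     % normalized model probabilities
\endcode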

For this task, additional technical adjustments were needed:
\code
A1.name='A1';
A2.name='A2';
A2.rv_param = RV({'a2th', 'r'},[2,1],[0,0]);
A3.name='A3';
A3.rv_param = RV({'a3th', 'r'},[2,1],[0,0]);
\endcode
First, in order to distinguish the estimators from each other, the estimators were given names. Hence, the results will be logged with a prefix given by the name, such as M.A1ll for field \c ll.

Second, if the parameters of an ARX model are not specified, they are automatically named \c theta and \c r. However, in this case, \c A1 and \c A2 differ in size; hence, their random variables differ and cannot share the same name. Therefore, we have explicitly given the parameters other names (RVs).

\section ug2_bm_composition Composition of estimators

Similarly to mpdfs, which can be composed via \c mprod, Bayesian models can also be composed. However, the justification of this step is less clear than in the case of epdfs.

One possible theoretical basis of composition is the marginalized particle filter, which splits the prior and the posterior into two parts:
\f[ f(x_t|d_1\ldots d_t)=f(x_{1,t}|x_{2,t},d_1\ldots d_t)f(x_{2,t}|d_1\ldots d_t) \f]
Each of these parts is estimated using a different approach. The first part is assumed to be analytically tractable, while the second is approximated by an empirical approximation.

The whole algorithm runs by parallel evaluation of many \c BMs estimating \f$ x_{1,t}\f$, each of them conditioned on the value of one sample of \f$x_{2,t}\f$.
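One step of such an algorithm can be sketched in MATLAB-like pseudocode; all helper functions here are hypothetical and serve only to show the structure, not the actual bdmtoolbox API:
\code
% Pseudocode sketch of one step of a marginalized particle filter.
for i = 1:N                               % loop over particles
    x2(i)  = sample_x2(x2(i));            % propagate the sample of x_{2,t}
    BMs{i} = condition(BMs{i}, x2(i));    % feed the sample into the analytical BM
    w(i)   = w(i) * exp(loglik(BMs{i}, dt)); % reweight by marginal likelihood
    BMs{i} = bayes(BMs{i}, dt);           % analytical update of x_{1,t}
end
w = w / sum(w);                           % normalize particle weights
\endcode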

For example, the forgetting factor \f$ \phi \f$ of an ARX model can be considered unknown. The whole parameter space is then \f$ [\theta_t, r_t, \phi_t]\f$, decomposed as follows:
\f[ f(\theta_t, r_t, \phi_t) = f(\theta_t, r_t| \phi_t) f(\phi_t) \f]
Note that for a known trajectory of \f$ \phi_t \f$, the standard ARX estimator can be used if we find a way to feed the changing \f$ \phi_t \f$ into it.
This is achieved by a trivial extension using the inherited method bdm::BM::condition().
