Changeset 948 for library/doc/tutorial/02userguide_estim.dox
Timestamp: 05/18/10 16:54:25
Files: 1 modified
library/doc/tutorial/02userguide_estim.dox
r944 → r948

\page userguide_estim BDM Use - Estimation and Bayes Rule

Bayesian theory is predominantly used in system identification and estimation problems.
This section is concerned with recursive estimation, as implemented in the prepared scenario \c estimator.

Table of contents:
\ref ug2_theory
\ref ug2_arx_basic
\ref ug2_model_sel
\ref ug2_bm_composition
\ref ug_est_ext

The function of the \c estimator is graphically illustrated:

…

- <b> Bayes rule </b> as defined above, operation bdm::BM::bayes(), which expects to get the current data record \c dt, \f$ d_t \f$
- <b> evidence </b> i.e. the numerical value of \f$ f(d_t|d_1\ldots d_{t-1})\f$ as a typical side-product, since it is required in the denominator of the above formula.
For some models, computation of this value may require extra effort, and can be switched off.
- <b> prediction </b> the object has enough information to create the one-step-ahead predictor, i.e. \f[ f(d_{t+1}| d_1 \ldots d_{t}), \f]

Implementation of these operations depends heavily on the specific class of the prior pdf, or its approximations. We can identify only a few principal approaches to this problem: for example, analytical estimation, which is possible within the Exponential Family, or estimation where both the prior and the posterior are approximated by empirical densities.
These approaches are the first level of descendants of class \c BM, classes bdm::BMEF and bdm::PF, respectively.
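The Bayes rule operation above can be illustrated on a toy discrete case. The following is a minimal stand-alone sketch (not part of the toolbox): the parameter is restricted to a grid, the posterior is obtained by normalizing likelihood times prior, and the normalizing constant is exactly the evidence \f$ f(d_t|d_1\ldots d_{t-1}) \f$; the grid, the flat prior, and the Gaussian likelihood are illustrative assumptions.
\code
% Stand-alone sketch of one step of the Bayes rule on a parameter grid;
% the grid, prior and likelihood below are illustrative choices only.
theta_grid = -3:0.1:3;                  % discretized parameter values
prior = ones(size(theta_grid));         % flat prior f(theta|d_1..d_{t-1})
prior = prior / sum(prior);
dt  = 0.5;                              % current data record
lik = exp(-0.5*(dt - theta_grid).^2);   % likelihood f(dt|theta), Gaussian shape
evidence  = sum(lik .* prior);          % f(dt|d_1..d_{t-1}), the normalizer
posterior = lik .* prior / evidence;    % Bayes rule: f(theta|d_1..d_t)
\endcode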
\section ug2_arx_basic Estimation of ARX models

…

This is the minimal configuration of an ARX estimator.

The first three fields are self-explanatory: they identify which data are predicted (field \c rv) and which are in the regressor (field \c rgr).
The field \c log_level is a string of options passed to the object. In particular, class \c BM understands only options related to storing results:
- logbounds - store also lower and upper bounds on estimates (obtained by calling BM::posterior().qbounds()),

…

In the Bayesian framework, model selection is done via comparison of the evidence (marginal likelihood) of the recorded data. See [some theory].

A trivial example of how this can be done is presented in file bdmtoolbox/tutorial/userguide/arx_selection_example.m. The code extends the basic A1 object as follows:
\code
A2=A1;
…
\endcode

…

- A3 which is the same as A2, but assumes time-variant parameters with forgetting factor 0.95.

Since all estimators were configured to store values of the marginal log-likelihood, we can easily compare them by computing the total log-likelihood for each of them and converting them to probabilities.
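The conversion of total log-likelihoods to model probabilities can be sketched as follows. This is a hypothetical stand-alone illustration: \c ll_A1, \c ll_A2, \c ll_A3 stand for the vectors of per-step marginal log-likelihoods stored by the three estimators; subtracting the maximum before exponentiation is a standard guard against numerical underflow.
\code
% Hypothetical sketch: ll_A1, ll_A2, ll_A3 are assumed vectors of the
% per-step marginal log-likelihoods logged by the three estimators.
L = [sum(ll_A1), sum(ll_A2), sum(ll_A3)];   % total log-likelihood per model
L = L - max(L);                             % shift for numerical stability
Model_probabilities = exp(L) / sum(exp(L))  % normalize to probabilities
\endcode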
Typically, the results should look like:
\code
Model_probabilities =
…
\endcode

…

\section ug2_bm_composition Composition of estimators

Similarly to pdfs, which can be composed via \c mprod, Bayesian models can be composed together. However, justification of this step is less clear than in the case of epdfs.

One possible theoretical basis of composition is the Marginalized particle filter, which splits the prior and the posterior into two parts:

…

\image latex frg_example.png "Typical run of tutorial/userguide/frg_example.m" width=\linewidth

Note: error bars in this case are not directly comparable with those of the previous examples. The MPF class implements the qbounds function as the minimum and maximum of the bounds in the considered set (even if its weight is extremely small). Hence, the bounds of the MPF are probably larger than they should be. Nevertheless, they provide great help when designing and tuning algorithms.

\section ug_est_ext Matlab extensions of the Bayesian estimators

…

In order to create a new extension of an estimator, copy the file with class mexLaplaceBM.m and redefine the methods therein. If needed, create new classes for pdfs by inheriting from mexEpdf, in the same way as in the mexLaplace.m example class.

For a list of all estimators, see \ref app_base.
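As an illustration only, a skeleton of such an extension might look as follows. The parent class name \c mexBM and everything apart from the \c bayes method are assumptions here; the authoritative interface is the one in mexLaplaceBM.m itself, which should be copied and adapted as described above.
\code
% Hypothetical skeleton of a Matlab estimator extension; the parent
% class name mexBM and all method names other than bayes() are
% assumptions -- copy mexLaplaceBM.m for the authoritative interface.
classdef myEstimatorBM < mexBM
    methods
        function obj = bayes(obj, dt)
            % redefine: update the internal posterior using data record dt
            % ... model-specific Bayes rule goes here ...
        end
    end
end
\endcode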
*/