Changeset 944 for library/doc/tutorial/02userguide_estim.dox
- Timestamp: 05/16/10 23:13:21
- Files: 1 moved
library/doc/tutorial/02userguide_estim.dox
/*!
\page userguide_estim BDM Use - Estimation and Bayes Rule

Bayesian theory is predominantly used in system identification, or estimation, problems.

…

Since this operation cannot be defined universally, the object is defined as an abstract class with methods for:
- <b> Bayes rule </b> as defined above, operation bdm::BM::bayes(), which expects to get the current data record \c dt, \f$ d_t \f$,
- <b> evidence </b>, i.e. the numerical value of \f$ f(d_t|d_1\ldots d_{t-1})\f$, a typical side-product, since it is required in the denominator of the above formula.
For some models, computation of this value may require extra effort, and it can be switched off.
- <b> prediction </b>, since the object has enough information to create the one-step-ahead predictor, i.e. \f[ f(d_{t+1}| d_1 \ldots d_{t}). \f]

Implementation of these operations depends heavily on the specific class of the prior pdf, or of its approximations. We can identify only a few principal approaches to this problem: for example, analytical estimation, which is possible within the Exponential Family thanks to its finite-dimensional sufficient statistic, or estimation in which both the prior and the posterior are approximated by empirical densities.
These approaches form the first level of descendants of class \c BM, classes bdm::BMEF and bdm::PF, respectively.

\section ug2_arx_basic Estimation of ARX models

…

\code
...
A1.log_level = 'logbounds,logevidence';
\endcode
This is the minimal configuration of an ARX estimator.

The first three fields are self-explanatory: they identify which data are predicted (field \c rv) and which form the regressor (field \c rgr).
The field \c log_level is a string of options passed to the object. In particular, class \c BM understands only the options related to storing results:
- logbounds - store also the lower and upper bounds on the estimates (obtained by calling BM::posterior().qbounds()),
- logevidence - store also the evidence of each step of the Bayes rule.
These values are stored in the given logger (\ref ug_loggers). By default, only the mean values of the estimates are stored.
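For orientation, a complete configuration of this kind might look as follows. This is an illustrative sketch only: the variables \c y and \c u, the regressor lags, and the data source \c DS are assumptions, not the elided lines of the tutorial file.
\code
% Illustrative sketch of a full ARX estimator setup (assumed names).
y = RV({'y'});                           % variable to be predicted
u = RV({'u'});                           % exogenous input
A1.class     = 'ARX';                    % analytical ARX estimator
A1.rv        = y;                        % which data are predicted
A1.rgr       = RVtimes([y,u],[-1,-1]);   % regressor: delayed y and u (lags are illustrative)
A1.log_level = 'logbounds,logevidence';  % store bounds and evidence in the logger

% With a data source DS configured elsewhere, the estimation is then
% typically launched via the toolbox's estimator function:
% M = estimator(DS, {A1});
\endcode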
Storing of the evidence is useful, e.g., in model selection tasks where two models are compared (see the sketch at the end of this section).

The bounds are useful, e.g., for visualization of the results. A run of the example should produce a result like the following:
\image latex arx_basic_example.png "Typical run of tutorial/userguide/arx_basic_example.m" width=\linewidth

\section ug2_model_sel Model selection

In the Bayesian framework, model selection is done via comparison of the evidence (marginal likelihood) of the recorded data. See [some theory].

A trivial example of how this can be done is presented in file bdmtoolbox/tutorial/userguide/arx_selection_example.m. The code extends the basic A1 object as follows:
\code
...
A3.rv_param = RV({'a3th', 'r'},[2,1],[0,0]);
\endcode
First, in order to distinguish the estimators from each other, they were given names. Hence, the results are logged with a prefix given by the name, such as M.A1_evidence.

Second, if the parameters of an ARX model are not specified, they are automatically named \c theta and \c r. However, in this case, \c A1 and \c A2 differ in size, hence their random variables differ and cannot share the same name. Therefore, we have explicitly used other names (RVs) for the parameters.
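With the per-step evidence logged, the comparison itself is short. A minimal sketch, assuming the three estimators were run together and logged into a result structure \c M with fields named by the convention above:
\code
% Illustrative: accumulate per-step evidence into a log-marginal-likelihood.
lev = [sum(log(M.A1_evidence)), ...
       sum(log(M.A2_evidence)), ...
       sum(log(M.A3_evidence))];
[maxlev, best] = max(lev);   % index of the model with the highest evidence
\endcode
The model with the highest accumulated log-evidence is the one preferred by the data.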
\section ug2_bm_composition Composition of estimators

Similarly to pdfs, which could be composed via \c mprod, the Bayesian models can be composed together. However, justification of this step is less clear than in the case of epdfs.

One possible theoretical basis of composition is the Marginalized particle filter, which splits the prior and the posterior in two parts:

…

This is achieved by a trivial extension using the inherited method bdm::BM::condition().

Extension of the standard ARX estimator to a conditional estimator is implemented as class bdm::ARXfrg. The only difference from the standard ARX is that this object obtains its forgetting factor externally, as a conditioning variable.
Informally, the name 'ARXfrg' means: "if anybody calls your condition(0.9), it is telling you the new value of the forgetting factor".

The MPF estimator for this case is specified as follows:
\code
%%%%%% ARX estimator conditioned on frg
...
phi_pdf.betac = [0.01 0.01];      % stabilizing element of the random walk

%%%%%% Particle
p.class = 'MarginalizedParticle';
p.parameter_pdf = phi_pdf;        % random walk is the parameter evolution model
p.bm = A1;                        % ARX is the analytical part

%%%%%% Combining estimators in a Marginalized particle filter
E.class = 'PF';
E.particle = p;                   % the marginalized particle defined above
E.res_threshold = 1.0;            % resampling parameter
E.n = 100;                        % number of particles
E.prior.class = 'eDirich';        % prior on the non-linear part
E.prior.beta = [2 1];
E.log_level = 'logbounds';
E.name = 'MPF';
\endcode

…

Note: the error bars in this case are not directly comparable with those of the previous examples. The MPF class implements the qbounds function as the minimum and maximum of the bounds in the considered set (even if its weight is extremely small). Hence, the bounds of the MPF are probably larger than they should be. Nevertheless, they provide great help when designing and tuning algorithms.

\section ug_est_ext Matlab extensions of the Bayesian estimators

Similarly to the extension of pdfs, the estimators (or filters) can be extended via the prepared class \c mexBM in directory bdmtoolbox/mex/mex_classes.

An example of such a class is mexLaplaceBM in \<toolbox_dir\>\tutorial\userguide\laplace_example.m

Note that the matlab-extended classes of mexEpdf, specifically mexDirac and mexLaplace, are used as outputs of the methods posterior and epredictor, respectively.

In order to create a new extension of an estimator, copy the file with class mexLaplaceBM.m and redefine the methods therein. If needed, create new classes for pdfs by inheriting from mexEpdf, in the same way as in the mexLaplace.m example class.

For a list of all estimators, see \ref app_base.
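A minimal skeleton of such an extension might look as follows. It is a hypothetical sketch only: the classdef-style interface, the method signatures, and the mexDirac constructor call are assumptions; consult the shipped mexLaplaceBM.m for the actual method set.
\code
% Hypothetical skeleton -- see mexLaplaceBM.m for the real interface.
classdef myBM < mexBM
    properties
        est = 0;                 % running point estimate (illustrative state)
    end
    methods
        function obj = bayes(obj, dt)
            % data update: incorporate the latest data record dt
            obj.est = 0.9*obj.est + 0.1*dt(1);   % toy exponential forgetting
        end
        function pdf = posterior(obj)
            % return the posterior as a matlab-extended pdf
            pdf = mexDirac(obj.est);             % constructor assumed
        end
    end
end
\endcode

*/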