Timestamp: 05/16/10 23:13:21 (14 years ago)
Author: smidl
Message: Doc + new examples
Files: 1 moved

  • library/doc/tutorial/02userguide_estim.dox

    r870 → r944
      /*!
    - \page userguide2 BDM Use - Estimation and Bayes Rule
    + \page userguide_estim BDM Use - Estimation and Bayes Rule

      Bayesian theory is predominantly used in system identification and estimation problems.
     
      Since this operation cannot be defined universally, the object is defined as an abstract class with methods for:
       - <b> Bayes rule </b> as defined above, operation bdm::BM::bayes(), which expects to get the current data record \c dt, \f$ d_t \f$
    -  - <b> log-likelihood </b> i.e. the numerical value of \f$ f(d_t|d_1\ldots d_{t-1})\f$, a typical side-product, since it is required in the denominator of the above formula.
    -    For some models, computation of this value may require extra effort, hence its computation can be suppressed by setting BM::set_evalll(false).
    +  - <b> evidence </b> i.e. the numerical value of \f$ f(d_t|d_1\ldots d_{t-1})\f$, a typical side-product, since it is required in the denominator of the above formula.
    +    For some models, computation of this value may require extra effort, and it can be switched off.
       - <b> prediction </b> the object has enough information to create the one-step-ahead predictor, i.e. \f[ f(d_{t+1}| d_1 \ldots d_{t}), \f]
    -        this object can be created by bdm::BM::predictor(); sometimes only a few values of the prediction are needed, and construction of the full predictor would be too expensive an operation. For this situation the cheaper operations bdm::BM::logpred() and bdm::BM::epredictor() were designed.
    - These are only basic operations; see the full documentation for the full range of defined operations.

    - These operations are abstract, i.e. not implemented for the general class.
      Implementation of these operations is heavily dependent on the specific class of the prior pdf, or its approximations. We can identify only a few principal approaches to this problem. For example, analytical estimation, which is possible within the Exponential Family with its sufficient statistics, or estimation when both prior and posterior are approximated by empirical densities.
      These approaches are the first level of descendants of class \c BM, classes bdm::BMEF and bdm::PF, respectively.
    -
    - Variants of these approaches are implemented as descendants of these level-two classes. This way, each estimation method (represented by a class) is fitted into its place in the tree of approximations. This is useful even from a software point of view, since related approximations have common methods and data fields.

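The bayes()-with-evidence interface described in the diff above can be illustrated on a toy conjugate model. The sketch below is hypothetical plain Python, not bdm code; the class `BetaBernoulliBM` and its methods only mimic the described interface (bayes() returning the evidence as a side-product, logpred() as the cheap one-step-ahead prediction).

```python
import math

# Hypothetical sketch (not bdm): bayes() performs one step of the Bayes rule
# on a Beta-Bernoulli model and returns the evidence f(d_t | d_1...d_{t-1});
# logpred() gives the predictive log-probability without building a full
# predictor object.
class BetaBernoulliBM:
    def __init__(self, a=1.0, b=1.0):
        self.a, self.b = a, b          # Beta prior hyperparameters

    def bayes(self, dt):
        """Absorb one data record dt in {0, 1}; return the evidence."""
        evidence = (self.a if dt == 1 else self.b) / (self.a + self.b)
        self.a += dt                   # conjugate update of the posterior
        self.b += 1 - dt
        return evidence

    def logpred(self, dt):
        """Predictive log-probability of dt under the current posterior."""
        return math.log((self.a if dt == 1 else self.b) / (self.a + self.b))

bm = BetaBernoulliBM()
evidences = [bm.bayes(dt) for dt in [1, 1, 0, 1]]
marginal_likelihood = math.prod(evidences)   # f(d_1...d_T) by the chain rule
```

Multiplying the per-step evidences gives the marginal likelihood of the whole record, which is exactly why the evidence is worth logging at each step.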
      \section ug2_arx_basic Estimation of ARX models
     
      A1.log_level = 'logbounds,logevidence';
      \endcode
    - This is the minimal configuration of an ARX estimator. Optional elements of bdm::ARX::from_setting() were set using their default values:
    + This is the minimal configuration of an ARX estimator.

      The first three fields are self-explanatory: they identify which data are predicted (field \c rv) and which are in the regressor (field \c rgr).
      The field \c log_level is a string of options passed to the object. In particular, class \c BM understands only options related to storing results:
       - logbounds - also store lower and upper bounds on estimates (obtained by calling BM::posterior().qbounds()),
    -  - logevidence - also store the log-likelihood of each step of the Bayes rule.
    +  - logevidence - also store the evidence of each step of the Bayes rule.
      These values are stored in the given logger (\ref ug_loggers). By default, only mean values of the estimate are stored.

    - Storing of the log-likelihood is useful, e.g. in a model selection task when two models are compared.
    + Storing of the evidence is useful, e.g. in a model selection task when two models are compared.

      The bounds are useful e.g. for visualization of the results. A run of the example should produce a result like the following:
    - \image html arx_basic_example_small.png
      \image latex arx_basic_example.png "Typical run of tutorial/userguide/arx_basic_example.m" width=\linewidth

      \section ug2_model_sel Model selection

    - In the Bayesian framework, model selection is done via comparison of the marginal likelihood of the recorded data. See [some theory].
    + In the Bayesian framework, model selection is done via comparison of the evidence (marginal likelihood) of the recorded data. See [some theory].

      A trivial example of how this can be done is presented in the file bdmtoolbox/tutorial/userguide/arx_selection_example.m. The code extends the basic A1 object as follows:
     
      A3.rv_param = RV({'a3th', 'r'},[2,1],[0,0]);
      \endcode
    - First, in order to distinguish the estimators from each other, the estimators were given names. Hence, the results will be logged with a prefix given by the name, such as M.A1ll for field \c ll.
    + First, in order to distinguish the estimators from each other, the estimators were given names. Hence, the results will be logged with a prefix given by the name, such as M.A1_evidence.

      Second, if the parameters of an ARX model are not specified, they are automatically named \c theta and \c r. However, in this case, \c A1 and \c A2 differ in size, hence their random variables differ and cannot use the same name. Therefore, we have explicitly given the parameters other names (RVs).
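The selection mechanism itself, comparing accumulated evidence between competing models, can be sketched generically. This is hypothetical Python, not the arx_selection_example.m code; `log_marginal_likelihood` is an illustrative helper for a Beta-Bernoulli model, with the evidence of each Bayes step accumulated exactly as the logged `logevidence` values would be.

```python
import math

def log_marginal_likelihood(data, a, b):
    """Log evidence of a Beta(a, b)-Bernoulli model, accumulated sequentially:
    log f(d_1..d_T) = sum_t log f(d_t | d_1..d_{t-1})."""
    total = 0.0
    for dt in data:
        pred = (a if dt == 1 else b) / (a + b)  # one-step-ahead predictive prob.
        total += math.log(pred)                 # evidence of this Bayes step
        a, b = a + dt, b + (1 - dt)             # conjugate posterior update
    return total

data = [1, 1, 1, 0, 1, 1]
# Model M1: uniform prior; model M2: prior biased towards dt = 0.
ll1 = log_marginal_likelihood(data, 1.0, 1.0)
ll2 = log_marginal_likelihood(data, 1.0, 5.0)
best = 'M1' if ll1 > ll2 else 'M2'
```

The model whose accumulated log-evidence is larger is the one better supported by the recorded data; on this mostly-ones record the uniform-prior model wins.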
     
      \section ug2_bm_composition Composition of estimators

    - Similarly to pdfs, which could be composed via \c mprod, the Bayesian models can be composed. However, justification of this step is less clear than in the case of epdfs.
    + Similarly to pdfs, which could be composed via \c mprod, the Bayesian models can be composed together. However, justification of this step is less clear than in the case of epdfs.

      One possible theoretical basis of composition is the Marginalized particle filter, which splits the prior and the posterior into two parts:
     
      This is achieved by a trivial extension using the inherited method bdm::BM::condition().

    - Extension of the standard ARX estimator to a conditional estimator is implemented as class bdm::ARXfrg. The only difference from standard ARX is that this object will change its forgetting factor via method ARXfrg::condition(). Existence of this function is assumed by the MPF estimator.
    + Extension of the standard ARX estimator to a conditional estimator is implemented as class bdm::ARXfrg. The only difference from standard ARX is that this object will obtain its forgetting factor externally as a conditioning variable.
      Informally, the name 'ARXfrg' means: "if anybody calls your condition(0.9), it tells you a new value of the forgetting factor".

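The condition() convention described above can be mimicked in a few lines. This is a hypothetical Python sketch, not the bdm ARXfrg class: `ARXLikeEstimator` and its toy statistic are illustrative only; the point is that an outer filter sets the forgetting factor through condition() before each Bayes step.

```python
# Hypothetical sketch (not bdm): a conditional estimator exposes condition(),
# through which an external wrapper (e.g. an MPF-style filter) supplies the
# forgetting factor used in subsequent bayes() calls.
class ARXLikeEstimator:
    def __init__(self):
        self.frg = 1.0        # forgetting factor (1.0 = no forgetting)
        self.stat = 0.0       # toy sufficient statistic

    def condition(self, frg):
        """Called externally to set the conditioning variable."""
        self.frg = frg

    def bayes(self, dt):
        # exponential forgetting: old information is discounted by frg
        self.stat = self.frg * self.stat + dt

e = ARXLikeEstimator()
e.condition(0.9)          # "if anybody calls your condition(0.9)..."
for dt in [1.0, 1.0, 1.0]:
    e.bayes(dt)
```

With forgetting factor 0.9, three unit records accumulate to 0.9 * (0.9 * 1 + 1) + 1 = 2.71 rather than 3, showing how the externally supplied factor discounts old data.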
    - The MPF estimator is implemented by class bdm::MPF. In the toolbox, it can be constructed as follows:
    + The MPF estimator for this case is specified as follows:
      \code
      %%%%%% ARX estimator conditioned on frg
     
      phi_pdf.betac = [0.01 0.01];       % stabilizing element of random walk

    + %%%%%% Particle
    + p.class = 'MarginalizedParticle';
    + p.parameter_pdf = phi_pdf;         % Random walk is the parameter evolution model
    + p.bm    = A1;
    +
    + % prior on ARX
      %%%%%% Combining estimators in Marginalized particle filter
    - E.class = 'MPF';
    - E.BM = A1;                         % ARX is the analytical part
    - E.parameter_pdf = phi_pdf;         % Random walk is the parameter evolution model
    - E.n = 20;                          % number of particles
    + E.class = 'PF';
    + E.particle = p;                    % ARX is the analytical part
    + E.res_threshold = 1.0;             % resampling parameter
    + E.n = 100;                         % number of particles
      E.prior.class = 'eDirich';         % prior on non-linear part
    - E.prior.beta  = [1 1]; %
    - E.log_level = 'logbounds,logevidence';
    + E.prior.beta  = [2 1]; %
    + E.log_level = 'logbounds';
      E.name = 'MPF';

     
      Note: error bars in this case are not directly comparable with those of the previous examples. The MPF class implements the qbounds function as the minimum and maximum of the bounds in the considered set (even if its weight is extremely small). Hence, the bounds of the MPF are probably larger than they should be. Nevertheless, they provide great help when designing and tuning algorithms.

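The role of the `E.n` and `E.res_threshold` fields in the configuration above can be illustrated with a generic bootstrap particle filter. This is a hypothetical, self-contained Python sketch, not the bdm PF/MPF implementation: each particle carries a value of the "difficult" variable (a random-walk state), weights are updated by the observation likelihood, and resampling is triggered when the effective sample size falls below a threshold fraction of n.

```python
import math
import random

def particle_filter(data, n=100, res_threshold=1.0, walk_std=0.1, obs_std=0.5):
    """Generic bootstrap particle filter (illustrative, not bdm).
    res_threshold = 1.0 means resampling happens at every step."""
    random.seed(0)
    particles = [random.gauss(0.0, 1.0) for _ in range(n)]  # samples from prior
    weights = [1.0 / n] * n
    means = []
    for y in data:
        # parameter evolution model: random walk on the particle states
        particles = [p + random.gauss(0.0, walk_std) for p in particles]
        # weight update by Gaussian observation likelihood f(y | phi)
        weights = [w * math.exp(-0.5 * ((y - p) / obs_std) ** 2)
                   for w, p in zip(weights, particles)]
        s = sum(weights)
        weights = [w / s for w in weights]
        means.append(sum(w * p for w, p in zip(weights, particles)))
        # resample when effective sample size drops below threshold * n
        ess = 1.0 / sum(w * w for w in weights)
        if ess < res_threshold * n:
            particles = random.choices(particles, weights=weights, k=n)
            weights = [1.0 / n] * n
    return means

est = particle_filter([0.0, 0.1, 0.2, 0.4, 0.5])
```

The marginalized variant keeps, in addition, an analytical estimator (the ARX part above) attached to each particle and updates it by its exact Bayes rule, which is what the `p.bm = A1` field expresses.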
    + \section ug_est_ext Matlab extensions of the Bayesian estimators
    +
    + Similarly to the extension of pdfs, the estimators (or filters) can be extended via the prepared class \c mexBM in directory bdmtoolbox/mex/mex_classes.
    +
    + An example of such a class is mexLaplaceBM in \<toolbox_dir\>\tutorial\userguide\laplace_example.m
    +
    + Note that matlab-extended classes of mexEpdf, specifically mexDirac and mexLaplace, are used as outputs of the methods posterior and epredictor, respectively.
    +
    + In order to create a new extension of an estimator, copy the file with class mexLaplaceBM.m and redefine the methods therein. If needed, create new classes for pdfs by inheriting from mexEpdf, in the same way as in the mexLaplace.m example class.
    +
    + For a list of all estimators, see \ref app_base.
      */