Changeset 632 for library/doc/tutorial
- Timestamp: 09/18/09 00:17:38
- Location: library/doc/tutorial
- Files: 5 modified
library/doc/tutorial/00install.dox
r616 → r632

/*!
\page install BDM Use - Installation
(title changed from "How to install BDM")

BDM is written as a multiplatform library, which was tested on Linux, Windows and Mac OSX.
library/doc/tutorial/01userguide.dox
r630 → r632

/*!
\page user_guide BDM Use - System, Data, Simulation
(title changed from "Howto Use BDM - System, Data, Simulation")

This section serves as an introduction to the scenario of data simulation. Since it is the simplest of all scenarios defined in \ref user_guide0, it also serves as an introduction to the configuration of an experiment (see \ref ui_page) and to the basic decision-making objects (bdm::RV and bdm::DS).

…

Mathematical interpretation of RV is straightforward. Consider a pdf \f$ f(a) \f$; then \f$ a \f$ is the part represented by the RV. Explicit naming of random variables may seem unnecessary for many operations with pdfs, e.g. for generation of a uniform sample from <0,1> it is not necessary to specify any random variable. For this reason, RVs are often optional information to specify. However, the considered scenario \c simulator is built in a way that requires the RV to be given.

In software, \c RV has three compulsory properties:
- <b>name</b>, a unique identifier; two RVs with the same name are considered to be identical,
- <b>size</b>, the size of the random variable; if not given, it is assumed to be 1,
- <b>time</b>, more exactly the time shift from \f$ t \f$; defaults to 0.

For example, the scalar \f$ x_{t-2} \f$ is encoded as (name='x', size=1, time=-2).
Each RV stores an array of these elements, hence an RV with:
\code
names={'a', 'b'};
sizes=[ 2 , 3];
times=[-1, 1];
\endcode
denotes the 5-dimensional vector \f$ [a_{t-1}', b_{t+1}']' \f$.

\subsection ug_rv_alg Algebra on RVs
Algebra on RVs (adding, searching in, subtraction, intersection, etc.) is implemented, see bdm::RV.

For convenience in Matlab, the following operations are defined:
- RV(names,sizes,times) creates a configuration structure for an RV,
- RVjoin(rvs) joins the configuration structures of an array of RVs, rvs=[rv1,rv2,...],
- RVtimes(rvs,times) assigns times to the corresponding rvs.

See the examples in bdmtoolbox/tutorial/userguide.

\subsection ug_rv_connect Connecting blocks via RVs

The \c simulator scenario connects the DataSource to the second basic class of BDM, bdm::logger. The logger is a class that takes care of storing results -- in this case, the results of the simulation.
The connection between these blocks is done automatically. The logger stores the results of simulations under the names specified in drv.
Readers familiar with the Simulink environment may look at RVs as unique identifiers of the inputs and outputs of simulation blocks. The inputs are connected automatically to the outputs with matching RVs. This view is, however, very incomplete; RVs have more roles than this.

\section loggers Loggers for flexible handling of results
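As an illustrative sketch (not part of r632), the following lines show how the Matlab helpers described in \ref ug_rv_alg can be combined to build the RVs of a simple input-output system; the variables \c y and \c u and the chosen delays are hypothetical:
\code
% scalar output y_t and input u_t, given as (names, sizes, times)
y = RV({'y'}, 1, 0);
u = RV({'u'}, 1, 0);

% joint 2-dimensional variable [y_t, u_t]
yu = RVjoin([y, u]);

% regressor of delayed values, [y_{t-1}, y_{t-2}, u_{t-1}]
rgr = RVtimes([y, y, u], [-1, -2, -1]);
\endcode
A regressor built this way is exactly the kind of structure expected in the \c rgr field of the ARX estimator discussed in the next file.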
library/doc/tutorial/02userguide2.dox
r630 → r632

… object \c Bayesian \c Model (bdm::BM).

\section ug2_theory Bayes rule and estimation
The object bdm::BM is the basic software image of the Bayes rule:
\f[ f(x_t|d_1\ldots d_t) \propto f(d_t|x_t,d_1\ldots d_{t-1}) f(x_t| d_1\ldots d_{t-1}) \f]

…

Variants of these approaches are implemented as descendants of these level-two classes. This way, each estimation method (represented by a class) is fitted into its place in the tree of approximations. This is useful even from the software point of view, since related approximations have common methods and data fields.

\section ug2_arx_basic Estimation of ARX models

Autoregressive models have already been introduced in \ref ug_arx_sim, where their simulator has been presented.
We will use the results of simulation of the ARX datasource defined there to provide data for estimation using MemDS.

The following code is from bdmtoolbox/tutorial/userguide/arx_basic_example.m:
\code
A1.class = 'ARX';
A1.rv = y;
A1.rgr = RVtimes([y,y],[-3,-1]) ;
A1.options = 'logbounds,logll';
\endcode
This is the minimal configuration of an ARX estimator; optional elements of bdm::ARX::from_setting() were set using their default values.

The first three fields are self-explanatory: they identify which data are predicted (field \c rv) and which are in the regressor (field \c rgr).
The field \c options is a string of options passed to the object. In particular, class \c BM understands only options related to storing results:
- logbounds - store also lower and upper bounds on the estimates (obtained by calling BM::posterior().qbounds()),
- logll - store also the log-likelihood of each step of the Bayes rule.

These values are stored in the given logger (\ref ug_loggers). By default, only mean values of the estimate are stored.

Storing of the log-likelihood is useful, e.g., in the model selection task when two models are compared.

The bounds are useful, e.g., for visualization of the results. A run of the example should produce a result like the following:
\image html arx_basic_example_small.png
\image latex arx_basic_example.png "Typical run of tutorial/userguide/arx_basic_example.m" width=\linewidth

\section ug2_model_sel Model selection

In the Bayesian framework, model selection is done via comparison of the marginal likelihood of the recorded data. See [some theory].

A trivial example of how this can be done is presented in the file bdmtoolbox/tutorial/userguide/arx_selection_example.m. The code extends the basic A1 object as follows:
\code
A2=A1;
A2.constant = 0;

A3=A2;
A3.frg = 0.95;
\endcode
That is, two other ARX estimators are created:
- A2, which is the same as A1 except that it does not model a constant term in the linear regression. Note that if the constant of the simulated system was indeed zero, then this is the correct model.
- A3, which is the same as A2, but assumes time-variant parameters with forgetting factor 0.95.

Since all estimators were configured to store values of the marginal log-likelihood, we can easily compare them by computing the total log-likelihood for each of them and converting these to probabilities.
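As a sketch of that conversion (illustrative, not part of r632): assuming the logger output is a structure \c M whose per-step log-likelihoods are stored under the estimator names, e.g. M.A1ll, M.A2ll, M.A3ll (see the naming adjustments below), the model probabilities can be obtained along these lines:
\code
% total log-likelihood of the data under each model
L = [sum(M.A1ll), sum(M.A2ll), sum(M.A3ll)];

% convert to posterior model probabilities (uniform prior);
% subtract the maximum first for numerical stability
L = L - max(L);
Model_probabilities = exp(L) ./ sum(exp(L))
\endcode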
Typically, the results should look like:
\code
Model_probabilities =

    0.0002    0.7318    0.2680
\endcode
Hence, the true model A2 was correctly identified as the most likely to have produced these data.

For this task, additional technical adjustments were needed:
\code
A1.name='A1';
A2.name='A2';
A2.rv_param = RV({'a2th', 'r'},[2,1],[0,0]);
A3.name='A3';
A3.rv_param = RV({'a3th', 'r'},[2,1],[0,0]);
\endcode
First, in order to distinguish the estimators from each other, the estimators were given names. Hence, the results will be logged with a prefix given by the name, such as M.A1ll for the field \c ll.

Second, if the parameters of an ARX model are not specified, they are automatically named \c theta and \c r. However, in this case, \c A1 and \c A2 differ in size, hence their random variables differ and cannot use the same name. Therefore, we have explicitly given the parameters other names (RVs).

\section ug2_bm_composition Composition of estimators

Similarly to mpdfs, which could be composed via \c mprod, the Bayesian models can be composed. However, the justification of this step is less clear than in the case of epdfs.

One possible theoretical basis of composition is the Marginalized particle filter, which splits the prior and the posterior into two parts:
\f[ f(x_t|d_1\ldots d_t)=f(x_{1,t}|x_{2,t},d_1\ldots d_t)f(x_{2,t}|d_1\ldots d_t) \f]
Each of these parts is estimated using a different approach. The first part is assumed to be analytically tractable, while the second is approximated using an empirical approximation.

The whole algorithm runs by parallel evaluation of many \c BMs for estimation of \f$ x_{1,t}\f$, each of them conditioned on the value of a sample of \f$x_{2,t}\f$.

For example, the forgetting factor \f$ \phi \f$ of an ARX model can be considered to be unknown. Then, the whole parameter space \f$ [\theta_t, r_t, \phi_t]\f$ is decomposed as follows:
\f[ f(\theta_t, r_t, \phi_t) = f(\theta_t, r_t| \phi_t) f(\phi_t) \f]
Note that for a known trajectory of \f$ \phi_t \f$ the standard ARX estimator can be used if we find a way to feed the changing \f$ \phi_t \f$ into it.
This is achieved by a trivial extension using the inherited method bdm::BM::condition().

*/
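A note on the marginalized particle filter sketched above (an illustration, not part of r632): writing \f$ \phi_t^{(i)} \f$, \f$ i=1,\ldots,N \f$, for the samples (particles) of the forgetting factor and \f$ w_t^{(i)} \f$ for their weights, the resulting posterior approximation reads
\f[ f(\theta_t, r_t, \phi_t | d_1\ldots d_t) \approx \sum_{i=1}^{N} w_t^{(i)} \, f(\theta_t, r_t | \phi_t^{(i)}, d_1\ldots d_t) \, \delta(\phi_t - \phi_t^{(i)}), \f]
where each conditional density \f$ f(\theta_t, r_t | \phi_t^{(i)}, d_1\ldots d_t) \f$ is maintained analytically by one ARX estimator that receives its particle through bdm::BM::condition().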
library/doc/tutorial/03devguide.dox
r616 → r632

/*!
\page dev_guide BDM Use - in C++
(title changed from "Howto Use BDM in C++ for your needs"; the corresponding \addindex entry was removed)

\section Intro Logic of BDM
library/doc/tutorial/04devguide2.dox
r616 → r632

/*!
\page dev_guide2 BDM Development - Contribution guide
(title changed from "Howto Contribute to BDM - Advanced development")
\addindex Howto Contribute to BDM - Advanced development