root/doc/local/Intro.dox @ 271

Revision 271, 4.7 kB (checked in by smidl, 15 years ago)

Next major revision

/*!
\page intro Introduction to Bayesian Decision Making Toolbox BDM

This is a brief introduction to the elements used in BDM. The toolbox was designed for two principal tasks:

<ul>
<li> Design of Bayesian decision-making strategies, </li>
<li> Bayesian system identification for on-line and off-line scenarios. </li>
</ul>
Theoretically, the latter is a special case of the former; however, we list
it separately to highlight its importance in practical applications.

Here, we describe the basic objects required for an implementation of Bayesian parameter estimation.

The key objects are:
<dl>
<dt> Bayesian model: class \c BM </dt> <dd> an encapsulation of the likelihood function, the prior and the methodology of evaluation of the Bayes rule. This methodology may be either exact or approximate. </dd>
<dt> Posterior density of the parameter: class \c epdf </dt> <dd> a representation of the posterior density of the parameter. Methods defined on this class allow manipulation of the posterior, such as moment evaluation, marginalization and conditioning. </dd>
</dl>

\section bm Class BM

The class \c BM is designed for both on-line and off-line estimation.
We make the following assumptions about the data:
<ul>
<li> an individual data record is stored in a vector, \c vec \c dt, </li>
<li> a set of data records is stored in a matrix, \c mat \c D, where each column represents one individual data record. </li>
</ul>

On-line estimation is implemented by the method \code void bayes(vec dt) \endcode
Off-line estimation is implemented by the method \code void bayesB(mat D) \endcode

As an intermediate product, the Bayes rule yields the marginal likelihood of the data records, \f$ f(D) \f$.
The numerical value of this quantity, which is important e.g. for model selection, can be obtained by calling the method \c _ll().
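
The relation between the two estimation modes can be sketched with a toy conjugate model. Everything below is an illustrative assumption rather than BDM code: the class name \c ToyBM, the scalar data records (BDM uses IT++ \c vec / \c mat) and the Gaussian model with known unit variance; only the `bayes()` / `bayesB()` / `_ll()` calling convention mirrors the interface described above.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical stand-in for a BM subclass: conjugate estimation of the
// mean of scalar Gaussian data with known unit observation variance.
class ToyBM {
public:
    // On-line step: absorb one data record dt via the Bayes rule.
    void bayes(double dt) {
        const double PI = 3.14159265358979323846;
        double pred_var = 1.0 + 1.0 / prec_;           // predictive variance of dt
        ll_ += -0.5 * (std::log(2.0 * PI * pred_var)   // accumulate log f(D)
                       + (dt - mu_) * (dt - mu_) / pred_var);
        mu_ = (prec_ * mu_ + dt) / (prec_ + 1.0);      // conjugate mean update
        prec_ += 1.0;                                  // one more unit-variance record
    }
    // Off-line variant: process a whole batch of records at once.
    void bayesB(const std::vector<double>& D) {
        for (double dt : D) bayes(dt);                 // here: repeated on-line steps
    }
    double _ll() const { return ll_; }    // log marginal likelihood log f(D)
    double mean() const { return mu_; }   // posterior mean of the parameter
private:
    double mu_ = 0.0;    // posterior mean, prior N(0, 1)
    double prec_ = 1.0;  // posterior precision
    double ll_ = 0.0;    // accumulated log f(D)
};
```

Feeding the records to `bayes()` one by one or passing them all to `bayesB()` yields the same posterior and the same log marginal likelihood, which illustrates how the off-line scenario reduces to repeated on-line updates in this conjugate case.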

\section results Getting results from BM

Class \c BM offers several ways to obtain results:
<ul>
<li> generation of the posterior or predictive pdf, methods \c _epdf() and \c predictor(), </li>
<li> direct evaluation of the predictive likelihood, method \c logpred(). </li>
</ul>
The underscore in the name of the method \c _epdf() indicates that it returns a pointer to the internal posterior density of the model. In contrast, \c predictor() creates a new object of type \c epdf.

Direct evaluation of the predictive likelihood via \c logpred() offers a shortcut for a more efficient implementation.
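
The pointer-versus-new-object distinction matters for object lifetime: a density obtained as a pointer keeps tracking the model, while a newly created one is a frozen copy. A minimal sketch of this convention, in which the classes \c Model and \c Epdf are hypothetical stand-ins rather than the real \c BM / \c epdf interface:

```cpp
#include <memory>

struct Epdf { double mean; };  // hypothetical minimal density

class Model {
public:
    // Pointer to the internal posterior: it reflects later updates of the
    // model and must not be deleted by the caller.
    const Epdf* _epdf() const { return &posterior_; }
    // Newly created density owned by the caller: a snapshot that is
    // unaffected by later updates of the model.
    std::unique_ptr<Epdf> predictor() const {
        return std::unique_ptr<Epdf>(new Epdf(posterior_));
    }
    void update(double m) { posterior_.mean = m; }
private:
    Epdf posterior_{0.0};
};
```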

\section epdf Probability densities
As introduced above, the results of parameter estimation take the form of a probability density function conditioned on numerical values. This type of information is represented by the class \c epdf.

This class allows operations such as moment evaluation via the methods \c mean() and \c variance(), marginalization via the method \c marginal(), and conditioning via the method \c condition().

It also allows generation of a sample via \c sample() and evaluation of the posterior parameter likelihood at one value via \c evallog(). Multivariate versions of these operations are available by adding the suffix \c _m, i.e. \c sample_m() and \c evallog_m(). These methods provide multiple samples and evaluation of the likelihood at multiple points, respectively.
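
As a concrete illustration of this interface, consider a self-contained univariate Gaussian sketch. The class name \c ToyEpdf, the scalar parameter and the use of \c std::vector instead of IT++ \c vec / \c mat are assumptions of this example; only the method names mirror the \c epdf interface described above.

```cpp
#include <cmath>
#include <random>
#include <vector>

// Illustrative univariate Gaussian density with an epdf-style interface.
class ToyEpdf {
public:
    ToyEpdf(double mu, double var) : mu_(mu), var_(var) {}
    double mean() const { return mu_; }
    double variance() const { return var_; }
    // One sample from the density.
    double sample(std::mt19937& rng) const {
        return std::normal_distribution<double>(mu_, std::sqrt(var_))(rng);
    }
    // Log-density evaluated at one point.
    double evallog(double x) const {
        const double PI = 3.14159265358979323846;
        return -0.5 * (std::log(2.0 * PI * var_) + (x - mu_) * (x - mu_) / var_);
    }
    // _m variants: several samples, or the log-density at several points.
    std::vector<double> sample_m(std::mt19937& rng, int n) const {
        std::vector<double> s(n);
        for (double& v : s) v = sample(rng);
        return s;
    }
    std::vector<double> evallog_m(const std::vector<double>& xs) const {
        std::vector<double> l;
        for (double x : xs) l.push_back(evallog(x));
        return l;
    }
private:
    double mu_, var_;
};
```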

\section pc Classes for probability calculus

When a task more demanding than generation of a point estimate of the parameter arises, the power of general probability calculus can be used. The following classes (together with \c epdf introduced above) form the basis of the calculus:
<ul>
<li> \c mpdf, a pdf conditioned on another symbolic variable, </li>
<li> \c RV, a symbolic variable on which pdfs are defined. </li>
</ul>
The former class is an extension of \c epdf that allows conditioning on a symbolic variable. Hence, when numerical results - such as samples - are required, numerical values of the condition must be provided. The method names of \c epdf are extended by the suffix \c cond, i.e. \c samplecond() and \c evallogcond(); \c cond precedes the matrix extension, i.e. \c samplecond_m() and \c evallogcond_m().
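
The conditional calling convention can be sketched with a toy conditional density \f$ f(a|b) = N(b, 1) \f$. The class name \c ToyMpdf and the concrete model are assumptions of this sketch; only the requirement that a numerical value of the condition be passed to every call mirrors the \c mpdf interface described above.

```cpp
#include <cmath>
#include <random>
#include <vector>

// Hypothetical conditional density f(a|b) = N(b, 1) with an mpdf-style
// interface: every call takes a numerical value of the condition b.
class ToyMpdf {
public:
    // One sample of a, given a numerical value of the condition b.
    double samplecond(double b, std::mt19937& rng) const {
        return std::normal_distribution<double>(b, 1.0)(rng);
    }
    // Log-density of one value a, given the condition b.
    double evallogcond(double a, double b) const {
        const double PI = 3.14159265358979323846;
        return -0.5 * (std::log(2.0 * PI) + (a - b) * (a - b));
    }
    // Matrix extension: several samples for the same condition.
    std::vector<double> samplecond_m(double b, int n, std::mt19937& rng) const {
        std::vector<double> s(n);
        for (double& v : s) v = samplecond(b, rng);
        return s;
    }
};
```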

The latter class is used to identify how symbolic variables are to be combined together. For example, consider the task of composing pdfs via the chain rule:
\f[ f(a,b,c) = f(a|b,c) f(b) f(c) \f]
In our setup, \f$ f(a|b,c) \f$ is represented by an \c mpdf while \f$ f(b) \f$ and \f$ f(c) \f$ are represented by two \c epdfs.
We need to distinguish the latter two from each other and to decide in which order they should be added to the \c mpdf.
This distinction is facilitated by the class \c RV, which uniquely identifies a random variable.

Therefore, each pdf keeps a record of the RVs it represents: an \c epdf needs to know only one \c RV, stored in the attribute \c rv; an \c mpdf needs to keep two \c RVs, one for the variable on which it is defined (\c rv) and one for the variable in the condition, stored in the attribute \c rvc.
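
The bookkeeping role of \c rv and \c rvc in the chain-rule example can be sketched as follows. The structs \c SymEpdf / \c SymMpdf and the name-only identification are simplifying assumptions of this sketch; the real \c RV class is richer than a plain name.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Name-based sketch of the RV bookkeeping described above.
struct SymEpdf { std::vector<std::string> rv; };       // represents f(rv)
struct SymMpdf { std::vector<std::string> rv, rvc; };  // represents f(rv | rvc)

// Chain rule f(a,b,c) = f(a|b,c) f(b) f(c): every symbol in the condition
// rvc of the mpdf must be supplied by one of the epdfs.
bool chain_ok(const SymMpdf& m, const std::vector<SymEpdf>& parts) {
    for (const std::string& c : m.rvc) {
        bool found = false;
        for (const SymEpdf& e : parts)
            found = found || std::count(e.rv.begin(), e.rv.end(), c) > 0;
        if (!found) return false;  // a condition variable is unaccounted for
    }
    return true;
}
```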

*/