This is a brief introduction to the elements used in the BDM. The toolbox was designed for two principal tasks: \begin{itemize} \item design of Bayesian decision-making strategies, \item Bayesian system identification for on-line and off-line scenarios. \end{itemize} Theoretically, the latter is a special case of the former; however, we list it separately to highlight its importance in practical applications. Here, we describe the basic objects required for implementation of Bayesian parameter estimation. The key objects are: \begin{description} \item[Bayesian model: class {\tt BM},] which encapsulates the likelihood function, the prior, and the methodology for evaluating the Bayes rule. This methodology may be either exact or approximate. \item[Posterior density of the parameter: class {\tt epdf},] which represents the posterior density of the parameter. Methods defined on this class allow manipulation of the posterior, such as moment evaluation, marginalization and conditioning. \end{description} \hypertarget{philosophy_bm}{}\section{Class BM}\label{philosophy_bm} The class {\tt BM} is designed for both on-line and off-line estimation. We make the following assumptions about the data: \begin{itemize} \item an individual data record is stored in a vector, {\tt vec} {\tt dt}, \item a set of data records is stored in a matrix, {\tt mat} {\tt D}, where each column represents one individual data record. \end{itemize} On-line estimation is implemented by the method \begin{Code}\begin{verbatim} void bayes(vec dt) \end{verbatim} \end{Code} Off-line estimation is implemented by the method \begin{Code}\begin{verbatim} void bayesB(mat D) \end{verbatim} \end{Code} As an intermediate product, the Bayes rule computes the marginal likelihood of the data records $ f(D) $. The numerical value of this quantity, which is important e.g. 
for model selection, can be obtained by calling the method {\tt \_\-ll()}.\hypertarget{philosophy_epdf}{}\section{Getting results from BM}\label{philosophy_epdf} Class {\tt BM} offers several ways to obtain results: \begin{itemize} \item generation of posterior or predictive pdfs, methods {\tt \_\-epdf()} and {\tt predictor()}, \item direct evaluation of the predictive likelihood, method {\tt logpred()}. \end{itemize} The underscore in the name of the method {\tt \_\-epdf()} indicates that the method returns a pointer to the internal posterior density of the model. On the other hand, {\tt predictor()} creates a new structure of type {\tt epdf}. Direct evaluation of predictive pdfs via {\tt logpred()} offers a shortcut for a more efficient implementation.\hypertarget{philosophy_class_epdf}{}\section{Class epdf}\label{philosophy_class_epdf} As introduced above, the results of parameter estimation take the form of a probability density function conditioned on numerical values. This type of information is represented by the class {\tt epdf}. The class supports operations such as moment evaluation via the methods {\tt mean()} and {\tt variance()}, marginalization via the method {\tt marginal()}, and conditioning via the method {\tt condition()}. It also allows generation of a sample via {\tt sample()} and evaluation of the posterior parameter likelihood at one value via {\tt evallog()}. Multivariate versions of these operations are available by adding the suffix {\tt \_\-m}, i.e. {\tt sample\_\-m()} and {\tt evallog\_\-m()}; these methods provide multiple samples and evaluation of the likelihood at multiple points, respectively.\hypertarget{philosophy_pc}{}\section{Classes for probability calculus}\label{philosophy_pc} When a more demanding task than generation of a point estimate of the parameter is required, the power of general probability calculus can be used. 
The following classes (together with {\tt epdf} introduced above) form the basis of the calculus: \begin{itemize} \item {\tt mpdf}, a pdf conditioned on another symbolic variable, \item {\tt RV}, a symbolic variable on which pdfs are defined. \end{itemize} The former class is an extension of {\tt epdf} that allows conditioning on a symbolic variable. Hence, when numerical results, such as samples, are required, numerical values of the condition must be provided. The names of the methods of {\tt epdf} are used, extended by the suffix {\tt cond}, i.e. {\tt samplecond()} and {\tt evallogcond()}, where {\tt cond} precedes the multivariate extension, i.e. {\tt samplecond\_\-m()} and {\tt evallogcond\_\-m()}. The latter class is used to identify how symbolic variables are to be combined together. For example, consider the task of composition of pdfs via the chain rule: \[ f(a,b,c) = f(a|b,c) f(b) f(c) \] In our setup, $ f(a|b,c) $ is represented by an {\tt mpdf}, while $ f(b) $ and $ f(c) $ are represented by two {\tt epdfs}. We need to distinguish the latter two from each other and to decide in which order they should be added to the {\tt mpdf}. This distinction is facilitated by the class {\tt RV}, which uniquely identifies a random variable. Therefore, each pdf keeps a record of which {\tt RVs} it represents: {\tt epdf} needs to know only one {\tt RV}, stored in the attribute {\tt rv}; {\tt mpdf} needs to keep two {\tt RVs}, one for the variable on which it is defined ({\tt rv}) and one for the variable in the condition, stored in the attribute {\tt rvc}.