/*! \page user_guide Howto Use BDM - Introduction

\addindex Howto Use BDM - Introduction

BDM is a library of basic components for Bayesian decision making; it is not meant to be used directly. To use BDM, its components must be pulled together to achieve the desired functionality. We expect two kinds of users:
- basic users, who run prepared scripts with different parameterizations and analyze the results,
- advanced users, who understand the logic of BDM and extend its functionality to new applications.

The primary design aim of BDM is to ease the development of complex algorithms, hence the target user is the advanced one. However, running experiments is the first task to learn for both types of users.

\section param Experiment is fully parameterized before execution

Experiments in BDM can be performed using either standalone applications or function bindings in a high-level environment, a typical example of the latter being a mex file in the Matlab environment. The main logic behind an experiment is that all necessary information about it is gathered in advance, in a configuration file (for standalone applications) or in a configuration structure (Matlab). This approach was designed especially for time-consuming experiments and Monte Carlo studies, which it suits best.

For smaller decision-making tasks, an experiment can be used interactively by displaying the full configuration structure (or selected parts of it), running the experiment on demand and displaying the results. Semi-interactive experiments can be designed as a sequential run of different algorithms. This topic will be covered in the advanced documentation.

\section config Configuration of an experiment

A configuration file (or config structure) is organized as a tree of information. Higher levels represent bigger structures; the leaves of the structures are basic data elements such as strings, numbers or vectors. Objects receive special treatment.
Since BDM is designed as an object-oriented library, the configuration honors the rule of inheritance: an offspring of a class can be used in place of its predecessor. Hence, objects (instances of classes) are configured by a structure with the compulsory field \c class, a string variable holding the name of the class to be used. Consider the following example:
\code
DS = {class="MemDS"; Data = [1, 2, 3, 4, 5, 6];};
\endcode
or written equivalently in Matlab as
\code
DS.class='MemDS';
DS.Data=[1 2 3 4 5 6];
\endcode
The code above is the minimum information necessary to run a pre-made algorithm implemented as the executable \c estimator or the Matlab mex file \c estimator. The expected result in Matlab is:
\code
>> M=estimator(DS,{})

M =

    ch0: [6x1 double]
\endcode
The structure \c M has one field, \c ch0, to which the data from \c DS.Data were copied. This is the default behavior; it can easily be changed by adding more information to the configuration structure. First, we will have a look at all options of MemDS.

\section memds How to understand configuration of classes

As a first step, the \c estimator algorithm created an object of class MemDS and called its method bdm::MemDS::from_setting(). This is a universal method called whenever an instance of a class is created from configuration. An object that does not implement this method cannot be created automatically from configuration. The documentation contains the full structure that can be loaded, e.g. for MemDS:
\code
{ class = 'MemDS';
  Data = (...);             // data matrix or data vector
  --- optional ---
  drv = {class='RV'; ...}   // identification of how rows of the matrix Data will be known to others
  time = 0;                 // index of the first column to be read
  rowid = [1,2,3...];       // ids of rows to be used
}
\endcode
The compulsory fields are listed at the beginning; the optional fields follow the string "--- optional ---".
For the example given above, the missing fields were filled as follows:
\code
drv = {class="RV"; names="{ch0 }"; sizes=[1];};
time = 0;
rowid = [1];
\endcode
That is, the data are read starting from the first column (time=0), the first (and only) row of data is read (rowid=[1]), and this row is called "ch0".

\note Mixtools reference: this object replaces the global variables DATA and TIME.

In BDM, data can be read from and written to a range of \c datasources, objects derived from bdm::DS.

\section rvs What is RV and how to use it

RV stands for \c random \c variable; it describes a random variable or its realization. This object plays the role of an identifier of elements of data vectors (in datasources), of expected inputs to functions (in pdfs), or of required results (in conditioning operations).

\note Mixtools reference: RV is a generalization of the "structures" \c str in Mixtools. It replaces channel numbers by string names, and adds an extra field, size, to each record.

The mathematical interpretation of an RV is straightforward. Consider a pdf \f$ f(a)\f$; then \f$ a \f$ is the part represented by an RV. Explicit naming of random variables may seem unnecessary for many operations with pdfs; e.g. for the generation of a uniform sample from <0,1>, it is not necessary to specify any random variable. For this reason, RVs are often optional information. However, the considered algorithm \c estimator is built in a way that requires RVs to be given. The \c estimator use-case joins the data source with an array of estimators, each of which declares its input vector of data. The connection is made automatically using the mechanism of datalinks (bdm::datalink).

Readers familiar with the Simulink environment may look at RVs as unique identifiers of inputs and outputs of simulation blocks; the inputs are connected automatically with the outputs with matching RVs. This view is, however, very incomplete: RVs are much more powerful than that.
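In Matlab, an RV is created by a constructor taking a cell array of names and, optionally, vectors of sizes and time offsets, as used later in this guide. An illustrative sketch:
\code
y  = RV({'y'});                       % scalar variable 'y' at the current time
yd = RV({'y','u'}, [1 1], [-3, -1]);  % 'y' delayed by 3 steps and 'u' delayed by 1 step
\endcode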
\section datasource Class inheritance and DataSources

As mentioned above, the algorithm \c estimator is written to accept any datasource (i.e. any offspring of bdm::DS). For the full list of offsprings, click Classes > Class Hierarchy. At the time of writing this tutorial, the available datasources are:
- bdm::DS
  - bdm::EpdfDS
  - bdm::MemDS
    - bdm::FileDS
      - bdm::CsvFileDS
      - bdm::ITppFileDS
  - bdm::MpdfDS
  - bdm::stateDS

MemDS has already been introduced in the example in \ref memds. However, any of the classes listed above can be used to replace it in the example. This will be demonstrated on the \c EpdfDS class. The brief description of the class states that EpdfDS "simulates data from a static pdf (epdf)". A static pdf means an unconditional pdf, in the sense that the random variable is conditioned on numerical values only. In mathematical notation it could be both \f$ f(a) \f$ and \f$ f(x_t |d_1 \ldots d_t)\f$; the latter case holds only when all \f$ d \f$ denote observed values.

For example, we wish to simulate realizations of a uniform density on the interval <-1,1>. The uniform density is represented by the class bdm::euni. From bdm::euni.from_setting() we can find that the code is:
\code
U={class="euni"; high=1.0; low=-1.0;};
\endcode
for a configuration file, and
\code
U.class='euni';
U.high = 1.0;
U.low = -1.0;
U.rv.class = 'RV';
U.rv.names = {'a'};
\endcode
for Matlab. The datasource itself can then be configured via
\code
DS = {class='EpdfDS'; epdf=@U;};
\endcode
in a config file, or
\code
DS.class = 'EpdfDS';
DS.epdf = U;
\endcode
in Matlab. Contrary to the previous example, we need to tell the algorithm \c estimator how many samples from the datasource we need. This is configured by the variable \c experiment.ndat. The configuration is finalized by:
\code
experiment.ndat = 10;
M=estimator(DS,{},experiment);
\endcode
The result is, as expected, in the field \c M.a, whose name corresponds to the name of \c U.rv.
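Putting the pieces together, the whole Matlab session for this example, assembled from the snippets above, reads:
\code
U.class = 'euni';      % uniform density on <-1,1>
U.high = 1.0;
U.low = -1.0;
U.rv.class = 'RV';
U.rv.names = {'a'};    % the result will appear in field M.a

DS.class = 'EpdfDS';   % datasource simulating from the static pdf U
DS.epdf = U;

experiment.ndat = 10;  % number of samples to draw
M = estimator(DS,{},experiment);
\endcode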
If the task were only to generate random realizations, this would indeed be a very clumsy way of doing it. However, the power of the proposed approach is revealed in more demanding examples, one of which follows next.

\section arx Simulating an autoregressive model

Consider the following autoregressive model:
\f[ y_t \sim \mathcal{N}( a y_{t-3} + b u_{t-1}, r) \f]
where \f$ a,b \f$ are known constants, and \f$ r \f$ is a known variance. Direct application of \c EpdfDS is not possible, since the pdf above is conditioned on the values of \f$ y_{t-3}\f$ and \f$ u_{t-1}\f$. We need to handle two issues:
-# the extra, unsimulated variable \f$ u \f$,
-# the time delays of the values.

The first issue can be handled in two ways. First, \f$ u \f$ could be considered an input and, as such, given to the datasource externally. This solution is used in the algorithm use-case \c closedloop. For the \c estimator scenario, however, we apply the second option: we complement \f$ f(y_{t}|y_{t-3},u_{t-1})\f$ by an extra pdf:
\f[ u_t \sim \mathcal{N}(0, r_u) \f]
The joint density is then:
\f[ f(y_{t},u_{t}|y_{t-3},u_{t-1}) = f(y_{t}|y_{t-3},u_{t-1})f(u_{t}) \f]
and no input is needed, since the datasource has all the necessary information inside. All that is required is to store the values and copy them to the appropriate places. This is done automatically by the dedicated class bdm::datalink_buffered. The only issue a user may need to take care of is the missing initial conditions for the simulation. By default, these are set to zeros.
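For intuition, the same model can be simulated directly in Matlab, outside BDM. A minimal sketch, where the constants match the BDM configuration given below and zero initial conditions mimic the BDM default:
\code
a = 0.5; b = -0.9; r = 0.1; ru = 0.2;  % known constants and variances
ndat = 10;
yh = [0 0 0];  % history y_{t-3}, y_{t-2}, y_{t-1}; zero initial conditions
uh = 0;        % history u_{t-1}
y = zeros(1,ndat); u = zeros(1,ndat);
for t = 1:ndat
    u(t) = sqrt(ru)*randn;                  % u_t ~ N(0, r_u)
    y(t) = a*yh(1) + b*uh + sqrt(r)*randn;  % y_t ~ N(a*y_{t-3} + b*u_{t-1}, r)
    yh = [yh(2:3), y(t)];                   % shift histories
    uh = u(t);
end
\endcode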
Using the default values, the full configuration of this system is:
\code
y = RV({'y'});
u = RV({'u'});

fy.class = 'mlnorm<ldmat>';
fy.rv = y;
fy.rvc = RV({'y','u'}, [1 1], [-3, -1]);
fy.A = [0.5, -0.9];
fy.const = 0;
fy.R = 0.1;

fu.class = 'enorm<ldmat>';
fu.rv = u;
fu.mu = 0;
fu.R = 0.2;

DS.class = 'MpdfDS';
DS.mpdf.class = 'mprod';
DS.mpdf.mpdfs = {fy, epdf2mpdf(fu)};
\endcode
Explanation of this example requires a few remarks:
- the class of the \c fy object is 'mlnorm<ldmat>', which is a Normal pdf with mean value given by a linear function and covariance matrix stored in LD decomposition; see bdm::mlnorm for details.
- the naming convention 'mlnorm<ldmat>' relates to the concept of templates in C++. For those unfamiliar with this concept, it is basically a way to share code between different flavours of the same object. Note that mlnorm exists in three versions: mlnorm<ldmat>, mlnorm<chmat> and mlnorm<fsqmat>. These classes act identically; the only difference is that the internal data are stored either in LD decomposition, Cholesky decomposition or full matrices, respectively.
- the same concept is used for enorm, where enorm<chmat> and enorm<fsqmat> are also possible. In this particular use, these objects are equivalent. In specific situations, e.g. a Kalman filter implemented on Cholesky decomposition (bdm::KalmanCh), only enorm<chmat> is appropriate.
- the class 'mprod' represents the chain rule of probability. The attribute \c mpdfs of its configuration structure is a list of conditional densities. A conditional density \f$ f(a|b)\f$ is represented by the class \c mpdf and its offsprings. The class \c RV is used to describe both the variables before the conditioning sign (field \c rv) and after it (field \c rvc).
- for simplicity of implementation, mprod accepts only conditional densities in the field \c mpdfs. Hence, the pdf \f$ f(u_t)\f$ must be converted to a conditional density with an empty condition, \f$ f(u_t| \{\})\f$. This is achieved by calling the function epdf2mpdf, a trivial wrapper creating the class bdm::mepdf.
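Concretely, with \c fy.A = [0.5, -0.9], \c fy.const = 0 and \c fy.R = 0.1, and with \c fy.rvc listing \f$ y \f$ delayed by 3 steps and \f$ u \f$ delayed by 1 step, the object \c fy represents the conditional density
\f[ f(y_t|y_{t-3},u_{t-1}) = \mathcal{N}\left(0.5\, y_{t-3} - 0.9\, u_{t-1},\; 0.1\right), \f]
i.e. the regression coefficients in \c A follow the ordering of the entries of \c fy.rvc.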
The code above can be run immediately, using the same execution sequence of \c estimator as above.

\subsection ini Initializing simulation

When zeros are not appropriate initial conditions, the correct conditions can be set using additional commands:
\code
DS.init_rv = RV({'y','y','y'}, [1,1,1], [-1,-2,-3]);
DS.init_values = [0.1, 0.2, 0.3];
\endcode
The values of \c init_values are copied to the places in history identified by the corresponding values of \c init_rv. The initial data are not checked for completeness, i.e. the values of random variables missing from \c init_rv (in this case, all occurrences of \f$ u \f$) are still initialized to 0.

\section conc What was demonstrated in this tutorial

The purpose of this page was to introduce the software image of the basic elements of decision making as implemented in BDM:
- random variables as an identification mechanism (bdm::RV),
- unconditional pdfs (bdm::epdf),
- conditional pdfs (bdm::mpdf),

and the use of these in the simulation of data and the function of datasources. In the next tutorial, Bayesian models (bdm::BM) and loggers (bdm::logger) will be introduced.
*/