Dynamic Bayesian Network Thesis

This work also focuses on the likelihood in the evaluation rather than on the inferred structure. Numerous functional neuroimaging studies suggest that cortical regions selectively couple to one another (Greicius, Krasnow, Reiss et al.). In spite of a vast literature on data mining and machine learning techniques, these problems have remained difficult.

Keywords: ensemble recordings, spike trains, functional connectivity, effective connectivity, dynamic Bayesian networks, multiple single unit activity, spiking cortical networks

1 Introduction

Brain networks are of fundamental interest in systems neuroscience. They can also be useful in identifying plastic changes in cortical circuitry during learning and memory (Mehta, Quirk and Wilson; Brown, Nguyen, Frank et al.). Moreover, for applications that deal with data monitoring or unusual-behavior detection, an additional challenge is the design of discovery algorithms aimed at extracting patterns, trends and anomalies in unsupervised settings, where the data are commonly noisy and even partially unobservable. In this work, we investigate the applicability of Dynamic Bayesian Networks (DBNs) to inferring the effective connectivity between spiking cortical neurons from their observed spike trains.

Since the children and parents of a hidden common cause are associated, they may be mistakenly thought to be directly linked. For example, miRNAs were previously not thought to play important roles in gene regulation. An early work is [37], which formulates the problem as determining the constraints on the variance-covariance matrix of the observed data and then searching for a causal structure that explains those constraints. After that, using the fact that genes with a common parent should be highly associated, the suspected genes can be clustered, and one hidden common cause can be estimated for each cluster of size at least two. Note that there may be multiple edges from node i to node j, but with different delays. Since BN learning is NP-hard [23], some DBN learning algorithms use heuristic or stochastic optimization: simulated annealing, as in Banjo [24] (whose updated version allows multi-step delays); a genetic algorithm, as in [25]; and greedy heuristic search in MMHO-DBN [26] when the number of parents is large, with exhaustive search used otherwise.
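To make the delayed-edge search concrete, here is a minimal Python sketch of a greedy, BIC-scored parent search over (parent, delay) pairs for one target node. It is an illustrative assumption, not the algorithm of [24-26]: the function names `bic_score` and `greedy_delayed_parents`, the discretized input data (e.g. binned spike trains), and all parameter values are made up for the example.

```python
import itertools
import numpy as np

def bic_score(child, lagged_parents, n_states=2):
    """BIC of a discrete child variable given already time-shifted parents."""
    T = len(child)
    keys = np.zeros(T, dtype=np.int64)
    for p in lagged_parents:
        keys = keys * n_states + p          # encode the joint parent configuration
    loglik = 0.0
    for k in np.unique(keys):
        counts = np.bincount(child[keys == k], minlength=n_states)
        probs = counts / counts.sum()
        nz = counts > 0
        loglik += np.sum(counts[nz] * np.log(probs[nz]))
    n_params = (n_states - 1) * (n_states ** len(lagged_parents))
    return loglik - 0.5 * n_params * np.log(T)

def greedy_delayed_parents(data, child_idx, max_delay=3, max_parents=3):
    """Greedily add (parent_index, delay) edges that improve the BIC score.

    data: (T, n) array of discrete time series (e.g. binned spike trains).
    The same parent may be selected twice with different delays, mirroring
    multiple edges from node i to node j with different delays.
    """
    T, n = data.shape

    def score(edges):
        child = data[max_delay:, child_idx]
        lagged = [data[max_delay - d:T - d, j] for (j, d) in edges]
        return bic_score(child, lagged)

    chosen, best = [], score([])
    improved = True
    while improved and len(chosen) < max_parents:
        improved = False
        for cand in itertools.product(range(n), range(1, max_delay + 1)):
            if cand in chosen:
                continue
            s = score(chosen + [cand])
            if s > best:
                best, best_cand, improved = s, cand, True
        if improved:
            chosen.append(best_cand)
    return chosen

# Toy usage: neuron 0 drives neuron 2 with a delay of 2 time bins.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=(500, 3))
x[2:, 2] = x[:-2, 0]
x[2:, 2] ^= (rng.random(498) < 0.1).astype(np.int64)   # 10% observation noise
print(greedy_delayed_parents(x, child_idx=2))           # expect [(0, 2)]
```

Greedy forward selection with a penalized score is only one of the search strategies named above; simulated annealing or a genetic algorithm would explore the same (parent, delay) space differently.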

Hidden common cause

The above methods ignore the issue of hidden common causes by implicitly assuming causal sufficiency, i.e., that every common cause of the observed variables has itself been observed. Conditional probabilities in P are used to capture the statistical dependence between child nodes and parent nodes.
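As a small illustration of both points (not taken from the cited works), the following hypothetical Python snippet defines conditional probabilities P(child = 1 | parent) for two observed variables A and B that share a hidden common cause H. A and B end up strongly associated even though neither is a parent of the other, so a learner that assumes causal sufficiency could add a spurious direct edge between them. All probability values are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 5000

# Hidden common cause H drives two observed variables A and B.
h = rng.integers(0, 2, size=T)              # hidden, never observed
p_a = np.where(h == 1, 0.8, 0.1)            # P(A=1 | H), a CPT in the DBN sense
p_b = np.where(h == 1, 0.9, 0.2)            # P(B=1 | H)
a = (rng.random(T) < p_a).astype(int)
b = (rng.random(T) < p_b).astype(int)

# A and B are associated even though neither causes the other.
corr = np.corrcoef(a, b)[0, 1]
print(f"corr(A, B) = {corr:.2f}")
print(f"P(B=1 | A=1) = {b[a == 1].mean():.2f} vs P(B=1 | A=0) = {b[a == 0].mean():.2f}")
```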

In particular, the Dynamic Bayesian Network (DBN) is one potential graphical method for identifying causal relationships between simultaneously observed random variables (Boyen et al.).

Abstract

Coordination among cortical neurons is believed to be a key element in mediating many high-level cortical processes such as perception, attention, learning and memory formation.

Gene network inference

Many GRN inference algorithms and models have been proposed, with different levels of detail obtained by labeling the edges with different information; see [9, 10] for surveys of GRN modelling and [11] for a survey of GRN inference algorithms for microarray expression data.

Graph theory, widely used in mathematics and machine learning applications, is becoming increasingly popular in the analysis of large-scale neural data (Sporns; Denise, David, Richard et al.). Elidan and Friedman [49] complement [47] and focus on learning the dimensionality (the number of states) of hidden variables.

However, to our knowledge, previous GRN inference methods all implicitly make the assumption of causal sufficiency, i.e., that all common causes of the observed variables have themselves been measured.

Each hidden variable has only observed variables as children and parents, with at least two children and possibly no parents. Some works are less restrictive and allow the hidden variables to have observed variables as parents. It is in principle impossible to be certain that all relevant factors or common causes have been measured, because factors that are not even conceived of cannot be measured.

Each X_{i,t} represents the expression of gene i at time t, where n of them are observed and n_h are hidden. Genes that share a hidden common parent should be highly associated, so we could predict the genes with a hidden common cause using this clue; a sketch of this clustering step is given below. Gene regulation also involves multi-step time delays, which are known to affect network stability or to cause oscillations [5-8]. Moreover, previous methods for discovering hidden common causes either do not handle such multi-step delays or require that the parents of hidden common causes are not observed genes. Nevertheless, the use of DBNs to characterize cortical networks at the single-neuron and ensemble levels has not been fully explored.
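The clustering clue can be illustrated with a short, hypothetical Python sketch (not the paper's actual algorithm): suspected genes whose pairwise association exceeds a threshold are grouped, and one hidden common cause would then be estimated for each group of size at least two. The function name `cluster_suspected_genes`, the correlation threshold, and the toy data are assumptions made for illustration.

```python
import numpy as np

def cluster_suspected_genes(expr, suspects, threshold=0.6):
    """Group suspected genes whose pairwise association is high, the clue
    that they may share a hidden common cause.

    expr: (T, n) array of expression time series X_{i,t}.
    suspects: indices of suspected genes.
    Returns clusters of size >= 2; one hidden common cause would then be
    estimated for each cluster.
    """
    corr = np.corrcoef(expr.T)
    clusters = []
    unassigned = list(suspects)
    while unassigned:
        seed = unassigned.pop(0)
        group = [seed]
        for g in unassigned[:]:
            if all(abs(corr[g, m]) >= threshold for m in group):
                group.append(g)
                unassigned.remove(g)
        if len(group) >= 2:
            clusters.append(group)
    return clusters

# Toy data: a hidden regulator (never included in `expr`) drives genes 0 and 1;
# gene 2 is unrelated noise.
rng = np.random.default_rng(2)
T = 300
hidden = rng.normal(size=T)
expr = np.column_stack([
    hidden + 0.3 * rng.normal(size=T),   # gene 0, driven by the hidden regulator
    hidden + 0.3 * rng.normal(size=T),   # gene 1, driven by the hidden regulator
    rng.normal(size=T),                  # gene 2, unrelated
])
print(cluster_suspected_genes(expr, suspects=[0, 1, 2]))   # expect [[0, 1]]
```

In a method that also handles multi-step delays, the association measure would be computed over time-lagged copies of each X_{i,t} rather than over the raw series as in this simplified sketch.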