1. Introduction
The human brain is estimated to contain on the order of 100 billion neurons and ten thousand times as many connections. Neurons never function in isolation: each is connected to roughly 10,000 others, and they interact extensively every millisecond. Brain cells are organized, often dynamically, into neural circuits that process specific types of information and provide the foundation of perception, cognition, and behavior. Brain anatomy and activity can be described at various levels of resolution and are organized on a hierarchy of scales, ranging from molecules to organisms and spanning roughly 10 and 15 orders of magnitude in space and time, respectively. Different dynamic processes on local and global scales generate multiple levels of segregation and integration, and lead to spatially distinct patterns of coherence. At each scale, neural dynamics are determined by processes at the same scale as well as at smaller and larger scales, with no scale privileged over others. These scales interact and are mutually dependent; their coordinated action yields the overall functional properties of cells and organisms.
An ultimate goal of neuroscience is to understand the brain’s driving forces and organizational principles, and how nervous systems function together to generate behavior. This raises a challenging issue for the neuroscience community: integrating the diverse knowledge derived from multiple levels of analysis into a coherent understanding of brain structure and function. The accelerating availability of neuroscience data places huge demands on mining and modeling methods. These data are generated at different resolutions of description, for example, from neuronal spike trains to electroencephalography (EEG), magnetoencephalography (MEG), and functional magnetic resonance imaging (fMRI). A key theme in modern neuroscience is the move from localization of function to characterization of brain networks; mathematical approaches that extract directed causal connectivity from neural or neuroimaging signals are increasingly in demand. Despite differences in the spatiotemporal scales of brain signals, their analysis and modeling share some fundamental computational strategies.
Among the diverse computational methods, probabilistic modeling and Bayesian inference play a significant role and can contribute to neuroscience from different perspectives. Bayesian approaches can be used to analyze or decode brain signals such as spike trains and structural and functional neuroimaging data. Normative predictions can be made regarding how an ideal perceptual system integrates prior knowledge with sensory observations, enabling principled interpretations of data from behavioral and psychological experiments. Moreover, algorithms for Bayesian estimation can provide mechanistic interpretations of neural circuits and cognition in the brain. In addition, a better understanding of the brain’s computational mechanisms would have a synergistic impact on the development of novel algorithms for Bayesian computation, resulting in new technologies and applications.
This chapter reviews and categorizes varieties of mathematical and statistical approaches for measuring and estimating information, networks, causality and dynamics in the multi-scale brain. Specifically, in Section 3, we introduce the fundamentals of information theory and the extended concepts and metrics for describing information processing in the brain, with validity and applications demonstrated on neural signals from multiple scales and on aging research. Bayesian inference for neuroimaging data analysis, and cognition modeling of observations from psychological and behavioral experiments as well as the corresponding neural/neuronal underpinnings, are provided in Section 4. Graphical models, Bayesian and dynamic Bayesian networks, and some new developments, together with their applications in detecting causal connectivity and longitudinal morphological changes, are presented in Section 5. We illustrate attractor dynamics and the associated interpretations for the aging brain in Section 6. Conclusions and future directions are given in Section 7.
2. Neuroscience data/signals and brain connectivity
2.1. Recording and imaging techniques at multiple scales
An important breakthrough regarding neuronal activity and neurotransmission was the ability to carry out electrophysiological recordings of single neurons in the intact brain of an awake or anesthetized animal, or in an explanted piece of tissue [1]. Such recordings have extremely high spatial (micrometer) and temporal (millisecond) resolution and allow direct observation of electrical currents and potentials generated by single nerve cells. This, however, comes at considerable cost, since all cellular recording techniques are highly invasive, requiring surgical intervention and placement of recording electrodes within brain tissue. Neurons communicate via action potentials, or spikes; neural recordings are usually transformed into series of discrete spiking events that can be characterized in terms of rate and timing. Less direct observations of electrical brain activity come from the electromagnetic potentials generated by the combined electrical currents of large neuronal populations, i.e. electroencephalography (EEG) and magnetoencephalography (MEG). They are non-invasive, as recordings are made through sensors placed on, or near, the surface of the head. EEG and MEG directly record signals of neuronal activity and thus have high temporal resolution. But the spatial resolution is relatively poor, as neither technique allows an unambiguous reconstruction of the electrical sources responsible for the recorded signal. EEG and MEG signals are therefore often processed in sensor space, as sources are difficult to localize in anatomical space.
With the development of magnetic resonance imaging (MRI) in the 1980s [2], brain imaging took a huge step forward. The strong magnetic field and radiofrequency pulses used in MRI scanning are harmless, making the technique completely noninvasive. MRI is also extremely versatile: by changing the scanning parameters, we can acquire images based on a wide variety of different contrast mechanisms. For example, diffusion MRI maps the diffusion of molecules, mainly water, in biological tissues, in vivo and non-invasively. Water diffusion patterns can consequently reveal microscopic details about tissue architecture in the brain. Functional magnetic resonance imaging (fMRI) measures hemodynamic signals, which are only indirectly related to neural activity. These techniques allow the reconstruction of spatially localized signals at millimeter-scale resolution across the imaged brain volume. In fMRI, the primary measure of activity is the contrast between the magnetic susceptibility of oxygenated and deoxygenated hemoglobin within each voxel; the signal is therefore called the blood-oxygen-level-dependent (BOLD) contrast.
Neural signals recorded via the above techniques differ significantly in both spatial and temporal resolution and in the directness with which neuronal activity is detected. Simultaneously using two or more recording methods within the same experiment can reveal how different neural or metabolic signals are interrelated [3]. Each technique measures a different aspect of neural dynamics and organization, and interpreting neural data sets must take these differences into account. All methods for observing brain structure and function have advantages but also disadvantages: some methods provide great structural detail but are invasive or cover only a small part of the brain, while others may be noninvasive but have poor spatial or temporal resolution. Nervous systems are organized at multiple scales, from synaptic connections between single cells, to the organization of cell populations within individual anatomical regions, and finally to the large-scale architecture of brain regions and their interconnections or network connectivity. Different techniques are sensitive to different levels of organization. The multi-scale aspect of the nervous system is an essential feature of its organization and network architecture [4].
2.2. Categorization of brain network connectivity
Given the diverse techniques for observing the brain, there are many different ways to describe and measure brain connectivity [5, 6]. Brain connectivity can be derived from histological sections revealing anatomical connections, from electrical recordings of single nerve cells, or from functional imaging of the entire brain. Even with a single recording technique, different ways of processing and analyzing neural data may result in different descriptions of the underlying network. Structural connectivity is a wiring diagram of physical links, while functional connectivity describes dynamic statistical interactions. A third class of brain networks is effective connectivity, which encompasses the network of directed interactions between neural elements. Effective connectivity goes beyond structural and functional connectivity by detecting patterns of causal influence among neural elements. These three main types of brain connectivity are defined more precisely below.
3. Information theory and processing
3.1. Fundamentals and definitions: Entropy, Kullback-Leibler divergence, and mutual information
A major objective of neuroscience is to understand how the brain processes information. Here we provide the probabilistic notation and information-theoretic definitions that will be used in this section.
The information content (or surprisal) of an outcome x with probability p(x) is defined as I(x) = -log p(x): the less probable an outcome, the more informative its observation. The choice of logarithmic base determines the unit. The most common unit of information is the bit, corresponding to logarithms of base 2.
The entropy of a discrete random variable X with probability mass function p(x) is defined as H(X) = -Σx p(x) log p(x). Entropy is a measure of the randomness or uncertainty of the distribution: the more random the distribution, the more information is gathered by observing its value. Specifically, entropy is zero for a deterministic variable and is maximized for a uniform distribution. The conditional entropy is given as H(X|Y) = -Σx,y p(x,y) log p(x|y).
The chain rule for entropy is H(X,Y) = H(X) + H(Y|X).
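These definitions are straightforward to check numerically. The following minimal sketch (the joint distribution is a hypothetical example chosen for illustration) computes the entropy, the conditional entropy, and verifies the chain rule:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of an array of probabilities (zeros ignored)."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Hypothetical joint distribution p(x, y) over two binary variables.
pxy = np.array([[0.4, 0.1],
                [0.1, 0.4]])
px = pxy.sum(axis=1)                      # marginal p(x)

H_X = entropy(px)                         # H(X) = 1 bit (uniform marginal)
H_XY = entropy(pxy)                       # joint entropy H(X, Y)
# Conditional entropy H(Y|X) = -sum_{x,y} p(x,y) log2 p(y|x)
H_Y_given_X = -(pxy * np.log2(pxy / px[:, None])).sum()

# Chain rule: H(X, Y) = H(X) + H(Y|X)
print(H_XY, H_X + H_Y_given_X)            # the two values coincide
```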
The Kullback-Leibler (KL) divergence between two distributions p and q is defined as D(p||q) = Σx p(x) log [p(x)/q(x)]. It is a measure of the difference between two distributions, but it does not usually satisfy the symmetry condition, that is, in general D(p||q) ≠ D(q||p).
The mutual information between X and Y is defined as I(X;Y) = Σx,y p(x,y) log [p(x,y)/(p(x)p(y))], which equals the KL divergence between the joint distribution and the product of the marginals. Intuitively, mutual information measures the information that X and Y share: the reduction in uncertainty about one variable obtained by observing the other, I(X;Y) = H(X) - H(X|Y), with the conditional mutual information given as I(X;Y|Z) = H(X|Z) - H(X|Y,Z).
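Both quantities can be computed directly from small probability tables. The sketch below (hypothetical distributions, for illustration only) shows the asymmetry of the KL divergence and computes mutual information as the divergence between the joint and the product of marginals:

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D(p||q) in bits."""
    p = np.asarray(p, dtype=float).ravel()
    q = np.asarray(q, dtype=float).ravel()
    mask = p > 0
    return float((p[mask] * np.log2(p[mask] / q[mask])).sum())

# Asymmetry: D(p||q) and D(q||p) generally differ.
p, q = np.array([0.7, 0.3]), np.array([0.5, 0.5])
print(kl(p, q), kl(q, p))

# Mutual information as the KL divergence between the joint
# distribution and the product of its marginals.
pxy = np.array([[0.4, 0.1],
                [0.1, 0.4]])
px, py = pxy.sum(axis=1), pxy.sum(axis=0)
mi = kl(pxy, np.outer(px, py))
print(mi)                                  # > 0: X and Y are dependent

indep = np.outer(px, py)                   # same marginals, but independent
print(kl(indep, np.outer(px, py)))         # 0: MI vanishes under independence
```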
3.2. Causal inference: Granger causality, transfer entropy, and directed information
Granger causality [8] formalizes the intuition that a cause both precedes and helps predict its effect. In the linear autoregressive framework, the present value of a time series X is first modeled from its own past, X(t) = Σj aj X(t-j) + ε(t), and then from the past of both series, X(t) = Σj a'j X(t-j) + Σj bj Y(t-j) + ε'(t), where ε(t) and ε'(t) denote the prediction errors and the model order can be selected with criteria such as AIC or BIC [12, 13]. If including the past of Y significantly reduces the variance of the prediction error of X, i.e. var(ε') < var(ε), then Y is said to Granger-cause X. Transfer entropy [15] extends this predictive notion of causality beyond linear models; it is defined as T(Y→X) = Σ p(x(t+1), x_t(k), y_t(l)) log [p(x(t+1) | x_t(k), y_t(l)) / p(x(t+1) | x_t(k))], where x_t(k) and y_t(l) denote the k and l most recent past values of X and Y, respectively.
Transfer entropy is asymmetric and based on transition probabilities; it thus provides directional and dynamic information. The key feature of this information-theoretic functional for identifying causality is that, in theory, it assumes no particular model for the interaction between the two time series. Transfer entropy is therefore sensitive to correlations of all orders, which makes it better suited to exploratory analyses than Granger causality or other model-based approaches. This is especially advantageous when unknown non-linear interactions are embedded in the systems under study. It is shown in [17] that for Gaussian variables, Granger causality and transfer entropy are equivalent, which bridges autoregressive and information-theoretic methods in causal inference. A practical issue with transfer entropy is that its performance depends on the estimation of transition probabilities; this requires order selection for both the driven and driving systems.
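Although full toolboxes exist for this analysis [9], the core computation behind bivariate Granger causality is small. A minimal sketch on synthetic data follows (order-1 autoregressions fit by least squares; the coupling coefficient 0.4 and the one-step lag are illustrative assumptions, not estimates from real recordings):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2000
x, y = np.zeros(T), np.zeros(T)
for t in range(1, T):                       # y drives x with a one-step lag
    y[t] = 0.5 * y[t - 1] + rng.standard_normal()
    x[t] = 0.5 * x[t - 1] + 0.4 * y[t - 1] + rng.standard_normal()

def resid_var(target, predictors):
    """Residual variance of a least-squares regression of target on predictors."""
    A = np.column_stack(predictors)
    beta, *_ = np.linalg.lstsq(A, target, rcond=None)
    r = target - A @ beta
    return float(r.var())

# Restricted model: x(t) from its own past; full model: add the past of y.
var_r = resid_var(x[1:], [x[:-1]])
var_f = resid_var(x[1:], [x[:-1], y[:-1]])
gc_y_to_x = np.log(var_r / var_f)           # Granger causality index y -> x

var_r2 = resid_var(y[1:], [y[:-1]])
var_f2 = resid_var(y[1:], [y[:-1], x[:-1]])
gc_x_to_y = np.log(var_r2 / var_f2)         # should be near zero

print(gc_y_to_x, gc_x_to_y)                 # the y -> x direction dominates
```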
Mutual information is symmetric and only measures the correlation or statistical dependence between random processes; it cannot identify causal directionality. The directed information [18, 19] from a sequence X^N = (X_1, ..., X_N) to a sequence Y^N is defined as I(X^N → Y^N) = Σn I(X^n; Y_n | Y^(n-1)). With the chain rule for entropy, it can also be written as I(X^N → Y^N) = H(Y^N) - H(Y^N || X^N), where the causally conditioned entropy is H(Y^N || X^N) = Σn H(Y_n | Y^(n-1), X^n). The difference between mutual information and directed information is that the latter conditions Y_n only on the past and present of X (i.e. X^n) rather than on the entire sequence X^N; directed information is therefore asymmetric and can reveal the direction of information flow between the two processes.
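The asymmetry of directed information can be verified exactly on a toy example. Below, X1, X2, Y1 are independent fair coin flips and Y2 copies X1, so information flows only from X to Y; the mutual information between the two sequences is symmetric, but the directed information is not. This is a plug-in computation over an exhaustively specified joint distribution with N = 2 (the channel is hypothetical and chosen only to make the asymmetry visible):

```python
import numpy as np
from itertools import product

# Joint pmf over (x1, x2, y1, y2): x1, x2, y1 uniform, y2 = x1 (one-step delay).
p = np.zeros((2, 2, 2, 2))
for x1, x2, y1 in product(range(2), repeat=3):
    p[x1, x2, y1, x1] = 1.0 / 8.0

def H(p, keep):
    """Entropy (bits) of the marginal over the axes in `keep`."""
    axes = tuple(i for i in range(p.ndim) if i not in keep)
    m = np.asarray(p.sum(axis=axes)).ravel() if axes else p.ravel()
    m = m[m > 0]
    return float(-(m * np.log2(m)).sum())

def cmi(p, A, B, C=()):
    """Conditional mutual information I(A; B | C) from the joint pmf."""
    A, B, C = set(A), set(B), set(C)
    return H(p, A | C) + H(p, B | C) - H(p, A | B | C) - H(p, C)

# Axes: 0 = X1, 1 = X2, 2 = Y1, 3 = Y2.
# I(X^2 -> Y^2) = I(X1; Y1) + I(X1 X2; Y2 | Y1)
di_x_to_y = cmi(p, {0}, {2}) + cmi(p, {0, 1}, {3}, {2})
# I(Y^2 -> X^2) = I(Y1; X1) + I(Y1 Y2; X2 | X1)
di_y_to_x = cmi(p, {2}, {0}) + cmi(p, {2, 3}, {1}, {0})
mi = cmi(p, {0, 1}, {2, 3})       # ordinary mutual information I(X^2; Y^2)

print(di_x_to_y, di_y_to_x, mi)   # 1.0, 0.0, 1.0
```

Mutual information (1 bit) is blind to direction, while the directed information correctly assigns the whole bit to the X → Y direction.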
3.3. Applications and validity in neuroscience and aging research
4. Probabilistic modeling and Bayesian inference for neural computation, cognition, and behavior
4.1. Bayes’ theorem and approximate inference
A generic problem in science is: given the observed data D, how do we infer the underlying causes or model parameters θ that generated them? Bayes’ theorem provides the answer: p(θ|D) = p(D|θ) p(θ) / p(D), where the evidence p(D) = ∫ p(D|θ) p(θ) dθ. That is: from a prior belief p(θ) and a likelihood p(D|θ) linking parameters to data, we compute the posterior distribution p(θ|D) over the quantities of interest.
A key algorithmic challenge for Bayesian inference is that, for many models of interest, analytical tractability of the above posterior is elusive due to the integral in the denominator. We therefore resort to approximate inference, where the approaches tend to fall into one of two classes: 1) stochastic methods, such as Markov chain Monte Carlo (MCMC) sampling [34], which approximate the posterior with samples; and 2) deterministic methods, such as variational Bayes [35, 36], which approximate it within a tractable family of distributions.
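As a concrete sketch of the stochastic route, the following random-walk Metropolis sampler approximates the posterior of a coin’s bias under a uniform prior; because this model is conjugate, the exact Beta posterior mean is available for comparison. The data, proposal width, and chain length are illustrative choices, not recommendations:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 7, 10                       # toy data: 7 heads in 10 tosses

def log_post(theta):
    """Unnormalized log posterior: Bernoulli likelihood x uniform prior."""
    if not 0.0 < theta < 1.0:
        return -np.inf
    return k * np.log(theta) + (n - k) * np.log(1.0 - theta)

samples, theta = [], 0.5
for _ in range(20000):             # random-walk Metropolis
    prop = theta + 0.1 * rng.standard_normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop               # accept the proposal
    samples.append(theta)
post_mean = np.mean(samples[2000:])        # discard burn-in

exact = (1 + k) / (2 + n)                  # Beta(8, 4) posterior mean = 2/3
print(post_mean, exact)                    # the two should nearly agree
```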
4.2. Neuroimaging data analyses using Bayesian approaches
Here we focus on Bayesian inference in fMRI data analysis, mainly for activation detection and hemodynamic response function (HRF) estimation, although the key concepts of Bayesian methods have been applied to structural MRI images as well [37-39]. Graphical-model-based Bayesian and dynamic Bayesian networks and their applications will be discussed in Section 5.
Bayesian inference has taken fMRI analysis into territory that classical frequentist statistics have difficulty addressing, because of several challenging issues associated with the data. For example, the fMRI response to stimuli is not instantaneous, but lagged and damped by the hemodynamic response. Estimating HRFs has attracted increasing interest, since it provides not only deep insight into the underlying dynamics of the human brain, but also a basis for inferring brain activation regions. How do we account for HRF properties such as nonlinearity and variability over different brain regions? fMRI data form a 4-dimensional signal with spatial and temporal noise correlations [40, 41]. How do we incorporate these correlations into the data analysis, alongside the clustered pattern of activation? Moreover, group-level statistical inference on fMRI time series is usually needed to answer imaging-based scientific questions. How do we make valid, sensitive, and robust estimates of activation effects in populations of subjects? In fMRI analysis, what we often do is take the acquired data plus a generative model and extract pertinent information about the brain, i.e. make inferences on the model and its parameters. Bayesian statistics requires a prior probabilistic belief about the model parameters to be specified. Such models are typically HRF models, spatial models, and hierarchical multi-subject models, which respectively address the challenges listed above.
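To make the lagged, damped HRF concrete: a widely used parametric form is a difference of two gamma densities, one producing a response peaking around 5 s and a slower one producing the post-stimulus undershoot. The sketch below builds such a canonical HRF and convolves it with a boxcar stimulus to form a predicted BOLD regressor; the specific shape parameters and stimulus timing are illustrative assumptions, not a fit to data:

```python
import numpy as np
from scipy.stats import gamma

dt = 0.1                                   # time step in seconds
t = np.arange(0, 30, dt)
# Double-gamma HRF: response peaking near 5 s minus a smaller, slower undershoot.
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
hrf /= hrf.sum()                           # normalize to unit area

# Boxcar stimulus: 10 s on, 20 s off, repeated over 90 s.
stim = np.zeros(900)
for onset in (0, 300, 600):
    stim[onset:onset + 100] = 1.0

# The predicted BOLD response is the stimulus convolved with the HRF:
predicted = np.convolve(stim, hrf)[: len(stim)]
print("HRF peaks at t =", t[np.argmax(hrf)], "s")
```

The convolution shows why activation detection cannot treat the measured signal as an instantaneous copy of the stimulus: the regressor rises and decays smoothly around each stimulus block.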
4.3. Bayesian brain: Cognition, perception, uncertainty, behavior and neural representations
The neuroscience principle that the nervous system of animals and humans is adapted to the statistical properties of the environment is reflected across all organizational levels, from the activity of single neurons to networks and behavior [62]. A critical aim of the nervous system is to estimate the state of the world from incomplete and noisy data. During this process, a challenging issue that brains must handle is uncertainty. For example, when we perceive the physical world, make a decision, and take an action, there is uncertainty associated with the sensory system, the motor apparatus, one’s own knowledge, and the world itself. Probability has played a central role in modeling perception and cognition. Specifically, the Bayesian framework of statistical estimation provides a systematic way of dealing with these uncertainties to achieve optimal estimation. Comparison between optimal and actual behavior gives rise to a better understanding of how the nervous system works. Bayesian models have been used to explain results in perception, cognition, behavior, and neural coding in diverse forms [63-67], differing in their assumptions about the world variables and how these relate to each other. However, the key idea shared by all these Bayesian models is that different sources of information can be integrated to estimate the relevant variables. The Bayesian approach thus unifies an enormous range of otherwise apparently disparate behavior within one coherent framework.
A key aim of cognitive science is to reverse-engineer the mind. Cognition modeling based on the probabilistic method begins by identifying ideal solutions to inductive problems, and then uses algorithms to model the mental processes that approximate these solutions. Neural processes are viewed as mechanisms for implementing these algorithms. Probabilistic models of cognition pursue a top-down strategy, which begins with abstract principles that allow agents to solve problems posed by the world (i.e. the functions minds perform) and then aims to reduce these principles to psychological and neural processes. This analysis affords greater flexibility in exploring the representations and inductive biases underlying human cognition. In contrast, connectionist models usually follow a bottom-up approach that starts with a characterization of neural mechanisms and explores what macro-level functional phenomena might emerge. With a formal characterization of an inductive problem, a probabilistic model specifies the hypotheses under investigation, the relation between these hypotheses and observable data, and the prior probability of each hypothesis. By assuming different prior distributions over the hypotheses, different inductive biases can be captured. Although the link between probabilistic inference and neural computation/function is drawing the attention of modelers from different backgrounds, little is known about how such structured representations can be implemented in neural systems for high-level cognition.
Ample results in perception research have shown that the nervous system represents its uncertainty about the true state of the world probabilistically, and that such representations are utilized in two related cognitive areas: information fusion and perceptual decision-making. To fuse information from different sources about the same object, inferences about the object should weight these sources commensurately with their corresponding uncertainty, as demonstrated in multisensory integration [68, 69], where the sources are different sensory modalities, or between information coming from the senses and information stored in memory [70, 71]. Within the Bayesian framework, the organism calculates probability distributions over parameters describing the state of the world, with the computation based on sensory information and knowledge accrued from experience. Although the particular sensory information and prior knowledge are specific to the task, the computation follows the same probability rules. Psychological evidence at the behavioral level that animals and humans represent uncertainty during perceptual processes has motivated research into the neural underpinnings of such probabilistic representations: how do neurons compute with sensory uncertainty information, or even full probability distributions? One scheme is probabilistic population coding [72], which makes use of the likelihood function encoded in neural population activity (as described below). Beyond perception, the neural implementation of cognitive probabilistic models has remained largely unexplored [64, 73].
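The multisensory case has a particularly clean closed form: for two conditionally independent Gaussian cues, the Bayesian estimate is a reliability-weighted average whose variance is smaller than that of either cue alone. A minimal sketch, with hypothetical visual and haptic cues about an object's size (the numbers are made up for illustration):

```python
# Optimal fusion of two independent Gaussian cues (Bayesian cue combination).
def fuse(mu1, var1, mu2, var2):
    """Posterior mean and variance for two Gaussian likelihoods, flat prior."""
    w1 = (1 / var1) / (1 / var1 + 1 / var2)    # reliability = inverse variance
    w2 = 1 - w1
    mu = w1 * mu1 + w2 * mu2                   # reliability-weighted average
    var = 1 / (1 / var1 + 1 / var2)            # always < min(var1, var2)
    return mu, var

# Hypothetical cues: vision says 10.0 (variance 1.0), touch says 12.0 (variance 4.0).
mu, var = fuse(10.0, 1.0, 12.0, 4.0)
print(mu, var)   # 10.4, 0.8 -- pulled toward the more reliable cue
```

The fused variance (0.8) is below that of the better single cue (1.0), which is the signature of optimal integration reported in the multisensory literature.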
5. Graphical models, Bayesian and dynamic Bayesian networks
5.1. Mathematical description and solution
Graphical models, at the intersection of probability and graph theory, provide a natural tool for handling the uncertainty and complexity that frequently occur in applied mathematics, engineering, and scientific domains involving computation. Many classical multivariate probabilistic techniques are special cases of the general graphical-model formalism, such as mixture models, factor analysis, hidden Markov models, Kalman filters and Ising models [35, 78, 79]. A graph consists of nodes (vertices) connected by edges (links); in a probabilistic graphical model, each node represents a random variable and the edges encode probabilistic dependencies among the variables.
In Bayesian networks, if there is an arrow from node A to node B, then A is said to be a parent of B and B a child of A. The directed acyclic graph defines a factorization of the joint probability distribution (JPD) into a product of conditional probability distributions (CPDs), one per node given its parents: p(X_1, ..., X_n) = Π_i p(X_i | Pa(X_i)), where Pa(X_i) denotes the set of parents of node X_i.
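A toy example makes the factorization and inference-by-marginalization concrete. The three-node network below, in the spirit of the textbook rain/sprinkler/wet-grass example, uses hypothetical CPDs, builds the JPD as their product, and answers a query by summing out the irrelevant variable:

```python
from itertools import product

# Hypothetical network: Rain -> Sprinkler, and both -> WetGrass.
P_r = {1: 0.2, 0: 0.8}                                        # p(Rain)
P_s = {1: {1: 0.01, 0: 0.99}, 0: {1: 0.4, 0: 0.6}}            # p(Sprinkler | Rain)
P_w = {(1, 1): 0.99, (1, 0): 0.8, (0, 1): 0.9, (0, 0): 0.0}   # p(Wet=1 | R, S)

def joint(r, s, w):
    """JPD as the product of CPDs: p(r) p(s|r) p(w|r,s)."""
    pw1 = P_w[(r, s)]
    return P_r[r] * P_s[r][s] * (pw1 if w == 1 else 1 - pw1)

# Inference by marginalization: p(Rain = 1 | Wet = 1).
num = sum(joint(1, s, 1) for s in (0, 1))
den = sum(joint(r, s, 1) for r, s in product((0, 1), repeat=2))
posterior = num / den
print(posterior)        # higher than the 0.2 prior: wet grass favors rain
```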
Note that Bayesian networks do not necessarily imply Bayesian statistics; in fact, it is common to use frequentist methods to estimate the parameters of the CPDs. They are so called because they use Bayes’ rule for probabilistic inference. Nevertheless, Bayes nets are a useful representation for hierarchical Bayesian models, which form the foundation of applied Bayesian statistics, and Bayesian statistical methods in conjunction with Bayesian networks offer an efficient and principled approach for avoiding overfitting. Dynamic Bayesian networks (DBNs) are directed graphical models of stochastic processes, generalizing hidden Markov models (HMMs) and linear dynamical systems (LDSs). DBNs represent the hidden (and observed) state in terms of state variables, which can have complex interdependencies. The simplest DBN is an HMM, with one discrete hidden node and one discrete or continuous observed node per time slice. An LDS has the same topology as an HMM, but all the nodes are assumed to have linear-Gaussian distributions; the Kalman filter performs online filtering in this model.
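The HMM, as the simplest DBN, admits exact online filtering by the forward algorithm: propagate the hidden-state belief through the transition matrix, then reweight by the observation likelihood. A minimal sketch with hypothetical two-state parameters (the matrices are illustrative, not from any fitted model):

```python
import numpy as np

# Hypothetical 2-state HMM.
A = np.array([[0.9, 0.1],        # transition matrix: p(z_t | z_{t-1})
              [0.2, 0.8]])
B = np.array([[0.8, 0.2],        # emission matrix: p(obs | state), rows = states
              [0.3, 0.7]])
prior = np.array([0.5, 0.5])

def forward_filter(obs):
    """Online filtering p(z_t | o_1..o_t): predict with A, update with B."""
    belief = prior.copy()
    beliefs = []
    for o in obs:
        belief = belief @ A              # predict one step ahead
        belief = belief * B[:, o]        # weight by observation likelihood
        belief = belief / belief.sum()   # normalize to a distribution
        beliefs.append(belief)
    return np.array(beliefs)

beliefs = forward_filter([0, 0, 1, 1, 1])
print(beliefs[-1])   # posterior over hidden states after five observations
```

Replacing the discrete predict/update steps with their Gaussian counterparts yields exactly the Kalman filter mentioned above.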
A graphical model specifies a complete JPD over all the variables, so all possible inference queries can be answered by marginalization, i.e. summing out the irrelevant variables. However, the JPD has size exponential in the number of variables (e.g. O(2^n) for n binary nodes), so manipulating it directly is intractable; practical inference algorithms instead exploit the conditional independencies encoded in the graph structure.
5.2. Applications and validity in neuroimaging and aging research
6. Dynamical brain system
6.1. Attractors and brain dynamics
Computational neuroscience illustrates the network dynamics of neurons and synapses with models built to reproduce emergent properties or predict observed neurophysiology (e.g. single- and multiple-cell recordings, EEG, MEG, fMRI) and the associated behavior [27]. Attractor theory [89] is a powerful theoretical framework that can capture the neural computations inherent in cognitive functions such as attention, memory, and decision making. It is based on mathematical models formulated at the level of neuronal spiking and synaptic activity. An attractor of a dynamical system is a subset of the state space to which orbits originating from typical initial conditions evolve over time. It is common for a dynamical system to have more than one attractor. For each such attractor, its basin of attraction is the set of initial conditions from which the system evolves toward that attractor.
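These notions can be illustrated with the simplest bistable system, dx/dt = x - x³, which has two point attractors at x = ±1 separated by an unstable fixed point at 0; the sign of the initial condition determines the basin. A minimal Euler-integration sketch (the noise term stands in for the stochastic fluctuations discussed below; its amplitude is an illustrative choice):

```python
import numpy as np

def simulate(x0, steps=5000, dt=0.01, noise=0.0, seed=0):
    """Euler integration of the bistable system dx/dt = x - x**3 (+ noise)."""
    rng = np.random.default_rng(seed)
    x = x0
    for _ in range(steps):
        x += dt * (x - x**3) + noise * np.sqrt(dt) * rng.standard_normal()
    return x

# Deterministic case: each basin of attraction leads to its own attractor.
print(simulate(0.1), simulate(-0.1))    # approximately +1.0 and -1.0

# With enough noise, the state can occasionally hop between attractors.
print(simulate(0.1, noise=0.5))
```

Noise-driven transitions between basins are precisely the mechanism the stochastic dynamical account of cognition builds on.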
6.2. Attractor dynamics in aging
The stochastic dynamical theory of brain function given above has implications for aging research. In the following, we describe the effects of these factors and the associated hypotheses about aging [90]. The stochastic dynamic approach to aging can provide a way to test combinations of pharmacological treatments, which may together help to minimize the cognitive symptoms of aging.
7. Conclusions
Brain structure and activity can be described at various levels of resolution. Recent developments in biotechnology have given us the ability to measure and record population neuronal activity with more precision and accuracy than ever before, allowing researchers to perform detailed analyses that would have been impossible just a few years ago. Brain imaging techniques, such as EEG, MEG, and structural/functional MRI, open macroscopic windows onto processes in the working brain. These methods yield high-dimensional data sets that are organized in space and time [100]. This creates a pressing need for analysis methods that can extract interpretable signals and information from such big data, harvesting the full richness of multi-modality measurements of the multi-scale brain. One future direction on the computational side is to develop high-dimensional methods for mining and modeling neuroscience data, and thus to assess and interpret properties of joint data sets combining imaging and behavioral/stimulus measurements. The objective is to further our understanding of how the neural structures of humans and other animals develop, age, and create systems able to accomplish basic and complex behavioral tasks.
Acknowledgments
Preparation of this chapter is supported in part by a grant from the National Institute on Aging, K25AG033725.

References
- 1. Purves, D., Augustine, G. J., Fitzpatrick, D., Hall, W. C., LaMantia, A.-S., & White, L. E. Neuroscience, Sinauer Associates, Inc., (2008).
- 2. Ramachandran, V. S. Encyclopedia of the Human Brain, (2002), 3.
- 3. Logothetis, N. K., Pauls, J., Augath, M., Trinath, T., & Oeltermann, A. Neurophysiological investigation of the basis of the fMRI signal, Nature, (2001), 412: 150-157.
- 4. Sporns, O. Networks of the Brain, Massachusetts Institute of Technology, (2011).
- 5. Friston, K. J. Functional and effective connectivity: a review, Brain Connectivity, (2011), 1: 13-36.
- 6. Sporns, O. Discovering the Human Connectome, Massachusetts Institute of Technology, (2012).
- 7. Shannon, C. E. A mathematical theory of communication, Bell System Technical Journal, (1948), 27: 379-423.
- 8. Granger, C. Investigating causal relations by econometric models and cross-spectral methods, Econometrica, (1969), 37: 424-438.
- 9. Seth, A. K. A MATLAB toolbox for Granger causal connectivity analysis, J Neurosci Methods, (2010), 186: 262-273.
- 10. Quinn, C. J., Coleman, T. P., Kiyavash, N., & Hatsopoulos, N. G. Estimating the directed information to infer causal relationships in ensemble neural spike train recordings, J Comput Neurosci, (2011), 30: 17-44.
- 11. Deshpande, G., LaConte, S., James, G. A., Peltier, S., & Hu, X. Multivariate Granger causality analysis of fMRI data, Hum Brain Mapp, (2009), 30: 1361-1373.
- 12. Akaike, H. A new look at the statistical model identification, IEEE Transactions on Automatic Control, (1974), 19: 716-723.
- 13. Schwarz, G. Estimating the dimension of a model, Annals of Statistics, (1978), 6: 461-464.
- 14. Kaminski, M., Ding, M., Truccolo, W. A., & Bressler, S. L. Evaluating causal relations in neural systems: Granger causality, directed transfer function and statistical assessment of significance, Biological Cybernetics, (2001), 85: 145-157.
- 15. Schreiber, T. Measuring information transfer, Phys Rev Lett, (2000), 85: 461-464.
- 16. Vicente, R., Wibral, M., Lindner, M., & Pipa, G. Transfer entropy - a model-free measure of effective connectivity for the neurosciences, J Comput Neurosci, (2011), 30: 45-67.
- 17. Barnett, L., Barrett, A. B., & Seth, A. K. Granger causality and transfer entropy are equivalent for Gaussian variables, Phys Rev Lett, (2009), 103: 238701.
- 18. Marko, H. The bidirectional communication theory - a generalization of information theory, IEEE Transactions on Communications, (1973), 21: 1345-1351.
- 19. Massey, J. L. Causality, feedback and directed information, in Proceedings of the International Symposium on Information Theory and Its Applications, (1990), pp. 27-30.
- 20. Amblard, P. O., & Michel, O. J. On directed information theory and Granger causality graphs, J Comput Neurosci, (2011), 30: 7-16.
- 21. Seghouane, A. K., & Amari, S. Identification of directed influence: Granger causality, Kullback-Leibler divergence, and complexity, Neural Comput, (2012), 24: 1722-1739.
- 22. Kramer, G. Directed Information for Channels with Feedback, Ph.D. Thesis, Swiss Federal Institute of Technology (ETH) Zurich, (1998).
- 23. Liu, Y., & Aviyente, S. The relationship between transfer entropy and directed information, in IEEE Statistical Signal Processing Workshop, (2012), pp. 73-76.
- 24. Li, X., Coyle, D., Maguire, L., Watson, D. R., & McGinnity, T. M. Gray matter concentration and effective connectivity changes in Alzheimer’s disease: a longitudinal structural MRI study, Diagnostic Neuroradiology, (2011), 53: 733-748.
- 25. Miao, X., Wu, X., Li, R., Chen, K., & Yao, L. Altered connectivity pattern of hubs in default-mode network with Alzheimer’s disease: a Granger causality modeling approach, PLoS One, (2011), 6: e25546.
- 26. Wibral, M., Rahm, B., Rieder, M., Lindner, M., Vicente, R., & Kaiser, J. Transfer entropy in magnetoencephalographic data: quantifying information flow in cortical and cerebellar networks, Prog Biophys Mol Biol, (2011), 105: 80-97.
- 27. Rabinovich, M. I., Friston, K. J., & Varona, P. Principles of Brain Dynamics, Massachusetts Institute of Technology, (2012).
- 28. Vakorin, V. A., Ross, B., Krakovska, O., Bardouille, T., Cheyne, D., & McIntosh, A. R. Complexity analysis of source activity underlying the neuromagnetic somatosensory steady-state response, Neuroimage, (2010), 51: 83-90.
- 29. Zhang, Y.-C. Complexity and 1/f noise. A phase space approach, Journal de Physique I, (1991), 1: 971-977.
- 30. Abasolo, D., Hornero, R., Espino, P., Alvarez, D., & Poza, J. Entropy analysis of the EEG background activity in Alzheimer’s disease patients, Physiological Measurement, (2006), 27: 241-253.
- 31. Misic, B., Mills, T., Taylor, M. J., & McIntosh, A. R. Brain noise is task-dependent and region-specific, Journal of Neurophysiology, (2010), 104: 2667-2676.
- 32. Yang, A. C., Huang, C.-C., Yeh, H.-L., Liu, M.-E., Hong, C.-J., & Tu, P.-C. Complexity of spontaneous BOLD activity in default mode network is correlated with cognitive function in normal male elderly: a multiscale entropy analysis, Neurobiology of Aging, (2013), 34: 428-438.
- 33. Park, J. H., Kim, S., Kim, C. H., Cichocki, A., & Kim, K. Multiscale entropy analysis of EEG from patients under different pathological conditions, Fractals, (2007), 15: 399-404.
- 34. Robert, C., & Casella, G. Monte Carlo Statistical Methods, Berlin: Springer-Verlag, (2004).
- 35. Jordan, M., Ghahramani, Z., Jaakkola, T., & Saul, L. An introduction to variational methods for graphical models, Machine Learning, (1999), 37: 183-233.
- 36. Ormerod, J. T., & Wand, M. P. Explaining variational approximations, The American Statistician, (2010), 64: 140-153.
- 37. Wang, Y. Statistical Shape Analysis for Image Segmentation and Physical Model-Based Non-Rigid Registration, Ph.D. Thesis, Department of Electrical Engineering, Yale University, (1999).
- 38. Woolrich, M. W., Jbabdi, S., Patenaude, B., Chappell, M., Makni, S., & Behrens, T. Bayesian analysis of neuroimaging data in FSL, Neuroimage, (2009), 45: S173-S186.
- 39. Wang, Y., & Staib, L. H. Boundary finding with prior shape and smoothness models, IEEE Trans. on Pattern Analysis and Machine Intelligence, (2000), 22: 738-743.
- 40. Wang, Y. M., & Xia, J. Unified framework for robust estimation of brain networks from fMRI using temporal and spatial correlation analyses, IEEE Transactions on Medical Imaging, (2009), 28: 1296-1307.
- 41. Wang, Y. M. Modeling and nonlinear analysis in fMRI via statistical learning, in Advanced Image Processing in Magnetic Resonance Imaging, Landini, L., Positano, V., & Santarelli, M. F., Eds., Marcel Dekker International Publisher, (2005), pp. 565-586.
- 42. Genovese, C. A Bayesian time-course model for functional magnetic resonance imaging data (with discussion), Journal of the American Statistical Association, (2000), 95: 691-703.
- 43. Gossl, C., Fahrmeir, L., & Auer, D. P. Bayesian modeling of the hemodynamic response function in BOLD fMRI, Neuroimage, (2001), 14: 140-148.
- 44. Friston, K. J. Bayesian estimation of dynamical systems: an application to fMRI, Neuroimage, (2002), 16: 513-530.
- 45. Ciuciu, P., Poline, J. B., Marrelec, G., Idier, J., Pallier, C., & Benali, H. Unsupervised robust nonparametric estimation of the hemodynamic response function for any fMRI experiment, IEEE Transactions on Medical Imaging, (2003), 22: 1235-1251.
- 46. Goutte, C., Nielsen, F. A., & Hansen, L. K. Modeling the haemodynamic response in fMRI using smooth FIR filters, IEEE Transactions on Medical Imaging, (2000), 19: 1188-1201.
- 47. Marrelec, G., Benali, H., Ciuciu, P., Pelegrini-Issac, M., & Poline, J. B. Robust Bayesian estimation of the hemodynamic response function in event-related BOLD fMRI using basic physiological information, Human Brain Mapping, (2003), 15: 1-25.
- 48. Gossl, C., Auer, D. P., & Fahrmeir, L. Bayesian spatiotemporal inference in functional magnetic resonance imaging, Biometrics, (2001), 57: 554-562.
- 49. Xia, J., Liang, F., & Wang, Y. M. fMRI analysis through Bayesian variable selection with a spatial prior, in IEEE International Symposium on Biomedical Imaging, (2009), pp. 714-717.
- 50. Penny, W. D., Trujillo-Barreto, N. J., & Friston, K. J. Bayesian fMRI time series analysis with spatial priors, Neuroimage, (2005), 24: 350-362.
- 51. Woolrich, M., Behrens, T., & Smith, S. Constrained linear basis sets for HRF modeling using variational Bayes, Neuroimage, (2004), 21: 1748-1761.
- 52. Harrison, L. M., Penny, W. D., Ashburner, J., Trujillo-Barreto, N., & Friston, K. J. Diffusion-based spatial priors for imaging, Neuroimage, (2007), 38: 677-695.
- 53. Groves, A. R., Chappell, M. A., & Woolrich, M. W. Combined spatial and non-spatial prior for inference on MRI time-series, Neuroimage, (2009), 45: 795-809.
- 54. Hartvig, N. V., & Jensen, J. L. Spatial mixture modeling of fMRI data, Hum Brain Mapp, (2000), 11: 233-248.
- 55. Woolrich, M., Behrens, T., Beckmann, C., & Smith, S. Mixture models with adaptive spatial regularization for segmentation with an application to fMRI data, IEEE Transactions on Medical Imaging, (2005), 24: 1-11.
- 56. Xia, J., Liang, F., & Wang, Y. M. On clustering fMRI using Potts and mixture regression models, in IEEE Engineering in Medicine and Biology Society Conference, (2009), pp. 4795-4798.
- 57. Flandin, G., & Penny, W. D. Bayesian fMRI data analysis with sparse spatial basis function priors, Neuroimage, (2007), 34: 1108-1125.
- 58. Ferguson, T. A Bayesian analysis of some nonparametric problems, Annals of Statistics, (1973), 1: 209-230.
- 59. Kim, S., & Smyth, P. Hierarchical Dirichlet processes with random effects, in Neural Information Processing Systems, (2006), pp. 697-704.
- 60. Friston, K. J., Penny, W., Phillips, C., Kiebel, S., Hinton, G., & Ashburner, J. Classical and Bayesian inference in neuroimaging: theory, Neuroimage, (2002), 16: 465-483.
- 61. Holmes, A., & Friston, K. Generalisability, random effects & population inference, in Fourth International Conference on Functional Mapping of the Human Brain: Neuroimage, (1998), p. S754.
- 62. Geisler, W. S., & Diehl, R. L. Bayesian natural selection and the evolution of perceptual systems, Philos Trans R Soc Lond B Biol Sci, (2002), 357: 419-448.
- 63. Ma, W. J. Organizing probabilistic models of perception, Trends Cogn Sci, (2012), 16: 511-518.
- 64. Griffiths, T. L., Chater, N., Kemp, C., Perfors, A., & Tenenbaum, J. B. Probabilistic models of cognition: exploring representations and inductive biases, Trends Cogn Sci, (2010), 14: 357-364.
Fiser, J., Berkes, P., Orban, G., & Lengyel, M. Statistically optimal perception and learning: from behavior to neural representations, Trends Cogn Sci, (2010), 14: 119-130. - 66.
Vilares, I., & Kording, K. Bayesian models: the structure of the world, uncertainty, behavior, and the brain, Ann N Y Acad Sci, (2011), 1224: 22-39. - 67.
Knill, D. C., & Pouget, A. The Bayesian brain: the role of uncertainty in neural coding and computation, Trends Neurosci, (2004), 27: 712-719. - 68.
Atkins, J. E., Fiser, J., & Jacobs, R. A. Experience-dependent visual cue integration based on consistencies between visual and haptic percepts, Vision Res, (2001), 41: 449-461. - 69.
Ernst, M. O., & Banks, M. S. Humans integrate visual and haptic information in a statistically optimal fashion, Nature, (2002), 415: 429-433. - 70.
Weiss, Y., Simoncelli, E. P., & Adelson, E. H. Motion illusions as optimal percepts, Nat Neurosci, (2002), 5: 598-604. - 71.
Kording, K. P., & Wolpert, D. M. Bayesian integration in sensorimotor learning, Nature, (2004), 427: 244-247. - 72.
Ma, W. J., Beck, J. M., Latham, P. E., & Pouget, A. Bayesian inference with probabilistic population codes, Nat Neurosci, (2006), 9: 1432-1438. - 73.
Shi, L., Griffiths, T. L., Feldman, N. H., & Sanborn, A. N. Exemplar models as a mechanism for performing Bayesian inference, Psychon Bull Rev, (2010), 17: 443-464. - 74.
Sanger, T. D. Probability density estimation for the interpretation of neural population codes, J Neurophysiol, (1996), 76: 2790-2793. - 75.
Huys, Q. J. M., Zemel, R. S., Natarajan, R., & Dayan, P. Fast population coding, Neural Computation, (2007), 19: 404-441. - 76.
Deneve, S. Bayesian spiking neurons I: inference, Neural Comput, Jan (2008). , 20, 91-117. - 77.
Jazayeri, M., & Movshon, J. A. Optimal representation of sensory information by neural populations, Nat Neurosci, (2006), 9: 690-696. - 78.
Murphy, K. P. Dynamic Bayesian Networks: Representation, Inference and Learning , Ph.D. Thesis, Department of Computer Science, University of California, Berkeley, (2002). - 79.
Bishop, C. M. Pattern Recognition and Machine Learning , Springer Science + Business Media, LLC, (2006). - 80.
Kschischang, F. R., Frey, B. J., & Loeliger, H.-A. Factor graphs and the sum-product algorithm, IEEE Transactions on Information Theory, (2001), 47: 498-519. - 81.
Peot, M. A., & Shachter, R. D. Fusion and propagation with multiple observations in belief networks, Artificial Intelligence, (1991), 48: 299-318. - 82.
Zheng, X., & Rajapakse, J. C. Learning functional structure from fMR images, Neuroimage, (2006), 31: 1601-1613. - 83.
Rajapakse, J. C., & Zhou, J. Learning effective brain connectivity with dynamic Bayesian networks, Neuroimage, (2007), 37: 749-760. - 84.
Li, J., Wang, Z. J., & Mckeown, M. J. Multi-subject, A. dynamic Bayesian networks (DBNs) framework for brain effective connectivity, in IEEE International Conference on Acoustics, Speech and Signal Processing , (2007). pp. I-429 – I-432. - 85.
Chen, R., & Herskovits, E. H. Network analysis of mild cognitive impairment, Neuroimage, (2006), 29: 1252-1259. - 86.
Chen, R., Resnick, S. M., Davatzikos, C., & Herskovits, E. H. Dynamic Bayesian network modeling for longitudinal brain morphometry, Neuroimage, Feb 1 (2012). , 59, 2330-2338. - 87.
Huang, S., Li, J., Ye, J., Fleisher, A., Chen, K., & Wu, T. Brain effective connectivity modeling for Alzheimer’s disease study by sparse Bayesian network, in The Seventeenth ACM SIGKDD International Conference On Knowledge Discovery and Data Mining (2011), pp. 931-939. - 88.
Song, L., Kolar, M., & Xing, E. P. Time-varying dynamic Bayesian networks, in Proceeding of the 23rd Neural Information Processing Systems , (2009), pp. 1732-1740. - 89.
Brunel, N., & Wang, X. J. Effects of neuromodulation in a cortical network model of object working memory dominated by recurrent inhibition, J Comput Neurosci, (2001), 11: 63-85. - 90.
Rolls, E. T., Deco, G., & Loh, M. A Stochastic Neurodynamics Approach to the Changes in Cognition and Memory in Aging , (2010). - 91.
Kelly, K. M., Nadon, N. L., Morrison, J. H., Thibault, O., Barnes, C. A., & Blalock, E. M. The neurobiology of aging, Epilepsy Res, Suppl 1, 2006), 68: S5-20. - 92.
Dere, E., Easton, A., Nadel, I., & Huston, J. P. Handbook of Episodic Memory , Elsevier, Amsterdam, (2008). - 93.
Goldman-Rakic, P. S. The physiological approach: functional architecture of working memory and disordered cognition in schizophrenia, Biol Psychiatry, (1999), 46: 650-661. - 94.
Loh, M., Rolls, E. T., & Deco, G. Statistical fluctuations in attractor networks related to schizophrenia, Pharmacopsychiatry, (2007), 40: S78-S84. - 95.
Sikstrom, S. Computational perspectives on neuromodulation of aging, Acta Neurochir Suppl, (2007), 97: 513-518. - 96.
Burke, S. N., & Barnes, C. A. Neural plasticity in the ageing brain, Nat Rev Neurosci, (2006), 7: 30-40. - 97.
Kesner, R. P., & Rolls, E. T. Role of long-term synaptic modification in short-term memory, Hippocampus, (2001), 11: 240-250. - 98.
Schliebs, R., & Arendt, T. The significance of the cholinergic system in the brain during aging and in Alzheimer’s disease, J Neural Transm, (2006), 113: 1625-1644. - 99.
Rolls, E. T., & Deco, G. The Noisy Brain: Stochastic Dynamics as a Principle of Brain Function , Oxford University Press, (2012). - 100.
Wang, M. Y., Zhou, C., & Xia, J. Statistical analysis for recovery of structure and function from brain images, in Biomedical Engineering, Trends, Researches and Technologies , Komorowska, M. A. & Olsztynska-Janus, S., Eds., (2011), pp. 169-196.