PUBLICATIONS
Abstract
Multichannel recording technologies have revealed travelling waves of neural activity in multiple sensory, motor and cognitive systems. These waves can be spontaneously generated by recurrent circuits or evoked by external stimuli. They travel along brain networks at multiple scales, transiently modulating spiking and excitability as they pass. Here, we review recent experimental studies that have found evidence for travelling waves at single-area (mesoscopic) and whole-brain (macroscopic) scales. We place these findings in the context of the current theoretical understanding of wave generation and propagation in recurrent networks. During the large low-frequency rhythms of sleep or the relatively desynchronized state of the awake cortex, travelling waves may serve a variety of functions, from long-term memory consolidation to processing of dynamic visual stimuli. We explore new avenues for experimental and computational understanding of the role of spatiotemporal activity patterns in the cortex.
Abstract
Sleep spindles are brief oscillatory events during non-rapid eye movement (NREM) sleep. Spindle density and synchronization properties differ between MEG and EEG recordings in humans and also vary with learning performance, suggesting spindle involvement in memory consolidation. Here, using computational models, we identified network mechanisms that may explain differences in spindle properties across cortical structures. First, we report that differences in spindle occurrence between MEG and EEG data may arise from the contrasting properties of the core and matrix thalamocortical systems. The matrix system, projecting superficially, has a wider thalamocortical fanout than the core system, which projects to middle layers, and therefore requires the recruitment of a larger population of neurons to initiate a spindle. This property was sufficient to explain the lower spindle density and higher spatial synchrony of spindles in the superficial cortical layers, as observed in the EEG signal. In contrast, spindles in the core system occurred more frequently but less synchronously, as observed in the MEG recordings. Furthermore, consistent with human recordings, in the model, spindles occurred independently in the core system, but matrix system spindles commonly co-occurred with core spindles. We also found that the intracortical excitatory connections from layer III/IV to layer V promote spindle propagation from the core to the matrix system, leading to widespread spindle activity. Our study predicts that plasticity of intra- and inter-cortical connectivity can potentially be a mechanism for the increased spindle density observed during learning.
Abstract
Voltage-sensitive dye imaging (VSDI) is a key neurophysiological recording tool because it reaches brain scales that remain inaccessible to other techniques. The development of this technique from in vitro preparations to the behaving nonhuman primate has only been made possible by the long-lasting, visionary work of Amiram Grinvald. This work has opened new scientific perspectives, to the great benefit of the neuroscience community. However, this unprecedented technique remains largely under-utilized, and many possibilities remain for VSDI to reveal new functional operations. One reason why this tool has not been used extensively is the inherent complexity of the signal. For instance, the signal mainly reflects the subthreshold neuronal population response and is not linked to spiking activity in a straightforward manner. Second, VSDI gives access to intracortical recurrent dynamics that are intrinsically complex and therefore nontrivial to process. Computational approaches are thus necessary to promote our understanding and optimal use of this powerful technique. Here, we review such approaches, from computational models that dissect the mechanisms and origin of the recorded signal to advanced signal processing methods that unravel new neuronal interactions at the mesoscopic scale. Only a stronger development of interdisciplinary approaches can bridge micro- to macroscales.
Abstract
In estimating the frequency spectrum of real-world time series data, we must violate the assumption of infinite-length, orthogonal components in the Fourier basis. While it is widely known that care must be taken with discretely sampled data to avoid aliasing of high frequencies, less attention is given to the influence of low-frequency components whose periods exceed the sampling time window. Here, we derive an analytic expression for the side-lobe attenuation of signal components in the frequency domain representation. This expression allows us to detail the influence of individual frequency components throughout the spectrum. The first consequence is that the presence of low-frequency components introduces a $1/f^{\alpha}$ component across the power spectrum, with a scaling exponent of $\alpha \approx 2$. This scaling artifact can arise from diffuse low-frequency components, which makes it difficult to detect a priori. Further, treatment of the signal with standard digital signal processing techniques cannot easily remove this scaling component. While several theoretical models have been introduced to explain the ubiquitous $1/f^{\alpha}$ scaling component in neuroscientific data, we conjecture here that some experimental observations could be the result of such data analysis procedures.
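The leakage effect is straightforward to reproduce numerically. In the Python sketch below (our illustration with assumed toy parameters, not the paper's derivation), a sinusoid whose period exceeds the one-second analysis window leaks across the spectrum, and the log-log slope of the periodogram comes out near $-2$, i.e. a $1/f^2$ power component:

```python
import numpy as np

# Assumed toy parameters: 1 kHz sampling, 1 s rectangular window.
fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 0.3 * t)   # 0.3 Hz: period (~3.3 s) exceeds the window

freqs = np.fft.rfftfreq(len(t), 1 / fs)
power = np.abs(np.fft.rfft(x)) ** 2

# Fit the scaling exponent on log-log axes over a mid-frequency band
band = (freqs > 2) & (freqs < 100)
slope, _ = np.polyfit(np.log(freqs[band]), np.log(power[band]), 1)
print(round(slope, 2))            # close to -2: a 1/f^2 leakage artifact
```

Note that no high-pass filtering of the windowed data can undo this: the side lobes of the rectangular window, not the data itself, carry the power-law tail.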
Abstract
The correlation method from brain imaging has been used to estimate functional connectivity in the human brain. However, two brain regions might show very high correlation even when they are not directly connected, because both interact strongly with common input from a third region. One previously proposed solution to this problem is to use a sparse regularized inverse covariance matrix, or precision matrix (SRPM), assuming that the connectivity structure is sparse. This method yields partial correlations to measure strong direct interactions between pairs of regions while simultaneously removing the influence of the rest of the regions, thus identifying regions that are conditionally independent. To test our methods, we first demonstrated conditions under which the SRPM method could indeed find the true physical connection between a pair of nodes for a spring-mass example and an RC circuit example. The recovery of the connectivity structure using the SRPM method can be explained by energy models using the Boltzmann distribution. We then demonstrated the application of the SRPM method for estimating brain connectivity during stage 2 sleep spindles from human electrocorticography (ECoG) recordings using an electrode array. The ECoG recordings that we analyzed were from a 32-year-old male patient with long-standing pharmaco-resistant left temporal lobe complex partial epilepsy. Sleep spindles were automatically detected using delay differential analysis and then analyzed with SRPM and the Louvain method for community detection. We found spatially localized brain networks within and between neighboring cortical areas during spindles, in contrast to the case when sleep spindles were not present.
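The common-input problem and the partial-correlation remedy can be sketched in a few lines. The Python example below uses a plain (unregularized) inverse covariance rather than the sparse SRPM estimator, and the signals are our own toy assumptions; it shows two unconnected nodes that are highly correlated yet nearly conditionally independent given their shared driver:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
z = rng.standard_normal(n)               # hypothetical common input
x = z + 0.3 * rng.standard_normal(n)     # driven by z, not by y
y = z + 0.3 * rng.standard_normal(n)     # driven by z, not by x

cov = np.cov(np.vstack([x, y, z]))
prec = np.linalg.inv(cov)                # precision matrix (no sparsity penalty)

def partial_corr(P, i, j):
    # Partial correlation of variables i and j given all remaining variables
    return -P[i, j] / np.sqrt(P[i, i] * P[j, j])

r_xy = np.corrcoef(x, y)[0, 1]           # large despite no direct x-y link
p_xy = partial_corr(prec, 0, 1)          # near zero: conditionally independent
print(round(r_xy, 2), round(p_xy, 2))
```

The sparse regularization in SRPM additionally drives small precision entries exactly to zero, which matters when the number of regions is large relative to the number of samples.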
Abstract
During sleep, the thalamus generates a characteristic pattern of transient, 11-15 Hz sleep spindle oscillations, which synchronize the cortex through large-scale thalamocortical loops. Spindles have been increasingly demonstrated to be critical for sleep-dependent consolidation of memory, but the specific neural mechanism for this process remains unclear. We show here that cortical spindles are spatiotemporally organized into circular wave-like patterns, organizing neuronal activity over tens of milliseconds, within the timescale for storing memories in large-scale networks across the cortex via spike-timing-dependent plasticity. These circular patterns repeat over hours of sleep with millisecond temporal precision, allowing reinforcement of the activity patterns through hundreds of reverberations. These results provide a novel mechanistic account for how global sleep oscillations and synaptic plasticity could strengthen networks distributed across the cortex to store coherent and integrated memories.
Abstract
Beta (β)- and gamma (γ)-oscillations are present in different cortical areas and are thought to be inhibition-driven, but it is not known whether these properties also apply to γ-oscillations in humans. Here, we analyze such oscillations in high-density microelectrode array recordings in humans and monkeys during the wake-sleep cycle. In these recordings, units were classified as excitatory and inhibitory cells. We find that γ-oscillations in humans and β-oscillations in monkeys are characterized by a strong involvement of inhibitory neurons, both in terms of their firing rate and their phasic firing within the oscillation cycle. The β- and γ-waves systematically propagate across the array, with similar velocities, during both wake and sleep. However, only in slow-wave sleep (SWS) are β- and γ-oscillations associated with highly coherent and functional interactions across several millimeters of the neocortex. This interaction is particularly pronounced between inhibitory cells. These results suggest that inhibitory cells are dominantly involved in the genesis of β- and γ-oscillations, as well as in the organization of their large-scale coherence in the awake and sleeping brain. The high oscillation coherence found during SWS suggests that fast oscillations implement a highly coherent reactivation of wake patterns that may support memory consolidation during SWS.
Abstract
The central coefficients of powers of certain polynomials with arbitrary degree in $x$ form an important family of integer sequences. Although various recursive equations addressing these coefficients do exist, no explicit analytic representation has yet been proposed. In this article, we present an explicit form of the integer sequences of central multinomial coefficients of polynomials of even degree in terms of finite sums over Dirichlet kernels, hence linking these sequences to discrete $n$th-degree Fourier series expansions. The approach utilizes the diagonalization of circulant Boolean matrices, and is generalizable to all multinomial coefficients of certain polynomials with even degree, thus forming the base for a new family of combinatorial identities.
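The roots-of-unity extraction behind such identities is easy to check numerically. The Python sketch below is our illustration, not the article's exact Dirichlet-kernel identity: it compares a DFT-based formula for the central coefficient of $(1 + x + \dots + x^{2m})^n$ (each value $P(\omega^k)$ at a root of unity is a Dirichlet-kernel-type sum) against brute-force polynomial expansion:

```python
import numpy as np

def central_coeff_poly(m, n):
    # Brute force: expand (1 + x + ... + x^(2m))^n and read off x^(m*n)
    q = np.array([1.0])
    for _ in range(n):
        q = np.convolve(q, np.ones(2 * m + 1))
    return int(round(q[m * n]))

def central_coeff_dft(m, n):
    # Roots-of-unity coefficient extraction via the inverse DFT
    N = 2 * m * n + 1                     # enough points to resolve all powers
    omega = np.exp(2.0j * np.pi * np.arange(N) / N)
    vals = np.polyval(np.ones(2 * m + 1), omega)   # P evaluated at roots of unity
    return int(round(np.mean(vals ** n * omega ** (-m * n)).real))

print(central_coeff_poly(1, 4), central_coeff_dft(1, 4))   # trinomial case: both 19
```

The central coefficient of $(1+x+x^2)^4$ is 19, recovered identically by both routes.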
Abstract
Advancing the size and complexity of neural network models leads to an ever-increasing demand for computational resources for their simulation. Neuromorphic devices offer a number of advantages over conventional computing architectures, such as high emulation speed or low power consumption, but this usually comes at the price of reduced configurability and precision. In this article, we investigate the consequences of several such factors that are common to neuromorphic devices: limited hardware resources, limited parameter configurability, and parameter variations due to fixed-pattern noise and trial-to-trial variability. Our final aim is to provide an array of methods for coping with such inevitable distortion mechanisms. As a platform for testing our proposed strategies, we use an executable system specification (ESS) of the BrainScaleS neuromorphic system, which has been designed as a universal emulation back-end for neuroscientific modeling. We address the most essential limitations of this device in detail and study their effects on three prototypical benchmark network models within a well-defined, systematic workflow. For each network model, we start by defining quantifiable functionality measures by which we then assess the effects of typical hardware-specific distortion mechanisms, both in idealized software simulations and on the ESS. For those effects that cause unacceptable deviations from the original network dynamics, we suggest generic compensation mechanisms and demonstrate their effectiveness. Both the suggested workflow and the investigated compensation mechanisms are largely back-end independent and do not require additional hardware configurability beyond the one required to emulate the benchmark networks in the first place. We hereby provide a generic methodological environment for configurable neuromorphic devices that are targeted at emulating large-scale, functional neural networks.
Abstract
Since its introduction, the "small-world" effect has played a central role in network science, particularly in the analysis of the complex networks of the nervous system. From the cellular level to that of interconnected cortical regions, many analyses have revealed small-world properties in the networks of the brain. In this work, we revisit the quantification of small-worldness in neural graphs. We find that neural graphs fall into the "borderline" regime of small-worldness, residing close to that of a random graph, especially when the degree sequence of the network is taken into account. We then apply recently introduced analytical expressions for clustering and distance measures to study this borderline small-worldness regime. We derive theoretical bounds for the minimal and maximal small-worldness index for a given graph, and by semi-analytical means, study the small-worldness index itself. With this approach, we find that graphs with small-worldness equivalent to that observed in experimental data are dominated by their random component. These results provide the first thorough analysis suggesting that neural graphs may reside far away from the maximally small-world regime.
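For concreteness, a commonly used small-worldness index is $\sigma = (C/C_{\mathrm{rand}})/(L/L_{\mathrm{rand}})$. The Python sketch below is our own toy construction, not the paper's analytical bounds: it computes $\sigma$ for a ring lattice with a few random shortcuts, using standard Erdős-Rényi expectations as the random baseline:

```python
import numpy as np

rng = np.random.default_rng(0)

def ring_with_shortcuts(n, k, n_short):
    # Ring lattice (k neighbours per side) plus random shortcut edges
    A = np.zeros((n, n), dtype=int)
    for d in range(1, k + 1):
        for i in range(n):
            A[i, (i + d) % n] = A[(i + d) % n, i] = 1
    for _ in range(n_short):
        i, j = rng.integers(0, n, 2)
        if i != j:
            A[i, j] = A[j, i] = 1
    return A

def avg_clustering(A):
    cs = []
    for i in range(len(A)):
        nb = np.flatnonzero(A[i])
        if len(nb) < 2:
            cs.append(0.0)
            continue
        cs.append(A[np.ix_(nb, nb)].sum() / (len(nb) * (len(nb) - 1)))
    return float(np.mean(cs))

def avg_path_length(A):
    # Breadth-first search from every source node
    n, total = len(A), 0
    for s in range(n):
        dist = np.full(n, -1)
        dist[s] = 0
        frontier = [s]
        while frontier:
            nxt = []
            for u in frontier:
                for v in np.flatnonzero(A[u]):
                    if dist[v] < 0:
                        dist[v] = dist[u] + 1
                        nxt.append(v)
            frontier = nxt
        total += dist[dist > 0].sum()
    return total / (n * (n - 1))

n, k = 100, 3
A = ring_with_shortcuts(n, k, 20)
C, L = avg_clustering(A), avg_path_length(A)
deg = 2 * k + 40 / n                      # approximate mean degree
C_rand, L_rand = deg / n, np.log(n) / np.log(deg)
sigma = (C / C_rand) / (L / L_rand)
print(round(sigma, 1))                    # sigma > 1: nominally "small-world"
```

As the paper argues, $\sigma > 1$ alone is a weak criterion: a graph can clear it while sitting close to its random-graph limit, which is why bounds on the attainable index matter.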
Abstract
In the past two decades, significant advances have been made in understanding the structural and functional properties of biological networks, via graph-theoretic analysis. In general, most graph-theoretic studies are conducted in the presence of serious uncertainties, such as major undersampling of the experimental data. In the specific case of neural systems, however, a few moderately robust experimental reconstructions have been reported, and these have long served as fundamental prototypes for studying connectivity patterns in the nervous system. In this paper, we provide a comparative analysis of these "historical" graphs, both in their directed (original) and symmetrized (a common preprocessing step) forms, and provide a set of measures that can be consistently applied across graphs (directed or undirected, with or without self-loops). We focus on simple structural characterizations of network connectivity and find that in many measures, the networks studied are captured by simple random graph models. In a few key measures, however, we observe a marked departure from the random graph prediction. Our results suggest that the mechanism of graph formation in the networks studied is not well captured by existing abstract graph models in their first- and second-order connectivity.
Abstract
Propagating waves occur in many excitable media and were recently found in neural systems from retina to neocortex. While propagating waves are clearly present under anaesthesia, whether they also appear during awake and conscious states remains unclear. One possibility is that these waves are systematically missed in trial-averaged data, due to variability. Here we present a method for detecting propagating waves in noisy multichannel recordings. Applying this method to single-trial voltage-sensitive dye imaging data, we show that the stimulus-evoked population response in primary visual cortex of the awake monkey propagates as a travelling wave, with consistent dynamics across trials. A network model suggests that this reliability is the hallmark of the horizontal fibre network of superficial cortical layers. Propagating waves with similar properties occur independently in secondary visual cortex, but maintain precise phase relations with the waves in primary visual cortex. These results show that, in response to a visual stimulus, propagating waves are systematically evoked in several visual areas, generating a consistent spatiotemporal frame for further neuronal interactions.
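A simple latency-based estimate conveys the idea behind single-trial wave detection. The Python sketch below is our toy example with assumed electrode spacing, pulse shape and propagation speed (the paper itself uses a phase-based method on voltage-sensitive dye images); it recovers the speed of a noisy travelling pulse across a linear array from the gradient of peak latencies:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1000.0
t = np.arange(0, 0.5, 1 / fs)
n_ch, spacing = 16, 0.0004      # 16 channels, 0.4 mm spacing (assumed)
speed_true = 0.2                # assumed propagation speed (m/s)

# Travelling Gaussian pulse plus white noise on each channel
x = np.array([np.exp(-((t - 0.1 - ch * spacing / speed_true) ** 2) / (2 * 0.01 ** 2))
              + 0.2 * rng.standard_normal(len(t)) for ch in range(n_ch)])

# Peak latency per channel after light smoothing
kernel = np.ones(11) / 11
lat = np.array([t[np.argmax(np.convolve(row, kernel, mode='same'))] for row in x])

pos = np.arange(n_ch) * spacing
slope, _ = np.polyfit(pos, lat, 1)   # latency gradient (s per m)
speed_est = 1 / slope
print(round(speed_est, 2))           # recovers roughly the assumed 0.2 m/s
```

Trial averaging would blur such latency gradients whenever onset times jitter across trials, which is exactly why single-trial methods are needed.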
Abstract
One of the simplest polynomial recursions exhibiting chaotic behavior is the logistic map $$x_{n+1} = a x_n ( 1 - x_n )$$ with $x_n, a \in \mathbb{Q}: x_n \in [0,1] \ \forall n \in \mathbb{N}$ and $a \in (0,4]$, the discrete-time model of the differential growth introduced by Verhulst almost two centuries ago (Verhulst, 1838). Despite the importance of this discrete map for the field of nonlinear science, explicit solutions are known only for the special cases $a = 2$ and $a = 4$. In this article, we propose a representation of the Verhulst logistic map in terms of a finite power series in the map's growth parameter $a$ and initial value $x_0$ whose coefficients are given by the solution of a system of linear equations. Although the proposed representation cannot be viewed as a closed-form solution of the logistic map, it may help to reveal the sensitivity of the map to its initial value and, thus, could provide insights into the mathematical description of chaotic dynamics.
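For the special case $a = 4$, the known explicit solution $x_n = \sin^2\!\left(2^n \arcsin\sqrt{x_0}\right)$ provides a convenient numerical check. The short Python sketch below (our illustration, with an arbitrarily chosen initial value) verifies it against direct iteration:

```python
import math

def logistic_orbit(a, x0, n):
    # Iterate x_{k+1} = a * x_k * (1 - x_k) for n steps
    x, orbit = x0, [x0]
    for _ in range(n):
        x = a * x * (1 - x)
        orbit.append(x)
    return orbit

# Closed-form check for a = 4: x_n = sin^2(2^n * arcsin(sqrt(x0)))
x0, n = 0.2, 10
iterated = logistic_orbit(4.0, x0, n)[-1]
closed = math.sin(2 ** n * math.asin(math.sqrt(x0))) ** 2
print(abs(iterated - closed))     # tiny: the two routes agree
```

The doubling of the angle $2^n\theta$ in the closed form makes the sensitivity to $x_0$ explicit: an initial error grows by roughly a factor of two per iteration.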
A top-downloaded open-access article in Elsevier Mathematics
Abstract
We introduce an analytic model for directed Watts-Strogatz small-world graphs and deduce an algebraic expression of its defining adjacency matrix. The latter is then used to calculate the small-world digraph's asymmetry index and clustering coefficient in an analytically exact fashion, valid non-asymptotically for all graph sizes. The proposed approach is general and can be applied to all algebraically well-defined graph-theoretical measures, thus allowing for an analytical investigation of finite-size small-world graphs.
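As a numerical companion (our sketch of the undirected backbone only; the article itself treats the directed case analytically), the following Python code builds the circulant ring-lattice adjacency matrix underlying a Watts-Strogatz graph and checks its clustering coefficient against the standard closed form $C = 3(k-1)/(2(2k-1))$:

```python
import numpy as np

def ring_lattice(n, k):
    # Circulant adjacency matrix: node i linked to its k nearest
    # neighbours on each side of the ring
    first_row = np.zeros(n, dtype=int)
    for d in range(1, k + 1):
        first_row[d] = first_row[n - d] = 1
    return np.array([np.roll(first_row, i) for i in range(n)])

def avg_clustering(A):
    # Average local clustering of an undirected simple graph
    cs = []
    for i in range(len(A)):
        nb = np.flatnonzero(A[i])
        cs.append(A[np.ix_(nb, nb)].sum() / (len(nb) * (len(nb) - 1)))
    return float(np.mean(cs))

n, k = 30, 3
A = ring_lattice(n, k)
C_num = avg_clustering(A)
C_theory = 3 * (k - 1) / (2 * (2 * k - 1))
print(C_num, C_theory)            # both 0.6 for k = 3
```

Because the lattice adjacency is a circulant matrix, its eigenstructure is known exactly, which is what makes non-asymptotic, finite-size expressions like those in the article tractable.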
Abstract
Propagating waves of activity have been recorded in many species, in various brain states, brain areas, and under various stimulation conditions. Here, we review the experimental literature on propagating activity in thalamus and neocortex across various levels of anesthesia and stimulation conditions. We also review computational models of propagating waves in networks of thalamic cells, cortical cells and of the thalamocortical system. Some discrepancies between experiments can be explained by the "network state", which differs vastly between anesthetized and awake conditions. We introduce a network model displaying different states and investigate their effect on the spatial structure of self-sustained and externally driven activity. This approach is a step towards understanding how the intrinsically-generated ongoing activity of the network affects its ability to process and propagate extrinsic input.
Abstract
In this article, we present a methodological framework that meets novel requirements emerging from upcoming types of accelerated and highly configurable neuromorphic hardware systems. We describe in detail a device with 45 million programmable and dynamic synapses that is currently under development, and we sketch the conceptual challenges that arise from taking this platform into operation. More specifically, we aim to establish this neuromorphic system as a flexible and neuroscientifically valuable modeling tool that can be used by non-hardware experts. We consider various functional aspects to be crucial for this purpose, and we introduce a consistent workflow with detailed descriptions of all involved modules that implement the suggested steps: The integration of the hardware interface into the simulator-independent model description language PyNN; a fully automated translation between the PyNN domain and appropriate hardware configurations; an executable specification of the future neuromorphic system that can be seamlessly integrated into this biology-to-hardware mapping process as a test bench for all software layers and possible hardware design modifications; an evaluation scheme that deploys models from a dedicated benchmark library, compares the results generated by virtual or prototype hardware devices with reference software simulations and analyzes the differences. The integration of these components into one hardware-software workflow provides an ecosystem for ongoing preparative studies that support the hardware design process and represents the basis for the maturity of the model-to-hardware mapping software. The functionality and flexibility of the latter are demonstrated with a variety of experimental results.
Abstract
In the hippocampus and the neocortex, the coupling between local field potential (LFP) oscillations and the spiking of single neurons can be highly precise, across neuronal populations and cell types. Spike phase (i.e., the spike time with respect to a reference oscillation) is known to carry reliable information, both with phase-locking behavior and with more complex phase relationships, such as phase precession. How this precision is achieved by neuronal populations, whose membrane properties and total input may be quite heterogeneous, is nevertheless unknown. In this note, we investigate a simple mechanism for learning precise LFP-to-spike coupling in feed-forward networks: the reliable, periodic modulation of presynaptic firing rates during oscillations, coupled with spike-timing-dependent plasticity (STDP). When oscillations are within the biological range (2-150 Hz), firing rates of the input change on a timescale highly relevant to STDP. Through analytic and computational methods, we find points of stable phase-locking for a neuron with plastic input synapses. These points correspond to precise phase-locking behavior in the feed-forward network. The location of these points depends on the oscillation frequency of the inputs, the STDP time constants, and the balance of potentiation and de-potentiation in the STDP rule. For a given input oscillation, the balance of potentiation and de-potentiation in the STDP rule is the critical parameter that determines the phase at which an output neuron will learn to spike. These findings are robust to changes in intrinsic post-synaptic properties. Finally, we discuss implications of this mechanism for stable learning of spike-timing in the hippocampus.
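The rate-modulation mechanism can be sketched numerically. Under assumed kernel parameters (ours, not the paper's), the Python code below computes the expected weight drift as a function of the postsynaptic spike phase, given sinusoidally modulated presynaptic rates and an exponential STDP kernel; the zero crossings of this drift are the candidate phase-locking points:

```python
import numpy as np

# Assumed parameters (illustrative only), with balanced kernel areas
tau_p, tau_m = 0.017, 0.034      # potentiation / depression time constants (s)
A_p, A_m = 1.0, 0.5              # amplitudes chosen so A_p*tau_p == A_m*tau_m
f = 8.0                          # input oscillation frequency (Hz)
w = 2 * np.pi * f

s = np.linspace(-0.3, 0.3, 60001)    # spike lag t_post - t_pre (s)
ds = s[1] - s[0]
# STDP kernel: potentiation when pre precedes post (s > 0), else depression
K = np.where(s > 0, A_p * np.exp(-s / tau_p), -A_m * np.exp(s / tau_m))

# Expected drift for a postsynaptic spike at phase phi, with
# presynaptic rate proportional to 1 + cos(w * t_pre)
phis = np.linspace(0, 2 * np.pi, 360, endpoint=False)
drift = np.array([np.sum((1 + np.cos(phi - w * s)) * K) * ds for phi in phis])

crossings = np.flatnonzero(np.diff(np.sign(drift)) != 0)
print(len(crossings))            # zero crossings: candidate locking phases
```

With balanced potentiation and depression the mean drift cancels and the phase-dependent component alone determines where the drift vanishes, consistent with the paper's claim that the potentiation/depression balance sets the learned spike phase.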