I am intrigued by the countless things we find nature doing in our brains — they are such a marvelous excuse to study networks and the dynamics that unfold on them.
To play around with some common concepts, let’s consider a single neuron as a node in a network. We pick one of many models to describe the neuron’s dynamic behavior, say one that listens carefully to all the input it receives. Depending on that input, the neuron eventually generates a spike that it sends out to its connected neurons, so that activity propagates from neuron to neuron through the network. On the one hand, the way the activity propagates through the whole network — the large-scale dynamics — depends on the small-scale dynamics of the individual neurons, which is a (local) property of each neuron itself. On the other hand, the small-scale and large-scale dynamics are functionally linked by the network topology, which is a (non-local) property of the whole system.
The topology describes how individual neurons or populations are connected with each other. As a simple example, three identical neurons that are wired as a chain would fire sequentially, whereas three identical neurons that are siblings (and activated by some shared input) would fire synchronously. Clearly, as we go from three to 80 billion neurons, and from "some shared input" to more realistic sensory input, things get more complicated.
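The chain-versus-siblings picture can be sketched with a toy model (my own construction, not one of the standard neuron models): discrete-time threshold units that fire one step after input arrives.

```python
def fire_times(adj, driven, steps=5):
    """First firing step of each neuron (None if it never fires).

    adj[i] lists the neurons that receive neuron i's spike;
    `driven` are the neurons that get external input at step 0.
    """
    n = len(adj)
    first = [None] * n
    incoming = set(driven)                 # input arriving before step 1
    for t in range(1, steps + 1):
        spiking = {i for i in incoming if first[i] is None}
        for i in spiking:
            first[i] = t                   # neuron fires when input arrives
        incoming = {j for i in spiking for j in adj[i]}
    return first

chain    = [[1], [2], []]                  # neuron 0 -> 1 -> 2
siblings = [[], [], []]                    # unconnected, but driven together

print(fire_times(chain, driven=[0]))          # [1, 2, 3]: sequential
print(fire_times(siblings, driven=[0, 1, 2])) # [1, 1, 1]: synchronous
```

The same shared drive thus produces a wave along the chain but a single synchronous event among the siblings, purely as a consequence of the wiring.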
Adding to the complication, the dynamic behavior of the neurons and the topology that connects them change each other through neural plasticity. To stay vague, plasticity describes the many different mechanisms through which neurons change — on a cellular level, or in their connections with each other. Plasticity plays a major role in learning, and, to me, it is self-organization at its best: local plasticity effects (learning rules) change the connections between individual neurons based solely on local circumstances, but because they take place everywhere, a rich topology develops that spans the whole system.
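A minimal sketch of such a local rule, assuming the simplest Hebbian form ("fire together, wire together") rather than any particular biological mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)
eta = 0.1                      # learning rate (arbitrary choice)
w = np.zeros((3, 3))           # weights w[i, j]: neuron i -> neuron j

for _ in range(100):
    x = rng.integers(0, 2, size=3)   # binary activity in this time step
    # Hebbian update: only the activity of the two neurons a weight
    # connects enters -- a purely local rule
    w += eta * np.outer(x, x)
    np.fill_diagonal(w, 0.0)         # no self-connections

# neurons that were often co-active end up strongly connected,
# so a global weight structure emerges from local updates alone
```

Even this toy version shows the point made above: no neuron "knows" the global topology, yet the repeated application of a local rule shapes it.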
So, what to make of all this? For my PhD, I want to develop an overarching and consistent picture of the interplay between topology and dynamics in neuronal networks. I use simplified models of spiking neurons and numerical simulations to gain a bottom-up understanding of how the topology of neuronal networks — together with the input to individual parts of the network — shapes the population dynamics. The top-down perspective is provided by experiments on modular neuronal cultures that feature a clustered topology, which tunes the cultures’ typical dynamics away from synchronized bursts towards rich, more versatile dynamic states.
Below you can find an overview of the (soon to be more) related and (for now) unrelated projects that I have been working on.
For decades, neuroscientists have argued that the cortex might operate at the critical point of a (second-order, non-equilibrium) phase transition. Operating at such a critical point would benefit neuronal networks: it enables optimal information processing, and useful quantities such as the correlation length are maximized.
The evidence that supports or contradicts this hypothesis of criticality in the brain often derives from measurements of neuronal avalanches. If a system is critical, the probability distributions of the sizes (and durations) of these avalanches follow power laws. Thus, power-law distributions are a common way to check whether a system is critical.
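To make this concrete, here is a sketch under a simplifying assumption: avalanches are modeled as a critical branching process, where each spike triggers on average one new spike.

```python
import numpy as np

rng = np.random.default_rng(42)

def avalanche_size(m=1.0, cap=10_000):
    """Total number of spikes triggered by one seed spike.

    m is the branching parameter; m = 1 is the critical point.
    The cap cuts off the rare, extremely large avalanches.
    """
    size, active = 1, 1
    while active and size < cap:
        active = rng.poisson(m * active)   # offspring of this generation
        size += active
    return size

sizes = np.array([avalanche_size() for _ in range(20_000)])

# at criticality the sizes are power-law distributed, P(S) ~ S^(-3/2):
# many tiny avalanches, but occasionally an enormous one
print(sizes.max())
print(np.mean(sizes == 1))   # ~ exp(-1) for Poisson offspring with m = 1
```

Away from the critical point (m < 1), the same code produces an exponentially decaying size distribution instead — which is why the shape of this distribution is used as the fingerprint of criticality.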
However, the results of studies that build on observing power laws in neuronal avalanches vary immensely throughout the literature; some (and I am skipping many details) find the brain in a critical state, others in a subcritical one.
We found that the cause of the controversy lies in the way the avalanches are sampled. If an electrode’s signal is used directly (e.g. the LFP), then many neurons contribute to the signal, and the events that make up an avalanche have many contributions. These overlapping contributions introduce spurious correlations, and this type of coarse-sampling can produce power-law distributions — even when the observed system is not critical.
Mr. Estimator is an open-source Python toolbox to estimate the branching parameter and the intrinsic timescale in electrophysiological data.
Originally intended for the analysis of time series from neuronal spiking activity, Mr. Estimator is applicable to a wide range of systems where subsampling — the difficulty of observing the whole system in full detail — limits our capability to record. Applications range from epidemic spreading to any system that can be represented by an autoregressive process.
In the context of neuroscience, the intrinsic timescale can be thought of as the duration over which any perturbation reverberates within the network; it has been used as a key observable to investigate a functional hierarchy across the primate cortex and serves as a measure of working memory. It is also a proxy for the distance to criticality and quantifies a system’s dynamic working point.
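The core idea can be sketched for the simplest, fully sampled case (the toolbox’s multistep regression additionally corrects for subsampling bias; this is just the textbook AR(1) picture, not the toolbox’s API):

```python
import numpy as np

rng = np.random.default_rng(1)
m_true, dt, T = 0.9, 1.0, 100_000   # branching parameter, bin size, steps

# activity evolves as A_{t+1} = m * A_t + external drive
a = np.zeros(T)
for t in range(T - 1):
    a[t + 1] = m_true * a[t] + rng.poisson(1.0)

# linear regression of A_{t+1} on A_t recovers the branching parameter ...
x, y = a[:-1], a[1:]
m_hat = np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# ... and the intrinsic timescale follows as tau = -dt / ln(m):
# the time over which a perturbation decays by a factor of e
tau_hat = -dt / np.log(m_hat)
print(m_hat, tau_hat)   # close to 0.9 and -1/ln(0.9), i.e. ~9.5 steps
```

With only a subsampled fraction of the activity, this naive regression is biased towards smaller m — which is exactly the situation the multistep regression in the toolbox is built to handle.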
See the repository on GitHub for more details.
AtmoCL is an OpenCL port of the All Scale Atmospheric Model (ASAM). The code was initially based on the OpenGL derivative ASAMgpu. While OpenGL was the intuitive choice as a base for the initial GPU model, the more recent OpenCL offers some neat advantages. Apart from allowing the same code to run on a variety of hosts, including heterogeneous environments of GPUs, CPUs and accelerators, we can profit from the 3D image class: the mapping from the 3D volume to 2D textures, the favourable memory format for GPUs, is done by the driver. Further, one can directly access any point of the volume through integer indices instead of the more cumbersome float coordinates inherent to OpenGL.
One main idea is to export the model state as images, where the volume is mapped to 2D cutplanes and state variables are represented as RGB. To animate the pictures as a moving sequence, I developed a lightweight web interface using Bootstrap. It also plots vertical profiles and time series with Highcharts. Check out the demo.