Paul's Blog

Climbing a Fountain 

On September 19, I successfully defended my PhD thesis.

Now, a few weeks later, the realization of what this overwhelming day actually means has slowly sunk in. I am looking back at an incredible time in Viola’s group, and I am happy about this subtle reminder to stop every now and then, catch a breath, and appreciate all the good things that came my way.

I feel sentimental seeing this chapter come to an end, and having to part ways with many people whom I have grown very fond of. I can’t thank you guys enough for the countless discussions, emotional support, frequent advice, debugging sessions, and the occasional rant, all of which I enjoyed deeply.

What’s next?


Of course, as Göttingen tradition demands, after the defense I climbed the Gänseliesel fountain in the city center. It was a truly unique experience, and I am happy I could share it with so many of you: family, friends and colleagues, the lines between which have become increasingly blurry over the years.

While typing this post and putting up the photo, I got to see the metaphor in this peculiar tradition. Being up there gives you a good look at where you came from — but also a glimpse of where you are going next.

For me, after a few weeks of vacation and Christmas with the family, I am planning to leave academia behind for something more applied: I am super excited to work (and code!) in a larger team, and to come up with well-designed solutions that help people in their everyday lives.

Stimulating Cultures 

As of last week, I have a new preprint on arXiv (edit: out now in Science Advances).

First of all, I want to thank the amazing team who worked on this project, especially Hideaki, Johannes and Jordi who had to endure grumpy me during countless group meetings (and a bunch of extra ones). Seriously, thanks guys!

Brains are modular and cultures should be, too!

When presenting this in talks, I start by explaining that it would be cool to have neuronal cultures that closely resemble the living brain. However, as most of you probably know, cultures do their own thing and tend to be bursty: occasional, rather short and synchronous events of high activity (bursts) take turns with extended episodes of silence. In 2018, Hideaki and his lab managed to limit these bursts (which usually light up the whole system) to sub-parts of the culture. They achieved this by making the topology of the cultures modular, effectively making it harder for a local burst to propagate to other modules. Accordingly, the effect was strongest when modules were at the brink of being disconnected from one another. Although individual modules still show burst-like dynamics, the dynamics of the whole system are less synchronized, getting much closer to real-brain dynamics.
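To get a feeling for what a modular topology means here, a small sketch (with made-up connection probabilities, not those of the actual cultures): a random directed network where connections within a module are far more likely than connections between modules.

```python
import numpy as np

rng = np.random.default_rng(42)

def modular_adjacency(n_modules=4, size=25, p_in=0.2, p_out=0.002):
    """Random directed adjacency matrix with dense modules and sparse bridges.
    All parameters are illustrative placeholders."""
    n = n_modules * size
    labels = np.repeat(np.arange(n_modules), size)  # module label per neuron
    same = labels[:, None] == labels[None, :]       # True for within-module pairs
    p = np.where(same, p_in, p_out)                 # pairwise connection probability
    a = rng.random((n, n)) < p
    np.fill_diagonal(a, False)                      # no self-connections
    return a, labels

a, labels = modular_adjacency()
same = labels[:, None] == labels[None, :]
np.fill_diagonal(same, False)
within = a[same].mean()
between = a[~same].mean()
print(f"connection density within modules: {within:.3f}, between: {between:.4f}")
```

Pushing `p_out` toward zero moves the modules to the brink of being disconnected, which is the regime where the desynchronizing effect was strongest.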

Brains are busy places.

Looking for another aspect that real brains have and cultures lack, we started to consider background noise. Think about it: brains are constantly exposed to sensory stimuli, which tend to make their way into the dynamics one way or another, leading to an omnipresent (noisy) baseline activity of all neurons. Cultures, on the other hand, sit around in a glass dish, and although they perceive more of their environment than we usually expect, they do not have much to do. Hence, in this work we stimulated the cultures in a random and asynchronous manner. Adding such an asynchronous input reduced synchrony further than modularity alone.

We then used simulations of LIF neurons (using the awesome Brian2 simulator) and developed a minimal, mean-field-like model to explain the reduced synchrony under stimulation. We found that the noisy input that makes neurons fire sporadically depletes the average synaptic resources in the modules. This is best seen by considering module-level trajectories in the Rate-Resource plane, as shown on the right-hand side in the clip below. Between module-level bursts, resources recover and firing rates are low. Once charged enough, a burst occurs and resources discharge rapidly as the module’s neurons fire at a high rate.
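As a toy illustration of the Rate-Resource picture (not the actual model from the paper, and with entirely made-up parameters), here is a minimal two-variable sketch: a module firing rate r that excites itself through the available synaptic resources R, which are used up by activity and only slowly recover. This already produces the charge-discharge cycles described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# all parameters are made up for illustration (arbitrary units)
tau_r, tau_d = 20.0, 2000.0  # rate relaxation time, resource recovery time
w, u = 10.0, 0.01            # recurrent coupling, resource usage per unit rate
dt, steps = 1.0, 50_000

def gain(x):
    """threshold-linear transfer function"""
    return np.maximum(x, 0.0)

r, R = 0.0, 1.0  # module firing rate and available synaptic resources
rates, resources = [], []
for _ in range(steps):
    h = rng.normal(0.0, 1.0)                      # noisy external input
    r += dt / tau_r * (-r + gain(w * R * r + h))  # self-excitation gated by R
    R += dt * ((1.0 - R) / tau_d - u * R * r)     # slow recovery, fast depletion
    R = min(max(R, 0.0), 1.0)
    rates.append(r)
    resources.append(R)
```

Plotting `rates` against `resources` traces out the loop in the Rate-Resource plane: resources slowly recharge at low rate, then a burst ignites and discharges them rapidly.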

As you might guess from the clip, the noisy input not only depletes the average amount of synaptic resources, it also lowers the minimum amount of resources needed to start module-level bursts (increasing their frequency). Due to the inhomogeneous degree distributions caused by the modular architecture (few axons connect across modules, but many connect within), the input-driven resource depletion affects activations across modules more than within. Thus, module-level bursts persist but system-wide synchronization is reduced.


Mr. Estimator 

Mister Estimator is an open-source python toolbox I wrote as my first project in Viola’s group. It allows you to estimate the intrinsic timescale, e.g. in electrophysiological data.

Originally intended for the analysis of time series from neuronal spiking activity, it works on a wide range of systems where subsampling is a problem (often, it is impossible to observe the whole system in full detail). Applications range from epidemic spreading to any system that can be represented by an autoregressive process.

Why care about this timescale? In general, it serves as a proxy for the distance to criticality and quantifies a system’s dynamic working point. And in the context of neuroscience, you can think of it as the duration over which any perturbation lingers within the network; it has been used as a key observable to uncover the functional hierarchy across primate cortex, and it serves as a measure of working memory.
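To illustrate the basic idea (this is not the toolbox’s API, just the principle on a fully sampled toy system): for an autoregressive process with coefficient m, the autocorrelation decays as m^k, so the intrinsic timescale τ = −1/ln(m) can be read off from an exponential fit to the autocorrelation. The actual estimator does more than this, in particular correcting the bias that subsampling introduces.

```python
import numpy as np

rng = np.random.default_rng(1)

# AR(1) surrogate: A_t = m * A_{t-1} + noise, with timescale tau = -1/ln(m)
m_true = 0.95
n = 200_000
noise = rng.normal(size=n)
a = np.zeros(n)
for t in range(1, n):
    a[t] = m_true * a[t - 1] + noise[t]

# autocorrelation at lags k = 1..30
ks = np.arange(1, 31)
rk = np.array([np.corrcoef(a[:-k], a[k:])[0, 1] for k in ks])

# exponential decay r_k = m^k, so log(r_k) is linear in k with slope ln(m)
slope = np.polyfit(ks, np.log(rk), 1)[0]
tau_est = -1.0 / slope
print(f"true tau = {-1.0 / np.log(m_true):.1f} steps, estimated = {tau_est:.1f}")
```

On subsampled data this naive fit would be systematically off, which is exactly the problem the toolbox addresses.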

See the repository on GitHub for more details.

About Color 

Samantha's color palettes

I was looking for suitable color maps the other day when trying to squeeze too much data into a plot. Usually I prefer to just remove some details for the sake of clarity, but even then, color matters.

Procrastinating away the afternoon, I stumbled upon this super nice article on color palettes by Samantha Zhang. She gives a comprehensive overview of aspects to consider when picking colors, such as how to make your plots accessible to colorblind readers. Best of all, she provides a long list of resources, links and tools that help with the process.

Since then, Samantha’s article has become my go-to resource on color in data science, and I am currently testing three of her color maps in a paper draft. Below is a python snippet to mimic them in matplotlib.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap

custom_cmaps = dict()
custom_cmaps["cold"] = [
    (0, "white"),
    (0.25, "#DCEDC8"),
    (0.45, "#42B3D5"),
    (0.75, "#1A237E"),
    (1, "black"),
]
custom_cmaps["hot"] = [
    (0, "white"),
    (0.3, "#FEEB65"),
    (0.65, "#E4521B"),
    (0.85, "#4D342F"),
    (1, "black"),
]
custom_cmaps["pinks"] = [
    (0, "white"),
    (0.2, "#FFECB3"),
    (0.45, "#E85285"),
    (0.65, "#6A1B9A"),
    (1, "black"),
]

def cmap_for_mpl(colors, n_bins=512):
    return LinearSegmentedColormap.from_list("custom_cmap", colors, N=n_bins)

# for functions that use color map objects
cmap = cmap_for_mpl(custom_cmaps["pinks"])

# or to get discrete color values, call cmap() with a value between 0 and 1
num_lines = 5
for idx in range(num_lines):
    clr = cmap((idx + 1) / (num_lines + 1))
    x = np.arange(100)/np.pi
    plt.plot(x, np.sin(x + idx*np.pi/4) + idx, label=idx, color=clr)


COVID-19 Inference and Forecast 

How effective are interventions?

Our paper about estimating the effects of governmental interventions on the spread of COVID-19 is now out in Science!


Over the past months, my colleagues and I have worked on modeling the spread of COVID-19 in Germany. Our approach uses Bayesian inference and Markov chain Monte Carlo sampling on an SIR model to find epidemiological parameters. It allows us to identify change points in the spreading rate (that is, when and by how much the spreading rate changes). Check the links below for all the details!

[Science] [ArXiv] [GitHub]
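For a rough sketch of the forward model alone (the inference machinery lives in the repository; all numbers here are illustrative, not our inferred values), here is a discrete-time SIR model in which the spreading rate drops at a single change point:

```python
# discrete-time SIR with one change point in the spreading rate lambda;
# parameter values are illustrative, not inferred from data
N = 83_000_000                   # population size (Germany)
mu = 0.125                       # recovery rate (1/8 days)
lam_before, lam_after = 0.4, 0.1 # spreading rate before/after the intervention
t_change = 20                    # day of the change point
days = 60

S, I, R = N - 1000, 1000, 0
infected = []
for t in range(days):
    lam = lam_before if t < t_change else lam_after
    new_inf = lam * S * I / N    # new infections this day
    new_rec = mu * I             # new recoveries this day
    S -= new_inf
    I += new_inf - new_rec
    R += new_rec
    infected.append(I)
```

The inference then runs the other way: given reported case numbers, find the posterior over the change-point times and the spreading rates before and after.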

I want to take the opportunity to thank everyone involved for this amazing collaboration. This has been, and still is, a truly great team effort. I feel that we have made a valuable contribution, and for me personally, the project made quarantine and working from home much more enjoyable! Thanks guys!

Website Content from Markdown 

Time to celebrate, this is my first content that uses the new format.

For a while I have been noticing that it feels too cumbersome to add content to this site; writing is hard, and it is harder when you do it in html. Then there is markdown, which feels like the complete opposite: it is convenient, easy to read, and easy to write in any editor. You can even write a paper with colleagues in real time, without ever leaving your browser. Seriously, if you are not familiar with markdown yet, let me recommend spending a few minutes of procrastination to check it out.

To get markdown files onto my custom site, the only hurdle was rendering the .md files to html. First, I considered parsing only once before uploading everything so that page loads remain snappy. Then I realized that parsing an average document takes less than 10ms and ended up using Parsedown, a renderer in php. This allows me to simply drop the .md files into a folder on the server; php fetches them and Parsedown creates the html for every file.

See this snippet of php:

foreach (glob("folder_with_markdown_files/*.md") as $file) {
    $html = Parsedown::instance()->text(file_get_contents($file));
    echo "<hr><div class='markdown'>";
    echo $html;
    echo "</div>";
}

Related things to check out

Criticality lies in the sampling. 


For decades, neuroscientists have argued that the cortex might operate at a critical point of a (second-order, non-equilibrium) phase transition. Operating at such a critical point would benefit neuronal networks because it enables optimal information processing: useful quantities, such as the correlation length, are maximized.

The evidence that supports or contradicts this hypothesis of criticality in the brain often derives from measurements of neuronal avalanches. If a system is critical, the probability distributions of the size (and duration) of these avalanches follow a power law. Thus, power-law distributions are a common way to check if a system is critical.
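As an aside on how such a check typically works (on synthetic data here, not our recordings): rather than fitting a line to a log-log histogram, the exponent of a power law is best estimated by maximum likelihood, and 1.5 is the classic mean-field value for critical avalanche sizes.

```python
import numpy as np

rng = np.random.default_rng(7)

def powerlaw_sample(alpha, x_min, size, rng):
    """Inverse-transform sampling from a continuous power law p(x) ~ x^-alpha."""
    return x_min * (1.0 - rng.random(size)) ** (-1.0 / (alpha - 1.0))

def mle_exponent(x, x_min):
    """Maximum-likelihood estimate of the power-law exponent (Clauset-style)."""
    x = x[x >= x_min]
    return 1.0 + len(x) / np.sum(np.log(x / x_min))

# synthetic "avalanche sizes" drawn from a power law with exponent 1.5
sizes = powerlaw_sample(alpha=1.5, x_min=1.0, size=100_000, rng=rng)
alpha_hat = mle_exponent(sizes, x_min=1.0)
print(f"estimated exponent: {alpha_hat:.2f}")  # close to 1.5
```

In practice one also needs a goodness-of-fit test, since — as the next paragraphs discuss — a plausible-looking power law alone is not enough to conclude criticality.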

However, the results of studies that build on observing power laws in neuronal avalanches vary immensely throughout the literature; some (and I am skipping many details) find the brain in a critical state, others in a subcritical state.

We found that the cause of the controversy lies in the way the avalanches are sampled. If an electrode’s signal is used directly (e.g. LFP), many neurons contribute to the signal, and the events that make up an avalanche have many overlapping contributions. These overlapping contributions introduce spurious correlations, and this type of coarse-sampling can produce power-law distributions even when the observed system is not critical.

[Plos CB] [ArXiv] [GitHub]

AtmoCL and AtmoWEB 

AtmoCL is an OpenCL port of the All Scale Atmospheric Model (ASAM) that I worked on during my time at Tropos. The code was initially based on the OpenGL derivative ASAMgpu. While OpenGL as a base for the initial GPU model was the intuitive choice, the (back then) more recent OpenCL offered some neat advantages. Apart from allowing the same code to run on a variety of hosts, including heterogeneous environments of GPUs, CPUs and accelerators, we could profit from the 3D image class. The mapping from 3D volume to 2D textures - which are the favourable memory format for GPUs - is done by the driver. Further, one can directly access any point of the volume through integer indices instead of the more cumbersome float coordinates inherent to OpenGL.

One main idea was to export the model state as images, where the volume is mapped to 2D cutplanes and state variables are represented in RGB. To animate the pictures as a moving sequence, I developed a lightweight web interface using Bootstrap. It also plots vertical profiles and time series with Highcharts. Check out the demo.