NeuroInformatics
My group works at the interface of neuroscience, mathematics, and computer science. Our questions involve understanding and modeling learning and information processing in biologically plausible networks of neurons. Many such processes are still poorly understood, and experimental neuroscience data are often open to different interpretations. Mathematical modeling, combined with insights and techniques from statistical machine learning and estimation, is a key method for progress in this area.
Goals. In my group, we focus on two closely related research questions: “How do biological neurons compute information?” and “How do biological neurons learn to compute information?”. To answer these questions, we develop theoretical models that fit existing neuroscience experiments, with an emphasis on spiking neurons. Fitting models to experiments links theory with actual neural data, so that the resulting models yield falsifiable predictions.
Spiking Reinforcement Learning. We currently focus on combining detailed spiking neuron models with insights from computational solutions for reinforcement learning. Much is known about effective and powerful reinforcement learning methods, both with respect to spatial and temporal learning and with respect to stochastic actions and environments. Implementations in neuronal systems, however, are few and far between; in particular, very few approaches prescribe how such powerful reinforcement learning could take place in layered networks of spiking neurons (like typical cortical structures) while remaining biologically plausible.
Precise Spike Timing. Part of the group's effort goes into finding models that explain "intermediately precise" spike timing in neural models. Increasing evidence suggests that biological neural networks can operate based on precisely timed spikes, and that this may be the main mode of fast computation in distinct parts of the brain. In past work, we showed that computing with precisely timed spikes in networks of spiking neurons works in practice just as well as in more traditional neural networks. However, just as for traditional rate-coding models, we know that the brain does not rely exclusively on timed spikes; some intermediate computational models are needed. Recent work elsewhere has focused on probabilistic computing in "Poisson" spiking neurons; unfortunately, this type of neuron model does particularly poorly at replicating the high temporal precision of real spiking neurons. A different emphasis on the balance between spike timing and probability encoding may provide a better model.
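To make the contrast concrete, here is a minimal Python sketch (our own illustration; all parameter values are assumptions, not fitted to data): a deterministic leaky integrate-and-fire (LIF) neuron reproduces the exact same spike times on every trial with the same input, while a rate-matched Poisson neuron reproduces only the firing rate, with random timing.

```python
# Minimal sketch: trial-to-trial spike-time reliability of a deterministic
# LIF neuron vs. a rate-matched Poisson neuron on the same input current.
# All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.1, 100.0                                 # time step and trial length (ms)
t = np.arange(0.0, T, dt)
I = 0.15 + 0.05 * np.sin(2 * np.pi * t / 25.0)     # shared input current

def lif_spike_times(I, tau=10.0, v_th=1.0):
    """Deterministic LIF: identical input yields identical spike times."""
    v, spikes = 0.0, []
    for k, i_k in enumerate(I):
        v += dt * (-v / tau + i_k)                 # leaky integration
        if v >= v_th:                              # threshold crossing -> spike
            spikes.append(round(t[k], 1))
            v = 0.0                                # reset
    return spikes

def poisson_spike_times(I, gain=0.6):
    """Poisson neuron: the rate tracks the input, but timing is random."""
    p = np.clip(gain * I * dt, 0.0, 1.0)           # spike probability per step
    return [round(ti, 1) for ti in t[rng.random(len(t)) < p]]

for trial in range(3):                             # LIF repeats; Poisson jitters
    print("LIF    :", lif_spike_times(I)[:5])
    print("Poisson:", poisson_spike_times(I)[:5])
```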
MSc Projects
For highly motivated EU students with excellent programming skills, I have a few potential MSc thesis projects:
1 Spike-based Bio-Acoustics. Many animals use sound to localize and identify other animals. Bats famously navigate based on ultrasound. What is less well known is that this process is facilitated by finely tuned temporal coincidence detectors in the form of spiking neurons. Here, we aim to develop spiking neural networks for tasks such as localization and identification based on sound; a toy sketch of the underlying coincidence-detection principle follows the references below. Part of the aim is to use small-scale spiking neural networks that can fit on state-of-the-art neuromorphic chips, like Intel's Loihi chip [2].
[1] Zhou, S., & Wang, W. (2018). Object Detection Based on LIDAR Temporal Pulses Using Spiking Neural Networks. arXiv:1810.12436.
[2] Davies, M., et al. (2018). Loihi: a neuromorphic manycore processor with on-chip learning. IEEE Micro, 38(1), 82-99.
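As a toy sketch of that coincidence-detection principle (under simplifying assumptions; this is not the project's actual model), the classic Jeffress scheme estimates the interaural time difference (ITD) by asking which axonal delay brings the two ears' spike trains into register; the coincidence count below stands in for the firing of a coincidence-detector neuron:

```python
# Toy Jeffress-style ITD estimation (illustrative assumptions throughout):
# the candidate delay that yields the most coincidences between the delayed
# left-ear train and the right-ear train reveals the interaural time
# difference, and hence the direction of the sound source.
import numpy as np

true_itd = 0.30                              # ms by which the right ear lags
left = np.arange(5.0, 50.0, 5.0)             # left-ear spike times (ms)
right = left + true_itd                      # right-ear spikes, shifted by the ITD

def coincidences(delay, window=0.005):
    """Coincidence count, standing in for a detector neuron's response."""
    shifted = left + delay                   # model an axonal delay line
    return sum(np.any(np.abs(right - s) < window) for s in shifted)

delays = np.arange(0.0, 0.6, 0.01)           # candidate delays (ms)
counts = [coincidences(d) for d in delays]
best = delays[int(np.argmax(counts))]
print(f"estimated ITD: {best:.2f} ms (true ITD: {true_itd:.2f} ms)")
```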
2 Deep Cognitive Spiking Neural Networks
Spiking neural networks are a promising candidate for implementing deep learning algorithms in a sparse and efficient manner. However, deep networks that learn using spiking neurons are currently limited to unsupervised learning models [1] or shallow networks [2]. The aim of this project is to develop true end-to-end deep spiking neural networks that can solve complex cognitive tasks from raw sensory inputs, by combining [2] and [3]; a minimal sketch of one end-to-end training route follows the references below.
[1] Ferré, P., Mamalet, F., & Thorpe, S. J. (2018). Unsupervised Feature Learning With Winner-Takes-All Based STDP. Frontiers in Computational Neuroscience, 12, 24.
[2] Karamanis, M., Zambrano, D., & Bohté, S. (2018). Continuous-Time Spike-Based Reinforcement Learning for Working Memory Tasks. In International Conference on Artificial Neural Networks (pp. 250-262). Springer, Cham.
[3] Pozzi, Bohte, & Roelfsema (2018). A Biologically Plausible Learning Rule for Deep Learning in the Brain. arXiv:1811.01768.
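To make the training problem concrete, here is a minimal numpy sketch of the surrogate-gradient idea often used to train spiking networks end-to-end (our own illustration; it is not the specific algorithm of [2] or [3]): the forward pass uses a hard spike threshold, while the backward pass substitutes a smooth pseudo-derivative so that error signals can flow through the spiking nonlinearity.

```python
# Minimal surrogate-gradient sketch (illustrative; not the method of [2,3]):
# spike with a hard threshold in the forward pass, but use a smooth
# fast-sigmoid pseudo-derivative in place of its zero gradient when learning.
import numpy as np

rng = np.random.default_rng(1)

def spike(v, v_th=1.0):
    return (v >= v_th).astype(float)               # hard threshold (forward)

def surrogate_grad(v, v_th=1.0, beta=2.0):
    return 1.0 / (1.0 + beta * np.abs(v - v_th)) ** 2   # smooth stand-in (backward)

x = rng.random(8)                                  # presynaptic activity
W = rng.normal(0.0, 0.5, (4, 8))                   # synaptic weights
target = np.array([1.0, 0.0, 1.0, 0.0])            # desired output spikes

for step in range(300):                            # descent on the squared error
    v = W @ x                                      # membrane potentials
    err = spike(v) - target
    W -= 0.5 * np.outer(err * surrogate_grad(v), x)    # backprop via the surrogate
print("spikes:", spike(W @ x), "target:", target)
```

In a deep network the same substitution is applied layer by layer, which is what would allow end-to-end learning through multiple spiking layers.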
3 Playing Games with Biologically Plausible Deep Memory-based RL
While the success of deep learning derives mostly from the application of error backpropagation on supervised examples, the brain rarely receives such precise corrective information. In biology, learning is instead mostly based on a combination of unsupervised and reinforcement learning. Recent work [1] has shown that a biologically plausible neural model of reinforcement learning is as powerful as error backpropagation and surprisingly efficient. An unresolved problem is how recurrent neural network structures can be incorporated into such RL-based learning schemes. Previous work has shown how plausible memory structures can be constructed and learned [2]; the challenge is to combine these memory structures, or similar ideas as in [3], with deep networks capable of learning from raw data, and to apply them, for example, to ATARI games while maintaining biological plausibility. A toy illustration of the reward-gated eligibility traces underlying such schemes follows the references below.
[1] Pozzi, Bohte, & Roelfsema (2018). A Biologically Plausible Learning Rule for Deep Learning in the Brain. arXiv:1811.01768.
[2] Rombouts, J., Roelfsema, P., & Bohte, S. M. (2012). Neurally plausible reinforcement learning of working memory tasks. In Advances in Neural Information Processing Systems (NIPS) (pp. 1871-1879).
[3] Bellec, G., et al. (2019). Biologically inspired alternatives to backpropagation through time for learning in recurrent neural nets. arXiv:1901.09049.
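As a toy illustration of the key ingredient such schemes share (a sketch in the spirit of [1, 2], not their exact algorithms, on a hypothetical cue-action task): synaptic eligibility traces tag the synapses that drove a choice, decay during the delay, and are converted into weight changes only when reward arrives.

```python
# Toy reward-gated eligibility traces (illustrative; not the algorithms of
# [1,2]): per trial, a cue drives a softmax action choice; the synapses that
# caused the choice are tagged, the tag decays over a delay, and reward
# gates the decayed tag into an actual weight change.
import numpy as np

rng = np.random.default_rng(2)
n_cues, n_actions, delay = 4, 2, 3
W = np.zeros((n_actions, n_cues))
lam, lr = 0.8, 0.2                            # trace decay per step, learning rate
correct = {c: c % n_actions for c in range(n_cues)}   # hypothetical task rule

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for trial in range(3000):
    cue = rng.integers(n_cues)
    x = np.eye(n_cues)[cue]                   # one-hot cue at trial start
    p = softmax(W @ x)
    a = rng.choice(n_actions, p=p)            # exploratory action choice
    post = np.eye(n_actions)[a] - p           # (chosen - expected) activity
    trace = np.outer(post, x) * lam ** delay  # eligibility decays over the delay
    r = 1.0 if a == correct[cue] else 0.0     # reward arrives after the delay
    W += lr * r * trace                       # reward gates the trace into learning

greedy = [int(np.argmax(W @ np.eye(n_cues)[c])) for c in range(n_cues)]
print("greedy action per cue:", greedy, "correct:", list(correct.values()))
```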
4 Learning to Attend to Classify
In humans, attention focuses neural resources on a limited part of the sensory experience. Psychophysics also tells us that we only learn about that to which we attend. In deep learning, attention models are typically applied to sequence learning, where attention dynamically masks part of the stream [1]. Can we model attention to learn more efficiently?
[1] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Information Processing Systems (pp. 5998-6008).
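A minimal numpy sketch of the scaled dot-product attention of [1] (our illustration on toy random data) shows the soft masking at the core of the question: each query position assigns a normalized weight to every input position, so computation, and potentially learning, can concentrate on a few of them.

```python
# Minimal scaled dot-product attention on toy data: each softmax row of
# `weights` is a soft mask deciding how much every position contributes.
import numpy as np

rng = np.random.default_rng(3)

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])        # query-key similarity
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)             # softmax: a soft mask per query
    return w @ V, w                                # weighted sum of the values

seq_len, d = 6, 8
X = rng.normal(size=(seq_len, d))                  # toy input sequence
out, weights = attention(X, X, X)                  # self-attention
print(np.round(weights, 2))                        # each row sums to 1
```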