05-02-2018 Jean-Pascal Pfister


Neural Particle Filter


The brain is able to perform remarkable computations such as extracting the voice of a person talking in a noisy crowd or tracking the position of a pedestrian crossing the road. Even though we perform these computations every day in a seemingly effortless way, this ongoing feature extraction task is far from trivial. It can be formalised as a filtering problem, where the aim is to infer the state of a dynamically changing hidden variable from noisy observations. For linear hidden dynamics, a well-known solution is the Kalman filter. It is, however, unclear how to reliably and efficiently perform inference for real-world tasks, which are highly nonlinear and high-dimensional. Furthermore, it is even less clear how such nonlinear filtering may be implemented in neural tissue. We recently proposed a neural network model, the Neural Particle Filter, that performs this nonlinear filtering task [1,2], and derived an online learning rule which becomes Hebbian in the limit of small observation noise [1,3]. Since this filter is based on unweighted particles (unlike the bootstrap particle filter, which relies on weighted particles), we showed that it overcomes the known curse of dimensionality of particle filters [2].
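The unweighted-particle idea can be illustrated with a minimal discrete-time sketch for a one-dimensional model. This is only an illustration under simplifying assumptions, not the construction in [1,2]: the feedback gain `w` is held fixed here (in the papers it is adapted online), and the drift `f` and observation function `g` are left generic. Each particle follows its own copy of the hidden dynamics, nudged by a feedback term comparing the observation increment `dy` with the particle's predicted increment; there are no importance weights and no resampling.

```python
import math
import random

def unweighted_particle_filter(dys, f, g, w, n_particles=100,
                               sigma=0.2, dt=0.01, x0=0.0, rng=None):
    """Sketch of an unweighted (NPF-style) particle filter for a 1-D
    hidden diffusion dx = f(x) dt + noise observed through
    dy = g(x) dt + noise. Illustrative only; names and the fixed
    gain w are assumptions, not the authors' construction."""
    rng = rng or random.Random(0)
    xs = [x0] * n_particles
    means = []
    for dy in dys:
        xs = [
            x
            + f(x) * dt                                # prior drift
            + w * (dy - g(x) * dt)                     # observation feedback
            + sigma * math.sqrt(dt) * rng.gauss(0, 1)  # particle diffusion
            for x in xs
        ]
        # With equally weighted particles, the posterior-mean estimate
        # is just the ensemble average.
        means.append(sum(xs) / n_particles)
    return means
```

For instance, tracking a constant hidden state observed through noisy increments (`f(x) = 0`, `g(x) = x`) drives the ensemble mean toward the true value. Because every particle carries equal weight, no particle is ever "wasted" in a low-likelihood region after resampling, which is the intuition behind the scalability result of [2].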

[1] Kutschireiter, A., Surace, S. C., Sprekeler, H., & Pfister, J.-P. (2017). Nonlinear Bayesian filtering and learning: a neuronal dynamics for perception. Scientific Reports, 7(1), 8722.
[2] Surace, S. C., Kutschireiter, A., & Pfister, J.-P. (2017). How to avoid the curse of dimensionality: scalability of particle filters with and without importance weights. SIAM Review, In Press. arXiv:1703.07879
[3] Surace, S. C., & Pfister, J.-P. (2016). Online Maximum Likelihood Estimation of the Parameters of Partially Observed Diffusion Processes. arXiv:1611.00170