Event

In computational and theoretical neuroscience, a recently revived question concerns the complexity of neural data. This question can be tackled by studying the dimensionality of such data: is neural activity high or low dimensional? How does the geometric structure of neural activity depend on behavior, learning, or the underlying connectivity? In my talk I will show how these three aspects (animal behavior, learning, and underlying network connectivity) can be linked to the geometric properties of neural data, with an emphasis on dimensionality phenomena. My results start from neural recordings and aim to build an understanding of neural dynamics by means of theoretical and computational tools. These tools are mainly borrowed from the field of neural network dynamics, using a blend of large-scale dynamical systems and statistical physics approaches.

Zoom: https://upenn.zoom.us/j/96846928909?pwd=Q3JPTTc5dURmQk5xL01OMjZUc2FXUT09