Speakers

USS 2022

Thursday June 9th

Courtney Paquette

she/her

Assistant Professor

McGill University · Canada CIFAR AI Chair · Mila

Optimization Algorithms in the Large: Exact Dynamics, Average-case Analysis, and Stepsize Criticality

In this talk, I will present a framework, inspired by random matrix theory, for analyzing the dynamics of optimization algorithms (e.g., first-order methods, stochastic gradient descent (SGD), and momentum) when both the number of samples and the number of dimensions are large. Using this framework, we show that the dynamics of optimization algorithms on a least-squares problem with random data become deterministic in the large-sample, large-dimension limit. In particular, the limiting dynamics of stochastic algorithms are governed by a Volterra integral equation. This model predicts that SGD undergoes a phase transition at an explicitly given critical stepsize that ultimately affects its convergence rate, which we also verify experimentally. Finally, when the input data is isotropic, we provide explicit expressions for the dynamics and average-case convergence rates. These rates show significant improvement over the worst-case complexities.
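The stepsize criticality can be previewed numerically. Below is a minimal sketch of my own (not code from the talk): constant-stepsize, single-sample SGD on a random least-squares problem, compared against the heuristic stability threshold s_c ≈ 2/tr(H); the talk derives the exact critical stepsize via the Volterra equation, so s_c here is only an illustrative reference point.

```python
# Toy illustration (my own, not the speaker's code): SGD on random least
# squares converges below a heuristic critical stepsize s_c ~ 2 / tr(H)
# and diverges above it, mimicking the phase transition in the abstract.
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 200                          # samples and dimensions, both large
A = rng.standard_normal((n, d)) / np.sqrt(d)
x_star = rng.standard_normal(d)
b = A @ x_star                           # noiseless targets (interpolation regime)

H = A.T @ A / n                          # Hessian of the least-squares loss
s_c = 2.0 / np.trace(H)                  # heuristic single-sample SGD threshold

def sgd_final_loss(stepsize, iters=4000):
    x = np.zeros(d)
    for _ in range(iters):
        i = rng.integers(n)              # draw one sample per step
        x -= stepsize * (A[i] @ x - b[i]) * A[i]
    return 0.5 * np.mean((A @ x - b) ** 2)

for m in (0.5, 0.9, 1.3):                # below, near, and above the threshold
    print(f"stepsize = {m:.1f} * s_c : final loss = {sgd_final_loss(m * s_c):.3e}")
```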


Guillaume Hennequin

Lecturer

University of Cambridge

V1 be one: unifying phenomenology and function in a network model of cortical variability

Cortical responses to visual stimuli have a rich phenomenology, with well-documented nonlinearities, transients, oscillations, and stimulus-dependent patterns of across-trial variability. What principles of network dynamics underlie these features, and what function(s) do they serve? I will present our recent efforts to address these questions and unify previously disparate bottom-up and top-down models of V1. First, I will show that several key properties of V1 responses are difficult to reconcile with the classical theory of tightly balanced E/I networks; on the contrary, a regime of loose E/I balance accommodates these properties robustly (Hennequin et al., Neuron, 2018). Second, building on this, I will show that yet more properties of V1 dynamics emerge in a loosely balanced network model trained to perform sampling-based Bayesian inference (Echeveste et al., Nat. Neurosci., 2020).
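As a rough flavor of the loose-balance regime, here is a toy sketch of my own (not the speaker's model or parameters): a two-unit stochastic supralinear stabilized network (SSN) of the general kind analyzed in Hennequin et al. (2018), where stronger stimulus drive recruits inhibitory stabilization and tends to quench across-trial variability. All parameter values are illustrative assumptions.

```python
# Toy 2-unit stochastic SSN (one excitatory, one inhibitory unit).
# Parameters are assumptions for illustration, not fitted values.
import numpy as np

rng = np.random.default_rng(1)
W = np.array([[1.25, -0.65],     # E<-E, E<-I
              [1.20, -0.50]])    # I<-E, I<-I
k, n_pow = 0.04, 2.0             # supralinear gain: r = k * max(v, 0)^n
tau = np.array([0.020, 0.010])   # membrane time constants (s)
dt, T, sigma = 1e-4, 5.0, 0.5    # step, duration, input-noise strength

def ve_std(h):
    """Simulate and return the steady-state std of the E membrane potential."""
    v = np.zeros(2)
    samples = []
    for t in range(int(T / dt)):
        r = k * np.clip(v, 0.0, None) ** n_pow
        v += dt / tau * (-v + W @ r + h) \
             + sigma * np.sqrt(dt / tau) * rng.standard_normal(2)
        if t * dt > 1.0:             # discard the transient
            samples.append(v[0])
    return np.std(samples)

for h in (2.0, 10.0, 20.0):          # increasing stimulus drive
    print(f"drive h = {h:5.1f}: std of V_E = {ve_std(np.array([h, h])):.3f}")
```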


Guillaume Lajoie

Assistant Professor

Dept. de Mathématiques et Statistiques, Université de Montréal · Mila, Québec AI Institute · Canada CIFAR AI Chair

Panel: Dynamics and Optimization in Neuro-AI

A panel discussion about dynamics and optimization in Neuro-AI with Dr. Courtney Paquette, David Kanaa, and Dr. Parikshat Sirpal.

Parikshat Sirpal

Postdoctoral Fellow

Université de Montréal

Panel: Dynamics and Optimization in Neuro-AI

A panel discussion about dynamics and optimization in Neuro-AI with Dr. Courtney Paquette, David Kanaa, and Dr. Guillaume Lajoie.

David Kanaa

PhD Candidate

Université de Montréal · Mila, Québec AI Institute

Panel: Dynamics and Optimization in Neuro-AI

A panel discussion about dynamics and optimization in Neuro-AI with Dr. Courtney Paquette, Dr. Parikshat Sirpal, and Dr. Guillaume Lajoie.

François Paugam

PhD student

Mila, Université de Montréal

Managing Debugging and Error Messages in Python

The goal of this workshop is to offer programming tips and an overview of error-message handling and debugging for beginners in Python programming. The workshop will be held in French. The material can be downloaded here: https://github.com/UNIQUE-Students/python_erreurs_deboggage
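In the spirit of the workshop, here is a small example of my own (not the actual course material) showing two of the habits it targets: reading a traceback from the bottom up, and handling an anticipated error explicitly.

```python
# A small example in the workshop's spirit: provoke an error, read it,
# and handle it explicitly.
def mean(values):
    return sum(values) / len(values)     # fails if `values` is empty

try:
    print(mean([]))                      # len([]) == 0 -> ZeroDivisionError
except ZeroDivisionError as err:
    # The LAST line of a traceback names the exception and its message;
    # the lines above it trace the calls that led there, outermost first.
    print(f"Caught {type(err).__name__}: {err}")
    print("Fix: check for an empty list before dividing, or return a default.")

# For interactive inspection, Python's built-in debugger can be entered by
# placing breakpoint() at the point of interest, then stepping with the pdb
# commands: n (next line), s (step into), p <expr> (print), c (continue).
```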



Friday June 10th

Anna Levina

Assistant Professor of Computational Neuroscience

University of Tübingen

Maintenance of neuronal states powered by self-organization

It is widely believed that some neuronal states are better suited for computations than others. For example, closeness to the critical state at a second-order phase transition has been shown to be particularly suitable for computations in artificial systems. Alternatively, the balance of excitation and inhibition has also been associated with, e.g., optimized information encoding. But how can these states be reached and maintained in ever-changing, dynamic neuronal populations? I believe the only possible solution is to harness the mechanisms of self-organization. In my talk, I will show examples of self-organization toward particular states in neuronal data and propose how synaptic plasticity and adaptation can help maintain them.
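A classic toy version of this idea, sketched below in my own words (loosely following Levina, Herrmann & Geisel, Nat. Phys. 2007, not the speaker's code), lets activity-dependent synaptic depression and slow recovery tune a branching process toward the critical branching ratio of 1, where avalanche sizes become broadly (power-law-like) distributed. The recovery and cost constants are illustrative assumptions.

```python
# Toy self-organized criticality: a branching process with use-dependent
# synaptic depression and slow recovery. The effective branching ratio
# settles near (slightly below) the critical value 1.
import numpy as np

rng = np.random.default_rng(2)

def avalanche(sigma, cap=10_000):
    """Seed one spike; each spike triggers Poisson(sigma) offspring spikes."""
    active, size = 1, 1
    while active and size < cap:
        active = rng.poisson(sigma * active)
        size += active
    return size

sigma, sigma_max = 0.2, 2.0          # start far below criticality
recovery, cost = 1e-3, 2e-5          # slow recovery, per-spike resource use

trace = []
for _ in range(10_000):
    s = avalanche(sigma)
    # synapses recover between avalanches and deplete in proportion to use
    sigma += recovery * (sigma_max - sigma) - cost * s
    sigma = min(max(sigma, 0.0), sigma_max)
    trace.append(sigma)

print(f"self-organized branching ratio: {np.mean(trace[-2000:]):.3f} (critical value: 1)")
```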


Patrick Desrosiers

Researcher and Affiliated Professor in Physics

CERVO Brain Research Center · Université Laval

The hidden low-dimensional dynamics of large neuronal networks

Recent experimental studies have shown that the evoked activity of large populations of neurons in living animals is high-dimensional. Yet neuroscientists often observe that, when animals are presented with simple stimuli, a much smaller number of dimensions than the number of neurons is sufficient to explain a large amount of the variance in the neuronal activity. In this talk, I will attempt to reconcile these two types of observations and to disambiguate the concept of dimensionality through the lens of complex systems theory. First, several empirical connectomes will be analyzed to show that real neuronal networks exhibit a very special mathematical property: their structure has a low effective rank. I will then use this property to prove that the large-scale activity of neural networks is always well approximated by a small dynamical system. Finally, I will exploit this dimension-reduction phenomenon to justify the use of PCA in neuroscience and to study the resilience of large neuronal networks.
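The dimension-reduction phenomenon is easy to reproduce in a toy setting. Here is a minimal sketch of my own (not the speaker's analysis): a rate network of 400 neurons whose connectivity is built with effective rank 3; PCA on a single simulated trajectory then concentrates essentially all variance in the first few components. The eigenvalues placed on the low-rank part are illustrative assumptions.

```python
# Toy demo: low effective rank in connectivity implies low-dimensional
# activity, which is why PCA recovers so few components out of N neurons.
import numpy as np

rng = np.random.default_rng(3)
N, r = 400, 3
U = np.linalg.qr(rng.standard_normal((N, r)))[0]   # orthonormal basis
W = U @ np.diag([2.5, 2.0, 1.5]) @ U.T             # rank-r connectivity

dt, steps = 0.05, 4000
x = 0.1 * rng.standard_normal(N)
X = np.empty((steps, N))
for t in range(steps):
    x += dt * (-x + W @ np.tanh(x))                # leaky rate dynamics
    X[t] = x

# PCA on the trajectory: variance concentrates in ~r components out of N
Xc = X - X.mean(axis=0)
var = np.linalg.svd(Xc, compute_uv=False) ** 2
print("fraction of variance in first 4 of", N, "PCs:",
      np.round(np.cumsum(var[:4]) / var.sum(), 4))
```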


Alex Hernandez-Garcia

he/him/él

Postdoctoral Fellow

Mila, Université de Montréal

Panel: The future of Neuro-AI: Climate change

A panel discussion about the future of Neuro-AI and climate change with Julia Kaltenborn, Luz Gomez Vallejo, Mélisande Teng, and Dr. Anne Pasek.


Anne Pasek

Assistant Professor

Trent University

Panel: The future of Neuro-AI: Climate change

A panel discussion about the future of Neuro-AI and climate change with Julia Kaltenborn, Luz Gomez Vallejo, Mélisande Teng, and Dr. Alex Hernandez-Garcia.


Julia Kaltenborn

Master's student

McGill University

Panel: The future of Neuro-AI: Climate change

A panel discussion about the future of Neuro-AI and climate change with Dr. Anne Pasek, Luz Gomez Vallejo, Mélisande Teng, and Dr. Alex Hernandez-Garcia.


Luz Gomez Vallejo

President

Net Impact Montreal

Panel: The future of Neuro-AI: Climate change

A panel discussion about the future of Neuro-AI and climate change with Julia Kaltenborn, Dr. Anne Pasek, Mélisande Teng, and Dr. Alex Hernandez-Garcia.


Mélisande Teng

PhD Candidate

Mila, Québec AI Institute

Panel: The future of Neuro-AI: Climate change

A panel discussion about the future of Neuro-AI and climate change with Julia Kaltenborn, Dr. Anne Pasek, Luz Gomez Vallejo, and Dr. Alex Hernandez-Garcia.

Sangnie Bhardwaj

PhD student

Mila, Université de Montréal

Studying Neural Networks through the lens of Dynamical Systems

This hands-on workshop will introduce participants to dynamical systems tools for the analysis of neural networks. Please find the material here: https://github.com/sangnie/lyapunov_calc.
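As a flavor of what such tools look like, the snippet below is a standalone sketch of my own (not the workshop material at the repository above): it estimates the largest Lyapunov exponent of a discrete-time tanh RNN by propagating a tangent vector through the per-step Jacobians. The exponent crosses zero near gain g = 1, the classical edge of chaos for this model.

```python
# Minimal sketch (illustrative assumptions throughout): largest Lyapunov
# exponent of the map h <- tanh(W h) via a propagated, renormalized
# tangent vector. Positive values indicate chaotic dynamics.
import numpy as np

rng = np.random.default_rng(4)
N = 200

def largest_lyapunov(g, steps=3000):
    W = g * rng.standard_normal((N, N)) / np.sqrt(N)   # gain-scaled weights
    h = rng.standard_normal(N)
    v = rng.standard_normal(N)
    v /= np.linalg.norm(v)
    log_growth = 0.0
    for _ in range(steps):
        h = np.tanh(W @ h)
        v = (1.0 - h**2) * (W @ v)     # Jacobian-vector product at this step
        norm = np.linalg.norm(v)
        log_growth += np.log(norm)
        v /= norm                      # renormalize to avoid under/overflow
    return log_growth / steps

for g in (0.5, 0.9, 1.5, 3.0):
    print(f"gain g = {g:3.1f}: lambda_max ~ {largest_lyapunov(g):+.3f}")
```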

