All times are Eastern Daylight Time (Québec local time, UTC-4).
Thursday, 9th June
| Time (EDT) | Session | Speaker(s) | Title | Links |
|---|---|---|---|---|
| 08:45 - 09:15 | Breakfast | | | |
| 09:15 - 09:25 | Pre-workshop remarks | | | |
| 09:25 - 10:50 | Workshop 1 | François Paugam | Managing debugging and error messages in Python. The goal of this workshop is to provide programming advice and an overview of error-message handling and debugging for beginners in Python programming. The workshop will be given in French. The material can be downloaded here: https://github.com/UNIQUE-Students/python_erreurs_deboggage (A short illustrative Python example follows this table.) | |
| 10:50 - 11:20 | Break | | | |
| 11:20 - 11:30 | Opening remarks | | | |
| 11:30 - 12:30 | Keynote talk 1 | Guillaume Hennequin | V1 be one: unifying phenomenology and function in a network model of cortical variability. Cortical responses to visual stimuli have a rich phenomenology, with well-documented nonlinearities, transients, oscillations, and stimulus-dependent patterns of across-trial variability. What principles of network dynamics underlie these features, and what function(s) do they serve? I will present our recent efforts to address these questions and unify previously disparate bottom-up and top-down models of V1. First, I will show that several key properties of V1 responses are difficult to reconcile with the classical theory of tightly balanced E/I networks; on the contrary, a regime of loose E/I balance accommodates these properties robustly (Hennequin et al., Neuron (2018)). Second, building on this, I will show that yet more properties of V1 dynamics emerge in a loosely balanced network model trained to perform sampling-based Bayesian inference (Echeveste et al., Nat. Neurosci. (2020)). | |
| 12:30 - 13:20 | Lunch + Networking | | | |
| 13:20 - 14:20 | Keynote talk 2 | Courtney Paquette | Optimization Algorithms in the Large: Exact Dynamics, Average-case Analysis, and Stepsize Criticality. In this talk, I will present a framework, inspired by random matrix theory, for analyzing the dynamics of optimization algorithms (e.g., first-order methods, stochastic gradient descent (SGD), and momentum) when both the number of samples and the number of dimensions are large. Using this new framework, we show that the dynamics of optimization algorithms on a least squares problem with random data become deterministic in the large-sample, large-dimension limit. In particular, the limiting dynamics for stochastic algorithms are governed by a Volterra integral equation. This model predicts that SGD undergoes a phase transition at an explicitly given critical stepsize that ultimately affects its convergence rate, which we also verify experimentally. Finally, when the input data is isotropic, we provide explicit expressions for the dynamics and average-case convergence rates. These rates show significant improvement over the worst-case complexities. (A toy simulation illustrating the concentration of SGD dynamics follows this table.) | |
| 14:20 - 14:30 | Break | | | |
| 14:30 - 15:30 | Discussion Panel 1 | Guillaume Lajoie, Courtney Paquette, David Kanaa, Parikshat Sirpal | Dynamics and Optimization in Neuro-AI | |
| 15:30 - 15:45 | Break | | | |
| 15:45 - 17:30 | Poster presentations | | Selected posters presented by participants | |
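
For context on Workshop 1 (managing error messages and debugging in Python), here is a minimal, self-contained sketch of the two ideas such a beginner-oriented session revolves around: reading an exception's type and message, and catching it cleanly. The example is illustrative only, it is not taken from the workshop material at https://github.com/UNIQUE-Students/python_erreurs_deboggage, and the `mean` helper is a made-up stand-in.

```python
# A minimal illustration (not from the workshop repo) of reading and
# handling a Python error message.

def mean(values):
    """Average a list of numbers; raise a ValueError on empty input."""
    if not values:
        raise ValueError("mean() needs at least one value")
    return sum(values) / len(values)

try:
    print(mean([]))                      # deliberately triggers the error
except ValueError as err:
    # The exception type and message say what went wrong; if the error were
    # left uncaught, the traceback would also show where (the call chain).
    print(f"Caught a ValueError: {err}")
```

Removing the try/except lets the full traceback print, which is usually the first thing to read when debugging.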
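As a loose illustration of the concentration phenomenon described in Keynote talk 2, the toy script below runs plain SGD on random least-squares problems of increasing size and reports how much the final loss varies across independent runs; the spread typically shrinks as the dimension grows, consistent with the claim that the dynamics become deterministic in the high-dimensional limit. The function name `sgd_final_loss`, the Gaussian data model with a planted noiseless solution, the 2:1 sample-to-dimension ratio, and the stepsize 0.5 are all assumptions of this sketch, not material from the talk, and it does not implement the Volterra-equation analysis itself.

```python
import numpy as np

def sgd_final_loss(n, d, stepsize, iters, seed):
    """Run plain SGD on 0.5 * mean((A x - b)^2) with Gaussian data and a
    planted, noiseless solution; return the final full-batch loss."""
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(n, d)) / np.sqrt(d)   # rows have squared norm ~ 1
    x_star = rng.normal(size=d)                # planted signal
    b = A @ x_star                             # noiseless targets
    x = np.zeros(d)
    for _ in range(iters):
        i = rng.integers(n)                    # one random sample per step
        x -= stepsize * (A[i] @ x - b[i]) * A[i]
    return 0.5 * np.mean((A @ x - b) ** 2)

# Independent runs (fresh data and fresh sampling) at the same problem size:
# the run-to-run spread of the final loss typically shrinks as n and d grow.
for d in (50, 500):
    n = 2 * d
    finals = [sgd_final_loss(n, d, stepsize=0.5, iters=10 * n, seed=s)
              for s in range(5)]
    print(f"d = {d:3d}: mean final loss = {np.mean(finals):.3e}, "
          f"spread (std) across runs = {np.std(finals):.1e}")
```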
Friday, 10th June
| Time (EDT) | Session | Speaker(s) | Title | Links |
|---|---|---|---|---|
| 08:45 - 09:15 | Breakfast | | | |
| 09:15 - 09:25 | Pre-workshop remarks | | | |
| 09:25 - 10:50 | Workshop 2 | Sangnie Bhardwaj | Studying Neural Networks through the lens of Dynamical Systems | GitHub |
| 10:50 - 11:20 | Break | | | |
| 11:20 - 11:30 | Opening remarks | | | |
| 11:30 - 12:30 | Keynote talk 3 | Anna Levina | Maintenance of neuronal states powered by self-organization. It is widely believed that some neuronal states are better suited for computations than others. For example, closeness to the critical state at a second-order phase transition was shown to be particularly suitable for computations in artificial systems. Alternatively, the balance of excitation and inhibition has also been associated with, e.g., optimized information encoding. But how can these states be reached and maintained in ever-changing, dynamic neuronal populations? I believe the only possible solution is to harness the mechanisms of self-organization. In my talk, I will show examples of self-organization to particular states from neuronal data and propose how synaptic plasticity and adaptation can help maintain them. | |
| 12:30 - 13:20 | Lunch + Networking | | | |
| 13:20 - 14:20 | Keynote talk 4 [POSTPONED] | Patrick Desrosiers | The hidden low-dimensional dynamics of large neuronal networks. Recent experimental studies have shown that the evoked activity of large populations of neurons in living animals is high-dimensional. Yet, neuroscientists often observe that when animals are presented with simple stimuli, a much smaller number of dimensions than the number of neurons is sufficient to explain a large amount of the variance in the neuronal activity. In this talk, I will attempt to reconcile these two types of observations and to disambiguate the concept of dimensionality through the lens of complex systems theory. First, several empirical connectomes will be analyzed to show that real neuronal networks exhibit a very special mathematical property: their structure has a low effective rank. I will then use this property to prove that the large-scale activity of neural networks is always well approximated by a small dynamical system. Finally, I will exploit this dimension-reduction phenomenon to justify the use of PCA in neuroscience and to study the resilience of large neuronal networks. (An illustrative sketch of one effective-rank measure follows this table.) | |
| 14:20 - 14:30 | Break | | | |
| 14:30 - 15:30 | Discussion Panel 2 | Anne Pasek, Luz Gomez Vallejo, Mélisande Teng, Alex Hernandez-Garcia, Julia Kaltenborn | The Future of Neuro-AI: Climate Change | |
|
15:30 - 15:45 |
Closing remarks |
|
|
|
15:45 - |
Social! |
|
|
|
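
Keynote talk 4 hinges on the notion that a network's structure has "low effective rank". The abstract does not say which effective-rank measure is used; the sketch below uses one standard choice, the participation ratio of the singular values, purely to illustrate how low-rank structure shows up numerically. The synthetic matrices and their sizes are assumptions of this sketch, not data from the talk.

```python
import numpy as np

def effective_rank(M):
    """Participation-ratio effective rank, (sum_i s_i)^2 / sum_i s_i^2,
    computed from the singular values s_i of M. It equals k for a rank-k
    matrix with equal singular values and stays near k when the spectrum
    is dominated by k comparable modes."""
    s = np.linalg.svd(M, compute_uv=False)
    return (s.sum() ** 2) / (s ** 2).sum()

rng = np.random.default_rng(0)
n, k = 500, 10
unstructured = rng.normal(size=(n, n))                          # full-rank noise
structured = rng.normal(size=(n, k)) @ rng.normal(size=(k, n))  # rank-k structure

print(f"matrix size n = {n}")
print(f"effective rank, unstructured matrix:     {effective_rank(unstructured):.1f}")
print(f"effective rank, rank-{k} structured matrix: {effective_rank(structured):.1f}")
```

The unstructured matrix comes out with an effective rank that is a sizable fraction of n, while the structured one sits near k; the abstract's argument is that empirical connectomes behave like the second case, which is what makes a small dynamical system a good approximation of the large network's activity.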