People

POSTDOCTORAL FELLOWS

Alexandre Payeur

I am interested in the dynamical structure of learning in neural networks and its application to co-adaptive neural interfaces. 

PHD STUDENTS

Ezekiel Williams

I am interested in developing mathematical theory for how cognitive phenomena (learning, memory, and decision making) arise from the interaction of nodes (neurons) in a biological or artificial agent. I am fascinated by questions such as: (1) what form and dynamics an agent's internal representations should take to enable robust learning and task performance in a given environment; (2) how an agent should encode uncertainty about its environment; (3) how, during online learning, an agent might effectively integrate information associated with the current task with information driving learning; (4) how an agent could perform tasks involving information spanning multiple timescales.

Laura Estefany Suarez 

(co-mentored with Bratislav Misic)
I’m a PhD candidate in the Neuroscience program at McGill University, supervised by Dr. Bratislav Mišić (network neuroscience) and co-supervised by Dr. Guillaume Lajoie (dynamical systems). My research interests lie at the intersection of artificial intelligence and neuroscience. Broadly, my research revolves around the structure-function relationship in biological brains. Specifically, I want to understand how network structure and dynamics interact to shape the computational capacity of the brain, and how we can use this knowledge to establish design principles for the development of better neuromorphic architectures.

François Paugam

(co-mentored with Pierre Bellec)
Coming soon

Ximeng Mao

I am working on implementing and developing machine learning algorithms suited to the spatio-temporal patterns of neural signals. I aim to explore the hidden dynamics of the data and leverage them as guidance for new model designs. My favorite tools are deep generative models, reinforcement learning, and Gaussian processes.

Amine Natik

(co-mentored with Guy Wolf)
I'm a PhD student in the Math department at the University of Montreal & MILA, co-supervised by Guy Wolf and Guillaume Lajoie. I received my M.Sc. in Probability Theory at the University of Ottawa. Currently, I am interested in understanding the information encoded in the hidden layers of trained deep neural networks accomplishing complex tasks, using tools from manifold learning, graph signal processing, representation learning, and dynamical systems. I am also interested in research at the intersection of AI and neuroscience.

(co-mentored with Pierre Bellec)
Coming soon

(co-supervised with Yoshua Bengio)
I am interested in understanding the key cognitive inductive biases that enable humans and animals to perform out-of-distribution generalisation, and in figuring out how to integrate them into AI systems. One such inductive bias that I find very interesting is that of a sparse factor graph; in other words, a clear separation between the encoding of causally relevant variables and the encoding of their causal dependency structure. This seems crucial for achieving transitive relational inference, as suggested by recent works like the Tolman-Eichenbaum Machine. Further, I am interested in understanding what the process of memory consolidation in the brain can teach us about how to develop sophisticated relevancy-screening mechanisms and how to make AI more scalable. Finally, I am interested in understanding what child development and psychology can teach us about how the human brain gradually builds causal models of the world and disentangles invariant features. Besides that, I really enjoy writing mathematical proofs and solving hard chess puzzles.

Sangnie Bhardwaj

I’m a PhD student at UdeM, supervised by Guillaume Lajoie and Hugo Larochelle. I am also a researcher at Google Montreal, where I work on building robust and generalizable image representations. I am interested in achieving this by incorporating principles inspired by visual learning in human brains.

MASTER'S STUDENTS

Aude Forcione-Lambert

(co-mentored with Guy Wolf)
Coming soon

Sarthak Mittal 

(co-mentored with Yoshua Bengio)
I am a graduate student in the Mathematics department at the Université de Montréal (in affiliation with MILA), co-supervised by Dr. Guillaume Lajoie and Dr. Yoshua Bengio. I am broadly interested in graphical models, optimization, and generalization in machine learning. Currently, I am focusing on attention-based models for sequential tasks and the study of causality in ML. My long-term goal is to understand more about deep learning and to bridge the gap between mathematical analysis and machine learning methods.

Leo Choinière 

(co-supervised with Numa Dancause)
I am a Master's student in neuroscience at Université de Montréal, co-supervised by Dr. Dancause in the neuroscience department. I am interested in brain-machine interfaces, neural dynamics, and motor control. My main project aims to optimize neurostimulation parameters in preclinical animal models, which involves acquiring both experimental and computational modeling skills. In this context, questions of interest include: Does external electric stimulation of a network keep the neural dynamics in a physiological regime? Can we drive high-dimensional neural activity in useful ways using only a small number of input nodes?

Nanda Harishankar Krishna

I’m a Master’s student at Université de Montréal and I’m broadly interested in building efficient and robust deep learning models inspired by the brain, and applying these models in healthcare. Currently, I’m working on understanding the dynamics of latent representations learnt by the brain for specific tasks. 

Léo Gagnon

I am interested in asking deep questions.

UNDERGRADUATE STUDENTS

I am currently a master’s student in applied mathematics at the University of Cambridge. Alongside my undergraduate studies at the Université de Montréal, I worked as a research student at Mila under the supervision of Dr Lajoie. My goal is to apply mathematical principles to investigate learning mechanisms, motivated by questions arising at the intersection of deep learning and computational neuroscience. Specifically, I am interested in formalising the role of mechanisms for sensory integration in shaping neural population dynamics, and in turn the role of these dynamics for computation in neural networks. 

INTERNS

Colin Bredenberg

I am interested in computational theories of synaptic plasticity in the brain. More specifically, I use machine learning and dynamical systems techniques to study how the visual system learns to represent images, while coping with noise in both the environment and the activities of neurons themselves. 

Olivier Tessier-Larivière

(Intern, Mila Professional Master's)
I am a Professional Master's student in machine learning at Mila. I did my undergraduate degree in computer engineering at Polytechnique Montreal. I am interested in solving real-world problems using machine learning, particularly in the healthcare field. As part of my internship project, I applied deep learning methods to model neural activity in the peripheral nervous system.

ALUMNI

Bhargav Kanuparthi

Samuel Laferrière