**People**

**POSTDOCTORAL FELLOWS**

**Max**

(co-mentored with Yoshua Bengio)

*Max is interested in neural dynamics, inference, and decision-making, with a focus on the role of bias and uncertainty in neuroscience and artificial intelligence. He has done interdisciplinary work at a variety of interfaces: optimal strategies for quantum state discrimination, signal processing and geometric properties of neural nets, and model-based inference of immune repertoire dynamics from genetic sequence data. He is currently working on neural decision-making theory in the neuroAI space and how it can serve clinical developmental psychology.*

**Alexandre Payeur**

*I am interested in the dynamical structure of learning in neural networks and its application to co-adaptive neural interfaces.*

**Parikshat Sirpal**

(co-mentored with Numa Dancause)

*I am interested in the neurobiology of motor recovery and intra-cortical interactions for complex motor control. I use traditional signal processing and machine learning techniques to study questions related to the mechanisms involved in the control of movement, the plasticity supporting motor recovery after brain injury, and the impact of neuromodulatory approaches on the brain.*

**PHD STUDENTS**

**Ezekiel Williams**

*I am interested in developing mathematical theory for how cognitive phenomena (learning, memory, and decision-making) arise from the interaction of nodes (neurons) in a biological or artificial agent. I am fascinated by questions such as: (1) what form and dynamics an agent's internal representations should take to enable robust learning and task performance in a given environment; (2) how an agent should encode uncertainty about its environment; (3) how, during online learning, an agent might effectively integrate information associated with the current task with information driving learning; (4) how an agent could perform tasks involving information spanning multiple timescales.*

**Laura Estefany Suarez**

(co-mentored with Bratislav Misic)

*I'm a PhD candidate in the Neuroscience program at McGill University, supervised by Dr. Bratislav Mišić (network neuroscience) and co-supervised by Dr. Guillaume Lajoie (dynamical systems).*

*My research interests lie at the intersection of artificial intelligence and neuroscience. Broadly, my research revolves around the structure-function relationship in biological brains. Specifically, I want to understand how network structure and dynamics interact to shape the computational capacity of the brain, and how we can use this knowledge to establish design principles for the development of better neuromorphic architectures.*

**François Paugam**

(co-mentored with Pierre Bellec)


**Ximeng Mao**

*I am working on implementing and developing machine learning algorithms suited to the spatio-temporal patterns of neural signals. I aim to explore the hidden dynamics of the data and leverage them as guidance for new model designs. My favourite tools are deep generative models, reinforcement learning, and Gaussian processes.*

**Amine Natik**

(co-mentored with Guy Wolf)

*I'm a PhD student in the Mathematics department at the Université de Montréal and Mila, co-supervised by Guy Wolf and Guillaume Lajoie. I received my M.Sc. from the University of Ottawa in probability theory.*

*Currently I am interested in understanding the information encoded in the hidden layers of trained deep neural networks accomplishing complex tasks. I am using tools from manifold learning, graph signal processing, representation learning and dynamical systems. I am also interested in research at the intersection of AI and Neuroscience.*

(co-supervised with Yoshua Bengio)

*I am interested in understanding the key cognitive inductive biases that enable humans and animals to generalise out of distribution, and in figuring out how to integrate them into AI systems. One such inductive bias that I find very interesting is that of a sparse factor graph, in other words a clear separation between the encoding of causally relevant variables and the encoding of their causal dependency structure. This seems crucial for achieving transitive relational inference, as suggested by recent works like the Tolman-Eichenbaum machine. Further, I am interested in understanding what the process of memory consolidation in the brain can teach us about how to develop sophisticated relevancy-screening mechanisms and how to make AI more scalable. Finally, I am interested in understanding what child development and psychology can teach us about how the human brain gradually builds causal models of the world and disentangles invariant features. Besides that, I really enjoy writing mathematical proofs and solving hard chess puzzles.*

**Sangnie Bhardwaj**

*I'm a PhD student at UdeM, supervised by Guillaume Lajoie and Hugo Larochelle. I am also a researcher at Google Montreal, where I work on building robust and generalizable image representations. I am interested in achieving this by incorporating principles inspired by visual learning in human brains.*

**MASTER'S STUDENTS**

**Aude Forcione-Lambert**

(co-mentored with Guy Wolf)

*Coming soon*

**Sarthak Mittal**

(co-mentored with Yoshua Bengio)

*I am a graduate student in the Mathematics department at the Université de Montréal (in affiliation with Mila), co-supervised by Dr. Guillaume Lajoie and Dr. Yoshua Bengio. I am broadly interested in graphical models, optimization, and generalization in machine learning. Currently, I have been focusing on attention-based models for sequential tasks and the study of causality in ML. My long-term goal is to understand more about deep learning and bridge the gap between mathematical analysis and machine learning methods.*

**Leo Choinière**

(co-supervised with Numa Dancause)

*I am a Master's student in neuroscience at Université de Montréal, co-supervised by Dr. Numa Dancause. I am interested in brain-machine interfaces, neural dynamics, and motor control. My main project aims to optimize neurostimulation parameters in preclinical animal models, which involves acquiring both experimental and computational modeling skills. In this context, questions of interest include: does external electrical stimulation of a network keep the neural dynamics in a physiological regime? Can we drive high-dimensional neural activity in useful ways using only a small number of input nodes?*

**Nanda Harishankar Krishna**

*I'm a Master's student at Université de Montréal. I'm broadly interested in building efficient and robust deep learning models inspired by the brain, and in applying these models in healthcare. Currently, I'm working on understanding the dynamics of latent representations learnt by the brain for specific tasks.*

**Léo Gagnon**

*I am interested in asking deep questions.*

**UNDERGRADUATE STUDENTS**

*I am currently a master’s student in applied mathematics at the University of Cambridge. Alongside my undergraduate studies at the Université de Montréal, I worked as a research student at Mila under the supervision of Dr Lajoie. My goal is to apply mathematical principles to investigate learning mechanisms, motivated by questions arising at the intersection of deep learning and computational neuroscience. Specifically, I am interested in formalising the role of mechanisms for sensory integration in shaping neural population dynamics, and in turn the role of these dynamics for computation in neural networks.*

**INTERNS**

**Colin Bredenberg**

*I am interested in computational theories of synaptic plasticity in the brain. More specifically, I use machine learning and dynamical systems techniques to study how the visual system learns to represent images while coping with noise in both the environment and the activities of neurons themselves.*

**Olivier Tessier-Larivière**

(Intern, Mila Professional Master's)

*I am a Professional Master's student in machine learning at Mila. I did my undergraduate degree in computer engineering at Polytechnique Montréal. I am interested in solving real-world problems using machine learning, particularly in healthcare. As part of my internship project, I applied deep learning methods to model neural activity in the peripheral nervous system.*

**ALUMNI**

**Bhargav Kanuparthi**


**Samuel Laferrière**