I am interested in understanding the key cognitive inductive biases that enable humans and animals to generalise out of distribution, and in figuring out how to integrate these biases into AI systems. One inductive bias I find particularly interesting is that of a sparse factor graph, in other words a clear separation between the encoding of causally relevant variables and the encoding of their causal dependency structure. This separation appears crucial for transitive relational inference, as suggested by recent works such as the Tolman-Eichenbaum Machine. I am also interested in what the process of memory consolidation in the brain can teach us about developing sophisticated relevancy-screening mechanisms and making AI more scalable. Finally, I am interested in what child development and psychology can teach us about how the human brain gradually builds causal models of the world and disentangles invariant features. Besides that, I really enjoy writing mathematical proofs and solving hard chess puzzles.