The Algonauts Project 2025 Challenge: How the Human Brain Makes Sense of Multimodal Movies

Jan 6, 2025 · Alessandro T. Gifford, Domenic Bersch, Marie St-Laurent, Basile Pinsard, Julie Boyle, Lune Bellec, Aude Oliva, Gemma Roig, Radoslaw M. Cichy
Abstract
There is a growing symbiosis between the sciences of artificial and biological intelligence: neural principles inspire new intelligent machines, which are in turn used to advance our theoretical understanding of the brain. To promote further collaboration between biological and artificial intelligence researchers, we introduce the 2025 edition of the Algonauts Project challenge: How the Human Brain Makes Sense of Multimodal Movies. Organized in collaboration with the Courtois Project on Neuronal Modelling (CNeuroMod), this edition aims to bring forth a new generation of brain encoding models that are multimodal and that generalize well beyond their training distribution, by training them on the largest dataset of fMRI responses to movie watching available to date.
Type
Publication
arXiv preprint

This paper introduces the Algonauts Project 2025 Challenge, which aims to advance brain encoding models by leveraging the largest fMRI movie-watching dataset to date. The challenge fosters collaboration between the neuroscience and AI communities and will conclude with a session at the 2025 Cognitive Computational Neuroscience (CCN) conference featuring the winning models.

For more information, view the full paper on arXiv.