Last month marked the 10th anniversary of the landmark paper that launched “connectomics”, overthrowing the predominant approach of localizing individual functions in the brain in favor of mapping the entirety of the brain’s connections. In the decade since, connectomics has redefined how we collect, analyze, and interpret our data. Along the way, numerous international endeavors like the Human Connectome Project have sprung up, spurring hundreds of institutions to amass never-before-seen volumes of brain data from thousands of individuals. This revolution has moved cognitive neuroimaging from a small-scale endeavor, governed by many isolated labs conducting modest studies in closed settings, to a massive open-science bonanza of data sharing. Today many brain science institutes find themselves engaged in large-scale data collection, whether to establish normative samples of particular patient groups or to bolster ongoing connectomic and computational approaches. This movement has not been without its detractors, however, with some raising concerns about the expense and long-term payoff of these massive projects, arguing that they come at the cost of more flexible, smaller, hypothesis-driven research.
To get a feel for how far we’ve come and how far we’ve yet to go, I met with PLOS ONE Section Editor and PLOS Computational Biology Deputy Editor-in-Chief Olaf Sporns to discuss the first “decade of connectomics”.