Navigating Neuroimaging? Use BrainMap

Dr. Peter Fox’s pioneering work in the 1980s established the descriptive BrainMap taxonomy that is still used today


By Ivy Ashe
Population Health Scholar
University of Texas System
PhD Student in Journalism
UT Austin Moody College of Communication


It’s difficult to navigate the twists and turns of the brain and its networks, but for the past thirty years, Dr. Peter Fox has been making the task a little bit easier.

Fox is the founder and director of the BrainMap project at UT Health San Antonio. BrainMap pulls data from thousands of neuroimaging studies and organizes it according to an extensive, constantly evolving taxonomic system. The taxonomy provides researchers with a means to spot patterns on a large scale and, in doing so, helps describe the most complex organ in the body.

[Photo: Dr. Peter Fox]

“The overall goal is to use the published literature as fully as possible to make predictions about how to optimally analyze patient data,” Fox said. “It puts imaging in a unique situation and lets us compare across several types of imaging data.”

BrainMap was born in the mid-1980s when Fox, then a neurology resident in St. Louis, was trying to devise a way to analyze images he received while doing brain activation studies.

“There was no framework,” he said. “My inclination was to analyze the images I was acquiring in a mathematical, algorithmic way that didn’t exist, yet. I needed to be able to pool data from multiple subjects.”

Stereotactic neurosurgery provided an answer. In this subfield, each physical location in the brain is assigned a point in a three-dimensional space, with specific X (left-to-right), Y (front-to-back), and Z (up-and-down) coordinates, so that surgical targets can be precisely located. Stereotactic neurosurgery has used this XYZ-space approach since the 1930s.

Why don’t we do this for images? Fox thought.

He took a neurosurgical atlas and used it to develop an algorithmic method that would transform the brain activation study images into a standard space. In a task-activation study, different areas of the brain ‘light up’ depending on the task being performed. This is noted on the study images, but because everyone’s brain is a different size, the standard space is needed in order to make broad comparisons between individual people and, eventually, between different research projects.
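In spirit, the standard-space idea can be sketched in a few lines of code. The scale and shift values below are invented for illustration only; real methods estimate the transformation from anatomical landmarks or image registration, and actual atlas transforms are considerably more elaborate.

```python
# Hypothetical per-axis scale and shift (illustrative values, not a real atlas):
SCALE = (0.95, 1.10, 1.05)   # corrects for individual brain size along X, Y, Z
SHIFT = (2.0, -3.0, 1.5)     # translation to the atlas origin, in millimetres

def to_standard_space(x, y, z):
    """Map a coordinate from one subject's brain into the shared atlas space."""
    return tuple(s * v + t for s, v, t in zip(SCALE, (x, y, z), SHIFT))

# An activation peak measured in one subject's native coordinates...
peak = to_standard_space(10.0, 20.0, 30.0)
# ...can now be compared with peaks from other subjects and other studies.
```

Once every study's peaks live in the same coordinate space, a location reported in one paper can be compared directly with locations reported in any other.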

The results of Fox’s algorithm were first published in 1984.

“I was surprised,” Fox said. “Some people loved it. Other people said no, it was the worst idea they’d ever heard, because it was ignoring differences between people’s brains.”

Smoothing over differences wasn’t Fox’s aim; rather, his goal was to provide a means of quantifying a nebulous space and spotting broad patterns.

“A typical way of doing an experiment is we might enroll 20-30 people, and for each of them we do the same experiment. This is enough for a single publication,” Fox said. “But with BrainMap we can take 20-30 publications and group those, to see which results really hold up.”

“When we pool the data, our study might include 500 people or more, so it makes the data much more powerful. You can do sophisticated analyses that you couldn’t do on smaller sets.”
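The pooling Fox describes can be sketched as a toy example. The studies, coordinates, and 10 mm radius below are invented for illustration, and BrainMap's actual meta-analytic statistics are far more sophisticated; the sketch shows only the core idea of counting how many independent studies report a peak near the same standard-space location.

```python
import math

# Toy data: activation peaks (X, Y, Z, in mm) from three hypothetical studies,
# all already transformed into the shared standard space.
studies = {
    "study_a": [(40, -22, 55), (2, 10, 48)],
    "study_b": [(42, -20, 52), (-30, 60, 4)],
    "study_c": [(38, -24, 57)],
}

def within(p, q, radius=10.0):
    """True if points p and q lie within `radius` mm of each other."""
    return math.dist(p, q) <= radius

def support(candidate, studies, radius=10.0):
    """Number of studies reporting at least one peak near the candidate point."""
    return sum(any(within(candidate, p, radius) for p in peaks)
               for peaks in studies.values())

# This location recurs in all three toy studies, so it "holds up":
print(support((40, -22, 55), studies))  # → 3
```

Results that appear in only one study score low under this kind of tally, while results replicated across many labs stand out, which is exactly the large-scale pattern-spotting the taxonomy enables.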

The original software Fox wrote to perform the algorithm was “okay,” he recalled. It worked, but it was not refined enough for a worldwide audience. In 1991, however, a group of researchers in London created their own software using Fox’s method, which they subsequently distributed for free. That software is still in use, and has become an international standard.

After the London software took off, the BrainMap taxonomy and XYZ space mapping became more prevalent across the field.

“Very few other disciplines have a data structure that is clear and widely adhered to,” Fox said. “One community that also has this type of advantage is genetics—unsurprisingly, meta-analysis is done in genetics quite a lot.”

When BrainMap began, PET imaging was the only way of mapping brain activity. Functional MRI (fMRI), which is now the most commonly used brain-mapping method, was developed in the early 1990s. To remain relevant and useful, the BrainMap taxonomy has evolved over the years to incorporate new imaging methods, like fMRI, and new statistical analyses.

But it all leads back to the same original concept and still relies on the same organizing force: research assistants and graduate students comb through each newly published paper and code it according to the current taxonomy while looking for needed updates.

“We do about half of the data coding in my lab; the other half comes from labs around the world,” Fox said. “We do the quality control on all of it. If people find that the taxonomy we use isn’t covering a particular type of experiment, we ask that they tell us.”

BrainMap’s database has now been cited in nearly 800 publications, including one seminal paper led by Stephen Smith of Oxford. Smith used the BrainMap database to do an analysis across all of the task-activation studies available. This analysis allowed him to see how many networks emerged within the brain. Smith then asked whether these active-brain networks were the same as those seen in healthy people resting with their eyes closed. If they were, it would indicate that the architecture of the brain was hardwired for particular network paths.

Smith found an “extremely high correspondence,” Fox said. “They’re the same networks. You just utilize them differently as you do different mental tasks.”

The Smith paper has since been cited more than 2,400 times by other scholars.

Madeleine Goodkind of Stanford used BrainMap to pull morphometric data (data that analyzes size and shape) for psychiatric disorders such as schizophrenia, bipolar disorder, major depressive disorder, and anxiety disorders. She asked whether the diseases showed different patterns of changes in the brain.

“If they’re really different diseases in the sense of brain damage or brain effects, then there should be different patterns,” Fox said. “What she found was they show the same effects.”

Knowing that information, he said, researchers can ask better diagnostic questions about the disorders.

When he first wrote the algorithm to create a standard mapping space, Fox had no idea his project would have such far-reaching effects.

“I was trying to solve a problem that would let me get the most signal, and the least noise, out of my data,” he said. “I published it and hoped other people would like the idea.”