Brain mapping is one of those problems that sounds straightforward until you actually try to do it. We’ve got a complete fruit fly brain map with 166,000 neurons, and that alone took years of AI-assisted work plus a lot of human eyeballing. A mouse brain is a thousand times bigger. A human brain is another thousand times beyond that. You start doing the math and it gets depressing pretty fast.
Google Research has been chipping away at this for over a decade through their Connectomics team. Their latest trick is a model called MoGen, short for Neuronal Morphology Generation, which does something clever: it generates synthetic neurons to train other AI models.
Why bother? Because real neuron data is messy, expensive to collect, and hard to label at scale. MoGen generates realistic fake neurons, represented as point clouds, using a technique called flow matching. The synthetic neurons look convincing enough that when you train a reconstruction model on a mix of real and synthetic data, it gets better at its job.
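To make "flow matching" concrete: the idea is to learn a velocity field that transports noise samples to data samples along straight-line paths. The sketch below shows the core training objective on toy 2D points, with made-up stand-in "models" (the real MoGen architecture and data are not public in this article, so everything here is a hypothetical illustration of the loss, not Google's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_matching_loss(model, x0, x1, t):
    """Conditional flow matching: regress a vector field toward (x1 - x0)
    along the straight-line path x_t = (1 - t) * x0 + t * x1."""
    xt = (1 - t[:, None]) * x0 + t[:, None] * x1
    target = x1 - x0                      # velocity of the linear path
    pred = model(xt, t)
    return np.mean(np.sum((pred - target) ** 2, axis=1))

# Toy setup: "data" points cluster around (3, 3); noise is standard Gaussian.
x1 = rng.normal(loc=3.0, size=(256, 2))   # stand-in for real samples
x0 = rng.normal(size=(256, 2))            # noise samples
t = rng.uniform(size=256)

# Two hypothetical vector fields: one aimed at the data mean, one inert.
constant_field = lambda xt, t: np.full_like(xt, 3.0)
zero_field = lambda xt, t: np.zeros_like(xt)

# The field pointing toward the data should fit the targets far better.
print(flow_matching_loss(constant_field, x0, x1, t) <
      flow_matching_loss(zero_field, x0, x1, t))  # True
```

In a real system the lambda would be a neural network trained by gradient descent on this loss; at sampling time you integrate the learned field from noise to produce a new synthetic neuron shape.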
According to the paper, which is being presented at ICLR 2026, this approach reduces reconstruction errors by 4.4%. That number sounds small, I know. But at the scale of a whole mouse brain, 4.4% fewer errors translates to 157 person-years of manual proofreading saved. That’s the difference between a project that’s feasible and one that’s a pipe dream.
The core problem MoGen addresses is the diversity of neuron shapes. Most cells in your body are roughly spherical blobs. Neurons are anything but. They’ve got long spindly axons that twist and branch, dendrites with little spines sticking out, and synapses everywhere. This geometric complexity is what makes connectomics hard. The current state-of-the-art reconstruction model from Google, PATHFINDER, works by identifying neurite segments and stitching them together. But it still makes mistakes, and those mistakes require human proofreaders to fix.
MoGen works by generating synthetic neurons that cover edge cases the real training data might miss. Think of it as data augmentation on steroids. The model learns the statistical patterns of real neuron shapes and can produce an endless variety of plausible morphologies. This gives the downstream reconstruction model a richer training set without needing more expensive electron microscopy.
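The mixing step itself is simple in principle. Here is a minimal, hypothetical sketch of blending real and synthetic samples into one shuffled training set (the function name, ratio parameter, and array shapes are all illustrative assumptions, not anything from the paper):

```python
import numpy as np

def mix_real_and_synthetic(real, synthetic, synth_fraction=0.5, seed=0):
    """Build a shuffled training set from real samples plus synthetic ones.

    synth_fraction sets how many synthetic samples to draw, relative to
    the number of real samples (0.5 -> one synthetic per two real).
    """
    rng = np.random.default_rng(seed)
    n_synth = int(len(real) * synth_fraction)
    picked = synthetic[rng.choice(len(synthetic), size=n_synth, replace=False)]
    mixed = np.concatenate([real, picked])
    rng.shuffle(mixed)  # shuffle along the first axis, in place
    return mixed

real = np.ones((100, 3))        # stand-in for features of real neurons
synthetic = np.zeros((500, 3))  # stand-in for generated morphologies
train = mix_real_and_synthetic(real, synthetic, synth_fraction=0.5)
print(train.shape)  # (150, 3)
```

The point of generating (rather than merely perturbing) data is that the synthetic pool can be arbitrarily large and biased toward rare morphologies, so the sampling step can deliberately over-represent edge cases the microscopy data lacks.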
I’ve seen data augmentation tricks before, and a lot of them are underwhelming in practice. But the 4.4% error reduction here is legitimately useful, especially when you consider that every percentage point improvement at this scale is worth weeks of human labor. The team also notes that this approach suggests directions for further improvements, which is researcher-speak for “we’re not done yet.”
The bigger picture is that connectomics is moving from a boutique science to something approaching engineering. Google has already mapped fragments of the zebra finch brain, a whole larval zebrafish brain, and a sliver of human brain, and they’re now working on a small section of mouse brain. Tools like MoGen are what make the next leap possible. We’re not going to get a full human connectome anytime soon, but the gap between what’s possible and what’s needed is shrinking faster than I expected.