There has been some discussion lately about the “spiky” nature of AI capabilities — how large language models can be really good at tasks humans find difficult (like complex mathematical reasoning) while struggling with tasks that seem trivially easy to us (like basic arithmetic or common-sense reasoning). Some argue this spikiness will persist as a fundamental feature of AI development, while others believe these disparities will eventually smooth out as models improve.
I’m not here to adjudicate that debate. I’m here to claim that progress in brain emulation is going to be spiky too, but for different and perhaps more fundamental reasons.
For years, the conventional wisdom about the technology tree for brain emulation has followed a seemingly logical progression: start with the animal that has the smallest nervous system with a mapped connectome (the 302 neurons of C. elegans), then work up to the larger brains of Drosophila (i.e. fruit flies, ~140,000 neurons), mice (~70 million neurons) and, eventually, humans (~86 billion neurons).
There were obvious complications — glia, neuromodulators, modeling the rest of the body and the world, etc. But this linear path seemed so obvious to many, including me, that progress in brain emulation as a whole was often tracked simply by monitoring the success of C. elegans emulation projects, like OpenWorm. For example, a LessWrong post from 2021 essentially uses C. elegans emulation progress as a proxy for the entire field of brain emulation.1
But over the past couple of years, as the Drosophila connectome was mapped and people began running simulation experiments based on it, we have learned that this linear model is almost certainly wrong. The reason is that researchers have already had more success simulating neural functions in the Drosophila brain than in C. elegans.
For example, the recent Shiu et al. 2024 paper created a model of the entire Drosophila brain (containing over 125,000 neurons and 50 million synaptic connections) that successfully predicted neural circuit activity and behavioral responses. Specifically, they were able to accurately simulate complete sensorimotor pathways — from sensory input to motor output — in both feeding and grooming behaviors, with 91% accuracy across 164 empirical predictions.
Notably, the computation required for this project was not immense. While the model included a lot of neurons, the per-neuron calculations were simple: each neuron’s membrane voltage was tracked with three basic equations, and an action potential was fired whenever that voltage crossed a threshold. They used the Brian2 simulator, which is designed to run on ordinary hardware rather than supercomputers. According to the paper, one simulation of sugar neuron activation took about 5 minutes per CPU core. You could probably run this on your own computer.
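To give a feel for how lightweight this style of modeling is, here is a minimal leaky integrate-and-fire sketch in Brian2. This is not the Shiu et al. model itself; the network size, connectivity, and parameters below are invented for illustration.

```python
# A minimal leaky integrate-and-fire network in Brian2 (illustrative only:
# the sizes and parameters here are made up, not those of Shiu et al. 2024).
from brian2 import (NeuronGroup, PoissonInput, SpikeMonitor, Synapses,
                    Hz, ms, mV, run)

tau = 10*ms        # membrane time constant
v_rest = -52*mV    # resting potential
v_th = -45*mV      # spike threshold
v_reset = -52*mV   # voltage after a spike

# One differential equation per neuron: voltage leaks back toward rest
# between inputs. This is the entire per-neuron state being tracked.
eqs = 'dv/dt = (v_rest - v) / tau : volt'

neurons = NeuronGroup(1000, eqs, threshold='v > v_th',
                      reset='v = v_reset', refractory=2*ms, method='exact')
neurons.v = v_rest

# Sparse random excitatory connectivity: each presynaptic spike nudges
# the postsynaptic voltage up by a fixed amount.
syn = Synapses(neurons, neurons, on_pre='v += 0.8*mV')
syn.connect(p=0.02)

# Poisson background drive standing in for sensory input.
drive = PoissonInput(neurons, 'v', N=50, rate=50*Hz, weight=0.5*mV)

spikes = SpikeMonitor(neurons)
run(200*ms)
print(f'{spikes.num_spikes} spikes in 200 ms across 1000 neurons')
```

The point is that the per-neuron state is a single voltage variable; everything molecular is folded into a handful of constants.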
Scientists tend to be reductionists. We assume that the easiest path in any field is to start with simple systems before advancing to more complex ones. But on the surface level, that doesn’t seem to be happening here. This is pretty puzzling. What’s going on?
The main explanation, in my book, is that simulating aspects of the Drosophila brain actually is simpler in important ways, despite the fly having far more neurons. Simulating spiking neurons while abstracting away molecular-level details is much simpler than having to capture those details. And C. elegans doesn’t have networks of spiking neurons performing neural functions, while Drosophila does.
Spiking neurons
In nature, spiking neurons are the fundamental computing units of most animal brains. These neurons communicate through brief electrical pulses called action potentials, or “spikes.” When neuroscientists create computational models of these neurons, they focus on capturing this key spiking behavior while abstracting away many of the more complex molecular details. In a horribly simplified computational model, a given neuron receives “input spikes” from other neurons, which increase its voltage until it eventually reaches a threshold and fires its own action potential.
Biologically, all of this is much more complicated, resting on the activity of ion channels and other cellular dynamics. But much of it can be abstracted away in simulation studies. How much of the molecular detail can be abstracted is uncertain, and likely depends on what one is trying to model and with what degree of accuracy. What is clear is that Shiu et al. 2024 already achieved a significant degree of simulation accuracy while abstracting away a substantial amount of molecular detail.
While spiking neurons exist in most animal brains, C. elegans has only a few of them, and they are activated only under a minority of circumstances. This is because the two families of voltage-gated Na+-permeable ion channels found in invertebrates, Nav1 and Nav2, which usually drive action potentials, have been lost in nematodes like C. elegans. This is somewhat surprising and probably reflects an adaptation of C. elegans to its environment.2
Instead of spiking, C. elegans neurons predominantly rely on graded potentials — i.e., gradual changes in voltage rather than all-or-nothing action potentials. This works because C. elegans is so small that signals only need to travel short distances. Additionally, despite not having Nav channels, C. elegans has actually expanded other ion channel families. For example, it has more nicotinic acetylcholine receptors and ligand-gated anion channels than humans.
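To make the contrast concrete, here is a toy comparison (plain Python, hypothetical parameters, not drawn from any published C. elegans model) of how a graded synapse and a spike-triggered synapse map presynaptic voltage onto transmitter release.

```python
# Toy contrast between graded and spike-triggered transmission.
# All numbers are illustrative, not from any published C. elegans model.
import numpy as np

def graded_release(v_pre, v_half=-40.0, slope=5.0):
    """Release varies smoothly (sigmoidally) with presynaptic voltage (mV)."""
    return 1.0 / (1.0 + np.exp(-(v_pre - v_half) / slope))

def spike_triggered_release(v_pre, v_th=-45.0):
    """All-or-nothing: release occurs only when voltage crosses threshold."""
    return 1.0 if v_pre > v_th else 0.0

for v in (-60.0, -50.0, -45.0, -40.0, -30.0):
    print(f'v_pre = {v:6.1f} mV   graded = {graded_release(v):.2f}   '
          f'spiking = {spike_triggered_release(v):.0f}')
```

In the graded case there is no discrete all-or-nothing event to abstract to, which is part of why C. elegans models are harder to simplify away from molecular detail.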
One can imagine a bunch of neural functions that might be largely accomplished by spiking neurons in particular. Those don’t exist in C. elegans. Instead, its neural functions are limited to ones that rely on more graded voltage changes and more molecular-level computation. Such functions almost certainly exist in mammalian brains as well. For example, there are a lot of neuropeptides in the hypothalamus, affecting aspects of neural function like hunger and sleep. But many higher-level cognitive functions in mammalian brains — long-term memory recall, for example — appear to be largely mediated by rapid communication across long distances, which requires spiking neurons.
The Age of Partial Em?
Some have used the term “lo-fi emulation” over the past couple of years. But I don’t love this term. To me, as a friend aptly put it, it feels like an oxymoron. The whole point of a brain emulation — as opposed to a mere simulation — is that it must follow all of the necessary low-level rules and constraints of the emulated system.
The term “partial brain emulation” does seem like it might make sense, though. A partial emulation would still follow the low-level rules and constraints of the system, to the level of scale separation required. However, as opposed to a whole brain emulation, it would only emulate certain neural functions, not necessarily everything that the brain does.
Emulations of C. elegans, Drosophila, or mice are interesting research projects that could have major implications for neuroscience. But plenty of people will shrug. If we ever get to the point where we can emulate the brain of a human who desires it, however, that has the potential to fundamentally change our societies and, for those who choose to pursue it, the human condition. There is a reason some people are discussing “uploading” as a way to level the playing field between AIs and humans in the coming decades. The ethical implications also deserve very careful consideration.
A key question is how the prospect of partial brain emulations might affect the pathway towards emulating human brains. My answer is that I don’t know. I can imagine that partial brain emulations of neural functions relying on spiking neurons might become possible before we can map and model certain molecular-level functions. I’m not sure how this would play out in practice. It probably depends on the extent to which people consider those molecular-level functions to be identity-critical.
Progress in this area will also be intertwined with progress in AI capabilities and alignment, which is already difficult enough to predict over the next few years, let alone the decades it will take to develop the technology necessary for human brain emulations. But I do think the possibility of partial brain emulations is something to ponder.
1. Honestly, this post doesn’t make my point as well as I would like. But I do think that a good number of people have used progress in C. elegans as a proxy for the field of brain emulation over the years. If you disagree with me, let me know and I’ll look into it more and either come up with better examples or admit that maybe this was just headcanon.
2. Notably, within nematodes, there is also variation in neuron numbers. Crown-clade nematodes like C. elegans have fewer neurons, while more basal nematodes like Pontonema have several thousand. This is arguably a “simplification” of the nervous system in crown-clade nematodes, and it appears to be an evolutionary adaptation.