Biophysically realistic neuron modeling in layer four of visual cortex
I recently checked out the interesting article by Arkhipov et al. (2018) and wanted to discuss it here. They present a well-grounded computational model of L4 (the primary input layer) of mouse visual cortex that replicates a number of experimental observations. Their model combines biophysically detailed neuron models, synaptic dynamics, and experimentally constrained connectivity. Here is their summary figure describing both the biophysical model and the leaky integrate-and-fire (LIF) portion:
This model is effectively meant to summarize much of what is known in the field. It is in line with the Markram approach to brain modeling: we know from electrophysiology data how these subcircuits of V1 L4 cells work, so let's use that prior knowledge to build a Hodgkin-Huxley-based model. They assessed their model's performance by reproducing a number of experimental findings, such as:
1. They reproduced statistical features of V1 neuronal responses, such as the log-normal distribution of firing rates.
2. They systematically investigated how neurons in the model respond to a variety of visual stimuli, both the type of stimuli mice might see in the real world (movies) and ones they would not (gratings).
3. As expected from previous literature, they showed that connectivity rules strongly impact neuronal responses. For example, adding recurrence to the network not only amplifies and synchronizes firing, but also biases neuronal tuning properties.
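The recurrent amplification in point 3 can be illustrated with a toy simulation. This is purely my own sketch, not the authors' code: a small population of leaky integrate-and-fire (LIF) neurons with heterogeneous feedforward drive, run with and without all-to-all recurrent excitation. All parameters are invented for illustration.

```python
# Toy sketch (not from the paper): recurrent excitation amplifies
# firing in a small LIF population. Units and parameters are arbitrary.

def simulate(n=50, steps=2000, dt=0.1, w_rec=0.0):
    """Run n LIF neurons; w_rec sets all-to-all recurrent excitation.
    Returns the total spike count across the population."""
    tau, v_th, v_reset = 10.0, 1.0, 0.0
    v = [0.0] * n
    # Heterogeneous drive: some neurons sub-threshold, some supra-threshold.
    drive = [0.8 + 0.4 * i / n for i in range(n)]
    spikes_prev = 0
    total_spikes = 0
    for _ in range(steps):
        spiked = 0
        for i in range(n):
            # Feedforward drive plus recurrent input from last step's spikes.
            i_syn = drive[i] + w_rec * spikes_prev
            v[i] += dt / tau * (-v[i] + i_syn)
            if v[i] >= v_th:
                v[i] = v_reset
                spiked += 1
        spikes_prev = spiked
        total_spikes += spiked
    return total_spikes

feedforward_only = simulate(w_rec=0.0)
with_recurrence = simulate(w_rec=0.2)
print(feedforward_only, with_recurrence)
```

With recurrence switched on, spikes from one time step depolarize the whole population at the next step, so otherwise sub-threshold neurons are recruited and the total spike count rises, which is the qualitative amplification effect described above.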
While not directly studying memory storage, the Arkhipov et al study is relevant to the brain preservation problem in a number of ways.
Primarily, it shows that compartmentalized circuits, such as L4 of V1, can be simulated accurately using contemporary biophysical models.
It also highlights the likely importance of connectivity rules in memory storage. Connectivity patterns form the backbone of any computational model, and Arkhipov et al. demonstrated how connectivity rules can shape (and constrain) network activity. This is therefore another data point suggesting that connectivity patterns are crucial factors in memory storage.
Their study used compartment models ("compartmental representation of somato-dendritic morphologies (~100–200 compartments per cell) and 10 active conductances at the soma that enabled spiking and spike adaptation"). Their results suggest that, at the single-neuron level, this amount of detail may be sufficient for simulations to reproduce in vivo functional properties, in turn suggesting a potentially reduced need for fine-scale detail preservation, although this is of course still subject to considerable uncertainty.
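To make "compartmental representation" concrete, here is a minimal sketch of the idea, entirely my own and far simpler than the paper's models: a neuron's dendrite is discretized into passive compartments coupled by an axial conductance, so that an input current injected at one end attenuates with distance. All parameters are made up; the real models also include active conductances at the soma.

```python
# Illustrative sketch only: a passive multi-compartment cable,
# forward-Euler integrated. Parameters are arbitrary units.

def simulate_cable(n_comp=20, steps=5000, dt=0.01):
    """Inject current into compartment 0; return the voltage profile."""
    g_leak, g_axial, i_inj = 1.0, 5.0, 1.0
    v = [0.0] * n_comp
    for _ in range(steps):
        v_new = v[:]
        for i in range(n_comp):
            i_total = -g_leak * v[i]            # leak current
            if i > 0:
                i_total += g_axial * (v[i - 1] - v[i])  # axial coupling
            if i < n_comp - 1:
                i_total += g_axial * (v[i + 1] - v[i])
            if i == 0:
                i_total += i_inj                # injection site
            v_new[i] = v[i] + dt * i_total
        v = v_new
    return v

profile = simulate_cable()
# Voltage attenuates with electrotonic distance from the injection site.
print(profile[0], profile[10], profile[19])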
They also compared their model's performance to a much simplified version, in which the biophysical neuron models were replaced by point-neuron models with either instantaneous or time-dependent synaptic kinetics. They found that the biophysically realistic model matched the experimental data quantitatively better than the simplified model did, although even with this extreme simplification the model still performed fairly well. This suggests that a level of detail even coarser than the compartment models may be sufficient.
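The two point-neuron synapse styles mentioned above can be sketched as follows. This is my own toy illustration with invented parameters, not the paper's implementation: an "instantaneous" synapse jumps the membrane voltage the moment a presynaptic spike arrives, while a "time-dependent" synapse injects a current that decays exponentially, giving a smoother, delayed response.

```python
# Hedged sketch: single passive point neuron responding to one
# presynaptic spike at t=0, under two synapse models. Arbitrary units;
# amplitudes are not normalized between the two cases.

def lif_trace(kinetics, steps=300, dt=0.1):
    """Return the membrane voltage trace after one input spike."""
    tau_m, tau_syn, weight = 10.0, 5.0, 0.5
    v = weight if kinetics == "instantaneous" else 0.0   # voltage jump
    i_syn = weight if kinetics == "exponential" else 0.0 # current jump
    trace = [v]
    for _ in range(steps):
        i_syn -= dt / tau_syn * i_syn        # synaptic current decays
        v += dt / tau_m * (-v) + dt * i_syn  # leak plus synaptic input
        trace.append(v)
    return trace

inst = lif_trace("instantaneous")
expo = lif_trace("exponential")
# Instantaneous kinetics peak at spike arrival; exponential kinetics
# produce a delayed, smoother peak.
print(inst.index(max(inst)), expo.index(max(expo)))
```

The difference in rise time is one reason time-dependent kinetics can track experimental dynamics more closely than instantaneous ones, at modest extra computational cost.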
While they generated their connectivity patterns from statistical rules with random instantiation, one might imagine instead deriving connectivity from an electron microscopy-based connectomics data set. It would be interesting to see whether a realistic biophysical model combined with a realistic connectomics data set could still reproduce a similar set of functional observations.
Theoretically, if one were to extend these kinds of models beyond L4 of V1 to a whole brain, it is interesting to consider what kinds of functional properties might emerge. However, I personally think we should be judicious about building such models.
(Thanks to Ken Hayworth for a discussion about this paper.)