SQAB Notes -- Day 2
Gerald Edelman, From brain dynamics to consciousness, how matter becomes imagination:
Questions in neuroscience: How is the brain put together? What are the fundamental operations? How can we connect psychology to biology?
Consciousness is what we lose when we sleep and what we gain again when we wake up.
In neural development, neurons migrate as a crowd in various layers. No two individuals have the same dispersion. There are some functional implications of these facets of neural development:
Precise, pre-specified, point-to-point wiring between neurons is excluded.
Uniquely specific connections cannot exist across individuals (generally speaking).
Divergent, overlapping arbors imply the existence of vast numbers of inputs to each cell, which are summed to produce a code.
Even if individuals are genetically identical, no two of their neurons will be alike.
Brain constructs: The majority of anatomical connections are not functionally expressed (i.e., silent synapses), alternative systems can account for the same behavior, hidden cues can have an effect (e.g., you can have behavior within language), and algorithms are essential.
Even in vision, the brain constructs a context-dependent image. Motion is an essential element of perception.
The world is unlabeled, as physics does not contain any theory that deals with neuroscience.
Darwinian population thinking can be seen in natural selection and in somatic selection systems such as immunity and neuronal group selection.
Neuronal group selection comes about through the creation of repertoires of circuits during embryogenesis by epigenetic variation and selection.
Re-entrant mapping is a massively parallel, dynamic process to get synchronous firing between distant regions.
Field potentials recorded from different neuronal groups can be correlated with one another, which indicates a relationship between those groups.
How do you sample the vast brain to get from molecule to behavior? You can't sample a billion neurons.
Degeneracy is the ability of structurally different elements to perform the same activity. The best example of a degenerate code is the genetic code, in which multiple codons specify the same amino acid.
If you have a stroke in the thalamus you lose consciousness forever. This indicates that at the least this structure is necessary for consciousness, but it is probably not sufficient.
With a SQUID device you can measure minute magnetic fields, and one useful technique is to do so while conducting binocular rivalry experiments. When the two images presented are orthogonal, the brain suppresses one of them at a time. If you flicker one image at 9.5 Hz and the other at 7.4 Hz, you can tag each image with a frequency. After a Fourier analysis, you can tell that the brain responds to only one of the tagged frequencies at a time.
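To make the frequency-tagging idea concrete, here is a minimal sketch (mine, not from the talk) that simulates a single recording channel and compares Fourier power at the two tag frequencies; the sampling rate, duration, and amplitudes are invented.

```python
import numpy as np

# Simulate one channel during binocular rivalry: the currently perceived image
# is tagged at 9.5 Hz, the suppressed image at 7.4 Hz (all values invented).
fs = 600.0                          # sampling rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)        # 10 s of data
signal = (1.0 * np.sin(2 * np.pi * 9.5 * t)    # dominant image, strong tag
          + 0.2 * np.sin(2 * np.pi * 7.4 * t)  # suppressed image, weak tag
          + 0.5 * np.random.randn(t.size))     # background noise

# Fourier analysis of the recording.
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2

# The larger tag-frequency power indicates which image dominates perception.
for f_tag in (9.5, 7.4):
    idx = np.argmin(np.abs(freqs - f_tag))
    print(f"power at {f_tag} Hz: {power[idx]:.1f}")
```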
The dynamic core hypothesis (see Figure 1 here) stipulates that in order to contribute to consciousness a brain region must perform high integration in a matter of milliseconds; any longer and it can't accomplish anything. Re-entry among neuronal groups in the dynamic core entails consciousness.
Qualia are just those discriminations entailed by selection among different core states.
Variance in the system is so great that it can't be noise. So maybe it comes as a byproduct of selection. We want to study the *mechanism* of this selection process.
Memory will probably involve degeneracy of a profound sort. Every perception is an act of creation, every memory is an act of imagination.
It will one day be feasible to create a conscious artifact, at the interface of physics and biology. Such an artifact would be a great human achievement, but it also raises some concerns.
There are two types of consciousness in his dichotomy: primary (sensory), and higher-order (semantic), which only humans have.
Modularity vs. general purpose: He thinks that modularity as a concept of the brain is a perverse notion. You must not conflate the necessary with the sufficient. The most important facet of the brain, in his conception, is re-entry.
Tim Shahan, Conditioned reinforcement:
Reinforcement provides the connective tissues of learning and the living flesh of behavior of humans in natural environments.
An initially neutral event acquires value because of its relation to primary reinforcement. One can see how long the stimulus resists extinction in order to test its strength.
Baum's generalized matching law accounts for the common deviations from matching (i.e., undermatching or bias).
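For reference, the generalized matching law is usually written as below, where B1 and B2 are response rates, r1 and r2 are reinforcement rates, a is sensitivity (a < 1 gives undermatching), and b is bias.

```latex
\log \frac{B_1}{B_2} = a \log \frac{r_1}{r_2} + \log b
```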
In the contextual choice model (Grace, 1994), conditioned reinforcement is independent of context, but sensitivity to the relative value changes with temporal context. This is similar to hyperbolic discounting.
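The hyperbolic discounting function this comparison presumably has in mind is the standard one, in which the value V of a reward of amount A declines with delay D according to a discount parameter k.

```latex
V = \frac{A}{1 + kD}
```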
A better measure of response strength than response rate may be resistance to changes in rates of reinforcement. So, you could disrupt reinforcement with extinction or prefeeding to see if the animal's responding changes.
Behavioral momentum sometimes messes with conditioned reinforcement.
Also, if you vary the value of the stimulus and measure resistance to change, the value of the stimulus once again has no effect on resistance to change. So we ask, are they really reinforcers? This has been a debate for over 50 years.
Conditioned reinforcement may be a misnomer. Instead, you should use words like signal, feedback, and/or signpost. These stimuli *guide* behavior instead of strengthening it. They could also be seen as reducing uncertainty, which is reinforcing, but only when the information has some utility (e.g., when observing a cue for S minus allows the animal to actually leave the situation).
In this account, conditioned reinforcement is just signaling.
Another example of this could be alcohol self-administration, where the actual dipper presentations of the stimuli are merely means to an end. This could be seen as underlying the U-shaped dose-response curve (Shahan, 2006).
Perhaps animals are learning facts instead of learning to act in certain ways. Thus the notion of strengthening behavior may be superfluous. Instead, animals integrate learned facts in order to make decisions.
Cynthia Pietras, Effects of added cost on choice:
Risky choice is choice in situations with environmental variance.
Behavioral ecology takes an evolutionary perspective and assumes that choices are adaptive.
The energy budget model stipulates that choices should depend on the organism's energy budget. When the energy budget is negative, choosing the risky option maximizes the chance of meeting it.
In humans, you substitute money for food for ethical reasons. Only if the subjects earn enough money do they get to keep it.
In this model, individuals should choose the more risky of two schedules only if they would be unlikely to reach the survival threshold by choosing the safer option. This is the behavior subjects usually display.
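Here is a minimal simulation of that rule (my own illustration with invented numbers, not from the talk): a safe option with a fixed payoff versus a risky option with the same mean, where the survival threshold determines which option gives the better chance of surviving.

```python
import numpy as np

rng = np.random.default_rng(0)

def survival_prob(option, threshold, n_choices=10, n_sims=100_000):
    """Probability that total earnings over n_choices reach the threshold.

    The safe option always pays 2 units; the risky option pays 0 or 4 with
    equal probability (same mean). All numbers are invented for illustration.
    """
    if option == "safe":
        totals = np.full(n_sims, 2 * n_choices)
    else:
        totals = rng.choice([0, 4], size=(n_sims, n_choices)).sum(axis=1)
    return (totals >= threshold).mean()

# A threshold of 18 is reachable on the safe option (positive budget);
# a threshold of 26 is not (negative budget), so risk pays off there.
for threshold in (18, 26):
    p_safe = survival_prob("safe", threshold)
    p_risky = survival_prob("risky", threshold)
    print(f"threshold {threshold}: safe {p_safe:.2f}, risky {p_risky:.2f}")
```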
In their study, subjects generally followed the predictions of the energy budget model. Their data are better modeled by a dynamic function because, for example, if people get lucky on the first trial of the risky schedule, it may then make sense to switch to the low-risk option.
When it is more costly to deviate from optimality (i.e., later trials), subjects show a higher proportion of optimal responding.
William Baum, Dynamics of choice:
All behavior is choice, because every situation permits more than one activity. When the conditions change, the allocation changes. Because time is finite, if one activity increases another must decrease.
He studies concurrent VI VI schedules with pigeons! In this case, choice = log (B1/B2)
The transient phase occurs when conditions change, and then the behavior reaches a steady state, in idealized form. When the variation is no longer systematic (i.e., just unpredictable noise), then you are at the steady state.
To estimate the equilibrium, you need a long time scale, but to estimate the dynamics, you need a shorter one. So: single sessions for dynamics vs. overall averages for equilibrium. Or you can shrink both scales, relatively speaking: within-session components vs. single sessions.
What is it that changes? Is it always the behavior ratio? If so, one could look for reward matching at each environmental condition. However, it's also possible that local processes lead to the overall patterns, and not the other way around.
Transition data is pretty sweet; you get abrupt changes in visits following schedule changes. These changes rely on matching on quite a small time scale (preference pulses), which is evident when you evaluate the log of the food ratio versus the log of the peck ratio.
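As a sketch of that evaluation (invented per-window counts rather than real pigeon data), you can regress the log peck ratio on the log food ratio; the slope and intercept correspond to the sensitivity and bias terms of the generalized matching law given earlier.

```python
import numpy as np

# Invented counts of pecks and food deliveries on keys 1 and 2, one entry
# per small time window.
pecks1 = np.array([55, 40, 70, 20, 35, 80])
pecks2 = np.array([25, 45, 15, 60, 40, 10])
food1 = np.array([6, 4, 9, 2, 4, 11])
food2 = np.array([3, 5, 2, 8, 5, 1])

log_pecks = np.log10(pecks1 / pecks2)
log_food = np.log10(food1 / food2)

# Least-squares fit of the log behavior ratio on the log food ratio.
slope, intercept = np.polyfit(log_food, log_pecks, 1)
print(f"sensitivity (a) = {slope:.2f}, log bias (log b) = {intercept:.2f}")
```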
When you look at switches only, you see a pattern of a long visit following reinforcement, then rapid changes back and forth. You can derive the preference pulse data from the switch visit data, but not vice versa, proving in this case the efficacy of reductionism.
How general is this approach of thinking about different time scales? He expects that there will be order among time scales in other avenues of research as well.
Joel Myerson, Cognitive aging, a behavior-theoretic approach:
There are age-related declines in processing speed, working memory, learning (complex), and reasoning. This order underlies a hypothesis that changes in speed feed forward to changes in working memory and then learning, etc.
The most pronounced change is in processing speed, in a manner that is dose-dependent with age, ubiquitous across tasks, and universal across individuals. But is it uniform?
There are two proposed mechanisms by which slowing produces these deficits: exogenous ones, as in listening comprehension, and endogenous ones, as in multitasking. The exogenous cases are sometimes worse because the time is limited and you may never be able to recover: if you can't listen fast enough to hear what somebody says, it will be difficult to recover that moment, whereas if you become a slower reader you can just read more to compensate.
One experimental task that has been studied in this context is mental rotation. Differences in response times in old versus young people increase significantly from 0 to 130 degrees orientation.
In a visual conjunction search task, the more elements there are in the array, the more of a divergence there is between reaction times in young and old subjects.
In the abstract matching task, once again increasing the level of complexity leads to an age by complexity interaction.
Qualitatively, it seems that almost anything you do that increases complexity boosts the size of the age difference, although I am curious to see whether some of these effects would disappear if the data were passed through a log transform.
If you use a Brinley plot, you find that older individuals in most data sets are about 2.5 times slower. This suggests that the slowing is pretty much uniform, with the exception of verbal vs. visuospatial. That is, if words are involved, the slowing of RTs in older individuals is not as pronounced as it is for spatial tasks.
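A Brinley plot is just each condition's mean RT for the older group plotted against the younger group's mean RT, with the fitted slope estimating the slowing factor; the sketch below uses invented data constructed to give a slope near 2.5.

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented condition means (seconds): older RTs roughly 2.5x the young RTs.
young_rt = np.array([0.4, 0.6, 0.8, 1.1, 1.5, 2.0])
old_rt = 2.5 * young_rt + np.random.default_rng(1).normal(0, 0.1, young_rt.size)

slope, intercept = np.polyfit(young_rt, old_rt, 1)
print(f"estimated slowing factor: {slope:.2f}")

plt.scatter(young_rt, old_rt)
plt.plot(young_rt, slope * young_rt + intercept)
plt.xlabel("young adult mean RT (s)")
plt.ylabel("older adult mean RT (s)")
plt.title("Brinley plot (simulated data)")
plt.show()
```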
There are individual differences in general RT speed across tasks, such that some individuals are generally slower and some are faster. In group means, university students perform better than vocational students, suggesting that RTs may in some way index (or even be responsible for) general intelligence.
One possible confound is that university students (in the young age group) may simply be using their brains more. This can't be the only explanation, however, as there is a dose-dependent effect across all age cohorts.
Intelligence is a useful construct because people who are good at one thing tend to be good at lots of other things. He says that this "may be the most replicated result in all of psychology."
Working memory is another individual characteristic that may predict intelligence. You can assess this with spatial span tasks, and force subjects to perform a secondary task of some sort to induce multitasking. You get selective interference from multitasking, which means that secondary tasks only reduce performance when they require activity in the same domain (i.e., a spatial task would interfere with other spatial tasks, but spatial and verbal would not).
In older groups there is no evidence of increased susceptibility to interference, although there is a drop in memory span in older groups in general, especially in visuospatial tasks.
Crystallized intelligence is background knowledge; fluid intelligence is success in novel situations, with either insight or rapid learning as the mechanism.
The correlation between fluid intelligence as measured by Raven's Advanced Progressive Matrices and a 3-term learning task (i.e., pattern learning/recognition) is about 0.50. This is much stronger than the corresponding correlation with working memory, and in his words amazingly high for this kind of research.
His research team also got a really high correlation, 0.75, between learning and fluid intelligence in adults.
Intelligence predicts educational achievement, educational achievement relies on learning (to an extent...), and therefore learning should predict intelligence.
The cascade hypothesis is that age leads to slower processing which leads to a reduction in working memory which leads to a reduction in learning abilities which leads to a reduction in fluid intelligence.
In their model, once you account for fluid intelligence, age adds no predictive power for the divergence in reaction times. This suggests that if you could boost fluid intelligence, you could mitigate some of the cognitive effects of aging.
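As an illustration of the kind of model comparison that claim implies (the data-generating assumptions below are mine, not theirs), you can predict slowing from fluid intelligence alone and then check whether adding age improves the fit.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200

# Simulated cascade: age lowers fluid intelligence, and fluid intelligence
# (not age directly) drives RT slowing. These relationships are invented.
age = rng.uniform(20, 80, n)
fluid_iq = 120 - 0.4 * age + rng.normal(0, 5, n)
slowing = 3.0 - 0.02 * fluid_iq + rng.normal(0, 0.1, n)

def r_squared(predictors, y):
    """R^2 from an ordinary least-squares fit with an intercept column."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

print(f"fluid IQ only:  R^2 = {r_squared([fluid_iq], slowing):.3f}")
print(f"fluid IQ + age: R^2 = {r_squared([fluid_iq, age], slowing):.3f}")
```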