Too Short for a Blog Post, Too Long for a Tweet LXVIII

Here's an excerpt from a book I am reading,
"Incognito: The Secret Lives of the Brain," by David Eagleman:

Consider memory. Nature seems to have invented mechanisms for storing memory more than once. For instance, under normal circumstances, your memories of daily events are consolidated (that is, “cemented in”) by an area of the brain called the hippocampus. But during frightening situations—such as a car accident or a robbery—another area, the amygdala, also lays down memories along an independent, secondary memory track. Amygdala memories have a different quality to them: they are difficult to erase and they can pop back up in “flashbulb” fashion—as commonly described by rape victims and war veterans. In other words, there is more than one way to lay down memory. We’re not talking about a memory of different events, but multiple memories of the same event—as though two journalists with different personalities were jotting down notes about a single unfolding story. 

So we see that different factions in the brain can get involved in the same task. In the end, it is likely that there are even more than two factions involved, all writing down information and later competing to tell the story. The conviction that memory is one thing is an illusion. 

Here’s another example of overlapping domains. Scientists have long debated how the brain detects motion. There are many theoretical ways to build motion detectors out of neurons, and the scientific literature has proposed wildly different models that involve connections between neurons, or the extended processes of neurons (called dendrites), or large populations of neurons. The details aren’t important here; what’s important is that these different theories have kindled decades of debates among academics. Because the proposed models are too small to measure directly, researchers design clever experiments to support or contradict various theories. The interesting outcome has been that most of the experiments are inconclusive, supporting one model over another in some laboratory conditions but not in others. This has led to a growing recognition (reluctantly, for some) that there are many ways the visual system detects motion. Different strategies are implemented in different places in the brain. As with memory, the lesson here is that the brain has evolved multiple, redundant ways of solving problems. The neural factions often agree about what is out there in the world, but not always. And this provides the perfect substrate for a neural democracy. 

The point I want to emphasize is that biology rarely rests with a single solution. Instead, it tends to ceaselessly reinvent solutions. But why endlessly innovate—why not find a good solution and move on? Unlike the artificial intelligence laboratory, the laboratory of nature has no master programmer who checks off a subroutine once it is invented. Once the stack-the-blocks program is coded and polished, human programmers move on to the next important step. I propose that this moving on is a major reason artificial intelligence has become stuck. Biology, in contrast to artificial intelligence, takes a different approach: when a biological circuit for detecting motion has been stumbled upon, there is no master programmer to report this to, and so random mutation continues to ceaselessly invent new variations in circuitry, solving motion detection in unexpected and creative new ways.

This viewpoint suggests a new approach to thinking about the brain. Most of the neuroscience literature seeks the solution to whatever brain function is being studied. But that approach may be misguided. If a space alien landed on Earth and discovered an animal that could climb a tree (say, a monkey), it would be rash for the alien to conclude that the monkey is the only animal with these skills. If the alien keeps looking, it will quickly discover that ants, squirrels, and jaguars also climb trees. And this is how it goes with clever mechanisms in biology: when we keep looking, we find more. Biology never checks off a problem and calls it quits. It reinvents solutions continually. The end product of that approach is a highly overlapping system of solutions—the necessary condition for a team-of-rivals architecture.
