A sharp border separating the physical and the mental is intuitively obvious. It is the reason why most cultures believe in a soul of some kind — an entity distinct from the body that experiences its sensations and beliefs.
Science, too, has generally seen this border as impenetrable. The great German mathematician Gottfried Leibniz used the famous “mill argument”: If there were, he wrote in 1714, “a machine whose structure makes it think, sense, and have perceptions,” then you could imagine enlarging it to the size of a mill and walking into it — but all you would see are parts pushing one another, and no amount of detail could explain how any of that converts into actual thinking and perceiving.
In the mid-20th century, this wisdom was considered so unshakeable that an entire tradition of experimental psychology was premised on basically admitting defeat: behaviorism. It held that behavior — that is, the physical movements performed by an animal body — is the only possible readout of an animal mind, and that “mind,” as a scientific concept, should never even be discussed. The goal was merely to determine which inputs into the “black box” of the brain predict which outputs come out of it — which stimuli cause which behavior. The rest was pseudoscience.
But then, as time went on, neuroscientists — as experimental psychologists came to be known in the 21st century — did get inside the black box. Just as microscopy once illuminated previously inaccessible worlds, brain imaging technologies opened a window onto the inner workings of the brain — not just its behavioral output. Brains, it became clear, are not just relay stations that route sensory stimuli to muscles; they also create meanings. By abstracting and memorizing complex patterns over long periods of time, neurons can represent in their activity ideas that have no specific physical counterpart in the outside world.
Today, using tools such as fMRI, we can often trace the extremely specific meanings of these abstract ideas — individual words, personalities, concepts — to small clusters of neurons or, in research animals, even to individual brain cells. We now have technologies to create false memories, to turn them on or off at will, and to watch them take hold in an animal’s brain without the animal having to do anything at all. We can decipher the content of what goes through an animal’s mind. Perhaps the quintessential case is place cells — neurons that are associated with a specific location and that fire when an animal goes there or even thinks about it.
For example, when a rat sleeps, its brain replays the sequences of place cells that traced its past movements — a dream you can decipher without any ongoing behavior. This gradual movement of brain science away from the dogma of behaviorism — the idea that behavior is the only thing that’s truly real — is sometimes termed the “cognitive revolution.”
The wall between the physical and the mental remains, however, even though it has become palpably thinner.
I often use the broadly descriptive word mind to encompass the processes that Leibniz called “think, sense, and have perceptions.” In today’s science, this includes two distinct concepts: cognition and consciousness. Cognition is information processing: memory and abstraction. Today, no one would have a problem saying that a machine has cognition.
Consciousness is something else: the first-personness of it all, the fact that it all feels like something. Consciousness, for most people, remains on the other side of the wall. The wall is vigilantly guarded by its staunchest defender, the philosopher David Chalmers. Chalmers is famous for introducing “the hard problem of consciousness,” which is, roughly speaking, a modern version of Leibniz’s mill argument. No amount of knowledge about the objective nature of the brain, says Chalmers, can explain the mysterious nature of subjective experience.
My own view on consciousness most closely aligns with that of the cognitive neuroscientists Karl Friston and Andy Clark. Friston and Clark stand opposed to Chalmers: They argue that the wall between the physical and the mental, the objective and the subjective, doesn’t exist. The brain is a biological machine with built-in abstraction and memory that runs sophisticated software. This software is constantly trying to understand ongoing experience, and consciousness naturally arises as part of this process of understanding. We have yet to understand fully how all of the parts work — but once we do, say Friston and Clark, there’s no additional mystery, no impenetrable wall beyond which we can’t reach.
The real problem, which they call “the meta-problem of consciousness,” is to explain the idea we have that there’s something else to our consciousness — the conviction that first-personness is distinct from simply being. What needs to be explained is not the hard problem of consciousness, but why David Chalmers believes it is a problem.
To say this debate is unresolved is an understatement.
I agree with Friston and Clark that first-personness evaporates if you meditate on it for long enough. There is no need to explain how the objective thing transitions into the subjective thing, because there really aren’t two things. There is no wall.
What I find curious is that Friston and Clark came to this conclusion by studying the human mind, whereas I arrived at the same place by studying sea slug neurons. You could say we approached the wall from both directions and met in the middle without ever finding it.
When I described how sea slugs abstract information — how the neuronal signals that mean touch to the tail and touch to the head combine into a more general meaning, dangerous touch to the body — I deliberately avoided labeling these meanings as “ideas” or “essences.” Essence is the word I use to describe “ideas of nature” — the products of evolution’s ingenuity, embodied in the world itself. Idea, on the other hand, implies meaning that exists in our imagination, a logical product of our brain activity. Essences are objective; ideas are subjective. So where do abstractions in the slug’s brain belong? Are they essences or ideas?
From our human perspective, external to the slug, they are essences — just like the essence of a leg is to walk, the essence of a neuronal signal can be dangerous touch to the body.
But if we had a way to look at the world from the perspective of the slug, we would have no reason not to call these essences ideas. From this internal sea slug point of view, we would know nothing of the neurons that move our siphons — we would just know of the potential danger in the environment when someone touches our tail.
Sure, you might say, but that’s not really an idea. It’s an “idea,” in quotation marks, at best — a metaphorical description of how a sea slug responds to a stimulus, and it is pointless to give our rich, subjective mental constructions the same name.
But this is exactly how most people think about sea slug memory, too — it’s “memory” in quotes, not real memory. And they are wrong — our memory is, in fact, the same process, just taken to an extreme of complexity.
What if our intuition here is just as wrong as it is with memory? Intuitively, it makes just as little sense that sea slug memory, human memory, and the memory of neurons in a petri dish share something in common, and yet they do. What if the same is true of consciousness? What if the subjective is just a complicated form of the objective? What if all ideas really are essences?
If this is so, then the wall between the physical and mental is not really a wall at all, but simply a matter of perspective — hardware and software. You can think of neuronal spikes and synaptic plasticity as properties of either, in the same way that you can think of a phone app either as lines of code or as electrons coursing through a microchip — both are correct, at the same time. And depending on which way you think about spikes and synaptic plasticity, they are either external or internal to you.
What studying sea slugs has taught me is that our understanding of human experience is often clouded by the extraordinary complexity of our brains. Once you disregard the complexity, many things that puzzle us about it — like the nature of life and death — simply dissolve, like a mirage. And one of those puzzling things that dissolves when you look at it closely is the distinction between the objective and the subjective.