RUSS ROBERTS


Wanting to want what we want

06.20.16

I have been thinking about consciousness recently. What is the source of human consciousness? Is it merely physical? Can it be uploaded? Duplicated via artificial intelligence? Will machines become conscious enough to feel and to care the way we do? This brief essay pulls together my thoughts on these issues, linking some reading and thinking I’ve been doing on consciousness with a number of conversations I’ve had recently for EconTalk on artificial intelligence. It’s my way of trying to see how these ideas fit together, at least in my own consciousness. Perhaps you will find it of interest.

Philosophers David Chalmers and Thomas Nagel have raised the question of whether it is possible to have a scientific/material explanation of consciousness. Chalmers has argued we’re going to need a new biology. Nagel has written that the “materialist neo-Darwinian conception of nature is almost certainly false,” suggesting that if current theories of evolution and biology and chemistry cannot explain consciousness, then they are not just incomplete but deeply flawed and unsatisfying. I take his challenge to be something along these lines: how is it that the creature who ponders the source of life on earth has yet to develop an explanation of its most distinctive feature, how it feels to do the pondering? That’s a little melodramatic perhaps, but the gist of it is that the essential feeling we have of being alive, the texture of daily life, seems hidden from standard materialistic scientific explanations. Chalmers calls it “the hard problem” of consciousness. The issue is even more interesting because Chalmers and Nagel are both non-believers who reject a divine explanation for consciousness.

What is the hard problem of consciousness, exactly?

It’s a little tricky. Here is the way I understand it. I have a rich inner life. I have the sensation of the past, the present, and the future. I have regrets, dreams, fears. In some sense, I experience life as a movie where I play a starring role. Life has texture. It isn’t just a bunch of sensations that I take in and assess for their survival value, reacting to my biological urges. The stars fill me with wonder. A good conversation lifts my spirits and stays with me. The musical Hamilton makes me cry. These are all what philosophers call qualia, my subjective experience of my sensory perceptions. They include the vivid redness of a ripe strawberry. And, I suppose, general feelings of satisfaction, malaise, elation, and ennui.

These subjective experiences do not appear to have a physical counterpart. But they are very real. To me at least. And I presume you have similar subjective experiences of your own. But what is the physical manifestation of these subjective feelings? How do they contribute to our evolutionary fitness? How can they be explained? These are some of the questions I understand Chalmers and Nagel to be asking. The Mary’s Room thought experiment of Frank Cameron Jackson is also relevant. Many philosophers and scientists find these provocative. Others reject them as uninteresting or simply mistaken in their perspective.

These ideas on consciousness came to mind as I was interviewing Pedro Domingos for EconTalk. In his book, The Master Algorithm, Domingos writes:

If you’re a parent, the entire mystery of learning unfolds before your eyes in the first three years of your child’s life. A newborn baby can’t talk, walk, recognize objects, or even understand that an object continues to exist when the baby isn’t looking at it. But month after month, in steps large and small, by trial and error, great conceptual leaps, the child figures out how the world works, how people behave, how to communicate. By a child’s third birthday all this learning has coalesced into a stable self, a stream of consciousness that will continue throughout life. Older children and adults can time-travel–aka remember things past, but only so far back. If we could revisit ourselves as infants and toddlers and see the world again through those newborn eyes, much of what puzzles us about learning–even about existence itself–would suddenly seem obvious. But as it is, the greatest mystery in the universe is not how it begins or ends, or what infinitesimal threads it’s woven from. It’s what goes on in the small child’s mind–how a pound of gray jelly can grow into the seat of consciousness.

A beautiful passage. Then Domingos posits the existence of a learning robot, Robby the Robot. Imagine a robot with artificial intelligence—a set of built-in algorithms for sorting and identifying objects, discovering their connectedness, and so on. Let Robby exist alongside the infant, seeing what it sees, taking in what it takes in, hearing what the infant hears, learning about the world in the same way an infant would, processing the same sensory information. In theory, it would learn just as effectively as (perhaps more effectively than) the human.

How human would such a robot be? After all, it has had the same formative experiences as the human infant. But I wondered about the following. Imagine that 50 years go by. The infant is now a middle-aged man and Robby is a well-maintained machine. You show them both a stuffed animal from when they were four or five years old. The middle-aged man might feel a flood of nostalgia. Would Robby? Presumably Robby could access the memory of the stuffed animal more reliably than the human. But could it feel what we humans experience as nostalgia? Would it sense a longing for the innocence of childhood? Would it have regrets over an imperfect life? Might it feel satisfaction for a life well-lived?

There are different kinds of artificial intelligence. Some see the brain as a computer and believe that the most human form of AI will be a computer that mimics the processing done by the human brain. Others see this as essentially impossible (see this interview I did with physicist Richard Jones) and believe that advances in AI will come from machine learning and more sophisticated computer processing and algorithms that may easily surpass biological forms of reasoning. Regardless of what is possible, the question remains how we will feel about our interactions with artificial intelligence that passes the Turing Test—AI that will feel human to us. Will it have consciousness? Will it have rights? Will we feel bad destroying it or pulling the plug on it? Should we feel bad doing so? Will AI approach not just our ability to reason but our ability to feel?

Inevitably, these questions of artificial intelligence and consciousness force us to confront the question of who we really are. What makes me, me? What might make a machine more than a machine? Am I just a bundle of physical connections–neurons firing that give me the illusion of self-ness, of self-control? Or is there something outside my stream of consciousness, the “real” me, that can observe the physical processes and respond to them with will and volition? We feel as if we are in charge of our destiny, yet if everything is physical and physical only, can there be any real sense in which my actions are not predetermined?

These questions of free will vs. determinism, of the mind/body problem, and whether there is anything beyond the observable physical processes of our minds and bodies have been debated for a very long time, and I don’t know if they will ever be resolved in a totally satisfactory way. They are tangled up in philosophy, neuroscience, and theology. The academic world and our culture in America have moved increasingly toward one side of these debates. But there is not unanimity. It is so hard to know what we know about these questions. Certainly our understanding of the physical world continues to grow. The trend is that everything or almost everything will bend to the power of the scientific method and rationality. Things that were seen as impossible (computers beating the best Go players, facial recognition, and so on) yield to the power of artificial intelligence created by humans. It is just a matter of time, some argue, before the arrival of the singularity—when our knowledge of the physical world and our ability to control it begins to increase at an ever-increasing rate. We (or it, embodied in advanced and advancing artificial intelligence) will cure disease, unemployment, stagnation, the physical limitations of death and this planet. Or perhaps there is an asymptote to our knowledge or our ability to apply it. I am drawn to the Venetian proverb that Nassim Taleb quotes—the farther from shore, the deeper the ocean. I remain agnostic about what even in the physical world is within the grasp of our reason, and open to the possibility that some or perhaps much will remain beyond it.

The Piasetzner Rebbe—Rabbi Kalonymus Kalman Shapira—writing in early 20th century Poland in To Heal the Soul, says that if you only respond to your physical urges, seeking pleasure and avoiding pain, you are no different from a plant or an animal. For plants and animals, there is no distinction between the individual plant or animal and its species. Each responds to its environment like any other, driven by its biological imperatives. If we respond only to our physical urges and even to the norms and social pressure of the culture around us, then we are no different from plants and animals. We have forsaken our human-ness, which lies in whatever is distinctive about each of us. The Piasetzner Rebbe urges us to rise above the merely physical and to be mindful—to take the opportunity to be aware of the choices we face and to make those choices as a willful, mindful human being created in the image of God who is exhorted to walk in God’s ways.

This view of the Piasetzner Rebbe is at the heart of the Jewish tradition and other religions. It is strongest on Yom Kippur but it runs through all of Jewish theology—we can choose. We can decide who we want to be. It just requires paying attention—mindfulness, and the will to change. But you don’t have to be Jewish or a religious believer of any theistic flavor to embrace this basic mindset. Amazon is filled with self-help books that promise happiness without religion. It appears to be just a question of technique and will. You can be who you want to be. But is it true? Mark Twain captured the skeptical viewpoint when he observed that it is easy to stop smoking—he’d done it a score of times. And yet, some people do stop smoking, or become religious, or leave religion behind, or get fatter or thinner. Kingsley Amis wrote (in One Fat Englishman, I think) that inside every fat person is a fatter person waiting to get out. It is hard to know who is the real me.

The whole idea of mindfulness (and of Adam Smith’s impartial spectator) is to step outside oneself and become aware of the physical processes that push and pull our behavior. While stepping outside oneself would seem by definition to be literally impossible, we all have had the sensation of observing ourselves as we respond to our emotions and the challenges that arise in day-to-day living. We also know the sensation of being swept along by our emotions and physical desires and acting impulsively in ways that feel thoughtless. The very word “thoughtless” means that we are without thought—that we sometimes act like animals or machines, merely following our instincts, our algorithms. Do we have control over these moments? The practice of mindfulness certainly gives us the feeling that we do. Or perhaps this is only an illusion.

These different ideas came together for me after a conversation with psychologist Angela Duckworth over the question of character and whether we can change our character. In the course of that conversation, she mentioned the 1971 paper by philosopher Harry Frankfurt, “Freedom of the Will and the Concept of a Person.” Frankfurt echoes the Piasetzner Rebbe and argues that what distinguishes us from animals is that we can have desires about our desires. (It is a very dense article—this excellent post by Amitabha Palmer helped me understand it.)

Animals desire food, sex, shelter, survival. We humans have such desires along with a much longer list including respect, honor, power, fame, wealth, and so on. Frankfurt argues that while, like animals, we have desires, we have something the animals do not have. We can have desires about our desires. I might crave ice cream and I might decide to satisfy that craving. But I can also desire not to crave ice cream. I still might end up eating ice cream, but I can have regret. I can decide to try to curb my desire the next time I face a dessert choice. An animal, claims Frankfurt, cannot do that.
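Frankfurt’s distinction can be caricatured in a few lines of code. This is my own toy sketch, not anything from Frankfurt or Domingos: a first-order desire is just a strength attached to an object, while a second-order desire is an adjustment the agent applies to one of its own desires. An animal (or a simple algorithm) acts on the raw strengths; a person can act on the adjusted ones.

```python
# Toy model (my invention, purely illustrative) of first- and
# second-order desires in the spirit of Frankfurt's distinction.

# First-order desires: how strongly the agent craves each thing.
first_order = {"ice_cream": 9, "exercise": 3}

# Second-order desires: desires *about* those desires. Here the agent
# wishes its craving for ice cream were weaker.
second_order = {"ice_cream": -5}

def effective_desire(name):
    """First-order strength, adjusted by any desire about that desire."""
    return first_order[name] + second_order.get(name, 0)

# The "animal" acts on first_order alone; the "person" acts on the
# adjusted value, which is what curbing a craving amounts to here.
print(effective_desire("ice_cream"))  # prints 4
print(effective_desire("exercise"))   # prints 3
```

Nothing deep is happening in the arithmetic; the point is only structural. The second dictionary is a desire whose object is another desire, which is exactly the kind of reflexive layer Frankfurt says animals lack, and which one might ask whether Robby the Robot could possess.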

When I shared Frankfurt’s article with a teacher of mine, Rabbi James Jacobson-Maisels, who is deeply knowledgeable about the Piasetzner’s teachings and who I thought might enjoy seeing an early 20th-century Polish rabbi echoed by a late 20th-century philosopher, he responded that Frankfurt’s claim might be useful not only for understanding the differences between humans and animals but also as a standard for judging the difference between human intelligence and artificial intelligence.

That seems right to me. A human child might grow up to be a cruel and arrogant adult. Perhaps at some point, such an adult might realize that he had lived life poorly. That he had made poor choices. That it was not too late to change and that change was possible. Such epiphanies are the stuff of Dickens and Hollywood. But could Robby the Robot have such feelings? Could an artificially intelligent creature wish that its algorithms had been different? Could a smart vacuum cleaner feel sadness at not having a chance at being a driverless car?

Such questions feel foolish. Perhaps that is because we cannot imagine the future of artificial intelligence. One answer, I suppose, is that if Robby the Robot really were like a human being in an artificially designed mimicry of our chemistry, physics, and biology, then it would feel sadness, remorse, pride, just like the adult it has grown up alongside. Robby, like you and me, will have desires about its desires. It will not merely want to follow the algorithms it follows, but it will, like you and me, develop thoughts about those algorithms, and have the ability, as you and I at least feel we have, to change course, to resist its programmed urges and make itself anew. Perhaps it will struggle to change course, just as we do, sometimes succeeding but only temporarily, sometimes merely considering the possibility while doing nothing about it, sometimes succeeding gloriously, triumphantly, and re-inventing itself.

We do not understand why some people are able to change while others are not. I don’t understand why there are times when I can change a pattern of behavior while other patterns seem impossible to change. Why is it that my consciousness ignores one experience, while the same experience at another time lights a fire that causes me to change a habit or adopt a new one? Why is it that a book can change one person’s life but leave another person untouched? Can I really develop willpower over my desires? Can I change my character? Or are these feelings of self-control only an illusion? These are mysteries we may never understand, or they may be mysteries only today. Maybe tomorrow we will know how to realize the desires we have about our desires.

I think not. And I think it unlikely if not impossible for our machines to have such skills. But perhaps I lack imagination. I confess that while I often want to celebrate our species’ relentless pursuit of scientific understanding and how often that pursuit leads to innovation and transformation, I concede that there is a part of me that likes the idea that there are some mysteries we cannot unravel, whether they are the prelude to the Planck epoch or what makes the heart begin to beat within the infant in the womb or the physical explanation for consciousness. Or maybe I just lack sufficient desire to understand what we desire that we desire.


♦♦♦