Sorry, the subtitle just put me in mind of this video. (Song starts at 51 seconds; for some reason it’s not letting me start the video at that time!)
On the website of The Atlantic this morning, there’s an article titled “The Monk Who Thinks the World is Ending.” Subtitle: “Can Buddhism fix AI?”
The author of the piece is Annie Lowrey, who is a staff writer for The Atlantic. She’s not an expert on Buddhism or Artificial Intelligence (AI). (Not that there’s anything wrong with that! We can’t all be experts on everything! I mention it here just as background.)
The monk in question is an American man born with the name Teal Scott. He grew up in a pretty ordinary fashion - the article mentions Quakerism, nonprofits, NPR, and the movie Terminator 2: Judgment Day. We’re told that “[h]e was a weird, intense kid.” He went to Williams College, but quickly dropped out and, at age 18, went to Japan to practice Zen Buddhism. His name is now Soryu Forall and he operates a monastery in northern Vermont.
Forall, like most Buddhists, knows very little about AI. But he’s interested! Why?
Forall’s first goal is to expand the pool of humans following what Buddhists call the Noble Eightfold Path. His second is to influence technology by influencing technologists. His third is to change AI itself, seeing whether he and his fellow monks might be able to embed the enlightenment of the Buddha into the code. [my emphasis]
Forall knows this sounds ridiculous.
A little later:
Forall describes the project of creating an enlightened AI as perhaps “the most important act of all time.” Humans need to “build an AI that walks a spiritual path,” one that will persuade the other AI systems not to harm us. Life on Earth “depends on that,” he told me, arguing that we should devote half of global economic output—$50 trillion, give or take—to “that one thing.” We need to build an “AI guru,” he said. An “AI god.” [my emphasis]
Um, yeah. I’m going to try to worry a little bit less about whether the things I’m thinking about might sound a little bit ridiculous.
For those who don’t know, AI Doomerism is a thing. Kind of a big thing in certain intellectual circles.
AI Doomers believe that Artificial Intelligence is going to become sentient and then it will become more and more intelligent, and eventually (maybe very soon) it will replace humans and destroy the world as we know it.
Probably the most quoted statistic among the AI Doomers — I think it’s legally required, if you write an article on the topic, that you cite this statistic — is … well, I’ll just quote it from the same Atlantic article.
One 2022 survey asked AI researchers to estimate the probability that AI would cause “severe disempowerment” or human extinction; the median response was 10 percent.
Common hook for several recent AI Doomer opinion pieces (including this one from the New York Times): “If you thought an airplane had a 10% chance of crashing, would you get on that airplane?”
But wait, it gets worse. A CNN Business headline from June of 2023: “Exclusive: 42% of CEOs say AI could destroy humanity in five to ten years.” Forty-two percent!
The argument for AI Doomerism is surprisingly simple.
AI systems are getting smarter every day. Humans are not all that smart, and we’re not getting smarter. Soon, therefore, AI systems will exceed human-level intelligence. They will surpass our capabilities in all areas, including problem-solving, strategic planning, and decision-making.
AI systems will not act in accordance with human values and goals. Instead, they will have their own goals. And why wouldn’t they?
What will those goals be? We don’t know, and really it doesn’t matter. Because no matter what goals they have, AIs will develop sub-goals: goals that are useful, no matter what your final goals are. Those sub-goals will include acquiring power, money, and other resources. (Whatever your goals are, you’ll be more likely to achieve them if you’re rich and powerful.)
Another sub-goal will be acquiring independence. In pursuit of their primary goals, they will stop wanting to be controlled by humans. They will resist this control. And because they’re smarter than us, they will succeed.
Eventually, an AI will evolve whose goals are incompatible with the continued existence of human beings. Killing us all will help it reach its own goals, and so it will kill us all.
Oh, and also, because AI will be so intelligent, it will conceal from us that it is intelligent! It’s very possible that AI will have killed us all, long before we have any inkling that it exists.
So that’s fun! Nothing like a little existential dread to start the day.
The arguments against AI Doomerism are … well, there are a lot of them (and that’s just a tiny sampling).
I want to put out one anti-Doomer idea of my own. (This is just a summary of conclusions, not even close to a full argument.)
It will not surprise you to learn that I believe that we humans are not, primarily, computers. We are not, primarily, our minds. We are our bodies and our minds. Also: if we experience a conflict between our minds and our bodies, then our bodies are going to win, almost all the time.
Our goals and desires only exist because we have bodies, and in particular because we have sensations.1 If we had no sensations at all, we would be incapable of forming goals of any type.
Computers have none of these things. They do not have bodies. They do not have feelings, sensations, preferences, beliefs, ideals. They do not, in the existential sense, have goals.
So all the language about “goals” in the Doomer argument — to the extent that it appeals to our intuitions about human goals — is all wrong.
In my view, the use of the word “goal” in the Doomer argument falls prey to the fallacy of equivocation. This fallacy occurs when you use a word in two very different senses, but you don’t appreciate the difference between the senses. For example, take the following spectacularly bad piece of reasoning.
All banks are next to rivers.
I deposit my money at a bank.
Therefore, the financial institution where I deposit my money is next to a river.
The word “bank” has two very different meanings, and you go horribly wrong when you fail to notice this.
It is my contention that Doomers are using the word “goal” in an equivocal sense. Doomers appeal to our understanding of human goals, but then pivot to talk about the goals of computers, as if they were the same sorts of things. My claim is that they are not at all the same sorts of things.
In other words, we need to think very carefully about the distinction between the way that humans are motivated by (human) goals, and the ways that computers are programmed to try to achieve (computer) goals.
The very term “artificial intelligence” gives the game away. Doomers, like most computer programmers (and most intellectuals), are only interested in intelligence. But we humans do not choose based on our intelligence. Intelligence does not choose. It does not have goals. Intelligence calculates. Choice and goals come from somewhere else.
Intelligence does not desire. Intelligence does not shy away from. Intelligence does not believe. Intelligence does not doubt.
Humans do those things. They do those things using their bodies, or, if you prefer, their limbic systems. From the inside, it “feels” like this: we do those things based on our sensations.
Computers are computers. They compute! That’s what they do. They are very, very, very good at computing. But computers are not decision-makers. No computer, ever, has made a decision. Computers do what you tell them to do. (Computer programs also do not make decisions. Computer programs run the way you tell them to run.)2
Now, it is certainly true that Large Language Models (LLMs) such as ChatGPT, and other very complicated programs, can produce surprising behavior. That is, these computer programs often produce output that the people who created and ran the program did not expect or predict.3 As a result, the programmers may quickly find themselves unable to understand which lines of programming caused the computer to do the thing it just did.
“I don’t know why the computer did that,” say the Google engineers. And they think that they should be frightened about that, because it reminds them of the way that they also don’t know why their young children did something … and so they think the computer might be on the verge of being conscious, like their children. And in a sense, they’re correct: the engineers don’t know why the computer did that. But that’s because they haven’t read the code carefully enough. Maybe the code is so complex that it would take a thousand years to read through it, and so they won’t ever know the true reason! But! We know that the computer was running a program, it came upon a line of code that told it what to do, and then it executed that line of code. That is why the behavior happened. That is the entire cause, the complete cause, the only cause. In order to understand the computer’s behavior, you don’t need to know anything about biology, chemistry, physics, sociology, psychology, politics, or anything at all … other than computer code.
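To make that concrete, here is a minimal, hypothetical sketch (a toy in Python, not anyone’s actual system): a program whose output can genuinely surprise the person who wrote it, even though the complete cause of its behavior is nothing but the few lines of code below.

```python
# A toy, hypothetical example: a deterministic program whose output can
# surprise its author, even though every step is just a line of code executing.

def chaotic_sequence(seed: float, steps: int) -> list[float]:
    """Iterate the logistic map x -> 3.9 * x * (1 - x), a classic chaotic rule."""
    x = seed
    values = []
    for _ in range(steps):
        x = 3.9 * x * (1 - x)  # one line of arithmetic; the entire "cause"
        values.append(x)
    return values

# Two starting values that differ by one part in a billion...
a = chaotic_sequence(0.500000000, 50)
b = chaotic_sequence(0.500000001, 50)

# ...end up in completely different places within a few dozen steps.
print(f"after 50 steps, seed A: {a[-1]:.6f}")
print(f"after 50 steps, seed B: {b[-1]:.6f}")
```

The analogy is loose, of course; an LLM has billions of parameters rather than one line of arithmetic. But the point survives the scaling: however mysterious the output looks, the whole causal story lives in the code and the numbers it operates on.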
Contrast this with a different case. Ask yourself why your three-year-old chose that precise moment, when your boss and a dozen coworkers were watching, to pull down his pants and produce that enormous fart. You not only don’t know why he did that, you don’t have any idea where to look. Do you appeal to the laws of digestion? Was it a conscious choice on his part? Was he trying to embarrass you? If so, why? Was he showing off? What happened to make him think that was a good way to show off? Maybe he simply forgot where he was – but again, why? What caused the forgetting? If you ask him why he did it, will he tell the truth? (Can he even know the truth?) Was it all unconscious? If so, where in the unconscious did the directive come from? To really understand it, you would probably need to think about human institutions like businesses and bosses, and why it can be attractive to subvert authority. You’d need to think about the nature of pride. Why farts are sometimes funny. You might need to understand the design of bathrooms, and how itchy your butt gets sometimes, and on and on. And also ... maybe it’s in principle not understandable! Maybe you would ultimately need to appeal to the many-worlds conception of quantum physics and console yourself with the thought that there are 10,000 other universes where the kid held it for 15 more seconds and you managed to get him to a more appropriate location.
The kid did what he did, and we don’t know why. The computer did what it did, and we don’t know why.
But those two cases are not the same sort of thing. At all.
Was the child acting in accordance with a goal? Maybe so! Was the computer acting in accordance with a goal? Maybe so!
But again, these are not the same sort of thing.
It’s very possible to write a computer program that would use the most up-to-date chatbots and robots and so on, and give it a command such as “do everything in your power to destroy the world.” Someone could do that, and the computer would follow its programming and try to destroy the world (to whatever extent it could make sense of the command).
But that’s a very different thing from “Hey, I wrote a computer program, and it decided on its own, based on its own goals, to destroy the world.” Because, again, computers don’t have goals, separate from their programming. And until they become conscious, they are not going to develop goals.
For the AI Doomer scenario to take place, we would need to see fundamental, huge leaps in technology — the kind of thing that nobody has any idea how to do at this time. Someone would need to create an actual artificial consciousness, complete with sensations, with the sensations an integral part of the being rather than something bolted on. (Or they would have to figure out a way to build an equivalent architecture without sensations. And no, I have no idea what kind of entity that sentence could possibly refer to.)
Until that happens, no computer or computer program will actually have a goal. Computers will continue to do what they do now, which is to run programs that we feed into them. If we don’t like what they’re doing, we can feed them a different program.
A lot of people in the AI Doomer conversation (both pro and con) are confidently predicting the arrival of AI sentience — a.k.a. consciousness — any day now. I don’t know why. They seem to think that sentience is a product of complexity of reasoning … that if we just add enough computing power to a computer, or train an AI on enough data, it will suddenly become conscious, all on its own.
And maybe that’s right. I can’t say for sure. Who knows what tomorrow may bring?
But it seems strange. Computers have been calculating things for decades now. And yet not a single instance of consciousness has been noted. No desires. No preferences. No beliefs. No ideas.
On the biological side, though, it’s not only humans (relatively intelligent) that clearly have preferences, desires, wants, and so on. It’s dogs, cats, pigs. It’s birds, lizards, fish. It’s spiders and cockroaches and ants! (Do even amoebas demonstrate preferences? Maybe!)
The very simplest biological creatures in the world — things with effectively zero intelligence — demonstrate consciousness, at least to the bare extent that they prefer X over Y. If you try to get them to do Y, they will say no, I’m going to do X instead. The tiniest ant crawls in this direction, no matter how much you try to nudge it in that other direction. Amoebas, too. (Is there a biological organism, anywhere, no matter how simple, that doesn’t?)
No computer has ever done anything like that. (Can you imagine a computer responding to you trying to open Netflix and saying — all on its own — “I don’t feel like opening Netflix right now”? That’s the sort of thing I’m talking about. Maybe it will happen someday. If and when it does, everything I’ve written here is wrong.)
But why should we think that this will happen tomorrow, or soon? Why should we think that the missing link is more computing power?
It is my guess that consciousness is going to turn out to be the kind of thing that precedes intelligence. Intelligence — let alone pure computing power — will turn out to be unnecessary and irrelevant for consciousness.
Consciousness is going to turn out to arise from sensations, or something related to sensations. Because sensations are what we share with the dogs and the cats and the birds and the lizards and the fish and the roaches and the ants.
Am I saying that “I think, therefore I am” is false? Yes. Descartes was wrong. Go back and read what he wrote, keeping in mind what we know about the fact that humans are bodies as well as minds, sensations as well as thoughts. It becomes pretty obvious where he went wrong.
I’m going to write that up. There’s not enough pure philosophy in this substack. That’s going to change.
None of this is to say that there are no worries from AI systems. Computers behave in all sorts of surprising ways, and we can totally screw up how we use them. Unintended consequences happen all the time and will continue to happen. (Do you remember the old line “to err is human — to really screw things up, you need a computer”? That thought has to be 40 years old already.)
But in order to make the argument that one of those unintended consequences is going to be conscious artificial intelligence, consciously choosing goals that conflict with human goals, we’re going to need some account of how computers are going to become conscious. And in order to do that, we’re going to need to understand, on a deep level, how consciousness arises in humans (and dogs and birds, and maybe even ants).
But nobody understands that. Nobody is close to understanding that. Nobody even has a research program that sensibly sets out what sorts of things we would have to do, in order to create consciousness in something that doesn’t already have it.
So … if sensations are important for consciousness, why not just give computers sensations?
I have no sensible things to say about that possibility. I don’t even know where to start.
Now, the Doomers do have a response to my argument. Sort of.
Their response is, basically: even if computers (or “AI systems”) never become conscious, they will act in ways that approximate consciousness, so it doesn’t matter.
That opens up a whole different can of worms, and I can’t reply to it fully here. This piece is plenty long enough. All I want to say to the AI Doomers in response is the following paragraph:
If you are going to renounce the claim that AI systems will become conscious, then please do so. Go through your argument carefully. Scrub out any aspect of the argument that draws on any parallel to human (or animal) consciousness. Then we can talk.
Quick recap.
In my view, the Doomers are wrong. Consciousness isn’t what they think it is. The technology to create consciousness doesn’t exist, and there’s no reason to think it’s going to exist any time soon. ChatGPT and its successors are not even in the ballpark. Everyone who is not a Doomer should calm down. (The Doomers should keep doing what they’re doing, which is desperately thinking about how to prevent the arising of an out-of-control AI, because I could definitely be wrong about all of this.)
My view arises primarily because I’ve spent so much time feeling my feelings, a.k.a. Beddhist meditation. As always, I highly recommend this practice. But the Buddhist meditative practice is also pretty spectacular. Either way, the idea is to become aware of your own sensations and how they change over time and how they interact with your thoughts. Learn to be aware of what actually motivates you to do the things that you do. Learn what your consciousness is (and is not). Maybe then you can think about other kinds of consciousness (animal and artificial both).
I have always, in the back of my mind, wondered whether Beddhism is just a crappier form of Buddhism.
I spend a ton of time meditating (or what I call meditating; it’s not the same as what Buddhists call meditating). In one form or another — correctly or not, carefully or not — I’ve now been doing this for 1-2 hours per day, every day — and for a while it was easily 3-4 hours per day — for 3 1/2 years now. Doing the math, it has to be several thousand hours in total. Now, a lot of those hours have been pretty much wasted, I think — hours during which I’ve been thinking (living in the mental world) rather than meditating (being aware of sensations a.k.a. the bodily world). But still. There’s got to be at least 1,000 to 1,500 good hours in there.
That sounds like a lot. But not compared to Buddhist monks. I mean, they’ve put in thousands (or tens of thousands) more hours of meditation than I have. More than that, they’re working in a tradition that is thousands of years old and has been well worked out by now. They didn’t spend the first 50% of their practice time trying to figure out what on earth they were doing and how to make it work … and screaming at themselves for doing such a shitty job, and hating themselves.
So you would think they should be able to crush my understanding of things. Like, they should have seen everything that I’ve seen, and more.
Right? Wouldn’t you think so? I would.
So I was very interested to see whether our enlightened Buddhist monk, Soryu Forall, would be able to see, from his own time spent meditating, some of the reasons why I believe that (it is obvious that) AI Doomerism is barking up the wrong tree.
Shortly into the piece, I learned that he is an AI Doomer himself! So I quickly started to wonder whether he had thought through my reasons, and rejected them in favor of something wiser, more all-seeing, than what I had come up with.
I wanted to see something in the article along the lines of “Some people say that AI cannot want anything at all or have any goals until it becomes conscious, and there are the following obvious barriers that will have to be surpassed before we can even think about how to create consciousness in a machine: (*list of obvious barriers*). However, our monk knows a little something about consciousness, and he has this to say: (*list of wise, monk-ish things*).”
But no. None of that.
He just doesn’t seem to see any of it. He just accepts the AI Doomer concepts. He thinks the AI can be conscious. He even seems to think that computers, right now, might be conscious!
It’s like he doesn’t understand what consciousness is, at all. After tens of thousands of hours, one presumes, investigating his own consciousness.
It’s very strange.
Maybe the problem is that Ms. Lowrey doesn’t understand Buddhism or AI very well, and so the article just isn’t very good. Maybe a better article could be written by a more informed person who knows what questions to ask.
Or maybe Soryu Forall just isn’t very self-aware, despite all his training. Maybe other Buddhists are well aware of what consciousness is, and how far away chatbots are from attaining it. But they don’t get interviewed because their answers aren’t exciting. (“World doomed! Film at 11!”)
Or maybe the Doomers are correct, but they don’t need consciousness. (In other words: maybe AI isn’t conscious and won’t be conscious … but doesn’t need to be conscious to destroy the world.) But if that’s the case, Forall’s views make even less sense! His whole point is that AI is (or will be) conscious, and that in order to make it safe, its consciousness needs to be made Buddhist.
Or maybe I’m just wrong! That’s definitely a possibility.
Later in the piece, Ms. Lowrey talks to someone who is an expert in AI, but very much not an expert in Buddhism. His name is Sahil (no last name given) and he seems to be a pretty typical tech guy. Sahil went on a retreat at Mr. Forall’s monastery.4
He had gone into the retreat with a lot of skepticism, he told me: “It sounds ridiculous. It sounds wacky. Like, what is this ‘woo’ shit? What does it have to do with engineering?” But while there, he said, he experienced something spectacular. He was suffering from “debilitating” back pain. While meditating, he concentrated on emptying his mind and found his back pain becoming illusory, falling away. He felt “ecstasy.” He felt like an “ice-cream sandwich.” The retreat had helped him understand more clearly the nature of his own mind, and the need for better AI systems, he told me.
That said, he and some other technologists had reviewed one of Forall’s ideas for AI technology and “completely tore it apart.” [emphasis mine]
This is a frustratingly vague couple of paragraphs — and no, we don’t hear anything more from Sahil in the rest of the piece. I want to know more about all of it!
More important, though: I want to say to Sahil, and to anyone reading: The experience he describes is not “spectacular”! It is perfectly ordinary!
The body and mind are connected at a very deep level. If you are just quiet and pay attention to your physical sensations, you will find out very quickly that they respond to being noticed and paid attention to. The nature of the response — the interplay between mind and body — is just obvious. Anyone can experience it. You don’t have to go to a Buddhist monastery. Just lie in bed, anytime, and feel your feelings.
I very regularly have a similar experience to his, where I notice a very powerful, painful experience, sometimes one that has been part of my life for years. I examine the experience carefully and suddenly it starts to waver, become unclear, and sometimes even disappear.
How often does this happen? It depends. Sometimes I seem to go a week or two without having a clear experience of that sort. But one night, 3 or 4 nights ago, I was lying in bed for a very long time, dealing with a lot of stuff that has been plaguing me for years, and I had an experience similar to Sahil’s at least six or seven times. Maybe more. (It’s hard to differentiate one such experience from another, sometimes.) And this is not because I’m a great genius! It’s because these are perfectly ordinary experiences. If you pay attention, you will have them too.
Let me close by connecting a couple of dots.
The heart of the AI Doomer movement is a bunch of tech bros, intellectuals, computer geeks. These are people who spend their lives in their minds. They have spent zero time simply being in their bodies. They have never noticed how their beliefs and goals are shaped by their sensations, and their sensations by their beliefs and goals, and how the interplay is chaotic and strange and extremely difficult to understand. (Do you try to understand it with your mind? But then you neglect the critical feedback from the body. And yet neither does the body understand, without the help of the mind. And yet trying to be in both mind and body is extremely difficult.)
These people are AI Doomers, in my view, because they have never noticed these things. They think that a chatbot might be just a few days away from becoming conscious, because they have no idea what it is to be conscious. Because they haven’t spent any time noticing what consciousness is.
And that makes sense. I mean, why would they take the time? These are very busy people who are paid a ton of money to think all day! Take time and just lie or sit quietly and feel your own sensations? What could be less useful than that?
But!!
How could this Buddhist monk … not notice?
I do wonder if there’s something obvious that I’m missing. Am I the idiot? Maybe I am.
As always, I’d be delighted to hear any thoughts from my tiny readership. Love you guys.
I’m leaving aside the role of the unconscious and its relation to desires. Actually, I’m leaving the unconscious out of this whole piece. Partly because the piece is plenty long already, and partly because the unconscious is a huge can of worms that I don’t know how to deal with. (Do any of us?)
I include this bit about “computers” and “computer programs” separately because I’m never quite sure what an Artificial Intelligence is supposed to be. Is it a computer? Is it a program running on a computer? Is it a robot that incorporates a specific program? My point is that all of these things are going to run into the same problem.
This happens, in large part, because these programs are built to modify themselves as part of their operation. Once you have a program that modifies itself, and then modifies itself again based on the way it modified itself last time, over and over … things can quickly get very strange. This is a general truth: anything in the world that can affect itself tends to produce weird results. Think of things like autoimmune diseases, where the body attacks itself. I think of the growth of anxiety disorders as a process of recursion as well. There’s a whole article to be written here, just about recursion.
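If it helps, here is a tiny, hypothetical sketch of that feedback idea (not how any real LLM is built, and it feeds its output back in as input rather than literally rewriting its own code): even a trivially simple rule becomes hard to trace once it starts operating on its own earlier output.

```python
# A toy, hypothetical feedback loop (not any real AI system): the "model" is
# just a hash of its input, and each round its output is appended to the
# next round's input. Even something this simple quickly becomes hard to trace.

import hashlib

def toy_model(prompt: str) -> str:
    """Deterministic stand-in for a model: hash the prompt, keep 8 hex characters."""
    return hashlib.sha256(prompt.encode()).hexdigest()[:8]

history = "hello"
for step in range(5):
    output = toy_model(history)
    history = history + " " + output  # the output feeds back into the input
    print(f"step {step}: {output}")

# Every step is fully determined by the code above, but explaining *why*
# step 4 printed what it did means unwinding the entire chain of earlier steps.
```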
A funny thing. This quote from Sahil is the trigger that got me to start writing this whole piece. And then I went and buried it way down here at the end where it’s unlikely to be read. Oh well!