Host: Ezra Klein
Guest: Ted Chiang | Science Fiction Writer
Category: Biz & Tech | 💬 Opinion
Podcast’s Essential Bites:
[30:08] “With regard to the question of, will we create machines that are moral agents, I would say that we can think about that [as] three different questions. One is, can we do so? Second is, will we do so? And the third one is, should we do so? I think it is entirely possible for us to build machines that are moral agents. Because I think there’s a sense in which human beings are very complex machines and we are moral agents, which means that there are no physical laws preventing a machine from being a moral agent. And so there’s no obstacle that, in principle, would prevent us from building something like that, although it might take us a very, very long time to get there.”
[30:59] “As for the question of, will we do so, if you had asked me, like, 10 or 15 years ago, I would have said, we probably won’t do it, simply because, to me, it seems like it’s way more trouble than it’s worth. In terms of expense, it would be on the order of magnitude of the Apollo program. […] However, if you ask me now, I would say like, well, OK, we clearly have obscenely wealthy people who can throw around huge sums of money at whatever they want basically on a whim. So maybe one of them will wind up funding a program to create machines that are conscious and that are moral agents. However, I should also note that I don’t believe that any of the current big A.I. research programs are on the right track to create a conscious machine. I don’t think that’s what any of them are trying to do.”
[32:07] “[A]s for the third question of, should we do so, should we make machines that are conscious and that are moral agents, to that, my answer is, no, we should not. Because long before we get to the point where a machine is a moral agent, we will have machines that are capable of suffering. Suffering precedes moral agency in sort of the developmental ladder. Dogs are not moral agents, but they are capable of experiencing suffering. […] And the closer that an entity gets to being a moral agent, the more that its suffering is deserving of consideration, the more we should try and avoid inflicting suffering on it. So in the process of developing machines that are conscious and moral agents, we will be inevitably creating billions of entities that are capable of suffering. And we will inevitably inflict suffering on them. And that seems to me clearly a bad idea.”
[34:00] “[G]iven that they will start out as something that resembles ordinary software, something that is nothing like a living being, we are going to treat them like crap. [Given] the way that we treat software right now, if, at some point, software were to gain some vague glimmer of sentience, of the ability to perceive, we would be inflicting uncountable amounts of suffering on it before anyone paid any attention to [it]. Because it’s hard enough to give legal protections to human beings, who are absolutely moral agents. We have relatively few legal protections for animals who, while they are not moral agents, are capable of suffering. And so animals experience vast amounts of suffering in the modern world. And animals, we know that they suffer. There are many animals that we love, that we really, really love. Yet, there’s vast animal suffering. [But] there is no software that we love. So the way that we will wind up treating software, again, assuming that software ever becomes conscious, they will inevitably fall lower on the ladder of consideration. So we will treat them worse than we treat animals. And we treat animals pretty badly.”
[38:11] “I tend to think that most fears about A.I. are best understood as fears about capitalism. And I think that this is actually true of most fears of technology, too. Most of our fears or anxieties about technology are best understood as fears or anxiety about how capitalism will use technology against us. And technology and capitalism have been so closely intertwined that it’s hard to distinguish the two.”
[38:24] “Most of the things that we worry about under the mode of capitalism that the U.S. practices, that [it] is going to put people out of work, that [it] is going to make people’s lives harder, [happen] because corporations will see it as a way to increase their profits and reduce their costs. It’s not intrinsic to that technology. It’s not that technology fundamentally is about putting people out of work. It’s capitalism that wants to reduce costs and reduce costs by laying people off. It’s not that like all technology suddenly becomes benign in this world. But it’s like, in a world where we have really strong social safety nets, then you could maybe actually evaluate sort of the pros and cons of technology as a technology, as opposed to seeing it through how capitalism is going to use it against us. How are giant corporations going to use this to increase their profits at our expense? And so, I feel like that is kind of the unexamined assumption in a lot of discussions about the inevitability of technological change and technologically induced unemployment. Those are fundamentally about capitalism and the fact that we are sort of unable to question capitalism. We take it as an assumption that it will always exist and that we will never escape it. And that’s sort of the background radiation that we are all having to live with. But yeah, I’d like us to be able to separate an evaluation of the merits and drawbacks of technology from the framework of capitalism.”
Rating: 🍎🍎🍎🍎
🎙️ Full Episode: Apple | Spotify
🕰️ 52 min | 🗓️ 03/30/2021
✅ Time saved: 50 min