Should A Cyborg Punch a Nazi? An Interview with Mark O’Connell
“Afterlife? If I thought I had to live another life, I’d kill myself right now!” Futurama’s hard-drinking, human-hating robot Bender, in the episode “A Pharaoh to Remember,” neatly encapsulates some of the philosophical challenges of transhumanism, a relatively fringe movement preparing for the next stage in human evolution or, alternatively, our complete destruction. PayPal co-founder Peter Thiel, for instance, sees in technology an opportunity to radically expand the scope of human life and cognition. Death, in his and many others’ view, is just another engineering challenge to overcome. Humans will live indefinitely and, by merging with sufficiently advanced technology, overcome the inherent weaknesses of our biology.
Or will they? Again, we look to Bender for guidance. As a self-aware robot, he is also quite fond of the motto “Kill all humans,” which is actually a very real concern to many of the tech world’s leading lights. Silicon Valley titans like Elon Musk and Bill Gates have long warned of the existential dangers of unleashing a sufficiently sophisticated artificial intelligence on the world. A.I. philosopher Nick Bostrom predicts an “intelligence explosion,” in which a self-improving intelligence could quickly move beyond human control and render us little more than superfluous meat hangers.
So which is it? Will humanity use technology to upgrade ourselves into gods, or will the technology itself render us obsolete? Even if we could live forever, should we? Admittedly, I’m more of Bender’s frame of mind. The idea of hundreds or thousands of years of existence makes me want to jump out the nearest office window.
Mark O’Connell, author of To Be A Machine: Adventures Among Cyborgs, Utopians, Hackers, and the Futurists Solving the Modest Problem of Death, tends to agree. He is neither techno-futurist nor prophet of singularity end times. He’s just a guy who wants to understand the people who have given over their lives to the idea of using technology to fundamentally change the way we think about being human. I spoke to him recently about transhumanism, the ethics of robots punching Nazis, and the religious aspects of techno-worship.
So have our chances of reaching transhumanist nirvana declined now that Trump is in office?
[laughs] Yeah, I’ve been thinking about this a little bit. All of a sudden it seems like the world has changed. A completely different era. And I think that’s obviously a reaction to the extremity of what’s happening at the moment. But yeah, one of the things that has been on my mind is that Peter Thiel is obviously very close to the center of power, and I’m very curious and anxious to see what role Silicon Valley is going to play. That meeting in Trump Tower was a little unnerving. I wonder whether the reaction to Trump will be an overcorrection in the other direction? I don’t know if you’ve seen the mutterings about Zuckerberg potentially running?
Yeah a bit.
So there’s the potential for that sort of extreme techno-rationalism to be a kind of cavalry riding in to protect us all from the wild end of the Trumpian approach. I mean, that’s one potential way this could all come into play. But here’s what really struck me throughout the campaign: there is all this talk about bringing industrial jobs back to the U.S., about protecting that whole sector of employment and bringing back industry, and I don’t know about you, but I couldn’t help thinking all the way through the campaign that no one was talking about the fact that that just isn’t possible. As soon as those jobs do come back from China or wherever, they’re just going to be automated. It’s not going to be people getting those jobs. So there’s this huge, non-negotiable socio-economic shitstorm coming our way in the form of automation and artificial intelligence, and politically, at least, it doesn’t seem to be part of the conversation. That’s something that is extremely worrying. I don’t know if that even remotely answers your question.
Yeah it does. It just seems like a time when people are just kind of rethinking a lot of fundamental questions. Speaking of which: Who are we, do you think, if we’re just uploaded into the cloud? Are we still human? Is that life?
It’s funny you should say that. I guess I sort of imply it a few times in the book, but I’m not really writing about the future. I mean, I am writing about the future in the sense that I’m writing about people who are oriented toward the future and who are obsessed with this idea of the future of humanity, but I’m not even that interested in the future, per se; I’m more interested in the present. To the extent that I was able to conceive of what I was doing, from a sort of widescreen angle, when I was writing the book, I wanted it to be a snapshot of a particular moment in time, of these kind of weird, extreme movements and parts of the culture that, for all their extremity, still reflected the oddity of our moment and our relationship with technology.
I saw you kicked up some dust tweeting about punching Nazis recently. My question is this: what are the ethics of a robot or cyborg punching a Nazi?
Well, that one tweet I had that went viral, I don’t know if you saw it, but I didn’t realize that Richard Spencer himself had quoted the tweet and so I ended up getting an endless amount of frogs in my timeline. But yeah I’ve been sort of forced to think more directly about the ethics of this stuff. But the thing is, what struck me about that moment [Richard Spencer getting punched] is that it was so human. It was like a moment of hope in this terrible, terrible, dark few days. It was this moment of real visceral, human action.
The ethical conversation about it obviously needs to be had, but we need to talk about how fascists have to be fought on a particular level, because you can’t really effectively engage with them on the level of, like, the marketplace of ideas or whatever. But at the same time, my reaction to it was completely visceral, like that guy is a fucking fascist and deserves to get punched in the face. It is undeniably good, every cell in my body tells me it’s a good thing that happened. And I’m completely anti-violence, basically — well, I can’t say I’m completely anti-violence, because I can see that there’s kind of an emotional purging that happens with something like that — but I just think it’s a really human thing. So yeah, I think it would suck to have a robot do that. I think you’d want a person to do it. That’s what makes it so powerful. I mean, if no one else could do it, I’d want a robot to punch Richard Spencer for sure. And there’s an argument to be made that maybe a robot would do it more effectively. I don’t know. In terms of pure theater, I think it was pretty much perfect.
In the event that radical life-extension technology that you investigate in your book becomes available, won’t that be something that is available only to the extremely wealthy? Is this the next stage of capitalist collapse, us poors fighting over the scraps while our immortal god-men continue on indefinitely?
Yeah, I think that’s an inevitable consequence of this stuff. It wasn’t a big part of my book, partly because I wanted to avoid those kinds of social debates, but it inevitably comes up whenever anyone advocates for this stuff, because it just seems like a natural consequence. The counterargument is that this happens with any new technology: at first only the rich will be able to afford it, and then the market will dictate that the prices come down and it will “trickle down” to everyone. Rising tide lifts all boats and that kind of thing. I’m not sure how plausible that all is, to be honest. Again, I’m not a futurist, but if this stuff does come to pass, I can see a situation where the people who have the means to enhance their intelligence do so, and that leads to a runaway intelligence scenario, where they go on this exponential curve and they’re suddenly this entirely different species and we’re just left behind.
I just think for me, I sort of see transhumanism as being kind of an ideological consequence of capitalism and consumerism, in that it’s a way of thinking about yourself as not just a consumer but a consumer item, in a way? I don’t know. I can identify with it. You look at your shitty body that’s not doing what you want it to do, and you think this isn’t good enough. This could be better. And we all have a little bit of that. I need to be more productive or I need to be more effective or whatever. But I think that transhumanism is kind of inseparable from capitalism, in that it’s an internalization of the logic of that sort of productivity mentality.
Would you even want to live forever? These guys you describe are all engineers who see death as a problem to fix, without pausing to question whether that is even a desirable outcome.
I don’t want to pathologize people’s ideas, but there’s a sort of glib way of looking at it. Of the people I spent time with for this book, very few if any come from a humanities background. They’re all programmers, mathematicians, cryptographers or whatever. So they all spent a lot of time in the machine, you know what I mean? Operating with the logic of computers. I don’t know if you know any programmers, but they tend to have a very specific way of thinking about things. There’s a problem to be solved, and data is everything. It’s a very useful way of looking at the world in a lot of ways, but I think there’s a sort of internalization of that way of thinking that feeds into transhumanism in a big way. There’s this idea that you optimize for intelligence, which is sort of a catchphrase with a lot of the rationalist community. And if you optimize for intelligence as the ultimate human value, like that’s what makes us worthwhile, the problems we can solve and the things we can do, it’s a really shallow way of thinking about what it means to be human, I think. But if you think in that way, it makes sense to want to be pure intelligence, to be a disembodied mind without this body dragging it down.
I could probably stand to lose a few pounds.
It’s like this fever dream of the perfect self. It’s a religious idea, basically, that you become this disembodied spirit, this disembodied intelligence. And as you say, why would you want to live like that? What would be the point? I came to that question myself quite a lot of times. If you were infinitely powerful, and infinitely intelligent, and infinitely extending across the universe, what’s the fucking point in being alive? What are you? If you’re a god, what are you? You’re nothing. It’s kind of paradoxical and maybe nonsensical, but I’ll take my shitty human body and frustrations and failures and I’ll die. I’ll take that instead. But if I’m having that conversation with a transhumanist, they’d be like, why? Their view comes from the same frustrations that you and I have, but it just seems like a hellish way to end up.
Your perspective didn’t change after spending all that time with them?
The last thing I wanted to do with the book was be this skeptical journalist character who goes into this world and has his skepticism confirmed by the things that he sees. Ideally, I wanted them to convert me. Make me see things the way they do. That didn’t happen, but I went in with a desire for it to happen, which I think helped me be more open to that stuff. More empathetic to where these people are coming from. I hope that comes across in the book.
There seems to be a lot of quasi-religious mysticism around the transhumanist movement. Do you see a lot of similarities?
Yeah, definitely. To the point that one of the things I’ve been left thinking about is… I’m not religious, I don’t come from a religious background. I have a lot of sympathy for religions. I think if I could believe, I would. I’m not anti-religion. But I’ve been left wondering whether in this whole milieu there might be some advance flickerings of some future religion. Particularly if artificial intelligence takes off in the way people seem to think it will. Will we worship the thing we’ve created, if it doesn’t destroy us, which is, in a way, a certain version of an ideal of ourselves? This idea we have, which isn’t true at all, of a hyperrational Homo sapiens, the thinking creature. Will a religion come out of this? Transhumanism answers all the same questions that most religions claim to answer. Eternal life, all that stuff. The difference being, I guess, that if you’re a transhumanist there is an actual empirical basis for believing those desires will be met.