Don’t Fall for the Longtermism Sales Pitch

Émile P. Torres

Notes

Paris Marx is joined by Émile P. Torres to discuss the ongoing effort to sell effective altruism and longtermism to the public, and why they’re philosophies that won’t solve the real problems we face.

Guest

Émile P. Torres is a PhD candidate at Leibniz University Hannover and the author of the forthcoming book Human Extinction: A History of the Science and Ethics of Annihilation. Follow Émile on Twitter at @xriskology.

Support the show

Venture capitalists aren’t funding critical analysis of the tech industry — that’s why the show relies on listener support.

Become a supporter on Patreon to ensure the show can keep promoting critical tech perspectives. That will also get you access to the Discord chat, a shoutout on the show, some stickers, and more!

Transcript

Paris Marx: Émile, welcome back to Tech Won’t Save Us.

Émile Torres: Thanks so much for having me. It’s great to be here.

PM: I’m conflicted. I’m happy to have you back on the show, but I also hate the topic that we’re talking about. I’ve read William MacAskill’s new book; it’s really this argument for this longtermist philosophy that we were talking about last time you were on the show. And so I wanted to have you back on because, since we had that conversation, longtermism has really experienced, I think it’s fair to say, this real increase in attention. It’s everywhere all of a sudden; there are a lot of arguments in favor of it. There are a lot of really positive pieces about it in The New York Times, in Time Magazine, in a whole load of these big publications. I’m not sure what it’s been like in the US, but I was in London recently, and there were ads for the book all through the Tube. So it’s kind of all over the place. There’s this big push to get people to buy into this notion that the book is selling and to make people believe that this is some kind of positive vision for the future. And so I wanted to have you on because certainly we talked about this ideology of longtermism before, but I think that there are some more aspects of this to dig into, especially as it has gained this prominence. And to start that discussion, there are really two topics that I feel like we’re hearing a lot about, and that we should probably know more about. The first of those is Effective Altruism, and the other one, as I said, is longtermism. So I’m wondering, to start, could you talk a bit about what these two concepts are, and how they relate to one another?

ÉT: First of all, maybe it’s worth mentioning that the promotional push for MacAskill’s new book has millions of dollars behind it. There is no shortage of funds to buy advertisements in the London Underground or whatever, and the movement from which longtermism emerges, Effective Altruism, itself has just an enormous quantity of money that wealthy donors, tech billionaires like Sam Bankman-Fried and a co-founder of Facebook, for example, have been willing to just give this community. Right now they have something like $46.1 billion in committed funding. In addition, there are various organizations, companies and so on, like OpenAI, that are aligned, more or less, with the EA or longtermist worldview, that have been independently funded by tech billionaires. So the community is just awash in money — so much money, they don’t know what to do with it. Literally, they’re giving out $100,000 prizes — five of them — for blogs promoting or discussing longtermist or effective altruist ideas. So it’s just a huge amount of money. So it’s not surprising that Will MacAskill has been able to get all of this attention for his book. It’s not the result of merit; it’s the result of money.

So basically the effective altruist community was born around 2009. The first organization that was motivated by effective altruist ideas was Giving What We Can, which was founded in 2009, in fact, by Toby Ord, who is at the University of Oxford, and sort of co-founded with Will MacAskill. The idea behind Effective Altruism is sort of inspired by the global ethics of Peter Singer. So, famously, Peter Singer wrote this article about famine and affluence. I think it was published in 1972, if I remember correctly, but basically his argument was: it shouldn’t matter where in the world someone is suffering. Imagine yourself walking down the road, and you see a child who is drowning in a lake, and you just bought some new shoes or, you know, a new suit, and so on. If you were to go save that child, you would ruin your shoes and suit. Should you do it? A lot of people would say yes. And he says, well, what’s the difference between the child drowning 15 feet away from you in a lake and somebody starving on the other side of the world, like in Bangladesh at the time, I believe? There really shouldn’t be any fundamental kind of moral difference between these two situations. And therefore, insofar as we care about helping others, which is a basic definition of altruism, then we should be willing to give a considerable amount of our money, or at least a minimal amount of our money, to help people around the world.

So once you have that idea, there is a further question, which is: if you are convinced that you should give away some of your income to help other people, which charities should you give it to? And there have been various rankings of charities in the past, but the effective altruists said: actually, maybe there’s a better way to discern which charities are the best ones. So they wanted to use science and evidence and reason to pick the best charities. For example, one of the conclusions that they’ve stuck with for many years now is that by giving to the Against Malaria Foundation, I believe it’s called, which manufactures and distributes bed nets to prevent individuals in regions of the world that are susceptible to malaria from getting malaria from being bitten by these little flying hypodermic needles called mosquitoes, you get a much bigger bang for your buck than if, for example, you donate to disaster relief. Oftentimes that money just kind of gets lost, or if there’s an autocrat in power, they’ll end up taking a lot of the money. So far, this sounds pretty good. If you look at the details, it turns out that there are some methodological problems, like the notion of quality-adjusted life years, QALYs, which we could discuss later if you’d like. As well as, if you take it seriously, there end up being these rather repugnant conclusions, like maybe you should actually support sweatshops. One of their main ideas is earning to give. So maybe the most good you could do is not, for example, joining a charity, becoming a doctor who then goes to some place in the Global South that is somewhat impoverished and needs better health care. What you should do instead is go work on Wall Street, and then you can make a whole lot of money, take that money, donate it, and ultimately, if you crunch the numbers, you can do more good that way. Or even working for a petrochemical company — Will MacAskill has argued that in the past.

PM: There was an article in The New York Times recently asking: is it ethical to defend or work for a major oil company, or something like that? And their verdict apparently was that it can be. But what you’re saying also brings to mind Sam Bankman-Fried, CEO of FTX. And his argument is that he’s engaging in all this crypto stuff so he makes a lot of money that he can give to these causes to make the world a better place.

ÉT: Exactly. He is one of the great success stories within EA of somebody who was convinced to earn to give, and he thought: well, how could I maximize the amount of money I get, to then donate it to supposedly the best causes out there? And so he decided to go into cryptocurrency, and he himself, as you are very aware, has described it as, more or less, a kind of Ponzi scheme. There’s a huge carbon footprint. I know FTX has tried to address that a little bit — I think it’s inadequate. But there’s a huge carbon footprint to cryptocurrencies, there are a lot of people in the Global South who get completely screwed over by it, and a lot of people in the Global North who get screwed over by cryptocurrencies as well. Funnily enough, he’s an individual who embodied that EA ethos of this idea of “Earn to Give,” and then ended up becoming this multi-billionaire crypto kingpin.

PM: It’s very troubling to see. And just to pick up on what you’re saying about the approach to this — you make this money and you make these donations, and this is how you make the world a better place. It really is a push to promote philanthropy, and that’s especially notable in this moment, when there’s a growing questioning of the role of philanthropy, and whether it’s actually making the kinds of positive changes in the world that we’ve been told it does over long periods of time, with questions about the Gates Foundation and things like this. And in MacAskill’s book, this notion is promoted really heavily. He’s very explicit that it’s far better to put your money into effective nonprofits than to change your personal actions. He says, at one point: why are people getting rid of plastic? This makes no sense, when they could be donating to effective nonprofits that would make a much bigger difference in the big scheme of things.

ÉT: I think that’s exactly right. A critique that one could make, and philosophers have made, is that the whole EA approach, in general, certainly in the past, has taken for granted the various systems that are in place. The idea is to assume that these systems will continue to exist, and that maybe they’re even good, maybe even beneficial: capitalism has resulted in all sorts of material progress and so on. A lot of them draw from Steven Pinker and his book “The Better Angels of Our Nature,” where he argues essentially that —

PM: Yikes.

ÉT: Yikes! I know! [Pinker argues that] neoliberalism has been very much a net positive. So ultimately, you’re trying to figure out ways, as individuals within this system, to maximize your impact, your hopefully positive impact, in the world, which then neglects the possibility that many of the most significant global problems are the result of the systems themselves. So Nathan Robinson in Current Affairs had this really good recent critique of Effective Altruism, where at the end he made the case that perhaps the most effective altruism there is is socialism. It’s revamping in fundamental ways the system that is currently in place, a system that is a sort of underlying cause of climate change, of global injustice, of wealth disparities, and so on. MacAskill’s idea, for example, that we should go work for petrochemical companies, work for organizations that are polluting our planet and, by virtue of that, pushing us towards this unprecedented state of global crisis, in order then to use the money we make by working for those petrochemical companies to donate to charities that are trying to alleviate the suffering caused by climate change, is kind of mind-boggling and a bit maddening.

PM: No, I completely agree. I think it’s really interesting that you say that. There are a ton of things that I could pick up there. But just to mention one piece of it — I think that we’ll return to something like this a bit later in our conversation, but there’s also how this notion of earn to give, and the way that it is argued for, has changed over time, as they have wanted Effective Altruism to be open to a wider range of people. In the book, he talks about 80,000 Hours, which is this group or this movement that MacAskill is associated with, and he talks about his previous book arguing for things like this, and he doesn’t talk at all about going to work for a petrochemical company, or a crypto company, or any kind of other terrible organization that’s doing terrible things in the world. His whole argument in the book that he’s putting out there for the mainstream public is that you should be doing your work or having your experiences, and then you can, like, found organizations that promote effective altruism. The only instance that he talks about where someone is working in an industry and then puts money into some organizations or whatnot is a programmer at a tech company, who still does his programming job and then gives a bit of money to some effective altruist organization or something like that. So they really want to downplay that. And I believe in one of the articles you wrote, you said that they like to say how people draw from these older comments: “we said people should go work for petrochemical companies and stuff like that, but that doesn’t represent us anymore.” So I think that’s really interesting.

ÉT: So I believe MacAskill co-founded 80,000 Hours, which is named that because 80,000 hours is the average number of hours that somebody will spend working throughout their career. As part of their marketing strategy, they initially foregrounded this idea of earning to give. Okay, it’s this counterintuitive idea, but if you crunch the numbers, again sort of assuming that the system can’t be changed or shouldn’t be changed, then maybe this is a really good way to actually maximize the amount of good that you do in the world. And later on, they realized that not only was that sort of a bad strategy, because a lot of people found it absolutely abhorrent that you go and work on Wall Street, like Matthew Wage did. He was one of the early effective altruists, a philosopher at Princeton who gave up his opportunity to go to Oxford to get his PhD in order to work on Wall Street and donate his money. They sort of realized that, actually, a lot of people find this to be a really off-putting idea, so it was a mistake. And I think also, perhaps, they did a bit more research and realized that the Earn to Give idea is a good suggestion for a much smaller percentage of young people than they initially thought.

This actually gets at one of the main problems I have with Effective Altruism and its longtermist offshoot, which is that very often the research has trailed behind the activism. They’ve been so excited to go out and change the world that, to some extent, they fail to properly interrogate the underlying philosophical ideas that motivate their prescriptions for what people in the world right now should actually go and do. So initially they said a lot of people should go and Earn to Give. Then they took a step back, thought a bit more about it, and realized, actually, this is not such a good idea. Again, not just for marketing reasons, but because maybe it’s not the best way for a lot of people to maximize the amount of good that they do in the world. So you find a similar thing with longtermism, where these bold claims about what we ought to do right now in order to improve the long-term future for humanity are actually just based on really flimsy, highly contentious, some would say very implausible, deeper philosophical views.

Maybe a very high-level criticism that I would have of these movements is that they have jumped the gun. They’re out there trying to change the world in really significant ways without having a really robust theoretical foundation for their views. The longtermist offshoot, again, is one of the three main cause areas of Effective Altruism, in addition to eliminating factory farming, which I think is very good, and alleviating global poverty, which I also very much get behind. But the community in general has shifted over the past five years away from those two other cause areas and towards longtermism. They’ve already established connections with major governing bodies or agencies like the United Nations, they’ve fostered connections with tech billionaires like Elon Musk, and so on. As a result, they’re in a position to change the world in really significant, very non-trivial ways. And yet, again, that theoretical foundation is just pretty weak, and I think that’s a big problem. It’s one reason I’m trying, within the public arena, to push back on some of these ideas, and to let people know that the longtermist view is much more radical, and much less defensible, than a lot of the most vocal advocates and champions of this worldview would have you believe.

PM: I think that’s put really well, and you can see it in the arguments that MacAskill makes in the book right near the end, where he’s saying: how can you get involved? What can you do? It’s all about how you can get involved in promoting effective altruist organizations, the movement of Effective Altruism, how you can promote longtermism. It’s not about how you can get involved in these causes that are going to make the world a better place. It’s all about how you spread longtermism and Effective Altruism further to more and more people. And just on your point there about longtermism, maybe you can give us a brief definition of what it is. But one of the things that stood out to me in the book, as MacAskill was making this argument for longtermism, that just blew my mind, was that he really presented it as an extension of the civil rights movement. You have this expansion of rights to Indigenous people, to Black people, to gay people. And now we are expanding rights to the unborn, the people of the future. It’s just kind of a wild framing to me that is presented as something that just makes total sense. But maybe you can give us a brief idea of what longtermism is.

ÉT: Just with respect to the word unborn: there was a study I was reading about just the other day, I believe conducted by some of the longtermists, and they found that how people respond to questions about the value of future generations depends on the wording. That’s unsurprising; a lot of studies find that. But if you talk about the unborn rather than future generations, the percentage of people who are moved by it drops consistently. Ultimately, though, what they’re talking about is the unborn. The view that MacAskill defends in his book is called the Total View. It was named that way by the philosopher Derek Parfit, who is sort of the grandfather of the whole longtermist movement; he was the supervisor of Toby Ord. On the total view, there is no intrinsic difference between an individual who dies and a possible person who is not born. Maybe there are other reasons why the death of somebody might be worse, it might affect their loved ones, and so on, but if you bracket those, there is no difference between the death itself and the non-birth of some person who could possibly exist. The easiest way to understand that is that on this view, people are understood to be the containers of value. We’re just these vessels that can be filled with value, which you might take to be happiness, a certain quantity of happiness, or maybe even a negative quantity of happiness. And the total view says that a universe that contains more net total value or happiness is better than a universe that contains less. And if you then derive an obligation from that, as the utilitarians would do, they would say: well, then we have a moral obligation to create a universe with as much value, as much happiness, as possible.

One way to do that is to increase the happiness that’s experienced by all the people who currently exist. But another way to do that is to create new people, i.e. value containers that contain net positive amounts of value. So if you double your population, and everybody has the exact same amount of value, say a happiness level of 10, you get twice as much value, and so the universe becomes twice as good. If you triple it, it becomes three times as good, and so on. So behind this there’s a really controversial idea about the intrinsic badness of death versus non-birth, and consequently this kind of moral duty, which may not be absolute, but there’s still a kind of moral push to encourage people, or to engage in activities, that will maximize the total number of people in the future.

So all of that said, maybe it’s useful to actually define longtermism. It’s basically just the idea — there’s a weak and a strong version — and a lot of the people in the community, as far as I can tell, are most sympathetic with the strong version. Some of them, like MacAskill and Hilary Greaves, have explicitly defended the strong version. The strong version is definitely what you find in Nick Beckstead, who wrote one of the founding documents of the longtermist ideology in 2013; it was his PhD dissertation, as it happens. But nonetheless, MacAskill discusses in an article that he posted on the Effective Altruism forum that, for marketing reasons, they should go with the weaker definition. So the weaker definition is just that ensuring that the long-term future of humanity goes well is a key priority. And the stronger version is that this is the key priority. It tops the list; it’s more important than anything else: global poverty, no; animal welfare, no. Any kind of contemporary problem facing humanity that isn’t going to significantly change how much value comes to exist in the very far future is just not one of our top priorities. Nick Bostrom, who is sort of the father of longtermism, has made this more explicit and said: for utilitarians, our top priority should be mitigating existential risk. And on this view, existential risk is basically anything that will prevent us from creating astronomical amounts of value in the future.

So if you dig a little deeper: what does it mean to say that we should ensure that the long-run future of humanity goes well? What exactly does that mean? And the meaning, at least one way to understand it, draws from the total view. So the future will go better if we not only survive for a really long time (we have at least another 800 million or a billion years on Earth before it becomes uninhabitable, as the sun’s luminosity increases on its way to becoming a red giant, the oceans boil, and so on), but also colonize space, because then we could potentially increase the human population by many orders of magnitude. There could be 10 to the 23 biological humans in the Virgo Supercluster, the larger cluster of galaxies that contains our Local Group. Even more: if we create planet-sized computers in which we simulate digital people, basically digital value containers, digital vessels that would realize some kind of happiness, then we could even more vastly increase the future population. So behind the longtermist view is this vision of what could be that involves space colonization, the creation of computer simulations, and the simulation of enormous numbers of digital people, all with the aim of maximizing the total amount of happiness that exists in the future, within our future light cone, the region of the universe that’s accessible to us in principle. And there are other reasons, too. They might say: well, there are great works of art that will be created in the future; there are ever more just societies that we could create. But a lot of this, the foundation of it, is just maximization: more is better. I mean, MacAskill actually has a section in his book called “Bigger is Better”: we should make this civilization as big as possible.

PM: The future should be big.

ÉT: The future should be big, as big as possible. And so behind that very approachable, even appealing way that they advertise it — future people matter, we can affect them, how the long-run future of humanity unfolds is important — is this particular vision, which is radical and bizarre. And I think a lot of people who first encounter it in its details find it very off-putting, especially when they consider the fact that there are real, actual people who are suffering in the world today, and that these individuals’ pain and discomfort and misery and anguish might end up getting neglected or brushed to the side. Because what really matters on the longtermist view is how things go over the next millions, billions, even trillions of years.

PM: I want to pick up on that more in just a second. I would say, if people want to know more about longtermism, they can of course go back to the last episode we did back in May, where we discussed this in much greater depth. But you were talking there about value, and how people are seen as value containers, and that value is associated with happiness, or well-being in MacAskill’s book. The thing that I really took away, when I was reading the argument that MacAskill was making, was very much: look, there can be a ton of people today who are very happy, or we can have way more people in the future, and maybe they’re not all as happy, but as long as they’re slightly above the threshold for having a positive life and not being neutral or whatever, then this is, in the long run, a better outcome. What that communicates to me, even though it’s not explicit in the text of the book, is: why would you significantly increase the life expectations of people today if that would take away from being able to realize all these other people in the future, when you have limited resources? Especially for people who are interested in philanthropy, and in giving money to particular causes, the thinking becomes: okay, we should get people up to a level where they are marginally happy or fulfilled, or what have you. And that is, of course, based on subjective interpretations of what happiness is, not a kind of objective take that we want to raise people to this much income or what have you. But as long as people feel in their lives that they are slightly happy, even if they are very poor and live in abject conditions, then this is acceptable, and we shouldn’t want to significantly raise them up, because we need to think about where we’re putting our resources. And if we are putting all of our money into raising the Global South to the incomes of the Global North, or the living standards of the Global North, then that takes away a lot of the resources that we could be putting into ensuring that we have this great long-term future that is going to be fantastic, and that we lock in the values that ensure that happens, and blah, blah, blah, right?

It’s a very troubling way to approach the future, how we think about people, how we think about society. And just on your point about the people that he’s quoting throughout the book: he’s constantly quoting people like Nick Bostrom and Toby Ord as inspiring this thinking, or talking about extinction in these particular ways, and you really don’t find out the core of what these people are thinking, which is incredibly troubling, as you described in our last episode. Finally, when you think about this approach, one thing that stood out to me was that MacAskill said his supervisor was an economist turned philosopher. So this kind of base of economic thinking is at the core of what he’s considering when he is describing or considering the value in an individual human being. It’s the same way we think of these abstract notions of economic growth and how we should be promoting it without really thinking about the material consequences of that growth, or who would actually benefit, because as long as this abstract value is increasing, we assume that’s a net positive. It’s similar with this, right? As long as the net value that we are measuring over the total lifespan of human history is going up, then this is a positive thing, and we don’t need to drill down into what that actually means for people’s lives.

ÉT: Exactly. It is very economic. It’s almost like morality is a branch of quantitative economics. It assumes that, for example, happiness can be quantified, that there are these units of well-being, or welfare, or happiness out there. Some have called them utils, a util being a single unit of utility. So a lot of these individuals, because they realize the importance of marketing, are really careful about how they present their views, and about which parts of their views they conceal and don’t want people to think about too much, because most morally normal people will find them to be, like I said before, really abhorrent. Because it’s so quantitative, there’s a criticism that has been made of utilitarianism, which is very influential within this longtermist community. In fact, an overwhelming number of effective altruists are utilitarians; their own surveys show that, I think it’s something like 80%. And by the way, historically, utilitarianism emerged around the same time that capitalism did. I don’t think that’s just a coincidence. What a surprise. The criticism is that utilitarianism is not sensitive to the number of people. By that, I mean: imagine a universe that contained only one individual, i.e. one value container, and that individual realized 100 units of happiness. You can imagine a second universe in which there are 100 individuals, and each of them has one unit of happiness. Which universe is better? On the total view, on the total utilitarian perspective, they’re the same. And this gets at your point that it may be better to have an enormous number of future people who have very low happiness levels than a universe that has a much smaller number of people with really high amounts of happiness.
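[Editor’s note: to make the total-view arithmetic Torres describes here concrete, below is a minimal illustrative sketch in Python. It is not from the episode; the populations and happiness levels are arbitrary numbers chosen only to mirror the spoken examples.]

```python
# Illustrative sketch of the "total view" arithmetic described above.
# The populations and happiness levels are arbitrary, made-up numbers.

def total_value(population: int, happiness_per_person: float) -> float:
    """On the total view, a universe's value is just the sum of its happiness."""
    return population * happiness_per_person

universe_a = total_value(1, 100)   # one person with 100 units of happiness
universe_b = total_value(100, 1)   # a hundred people with 1 unit each

print(universe_a == universe_b)    # True: the total view ranks them as equally good

# Doubling the population at the same happiness level doubles the total,
# so the larger universe counts as twice as good.
print(total_value(200, 10) == 2 * total_value(100, 10))  # True
```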

If you crunch the numbers, if you have a billion trillion trillion people each with five units of happiness versus a universe that has just 10 people with 1,000 units of happiness each, the former is better, because what matters is the total quantity. That is the bottom line — the view that the first universe is better than the second was labeled, in fact, by Derek Parfit himself as the “Repugnant Conclusion.” And he took it to be a major point against the total view. It’s a big problem; otherwise, he wouldn’t have called it the repugnant conclusion. But the thing is that since then, a lot of people have tried to make the case, including many longtermists —

PM: And MacAskill himself!

ÉT: MacAskill himself, exactly, I believe in the book. The idea is: well, okay, maybe it’s not so repugnant. Why would that be? Well, because one thing we know about human psychology and human cognition is that, as a professor of mine used to say, we’re qualitative geniuses and quantitative imbeciles. We’re really good at qualitative things like recognizing faces, but not good at understanding, for example, the vast difference between 10 to the 20 and 10 to the 21. It’s just an enormous difference between those two figures. So perhaps it’s because we’re so bad at thinking about big numbers that we come to see the repugnant conclusion as repugnant. But if we were just better at cogitating these large figures, then we’d see that actually, a universe with enormous numbers of people with low levels of wellbeing really is better than one with a much smaller population of people that are very, very happy. A lot of philosophers absolutely do not accept that, and think that’s total nonsense, or bullshit, pardon my language. But nonetheless the longtermists have, over time, become more and more open to just accepting this implication of the total view.

So as a result, like you were saying before, another point you were making: yes, when you focus on the very long-term future of humanity, millions, billions, trillions of years in the future, a lot of our contemporary problems do end up shrinking to almost just points, almost invisible specks, on the cosmic timeline. And that is deeply problematic. Part of that arises from this idea of people as just containers of value: if somebody can exist in the future with a net positive amount of value, then they should exist. Again, on the utilitarian view, we have this moral obligation to bring them into existence in order to maximize the total amount of value in the universe. They like to use expected value as a way of determining which actions we should take, in other words, which charitable causes we should prioritize, for example. And as soon as you include these merely possible people, who might exist millions and billions and trillions of years from now, perhaps in these vast computer simulations spread all throughout the universe, crowded with digital people that for some reason are happy, I don’t really know why. But as soon as you include them in the expected value calculations, then the long-term future wins every time.

So Nick Bostrom, for example, has calculated that there could be 10 to the 58 digital people in the universe in the future. That’s just a really, really enormous, absolutely incomprehensible number. When you compare that number to, for example, the mere 1.3 billion people who are in multidimensional poverty today, the question becomes: which action should you take? Should you help to lift these people out of multidimensional poverty, or should you try to focus on ensuring that 10 to the 58 people come into existence in the far future? Well, the second option, the far future option, wins by an enormous margin. So there’s just no question. Bostrom himself has said that if you were to reduce the probability of an existential risk (which, again, is any event that would prevent us from creating all of this future value by ensuring these digital people come into existence) by just a really tiny percentage point, you know, 0.0000-and-so-on-1%, that is morally equivalent to saving the lives of billions and billions and billions of actual human beings. So on this framework, if you found yourself in front of two buttons, in a forced-choice situation where you can choose only one of them, do you push the button that increases the probability that these 10 to the 58 people come into existence in the future by a tiny amount, or the one that saves billions of people, that helps to lift 1.3 billion out of multidimensional poverty, and so on? The Bostromian is going to push the first button every time. There’s just no question about it.
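[Editor’s note: as a rough illustration of the expected value comparison Torres describes here, below is a minimal sketch. The 10 to the 58 figure is Bostrom’s and the 1.3 billion figure comes from the episode; the tiny probability reduction is a made-up assumption chosen only to show why the far-future term dominates.]

```python
# Illustrative sketch only. The size of the risk reduction is a made-up
# assumption; the point is that any non-trivial probability multiplied by
# 10^58 swamps anything on the scale of present-day populations.

FUTURE_DIGITAL_PEOPLE = 1e58   # Bostrom's estimate of possible future digital people
PEOPLE_IN_POVERTY = 1.3e9      # people in multidimensional poverty today
RISK_REDUCTION = 1e-10         # hypothetical tiny cut in existential-risk probability

# Expected number of future lives attributable to the risk reduction
expected_future_lives = RISK_REDUCTION * FUTURE_DIGITAL_PEOPLE  # 1e48

print(f"Far-future option:  ~{expected_future_lives:.0e} expected lives")
print(f"Present-day option: ~{PEOPLE_IN_POVERTY:.0e} lives helped")
print(f"Ratio: {expected_future_lives / PEOPLE_IN_POVERTY:.0e}")  # roughly 8e38
```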

PM: I think it’s really interesting that you say that, because when you think about MacAskill’s book as well, one of the things that is interestingly absent is this discussion of the digital people in the far future. He’ll talk about how there can be a ton of people in the far future, but there’s not much mention of the digital beings, even though at one point in the book he says that if we were all to die, but we had invented Artificial General Intelligence, then civilization would still continue as long as those computers continue to operate. So all of us fleshy human beings can die, and civilization will continue, because we’ve created these digital beings. So the hints of it are in there, but he won’t actually dig into it in the way that they will in some of these other writings that are not presented for the mainstream audience. And in one of the articles you wrote, you noted that he even said in a Reddit Q&A, or what have you, that this is still something that he was interested in. He just didn’t have room for it, apparently, in this book.

So this is another piece that I wanted to talk to you about, and certainly feel free to pick up on the digital being thing. But there’s been a real campaign to sell longtermism to the general public, and this book is very much part of this campaign, or a spearhead for it, in trying to present these ideas in a way that can appeal to a more mainstream audience, to a more general audience. So you even have people like Bill McKibben, or the actor Joseph Gordon-Levitt, giving positive blurbs to this book, Rutger Bregman as well. He called me out for being critical of longtermism, because he was like, this book is great, which is very worrying to me. These are people who many would otherwise think are really trustworthy on particular issues. I don’t know about Gordon-Levitt, but at least Bill McKibben and Rutger Bregman are people that I think people genuinely feel are trustworthy individuals who have some good ideas, and they are providing positive blurbs for a book like this. And then, as you’re saying, there’s this big marketing campaign being built around it, in order to say: this is a real thing that we should be thinking about, this is concerning, and this should be a mainstream cause that people get concerned about, that people adopt, that people get invested in. How does this process of selling longtermism to the public take place? And how effective do you think it’s been?

ÉT: Good questions. With respect to Bill McKibben blurbing the book: the longtermist and existential risk frameworks really have roots in transhumanism, and McKibben has been, to some extent, a vociferous critic of transhumanism. So it’s really perplexing that he actually blurbed the book. I spoke to a number of people, including people in the community who themselves are somewhat quietly critical of longtermism, and pretty much everybody was just utterly bewildered by it. A lot of people said: oh, he must not have actually read the book.

PM: Which happens.

ÉT: Which happens — of course, of course. I think anybody who’s written a book has experienced something like that, where somebody just says: oh, why don’t you write it and I’ll put my name to it? Which is really bizarre, but that’s just the way it happens sometimes. So with the rollout of the book, there was a lot of anticipation, a lot of planning that went into ensuring that this reaches the maximum number of people. I mentioned earlier there were millions of dollars — I’ve been told that there’s something like a $10 million budget — just to promote this book, and the PR firm that MacAskill hired gets something like $12,000 every month. So I think they saw this as the moment to go out and evangelize for the longtermist worldview. My guess is that there probably was a vote, more or less, at some of the institutions based around Oxford that are the hubs, the epicenters, of the longtermist view, and they probably picked MacAskill because, I think, Zoe Cremer recently described him as just the most approachable, the straight guy (I think that’s the word she used), just the normal guy who’s affable and isn’t too peculiar, unlike some of the other figures; Eliezer Yudkowsky, for instance, is frequently mentioned as a moral weirdo and so on. And the reason I think that is because of what happened when Toby Ord wrote his book on existential risk. MacAskill is writing about longtermism, Ord in 2020 published a book on existential risk, and they’re companion books; they’re meant to dovetail with each other. The reason Toby Ord wrote it was because people at the Future of Humanity Institute took a vote, and they decided that because he has a wife and kids, he looks nice and wholesome, and he got his degree from Oxford, a prestigious institution, so that’s really good. From one perspective, it looks very slimy. And that’s the nature of marketing. It’s a bit slimy. It’s all about manipulation.

Anyways, I suspect that they took a vote on MacAskill and then made sure that he had a lot more money than Toby Ord had. Toby Ord had, I think, a total of something like $38,000 to promote his book; Will MacAskill has something like $10 million. I think so far they’ve had a lot of successes. As you gestured at early on, there were articles either by or about MacAskill in The New York Times, The New Yorker, the BBC, The Guardian. And a lot of these articles were really quite positive; The New Yorker was a bit mixed, they did actually talk about some of the people who have concerns about longtermism, such as Zoe Cremer, who I just mentioned. So I think that so far it’s been fairly successful, at least in getting the word out. And my whole take on this is that the underlying ideas, like the total view, are sort of widely seen as deeply problematic by a lot of professional philosophers. But nonetheless, it’s a legitimate idea on the marketplace of ideas, and it’s something that I personally would be willing to engage with and to critique within the confines, within the milieu, of academia, where we’re just debating ideas, and so on. But as I mentioned earlier, the activism oftentimes has come before the research and sort of outstripped the research. And, you know, my main push right now is to try to meet them in the public square, and to do what I can to at least inform people of just how radical this worldview is, and just how potentially dangerous it could be as well, and in doing that, to perhaps undermine, to some extent, their efforts to evangelize, to convert people. Convert is a word that some longtermists themselves have used — to convert as many people as possible to the longtermist religion, I would say.

PM: There’s a big focus within the book as well, with MacAskill saying we need value lock-in, so that people have these values for the long term. He makes an explicit connection to religions, and the fact that as religions took over and grew, they inculcated these values in people that we still see hundreds or thousands of years down the road. So that comparison is quite explicitly made within the text of his book.

ÉT: Yes, there are a lot of parallels, troubling parallels, between longtermism and religion. I could talk about that for 10 minutes; there are so many. But it’s very disconcerting. I think, along these lines, exactly, they see the upcoming 2023 Summit of the Future hosted by the United Nations as potentially another key opportunity to really mainstream these ideas. MacAskill has been explicit about that in a podcast interview with UN Dispatch. In fact, the introduction to that podcast, a short little article, mentions that longtermism is really being embraced to a significant extent by the foreign policy community and, indeed, the United Nations itself. There has been a lot of success so far. In fact, in terms of lock-in, you can sort of use that idea against the longtermist view itself and say: maybe the Summit of the Future might mainstream these ideas; it perhaps might even take some of the underlying longtermist values and codify them in some kind of official document. And as a result, those longtermist values might be locked in for a very long time. And therefore, for critics like me, there’s a certain urgency to getting out there right now, sounding the alarm, saying that these ideas are potentially really dangerous, or would have implications that would exacerbate the plight of the poorest and most disadvantaged individuals in the world today, right now, before longtermism gets locked into some UN document.

Furthermore, just going back to another point you made: MacAskill did say in a Reddit Ask Me Anything (AMA) that the reason he didn’t mention digital people, and the possibility, which was brought up by the person who asked the question, that there could be enormous numbers of digital people in the future and that ensuring this actually comes to pass is very important, is because he ran out of space. But my own guess is that he probably understood, and there probably were conversations behind the scenes, that if longtermism becomes linked too tightly with this particular normative futurology, that we should go out and create all these digital people, that might actually be really bad for longtermism. In the same way that Effective Altruism got its reputation damaged by being too tightly coupled with the idea of Earn to Give, longtermism’s reputation might be damaged by being too closely associated with the notion of digital people. So he goes out of his way, I suspect, to not mention them too much. But ultimately, if you just dig into the papers that they are writing within the community, oftentimes for others who have already subscribed to the longtermist view, this notion of digital people in the far future, and the possibility that there are 10 to the 45 in the Milky Way (that’s one calculation that MacAskill himself has used in papers), or 10 to the 58, Bostrom’s calculation, is just very central to this whole picture of what the future ought to look like. It’s very worrisome that longtermism has become so influential in the world today, and it seems to have a certain kind of momentum towards becoming even more influential in the future. And there is a kind of time sensitivity to critiquing these views, because once tech billionaires have really fully embodied them, and the UN has published documents that encapsulate these ideas, then it may become just really difficult to alter the trajectory of the future of humanity.

PM: Which is exactly what they want. To try to start to wrap up our conversation: obviously I’ve been reading your work, I’ve been paying attention to how these things have been developing for the past number of months, but really reading MacAskill’s book, seeing what he writes in there, but also knowing the sorts of things that he doesn’t talk about, that he leaves out of it, it really does feel to me like a kind of technocratic wet dream, right? This idea that you’re not only trying to shape the present, and what’s going on now, you’re not only trying to plan what’s going on in society at this moment, but you’re literally trying to, as he says in the book, lock in these values that will shape the future, not just for thousands of years, but for millions of years to come. And these are the values not of the collective public. These are not values of compassion, of needing to look after the least well off among us, the poorest people, trying to help them, but rather these values that say we need to ensure that the maximum number of people with the most value exist in the future. These are the values of people who are very disconnected from the real struggles that people face around the world today, values that are held by rather well-paid people at these particular institutes at universities like Oxford, but also by people higher up in the tech industry who very much agree with this outlook, people like Elon Musk, people like Peter Thiel, who are very much associated with these movements. And so it seems particularly troubling, and like a red flag, something that we really should be paying attention to, that these people are trying to push this particular set of values on us, and are trying to lock that in as society’s values, shaping how we think about problems and how we distribute resources for many years to come.

ÉT: So a lot of these individuals think that we’re on the cusp of creating artificial general intelligence. And many of them also accept the argument that was most extensively delineated in Nick Bostrom’s 2014 book “Superintelligence,” which is that as soon as we get artificial general intelligence, human-level AI, then we will very quickly get artificial superintelligence. Because any sufficiently intelligent cognitive system, whether it’s biological in nature or artificial, is going to realize that, whatever goals it has or has been programmed to have, being smarter is going to be useful for achieving them. So consequently, as soon as you get AGI, that system is going to realize: well, if I’m smarter, then I can do whatever I’m supposed to do much better. So it will then have an incentive to try to modify its source code in order to increase its cognitive abilities, its problem-solving abilities, essentially. And the whole reason I mention that is that once we get that, the hackneyed analogy is that the future of the gorilla sort of depends on human actions: we’re just superior to it in terms of our intellectual or problem-solving abilities, and there’s just no way that it can control what we do. So there might be the same kind of dynamic that ends up occurring between us and this superintelligence. As a result, it’s really important that we load certain values into the AGI, or the ASI, early on. Because if we’re the only intelligent creatures in the universe, those values might not just shape the future millions and billions of years from now, but the entire future of the cosmos within our light cone, the accessible region; the entire future will depend on what values we encode into it.

So it’s really important that the values we select are ones that will ensure the realization of our vast and glorious, as Toby Ord puts it, long-term potential. And what does that mean? What’s our potential? Well, at least one big part of it has to do with what we were talking about earlier, what you just hinted at, which is maximizing the total amount of value or happiness in the universe. So this is part of their vision: AGI (Artificial General Intelligence) is right around the corner, and it’s crucial that longtermists play a part in shaping the first AGI systems, because that will ensure that the longtermist ideology ends up determining what the entire future of the cosmos looks like. Maybe this gets back to the religious parallels, because it’s a very apocalyptic kind of view: the end is near; some fundamental rupture in human history, some fundamental transformation, in fact they call it transformative AI, is right around the corner; we live in this “time of perils,” another term they use, where existential risk is particularly high; and once we get artificial superintelligence, then the risk will significantly decrease, we’ll be safe from extinction or whatever. It’s a very worrisome situation. And I think many of these individuals are motivated to create AGI for this reason: that if AGI doesn’t destroy us, then it’s going to usher in a techno-utopian world. And that’s a reason, then, to not just ensure that we understand the potential risks of artificial superintelligence, but to create it maybe sooner rather than later.

PM: I think that’s really well put. And I think that the comparison to the kind of apocalyptic doomsayers is really important as well, and really draws out some of the thinking there. I would also say that talking about artificial general intelligence, or artificial superintelligence, really shows the connections to the tech industry as well, and the influence of the kinds of ideas that come out of some of the particularly worrying tech circles, I think it’s fair to say. But our conversation has gone on for a while, and we certainly could have talked about far more, because there’s so much to dig into on this topic. There’s so much worrying shit both in the book and beyond the book that we could talk about. But I want to end with this question. You said how they are really making a big push right now to try to get these ideas accepted by the mainstream, to get them better accepted by a more general public, by people beyond their circles, to get them to believe that longtermism is something that we should be pursuing and dedicating resources to. We’ve just gone through a period where the tech industry tried to sell us Web3, right, and these kinds of ideas of crypto, and that this was going to take over. And there was a pretty significant backlash to those ideas and to those plans, and I think that many people would acknowledge that that backlash did help to restrict the ability of those companies and those ideas to really expand in the way that they wanted to. Certainly there were other factors as well, as we see higher interest rates and these projects collapsing and a whole load of other things. But I wonder: what do you think about our chances to actually stop this, to actually push back the tide of longtermism that these people are trying to sell us?

ÉT: I think it’s a formidable challenge; it’s going to be really difficult, because they already have, as I mentioned before, infiltrated major governing bodies like the UN, there are people in the UK government who are listening to individuals like Toby Ord, and so on. There are many other examples. So it’s going to be very difficult. They’ve already established a kind of infrastructure in the world, built on the foundation of powerful institutions, and the tentacles of influence have reached around much of the globe already. Now they’re making their appeal to the average individual, the general public. But I don’t think the situation is hopeless. I do think it’s possible that if enough people understand that longtermism could be dangerous, that it’s built on faulty, or at least dubious, philosophical foundations, and that if it’s taken seriously by individuals in power it will end up minimizing a lot of the harm that is being caused, for example, to people in the Global South as a result of climate change, there could be a sufficiently large pushback from the general public against this idea that may ultimately vitiate its impact on the world. So for example, a colleague of mine who works in the community, so I won’t mention his or her name, I was talking to them about the impact of MacAskill’s book, and their view was that this is either going to result in longtermism becoming widely accepted, as MacAskill hopes, or it could completely backfire and result in all sorts of backlash against the longtermist worldview that really defangs it and robs it of a lot of the momentum that it currently has.

So my hope certainly is that people will understand that longtermism is not the same as long-term thinking, and that we absolutely need more long-term thinking in the world today. But longtermism goes so far beyond long-term thinking in adopting all of these bizarre views about the importance of creating 10 to the 58 digital people in the future. It is not the right worldview for this moment. Our societies are shaped by short-term thinking and myopic perspectives on the future: quarterly reports, four-year election cycles, and so on. So we desperately need more long-term thinking, but longtermism just swings that pendulum so far to the other side, and casts our eyes on the future millions, billions, trillions of years from now. So it’s really not the antidote to the short-termism that’s ubiquitous in our society today. I do think there’s hope, and the key is not even to present arguments for why longtermism is flawed; it’s simply to reveal the underlying ideas that longtermists don’t want you to see. Because again, the average morally normal person will look at those underlying ideas and say: that’s too bizarre, I can’t accept that. That’s my mission right now, and my hope is that people will, yes, be properly informed about what longtermism is really all about.

PM: I think that’s really well put. Hopefully this episode will help to inform some more people about what is wrong with these ideas, the crazy ideas that are associated with longtermism. We didn’t even get to all the stuff around increasing the population and having more kids, which is very closely aligned with what Elon Musk has been talking about as he continues to reveal new children that he’s had. Émile, it’s great to speak again; thanks so much for taking the time. And of course, I’ll link to a bunch of the stuff that you’ve noted in the show notes as well. Thanks so much.

ÉT: Great. Thanks so much for having me. Real pleasure.
