The Dangerous Ideology of the Tech Elite

Émile P. Torres

Notes

Paris Marx is joined by Émile P. Torres to discuss why longtermism isn’t just about long-term thinking, but provides a framework for Silicon Valley billionaires to justify ignoring the crises facing humanity so they can accumulate wealth and go after space colonization.

Guest

Émile P. Torres is a PhD candidate at Leibniz University Hannover and the author of the forthcoming book Human Extinction: A History of Thinking About the End of Humanity. Follow Émile on Twitter at @xriskology.

Support the show

Venture capitalists aren’t funding critical analysis of the tech industry — that’s why the show relies on listener support.

Become a supporter on Patreon to ensure the show can keep promoting critical tech perspectives. That will also get you access to the Discord chat, a shoutout on the show, some stickers, and more!


Transcript

Paris Marx: Phil, welcome to Tech Won’t Save Us.

Phil Torres: Thanks for having me, it’s a pleasure to be here.

PM: It’s great to speak with you. You’ve been writing for a while about this concept of longtermism that I think is really important for the audience to understand, especially when we think about these tech billionaires and the impact that they’re having on the world and the kind of projects and ideas that they’re trying to disseminate into the public consciousness and have us all buy into. And I feel like when you initially hear the concept, it brings to mind the notion of long-term thinking. And for a lot of people, that is a positive thing — we need to think long-term to address major challenges like climate change, or to build infrastructure projects that will be important for addressing these things, if we want to build high-speed rail or something like that. But you explain that longtermism is distinct from simply thinking long-term and comprises a series of very troubling beliefs about the future and the future of humanity and the actions that we should be taking today as a result of that potential future. So what is this ideology that you describe as one of the most influential ideologies that few people outside the elite universities and Silicon Valley have ever heard about?

PT: So I would perhaps start by just emphasizing that there is a distinction between embracing long-term thinking and this sort of longtermist ideology, or normative worldview. Long-term thinking, that mode of thinking about our actions in the present and how they might affect people in the far future, in future generations 100 or 1,000 years from now, is really important, I believe, and it is different from longtermism. So I think that the term longtermism, which was coined around 2017, is really unfortunate. The idea is not just that future people matter as much as present people, which is a claim that I think a lot of moral philosophers would accept, but it goes beyond that. There’s this idea that value, however we define that, and philosophers have defined it in many ways, is something to be maximized, and that there is no intrinsic difference between the loss of some value that exists in the world and the failure to bring some value into the world. So that means if there were a million people in the future, let’s say 1,000 years from now, who perished, let’s say instantaneously, so there was no suffering, they just disappeared, and they all had happy lives, that would be the equivalent of failing to bring a million people into the world who would have had happy lives, people who could have existed but never will. Because when you look at things from the point of view of the universe, the idea is to just maximize the total amount of value that exists cosmically. So there’s fundamentally no difference between failing to bring value into the world and removing value from the world. That’s sort of a fundamental idea that motivates the longtermist worldview. And as a result, a lot of the individuals who are driven and animated by this particular perspective have become obsessed with calculating how many people could come to exist in the future. And the greater the number, the greater the possibility there is for maximizing value.

So this means that you want to consider not just the possibility of biological creatures, but also digital beings living in computer simulations: if they could also bring happiness, or value, or something like that into the universe, then you should consider them as well. So there are various calculations, ranging from 10^54 to 10^58 digital beings who could exist in vast computer simulations in the future (and these are supposed to be lower-bound, more conservative estimates), if we colonize space and we convert entire planets into something called computronium, which is matter optimized for performing computations. And then you create these really vast computer simulations with all these digital beings, who all, for some reason, have happy lives, and that would result in a universe that’s as full of value as possible. And that is the ultimate goal of the longtermists, or what they would call the strong interpretation of this moral reorientation to the far future, the strong longtermist view. That’s the ultimate aim. So you can see that that’s quite different than saying that insofar as people exist on Earth in a million years, we should care about them, that their suffering doesn’t count for less than our suffering, and that insofar as there are actions we take today that might affect them, perhaps nuclear waste, or maybe some forms of climate change, if there’s runaway climate change, or perhaps not runaway but some catastrophic scenario that may actually have really long-term effects, then we should factor their well-being into what we do today. That’s very different than saying that the failure of these digital people to come into existence a trillion years from now would be a great tragedy, and therefore, because so much value could exist in the future, we should prioritize ensuring that these people come into existence rather than, for example, alleviating the plight of poor people today.

PM: I think it’s very odd to reduce humanity, and how much we care about people that exist today and in the future, to a very odd construction of value: the value that the existence of a particular being could bring, and whether they are, theoretically, a happy being or not. And when you were describing, because I’ve read a few of your pieces in preparation for this, these worlds of digital beings that these people imagine could exist in the future, as if that is the same “value” as someone who is living today, and so we should care the same amount about a digital being in some kind of different world somewhere else in space that we have set up versus someone who is suffering today; it was really odd. And it made me think of The Matrix, but then again, maybe they’re not going to have living beings plugged into the computers to power them, so it’s a little bit different. But I do think it’s really odd to position the future of humanity in this way, and to think: this is what should matter to us, this is what we should care about, this is what we should kind of structure everything that we’re doing right now around. And it also feels a bit to me like when we think today about the economy and society, and how it’s geared around motivating growth and these really abstract ideas around economic value and economic activity; in that way, I could kind of see the extension of it to a certain degree. But it still seems really odd to say that digital beings and these post-humans, as you describe them, should be the kind of thing that we care about achieving, rather than actually addressing real problems in the present.

PT: So there’s lots to say about the details here. But fundamentally, it would not be inaccurate to say that it’s greatly influenced by a particular moral theory called total utilitarianism. And the standard interpretation of total utilitarianism is that our aim is to maximize the total amount of, let’s say, happiness or pleasure, good, pleasurable experiences that people have. You want to maximize this not just within the population of people that actually exist, but within the universe as a whole. So one way to increase the amount of total value within the universe as a whole is to keep the population stable and to increase the happiness of every single individual; then the result is just a larger amount. But another possibility is to simply increase the population itself. So if you have 100 people who are fairly happy, and you want to make the universe better, maybe you can bring into existence 100 extra people who are also fairly happy; then you’d have twice as much happiness. So underlying this is the notion that happiness can be quantified, and that more is always better. And tied into this is another very strange view that I find very implausible, and off-putting, which is this notion that people, you and I, are containers. So it’s sort of this container model of persons; we exist as means to an end. The end is to maximize value, and we are the containers that are filled with value, positive value, or maybe negative value, which would be bad. So when you look at it from this perspective: okay, you have this container, you fill it with as much value as you can; then, to maximize value in the universe, it would be good to create another container, and to create as many containers as possible. And so that’s why there’s fundamentally no difference between non-birth and death. Death just removes a container from the universe; non-birth prevents a container from coming into the universe, assuming both those containers contain net positive amounts of value.

So if you look at it from this perspective, and you take seriously cosmology, here Earth is. It’s existed for four and a half billion years; the universe has been around for 13.8 billion years. In front of us are billions and billions of years, and a vast universe with all sorts of untapped resources that we could go out there and exploit. We exploit them by creating these vast planet-sized computers, or we terraform other planets, we spread life and so on. And the result is that the number of people could be absolutely enormous; the amount of value in the future could be absolutely enormous. And so then, when you have this cosmic perspective, and you see how much value there could exist in the future versus how much value there exists right now, the value of the future absolutely dwarfs the amount of value right now. Therefore, and this is kind of the crucial point, the practical implication, which I find so problematic: if you want to do the most good, then what you really should do is focus on the far future and not on current projects. So for the strong longtermist, the primary value of our current actions is how they affect the long-term future, because the future could be so much bigger than the present. Should we prioritize alleviating global poverty? I mean, that would be a very good thing. But if we could increase the probability that this huge number of people exist in the future by a tiny, tiny amount, then in terms of expected value, that will be so much greater than alleviating global poverty. And ultimately, they coined this term existential risk, or existential catastrophe, for any event that would prevent us from realizing all of these future people, which they refer to as our potential.

So existential risk would foreclose the realization of our potential, the creation of all these future beings. Therefore, in expectation, borrowing from probability theory, the best thing we could possibly do, as individuals and as a society, is to focus on the far future, to focus on reducing existential risk. Since global poverty is not an existential risk, it really shouldn’t be prioritized. It shouldn’t be priority one, two, three, or four, as Nick Bostrom argues, or even priority five, which is to colonize space as quickly as possible. It’s really much lower down on the list, the same with basically every threat that is not existential. And so ultimately, this particular framework leads people to tend to deprioritize, and to minimize the significance of, a wide range of current-day problems, from climate change to global poverty to eliminating factory farming. Climate change, to me, is very much bound up with climate justice and this important fact that polluters should pay, and it should be on us to help individuals in the Global South, who will be most affected, ultimately suffering the externalities of our industrial activities up in the Global North. But from this broader existential risk or longtermist perspective, it’s what Nick Bostrom would call a mere ripple on the great sea of life. Yes, in the short term, it’s going to be really painful. But in the grand scheme of things, what really matters is that we colonize space, we simulate these people, and in doing so we maximize value.
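To make the expected-value comparison described above concrete, here is a minimal illustrative sketch in Python. The 10^58 figure is the one cited earlier in the conversation; the size of the probability shift and the number of people helped by poverty alleviation are hypothetical numbers chosen purely for illustration.

```python
# Minimal sketch of the "strong longtermist" expected-value comparison
# described above. Only the 10**58 figure comes from the conversation;
# every other number here is a hypothetical, illustrative assumption.

FUTURE_DIGITAL_PEOPLE = 10**58       # lower-bound estimate cited by longtermists
PEOPLE_IN_EXTREME_POVERTY = 1.3e9    # figure mentioned later in the conversation

# Suppose some project nudges the probability of reaching that far future
# upward by a vanishingly small amount (a made-up number).
probability_shift = 1e-30

expected_future_lives = probability_shift * FUTURE_DIGITAL_PEOPLE

print(f"Expected future lives from the tiny probability shift: {expected_future_lives:.1e}")
print(f"People helped by alleviating poverty today:            {PEOPLE_IN_EXTREME_POVERTY:.1e}")

# 1e28 versus 1.3e9: on this arithmetic, the speculative far-future project
# "wins" by roughly nineteen orders of magnitude, which is exactly the move
# Torres is criticizing.
```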

PM: It really shows the potential harm of people who have a lot of power then coming to believe that this is what needs to be most important and what needs to drive their actions, not the addressing of poverty, of climate change, of inequality, and the housing crisis — all of these issues that we’re dealing with today that are causing a lot of harm and pain and suffering, that are, so to speak, reducing the value, the happiness, that people are experiencing in the present. Because the real goal is to ensure that a lot more people, or digital people, can be realized in the far future once we have colonized space, developed these technologies, found a way to ensure that we can create digital people. There are all these ifs and things that these people hope to be able to realize in order to arrive at this future. And I want to come back to this concept of existential risk, because I think that this is really important — this notion that we need to protect against these risks that could ensure that this future of space colonization and digital beings cannot be realized, because it sets humanity back in some sort of way that ensures that this can’t be done. And so we need to protect against those things, but what they consider smaller-level problems like climate change or poverty do not need to be addressed. And there’s a concept that you discuss, by one of the folks that you quote who believes in these things, called the grand battle, where he discusses how, in the next century or a couple of centuries, this is really the moment when we determine whether we can achieve this grand future that they are outlining, or whether we are going to be stuck on our small planet and not be able to realize these things. Can you talk about that a bit?

PT: Another way to articulate these ideas, which might be useful in answering your question, is the idea that we’re at a pivotal point in human history, because we stand on the verge of colonizing space. And if values get locked in once we colonize space, then that may determine the entire future of the universe, perhaps, if we’re alone in the universe. So that’s one reason we’re at a pivotal point. Another reason they think we’re at a pivotal point is that artificial superintelligence might be invented within the next century. And once that’s invented, there’s no turning back. Then suddenly we’re joined in the universe, at least on Earth, by a system that is more intelligent than us in every way, and that’s just going to be a complete game changer. And so then the question, from their value-maximizing total utilitarian view, is to imagine two universes. In one, basically things remain as they are right now, and we survive for the next billion years, at which point the sun is going to make complex life on Earth impossible. So in this case, let’s say the universe contains, just to simplify, 1,000 total units of value. Again, assuming that value is the sort of thing you could quantify in units, which is itself very dubious. But this is their perspective. And then there is this other universe where we colonize space, and we create these huge, huge, huge numbers, unfathomable numbers, of digital beings. As a result, over the next trillions and trillions of years, there are, let’s say, a million units of value. We’re at a point now where perhaps we can choose between these two universes. And the universe with a million units of value in total is much better than the universe with just 1,000 units of value. And I think, somewhat superficially, you might say: Okay, yeah, it’s better to have more value than not. But when you look at the details, it’s a wild, deeply implausible perspective. And so the idea of existential risk is any risk that would prevent us from creating what they would consider to be a better universe, with all this extra value.

A lot of people intuitively think that existential risk means risk of extinction. But that’s not what it is. It’s literally just any scenario that would prevent us from simulating all these digital beings. So you can imagine, here’s one existential risk scenario: there’s an all-out thermonuclear exchange, and as a result of these huge firestorms, all of this soot is lofted into the stratosphere, blocking out incoming solar radiation; there’s a complete collapse of the food chains, and let’s say 8 billion people starve to death. That’s the result. So that is one existential catastrophe. But here’s another: we continue to advance technology, we cure all diseases, we finally figure out how to live in some kind of harmony with the natural world, we create these nice, eco-technological communities that are sustainable, and so on; there’s world peace. I’m intentionally making this utopian. Because if we were to create this world, and that were to last until Earth became uninhabitable in roughly a billion years, and then let’s say we just died with dignity, and we’re just like: Our story’s over; it’s been great; we achieved a basically utopian world here. That would be an existential catastrophe just as much as the first scenario. And the reason is that both would involve our potential being realized only to a very tiny extent. The vast majority of our potential, which, again, is all of these 10^58 people in computer simulations, would never be realized.

So there’s a reason why the community that has developed these ideas, and is the primary defender of this perspective, is overwhelmingly white and quite privileged: people who attend elite universities in the West, Oxford in particular, or are in Silicon Valley. There’s a reason why they’re attracted to it. I take it essentially to be a quasi-religious worldview that not only says you’re ethically excused from caring about, for example, the poorest people around the world, the 1.3 billion in multidimensional poverty, but also that you’re a better person, actually, for focusing instead on the long term. It just so happens also that a lot of the biggest “existential risks” are threats that could potentially affect the richest people, unlike global poverty. So not only is there this ethical motivation, supposedly, to worry about artificial intelligence and nanotechnology and stuff like that. But also, if their arguments about the dangers of AI and nanotechnology are correct, it follows that those are some of the few risks that could actually destroy the world such that Musk and Thiel and the others suffer as well. A friend of mine described this as sort of an apex model of risk: for the people who are at the apex, the top echelon of the socioeconomic hierarchy, what are the threats that are most likely to affect them? Well, insofar as any threat will, it’s not going to be climate change, unless there’s an improbable runaway scenario, and Elon Musk and Thiel and so on are worried about climate change insofar as it’s a runaway effect. But otherwise, they say: We’ll survive, where “we” means them and Homo sapiens as a whole. But with nanotechnology and AI, those could actually pose a threat to them. And so there are multiple interlocking reasons here why the notion of existential risk is very appealing to a lot of these individuals. And it really does give them an excuse to just ignore the plight of poor people, which is exactly what they want, of course, to begin with. When you’re filthy rich, why would you want to care about what the poorest people are going through?

PM: Exactly! Why spend $6.6 billion to feed the world, to ensure that the global hungry are fed, as in the Elon Musk exchange last year, when you can plow that money into going to Mars and extending the light of consciousness to another planet? That’s very much the dichotomy, the framing that they’re setting up: Why should I have to pay taxes to the US government, when that is going to limit my ability to make these investments in allowing us to colonize planets and what have you? These are very much the choices that they are setting up, the false choices, to make that very explicit. And I think we’ve been talking in kind of abstract ways, but now I want to start to turn our conversation to the much more concrete aspects of this. And I feel like one of the pieces that is really important is that in this entire ideology, this way of thinking about the future and what needs to happen, there’s a big focus on technology and the realization of technological progress: we need to ensure that technology can continue to develop. And the understanding of technology, I feel, is positioned in a way that is very self-serving to them. Technology can only be understood in this way that develops in one particular fashion. There’s one route that technology can go, and we need to ensure that it can keep making those developments, without thinking about whether technology could be envisioned in other ways, to serve different goals. Can we refocus on different types of technologies that realize different aims? No, there’s just one form of technology. It’s the technology that allows them to achieve this particular future. And we need to ensure that our resources go into realizing and developing those technologies, rather than doing all these other things that might have other benefits, but wouldn’t have these kinds of longtermist consequences.

PT: Perhaps the default view among technologists is a kind of techno-deterministic view that the enterprise of technological development is fundamentally unstoppable. Some scholars have called this the “Autonomous Technology Thesis”: technologization is just this autonomous phenomenon; it may depend on individual actions, but ultimately we don’t have any control over its direction. Certainly bound up with that is a kind of view of linear progress over time, the idea that we know, to some extent, what technologies will be developed in the future, and that it’s just a matter of catalyzing the developments needed to get there. And furthermore, there’s this notion that technology is a value-neutral entity, that it’s just a mere tool, as opposed to, and I feel like you were gesturing at this, asking: how exactly do we want to realize these technologies? Values end up being embedded in artifacts, and that has all sorts of ramifications, not just for how the artifacts are used, but perhaps for our broader worldview. I think those views are all very problematic, and so is the link between this contemporary longtermist community and these tech billionaires, some of whom are unfathomably powerful and who will unilaterally make decisions that will affect the world that we and our kids — if we have kids — will live in: just individuals making decisions that will affect billions of people. And it’s worrisome that a lot of these individuals hold the views that I just mentioned: this technology-as-neutral-tool view, a kind of techno-determinist view, perhaps the notion that technology is essential to progress, which is —

PM: Debatable!

PT: Is debatable! A lot of the existential risk scholars themselves suggest that there’s maybe a 20% chance of human extinction this century. So, just this century — 20%! Imagine getting on a plane and the pilot saying there’s a 20% chance the plane will crash; everybody obviously would flee, would race towards the exit. So that’s what a lot of them believe. Why? Because of technology. All of them also say the primary source of risk is anthropogenic. It arises, mostly, from advanced technologies, precisely the sort of technologies that they want to develop, because they’ll turn us into radically enhanced post-humans; they’ll enable us to go to space; they’ll enable us to upload our minds and then simulate huge numbers of people in the future. There is a lot of overlap between these tech billionaires and the existential risk community. So for example, Peter Thiel gave a keynote address at one of the effective altruist conferences that they held. And Effective Altruism is this very quantitative approach to philanthropy that has been the petri dish out of which longtermism has grown. And furthermore, Peter Thiel has donated to the Machine Intelligence Research Institute, which is a longtermist research group based in Berkeley, California. Although I believe they’re moving to Texas; I think they’re following Musk.

PM: Of course!

PT: I would double check that, but I think that’s the case. And then Musk, he’s mentioned Bostrom on many occasions; he seems to be pretty convinced by Bostrom’s argument that we may very well live in a computer simulation. And in fact, I think in terms of explaining some of Musk’s behavior, there are two issues that come to mind. One is the longtermist view that, ultimately, the good he could do in the long run will so greatly exceed whatever harms he might do in the present. Because he might be instrumental for getting us into outer space, he just doesn’t care that much if he upsets people, if he’s mean to people, if he harasses people. And then on the other hand, I think also, he seems quite sure that we live in a computer simulation. And I wonder if that doesn’t also affect his behavior, where he’s just like: Well, maybe none of this is really real. And so certainly, if you’re him, you might have extra reason to think this isn’t real. What’s the probability that you’re going to become the richest person on earth? Maybe the richest person ever? Maybe the most powerful human being ever in human history? That’s pretty unlikely. So you might think: Maybe, maybe I am in a computer simulation.

PM: It also gives you the opportunity then to dismiss the consequences of your actions, like: Oh, it’s a computer simulation, so if I choose to do this thing that creates a lot of harm for people, whatever. That’s it, right?

PT: It does, at least, perhaps open the door to trivializing the consequences of your actions. Because, I don’t know, these are just digital people; I don’t know if they’re real, if they’re actually feeling anything; maybe the simulator has some way of... who knows. There are all sorts of possibilities if you accept this premise.

PM: And his brother has said he’s really bad with people; I think that’s something he’s acknowledged himself. I think what you’re setting up here, and what you’re describing, I think it really gives us insight into the way that these people think. For them, for someone like Elon Musk, the threat that we face is from not developing our technology, from not colonizing space to allow the light of consciousness, as he says, to extend into other planets, and then continue on from there elsewhere, and then we can grow the population, we can ensure that if something happens to Earth, the human species continues on somewhere else. And for someone like Musk, we’re just downplaying all the actual threats and challenges that come with living on a planet like Mars, we ignore the cancer that we get from the radiation and all that because as long as we get there, that’s the most important point.

But then, on the other hand, we can look at this through the lens of an average person who does not think through a longtermist view, who is not the richest man in the world and doesn’t have connections to some of the richest men in the world, and who sees the actions of someone like an Elon Musk, who is distracting us from the actual problems that we face — who says that electric cars and Tesla are the greatest contribution to fighting climate change ever in the world. Even though that is not true at all, and his action is actually delaying us from addressing the actual issues with climate change and ensuring that we have climate justice, that we create a planet that can actually survive in the conditions that we’ve created, addressing world hunger, just ensuring a society that is fair and decent for everybody. It really seems that, as you described with technology creating these actual risks of human extinction, letting someone like an Elon Musk and these people who have these really odd beliefs about the future hold this kind of power actually creates a lot of risks, not just for those of us who aren’t the richest man in the world, but even for the human species as a whole.

PT: I very much agree. There’s just too much to say on this point. One thing that comes to mind right away is that one of the leading billionaire donors right now to longtermist causes is a 30-year-old named Sam Bankman-Fried — whom perhaps you’ve come across?

PM: On last week’s episode with Bennett Tomlin, if people have listened, we talked about Sam Bankman-Fried, and he also mentioned, if people don’t remember, that Bankman-Fried is an effective altruist who believes in accumulating as much money as possible to realize these sorts of visions.

PT: Okay! So sorry for not having listened to the previous show.

PM: That’s okay!

PT: But that’s great that he was mentioned. So he’s motivated by the effective altruist notion of earn-to-give. Literally, some of the EAs have argued: Go work for a petrochemical company, go work on Wall Street; yes, these are evil, but they’re really good ways for you individually to make a whole lot of money, and then you take that money and give it to charity. I don’t know — I guess there’s a certain kind of logic there. But anyway, he’s made his money from cryptocurrencies, and this is not my area of expertise by any means, but the point is that cryptocurrencies have a massive carbon footprint. I’m not going to remember the exact details precisely, but there was a study from just a few years ago that found something like: even if we were to become net-zero as a civilization next week, if Bitcoin were to persist, we still would not be able to keep temperatures from rising above 1.5 degrees Celsius. So this is a massive problem. My point, then, is that Sam Bankman-Fried is somebody who is trying to do good, but he’s ultimately involved in a Ponzi scheme that has a massive carbon footprint. So this would be a case, then, of somebody who is motivated by this sort of long-term thinking, but who might ultimately be doing a significant amount of net harm by contributing to climate change, and so on.

There’s, of course, lots to say about Musk and downplaying climate change scenarios that do not involve a runaway greenhouse effect. A runaway greenhouse effect would make Earth completely unlivable. It’s probably what happened on our planetary neighbor, Venus, as a result of water vapor rather than carbon dioxide. But while it perhaps could happen here, it seems to be very improbable. Consequently, catastrophic climate change — yes, it will be very bad, mostly for poor people — but we’ll survive. So you end up kind of minimizing these scenarios; you see that literally in interviews with a lot of these individuals. And then of course, as you alluded to earlier in the conversation, Musk dangled the $6 billion that would be needed to alleviate — was it extreme poverty or?

PM: It was hunger.

PT: Hunger, that’s right. So I can hardly express how upset that makes me. But I think from his perspective, which really does seem to be infected, if you will, by this sci-fi perspective, this longtermist framework, thinking about the future of humanity spread throughout the heavens in digital form and so on, the problem of hunger today is just really a minor problem. If you take seriously the idea that the non-existence of digital beings in the future, trillions of years from now, is just as bad as the death of somebody now, it really does follow that you shouldn’t be so concerned about global poverty; it’s just not a big deal; it’s a small fish. There are much bigger fish out there to fry, such as ensuring that these people come into existence. The problem with that, of course, is that if somebody doesn’t come into existence, they’re not harmed, because there is no person to be harmed.

PM: It’s incredibly immoral to say that there are people who exist today, but because I believe that there are going to be all of these digital beings a million years in the future or something, we shouldn’t actually take the actions that would help people today, because we might realize these lives that would not be recognizable to us as humans today. Maybe that’s okay; maybe that’s not a problem; maybe that’s like not recognizing something’s humanity because it doesn’t look like those of us who exist right now. But still, to say that something like that is of equal value to someone who is starving? Elon Musk lives in Texas; he used to live in California. There are homeless people and people who are struggling very close to him, and to say: That doesn’t matter, because I, as someone who is incredibly rich, need to get us to Mars and develop these Neuralink technologies and whatever to try to realize this future, is just really disgusting. And to have the influence and the power to make a lot more people feel that this is an acceptable trade-off, I think, shows a deep fundamental problem with the world that we’ve allowed to be created.

PT: This perspective on ethics shares fundamental similarities with certain approaches in economics. It really is kind of morality as a branch of economics, in some sense. And we are just these fungible little containers, to be multiplied as much as possible to fill the universe with as much value as possible. Maybe a good illustration of the underlying reasoning, the reasoning behind saying that it would be a greater tragedy to feed all the hungry people in the world but never realize all of these digital people, is a variant of the trolley problem. So you can imagine, in the standard trolley scenario, there’s a runaway trolley heading straight down the track. There are five people on the tracks, just oblivious, while there’s a little side track with one oblivious person; all of them innocent, all of them deserve to live. And you’re standing by a railway switch that you could pull. Most people say: Well, in this forced-choice situation, I guess I would; it’s a tragedy either way, but I guess it’s better that one person dies and not five. Not all philosophers would actually agree with that, but that’s a pretty common intuition. But now you can imagine a variant where there’s nobody on the track ahead of the runaway trolley, and there’s one person on the side track. As you see the trolley racing down the track, somebody shouts to you and says: I can’t explain the causal details now, it’s very complex, but I guarantee that if you were very smart, if you knew super advanced physics, you would understand that if the trolley continues straight, it will prevent five people who would have happy lives from being born, people who otherwise would have been born if the trolley goes off on the side track.

So then the question is: Do you pull the switch? And for a total utilitarian, absolutely. Because if you bring five people into the world who have happy lives, and you lose one, you get more total value than if you save the one who actually exists and fail to bring those five people into the world who would have had happy lives. From my perspective, this sort of gets at the crux of a fundamental difference between me and the strong longtermists and total utilitarians (those are very much bound up together). Once again, from my perspective, it’s atrocious if you pull the switch, even if you know with 100% certainty that five unborn people who would have had happy lives will never be born. If you’re unborn, you don’t suffer; I don’t think that’s a tragedy. There have been I don’t know how many people who could have been born in the past who never were. Nobody in their right mind is going to weep over them. And so for the longtermist, you just absolutely would pull that lever, kill the one living person, and ensure that the five people are born. That is the reasoning that underlies this view of: Should we help with global poverty? Or should we work to colonize space, become superintelligent cyborgs, upload our minds, and so on?
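As a rough illustration of the total-utilitarian bookkeeping behind that trolley variant, here is a small sketch; the assignment of one unit of value per happy life is a hypothetical simplification for illustration, not anything asserted in the episode.

```python
# Total-utilitarian bookkeeping for the trolley variant described above.
# The one-unit-per-happy-life scoring is a hypothetical simplification.

VALUE_PER_HAPPY_LIFE = 1

def total_value(existing_happy_lives: int, future_happy_lives: int) -> int:
    """Total utilitarianism counts existing people and merely possible people alike."""
    return (existing_happy_lives + future_happy_lives) * VALUE_PER_HAPPY_LIFE

# Don't pull the switch: the one person on the side track lives,
# but the five possible people are never born.
dont_pull = total_value(existing_happy_lives=1, future_happy_lives=0)

# Pull the switch: the one existing person dies,
# but the five possible people are born and have happy lives.
pull = total_value(existing_happy_lives=0, future_happy_lives=5)

print(f"Don't pull: {dont_pull} unit; pull: {pull} units")
# The calculus says to pull the switch, which is the conclusion Torres rejects.
```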

PM: What I was thinking about as you describe that, especially when you’re touching on the population question and the unborn question, is that there are a lot of things that these tech billionaires do and say that are, I think, incredibly problematic, but that seem to be accepted as something that makes sense by a lot of people — not so much by me — simply because it’s something that they would say and that they would do. We have Jeff Bezos, who is investing in this 10,000-year clock, which I feel can be seen, in a sense, as, yes, long-term thinking is good, but also as kind of an object of this longtermist thinking, because he too is saying, you know, we need to build these colonies in space so that we can realize a trillion humans who are living in these colonies. And if we stay on Earth itself, we’re going to be subjected to stagnation. I can’t remember the other word that he uses, but it’s essentially, you know, we’re going to suffer as a species because we won’t be able to continue to grow into a trillion people by inhabiting these space colonies and continuing to grow. And then Musk obviously talks a lot about the light of consciousness that he is trying to spread by allowing space colonization, and I think those terms are really weird when you think about them. There’s a lot wrapped up in that. You know, he talks about his businesses as a form of philanthropy. He doesn’t need to donate his wealth, he doesn’t need to pay taxes, because the actual businesses that he’s running are philanthropy for the human species, because they’re achieving these incredible things. And then he also says really worrying and concerning things about population, right, that people aren’t having enough kids; especially smart people, he says, need to be having more kids. There was that thing when Texas passed the abortion laws, where he wouldn’t say anything about it, and the governor said that he was fine with it. I guess, what do you make of the broader project that these tech billionaires are trying to carry out? And, you know, the consequences of that?

PT: I should also add that it’s difficult to know exactly what motivates these people. This system selects for people who have, let’s say, sociopathic, egomaniacal tendencies. Being capable of screaming at someone, firing them, not caring about the fact that those people will have lost their livelihood, that maybe they have kids and so on, and having it not keep you up at night: those are important qualities to have within our capitalist system if you want to be successful. And so it’s hard to tell the extent to which some of these tech billionaires are, in some deep way, motivated by the longtermist view, which is ultimately kind of an ethical view. It might be the case that the longtermist view is useful to them, because, again, it justifies things. So then, if you’re a sociopath and you still want to appear as an ethical person, there’s this framework over here that you can incorporate, and actually, it suggests that what you want to do anyway is the right thing. Also, the fact that there could be all these huge numbers of future people: from the capitalist perspective, those people are also consumers. Maybe it’s not just about maximizing value in the universe; it’s about maximizing the value you have in your own bank account. That being said, it’s really worrisome that there just isn’t a sufficient amount of reflection on things like space colonization, on the underlying drivers of this whole industry.

For example, there has been some work recently on the possible risks of space colonization, which for the longest time was simply accepted by virtually all futurists, all longtermists, and so on as something that would significantly reduce the probability of an existential catastrophe. So the idea is that the more spread out a species is geographically on Earth, the lower the probability of extinction, because any single localized catastrophe is not going to affect the entire population. The same thing applies to the cosmographical, not just the geographical, realm; so if we spread out, if we become multiplanetary, then we increase the probability of our survival. But there has been some scholarship recently, which is really very compelling, suggesting that colonies on Mars actually would really increase the probability of catastrophe here on Earth: these colonies eventually will become Earth-independent, their living conditions will be so radically different from ours, and it’s entirely possible that there might be modifications to the human organism, either through natural processes or by incorporating technology, that is, through cyborgization. On Mars, you ultimately get a kind of variant of Homo sapiens. Again, they have different interests, and so on. Eventually, they’re going to want their independence. What’s hard to imagine, for political scientists, is us creating these Earth-independent colonies (Earth-independent means they can exist without the help of Earth; they don’t need food to be shipped from Earth to Mars) and those colonies not eventually wanting their independence.

Once that happens, the situation may be very volatile within the anarchic realm of the solar system, where there’s no overarching government or referee that can mollify the parties and ensure that there’s peace between them. So multiple factors, from egomaniacal tendencies to greater profit and more consumers to maximizing value, are motivating these billionaires to pursue certain projects whose potential unintended consequences they haven’t really thoroughly thought through, and which could ultimately put humanity in a much worse situation than we otherwise would have been in if we had just stayed here on Earth. There’s just an endless number of points to make about this. But it’s very disconcerting. Again, a fundamental problem, something that really does sort of keep me up at night, is the fact that you have these individuals who have become, or been allowed to become, so rich and so powerful that they single-handedly will make decisions that will, in really non-trivial ways, influence what the future looks like for us. And that’s a really bad situation. Even worse, they’re influenced by some of these views from Nick Bostrom and others.

PM: Yeah, I feel like the future in space you’re describing sounds a bit like “The Expanse,” which is a show that Jeff Bezos apparently really likes and that he paid to ensure would continue to be created for a while on Amazon Prime when it was cancelled by Syfy; just interesting things that come up there. But I feel like the point that I want to go back to, to close our conversation, is just that these are people who are incredibly wealthy, who are kind of the elite of society, whether they are the billionaires who have an incredible amount of money, influence, and power, or these academics who came up through elite universities, and who really don’t have the same kind of concerns or troubles as much of the rest of humanity, and certainly not those of people in the Global South or people who are homeless, or what have you. They’re very separated and divorced from those experiences. And thus, they turn their thinking and their visions to things that might affect them, or things that seem higher and above the everyday concerns of everyday people.

I feel like the real risk, whether it is something that these people really believe, something that Elon Musk really believes, or whether it’s something that justifies the actions that he already wants to take, is that it creates this narrative and this ideology that can be weaponized, or that can be utilized, to then say: All of this suffering exists, and yes, it’s going unaddressed because of the way that I’m choosing to deploy my capital and my power, but that is justified, because we are able to look beyond the everyday concerns of you who are working your job in the Amazon factory and just trying to get by, or in the Tesla factory, suffering the horrible racism there but just trying to eke out a living; we don’t need to be concerned with those petty concerns, so we can think more broadly, in the long arc of human history and human civilization, to actually serve the broader species instead of just thinking about the everyday. And I guess it provides them with an ability to then justify their massive accumulation of wealth, their disinterest in human suffering as it exists today, and these other problems that we face, and by doing that it not only perpetuates suffering, but creates a whole ton of risks for human society as it exists today, and for all of the billions of people who inhabit the planet.

PT: I think that’s really well put. Anybody who happens to glance at any of the articles I’ve written might see that there are numerous cited cases of super wealthy individuals who specifically say that climate change isn’t an existential threat. Therefore, the implication is, it’s not something that should be prioritized, unless it’s a runaway scenario that would cause our extinction. Otherwise, yes, it’s really bad, we can all agree on that, but it is not a top priority. And so this is a particularly clear and salient example of how this particular mode of thinking leads to powerful individuals embracing views that are harmful and unjust, because they’re disproportionately harmful to people in the Global South who had little to do with climate change. Bangladesh is responsible for, what, less than 1% of all carbon emissions? It’s probably much less, I can’t remember the exact figure, but Bangladesh will be decimated; millions of people will have to migrate and so on. So I feel like that’s just a particularly egregious case of this view justifying a blithe attitude towards non-runaway, but still catastrophic, climate change. It’s a real shame. It’s very upsetting, and it’s worrisome moving forward.

PM: I completely agree. It’s a huge risk, and that’s why people need to be more aware of this. Phil, I really appreciate you taking the time to chat. I’ve really enjoyed reading your work on this and certainly I’ll have links in the show notes for people to check it out. So thanks so much.

PT: Thanks for having me. Appreciate it.
