Pronatalism and Silicon Valley’s Right-Wing Turn

Julia Black


Paris Marx is joined by Julia Black to discuss tech billionaires’ embrace of pronatalism and how it’s part of a broader rationalist project to remake society and protect their privileged positions.


Julia Black is a senior correspondent at Insider and previously worked at Esquire and Vox. Follow Julia on Twitter at @mjnblack.

Support the show

Venture capitalists aren’t funding critical analysis of the tech industry — that’s why the show relies on listener support.

Become a supporter on Patreon to ensure the show can keep promoting critical tech perspectives. That will also get you access to the Discord chat, a shoutout on the show, some stickers, and more!



Paris Marx: Julia, welcome to Tech Won’t Save Us.

Julia Black: Thank you so much for having me, a big fan of the show.

PM: Thanks so much, obviously, always appreciate that. You have been doing a bunch of fantastic work over at Insider recently, with a number of pieces digging into this aspect of the tech industry that we’re pretty interested in here at the show. You’ve written major pieces on pronatalism, of course, looking at Elon Musk and other tech people’s interest in that ideology, I guess, or that approach to reproduction; on Lex Fridman’s podcast, which I’m sure is known to a lot of people who are interested in the tech industry because of his relationships with many of these powerful people; and a recent article on OpenAI CEO Sam Altman. I’m sure that we’ll get to many of the things mentioned in these articles. But before we dig into those, I’m wondering: how do you approach these pieces? What are you trying to understand when you’re looking at the people in these movements within the tech industry, and what do you want us as readers to learn about what’s happening in the tech industry right now?

JB: I think that fundamentally, my interest has always been: What are the conversations that are happening behind closed doors in these hyper-powerful spaces, among very wealthy people? Why don’t they want us to know what they’re talking about behind closed doors, and how are the effects of those conversations going to trickle down to the rest of society? I’m definitely really interested in individuals and personalities and trying to understand: Who are these people who are shaping the future — or at least think they are?

PM: That does seem to be a big part of it. I think you can always see the tech industry as shaping aspects of society, in many ways. But it does feel like, especially recently, there’s this renewed interest in shaping a lot of, not just the society that we live in, but also how we think about how we should live, in a way. I was thinking, as I was rereading your pieces and going over these things, about Marc Andreessen’s piece back in 2020, “It’s Time to Build,” or whatever the hell that was called. Where he was making this forceful argument for Silicon Valley to get much more involved in many different aspects of the society that we live in. It seems like many of the people that you were talking to, and that you are profiling, are interested in this in many different ways as well. They were explicitly trying to think about how human society exists right now, and to think about how they can change it or reshape it so that it better fits with how they think it ultimately should operate. Usually, that’s a way that puts them in control of many more aspects of what is actually going on.

JB: One hundred percent. I think that something that frightens me sometimes is to feel like their ambitions are growing. These people want to now colonize more and more of our minds, colonize more and more of the earth, of space, even. Get involved in politics in unprecedented ways. It really doesn’t seem like designing apps is enough for them anymore. It seems like these figures — especially Elon Musk — really want to have control over society, over this long-term future they talk about. It’s just these growing ambitions and like: Whoa, I think we need to take a step back and make sure that we’re all aware of this, and we’re having conversations about this. So that’s what I was trying to do with my work: try not to put too much subjective value judgment on it, but just put it out there and make sure everyone knows what they’re talking about. Because I do think their ambitions to change the way the rest of us live are growing.

PM: Absolutely. There’s obviously one approach to this, where your values are stuck right in there. Then there’s another approach, which I think you do really well, where even just laying out what these people are saying and what they’re doing tells you a lot about the ambitions that they have and the world that they want to see and create.

JB: Obviously, there’s a big debate in journalism, always, over how objective we need to be. I am a pretty traditional reporter, I think, in that I just try to talk to a lot of different people and put the facts on paper and let people decide. I also think it’s ridiculous to pretend that we’re not humans with ideas and values ourselves that get projected onto that. I think that, just as it’s fair for Elon Musk to have his values projected onto his projects, the rest of us get to say: What about our values? What about the way I want to live my life? That does naturally happen a little bit.

PM: It makes perfect sense. Obviously, your values are also going to affect the types of stories that you want to tell, the topics that you’re interested in and want to pursue. I think that shows in the work that you’re doing, obviously, which is fantastic and essential work. You’ve talked about some of these people — and some of the approaches that are driving some of these people — and on the show, in the past, we have talked about effective altruism, and long termism. So do you want to talk to us a bit about those particular ideologies and how they link up to the pronatalism that you were also describing, late last year in a long feature that you wrote on that movement, I guess, and how it’s gaining traction among tech circles?

JB: Definitely. What’s been really surreal is to see the rest of the world become aware of this stuff I’ve been sounding insane talking about at dinner parties for about a year now. I’m sure you’re familiar with the work of Timnit Gebru and Émile Torres, and maybe some of your listeners are too.

PM: Both have been on the show in the past, yeah.

JB: Yeah! So they’ve coined this term that I’m really interested in. I think it’s called TESCREAL (I think that’s how you pronounce it as an acronym). So that goes: transhumanism; extropianism, which is like anti-entropy and improving the human condition; singularitarianism, which is the idea that we’re going towards the singularity; cosmism, going to space; rationalism; Effective Altruism; and longtermism. Those last three are all connected as these ideologies that no one had heard of a year ago but are suddenly super pervasive in these Silicon Valley communities. I think in a way a lot of this is tied up with tech’s sharp veer to the right, which no one really noticed until recently. I frankly didn’t really notice it until a year ago. My life changed, in terms of the way I see the world, about a year ago, a little over a year ago now, when I first got this tip about Elon Musk’s pronatalist objectives.

I talked to a source in Austin who basically said to me: You know, Elon has a lot of kids, everyone knows that, but he has more than the public realizes, and it’s part of this worldview, this agenda. Simultaneously, he was out there tweeting it. As much as this sounded crazy, and I almost didn’t even pursue the lead, he was on Twitter saying these words that I now know to recognize as buzzwords about civilization, and he talked a lot about fertility rates and population. It’s all tied into this longtermist worldview. These people talk about preserving the light of consciousness and, really, these sci-fi ideologies. They seem different, but they’re all interconnected. So once I got onto that track, I decided to do some reporting. Pretty soon I found that Elon has these twins with one of his employees, Shivon Zilis.

So I expanded my understanding from there, talking to more and more people involved in this movement, and realizing how methodical it is. Again, how this all connects to this ambition of global, or cosmic, domination. There are some really fascist-leaning ideologies in here — even, in a weird way that’s hard to explain, it ties back to the free speech absolutism and what Elon is doing with Twitter. And it’s about elevating this elite class, and you might do that through having lots of kids who perpetuate your own DNA. Yeah, elevating this elite class to basically be in a position where they have dominion over the rest of us — it really does sound straight out of “Dune” or something, which is, again, one of their favorite sci-fi novels. It’s this fascinating subculture that might be niche, but again, given the way that a few individuals are able to exert influence on the rest of us, you’ve got to start paying attention when their ideas are coming straight out of sci-fi.

PM: Absolutely, and niche, but growing in appeal, especially in these elite circles. I don’t know if you saw this, but Marc Andreessen in his Twitter bio now has that he is a TESCREAList. So he’s adopting that term and explicitly saying that he ascribes to these ideologies.

JB: Oh my god, you know what? Marc Andreessen blocked me so long ago! I don’t even know why! I’ve never written about the guy, but I hear he blocks a lot of people. I have not seen that, but that’s wild. I’m gonna have to go look into that.

PM: I saw a screenshot, and I’m not blocked on my podcast account, so I can get in and, like, double-check that.

JB: Wow. I mean, that’s really interesting to hear because, again, I think that in talking about this, and reporting on it and writing about it, it’s forced them to come out into the open with it a little bit. At the end of the day, that’s all I can ask. All I can ask is that they explain themselves to people and be upfront and honest about it. So I think these ideologies had been spreading, and it’s just that now awareness of them is growing. But that’s wild.

PM: I do think that the more they are open and upfront about it, the more that we can then have a conversation about what they’re actually trying to do, rather than just the PR angle that we usually get on a lot of these stories and a lot of what they’re actually involved in. That’s one of the good things about Elon Musk’s turn recently: instead of talking about him as the man building the future, and saving us from climate change, and all this stuff, we can have a real talk about his politics and how that intersects with the various things that he’s been doing. Obviously, you talk about how he’s been tweeting about this population stuff, but he’s also been talking about it for many years before that. It’s included in his 2015 biography. You note that you had a source tell you that, even way back earlier than that, he was talking about Genghis Khan and how his DNA spread throughout the human race, and Elon Musk was someone who was interested in those ideas. We know that Jeffrey Epstein was as well.

JB: I mean, I’ve now spoken with quite a few people who’ve known him for a long time and said that this has been a long-term interest of his. I’ve also spoken with his father, on the record, and he said some pretty shocking things to that effect, such as: Why should my son be any different from a monarch? He should be spreading his seed, he should be perpetuating our superior DNA. Again, it’s like: Okay, I just want to make sure you’re willing to say this all on the record, out loud, for everyone to hear, and people can make of that what they will, but let’s be honest.

PM: That’s absolutely wild.

JB: His dad is a piece of work and, clearly, has influenced his formation as a person quite a lot.

PM: I do think it’s fascinating. Elon Musk is often very open about his difficult relationship with his father and often dislike of his father, but there are also so many ways where you can see these two men being very similar to one another and holding a lot of similar views.

JB: Now that Elon has however many children, because I don’t think we really know, you’re starting to see some of the patterns recreate themselves. So, that’s all I’ll say on that.

PM: Absolutely. But drilling down on this just a little bit more, there’s an aspect to this that I’m really interested in, because, obviously, they seem to very clearly think that because of their position in society, because they are wealthy, that means they’re also smarter than everybody else — that IQ is inherently linked with wealth. We know that IQ is quite a racist measure of intelligence that was created many decades ago and that has explicitly been applied in this way. But they very much use it to argue that they are smarter than everyone else — that they deserve to be in the position that they’re in — and that they want to maintain that dominance into the future.

In the past, these tech billionaires were trying to see how long they could live, themselves, and now as they’re getting older, they’re thinking about their future generations, and how those generations are going to keep that position in society. We know that they both have a lot of interest in IVF, a lot of their children are conceived through IVF. They’re also very interested in choosing genes and investing in companies that have a lot to do with genetics and the genetics of children or embryos, or whatnot. Can you talk to us about that aspect of this and what they’re trying to achieve through all of that?

JB: Yeah, definitely a few different layers there. Just to start with the IQ thing: absolutely, I think that’s correct. Again, according to sources who know Elon, for example, they tell me he absolutely believes your wealth is a direct reflection of IQ, and therefore, with the orders of magnitude of wealth that he has over the rest of us, he also believes he is basically that much smarter. So that’s terrifying.

PM: It is, especially when you see his tweet.

JB: Well, what’s so weird, though, is that for such smart people — and I am willing to grant that they are really smart — something that does drive me crazy sometimes when I talk about Elon Musk is I’ll refer to him as a genius or something and people go: Oh, come on, he’s not a genius. And I’m like: I’m willing to grant, when it comes to engineering or rockets, he’s really, really smart. The guy’s clearly not an idiot. But there are certain areas where you realize that that IQ, whatever that means, doesn’t apply to every area of intelligence. It certainly doesn’t always apply to emotional intelligence, for example. But also, just talking to geneticists around this pronatalism story, there’s a lot of debate over whether any of these technologies that they’re counting on actually work.

Anyone who’s studied genetics kind of understands that just having two super-high-IQ parents doesn’t guarantee you a super-high-IQ kid. So there is some fundamental misunderstanding of how the science is going to work out. Not to mention all the social factors, like: How are kids going to feel about being treated like a project? How are they going to feel about being one of 13 children? All those things might influence who they turn out to be. But I think, at the end of the day, my best theory for what they’re really doing here involves a phrase that people in this sphere throw around a lot: the Overton window. It’s basically this idea in the social sciences that if you expand the Overton window, you’re expanding the realm of ideas that are socially acceptable to talk about.

So I think that, in a way, what they’re really doing is just getting everyone more comfortable with talking about this eugenics language. So whether or not the tech is there yet doesn’t really matter, because maybe one day CRISPR gene editing, for example, will be legalized. By that point, I think that certain people would love for society to be at a point where we’re comfortable talking about superior IQ, or picking eye color, or, God forbid, picking skin color — something that we may be nearing capabilities for, technologically, but society is still not comfortable with any of that stuff. I think it’s almost like a propaganda movement at this point: it’s less about whether or not the tech is actually there, and more about helping people feel comfortable with the idea of using these technologies to “improve the general population.”

I actually was listening to an episode recently with Malcolm Harris, and he noted the long history of eugenics in the US, obviously, but also specifically at Stanford and in this tech world — how long eugenics has been part of the agenda. Again, it’s not that the actual technologies are reemerging so much as the ideas, and the acceptability of such ideas, are reemerging. And that’s where this all comes back to the way that these people are newly interested in politics, really interested in the social aspect of what’s going on in the world, and wanting to influence that.

PM: I think that the fascinating thing is how, as you’re saying, I went back recently and looked at some of the comments from some of these people who were very clearly geneticists at Stanford, and the language they were using is so similar to the language that we hear today; it’s being completely repeated. So what you’re saying about it being a propaganda movement, to try to make us more comfortable with and familiar with these eugenicist terms — and arguments and framings of these issues — is really spot on. Because that does seem to be what it is about. In particular, at a moment when I think these people are feeling that their position in society is being challenged in a way that it hasn’t been for a long time, they are trying to create a narrative justification, or an ideological justification even, for why they hold the position that they do in society, and why they shouldn’t be removed or challenged from that position.

JB: One hundred percent. This is also where it all comes back to that TESCREAL thing, where all these seemingly disconnected ideologies actually have a lot to do with each other, because what it all comes down to at the end of the day is the rationalism element, which is stripping everything of emotion and fairness. What is that? Who needs fairness? Like, all decisions should just be made as if by a computer, for the sake of pure optimization. And once you start seeing the world that way, it becomes a lot easier to think about inequality. Again, these are systems we have that people are starting to challenge, but certain people in this movement might want you to actually think: Okay, that makes sense, because I’m inferior and you’re superior. You get more than me, and to fight that would just be to fight rationalism. I do think that, at the end of the day, that’s the single ideology that kind of encompasses the rest of them: just wanting to optimize everything and strip everything of feelings. It’s frankly like, I don’t know where that phrase came from, Fox News or something: Facts don’t care about your feelings.

PM: I think that’s Ben Shapiro.

JB: There you go. He may not be a tech guy, but I am willing to bet there are conversations being had between people like Ben Shapiro and people like Elon Musk, if not those two people specifically. So it’s kind of this facts-don’t-care-about-your-feelings way of thinking that can be used to justify a lot of bad behavior.

PM: Definitely. I want to pick up more on that linkage between right-wing politics and what these people are doing. But I want to stay on that rationalist point for a second, because I was really interested in this as well, seeing it repeated through the pieces that you had done: how there’s this real desire and real interest in quantification. Everything needs to be quantified. We need to be able to track how everything is doing. You can see this broadly through Silicon Valley, but particularly in these ideologies. Effective Altruism is about: How do we make sure that our philanthropy is having the most effective outcome possible? Longtermism is like: How do we ensure that we maximize the number of people who are doing well, well into the future? We need to not think about the particular issues that we’re motivated by right now, but we need to think on a much longer timescale, even if people are suffering today. Well, what about all the people in the future? We could protect them.

Then, with pronatalism as well, some of the people you were talking to were saying how they are tracking their children from the time that — I guess even before they’re born, because they’re looking at these genetic profiles, or whatever, that are done of the embryos and choosing them based on that. Then tracking them as they get older, and saying that after a few generations they’ll have all this data on their family lineage and all this stuff. What do you make of this whole drive to remove any social or even moralistic thinking, and to think so purely about numbers in terms of all of it? I would say sometimes they even use this quantification to act like what they’re doing is moral.

JB: It’s the single reason I’m most fascinated by all of these people, and also the single reason I’m most alienated from them. I cannot relate to this, just from my personal point of view — that ability to strip everything down to data and to numbers and to metrics. To me, that leaves out so much about the human experience that makes life wonderful. And at times, I find myself feeling kind of sorry for my subjects, because I have this great personal life that I feel like I keep very separate from all of this tech stuff that I report on. It’s so far from this world, and sometimes it feels like they’re coming for these parts of life that should be untouched. I was emailing you about this phrase that’s been going around my mind: Soylent world.

I just feel like we’re moving towards living in Soylent world, where everything can be reduced to efficiency. Sure, Soylent might be the best way to deliver calories in the most efficient manner, but I don’t want to eat Soylent for the rest of my life. I want to eat burgers and Chinese food and the wonderful things that may not be good for me. And it just feels like they’re coming for so many areas of life without this understanding of what makes humans tick. You see it applied to dating, romance — like, people are talking about how you can use AI bots to write your dating app lines for you. And it’s just a fundamental misunderstanding, I feel, of what people want out of life.

I was listening to your episode about Elon and Twitter. Something that’s been really amusing to watch play out is him thinking that you can apply this kind of engineering thinking to this product, but Twitter is not really a tech product, it’s a people product. It’s about understanding consumer behavior, and what people want, and what people enjoy. And so it can really backfire once you start stripping out that irrational stuff. I think he’s paying the price with Twitter. It’s not working, because he’s assuming that everyone thinks like him, and they really fundamentally do not. Once you start applying that thinking to society at large, and politics, I think there are a lot of ways it could backfire, not only for all of society, but even for the people themselves. I think they would be wise to listen to some advice from people with higher EQs than them, even if they have a higher IQ.

PM: Julia, how dare you want to do things in life that don’t just make you the most productive as possible?

JB: I mean, it feels that way sometimes when I read about their approach to life.

PM: You see it reflected in the hustle bros and people like that, who are getting more attention on Twitter these days; we know who they are and we know that perspective on these things. But as you were talking there, I was also thinking about how this also affects our approach to addressing issues in society, beyond the personal experience. It feels like the constant drive with policy and governments now, as well, is that we need to be able to collect data on everything so that we can understand what is actually going on. If we don’t have data, then we can’t possibly think of a way to address a problem.

It seems like not just, as you’re saying, that the personal life needs to be tracked, and everything about it needs to be known, and you need to be maximizing your personal productivity — but also that this view of Silicon Valley that has taken hold over the past couple of decades then filters out to the rest of society, where everything has to be datafied. Everything has to be absorbed into this logic that they have. It degrades everything around us, because not everything can be captured by that form of knowing, I guess.

JB: I think once you start thinking through the political implications, it gets frightening, and obviously, Silicon Valley has been very fixated on what’s happening in San Francisco recently as this microcosm for America, and why liberal values are bad for America. I do think that at a certain point, everyone can agree on the problems that we’re facing as a society. No one wants to see homelessness and poverty and those kinds of struggles, but what’s interesting is that I think Silicon Valley is chomping at the bit to apply their tech-thinking to solving these problems.

Again, without realizing that these are very human problems. I have a lot of friends who work in that kind of world and social services, and you cannot deal with that stuff on a data level; it’s so complex. It’s just the idea that you would try to solve society’s ills with technical data points, to me, seems misguided. That said, I do wish that we could see more cooperation and collaboration between government and tech. I think it would do both worlds a lot of good to have a more open line of communication, but I do not want to live in a technocracy. It could get really nightmarish and fascist, very quickly.

PM: You can see how divorced from reality a lot of these people are. As I was saying before, you see in Andreessen’s argument, “It’s Time to Build,” how they want Silicon Valley to serve this role that you’re talking about, where they’re much more involved in many other aspects of society, without really recognizing how their efforts to alter and change and improve the physical world haven’t really worked out over the past 10 years or so when they have tried to engage in that way. Then you also talked in your piece about Sam Altman about how, I believe, he hasn’t been to a grocery store in four or five years. So there’s this very basic level of understanding of what most people experience that just isn’t there.

JB: This was a really interesting issue to engage with Sam on. I think he did surprise me, in that he sort of agreed that tech should not be left to rule society, and he is begging for more regulation around AI, for example. That said, I tried to make this argument to him, and in my piece, that as long as they are stepping into these roles that are so powerful and so influential, they kind of owe it to society to almost act like politicians in their communications: they owe it to us to explain things, and they owe it to us to take feedback and have this be an open discussion about how these technologies and products are going to shape our lives. Frankly, I found it pretty troubling that he really felt unable to articulate, after being pushed several times on it, just how AI is going to change our lives.

I really wanted him to be able to spell it out, and I tried to find different angles to explain it to him. I was like: Okay, picture whatever you want to think of as your average American: a 40-something-year-old mom of three, making $50,000 a year, whatever. Pick whatever person, and just try to talk to that person and tell her what her life might look like in 10 years, and how it might be affected by your products. This question just blew his mind. He just had no answer. The problem with that is that in their PR for what they’re doing, they say AI is going to change the world, it is going to yield radical abundance, all these meaningless phrases. So it’s like: Okay, break it down for me, how’s that gonna work? Is she still going to have a job? Are her kids still gonna be going to school? What does this world look like that you’re pitching? Because you’re just using these very vague terms right now, and you’re promising it’s going to be so fantastic. Unless it kills us all, by the way, which I always add.

PM: One or the other.

JB: Well, I just don’t think you can sell your product that way and get us all to accept these products into our lives so willingly, without being able to spell it out for your “average person.” What is it actually going to mean? When it comes to AI right now, I think a lot about where we were with social media maybe 10-15 years ago. I remember conversations about social media where I so fundamentally did not understand what was coming, and I don’t think anyone did. I was a teenager and in my early 20s, so it makes sense that I didn’t, but I really don’t think anyone did, except the tech leaders who were having these closed-door conversations that I’ve been talking about.

So they knew that it was all about data gathering; they knew that it was about these algorithms that could change behavior and get us to spend more and more time on our phones, on these platforms. But I remember looking at Instagram the first time I got it and going: I don’t understand how they’re going to monetize this product. I just post pictures. It’s so fun, what are they getting out of me? Of course, the answer is that they’re getting so much out of me. Now I’m an addict, and I’m on there shopping for things I don’t need. I’m on there completely changing my tastes and preferences and political views because of what I’m seeing on a day-to-day basis.

So I wonder how society would be different if we’d actually been informed of the risks and consequences before we signed up for Instagram, before we’d gotten ourselves addicted. I feel like what’s happening with AI right now is that there’s this refusal to spell things out and to actually make concrete what’s going to happen to people’s lives: how it’s going to change our brains, how it’s going to change our jobs, how it’s going to change our daily lives. And so, I don’t know, I feel like we’re all being opted in whether or not we want to be. I wish we knew a little bit more about how that’s going to shape things in 10 years, so it would be nice if we could get some communication on that.

PM: It’s a great point, because maybe one thing that I would see as at least a bit more positive in this moment — versus maybe back then, with the introduction of social media and how all these apps kind of exploded in the early 2010s — is that I feel like there is a bit more of a recognition that all of the promises of the big companies are not going to work out as they’re saying. As you’re talking about, Sam Altman and OpenAI are saying a lot of things about what ChatGPT and these AI tools might mean for us. I feel like the critical voice, the questioning voice, saying: Hold up now, what is this actually going to mean for us — rather than just taking the PR at face value — is a lot more prominent in the conversations today, as is the notion that regulation is necessary. Obviously, there’s a question about what regulation is necessary and what it would look like, but at least that is there in a way that it wasn’t so much earlier on, when we were more easily duped about what these potential technologies were going to be and what they were going to mean for us.

JB: I would hope we’ve learned some lessons from this social media fiasco, and maybe there’s reason to be optimistic. Again, that’s why I think the work of journalists is important, to get this awareness out there. But I don’t know. Then again, every time I talk to anyone in AI who knows what they’re talking about, I come away from the conversation just petrified. Even this morning I had coffee with an AI founder, who was a really nice guy, and he was really optimistic, as it is his job to be. I asked him: Okay, what do you think it means for people? And he said: I think it means augmented humans. I’m like: Okay, what do you mean by that? He’s like: We already have some of that. We have Google Translate, we have this ability to speak other languages using this technology, even if we don’t inherently have that skill.

I know, for me personally, Google Maps is a big one. I love that I have a terrible sense of direction and yet can get anywhere. So I think that’s one positive way to look at it. But then by the end of the conversation, he was conceding: Yeah, we do have this open source approach, and it’s definitely possible that there could be bad actors who use it in really bad ways, and yeah, misinformation is going to be really bad. I don’t know, we just have a lot of weighing of the benefits and the risks to do. As you say, maybe regulation can help with that. But again, I don’t know that I totally trust the competence in Washington. Like that meeting of the AI minds with Joe Biden and Kamala Harris. As if that was a productive conversation! What could that have possibly achieved? I do not have faith that that’s making a difference. It kind of feels just for show.

PM: Yeah, getting all the CEOs of the big AI companies together with the top of the government is probably not the best way to learn about the proper approach to these things.

JB: Like, Biden stopped in for three minutes for a photo-op. Actually, first of all, don’t get me started on gerontocracy issues and the fact that we’re going to have an 80-year-old president no matter what. Like, that’s who we trust to understand AI?

PM: Absolutely. I think what you’re saying there about the potential for augmented humans brings up something else, because in the Altman piece you were also talking about — and of course, this is something not just with Sam Altman, but many of these people — how they would like to see, or imagine that we’re going to see, more of a merging between human and machine. I was talking with Emily Bender about how Sam Altman has tweeted out that we’re all stochastic parrots, relating our intelligence to the intelligence of a large language model, which is degrading human intelligence, quite clearly. But you also hear people like Altman and Elon Musk talking about: Maybe we live in a simulation, maybe this is all just a big computer. It seems a bit wild on one hand, but on the other hand, you can see that if you think that humans are machines, and if you think that we’re already in a computer, then maybe the stakes are a bit lower, in that sense, because it’s not all really real anyway.

JB: 100%! And this goes to show how these themes are all connected, because this goes back to the beginning of the conversation. What you didn’t mention about longtermism is that it’s not just about saving future lives, it’s about saving future simulated lives. There’s this idea that a flourishing human future could enable us to upload our consciousness to microchips that will one day be floating around space after Earth has been destroyed by an asteroid. We owe it to those floating microchips to enable their ultimate happy life. That’s how crazy this sounds when you really dig into it. To me, that is not life. But that’s the same way I feel about Mars. It’s another reason that Elon’s whole ideology is so alienating to me. I do not want to live on Mars; I do not want my children to live on Mars. I think Earth is pretty great, and we should maybe focus on saving it. It just goes back to this idea that you have to dig into the personalities and the worldviews of these people who are shaping these technologies, because they’re really alien to your average person, and it’s important to expose that.

You start to realize they think a simulated life is the same as a “human life.” They think that a brain chip interface is a great idea. They think that an optimized diet and an optimized sleep schedule and an optimized daily life are the best way to have a happy life. But value systems really matter, so, again, I just find that hard to relate to when I look at my own life, in which my most wonderful moments are the ones spent with friends, loved ones, family, very much outside of technology. I do not want technology integrated into every aspect of my existence. So I hope that in exposing some of those preferences of these people in power, we can start to push back on them a little bit and say: Well, maybe the rest of the population doesn’t want to live your way. And that’s fine if you do, but I don’t think you get the right to enforce this hyper-technological lifestyle on everyone.

PM: You could just schedule your socializing time. You have a block of 2.25 hours for socializing, and you can fit it in there. You need a bit of that to make you more productive.

JB: You know, it’s all about balance. Of course you should have a healthy diet, and you should have a decent sleep schedule. But pure rationality is not the way to a happy life, because you’re going to want to eat that burger, you’re going to want to stay out late and have your fun. Even just to come back to the question of love: it’s so interesting talking to some of these rationalist types — they fundamentally don’t believe that love exists. They will reduce it to some hormonal delusion that is purely for the purpose of biological propagation. I don’t know, that’s no way to live. It just gets really depressing.

PM: I feel like you see that with Musk, but another person you’ve written about is Lex Fridman. You very much see that approach with his lifestyle, with the diet that he’s talked about. He also seems to take that approach to love, or to just step back from it altogether. I don’t know the guy very well, but that was what I got from your piece. He also links this tech politics with right-wing politics really well, or he’s a good entry point to talk about that, because obviously we have this large right-wing media ecosystem that exists. You have the Fox News piece of this, but you also have digital media organizations and a whole range of YouTubers and influencers who are pushing this right-wing politics.

But you also have this coming together of that right-wing politics with this increasingly right-wing tech politics as well. I feel like Lex Fridman’s podcast is one of the places where those perspectives find one another. Do you want to talk to us a bit about that approach, and that linkage, and how tech has taken this lurch to the right, and how that’s reflected in their politics, but also in the media that they consume and promote, and all these sorts of things?

JB: It definitely applies to all of the stories I’ve written recently, and this whole crowd, but Lex is a great example. Just to start with the love question, Lex’s approach to love is very interesting. He does display this intense romanticism at times and talks a lot about the Russian soul. He’s this hopeless romantic. But at the same time, he kind of seems to be creeping towards this advocacy for the sex robot era. This goes back to that question of whether or not these people see humans and machines as inherently distinct: he talks a lot about forming very intimate relationships with robots and AI algorithms, about how his ultimate dream is to have this AI startup that creates a personalized algorithm for every human, which is going to be integrated into everything in our lives. He goes off and waxes poetic about the relationship between your AI-enabled refrigerator and yourself, and how the refrigerator is going to be there for you when you have these late night ice cream binges.

It’s not going to forget that — it’s going to have a memory now, and you’re going to feel connected to that refrigerator. He’s got these robot dogs that he forms very intimate connections with, and even these Roombas. He did this experiment where he trained Roombas to scream, I think, and then tested human reactions, like: If you hurt the Roomba, are you going to feel bad? Because I think his argument is that we should show machines the same kind of empathy as we show humans. So again, it starts to go to pretty weird places. And then you combine that with the profile of his listeners, which leans a little, I don’t know how to put this: there are a lot of young single men who listen to his podcast, and he talks a lot about being unlucky in love and not knowing how to find connection.

It gets to this Overton window question of: Are we just expanding what’s okay to talk about in terms of forming connections with bots? You’re already starting to see a ton of this with AI bots; you have all these men who are interacting with their AI girlfriends. And then, when the company switched the algorithm, the girlfriends, these men said, were like they’d been lobotomized. These men were heartbroken. I was recently talking to a psychologist who said that she’s been seeing a lot of patients who are forming these connections with AI chatbots: teenagers who don’t want to hang out with their friends anymore because they’ve got their chatbot friend, or a married man who was seeking romantic marriage advice from a chatbot, and he was really benefiting from these conversations. And so again, as a society, whether or not we’re aware of it, we’re getting more comfortable with these ideas of forming connections, romantic relationships — and, I think, possibly soon enough sexual relationships — with robots, and I think we need to talk about it.

PM: It’s wild, because we know that these things aren’t intelligent. They’re not actually talking back to us in the way that they might make it look or the way that we might imagine. You’re not actually getting advice from the chatbot, it’s not actually your friend. It doesn’t remember your conversations or anything, it’s just responding to your prompts.

JB: Again, if you dig into someone else’s ideology and value system and way of thinking, I think a lot of people would disagree with you there — maybe you and me sound crazy to them — but I think that they would argue that if the algorithm is sophisticated enough, and if the AI is convincing enough, what is the difference? The test a lot of people give is: What if you were told that you’re living in a simulation right now? Like, it feels pretty real, but what if the tech is just that good? Would you want to end it, even if your life turned out to be a simulation? Don’t get me wrong, I think it’s nuts. But this is why I think it’s important to try to at least put yourself in the mind of someone who thinks that way, and then ask yourself: How much power do we want to give a person like that? And the answer lately seems to be a lot.

PM: It’s shocking and scary, really, when you think about the full implications of it, as you’re laying out.

JB: Actually, I realized I never really got to the conservative question there. I think you’re starting to see some unlikely alliances form between this tech world and this far-right conservatism, especially if you look at the pronatalism question, or you look at people like Lex Fridman, who’s a pretty right-leaning podcaster. This just keeps coming up again and again. Frankly, as a woman, I’m pretty freaked out, because I think that we are returning to some pretty traditional ideas of gender roles in a lot of these conversations. I mean, not to mention the anti-trans sentiment in this world. There are these weird, unlikely connections where you would think: Why do tech people care about trans issues? Stay in your lane. But it all ties back to this hyper-rationalism, like: Well, that’s the way biology “works.” That’s the way it’s always been, so that’s what should be.

It’s just, you start to see people meddling in stuff that I would rather they keep their hands out of. If you really break down the pronatalist ideology, whether or not it’s convenient for them to say out loud, the reason fertility rates are dropping is that women have entered the workforce. That’s what happens in modern developed societies: there’s more gender equality. So if you’re saying women need to start having 13 babies, again, I have questions about how that’s going to play out in terms of gender equity. It was really interesting to dig into the backstory, a little bit, of Shivon Zilis, who was one of Elon’s executives. She still works at Neuralink; she was on the board of OpenAI; she was this really top, rising figure in AI. I’ve spoken with a lot of people now who know her who say: I haven’t seen her in two or three years. She just dropped off the map; she entered Elon’s orbit and cut all ties.

And by all reports, their children were conceived with IVF, which fits into this narrative that this was more of a project than a romantic relationship. So I’ve talked to women in tech who were really sad about all that and would say: What does it say about how we’re viewed in the workplace? Are we just breeders to them? This doesn’t really make me feel respected in these tech workplaces. So I definitely think that gender is one of the biggest social issues that’s going to be affected by this turn towards conservative values in tech, and it’s something we should all keep an eye on.

PM: Absolutely, you even see it in a recent interview. Musk has been saying a lot of things, making explicitly anti-trans statements, even though he has a trans daughter, saying that this is communist indoctrination coming from schools and all this kind of stuff. But he also did an interview with Fox News recently where, based on what he was saying, he seemed very critical of birth control and women’s access to abortion, at a moment when there’s a pretty strong right-wing project in the United States, in particular, to crack down on access to abortion, and I imagine birth control as well.

But the right has also taken a particular interest in trans issues, and has been passing bills across the country to make life hell for trans people. The tech industry, and people in the tech industry, are very powerful and very influential. I feel like there has been a change in how the public thinks about these people in the past few years, but they do still hold this position in society that comes with a lot of influence, where people really do listen to them. And so seeing these growing links between these powerful individuals and this growing, also very powerful, right-wing movement is pretty scary.

JB: Not to mention, Elon has literally purchased the public square. He has seized the means of communication for society, and now his tweets are elevated above other people’s, and those tweets are often things like: A woman’s most important role is as a mother, or he’ll tweet out these incel memes. So he is finding ways to exert even more influence. The whole alliance kind of reminds me of the rise of Trump, because there was this moment early on where you were like: How on earth are Trump and evangelicals going to form an alliance? Surely they realize they don’t share any values — he does not represent an evangelical way of life.

But it’s just strategic at the end of the day. So I think that whether or not abortion and birth control are top of mind for Elon, if increasing the population of elites is on his mind, then he’ll form alliances, and he’ll find people who have similar goals, and maybe they’ll blend their reasons for wanting those things as a strategic way of thinking. So yeah, again, I think people really need to catch up on how involved in politics the tech world is trying to get, and succeeding in getting.

PM: Absolutely. To start to close off our conversation, it feels like one thing that is really common here is using this aesthetic of science, or this notion of science, to justify perspectives that are not really scientific. Like you were talking about how they make this really strong link between IQ and wealth, and also the notion that IQ is something that you pass on to your children, which, as you say, is not really backed up by the facts. Then you also see it in Lex Fridman himself, someone who positions himself as a scientist or as a researcher but does not really have those credentials. Maybe you can talk to us a bit about that.

But that does seem to be one way that they act to justify some of these things: by using platforms and expressions and concepts that we inherently trust. We trust in science generally, and we trust in people’s ability to have free speech and say what they want. But then it seems like these concepts are turned on their heads and used to justify things that are quite scary and against what we would usually support and be in favor of. I wonder what you make of that, or if you have any opinions on it.

JB: Well, I almost had to laugh when you said we trust science, because that trust has been systematically dismantled over the last few years. I mean, there have been really intentional attacks on the public’s trust in science.

PM: Well, I do feel like even the people who are, say, against vaccines and stuff like that act like their skepticism is scientific. Because — and you even see Elon Musk say it, I’ve seen him tweet about it — they’ll say: This is the scientific method, where we question things and blah, blah, blah. So there does seem to still be that use of the term.

JB: Totally, it’s a co-opting of it, which, again, has been really effective, I think. Even yesterday, you saw Elon tweet out some graphics about “Black on Black crime,” and they try to make it look legitimate. I think that’s a hilarious aspect of the community notes program that Elon has introduced, and I actually don’t think it’s the worst thing in the world, because it often ends up catching him and his cohort in lies, or in total misrepresentations. That was one that was quickly dismantled by the actual data community, who studies these things and understands these social phenomena. But it all goes back to this idea of: How is this all connected? What could free speech absolutism possibly have to do with cosmism and pronatalism? How is it that all these things are connected? It’s a very strategic program that’s being run.

And again, I think it has a lot to do with: Why acquire Twitter? That was this big question on everyone’s minds, and we’ve all speculated about it. I’ve heard a few different theories that I think make sense. But certainly, I think part of it had to be this realization of the power of controlling this public square. Being able to introduce these “I’m just asking questions” ideas. That’s what Lex does on his podcast, which is kind of the thesis of my piece. So it all comes back to broadening the Overton window. Once you plant these questions in people’s minds, it starts to make them more socially acceptable to ask. It makes it easier to lie about them. It makes it easier to get people to just swallow whatever you’re giving them. So I do think it’s all pretty strategic, and kind of working, and I think it’s important to expose that fact.

PM: I would just say, you often see discussion of the Overton window on the Left as well: this notion that if we just push it a bit more to the left, people will be more open to ideas like public health care. But you can also see how it’s very effectively being used by the Right to get these things that we’ve been talking about into the public consciousness. With Elon Musk’s ownership of Twitter in particular, he has talked about it as being a platform that allows free speech and different opinions. You see him very frequently responding to outright conspiracy theories, acting like: Wow, I didn’t know this, or, I’m looking into it more, and all this stuff. It shows how his whole headspace, his information ecosystem — because he frequently attacks mainstream journalism — is filled with conspiracy theories and right-wing outlets. So that is not only shaping his perspective on the world, but also the views that he is trying to see elevated and given more prominence on the platform, to further influence a lot more people.

JB: There was a crazy moment a couple of weeks ago — I don’t know if you caught one or another of Elon’s lawsuits — where his lawyers basically tried to make this argument of: Oh, are you sure he said that statement? Or could that have been a doctored AI video? To me, it was so transparently part of this wider effort to get us all to question our reality, to get us to question “truth.” The chaos that is about to come with AI-generated video, audio, and imaging is so horrifying to me. But I think it’s important to remember that these people thrive in chaos. I always remember this tweet by Peter Thiel’s biographer, Max Chafkin, who said: What you need to understand is that apocalyptic fixations are a huge part of the way these guys think.

They’re always talking about the collapse of society, and I don’t think it really matters to them if chaos reigns, because they’ll be okay and, in fact, can probably find ways to benefit from it. But the next election is going to be so screwy in terms of: Did Biden really make that speech? Did Trump really say that thing? Now I’m just going on a tangent, but think about the mental health effects of that, how it’s already starting to feel. I’m now having weekly moments where I’ll see an image and I’ll be like: Is that real? Did that really happen? The effects that that has on the mind have never been studied before, and I think we’re about to learn in real time how that kind of thing drives you insane. So it should be exciting.

PM: Move fast and break things indeed.

JB: Like our minds!

PM: Julia, this has been a fantastic conversation and so insightful into this kind of whole worldview and everything that these people are putting out into the world and that your work touches on so frequently. Thank you so much for taking the time. It’s really been fantastic to chat.

JB: Thank you. Maybe next time we can cover all the things I’m optimistic about.

PM: Nah, then we wouldn’t have much to talk about [both laugh].

JB: Thank you so much. It’s been really fun.