Data Vampires: Fighting for Control (Episode 4)

Paris Marx

Notes

Tech billionaires are embracing extreme right-wing politics. It’s not just to enhance their power, but to try to realize a harmful vision for humanity’s future that could see humans merging with machines and possibly even living in computer simulations. Will we allow them to put our collective resources behind their science fiction dreams, or fight for a better future and a different kind of technology to go along with it? This is episode 4 of Data Vampires, a special four-part series from Tech Won’t Save Us.

Support the show

Venture capitalists aren’t funding critical analysis of the tech industry — that’s why the show relies on listener support.

Become a supporter on Patreon to ensure the show can keep promoting critical tech perspectives. That will also get you access to the Discord chat, a shoutout on the show, some stickers, and more!

Transcript

[THE MATRIX]

In 1999, The Matrix arrived on the scene, bringing a philosophically deep cyberpunk tale to a wide audience just as the internet was infecting the cultural mainstream. It introduced audiences to Thomas Anderson, better known as Neo, and a group of leather-clad dissidents trying to break humanity out of a computer simulation created by a race of intelligent machines.

MORPHEUS: The Matrix is everywhere. It is all around us, even now in this very room. You can see it when you look out your window or when you turn on your television. You can feel it when you go to work, when you go to church, when you pay your taxes. It is the world that has been pulled over your eyes to blind you from the truth.

It’s an understatement to say it was a hit, and some of its themes have remained in popular discourse for the two and a half decades that have followed. The notion of living in a simulation wasn’t new — it had been a staple of science fiction for decades — but the film came out at a moment when that scenario started to seem like something that might actually happen in the near future. Computers were improving, we were all becoming digitally connected, and the exuberance of the dot-com boom was still going full steam ahead. Almost anything seemed possible — even that we might be living in a complex simulation ourselves.

The world The Matrix presented was quite clearly a dystopia — the whole plot revolves around escaping from the simulation and, in the sequels, defeating the intelligent machines once and for all so humans can reclaim their lives instead of serving as the batteries that power the virtual world. Yet Silicon Valley doesn’t seem to have gotten the message. Elon Musk has talked about how he thinks we live in a simulation, and the ideology embraced by far too many in the Valley openly welcomes such a future.

In 2017, Sam Altman wrote a blog post about what he called “The Merge.” In his view, the merging of humans and machines wasn’t an event like the singularity that would happen all at once; it had already begun with our dependence on our devices and would eventually reach a point where we need to make a choice: either we merge with machines, or we get left behind by them. “If two different species both want the same thing and only one can have it—in this case, to be the dominant species on the planet and beyond—they are going to have conflict. We should all want one team where all members care about the well-being of everyone else,” he wrote, before continuing: “My guess is that we can either be the biological bootloader for digital intelligence and then fade into an evolutionary tree branch, or we can figure out what a successful merge looks like.”

There’s no question that there are commercial imperatives behind the push by the tech industry to massively expand computation and put as many resources as they can muster into accelerating AI advancement. But there’s also this troubling worldview that shapes what they think the future will be and what should be sacrificed to achieve it. Will we allow our world, and ourselves, to be sacrificed to pursue this future, or will we try to stop them?

MORPHEUS: This is your last chance. After this there is no turning back. You take the blue pill, the story ends, you wake up in your bed and believe whatever you want to believe. You take the red pill, you stay in Wonderland, and I show you how deep the rabbit hole goes.

[INTRODUCTION]

This is Data Vampires, a special four-part series from Tech Won’t Save Us, assembled by me, Paris Marx.

Over the course of this series, we’ve learned more about hyperscale data centers, the growing pushback they’re facing around the world, and how the generative AI bubble is fueling a building spree by major cloud companies. This week, to close off the series, we’ll dig deeper into the worldview shaping how some of the most powerful people in the tech industry see the future — and why it needs to be opposed.

This series was made possible by our supporters over on Patreon, and if you learn something from it, I’d ask you to consider joining them at patreon.com/techwontsaveus so we can keep doing this important work. In the coming weeks, Patreon supporters will get access to premium, full-length interviews with the experts I spoke to for the series.

So, with that said, let’s learn about these data vampires and finish driving a stake through their hearts.

[SIMULATION THEORY]

The idea that humanity might be living in a simulation didn’t begin with The Matrix, but the film almost certainly popularized it at the very moment the power and influence of computer programmers — and of those cozying up to them — was rising. Money was flowing into the tech industry, and that meant their ideas and visions of the future — even the more outlandish ones — could be taken a bit more seriously than they had been in the past, especially once they started to be positioned as a key part of ambitious business plans. A few years after The Matrix was released, philosopher Nick Bostrom started arguing not just that it was possible we live in a simulation, but that it was even likely. If our species doesn’t go extinct and doesn’t decide it shouldn’t run simulations of the past, then it will probably make a ton of them — and that means, in his view, we probably live in a simulation. This is how Émile P. Torres, a postdoctoral researcher at Case Western Reserve University and author of Human Extinction, explained it.

EMILE TORRES: The reason for that third disjunct is okay, if we don’t go extinct, we survive into the future, we do build this post-human civilization, and there isn’t some moral or legal or some other restriction that prevents our future post-human descendants from running these ancestor simulations, then it’s very likely that they will run a huge number of them. Consequently, the number of simulated people in the universe could be far greater than the number of non-simulated people. That leads to another question, which is, how do we know that we are not in a simulation right now? Maybe you could say, well, we look around and things just look very real. And Bostrom and others would say, well, actually, these ancestor simulations that our post-human descendants will be running, they will be really high resolution. People in these simulations will not be able to tell that they’re in a simulation. The simulated world will be indistinguishable from true reality. So if there’s no way to empirically determine, or there’s no known way at this point to empirically determine, whether or not we are in a computer simulation right now, what do you do? Well, then he says, what we ought to do is use the principle of indifference, which just says that if you have no extra information, you should just consider that you’re just an average entity in the set. So if you do that, and you remember that there might be way more simulated beings than actual real beings, it follows that we’re much more likely to be one of the simulated beings right now than one of the real beings.
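To put rough numbers on that indifference step (this is my own back-of-the-envelope gloss, not Bostrom’s notation), the argument works out to something like:

$$P(\text{we are simulated}) \approx \frac{N_{\text{sim}}}{N_{\text{sim}} + N_{\text{real}}}$$

where $N_{\text{sim}}$ is the number of simulated minds ever run and $N_{\text{real}}$ the number of flesh-and-blood ones. If post-human civilizations ran even a few thousand ancestor simulations with billions of inhabitants each, $N_{\text{sim}}$ would swamp $N_{\text{real}}$ and the ratio would creep toward 1. That is the entire move from “simulations might exist someday” to “we are probably in one.”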

So, let’s take a pause here, because if you’re encountering this for the first time, it might be a lot. And let’s be clear, the idea that we live in a simulation is bullshit, but this is how people like Bostrom, not to mention a lot of these tech billionaires, really think. What someone like Bostrom is saying is that if we believe it’s possible that people in the future might run simulations that a ton of simulated post-human beings inhabit, then we must assume we could be in a simulation of the past created by some future version of humanity that has completed its merge with computers — or has been replaced by them, I guess. Here’s how tech genius Elon Musk described his reasoning for believing we almost certainly live in a simulation at the Code Conference in 2016.

ELON MUSK: The the the… I mean, I think, here’s… in my like, the the the strongest argument for for us being in a simulation, probably being in a simulation I think is the following… um, that that 40, call it 40 40 years ago, we had pong, like two rectangles and a dot. That was what games were. Um, now 40 years later we have photorealistic 3D simulations with millions of people playing simultaneously and it’s getting better every year and soon we’ll have, you know virtual reality, have augmented reality… um, if you assume any rate of improvement at all, um then the games will become indistinguishable from reality, just indistinguishable. Even if that rate of advancement drops by a thousand from what it is right now… um, then you just say okay, well, let’s imagine it’s 10,000 years in the future, uh which is nothing in the evolutionary scale… um so um, so so given that we’re clearly on our trajectory to have games that are indistinguishable from reality and those games could be played on any set top box or on a PC or whatever and they would probably be, you know, billions of such… uh, you know, computers or set top boxes, it would seem to follow that the odds that we’re in base reality is one in billions.

Can you imagine that as recently as a few years ago people used to sit around and nod along as they listened to this guy ramble on about total bullshit, thinking they were hearing the train of thought of one of the world’s unparalleled geniuses? Shockingly, some still think that, even after all that’s happened. Émile told me Bostrom isn’t as convinced as Musk; he’s closer to 20 percent certain we live in a simulation, not the one-in-billions chance we don’t that Musk claims simply because video games have gotten better graphics. But those ideas — that we may live in a simulation, will one day merge with computers, and are headed toward this post-human future — are not just appealing because they sound like the future as presented in science fiction movies and novels, but also because of how divorced some of these rich folks in tech have become from the real world.

EMILE TORRES: If you are Elon Musk, the simulation hypothesis might seem more plausible than it is for the rest of us, because he is in such an extremely improbable situation in the world, as the richest person, or one of the richest people in the world, who has had all this massive success and influence and acquired all of this power. So you can imagine how somebody in his situation might look around and go like, this is just so strange. I mean, it’s even stranger than the strangeness of the lives of an ordinary person.

So, the science fiction is one piece of this, and the surreal nature of being a billionaire is another. But there’s also the detachment many of these people have from the average person and the experience of their lives. This doesn’t just play out in Musk’s delusions and obsessions, but also in an inability — or even an unwillingness — to think about how these grand plans that are supposedly for the future of humanity will actually impact real humans. Julia Black is a features reporter at The Information, and spoke to Sam Altman for a story a year and a half ago. Here’s one of the things she told me about talking with him.

JULIA BLACK: In my conversation with Sam, as a part of writing the piece, I became very fixated on trying to get him to answer, you know, really tangible, one foot forward, questions about like, Okay, but how is this actually going to change life for your average American? Try to picture that person, and he couldn’t. To a shocking degree, he couldn’t seem to wrap his head around that question. And it seemed irrelevant to him. It seemed, Why would I care about like, your average American today? We’re talking about human civilization on a grand time scale.

If this supposed future isn’t for regular people, who is it really for? People like Altman or Musk will say it’s for future generations, but really it’s little more than a series of obsessions held by tech billionaires and the people who worship them — obsessions that, as we’ve seen through this series, are increasingly causing harm to communities, accelerating social inequities, and making it harder to tackle the climate crisis. But that future vision doesn’t end with the notion of living in a simulation. It goes much deeper.

[LONGTERMISM]

Years after Nick Bostrom repopularized the idea that we might be living in a simulation — at least among a certain niche in the tech industry — he built on it with a much more expansive account of the threats facing humanity and the path we must take to combat them. Those simulations rely on the assumption of intelligent machines, but in his 2014 book “Superintelligence,” he laid out a scenario where computers exceed the intelligence of humans, achieving artificial general intelligence or AGI, then determine we’re a threat to their survival and decide to eradicate us or turn us into paper clips. It’s little more than a science-fictional thought experiment, but everyone from Musk and Altman to Bill Gates praised the book. Bostrom argues we need to be on the lookout for existential risks to the human species, and ultimately advocates a worldview called longtermism that puts the long-term future of humanity before the more immediate concerns we might face.

EMILE TORRES: The ultimate goal of longtermism is to realize this, to quote one of the leading longtermists, Toby Ord, “vast and glorious future” where, you know, we become post-human. We go out, spread beyond Earth, colonize the universe, the accessible universe, and create astronomical amounts of value. It’s a very kind of economical way of thinking about the future. I’ve said before, for longtermists morality is, to a large degree, essentially reduced to a branch of economics.

Longtermism may initially sound like a good thing — it brings to mind long-term thinking, something we often acknowledge our leaders don’t do enough of. But as Émile describes, longtermism goes far beyond that, instead advocating that our focus should be on what humanity might be like in thousands or millions of years, and making significant sacrifices in the present based on fantasy scenarios like space colonization and even the notion of building vast computer systems on faraway planets where digital beings will live, you guessed it, in massive simulations.

EMILE TORRES: How exactly do we maximize value? Well, longtermism is greatly influenced by a theory in ethics called utilitarianism, and utilitarianism is very much like capitalism. With capitalists, it’s about maximizing profit; with utilitarians, maximizing value in a slightly different sense. It’s not money, but something that has intrinsic rather than just instrumental value. So what we need to do, as I mentioned before, go out and colonize space, create a sprawling multigalactic civilization full of trillions and trillions of people. Now there’s one last important step in this line of thinking, which is that we could go out and colonize space, maybe as biological beings. But there’s a certain carrying capacity to any given planet. So there’s an upper limit to the number of biological beings that could reside on these planets. Like, let’s say, we go out and colonize some solar system and rather than terraforming the planets that are circling around the sun, we just convert those planets into planet-sized computers made out of computronium running virtual reality worlds. Well, you can cram more digital people per unit of space than you can biological people. So the longtermists therefore suggest that we need to go out and colonize space and become digital beings to build these massive computer simulations that are full of trillions and trillions and trillions of digital beings, supposedly living happy lives, because that is the way you get the greatest number of people, and consequently are able to truly maximize the total amount of value in the universe.

That may sound completely wild, because it is. When you hear Elon Musk talking about the need to build a multiplanetary civilization or why population decline is an existential threat, it’s these ideas that are ultimately behind the arguments he’s making. It allows these billionaires to believe they’re building the sci-fi future they dreamed about in their youths, but also provides them with a supposedly moral justification for hoarding vast amounts of wealth to spend on AGI and space colonization dreams while people go hungry, are without homes and proper healthcare, and the effects of the climate crisis keep getting worse — in part because of the demands created by trying to realize this future in the first place. In fact, a lot of longtermists even argue the climate crisis is not an existential threat, because even if warming goes far beyond two degrees and the human population declines, they do not believe humanity will be fully wiped out over the long term, and they believe it will be able to rebuild. It’s a perverse and immoral way to look at the world, but one they’ve convinced themselves is justified.

JULIA BLACK: I think the reality of running a country, running a society and economy is everything is about a decision of where to allocate resources, where to allocate capital, where to allocate our thinking, our human capital, and the conversation that I’ve seen in Silicon Valley — honestly, even more in the last year, I didn’t think it could get more extreme on this front — but it’s just been reoriented to this thinking around, all of our resources need to be devoted to these extreme possibilities. So rather than taking care of, you know, the society at large, like, what we need to be caring about and thinking about is the cutting edge of innovation, the frontier, these far out possibilities that are much more resource intensive, by the way. I think that there’s something very significant, actually, to the fact that Silicon Valley has become so isolated and so removed from the realities of life for most of society.

As Julia says, we’ve found ourselves in this present reality where these far-out dreams of tech billionaires are taking priority over the real needs of the majority of the population. It comes back to what Ali Alkhatib and Dan McQuillan were talking about last week, with digital technology being used as a smokescreen to reduce the power of average people over their lives and to present false technological solutions to problems so political action doesn’t have to be taken, all while the social and environmental crises continue to get worse. You can clearly see how these longtermist visions relate back to the issues we’ve been talking about through this series. Tech billionaires like Musk and Altman are obsessed with AI — and AGI in particular, the version that isn’t just tech being wielded by humans, but tech that begins to think for itself — because they want their grand science fictional dreams to come true, and will sacrifice virtually anything to try to make them a reality — which is why Altman, in particular, is so determined to see larger and larger data centers being built regardless of the energy or water they need or the broader impacts on the communities they’re built in. But it also serves the commercial functions we discussed with Cecilia Rikap and Dwayne Monroe, where it allows major tech companies to continue expanding their power and sticking internet connectivity, digital technology, and cloud solutions in places they’re not truly needed. Julia made that link quite explicit in our conversation, when talking about the contrast between the utopian and dystopian AI futures on offer.

JULIA BLACK: I do think that it’s important to remember that what both these extremes, these polar options, have in common is that they’re fantastical. None of this is dealing in the tomorrow or the tangible or the what might really be possible or happen. It’s dealing in these theoreticals, these hypotheticals that I think are very useful when you’re also asking for multiples of money, compute, energy that are pretty much fantastical. I mean, he’s talked about $7 trillion needed for data centers.

The “he” there is none other than Altman, who’s doing everything he can, from searching for capital around the world to courting corporate partners like Microsoft alongside the White House, to support his vision. But there’s one big thing that might stand in the way of these tech billionaires achieving their dreams and being able to force the costs of them on the wider society: the fact that for all the money and power they have, they still operate in democracies. At least for now.

[OPPOSING DEMOCRACY]

Marc Andreessen is pretty typical of today’s tech billionaires — someone who claims his wealth is a product of his genius, when it’s much more a product of luck. He’s an incredibly influential venture capitalist who often has a hand in the rise and fall of new tech bubbles — his venture capital firm Andreessen Horowitz plowed a ton of money into crypto, for example. Thirty years ago, he was working at the National Center for Supercomputing Applications at the University of Illinois, where he developed a web browser called Mosaic with Eric Bina. But as the internet headed toward commercialization in the early 1990s, eventually being privatized in 1995, Silicon Graphics cofounder Jim Clark saw an opportunity to cash in instead of just treating the web browser as a university research project. In 1994, he recruited Andreessen, brought in some deep-pocketed investors, and they started the Mosaic Communications Corporation and released the Mosaic Netscape 0.9 web browser just as the dot-com boom was taking off. The company was later renamed Netscape and its browser the Netscape Navigator to remove the association with the publicly financed original. In 1998, before the dot-com crash, AOL bought the company for $4.2 billion, giving Andreessen his path to further wealth and influence.

Andreessen has been a prominent figure in Silicon Valley ever since, often championing its influence and asserting it should flex its power over US society more strongly, even though his wealth has unquestionably distanced him from average people over the decades. In April 2024, the author Rick Perlstein recounted being invited to one of Andreessen’s seven mansions for his book club in 2017. When Perlstein started talking about the benefits of small-town life, Andreessen interjected with a heartless statement: “I’m glad there’s OxyContin and video games to keep those people quiet,” according to Perlstein’s account. As criticism of major tech firms and threats of regulation and higher taxes by government have increased in recent years, Andreessen hasn’t reacted well at all. He’s been a vocal figure in the tech industry’s embrace of extreme right-wing politics, and in October 2023, he made that clear when he published the Techno-Optimist Manifesto on his venture capital firm’s website.

The 5,000-word manifesto was unsurprising in many ways, but intriguing — if not concerning — in others. It asserted that technology is the only way to solve the world’s problems, and that anyone or anything standing in the tech industry’s way is holding back the whole of humanity — a very convenient conflation for a venture capitalist. He wrote, “Technology is the glory of human ambition and achievement, the spearhead of progress, and the realization of our potential. For hundreds of years, we properly glorified this – until recently.” He wasn’t shy about naming enemies either: It was no surprise to see communists and Luddites on his list, but he also wrote that society was being harmed by calls for “sustainability,” “social responsibility,” “trust and safety,” and “tech ethics.” Few people other than Silicon Valley billionaires and their hangers-on could agree with that. He even claimed those who stood in the way of AI could be seen as engaging in a “form of murder,” because of all the lives he believes AI will supposedly save.

Andreessen and many of these other tech billionaires have never been able to accept that they got to where they are through luck: being in the right place at the right time, in the right industry as it was skyrocketing, and cashing in on one of the early internet booms. They had to convince themselves and the world it wasn’t luck, but skill — that the meritocracy was at work and their wealth means they’re also some of the smartest people on the planet. When you believe something like that, it’s no wonder you start getting angry when those you deem inferior try to stand in your way. Andreessen’s manifesto embraces technocracy — the notion that experts and engineers should be in charge of society — and explicitly praises a number of fascists, including Filippo Tommaso Marinetti, the founder of Italian Futurism and an ardent supporter of Italian far-right leader Benito Mussolini. Other billionaires, like Peter Thiel and some of his crew, have openly opposed democracy itself. Here’s what Julia Black had to say about that when I spoke to her.

JULIA BLACK: One thing that’s standing in the way of those people becoming empowered, as they see it, to do what they want to make this technological future happen, is democracy — is the fact that, as you say, most people couldn’t even begin to relate to this stuff, certainly wouldn’t vote for it, certainly wouldn’t, you know, opt into most of it. And so how do you solve that problem? You remove the obstacle of democracy and the need for majority buy-in.

The ideologies increasingly taking hold among the Silicon Valley elite are profoundly anti-democratic, and they have strong allies in the far-right movements growing well beyond the tech industry itself. Longtermism, techno-optimism, and these other worldviews assert that our future should be shaped by the billionaires who’ve made their fortunes over the last few years, and that the rest of us should silently accept the consequences of those decisions — regardless of what it means for workers’ rights, the environment, and the social progress they seem so desperate to roll back. This is impossible to disconnect from the effort to proliferate AI through society, expand the digital surveillance apparatus, mediate as many of our interactions as possible through digital platforms, and build massive data centers the world over to power it all — regardless of the resources needed to operate them. Dan made these connections quite explicit.

DAN MCQUILLAN: We are living in a time when far-right political ideas, proposals, ideologies, understandings are on the rise. If we’ve got a techno-political understanding of what’s going on, we understand that the technology and the politics are not separate, but are really sort of coproductive, then we should at least step back and question if our amazing new technology, that seems to be so facile as a product of our moment, has anything to do with what else seems to be on the rise. Far-right politics is used to divert from underlying social, structural injustice and inequality, and so is AI. And that should be of profound concern to anybody, you know, who’s advocating for AI, because its social function at the moment is very overlapping with the social function of far-right ideologies, even before you get to the point, which is actually happening at the moment, where those very same actual fascist movements are turning to AI and going, “Yeah, this could be really great.” And anyone who’s building a large-scale AI mechanism is building a machine for that. But as I say, the thing I’m more concerned about is the predisposition that society develops through forms of heightened abstract cruelty, through these mechanisms that pre-prepare it for these more fascistic political movements, so I tend to say AI has a tendency towards fascistic solutionism.

In theory, and even in practice, AI technologies can be deployed to do some good and helpful things, but Dan’s argument gets to a deeper point: technology is not neutral; it’s inherently political, given that the resources that go into its development and the use cases it’s put to are shaped by the politics of the world that surrounds it. On balance, we see AI being deployed in obscene and incredibly harmful ways: denying social supports to people, discriminating in immigration systems, targeting people in wars, like in Israel’s ongoing campaign in Gaza and beyond, and on a broader scale creating a world where people are constantly being ranked and decisions are increasingly made by opaque algorithms we have little control over. The PR and media coverage is all too often focused on the potential beneficial applications of AI, many of which are inflated if not wholly fictional, yet the consequences are far greater and don’t get nearly the spotlight shone on them, because that doesn’t work for those developing it. And as these people embrace anti-democratic and increasingly socially conservative politics, and the wider society shifts in that direction too, do we want more powerful AI in their hands?

If these technologies are deployed into the world and do cause harm to people, how do we respond? The discussion often turns to regulation and how it can be deployed to rein in the worst excesses of the tech industry, but at times it feels like that’s not enough, especially when it becomes clear how much tech companies have deployed their vast war chests, lobbying power, and the aura of the tech industry to shape regulation in their favor. Increasingly, discussions are going beyond that — to places the tech industry clearly doesn’t want us considering — and Ali argued destroying some technology should be within the realm of possibility.

ALI ALKHATIB: Sometimes a person will encounter an algorithmic system and it is not going to stop hurting them, and they will not be able to escape the system. And given those two facts, I think it’s pretty kind of obvious that it is reasonable to start dismantling the system, to destroy it. And I’m not saying, like, we should necessarily destroy everything that has silicon in it or something like that, although I’m sure there are probably people that would argue that, and I’d be happy to hear them out. But it doesn’t seem radical to me to say, if you can’t leave a system, if the system is harming you, if you can’t get it to stop hurting you, there really aren’t that many other options. I think it’s reasonable to say you don’t have to take it, you don’t have to continue to be harmed, and if it forecloses on all of the other possible avenues that you have, then one of the avenues that we sometimes would like to talk about is to start destroying the system.

[BETTER FUTURE]

The growing campaign against the expansion of hyperscale data centers is about water; it’s about energy; and it’s about the mineral resources that go into the chips that power them. But it’s also about something much greater: the questions of what society we’re building, who it ultimately serves, and who gets to be meaningfully involved in determining our future. Do we leave it to sociopathic tech billionaires, or fight to reclaim that power for ourselves?

DAN MCQUILLAN: Data centers are such a signifier of a broader set of material relationships clearly linked to broader concerns about extractivism. The number of data centers goes up, the number of servers goes up, the amount of energy goes up, the amount of cooling water. You know, look at the full range of values of our time that they embody. The absolute obsession with growth, the absolute sedimentation of brutal, asymmetric global relations, the idea that this is all a legitimate way of addressing our most fundamental problems, it’s just a massive diversion.

That’s Dan again, and what he says there is important to consider. I think regardless of the path we choose, we will require some form of data storage and processing, but the current path that companies like Amazon, Microsoft, and Google have us on is fueled by commercial interests and broader ambitions that are completely divorced not just from what’s necessary to build a better world for most people on it, but also from the very real constraints we face if we’re serious about avoiding the dangerous levels of global heating we’re currently headed toward. In the world of Dune, which recently had a couple of very successful blockbuster films, there’s an event called the Butlerian Jihad, which involves a war to destroy the so-called “thinking machines,” meaning digital computers are no more. I don’t think our ambition is — or should be — to go that far, but that doesn’t mean we can’t question what kinds of technologies are appropriate and when it’s right to use them. Here’s Ali one more time.

ALI ALKHATIB: Using technical systems to help us make sense of complex problems is not something that I’m like categorically against. I think that computational systems can be great ways of trying to draw comparisons between two fundamentally different things. But I think that when that system starts to become overly decisive, or have an outsized weight of what influence it has in making decisions about consequential things, then it becomes obviously much more harmful and much more problematic and dangerous. And again I think it comes back to this question of like, do people consent to the influence that the system has over this particular decision about my life? In a lot of ways, tech companies find ways to claim that people are not stakeholders in decisions that are about them, or they find ways to say, well, this person’s just not informed enough or too stupid or whatever, to make an informed decision that is for the benefit of society or whatever.

It’s long past time we had a discussion about our collective future and the role digital technology should play in it — one that’s not hijacked by science fiction deceptions about colonized planets and AI servants. That’s a discussion the tech industry doesn’t want us to have. These massive tech companies can seem unstoppable. But their executives’ embrace of an extreme right-wing politics is in large part a result of feeling their power being threatened — not just by government action, but by a public that is turning against them. Around the world, the push to build more data centers to serve the commercial needs of major tech companies and the ambitions of the billionaires who control them continues. But there is also successful opposition: things like temporary moratoriums in Ireland and Singapore or projects being stalled or defeated in Chile and the United States. And that’s on top of the broader regulatory and enforcement efforts that are spreading as countries and their citizens get fed up with the abuses of tech companies that feel themselves to be above the law. In more and more countries, people are demanding the return of their digital sovereignty.

The hype around generative AI and the data center buildout it’s fueling have not only put a spotlight on the vast material costs of the future Silicon Valley is trying to build, they’ve also shown how the promise of the internet revolution has been squandered by people who’ve been blinded by wealth and power. Now we have to ask ourselves: should they be charting humanity’s course, or is it time we collectively take that power back from them? Do we need to sacrifice so much to build out as much storage and server capacity as the tech industry’s constant need for growth demands? Or could we significantly cut that down by dismantling the mass surveillance system they’ve constructed, stopping their attempt to have digital interfaces mediate so many of our interactions, and rejecting their plan to expand their control by rolling out algorithmic decision-making in as many places as they can get away with? Not only do I think we can; I think we should.

A better future is possible, and so is a different vision of technology. But it won’t be won without a fight. And their massive hyperscale data centers are a great place to start.

[OUTRO]

Data Vampires is a special four-part series from Tech Won’t Save Us, hosted by me, Paris Marx. Tech Won’t Save Us is produced by Eric Wickham, and our transcripts are by Brigitte Pawliw-Fry. This series was made possible through support from our listeners at patreon.com/techwontsaveus. In the coming weeks, we’ll also be uploading the uncut interviews with some of the guests I spoke to for this series, exclusively for Patreon supporters. So make sure to go to patreon.com/techwontsaveus to support the show, and thanks for listening to Data Vampires.
