Data Vampires: Going Hyperscale (Episode 1)
Paris Marx
Notes
Amazon, Microsoft, and Google are the dominant players in the cloud market. Around the world, they’re building massive hyperscale data centers that they claim are necessary to power the future of our digital existence. But they also increase their power over other companies and come with massive resource demands that communities are getting fed up with. Is their future really the one we want? This is episode 1 of Data Vampires, a special four-part series from Tech Won’t Save Us.
Support the show
Venture capitalists aren’t funding critical analysis of the tech industry — that’s why the show relies on listener support.
Become a supporter on Patreon to ensure the show can keep promoting critical tech perspectives. That will also get you access to the Discord chat, a shoutout on the show, some stickers, and more!
Links
- Senior cloud consultant Dwayne Monroe and Associate Professor in Economics Cecilia Rikap were interviewed for this episode.
- Interviews with Jeff Bezos and The Oregonian journalist Mike Rogoway were cited.
Transcript
[THE DALLES]
In 2004, a young guy named Chris Sacca turned up in The Dalles, Oregon. The city had long been sustained by the aluminum industry, but its 16,000 residents were wondering what was next after the local smelter’s furnaces had gone cold. Would they become yet another community across the United States to lose its primary industry — and the tax revenue that accompanied it — or would they find something else to replace it? Luckily for them, Sacca came with a possible answer.
By that time, the internet had been commercialized and privatized for nearly a decade. The boom and bust of the dot-com bubble was over, and the companies that survived the crash were solidifying their gains and gearing up for another wave of growth. Sacca claimed to represent a company called Design LLC that was proposing to spend hundreds of millions of dollars to establish a presence in The Dalles, build its own facility, and create hundreds of permanent jobs. The facility it was proposing to build was a data center, but there was a catch.
It wasn’t a coincidence that this mysterious company was eyeing Oregon for its data center project. Sure, it was close to the internet company hubs — the Bay Area and Seattle — but it also had particular financial advantages. This is how The Oregonian business journalist Mike Rogoway put it when speaking to the Berkeley Technology Law Journal podcast.
MIKE ROGOWAY: Back in the 1980s, Oregon, like a lot of other states, created what they called an Enterprise Zone Program; it’s a set of property tax incentives for small manufacturers. You put them in distressed communities, rural, small towns, and try to attract manufacturers by giving them a temporary exemption on their property taxes. But lawmakers didn’t put any cap on the size of those tax breaks. So when the data centers industry emerged two decades later, they looked up at Oregon and said, “Wait a minute, we can get enormous tax breaks.” … Those property tax exemptions, they save tens or hundreds of millions of dollars a year, or over the life of the exemption, billions of dollars for these really large projects.
No wonder companies were looking to Oregon. But for The Dalles to win the industry it so desperately wanted, Design LLC had a series of demands. It wanted a connection to a transatlantic fiber-optic cable, enough water and energy to meet its needs, and complete secrecy over its business operations. But on top of all that, it wanted a fifteen-year property tax break that had to go all the way to the state governor for approval. They were hefty demands, but The Dalles didn’t have many options, so Sacca and the company he claimed to represent got what they wanted.
When the data center finally opened in 2006, the mask came off Design LLC. The company behind the facility was revealed to have a much more familiar name: Google. For years after, city officials still didn’t want to publicly use the name because of the confidentiality agreements they’d had to sign years earlier. The Dalles became the site of Google’s first company-owned data center — with thousands of servers crammed inside a building the size of two football fields. The company was growing and it needed all that infrastructure to power its dominant search engine and its ambitions for future expansion. But Google’s data center in The Dalles was just one part of a larger shift playing out in the infrastructure powering the internet in that moment.
[INTRODUCTION]
This is Data Vampires, a special four-part series from Tech Won’t Save Us, assembled by me, Paris Marx.
In the early days of the internet, it was easy to believe that a bit more technology and connectivity would make things better. There were tangible benefits to this new infrastructure and what was happening on the web. But as it was commercialized and as the expectation of massive returns became cemented as part of the tech model, the widespread benefits have been consistently eroded and reduced to promises that never arrive just so a small number of billionaires can make out like bandits. Today, there are few fights more central to deciding our technological future and who benefits from it than the one brewing over data centers around the world.
Over the course of this series, we’ll learn more about data centers and how they really work. We’ll hear about the impacts they’re having in communities around the world, and how people are fighting back. We’ll dig into how generative AI is an accelerant on this fire and what’s driving these powerful people in the tech industry to try to foist this vision of the future on us regardless of whether we want it or whether it will even make the lives of most people any better.
This series was made possible by our supporters over on Patreon, and if you learn something from it, I’d ask you to consider joining them at patreon.com/techwontsaveus so we can keep doing this important work. New episodes of Data Vampires will be published every Monday of October, but Patreon supporters can listen to the whole series today, then enjoy premium, full-length interviews with experts that will be published over the coming months. Become a supporter at patreon.com/techwontsaveus today!
So, with that said, let’s learn about these data vampires and by the end, maybe we’ll be closer to driving a stake through their hearts.
[WHAT IS A DATA CENTER?]
So, what is a data center anyway? They’re not necessarily the easiest things to locate, and they’re certainly not all the same. If you pull up Google Maps and search for data centers in your city, you might be surprised to find a bunch dotted around downtown cores and urban centers in places you never expected. When I searched in Montreal, there were facilities in nearby office buildings I thought looked practically abandoned and even one that seemed to be in the same building as a hotel. Our data is being stored and processed all over the place, but increasingly it’s happening in large, centralized infrastructures controlled by some of the most powerful tech companies in the world.
To get a better picture of what a data center is, I reached out to an expert. Dwayne Monroe has been working in and around data centers for over two decades. He’s currently a senior cloud consultant. Here’s how he explained it.
DWAYNE MONROE: Data centers should be thought of as warehouses filled with servers and also filled with the equipment that supports the creation of platforms. So in the data centers I’ve been in in my career, there would be hundreds, sometimes thousands, of these devices wired up, wired into the corporate network, and then wired via connections to, say, WorldCom or AT&T, to the public internet, and then networking equipment and switches and hubs and so forth, and lots of cables to connect things together. You’ll just see row after row of computers, and they like to take pictures of them with the lights down, so it looks very science fictiony. But if you turn the lights up, what you see are like just big computers racked together in racks, all assembled together. And in a good data center, it’ll look nice and neat.
I often refer to data centers as “server warehouses,” so I was happy to hear Dwayne agree with me on that description. Take one of the large warehouses where Amazon stores anything and everything you could possibly think of buying, swap all those shelves of goods out for computer servers and hard drives, and you’ve got a hyperscale data center. Those facilities store vast amounts of data and provide the computation for virtually anything you might interact with on the web. Streaming a video on Netflix, doing a Google search, or putting a prompt into ChatGPT may seem like an immaterial thing, but behind it all is a ton of hardware, energy, water, mineral resources, and the labor that makes it all work — often in mere milliseconds. There’s something pretty incredible about it. But how did it get this way?
Data centers, in some form, have been around for ages — since the early days of computation. Companies used to have to build their own facilities to manage all the information they held about their business and their customers — and even to this day, many still do. But these days there’s an alternative, one we often call the cloud: data centers so big they’re referred to as “hyperscale” because of, well, the scale they operate on. As demands for computation grow, they’re the ones that are taking over.
DWAYNE MONROE: One of the first data centers I worked in a number of years ago was for a pharmaceutical firm. Now, by the standards of most corpos, that would be quite large. There were, I believe, something like two or three thousand discrete computers there. Hyperscale is much more. And also, rather than, say, just buying servers from Dell, which is what say your typical corpo would do, what the hyperscalers are doing, Amazon, Microsoft and Google, is they’re buying these elements and assembling them together into custom built kits, because they have to pack more computing power into a rack. They’re consuming much more real estate because, of course, again, they have the deep pockets to build out very, very big warehouses for these computers, and they’re able to create, like, what we techies will call like a unified API, a unified application programming interface, which can then be presented to customers as if it’s like, like one big database.
Technically, “hyperscale” refers to facilities that are more than 10,000 square feet and hold more than 5,000 servers, but honestly, those figures feel outdated compared to what we see today. New Amazon data centers often hold closer to 50,000 servers. Meta, for its part, is building a facility in Minnesota that will be 715,000 square feet — or 12 football fields — once it’s finished. Microsoft and OpenAI have even mused about building a $100-billion data center complex that would require so much energy it would need to be built next to a nuclear reactor. If you think about doing some intensive tasks on your computer, you might remember how hot it becomes and how loud the fans can get when they try to cool it down — if your computer still has a fan at all. Well, I’m sure you can imagine how hot a facility full of many thousands of them will become — and keeping all those computers cool requires a lot of energy to power air conditioning units and a lot of water to feed cooling systems. But the cloud, as we now call it, didn’t come out of nowhere.
CECILIA RIKAP: Basically, what you have is a system in which these companies, Amazon, Microsoft, and Google, because together they concentrate 66% of the global market in this cloud business space, end up being everywhere. And the more organizations migrate to the cloud, the more dependent they become.
That’s Cecilia Rikap, Associate Professor in Economics and Head of Research at University College London’s Institute for Innovation and Public Purpose. You’ll hear more from her shortly. There are specific reasons why even major companies outside the tech sector — let alone smaller companies and startups — moved their operations onto the servers of Amazon, Microsoft, and Google. The promise hasn’t always worked out, but the cloud giants gained a lot of power in the process.
[EMERGENCE OF THE CLOUD]
If many large companies already had data centers of their own, what was the value in moving to the centralized infrastructures of Amazon, Microsoft, and Google? It’s an important question, and the answer goes much deeper than the ambitions of those rising cloud companies. Sure, the launch of Amazon Web Services, or AWS, with its initial services in 2006 and 2007, was a key moment in this whole story, but I want to start us somewhere else — with a case Dwayne described to me that I thought was particularly illustrative of the motivations that led to this big shift over the past 15 years.
DWAYNE MONROE: I was working for a firm where we were having some difficulties meeting demand when people were ordering books from the organization’s website, and our request — techies’ request — to, you know, leadership was, “Listen, we need to buy more.” It’s always more servers, more storage, more servers, more storage. Every year they’re hearing this, right? And then, well, how much do you need? Oh, maybe 8 million. And then it turns out that you maybe should have spent 10 million, or you spent the 8 million and you overestimated. That technical problem was solved by building on AWS, and then using an elastic service on AWS, which could expand and contract in terms of its capacity to meet demand or to respond to demand. And you can take that story, my particular story, from 10 years ago, and spread that story out to thousands upon thousands, if not hundreds of thousands or millions of tech workers trying to solve problems.
In the case Dwayne outlined, the book business was constantly seeing its servers get overwhelmed in peak shopping periods — sometimes even losing customers’ orders because their website couldn’t keep up. There was resistance in management not just to using cloud services, but also to buying even more hardware that would only be needed at limited times during the year. So, faced with those constraints, the techies, as Dwayne calls them, built a cloud solution anyway using Amazon’s Elastic Compute Cloud, or EC2, which essentially allowed the company to tap into extra computing power provided by Amazon, but only at the moments it needed it. The company would only pay for what was used. Dwayne told me this was how the cloud came in through the back door at many big companies, because there was a clear appeal in making things work without having to make big new capital investments.
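To make that elasticity a little more concrete, here’s a minimal sketch of how a team might set up an auto-scaling pool of EC2 servers today, written with Amazon’s boto3 Python SDK rather than whatever tooling Dwayne’s team had at the time. The group name, launch template, subnet, and thresholds are all hypothetical; the point is simply that capacity expands and contracts with demand, and you’re billed only for what actually runs.

```python
# A sketch of EC2 elasticity using boto3 (all names are hypothetical).
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Define a pool that can shrink to 2 servers in quiet periods and
# grow to 40 during peak shopping traffic.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="bookstore-web",                # hypothetical name
    LaunchTemplate={"LaunchTemplateName": "bookstore-web-template"},
    MinSize=2,
    MaxSize=40,
    VPCZoneIdentifier="subnet-0123456789abcdef0",        # hypothetical subnet
)

# Ask AWS to add or remove servers automatically so average CPU
# utilization across the pool stays near 60%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="bookstore-web",
    PolicyName="track-cpu-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
```

No more guessing whether to spend 8 million or 10 million on hardware, which is exactly why the pitch landed.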
DWAYNE MONROE: What Amazon was saying, and they were the first, and then later, Microsoft and Google were saying the same thing, is, “Listen, this isn’t your business. Give it to us, and then all you’ll need to do is hire smart people to build these services using our platform. We’ll be the outsourcer, in a sense, the utility.” So that was the value proposition, as business people say, and that’s why it really started to take off.
Amazon was the leader in putting together “the cloud” as we understand it today. That work began in the early 2000s when it was trying to make its own internal processes more efficient. One of those initiatives was to standardize its infrastructure so its teams could focus on the various digital tools and internal services they were working on instead of having to worry about servers. Here’s how CEO Jeff Bezos explained it in an interview with Om Malik in 2008.
JEFF BEZOS: Four years ago is when it started. We had enough complexity inside Amazon. We were finding we were spending too much time on fine-grain coordination between our network engineering groups and our application programming groups. That basically, and what we decided to do was build a hard interface between those two layers so that we could just do coarse-grained coordination between those two groups. As we started — you know, Amazon is really just a web-scale application — so we realized as we started architecting this set of APIs that it would be useful for everybody, not just for us. And so we said, look, let’s make it a new business. It has the potential, one day, to be a meaningful business for the company and we need to do it for ourselves anyway.
Now, it wasn’t actually Bezos himself who had that bright idea. Chris Pinkham was leading the global infrastructure team at the time and, along with his colleague Benjamin Black, believed the service they were building could be valuable well beyond the walls of the growing ecommerce giant. Pinkham and Black put together a short paper for what Pinkham later referred to as an “infrastructure service for the world.” Bezos was intrigued and eventually gave them the green light not just to work on it, but for Pinkham to build a team in his native South Africa to put it all together. Reflecting on that moment in a later interview, Pinkham said, “I spent most of my time trying to hide from Bezos. He was a fun guy to talk to but you did not want to be his pet project.” It also wasn’t clear it would become Amazon’s next big thing, as Black later told Network World: “It took a while to get to a point of realizing that this is actually transformative. It was not obvious at the beginning.”
The two main services AWS launched with in those early days were Elastic Compute Cloud, or EC2, the one Dwayne was talking about earlier, and Simple Storage Service, or S3, which sounds like what it is: a scalable storage solution. For companies that didn’t see computing infrastructure as their core competency or product offering, there was a clear appeal in slowly beginning the transition to the cloud — that was the case for large companies, but especially for smaller ones that were just getting started. Back in 2008, Malik asked Bezos if encouraging cloud startups was part of his plan. This is what he said.
JEFF BEZOS: That’s an interesting idea, that’s not something… You know, so far there seem to be a lot of VCs already encouraging cloud computing web services. We have, literally, I rarely run into a startup company today that’s not using our web services. So we’re extremely grateful for that and we continue… We are determined to continue to do a good job for those customers.
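It’s easy to see why startups defaulted to those services. Since S3 was the other half of that early appeal, here’s a minimal sketch of what “scalable storage” looks like from a developer’s side, again using the boto3 Python SDK and an entirely hypothetical bucket name. Note what’s absent: no servers, no disks, and no capacity planning; you just write and read objects and get billed for what you use.

```python
# A sketch of S3's pay-as-you-go storage (bucket and key are hypothetical).
import boto3

s3 = boto3.client("s3")

# Store an object. Provisioning capacity is Amazon's problem, not yours.
s3.put_object(
    Bucket="example-bookstore-assets",
    Key="catalog/2008/bestsellers.json",
    Body=b'{"titles": ["..."]}',
)

# Read it back from anywhere with the same credentials.
response = s3.get_object(
    Bucket="example-bookstore-assets",
    Key="catalog/2008/bestsellers.json",
)
print(response["Body"].read())
```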
It was not immediately obvious that cloud computing was going to transform so much of the infrastructure underpinning the web and the computation of companies large and small. But it’s not entirely surprising things worked out that way, for a few reasons. As Dwayne described, computational infrastructure was not the core competency of those other companies. Plus, cloud providers like Amazon made an important promise to companies moving to the cloud: that they would save money in the long run. Not needing to make as much capital investment in their own servers and only paying for what they used was supposed to be cheaper — and for a while it was. But, like with so much involving the tech industry, it eventually came to the point where cloud providers wanted to increase their profits, so they started to push more services on their customers and to hike prices.
DWAYNE MONROE: If we were to step into a time machine and go back to around five years ago or so, what you were hearing from these companies was: you will save money. And, of course, every corporation, every organization, wants to hear that, that you’re going to save money. This was the pitch. Now it hasn’t turned out that way. Some of the bills I’ve seen are eye-watering, much more than companies were spending on-premises, much, much more.
There’s another important reason to consider, though, one that Dwayne touched on but Cecilia Rikap expanded on in our conversation. For decades, companies have been pushed to outsource as much as they possibly can as a means to cut costs and become more efficient. They shifted manufacturing to places like China and Mexico, moved customer support to India and the Philippines, reduced their inventories, and subcontracted whatever else they could. So why not do the same with computational infrastructure?
CECILIA RIKAP: Amazon, Microsoft, and Google have been developing this narrative that the only, the most efficient way to do it, the cheapest and most flexible way of doing it, is on their clouds. Their clouds basically are just rented computers. In principle, this sounds very attractive if you think about the transformations of big corporations from the 70s onwards. In this process, basically, of reducing your tangible assets, it becomes very attractive, instead of having to have your own data center on your premises, to outsource that to, in this case, Google, Microsoft, or Amazon, and they saw this. They saw this attractiveness of being more flexible. But what the customers didn’t see was that outsourcing your digital infrastructure is not like outsourcing a call center or outsourcing your manufacturing capacity, because it is very much entrenched with the intangible assets themselves, which are also the main assets of these big companies and are also crucial for running a university, a government, and so on, and they are also crucial for startups themselves.
One would assume that the goal of outsourcing is to save money at the end of the day, and as Dwayne described, that’s exactly how cloud providers sold their services to companies large and small: if they moved their data and operations to the cloud, they would cut their computation bills by millions of dollars. But, in practice, that’s not how it’s worked out for many of them. So, why don’t they pull out? Why don’t AWS and these other cloud businesses collapse? Well, the longer your business is on the cloud and the more you take advantage of other services Amazon, Microsoft, and Google provide, the harder it can become to extricate your business from it.
CECILIA RIKAP: A company will end up writing all the algorithms, all the code on the cloud. So let’s say you choose Amazon Web Services; you will be doing all your architecture, your software architecture, on the cloud. You will be writing code, but in between, there will be kind of moments when you call a software as a service that is offered either directly by Amazon Web Services or by a third-party company that also offers its services on the cloud. So basically, you keep on writing the code, and one would think, okay, I can access the technology, that’s perfect and cool, and no, what you do is use a technology that is sold to you as a black box. All your software, all your architecture becomes dependent on the cloud, on these different pieces of the cloud, and becomes so expensive to leave, and so time-consuming, that it’s not only impossible for small companies, but also for larger ones.
For cloud-native startups, it becomes virtually impossible to even imagine rearchitecting the core of the company to get off the cloud services of Amazon, Microsoft, and Google. But to a certain degree, that even becomes the case for major corporations, especially as they begin to use more of the services offered through cloud providers, particularly artificial intelligence tools meant to process their data, better target their customers, and serve the many other purposes they can be put to. Sure, using the cloud may be more costly, but getting off the cloud could take time and energy that may not be worth it. And ultimately, there are some benefits beyond the commercial that can help explain why a company would stay.
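Here’s a minimal sketch of what that entanglement looks like in practice: ordinary application code, again in Python with boto3 and entirely hypothetical names, that quietly accumulates provider-specific dependencies. Moving this logic to another provider means rewriting every integration, not just rehosting some servers.

```python
# A sketch of cloud lock-in: each call below targets an AWS-managed
# service with its own proprietary API (all names are hypothetical).
import boto3

dynamodb = boto3.client("dynamodb")  # AWS's proprietary database API
sqs = boto3.client("sqs")            # AWS's proprietary queue API

def record_order(order_id: str, title: str) -> None:
    # Persisting state ties the data model to DynamoDB's item format.
    dynamodb.put_item(
        TableName="orders",
        Item={"order_id": {"S": order_id}, "title": {"S": title}},
    )
    # Downstream processing ties the workflow to SQS's semantics.
    sqs.send_message(
        QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/fulfillment",
        MessageBody=order_id,
    )
```

Multiply that pattern across thousands of functions, and the cost of leaving starts to look bigger than the cost of staying.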
CECILIA RIKAP: What I came to conclude is that they are doing it to escape from uncertainty. They prefer to depend on big tech, because if they depend on their cloud, they are not investing in all this themselves, and they can always change and update the technologies faster. For companies that are also operating as intellectual monopolies in their own fields, getting access to the method for innovation, the method that is becoming the primary one, the one that, in a way, is being imposed as the mainstream for developing intangible assets, ends up being sort of the best alternative available. And in the end, why it’s the best alternative available for them is that it enables them to keep on extracting value from those that participate in their global value chains.
Even if we debate how much cloud services benefit users, it is clear this shift has been a boon to cloud providers — none more than Amazon. The company’s ecommerce business isn’t a particularly high-margin one and Bezos long kept profit margins low to reinvest in growth and expansion. But once AWS came along, it changed the game. For years, the bulk of Amazon’s profits have come from its cloud division. That’s not only kept shareholders happy, but it’s fueled the company’s expansion into new areas, including everything from film and television to pharmaceuticals. Amazon could lose money dominating other industries because the cloud business was there to support its ambitions. In the first quarter of 2024, AWS accounted for just 17% of Amazon’s total revenue, but a full 62% of its profits. It’s no wonder Amazon and its competitors want to keep the cloud profits coming.
[SCALE OF CONSTRUCTION]
There are plenty of data centers throughout the world, but remember the distinction we made earlier in this episode between hyperscale data centers and everything else. It’s the massive hyperscalers, the kind being built to power the cloud businesses of companies like Amazon, Microsoft, and Google, that we’re really concerned about. They not only have massive resource demands, but they signal the further consolidation and centralization of the infrastructure that powers the web in the hands of a small number of powerful, and in this case American, companies.
Let’s look at the numbers. In 2018, Synergy Research Group estimated there were 430 hyperscale data centers worldwide. Forty percent of those facilities were in the United States, with China, Japan, the United Kingdom, Australia, and Germany collectively accounting for another 30%. At the end of 2020, Synergy counted 597 hyperscale data centers worldwide — a number that had more than doubled in five years. Amazon, Microsoft, and Google were responsible for more than half of them, with the likes of Oracle, Alibaba, and Meta (then still Facebook) adding quite a number of their own. But they were just getting started.
Between the increasing internet dependence created by the pandemic, the continued growth of these companies’ vast data collection, and their efforts to get more people using more computationally intensive AI products, culminating in the generative AI hype of the past couple of years, the major cloud companies have been making major investments to more rapidly expand their networks. At the end of 2023, Synergy counted 992 hyperscale data centers, and that number ticked over 1,000 at the beginning of 2024. Synergy expected the number to double again within four years, but more importantly, it noted that the scale of those facilities was increasing — they were continually getting larger, covering more space, holding more servers, and making greater demands on local electricity grids and water resources to serve the bottom lines of major tech companies. It counted 440 new facilities underway, and beyond those, through 2024 the major tech companies and cloud providers have been throwing money around in every corner of the world to lay the foundations for still more data center projects.
I want to highlight those numbers for you one more time: at the end of 2018, there were 430 hyperscale data centers. By the end of 2020, that had increased to 597. At the end of 2023, it was 992, and now it’s over 1,000 with hundreds more in the pipeline. Earlier this year, Microsoft announced it had spent $50 billion on data centers between July 2023 and June 2024 alone, and was planning to add new server capacity much faster than in the past. Amazon committed $150 billion to data center expansion, with $50 billion dedicated to projects in the United States in the first half of 2024 alone. These companies are serious not just about expanding their businesses, but about increasing the amount of computation our societies require, regardless of whether there are corresponding social benefits. At the end of the day, the bottom line comes before everything else.
DWAYNE MONROE: Let’s say you just made 10 billion a year by just providing nice software that people wanted, and you had modest growth, or maybe no growth? Well, we know that the way our system functions, that would just be unacceptable. You’d be punished by the market and so forth and so on. How do you increase the consumption of computation above and beyond what actually is required, or what organizations and individual people are asking for? And this is what they’re trying to do by cramming so-called AI into every nook and cranny, because it does require such incredible build out.
[RETURN TO THE DALLES]
To close off this episode, let’s go back to The Dalles, the Oregon city that became the site of Google’s first company-owned data center. Fast-forward 15 years, and residents were starting to ask questions about how much they were really benefiting from the arrangement, and about the effects of all the water needed to supply Google’s growing data center footprint as it continued to add new facilities to augment the original one from 2006. Oregon might be thought of as a wet state with plenty of water, but The Dalles is in a county that’s regularly subject to drought, which has naturally made residents concerned about all the water going to Google.
The tax break on its initial data center is over, so the company now pays significant sums into city coffers, but even so, Fortune journalist Adam Seessel reported that residents used to refer to Google as Voldemort Industries — in part because of the secrecy, and in part because of the Harry Potter villain’s nickname, He Who Must Not Be Named. In 2021, Google was negotiating a new water deal when residents’ concerns finally came to a head. The Oregonian, a state-wide newspaper, requested the figures for Google’s water use from the city; instead of providing them, the city ended up in court against the paper, with Google paying the city’s legal bills to keep the information private. It was the legacy of the agreement fifteen years earlier to keep all of Google’s operations a secret. Here’s how Oregonian journalist Mike Rogoway described it to the Berkeley Technology Law Journal podcast.
MIKE ROGOWAY: Well, so if we go back to 2021, our readers out in The Dalles said, “Oh, Google is seeking a lot more water from the city, and they want a new water deal to help finance that.” And I thought, well, we should understand that. So I called the water utility manager for the city, a fellow named Dave Anderson, and asked him about the deal that Google was seeking. And he walked me through it, and that was great. But in a poor example of reporting, I forgot to ask him how much water Google was using at the time. And as soon as I hung up, I’m like, “Oh, I forgot to ask the most basic question!” So I hopped on the email and sent Dave a note and said, “Oh, I forgot to ask, so stupid. How much water is Google using now?” Well, that email set off a chain of events. Google asserted that its water use was a trade secret and instructed the city not to tell us. Oregon has a sort of unusual public records process, and the city said, “Oh, it’s a trade secret. We can’t tell you.” So we appealed to the county district attorney and said, “They have no case here. They have to give us this information. They’re a public utility. And you know, this is public information.” And the district attorney agreed and ordered the city to hand over data about Google’s water use. Well, Google then instructed the city to sue us to prevent us from getting access to that data, which is what Oregon public records law requires if a city wants to block the records. The city said it was contractually bound to do what Google ordered, and did, in fact, sue us. Well, we fought the suit, and a nonprofit organization called the Reporters Committee for Freedom of the Press stepped in and provided legal representation for us to limit our legal exposure. We felt strongly from the beginning, as did the RCFP, that the law was on our side. It took about a year, but Google gave up and agreed to give us everything we wanted, as well as pay the city’s legal costs and our legal costs.
As Google fought to keep its water use figures secret, it found itself facing a growing public relations nightmare and finally relented. When it shared its water use figures for The Dalles, people were shocked at what they showed. In just five years, Google’s water use in the city had tripled. Its facilities used 355 million gallons of water in 2021, which was the equivalent of 29% of all the water used in the city that year. And it still wanted more to cool additional facilities. As one resident told The Oregonian, “Google’s become a water vampire.”
But it’s not just The Dalles asking those questions about the massive data centers being built in and planned for their communities. Around the world, groups of concerned citizens are asking questions about these infrastructures and pushing back on plans they feel aren’t in their interests. Those fights could become central to a wider campaign to reassert collective power and sovereignty over technology — and that’s what we’ll be exploring in next week’s episode.
[OUTRO]
Data Vampires is a special four-part series from Tech Won’t Save Us, hosted by me, Paris Marx. Tech Won’t Save Us is produced by Eric Wickham and our transcripts are by Brigitte Pawliw-Fry. This series was made possible through support from our listeners at patreon.com/techwontsaveus. In the coming weeks, we’ll also be uploading the uncut interviews with some of the guests I spoke to for this series, exclusively for Patreon supporters. So make sure to go to patreon.com/techwontsaveus to support the show and listen to the rest of the series, or come back next week for episode 2 of Data Vampires.