Why We Must Resist AI

Dan McQuillan

Notes

Paris Marx is joined by Dan McQuillan to discuss how AI systems encourage ranking populations and austerity policies, and why understanding their politics is essential to opposing them.

Guest

Dan McQuillan is a Lecturer in Creative and Social Computing at Goldsmiths, University of London. He’s also the author of Resisting AI: An Anti-fascist Approach to Artificial Intelligence. You can follow Dan on Twitter at @danmcquillan.

Support the show

Venture capitalists aren’t funding critical analysis of the tech industry — that’s why the show relies on listener support.

Become a supporter on Patreon to ensure the show can keep promoting critical tech perspectives. That will also get you access to the Discord chat, a shoutout on the show, some stickers, and more!

Transcript

Paris Marx: Dan, welcome to Tech Won’t Save Us!

Dan McQuillan: Thanks very much for having me.

PM: Very excited to chat with you. Obviously, these AI tools are everywhere today. Last year, you had a book published called “Resisting AI,” which is very relevant to everything that is going on in this moment. Great timing on the book! [laughs] It gives us something great to talk about and to dig into, because I think it’s a really important perspective on AI to have — especially in this moment where there is so much hype and excitement about it. I want to start by getting your initial impressions of what has been going on the past few months. Obviously, last year, we had the slow rollout of these image generation tools — things like DALL·E and Stable Diffusion — which slowly became more popular throughout the year.

Then of course, in the past few months, we’ve had ChatGPT emerge; we’ve had the deal between Microsoft and OpenAI to build that into the Bing search engine. Google is moving forward with its own AI search engine integration. Facebook has recently been talking about how it also has its own and might do something with it soon too. Of course, now that there’s the hype, everyone has to get in on it. I wonder, genuinely, what you’ve made of the responses that there have been to these technologies over the past few months, as they have slowly become much more common and the public has been interacting with them?

DM: Well, I suppose what’s currently on my mind — just because, being stuck in academia, you tend to see what’s most immediately around you — is that I’m really distressed right now about the way a lot of academics are just rolling over and going: Well, ChatGPT and all the language models, they’re here; they’re inevitable; we have to learn how to live with them. They even valorize this, saying: We can adapt our ways of writing to include particular forms of creativity, and we can use these tools. No one could say there’s absolutely nothing to be gained from these tools. But I think these statements are what I call in the book AI realism, for starters, which is the sense of inevitability around AI — no matter how visible its toxicity. Also the fact that they just seem to be able to sweep this toxicity under the carpet. I mean, the absolutely appalling varieties of harm that are immediate and already rolled up in something like language models just don’t seem to trouble them.

I guess that just happens to be my little fishbowl at the moment. I’m not searching for something good to say, because I have nothing good to say about large language models. I suppose there’s a collateral benefit, as you say: it’s just pushed all this further to the front of everyone’s agenda. In that sense, I was reading a great thread just now by Emily Bender on OpenAI and their delusionary statements about their careful custodianship of the future of artificial general intelligence and all this nonsense — which is just basically marketing stuff, really. But it also represents that ideology. It’s really great that lots of other stuff is coming out from under the stone — some things that were perhaps only of concern to those of us who spent too much time thinking about this kind of thing anyway, and worrying about it and trying to get people alerted to it. Now it’s out in the open.

It is mostly hype, and people seem to be getting swept along with it. But those of us who are in the skeptical corner are trying our best to highlight the urgent downsides, and I think some more of those will come out. I don’t want to go on about it too long, but just to say it’s a weird time for me, because my main concern, and my concern in the book, wasn’t really to focus on these most spectacular forms of AI. I think they’re important, but my gut feeling has always been that the most impactful applications of AI, or effects of AI, are those things that much more invisibly permeate through the institutions and come to touch people’s lives in much more ordinary ways.

When they’re claiming benefits, or when they’re seeking housing, or when they’re getting educated, or whatever it is. I don’t think this moment is diverting us from that; I think it is just shining a light on that stuff. But my final resentment for the moment would be that adopting these things — like Microsoft, for example, sticking with ChatGPT and Bing — to my mind just adds another brick of legitimation to these technologies in general, and to their general application to our lives. That’s what resisting AI is really about: the broad resistance to these computational assemblages being given any influence over our everyday life at all.

PM: I appreciate how you put that, and that is a bright side that you’ve discussed: even as the hype accelerates around these technologies, it also gives us an opportunity to put forward more of these skeptical views while the technologies get more prominence and attention, as you say. Emily Bender is someone I definitely need to have on the podcast to discuss this with as well. I want to go back to what you were saying at the beginning about the reaction of the academics that you’re seeing, because it does seem that, obviously, there’s a booster-ish response to it: Oh my God, this is going to change everything; this is great! That’s the Sam Altman perspective on this, and the people like him. Then it does seem that there’s also a pragmatic response that plays into this as well, that’s really beneficial to them, which is associated with what you described: These are here, we need to figure out how to adjust to them. Rather than saying: Hold on a second, do we actually need these technologies at all? Should we be embracing this and acting like it’s just inevitable, or can we push back on it?

DM: Just to skip to the punch line: I’m, I guess, a rejectionist or an abolitionist in that sense, and I think we will talk about this. It’s not just about saying no — it’s about saying what the alternatives are and how we go about that. But what you made me think of just then was, I do agree that there’s, first off, this more self-styled realistic approach to it. What I think is that that’s more dangerous, because it’s easy to dismiss the hypesters and the boosters, and people understand that positioning. There are hypesters and boosters in every area of life, so people are somewhat inured to that. I think it’s the positioning of people as responsible, able to say: Absolutely, there are problems with these things.

Given that we have them — this is the inevitability — given that we have them, we… What’s really frustrating about that is the implication that they are the people who should make that decision. That somehow these responsible, liberal, bourgeois intellectuals should be the ones who can take on this essentially monstrous technology and decide for the rest of us how best it should be managed for the common good. Which is, coincidentally, something that they get to decide. I don’t really make any secret of my positionings around these things. One of my bigger concerns about AI, I suppose, and its openings towards facilitating reactionary solutions to things, is that this is neither a sharp break in technology (it’s not sci-fi) nor a sharp break in politics. The harms that we’re talking about are an intensification of the harms that are happening right now under our nominally democratic, apparently liberal regimes.

It’s not just that we’ve got lunatic, spacefaring, longtermist visionaries — if that’s the right word for them. May they all go off in rockets on a one-way ticket. It’s that we’ve got the responsible authorities who are responsibly closing off borders so that people die in the desert, or on the Mediterranean, or responsibly doing all the other things that are causing harm right now. Those are the same people who are coming along and saying: Don’t worry! We have the capacity and we have the responsibility to manage these things sensibly for everyone’s benefit.

PM: It’s a huge concern. We’ll come back to it through the course of this conversation, to talk about how this makes AI technologies much more concerning and worrying — especially when they’re not the very visible ChatGPT-style tool, but are much more quietly integrated into the systems that enable all the problems that you were just talking about. Before we get to discussing those things, I think it’s better for us to get an understanding of how these technologies actually work, because that is going to help to illustrate how these problems are caused and why they’re difficult to avoid. Can you talk to us a bit about how AI technologies, so to speak, actually work? How they “learn” — I’m putting scare quotes on that, for listeners who can’t see me.

DM: Very scary.

PM: Totally! So how should we understand these technologies and how they’re actually developed and how they function?

DM: Okay, thanks — that gives me a chance to try and ground, or at least backfill, some of my opinions with some semblance of an understanding of what’s going on. Maybe if we work backwards from the language models, or large language models, which are the ones that are so front and center of everyone’s attention. They’re essentially text prediction machines. Inside AI in general — inside the language models, per se — there are some clever ways of doing things. I’m not questioning that. There are somewhat deft and skillful ways of manipulating things on their own terms, and what they are largely manipulating are statistical optimizations of one kind or another. So it’s machine learning, and language models have just used transformer models, essentially — which is a particular way of setting up that optimization — to learn how to fill in the blanks. I mean, they’ve learned how to complete: The cat sat on the [blank]. That’s how they’ve learned to fill in that gap.
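
To make that “fill in the blank” description concrete, here is a minimal sketch of prediction-as-statistics — a toy bigram counter over an invented two-sentence corpus, simpler by many orders of magnitude than the transformer models being described, but the same move in spirit:

```python
# A toy bigram model: nothing like a real transformer, but the
# same idea of text prediction as pure statistics over a corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count, for each word, which words tend to follow it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most plausible next word.

    There is no understanding here, only a lookup into frequency
    counts: pick whatever continuation is most plausible given
    what was seen before.
    """
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> 'cat' (ties broken by insertion order)
print(predict_next("on"))   # -> 'the'
```

Scale the corpus up to a large slice of the internet and swap the frequency table for billions of learned parameters, and the shape of the operation is the same: emit the most plausible continuation, with no understanding anywhere in the loop.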

Because they have quite sophisticated methods for doing that, huge volumes of data to do it with, quite a large amount of invisible ghost labor — invisible human input into the process — and oodles and oodles of computing power, they do that very well. And doing that very well gives them a number of capacities. I mean, they can produce grammatical, mostly contextual, and sometimes creative-seeming texts that are doing what they’re meant to do, which is to be plausible. But they don’t have any capacity to understand; they don’t understand anything. They don’t understand anything more than a Teletype machine understands anything. They produce sequences of letters, and they’ve learned how to do that in a way that achieves a mathematical minimization.

Now, that does give them certain affordances, and producing plausible-looking text is one of them. It’s plausible because it’s been trained against text that was produced by humans, to be a good imitation. That’s all it is: an imitation. It’s a somewhat randomized imitation of what humans can produce, given the right set of inputs and prompts. Given the vastness of the different examples of subjects and topics and contexts and sources that have been sucked into these things, what they can therefore produce is also quite diverse. But this is where you could really help me by prompting me. I guess, having spent a little bit too much time looking at that, I sometimes get a bit flatfooted when I’m faced with people who read it as anything more. I can understand it — I’m not trying to belittle anybody. But just because it’s plausible text doesn’t mean what it’s saying is plausible. It doesn’t mean that it has any conception of what it’s saying. It’s us reading into that pattern, that there’s some meaning in it.

We are bringing the meaning to it; there is no innate meaning in what an AI is doing, other than a mathematical operation of statistical minimization and a reproduction of that by deploying that model, that set of learned parameters, basically. There’s nothing else in it. If it seems reasonable to us, or if it seems interesting and weird to us, that’s us. That does seem to place some pretty strong limits on what we trust it to do. I think of it as a really, really interesting party trick. But why would you do anything else with it? I don’t know. Why would you allow it to be an integral part of our knowledge-searching structures and search engines? Why would you let it write your essay? I’m not trying to get at my students here, if anybody’s listening. You want to try it? That’s up to you! But it’s inevitably going to produce what the industry itself calls hallucinations — which is still over-reading it, because it implies that there’s something doing the hallucinating.

Let’s use that word for the moment. It’s literally making stuff up, and it has no idea what it’s making up. Therefore, it is a bullshit engine. It’s a bullshit engine in that it makes up stuff that, essentially, has no semantic content, grounding, causality, or any of that in it. It’s bullshit because its only goal is to be plausible — just like somebody at a party who tries to bullshit you that they know all about fusion physics or something like that, and they don’t. They’re just bullshitting and trying to sound convincing. That’s language models, and what they’re doing is a particular application of this whole business of trying to predict blanked-out words and sentences. That’s the learning method. That’s just a very specific instance of the general thing we call AI. Right now, it is a very particular slice of what could possibly be called AI — and AI has a history of being lots of different things, as I’m sure you’ve covered before and know very well yourself. There are lots of different kinds of AI.

There’s one very particular kind; it’s called connectionist AI. It’s not relying on the idea that there are rules in the system — there’s no expert stuff in the system. What it’s relying on is that if you present this type of learning with a huge enough data set, give it a certain amount of desired outputs, make lots of interconnections in between those two things, and then turn the handle, it keeps on doing a self-correction process called backpropagation — so that it more closely approaches the “correct output,” given your input. That’s all it is: just a turn of the handle, like a mangle. It mangles this stuff until it learns how to fake it. It fakes it until it makes it, basically. That’s pretty much the whole of deep learning and reinforcement learning. They are all variations on a theme. They all have these common characteristics. They’re all based, essentially, on mathematical optimization; there’s absolutely no reasoning involved, no essence that understands it. They all require vast amounts of computation and vast amounts of data, which have their own political implications, of course. They have their own ethical implications and their own social implications. So that’s my starter for ten.
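
As an illustration of that “turn the handle” loop, here is a minimal, self-contained sketch of backpropagation — a toy two-layer network on four invented examples, not any production system — showing that the learning is nothing but nudging weights to shrink a single error number:

```python
import numpy as np

rng = np.random.default_rng(0)

# Inputs and desired outputs (XOR): the "huge enough data set",
# shrunk here to four examples.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# "Lots of interconnections in between": two layers of random weights.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: produce outputs from the current weights.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backpropagation: measure the distance from the "correct output"
    # and push every weight slightly downhill on that error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```

Deep learning elaborates this loop enormously, but the common characteristics named above — optimization over data, no reasoning anywhere — are all visible here.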

PM: It’s a very concise explanation of how these things actually work. And what you get at there is why some of the reporting around it has been concerning, as a lot of these publications have been publishing transcripts with ChatGPT and saying: Oh my God, look at all the things that it’s been saying to me — after feeding it particular prompts to lead it in that particular direction. This is something that we should be very worried about, not so much in the ways that we’ll be talking about, but more in the way of: Oh my God, look at the things that it’s spitting out! What if it actually means these things? Or gets people to believe them, or what have you?

That does seem like looking at this through the wrong lens of what the problem is here and how we should approach it. You mentioned there the degree of computation that is necessary for AI tools — ChatGPT and things like that in particular, but many of the others too. Would you say that part of the reason the hype is happening in this moment is because there has been so much progress in computational power, in creating these cloud data centers where so much computing power is available? Or would you link it more to a development in politics and how that has evolved in our society in recent decades?

DM: I try to earwig a bit on industry conversations, and actually a really good podcast for that kind of thing, for anybody who wants to listen along, is called “TWIML AI: This Week in Machine Learning.” The host on there, Sam, interviews key practitioners in all of these areas and does a very good job; some of it will be very technical, very industry-focused. But if you can listen along, you understand some of the more internal narratives, and it’s pretty well understood in the industry that the main thing that’s going on at the moment is scale. There have been actual changes, and there’s a little bit of competition between doing deep learning or reinforcement learning. Now we’ve got transformer models, and they have, perhaps not surprisingly, transformed a lot of things — but not fundamentally. Fundamentally, these are just variations on a theme and, essentially, it’s about scale and compute power, as they say.

But something that I should have said a little bit earlier, and I’d like to emphasize to people, is that I may sound like I’m going on about the computational technology — and I am, because I think it’s important to be materialist about these things, across the board. Politically materialist, or materialist in the sense of political economy — follow the money and everything else — but also actually look at the technologies themselves as not being simply neutral or passive technical frameworks in which people invest their politics. I mean, of course they do; we know that, and we can talk about the weird supremacist politics that gets invested by Silicon Valley in this machinery.

They’re not just passive vessels for our imagination. They have particular characters; they make certain things easier and certain things harder. My reading of the politics of them does start from — or at least that’s how I tried to do it in the book — let’s look at what AI actually does: not what the newspapers think it does, certainly not what sociologists think it does, maybe not even what some computer scientists think it does, but what it is really doing as a material technology. But that’s because it never exists like that by itself. That stuff didn’t just appear. We tend to talk about it as if it exists over there somehow, and has then been brought into society. That is a dangerous fiction: this stuff emerged out of the same social-political matrix that we currently inhabit, and immediately goes back into it. It’s really impossible to separate the operations, which I find really interesting, from the institutional assemblages that they will always be part of — from the ideas and ideologies and values that are already present throughout their creation process, throughout the process of even conceiving of these things.

Throughout their operation and throughout their application, these things can never be separated; they just act as a particular condenser, or a particular channel, within that process. A real understanding of what these mean in the world can never really be separated from the wider social matrix. For me, trying to read the significance of AI is trying to say: Well, what are they actually doing, and how could this affect people’s lives? But it’s also trying to say: We’re not starting from a blank here. We’re starting from the societies that we all have — as they say, we’re all experts by experience. We all inhabit this world, and we understand some of its very stark and peculiar topologies. AI has emerged from a particular corner of these same landscapes, out of this particular world, and that’s where it actually operates; there is no other space. So if we need to understand what AI means, where it comes from, what it’s going to do, we can’t really do that without trying to see it as a piece in this wider jigsaw puzzle of society as we know it, and of history as we know it — the ongoing dynamic and unfolding of history. AI is part of that history right now, and certainly playing an active part in trying to foreclose certain futures and open up other futures. That’s why I find it important to pay attention to.

PM: I would say that addresses my question on politics, but we can get more specific about it. In the book, you describe how, obviously, we had this neoliberal turn in the 70s and 80s. This led to a greater privatization of the functions of government, a greater reliance on the market to provision services for people — cuts to the welfare state, government seeking efficiencies “to deliver services,” and things like that. As things get marketized, there’s also a vast production of data that goes along with creating these systems. And that seems to be the type of environment — less investment in the public service, the government needing to seek out efficiencies, the market providing more of the services that we rely on — that would be beneficial to an AI type of approach to these functions. Maybe you can talk about how AI fits into this broader political picture, or economic picture, that we’ve created over these past couple of decades, and how it exacerbates these trends rather than seeking to ameliorate them.

DM: Absolutely. What you’re describing is a bit of how I got here: I originally signed up with Bristol University Press to write a book called “AI for Good.” I’m in a computer science department, and I feel that I also have experience of working in what they call the social sector — those are my twin concerns — so maybe I could help to usher in this idea of AI in the public interest, or something like that. Then I just went through this series of moments asking: Wait, what? This happened as I was trying to engage with the AI while at the same time having a sense of history and a sense of political and social understanding. I was trying to understand what was going on with AI, and went through a series of negative revelations, going: Wait! This is exactly like neoliberalism. This is, in its own way, entirely recapitulating a sort of [Friedrich] Hayek idea of what markets are supposed to do: the optimal distillation of all available information.

Basically, in that sense, you could have almost predicted AI based on a reading of neoliberalism. The aspects of that that are most interesting to me are how it plays into this anti-social — in the literal sense, anti-society — privatizing, centralizing, seamless construction of a space of free flows of capital, and all of that. It does play into that. One of the useful things, I think, about the analysis of neoliberalism that’s equally important for AI, and is somewhat overlooked sometimes, is the idea of the simultaneous production of subjects. In other words, us: how we experience ourselves, how we see ourselves and each other under neoliberalism. One of the really strong elements of the critique of neoliberalism — as opposed to all the capitalisms that we’ve had — is its emphasis on hyper-individualism, the fact that we exist as very isolated subjects and entities. What’s relevant for me is a sense that AI, in that feedback loop I was describing, comes out of and feeds back into that process as well — it feeds back into subjectivation.

AI produces its own subjects, in a way. It’s not just a mechanism for manipulating us — it’s a mechanism for producing certain kinds of us, or producing certain understandings in us of who we are and what we should want. That’s always how hegemony works: it establishes a cultural and experiential framework and field as much as it does governance, as much as it does industrial relations or relations of work, or labor and social reproduction. It establishes these things across the board, from the psyche to the political economy. I think AI is active in that way. The other thing, from what you said: one of my first revelatory moments in unpicking what actually goes on inside AI was realizing that it doesn’t really do anything except come up with ways of dividing things — a utilitarian, optimal division, a way of ranking, essentially, the desirable and the less desirable. Then think about the moment — the slightly longer political moment, in the sense of the financial crash, for example.

What I’m saying, in a very long-winded way, is I just looked at this and said: Oh, my God, this is a machine for austerity — this is an austerity machine! It is a machine for reproducing and scaling this mechanism of austerity and this mechanism of categorization. When you’re looking at AI, you’re looking at Deliveroo, you’re looking at Uber. These are inseparable from the core — we could call them dark patterns, as somebody put it to me the other day, taking that analogy from web design. These are the dark patterns that perhaps weren’t intended to be dark, but they came out of a certain logic and they’re embedded in these systems. It absolutely does not conflict with neoliberalism; it seems like a turning up of the volume on the agenda that’s been rolling since — let’s pick a date — 1973 and the coup in Chile, or whatever. It’s a machine for the reproduction of Chicago Boys values.
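
To show what a “machine for ranking” amounts to mechanically, here is a deliberately crude sketch — the features, weights, and cutoff below are all invented for illustration and drawn from no real benefits system:

```python
import numpy as np

# Hypothetical applicants reduced to feature vectors
# (e.g. employment gaps, address changes, missed appointments).
applicants = {
    "A": np.array([1.0, 0.2, 0.0]),
    "B": np.array([0.1, 0.9, 1.0]),
    "C": np.array([0.5, 0.5, 0.3]),
}

# Weights learned elsewhere, on past data; opaque to the people scored.
weights = np.array([0.8, 1.2, 1.5])

def risk_score(x):
    # A logistic score in [0, 1]. The math is trivial; the politics are not.
    return 1.0 / (1.0 + np.exp(-(weights @ x)))

# Rank the population and draw a line through it. Moving the cutoff
# is an administrative decision that silently redraws who gets flagged.
cutoff = 0.8
for name in sorted(applicants, key=lambda k: risk_score(applicants[k]), reverse=True):
    s = risk_score(applicants[name])
    print(name, round(s, 2), "flagged" if s > cutoff else "passes")
```

Everything contentious — which features exist, how they are weighted, where the cutoff sits — is decided before the code runs, which is exactly the sense in which the division and the ranking are built in.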

At the same time [laughs] — that’s pretty bad! But what really concerns me, maybe, is that this is coming at a time when the wheels are coming off that dominant world mode. There seems to be quite a lot of unintended friction in trying to sustain this neoliberal world of just-in-time, trans-global economic operations, for a number of reasons. One is the financial crashes; another is the so-called refugee crisis, or the massive displacements from war and climate change and famine and other causes around the world. Plus the climate crisis, overarching everything, is causing shocks and tensions in the system — not to mention the pandemic, actually, which was another kick in the nuts for the existing system. It’s really staggering. It’s staggering, and that is worrying because of the interests that have accrued themselves such power over this period of time, even compared to previous versions of capitalism.

It’s not like they’re going to go: Oh, yes, it’s not working anymore; we’d better go for a socialist republic or something. They’re going to dig in! They’re going to dig in, and they’re going to seek other means by which they can support and sustain the existing asymmetries, let’s say. Historically, that is the recipe that people like Robert Paxton point to as being pretty generative of forms of far-right politics, if not fascism: you’ve got a crisis, you’ve got a ruling-class panic, and you’ve got the complementary presence of solution-offering ideologies. We don’t have to look far — just outside, down the street, at the moment, in many places in the world — to see rising far-right movements. In my mind, that’s something that has concerned me for a long time.

Politically, I’ve always felt myself to be an anti-fascist, but the last few years have been incredible — the rise of the far-right. So again, if you’re really considering AI as a technology that’s enmeshed and entangled, involved in creating circuits and feedback loops in the society that it comes from and the society that is currently emerging, then consider this machine for division and ranking and separation, which tends to centralize the attributes that it imposes on people — for that to come about at a time when not only do we have existing injustices, but those injustices are veering in a far-right direction in so many ways. Whether that’s fascist-style far-right, or more religiously justified far-right, or whatever nationalisms or identitarianisms we’re talking about. It’s breaking out all over, and the possibility of that happening in conjunction with the emergence of AI is partly also why I wrote the book.

PM: I think it is concerning to see how these things develop. I want to come back to the point on fascism and the relationship to the rising far-right in just a minute. But even if we think about how this affects our governments that have not elected far-right parties — which is still the case in some countries, luckily. If we look at that, I was talking to Rosie Collington recently about how bringing these private sector technologies into the public sector still has an effect on what governments can do and what they do, because it does help to shape the responses. We can already see people like Sam Altman and other boosters of AI technologies — whether it’s ChatGPT stuff or, more broadly, the suite of AI tools — saying that these tools are going to be your doctor in the future, they are going to be your teacher in the future. We’re going to need fewer human teachers. It won’t be their kids — the kids of the billionaires — who will have the computer teachers and the chatbot teachers; it will be poor people who don’t have the money to buy access to private schools and private hospitals and those sorts of things.

Even as, especially in this moment, we see the challenges that a lot of governments are facing to fund public health care systems, and the talk that: Oh, we need more private sector involvement in the healthcare system. Then what is that going to mean into the future? How do these things evolve? How do technology and AI then justify, or help to justify, a further privatization of these things because of how AI is used? Then there’s an even more hidden sense, one we don’t see so much, as you describe in the book: the way that governments, after having been cut so much, after having been subjected to so much austerity, say: Maybe these AI tools can help us to administer our programs in a different way. Can they help us to see who is a deserving welfare recipient and who is an undeserving welfare recipient?

What other ways can we use these tools to classify the people that we supposedly serve, so that we don’t need so many bureaucrats, so that we don’t need so much labor, so that we can reduce the cost of delivering services — so we can make them more “efficient”? How do you see that aspect of this playing out as these technologies get integrated into the public sector, and then have consequences that are often not expected — though maybe people like us would say: Hey, pay attention; this is going to happen. Even though, a lot of times, those concerns are not listened to.

DM: I think you portrayed a very convincing picture of not only what will happen, but what is happening. Although I’m trying to also raise warning flags about potentially more extreme outcomes of these things, that is actually really my main concern, because that’s the stuff that’s hitting people right now. It hits all the people that are already being hit so hard by everything else. In the last couple of weeks I’ve been digging a little bit more into what’s going on behind the benefit system in the UK, in this rather notorious benefit called Universal Credit. Actually, I have to credit Human Rights Watch here for a report they wrote a couple of years ago that I didn’t read until recently. Reading through it was, firstly, affirming: it really made concrete a lot of things that I’d written into the book without knowing these on-the-ground facts about how these things would affect ordinary lives under pressure. Feeling that these are simultaneously distanced, cold, thoughtless mechanisms of bureaucratic administration, trying to achieve impossible targets given the ridiculous goals politics has in turn set them.

At the same time, they are genuinely mechanisms of cruelty — they are actually forms of punishment. I mean, in the UK, again, that isn’t really hidden. It’s not said so much about Universal Credit, but, for example, the system in the UK that’s created to deal with — revealing phrase — “asylum seekers” is actually officially called the “hostile environment.” I was chatting to somebody who works with Universal Credit beneficiaries, and they were saying that one of the beneficiaries they were talking to referred to Universal Credit as a hostile environment, and I think that’s absolutely right. It is, in the way it sanctions people. It has this incredible twisted logic of saying: Well, we’re only going to give you the absolute bare minimum, and if something happens — and this can be quite trivial — that somehow transgresses one of the many, many parameters that we set up for how your behavior and life should be lived, then we’re going to sanction you and only give you 70% of it. Which is, by definition, 70% of what you need to survive. It’s really appalling, and there are many dimensions to this.

I think they perhaps reflect a few of the things I tried to put in the book, things I did find quite early on. Hannah Arendt’s concept of thoughtlessness was very helpful in guiding me through what’s going on, because the capacity for such cruelty, such inhumanity — violence of various valences, happening from people to people, but through the mediation of these systems — is so awful. Unless you’ve seen it, or studied it, or experienced it, I think a lot of people still don’t really either know about it or want to know about it. Again, these things play into each other: people sometimes really don’t want to know what’s going on, even though the things that you described are already present. Of course, the thing that you were just describing is that so much of that is now being absorbed into the private sector. What does that give us? It gives us perhaps an incredible intensification of a single parameter, which is making a profit out of all this stuff.

There have been newspaper stories in the last few days in the UK comparing the food that you get in an old people’s home with the stereotypical oysters that the CEO of that company is eating on their yacht — it’s cartoonish in its horror. The privatization makes it even less accountable — not that local authorities or government were accountable before in any meaningful way — but it makes it less accountable, makes it more opaque, makes it more difficult for those people trying to exercise some guardianship — journalists or whoever else it is, activists or whoever — to get traction on it. It is the market ideology, of course. Also, in our own experience — where I work has had that same restructuring at the hands of consultancies, who talk in terms of efficiencies and optimizations. That’s one of the other things I’d like to drop in: it’s a consequence of this privatization, but it’s giving much more of a hold to this single-valued good as well.

This idea that things must be optimized — optimization combined with a capacity for cruelty, combined with the capacity to leave people without the bare minimum that they need to survive — brings me, again, to a term that I adopted in my jackdaw-like way, picking up little bits as I go along. The term that made sense to me at that point was the idea of necropolitics, which doesn’t mean some massive transformation in politics. If anything, Achille Mbembe, when he wrote his book “Necropolitics,” was talking about the still-murderous colonialism that exists within post-colonial regimes. But actually, that seems equally applicable to a lot of the systems that we live under, depending on which bit of the system you experience. I wanted to emphasize necropolitics because I wanted to emphasize, really, that these systems are literally a matter of life and death. They aren’t ordering anybody out to die, or carrying out automated executions or something like that, but what they are doing seeps out of the system in a way that is justified by being somehow mathematized and algorithmic and computational.

Perhaps nobody really believes it’s fully objective, but it’s systematized at least; it’s all authorized — just draining away the sheer possibility of the continuation of existence for so many people. That’s much more everyday and very pervasive. I think ChatGPT and the other generative models are, in a way, generative of further ideas about how to do this, for other institutions that didn’t yet think that they too could get in on this particular party. The welfare benefits people and the border forces are all over AI, and now it will probably spread to the rest of them under the same rubric. Mental health is a particular concern of mine, and a general interest area, because it seems one of those areas which is a focus for the effects of systemic categorization, the marginalization of the more vulnerable — a statement about what society really means.

How do we care for people who, at that particular moment, are not able to care for themselves? At the same time, it speaks so much to our values of normativity: what we think is okay and what we think isn’t okay; who we will accept because they’re productive, and who we won’t accept because they’re not productive. So mental health seems to encapsulate so many things. I’ve been concerned about AI and mental health for many years, but right now, I imagine that all of the people who are running underfunded programs for CBT therapy or whatever are looking at GPT models and going: Maybe that’s the answer.

PM: It’s very concerning, especially when you go all the way back to Joseph Weizenbaum and ELIZA in the 1960s, and how he created this system that was basically an early chatbot. It was supposed to be a psychotherapist, and he was like: People will talk to it, but they’ll never actually believe that the computer understands them. Then he was shocked to learn that they did buy into it and want to believe that it was real. He spent his whole life afterwards saying: We really need to be concerned about this AI stuff. We don’t like to learn the lessons of the past, at least not generally. I think in the book you made a really good point in explaining that one of the “benefits” of these systems, of AI, is that it doesn’t feel ethical doubt. So if you integrate it into these bureaucratic systems, it doesn’t worry that you’re giving someone only 70% of what they need to live on, for example. Or that you’re cutting them off from benefits altogether because you flagged them as someone who doesn’t deserve it.

I’ll just say for listeners that if they want an example of this, they can go back to Episode 72, in August of 2021, with Dhakshayini Sooriyakumaran, where we talked about how this occurred in Australia with their benefits system — how they implemented it to cut people off, and how it led to suicides and all these sorts of things. It was integrated in a way that wasn’t appropriate, that didn’t work. Whether these things are ever appropriate is, I think, a question, but in this case it was flagging people who were completely legitimate. I wonder, going back to what you were saying about the concerns about how this can lead us toward, or further empower, a far-right politics, a fascist politics that we’re already seeing on the rise — I have two questions on that. Do you think the views, the politics, the perspectives of the people developing these technologies matter in that sense?

When we look at these people, we know that there are people like Peter Thiel who are involved in this — someone I feel very comfortable calling a fascist — and he promotes the use of these AI tools in ways that are beneficial to the government, to the immigration agencies, all these sorts of things. People like Sam Altman, who is talking about artificial general intelligence. These billionaires in general, who are supportive of things like longtermism, and not wanting to address these social concerns because we need to think about building artificial general intelligence and ensuring that the human species lives way long into the future — particularly people like them, not poor people, who don’t matter so much.

How much do you think the particular politics of these people matters? And then also, how do these systems in general empower this way of thinking that leads us more in the direction of a far-right politics, and that way of seeing the solution to the problems that we very much do have in our society? As we’ve already seen, our political parties and our political system are shifting in that direction — the center-left party moves to the center, the center-right party moves to the far-right, for example. Everyone’s shifting in this direction. To what degree are these technologies helping move that along?

DM: It’s important to do what you did and say: Well, actually, this person is a fascist. The meme that I always like, and which gets repeated quite a lot at the moment, is the variation on the theme of: When they tell you who they are, you should believe them. There are many, many people at the moment who are mouthing and espousing and actively advocating for things that I think are unambiguously fascist. We shouldn’t beat about the bush. Those people are present in the same way that there are actual fascists on the streets, and those things need to be opposed and dealt with. But I don’t think they are the determinant factor. I don’t think there’s determinism in this; it’s a complex unfolding of stuff. That doesn’t need to leave us powerless in trying to analyze it, and I don’t think they’re the most important factor, actually.

It’s perhaps no surprise that people like that find themselves drawn to, or in positions of incubating, technologies like this. I think that’s more a resonance between their preexisting worldviews and what these technologies offer and what they’re good for. What’s far more concerning is the stuff that you have already been alluding to, which is the systemic side of things. There’s a term I saw a few weeks ago, which has now been adopted completely: polycrisis. We’ve got a plurality of overlapping crises going on, which are going to make life harder — obviously, for huge parts of the world, life is already critically filled with crisis, and has been ever since the beginning of empire. Let’s say there’s always been the Global South — which has always been subject to mortal precarity — and that, perhaps, is simply coming home a bit more now. I don’t know, but whatever it is, it’s pretty inevitable that all of us, in one way or another, are subjected to greater precariousness and more threats in some way.

This is the crisis of legitimacy that the system itself is already experiencing, because even those who are supposed to be loyal to it are now concerned about their own existence, and the future of their families, and so forth. There’s a systemic set of circumstances in which AI emerges — it’s no accident that it’s emerged at this time in this form — and with which AI forms a feedback loop. From my personal probings into how AI actually works, it seems to me that that in itself is no surprise: what it does, as a computational party trick made meaningful in the world, if you connect it to actual world operations, is essentially forms of closing off — forms of enclosure, let’s say — or, another way of saying that, forms of exclusion, and those dynamics. That’s what it offers; it doesn’t really add anything new into the world. What it does is provide means of boundary-drawing, dividing, valuing — as you say, particularly ideas of worth: greater worth and less worth.

That’s what makes it create the preconditions for more terrible people — or for more terrible outcomes by people who may not see themselves as actually believing in any of the ideologies the more extreme people propagate, but who just feel: Something needs to be done; we need to sort this problem out. Too many people coming across the English Channel in small boats, and I can hardly get a school place for my kid, and when I go to hospital there’s a 12-hour wait. All of which is true — and it’s very easy to point the finger, as always, in scapegoating politics, which is very much a far-right tactic, to blame some group other than our own. And to find ways of systematizing, of algorithmicizing, of mathematizing those kinds of operations as well. This stuff will happen — it is happening. These reactionary politics are happening, and they would happen if AI wasn’t here. But I do think that AI, very unfortunately, resonates far too closely with these things.

I don’t say AI is fascist. I sometimes get queried about that. I don’t say that because fascism is a political category; it’s something that we do in social forms. I think we do need to have a broader understanding of politics. Actually, we need to have a technopolitics, particularly in our time — but I think it’s been true since the Luddites. We need a technopolitics where we understand that what we understand as politics always rests on, is shaped by, and is delivered through particular technologies. That in itself is shaped by the character of those technologies — by the particular things those technologies are good at doing and the particular opportunities they seem to close off. I fear that might already be getting a little bit vague-sounding, but essentially, I see AI as forming a mechanism that’s very able to extend processes of enclosure, extraction, and exclusion.

PM: I just want to emphasize what you’re saying there by comparing it to what we hear from the people who are developing these technologies. If we think about the Sam Altmans of the world, for example, or any of these other people, they are telling us that these AI tools are going to have so many incredible benefits for the world — as I was saying, they’re going to be your doctor, they’re going to be your teacher, and all these other things, and they’re going to make opportunities and education and all this available to so many more people because these tools are just so powerful. What we’re talking about, and what you’re talking about, is very much that it’s very unlikely that those things are going to happen in the way that they are telling us.

On the other side of things, these technologies can be used in really harmful and worrying ways when they’re integrated into systems that are already enabling forms of exclusion and harm, and all these other things that already exist in the world — as is happening. Especially at a moment when the crises that we face are escalating and there’s even increased pressure to do something. Then that opens the gates to say: Okay, well, we just need to accept these harms, or whatnot, because of what’s happening here. To close off our conversation, I wanted to ask how you think we should respond, collectively, to AI. You mentioned that you started writing a book called “AI for Good.” So is our response to think about how we can have good AI, or what a progressive AI looks like? Or does it need to be a much more comprehensive response that challenges AI itself and thinks about how we address these problems in other ways?

DM: I definitely think we should challenge it — though that’s maybe not something that needs saying in these parts, on this podcast. But in general, people still seem to feel the need to preface any conversation, no matter how critical, by saying something like: Of course, it’s got huge potential for healthcare, or huge potential for education, or whatever else it is, as you say — and then go on to list some of the evidential, empirical harms. It seems like nonsense — why do we have that? That is, in fact, complicit in extending the range of harms that these things are likely to cause. All this talk of benefits — when it comes out of the mouth, or the Twitter feed, of someone like Altman, we know it for what it is: supremacist ramblings mixed with self-interested investment tactics and PR. But when it comes out of lots of other people’s mouths, it obscures a lot more of the reality of the thing.

Which is that, actually — I’d stand by this one in the corner of a boxing ring — the benefits are all speculative. All of the things that are talked about, how they’re going to make society better — I would say fantasies. Whereas the harm that these things are able to do is already evidenced, even in the minimal ways that real AI has been deployed in the world. One of the reasons why — to get to your question — I talk about an anti-fascist approach, for example, is simply that one of the principles of anti-fascism is that you don’t wait for fascism and then go: Look, we were right! Now we need to oppose it. Because then you’re in a really bad place to do that. At the very least, by analogy — even if you don’t think these technologies are fascistic in some way — it’s about not waiting for all the bad things to happen before you go: Yeah, all those things are bad, and now we need to really undo them. Once they become sedimented in our infrastructures — once we are not training any doctors anymore because we relied on these systems, or whatever else it is — that seems a very foolish position to be in.

Much more on the plus side, what writing the book taught me is to try to think about technologies in the same way as I would think about what’s sometimes called prefigurative politics. If we’re going to envision and imagine — and this is what we should be doing — envisioning or imagining better worlds, and I say that in the plural, better worlds, which can be multiple and different things in different places. If we’re going to imagine those, the first thing, obviously, is the need to imagine and believe that they are as possible as the horrendous reality that we’ve got; we then have to be totally consistent with that. To think about using tools of any kind, and computation in particular: what forms of computation might be consonant with that way of living, with those relationalities that we want? Care is something that I didn’t mention explicitly, but it came out in so much of what you were saying and I was echoing about the consequences of these systems: they’re such uncaring, callous, and cruel forms of relationality.

The values that actually promote care and mutual aid and solidarity — those are the values that I would like to see much more prevalent in society, values that I have witnessed for myself and experienced being part of. They offer huge potential benefit for facing the challenges that are coming; they’re particularly relevant to times of crisis. So we should be trying to conceive of modes of technology that are consistent with those. That does mean taking a step back: AI is a florid outcome of a particular techno-social matrix. I have had a couple of exchanges with Gary Marcus about this, because on the one hand he’s really ready to go out there and denounce AI as it is — but mainly so he can wind it back to the form of AI that he thinks is better, which is a hybrid with expert systems. I’m just not interested. Every form of AI that I’ve encountered — and I’ve got some historical understanding as well, not just contemporary — I see as having come out of either neoliberalism or the Cold War; it expresses and instantiates those values.

I’m not writing off computing. We’ve already got more computing than we need, and we should really roll backwards. I love technology, but I still hold out the possibility that there are ways of assembling technologies that are consistent with the values of the world that we want to live in, and that’s the exercise that we should undertake. At the moment, the way I’m thinking about that, as my little shorthand, is a shift from cyberpunk — which is telling us how the world looks when you actually put this stuff on the street, in the hands of an already existing mafia; that’s what cyberpunk looks like — towards solarpunk. The idea of what the world might look like if we allow ourselves to be led by pro-social values of care and solidarity, mutual aid and imagination and creativity and diversity and inclusion. If we allow ourselves to think about those societies, then we must allow ourselves to think about what technologies might support them.

I think the potential is there. I’m not talking about a 50-year research project here. If anything, it’s about recombining — taking apart and putting back together, in a very different way — some of the stuff that we already have. This is an immediate thing. The historical example that I put in the book is one from the 1970s, where a bunch of workers at the arms company Lucas Aerospace did exactly that. They said: We don’t want to be an arms company anymore, for a variety of reasons, so we’re going to use the existing skills we have. They were all structural workers, but because they were in a high-tech industry, they had engineering skills — practical engineering skills, design skills. And they designed early forms of hybrid power, solar cells, and so forth. At that time, it was the beginning of the environmental movement. They were affected by that, drew those ideas in, and turned them into real, practical working possibilities. Of course, they were completely crushed at that point, but the possibility didn’t die. That concept of social production didn’t go away. It’s still there, and it’s still to hand — as much as OpenAI’s nightmare visions are to hand. I think that’s what we need to reach for.

PM: The example of the Lucas Plan shows how we can imagine a different way of organizing things. We can deploy our productive capabilities in a different way. But the system limits what we can do there, and one of the functions of these tech billionaires, like the Sam Altmans, is to limit what we can imagine for the future — to get us to accept their visions for how things should work, how the systems should roll out, and what role we have in that, which is a very small one: to be more of an object rather than someone who does the imagining and has much agency in this.

DM: I just wanted to say: they’re also supported from behind by the apparently reasonable liberal establishment, who also position themselves as people acting on everyone else’s behalf, but as a lot more sensible than these raving lunatics. I think that’s also a problem. If there’s anything the Lucas Plan showed, it’s what that committee said: We can do it; ordinary people can do it. I think that’s the ultimate lesson for AI: no matter how complicated this stuff is, and how whizzy and sci-fi it seems to be, it’s nothing compared to the capacity for insight and the capacity for organization in ordinary people.

PM: I completely agree. It’s the perfect time to have a conversation like this, as there’s so much hype around AI and these tools — so much imagining of what this is going to mean for our future, rather than questioning: Should this be our future at all? Dan, thank you so much for taking the time to chat. I really appreciate it.

DM: Thank you. It’s been a lot of fun!
