AI Hype Distracted Us From Real Problems

Timnit Gebru

Notes

Paris Marx is joined by Timnit Gebru to discuss the past year in AI hype, how AI companies have shaped regulation, and tech’s relationship to Israel’s military campaign in Gaza.

Guest

Timnit Gebru is the founder and executive director of the Distributed AI Research Institute.

Support the show

Venture capitalists aren’t funding critical analysis of the tech industry — that’s why the show relies on listener support.

Become a supporter on Patreon to ensure the show can keep promoting critical tech perspectives. That will also get you access to the Discord chat, a shoutout on the show, some stickers, and more!

Transcript

Paris Marx: Timnit, welcome back to Tech Won’t Save Us.

Timnit Gebru: Thank you for having me. Was it last year that I was here? Because it feels like, I don't know, so much has happened since then.

PM: Time is hard to keep track of sometimes. You were on the show January of last year, as this AI hype was just taking off. ChatGPT came out end of November 2022. We were starting to see those kinds of stories in the media around how it was going to change everything. You came on the show and gave the listeners an introduction to what AI is, how this stuff works, what we should actually expect. And now we have had this kind of whole year of hype. I wonder, to start with more of a general question, what have you made of that past year and the way that AI has been treated and talked about over that period?

TG: What I’ve made of it is that the people pushing this technology, if we want to call it that, as an end-all, be-all — either the thing that will save everybody or, apparently, render all of us extinct, and I’m not exactly sure how — have run a really good campaign and have succeeded in infiltrating governments of all kinds (multilateral, EU, US, UN, whatever you want to call it), the media, federal organizations, institutes, schools, what have you. And that’s really what I’ve made of it, honestly.

PM: So you’re saying that DAIR [Distributed Artificial Intelligence Research Institute] is not running a campaign to make sure that we nuke the AI facilities and stuff like that, to protect us?

TG: I don’t know if we’d be able to write a Time op-ed; I don’t know if we would be invited to write a Time op-ed, asking anybody to nuke anything. I would think that the FBI would be at my door nuking me, before I can get a chance to say anything like that. We are not composed of people who are allowed to say things like that. So we’re not planning such a campaign anytime soon. But no, we have not done that.

PM: Okay, good to know. But you’ll tell me first?

TG: I’ll tell you first, if we need to nuke anything. I’ll let you know, so you can hide, I suppose [both laugh].

PM: I’ll get away from the data centers to make sure that I’m protected.

TG: Remember, according to them, we need a few people to ensure that civilization “still exists” when everything is being nuked. So some of us could be some of those people.

PM: Sounds good. But talking about all of that: you mention the campaign that these tech CEOs, AI CEOs, have waged over the past year to ensure that their narrative is the one we believe. What that really brings to mind is the campaign Sam Altman was on earlier this year, when he was basically on a world tour, talking to politicians all over the world to sell his vision of what AI is, how it should work, how it should be regulated. I feel like it was often presented in the media as this coming out, this almost altruistic effort to introduce the technology to the world. And then you got these reports, for example in Time Magazine, that he was lobbying on AI regulations, including the EU’s AI Act, when he was over there. And we saw that the final version of it aligned with what he wanted it to look like.

TG: I remember. It’s so interesting. Time is really weird, because I can’t believe all of that happened last year. They had these articles saying he’s the Oppenheimer of AI, that he created this thing and now he’s really worried about it. And at the same time, he’s saying: Regulate us, but not really like that. He appeared before the Senate, and that was also last year. He was talking about how this thing is so dangerous and how it needs to be regulated, how there needs to be some new, complicated structure to regulate it. And then the lawmakers were literally suggesting that he be the head of some organization that regulates organizations like his, which was sad to hear. And as he was doing that, doing those tours trying to supposedly convince all of the lawmakers how dangerous this thing he can’t help but build is (because he still has to build it), they were, behind closed doors, lobbying heavily against the actual regulations that were being proposed.

The media was talking about him as if he’s this altruistic person who’s so worried about this thing that he, once again, can’t help but build. In the UK, especially, they dissolved a whole advisory group that they had. I mean, Neil Lawrence has been one of the most outstanding machine learning researchers for a long time; he knows a lot about data and everything else. He was not buying into the hype, and the group was dissolved. And so this campaign to capture research direction, regulatory direction, media coverage, and even federal funding direction has, I would say, been successful in the last year, maybe because of that. I also would say that, for the first time, we’ve seen some media coverage discussing some of those motivations. I remember there was a Politico article talking about how the effective altruists (part of what Emile [Torres] and I are calling the TESCREAL bundle; I know you had him on your show to talk about that) have essentially been successful in capturing this conversation.

Nitasha Tiku had a wonderful article about the amount of money being put into pumping students into the field of AI to supposedly stop it from killing all of us. Because in order to stop AI from killing all of us, you need more people and more money being pumped into the field. Which is a very logical conclusion to reach. So on the one hand, they’ve been so successful, and it’s been very frustrating for me to watch. On the other hand, I’ve also seen more conversations about those actual ideologies, more stories coming out. And I hope to see more of those this year.

PM: Absolutely. There are so many things in that answer that I want to dig into over the course of this conversation. Just on the question of time, I find that really interesting, because we’ve been talking about how it’s wild that all of this happened in the past year. It feels like things that should have happened over a much larger stretch. And it feels to me like that shows the cyclical nature of this and how these cycles work in the Valley — where we can have this compressed interest in something like AI, and it’s going to change the world, and we all need to be so focused on it and worried about it and all this kind of stuff. And by next year, there will probably be something else getting that whole range of attention, and we’ll be like: AI? That was last year’s thing. Is that even still important?

TG: That’s so last year [both laugh]. Is anybody talking about crypto right now? I don’t even know — it’s like it never happened. I mean, some people are talking about it, but it’s not the thing that’s going to save the world anymore. I remember there was some VC or another person with some type of suggestion for unionizing and decentralizing this or that. We’re like: That’s not how it works, though; you need interpersonal relationships; you can only get rid of humans to a certain extent. So, I don’t know, I’m not seeing those conversations. And all the crypto grifters, and actually even the pharma grifters, have congregated around so-called AI. So, that tells you where the new grift is.

PM: Absolutely. You were talking about regulation there, and how these AI companies have so successfully captured the discussion around it, whether in the United States or in so many other parts of the world. Obviously, the White House has been making gestures towards AI and has been speaking to AI CEOs and some other people in the AI field. What do you make of the way the US government has approached AI regulation over the past year? Is it the type of thing you would want to see them doing if they were really taking this seriously? Or does it look like they’re basically following the line from the Sam Altmans of the world?

TG: The one organization that I’ll say has not been captured is the FTC, and I’ll stand behind whatever the FTC is saying. Look at how they were talking about a number of things that are within their jurisdiction — for example, deceptive practices, and the way these companies advertise how ChatGPT works. For instance, I don’t know if they’ve changed it now, but a few months ago I was looking at their README files. We talk about how their product is not really doing understanding or things like that, and maybe that could be up for debate, even though we have a specific position on it. But even if you take their premise seriously, they don’t even try to scope the language they use about what their product understands. They’re like: Oh, ChatGPT understands general language, this, that. These are the kinds of things they’re doing. And so the FTC came out and said: Listen, AI is not an exception; if your organization is engaged in deceptive practices, and you’re deceiving customers, that’s our jurisdiction.

That is very different from the kinds of things that Sam Altman was asking for. When he made his appearance, he was acting like this is some new uncharted territory that requires some sort of new governance structure that we need to create and figure out. Whereas organizations like the FTC are saying: No, that’s actually not true. You are a huge multinational organization, and we have jurisdiction over you. They’ve done a number of things like that which I really appreciate, and it makes me think that they’re not buying the hype. Now, contrast that with what Chuck Schumer was doing. I didn’t even want to be a part of that, to be honest. I’m sure I will never, ever get such an invitation after I spoke up about it, and that’s totally fine. I know they’ll never invite me, but maybe they’ll get the memo and invite a number of other people, and not do the thing that they did.

They literally had Elon Musk, and obviously all the different CEOs, there for their AI Insight Forum — I think that’s what they called it. And they had a couple of people as window dressing, so that we don’t criticize them for taking that approach. I got an email a day before, or something, kind of asking. I asked around, and it was because they wanted to do a backfill as a last-minute thing. I’m like: I don’t want to be a part of that! What would I accomplish by being a part of this? It’s already set, right? My voice is not really going to do anything except serve as a stamp of approval for what they’re doing.

PM: You mean you didn’t want to go meet Elon Musk and shake his hand?

TG: You know how I love him. You and I have that in common. That’s the only thing we have in common. I mean, I didn’t do a whole series on him like you did, so maybe my love doesn’t go that far. But I already met him in 2016 — I already regretted that. He came to Stanford, and it was just so ridiculous. He just kept on talking about stuff that made no sense. And I should have put two and two together. Because I asked him afterwards: Why is he worried about AI as an existential risk? What about climate change? And he said: Well, climate change is not going to kill every single human. It’s exactly the kind of argument that these existential risk people make. At that time, he was starting to talk a lot about that, and I hadn’t yet put two and two together with TESCREAL and all of that. But anyways, I don’t need more of that now [laughs].

So there’s that, and then there’s the White House Executive Order that just came out. And I have to tell you, I have not really looked at all of it, because it came out in the middle of a genocide that they are also endorsing and funding. So to me, it felt like being asked to celebrate it while we are seeing what they’re doing. And it just reframed everything I’m thinking about. How can I do that when I’m seeing the weapons, and the book “The Palestine Laboratory,” and all of this? We had just had an event to amplify the No Tech For Apartheid campaign. But it was in the middle of all that. And so I’m like: I cannot celebrate this right now. I know that there are some transparency requirements and other things in it that I appreciate. But, still, it was in the middle of that. How could I go and say congratulations and thank you, when this is what they’re doing?

PM: Absolutely, there are much bigger issues out there to deal with and to be looking at right at the moment this AI Executive Order is coming out. And I don’t want to make it seem like we’re just moving on from that; I do want to come back to what’s happening in Palestine, and the campaign that Israel is carrying out with American assistance, a bit later in our conversation. You talked about how these people in the AI industry are seeing what is going on in very different ways, or presenting it in particular ways to the public, in terms of the ideologies that are present in this industry. As you said, I talked to Emile about that at the end of last year. But I wanted to discuss it with you as well, because there are obviously these ideologies that have been present in Silicon Valley for a long time, that position technology as the way we’re going to solve all the problems in the world and all this kind of stuff.

And even as you were talking about Sam Altman there, and what he was saying at the hearings in government versus what he’s been saying out in the world, it seems like he has been using arguments from both sides of this. On the one hand, he’s saying: We need to be paying attention to AI; we need to be regulating it because it’s this massive threat to humanity. But at the same time, he is pushing for this acceleration of the rollout of AI into every facet of our lives, and arguing that it’s going to be our doctor and our teacher and an assistant for everybody, etc. So what do you make of how these people, the Altmans and the Andreessens of the world and these other folks, are positioning AI, and what that says about their views on it, but also, I think, their ideologies more generally?

TG: Oh, I forgot about Marc Andreessen and his manifesto — it’s just too much to cover! And now there’s this new e/acc [effective accelerationism] thing that I don’t even know if we can fit into our TESCREAL acronym [laughs]! But again, same shit, different day, I guess; same movie, different day. They want to be saviors; they want to be the ones who save humanity in one way or another. It’s a secular religion; you just want to believe in it. And of course, if you so strongly believe that you’re doing something good and really important, then it’s fine that you’re amassing all that wealth and money, because what you’re doing is saving humanity. What’s really interesting to me is that there are different factions fighting against each other, but to me, they’re all the same. There are the billionaires: Altman and Andreessen and all of the tech leaders, the Silicon Valley leaders. There are the “philosophers” (I don’t even know if you want to call them that): the EA people and Nick Bostrom and all these people. People like Max Tegmark — that’s a good example here.

If you can stomach it, look back at some of these Singularity Summit or Effective Altruism lectures. This is why I appreciate Emile: when we collaborate, Emile can go through all of those things, read them, and get the quotes and stuff. Because with every single line I read, I’m just so angry that I have to do it. There was this slide from Max Tegmark, I don’t remember if it was from 2015 or 2016, from the Effective Altruism conference, where literally the title of the slide is: If we don’t develop technology, we are doomed. That’s what he’s saying: if we do not develop technology, humanity is doomed. And he has this chart where climate change gets a question mark, as in: that’s not a definite doom scenario, right? And then after that, there’s ‘cosmocalypse,’ I think is what he called it. At the same time, he has his Future of Life Institute or whatever, Future of Humanity or whatever.

PM: I get them both confused.

TG: I know, it’s like Future of X. ‘Future’ is one of the words they’ve just ruined for me. And it’s funded by Elon Musk and all the best people. Around that time, they had a letter, a petition, about existential risks and stuff like that. The same kind of letter that they had just recently, I think in March, that was all over the news: We have to worry about the existential risks of AI. It’s the same dude who was saying: If we don’t develop technology, we are doomed. He got to make the money saying we have to develop the technology, and now he gets to make the money saying that it’s an existential risk to humanity. So, they’re just circling money around themselves. That’s what they’re doing. Even Geoff Hinton — the godfather of deep learning — has started to say that he’s so worried about the existential risks of ChatGPT. If you look at his tweets a few months prior to that, he was talking about how ChatGPT is the world’s butterfly: it took the world’s data, and then it turned into a butterfly. A couple of months later, he’s apparently super worried about existential risks, and he’s making the press rounds.

The thing is that they get to fool us both times. That’s the thing that is so upsetting about it! They get to take all the money to tell us how it’s the best thing, and then take all the money saying they are also the solution. They’re the problem and they’re the solution. And this is a sign of amassing power and privilege. When we talk about the real risks of AI versus whatever they’re thinking about, in my opinion, it’s because they can’t fathom any of the real, mundane things affecting human beings ever getting to them. It’s kind of like what you were talking about in your book, with the different solutions all the billionaires come up with, like flying cars and Hyperloop or whatever; it’s not going to work, but that’s the thing they’re really fantasizing about and thinking about. It’s not what the “masses” experience. So, it’s really nothing new; it’s just a new technology to dress up these ideologies.

PM: It’s fascinating to hear you talk about that and bring up Geoffrey Hinton, again, because I remember in the interviews with him, he was explicitly asked: What about these more real risks that people like you were drawing attention to, like how it’s being used against people in the here and now? And he was very explicitly dismissing that and saying: No, it’s this big existential risk that is the problem, not the real things that actually affect real people that are happening right now.

TG: Specifically, I remember they asked him: So what about the stuff she said, again? And he was like: While discrimination is an issue, hypothetically, I don’t find it as serious as the idea that these things can be super intelligent, and this and that, and such and such. You know what I mean?

PM: It’s so ridiculous. It’s so frustrating!

TG: I know!

PM: Especially because these are the types of people that the media is more likely to listen to, and thus these are the perspectives that the public hears more and more, and that informs the whole conversation we end up having. We’re being totally misled as to what we should be discussing around AI, what we should be concerned about, whether we should even be talking about it in this way. But the people in the industry have that much influence, because they have all these people like the Geoffrey Hintons who will repeat that stuff. And then the media just seems to fall for it, or seems not to really be interested in the real story. And again, there are plenty of journalists out there who have written great stories about AI and who have challenged this perspective, but the general framing that gets presented is what the Sam Altmans and the Geoffrey Hintons are saying, not what the Timnit Gebrus and the Emily Benders, and people like you, are saying.

TG: It’s really interesting, because people forget that I am a technologist. I studied engineering and math and science. I wanted to be a scientist; I didn’t want to go around telling people: No, don’t do this. But it’s taken away the joy of actually thinking about technology, because it’s like this nightmare that we’re living in. And so if we come across as just the naysayers, or whatever, it is because we’re not getting to put forward our visions and because we have to fight them. That’s the only reason we are doing these things, not because, by our very nature, that’s what we were looking forward to, or anything like that. And when you’re talking about the media, the influence is huge. It’s not just in the US; it is worldwide. When I have interviews in my native languages — I have interviews in Tigrinya, or in Amharic — they talk to me about AI. And they ask: How do you think it will impact the African continent? What are the good things that it can do? Is it gonna kill us all? Is it an existential risk? And what about labor?

I’m thinking: Think about the labor context in those countries versus what these people are talking about. I grew up in a country where most people are farming using exactly the same methodology as thousands of years ago, with cows; they don’t even have tractors, let alone autonomous this and that. Labor is much cheaper than goods, for example, because a lot of the goods are stolen from countries on the continent. So, people are not even encouraged to do a contextual analysis of what these people are saying, because the megaphone of the media is so loud. Or when labor is impacted, it’s in the way people are exploited, like the workers in Kenya. It’s so interesting how Sam Altman, I don’t remember if you saw some of his tweets, was talking about how there’s such a concentration of talent at OpenAI, because there are only a few hundred employees, compared to all of these other people in all of these other organizations. And, of course, he was not counting the millions of people whose data is stolen, or the exploited and traumatized workers that they are benefiting from and not paying.

So, there’s very little pushback, and the pushback does not get as loud coverage. That’s also part of it, because I think they don’t talk about movements. They talk about individuals, and so they want gods. A lot of times they build up these people, and then when that bubble bursts, they do a post-mortem and wonder: How did this happen? Remember Elizabeth Holmes — how many covers was she on? And anybody who said anything was a naysayer. And now they’re just analyzing: How did this happen? Well, who built up Elon Musk? You did, right? So what are you expecting, now that he has amassed all this power? A lot of people are saying he has more power than multiple governments combined, all of that. Well, who let that happen? The media had a lot to do with it.

PM: Absolutely. When you talk there about how the media is focused on the individual, rather than looking at a movement or more of a collective: did you have much experience with that when the media spotlight was on you after the Google firing? What was your experience in that moment, of how the media treated it, and of how it treats individuals in tech in general?

TG: I definitely noticed that. They wanted to talk about me as an individual. And I don’t want to discount the stuff that I went through as an individual, or the stuff that I am doing as an individual, but I also wanted to make sure that they knew, for example, that the reason all of this was news in the first place was because I had a lot of people supporting me. And there was a strategy for how to do that. There were people working in the background who have never come out publicly, who still work there and can’t just quit. There were collections of people writing petitions and statements. There were just so many people doing grunt work in the background whose job wasn’t to be a public face. Not everybody can be a public face. So, it’s more difficult for the media to see things that way. They want to create a godlike figure. Once they have somebody who they think is a godlike figure, they just elevate them. And then they get surprised when these people have so much power, like Elon Musk.

PM: Absolutely. I feel like that became so clear recently, when we saw this drama at OpenAI around Sam Altman, with him being deposed as CEO, taken out by the board. And then you had these few days of back and forth, with the media trying to figure out what was happening, and all these people in tech and on Twitter pushing for Sam Altman to be returned to the post. Meanwhile, we don’t even really know why he was removed in the first place. Certainly, there were some rumors around people at the company not agreeing with the direction that he was taking it. There has been some reporting, afterward, that he was not a great boss, that there were issues with how he managed the workplace, the decisions that he was making, all those sorts of things. I wonder what you made of that whole episode and how it was treated, and how Sam Altman was ultimately restored to this position, seemingly with any guardrails that might have been there before now taken off his leadership?

TG: I remember when that story came out, none of us at DAIR could make sense of it. We were like: Whoa, what? Our first reaction was: for a board to make a statement like that and then immediately remove him, what happened? What is coming? That was my first reaction, because I just didn’t think that they would, as a company, make a public announcement like that and remove him immediately. I was wondering: Are they about to be sued? That was what was running through my head. And then some people said: Well, you know, it’s Annie Altman, his sister, and her allegations. I’m like: Do you guys know the tech industry? You really believe that they care about anything to do with sexual assault or harassment? Do you understand that they silence us? That they would punish us? That had nothing to do with it, I’m pretty sure. That was my first thought. But, again, it was in the middle of the whole Gaza thing, so I wasn’t really paying that much attention. But then I remember OpenAI explicitly saying that members of their board are not effective altruists.

Then nobody was really checking that, which was ridiculous, because they were. I mean, that was what they were. The media was asking me about how one of the board members apparently wrote something that was critical of OpenAI, and Sam Altman wanted to suppress it. People were starting to connect that with my story at Google. And I was like: I don’t know about that. I didn’t know the person that well, but just from a quick scan, I’m like: This is a very effective altruist-like person, the whole US-China thing. I don’t think it’s that similar. So, at some point, I was like: Maybe it has something to do with effective altruism and the board members being mad. That’s what I thought. And then, of course, the end result was that all the women got removed, and then people were asking me if I would consider being on that board, which I thought was the funniest thing!

I have said, literally, since the inception of this company: if my only choices were to work with them somehow or to just leave the field entirely, it would unequivocally be the second option. I can’t pinpoint exactly why, but I just really disliked that company from day one: the whole savior mentality and the way that the media was selling them as a nonprofit. Then the employees were asked to write in support of him. And then there’s the whole Ilya-Sam Altman thing, which was also weird. So my takeaway from the whole thing was that the board didn’t seem very mature to start with, because if you’re going to make public announcements like that, maybe you should discuss them first. It just didn’t seem very mature. But then it seems like there are no guardrails around OpenAI, except for Sam Altman. He is the company, and the company is him; that’s how it’s being run right now. And then the end result is for people like Larry Summers to be part of the board, which is excellent. You not only got rid of the women; now you’ve added the guy who talked about how women can’t do STEM, and things like that. So, that’s my takeaway of what happened [laughs].

PM: You think about how there are often arguments about how the AIs are not discriminatory, or are not being influenced by the culture that’s around them. And it’s like: Well, you’re setting up this whole corporate culture that is sidelining women and bringing in people like Larry Summers, and the systems are very demonstrably discriminatory if you feed them certain prompts and things like that.

TG: Also, honestly, even when they had women, it was the kind of representation politics thing; they could have a board of all women, and with the way OpenAI is run, I would still think that they don’t care about women. But now they don’t even care about the optics of it, let alone actually care.

PM: Absolutely. Since we’re on the question of OpenAI, there are a few things that I wanted to dig into with regard to it. You talked earlier about the privacy and the data that’s being collected by these systems. Of course, there has been a lot of debate and discussion recently around all of the data that they use to train models, like the one used for ChatGPT, and there have been lawsuits around copyright. Obviously, there was a lot of discussion in the past year when the actors’ and the writers’ unions went on strike and talked a lot about AI in their contract negotiations and what it might mean for their professions and their industry.

And then just recently, we had OpenAI make a submission to the House of Lords Communications and Digital Committee, where they basically argued that they should be able to train their models on copyrighted material and not have to pay for it, because otherwise they would not be able to “train today’s leading AI models without using copyrighted materials,” and that limiting training data to just public domain data would not result in a high-quality model. I wonder what you make of these discussions, the growing debates that people are having around copyright, and what relationship AI models and AI training should have to it? Because it does seem like, especially going into this year, it’s going to be one of these big fights that plays out, especially as the New York Times is suing over this and other things like that.

TG: So, we recently wrote a paper with a number of artists called “AI Art and its Impact on Artists.” It was great — if I may say so myself [laughs]! It was a great experience for me working on that paper, because it involved a number of artists whose jobs are on the line because of this — it is not some hypothetical, in-the-future kind of thing — along with some legal scholars and philosophers and machine learning people. We also wrote another article, I think in 2023 (don’t quote me on the time), called “The Exploited Workers Behind AI,” which synthesized a bunch of research and a bunch of stuff we and a number of other people have done. The point we were making there was that labor is the key issue here. Whether we’re talking about discriminatory AI systems, or face recognition, or autonomous weaponry, or generative AI like this “AI art,” if they were not able to exploit labor, their market calculations would say that this is not going to work. And so they wouldn’t be so quick to go to market with this stuff.

That’s exactly why the OpenAI people are saying: What do you mean we can’t steal everybody else’s work? Then we literally cannot make our product! And we’re just like: Exactly, that’s kind of what we’re telling you. You’re profiting off of everybody else’s work. That’s my number one takeaway. But the second one is that you get to see the kinds of arguments people in AI make to defend this practice. And one of them is what we call anthropomorphizing: talking about these systems as if they are their own thing. What we were talking about earlier, about existential risks, all fits into this. Because if you push this agenda to people, to the media, to regulators — that these are systems that have their own mind, and who knows what they’re going to do — then we’re not thinking about copyright. We’re not thinking about things on Earth: regulating companies like OpenAI, theft, labor exploitation. That’s not what we’re thinking about.

We’re distracted by thinking about: Can this machine be ethical? Can this machine be inspired by the data, just like humans are inspired? That’s how they talk about it. They say: No, this is not theft, because it’s like when human artists are learning: they look at other artists. These are the kinds of arguments they make. And no, human artists are not just copying; they’re not just combining existing data and compositing it and then spitting something out. They’re putting in their own experiences, coming up with new styles, doing all sorts of stuff. These people want us to believe that art is nothing more than combining a bunch of stuff that already exists and spitting it out. What about when people come up with completely new stuff for the first time? They talk about art as if they actually know what they’re talking about.

So, it also makes me sad for them, because I feel like the depth of humanity is so small for them, if this is how they think about humans. But this is how the discourse about existential risks, and all the ideologies that we were talking about, fits into this narrative of theirs: they don’t want us to think that they are actually stealing from people and profiting from it, which is what they’re doing. They want us to think that they’re creating some magical being that can solve the world’s problems if we just let them do their thing — and/or kill us all! I don’t know which one; it could go either way. They want to keep us high up on that discourse, so that we’re not really looking at the actual practices that they are engaging in, which are not complicated. We all understand what theft is. We all understand what big corporations do, and we all understand what kinds of laws we want to guard against those things.

PM: It’s not surprising at all to see these companies trying to get past copyright regulation when that helps them, and defending it when it doesn’t. I think it’s so interesting to hear what you say about the people arguing that it’s just like how a human reads an article, or reads a book, or looks at something, and that inspires them to make something else, because I see that argument so many times from people who want to defend this. And it’s like: No, these systems do not have brains like humans; they don’t think like humans; they don’t approach these things like humans; it works completely differently. And you can’t compare these things, because these computers are just making copies and then developing these kinds of models.

But I feel like, on the copyright question, there has been a lot of legitimate criticism, for decades, of the way the copyright system is constructed. At the same time, there is a legitimate use of copyright right now, to defend media publications, and to defend artists and their work, against the attempt by these companies to use all of their work for free to inform their business models and stuff like that. And I don’t think those things are actually in conflict at all, or at least they shouldn’t be: wanting reform of copyright, and also wanting to protect artists and media publications and whatnot.

TG: That angle was very interesting for me to hear from people; it’d be like: Oh, you’re a landlord, you want copyright, stuff like that. And it’s really interesting, like you were saying, to see how companies like OpenAI say they have to use copyrighted material. But then in their terms of service — I don’t remember if this is in OpenAI’s, but a whole bunch of these generative AI organizations have terms of service like this — they say that you can’t use their APIs to build competitors, or as input to other generative AI systems, or something like that. I’m like: Okay, so you want to have these restrictions, but you don’t want to honor anybody else’s restrictions. Also, on the reason any of us are even talking about copyright: I’m not a copyright expert, I’m not a legal expert, but I’m just listening to what the artists whose jobs are on the line are saying. They are trying to exist in the current system that we have and barely make a living. And what’s the point, also, for all of us, if we can’t even interact with art that is created by humans? It’s an expression; it’s communication.

So, copyright is the only thing currently helping them navigate the situation. If there were something else, I’m pretty sure they’d be happy for that other thing to exist. But we wrote in our paper that copyright is also not very well equipped right now to protect these artists, because the courts take forever to make these determinations. You mentioned a number of lawsuits — Karla Ortiz is one of the artists who is a plaintiff in one of them — and imagine the time and resources it takes to go up against these large organizations. It’s not really a sustainable way forward, I think. So, nobody’s saying that everybody loves copyright, and nobody’s trying to protect Disney, or the ridiculous things they do, like not letting you sing “Happy Birthday” or something like that. We’re talking about the artists. It’s already difficult to exist as an artist. So why are we trying to take away the few protections that they have?

PM: Exactly. People are using the tools that are available to them, even if they are imperfect tools, to try to defend what little power they have and to push back on these things. You mentioned earlier that it was difficult to engage with these things at a moment when we’re seeing a genocide being committed against people in Gaza, and Palestinians more generally. I wanted to turn to that, because when we’re talking about OpenAI, this discussion is very relevant to what is going on when we talk about AI as well. We talked about Sam Altman, of course, who is the leader of the company and probably the most influential voice in AI right now. But OpenAI’s head of research platform, Tal Broda, has actually been posting quite a lot about what is going on in Gaza. He’s posted tweets such as: “More, no mercy. IDF don’t stop,” while quote-tweeting images of neighborhoods turned to rubble in Gaza. He’s tweeted: “Don’t worry about killing civilians worry about us,” and “There is no Palestine, there never was and never will be.” I wonder what these sorts of tweets, and this approach to the horrific situation going on in Gaza right now, being carried out by the Israeli military and government, tell us about some of these people in the AI industry and the ideologies they hold, to be able to say things like that or see this in this light?

TG: What it tells me, first of all, is how emboldened you have to be to say something like that. Just the fact that you feel it’s okay to constantly say that means you’ve never had any pushback for saying those kinds of words. And honestly, I have to tell you, I’m not surprised at all by those tweets. I am very surprised that there has been some amount of pushback, and that Sam Altman said something about Palestinians, which is the bare minimum, but I’ve never seen that in the tech world.

PM: Just to say, Sam Altman tweeted on January 4th: “Muslim and Arab (especially Palestinian) colleagues in the tech community i’ve spoken with feel uncomfortable speaking about their recent experiences, often out of fear of retaliation and damaged career prospects. Our industry should be united in our support of these colleagues; it is an atrocious time. I continue to hope for a real and lasting peace, and that in the meantime we can treat each other with empathy.” Now, this is a statement that he put out; as far as I know, there hasn’t been any action against Tal Broda for the types of things he has been saying, which I’m sure make a lot of people who are Palestinian — and even who aren’t Palestinian — in Silicon Valley and the tech industry feel very uncomfortable. And I’ve seen a number of media stories suggesting that there is a silence on this in the tech industry, because people are scared of speaking out, given the degree of support that exists for what the Israeli military is doing. But sorry, please continue.

TG: Silence is one thing, but we need to talk about the tech industry’s role in this whole thing, which is pivotal. So while we’re talking about this, I want to mention the No Tech For Apartheid movement that was created by Google and Amazon workers. Any tech worker can go to notechforapartheid.com. They had a mass call yesterday, and they’ve been protesting, and they’re modeling it after the anti-apartheid activism against South African apartheid. So to say that the tech industry is silent — if it were just silence, that would be one thing, but they are actively involved. There are IDF reservists working at these large tech companies. There are actual members engaged in these horrific acts who are currently employed, and they have the full support of these organizations. These organizations are supplying the Israeli military with technological support. And we know that a lot of the startup scene out of Israel comes out of the military intelligence arm and is exported across the world for surveillance and suppression, and the VC world is very intertwined with that.

So the tech industry is absolutely pivotal to this. And because of that, it is career suicide to talk about it. For the last two decades, let’s say, that I’ve been in this space, or even when I was in school, it has been the scariest thing to talk about — supposedly the scariest thing to talk about. Let me tell you, it was the same even when I started talking about the genocide in Tigray. And I want to talk about that, because it has been heart-wrenching. We have teammates who have been experiencing this genocide — one million people dead, people currently starving, over 100,000 women raped. Just think about that, out of a population of maybe six million people. This is what we’re dealing with. With that, we see the social media companies and how they just don’t care, because they don’t have to do anything; nobody cares. The UN does not care, nobody does. So, there it’s more a matter of profiting and ignoring. In this particular case, they’re actively retaliating against you if you say anything.

I remember Tigrayans even telling me: Hey, I know you spoke up, but be careful here; be careful right now. Because that’s what we’ve been told and what we’ve seen happen to anyone saying anything. So, going back to Tal Broda and his horrific, absolutely horrific posts: how can you have anyone at any company publicly saying things like that, thinking that it’s okay? Even with all that, I was actually surprised to see a whole bunch of people pointing it out and asking for him to be fired, which he should be. Really, the baseline is: you should not have genocidal people like that working at your company or any company. But that’s been the norm for so many years, and we know the kind of repression and retaliation people face, whether they’re protesters or tech workers. Because of that, I was actually surprised to see this pushback. Unfortunately, it’s taking a genocide of these proportions for us to see it. But the tech world is absolutely central and pivotal to the Israeli apartheid and occupation, absolutely pivotal.

PM: It’s an essential thing to discuss. I’ve had Antony Loewenstein, author of “The Palestine Laboratory,” on the show in the past to talk about this, and of course Marwa Fatafta was on last year to talk about how tech works in this conflict as well, and about what was happening on the ground at the moment we were speaking. I feel like at a time when we discuss artificial intelligence, and when it is ever-present in the discourses around tech, it’s impossible to ignore how it is being used in a campaign like the one Israel is carrying out in Gaza. Not just because we know that Israel has been using AI weapons and AI tools for a long time, which I discussed with Antony in our conversation. But on top of that, as this AI hype year has been happening, Marc Andreessen, I know, has written a number of times about how AI would make war less common or less deadly. Meanwhile, we have the reports, for example from +972 Magazine, about the Gospel AI system that we know Israel is using for targeting, and other reports about how they’re supposedly using this really precise system, when it’s actually ensuring that they can find more targets throughout Gaza to hit, which is leading to many more civilian deaths. I wonder what your reflections on the AI component of this are?

TG: There is a 2022 report from Tech Inquiry about how more and more tech companies are becoming military contractors. I think they were looking at least at Microsoft, and at how US and UK government purchases from big tech companies are dominated by deals with military, intelligence, and law enforcement agencies. Jack Poulson, the person who started that organization, also left Google over other concerns. So, there’s that. There is the fact that artificial intelligence was born out of the military; they’re the ones who wanted these things. There’s a book called “The Birth of Computer Vision,” which I haven’t read yet but am hoping to, about how computer vision specifically was birthed out of military interests, autonomous weapons. This is just the history of AI.

For Marc Andreessen to talk about AI making war less deadly — even when you look at drone warfare, what these things do is inflict less harm on the entity doing the drone attacks and more harm on the people experiencing them, like the kids who talk about how they’re traumatized by blue skies, because that’s when the drones come. You’re not going to see the same drones coming into New York City; then all hell would break loose on whoever did it. So that’s what these things do, more and more. The further you are from the impacts of war, the more the entity that has all these systems is able to inflict as much pain as possible without facing the impacts. And so that’s why I’m extremely worried about the increasing movement towards that, even though I know the field was always going in that direction. And Silicon Valley is going more and more in that direction. A whole bunch of people have been writing about how Silicon Valley needs to collaborate with the American government, and the military, and things like that. So I’m definitely not looking forward to more of that happening.

PM: The Eric Schmidts and Peter Thiels of the world are pushing that. But it goes far beyond them as well: as you say, Google, Microsoft, Amazon, they’re all contracting with the US military, but also with the Israeli military, on cloud and things like this. Obviously, SpaceX is a big military contractor; they just launched a US spy satellite, or military satellite, into orbit. And then we have all these other military companies like Anduril, founded by Palmer Luckey, and obviously Palantir, with Peter Thiel. There are all these companies increasingly focused on getting these contracts from the military. And not only does the world seem to be moving toward more conflict, but they seem incentivized to want to see that happen, because it would be good for the bottom line.

TG: The military budget is just unlimited; it’s a bottomless pit. I heard that sometimes they have to buy things just to say that they’re spending the money, so that their budget is not cut. And I just wonder: what if we lived in a world where the bottomless pit was the budget for housing or food? It does not make any sense! So it makes sense that if you’re in Silicon Valley and you’re seeing this bottomless pit of a budget, that’s what you want to get a part of. And of course, you delude yourself into believing that it’s also the right thing to do. And it’s nothing new! That book “Palo Alto,” which I see in your background, I’m also reading. It’s just nothing new, because, again, the industry has been completely intertwined with the military. But I feel like some of the tech companies, the newer ones I mean, like Facebook, Google, or Apple, were trying to act like they were more idealistic, like they’re not like that. I feel like now there’s probably a new phase where that’s not going to be the case.

PM: Absolutely, we can see exactly what they are, and it’s clear that they’re just as bad as any of the rest of them. It’s difficult to pivot away from that conversation to wrap up everything that we’ve been talking about, but just to close off this broader conversation that we’ve been having, we’re now just about a year into this wave of AI hype. I wonder where you see it going from here? Do you think that this hype is already starting to decline for this cycle of it? And how has seeing what has happened over the past year shaped the kind of work that you’re doing at DAIR, and what your team is thinking about?

TG: Honestly, I’m not exactly sure where the hype is, whether it’s at its height or in decline; it’s unclear to me. But I cannot imagine the promises being delivered, or the money they predict raining down actually being delivered, so I’m anticipating that it won’t be. I don’t know how long they’re going to keep this going. Given that, to your question about what we’re doing at DAIR: I think I said this last time, too, that we keep being in this cycle of paying attention to what they’re doing and saying no. We kind of need a different thing; we can’t just continue to do that. And so this year, what we’re really thinking about is: How do we put forward our vision for what we need to do? One example of that is that a bunch of organizations, like Lelapa AI, Ghana NLP [Ghana Natural Language Processing], and Lesan AI, are thinking about how to create some sort of federation of small organizations that can maybe have clients, while not outcompeting each other and trying not to monopolize things.

Because the idea, at least for me, is that I want to push back on the idea that you can have one company for everything, located someplace, that makes all the money; one model for everything, like the whole all-knowing AGI thing that they’re doing. And if I’m going to say that, I think we should show that there are alternatives. So we just wrote a workshop paper — which I’m hoping will turn into a longer peer-reviewed paper — about showing, as just one example, how these smaller organizations’ machine translation models outperform those of some of the larger organizations who say they have one model for everything. Because these smaller organizations care about certain languages that the others don’t, and they know the context. And so the idea is: what if these smaller organizations can band together and have some sort of a market share?

A client can come and say: Hey, I need a lot of language coverage. And maybe that client would otherwise want to go to some big corporation, because they don’t want to deal with 100 different, or 20 different, organizations. So we’re thinking through what that might look like. That’s an example of how we can show a different way forward. And given that labor is a central component of all the issues we talk about in AI, we have a lot of projects there: Turkopticon is an advocacy group for Amazon Mechanical Turk workers, and we have collaborations with some of the Kenyan workers who you read about in those other articles. So, it’s a combination of trying to empower the people who we don’t think are currently being empowered, and also the organizations, because we are living in an ecosystem with other organizations. We want to support the other organizations that are advancing a future that we believe in.

PM: I think that makes a lot of sense. With the model that we have right now, it’s like one major American tech company has to dominate what’s going on, and of course, it almost always has to be in the US. But a different model can have these kinds of smaller groups that have their expertise in different parts of the world, that care about what happens in their parts of the world. And that leads to a much richer ability to think about how these technologies are going to affect what’s happening there. So I think that sounds fantastic.

TG: I’ll keep you posted. We’re excited about it. There are still specifics we’re working out. But the idea is to help people survive and be sustainable if they don’t want to be monopolies and take over the world. How do you support the organizations that are just trying to do their thing and be profitable, but not take over the world?

PM: I’ll be looking forward to updates on that, or whether you decide to change gears and just nuke some data centers or whatever.

TG: [laughs] Maybe nuking some data centers, maybe creating this thing, who knows?

PM: One or the other [laughs]! Timnit, always great to speak with you. Thanks so much for coming back on the show.

TG: It’s always wonderful to talk to you — we always cover so much ground. So, thank you for having me and congratulations on your show, again. It’s such a great show and I look forward to more episodes in 2024.

PM: Thank you so much.
