Don’t Fall for the AI Hype
Timnit Gebru
Notes
Paris Marx is joined by Timnit Gebru to discuss the misleading framings of artificial intelligence, her experience of getting fired by Google in a very public way, and why we need to avoid getting distracted by all the hype around ChatGPT and AI image tools.
Guest
Timnit Gebru is the founder and executive director of the Distributed AI Research Institute and former co-lead of the Ethical AI research team at Google. You can follow her on Twitter at @timnitGebru.
Support the show
Venture capitalists aren’t funding critical analysis of the tech industry — that’s why the show relies on listener support.
Become a supporter on Patreon to ensure the show can keep promoting critical tech perspectives. That will also get you access to the Discord chat, a shoutout on the show, some stickers, and more!
Links
- Please participate in our listener survey this month to give us a better idea of what you think of the show: https://forms.gle/xayiT7DQJn56p62x7
- Timnit wrote about the exploited labor behind AI tools and how effective altruism is pushing a harmful idea of AI ethics.
- Karen Hao broke down the details of the paper that got Timnit fired from Google.
- Emily Tucker wrote an article called “Artifice and Intelligence.”
- In 2016, ProPublica published an article about technology being used to “predict” future criminals that was biased against Black people.
- In 2015, Google Photos classified Black women as “gorillas.” In 2018, it still hadn’t really been fixed.
- Artists have been protesting AI image generators that are trained on their work and threaten their livelihoods.
- OpenAI used Kenyan workers paid less than $2 an hour to try to make ChatGPT less toxic.
- Zachary Loeb described ELIZA in his article about Joseph Weizenbaum’s work and legacy.
Transcript
Paris Marx: Timnit, welcome to Tech Won’t Save Us!
Timnit Gebru: Thank you for having me. I’m a huge fan of your work and am very much looking forward to reading your book. But given all that’s on our plate, it’s really hard to figure out how to do the things we need to be doing, like reading books.
PM: I completely understand that. I have a million books on my reading list. I get sent books all the time, and I’m like: where am I going to find the time for all these? So, no pressure.
TG: I have bought it.
PM: I appreciate that. When you get around to it, though, you can certainly let me know what you thought of it.
TG: Definitely. Looking forward to it.
PM: I have obviously been following you and your work for a while as well. A lot of people will know you and when you came to their attention. Admittedly, I didn’t know your name before you were fired from Google, but many more people got to know you and your work at that time. One upside to that whole situation, I guess. I’m really excited to talk to you today about artificial intelligence, all the hype that has been around lately, how we should actually be thinking about these technologies, and what they might actually mean for us going forward and into the future. Especially with the failure of, or loss of interest in, Web3 and crypto and the metaverse, it seems that AI is going to be one of the technologies that the industry refocuses on and tries to build hype around into the future. As I said, we haven’t talked very much about AI and machine learning on the show before. They’ve certainly come up, but I was hoping that you could give us a general understanding of what it actually means. What is AI, artificial intelligence? What does that refer to? Is machine learning the same thing? Is it different? How should we understand these terms and what they actually mean?
TG: So I want to start with the fact that I got my PhD from a lab called the AI Lab at Stanford. I have my PhD in Electrical Engineering, but that lab is in the Computer Science department. I find myself asking the questions that you’re asking. I will just start with that: as a researcher, I had never described myself as an “AI researcher” until that became the mainstream term and got super hyped up. I remember that about 10 years ago is when the hype started and everything got rebranded as “AI.” I first and foremost think of it as a brand, a rebranding, more of a marketing term. There was an article by Emily Tucker, from the Center on Privacy & Technology at Georgetown Law, titled “Artifice and Intelligence,” where she talks about how it’s a misnomer and we should not call it artificial intelligence. The way I look at it, artificial intelligence is the big tent, a big field, with subsets of things inside that field. I would say my understanding of the field is that you try to create machines or things that can do more than what’s been programmed into them.
This could mean natural language processing, where people are more interested in analyzing text and speech, so they can create things such as speech-to-text transcription in an automated way, or automated machine translation from one language to another, etc. It could be computer vision techniques, where you look at what’s in an image and analyze that image, more so than just taking it. Maybe you can do some statistics, maybe you can say: There’s a chair in that image; there’s a hat, a person, a house, whatever. That would be computer vision. At Stanford, these are all subsets under the AI Lab. There’s robotics, where people are working on robots. Then machine learning is more of a technique, one that people often try to use for many of these various things. You could use machine learning techniques in natural language processing, you can use machine learning techniques in computer vision, etc.
Nowadays, because a specific type of machine learning technique is almost everywhere, people use many of these things interchangeably — they use AI, machine learning, and deep learning interchangeably. It wasn’t always like that, actually. I remember in 2012 Yann LeCun wrote an open letter to the academic community, the computer vision community, being like: You are not accepting my papers; you guys don’t like us, the deep learning people; we’re going to persist. Now, they’re the dominant paradigm. It’s as if there was no other paradigm. So they’re not all the same thing, but they’re often used interchangeably.
PM: A really big tent, I guess, with a lot of things going on inside it. You describe it as a term we hear more and more often, and as a marketing term. Do you also feel it’s misleading us and making us believe that these technologies can do things that they can’t? When we talk about smartwatches and smart gadgets, it’s a particular term that’s created by PR people at these companies to make us believe that the technology is really smart. It has this term associated with it that we are obviously meant to think is very important, instead of being called something that wouldn’t give it those particular connotations. Is the term artificial intelligence making us believe that these technologies operate in ways they actually don’t?
TG: Absolutely, and there are a bunch of things going on. I don’t only blame the corporations or other entities that might have a vested interest in marketing terms. I also blame researchers, who definitely fed into the hype. First is the naming of areas of study: aspirational naming. It doesn’t mean that things are the way they are, but it’s aspirational. For example, neural networks is one. For some people the brain might be an inspiration, but it doesn’t mean these networks work similarly to the brain, so some neuroscientists are like: Why are you coming to our conferences saying this thing is like the brain? But that’s aspirational. Computer vision can be aspirational too: what does it mean for a computer to have vision? Natural language processing makes sense; that is a term that makes sense. But people are not going to be impressed by that term. So there’s that one thing, the aspirational naming, which is completely confusing and full of hype. Then there is the conflation of a bunch of things. Whereas I look at AI as a field of study with a bunch of subsets, as I told you, there are a bunch of people whose interest, whose goal, is to build what’s called Artificial General Intelligence (AGI). I personally don’t even know what that is — it seems like a God to me.
PM: It’s going to be here in a few years, though, don’t you know?
TG: Are we already there? I’m not exactly sure. It’s this all-purpose, all-knowing God, where if you build it the right way, then you build a God, and if you build it the wrong way, then you build the devil. It’s the single existential risk for humanity, but we’re building it anyway, so I’m lost. Because that segment of the population is the loudest, and has the most money right now, they also influence any type of AI discourse, because they try to make it look as if everything they’re building is that — AGI, or something with AGI characteristics. That’s another hype angle. When we go to the corporations and other vested entities, another way this hype helps them is that if you make people think you’re building something that has its own agency — who knows what it’s going to do, and it’s so capable of doing this and that — people are more focused on the thing itself, because they think it has more agency than you, the builder of the system, the deployer of the system, or the entities and regulation around it. Now, it’s not your responsibility; who knows how the machine will behave?
So everybody sort of starts thinking like that, rather than holding the relevant entities accountable, because it’s an artifact that people are building and deploying. At both ends of the spectrum you have people building it and deploying it, and people being harmed by it or benefiting from it. When you move the conversation away from that, it makes it easy to evade responsibility and accountability. Then, also, we have Hollywood, which really has not been helping. Anytime people hear the term AI, the number one thing that comes to mind is either Terminator or some other Hollywood thing, and again, that’s what’s in the public consciousness. None of these things are helping us have a real understanding of what’s real and what’s hype.
PM: Those are really good points, especially about the makers of the technologies being able to use this to push the blame, or the accountability, somewhere else: we couldn’t have known it was going to do this when we developed it and trained it on these particular datasets and what have you. I want to keep exploring this thread, but I want to detour a little bit to give us another path into it. I want to talk a bit about your career as well. Obviously, I’m interested: there are many different routes you could have taken into technology, into the tech industry, different things that you could have explored. Why was this area of technology of interest to you when you were doing your studies and your PhD and then working in the industry?
TG: It’s really that all I’ve ever wanted to be is an engineer, a scientist. I’m interested in building things, in learning about things. That was really my goal for a long time, since I was little; I was interested in certain subjects, etc. That’s really all I tried to do. Then, while trying to live my life, you have various experiences. I had to leave because of war, and then I came to the States and the racism was just in my face from day one. I had teachers telling me that I couldn’t take classes — that I would fail if I took the exams, and all of that. That experience persisted for a long time. I would say I had a very visceral understanding of some of these issues.
However, what’s really interesting is that I still hadn’t really connected it to the tech. I would understand how I was facing sexism in the workplace while building a product or whatever, but I never had an understanding of how the tech itself was perpetuating these things, or how it was being used by powerful entities, or the militarization of these universities, like MIT and Stanford, and Silicon Valley. That all came later. I was chugging along and doing my engineering stuff. I was like: Oh, I’m going to start doing what’s called analog circuit design. Then I did that, then I veered off into something else, and then I veered off into something else again. I then, somehow, arrived at computer vision, which is a subset of AI, and I was like: Oh, this is super cool; let’s do this. Then there were three things that happened simultaneously. One is that I was shocked by the lack of Black people in the field. Graduate school was even more shocking than undergrad. In graduate school at Stanford, I realized that they had literally only ever graduated one Black person with a PhD in computer science. Now it’s two, by the way.
PM: No way!
TG: Yes, since the inception of the department, 1950-something. Exactly! Then you go to these conferences, and I remember, before I started Black in AI, I counted five Black people out of 5,500 people attending these international conferences from all over the world. That was one thing, just the dire lack of Black people. Then, again, a fluke: secondly, I started watching Joy Buolamwini give these talks. She told me I don’t say her name right, and we kept saying each other’s names wrong — Buolamwini. She told me she doesn’t say my name right either; it was hilarious. We were just repeating each other’s names to each other. Anyway, I saw her talk, and she was talking about how these open-source face detection tools wouldn’t detect her face unless she put on a white mask; they would detect her friends’ and other people’s faces, but not hers. I saw that around 2015, again, a very similar time. Like I said, when I wrote about this, just counting five Black people out of 5,500 at the AI conferences, that was 2016.
Then again, in 2016, I read the ProPublica article on crime recidivism. There was this article that talked about a particular startup that purported to have a model that can tell you the likelihood of someone committing a crime again, and judges and courts were using this data, with other input, to set bail or determine how many years you should go in prison for. I had no idea that this kind of stuff existed, because that’s when I put all of these things together. I was like: I know that people who are building these systems, I go to school with them; I work with them and I go to conferences with them. I know how they think. I know what they say when I talk to them about police brutality and so these are the people building this. Are you kidding me? I already could tell, could imagine the issues and also could see how other people were not caring about these potential issues. Around the same time, the Google gorillas fiasco happened, where they were classifying people as gorillas, Black people as gorillas. All of these things happen at the same time, then the final piece was the hype. Basically, I had been going in and out of my PhD for a long time. I tried this thing, then dropped out, then this other thing and dropped out.
So when I finally went back to working on my PhD in computer vision, it was literally at the inflection point of the AI hype, the before and after. Before, there was no hype — I started my PhD when there was no hype. Within two years it started exploding, and then, around the whole Google gorillas fiasco, OpenAI was announced. So I wrote this piece just for myself, because I was just so angry. I was going to submit it as an open letter to somebody. My friend was like: Everybody’s going to know it’s you, so it’s not anonymous. I was so angry because at that time they were a nonprofit (they later changed into a for-profit, but they were a nonprofit), and basically they were saying that they were going to save humanity from the dangers of AI, that it was going to be an AI-safety-first company. They were worried that large corporations, which are profit-driven, were not going to share their data or their research and would end up controlling AI, which is very important and is going to change the world. And it’s these eight white guys, one white woman, and one Asian woman in Silicon Valley, all with the same expertise in deep learning, plus Peter Thiel and Elon Musk. I was just like: You’ve got to be kidding me. So these are the situations that led me to start to focus, pivot a little bit, and try to learn more about the potential harms and things like that.
PM: I appreciate you outlining that, because I would imagine that a lot of people in the tech industry, whether they listen to this podcast or not, have had an awakening about the politics of AI, you might say, and have a similar story. Sure, there might be some different experiences in there, things that they have personally seen or gone through, but I imagine they went into it not thinking so much about the impacts of these technologies on the world. Then, after working on it and learning a bit more about what they are working on, they start to realize more about the impacts of these technologies and how they are part of the system that is creating those impacts.
I want to fast forward a bit to something I obviously can’t not ask about: your time at Google, when you were the co-lead of the Ethical AI research team. As far as I’m aware, a lot of people recognized that this was a really diverse team that had been put together to dig into these issues, to provide a critical look at what was going on in the AI space and at Google in particular. Then, of course, we all know the story of your firing in 2020 and everything that went down there in the press. What was that whole experience like? What was it that Google found so objectionable about the work that you were doing that led it to take such extreme action against you?
TG: Those were two years, and so much happened in those two years. By that point, there was no question about who I was. Joy and I had published this paper showing the disparities in error rates of face recognition systems among people of different skin tones and genders. I had co-founded Black in AI, and we had our first workshop. I was very vocal about the issues of racism and sexism in the industry, etc. I joined Google in September 2018, and I was already very nervous. When I was about to join, I was just nervous and like: What am I doing, where am I going? I talked to a bunch of women who had had a lot of issues with harassment, who sat me down and told me to think twice about it. Then there was Meg Mitchell, who was there at the time, and Meredith Whittaker was also there at the time; I knew both of them. That was during the whole Project Maven protests, so the fact that there were people like that there was a big deal. There’s dissent anywhere, but if it doesn’t look like you have any, that means it’s a bad place. So at least I could have that. Then I was thinking: You know what, Meg is someone I can really work with. She had started the Ethical AI team at the time, and it was just a couple of people. She asked me if I could co-lead the team with her. So I was like: Well, at least I have this one person to work with, and I joined the team. Right off the bat, in November, there was the Google walkout. Right off the bat, I was seeing the systemic issues, the sexism, the racism. Now, this is not exactly about the work itself, but the organizational dynamics of who gets to make decisions, who’s valued and who’s not, obviously directly impact what the output is.
PM: Totally, it’s the wider context of the company that you were entering into.
TG: Absolutely, I started making noise immediately. After a few months, I was certainly not a friend of HR, and clearly I was not a friend of the higher-ups, so that is how it started. Then I really started fighting for our team to have a voice, to be at different decision-making tables, to grow our team, to hire more people. We had to fight for literally every single person we were trying to hire. We hired social scientists as research scientists for the first time. Dr. Alex Hanna (she’s now Director of Research at the Distributed AI Research Institute, DAIR) was the first social scientist we hired into a research scientist role. Because you just need different perspectives if that’s what you’re really trying to do. We were primarily a research team, but we also had lots of teams coming to us with questions, for instance if they were gathering data: how are they going to annotate it? What are some of the issues? We worked on things like a paper called “Model Cards,” which Meg spearheaded, saying that any model you put out has to be accompanied by a set of tests, and you need to think about its tolerances. As an engineer, there’s no way you would put out any product without extensive documentation and testing: what is it supposed to be used for? What are the standard operating characteristics? What are the downstream ethical considerations? What task did you have in mind when you built it?
PM: The opposite of the “move fast and break things” ethos?
TG: Absolutely. So we put out a few papers like that, where I didn’t have any opposition from people in corporations or from engineers, because I’m saying these are engineering principles. I’m not saying, politically, you have to do this and that. But it is super political, because what you’re saying is: instead of making $100 in one week by doing whatever thing you’re doing, first of all, spend one year instead of one week so you can do these additional tests, etc. Secondly, hire additional people to do these things. I’m saying make less money per thing, put more resources into each thing, and sometimes you may not even want to release the thing at all. We had to figure out, in our team, how to get people rewarded, promoted, and all of that, not punished. How do we say: Hey, we stopped you from releasing five products because they were horrible? How do we talk about it in those terms? That was what I was doing; that was my job. I was extremely tired, exhausted from all the fighting — the sexism, the racism, whatever — trying to get our team to various tables, but we grew our team and it was definitely a very diverse team in many different ways.
Then came 2020. At this point, there were the Black Lives Matter protests, and OpenAI’s GPT-3 was released. It’s not like this was the first language model ever released, but they were the first people who were basically talking about it as this all-knowing, all-encompassing thing. Just hyping it up, ridiculous amounts of hype, and that seeped into the entire industry. All these researchers everywhere, in all the chats, were like: Oh my God, this is so cool. It’s so cool. And then the higher-ups were like: Why are we not the biggest? Why don’t we have the biggest model? Why don’t we? I’m just like: Oh my God, why? What’s the purpose of having the biggest? What kind of pissing contest is this? That’s how it felt. At the same time, a bunch of people at Google were asking our team — when they were thinking about implementing infrastructure for large language models and things like that — what are the ethical considerations we should have? At some point, I was like: Look, we’ve got to write a paper or something. I contacted Emily Bender, who is a linguist who has written about curating datasets, documenting them, etc. Voices like hers need to be heard prominently. So I contacted her and said: Hey, all these people at Google are talking about large language models. They’re asking me questions. Some of them just want to go bigger and bigger. Is there a paper I can send them? She said: No, but why don’t we write one together? So that’s what happened.
We wrote one together, and I was like: Oh, excellent! We have a little emoji in the title of the paper. That’s going to be cool. That was it. It was not a controversial paper; we outlined the risks and harms of large language models, and she coined the term “stochastic parrots.” There were other members of our team on it, obviously Meg Mitchell, who got fired three months after me. We go through some of the risks that we saw. One was environmental racism, because training and using these models takes a lot of compute power, and the people who benefit and the people who pay the costs are different, so that’s environmental racism. We talk about all the risks around bias and fairness and things like that, because these models are trained with vast amounts of text from the internet, and we know what’s on the internet. We spent a lot of time on that. We talk about how just because you have so much data doesn’t mean that you have “diverse viewpoints.” We talked about how there’s a risk of people interpreting outputs from these models as if they’re coming from a person; of course that happened, that was in the news. The other risk that we talked about was the risk of putting all of your resources and research direction into this one thing and not other things.
That was it. It was under review, whatever, and then, after it went through all these internal processes, I was randomly told that we had to retract the paper and remove our names. So first, it was about retraction. I was like: Well, we have external collaborators, and I don’t trust what you’re going to do if we just retract the paper. It’s not like you’re discussing improvements or specific issues. So then they said: Okay, we can just remove the names of the Google authors. Even then I was like: I’m not comfortable with doing that without a discussion. I want to know what kind of process was used, because otherwise I can’t be doing research. I’m not in the marketing department; I’m in the scientific research department. If it was PR or marketing, do whatever you want, but if you’re having me, as a scientist, write peer-reviewed papers, this is different. So, long story short, I was basically fired in the middle of my vacation. There was a whole public outcry. Then I faced a lot of harassment campaigns and threats, and all of this. So that’s what happened at Google.
PM: I appreciate you outlining it in such detail for us. This gives us important insight into the work that you were doing. I do want to pick up on some of those pieces of the paper that you were talking about, because these are really important things that I feel don’t get the attention they deserve, things we should be talking about more when we talk about these AI tools and these large language models. Even beyond that, there are the environmental costs of this. It can be easy to ignore how the models are based on a lot of compute power — massive data centers that require a ton of energy to train these models and keep them going. There are rumors now that ChatGPT, even though it’s free for you to go and use online, is costing millions of dollars a day just to keep the servers running, essentially, but that doesn’t get talked about as much. Then, of course, there’s how it’s using a ton of data that’s scraped from the internet, and we know the severe problems that exist in the data that is actually coming off of the internet.
As you were saying, we know how these datasets can be very biased — this is not controversial or a new thing that we’re just hearing about — this is something that’s been known for a long time and has not been effectively addressed. Then, of course, there are the opportunity costs in research: there’s all this hype around this particular type of AI, this particular type of large language model, so all of the energy, research, and money is going to go into building on these types of things. What if there’s a different type of AI tool, a different type of language model, that is actually better suited toward the public, toward the public benefits that might come of this? Toward the types of things that we would be more concerned with, as a people, instead of what’s going to excite people at the top of the tech industry and make money for these large corporations. Those things might not be aligned, but when the hype goes in one direction, it distracts attention from everything else. That’s why these are all really important points that come out of this paper and the work that you were doing with your co-authors and the other people you were working with.
TG: Honestly, even though we were worried about the risks then, I still did not expect it to explode in the way that it has right now, in such a short period of time. All of the things that you outlined are about obfuscating what exactly is going on, and that’s how we talk about things in the tech world. There are data centers taking resources and water, and people working in these data centers who are being exploited. It’s not clouds. In AI (we just wrote an article about this, on the exploited labor behind so-called AI systems), when people caught up in the hype talk about it, it’s like: Oh my God, look at this machine; it’s so cool, what it can do. Well, look at the armies of exploited laborers, and the people whose data has been taken. Everybody’s data has been taken. With art it’s become much more clear, because artists are galvanizing. All of these systems have datasets that are scraped from the internet and data laborers who do some type of task. Some of them supply datasets or data; some of them label them in crowdsourced micro-tasks. That’s what this whole field is predicated on. That’s when it started to become more popular, with the advent of things like Amazon Mechanical Turk, where you could crowdsource things and pay people pennies for little tasks and things like that.
Now there are many companies raising billions just based on outsourcing tasks, similar to content moderation in social media and things of that nature. It’s, again, the hiding of all the different things that are going on. So there’s the theft; there’s the hidden data labor; there are the hidden costs, the environmental costs. Actually, the one thing I feel we didn’t really cover in our paper is the level of theft. We talked about data curation and how you can’t really just scrape everything on the internet and assume that something good is going to come out. You have to curate your datasets and document them and really understand them. If your answer is: Oh, it’s too big to do that, then that means you just shouldn’t do it. Where else can you do that? Can you sell me a food item at a restaurant and be like: Eat this food, I don’t know what it’s made of, there’s some sugar, I know that there’s some flour, whatever else, you’re on your own? You can’t do that; it’s not allowed. We have to understand that this is the most unregulated industry; they can proliferate things because of that.
The other thing is, I just learned today, when I was talking to some people who know the Artificial Intelligence Act in the European Union, that there is proposed legislation, the AI Act, that they’re still debating. People are lobbying for exceptions for what they call general purpose models, so ChatGPT-type things or large language models would be general purpose models. OpenAI has been going around building this up as an all-knowing, god-like thing. They haven’t gone out of their way to tell you: Hey, you can only use this thing in these limited ways. That’s not how they’re talking when you see the press, when you see Sam Altman’s tweets, and people who are like: Oh, this is AGI. However, they would be exempt from liability for any sort of harm if they simply say: Don’t use this for any high-risk things. But they can go around saying this is a God. Then you have people who are talking about how they want to use chatbots in the Supreme Court. You have people who were using them in mental health applications, very clearly unethically. Now you have OpenAI, which can make tons of money, because it’s not going to be free. Obviously, it’s not going to be free. They’re going to start making tons of money from these kinds of people, but they won’t be liable, despite creating the hype. They created the race, they created what’s in the public consciousness, hiding all of the costs and hiding what’s really happening, telling us they’ve created a god-like thing. Now the people who are going to try to use it as such are going to be the ones who are liable. That’s even if we have any sort of regulation, and that’s assuming the best-case scenario. Even with the things we first saw in 2020, I still did not foresee it becoming this prevalent in just two years.
PM: It is very worrying to see how much it has exploded and how seemingly unprepared the discourse has been to reckon with what it actually might mean for us. As I’ve been watching the rollout of ChatGPT and GPT-3, and the “AI art tools” that have gained popularity over the past year, I’ve been a bit unsure how to think about it. On one hand, you have this public narrative being promoted by these companies, like OpenAI, where these tools are going to upend everything: all artists are going to lose their jobs, all writers are going to lose their jobs, because these tools can make writing or images that look as if a human has done them. But that feels to me like something we’ve heard about a lot of tech products in the past, which have been unable to live up to these lofty, hype-inflated claims. On the other hand, I wonder what the real impact of this might be when we get past the hype-fueled hysteria that we’re in now. I’m unsure what to make of that, and I wonder what you think of it?
TG: That’s a really interesting point, because I’m stuck in a cycle where I’m constantly like: No! Don’t! No! [both laugh] I have a hard time thinking past that, and I’m actually trying to force myself, and my institute, to carve out time to work on our affirmative vision for a tech future that is not dependent on what they’re doing. But if we were to think about the actual impact of this, I do think centralization of power is a big one. Let me give you one example of an actual impact. OpenAI has a speech-to-text transcription model, and a dataset, called Whisper, and Meta came up with a model called Galactica, claiming it can write all scientific papers. Then they came up with another paper, which they called “No Language Left Behind,” saying: Oh, we have this dataset that contains so many different languages, it’s so accurate, and this and that.
Then one of our fellows, Asmelash Teka Hadgu, created a startup called Lesan. It’s a machine translation startup, specifically for some Ethiopian and Eritrean languages — just a few of them, because there are so many Ethiopian and Eritrean languages. Somebody who was thinking about investing in them was like: Hey, have you seen this paper from Facebook? Your startup is going to die. Basically, it’s a solved problem; they say they solved it. Let’s say it was true: the people who would make money are not people like him; the people who would make money are not the people who actually speak those languages. The people who get the money, the workers at the startups and so on, are not those people. It would just be one company located in one location. However, it’s not even true, because he was looking at the dataset, and for some of the languages, ones spoken by literally more than 100 million people, the dataset was complete gibberish. So whatever the path may be, centralization of power is where we’re headed.
PM: The example you give there seems really representative of what we’re seeing. Facebook is saying that it has this model that’s going to work for all the languages, but if you look into smaller languages, ones that are not the more dominant languages that many people will be paying attention to, we can quickly see how that falls apart. It seems similar with tools like Stable Diffusion or ChatGPT: some of these things look great. It looks like the AI is creating this incredible image or writing something that makes a ton of sense, but as soon as you start to tinker with it, or try some different things that might be a bit less conventional, you can see that hands don’t generate properly in the images, and there are other issues that show that the AI, so to speak, doesn’t know what it’s doing. Rather, it’s just trying to combine these different images and styles in the way that you’re looking for, or in the way that it’s been taught through these machine learning processes. It’s similar with ChatGPT and things like that, where people are finding it’s more than happy to spit out complete lies as long as you frame your question appropriately. It’s unclear whether it will be able to replace human writing, as we’ve been told.
One of the things that really occurred to me, as I’ve been seeing this over the past number of months, is that we had a similar narrative around 2015 and 2016. As you say, in that moment there was a narrative that: Oh my God, all drivers are going to lose their jobs because self-driving cars are right around the corner. We can see how that worked out. But at the same time, there were also narratives about how writers were going to lose their jobs because all these things could be automated, and that didn’t happen in that moment either. I’m sure these promises have been made before in the past as well, so I’m always very skeptical when this comes up, as we’re seeing right now. As we’re in the midst of this hype-fueled hysteria, what is actually going to come out on the other side of it? I’m very unconvinced that it’s going to be what Sam Altman wants us to believe.
TG: He said that the future is going to be unimaginably great because of their chatbots. I don’t know how you create a text and image generator and think you’re going to create utopia. But this is the language of the longtermists and effective altruists and so on. I feel vindicated, because some people used to think I was just obsessed with this niche group of people for some reason. I’ve been around them for a long time, and I used to roll my eyes, but now it’s very much something you can’t ignore. When you were talking about Stable Diffusion, I realized it reminds me of social media and the issues we talk about in terms of social media. These companies are having issues of moderation too, so you’ll have issues with the moderators and what they’re going to have to see, what they’re going to be exposed to and then have to moderate. Also, the exploitation of these moderators, because their entire business model is predicated on theft and not compensating people for things. If they were forced to do these things properly, they would decide not to do it, because it wouldn’t be worth it. It’s not going to make them the money.
The other end of it — that’s what I’m seeing, the same as with social media companies: the spreading of disinformation, this trend of centralized power, moderation and exploiting people, a labor force treated as second-class citizens around the world. That’s the path we’re on, while they promise utopia, and I don’t understand how you go from A to B. For me, we can never have a utopia that is based on the vision of a homogeneous group that has absolutely no history; they don’t even understand art. What I’m saying is, I played piano for a long time, and I’m a dancer, but I’m not a visual artist, so my goal is not to watch a robot dance, even if it technically ends up doing things that are dance; technique is just one aspect of it. It’s like having words. It’s a communication mechanism between humans; it’s a human expression mechanism. People use art for social movements. These are the people who want to give us techno-utopia: a world where you just sit at your computer all the time and never interact with other human beings. Everything you do is with a chatbot and a robot. Honestly, why do I want to live in that world? So even their vision, which won’t happen, is dystopian. I just don’t see how this is utopia.
PM: It seems fundamentally inhuman. I’m going to connect a few threads here based on what you’re saying, thinking about the work aspect of this and what it might mean for work. One of the things that comes to my mind, and that I often reflect on, is tech’s impact on the work we do, on jobs, and on what people can expect. Going back to the gig economy, we often get promises that tech is going to automate things, or take away the need for human labor in various instances. What we actually see is that it doesn’t take away the need for the labor; it just degrades the job and what workers can expect when performing those types of tasks, whether it’s the gig economy or the people who are working to label the images and data behind AI tools, workers who are really poorly paid and treated very poorly.
We can see the rollout of technologies in Amazon factories and things like that, and how that’s used against workers. They haven’t automated the whole factory, but they have instituted algorithmic management of these people. Then, as you’re saying, when we think about the future that is being positioned with these AI tools, like ChatGPT or Stable Diffusion, and what they are supposed to create for us, it seems to be taking away one of the things that many people would imagine to be among the most human: our ability to create art, to create culture. It tries to replace that with tools, with computers, and we would just consume what the computers create rather than what humans themselves make. If there’s ever going to be an argument for something that probably shouldn’t be automated, it’s those particular things.
The final piece of this, because you mentioned effective altruism and its connection to longtermism, is that there seems to be a fundamentally inhuman future being presented there, too, where we need to be concerned about the far future and humanity’s future, but not the issues that are really affecting people today. The people who are pushing it, who are leading it — whether that’s thinking up the AI models and the futures they predict for us or promoting longtermism — are incredibly disconnected from the actual problems that people face in day-to-day life, and the type of future and solutions they are imagining do not deal with or reflect that at all.
TG: Exactly, it’s iterations of the same story — each decade or each century or whatever it is — even when you look at the rebranding of terms. For example, Big Data is basically AI now; people just say AI instead. Every so often you have a rebranding of things that sound cool, and I am someone who got my PhD in the field because I thought there were interesting things about it. I would like to focus most of my energy on just imagining what useful, interesting tools you can build. But that’s another thing I resent these people for, right? Nobody said: Go build us a God, or try to build us a God. They decided that this is the thing that not only they should do, but that it’s a priority for everyone to do. Then the rest of us are stuck cleaning up, rather than trying to implement our vision of what useful things we can build with our skills and the little funding we have; we don’t have the billions of dollars that they do. We end up in a constant cleaning-up mode, which means we can never catch up. That’s really the worry that I have.
When I talk to legislators, we’re still talking about regulating face recognition. We’re still talking about the fact that we don’t have any regulation for social media platforms, though the EU has some and California just came out with its privacy law. Meanwhile, these companies have moved on to the metaverse and synthetic, Stable Diffusion-type stuff. They don’t even care about this stuff anymore. So I often talk about it as regulating horse carriages when it’s actually cars that they’ve created. They’ve already paid off all the legislators, ruined public transportation, created roads and cities and all the infrastructure for cars. And we’re still sitting and talking about: Well, the horse carriage that you’ve created is not safe, and how can we regulate it? But guess what? They’ve moved on! They’ve moved on, and we’re already going to be too late. So it’s important for us to do both: the uncovering and critiquing of the harms, but also investing in the people who can do something different, like alternative visions, because otherwise we’re just going to end up trying to clean up and never catch up.
PM: Absolutely, it’s essential if we ever want to think of a different type of future, a different type of technology than what’s on offer from OpenAI or Elon Musk. I’m wondering, as we’re talking about this: you’re not at Google any longer, and you have your own institute that you founded, and some people who you worked with at Google have come to join you there, and I’m sure some people who weren’t at Google as well. What is your focus in trying to do research on artificial intelligence now, at this research institute? What is the goal that you’re trying to achieve without having to worry about what Google and the higher-ups there are going to do? What’s the kind of lens that you’re approaching this research through?
TG: The first lens is interdisciplinarity, not just people in the research world (sociologists, computer scientists, engineers, etc.), but also labor organizers, refugee advocates, activists, people who haven’t generally been involved in the shaping of this research. If you’re talking about “benefiting humanity,” which is not what I’m talking about, you obviously should start with the people who are already doing work to do that, and incorporate their perspectives. So our research questions and methodology are very much informed by that. We co-create together. For example, Adrienne works at our institute; she used to be a delivery driver for Amazon, and she just gave a talk at our one-year anniversary about wage theft. She was calculating her estimate of the wage theft by Amazon for each worker. I would never have thought to do that if we didn’t have her. Or we have Meron, who’s a refugee advocate and a victim of transnational repression. We’re doing a project analyzing the harms of social media platforms, and we have to work on natural language processing tools to do that, so it’s really important to have her perspective.
But it’s really interesting, because it challenges your points of view. For instance, Meron is very much into Bitcoin, and the rest of us are very skeptical. I’m like: You’re in a room full of skeptics. But I have to listen to someone who has single-handedly rescued 16,000 refugees from human trafficking and has fought against governments and all sorts of people. I have to listen to her when she’s telling me the ways in which she’s been using it to help refugees, the ways in which refugees have been using it. Even to reimagine a different future, it’s very different to have these kinds of perspectives rather than just a techno-utopia from some tech bros in Silicon Valley. That is the lens we’re taking, and what’s hard for us is basically what I just said to you: because there are so many dumpster fires we have to fight all the time, it’s very difficult to carve out space to be like: What is our positive vision and imagination for the future? We’re trying to force ourselves to do that.
We’re starting this Possible Future Series, which is a few pieces to show that tech is not this deterministic march towards some predestined future. The first piece talks about that, and the second piece asks: what would an internet for our grandmothers have looked like? My grandmother did not read or write, did not speak English, and was paralyzed for a number of years. When we talk about technology, you never think of her as an early adopter or customer. Then we speculate, and bring it back down to earth: what would the components of such an internet be? We’re just trying to create space for ourselves to think about these things, and to execute on them to the extent that we can, given our expertise.
PM: That’s fascinating. It reminds me of a conversation I had about people in the favelas in Brazil and how they were utilizing technologies to serve their needs, rethinking how those technologies could be used in ways the companies developing them probably never thought about. It’s so fascinating to think about those futures and those other potential uses and how people might think about or approach technologies. I love that, and I’m sure I’ll be paying attention to the work that you and the team will be doing. I’m wondering, as we close off our conversation here, where do you think this is all going in the near future? Certainly, there has been some critical coverage of ChatGPT or Stable Diffusion, these types of technologies, but there’s also been a lot of coverage, the majority of it uncritical, buying into the hype and framing these things as though they are creating text where you would never be able to tell the difference between a computer and a human, and it worries me to see that. You talked about the company that was using ChatGPT to turn out responses for a mental health service they were running, and it immediately made me think of Joseph Weizenbaum’s ELIZA chatbot, which he made all the way back in the 1960s, and the worries that he had about people thinking that this chatbot could recognize what they were saying and actually had some degree of intelligence, when it didn’t at all. That’s a long way of asking: where do you see this going in the next few years, and what should we be paying attention to in order to have a more critical take and a more critical view on these tools as they get all this hype and attention?
TG: You know, I’m similar to you. I don’t know if it’s just my imagination, but I have started to see a much more critical approach to tech in general, much more so than, say, 10 years ago. If Steve Jobs died today, I’m not exactly sure he would get the kind of reverence he did 10 years ago, when he was treated like a God. When I was at Apple, I felt like I had joined a cult, actually, in the way they were talking about him.
PM: Did you wear a black turtleneck?
TG: Absolutely not. On the other hand, it’s really depressing, because it’s the same story over and over again. Look at Theranos, and how the media just hyped it up, built it up, and only afterwards asked: how did that happen? Or Elon Musk as Time’s “Person of the Year” — it’s not like we didn’t know. The rest of us were like: Oh, come on. Then the whole Sam Bankman-Fried thing; it’s just the same cycle over and over again. I am shocked by ChatGPT; the level of attention and hype it’s getting is truly astonishing. You do have to give it to them on the hype machine; they are so good at it. Whatever kind of critique we have, it is not even comparable to the hype that it is generating. So to me, that’s where I see it going. I see the same cycle. If at some point a Theranos-type situation happens, it’s not the media that’s going to help us. Legacy media is really not helping us. What they’re doing is giving megaphones to the super powerful, and making it even more difficult to raise awareness about some of these issues and talk about reality. That’s where, to me, it’s going.
You have an explosion of startups. It’s not even the big tech companies right now, actually. Google is the one that didn’t release a chatbot, and they had to tell their researchers why not. I mean, they could have done that before they fired us, but now you have an explosion of startups, because this is the next thing that all the VCs in Silicon Valley are talking about and pouring their money into. We’re super behind in raising awareness about the potential harms, and I honestly see the hype continuing. [Joshua Browder]’s talking about having chat… I mean, this guy was saying that having a chatbot as a lawyer would level the playing field, because poor people would have access. It’s really interesting, because this goes to Cathy O’Neil’s point in “Weapons of Math Destruction”: poor people are often interacting with automated systems much more than rich people are, and yet this guy is saying that automated systems are going to level the playing field for them. Obviously, the idea did not come from someone who didn’t have a good lawyer and therefore lost. It didn’t come from a poor person who didn’t have access to a lawyer. So I don’t see it slowing down anytime soon.
PM: I share your concerns, especially when we’re in this moment where tech stocks are down, interest rates are higher, and the industry is clearly looking for the next big thing after Web3, the blockchain, and the metaverse clearly haven’t worked out. But what does give me hope is seeing how the criticism of Web3, cryptocurrency, and the metaverse made a difference in ensuring that there were more critical perspectives on these technologies in the media, in the discourse that people were engaging with when it came to these technologies. So my hope is that if we’re able to hone a critique of these technologies early on, then we might be able to influence some of the coverage and hopefully not see a repetition of some of the 2010s hype cycles around these technologies that we saw in the past. That’s what I’m crossing my fingers for.
TG: I hope so too. I think it does make a difference. When I talk to Joy, I’m like: Look at face recognition. I thought we made progress, and it’s everywhere. Now it’s exploding. And she responds: What I think about is what would have happened if we didn’t do what we did. I do think it’s making a difference. I’d like to see more of it.
PM: Totally, I completely agree. That’s a great place to end off our conversation. Timnit, thank you so much for taking the time. It’s been fantastic to chat with you.
TG: Thank you so much for having me.