AI Criticism Has a Decades-Long History

Ben Tarnoff

Notes

Paris Marx is joined by Ben Tarnoff to discuss the ELIZA chatbot created by Joseph Weizenbaum in the 1960s and how it led him to develop a critical perspective on AI and computing that deserves more attention during this wave of AI hype.

Guest

Ben Tarnoff writes about technology and politics. He is a founding editor of Logic, and author of Internet for the People: The Fight for Our Digital Future. You can follow Ben on Twitter at @bentarnoff.

Support the show

Venture capitalists aren’t funding critical analysis of the tech industry — that’s why the show relies on listener support.

Become a supporter on Patreon to ensure the show can keep promoting critical tech perspectives. That will also get you access to the Discord chat, a shoutout on the show, some stickers, and more!

Transcript

Paris Marx: Ben, welcome back to Tech Won’t Save Us.

Ben Tarnoff: Thanks so much for having me, Paris. Big fan of the show, so it’s always great to be here.

PM: Thanks so much. I’m a big fan of your work, of course, from Logic Magazine to all the other writing you’ve been doing, your books and everything else. You’re an essential contributor to the critical perspective on the tech industry, so it’s always great to have you back on the show. And I’m very excited to discuss what we’ll be discussing today.

BT: I appreciate that.

PM: You had a recent piece in The Guardian discussing the life and work of Joseph Weizenbaum. Now, he is a figure who we’ve discussed on the show before. He’s come up in some of these discussions around AI that we’ve been having in the past number of months, as we’re in this moment of AI hype, and his ELIZA experiment, or software, or chatbot as we might say today, in particular. We discussed that in the past with Zachary Loeb, if people want to go back and listen to that episode. But I think that there’s a lot to get into, especially in this moment, with ChatGPT getting all this attention, because Weizenbaum’s work is linked to what is going on today and I think provides a lot of lessons for it. So I just wanted to start by asking, for people who aren’t familiar: who was Joseph Weizenbaum? And why do you think his work continues to be relevant?

BT: Well, before I step back to give you his full biography (I can give you as much or as little as you would like; I have acquired probably more information about his backstory than anyone really needs to know), the reason that he is in the conversation these days, and the reason that his name has been appearing in places like The New Yorker and The New York Times over the past year or so, is ChatGPT. ChatGPT has obviously generated a lot of interest in chatbots, which are not new. In fact, Weizenbaum is commonly credited with creating the first chatbot, called ELIZA, in 1966. But ChatGPT has, let’s say, renewed interest in chatbots as a conversational interface to something we might call AI, bracketing for a moment that AI is this contested and often ill-defined term, which is why I’m always putting quotes around it. But nonetheless, ChatGPT gives us a character that we can interact with, underneath which is a large language model, which in 2023 is defined as AI.

So Weizenbaum, as a figure, has attracted greater interest, both because he created the first chatbot, but also because, partly through the experience of the reception to ELIZA, which did make a big stir in 1966 when it was released, he began to develop this broader critique of AI. Which, again, sidebar, meant something a little bit different at a technical level than it does now. But nonetheless, he develops this critique of AI, and really of computation more broadly, that occurs over a series of articles but really culminates in a 1976 book called “Computer Power and Human Reason,” which is his magnum opus. So that’s why folks are talking about him. And this article, the Guardian long read, was my attempt to intervene in that conversation and say: Hey, ELIZA is really important and very interesting, and we should revisit that, as folks are doing. But also, there’s a lot more there. ELIZA, for him, was actually a starting point to developing this broader critique that is of immense and urgent relevance for today.

PM: I’m really happy you outlined all of that, because that’s basically what I want us to talk about in the conversation. Not just ELIZA, but this broader critique that he developed as a result of creating this chatbot and then seeing how people were interacting with it and responding to it, and developing these ideas around technology at this time. Do you want to give us just a brief idea of his early years, and then how he actually got into doing work with computers and what we call AI now?

BT: Sure, so let me just give you a brief chronology of his early life. Joseph Weizenbaum was born in 1923 in Berlin. He’s a German Jew. His father came from Eastern Europe, establishes himself in Berlin, becomes a moderately successful furrier — someone who creates tailored fur clothing for women — and secures a foothold in the upper middle class of the German Jewish community. In Berlin, he — this is Weizenbaum’s father — marries a much younger Viennese woman, and has a relatively successful shop. Incidentally, or just kind of parenthetically — although it will have immense consequences for Weizenbaum’s later life — his father is quite abusive to him, both physically and verbally, and tells him he’s worthless from day one. This is related to the mental health challenges that Weizenbaum develops from an early age, which will be of great consequence to him, not only personally, but for his intellectual project, because it helps stimulate his interest in psychology and in psychoanalysis.

In 1936, Weizenbaum and his family leave Germany for the United States. The Nazis are now in power; they have passed a number of anti-Jewish laws. As a result, Weizenbaum is forced to drop out of his public high school and move to a Jewish school, where he meets a number of much poorer Jews — so-called Ostjuden from the east — and develops a very intense friendship with one of them. All of this is cut short when the family decides to leave Germany for the United States. They end up in Detroit in 1936 because Weizenbaum’s aunt had a bakery there, so that’s where they had somewhere to stay. His father reestablishes his practice, sets up a shop in Detroit, and again rejoins the middle class, one could say. And Weizenbaum ends up studying at what is now Wayne State University, what was then Wayne University, a working-class local public university in Detroit, in the 1940s. Weizenbaum then serves in the army from 1942 to 1946, which he experiences as a liberation from his family, where he is very unhappy. In the army, he gets to travel all over the country; he gets a degree of independence.

In the course of one of his furloughs back home in Detroit, he meets and marries a woman named Selma Goode, who is a Jewish socialist and who will go on to be one of the early members of the Democratic Socialists of America. She’s very involved in the left-wing activism of the time, the 1940s being a high watermark for the American left, and a period of very intense class struggle there in Detroit. This is the period in which the UAW is in the process of winning its first contract against Ford. So there’s a lot going on; this is a period of very intense social mobilization. Weizenbaum gets caught up in that, and develops a left-wing politics as a result. In the late 1940s, he and Selma get divorced, and this is an extremely painful experience for him, because by that point they have a baby boy. They decide that Selma is going to take the boy to raise on her own. Out of this experience of heartbreak, he goes into psychoanalysis for the first time, and also goes into computing for the first time. This is of significance for his later career, that his first encounter with psychoanalysis is happening around the same time as his first encounter with computing. His first encounter with computing, however, is accidental, serendipitous.

He’s studying math at Wayne University when his professor decides that he wants to build a computer. This is the late 1940s; the modern computer, the architecture — what we now think of as the von Neumann architecture — is all starting to get consolidated in the late 1940s. It’s obviously still very difficult to acquire a computer. You don’t just go to the RadioShack; I guess that reference itself dates me. But you don’t go on amazon.com or wherever one buys computers these days, I wouldn’t know, and just buy one, particularly for a working-class university in the middle of Detroit. So they decide to build one, and this is an extremely exciting experience for Weizenbaum; it actually heals his heartbreak in a way. It brings him out of his pain, connects him to what will become a passion for him, which is computation. This gives him a sense of self, a self-esteem, a sense of self-worth that has been so missing in his family life, with his father telling him he’s worthless all the time.

He becomes quite good at computers. It really fits. He ends up marrying a woman named Ruth Manes in the early 1950s, and then becomes an early computer engineer and programmer. He works for General Electric in California, in what will later become known as Silicon Valley, where he develops a project for the Bank of America that helps them use a computer system to automatically process checks. By 1963, he is invited to become a professor at MIT, and he experiences it as a great honor, because it means he’s reached a stage of his career where he can be invited into this high temple of technology. MIT is the epicenter of the emerging discipline of computer science in the United States at the time. Now, before I pause, because I’ve been talking for a long time, let me just tell you: why did he join MIT? What was the context? The context was that in 1963, an initiative was launched at MIT called Project MAC. This was funded by the Pentagon’s R&D arm, which at that point was known as ARPA; it would later be renamed DARPA. And they were getting millions of dollars at MIT, from the Pentagon, to work on interactive computing. In particular, to perfect a technology that was known as time-sharing.

Now, what does that mean? And I promise I’ll be brief. To understand why these things were revolutionary, we actually have to take a step back and talk briefly about what computers looked like when Joseph Weizenbaum was getting really into them. Well, you had punch cards, and if you were writing a program, you had to encode that program onto the punch cards and then run these punch cards through the computer. This was known as batch processing, and typically what would happen is you would bring your program to the operator, they would run it overnight, and you’d come back in the morning to see if it ran correctly. Now, anyone who’s ever done any kind of programming might imagine that this is a very painful way to develop a program. If you sit down in Python, and you’re at the interactive shell or whatever, you can just try out: Hey, does this throw an error? Or if I just write up a little script and run it, does that run? Imagine having to wait a whole day and come back in the morning and say: Oh, there are all these errors. It’s a very slow, very difficult way of working with a computer.

The idea with Project MAC was: what if we could create a more conversational way of interacting with a computer? This would not just help us develop computer programs faster and more efficiently, but would also create a new kind of human-computer interaction. And in particular, there was a figure who was very well known then and since, J.C.R. Licklider, who had been at MIT and moved to ARPA, and was instrumental in getting Project MAC funded, for whom this idea of creating a more interactive experience of computing was absolutely essential. So that’s the context in which Weizenbaum was hired into MIT. And that is, crucially, the context within which ELIZA, the first chatbot, is developed, and quite materially so, because Project MAC provides the funding.

PM: I appreciate you outlining all that history, and even noting the significance of those pieces to the work that he later ends up doing. It leads us really well into talking about ELIZA. But before we do that, there are a couple of things I just want to pick up on in what you described, because I think it is interesting in understanding Weizenbaum’s perspective and his approach, and in trying to ensure that we understand how he’s approaching this work as he goes into ELIZA. One of the things that you described in your piece was how Weizenbaum had a difficulty with humans and human interaction, which probably comes out of his experiences as a child, and how that made the computer, and working with computers, something that was appealing.

And I think that is a familiar story that we hear from people in Silicon Valley, or people who work with computers, from time to time. Mark Zuckerberg obviously comes to mind as a very obvious example. But the other piece is that we’ve been talking about how he had this critical stance on technology, yet as I understand it from reading your piece, in this moment he considered himself a real advocate of these technologies and a real believer in what they could do. So can you talk to us a little bit about that, before we go into talking about ELIZA?

BT: That’s a good point, Paris. When you write about somebody, there’s always this risk that you’re projecting too much of yourself into them. And in fact, one of the key concepts that we need in order to understand how and why people responded to ELIZA, and have responded to chatbots since, is this idea of transference, which we’ll get into later. But I’m also very conscious of myself as a writer projecting, perhaps, too much of myself into my subject. The reason I mention that, just as a frame, is that I work in the tech industry. And I also have a complicated relationship to technology, one that I think is probably not unlike the one that Weizenbaum had. On the one hand, I love it, I’m fascinated by it, I want to be around it, I want to know how it works. It has given me a certain sense of myself, a certain stability, a career, in the same way that it did for Weizenbaum. On the other hand, I am aware of the many destructive purposes to which it can be put, and to which it’s currently being put, as he eventually was.

So for Weizenbaum, I think both he and I would resist this conversion narrative, where he was a naive believer in the promise of technology and then turned against it. I think something more complicated was going on, which is that he had developed certain political commitments quite early, really in college in the 1940s. And while his political passions kind of recede over the course of the 50s and the early 60s — and don’t really get reactivated until he joins the movement against the Vietnam War on MIT’s campus — that critical edge never entirely disappears. Even so, you can read articles in which he does exhibit a certain enthusiasm about ELIZA, as a chatbot demonstrating the potential for a more enriched human-computer interaction. This is his initial idea of what ELIZA could be: not really a joke or a critique, but an instance of a more conversational approach to computing, in which we could talk in natural language to a computer and would want to talk to a computer. And thus, through that process, the computer could learn more about us and about the world.

So there is, I think it’s quite important to say, a kernel of optimism in the initial idea for ELIZA, that nonetheless has a bit of hesitation, a bit of ambivalence attached to it. Because when he released ELIZA out into the world in 1966, the institutional context in which it was received was this field of artificial intelligence, which we can talk about in greater detail, in which figures like John McCarthy, who had been at MIT and had by then relocated to Stanford, or figures like Marvin Minsky, who was at MIT at the helm of the AI project there, had much more audacious — one could even say arrogant — views about the potential for computers to simulate human intelligence, the whole range of human intelligence. So there are kernels of optimism and kernels of critique as early as ELIZA. And then what’s interesting about watching him develop over the course of the late 60s and through the 1970s is how the kernels are still there, but the proportions become different, where the critique gets turned up and maybe the optimism gets turned down. But then even towards the end of his life, we have these flashes of optimism about the possibilities for friendship with an artificially intelligent agent. I’ll pause there; perhaps I’ve nuanced-bro’d it into something incomprehensible.

PM: I found that fascinating, to be honest, to think about the way he was approaching it, how that approach developed over time, and how there were always these conflicting viewpoints coexisting together, and then seeing how they both evolve as his experiences with these technologies change and he sees people interacting with the things that he’s created. You’ve already started to bring us into the ELIZA program, or chatbot, or whatever we want to call it. So how exactly did that work, and why was it significant at the time? How did it shape how Weizenbaum started to change his views on these technologies and the role that they might play?

BT: So ELIZA was a fairly simple chatbot, in which you would sit, initially, at an electric typewriter, because this is actually before the era of the computer console with a monitor. These are very early days of human-computer interaction. You sit at this typewriter, you type something in, and you get a response. And the character that ELIZA is performing is that of a psychotherapist. So the responses are ones that you might hear from a therapist, if you’re familiar with that kind of language. Now, why did Weizenbaum choose to have ELIZA perform this role? Well, we know that he has this history with psychoanalysis, and he’s interested in psychoanalytic concepts; psychoanalytic themes dominate his work. So there are those considerations. On the other hand, there’s also a very practical consideration, which is, in fact, quite funny. For ELIZA to perform the role of a psychotherapist, you don’t have to encode any knowledge of the outside world into the program, because all you have to do is write these fairly simple transformation rules that take the input, rejigger it, and put it back to the user.

So an example might be: I’m thinking a lot about my mother. Why are you thinking about your mother? Simply turning things back into questions. And this is funny, because if you’ve had an experience of therapy, often that is the experience: people are reflecting back and taking you deeper. As Weizenbaum says in an aside in one of his articles, in a normal interaction, in a non-therapeutic situation, if someone responded in that way, you would think there’s something wrong with them. But in a therapeutic situation, that type of transformation actually signals wisdom and depth and knowing. It is essentially using your own language and reflecting it back to you. So ELIZA produces a very powerful response in people, and this is a response that I think we could recognize as a transferential response. This is not a word that Weizenbaum himself uses, but I think it’s one that is appropriate, where the response that a number of people have to this chatbot is to impute humanity, empathy, and understanding to the program itself. And this is a phenomenon that Sherry Turkle later calls the ELIZA effect.
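
[Transcript note: to make the mechanism concrete, here is a minimal sketch, in Python, of the kind of keyword-and-reflection rule being described. It is an illustrative approximation, not Weizenbaum’s program; the original was written in MAD-SLIP, with its rules kept in a separate script (the therapist persona was the DOCTOR script), and every keyword, pattern, and template below is invented for the example.]

import re

# Pronoun swaps so the user's words can be mirrored back at them:
# "my" -> "your", "I" -> "you", and so on.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "mine": "yours", "myself": "yourself",
}

# Illustrative keyword rules: a regex that captures part of the input,
# plus a template that turns the captured text back into a question.
RULES = [
    (re.compile(r"i'?m thinking\b.*?\b(about|of) (.+)", re.I),
     "Why are you thinking {0} {1}?"),
    (re.compile(r"\bi feel (.+)", re.I),
     "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (.+)", re.I),
     "Why do you say your {0}?"),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    words = fragment.rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in words)

def respond(user_input: str) -> str:
    """Apply the first matching rule; fall back to a stock prompt,
    the way ELIZA fell back on content-free continuations."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."

print(respond("I'm thinking a lot about my mother."))
# -> Why are you thinking about your mother?

[Note that everything beyond the matched keyword is simply the user’s own words with the pronouns swapped, which is part of why the illusion of understanding comes so cheaply.]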

So what is most useful about ELIZA is not really the chatbot — which is not terribly sophisticated, by the standards of its time — but rather this discovery of the ELIZA effect: that we have this tendency, which is connected to our tendency to project feelings about people that we’ve known onto people who are present, which is transference, but that we can do that with computers as well. We can do that with software. ELIZA generates a fair bit of interest at the time. The Boston Globe writes about it; they send a reporter to go sit and talk to ELIZA, and they run an excerpt of the transcript. It generates a fair bit of interest in his professional circles, and Weizenbaum, partly as a result, secures tenure at MIT the following year. So one of the major responses is this transferential kind of ELIZA effect, one in which people feel seen, feel heard, feel like there’s a real person there.

Another set of responses, or let’s say a related set of responses, comes from the experts who think that this demonstrates a real understanding of natural language — that this is in fact a promising path for authentic, genuine, what we would now call natural language processing. And even some psychotherapists believe that this indicates a promising path for automated psychotherapy. It’s really these responses by the people who should know better, by the experts, that bother Weizenbaum. But again, the timeline of this is interesting, because while the response bothers him, how he processes it takes some time. This is a moment where we have to be a bit careful about the timeframe, because often when people are writing about ELIZA, they’re reading Weizenbaum’s reflections on ELIZA from towards the end of his life. And at that point, he is seeing things a bit differently than he did at the time; he kind of telescopes this process of evolution.

This is where ELIZA, again later in life, seems in retrospect to him to always have been a critique, to always have been a kind of parody of artificial intelligence. Whereas at the time, as we’ve discussed, it did have this element of optimism, of: this could provide a promising path for developing a more enriched form of human-computer interaction. But then the set of responses that it generated, again through this ELIZA effect — which he essentially discovers — bothers him in a way that actually takes years to fully process. So it sets in motion the threads of thought that will eventually culminate in his 1976 book. But that comes out 10 years after ELIZA, and in between there’s a lot of evolution and development in his thinking.

PM: It’s really interesting to hear you describe all of that, and how the reflections take some time before he actually comes to a position where he has this critical view on what this technology meant and how he perceived it in retrospect. Also, what you described there — even though you’re talking about something that happened in the 1960s — there still seem to be so many parallels to what we’ve been seeing in the past number of months, with people interacting with chatbots and feeling like it understands them and it’s talking to them and whatnot. Experts saying that this means that we’re really close to artificial general intelligence or something like that. Even the reporters going out and speaking to it and publishing transcripts of it. It seems so fascinating that after all that time, the response can be so similar. At least the immediate reaction can carry so little of the learnings or reflections from the criticism that Weizenbaum and many others have made in the decades that have passed since then.

BT: I think that’s right, and I think maybe that helps us lead to a point that I would like to make, which is that the ELIZA effect doesn’t mean that people are stupid. Transference doesn’t mean that you’re an idiot. It’s not this moral category of: You idiot, you think that’s your mother? That’s not your mother. In fact, in psychoanalysis, transference is kind of what makes psychoanalysis work, or what’s supposed to make it work. It’s how we bring the past into the present to try to get some clarity on the distinction between the two. You actually have to make the analyst, in kind of a classical theory, into this figure that holds all this transferential energy in order to uncover all of the things from the past and bring them into the present. The reason I mention this is because there’s a similar perspective that Weizenbaum brings to the ELIZA effect, where even in the original 1966 article, he’s very clear that the software is producing an illusion of understanding. It’s not real understanding; it’s an illusion of understanding. And it’s a more powerful illusion than he had anticipated. People really seem to believe that this program understands them, that it’s actually listening to them.

But that illusion can be useful, because it makes the user want to talk to ELIZA, and through that process, ELIZA might learn something about the world. That illusion could actually contribute to a more interesting, more enriched form of human-computer interaction. But he also points out in that original article that a certain danger lurks there, which is that through this illusion of understanding, we may attribute a level of judgment to computers that they really aren’t capable of. So, again, it’s not that people are dumb or stupid for thinking that this computer is human. In fact, that sense might be constructive in certain contexts. But there are also certain dangers that we need to be mindful of, and it’s the dangers, of course, that he becomes increasingly preoccupied with, and which form the centerpiece of his broader critique in “Computer Power and Human Reason.”

PM: I think that’s a really good point, and we’ll return to it in a few minutes. I wanted to pick up on the fact that Weizenbaum is joining MIT at this time when the concept of artificial intelligence is growing, being promoted by people like John McCarthy and Marvin Minsky, as you mentioned, who are also at MIT, if I have that right. Their views on computer intelligence — or the type of intelligence that computers can hold — seem quite distinct from Weizenbaum’s view on computers and intelligence, and on whether a computer can ever achieve human intelligence. Can you talk to us a bit about the distinction between those different approaches or perspectives on this term artificial intelligence, or this notion of computer intelligence?

BT: Absolutely, and this is a distinction that is present from the beginning. Even if Weizenbaum has not yet developed the full critique that he will publish in the 1970s, he’s always quite distinct from the AI diehards, figures like McCarthy and Minsky, who really believed that a computer can precisely simulate human intelligence; that human intelligence, and by extension human experience, is essentially computable. And what that meant in this era, particularly for McCarthy, is that you could encode rules, very elaborate sets of rules. This is the paradigm of so-called symbolic AI, which is different from the connectionist paradigm of neural networks that we’re in today. But nonetheless, the idea was that you could encode rules that would arrive at a certain simulation of human intelligence that could match or even exceed human capabilities. Weizenbaum is quite suspicious of this claim, early on. There’s a radicalism to the AI project that he is always wary of.

McCarthy is credited with coining the term artificial intelligence in the mid-50s. And there are various reasons that he comes up with that term, and why he feels the need to come up with a new term. But one of them is that he wants to convey the breadth of his ambition. This is the height of the Cold War. There’s an enormous amount of money on the table for science and technology, and there’s a lot of optimism about what information technology in particular can achieve. Which is somewhat reasonable, given the relatively rapid pace of development in that period. McCarthy, as a result, has enormous optimism about the kind of intelligence that can be developed in a machine. This is the kind of optimism that Weizenbaum really never shares. And I think as time goes on, for Weizenbaum, it becomes less about a certain kind of wariness, or a certain suspicion, or a certain insistence on the need for more modest ambitions about what we can achieve in computation, and turns into something sharper, something harder. Into something where he begins to feel that AI as an ideological project is actively harmful — not just too ambitious, not just a bit unrealistic about what can be achieved — but that it actually has a sinister social and political dimension. And that’s what he begins to dig into in the course of the 1970s.

PM: That gives us a good bridge to talk a bit more about those wider ideas as well, this broader critique that he develops over time, as he’s reflecting on these experiences and these issues. One of the things that really stood out in reading your article was that Weizenbaum wrote that he believed the computer revolution was ultimately a counter-revolution: something fundamentally conservative. That goes against a lot of the narratives that we have around personal computers and the internet as being this moment of empowerment, where the individual is getting all of these additional abilities to enhance their capabilities, or their skills, or whatever. Why did he believe that, and what is the importance of recognizing the computer revolution in that way?

BT: In many ways, it’s his most provocative idea, and it’s one that I am both very drawn to and also struggle with. I think it’s worth saying that Weizenbaum is a writer that one struggles with. He’s a challenging writer to read, I think, not in the sense that he uses a lot of technical language or a lot of jargon, but in that his thinking, particularly in “Computer Power and Human Reason,” his 1976 book, has a kind of meandering quality, which we could charitably describe as essayistic. It is quite brilliant at points, but at others it feels disjointed: he follows a thread, picks it up, drops it, picks up another thread. The reason I mention this is because there is a bit of interpretation required to make meaning of this very provocative point, of the computer revolution being a counter-revolution. What does he actually mean by that? You have to fill in some of the blanks. I think what he meant by that is that, on the one hand, the computer revolution as it takes place — let’s say, if we had to periodize it, it really emerges in the 50s and the 60s, the 60s being the turning point, the decade in which computation enters mainstream American life in a profound way, no longer a specialized military technology.

I think what he means is that this is a period in which economic, social, and racial hierarchies are being strengthened and consolidated. This is the period of the early Cold War. This is a period in which that high watermark of class struggle and struggle for racial justice that occurred during World War II in the United States has been defeated. That wave has receded, and we are, certainly by the late 40s and the early 50s, in a much more conservative period of American life, which I think in pop culture we associate with McCarthyism, but which goes much deeper than that, of course. And the computer is an instrument for strengthening those conservative, those counter-revolutionary forces, because it makes it possible to automate decision-making at a certain scale, and thus provide very narrow criteria for how decisions will be made, criteria that reinforce existing logics. I think that idea is actually quite familiar to us now, when we think about how algorithmic policing, to take one example, reinforces existing analog racist policing practices. But I think at the time Weizenbaum was saying something that perhaps felt a bit newer.

So, that’s one dimension of the counter-revolutionary aspects of computing. And I should say — perhaps this contradicts what I said just a moment ago about how he was saying something a bit newer — this notion that computation is counter-revolutionary is widely shared among members of the social movements of the 1960s. The computer becomes a symbol not just of the war in Vietnam (computers are being used to wage war in Vietnam, which is why they’re being attacked by student radicals at computer centers on campuses across the country), but also of stifling bureaucracy, of this very regimented, institutionalized form of life. This is connected to capitalism as a system, but also to the specific imprisoning cultural codes of 1950s America, which is part of what the student rebellions are about.

In that sense, he shares that intuition that computers are counter-revolutionary, but tries to develop the idea a bit further. The other piece of his argument, I believe, is that not only do computers reinforce existing concentrations of power, existing social hierarchies, but they also constrict our understanding of what it means to be a human being. And actually, this latter point, I think, is more important for him: that computers encourage us to think of ourselves as computers. That they encourage us to mechanize our rational faculties, to embrace instrumental reason or instrumental rationality, which is a concept that he borrows from figures like Max Horkheimer and Theodor Adorno, for whom instrumental reason means an attention to means rather than ends. It is an attention to optimizing processes without reflecting on what those processes are for. For Weizenbaum, the computer is an agent of instrumental reason. It encourages us to adopt this engineering mindset where we’re just trying to make things more efficient, but we’re not really thinking about: what is this efficiency for?

He gives an example from the anti-war movement where, during the campus protests at MIT, there was a proposal floated of: why don’t we create hotlines, so that campus protesters can communicate with the administration, and this will ease tensions. He presents this as an example of instrumental reason of the kind that computers automate and proliferate, because, he says, instrumental reason converts moral, political, and social problems into technical ones. And in doing so, it suppresses the possibility of conflict, the idea that you can actually have conflict between different sets of interests, between different sets of values. Everything is simply a technical problem that can be solved with a technical solution. So in this setting, the notion that the student protesters and the administration would have entirely opposed sets of interests, entirely opposed sets of values, that actually can’t be reconciled through a telephone wire, is a difficult idea for instrumental reason to accommodate. But then you can see, I think, through that example, how instrumental reason, and by extension computation as a whole, serves the status quo. Because if you’re not allowed to ask questions about ends, if you’re just thinking about means, then the established way of doing things continues. It sets very narrow parameters on what you’re allowed to tinker with.

PM: I think you’ve put that so well. There are so many things I could say in response, but just a few things I want to pick up on. On the one hand, when you talk about people seeing themselves as computers, I think this is something that we have experienced for a long time. But you notice it in particular with the people in Silicon Valley today: there’s a strong belief, or a view, that we should be trying to achieve transhumanism, to merge ourselves with computers. You see people like Sam Altman comparing us to stochastic parrots, using a term from Timnit Gebru and Emily Bender and those sorts of people, to draw comparisons between these chatbots and large language models and ourselves as humans, to try to make us look as though we are one and the same.

You talked as well about this view of computers, at the time, as these things that are controlled by large institutions and that are very bureaucratic. And it’s interesting, because, on the one hand, you describe how Weizenbaum is looking at how these computers are used, what the politics behind them are, and taking maybe a more oppositional stance to the way that computers are operating. Whereas then you have the Steve Jobses of the world come in with the personal computer revolution and say that the problem isn’t fundamental to computers, but just the fact that large institutions control them. If we put computers in everyone’s hands, then we take away the negative effects. I don’t know if you have any further reflections on those points, or we can certainly discuss other aspects of his work.

BT: Something that makes me think of is this term humanism, and I hope that doesn’t take us too far afield. It’s a term that has kind of an interesting history within the history of computing, because as computers begin to be capable of performing certain functions that we might associate with human intelligence, it always poses this question to us: what is a human being? And I think this is really the central preoccupation of Weizenbaum’s work, what is a human being? This is a question that becomes active and urgent and challenging in an era in which computers seem to be able to do more and more of the things that we would have associated with human beings. Humanism, as a term, can mean a bunch of different things, but it signals some investment in an idea of the human, some attachment to the human as variously defined. And that set of ideas has been, on the one hand, very useful for developing information technology. We talked briefly about Project MAC and the influence of J.C.R. Licklider. There is a lot of attention in those circles, as they are developing the fundamentals of what we now take for granted as interactive computing, to this category of the human.

You mentioned Steve Jobs; that gets inflected with the 60s counterculture and an interest in Eastern philosophy, which is one of the inputs into the personal computing revolution. And then of course Jobs is central to the mobile computing revolution with the iPhone. So I guess this is a long way of saying that talking about humans as something distinct from computers is not necessarily oppositional. It can actually be a force that greatly develops the power of these technologies. It may also develop the usefulness of these technologies; I’m grateful that we have PCs and that I don’t have to run punch cards through a mainframe. But nonetheless, it is something that the tech industry has made use of: that humanization, if we would use that term, of information technology has made the industry much more powerful, much more profitable. So on the one hand, I want to be wary of humanism, full stop. But I also want to be attentive to the different ways that humanism is defined and deployed.

What I find interesting about how Weizenbaum uses the term is that he has a very historical understanding of what a human being is: a human being is a person who has a human history, who was born to a human being, who was raised by a human being, who inhabits a human body, who has a human psyche, who goes about the world as a human. And that, to me, resists some of the mysticism that I dislike in some humanist discourse, and also refocuses us on the real distinctions that he’s interested in between the human and the computer. It’s not that there’s an essential goodness or an essential spiritual quality to humans that computers don’t have. It’s really quite simple in a way: they just simply don’t have a human history.

That opens the door to a point that I make at the very end of the piece — and this is a point that Weizenbaum himself made, but I wanted to draw attention to it because I thought it was a nice place to end — which is the possibility of a computer system developing its own history, developing its own embodiment, perhaps developing its own set of relationships. And through that process it could acquire something like intelligence, but an intelligence that is very alien to ours, a very different kind of intelligence. I think that’s an important point to make: Weizenbaum, unlike some other figures, never thought that intelligence could not develop in a machine; he did not want to make that type of claim. He just thought that if it did, it would look completely different from human intelligence, that it would be as different to us as a dolphin’s intelligence is.

PM: I think there’s a whole conversation and a whole rabbit hole that we could go down in discussing that further. I’m just going to allow that thought to exist as it is, and people can reflect on it, because there are a couple of other things that I want us to discuss before we end off this conversation. When you talk about his idea of the importance of human history, it’s a narrative that I feel we are returning to in this moment, where we have the threat of large language models and image generators and things like that. One of the arguments that we hear from, say, writers in Hollywood is that ChatGPT doesn’t have the human experience that could go into writing these stories that people enjoy. Or artists, for example, saying that image generators, again, don’t have these human experiences, so they can’t make unique art in the way that we would expect, and that we want humans to do.

I think that bridges us into the discussion of what Weizenbaum writes about in his book, “Computer Power and Human Reason,” where he really draws a distinction between judgment and calculation, and between what a computer should be able to do and what should be left to humans and not given over to computers. So can you explain the distinctions that he draws there, and why, contrary to the way that Silicon Valley presents things today (that we should be trying to have computers do as much as possible, virtually everything), he believed that should not be the case, and that there are very clear things that computers should be designed to do, and other things that should be left to humans, because computers will never be able to do those things effectively?

BT: I’m glad you asked that, Paris, because that’s really the center of his book. As he explains in the preface, the book has two major arguments. The first, which we’ve been discussing, is that there is a difference between man and machine; that is very important to Weizenbaum. The second is that there are certain tasks that a computer should not do, and this gets to the distinction between what he calls judgment versus calculation. Now, calculation, for Weizenbaum, is a quantitative process: it uses a technical calculus to arrive at a particular decision. We could think of it as algorithmic. And there are many occasions in which we need to use calculation to arrive at a decision; it is absolutely an essential human faculty.

But it’s important to Weizenbaum that we understand the distinction between that and judgment, which by contrast is a qualitative process. You can’t use a technical calculus to arrive at a decision when you’re using judgment, because judgment is rooted in values. These values arise through the course of human experience; they’re related to this question of human history that we spoke about just a moment ago. We acquire values by being human beings in the world: we acquire them from our parents, from our surroundings, from our socialization, and we define our own values as we grow older. Values are something that one can’t acquire without having the experience of being human.

That’s a very important point for him, because if you’re going to make judgments, you need to rest on that foundation of values. This is why, for instance, he considers it obscene to imagine that a computer could perform the functions of a psychotherapist. For him, the functions of a psychotherapist require access to a set of human values, which are in turn predicated on a set of human experiences, without which you can’t actually provide a therapeutic encounter to someone. Similarly, he would consider it obscene for a judge to be automated; a computer could never pass judgment on someone, because a computer doesn’t have access to those human values and human experiences. So that distinction between judgment and calculation is really essential to his thinking.

PM: It definitely brings to mind conversations that we’re having today. It seems incredibly relevant in this moment where there is a renewed push for AI to be integrated in many different ways. And it brings to mind the work of someone like Dan McQuillan, who wrote the book “Resisting AI,” about where we should be okay with AI being implemented, and whether we should be okay with it being used to shape our lives and ensure that we have less power over how we live, essentially, because we are handing that over to AI and to computers and to machines. Should welfare systems that are offered by governments be determined by artificial intelligence systems that could end up getting something wrong, or that wouldn’t be able to listen to a specific situation that you’re in where there might need to be a compromise made or something like that? Or should we run visa systems with AI tools, or turn policing into something that is done by AI? That can have a lot of negative consequences, because the AI does not have human values. But it also allows humans who do have values to say: Oh, the AI said that this is okay, so that’s fine now. I think you can see a lot of different ways that this is still incredibly relevant today, and it still needs to be present as we consider what computers should actually be used for.

BT: Exactly. Importantly, those are not complexity arguments, because today it’s very difficult to make the argument: well, those systems simply aren’t complex enough to perform these functions. They’re quite complex; these are not just rule-following programs. They’re relying on these massive neural networks, and the reality is, we know how they’re trained, but we don’t actually really know why a large language model does what it does. It’s a complexity that in many cases eludes human understanding, which is probably a problem of its own. But we’re not saying that if only these systems were more complex, they would be able to perform the function of a psychotherapist. We’re saying that they never could, because they can’t access human values, because they can’t have human experiences.

That doesn’t mean they might not develop their own weird AI civilization, God bless them. But they should not be permitted to do things that only humans can do. And when you allow them to do so, not only does it degrade the quality of that experience, but it shrinks the scope of decision-making. This is an important point, I think, about instrumental reason: by reducing the richness of human reason to this algorithmic process, you are actually also constraining the decision space quite significantly. We can’t actually make certain choices, because we’re enclosed in this much narrower form of decision-making. And this has tremendous political consequences. Again, this is part of Weizenbaum’s point about the conservative impulse at the heart of computation.

PM: I think that brings us back to something you were talking about earlier, and that you wrote about in the piece, where you describe how Weizenbaum was less concerned by AI as a technology and more by AI as an ideology. I think that really links up to what you were just saying. So I just wanted to close with more of a general question. We’ve been talking about the history of Weizenbaum’s work and its relevance to this moment. But we are in this period where there is a lot of AI hype, because of large language models, and ChatGPT, and all this kind of stuff. But there’s also this growing skepticism of Silicon Valley and the worldview that it holds, which we’ve seen growing over the past number of years. What do you think is the takeaway that we should have from Weizenbaum’s work in the present?

BT: Wow, that’s a big question. Well, I would encourage folks to read Weizenbaum. His great book is not in print, but perhaps it’s on Libgen or something. I think returning to his work is quite useful. Again, not as a prophet, someone who got it all right, but as someone to struggle with, as someone who challenges us and who is not always right. There are times when he’s not right; he doesn’t get everything right. But this is someone who, to a large extent, was present at the creation, who essentially participated in the computer revolution, and who saw something inside of it that I think we are still working through. There is so much one could draw from his work.

As I mentioned before, it’s not always very coherent; I’m not sure it adds up to a very clean, integrated picture. But if I had to give people the takeaway that I thought was most valuable, I would say it really resides in this very simple sentence: there is a difference between a human being and a computer. It’s a very obvious point, but it’s a point that AI as an ideology is constantly trying to deny. The ideology of AI is that everything that humans do, computers can and should do, and will eventually do better. And if you think of humans and computers as entirely distinct, entirely alien entities, that concept that the ideology of AI proposes becomes nonsensical.

You could even do a thought experiment: if a bunch of aliens landed from Mars, that would be very cool. It would be very interesting to have a conversation with them and see where they’re coming from and figure out how they think about language and culture and art and all these things. That’s awesome. I’ve been wanting that to happen since I was a little kid. I’m obsessed, frankly. And I believe that they’re out there, and that they will come at some point, and that they’ll be friendly. But we wouldn’t say to the little green men, or whatever they look like: Hey, would you like to be my shrink? Would you like to be a judge? Would you like to be the President of the United States? Would you like to have your finger on the nuclear codes? Whatever it is, that would be insane.

I don’t think anyone would; of course, one can never be too careful. I’m sure there’s some strange internet subculture that celebrates that possibility, but I don’t think that most people would find it very reasonable. But Weizenbaum’s point is that that’s kind of how we are approaching computers. We are empowering them to an extraordinary degree, to make decisions about people’s lives, when in fact they’re aliens; they don’t have access to human experience, so they shouldn’t be given such extraordinary power. But perhaps if we understand that really profound difference between human beings and computers, we can find a form of coexistence that can be quite useful, constructive, satisfying, even fascinating, as they develop in their capabilities. So I think that’s a note of cautious optimism that we can end on.

PM: I think that’s a fantastic point to leave the listeners with, and to leave them thinking about, especially in this moment of AI hype, with so many CEOs expecting us to believe that the AIs will, and should, do so many different things. Ben, it’s always fantastic to be able to pick your brain and to talk about these tech topics, because I think that you’re an essential voice on the key things that we’re grappling with today when it comes to technology. So thank you so much for taking the time again to chat.

BT: Thanks so much for having me, Paris. This was great.
