Ed Feigenbaum says that he remembers that just after Christmas vacation in 1955 or 1956, you came into your mathematical models class and said, "Over Christmas, Allen Newell and I invented a thinking machine." Did I say that? Yes. And he said all the students sat in a sort of stunned silence; they didn't really understand what you meant. They knew what you meant by machine, and they thought they knew what you meant by thinking, but the two words just didn't come together. Anyway, you also said that after that work was done it was all downhill from there. Well, we knew it could be done. We still didn't have a running program on the computer, but we sort of knew how to organize it, and we had succeeded in hand-simulating the Principia proofs. So in that sense it was downhill, though there was a certain large amount of work to be done thereafter. Well, you're probably familiar with Thomas Kuhn's notion of a paradigm. Yes. Would you say that this was a paradigm? Maybe I will read you his formal definition. Yeah. Well, I don't remember the formal definition, but I know his view pretty well. Yeah, I think we thought we had a paradigm, and that it was noticeably different from what was around before. Lots of people have had reductionist theories of human behavior, primarily physiological theories up to that time. But I guess we understood that this was a new and different reduction. Well, the kind of information processing theory we produced does not provide you with a physiological theory of how the human being operates. That's still another layer below. Physiologists haven't done their work yet. But this showed that these human performances could be produced by a system having nothing in its innards except a certain specific, very highly specified set of information processes, organized in such a way that they could be simulated on a computer. There are several analogies you can have here.
My favorite one is the 19th-century chemistry analogy. Nineteenth-century chemistry was able to take the behavior of sticky stuff in test tubes and reduce it to the combining and recombining of some hypothetical entities called molecules and atoms. There was no physical reality to those things in the sense of any direct way of observing them. That had to be supplied in the 20th century, mostly by physics. Likewise, I think all of us believe that ultimately one wants a physiological theory of human thinking. But instead of trying to go from the complex behavior in one jump down to neurons, here was a reduction to some intermediate level of processes which was obviously mechanizable, because we mechanized it, whatever the case about whether it's in turn reducible to physiology. That, as I say, is the physiologists' problem, which they haven't quite given us the answer to yet. Yet in the early days there were attempts to do models of what was held to be physiological. There were people doing nerve nets, especially at that time the Rochester–Gelernter project at IBM, which was based in turn on the McCulloch–Pitts ideas that we talked about last time. Very schematic neurons, but nevertheless trying to do it at that level. But they were getting up to behaviors which were far less complex than human problem-solving behaviors. All they were trying to do was to get a nerve net to organize itself somehow, which was important to do. I don't mean to denigrate it, but we were trying to go from A to B, and they were trying to go from Z to Y, and there are still a lot of things in between B and Y. Why did the Carnegie group concentrate almost entirely on human psychology, as distinct from artificial intelligence and its other aspects? I think two reasons. One, that's where we came in. The original motivations, certainly for me and I think for Al, arose out of our attempts to understand human behavior in organizations. Our enterprise was a psychological one from the beginning.
Second reason: because it seemed to us, let me speak for myself for the moment, in another part of my life I was very much involved in operations research and all those good things, which in a sense are, if you like, artificial intelligence: the use of powerful mathematics and computing techniques to do things that people don't do so well. I saw human beings able to solve a lot of kinds of problems that we hadn't reduced to OR formulations or to the formulations of economic theory. Starting with the notions that I had out of that part of my life, and out of my organization-theory notion of bounded rationality, it seemed to me that there were a lot of sly tricks that people had which we were going to have to learn about and borrow and apply if we were to have effective artificial intelligence. It was a two-way street: it wasn't just artificial intelligence contributing to psychology, it was ideas coming out of psychology which we needed if we were ever to have effective artificial intelligence. And I think the first working example of that was the whole idea of associational memory and the list structures, the list processing languages. Now, the ideas for those came from many sources, Al will have his own version of that, but one important source of ideas for the list processing languages was what psychologists knew about associational memory. And I think I mentioned last time that one of the things I was doing, certainly in the winter and again in the spring of '55–'56, was sort of scratching through the psychological literature to see what I could find that was relevant. And in many ways the key idea for EPAM came out of the 1940 article of Eleanor Gibson that we dug up during that time. So the notion of associational memory preceded the notion of list processing, would you say?
In a way, list processing, in some of its aspects, develops out of the idea of an associational memory, which goes back at least to Aristotle as applied to human memory. No, no, I said all of that. But the general idea of the use of heuristics, which has been very strong in our group, again comes basically out of the conviction that heuristics, and that's a very vague term, but it was intended to be vague, it was intended to be sort of an umbrella term for, as I say, the whole set of sly tricks which humans use in lieu of computing power, and we wanted to know what they were. Because if you look at the early chess-playing proposals, it's not Shannon so much, but the people who actually started programming, like the people down at Los Alamos: their idea was, here we've got this big, fast machine, we're going to search the whole tree. And our idea was, how do people do it? What can we learn from that? And that's continued to be, both on the psychological side here and also on the AI side, a source of ideas for AI devices. Looking at the psychological literature? Well, that, and running our experiments and trying to develop a psychological theory to go with the AI part of it. How did you get started on EPAM? The first thing I have in my file is a memo of about February '56. We can dig it up sometime and check the date, but it's something like that, which I wrote on my typewriter over in GSIA on a very snowy Sunday afternoon down here. And it, as I say, came out of thinking about an article of Eleanor Gibson's in which she was trying to clear up some problems about verbal learning. I forget at the moment exactly the point of her paper. I just thought it was, even then, a fairly well-known paper. And somehow or other that triggered some vague ideas of, well, why couldn't we build a memory that would operate like this, would learn like this, what would be involved, the notion of the letter features and so on.
And Ed was around, a graduate student, I guess about ready to take on a thesis subject, and we talked about it a little bit and then we developed it, and by summer, and Ed can probably give you more dates on this than I can, he's spent a good part of his life on it, by summer we had a pretty clearly worked-out idea of how an EPAM net should work. I remember we were out at the summer seminar in '56, and I remember walking on the beach one noon with Ed at Rand, talking about this memory structure and how many nodes it would have to have. So I know we had that idea. Well, we had that idea even before the summer, because Ed, I think, gave a presentation of it to this summer seminar at Rand. It wasn't programmed then, but the first glimmer, the first idea of trying something like this, must have been February or March '56. But that would be in conjunction with the work that you were then doing on the Logic Theorist. Yeah, you see, after that Christmas holiday, when we knew what heuristics the program had to have, the whole emphasis switched to getting the programming language designed and running. Now, if I recall right, Al took his prelims just around that Christmas time, and so in terms of the dog work I was probably doing the bulk of that before the end of the year; and then as the emphasis shifted to the programming languages, Al took over the main leadership on the task, and that was the main thing that was occupying us all through the late winter and spring of '56, up to the Dartmouth conference. And I don't remember whether we were doing anything on chess or not. I think that was on a back burner. We had that in mind as sort of the second course, and it was almost all on the Logic Theorist. Well, what I was doing during that time, mostly, I don't know about mostly, one of the things I was doing during that time was in fact scanning the psychological literature.
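The EPAM net described above, a memory structure of nodes that discriminates stimuli by testing letter features, can be sketched in a few lines. This is a toy reconstruction, not the historical program: the class and method names (Epam, learn, recognize) and the scheme of testing one letter position per node are illustrative assumptions, chosen only to show how such a net grows a new test whenever two memorized items collide at the same leaf.

```python
# Toy discrimination net in the spirit of EPAM. Each internal node tests one
# letter position; learning adds a new test when two stimuli reach one leaf.
class Node:
    def __init__(self, image=None):
        self.test_pos = None   # which letter position this node tests
        self.branches = {}     # letter found at test_pos -> child node
        self.image = image     # stimulus stored at a leaf

class Epam:
    def __init__(self):
        self.root = Node()

    def _sort(self, stimulus):
        # Sort a stimulus down through the tests to a leaf.
        node = self.root
        while node.test_pos is not None:
            key = stimulus[node.test_pos:node.test_pos + 1]
            if key not in node.branches:
                node.branches[key] = Node()
            node = node.branches[key]
        return node

    def recognize(self, stimulus):
        # Whatever image is stored at the leaf is "recognized".
        return self._sort(stimulus).image

    def learn(self, stimulus):
        leaf = self._sort(stimulus)
        if leaf.image is None:
            leaf.image = stimulus          # familiarization: store the image
        elif leaf.image != stimulus:
            # Discrimination: grow a test at the first differing position.
            old = leaf.image
            pos = next(i for i in range(max(len(old), len(stimulus)))
                       if old[i:i + 1] != stimulus[i:i + 1])
            leaf.image = None
            leaf.test_pos = pos
            leaf.branches[old[pos:pos + 1]] = Node(old)
            leaf.branches[stimulus[pos:pos + 1]] = Node(stimulus)
```

Note that, as in EPAM-style models, the net tests only as many features as it needs to tell the memorized items apart, so a novel stimulus may sort to an existing leaf and be misrecognized as a familiar one.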
The EPAM project came up, and that sort of began then as a separate activity, a semi-separate activity, but a definable activity, primarily between Ed and me, which went on during that spring. And meanwhile we were going very hot and heavy on implementing the Logic Theorist, which meant the writing of that first article, well, a speech that Al gave in March down in Washington and then the article for the September IRE Transactions; and Al working with Cliff on the actual language that was going to run on the machine, which came after that first one. The language in that first Logic Theorist article, the IRE one, was never one that was implemented. It was a conceptual language. But then, to get it on the JOHNNIAC, Al and Cliff mostly worked that out on the basis of the general ideas in that first article, and that's where the available space list got invented, and that was the key to implementing a list processing language. It's done today with garbage collectors, but we did it with an available space list. And that was when Al was in Pittsburgh but communicating almost daily with Cliff, both on the phone and with long letters, which have been preserved. Al's got them, I know, and I have copies of those from that period. We have lots of documentation. So you went up to the Dartmouth conference with the first, I think you said last time, the first hand simulation of the Logic Theorist. No, I think the first, actually the first printout, yeah. And I understand from others who were there that you really made a big splash, because they were all still talking about it in a theoretical way, and here you two came on the scene with the real thing. Yeah, we went up. I know, I understand that you and Al resisted the whole notion, the whole idea, of calling the field artificial intelligence. We didn't like the name at all.
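The available space list mentioned above, the device used in place of today's garbage collectors, is essentially a pre-allocated chain of free cells: taking a cell for a new list means unlinking it from the front of the chain, and freeing a list means explicitly threading its cells back on. A minimal sketch, with all names (Cell, ListMemory, and so on) invented for illustration rather than taken from the historical IPL code:

```python
# Minimal sketch of list processing over an "available space list" (free list).
class Cell:
    """A list cell: a symbol plus a link to the next cell."""
    def __init__(self):
        self.symbol = None
        self.next = None

class ListMemory:
    def __init__(self, size):
        # Pre-allocate all cells and chain them into the available space list.
        cells = [Cell() for _ in range(size)]
        for a, b in zip(cells, cells[1:]):
            a.next = b
        self.avail = cells[0]

    def get_cell(self, symbol):
        # Take a cell from the front of the available space list.
        if self.avail is None:
            raise MemoryError("available space list exhausted")
        cell, self.avail = self.avail, self.avail.next
        cell.symbol, cell.next = symbol, None
        return cell

    def free_list(self, head):
        # Return a whole list's cells to the available space list;
        # this explicit step is what a garbage collector automates today.
        while head is not None:
            nxt = head.next
            head.symbol, head.next = None, self.avail
            self.avail = head
            head = nxt

    def make_list(self, symbols):
        # Build a linked list of symbols from cells taken off the free list.
        head = None
        for s in reversed(symbols):
            cell = self.get_cell(s)
            cell.next = head
            head = cell
        return head

def to_python(head):
    """Read a cell chain back out as a Python list of symbols."""
    out = []
    while head is not None:
        out.append(head.symbol)
        head = head.next
    return out
```

The design point is that memory never grows or shrinks: every list ever built is stitched together from the same fixed pool of cells, so the programmer, not a collector, is responsible for handing cells back when a list is no longer needed.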
We had talked a lot about labels, and the one we had settled on... as a matter of fact, one of the things I encountered while I was looking for things that you would later want to delve through was a sheet of paper which has a long list of terms down the side, complex information processing and all the rest. I remember that also. I think that was the day we were sitting over in the office; we had these all on the blackboard and we were trying to decide what we were going to use for what. So we had settled on complex information processing as the phrase we were going to use. What were some of the other candidates? Well, I'd have to see if I can find the... Oh, that's okay. I'll figure it out. But maybe one day we can look at them and see what we eliminated. You want to do that now? I'll have to unhook myself, I guess. This is from Christmas vacation, because here's a draft of a paper that never appeared in this form, called Mechanisms of Human... and notice it's on human problem solving, in January '56. And here's the selective search stuff, and here we're going to talk about computers, because I assume nobody knew anything about them, and here's the symbolic logic. So this is sort of a pre-draft of the September '56 paper. And here is... this is just a little memo to ourselves that came out of the reading in psychology, trying to get some terms straight. What do we want to mean by learning? Learning, fixation, discovery, discrimination, learning in the problem-solving machine, et cetera. Here is one of the early... oh, this may be part of the stimulus for EPAM. Selfridge loved to put this conundrum up: how do you know that's an H and that's an A? You've probably seen that in some of this stuff. And so on February 1st, '56, there's a memorandum, which I can't really understand, to myself. These are the kind of notes I keep.
Differences between one-trial and many-trial learning, span of immediate recall, cues, initial period, build-up, black–white distinction, et cetera. So it was this kind of thing that began to lead to EPAM. Write and store a description of a stimulus. Construct a response. That's a memo that apparently didn't have an exact date on it. And here... yeah, this was the Sunday afternoon. I was reading Xenophon at the time and working on my Greek. And on the 18th of February it got named, in fact. I didn't realize we had the name then, yeah. It really was named after Epaminondas. Oh, I didn't know that. Oh, didn't you know that? No, I didn't. Well, I don't know which of these came first, but... And here is a description of the association learning trial, inferences from human memorizing and perception. This is stuff distilled out of the literature. EPAM and Gibson's theory of verbal learning, Psych Review, volume 49, et cetera. That's the Gibson article. And this apparently all got typed out that afternoon. Most of the latter part of it is mostly notes on the Gibson paper. And also a paper, a chapter in Stevens' handbook, worrying about time per trial, et cetera. I didn't realize it was so long. Oh, but I do remember doing this. What was I trying to do? I was free-associating. I guess I was worrying about the structure of the long-term memory. There it is, Xenophon, you see? He really is there. Yes. The typewriter was my analyst, you see. What's all this? Oh, I was probably trying to time finger motions. Who knows, yeah. How fast can you type all that in? And here's some more stuff, probably on a famous article by the other Gibson, Jackie's husband, Jimmy Gibson, on set. I was trying to get at a different meaning. Well, that was what I really wanted to show you, I think. Was there another item? Now, apparently we had been approached on that before March, because we were already sending them a memorandum then. Did you find out any more about how we got approached?
No, I didn't get to Boston this weekend. Oh, I see. Okay. Unless you want to see what else we can turn up here. Oh, this is fascinating. Do you always keep this good documentation? No, I have better documentation for this period. Most of the rest is sort of loose. You see, there are some... all of those boxes underneath. Pieces of my life. But this particular period, I guess I knew that I wanted to remember it. Did you have a sense that you were doing something very special? Oh, yeah. Oh, yeah. Yep, no doubt. I mean, one always has a good feeling about good research, but there are better feelings about better research. It seemed obvious. Well, during the spring we were trying to push this out in the literature a little bit. And these are just some thinkings aloud, worrying about what plans were, the relationship between plans and hierarchies. A hierarchy is a program that's already stored. I think these are some notes I made when I was up in Michigan for a weekend at some damn conference or other. There's another memorandum by April, and I'm sure Ed was much involved in this by April. And here I'm working, in some quaint language, on perception in a large-capacity file computer, whatever that is; temporary storage; low-capacity serial computer; taking information from perception, temporarily storing it, permanently storing it, and so on. Here are the programs we were going to need. What is reinforcement? Learning, attention as a function, and all this whole thing. And again, going back to the psychological literature. And oh, here was presumably an early memorandum. We must have been sweating about the question of how such a program could get generated. Could we, by looking at proofs that we achieved and going back and analyzing them, diagnose and so on? And here's a more ambitious little project coming out of the Systems Research Laboratory at Rand. Could we duplicate a Bales conference on a machine?
You know, Freed Bales' work on coding what goes on in a human conference. And we had a protocol to take, the Colonel Allen transcript, which in fact later on I did analyze to some extent. In fact, this is sort of the scheme that probably would almost have worked. Now, Allen and I date the beginning of the book at different dates. But I thought we were writing a book already in April '56, because here's something called Chapter 5, Draft 1. And it's got in it a fair amount of what came out, at least in the outline of it, in the second paper, the explorations with a problem-solving program: the statistics of problem solving and so forth. And the planning idea is already in there. There are some notes in here somewhere, and one of the other notebooks would show, that the idea of GPS planning, not GPS planning per se, but the idea of planning by going up to an abstract space, was something we were already fooling with in the spring of '56. Here we are worrying about: can we add a learning program to this? Oh, this is the one I did up at Michigan. There were peacocks at the place. It was one of these houses that the university acquires from some rich character, and they put on a little conference there. There were peacocks around the island. I remember trying to ponder this while I was admiring the peacocks. Work proofs backwards, providing the basis for judging similar and different, and so on. We don't need to go into the content of it. Here's the second draft of same. There's an outline. Now, this outline, I don't know, this later got cannibalized. This is an illustration of some of the ideas that came out of the earlier stuff that we'd done in an organization context: the notion of using aspiration level, and aspiration-level adjustment, as one of the mechanisms for guiding a search and for learning.
The empty world hypothesis is something that had come out of some of the earlier attempts, well, like those two chapters of mine in Models of Man on selective search through spaces as a model of human behavior. That kind of language later dropped out, but I mention it to suggest the ties between this and what had gone on earlier. Attempts to define what problem solving is, as formally as possible. This was probably, oh, this was just a piece I did one night when I was feeling angry at Talcott Parsons, I guess. About what? Talcott Parsons. Why did you get mad at Talcott Parsons? Oh, such a ball of fluff. I hope you censor this tape. All this talk about general theories of this and that; I don't know what a theory was. Then we were trying to see whether we could get some formalization of the size of the problem space and see if we could do any mathematics on it, and mostly we never got anywhere. But we tried it anyway. This is where the idea of the set representation and the search representation of the problem, a distinction we make later on, comes from. A little memo on what the nature of functional analysis is. A lot of these themes were there right from the beginning; we just never had any chance to do anything about them. The first thing that ever got published on that subject was the piece that Al did with, who was it? Just a piece published about two or three years ago on functional analysis. Oh, one of the graduate students who got a degree with Al; he's out on the West Coast now. Well, he was present back then. But these themes were sort of bubbling around from very early on. What, did you sleep at all? I slept very well, probably better than I do now. Here in April we were wondering whether we couldn't apply this to doing differentiation. Never got it reduced to practice. Here we were worrying about what the difference is between an algorithm and a heuristic, because we already wanted to use that term, which we got from Pólya.
You know that Al took undergraduate courses from Pólya. And here are all sorts of terms that we were trying out. This seems to be trying to find some other task environments that we could transfer to easily: functional calculus, elementary number theory, common terms. We had done, of course, in the fall a little bit of preliminary work on geometry, but hadn't gotten very far. Design of switching circuits with memory; I don't know what that's all about. I think these were just possible directions to go. I must have been playing the name game with somebody. Gleason is a mathematician at Harvard. Milner, you know who Milner is. I don't know who Christine Criss is. I may have mentioned this to you. I once reached in my pocket, for I keep little scraps of paper, and I drew out a piece of paper, and on it it said, in my handwriting, mind you, "Andreas Papandreou wants to see you." And as far as I know, I'd never heard that name before in my life. This is Andy Papandreou, who's now back in Greece. He was a young mathematical economist. Yeah, we met shortly thereafter. That's when I was around the Cowles Commission. I never found out how that piece of paper got there. Well, as a man who is very skeptical about the mystical side of life, I'd like an explanation. Well, if I ever think of one, I'll tell you. Now, July 8th, '56, that's almost Dartmouth, isn't it? Yeah. This may have been written up there. Again, some projects. Could we do some learning? That means not our learning, but putting learning into the programs. And self-programming was the translation of learning, of course. So we were talking about automatic programming. We didn't know how to do it. Do you know what the dates of that conference were? Well, apparently it ran... It was fairly late, wasn't it? Oh, we were only up there about two weeks. Right, most people came for short stays. Did it run that long? Yeah. In fact, some people remember it as running all summer. I see.
We must have been up there toward the end, because we were up there and then we came back to the MIT conference. The MIT conference? That's not the same thing as the IRE conference? Yeah, that was at MIT, right. I see, I see. It was in that Kresge Auditorium there. Yeah. And Marvin told me that you all talked about what you'd done. So these July ones were probably before we went up. Here I must have been reading Bruner. The Bruner, Goodnow and Austin book came out just then; it came out that year, I know. So I was reading that and taking notes on it. Do you take notes generally? On a couple of books a year, when I really want to understand them. Or I heavily mark them up. But I only read seriously a couple of books a year, maybe, when I find a book that's obviously important. Oh yeah, really taking them apart and finding out what makes them tick. Well, here's the safe example which I usually use; that probably came out of reading Ashby, actually, whether consciously or not. You were starting to say last time that when Ashby's book came out, it had quite an influence on you. Yeah. That was a book, again, which I sat down and really read, and read right away. Minsky told me he had the same reaction to the book, but he said there were so many gaps that he couldn't wait to start filling them in. Uh-huh. Was that something you felt as well? I don't recall having exactly that feeling. I had a feeling that here was something that gave you a feel for how a feedback system could really behave intelligently. I guess I thought it was quite abstract; I would have used the word abstract rather than gaps. Here was sort of how it might work in principle, but how could you really make it work? I mean, that's the same thing, that's a similar thing. Let me see if I've got anything in here. These are probably the notes I made myself when I was trying to write one of these manuscripts or revise one of these manuscripts.
Well, either discrimination again, or... this is the problem of how much information is involved in opening a safe, which I've used as an illustration in various places, probably worrying about where the information comes from. Here's a little program to do it in some strange programming language that was probably invented for the purpose. This is probably when we were trying to formalize a number of the concepts for the second paper, not the first paper, which we were drafting by now. Yeah, a graph-theory model of problem solving; a note on epistemology. I don't know what that says; you'll have to read it if you want to at some point. This is sort of the question of how you could use artificial intelligence to do epistemology, something that's still on the burner and hasn't much been done. Here's, this is probably a note on a conversation with Allen. This looks like a notation; the language here is the language from our first paper. A little comment to myself. Now, by now we must have been at Dartmouth, because what is the relation of this to Minsky's net? And September, yeah. This is probably an outline for the talk, although Al actually gave the talk, I think. This looks like, this is the way I outline a talk typically. And then it goes on and on. Here's the list of people who were at the IRE symposium. And it's interesting that... Yeah, I wonder if there are any people who later... Oh yeah, everybody was there already. Ashby; John Backus; Julian Bigelow; Alex Bernstein of chess program fame; Peter Elias, who was later chairman of the EE department up there; Duda, who does work in pattern recognition, there's a book, Duda and Somebody; Fano, who was later head of Project MAC; Farley, an early pattern recognition type. I don't know who Davey is. Eugene Galanter, who wrote Plans and the Structure of Behavior with Miller. I don't know who Glasshaw is. Heiglberger was sort of a mathematical information theory type, I guess.
Bell Telephone Company. George Miller, you know. Leon Harmon at Bell Labs. John Holland of Michigan. Anatol Holt; he's now up in Boston freelancing, I believe. Duncan Luce. Donald MacKay of England. John McCarthy. Warren McCulloch was there; I'd forgotten that. Marvin. I don't know who Melzak is. Trenchard More was this guy who... yeah, Z.A., math department, I don't know. John Nash was a very bright game theorist who I think later had some mental troubles. Trenchard More was the guy who had done this other logic program. Allen Newell. Abraham Robinson, let's see, that's the Toronto Robinson. Was he the... I can never keep the... he's the one who later went to Berkeley, isn't he? Didn't he do some of that theorem proving? Nat Rochester. Hartley Rogers, a symbolic logician at Harvard. Walter Rosenblith, who's still at MIT. You know who he is. The Wiener Rosenblith? That one. Jerry Rothstein, who's a strange, crazy man; been around in the electronics and computer area for a long while. Dave Sare, I've forgotten who he is, although I've known; I don't know who he is anymore. Lloyd Shapley, the mathematician, at Rand. Schützenberger, who then was doing information theory kinds of things; he's a very good mathematician, an associate of Shannon, I think. Oliver Selfridge. Claude Shannon. Norm Shapiro was at Rand; I forget just what he did, but he was a computer type. Ray Solomonoff. Moore, the guy who did the early automata theory stuff. I don't know who Webster is. John Kemeny. Steele, I don't know. So you see, a large part of the world was there already. That's very interesting. Yeah. Well, you may not want to go on with this particular route. If you'd rather go somewhere else, let's do it. Here, Ed and I must have been worrying about letter features. And I was learning Hebrew at the time, so, you know, you've got to keep busy. I was contrasting the distinctive features in the Hebrew alphabet with those in English, in the Roman alphabet.
I don't know why I was doing some automata theory there, but these are probably notes on some reading on automata theory. I have no idea what that is. It looks like machine language something, in which I was trying in detail to find out what happened in the program. Oh, I gave you one wrong fact last time. Probably gave you millions of wrong facts. Clearly, I learned to program the 701 in 1954, not '52. And it was just before, well, I started learning before we went out to Rand that summer. And I know I was working on it the day that Al and I started out for Edwards Air Force Base, the trip I told you about. Right. Because when we started from the Rand parking lot, our first conversation was about the interpreter within the 701. This is a 701 program, I can tell that. It might have been done much later. Nothing else here reminds me of anything. These are, you know, not a very great help, because they're not even dated. Not much help for a historian. They may even date back to '54, because they're clearly 701 kind of notation, which I was actually tracing through. No, that's a list. That's an early list, maybe. Don't know what it is. Notes for one of our papers of about that time, describing the methods. Oddly enough, these notes are fall of '56, but they essentially use the language of the IRE paper. I'd have to work through them to find out why I was still using that language there. Still worrying about automatic programming, ideas of how it might be done. I still haven't found that one page with all the words on it. Some attempt to do formal theory about this. But notice it keeps oscillating back and forth between the actual programs and the psychology. In fact, you know, when I write a label "problem solving," it gets qualified by saying, comma, human. Yeah. There's more Hebrew alphabet.
Apparently, at this time I had gotten de Groot's Dutch book on chess and was translating pieces of it. From the Dutch? Yeah, and comparing it with some of Selz's stuff, which I had known previously. Now we're down to November, and this looks like a memo of Al's. This is probably a comment on a draft of a manuscript. Observations on the MIT conference, discussion of the conference. Oh, this is an earlier MIT conference, the papers of which were all circulated in a mimeograph thing that we had. What was that? It was a conference held, I think, in '51. I'd like to know about that. I was reading Wertheimer. This is more of the psychology reading, to see what we might get from it. References to the psychology literature, some of which I probably looked up. This is probably in that connection; there's a mention of the Eleanor Gibson article. These might even be earlier memos. I don't know what I was counting, but I was counting something. Well, how do you do a search of the literature? Do you just sit down with an index, looking for keywords? Is it associative? Oh, searching the literature? More often than not, I guess, I primarily use citations to other things. And the trick is to get the recent stuff, because citations only point backwards in time. For that, we use a variety of things. I would be inclined to take a few standard journals and look at the last year of their issues and just scan titles and abstracts. And then you can use something like the citation indexes, but I don't have enough sitzfleisch for that; I find that it just drives me up the wall. So I'm more likely to look at recent issues of journals until I've got three or four good recent citations. Then I work back, and you almost never miss anything that way.
During this meeting after the Dartmouth conference, the IEEE meeting, Minsky said he didn't know for a fact that there were any hard feelings, but there should have been, because he and several other people and you and Al were all essentially sharing a platform and all essentially talking as if you knew what you were doing, when in fact you and Al were the only ones who really had done anything, and the rest of them were quite fuzzy. Well, the only hard feelings, I don't know whether hard feelings describes it, but the tough negotiations didn't have to do with Marvin at all. Marvin is a very generous guy in this respect. I'm sure he has the same feelings all of us have about wanting to be the one who discovered this and that, but he's really a very generous guy. If he's as paranoid as the rest of us about it, he shows it less. But what did happen was that John McCarthy decided before the conference that he was going to report on the Dartmouth conference. He was going to tell them about our work, et cetera, and we allowed as how that wasn't going to happen. And so poor Walter Rosenblith, which is the name you saw there, who was supposed to chair the session, walked around the MIT campus with us, negotiating this during the noon hour, I think either the day before or just before we were supposed to go on. And finally it was agreed that John McCarthy would get up and give a general speech about what went on, and then Al would present our work in particular. So we were not feeling at all good about John, and I think along that dimension I've probably felt edgy about him ever since. But it did not involve the others in the Dartmouth conference at all. It was strictly a matter of John, and Rosenblith trying to be in the neutral corner, and we didn't think there was any neutral corner. But we were perfectly satisfied with what happened. John got up and talked in generalities, and then we got up and said what we had to say. 
But for that audience it didn't really matter anyway, because it wasn't clear that very many of them were quite ready to evaluate it then. At the same meeting Chomsky gave one of the first public performances of his three models of language. Oh, yes. He got up, I think we mentioned that in the history. Probably you didn't know it. And Oettinger, you know Tony Oettinger, was a discussant of Chomsky. So he did a typical Oettinger job. Nothing was right. No, it wasn't very important anyway. It was so outrageous, and Chomsky was very young then and he looked helpless. He's young and he still looks helpless, though we've learned better, you know. And so Al Newell was so outraged that he jumped out of his seat and up onto the platform and gave an impromptu defense of Chomsky, which was very eloquent. Where did machine translation go, if it went away from AI? Just what was that whole stream about? Maybe you can give me a little history of that. Well, I can give you a little bit, although again I don't know it too well, and on this one, from some things that came up in the book, I know that Al's history and mine are a little different. But I think we both agree that artificial intelligence, excuse me, automatic machine translation came out of the computer community and had very little communication with linguistics, and the idea that somehow or other it was Chomsky's linguistics that gave it the stimulus, or that it gave the stimulus to Chomsky, is probably not true in either direction. Probably both of them owe to the zeitgeist. But Chomsky was never really interested in automatic translation, and so far as he was ever involved in such projects, he did his own work on linguistics, and the automatic translators really didn't borrow very much from linguistics except the superficial ideas of what a grammar looked like. So it was a strictly artificial intelligence approach for most of it from the very beginning, and also a strictly syntactical one. 
Well, I never was very close to it. Tony Oettinger was quite important in the early days, but there were two or three groups working. You can get better information on that from other people. I guess Hays, you know, the former Rand Hays. Pardon? The Hays at Rand? Yeah, what's the name? Dave Hays. Dave Hays, who's now at Buffalo, isn't he? Yeah, that sounds right. Dave can probably give you a good deal of that. Oettinger could if he will, and he'll give it to you straight. He's not too impressed. It always lived in a fairly separate world of its own and never really at that stage picked up very many ideas from the problem-solving stuff, and I guess honestly we could say it never contributed many, because we weren't aspiring to natural language then. The first time I really got any urge to deal with natural language inputs was the year I was out at Rand when I started working on the Heuristic Compiler, probably motivated by seeing the progress that Bob Simmons was making, and he was operating in an information retrieval kind of mode rather than a language translation mode, that is his approach to things like that. So it was a fairly separate community. We watched it sort of at a distance, with interest, and some people were doing this. They were operating as though it were a fairly strict syntactic job: find the syntax, translate the vocabulary, generate the syntax of the other language. But it eventually came to naught. Well, it came to naught. First it came to naught scientifically, because it turns out that that isn't a job you can do that way. Since that was the way they were trying to do it, they didn't succeed. And then it came to a kind of grinding halt fiscally, because a report was written by John Pierce's committee which said that it might be a good scientific problem, but there wasn't any practical need for it. It just wasn't cost effective, he said, because you could get human translators. 
Yeah, that report is a published report. I'm sure I have a copy of it around here if you have occasion to look for it. So most of the projects then were cut down in scale, but then it got revived as people began to have ideas about doing something with semantics. But it never got revived to the point, as far as I know, where it's had equivalent levels of funding, and by now it's kind of merged into the general artificial intelligence movement, because in natural language processing nobody really takes translation as the most interesting task. The most interesting tasks are understanding language for various purposes other than translation, and almost all the progress that's been made in this rebirth has been made with semantics very much in the center of the stage. Do you remember Mortimer Taube? Yes, I remember he wrote a letter once, and he also wrote a little book. Yeah, called Computers and Common Sense, where he was taking all the artificial intelligence people to task. Yes. And one of the things he complained bitterly about was the fact that machine translation was taking the syntactic approach instead of the semantic one. Did he say that? He said a lot of other things, too. Well, good for him. I'm glad he was right about some things. Where is this sort of language study going on now? I think there's still a group at Texas. There's a fellow named Lehman down there, I think, L-E-H-M-A-N. I don't know, I'd have to check up. I don't know whether Dave Hays really does translation. He's interested in language processing of various kinds. He'd probably be a good resource on this if you want to find out more. I just don't keep close tabs on it. There are always rumors of somebody who has an actual working system that produces great translations. There's supposed to be one military system operating now. I just hear these things and they go out of my ears and I don't follow them up. 
Do you think that the collapse of the machine translation project had anything to do with bringing AI into disrepute? Because there are some people outside AI who feel that you damaged yourselves very seriously. Oh, yeah. I don't know whether the language translation project was cause or effect there, or neither. The idea of AI just evokes a great deal of affect in some parts. Well, I think it's very threatening to the idea of man as a unique creature in this world. And it's threatening for the same reason that Darwin is threatening, or Copernicus was threatening. You're making man just a machine, and that's very threatening to lots of people. If you ask me why, again, I can't answer. It's not threatening to me, but I can state as a matter of empirical fact that it's threatening to many people, and you know many people whom it threatens. Now, given that as a starting point, then such people are going to bang away at targets in sight. And what kind of targets? Well, here were all these language translation projects, many of which started out with optimism, as one should start out in any scientific endeavor. And then they weren't delivering quite what they had hoped for or promised. But I don't think their failures were in any sense the cause of this antagonism. They're some of the ammunition that can be used. Just as people have used our 1957 predictions as ammunition. They happen to have been pretty good predictions, but it takes a fair amount of explaining. You know, if they had come out just that way, we wouldn't need to explain anything. But to explain why they were fair predictions takes some explaining, and it's very easy to level a shot at them. So, yeah, if the language translation had gone well, then there would have been one target less. I don't think the attacks would have diminished very much if they hadn't had that target. 
Now, all right, let me be defensive. There's a very widespread belief that there are all sorts of people in the artificial intelligence field who make reckless claims. That's a large part of the criticism from outside. And the people they mean include Marvin, an excellent target; they include me, much less Al. Somehow we're supposed to be distinguished on this dimension, and we are, if you look at what we write separately. Al is more cautious in what he claims publicly than I am. And then you can mention some other people. But if you go around and look at other sciences, which maybe aren't so threatening, people make claims like that all the time. Look at the canons of behavior in astronomy today. Gold can go around, with the smallest scintilla, whatever the plural of that is, of evidence, and make a new kind of universe that expands or contracts or is permanently in one state or another. And cosmologists go around doing this all the time. And they're regarded as good scientists in astronomy, because that's part of the mores of that field. Ditto for geologists, with plate theories of the world. These all go way ahead of the evidence. And in some fields this gets institutionalized as acceptable. The AI people on the whole are much more careful in that sense of careful. They tend to stick much closer to the data in the kind of speculating they do. So people from a field which does less speculating look at a field where this is done, so to speak, and they say, oh, here are a bunch of publicity seekers, and so on and so forth. I think you've put the finger on it when you say the animosity people feel toward machines taking the place of human beings has a lot to do with it, because nobody really seems to care whether cosmologists make these great claims or not. You can be immune to that, or not be immune. But people really take personal offense at someone going around saying, in ten years this will happen. 
Well, this is the reason why I don't believe that the difference here lies in the behavior of the people in the field. I think the difference lies in the field itself and the feelings people outside it have about the field. But you are aware of this enormous... Oh, I certainly am. By the way, it even includes people who are not very far from the field. Occasionally it includes people who have been converted out of the field. I think Tony Oettinger is an example. I think Joe Weizenbaum is an example. People who have gotten religion in different ways. They are quite different types. Hmm. You think Weizenbaum has been converted out of the field? He was happily writing Eliza a few years back. I didn't hear that talk he gave that got people so upset the other day. Well, for two or three years now, certainly since the time I gave my Compton lectures up at MIT, he's been sort of denouncing everybody in sight. He wrote a letter to, what was it, Scientific American or Science or wherever it was, a long letter that appeared a year or two ago that Steve Coles got so upset about. It was mostly directed against me. Steve Coles wrote a great letter in response. Is his point mainly that you're barking up the wrong scientific tree, or that it's just wicked? It's wicked. No, in Joe's case it's wicked. In Tony's case, or Bar-Hillel's, it's the hopelessness of it. They make nasty remarks. I make nasty remarks all the time. You know, there are two guys who tried real hard, and the particular things they tried didn't go, which is always a good proof that it can't be done. That's amazing. Joe, I think, is a different case. I think he got religion in connection with the troubles, the student troubles. I'm no depth psychologist. Well, controversy is very much a part of the Gestalt of science, and one expects to have to defend one's hypotheses and so on. But there really does seem to me to be a higher degree of acrimony. 
Yeah, I think this would be comparable probably to what you would encounter if you looked back at the Darwinian controversies. It really gets people where it hurts. Yeah, I know this from my own experience. When I tell people what I'm doing, I get some very interesting reactions. Lou and Hazard sort of missed the point of my anecdote the other night at dinner, about not getting served dinner when I tried out the reproducing beast, but that was an example of the horror you can stir in people's hearts by even speculating about such things. I suppose eventually I will have to go into the popularizations and certainly read what he's written. But maybe I'm morally deficient. It doesn't seem wicked to me at all. But you can see why it could. Once you sort of pass the bound where you don't believe it anymore... It's hard to empathize. We may as well talk about that 1957 paper. Do you have a copy of that around, or maybe what I should do is bring in a copy? Which do you mean by the '57 one? The infamous one that everybody... Oh, the predictions. I don't know whether I have a copy. I probably do have a copy, but I remember that pretty well. I was talking about some of the predictions and the explanation you were talking about. Well, let me give the setting of this first. I was going to talk to an ORSA conference, which was here in Pittsburgh. And I wanted to talk about our work, but I wanted to talk about it in a way that was relevant to operations researchers. And I thought a relevant way to talk about it was to try to do some assessment. I say "I" in this, but almost all of these things are "we." I actually gave the talk, but Al and I of course worked this thing out together. My intent was to give an assessment of what the implications were of this new paradigm, if you please, for the fields of management and management science. 
And so the way of doing this was to try to be concrete, to try to give some for-instances, as it were, of the kinds of things you could expect to happen, because you can talk about this in the abstract till the cows come home, and that's very hard for people to... Then you're expecting them to draw the implications, and I was trying to draw the implications. So I took four things that seemed to me to be plausible extrapolations of what was going on then, all of these to be happening in ten years. I had done some social prediction before that, and since. I guess I should add that to the premises: this was not a new venture into making prophecies. I'm quite interested in the problem of how you make social predictions, and in the importance, under certain circumstances, of trying to make them, and I had been engaged in a major effort of this sort earlier, when I worked with the Cowles Commission on a report on peacetime uses of atomic energy back in 1950. So I thought that it was important to do this for this field. And we predicted then, because chess was already underway, a chess-playing program that would be world's champion in ten years; a musical composition that would have serious aesthetic content in ten years. The reason for predicting that one was that Hiller and Isaacson had already produced the Illiac Suite, which was not trivial and uninteresting. So that was almost there. The third was that most psychological theories would be stated as computer programs, and since we were going to do that, that seemed a reasonable one to say. And the fourth one, why am I blocking on the fourth one, do you remember? Chess, music, psychological theories... it'll come back to me in a moment. But you see, each one of those arose out of work that was already beginning. And ten years seemed a reasonable time in terms of what we thought would be the effort applied to those. 
We made no prediction about natural language, where we were far too conservative, because at that time that looked very far away to me, and to Al, I guess, too, and that moved much faster than we expected. So at the end of the ten years we didn't have our chess champion, but we had chess-playing programs, and there we just vastly misestimated two things. One, we overestimated how many man-years would go into this; and secondly, we underestimated how much very specific knowledge had to get poured into it. Maybe we left out some other things, but those are the only things I'm willing to admit that we left out. Again, on the music thing, the prediction was essentially correct, but even then much less labor went into it than we expected. So the biggest mistake we made, actually, in the predictions was an overestimate of how much this field was going to fascinate people and trap them into working in it. We just couldn't understand how people could stay out of it, and they managed to stay out of it in droves. There probably are more timid people in the world, even in science, than one likes to believe, who like to do things in well-structured environments where there already is a paradigm to work in. There probably are more normal scientists than revolutionary scientists than one likes to believe. Even among people I've had as students there are people who wouldn't march up to things like this, because with another kind of problem, well-structured, they knew at the end of a year they'd have a PhD thesis. What would they have with this wild stuff? We probably just very much overestimated the number of people who were willing to work in unstructured spaces, or relatively unstructured ones. Secondly, we underestimated the extent to which the computer science culture was going to be colored by the mathematics culture during the early years. And heuristics never appealed to mathematicians. 
There weren't any theorems in it, whereas things like automata theory had theorems in it, and with things like time-sharing and programming developments you could at least define programming languages. So I think we misestimated the culture out of which the scientists were being drawn and what they would be fascinated by. We misestimated the amount of adventurousness that would be necessary to operate in this field. I don't think we seriously overestimated the difficulty of the problem. Underestimated. We did specifically on chess, but that was just really a for-instance. Was that the question I was trying to answer? I said I was going to be defensive, and I was. I must be blocking for some reason on what the fourth prediction was. I thought I knew them just like this, and I call on them when I give lectures. There were frequently questions about them. Well, of course, people like what's-his-face out of Berkeley... Dreyfus. Dreyfus has taken them on. Oh, he attends my lectures when I come out there. Oh, does he? So I always manage to tell the anecdote of how MacHack trimmed the pants off of him. Oh! And by now that gets him very upset. Oh, you know about that? No. Well, there was an underground version of his book that circulated, called Alchemy and Artificial Intelligence. It was a Rand paper. Oh, there's a long story behind this. I'll only tell you a piece of it today. In that there was some really, really nasty stuff about the chess, because he was looking at our NSS chess program at Rand, and he knew that a ten-year-old child had beaten it, and a couple of things like this. And so he really played that hard. Later on, as this got to be known, the MacHack program was running, which of course was a very much stronger program. And somehow or other he was induced to play it by... what's the name of the guy whose program it is? The other guy at MIT. 
As part of his education. And he played the program, and it walloped him. And this game began to be circulated around, and finally it appeared in SIGART. And Dreyfus became very righteous at that point and wrote a letter to SIGART saying, you know, he was being... Oh, I think the SIGART thing had a quote on the top: "A ten-year-old child can beat the best computer chess program. Dreyfus." And then this game below it. So he wrote a very righteous letter to SIGART, and I wrote one in reply, which is published there, in which I quoted... I can produce those for you. I quoted his book in various respects. One of the things he was arguing in the book, or in the document that preceded the book, was not only that the chess program was going to play bad chess, but that it was going to play mechanical, non-human chess. But if you look at this game, it's a wonderful chess game, because it's a cliffhanger. It's two wood-pushers fighting each other, and they have these momentary great bursts of insight in which they get a fiendish plan to trap the other guy. Usually two moves deep. And alternately the guy almost falls into the trap, or he doesn't fall into the trap. And Dreyfus was being beaten fairly badly, and then he found a move which could have captured the opponent's queen. And the only way the opponent could get out of it, he couldn't get out of it against correct play, but the only way he could get out of it was to keep Dreyfus in check with his own queen until he could fork the queen and king and exchange them. And the program proceeded to do exactly that. And as soon as it had done that, Dreyfus's game fell to pieces, and then it checkmated him right in the middle of the board. So it wasn't mechanical at all. It was a typical game between human patzers, with these great moments of drama and disaster that go on in such games. It was wonderful. Well, there's a long history to that, but again, I think you'd have to know something about him. 
Richard Bellman was on the staff at Rand in mathematics, and it's pretty important to Bellman, as it is to all of us, to invent the things that get invented. And his candidate for the new great paradigm was dynamic programming. And he was going to do a lot of things by dynamic programming, probably including artificial intelligence. There's a lot of evidence that this was part of his program. So he became fairly hostile to Al and me, not in an active way, but in a kind of silent way. For a while he wouldn't speak to me when he passed me in the halls of Rand. On O.R. things we'd worked, not together, but we'd known each other well before, and we never had a quarrel about anything. He just sort of stopped speaking. Then he wrote that letter. He was the guy who objected to the '57 predictions. You know, he wrote a letter which had a deathless phrasing, which I think was not in the published version, that said, "A prophet need not be without honorarium in his own time." I think the editor scrapped that, which I was always sorry about. So as of '57 our relations with Bellman were a little tense, although I wasn't angry with him. I mean, it was Bellman our relations were a little tense with. I wasn't mad at Dick, but he apparently didn't want to talk to us very much. Dreyfus's brother, Stuart Dreyfus, worked for Bellman. A good mathematical analyst. And he didn't like the artificial intelligence stuff very much, and I don't know why, maybe because he talked to Bellman too much. I gave some talks up at MIT a couple of years after that, about 1960, in a series that Martin Greenberger arranged. I gave a talk in the series, and the Dreyfus brothers were in the audience. I guess Hubert was by then a member of the philosophy faculty at MIT. And I gave the usual stuff. It was kind of EPAM-ish. I used EPAM as a paradigm for human reading. 
And the Dreyfus brothers were so exercised by that that they asked permission, and received it, to insert half a page of discussion into the volume. I guess the volume did carry discussion, but this was not discussion that took place at the meeting. It was an afterthought they had. A nasty little diatribe about this preposterous stuff that was being peddled. I've got that also; you can find it. But just a paragraph. I think the idea of climbing a tree to reach the moon was in it already. Then Stuart, the older brother, managed to get Hubert a consultantship out at Rand for a summer. And he went out to Rand and wrote that artificial intelligence volume, which then got peddled as a Rand report. He had had no connection with Rand before or since. He had no technical background for this at all. But the fact that he was a consultant at Rand immediately gave him credibility. And that's, I guess, the whole story. I was going to say I don't mind being criticized. Of course I mind being criticized. But you know, that's fair game. I can play it the way a politician would play it. But the one part of this whole story I resent was the Rand name getting attached to that garbage. It was really false pretenses. Yeah. But that's the history. And I think there really is a tie back to the Bellman incident, although I have no evidence of that. Anyway, does Dreyfus do anything else? Does he have...? Smokes pot. No, he's a... Is his mission in life to undo you? I don't know. He's a humanist philosopher. And I suppose he writes other humanist and existentialist kinds of things. I don't know how big a part of his life this is, but it's been an important part, obviously. And again, I think there's a certain amount of affect. I don't think he's just doing this as a careerist. He really believes and feels deeply that we are the enemy. 
Do you view it as just the kind of criticism that anybody with a new hypothesis is going to come across, or do you really think that he has an axe to grind and that it goes beyond...? Well, I don't think he has an axe to grind in that sense. I think he's a philosopher, and it's a function of philosophers to look very critically at things. I always think they'd rather have questions than answers anyway. And I think he does this, and I think somehow or other, emotionally, this one hits him in the ribs. And so his aim is to make sure that this terrible stuff is discredited. Has he had much effect, that you know of? Well, I go up and down on that. There's this underlying sea of feelings, and he taps it. These feelings of worry and hostility, and he taps it very effectively, and he writes well, and his rhetoric is not without its sting. It's the usual question: do people make history, or do great movements make history? Great movements mediated through people, I guess, make history. And I would think that the financing, for example, of artificial intelligence, some people think it's very generous now, but I would think there would have been much more resources put into this if there weren't this tone of criticism continually leveled from outside. But whether any one critic really matters, I don't know. It's clear that this English physicist, who suddenly became an expert, came in and damaged the Edinburgh budget; there's no doubt about that. So I guess, yeah, it has an effect. Now, whether you counter it by silence or by counter-attack is a more difficult question. I oscillate on that too. I usually use counter-attacks just to prevent me from getting ulcers, rather than because I think they have any effect. That's what I was going to say, but that doesn't really seem to help. Although I heard Dreyfus speaking in the spring, just this last spring, and he'd really changed his tune. 
And what he was saying to this group of computer scientists was that, yes, artificial intelligence is the logical culmination of 2,000 years of Western philosophy, and now we've got to go someplace else. That's quite different from... Well, in a way different, but a very natural next step. Oh, yeah, because it's implicit in the nature of his criticism from the word go. He's always contrasting this mechanistic stuff with Gestalt-like holism. And to go from Gestalt-like holism to Eastern philosophy, in my map of the world, there's only a very narrow river between them, not an ocean. So that doesn't seem to me like a major change in direction. Maybe there's more charity in his tone. Yeah, I had the feeling, reading Dreyfus, that if somebody could produce a computer, and I mean in the sense of a machine, a box, that could do all the things in that room that many computer programs all around the country now can do, then he would have to eat a lot of his words. And that seemed to me to be a very trivial sort of objection. If somehow everything was under the same skin, then all right, maybe he'd be willing to concede it. Perhaps a lot of the problem that people have with thinking machines is that our notions of thinking are very broad. That is, we include daydreaming, and we include accounting procedures, as laymen do when they're talking about thinking. And so when I tell people what I'm doing, they say, oh, you mean like a bill I get from Horne's, and I say, well, no. And then when they finally do see that I don't mean that, they say, well, do computers dream? And I don't have an answer to that. No, there's not a good answer. The whole question of how you want to represent consciousness is not very clear yet. I think Marvin's done a little thinking about that. I haven't. I really haven't thought it through. Okay, well, thanks. We will get back to some of this. Let's close off here. This was a conversation with Herbert A. 
Simon at Carnegie Mellon on Wednesday, November 6, 1974.