We're very pleased that you could join us. Those of you who were here yesterday heard some very penetrating and inspiring presentations, and we also look forward to the program that we have before us today. I would like to remind you of a couple of activities that are available to you in whatever spare time you may have during the conference. In Alumni Hall, which is the building just to the southeast of this auditorium in the Student Union on the second level, there is a display. A number of well-known high-tech companies have equipment and materials available there for your examination, and we invite you to proceed there after these lectures and perhaps over the noon hour. The special art exhibit in Schaefer Art Gallery on the south end of the campus, which opened last evening, is also available for your viewing today. So we begin with the second day of Nobel 20.

President Kendall, friends of the Nobel conference. On behalf of the faculty of Gustavus Adolphus College, I am pleased to begin the ceremony of award of the degree Doctor of Science to Herbert Simon. Herbert Simon becomes the 64th Nobel Laureate to receive the Doctorate Honoris Causa from Gustavus Adolphus College. John Holti, Associate Professor of Mathematics and Computer Science, will deliver the citation. John Bungham, Associate Professor of Economics and Business, and Anne Walcott, Assistant Professor of Political Science, will serve as the Lictors.

Mr. President, today we honor Herbert Alexander Simon, the Richard King Mellon University Professor of Computer Science and Psychology at Carnegie Mellon University and the 1978 recipient of the Nobel Prize in Economics. Professor Simon is a polymath who has distinguished himself in many fields. While earning his A.B. and Ph.D. degrees in Political Science from the University of Chicago, he began the study of decision making in organizations that led to his seminal work, Administrative Behavior.
During his career, he has held academic and administrative positions at the University of California at Berkeley, Illinois Institute of Technology, and Carnegie Mellon University in Departments of Political Science, Industrial Management, Psychology, and Computer Science. He is the author or co-author of over 500 articles and over 20 books covering the wide spectrum of his interests. A member of the National Academy of Sciences and a fellow of many professional organizations, he has received many honors, including the Distinguished Scientific Contribution Award of the American Psychological Association, the A. M. Turing Award of the Association for Computing Machinery, and election as a Distinguished Fellow of the American Economic Association. In awarding him the Nobel Prize in Economics, the Royal Swedish Academy of Sciences cited his pioneering research into the decision making process within economic organizations. His observations and research challenged the classical economic theories that business firms behave as mathematical optimizers. By focusing on how decisions are actually made in the economic arena, he has adduced that bounded rationality leads to decision making by satisficing rather than by optimizing, and has thereby opened a whole new approach to the study of economics. Extending the scope of his inquiries from decision making processes to problem solving procedures, Professor Simon, in collaboration with Allen Newell and J. C. Shaw, pioneered the development of artificial intelligence with such computer programs as the Logic Theorist and the General Problem Solver. His subsequent research has employed information processing models of intelligence to explain human cognitive processes. In works for the layman, such as The Sciences of the Artificial and Reason in Human Affairs, he stunningly articulates a paradigm with which to comprehend the complexity of the behavioral sciences, thus drawing together many threads of his various lines of research.
President Kendall, I have the great privilege to present to you, upon the recommendation of the faculty of Gustavus Adolphus College, Herbert A. Simon, for the degree of Doctor of Science honoris causa. On the recommendation of the faculty of Gustavus Adolphus College and with the approval of its Board of Trustees, and by virtue of the authority vested in this institution by the state of Minnesota, it gives me great pleasure to confer upon Herbert A. Simon the degree Doctor of Science honoris causa with all the rights, privileges, and responsibilities thereto appertaining.

I think I will be in better wind if I remove a few of these accoutrements. The title of my particular address in this series of six lectures is Some Computer Models of Human Learning. I picked this particular title because I think that in many ways concrete examples are the best teachers of what cognitive science and that strange area of inquiry called artificial intelligence are all about. Before I get on to my examples, I think I need to say a few words in introduction and place my own remarks in the context of those that preceded mine yesterday morning. I won't ask for a show of hands as to who heard the talks yesterday morning, but I assume that many of you did. One of the things that was very evident in the talks yesterday is that an area as complex as cognitive science can be investigated at many levels. In the addresses of Professors Edelman and Milner, we saw the investigation beginning at the level of the biological brain, beginning at the level of tissues and cells. In the paper of Roger Schank, we saw the investigation beginning from the top down, beginning with the analysis of complex human behaviors and in particular verbal behaviors using natural language. I don't think we should be surprised that in an area as complex as this one, there is more than one direction of approach or even that there should be more than one level of theory.
The idea that we can take the complex behaviors we think of as thinking, problem solving, use of language, understanding, that we can take all of these behaviors and in one step of reduction explain them in terms of chemical and electrical events in neurons, seems rather extravagant, rather ambitious. Instead, this field is following the example of many other natural science fields of constructing levels of explanation, where the task of scientists is to reduce phenomena at one level of complexity to explanation in terms of concepts introduced at the next level down. And here we are, as I say, in good company with the other sciences. Nobody supposes that biology can't advance until we have a complete understanding of quarks. Nobody even supposes that solid state physics can't advance until such time as we have a full understanding of quarks. And that, of course, is very fortunate. It's very fortunate that science can sometimes be hung from skyhooks, because the foundations underground are often very mysterious and shifting indeed. And science often builds from both directions, from the top down and from the bottom up. And so we come to my map of this territory. I think each of the six of us has a slightly different map, so you can take all of these maps home and have an atlas. In my map of the territory, there are very definitely at least two levels and perhaps three: the neuron level, at which the very exciting work that you heard something about yesterday is going on, and the information processing level. Just one short word on terminology. By information processing here, I do not mean exactly what Professor Edelman meant by it yesterday. He was using it as a label for a particular point of view, a particular approach within a somewhat larger category that I will be concerned with.
So when I talk about the neuron level and the information processing level, I'm not making the same contrast that he was between what he called information processing and the population approach to the field. In the developments of the last 25 years, since it's been discovered that computers can do something besides number crunching, since it's been discovered that they are quite general symbol processing devices, that discovery has been exploited in at least two ways. One is in the direction called artificial intelligence, which can be very roughly defined as finding as many ways as you can to get computers to do clever things, for practical reasons or just because it's fun or a little of both. The other direction, which is more often nowadays labeled cognitive science and with which I think we're more concerned here, is the direction of using the computer to understand the nature of intelligence, the nature of mind, including the nature of the human mind. What it is, well really the old problem of mind and body, what it is that enables a biological organ like the human brain to think. Yesterday another distinction was made that will be rather important in my discussion: the distinction between intelligent systems, whether they be computers or human beings, which perform intelligent tasks, which play chess or make medical diagnoses or solve problems, and systems which learn to perform tasks. Because there's certainly one important difference that we know between computers and human beings: in order for a human being to be able to do anything, except that limited range of things which is evidently built in, we have to learn.
And since nobody knows our programming language, the internal programming language, the only way we can learn is by being exposed to experiences and to symbols from outside and somehow or other going through a process of transforming those stimuli and symbols in such a way that the internal state of the system is changed so that we're able to do new things. This process we call learning, a very impressive process if you look at its cumulative effects over a lifetime or over a childhood, sometimes a frustratingly slow process if we realize that a large number of us here are still in school after having started this process, well, the formal process at age six and the informal learning process from the time we drew our first breath. Something must be wrong with a process that has taken all these 20-odd years to get us where we are. So I think we have this dual feeling about learning: on the one hand it's very impressive that this can be done, and on the other hand it is sometimes painstakingly slow. With a computer, on the other hand, of course, you can open the lid, you're allowed to open the lid of the box and put things in if you know what to put in. And so computers, on some dimensions at least, can become instantaneously intelligent. They can become instantaneously pretty good chess players. Human beings probably could not reach the levels of the best programs today in less than five years of devotion to the game of chess. And most of us might never reach those levels even though we play weekend chess all of our lives. The focus of my remarks will be on learning, but if we look at the strategy of research in cognitive science over the past 25 years, we will see that the strategy has been first to try to understand human performance, first to try to understand how it is that a human being is able to solve a problem.
Having understood something about that, people are beginning to go back and ask: can we conceive of mechanisms, can we conceive of processes which could be implemented in something like a brain and can be implemented in a computer? Can we conceive of such processes which could arrive at this state of performance, which could in fact learn and not simply be programmed? And I will try to give examples of that. Now, what I will say proceeds from a hypothesis. It's rather pretentiously called the physical symbol system hypothesis; unless you have a rather formidable name for a hypothesis, nobody will pay any attention to it. So we call this one the physical symbol system hypothesis. By a physical symbol system I simply mean some kind of a device which can deal with patterns, and a pattern is almost any arrangement of things. There's the pattern of a human face which allows us to recognize it. There's a rather symmetrical pattern of the trestles that hold up the ceiling of this room. So by pattern I simply mean some kind of an arrangement which we can presumably learn to discriminate from other and different arrangements, or that we can learn to compare with similar arrangements, just as we can see the regularity of these trestles. What can we do with patterns, we now being human beings and also, it turns out, computers? Patterns can be read, that is, they can be ingested from some external sensory source. They can be written, that is, patterns can be created in the environment by the system itself. They can be stored in what we call memory. They can be kept around for a long time, more or less accurately and presumably with more or less reliability. Patterns can be put in relation to each other; in classical psychological terminology, they can be associated, and similarly in other pattern devices we can build up relations so that one pattern points to another pattern in the system. As I've already suggested, patterns can be compared.
I can compare these two patterns, match them together, and see that they are instances of the same pattern, the human hand, more or less similar instances in this case. You can have strict tests of comparison or looser tests of comparison. I guess that test didn't notice that one was a left hand and one was a right hand. Now what is the physical symbol system hypothesis? The hypothesis is that the necessary and sufficient condition that a system be capable of behaving intelligently is that it be a physical symbol system. The necessary and sufficient condition that a system be able to solve problems, to think, to use language, to understand, is that it be able to read symbols, that is, sense them; to write symbols; to set up structures of related symbols; to store those structures in memory; to compare symbols to see whether they're the same or different; and to behave differentially on the basis of whether those symbols turned out to be the same or turned out to be different. Now there are several corollaries to the symbol system hypothesis. The first corollary is that if the hypothesis is true, then it follows that computers can be made to behave intelligently. Because we know that computers are physical symbol systems; you can find that out by opening the box, so to speak. They have all of the simple capabilities that I mentioned of reading, writing; I won't go through the list again. You know the list now. So if the hypothesis is true, then computers can be made to be intelligent. And the second corollary is that if the hypothesis is true, if this is the necessary condition for intelligence, then human beings must be at least physical symbol systems, because human beings are capable of behaving intelligently. I haven't defined intelligence, by the way. I simply am using it in its common sense meaning.
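As a concrete gloss on that list of capabilities, here is a minimal sketch in Python of a "physical symbol system" with read, write, store, associate, and compare operations. The class and its method names are invented for illustration; only the list of operations itself comes from the lecture.

```python
# Illustrative sketch of the primitive capacities of a physical symbol
# system: read, write, store, associate, and compare patterns.
# All names here are invented for the example.

class SymbolSystem:
    def __init__(self):
        self.memory = {}                  # long-term store of patterns

    def read(self, source):               # ingest a pattern from outside
        return tuple(source)

    def write(self, pattern):             # emit a pattern to the environment
        return "".join(map(str, pattern))

    def store(self, name, pattern):       # keep a pattern around in memory
        self.memory[name] = pattern

    def associate(self, a, b):            # one pattern points to another
        self.memory[a] = b

    def compare(self, p, q, strict=True):
        if strict:
            return p == q                 # a strict test of comparison
        return set(p) == set(q)           # a looser test: same elements

s = SymbolSystem()
p = s.read("cat")
s.store("pet", p)
print(s.compare(p, s.memory["pet"]))                # True: same pattern
print(s.compare(s.read("act"), p))                  # False: strict test
print(s.compare(s.read("act"), p, strict=False))    # True: loose test
```

The point is only that these few operations, and nothing more exotic, are what the hypothesis claims is sufficient.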
We all know how we judge our friends to be intelligent, and of course all of our friends are intelligent, sometimes more intelligent than others. But we have all sorts of tests, and these tests generally consist of giving someone a task and seeing how successfully that task is confronted. Being intelligent is no unitary thing, because people can be very intelligent along some dimensions and very unintelligent along others. So that's all I mean by intelligence. And when I talk about a computer being intelligent, I'm going to apply to it the same kinds of tests I will apply to human beings. I'll judge it as intelligent by the range and kinds of tasks that it can perform which, if performed by a human, we would say required intelligence. So there will be perfect fairness in the tests that we're going to apply. So that's the hypothesis: that human beings must be physical symbol systems and that computers can be programmed to behave intelligently. And no one needs to accept the hypothesis or reject it in any hurry. There's plenty of time to consider it. It's an empirical hypothesis. We test it by looking at empirical facts. We test it by designing experiments. For example, if we want to know whether a computer is intelligent in being able to perform a certain kind of task, we can try to write a computer program that will enable the computer to perform tasks of that class, and as wide a class as possible. If we want to know whether people, in fact, are intelligent because they behave as physical symbol systems, because they have these properties of manipulating symbols in their brains, if we want to test that hypothesis, it's a little bit more complicated. And the usual strategy of testing that's been adopted in cognitive science over the past 25 years is something like this. One writes a computer program. We already know that the computer is a physical symbol system.
One tries to write such a program so that it can perform some intelligent task that we might ask a human to perform, and perform it in a demonstrably human way. Not flies as an airplane flies, but flies as a bird flies, if we want to understand birds. Not play chess by trying millions of combinations, which a computer can do and the best programs do, but play chess by looking very selectively at the important features of the board, which we know to be the way a human being, a skillful human being, plays chess. We can try to write computer programs and then do with them what we do with any theory. How do we test a theory? We let the theory generate its consequences. In this case, we let the theory, the computer program, generate a stream of behavior. We let it play a game of chess, or a move in chess. We let it solve an algebra problem, and we see whether step by step it seems to follow the same kinds of procedures and make the same kinds of errors and get into the same kinds of backwaters that a human being does in solving difficult problems. And that kind of evidence has been accumulating for about 25 years now about a limited range of human tasks. We still are far from testing this hypothesis over the whole range of things which human beings do and call intelligent. And Professor Schank yesterday called to our attention some of the difficulties in simulating by computer human behavior in areas like everyday use of language. There are indeed such difficulties, and there's plenty of research to be done in continuing to test this hypothesis: work for all of you who are motivated to join this game, enough work to last for several generations, I'm sure. Now as I say, the early activity in this field has mostly been directed to performance. To getting computers to play chess, to make medical diagnoses, to solve the Tower of Hanoi problem or getting the missionaries and cannibals across the river, whatever your favorite puzzle happens to be.
In recent years, progress has reached a point, and enough ideas have been generated about possibilities for the learning process, that there has been a very strong renewal of research on learning in the area of cognitive science. That research has resulted in the development of a number of computer systems that are capable of learning, some of them in a quite humanoid manner, others in manners that may or may not bear any close relation to human learning. In the remainder of my talk, I'd like to refer to three of those examples. I picked these three as examples that I'm especially familiar with. They're not in any sense a random or even a representative sample of work in this field. First of all, for the first 50 years or so of the history of that field, from the time of Ebbinghaus and Wundt in Germany, psychologists were very fascinated with the way in which people did verbal learning. I guess in those days in school, well, I'm sure in those days in school, children still had to learn lots of poetry. They obviously had to learn lots of foreign-language vocabulary. I am told that children don't memorize nearly as many things in school anymore as they did in days gone by, and I don't know quite what to think of that. I guess it's a good thing, but at any rate, psychologists became preoccupied with these kinds of verbal learning processes, even rote learning processes, that in fact constituted a large part of school activity. And there's a vast body of psychological literature which describes what human beings do when they're faced with a verbal learning problem. Of course, Ebbinghaus wanted to do this really scientifically. He wanted to control all the variables, so he invented something called nonsense syllables, which didn't have any meaning, and did most of his experiments with nonsense syllables.
Well, we won't get into nonsense of that kind, at least this morning, but talk about learning in general. There exists, and has in fact existed for now over 20 years, a computer program generally known as EPAM, which may or may not have been intended to stand for Elementary Perceiver And Memorizer. Some people think it stands for Epaminondas, but I don't know; you can take your pick. There's a program called EPAM which in fact is capable of learning either series of things, the way we learned the alphabet, or paired associates of things, the way we might learn a foreign language vocabulary. There's no claim that EPAM understands semantically the things that it learns. What EPAM is, is an attempt to model certain aspects of the human learning process when humans are learning things more or less by what we call rote. And today we can explain a very large part of the main phenomena that were discovered in the verbal learning laboratory of psychologists. We can explain those phenomena with the EPAM program in the sense that if you give EPAM the same materials that were given to the human learners, it will produce the same kinds of phenomena. For example, it has been many times demonstrated that syllables consisting of three unrelated letters, real nonsense syllables, take about three times as long to learn as simple one-syllable words, English words, or whatever your native language is; could be Swedish words, I guess, here, or Dutch words, take your pick. About three times as long; why that mysterious number three? I don't want to go into the anatomy of EPAM, but here is a program that predicts, in fact, that English words will be learned about three times faster than three-letter nonsense syllables. There are various phenomena with regard to how the similarity between items that are being learned affects the learning. In some cases it facilitates it, in some cases it interferes with it.
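The lecture deliberately skips EPAM's anatomy. For readers who want something concrete, here is a toy discrimination net in the same general spirit (an illustrative sketch, not the actual program): it learns nonsense syllables over repeated trials by growing one letter test whenever two syllables are confused, so a syllable is "learned" once the tests suffice to tell it apart from the others.

```python
# Toy discrimination net, loosely in the spirit of EPAM (illustrative
# only; not the real program). Internal nodes test one letter position;
# leaves hold the stored image of a syllable.

class Net:
    def __init__(self):
        self.root = {"image": None, "tests": {}}

    def _find(self, item):
        """Sort the item down the net by successive letter tests."""
        node, pos = self.root, 0
        while pos < len(item) and item[pos] in node["tests"]:
            node = node["tests"][item[pos]]
            pos += 1
        return node, pos

    def recognize(self, item):
        return self._find(item)[0]["image"]

    def learn(self, item):
        node, pos = self._find(item)
        if node["image"] is None:
            node["image"] = item                  # first exposure: store it
        elif node["image"] != item and pos < len(item):
            # Confusion with another syllable: grow one more letter test.
            node["tests"][item[pos]] = {"image": item, "tests": {}}

net = Net()
for trial in range(3):                            # present the list in cycles
    for syllable in ["DAX", "DOK", "GIB"]:
        net.learn(syllable)
print([net.recognize(s) for s in ["DAX", "DOK", "GIB"]])
# -> ['DAX', 'DOK', 'GIB']
```

Early in training the net misrecognizes (DAX and DOK are confused until a second test grows), which is the flavor of the interference and similarity effects mentioned above: similar items need more discriminating tests, hence more trials.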
EPAM can predict the major effects along those dimensions. Out of the research on the EPAM model and similar models, combined with laboratory research in which we study human subjects, two important ideas began to emerge, though one of them was around before then, in fact introduced by George Miller in 1956. One was that terribly important in human performance is the fact that we have something called a focus of attention, otherwise known as a short-term memory, and that everything we pay attention to, everything we do that requires some modicum of attention, has to be squeezed through the bottleneck of our short-term memory. When we add a column of figures, those figures one by one, as well as the running total, have to be in short-term memory. And this is a terribly narrow bottleneck, and it turns out to be one of the most important features of the architecture of the human thinking system in determining the rate at which we can think and in determining the way in which information has to be organized in order that we can carry out complex sequences of thoughts. Today we speak of the short-term memory, or some of us do, as though it were a little box which had six little compartments in it; some people think it has seven compartments in it, and we won't quarrel about one compartment more or less. We can hold in that short-term memory something like a half dozen familiar items, where a familiar item might be anything which you have become familiarized with. For people in this room, familiar items might be any one of the 50,000 English words that are in your vocabulary. You maybe don't know that you have 50,000 words in your vocabulary, but this has been estimated for you, and it's of that order of magnitude. It may actually be twice that number, but we won't quibble about factors of two here.
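The "little box with six compartments" can be caricatured in a few lines of Python: a short-term memory that holds six chunks, where a chunk is a familiar word if one is recognized and a bare letter otherwise. The capacity of six and the tiny vocabulary are illustrative assumptions, not data from the lecture.

```python
# Sketch: short-term memory holds a fixed number of chunks, not letters.
# Capacity and vocabulary are made up for the illustration.

CAPACITY = 6
VOCAB = {"CAT", "DOG", "SUN", "ICE", "OAK", "RED", "SKY"}  # familiar chunks

def chunk(stream):
    """Greedily group a letter stream into familiar chunks."""
    chunks, i = [], 0
    while i < len(stream):
        word = stream[i:i + 3]
        if word in VOCAB:
            chunks.append(word)        # one familiar word = one chunk
            i += 3
        else:
            chunks.append(stream[i])   # unfamiliar letter = one chunk
            i += 1
    return chunks

def recall(stream):
    return chunk(stream)[:CAPACITY]    # overflow is simply lost

# 18 letters, but only 6 familiar chunks: recalled perfectly.
print(recall("CATDOGSUNICEOAKRED"))
# 9 unfamiliar letters: the last three fall off the end.
print(recall("XQZJVWKPB"))
```

Eighteen letters survive intact when they parse into six familiar words; nine unfamiliar letters overflow the same six compartments. That is the whole trick of chunking.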
So you can hold any six of those in memory: if I gave you a list of six, you can hold them in memory long enough to repeat them back to me; but watch out, if I give you nine, you're going to be in trouble. There is a vast body of experimental evidence for this, which is in turn compatible with the EPAM model and the effects of which are predictable by the EPAM model. We think today that the structure of the short-term memory has a very great deal to do with the way in which knowledge is stored in the mind of an expert. We believe that a great deal of what the expert is able to do, he or she is able to do because of an ability to recognize cues in a familiar situation, to recognize cues in any situation within the domain of expertness, and having recognized those cues, to use them to access relevant information in long-term memory. It doesn't surprise any of you that if you walk down the street and see the face of a familiar friend, you immediately recognize that face. Moreover, you can probably, not all of us are real good at this, but you can probably retrieve from long-term memory the name of the friend. Not only the name, but a lot of facts about the friend. We think that process of recognition followed by retrieval from long-term memory underlies most human expertise. We even have estimates today of how many of those chunks, the same little familiar chunks you store in short-term memory, how many of those chunks it requires to make an expert, and the answer again comes out, order of magnitude, 50,000. A chess grandmaster can recognize 50,000 different kinds of features and patterns of pieces that are likely to occur on a chess board during a game. Just to give you a sample of the kinds of experimental evidence that buttresses up hypotheses like this, I'd like to give you a short description of a classical chess experiment. You take a chess board, 8 by 8 squares, and arrange on it the pieces from a well-played game from, oh, maybe the 20th move.
A game that's well played but unfamiliar to the subjects that you're going to use in the experiment. You display this position to your subjects for 5 to 10 seconds, then withdraw it and ask them to reproduce the chess board. If your subject is a master or a grandmaster, the board will be replaced almost perfectly, with better than 90% accuracy, even though there are 23 pieces on average on such a board. If the player is something less than a master, or an expert player at least, he or she will be lucky to get 6 pieces back on the board right. Oh, you will say, you need to have special kinds of eyes, special kinds of visual imagery, to be a chess master or grandmaster. Well, let's take the next step in the experiment. At the next step, we simply take the same pieces and arrange them at random on the board and expose them for 5 seconds. And now, the ordinary chess player replaces his 6 pieces on the board just as he did before, and the master or the grandmaster also replaces about 6 pieces on the board. So it's nothing about the eyes. It has a great deal to do with how many hours have been spent in staring at chess boards; misspent youth, you might say. Well, it depends on how you feel about chess. How many hours have been spent in staring at chess boards? How many patterns have become familiarized and have been stored away in long-term memory, associated with information also about what to do when you see such a pattern? And so we see the expert in every field of expertise not only recognizing familiar friends in the situation he sees, the doctor looking at your spots or whatever it is you come in with and saying, oh, yes, but forming an immediate hypothesis, which may be subject to revision, an immediate hypothesis about what might be going on in the situation.
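The two-step experiment can be mimicked in a toy simulation. Everything here, the board encoding, the 4-piece patterns, the numbers, is invented for the sketch; the point is only that the same six-chunk short-term memory recalls a whole sensible position when long-term patterns cover it, and only about six pieces when they don't.

```python
# Toy simulation of the chess-recall experiment. Board encoding,
# pattern sizes, and all numbers are invented for illustration.

STM = 6   # short-term memory capacity, in chunks

# A toy "well-played position": 12 (piece, square) pairs.
game_position = [("P", s) for s in range(8)] + \
                [("R", 8), ("N", 9), ("B", 10), ("Q", 11)]
# The master's long-term memory holds three familiar 4-piece patterns.
known_patterns = [frozenset(game_position[i:i + 4]) for i in range(0, 12, 4)]

def pieces_recalled(position, patterns):
    chunks, left = [], list(position)
    for pat in patterns:                   # recognition: match known patterns
        if pat <= set(left):
            chunks.append(list(pat))       # a whole pattern fills one chunk
            left = [p for p in left if p not in pat]
    chunks += [[p] for p in left]          # unrecognized pieces: one per chunk
    return sum(len(c) for c in chunks[:STM])

# Scramble the squares so that no familiar pattern matches.
scrambled = [(pc, (sq * 7 + 13) % 64) for pc, sq in game_position]

print(pieces_recalled(game_position, known_patterns))   # master, real game: 12
print(pieces_recalled(scrambled, known_patterns))       # master, random: 6
print(pieces_recalled(game_position, []))               # novice, real game: 6
```

Same eyes, same memory box; the difference is entirely in how much of the position the stored patterns can cover.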
And so we think today that that kind of performance, that kind of expert intuition, if you want to call it that, or insight, can be given a perfectly reasonable explanation in information processing terms, in terms of these recognition processes. Now let me go on to another kind of learning, because all of us, particularly those of us who are teachers here, and I hope most of the students, know that rote learning, except for very special purposes, really isn't a very good thing. We don't encourage it in our students. Rote learning is great to get through the Friday quiz, but by Monday it seems all to have evaporated again. So we really want to talk about learning with understanding. What does it mean to understand what we've learned? Roger Schank pointed out to us yesterday that the word understanding may mean many different things, and I'm only going to touch on one or two of them. All of us here at some time in our lives had occasion, with greater or lesser pleasure, to learn to do algebra, at least simple algebra. We learned to solve equations. The teacher would write on the board, 3x plus 4 equals x plus 10. If you have paper and pencil you can solve that. I'm not going to ask you to; we won't grade you at the lecture here. 3x plus 4 equals x plus 10. Probably most of you here still remember what to do about that, and in a moment or two you would have solved the problem. Probably fewer of you, if your child came up and said, Daddy or Mommy, I'm having trouble with my algebra tonight, will you help me, would have immediately on the tips of your tongues a set of ideas as to exactly how the trick is done. Maybe you can do it yourself, but do you know what it is you are doing while you do the trick? Well, there's been a good deal of study now of how people solve simple algebraic equations.
It's a simple skill, but like an awful lot of the skills that we use in our school courses, particularly but not exclusively in the areas of science and mathematics. In its broadest terms it goes something like this. I won't even refer to algebra in the first version. I am in a certain situation, standing in front of this podium. I need to be in a different situation. Eventually I'm going to have to walk down and get out of here. Now I have to ask, what's the difference between where I am and where I want to be? Well, in this case it's a difference in position, or in place. What do I know? What knowledge do I have which tells me how to reduce such differences? Well, there are bicycles, there are taxis, there's something called shank's mare. Bicycles and taxis aren't very appropriate here. It would be a little hard getting down the stairway. It might be hard getting them into the hall. But I always do have my legs. And so we've solved this very simple problem. When the time comes I'm going to walk down. A very deep solution; it takes kids a while, up to a couple of years, before they're able to do that. The thing I've been describing with this stupidly simple example is something called means-ends analysis. I'm here, my goal is there. I detect a difference between where I am and the goal. I use that difference as a cue to reach into memory to see whether I have any experience in reducing differences of that kind. Yes, there is such a means of reducing differences. And I apply it. I now am closer to my goal. Maybe I'm at the head of the stairs. I have to have a new difference-reducing means called walking down stairs, et cetera. Well, how do we do algebra by means-ends analysis? Very simply. I have an expression like 3x plus 4 equals x plus 10. What do I want? Well, I'll have a solution if I have an x and an equal sign, I'd better do it in your direction, an x and an equal sign and a number.
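The podium-to-exit story is exactly the loop a means-ends analysis program runs: find a difference between the current state and the goal, use the kind of difference as a cue to retrieve an operator known to reduce differences of that kind, apply it, repeat. The state representation and operator table below are invented for this sketch.

```python
# Toy means-ends analysis for the walking example. States, differences,
# and the operator table are all invented for the illustration.

def difference(state, goal):
    """Which kinds of difference separate the state from the goal?"""
    return {k for k in goal if state.get(k) != goal[k]}

# Operators indexed by the kind of difference each one reduces.
OPERATORS = {
    "place": ("walk",        lambda s, g: {**s, "place": g["place"]}),
    "level": ("take stairs", lambda s, g: {**s, "level": g["level"]}),
}

def means_ends(state, goal):
    trace = []
    while (diffs := difference(state, goal)):
        kind = sorted(diffs)[0]            # pick one difference to reduce
        name, reduce_diff = OPERATORS[kind]  # retrieve a difference-reducer
        state = reduce_diff(state, goal)
        trace.append(name)
    return trace

start = {"place": "podium", "level": "stage"}
goal = {"place": "hall floor", "level": "floor"}
print(means_ends(start, goal))   # -> ['take stairs', 'walk']
```

Each pass through the loop is one "I'm closer to my goal" step; the new state exposes a new difference, which cues the next operator.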
Of course, it has to be the right number; it doesn't pay off if you just write down any number, that would make the problem too easy. I need an x and an equal sign and a number. How do I change an expression like the one I have into one that consists of just an x and an equal sign and a number? Well, we do have all kinds of operators. We're allowed, without changing the solution of the equation, to subtract a number as long as we do it from both sides. We're allowed to add a number as long as we do it to both sides, divide both sides by the same number, except watch out for zeros. That's about all we're allowed to do, isn't it? Those are the only actions. I can walk, I can climb stairs, that's about all I can do. I can add, subtract, multiply and divide by the same quantity on both sides of the equation. Well, if I have an equation which has not only x's but numbers on the left side, not only numbers but x's on the right side, and I want to reduce it to an equation with just an x and an equals and a number, I can reduce those differences one at a time. Let's subtract away the number on the left side which I don't want in the final result. Let's subtract away the x on the right side which I don't want in the final result. Let's divide by the 2, the coefficient of x which, if you work this out, you will find you have on the left side by now. We will end up with an x and an equals and a 3, assuming I did it right, and we can stick that back in the original equation to check. Now, we know how to describe a system like that in terms of computer programs today. We know how to describe it perfectly well. And when we do describe it in that way, the processes that enable a person to solve this equation look very much like the processes that allow the physician to be an expert diagnostician, and that allow the chess player to be a grandmaster chess player.
In the case of the physician: see those spots, guess that it might be X, suggest the following tests. If you're the chess player: see that open file, consider putting a rook on it. Well, don't do it in a hurry, think about it a while, but consider putting a rook on it. If you have an equation and it has a number on the left-hand side, consider subtracting that number from both sides. If you've studied old stimulus-response psychology, behaviorist psychology, you can think of this as a stimulus followed by a response. If you studied computer science instead, you can think of it as something which nowadays we call a production: a condition, or cue, followed by an action. And the hypothesis here is that most human skill is embodied in the brain in the form of productions, a perceptual capability of responding to a cue, followed by retrieval of information about the action to take. Now, the expert who does this algebra problem, and algebra problems like it, doesn't have to have 50,000 chunks. Probably before becoming a PhD mathematician that person has to acquire the 50,000 chunks, or maybe 100,000 chunks, but not to do algebra problems of this degree of difficulty. And mind you, learning to do that is about one week's work in the algebra course. The set of chunks that you need, the set of productions you need to do that, is in this case just four in number. I've already said them to you several times. Any system, including a computer or a human brain, can do it if it has a way of storing in it the following set of productions. If there's a number on the left, subtract it from both sides. If there's an x on the right, subtract that from both sides. If there's a number multiplying x on the left and the number isn't one, divide both sides by that. And if you have an expression that looks like x equals N, halt and check. Four productions will do a whole range of those algebra problems.
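The four productions just enumerated can be written down as a tiny production system. Here is a minimal sketch in Python; the coefficient representation of the equation (a, b, c, d for a·x + b = c·x + d) is my own illustration, not anything from the lecture:

```python
from fractions import Fraction

def solve_linear(a, b, c, d):
    """Solve a*x + b = c*x + d with the four productions described above.

    Each production fires when its condition (its cue) is recognized
    in the current state of the equation."""
    a, b, c, d = map(Fraction, (a, b, c, d))
    trace = []
    while True:
        if b != 0:            # a number on the left: subtract it from both sides
            d, b = d - b, Fraction(0)
            trace.append("subtract the constant from both sides")
        elif c != 0:          # an x on the right: subtract it from both sides
            a, c = a - c, Fraction(0)
            trace.append("subtract the x-term from both sides")
        elif a != 1:          # x has a coefficient other than 1: divide through
            d, a = d / a, Fraction(1)
            trace.append("divide both sides by the coefficient of x")
        else:                 # the expression looks like x = N: halt and check
            trace.append("halt and check")
            return d, trace

# 3x + 4 = x + 10
x, trace = solve_linear(3, 4, 1, 10)
print(x)           # 3
print(len(trace))  # all four productions fired, once each
```

Note that the program has no plan; each step is a recognition of a cue in the current expression, which is exactly the point being made.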
Now, if that does in fact have any resemblance to reality... but first let me make one preliminary statement. I do not mean by that that anyone supposes that you can learn to do algebra by memorizing those four rules. When I say those four rules have to reside in the brain in order for you to do algebra, what I mean is that the perceptual skill of recognizing the cues, and the access to memory that follows that perceptual skill, the access to the knowledge of the actions to take, has to be available in memory. And for most of us, at least those of us who have continued to use our algebraic skills, those four productions are so well stored there that they no longer require conscious attention, or very much conscious attention, for their application. I don't have time this morning to go into the question of where the boundary of consciousness is, but certainly there are processes that automate the application of a great many of these productions, so that a simple recognition of a situation takes place and an action follows it, and that's characteristic of expert behavior. Now, if that does correctly characterize, or even partially correctly characterize, the way some skills are held inside the human skull, then it becomes an interesting question how learning might take place. How might a person acquire such production systems? And again, the computer comes to our aid, because today we know how to program so-called adaptive production systems. Now, there's nothing very formidable about an adaptive production system. An adaptive production system is simply a computer program which consists of productions, and you can write as general computer programs as you like in this particular format, it turns out. You can create a Turing machine out of productions. So an adaptive production system is simply a set of productions which includes a subset that are capable of...
that are capable of constructing, of generating, new symbolic expressions in the form of productions. After all, these productions must be stored in the brain in whatever symbolic language the brain is using, and what an adaptive production system does is to create new symbol structures of exactly the same kind. It puts them into the system. Once they are in the system, they are operative and can run along with all the other things that are there already. Well, how might that work with the algebra example? Suppose we had a clever student, and this idea was suggested in the first place by watching clever students. Suppose we have a clever student who encounters this chapter of the algebra textbook for the first time. One thing the student may do is to read it through very thoroughly; very unlikely. Another thing the clever student might do is page through and see if there are some worked-out examples. And surely there will be some worked-out examples in that chapter. And then the student will study the worked-out example very carefully, study it in the sense of, well, what makes it work? What makes it tick? Why does it tick? And after a little while, that clever student may be able to work simple algebra problems, may be able to solve simple linear equations. We see that happening all the time with our clever students in school. They do a lot of their learning from worked-out examples. Sometimes worked-out examples in the classroom, on the blackboard; very often worked-out examples in the textbook. Well, what is going on there? You see, all of the information that's needed to construct the production system that we use to solve that problem is present in the example on the page.
Because the student can compare successive lines of the solution, can see what changes have been made, and, by comparing those changes to the final result, the x equals a number, can see the motivation for those changes, can see the gradual reduction of difference between the original expression and the final expression. As you see, I'm waving my hands, which in computer language means I'm going over all the difficult issues, but there are running programs today which do this and help to keep us honest in describing this kind of research. So essentially it is done by using the same kind of means-ends analysis, but using it in a questioning mode: such and such an action was taken, the person moved over there, and why? Let's compare it with the goal that was being aimed at. Let's now define a difference in terms of how we got nearer to that goal by that action. Let's associate that difference with the action that was taken, and now we have a new production. We stuff it into the system. It's ready to go the next time an algebra equation is encountered. Now, can this work with human beings? Another game you can play, then, is to construct teaching material on this basis. You can construct some problems. In fact, the example I'm going to give you was not this algebra one. We constructed materials on factoring quadratic expressions. The reason we didn't do the algebra but did the factoring of quadratic expressions had to do with the particular point in the school year when we were ready to do the experiment. The students had already done the algebraic equations and they were just coming on to factoring. We constructed a set of written materials, only taking three or four pages, and consisting wholly of worked-out examples for students to study. And carefully designed worked-out examples, so they would introduce the differences between present situation and goal one by one. We presented these to classes of students.
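The hand-waving above can be made a bit more concrete. A minimal sketch, using the same hypothetical coefficient representation (a, b, c, d for a·x + b = c·x + d): comparing two successive lines of a worked-out example, detecting which difference was removed, and pairing that difference with the action that removed it yields a new production. This is my illustration of the idea, not the running programs Simon refers to:

```python
def infer_production(before, after):
    """Compare two successive lines of a worked-out example and induce
    the production (cue -> action) that transformed one into the other.

    Equations a*x + b = c*x + d are held as tuples (a, b, c, d)."""
    a1, b1, c1, d1 = before
    a2, b2, c2, d2 = after
    if b1 != 0 and b2 == 0:
        return ("a constant on the left", "subtract it from both sides")
    if c1 != 0 and c2 == 0:
        return ("an x-term on the right", "subtract it from both sides")
    if a1 not in (0, 1) and a2 == 1:
        return ("a coefficient on x", "divide both sides by it")
    return None

# The worked-out example for 3x + 4 = x + 10, line by line:
worked = [(3, 4, 1, 10), (3, 0, 1, 6), (2, 0, 0, 6), (1, 0, 0, 3)]
for before, after in zip(worked, worked[1:]):
    cue, action = infer_production(before, after)
    print(f"IF {cue} THEN {action}")
```

Each inferred production would then be "stuffed into the system," ready to fire the next time its cue is recognized.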
We happen to have done this in China, but I think it would work just as well with American students. We'll find out in the coming year. We presented this to classes of students, and within 20 minutes 90% of the students in those classes were factoring quadratics. I don't know whether you think that's a great thing, or whether you did that very easily when you studied algebra, but usually about two days of the algebra curriculum is devoted to the task of learning to factor quadratics. Now, I don't want to make too much of a single experiment; it will certainly need to be repeated many times, and it will need to be compared with other methods of teaching. But the point I wish to make is that not only can you build for a computer an adaptive production system that will teach itself to solve equations in algebra, or to factor quadratics, by looking at worked-out examples, but you also can use this method for teaching human children, and it appears at first blush at least to be a not completely ineffective method. The first comment we got from the teacher of the first class in which we ran such an experiment was, "And you didn't say a word to them." He evidently had the faith that all of us college professors have, that the way to spread the germs of knowledge is through the spoken word, by spraying the words onto our audiences, and it seemed to him a little magical that the whole thing could be brought off without oral words. There were written words, or at least written equations.
The other point I wish to bring out by this example is that cognitive science has now reached a point in its development where it is no longer dealing just with toy tasks. It's no longer just dealing with getting those missionaries and cannibals across the river; it's dealing with school-level tasks. And for that reason, if enough excitement can be generated about this area of research, and I can't imagine why it doesn't generate excitement, if enough bright people will come into this area of research, I think it can be brought within a reasonable period of years into real relevance to the educational process, and I think that's an important task for us ahead. Let me turn to a quite different example. As we all know, computers can only do what you tell them to do. So even these systems I've been describing were only doing what you tell them to do; an adaptive production system was simply learning from examples because you told it to learn from the examples. Of course, if you think about it that way, you can tell computers to do quite general things. Well, how general can you tell them? How vague can the specification be? The way to study that, and again the way to get some hint as to how human beings perhaps attack vague problems, like many of the problems we deal with in everyday life and many of the problems we deal with in science, is to see whether we can in fact give a computer program more general tasks. I suppose we would think that the task of discovering laws in raw data might be a task that would require a certain amount of independent thinking; maybe we'd even want to use that sacred word, creativity, to apply to a human being who's able to take some data and out of this data find order, find hidden in the data some kind of regularity or scientific law. So let's take that as our arena to study. Let me give you an example first that comes not out of scientific laws but out of intelligence tests.
One standard kind of item, and you've all had it at one time or another, is the so-called Thurstone letter series completion task. You're given a sequence of letters and you're asked, what comes next? Well, we'll all go through one together. A, B, M, C, D, M, what's next? Oh, don't be bashful, you all know the answer. E, F, M. Now, there are some mathematicians here who will say anything could have been next. That's perfectly true; I could have started over again, A, B, M, or done anything. However, I would advise you that if you take such a test and want to get into college, you'd better answer E, F, M. Now, what was involved? What did you do in doing that? What's this mysterious process of seeing a pattern? Not so mysterious. It's based on the elementary symbol processing procedures that I mentioned at the very beginning of my talk. You all heard in the sequence I read the letter M; you heard it twice. You compared two symbols and they sounded the same. A, B, M, C, D, M. So you had an anchor point there. You could conjecture, at least, that perhaps every so often an M would appear. Secondly, all of you know about the English alphabet. You know that B follows A, that C follows B, that D follows C, and therefore you know that a reasonable thing to follow the D is an E. And it's in fact the knowledge stored in memory of the alphabet, plus the knowledge stored in memory about identity of letters, plus your ability to recognize situations where these simple patterns show up, the fact that when one thing follows another in orderly succession, the way the lights do down the roof of this ceiling or the cards do across the table here, you are prepared to notice that repetition and you're prepared to notice that sequence, in alphabets, in numbers, and in a few other series.
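The induction just described, keying only on repetition and alphabetic succession, is simple enough to sketch in code. This is my own minimal illustration of the mechanism, not the Simon-Kotovsky program itself:

```python
import string

ALPHA = string.ascii_uppercase

def extend(series, n=3):
    """Induce the pattern in a Thurstone letter series and extend it by n letters.

    Try each period k; the pattern holds if every letter differs from the
    letter k places earlier by a fixed offset per slot (offset 0 is
    repetition, like the recurring M; offset 2 steps through the alphabet
    like A, C, E)."""
    for k in range(1, len(series)):
        offsets, ok = [], True
        for slot in range(k):
            column = [ALPHA.index(series[i]) for i in range(slot, len(series), k)]
            diffs = {b - a for a, b in zip(column, column[1:])}
            if len(diffs) > 1:          # no single offset works for this slot
                ok = False
                break
            offsets.append(diffs.pop() if diffs else 0)
        if ok:                          # pattern found: generate the continuation
            out = list(series)
            for _ in range(n):
                out.append(ALPHA[ALPHA.index(out[-k]) + offsets[len(out) % k]])
            return "".join(out[len(series):])
    return None

print(extend("ABMCDM"))   # EFM
```

For A, B, M, C, D, M the program settles on period 3 with offsets (2, 2, 0): two alphabet steps, two alphabet steps, repeat the M.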
In music, for example, the diatonic and chromatic scales, the circle of fifths, all of these elements of musical pattern that enable us to know whether we're listening to Bach or Beethoven or whatever it is we're listening to, all of this can be shown to be based on the ability to recognize repetition and the ability to recognize sequence. So here we've already discovered a very simple scientific law. We've learned that after A, B, M, C, D, M, there follows E, F, M. Let's consider laws that have had more historical interest than that particular law. Unless that was the item you failed on the intelligence test; then it has historical interest for you, I suppose. Kepler looked at the heavens. It was very hard to look at the heavens in those days. They had very primitive instruments. But he and his predecessors were able to get some data about the distances of the planets from the sun and the period of time it took for each of the planets to revolve around the sun. And similarly for the satellites around Jupiter and around Saturn; at least they had that by Newton's time. And Kepler wondered whether there was a relation, and he found one, and it's called Kepler's Third Law. We have today a computer program called Bacon. It's named, of course, after Sir Francis Bacon, because it believes very deeply in Francis Bacon's ideas about induction. What the program Bacon does, following the advice of Sir Francis, is to look at data and try to induce from those data a pattern. And the first thing it looks for in data is to see whether there are any correlations lying around. Well, in this case, it would discover that as the distance of the planet increases, the period of revolution also increases. What Bacon is looking for, the goal out there, is, of course, an invariant, some kind of invariant law hidden in these data.
And so if you have one thing that's increasing while another thing increases, one kind of primitive idea is: try dividing one by the other; maybe the ratio of them will be a constant. And Bacon does that. No luck, no constant. It gets a new quantity, p over d, period over the distance. But it notices that that quantity is also correlated with the distance of the orbit. So it tries dividing again. It gets p over d squared. Still no jackpot. Still no invariant. It now notices that p over d squared and p over d vary in opposite directions. As one goes up, the other goes down. Well, maybe we'll have luck. Maybe if we multiply them together, we'll get an invariant. It multiplies them together. It finds an invariant: p squared over d cubed, which is Kepler's third law. Now, notice that Bacon is trying different functions, but notice that it's not doing it at all in a random way. And notice that in fact it finds the correct function after very little trial and error. I've perhaps oversimplified a little bit there, but essentially that's what it does, and it does it in something under a minute on a reasonable computer. Kepler took maybe 10 years, but he was preoccupied with other things. His mother was being tried for witchcraft during part of that period, and his life was rather complicated. One other example, I'm not going to try to give a whole long list of examples here of what Bacon does, one other example to illustrate that quite natural processes, quite understandable processes, can be used by an information processing system to discover natural laws. Suppose I had two objects here, and you have to imagine that I'm a spring connecting these two objects. And you have to stretch me a little bit, stretch the spring, and then release, and the objects will be accelerated. And let's suppose we can measure those accelerations.
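The three steps narrated above, ratio, ratio again, then product, can be replayed on real planetary data. A sketch; the data table (in astronomical units and years) and the spread measure are my additions:

```python
data = [  # (distance d in AU, period p in years)
    (0.387, 0.241), (0.723, 0.615), (1.000, 1.000),    # Mercury, Venus, Earth
    (1.524, 1.881), (5.203, 11.862), (9.539, 29.457),  # Mars, Jupiter, Saturn
]

def spread(vals):
    """Relative spread of a list of values; near zero means invariant."""
    mean = sum(vals) / len(vals)
    return (max(vals) - min(vals)) / mean

# Step 1: p and d rise together, so try the ratio p/d.  No constant.
r1 = [p / d for d, p in data]
# Step 2: p/d still correlates with d, so divide by d again: p/d^2.  Still no.
r2 = [p / d**2 for d, p in data]
# Step 3: p/d and p/d^2 vary in opposite directions, so multiply them:
# (p/d) * (p/d^2) = p^2/d^3.
r3 = [a * b for a, b in zip(r1, r2)]

for name, vals in [("p/d", r1), ("p/d^2", r2), ("p^2/d^3", r3)]:
    print(f"{name:8s} spread = {spread(vals):.3f}")
# p^2/d^3 comes out essentially constant across all six planets: Kepler's third law.
```

The point survives the toy treatment: three directed moves, not a random search, and the invariant drops out.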
Bacon will rapidly find that the ratio of the accelerations of object A to object B will be the same no matter how much I'm stretched or how little I'm stretched. Same ratio of accelerations. There's a little scientific law. I don't know how exciting it is, but there's a little scientific law. Bacon will now take this object and a third object, and again will find that the ratio of the accelerations is constant, with a different constant. Now, under those circumstances, where Bacon finds a law that expresses a relation between two things, Bacon, and I suppose this is a philosophical point of view it has, will try to attribute a property to each of those objects such that those properties will explain the behavior of the two objects. So in this case Bacon will assign, I won't go through the procedure whereby it's done, but Bacon will assign a number to this object and a number to that object. And then it will find that this number multiplied by its object's acceleration plus that number multiplied by its object's acceleration is always zero, which is an expression of the conservation of momentum. Now, notice that Bacon did two things there. First, it introduced new concepts. Nobody told it about the properties of that object. It introduced such a property, the property which we know of as inertial mass. Secondly, by the use of such new properties, it discovered laws, in this case a very basic conservation law of physics, the conservation of momentum. And given other kinds of data, Bacon has succeeded in introducing the concept of specific heat, in experiments involving mixing liquids at different temperatures, different liquids at different temperatures. This was a problem which defied solution for about 30 years after the invention of the Fahrenheit thermometer.
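The two moves, finding the invariant ratio and then postulating a property per object, can be shown in a few lines. The acceleration measurements below are invented for illustration; the procedure is a sketch of the idea, not BACON's actual algorithm:

```python
# Hypothetical measured accelerations (a_A, a_B) of two objects joined
# by a spring, at three different amounts of stretch (invented numbers).
trials = [(3.0, -6.0), (1.5, -3.0), (4.0, -8.0)]

# Step 1: the ratio of the accelerations is the same at every stretch,
# a small invariant law.
ratios = [aA / aB for aA, aB in trials]
print(ratios)    # the same ratio in every trial

# Step 2: assign each object a number m (what we know as inertial mass)
# such that m_A * a_A + m_B * a_B = 0 in every trial; take m_A = 1 as the unit.
m_A = 1.0
m_B = -m_A * trials[0][0] / trials[0][1]
print(m_B)       # object B, which accelerates twice as much, gets half the mass
for aA, aB in trials:
    assert abs(m_A * aA + m_B * aB) < 1e-9   # conservation of momentum holds
```

Nothing labeled "mass" went in; the number is introduced because it makes the conservation law come out.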
Fahrenheit and Boerhaave, a Dutch chemist, got halfway, but they really got stumped when it turned out that mercury absorbed less heat than water, when everybody knew that mercury was much heavier than water and so ought to absorb more heat than water. It was Joseph Black, the Scottish chemist, who introduced the idea of specific heat. Bacon, knowing nothing about temperature or heat, arrives purely by induction from the data at the concept of specific heat, in order to find a conservation law for these phenomena. It also finds the index of refraction, and could find many more. Well, I don't want to go down a list of Bacon's wonders, but I would like to make just one more point before I conclude this morning. What I think Bacon does demonstrate is that one can begin to think about those forms of human intelligence which are usually supposed to involve genuine creativity. We can think about what processes it could be that would lead a scientist to discover a new scientific law. Now, inferring things from data is not the only way in which scientists infer new laws. I want to make that quite clear. But we can show historically that in the case of Ohm, who discovered Ohm's law of electricity, in the case of Kepler, and in many other celebrated places in the history of science, in fact there was no particular theoretical foundation for the discovery. In fact, the discovery had to be an induction from facts, because that's all there was. I need to mention one other area before I conclude here. And that is the area nowadays usually called representation. Again, from human psychological experimentation, we have good evidence about what happens when someone takes a problem given to him or her in words and translates that problem into a set of equations and then solves the equations.
Or when someone is given a statement in the French language and translates that statement into a statement in the English language. In both of those cases, we have quite good evidence that the process is not a process of direct conversion from the input code or the input language into the output language, the mathematical language or the English. There intervenes between the two some other kind of internal representation, some other way of holding that information in our head, which we usually call a semantic representation, a way of holding the meaning of what is input in our head, and that serves as a mediator between the input and the output. You don't translate from French to English, at least if you're any good at French and English, you don't. You translate from French to the meaning of the French, and from the meaning of the French into English. It sounds like a slow way around, but it's really the only fast and correct way around. And similarly, a physicist, or again a student in high school who's given one of these problems about an 80% alcohol and water mixture which he wants to dilute to a 60% alcohol and water mixture: the students who are successful in solving those problems do it by means of an internal representation of the problem which precedes the writing of the equations. One of the important goals today of cognitive science and artificial intelligence is to understand the nature of those internal representations. And I won't try to say in any breadth what we know about it, but I will give you an example of what I really mean by an internal representation, so you can get a gut feeling about it. We all talk about mental pictures. That gives philosophers great difficulty, because they tell us that if you have a mental picture there must be a little man inside looking at the picture, and then what kind of a picture does the little man have, and you can get into all sorts of regressions if you allow your mind to stray in that way.
So let's ask, what could a mental picture mean? And I'll ask all of you, you all can form mental pictures, I'll ask all of you to form a mental picture along with me. Imagine a rectangle. It's twice as broad as it is high. Exactly twice as broad as it is high. Now drop a line from the middle of the top of that rectangle to its bottom. It divides it in two, doesn't it? And the two halves are each squares. You all see the two squares? Now I want you to draw a diagonal from the northwest corner of your rectangle to the southeast corner, from Seattle to Miami. Okay, you've got that line drawn. Now my question is, does that line intersect the line that you drew bisecting the rectangle from top to bottom? How many think it does? No more than that? How many think it doesn't? The rest of you, I guess, have been dozing. Now, the interesting question here, of course, is not that it does intersect, but I wonder how many of you in the room believe that you could prove that it intersects, that you could use the methods of mathematics to prove that it intersects. In fact, that's a rather difficult problem in geometry. It's so simple that it's difficult to prove that these two lines intersect. If you use analytic geometry, if you write down equations for the lines, it's a little easier. Do you suppose that you, or the homunculus in your head, or somebody, performed the analytic geometry? Well, maybe. But the problem of representation is the problem of understanding how operations of that kind can be performed on information in the human head, and certainly it appears to us, or it appears to most of us who are researching on this, that those operations are not the operations of deductive logic. They are symbol processing operations, but of a very much more, what shall I call it, direct kind. I don't have a good label for it, because there's still a lot of dispute as to how it might go on.
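For readers who want the analytic-geometry check that the lecture alludes to, here it is in a few lines. The choice of coordinates is mine:

```python
# Place the rectangle with corners (0, 0) and (2, 1): twice as broad as high.
# The bisecting line is the vertical segment x = 1, from top to bottom.
# The diagonal runs from the northwest corner (0, 1) to the southeast
# corner (2, 0), so along it y = 1 - x/2.

def diagonal_y(x):
    """Height of the NW-to-SE diagonal at horizontal position x."""
    return 1.0 - x / 2.0

y_cross = diagonal_y(1.0)        # where the diagonal meets the line x = 1
print(y_cross)                   # 0.5
assert 0.0 < y_cross < 1.0       # inside the segment: the lines meet at (1, 0.5)
```

The proof is three lines once the equations are written down; the interesting question is what the head does instead of writing them.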
Now, I could carry on my example of that rectangle and add other information to it, and I could soon convince you that even if you have a picture of that rectangle, it's far short of a photograph. I could give you about three more lines to put in, and then you would have about as much information as you would be able to keep in that picture. So a very limited amount of information, but an information that somehow or other is able to generate geometric properties which are not explicitly given in the description of the picture. I didn't tell you about that intersection. I never mentioned it. So today there are very important open problems in the area of artificial intelligence with respect to how the human head stores and processes these kinds of semantic representations, as Schank yesterday referred to in other contexts, about other kinds of problems of internal representation. Well, what does this all add up to? What conclusions should we draw from it? I've already mentioned one of those conclusions. Cognitive science has progressed to the point today where we can begin to understand not simply toy examples of human intelligence at work, but real examples, that is, examples at the level of school and professional tasks of intelligence at work. We can very often model that intelligence, in the sense of building computer simulations of it which track very closely the human performance and even the human errors. We can also, if we're not interested simply in modeling the human performance, write computer programs which can serve as experts for us, which can serve to augment human intelligence. And within the past three or four years the construction of such experts has become a very popular game, and even a serious economic enterprise in this country. But as far as applications are concerned, it seems to me that in many ways the most interesting and the most important are the possibilities of application to pedagogy.
We have operated our institutions of learning for too many years on the spraying theory that I mentioned earlier. Not because, well, yes, because in fact it was the only theory that we had available. And just as medicine, when it began to understand the physiology of the human body, was able to begin to achieve results in the way of prevention and cure of disease that it was not able to achieve simply on the basis of folk art and skill, so I think in the process of education we are going to be able to improve our educational processes at all the levels of our schools as we begin to understand what it is that a person has to learn in order to engage in an intelligent or skilled performance. I think that these developments have another implication, or set of implications, that we all sense: that they really go very much to the root of the human view of our own species, of ourselves. Every so often some event occurs in the world of ideas which forces mankind to reconsider ourselves, our whole position in the universe, and to reevaluate our sense of worth. And I think the computer is having that impact. Copernicus came along and we could no longer believe that our little planet here was at the center of things. We were just circulating around the sun, and the sun isn't even at the center of things. It's out on the wing of some galaxy, and goodness knows where that galaxy is. Well, that was a kind of shocking idea. But we got used to it. I don't think anybody in this room lies awake sleepless at night wondering how to get the earth back into the center of the universe, or being terribly unhappy that it isn't there. Then came along a man named Darwin. Well, not all of us have gotten over Darwin yet. But most of us have. The great shock was learning that the human species had origins of the same kind as other species, that the mechanisms bringing about its creation were the same mechanisms that brought about the creation of other and more humble beings in creation.
We had to give up that idea of our uniqueness after Darwin. And most of us have made peace with it. We no longer have to be specially created in that way, in that sense, in order to feel our human worth. And today there are people coming along, and you've been listening to one of them this morning, who say, and I think this is in the best tradition of biology, at least of antivitalist biology, who say a human being is a natural organism, is a biological organism. The reason a human being can think is because a human being has neurons and other structures in the brain which are capable of supporting an information processing system, a symbol system of the kind that I've been describing. And this is the reason why we can think. And it turns out that computers, with a completely different kind of hardware, are also able to support information processing, symbol processing, and therefore also able to think. And so we are beginning to get used to the idea that there are other creatures in creation besides ourselves who are capable of intelligence. Of course, we've known that all along: we know that our dog could think, our cat could think, but only think little thoughts. We were the people on this earth who could think the big thoughts. And now computers are coming along, and from time to time they're thinking some rather big thoughts indeed. And so we undergo a third challenge to our uniqueness. Now, maybe there's a solution to this, if it troubles us. Maybe there's a solution to this that is a solution not only to this specific problem but a solution in a slightly more general sense. Maybe the trouble is not that we are losing any particular sense of our uniqueness. Maybe the trouble consists in considering that it is our uniqueness which defines us as human beings, our uniqueness which defines human worth in this world.
And that we can again find a way of relating ourselves to the world in which we find ourselves, not by finding a new basis for pride, for judging ourselves in some respect superior to the rest of nature; not by setting ourselves apart from nature, but by recognizing at last that we are a part of nature, and that we must learn to live at peace with nature. Thank you. Ladies and gentlemen, let's gather down front here. We have a number of very good questions for Dr. Simon, and I will ask just that Dr. Simon come up; the other panelists may remain seated. Dr. Simon: I find it extremely difficult to believe that Bacon introduces new concepts. Surely it introduces a symbol which the observer interprets as inertial mass, specific heat, etc. If we didn't already have the theoretical context, these symbols would be meaningless and uninterpreted; the computer has discovered nothing, because it could never have supplied the theoretical context in which the terms are embedded and outside of which they lack meaning. Comment. Don't run away with the question.
I think I would prefer to continue to say that BACON introduces new concepts. Let's take the concept of inertial mass, which I used in my example. If one takes classical physics, classical mechanics, and tries to formalize the conceptual apparatus that's used in classical mechanics, then at some point in that formalization one has to introduce a symbol which corresponds to inertial mass. And if you look at the way in which that actually happens in an axiomatization of classical mechanics, it turns out that it happens as what the logicians call an existential quantification. That is, if you try to state Newton's laws of motion for astronomy, for celestial phenomena, you will find yourself at some point writing down a law which says: there exists a set of numbers m1 through mn such that m1a1, where a is the acceleration again, plus m2a2 and so forth, equals zero, exactly the expression that I used in my lecture. So if one talks about simply introducing a symbol, then I'm afraid one would have to talk about just introducing a symbol in physics too. The symbol takes on its meaning because of the role that it plays in the law that describes the phenomena. The symbol that BACON introduced takes on its meaning because it turns out to be associated with the fact that some bodies resist acceleration more than other bodies, and there's nothing there except the symbol and the connection between the symbol and the phenomena which are described in the law. So I really don't know what it means to say "an uninterpreted symbol," unless you would have to say, in exactly the same technical sense, that mass in an axiomatization of classical mechanics is an uninterpreted symbol. 
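[Editorial note: the quantified law Simon states verbally can be written out as follows. This is an illustrative reconstruction from his words, not a quotation of the axiomatization he has in mind, which differs in detail.]

```latex
\exists\, m_1, \dots, m_n > 0 \;\text{ such that }\; m_1 a_1 + m_2 a_2 + \cdots + m_n a_n = 0
```

Here $a_i$ is the observed acceleration of the $i$-th body; the $m_i$, the inertial masses, are not given in advance but are defined implicitly by the role they play in this law.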
Dr. Simon: Descartes said that science has the perspective of studying the uniquely simple problems in material nature. Repetitive processes yield nicely to science and math; however, unique processes, heroism, love, mercy, diverge from this adaptive model, from repetitive processes. They also seem rather non-material. How do you relate these characteristics to a perception of the non-uniqueness of man in nature? Of course I relied on the fact that this conference is dealing with cognitive science and not with human beings in general, and I tried to be very careful this morning, in introducing physical symbol systems, to say that a human being is at least a physical symbol system. Now the human brain of course resides in the head, and the head of course is connected with the body, and the connection is thought to be important, important enough so that most people are reluctant to have the two separated from each other. In order to introduce any of the concepts of human motivation and affect, we would have to have, of course, a much more elaborate and comprehensive theory than any of us are discussing or proposing at this conference. We would have to have a theory, at the very least, if we consider the biological aspects of it, which would have an autonomic nervous system, which would tell us when we were hungry, which would feed us all sorts of stimuli other than the stimuli we get through our external senses. And I think Roger Schank pointed out very effectively yesterday afternoon that one of the reasons why it's very hard to talk about things like love or heroism in connection with a computer is that a computer has none of those interactions with the body, and that a computer, or at least computers of our generation, have almost none of the experiences which human beings store away that are relevant to those aspects of our lives. That is, I guess, the way in which I would prefer to describe the distinction, and I would prefer, I guess, 
personally, not to take those phenomena out of the realm of biology, but they certainly are outside the realm of current cognitive science. I have a question here, I think, from a graduate student, and then one from a high school student. Will we be able to guess which is which? Try. On what do you base the assumption that the processes used to solve well structured problems are the same as those that are used to solve ill structured problems? Aren't there meta-processes of problem epistemology which must be worked out in ill structured problems? Graduate student. Well, of course we don't know that the processes that work for well structured problems will work as we press further and further into the domain of ill structured problems. It's an interesting working hypothesis. Cognitive science has started, as all sciences do, by dealing with the problems that it thought it could deal with, and it started with things like puzzles and has moved into things like physics and algebra and geometry. And at each step, as we move into problems that are less well defined and vaguer, the question has to be re-asked: will the same set of processes work in this domain as has worked in the previous domains? That again is an empirical question. I've had some speculations about how we can extend the system to deal with problems which are in fact quite unstructured, but my speculations aren't very important on that; what's important is to pursue the research and to see what difficulties we run into. Final comment: it always turns out that ill structured problems cease to be ill structured at about the moment someone writes a computer program that does them. I think if you had asked people ten years ago whether medical diagnosis was a well structured problem or an ill structured problem, you would have gotten a strong vote, at least from the doctors: ill structured problem. An ill structured problem is something we know how to solve without quite knowing how we solve it, and we quickly have to redefine it as 
we get deeper insight into the nature of the processes we are actually using. That's the hunch that I work on, but again, we don't have to have belief here; we can simply wait until we see what happens when we try to extend the domain of things that we program. Isn't it possible that a computer could become smart enough that it will realize that to get noticed it must have power, that to get power it must try to destroy humans and save computers, its own race, and win this war? In other words, we're tapping into a realm we must not enter. There are a number of premises there, each of which would require a great deal of discussion, but the one that struck my attention was the idea that if you acquire power you use it, or that you acquire it by destroying people or things that are different from yourself. Now we're already suffering from that disease right within the human species, and before I get too worried about computers, I'm going to put a lot of my thinking into seeing if we can discover some ways in which human beings can find out how to live on this earth, and even to exercise a modicum of power, without thinking that they have to use it to destroy other human beings. I don't think our terrors are well placed at this moment in history when they're directed at computers. Dr. Simon, don't we risk trivializing the scope of human contemplation and creativity by seeing thinking in the information processing terms that computers are now capable of? There is a famous Dutch scientist of the late Renaissance, Simon Stevinus, who had a family crest, really his personal crest, and on his crest, in Dutch, he had inscribed "Wonder en is gheen wonder": wonderful, and yet not wonderful. 
What science is all about is taking wonderful phenomena, phenomena which amaze us, which lead us to wonder about them in the best sense of that word, and revealing the true wonder of those phenomena by showing how they can emerge, in fact, from the interacting of quite simple structures and forces. That's what science is about: showing the beauty within the beauty. The heavens at night can be very beautiful, and there's a way of admiring them and looking at them, and none of us wants to lose that. And there's another way of admiring them and looking at them, by saying, gee, that's just Newton's three laws. And I don't think you need to lose your wonder or your awe by having that second explanation. Dr. Simon, if computer programs can in some domains perform more efficiently than humans, then should not the search be for effective goal oriented procedures, regardless of whether these procedures are the ones humans employ? And if so, does the value of protocol analysis diminish? In respect to the first part: the area I've been describing this morning has in fact two goals, and you can find people engaged in it who are largely concerned with the one, namely understanding human thinking. I regard that as my principal interest. You can find other researchers in this field who are mainly interested in the nature of intelligence and in using the power of computers to augment human intelligence, and that's a legitimate goal too. And if you have that second goal, you aren't necessarily interested in getting the computer simply to imitate humans; you're interested also in using the power of the computer's spinning wheels to do things that humans can't do, like solve linear programming problems with 10,000 equations. So both of these are legitimate enterprises and are in no way contradictory to each other. In the second enterprise, human protocols play a much smaller role than they do in the first enterprise, because we're not necessarily trying to find out how humans do it. On the other hand, human beings have, in the two 
million year history of our species, acquired a number of sly tricks. I described one of them, means-ends analysis, and sometimes one of the best ways of getting a computer to do something clever is to see whether we can find out something about the sly tricks that humans use in doing it, or find out something about the knowledge that expert humans have who do it. That may be the cheapest way of finding out how to write the computer program, rather than inventing it all over again. We have time for one more question. Dr. Simon, are the mental processes the same for all thinking organisms, and is that so even for non-human organisms? Well, I've had a tough enough time trying to move forward with my colleagues in understanding humans. I would only make the following comment. When research began in this field, which we usually date back to the late 1950s, I think it was widely believed that very rapid progress would be made in understanding simple human thinking, like what we do in everyday life when we walk around and pick things up, or like what a bulldozer driver does when he manipulates his machine, and that it would be a very long time before we would have any understanding of the profound things that professors and doctors and lawyers did. Now it's turned out to be just the other way around. The progress on understanding bulldozer drivers has been very slow, and the progress toward understanding the thinking of professors has gone, in fact, more rapidly than we might have supposed, and it's going to be a long time before we simulate a bulldozer driver. Now there's a very obvious reason for this. I have to say obvious, although it wasn't obvious to any of us 30 years ago. There's a very obvious reason for this, and that is that the mammalian brain has been evolving for, I don't know whether 200 million years is a good figure or whether I should make it 400 million years, but hundreds of millions of years, and the mammalian brain is a very sophisticated device after that period of evolution. So I would guess the ability of the eye 
to take in information in parallel and to encode it, and the ear, and the musculature and the nervous system that guides our muscles when we do all sorts of fancy things: those are very old structures, which we share with our dogs and our cats and bears and monkeys and other delightful creatures. On the other hand, the part of the brain we're proudest of, or those of us who are professors are proudest of, the professorial part, that's only been evolving for about two million years, and my guess is that in two million years nature doesn't have time to whomp up anything very fancy. This is sort of a jerry-built device, and it's not surprising, not surprising, that we are beginning to understand the rather simple mechanisms on which it operates. Thank you for your address and your comments. We can begin to number ourselves among our colleagues who have in the past been saying, well, Simon says. We meet here this afternoon at what time, Michael? 1:30. We gather again at 1:30. Thank you very much. You're welcome.
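[Editorial note: means-ends analysis, the "sly trick" Simon mentions above, can be sketched in a few lines of code. This is an illustrative toy with hypothetical names, not the GPS program Simon and Newell actually built: repeatedly pick a difference between the current state and the goal, and apply an operator known to reduce that difference.]

```python
def means_ends(state, goal, operators):
    """Apply means-ends analysis to reach `goal` from `state`.

    `operators` is a list of (applicable, apply_op) pairs:
    `applicable(state, goal)` tests whether the operator reduces some
    remaining difference; `apply_op(state)` returns the new state.
    Returns the list of operator indices applied, or None if stuck.
    """
    plan = []
    while state != goal:
        # Find the first operator that addresses a current difference.
        for i, (applicable, apply_op) in enumerate(operators):
            if applicable(state, goal):
                state = apply_op(state)
                plan.append(i)
                break
        else:
            return None  # no operator reduces any remaining difference
    return plan

# Toy domain: move a number toward a goal value.
ops = [
    (lambda s, g: s < g, lambda s: s + 1),  # op 0: increment when below goal
    (lambda s, g: s > g, lambda s: s - 1),  # op 1: decrement when above goal
]
print(means_ends(2, 5, ops))  # → [0, 0, 0]
```

The point of the sketch is the control structure, not the toy domain: the program never searches blindly, but always asks "what difference remains, and what operator is relevant to it?"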