I'm very pleased, especially pleased, to introduce today's forum. I'm David Thorburn, Director of the MIT Communications Forum. It's unusual for institutions as conscious of their status as MIT to mark the departure of one of their best-known professors, and most especially when that departure is for a place down the river on Massachusetts Avenue whose name just now escapes me. But this negative tradition has always seemed problematic to me, and especially in the case of Steven Pinker, whose 21 years at MIT are filled with achievements all of us in this community should be proud to acknowledge and happy to celebrate. The Communications Forum, in fact, has been one of the venues to which Steven has been generous during his time at MIT. He's made several appearances here, always memorably and helpfully. And, in fact, today is partly intended to signal something that I think is important: even though Steven is going down the street, he's not totally disappearing from our community. He'll be nearby, he'll be around. And I'm very happy to be able to announce that he's agreed to join a new advisory board of the MIT Communications Forum, my strategy for keeping him linked to our community even though he's ascended into that other radiance. I will leave it to Jay Keyser to do a full introduction of Steven, but I want to mention two things about him that have always mattered to me. The first is what a wonderful jokester, what a witty fellow, he is. In one of his wonderful popular but intellectually rigorous books about language, I think it's The Language Instinct, he takes on the William Safire kind of language pundit, the sort who calls himself a language expert; one term Safire often uses for himself is "language maven." And Steven, exposing the stupidity of the kinds of silly arguments such people often make about language usage, and especially their often pompous, schoolmarmish attitude toward language, makes particular fun of the idea that Safire calls himself a maven. In a sentence that's characteristic of the richness and colloquial wit of Steven's prose, he says in the book: maven, shmaven. I've always treasured that moment in his writing, and then he goes on to expose the silliness of it. Well, you know that Steven is a scholar, a bestselling author, a public intellectual of great renown, but I want to emphasize what has always been especially important to me as a colleague of his at MIT: what a remarkable and central teacher he's been at MIT over the years. It's not an accident that he was named to a MacVicar Fellowship relatively early in his time at MIT, and everyone who has been in his presence knows that he's a man of incandescent verbal power. I once sent a student of mine, who was very impressed by me until he heard Steven, to his introductory psychology course. The man came back to me and said, actually, I found it somewhat depressing, because I'd been lecturing to him in the introduction to film course, and he seemed to like that very much. He said, Pinker's the most articulate human being in the universe. I'm not sure that's an exaggeration. Well, let me say a word about Jay Keyser, who is our impresario and moderator for the day. In some ways, Samuel Jay Keyser is an institution not only older and more central to MIT than Steven himself, but also wiser. He was the first Peter de Florez, how do you say it, de Florez chair?
I always think of it as my favorite endowed chair in history, the Peter de Florez chair, because one of the jobs of the holder of this chair is to make jokes, to let the institution know about jokes and to publish jokes to the community. And Jay, of course, is a renowned scholar, a specialist in phonology and lexical theory and in poetics, a prolific scholar. One of his titles, which I suppose wouldn't seem like an odd title to a linguist, but which as an outsider I especially love, is A Generative Theory of the Syllable. And that's a mark of his scholarly territory. He's a very prolific scholar, of course. He's also a former associate provost and a former head of the fabled Linguistics and Philosophy department, nurturer of difficult geniuses such as Noam Chomsky and of benign collegial geniuses such as Steven Pinker. He is also a poet; he's published a wonderful book of poetry called Raising the Dead. And he's the author recently of a remarkable children's book called The Pond God, from Front Street Books. It's a great honor to introduce Professor Jay Keyser.

Thank you, thanks. I guess the drill for this afternoon is that Steve and I will talk together for about an hour, and then we'll throw it open to questions from you. I had prepared a little introduction to Steve, and I wonder if you'll indulge me. About two days ago, I received an email that I would like to share with you. This email came from a correspondent whom I do not know, and he must have read the announcement for this evening's lecture. The email said, and I quote: much of what the author of How the Mind Works says is false. There is no mind. My advice to you, Steve, is pay him no brain. I could see parents all over America saying, if you don't brain me, I'll spank you. It's not surprising that Steve's work would cause strong feelings. He does not shy away from an argument, as anyone who has read his defense of E.O. Wilson in The Blank Slate will acknowledge. But whatever side of the nature versus nurture debate you are on, the fact is that The Blank Slate has raised that issue to the public prominence that I think it deserves, and in the long run, the intensity of the debate will produce more truth than friction. In his book Foundations of Language, Ray Jackendoff refers to the current age as the age of cognitive neuroscience, an age whose inception dates, I imagine, to the publication of Chomsky's 1957 Syntactic Structures. It's an age in which one could speak without embarrassment of the study of linguistics as, as Dan Osherson in fact described it, the study of the computational properties of nervous tissue. Chomsky's monograph gave rise to the possibility of a scientific study of mind by demonstrating that language, the mind's defining cognitive subsystem, was not a set of mindless conventions, but rather a highly articulated, structure-dependent system capable of producing an infinite output with finite means. And I think now, 50 years later, no one will quarrel with that characterization, but that still leaves a lot of room for quarreling. One quarrel has to do with the origin of language. If language is a structure-dependent system, how did it arise in the mind slash brain? On one side of the debate are people like Chomsky and Massimo Piattelli-Palmarini and also the late Stephen Jay
Gould, who, pace Dan Dennett, take the view that language is a spandrel of the mind, a kind of accidental side effect of the collocation of some other subsystems; for example, maybe mathematics, maybe the ability to count. In any case, it goes without saying that one of the great things about language is that it gave Homo sapiens a huge leg up over all other living creatures, because it gave us a history. On the other side are people like Steve himself and his former student Paul Bloom, who believe that language is not an accidental byproduct of some other mental configuration, but rather is the result of natural selection. This position is shared by, among other people, Ray Jackendoff, who believes that language has, through natural selection, evolved incrementally. So that's one important argument that Steve's work centers on. Another quarrel has to do with the role of genetics in human nature. Steve is a central figure in this debate as well, arguing that genetics is far more influential in human nature than we've been willing to acknowledge. And the question here is, of course, not either nature or nurture, but how do you render unto nature those things which are nature's and render unto nurture those things which are nurture's? Steve has approached these questions not only in his scholarly work, but also in a remarkable series of popular books, beginning with The Language Instinct in 1994, a book which has been published in Arabic, Taiwanese, Chinese, Dutch, French, German, Hungarian, Italian, Japanese, Korean, Portuguese, and Spanish translations, with Danish, Greek, and Russian pending if they haven't already appeared, which means that he's speaking beyond the confines of the continental United States. And this book, The Language Instinct, was, among other things, designated one of the 10 best books of 1994 by the New York Times. His other books have similar accolades: How the Mind Works in 1997, Words and Rules: The Ingredients of Language in 1999, The Blank Slate in 2002. This series of four books in eight years, over and above his considerable contributions as a cognitive scientist, is a remarkable achievement. It has made Steve one of the country's leading exponents of science, what one reviewer called "the new age guru for the machinery of thought." As of July of this year, Steve left MIT to join the faculty of the school where he received his PhD in experimental psychology 24 years ago. He is currently the Johnstone Family Professor of Psychology at Harvard University. His departure has brought to an end an almost quarter-century-long formal association with MIT that began in 1979, when he became a postdoctoral fellow in the Center for, excuse me, for Cognitive Science at the Institute. For me, moderating this evening's farewell gives me the feeling of being in at the beginning and in at the end. I was director of the Center for Cognitive Science when Steve joined it in 1979, a position that I co-held with him from 1985 to 94. I was also the first holder of the Peter de Florez chair at MIT, in 1990, and when I retired, Steve was the second holder of that chair, from 2000 to 2003. I currently drive a 1998 Jaguar XJ sedan. It is my fervent hope that one day Steve will as well. All right, so why don't we start, Steve? Is the lavalier mic on? Can everybody hear? Can you hear? Okay, good. Why don't we start with the center, since that's where you and I started? What are your recollections of the center?
Well, it was the center that brought me over to MIT for the first time. Can you hear me? Yeah. I was a graduate student at Harvard, so this really is a homecoming for me, going back there. I had applied for a postdoc at the Center for Cognitive Science, which I think had just begun maybe a year before. And I remember going over to your office, after I received a letter inviting me, to see what it was like. You and I had a very pleasant conversation. And then Morris Halle came in, Institute Professor Emeritus Morris Halle. And that was the end of the pleasantness. No, no. Morris came in, and, you and I had met like 15 minutes before, you said, Morris, I'd like you to meet Steven Pinker. A nice boy. And I thought, I like this place. The center was in the late lamented Building 20, now demolished, on whose site the spectacular Stata Center is being built. Probably my favorite academic building that I've ever been in, as it was for many people who were there. You could open the windows and get fresh air. If you needed to string a wire, you just put the drill through the wall. If you needed to install something, you just did it; no one really cared. And it had a peculiar geometry that meant you kept bumping into people. Rather than a lot of academic buildings, which go straight up and have a lot of little floors stacked on top of each other, it sprawled in this comb shape, and to get from your office to the men's room or the lab, you would have to bump into people. It was tremendously social. I remember when the center was formed, we had a working group which consisted of faculty from EECS and also from linguistics and psychology. Sue Carey, I remember, was on it. John Allen was on it. Ken Stevens, Morris, Noam. And I remember at the beginning we had received a grant from, was it SDC? The Sloan Foundation, I think, to begin with. The Sloan Foundation, that's right. And I remember that what the working group had decided was not to use the money for faculty summer salaries. The idea was to bring in people like you. And so for the nine years that the center lived, we were able to make the money go a very long way, because we only used it for postdocs. That, I think, was the wisdom of the working group. I guess you were among the first that we brought in. It's funny, the one thing that I remember about the center that is important to me, though it wouldn't perhaps be important to anybody else, was how the center ended. Because one of the things that I find at MIT is that MIT knows how to start things, but it doesn't know how to stop them. When things stop, there's always a huge fuss. So for example, when ABS stopped, you remember, Applied Biological Sciences, there was a huge furor. Yeah, the department used to be called Food and Nutrition Sciences; the students called it the department of fruits and nuts. Right. But the center ended without a fuss. And I would urge all of you to go back to the presidential report of the year that the center ended and read the paragraph that I wrote about it. I wrote a little poem in there, but I masked it as a report to the president. It announced the ending of the center. There was one project in the center that was very dear to my heart, and I think you worked with it as well. That was the Lexicon Project. What did you do with the Lexicon Project?
Oh, the Lexicon Project, which you and the late Ken Hale directed. Right. And Beth Levin was, I think, the most active researcher. She was a postdoc from computer science, I believe, which was EECS, even though she was a linguist. This was one of the center's projects, and it was funded by the Systems Development Foundation. That's right. One of its aims was to develop computer-based dictionaries of languages such as Warlpiri and some other Australian and endangered languages. But there was also a great deal of theoretical work on what a person knows when they know a word, and how the meaning of a word governs how it's used in a sentence. I attended the seminars and contributed to the reports and so on. And for me, it solved a problem that I had been working on for years, a nut I just couldn't crack until the Lexicon Project came around. It was a number of papers, actually, by Beth Levin and Malka Rappaport, who was a graduate student in linguistics here. Yeah. And it culminated in my second book. My first two books were not sold in stores, kind of like the Veg-O-Matic. They were fairly technical linguistics books; people remember the books that you can get on Amazon, but these were a little more obscure. My second book was written on this very topic. And so that we aren't just waxing nostalgic, in what I think of as the spirit of MIT, where the ideas and the content always come first and people like to get down to work, I thought that as we spoke I'd also present some of the actual ideas and research that I developed in the MIT environment, and for which I owe a great deal to the MIT environment. So let me just go over the idea, and the little paradox that the Lexicon Project solved for me, if I can find it. Okay. When I was a postdoc, I was working with Joan Bresnan, a linguist who was originally a student of Noam Chomsky's, was then on the faculty at MIT in the Linguistics Department, and was my postdoc sponsor. One of my fellow postdocs was Jane Grimshaw. Alan Prince was another; it was quite an extraordinary group. As a result of working with Joan and with Jane Grimshaw, I developed my first book, which was a theory of language development: what in the child's mind allows the child, at birth prepared to learn a human language but with no predisposition to learn English or Japanese or Warlpiri or any other particular language, to master a language as a whole? What are the algorithms that the child executes on hearing parental speech that allow the child to do that? The book had one chapter each on a number of areas of language: words, phrase structure rules, auxiliaries, inflections. And there was one truly maddening problem that I did not solve by the end of that book. It had to do with learning how to use words in sentences. In general, the verb pretty much dictates the structure of the sentence: you all remember that there are transitive verbs and intransitive verbs. Transitive verbs have an object, like John ate the pizza; intransitive verbs don't, John dined or John slept. That's the tip of the iceberg of several dozen kinds of verbs, each of which dictates how the rest of the sentence is organized.
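A minimal sketch in Python of what "the verb dictates the structure of the sentence" amounts to computationally; the toy lexicon and frame labels here are my own illustration, not anything from the talk or from Pinker's books:

```python
# A toy subcategorization lexicon (illustrative entries only):
# each verb lists the argument frames it licenses.
LEXICON = {
    "eat":   [("subject", "object")],           # transitive: John ate the pizza
    "dine":  [("subject",)],                    # intransitive: John dined
    "sleep": [("subject",)],                    # intransitive: John slept
    "put":   [("subject", "object", "place")],  # John put the book on the shelf
}

def licenses(verb, frame):
    """True if this verb allows this argument frame."""
    return frame in LEXICON.get(verb, [])

assert licenses("eat", ("subject", "object"))
assert not licenses("dine", ("subject", "object"))  # *John dined the pizza
```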
And one of the things a child has to learn when learning a language is which verbs go with which sentence structures. In particular, there's a set of paradoxes; I'll mention the easiest example. There's a construction called the content-locative: John loaded hay onto the wagon, with a direct object and a prepositional object. Then there's a similar construction, John loaded the wagon with hay, where the prepositional object in one sentence becomes the direct object in the other, and the direct object becomes a prepositional object. There's a whole family of constructions that work that way: John splashed water onto the wall, John splashed the wall with water, and so on. Literally, probably 60 or 70 verbs work that way. So we have Biff stuffed the breadcrumbs into the turkey, Biff stuffed the turkey with breadcrumbs. With this pattern, it would behoove a child, here and elsewhere, to extract the regularity and use it to generalize to new verbs. So if there's a verb that you hear in one construction, such as John sprayed water onto the wall, the child can generalize to John sprayed the wall with water. And in general, in explaining language acquisition, what you want to explain is generalization. Because as Jay mentioned in the introduction, the essence of language is that it's an infinite system. There's no limit to the number of thoughts you can express. You're only a child for a finite number of years, so you hear a finite number of sentences, but you've got to be able to make the leap to the rest of the language so you can say things that you haven't heard before. You're not just restricted to parroting back sentences that you hear. So this would seem to be a prime opportunity for the child to extract a rule in order to generalize, such as: if you've got a verb that appears in the construction "verb noun-phrase into/onto noun-phrase," you can flip the two noun phrases. Spray the wall with water, therefore spray water onto the wall. The problem is that there are some exceptions, verbs that just don't go along with the rule. You can say poured water into the glass, but poured the glass with water doesn't sound right, even though it's perfectly obvious what it would mean. It's just not the kind of thing a native speaker would say; if someone said it, it would identify them as a foreigner. In the other direction, you can say Carol filled the glass with water, but Carol filled water into the glass also doesn't sound quite right. And we've done questionnaires to confirm that people really do grimace at the sentences with the asterisks on them. Now here's the puzzle. It can be stated in different ways. One way of stating it is: if you extract this generalization, that a verb that can appear in one construction can appear in the other, then why don't you generalize from poured water into the glass to poured the glass with water, or from filled the glass with water to filled water into the glass? How could those possibly sound odd, given that you have this generalization that allows you to make that leap? Another way of putting it, kind of flipping perspective, is: how did the English language survive with this pattern, given that you'd expect the first generation of children to see the pattern to have obliterated these exceptions? They wouldn't have been exceptions for very long. One possible way out is that everyone makes these errors as a child and gets corrected by parents. That seems rather unlikely.
We've looked at transcripts of children in conversation with their parents, and by and large, when parents correct their children, it's for the content of what they say, not the form. I think there's good reason to believe that not every person in this room who finds the sentence pour the glass with water to be odd has been corrected sometime in their history. Another possibility is that kids don't make this generalization, that they're conservative. They just stick with what they hear, at least in this part of language; they couldn't do it for language in general, or they'd be parrots. But at least in this case, perhaps this is a generalization whose temptation they resist. Well, I tested that by looking at the literature on child language development to see if children ever do make errors like this. If they don't, then we're off the hook. But in fact, there are lots of examples. Can I fill some salt into the bear? A four-year-old child said that of a bear-shaped salt shaker. I'm going to cover a screen over me. Look, Mom, I'm going to pour it with water. I don't want it because I spilled it of orange juice. Some of these come from the psychologist Melissa Bowerman. So it's clear that kids are not conservative. They do make these errors, which only deepens the paradox of how they grow into people like us, who find these sentences ungrammatical, given that we can't depend on parental corrections. Well, the resolution of this paradox came from a set of ideas that I think emerged from the Lexicon Project. Probably Beth Levin and Malka Rappaport deserve the most credit for it, but the ideas were in the air, in your work and in Ken's and others': the idea that I and other people had been thinking about the rule in the wrong way. It's not a rule that fiddles around with syntactic phrases, moving one noun phrase into the position of another, sticking in a preposition, and so on. Really, this regularity should be factored into two regularities. One of them is a lexico-semantic rule, which is almost a Gestalt shift: it changes the way you conceptualize a situation, from "cause X to go to Y," one conception, to "cause Y to change state by means of causing X to go to Y." So when you load hay onto the wagon, what are you affecting? Well, there's no single answer; it depends on how you construe the situation. You could be doing something to the hay, namely causing it to go somewhere. You could be doing something to the wagon, namely causing it to go from empty to full. Language captures those alternative construals of the same event, the fact that the human mind is flexible enough to mentally describe the same event in alternative ways, and the switch between these two grammatical constructions is actually an externalization of the Gestalt shift in how you construe the situation. Then the second rule says: map the affected entity onto the grammatical position of direct object. So in one case, load hay onto the wagon, you're doing something to the hay, and hay is the direct object. In the other case, load the wagon with hay, you're doing something to the wagon, namely changing its state, and the wagon is the direct object. And there are a lot of reasons to believe that this is the right analysis. I'll just mention one of them, what linguists sometimes call the holism effect. If you say Mary loaded hay onto the wagon, that would accurately describe a situation in which Mary just threw in a couple of shovelfuls and then stopped. But if you say Mary loaded the wagon with hay, it implies that the wagon is full.
And so it's holistically affected, which is what you'd expect if you were changing your construal of the event from moving stuff somewhere to changing the state of a container. Now, once you have that analysis, you can start to figure out why some verbs seem to go into the alternation but others don't. In principle, putting anything anywhere could be construed as changing the state of the location. If I put this bottle on the table, you could do mental gymnastics and say, well, the table has now changed state: formerly it had a bottle in one place, now it has a bottle in another place. But in reality, some situations are easier to put through this Gestalt shift than others. Because not much happens to the table when you move the bottle from one place to the other, it's more of a cognitive leap to construe the table as changing state, and you therefore don't say I put the table with the bottle. The other discovery of Beth Levin and other linguists around this time is that languages dictate for their speakers which kinds of situations can undergo this conceptual Gestalt shift and which kinds can't. And they do it by subdividing the world of actions according to geometry and force. In English, for example, splash and splatter both go both ways: splash the wall with paint, splash paint onto the wall. But drip doesn't. You can drip paint onto the floor, but you can't drip the floor with paint. You can brush the turkey with butter, but you can't pour the turkey with butter. Well, what's the common denominator? The discovery is that there are narrowly defined subclasses, defined by their geometry and force, that each language singles out as being able to undergo the shift or not. So verbs of simultaneous contact and motion, where you are simultaneously touching something and moving something along it: brush, daub, rub, slather, smear, smudge, spread, streak. As a class, they all undergo the flip: brush butter onto the turkey, brush the turkey with butter. Verbs of imparting force causing ballistic motion all undergo the alternation: inject, spatter, splash, splatter, and so on. But verbs that would seem, at first glance, cognitively similar, yet differ in details of the intuitive physics and intuitive geometry, don't, such as verbs of enabling gravity to cause motion: dribble, drip, drizzle, dump, ladle, pour, shake, slosh, spill. As a class, none of them permits the alternation. You can spill water onto the floor; you can't spill the floor with water. As soon as gravity is what intervenes between releasing the substance and its getting to the destination, the English language says, sorry, you can no longer construe the object as having undergone a state change. Likewise verbs of mediated attachment, where there's something between the object and the location: pin, fasten, tape, attach, and so on. There, too, you can't flip it. You can pin posters on the wall; you can't pin the wall with posters. So what this says is that we underestimated the degree to which typical speakers cognitively decompose an event into the sequence of physical events that underlies the meaning of a word. When you learn a language, you must subdivide your verb classes very, very finely according to this intuitive physics, and the grammar actually cares about the intuitive physics in making these very fine distinctions.
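As a rough sketch of how those narrow subclasses might gate the alternation, here is a toy model in Python; the feature labels and class assignments are simplified stand-ins of mine (the real account, as in Learnability and Cognition, decomposes the classes into finer conceptual primitives):

```python
# Toy model: a locative verb's narrow, physics-defined subclass
# determines whether it licenses the container-locative frame.
SUBCLASS = {
    "brush": "contact+motion",  "smear":  "contact+motion",
    "spray": "ballistic",       "splash": "ballistic",
    "pour":  "gravity",         "drip":   "gravity",
    "pin":   "mediated-attach", "tape":   "mediated-attach",
}
# In English, these two subclasses allow the Gestalt shift:
ALTERNATING = {"contact+motion", "ballistic"}

def frames(verb):
    """Frames licensed for a verb whose basic frame is the content-locative."""
    out = [f"{verb} STUFF onto GOAL"]           # content-locative
    if SUBCLASS[verb] in ALTERNATING:
        out.append(f"{verb} GOAL with STUFF")   # container-locative
    return out

print(frames("splash"))  # both: splash paint onto the wall / splash the wall with paint
print(frames("pour"))    # content-locative only: *pour the glass with water
```

On this picture, a learner generalizes an alternation only within a subclass, which is why hearing spray in both frames licenses splash in both but never licenses pour.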
And basically, the learning story would be: a child hears a verb used by an older speaker in a particular construction. If the verb is used in two constructions, you'll generalize, but only to verbs with the same intuitive-physics description. As soon as the intuitive physics changes, say gravity as opposed to ballistic motion, the child says, I'm not going to make that leap. So for me, it opened up a whole world of how the mind carves events into significant physical relationships in a way that language reflects. Many other languages have an alternation like the one in English. As far as I know, every one that's been documented also has the holism effect, namely that load the wagon with hay means the wagon is full and load hay onto the wagon doesn't. There's a statistical tendency for these kinds of classes, on average, to be either able or unable to alternate across languages, but the boundary, which particular subclasses do or don't alternate, varies from language to language. And you can see, if you think about the events, how reasonable it is to construe the destination or the container as being affected. The more direct the effect, the more likely it is that you can think of the destination as undergoing a state change, and the more likely a language is, cross-linguistically, to allow verbs of that kind to undergo the alternation. So for something like simultaneous contact and motion, there's a very real sense in which, when you brush something against something else, you're affecting the surface, because you're actually applying force to it. Likewise, when you're aiming something with ballistic motion, the destination defines the path of motion. Whereas if it's gravity that's doing it, then in the intuitive physics there's one more causal link in the chain separating what you're doing from what happens to the floor. That continuum of directness in affecting the destination or target or container is, I think, universal, but the exact cutoff in that gray area varies from language to language.

But I always wondered about these verbs. I mean, consider the difference between, say, splash and smear. You can say mud splashed on the wall, but you can't say mud smeared on the wall. And when you ask yourself, what's the difference between splash and smear, then? Well, in one sense, there's an agent, and the agent is involved in some notion of manner: smear is a certain kind of manner of putting. So when Ken and I talked about these verbs in this prolegomenon to a theory of argument structure, we assumed that there was a correlation with the agent, something like that. But that's a kind of detailed comment. I'd like to ask you a more general question. I know that Jackendoff was very much convinced by this; in his book Foundations of Language he thought that the work that you've done here was really very convincing, that what children are doing is extracting some sort of semantic characterization, and that's how they fit verbs into constructions. The problem has always been, for me, as a kind of dyed-in-the-wool linguist, that I hate lists. And I wonder what your reaction to that is, because there's obviously something to this way of looking at things. Just to make it clearer for the audience, consider the difference between something like cut and hit. You can say John cut the bread, and you can say this bread cuts easily.
Now, what that correlates with is this notion of affectedness. When you cut the bread, you've affected it; you've effected a break in the material integrity of the bread. But notice, you can say John hit the wall, but you can't say the wall hits easily. So, clearly, there must be some assumption that we have that hitting something doesn't affect it, but rather affects us. Okay, so let's suppose, yeah.

Yeah. People from the audience should not speak without getting microphones. There are people who will hand them mics, because MIT World can't record their voices otherwise. Okay, here's a technical point: you in the audience may not speak without a microphone, unless you're spoken to by a guy with a microphone. So if you want to talk, just raise your hand and we'll have a mic come around. Where are the microphone people? One on either side. One on either side, great. So just raise your hands. There's a guy. Okay, so, microphone person, come down to the man who's raising his hand, and while he's doing that, could you react to that? I mean, how do you feel about it? If you end up with a list like that, it's not very interesting.

No, that's right. And in fact, the lists are kind of the data. It's precisely because you generalize beyond the items on the list that the child couldn't have learned it as a list. And what I tried to do, in a long and extremely tedious and boring section of this book, is to lay out what I thought were the mental representations that underlay each list. So it's not that you learn the list. Which book do you mean? Learnability and Cognition. Learnability, right. Rather, there is a particular configuration of concepts: contact, motion, effect, manner, path, the coarse geometry of an object, such as whether it's extended in one, two, or three dimensions. With these primitives, you can characterize each one of these classes, and that's what the child learns. The list is an epiphenomenon of that conceptual structure, and the conceptual structure is what delineates that particular class of events. Right. For me, this was just a revelation of this huge world of conceptual structure that could be revealed through a seemingly silly little phenomenon of grammar. And I hope the next book that I write, in three years, is going to use language as a window onto the mind and try to lay out what the conceptual primitives and ways of combining them are. It's a field that, since the Lexicon Project, has been studied in a number of very interesting ways by new investigators. My former colleague Lera Boroditsky of BCS is here, and she's done a number of very interesting studies showing how the conceptualization of an event in terms of space and time can be affected by the way that a language expresses it. You have a question? Then we'll give the mic to this young man.

So I was wondering if you could do the following. I'm fascinated by this idea of an intuitive physics. Could you think of a verb as an operator, and the objects you're operating on as, say, a puck moving along the number line? So we can make an operator that smears the puck along the number line, a raising operator: it just adds one and moves you along. Versus an operator which hits you along the number line by adding 10; it makes you leap across the number line. And then you could combine them; you can name each of those and make sentences out of them.
If you made it five instead of 10, at what point do people switch between these kinds of classes of verbs? Can you make this intuitive physics precise? The interesting thing about intuitive physics is that, to a very large extent, at least the part of intuitive physics that language seems to interface with is digital. There's a kind of intuitive physics that you use when you chase a Frisbee, and that's very analog and ties into the motor system. But the part that ties into the conceptual system is very digital. Events tend to be dichotomized: in versus not in, near versus not near, affected versus not affected. And so there's a kind of conceptual analog-to-digital conversion that goes into language, and presumably into some interface layer between actual words and the raw perception of events, which are of course analog in reality. And there's a whole field of work in which both linguists and psychologists are trying to characterize the way we digitize things in conceptualizing them.

Given that children, at least at some point, do make these errors, how do they get from the point of making the mistakes to not making them? Yeah, that's a really good question, one that I agonized over for a while. I think that they make them sporadically, I think that adults make them sporadically, and I think that they don't store the errors. The constraints that I've talked about, on which verbs you can extend depending on their semantics, don't prohibit you from ever extending the verb. They just make it sound a little bit odd, a bit of a stretch. I think we still make these errors. You occasionally hear people make them; I have a list of these examples in the book that I recorded on little Post-it notes as I heard them, like "the water filling up in the basement really made me upset." I think kids, like adults, occasionally make them. If you look, as I did, at the number of errors of this kind that children make as a proportion of the number of opportunities to make them, you find that it's very, very small. So at no point is it a dominant tendency. Children tend to respect these constraints from the beginning and occasionally flout them; adults also occasionally flout them. And I think that, historically, the way a language often changes is that it becomes tempting to stretch one of these classes, simply because there's an event that is now easier to construe as, say, affecting the container as opposed to moving the content; then enough people start to hear it from one another, and the language changes and embraces a class that it didn't embrace beforehand.

So you mentioned that you don't like lists, and so somehow these descriptions are more appealing. But I look at the length of the descriptions, and they don't seem to be shorter than the lists themselves. And in fact, the descriptions are really complicated. They have "imparting force" and "ballistic" versus splatter, which, you know, kids understand splatter. So what makes you think that it isn't just memorizing the lists? Yeah, it's a good question. Two things. One of them is that the fact that children make these errors at all, and adults do as well, shows that it can't just be a list, because people leap to words that they couldn't have heard on that list.
If a child says "fill some salt into the bear," it can't just be a list acquired from parental speech, because most of the time a child will make it to the age of two and a half, or whatever age they make that error at, without ever having heard that from their parents. So there is a generalization tendency that shows that it isn't a list. But the question you ask is quite appropriate, namely: how long is the list of features compared to the list of words? The answer, and I haven't mentioned this so far, is that, interestingly, the features that language cares about tend to be reused over many, many classes and tend to be universal, getting back to Bob Sylvie's question. And there are very few of them compared to the number of possible verb meanings. I actually compiled an inventory of them. They are things like contact, motion, force, mediated versus direct application of force, one versus two versus three dimensional extension. The inventory is long in one sense, in that it runs to a couple of dozen features, but that's short compared to a vocabulary of, say, 100,000 words, multiplied by the number of languages there are. So the other interesting result of this kind of investigation is that the features are a fairly small subset of the meanings.

Why do you suppose natural selection would have resulted in this kind of distinction? Well, the one thing that's special about humans compared to other animals is that we acquire a lot of know-how. We figure out how the world works so that we can manipulate it to our advantage. We build tools, we develop recipes, we have intuitive theories, and so we can outsmart other animals. We can build traps and snares, and we can detoxify poisonous plants. But even though we can do it, most people are not enough of a genius to invent all these techniques like Robinson Crusoe on an island. We pool our expertise, and language is what allows us to do that. So I think we parse the world into causal forces that we can then manipulate, and the reason that we digitize physics conceptually, I think, is that in many cases, if you look at the effects of some continuous action, there's a discontinuity or a near-discontinuity in the effect. You have a marble on your hand, and if you plot what happens as a function of moving it continuously, at some point it falls. Likewise, there's a difference between being in and on, there's a difference between being near and not near. So we compress the high-dimensional space of physics into a lower-dimensional space of effects that we care about. That, I think, is the way we conceptualize the world in order to causally manipulate it in our imaginations. Language reflects that, and, as Lera would argue, can in turn affect it: as you learn words, they can cause you to re-conceptualize how you construe an event. It's part of the human lifestyle, I think, of being very good at manipulating the world and at pooling knowledge of how to do so.

You know, what I'd like to do: there are a bunch of topics that I think people would like to hear you talk about, and because the lexicon is so fascinating, we could devote the rest of the hour to it; we've got about an hour left. So, as a strategy, could I just mention a couple of other topics that I'd like Steve to talk about, and then we could come back to this? All right. One of the things that I'd like to ask you about, Steve, is David Marr, because there are not many people who remember him.
I know that your early work with Steve Kosslyn at Harvard was on vision, and in fact, I think when you and I first talked at the center, it was about your work on vision. Could you say a little bit about David Marr and his influence? Yeah. I've been, I guess, fortunate in my professional career to have rubbed shoulders with a number of real, I guess genius is really the only word, true visionaries in the study of mind. I was very lucky as an undergraduate at McGill to be there while D.O. Hebb was an emeritus professor but still hanging around, and he gave a couple of guest lectures. Hebb was not only probably the first neural network modeler (connectionists still talk about the Hebb rule of how learning and experience are stored in neural networks), but he was also a great generalist, and there was no aspect of psychology that Hebb didn't find interesting. He actually wrote an introductory textbook, a very eccentric textbook, which I still consult today. Then, when I was a graduate student at Harvard, B.F. Skinner was emeritus. And although I can't say that my own ideas were influenced directly by Skinner, he was certainly a stimulating and also big thinker, and I was fortunate to hear some guest lectures by Skinner. During my time at MIT, I guess there were three great visionaries among my colleagues at the Institute. There was me. That's right, oh, four. Four: Chomsky, Marvin Minsky, and certainly David Marr. Too bad his name isn't Marrsky. So tell us about each one of those. Well, I'll start with Marr. David Marr died at an unbelievably tragic young age; I think he was in his 30s. He was at MIT from, I believe, maybe 1977 to his death in 1980. He was also in the center. So I didn't know him as a faculty member here, because he passed away the year before, when I was an assistant professor at Harvard. But I did know him when I was a postdoc. And people were rightly in awe of him. Subsequently, I think, there was somewhat of a backlash, and deservedly so, because his ideas were probably deferred to too much, and some of his proposals did become dogma. But I think now the pendulum has swung in the other direction, and he isn't appreciated as much. I'll talk about work that I did that was influenced by Marr: in one sense, I guess, kind of reducing his stock, in that, despite what I had originally intended, some of the experimental data that I gathered weakened one of Marr's main theories; but in another way, it preserved another aspect of it. Let me just go to the right slide. Since this is vision, it's almost impossible to explain without visual aids. Well, those of you who have taken my classes know my belief: Jerome Bruner had this famous quote in Bartlett's quotations that any subject can be taught in an intellectually honest manner to someone of any age, and my philosophy is that any subject matter can be taught with the aid of either a Woody Allen joke, a lyric from a rock song, or a comic strip. So here's my comic strip on one of the main problems that David Marr addressed himself to, that is, shape recognition. It comes from the late lamented comic strip Robotman, now, I guess, called Monty, where our nerdy hero says: they say, build a better mousetrap and the world will beat a path to your door. Well, check this out. I call it the Rodent Annihilator 2000. It has a computerized mouse recognition system and a laser-guided mouse-destroying missile launcher.
It's kind of appropriate for MIT, isn't it? Here's how it works: the targeting system seeks out mice by scanning objects and identifying certain... knock, knock, knock. Mr. Montahue and Mr. Robotman, guess what? Mommy and Daddy are going to take me to Disney World. Target sighted, initiating missile sequence. Good Lord, Sally! Give me that hat and go, go! In the final strip we have Monty in the hospital: initially we feared the heat had permanently fused it to his head, but we now have a specialist from Zora who thinks he can remove it. Well, this is the problem of shape recognition, namely, how do you recognize a shape, like a mouse or a letter? The simplest mechanism anyone could think of would be a template: a stored representation that mimics the shape of the object. If it matches perfectly, a detector says, yes, I've seen that shape, kind of like Monty's mouse detector. If there's a mismatch in the shape, it doesn't work, and it gives a negative signal. The problem, of course, is that a template-matching mechanism will make errors, both of omission and commission. A P detector would give a false alarm to an R, but it would fail to recognize a shape that was shifted over a bit, rotated a bit, rotated in depth, of the wrong size (too small, too large), or with other variations. And as you can see with all of these A's, any of which a human can recognize instantly, there's no way that a template-matching system could detect all of them. Some of you who are familiar with spam-defeating technologies know that now, when you sign up for a free email account on, I think it's Yahoo, or one of the service providers, the way they can tell whether it's a human requesting the account or an algorithm that's just slurping up email addresses is that they'll present a distorted word, and you have to type in the characters of the word. If the user at the other end can do it, that proves he's a human being and not a spam email-address slurper. But how does the human do it? Marr's insight, one of the things that I think he's most famous for, is that here's a technique that could work. You first decompose an object into shape primitives, which Marr thought would most usefully be shapes that can be described by an axis and by a changing cross-section along that axis, what he called generalized cones. The shape is then described mentally by specifying the configuration of these generalized cones on a coordinate system centered on the object itself. So you describe the hand with respect to the arm, the arm with respect to the torso, the torso with respect to the body. The idea, let's skip over this, is that if you describe a shape by its configuration of parts with respect to an axis centered on the shape itself, then as the shape moves, the coordinate system moves with it, and the description of the shape remains invariant. You can contrast that with a viewer-centered reference frame, where you describe the geometry of the object with respect to your own coordinate system, namely up-down, left-right, and front-back. If you describe a suitcase as something like a block with an elbow on top of it, in viewer coordinates, then as soon as someone tilts the suitcase, it no longer matches the description, likewise over here, and, as with a template system, you would be agnosic for your own luggage.
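Here is a small numerical sketch of that contrast, under simplified two-dimensional assumptions of my own (the "suitcase" is just a part vector and a body axis): the viewer-centered coordinates of the handle change when the object tilts, while coordinates taken relative to the object's own axis do not.

```python
import math

def rotate(v, theta):
    """Rotate a 2-D vector by theta radians (the viewer watching the object tilt)."""
    x, y = v
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

def object_centered(part, axis):
    """Describe a part relative to the object's own axis: its extent along
    the axis and its signed offset across it (adding 0.0 normalizes -0.0)."""
    along  = part[0] * axis[0] + part[1] * axis[1]   # projection onto axis
    across = part[0] * axis[1] - part[1] * axis[0]   # offset from axis
    return (round(along, 9) + 0.0, round(across, 9) + 0.0)

body_axis = (0.0, 1.0)   # the suitcase's own "up", in viewer coordinates
handle    = (0.0, 1.2)   # the handle sits at the top of the body

theta = math.radians(75)                 # now tilt the whole suitcase
handle_t = rotate(handle, theta)
axis_t   = rotate(body_axis, theta)

print(handle, handle_t)                   # viewer-centered description changes
print(object_centered(handle, body_axis))  # (1.2, 0.0)
print(object_centered(handle_t, axis_t))   # (1.2, 0.0): invariant under the tilt
```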
Whereas if instead you describe the position of the handle with respect to a coordinate system centered on the suitcase itself, then as the suitcase is tilted and slanted, since the coordinate system moves with it, the description of where the handle is moves too, and you can match it regardless of the orientation. So it's a beautiful idea, and Mike Tarr, now a professor at Brown, and I tested it in a number of ways. I'll skip the text. The idea was that we taught shapes to people, shapes that were too complex to be recognized by a single simple feature, like a little curlicue. We would teach one of these families of shapes to people, give the shapes names, teach them in one orientation, and then probe people after we transformed the shapes in the picture plane or in depth, and see whether people could still recognize them. So for example, in one experiment, we would teach this shape over here, and after they had learned it and a number of other shapes, we would test them at different orientations and see how long it took them to name the shapes. We might, for example, present a shape at two orientations, zero and 105 degrees, and then probe them with the shapes at all of these other orientations. Now, if Marr was correct and we had a representation of shape that was invariant across all of these orientations, you should be equally fast at recognizing them at the other orientations, because the whole beauty of the system is that the description thought to be stored in the head is orientation-independent. Across all of those transformations, the description doesn't change, and so you should recognize the shape sideways, upside down, or tilted as quickly as at the orientations you learned it at. Alternatively, if you are good at shape recognition because you simply memorize what a shape looks like at all the orientations you've seen, so instead of storing one template, you store 50 of them for each object, then you should be much faster at the orientations you have seen than at the ones you haven't. The third possibility is that you have a canonical representation that is specific to an orientation, and if it mismatches, you then perform some transformation: you crank the shape to the upright, for example, and then see if it matches. These three theories make different predictions. What we found is that Marr's prediction of orientation independence did not turn out so well. Here's one study. This plots reaction time in milliseconds, how long it takes you to blurt out the name of an object when you see it, against orientation, from zero all the way around the circle to 360 degrees. These three squares are the orientations that people were trained on, and as you can see (low means fast), people are considerably faster at the orientations they had seen, and the farther away an orientation was from the nearest trained orientation, the longer people took. In some cases, rather than rotating to the nearest trained orientation, they would rotate to the upright orientation, which was psychologically more prominent. But anyway, this up-and-down profile suggests that, contra Marr, people don't actually store a representation that sloughs off orientation and therefore stays constant across those transformations. And Mike Tarr and my former colleague here at MIT, Tomaso Poggio, have since promoted theories of shape recognition that are in some sense the diametric opposite of Marr's, and posit that the brain really stores lots and lots of orientation-specific views.
You just know what something looks like tilted a little this way, tilted that way, sideways, upside down, and so on, and you have an interpolation procedure. So that's the way the field has moved, but I also think that Marr actually got a little more of it right than most of us are now giving him credit for, even with these results. So I'm going to show you one other result that shows that Marr was partly right. We had in our own data a kind of puzzling thing. This is another graph showing that the farther the shape is from where the subjects originally saw it, the longer they take, which is what you'd expect if they transformed it or had some interpolation procedure. But we didn't find that when we gave people symmetrical shapes. If the shape is symmetrical, you can recognize it upright, sideways, or upside down equally quickly, pretty much. How come? Well, one simple possibility: remember that you have to figure out what the significant axis of an object is, and it's not hard to build a symmetry detector out of a neural network. Maybe it's simply that when an object is visually symmetrical, you can zoom in on the axis, so you know where the top and the bottom are and which way to rotate it. But it wasn't that either, because if you have objects like this one, which are not really symmetrical, which are skewed, people's reaction times are still pretty much flat; that is, they recognize all orientations equally quickly. And even with an object that was not symmetrical or even close to being symmetrical, if we jiggered the training set so that the information on the left side of the object and the information on the right side were redundant, so that ignoring what was on the left or what was on the right still gave you a unique description of every object in the set, enough to distinguish it from the distractors, the same thing happened. So what was crucial was not geometric symmetry, but bilateral redundancy. If all you had to know in order to recognize an object was the order of parts from one end to the other, not caring about which was on one side and which was on the other, then you could recognize objects independent of orientation. And so the conclusion is that Marr was half right: we do have an object-centered coordinate system that can be mapped directly onto an object regardless of its orientation, and onto which a description of parts can be aligned. The only thing is, we can do it for only one dimension at a time. If all you have to remember is the order of parts from bottom to top or top to bottom, and that's enough to distinguish a shape from similar shapes, then orientation doesn't matter. If you have a set of objects like these, where simple bottom-to-top order of parts is not enough to distinguish, say, this shape from that shape, then you can't rotate both axes at the same time; you can't assign both axes to an object simultaneously. You've got to rotate it to a canonical viewpoint, namely one aligned with your body, and only then match it. Now, this is a very indirect, data-based argument. One of the beauties of research in both perception and linguistics is that you can often corroborate a point that you get by interpreting quantitative data with demonstrations that you, as both scientist and human being, can just see in the display directly. And that's largely what linguists do.
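A toy rendering of that "half right" conclusion, with invented part names of mine: if the one-dimensional order of parts along the object's own axis uniquely identifies each stored shape, matching can ignore orientation entirely.

```python
# Stored shapes as 1-D part orders along each object's main axis
# (part names are made up for illustration).
MEMORY = {
    "shape_A": ["foot", "elbow", "prong"],
    "shape_B": ["prong", "foot", "elbow"],
}

def recognize(parts_along_axis):
    """Match the order of parts along the axis, trying both directions,
    since a tilted or inverted view reverses the apparent order."""
    for name, stored in MEMORY.items():
        if parts_along_axis in (stored, stored[::-1]):
            return name
    return None

# An upside-down view of shape_A still yields its parts in (reversed)
# axis order, so it is recognized without knowing the orientation:
print(recognize(["prong", "elbow", "foot"]))  # -> shape_A
```

Two shapes that share this one-dimensional order but differ in what sits to the left or right of the axis would collide in this scheme; that is the case where, on the account above, you have to rotate to a canonical viewpoint before matching.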
Linguists don't need computers or quantitative data, because you present the sentence to someone and you can see right away that either it is well-formed or it's weird. And likewise in perception; as Yogi Berra said, you can observe a lot just by looking. In the case of shape perception, there are a couple of very nice demos, again, that show that Marr was, in large part, right. It's an old effect that Marr himself called attention to. Look at these two shapes here. In terms of Euclidean geometry, they are identical. Nonetheless, for one of them we have the English word square, for the other we have the English word diamond. And they look different. Even though they're the same in terms of their Euclidean geometry, mentally these are different shapes. In fact, it takes a bit of attention to notice that these angles are even right angles. If I were to distort this one a little bit, it would still be a diamond, whereas if I were to distort this one a little bit, it would very quickly change from square to parallelogram. So what that shows, first of all, is that when we recognize and distinguish among shapes, it's not just the geometry; it's also the way the parts are described with respect to an axis or coordinate system. Now, in this case, that doesn't prove that Marr is right, because the coordinate system here is just the retinal upright. So maybe it is specific to the way you see something, as opposed to being described with respect to a coordinate system anchored on the object itself. But here's a demonstration that shows that it isn't just the egocentric or retinal upright. This is an old display from a psychologist named Fred Attneave. The crucial thing to pay attention to is this guy over here. But even before you pay attention to that, notice that these things look like diamonds. And what do these things look like? Well, they kind of look like squares, even though, if you pay attention to any single one, they are identical. The reason is that you mentally describe this shape with respect to an axis defined by this row; therefore, these things are diamonds. Whereas for this guy over here, you mentally describe it with respect to a coordinate system aligned with that row, and with respect to that line the sides are perpendicular, so you see them as squares. And to kind of cap it off, the punch line is that this one over here can mentally flip. If you mentally group it with these three, it's a diamond. If you mentally group it with these eight, it's a square. And you can get it to go back and forth. So what's flipping back and forth there? What's flipping back and forth is the coordinate system with which you mentally describe it. And that shows that Marr did point to something psychologically insightful in saying that the perception of shape is not simply a matter of matching templates, but also depends on the larger context, which defines a coordinate system.

What about Minsky? Well, Minsky was part of a tradition that doesn't exist very much anymore, of kind of the computer scientist slash philosopher slash seer, a job description that Minsky pioneered, and there were a number of other people in the 70s like that.
What about Minsky? Well, Minsky was part of a tradition that doesn't much exist anymore, of the computer scientist slash philosopher slash seer, a job description that Minsky pioneered, and there were a number of other people like that in the 70s. And Marvin is one of the last. But Marvin, I think, was responsible, together with Herbert Simon and Allen Newell in the 1950s, for demystifying intelligence, which until then was this mysterious power that was completely inscrutable: it could only be explained as some gift of God, or it had to be written off entirely because it was inexplicable. And that, of course, is what Skinner did. He denied that mental states were scientifically tractable; since no one could figure out where intelligence, thoughts, memories, and images came from, a science of mind, according to Skinner, should just do without them. What Marvin argued, together with Newell and Simon and others, is that you can make sense of thinking as a kind of computation, not necessarily the kind of computation that your computer, your PC, does, perhaps a kind of analog or parallel computation, but that this takes a realm that formerly we just could not connect to the physical realm and lets you assimilate it to science. So that was the big idea for which Marvin and these other two deserve credit. Of course, Marvin also had many brilliant technical accomplishments, but in terms of the big idea, I think that's the one he deserves credit for. And then finally, of the trio, Chomsky. Certainly Chomsky has had an enormous influence on me. Many people that I speak to, journalists or people that I meet when giving talks and so on, assume that either I was a student of Chomsky's or that I worked with Chomsky, which I didn't. Chomsky was in a different department, and being a linguist, his methods tended to be different from mine; I was more of an experimentalist. But when I was a student, I think a freshman, I read an article in the New York Times Magazine, in I think 1972, which first exposed the world to the Chomsky revolution in linguistics. And I remember being utterly fascinated by it. Again, these were big ideas. And inasmuch as you can identify early influences, I tend to think that one of the reasons I went into cognitive science was reading about Chomsky in the New York Times Sunday magazine. Among the ideas that inspired me was the idea, allied to Minsky's, I guess, that you can think of mental life as symbol manipulation or computation. In Chomsky's case, there was the idea that the mind is a complex system composed of a number of faculties, language being one of them, and the idea that much of the organization of the mind is innate, an idea that was completely revolutionary and incendiary in the late fifties, when Skinnerian psychology ruled both in psychology and in a large part of linguistics. And Noam overturned that by saying that children are in some sense pre-programmed to acquire language. My own attempted contribution to this idea was to flesh out exactly what the child is born with, in terms of a learning algorithm that allows him to acquire language. And what also excited me about the whole Chomskyan oeuvre was the way it tied contemporary work on what makes people tick to ancient questions about human nature. That was another thing Chomsky revived: ideas that had really been dormant since the Enlightenment, about what a human is like and how that ties into our political arrangements, the way we conceptualize humans in the broadest sense.
An idea that, even though I found it tremendously exciting at the time, I didn't really come back to until my most recent book, The Blank Slate, which returned to some of these issues. A kind of consensus among intellectuals has been that the politically enlightened view is that the mind is a blank slate. That was an appealing idea because it would seem to negate the possibility that races or sexes or individuals could be innately different, or that there were constraints on the kind of society we could build owing to flaws in human nature. Chomsky turned that on its head and said, well, in fact, a blank slate can also be a reactionary doctrine, because it's a totalitarian's dream. If people really are blank slates, then a dictator is apt to think that we damn well better control what gets written on those slates instead of leaving it up to chance. And Chomsky's own libertarianism and anarchism owes in part, I think, to his conception of a rich human nature that can't simply be written on at will by political leaders. On the other hand, there are a number of regards in which my own work and my own thinking are quite distinct from Noam's. In terms of actual technical linguistic work, the analyses of language that I feel more comfortable with are not as abstract and far removed from actual surface forms as is typical in Chomsky's own theories. I certainly agree that you can't describe language at the level of just the surface order of words, that there are underlying mental representations, such as the semantic ones we talked about earlier. But those of you who are familiar with Chomsky's technical work in linguistics know that the analyses can get extremely abstract, far from anything a child actually hears growing up. To take a very simple example, one that I explored in some of my own work on language, here's a regularity. Among the English irregular verbs, you've got ring, rang, has rung; drink, drank, has drunk; sing, sang, has sung; and a few others. So it looks like there's a rule that says, for a certain subclass of verbs, to go from the base form to the past you lower the vowel, i goes to a, and to go to the participle you change it to u: ring, rang, rung. Well, here's another verb: I run, I ran, I have run, which fits the pattern 66.7% of the time. The preterite, the past tense, works; the participle works, ran, run; but the damn stem isn't what it should be. Now, according to a proposal by Noam and Morris Halle in their magnum opus, The Sound Pattern of English, the solution to this problem is that the underlying form, kind of the deep structure, if you will, of the verb run is rin, that it's actually stored in memory as rin, and that in addition to the rules that convert i to a and i to u, there's a special rule for run that converts the i to u even in the base form of the verb. Now, that's just the tip of the iceberg of the incredible intellectual beauty of The Sound Pattern of English and of much of Noam's work. But for me as a psychologist, it just doesn't sound plausible, because how is the poor child going to know that the underlying form of run is rin, and what good does it do him, since he's got to start with run and memorize it anyway?
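[As a minimal sketch, entirely my own and grossly simplified relative to SPE's actual rule system, of what that analysis amounts to computationally:]

```python
# Toy rendering of the SPE-style analysis: an ablaut subclass where i -> a
# in the past and i -> u in the participle, plus an exceptional rule that
# shifts the hypothetical underlying stem "rin" to "run" in the base form.

ABLAUT = {"past": ("i", "a"), "participle": ("i", "u")}
STEM_READJUSTMENT = {"rin": "run"}   # the special base-form rule for 'run'

def surface(underlying, form):
    if form == "base":
        return STEM_READJUSTMENT.get(underlying, underlying)
    old, new = ABLAUT[form]
    return underlying.replace(old, new, 1)

for stem in ("ring", "sing", "rin"):
    print(surface(stem, "base"), surface(stem, "past"), surface(stem, "participle"))
# ring rang rung / sing sang sung / run ran run
```

[Pinker's objection is visible in the sketch: the learner must posit the never-heard stem rin plus a readjustment rule, when simply memorizing run, ran, run would do.]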
So I think there's something of a disconnect between characterizations of language that are done with economy and elegance as the main criteria and those that are meant as the most accurate possible description of the mind of the child. That's one kind of example, and elsewhere, my first theory of language acquisition used a theory of language that was not orthodox Chomskyan theory but the variant that Joan Bresnan had worked out, precisely because there were fewer levels of abstract deep structure that were very different from what a child heard, and that made it easier for me to develop a theory that described what happened from the child's point of view. So that's a difference in linguistics. In terms of larger questions, say the place of language in the story of what makes us human and of human nature, one prominent difference is the role of evolution in understanding language. I think that if you're going to say that something is innate, it's then your responsibility to explain how we got it. How did it get here? And in the case of innate structures, especially ones that are said to be highly complex, natural selection is the only process we know of in science that can result in the evolution of adaptively complex innate structures. And it seems hard to deny that language is adaptive for a species like ours that lives by its wits, that exchanges know-how, that cooperates, and so on. So a complete story of language wouldn't stop with its being innate, but would then ask: what role did it play in human evolution? What were the selective advantages of having complex grammar as opposed to not having it? Chomsky definitely does not see it that way. Not only is he militantly agnostic about how language evolved, not denying that it could have evolved by natural selection but not seeing how any insight is gained by describing it that way, but he has also become, I think, increasingly hostile in recent years to the very idea that language is in some sense a system designed for communication. Now, for someone who doesn't know Chomsky's writings, this might almost sound like a mischaracterization or a strawman or a caricature, but I assure you it is not. Chomsky has written that language evolved for beauty, not for use, and in fact is unusable. It is very hard to know exactly what one could mean by that, especially given that the argument is itself couched in language. But indeed, Chomsky's skepticism about evolution extends far enough to say that there's nothing about language that's particularly well adapted for communication: people use it for communication in the same way that they use hairstyles or clothing for communication, but there's no sense in which you can understand its function to be communication. So that's an idea that I consider unhelpful, is one way to put it. But you know, I actually sort of like that idea, and I'm trying to figure out where the difference in our approaches comes from. And I wonder if it isn't this notion you mentioned, which for me was a real eye-opener: that the actual forms of language, for you, mean an awful lot.
And so if you are led to rin, you think, well, wait a minute, this is really going off the deep end. But notice the other things, like insane and insanity, the whole business of the vowel shift being there. That can't be an accident, you know? The Great Vowel Shift is a major phonological change that took place in English starting around 1400; actually, it took place between 1400 and 1700. The first change in the Great Vowel Shift was around 1400, which is why Chaucer is different from Alfred, for example. The next one was with Shakespeare, and then the final stage of the Great Vowel Shift took place, I think, in the early 18th century. In any case, the thing is that if you look at these historically, and then you look at modern English, the rules are still there. And you probably agree with that. I mean, as you said, it's a beautiful construct, SPE. So my feeling is, if you're going to buy that, rin is inexpensive. Yeah, I think there's an empirical question. You have regularities that are left over from historical shifts, and certainly the Great Vowel Shift, and your own work on it, is something that anyone interested in the English language should know about, because it affects so much, such as why we have such crazy spelling, which is not so crazy when you consider the Great Vowel Shift, and why English spelling differs from that of all the other European Roman alphabets. Right. So it does explain a lot. The question is, is the proper locus of the explanation in history, or is it still preserved in our psychology? That is, does ontogeny recapitulate linguistic phylogeny? Do children, in deducing linguistic forms, literally trace out the historical evolution of the English language, as Chomsky and Halle actually said, and as is a main part of the Chomsky-Halle theory? That doesn't necessarily follow, at least in the psychologist's sense of actual mental processes going on in the brain of the child, as opposed to a perspicuous account of what the child ends up with. And it's possible, I think, that some of these historical regularities are acquired not by retracing the history, but by learning the patterns as associations in memory, without having the original historical form deeply stored in memory and then retracing its history when you use or learn the word.
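[For reference, the canonical chain of the Great Vowel Shift, in simplified textbook form; the rough ASCII phonetic notation below is my shorthand, not anything from the talk.]

```python
# Simplified Great Vowel Shift correspondences: Middle English long vowels
# and their Modern English reflexes. English spelling was largely fixed
# before the shift finished, which is why the letters still record the
# Middle English values while the pronunciation moved on.

GREAT_VOWEL_SHIFT = {
    "i:": ("ai", "time: ME /ti:m/ -> ModE /taim/"),
    "e:": ("i:", "see"),
    "a:": ("ei", "name"),
    "o:": ("u:", "moon"),
    "u:": ("au", "house: ME /hu:s/ -> ModE /haus/"),
}

for middle, (modern, example) in GREAT_VOWEL_SHIFT.items():
    print(f"ME {middle:3} -> ModE {modern:3} ({example})")
```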
I wonder how you feel about this. As you were talking about the influence these people had on you, I was thinking: you never met Chomsky when you read that article. I was influenced by Chomsky because I had actually been invited to join RLE, and I'd never met him. I was invited up here, and I remember meeting Chomsky on the steps of Widener, and we went into the coffee shop across the street, which isn't there anymore, and he was telling me about his work on SPE. Well, I had just been given an opportunity to go to England, and I loved England, so I was going to go to England. But then I talked to this guy Chomsky, and he offered me a year here, and I realized I had to take it. No matter how much I liked England, this was somebody who was coming at language in a completely different, and for me startling, way. And I just wanted to share this with you. When I decided to come to MIT for a year, and by the way, the reason they were interested in me was that I happened to know something about the history of the language. I knew nothing about the theory, but I knew an awful lot about the history, and they were interested in looking at the history with somebody who knew it. What became apparent to me at that time was that there was a real watershed in the field, because if Chomsky and those guys were right, then all the people I had studied with were not merely wrong, their work was worthless. Now that's a terrible thing to happen, and that was the engine that ran the so-called linguistics wars. And this is something you missed, because I think the battle had already been won by the time you came into the field. But when I came into the field, I came in at a time when there was a fight, because there were still the older giants of the field, and then here's Chomsky, and if he was right... I mean, it's not bad to be wrong in a field, but it isn't too good to be worthless. Yes, no, of course not. Of course not. The problem is that what tends to happen as you get older is that younger generations start to say that about you. Yeah. Well, I remember what Noam said. I was once walking down Vassar Street with him, I never forgot this, and I said, Noam, who's going to win this battle? And he said he thought the issue was all over. I said, why? He said, because everything depends on what the graduate students ask, on what questions the graduate students are interested in. And that's really how my experience of the field went. But there was something I wanted to say to you, and that is that what I shared with you was this incredible eye-opening revelation that language was a symbol-manipulating system. When I read Syntactic Structures and saw his analysis of the auxiliary, it was incredible. There's a tremendous formal beauty to language as Chomsky and Halle analyzed it, and I think some of it might be elegant description of historical processes, but some of it is also a description of psychological processes, and I think the empirical question is: what is really a description of history, and what is really a description of psychology? Let me mention one other kind of connection, since it's become increasingly of interest to me as I've gone full circle back to the really exciting cosmic questions that were raised in that article I read as a freshman. I often get asked, do you also agree with Chomsky's politics? And the corresponding question is, what's the connection between Noam's theory of language and his political orientation, since both of them are so striking, so radical by the standards of, I guess, the larger intellectual context?
In my case, I was influenced by Chomsky's argument in a book that's not cited very much, but since it came out when I was an undergraduate, it had a big effect on me: Reflections on Language, which I think is one of Chomsky's most interesting books, because it lays out the connections between his theory of language, his theory of human nature, and his theory of politics. And I think it's the theory of human nature that links the language ideas and the political ideas. Again, a major argument of The Blank Slate is that a lot of people's reactions to theories of psychology arise because those theories seem to embody pictures of human nature with political consequences that people are either uncomfortable with or embrace. In the case of the link between his politics and his language, he properly says these are logically independent; one should not be judged in terms of the other; one can be right and the other wrong. But there is a discernible thread, and the discernible thread for Chomsky is his politics: he describes himself as a libertarian socialist. For many people that's an oxymoron. But also an anarcho-syndicalist. Yes, anarcho-syndicalist, that is, a kind of left-wing anarchist, as opposed to a right-wing, Ayn Rand, capitalist anarchist. The idea is that people have a spontaneous tendency to cooperate, to produce for the sheer sake of it. We're just creative, productive organisms that have an autonomous need to express our thoughts, to create works of art and works of science, without regard for the reward or the consequences. First of all, the only way you can be an anarchist is if you believe that people are naturally good, as opposed to, say, being a Hobbesian and believing that people without a Leviathan to control them will be at each other's throats. So you have to have a somewhat romantic view of human nature, and Chomsky does; he traces some of his ideas back to Rousseau and the doctrine of the noble savage. And if you are not a capitalist, if you believe that people don't have to be motivated by wages or profits, you have to have a conception of the human on which we have an endogenous, spontaneous need to create for the sake of it. In fact, the early Marx, whom Chomsky also cites favorably, had this as part of his theory of alienation: that there's a natural human tendency to produce and cooperate, and that the social institutions of capitalism suppress it, along with the natural human tendency to affiliate, to form harmonious communities. So I think that's the deepest root of Chomsky's belief system. On the one hand, it leads to an anarcho-syndicalist politics. On the other hand, it leads to a conception of language that emphasizes productivity and creativity, the finite algorithm that can generate an infinite number of products. And, sorry, the third thing is that it also leads to a view of the evolution of language that de-emphasizes the utility of language as a system of communication and says instead that language is not for communication, communication being something where you expect some effect, where you do it in order to get some effect back from the person with whom you've shared information; rather, language is just an endogenous system for externalizing or expressing thought.
And language, not being useful but simply being an urge to create or express, therefore can't be explained in terms of its beneficial consequences, which of course is the essence of Darwinian natural selection. Things are selected because they're useful; they get you things. You tell someone else where the berries are, they tell you where the fish are, both of you are less likely to starve, you have more babies, and those linguistic abilities are passed on. That's the opposite of Chomsky's view, where language has nothing to do with finding out where the berries are; it's just an urge to create. And I have to say that that's a view which is fascinating and in some ways beautiful, but it's very different from my view. Rooting my own nativism more in evolutionary biology, I'm more impressed by the Darwinian arguments and evidence about where innate things come from. That makes it very hard to think of humans as developing any complex system outside the evolutionary process, but it also leads to a more Hobbesian view of human nature. I'm not an anarchist; I think liberal democracy is a very good thing, partly for reasons that Hobbes pointed out, namely that, as I think the empirical evidence suggests, anarchy in the dictionary sense of no government leads to anarchy in the vernacular sense of violent chaos. And so politically I certainly part company with Chomsky, not being a radical or an anarchist but being a moderate and a big fan of liberal democracy. And it's because the view of human nature that impresses me is one that emphasizes that, although we do have some tendency to cooperate, there's also a dark side to human nature. And that's the fundamental split. I think your description of Chomsky's focus on the creative impulse in human beings, the way you described it, is really a pretty good description of the Romantic movement. And if you were to place Chomsky in intellectual history, he is in that movement. I don't mean romantic in the narrow sense, but rather in the broader sense of life essentially being a generative thing, and even works of art, the way Coleridge talks about them, as essentially being generative: they come from an impulse, and if they don't grow organically, they don't work. And that whole notion, of course, is part of it too. You know, there is something that I'd really like to hear you talk about, and I hope the audience will indulge me. David tells me that we can go on to 7:15, and that will leave some time for questions, but I'd love to hear you talk about your teaching experience at MIT. I know that for you a watershed was probably when you took over Introduction to Psychology. Could you tell us about that? And also, what your view is of what MIT thought about you becoming a quote popularizer unquote. Yeah, one thing I've learned from speaking to people outside the university is that many of them take a rather dim view of how modern research-oriented universities function. And so when I told people that I taught Introduction to Psychology, this large lecture course, the reaction was always: oh, how come you got stuck with that? Did you draw the short straw? Isn't that what they give to graduate students and first-year assistant professors who can't say no? But what I'm very proud to say is that nothing could be further from the truth.
I'll find out what it's like at Harvard, and I can't say what's true of other major research universities, but at MIT I got enormous support and appreciation for doing this, not because it was a dirty job that someone had to do. I originally took it over as part of a deal so that I could relinquish another course I'd gotten sick of teaching, and also because I had written a book, How the Mind Works, whose subject matter gave me lots of cartoons and jokes that I could use in lecturing. But once I took it over, I realized what enormous regard teaching is held in here at MIT. I found that at every level of the administration and in my own department, the department head, Mriganka Sur, would give me anything I needed, would support me in any way I needed, in order to make the course prosper. Bob Silbey, the dean's office, the president, Chuck Vest, the various resources in support of teaching; Les Perelman and the writing program really taught me a lot about how to teach writing and how to organize a course, instead of just getting up there and talking with a piece of chalk, about ways in which you can bring university-level pedagogy up to modern standards, to the best way we know of instructing undergraduates. The meetings of the MacVicar Fellows; every semester there was a meeting of the professors of large lecture courses where we would get together. I just found resources at every level, emotional and intellectual and financial support for teaching, that certainly give the lie to any thought that teaching is downplayed, at least at this major research university. And in fact, when you write a book and go on a publicity tour, you have a lot of very odd experiences. One of my surreal experiences was being interviewed by, now, younger people in this audience may not recognize the name, but anyone who was an adult through the 1970s will, G. Gordon Liddy. Whatever happened to G. Gordon Liddy? The answer is that he has a talk show, and I was on it. And he said at the end of the interview, I'd like to offer my appreciation to MIT for getting one of their well-known faculty to teach introductory psychology instead of fobbing it off on some graduate student. Now, I don't know if a compliment from G. Gordon Liddy is what we want. But this shows, first of all, that there is a misconception that teaching is at the low end of the totem pole, and it is something that MIT really should be proud of and should make known, because it does impress people, rightfully so. Likewise, another question I get asked, also I think reflecting a misconception, certainly of MIT and I suspect of other major universities, is: did your colleagues look down on you for popularizing your field? Isn't academia an ivory tower in which, if you spread ideas, that must mean you're dumbing them down, and you lose status because people can understand what you're saying instead of your writing in gobbledygook? And again, nothing could be further from the truth. From every one of my colleagues, from every level of the administration, Emilio Bizzi, who was the department head for many years while I was in BCS, Mriganka Sur, Bob Silbey, Chuck Vest, Phil Khoury, I got an enormous amount of support for bringing knowledge to a wide audience, and it was most definitely not something that was looked down on, quite the contrary. I did the math in terms of how long I had been here.
The question was, why did Steve leave MIT? Well, it was absolutely the most agonizing choice I've ever faced in my career. I'm an extraordinarily lucky person to be faced with that dilemma, and I literally lost many, many nights of sleep worrying about it. I think at some point it can be beneficial to have a new set of colleagues, a new set of students, a new environment. I was recounting an anecdote to another colleague about my own university experience. I said, you know, when I went to college, I had some of the same professors that my mother had when she was in college, and I said, that just shows how superannuated and stodgy the department was. And then I thought back, and I thought: I've been at MIT 21 years. Some of my early students could have had children that I taught later. And I realized I had been here a long time and that a change probably would be intellectually stimulating. But, again, to divulge what a lot of reporters asked me: everyone was sniffing for another Cornel West soap opera. What was your grievance? Who insulted you? They were desperately hoping for something juicy, and the answer was incredibly boring, namely that I really do love MIT. It was wrenching to face this choice, and I did it just because I thought it was time for a change of scenery. But it's a place that I have immeasurable admiration for, that I'll miss, and, as David mentioned, I'm eager to continue to be part of the community and I hope I will. In his five-volume autobiography, Leonard Woolf said that he changed careers every seven years, and it's what kept him young. When I knew that you were going to Harvard, I completely understood it. Change is very important. You know, it's now ten minutes to seven, so why don't we throw it open to the audience for some questions? Is there a microphone? He has one coming down. And then there's somebody up here. Yes, bring it down here. There's a question here. I just wanted to go back to the discussion you had about your differences with Chomsky in thinking about language. I'm wondering, why isn't there some middle ground here about the development of language? Maybe language was born evolutionarily as a method of expressing creativity, singing, whatever, and by accident, like a spandrel, it became important for communication, and then natural selection, which doesn't have a purpose, it just keeps going, made it into what it is. Yeah. Well, there's a sense in which evolution has to work like that, in the sense that the initial variation can't be for anything. If it were, we'd be back to Lamarck, with a kind of felt need where the variation is useful from the get-go. Clearly the variation has to start out not being for its ultimate function. I guess the reason I wouldn't put it that way is that the system for expressing thought that language gives us has enough complexity that it couldn't have arisen just out of random mutation and sexual recombination. It's too organized.
And since there is no reproductive benefit in simply externalizing thought for its own sake, whereas there is a reproductive benefit to striking bargains and exchanging know-how, it seems more plausible that the initial function was communication, and that sheer expression, as in poetry, oratory, language as an art form, was the spandrel from communication, rather than the other way around. Especially since the other consideration is that language has costs. It doesn't come for free, so there has to be some benefit to pay for the costs. One of the main costs is the anatomy of the vocal tract: we humans have a larynx that's considerably lower in the throat than other mammals'. We're the only mammals that can't breathe and drink at the same time, for example, and we're at risk of choking, because every bit of food we swallow passes over the opening of the trachea with some chance of getting lodged in it. Until the Heimlich maneuver was invented, a lot of people did choke to death. And the reason we needed the Heimlich maneuver is that we have a vocal tract that seems to be adapted for language at the significant cost of a risk of accidental death. So it could be possible, for example, that the first language was sign language? Well, it's interesting that you raise that. It's not a crazy idea. It's an old idea; I think it was a man named Hewes, H-E-W-E-S, who originally proposed it, but it's recently been revived by the New Zealand psychologist Michael Corballis in his book with the wonderfully witty title Hand to Mouth, where he argued that gesture was an intermediate stage. Now, this is something we can't really know; it's certainly beyond what we currently have evidence for. But there's some circumstantial evidence: the fact that chimpanzees do have hand gestures that are somewhat communicative; the fact that sign language is so easy for deaf children to acquire, with apparently no penalty compared to spoken language; and the fact that much language use involves, look what I'm doing, a hybrid of speech and gesture. In some cases it's almost impossible not to use your hands. One example: what's the definition of the word spiral? Try to define that without your hands. Right. So these lend plausibility to the conjecture that gestural communication came first. I never thought it was a crazy idea, for just the reason you said, namely that this particular part of the anatomy is so badly designed for speech. Who's got the microphone now? When you want to ask a question, raise your hand high so that the people carrying the mics can see you. Yes? I've heard the word child quite often during the past hour. Does an adult learn language in a fundamentally different way than a child? Yeah, it's a really good question. We know that adults are not as good at it as children. Adults are much more likely to be saddled with an accent, that is, not to acquire the phonology. Adults often give themselves away with little turns of phrase and quirks; you can understand them, but you know that a native speaker wouldn't say it that way. So some of the syntax and inflection is harder to learn after middle adolescence, say 14 or 15 years old, probably because of some change in the plasticity of the brain that occurs with biological adulthood.
I think the effects are even stronger than we see by studying people learning a second language in adulthood, because adults learning a second language actually do better than one would expect: they have a first language to fall back on. Much of learning a second language in adulthood, I think, is mentally translating from your first language, or using your first language as scaffolding. There are two reasons I think that. One is that, notoriously, adults make errors in a second language that would be grammatical in their first language. The other comes from the study of deaf people acquiring sign language, which allows comparisons that would be unethical to arrange in the case of children spontaneously learning a first language, because deafness is one of the only cases in which you can study people who've made it to adulthood without learning a first language, if the deaf person has been kept away from a signing community. And one lovely study, very rarely cited but I think very informative, compares people who were deafened in adulthood and are learning American Sign Language as adults with people who were deaf from birth and kept away from signing communities. Often they were raised in oralist schools that believe it's bad for children to sign, and they then gravitate to a deaf community and learn sign language as adults. So you control for age: two different populations, both acquiring sign language for the first time as adults. One of them had English, lost their hearing, and is now learning ASL as a second language; the other didn't have a first language and is learning ASL as a first language in adulthood. And the control group is native signers, who learned ASL as children the way all of us learned English as children. Of course the native signers did best. Of the other two groups, almost paradoxically, it's the people who were hearing, who lost their hearing as adults and are learning sign language as adults, who did much better than the deaf-born individuals learning sign language for the first time as adults. And the reason, presumably, is that having a first language, even a spoken one, makes the second language, sign language, easier to learn. So this suggests that there would be an even bigger difference between adults and children if we looked at adults who couldn't fall back on an earlier language to help learn the second one. I have a question about, basically, articulation. You two seem to be very articulate people, but the rest of the world isn't so articulate all the time. We don't all have the vocabulary necessary to express the more continuous and analog thoughts in our heads, the visual imagery we have, and to put it out into the world, into books and papers and things like that. Not everyone has that same vocabulary. So my question is: how binding is language? Over your years doing research on language, meeting people, doing all these kinds of experiments, how binding is language, given that you have to express the somewhat continuous and analog signals in your head in a finite basis of words, your vocabulary? Does language prevent you from thinking certain thoughts? I mean, it's sort of an iterative process, in that sense: if you could go on just thinking your whole life and never have to speak, then maybe you could think crazy things.
But if you're used to thinking in language, because that's how you were taught, then it goes back and forth, and slowly your thoughts start to fit the language and the language starts to fit the thoughts. So I guess my question is: do you feel it's binding at all? As very articulate people, you have a rare gift for expressing exactly what you think on paper, and when we see that, we think, wow, that's beautiful; that's a piece of language you're going to remember, because it coincides with something you were thinking about, but in a way you probably wouldn't have thought of expressing it. For the rest of us, who don't have that kind of vocabulary and can't express it as well, I just wonder how binding language can be. I guess I wouldn't think of it as binding, in the sense of preventing you from thinking in certain ways, even if you have a smaller vocabulary than someone else. Partly that's because thought is so rich and so analog. Language multiplies the value of thinking, because you can acquire concepts from the collected wisdom of other people that would never occur to you on your own, and language can cause you to pay attention to aspects of the world that you wouldn't otherwise notice. The idea that language is binding is very popular in many of the humanities: language as the prison-house of thought, the claim that we can't think except in the categories that language makes available. The thing is, language is a moving target. Language is always changing. That's why you have language mavens who are always decrying the decline of the language, because the kids are inventing slang and we don't know if we can understand them, and there's jargon, and there's drift in the language, as Jay pointed out. The fact that language can be stretched, by borrowings from other languages, by metaphors, by metonyms, by neologisms, suggests to me that language really doesn't confine us; rather, if there's something we have trouble saying, we change the language rather than being unable to think it. Hold it, you can't talk without a microphone. Bring him a microphone. Here, give him the microphone. Okay. I would be remiss not to mention why we have writing programs at universities: namely, to expand the expressive power of people by getting them to use language in more expressive ways, and to remove whatever bounds language might impose on them. But I wonder, do you remember Chomsky's Killian lecture? I think he raised the possibility in that talk that the human mind may have the ability to have thoughts that are inexpressible, that at a very abstract level we can imagine cognitive structures that natural language, as effective as it is, cannot express. It's an interesting notion. To me, the point that verbal power is a skill you can improve is part of the answer to that. So part of the answer, as David emphasizes, is that it's a skill you can improve. But also, there clearly are thoughts that you can't express in language. First of all, that's why we have mathematics; we have other notations, precisely because you can't express those things in ordinary language.
That is really very interesting, because what that suggests is that mathematicians come up with all kinds of systems, and every so often a mathematician will discover that one of them has an analog in the real world, and the only thing that prevented him from seeing it before was lack of imagination. And that's the driving thing behind it, I think: imagination. Listen, oh yes, there's a question in the back. Okay, question in the back, and then you. I'm going to be a martinet: at 7:15 we'll stop. My question goes back to some of the Darwinian concepts you were talking about before. I'm wondering if there is an evolutionary explanation for why Chomsky is so romantic. Why Chomsky's what? Why he's so romantic. It's certainly not an evolutionary one, but I think these are great themes in our intellectual tradition that often differentiate thinkers, and in particular, if you want to make a really rough cut, differentiate people on the political left and people on the political right, who have different conceptions of human nature. The debate goes back hundreds, maybe thousands of years, and one of the chapters in my recent book, The Blank Slate, is on the implicit theories of human nature behind left-wing and right-wing political ideologies. I should mention, by the way, that although there's a thread that connects Chomsky to Humboldt and Rousseau and the young Marx, Chomsky is in many regards a very unconventional thinker, in that the conventional alignment in most of intellectual history is that it's the thinkers on the right who have a strong conception of human nature, and the thinkers on the left who are much more likely to embrace a blank slate. One of the reasons Chomsky was so radical in the 60s and 70s is that he upended that equation: here was a guy who was firmly on the left and who had a rich conception of human nature. Many people, especially in France at the time, just couldn't wrap their minds around that, and they denounced Chomsky, insisting he must have had some hidden right-wing agenda for being an innatist. So Chomsky, threading this intellectual needle, exposed a kind of novel line of thought, because the later Marx believed in something much closer to a tabula rasa, and traditional right-wing doctrines, the need for a tough law-and-order policy, the need for a strong military, a distrust of utopian thinking, all come from a dark view of human nature that says we are naturally selfish and violent, that's why we need a Leviathan, and that there's something in human nature that prevents us from ever achieving a utopia. So Chomsky, here as elsewhere, is in a sense sui generis. Yes. Okay. You've mentioned a couple of times that you think language is one of the defining human characteristics, and so on and so forth, but there are a number of people who would disagree with that and say that's a sort of species-centered chauvinism, and that in fact there are animals capable of language, like chimps and gorillas, which, just because they don't have the vocal tracts, can use sign language and that sort of thing. And what I've gotten from your writing is that you seem to think that most of the complexities of our language are in fact unique to humans. I'm not saying I disagree with you, I actually do agree with you, but I'm wondering if you could talk about why exactly you think they're specific to humans.
Yeah. For one thing, the human abilities that so impress us emerge spontaneously in any normal child in a natural human community, whereas the abilities of chimps that we're even tempted to compare with humans are the result of intensive training by humans hell-bent on pushing them to their limits. So it's not a level playing field to begin with. But even with that difference, the ability of trained chimps to use grammar, in the sense of expressing meanings that depend not just on the meanings of the individual symbols but on the way they're combined, is extremely rudimentary even on the most charitable interpretations. If a chimp uses one sign, the sign for an action, before the sign for an object, say, 60% of the time, whereas chance would be 50%, that's what the chimp fans call evidence that chimps can master grammar. In the case of humans, not only do you have much more consistent use of order, but ordering one word with respect to another is just the tip of an enormous iceberg in terms of the ways in which we can express thoughts using syntactic structures. So I don't think it's meaningful to ask whether other animals have language, because that is simply a semantic question of how broadly you want to stretch the word language. But if you ask whether other animal communication systems, ones that develop spontaneously in them the way language develops in us, work according to the same principles as human language, I think the differences are rather extreme. Probably the two main ones are the dependence on combination as opposed to individual signs, that is, the central importance of grammar, and the way that we refer to things independent of our immediate emotional state or demand or desire or drive, whereas the vast majority of animal communication is not referring to something but rather expressing some emotional attitude towards it.
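[Just to put a number on how weak that kind of evidence is, here is a quick sketch with made-up figures, not data from any actual ape study.]

```python
# How strong is "60% versus a 50% chance baseline" as evidence for word
# order? Even with 100 hypothetical two-sign utterances, it is marginal.

from scipy.stats import binomtest

n = 100   # hypothetical number of action-object utterances
k = 60    # action sign came first 60% of the time
print(binomtest(k, n, p=0.5).pvalue)   # ~0.057: not conventionally significant

# A child's consistency is closer to k = 95, for which the same test
# gives a vanishingly small p-value (~1e-22).
print(binomtest(95, n, p=0.5).pvalue)
```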
I think a lot of people find cognitive and linguistic science very fascinating, but one of the things that frustrates them is that great thinkers can have such radically divergent, if not mutually exclusive, ideas. This is very different from the way, say, Einstein expanded on Newton's ideas by refining certain subtle notions of space and time. You and Gould, for example, just have very different ideas on the role of evolution in language, and Chomsky's notions of language are very different from those that preceded him. So this gets to the issue of hard science, soft science, and predictability, and what I'm essentially asking is: do you think that with the advent of technologies like fMRI and the accumulation of huge computer databases of all kinds of languages across the world, there can be some strong, non-trivial predictions made in the future that might distinguish or dissociate some of these internally consistent but mutually contradictory hypotheses that different thinkers have espoused in explaining these phenomena? Oh, well, definitely. I'm certainly an optimist about making ideas more testable and empirically responsive. And already, it's not as if these are just people shouting different opinions at one another; even these debates as they stand appeal to studies of animals, of anatomy, of neuroanatomy, of language structure, of linguistic experiments, and so on. So it's not just people yelling at each other. In terms of how new techniques might further resolve this, functional neuroimaging might be one way, though I tend to think that genomic analyses might be even more informative. I have in mind a study that came out a year ago in Nature, relevant to the issue of whether language was a target of natural selection or arose by chance or as a byproduct. It was an analysis of a gene that was isolated and sequenced and that is responsible for a disorder of speech and language in a three-generation family in England, where the gene and the disorder correlate perfectly, one of the rare cases in which that is true. The gene has been sequenced, its homologs in other mammals have been sequenced, and its worldwide distribution in different ethnic groups has been ascertained. The interesting findings are that it has a homolog in monkeys and mice, as virtually all of our genes do, but there are some sequence differences between the human version of the gene and the mammalian version. The differences are functional, that is, they change the shape of the protein and its function. And they're uniform across the species: every person other than the afflicted members of this family is monomorphic, has the identical sequence. Now, there are a number of techniques by which you can look for signs of selection if you know a sequence, by comparing the number of substitutions from an ancestral form that don't change the function of the protein, which gives you a baseline for mutation and drift, with the number that do change the function. If there are more changes in the genetic sequence that alter the functioning of the protein, and they're uniform in the daughter species, then you can statistically rule out the possibility that the gene evolved by chance. And that is the case for the normal version of this gene, which seems to be required for normal language function. I take it as the first of, I think, an increasing body of evidence from genetics that supports the idea that language was a target of Darwinian natural selection.
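[The study described is the work on FOXP2. The logic of the selection test can be sketched with toy numbers; the counts below are mine, invented for illustration, not figures from the paper.]

```python
# Toy version of the dN/dS logic: synonymous substitutions leave the protein
# unchanged, so their rate estimates the background of mutation plus drift;
# a significant excess of function-changing (nonsynonymous) substitutions
# that are also fixed across the whole species is a signature of selection.

synonymous = 4         # substitutions that don't change the protein
nonsynonymous = 12     # substitutions that do change the protein
syn_sites = 400        # sites where a synonymous change was possible
nonsyn_sites = 600     # sites where a nonsynonymous change was possible

dS = synonymous / syn_sites        # background rate: 0.01
dN = nonsynonymous / nonsyn_sites  # functional-change rate: 0.02
print(f"dN/dS = {dN / dS:.1f}")    # 2.0 > 1 suggests positive selection
```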
Last question. Could you please address what I see as a morphing of the whole religious notion of the blank slate and the ghost in the machine into a new kind of anti-determinism, an appeal to free will and democratic human freedom, among neoconservatives, and into what we see as a kind of new imperialism? Do you see that as a new spin on what in the old days was just determinism versus the blank slate and the ghost in the machine, now made to appear as though it's about human freedom, that we are fighting for freedom and transforming human society by giving freedom to people by going to war for them? Does that make sense? Well, I'll answer something that I think is close to that, and you can tell me whether it's answering your question. That is: is the new emphasis on human nature reviving older right-wing views of human nature as imposing constraints on the kind of society we can hope for in the future? And, pace Chomsky the left-wing innatist, are other forms of innatism part of a vast right-wing conspiracy, as Hillary Clinton would put it? I don't think so, for a couple of reasons. The main one is that modern conceptions of human nature arise in a scientific framework, embedded in a larger biological framework that has natural selection as the main mechanism of evolutionary change, and that is not going to win a lot of friends on the American right. In fact, evolutionary psychology and sociobiology have been denounced not only by the 1970s-era academic left, the Science for the People movement and so on, but also by the right, because they are godless, they are materialist, they say we're just our brains, that there's no such thing as an immortal soul, that we aren't products of divine creation, that humans weren't put on earth to carry out God's purpose but rather come with evolutionarily installed aims. So the right considers this quite nihilistic and immoral, and it does not make natural bedfellows with the contemporary American right. If there is any sympathy, it's with a more traditional Edmund Burkean right, which is very different from the kind of right in the Republican Party today. Also, Chomsky isn't the only left-wing innatist. I have a kind of tour of the Darwinian left in one of the chapters of the book, which includes people like the philosopher Peter Singer, who has a book called A Darwinian Left, and left liberals such as Robert Wright, Melvin Konner, and others who try to use findings about human nature to support what they would argue to be progressive thinking. One example is refuting the notion of economic man, the utility-maximizing agent at the heart of microeconomic theory, which in turn is at the heart of a lot of market-oriented policies. So that's one of the ways in which human nature is being used to try to advance a more left-liberal agenda. Steve, I know you will join me in thanking the audience for asking such a wide spectrum of interesting questions. And audience, I know you'll join me in thanking Steve for answering them. Thank you. And thank you, Jay, for emceeing the evening, and David for making it possible. Thank you.