OK, good afternoon. Welcome to CS61A session two. That's Jordy; he's the head TA. If you have a problem, talk to him, not me. There we go.

OK, so apparently something I said last time started a huge panic, and like half of you showed up at the Self-Paced Center wanting to know about 3S. And I'm trying to figure out what it is that I said that's different from every other time, and I can't think of anything. What I did say was: if recursion, that example up there we looked at that's recursive, is new to you, then you don't have the programming prerequisite. But I'd be really surprised if so many more people this semester than usual haven't seen recursion before. So if you have seen recursion before, but it's just that the course is going fast, well, the course goes fast. You have to get used to that. So try not to panic. What you do is: do this week's reading, do this week's homework. If you don't understand something, ask for help. And then if you're still feeling lost, then panic. OK, who still doesn't have a computer account? Good.

Ergonomics. Right now, as we speak, there's a big discussion going on on the computer science education mailing list about computer ergonomics, because apparently a lot of people your age are reporting wrist problems. How many people have wrist pains regularly? How many people have wrist pains sometimes? OK, that's serious. Speaking as an old guy with problems in this respect, it is way, way easier not to get repetitive stress injury in the first place than to fix it later. So at the first twinge of wrist pain, go over to Tang and see a doctor about it, and change the way you work on computers. We think, we the world think, that mostly when people your age have wrist problems, it's because of video games, not because of computer programming. But the same principles apply. The most important principle is: take a break every half hour, every 20 minutes, something like that.
Other important principles: there's actually two pages of ergonomic stuff at the back of course reader volume two, which you should read. Take that as homework. But one thing is, how many people use a laptop as your main computer? Wow, OK. Laptops are an ergonomic disaster, because either the screen is too low or the keyboard is too high. That's OK out in the field, but when you're at home working on your computer, you guys with laptops should invest in at least one of an external keyboard and an external monitor. You can get an external keyboard and then put your laptop up on top of a cardboard box turned upside down on your desk, so that your eyes are pointing straight ahead; or put a monitor on a cardboard box and have the keyboard down at your table, so that the upper parts of your arms point straight down, your forearms are horizontal, and the angle between them is a right angle, or a little bit more than a right angle. So anyway: ergonomics, and do wrist exercises and stuff like that. Please don't develop repetitive stress injury. It's a really bad idea.

OK, last thing of this sort. Last time, understandably, about a million of you came rushing up to the stage at the end of class to ask questions. That's really not a good idea, because there's another class coming in, I'm in a hurry to get things put away and get out of their way, and I get kind of snappy with you. So there's two kinds of questions. If what you have is an administrative question, like "can you sign my form" or something, come to my office. That'll be better, because there I can look you up on the computer and see what your situation is and all that. If you have an intellectual question, stick your hand up in class and ask it. There's two reasons, I think, that people don't do that. One is, those of you who were brought up polite to your elders are afraid that I've made a mistake and you don't want me to look bad. So it's OK.
If I have, in fact, made a mistake, which does happen once every other semester or so, I would much rather that the whole class hear about it than that you all go away thinking the wrong thing, or being confused because what I said didn't make sense. So if you think I'm wrong about something, stick your hand up. The other reason people are afraid is they think they're going to ask a stupid question. So here's the story about stupid questions. What's the very worst thing that can happen because you stick your hand up and say something stupid in class? Here's what it is. You stick your hand up and I say, yes? You say something really stupid. I give you a sarcastic answer. Everybody laughs at you and you feel like an idiot. OK, that's the worst thing that can happen. Doesn't that happen every evening in the dorm anyway? Whereas what if you don't stick your hand up and ask the question or say the stupid thing? What's the worst that can happen then? Well, you go on not understanding whatever it is. You fail the course. You drop out of school and you spend the rest of your life on Telegraph Avenue asking for change. So it's really worth asking the question during class instead of coming up afterwards to ask it.

On your homework: your name, your login, and your TA's name. I said this last time, I think, but let me say it again. The physical piece of paper that you turn in for homework one is the way that each reader knows which students he or she is responsible for grading. So if we can't figure out who you are from the piece of paper, nobody's going to grade your homework. And that's another way to flunk the course. So that's important; do all that stuff.

OK, Pig Latin. Just one more time quickly. The crucial point here is that this is a recursive call: we're calling the same procedure that we're writing. How can that work? Well, the right way to think about recursion is this. Inside the computer there are a lot of little people. That's how computers work.
And each little person is an expert on doing something. So we have member? experts and butfirst experts and word experts and if experts. And we have pigl experts. And when you make a recursive call, what you've done is you hired Peter to do pigl of "scheme". And Peter hires Paula to do pigl of "chemes". And Paula hires Paul, who hires Pamela. And each little person has its own idea of what the task is. And finally, one of them has a task where the word starts with a vowel. And then that one reports back to the previous one, who reports to the previous one, and so on. So that's the way to think about recursion, and then it doesn't seem like you're cheating. What you don't want to do, and this is really, really important: any time the words "go back" appear in your head, as in "you say pigl down here and it goes back to the beginning," root that out. If you think like that, you're not going to understand recursion, you'll be confused, and your life will be wrecked.

The other thing I said about this example is that if is a special form. That's important because when we get to the base case, namely when the base-case test returns true, we want to compute this non-recursive thing, and we don't want to compute this recursive call. Whereas if if were an ordinary procedure, the way Scheme handles procedure calls is first it evaluates all the sub-expressions. So it would do the recursive call before it realized that we're down at the base case. So if has to be a special form, so that only one of the two branches is actually taken. That's sort of where we left off last time.

Now, somebody came up and asked me this question just before class and I promised to answer it. Luckily, it's the next thing in my lecture notes. Computer science is kind of the wrong name for our field, because it's not a science and it's not about computers. It's not a science because what scientists do is ask questions about how the world works.
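Going back to the Pig Latin example for a moment: the little-people picture can be made concrete with a sketch in Python. This is a hypothetical translation, not the Scheme pigl on the screen; the move-the-first-letter rule is inferred from the Peter/Paula/Paul hiring chain described above.

```python
def pigl(word):
    # Base case: the word starts with a vowel, so just append "ay".
    # This is the little person who finally reports back.
    if word[0] in "aeiou":
        return word + "ay"
    # Recursive case: move the first letter to the end and hire
    # another little person to handle the rotated word.
    return pigl(word[1:] + word[0])

# Peter gets "scheme", Paula gets "chemes", Paul gets "hemesc",
# Pamela gets "emesch", which starts with a vowel.
print(pigl("scheme"))  # -> emeschay
```

Note that nothing "goes back to the beginning": each call is a fresh little person with its own word, and answers flow back up the chain of hires.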
For the most part, what we do is much more like engineering, where we take as given how the world works and we build stuff. And the stuff that we build is software, but it's the building, not the finding out, that's important. That's not entirely true, because part of computer science is theoretical computer science, in which we do ask questions about how the world works. And surprisingly, at least it surprised me when I learned it, these really abstract things that are computer programs actually obey certain rules. So you will learn that certain problems just cannot be solved by a computer program, period. It's not a question of the computer's too slow. Even in principle, if you had the fastest computer in the world times a bazillion, you still couldn't solve the problem. That's an interesting thing, and finding it out was a sort of exploring the world and asking questions about it. But mostly, what most of us spend our time doing is really engineering rather than science.

And the other half of it is, computer science isn't about computers. The field that's about computers is called electrical engineering. Those are the guys who build computers and understand how computers work and stuff like that. We build software. So what our field should be called is software engineering, but unfortunately that term is taken for a particular sort of philosophy about how to do computer programming. So if you say "I'm a software engineer," there's a whole lot of ideological baggage that comes along with that name. In non-English-speaking countries, there's a word, informatics. It sounds kind of, you know, frou-frou in English, but informatique is just the word in French, and so on in most of the world's languages, the Latin-based ones anyway. And that would be a good word for our field too. Anyway, so that's the deal about computer science. So what exactly is it? Why do we need a whole field about this? When I was an undergraduate, there was no such thing as a computer science major.
So I was a math major, and some of my friends were double-E majors, and that basically was the choice that you had if you were a geek, you know. There wasn't computer science. And you know, we wrote computer programs anyway; we just sort of forged ahead and did it. And the deal is, this is kind of a secret so don't tell anybody, but computer programming is like the easiest thing in the world that people get paid a lot of money for. Computer programming is so easy, as long as the program that you're writing is small, so that it can all fit in your head at once. Which is how it used to be in the old days, you know. Because computers were small, and they were big physically, they took up a whole big room, but they were small in capacity. And so you could fit your whole program in your head, and they hadn't invented window systems yet, which helped a lot. But once you're writing a big program, which basically all programs are now in the sort of graphical-user-interface world, you can't write a program to add two plus two anymore without first saying, well, let me pop up a window, and let me make it a text window, and let me put a scroll bar on it, and maybe a little go button, you know. And then I'll put four in the answer, you know. So programs are all big and complicated, and what computer science is about is the control of complexity. Okay, so another thing you could call our field is complexity engineering. And how do we do this control-of-complexity thing? Well, there are two answers to that. The old-fashioned answer is you go out and hire 5,000 programmers and you put them to work on this program. And that was the way people did it for a while, and what the companies that tried this found out is that the more people you put on a project, the longer it takes. So instead, what we have to do is somehow build our ideas out of bigger chunks.
So instead of thinking about every little detail, you can think about big chunks of things, and that way you can fit the whole program in your head, the whole structure of your program. And those sort of bigger-chunking techniques are called programming paradigms. And that's what this course is about: programming paradigms. And there's a list on the board, which I'm not promising is complete in terms of the world, but it's complete in terms of this course. This is what we're gonna be talking about. Right now we're doing functional programming; we'll do that for a month or two, and then we're gonna do object-oriented programming for another chunk of time, that one you've heard of, I'm sure. And the last two we're just gonna touch on briefly, client-server programming and declarative or logic programming. Not because they're not important, but because there's only so many weeks in the semester. We want you to know they exist. So we can't actually teach you the entire content of computer science, hard as I try, just in one semester. So those are programming paradigms. The programming-language people, by the way, these days tend to say that you shouldn't talk about programming paradigms anymore, because the way people really write programs takes something from this and something from that and mixes them together. And that's all true, but it's a good way to think about programming only once you understand what the programming paradigms actually are. So for now, we're gonna start as if you did everything purely one way or another.

Okay, this picture behind me is a picture about abstraction. Abstraction is one of the central ideas of this course. The book uses the word a lot but never really quite defines it, and in a way that's too bad, because the technical computer science meaning is a little bit different from the ordinary conversation-with-your-friends-back-home meaning.
So behind me is a chart of layers of abstraction, and at the top we're writing an application program, and we do that in a high-level language, of which Scheme, our language this semester, is an example. High-level languages are implemented in terms of lower-level languages. Low level is not an insult; it's not that high is good and low is bad. A low-level language is one that keeps the way the computer actually does things in your consciousness, so you're really thinking about, okay, where exactly is this in memory, things like that. Whereas a high-level language tries to hide all that under the rug and let you think only about the problem you're trying to solve. That's the difference. A lot of you will spend a lot of time working in Java, which is a kind of medium-level language; it's high in some ways and low in others, so these aren't rigid boundaries. Nevertheless, low-level languages are implemented in terms of the language that the hardware actually understands, which is machine language, which is coupled with an architecture. Architecture is sort of the electrical engineer's way of looking at that same level of abstraction. It's sort of: what pieces do we put together, in what arrangements, in order to make a machine that understands this language. So they talk about things like, well, there's the memory over here and here's the arithmetic unit, and so on. Those things are done in terms of logic gates, which are circuits that compute Boolean functions, true-false functions. The reason those true-false functions are so important in building computers is that you can represent a true-false value on one wire. So all the big, more complicated things, like doing arithmetic on numbers, are built up out of a bunch of little one-wire logic gates. Logic gates in turn are built out of transistors.
From our point of view, a transistor is basically a remote-controlled switch, but the way it actually works is a little bit more complicated, and the electrical engineering people teach you about that. If you really want to understand how a transistor works, it's built on top of quantum physics: the behavior of subatomic particles is what makes transistors do what they do. So each layer of abstraction is built on top of the one underneath it. Most abstract at the top, least abstract at the bottom. The way people talk about abstraction in sort of normal conversation, the opposite of abstract is concrete, as in something you can hold in your hands. So in ordinary conversation people would say, well, the computer is something I can hold in my hands, and a lot of people actually try to understand computer programming starting from there: let me take this thing I can hold in my hands, and then I can build on top of that the different programming techniques, or I can look underneath it to see how the circuitry is built. And so from that point of view, the things that are abstract are the extremes. Abstract kind of means weird, right? So quantum physics, that's as weird as it gets, right? And high-level languages are pretty weird. All your friends will tell you, "why are you using this weird language in your computer science class?" When we say abstract, we don't mean weird. Yes? Yeah, abstract means built on top of other pieces. Okay, so we take pieces, we put them together. We actually don't use "abstract" that much as an adjective. We talk about an abstraction, which is a way to take little pieces and put them together into bigger chunks that we can then treat as black boxes. Okay, so the standard example people always use is: under the hood of your car, what is there? There's like little hunks of sheet metal and little hunks of metal curlicues and stuff. But you don't ordinarily think of it that way.
You take a bunch of those things and put them together and say: this is the engine, right? This is the alternator, this is the transmission. That's what we're doing in abstraction: we're making big pieces out of smaller pieces. Does that answer what you're... Yeah, abstract versus something you can hold is the sort of layperson's version of abstract. Our version is "built on top of something else." Okay, so for ordinary people, the picture goes abstract, less abstract, less abstract, concrete, more abstract, more abstract, more abstract. From our point of view, down at the bottom are fundamental building blocks, and then we have more abstract pieces, and more abstract, and more abstract, and more abstract, okay? All right, talk to me later if I'm not getting your question. Okay, that's my writing on the chalkboard for today.

Oh yeah, what's a function? We're doing functional programming; we should know what a function is. You know what a function is. You learned in high school, where we said f of x equals 2x plus 6, right? That's a function. And probably your teacher drew a little box like this, and put 2x plus 6 in the box, and you can put in, I don't know, 7, and out comes 20, right? Did I get that right? Yeah, good. And if your teacher was a better artist than I, there was maybe a crank on the side of the box that you could turn. So that's a function. That is to say, it's a relationship that has zero or more inputs and has one output, right? And what makes it a function is: every time you put in the same inputs, you get the same output. So this function, if I put in seven and I got out 20 today, tomorrow if I put in seven, I'm not gonna get 46. I'm gonna get 20 again, regardless of anything that might have happened in the meantime, okay? So why does that matter so much? Why is the idea of function important to us? And the answer is two-fold.
One is, functions are pretty well understood by theoreticians, and it's easy to do reasoning about a computer software system that's built functionally. So we can prove theorems about functions and stuff like that. That's one reason. The other reason is, these days computers are doing more than one thing at a time. That's always been true because of time-sharing: the computer would run your program for a tenth of a second and then run a different program for a tenth of a second and so on. Now it's especially true because you buy a computer and inside the box there's two or four or eight processors in one chip, right, they're called multi-core processors. They do that because for like four or five decades, every year they made their computers faster, and now they're pushing limits. Not really fundamental speed-of-light kind of limits; the one that we're pushing on right now is temperature. It turns out the faster you make a circuit, the more heat it generates. And we're at the point where little chips melt if we try to make the computers run faster. So instead of that, we make the computers run a little slower, but we put bunches of them in one chip, because they are getting better at cramming more circuitry into the same amount of space. So if your program is doing a bunch of things at once, and if the behavior of this piece of the program depends on what some other piece is doing, you can get in trouble. Later on we're gonna look in more detail at the nature of that trouble, but for now the important thing to understand is: if your program is entirely made of functions, this function by definition doesn't care what the rest of the program is doing. There's nothing about the sort of larger state of things in the computer that affects the fact that this function applied to seven gives the answer 20.
So if you use functions in your programming, you can much, much more easily get a program running in a situation that involves parallelism, and that's really super important these days. So until a few years back, people kind of looked on functional programming as something that only professors cared about, "we don't use that in the real world," but because of parallelism, people out in the real world are starting to pay attention to it as an important idea.

Okay, take a vote. The question is: are f and g the same function or different? Who says the same? Who says different? Some of each. Trick question. We are gonna say they are the same function, but different procedures. They're the same function because a function doesn't really care what's inside the box; it cares, if I put in seven, what answer do I get, and it's 20 for both of these, right? And the same for any other value of x that you put in. So as a function they behave the same way. A procedure is a sequence of steps for computing a function. F says take the argument, multiply it by two, then add six. G says take the argument, add three, then multiply by two. So it's a different sequence of steps: different procedures that compute the same function. Inside the computer there aren't really any functions; there's only procedures. So the way we represent a function in a computer program is to provide an algorithm, a sequence of steps for computing that function. Because of that, it's kind of like what I said about the different names for arguments last time: almost always, I will use the words function and procedure as if they meant the same thing. But every once in a while it's important to call attention to the difference, and I'll say, well, these are the same function but they're different procedures, or something like that, and you'll understand what I meant. Yes? Right, he's asking: what if the function were two x plus B instead of two x plus six?
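The two procedures from the vote can be sketched in code (a Python translation of the f and g described above, not the notation on the board):

```python
def f(x):
    return 2 * x + 6    # multiply by two, then add six

def g(x):
    return (x + 3) * 2  # add three, then multiply by two

# Different procedures (different sequences of steps), but the
# same function: every input produces the same output from both.
print(f(7), g(7))  # -> 20 20
```

The computer only ever holds the steps; that the two step-sequences compute the same function is a fact about them, not something stored anywhere.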
So we had what we call a free variable, a variable that isn't an argument to the function. And so it does depend on what's going on outside, for what the value of that variable is. The answer is, if that's the case, it's not a function. Okay, it's still a procedure, but what it computes is not a function, because you don't always get the same answer for the same argument, okay? Good question. Is that clear, what he's asking and what the answer was? Good, yes.

Ah, could there be a function for which different procedures take different amounts of time? Yes, absolutely. We're gonna talk a little bit about that in two weeks, and it's one of the main topics of 61B. For the most part in this class, we don't worry about efficiency most of the time, and to an engineer that's kind of horrible, but the reason is, in practice, it's a lot easier to take a program that works and figure out how to make it faster than to take a program that's fast and figure out how to make it work. So we're gonna concentrate on how to get a program to compute the answer you want, and later on you'll think about how to do it faster.

Okay, okay, moving on, I hope. Okay, who knows how to play buzz? Not very many people. God, you never had childhoods or something. Okay, so you're sitting around the campfire, right? Waiting for your marshmallow to get hot. And you start counting, but if the number that you're up to is divisible by seven, you have to say "buzz" instead, and if it has a digit seven, you have to say "buzz" instead, right? Okay, go. You: two, three, buzz, good, 15, okay. Nope, buzz. Okay, got it? All right, so here's this procedure that plays buzz, and the way it works is you give it a number: you say buzz 15 and it says 15; you give it buzz 17 and it says buzz, okay. How does that work? Look up at the top half of the screen: it uses cond. Cond is an alternative to if that's designed to be easier for situations where there's more than two possibilities.
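The behavior of buzz can be sketched in Python, one branch per possibility; this is an illustration of what the Scheme version on the screen does, with the branches tried in order just as a cond tries its clauses:

```python
def buzz(n):
    if n % 7 == 0:        # first possibility: divisible by seven
        return "buzz"
    elif "7" in str(n):   # second possibility: contains the digit seven
        return "buzz"
    else:                 # otherwise: just say the number
        return n

print(buzz(15))  # -> 15
print(buzz(17))  # -> buzz
```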
So in this case, there's three possibilities, represented by this cond clause, this cond clause, and this cond clause, those three. Okay, so what's a cond clause? The reason I want to take a minute to talk about this, boy, we're running slow, is that the notation is a little unusual. Remember I told you last time there are a handful of situations in which parentheses mean something other than "call a procedure," and this is one of them. So the way it works, you say cond and then a clause and so on, however many clauses there are. Those are ordinary parentheses, but what's a clause? A clause is: parenthesis, a test, and an action. These green parentheses do not mean "call the procedure test with the argument action." These are the ones that are special. Now, test and action may involve procedure calls. So test could be function, argument, argument, et cetera, and action might be, typically is, in fact, function, argument, argument, et cetera. So when you combine all of this and collapse it into one thing, what you see after cond looks like doubled parentheses. Because the notation is like this, you're gonna be very, very, very tempted to say, oh, the way cond works is in each clause you have to say open parenthesis and open parenthesis, like there are double parentheses in a cond clause. Please don't think like that. You have to think of the green parentheses as special, and then this parenthesis that comes right after it is a plain old Scheme procedure-call parenthesis. Any questions about that? Yes.

Yes, cond is a special form, and the way it works is it starts with the first clause and evaluates the test. If the test is true, then it evaluates the action part and it's finished. Whatever the action returns, the whole cond returns, and it never looks at the other clauses. If the test returns false, you go on to the next clause and do the same thing. And then at the very end, this word else here, maybe I should have drawn that in green also, is what's called a keyword.
It's not the name of a procedure or anything. It's only meaningful inside a cond clause, and basically you can think of it as meaning true. So it would work perfectly well if they just had a variable named x whose value was true, because this test always succeeds. So if none of the other things work, then we do that. So the order of clauses within a cond matters. For example, if you're doing base cases for recursion, the base-case tests have to come before the thing that does a recursive call. Otherwise, you won't find out that it's the base case. Yeah?

Do you have to have an else at the end of a cond? Technically, no, you don't. If you don't, the return value is, it says in the standard, unspecified, which means every Scheme system does something different. Some of them actually return number-sign-unspecified, but some of them return something else. So basically the short version of that is: yes you can, but don't.

Okay, good. I think we'll make it if I skip over all this stuff about recursion and go right to normal and applicative order. So remember I said that when we do a procedure call in Scheme, step one is evaluate all the sub-expressions, so that you turn actual argument expressions into actual argument values. And then we give the procedure the actual argument values by substituting the values for the formal parameters in the body of the procedure. That way of doing things is called applicative order. It's what Scheme does. It's not the only way of doing things; there are bunches of rules you could have. Another important one, which we're gonna be revisiting later, is normal order evaluation. In normal order, when you call a procedure you take the actual argument expressions and substitute them into the body, and you don't ever actually evaluate anything until you call a primitive. So your procedure calls a helper procedure that calls a helper procedure that calls plus; that's when the arguments get evaluated.
So I'm going to, oops, I'm gonna load in a little Scheme interpreter. Its name I've forgotten and it doesn't say in here. Wait a minute. Okay, so I'm gonna define a couple of functions just like in my lecture notes. Oh, it does say, I just missed it, I'm blind. Def is like the define for this special Scheme system that I'm about to show you, so pretend it says define. So f of a and b is plus g of a, b. And g of x is times three x. And now I'm gonna do an applicative-order f of plus two three, and minus 15 six.

All right, so what happened? Up at the top here is the actual expression that I typed in, and we're gonna watch, sorry, the process of evaluation as it happens. So in applicative order, what Scheme actually does, we start by evaluating the argument sub-expressions. So I evaluate plus two three, and that's actually complicated too: we have to evaluate plus sign and evaluate two and evaluate three, but those things are easy, and so we finally get five. We evaluate minus 15 six and get nine. And now we take the actual argument values and call f with those values. So: first evaluate the argument sub-expressions, then call the function. So here's the body of f, substituting five for a and nine for b. Let's see, can I get the definition of f up here? Yes, I can. So here's f of a, b is plus g of a, b. So we're doing plus g of five, nine. So you're comparing the very first line on the screen with the one where the cursor is blinking. Okay, so you see how we substituted actual argument values into the body of f? And now this is a new expression to evaluate, and we start by evaluating sub-expressions. So I have to do g of five. Well, the value of five is just five, so I'm not showing that step. And so we substitute five for x in the body of g. Instead of times three x we have times three five. The value of that is 15. And finally we can add 15 to nine and get 24. And here's the answer, 24. Okay, now we're gonna do it again in normal order, the same thing.
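That applicative-order trace boils down to the following (restated in Python, which, like Scheme, evaluates arguments before the call; this is a sketch, not the demo interpreter's notation):

```python
def g(x):
    return 3 * x

def f(a, b):
    return g(a) + b

# Applicative order: the argument expressions are evaluated first
# (2 + 3 -> 5, 15 - 6 -> 9), and then the *values* are substituted
# into the body of f: g(5) + 9 = 15 + 9.
print(f(2 + 3, 15 - 6))  # -> 24
```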
Okay, this time I'm not gonna start by evaluating the argument expressions. I'm gonna take the actual argument expressions and plug them into the body of f. The body of f was plus g of a b. So I'm gonna make it plus g of plus two three, minus 15 six: a is plus two three, b is minus 15 six. Now, plus is a primitive. It's an arithmetic operator; it actually needs numbers to work with. So at this point I'm gonna evaluate argument expressions. The first one is g of plus two three. G is not a primitive, so following the normal-order rules, we take the actual argument expression, plus two three, and substitute it for x in the body of g. We get, I'm sorry, we get times three plus two three. Times is a primitive, so now I figure out what plus two three is. It's five. We do times three five. That gives 15. Minus 15 six is nine. 15 plus nine is 24. I get the same answer in a different order.

So if we get the same answer, what difference does it make? Well, here's a case where it makes a difference. I'm defining a function called zero. It takes an argument z and computes z minus z, so the answer should be zero, right? Okay, so let's do it in applicative order. I'm gonna do zero of random 10. Random takes a positive integer argument and it returns a non-negative integer strictly less than the argument, so some number between zero and nine. So here's what happens when I do this. Whoops, that wasn't supposed to happen. Okay, why did that happen? Whoops. All right, never mind, let's just start again. Def zero, did I say z here? Okay, minus z z, thank you. That's interesting. All right, applic of random 10. No, applic of zero of random 10. This time for sure. Okay, so in applicative order, we evaluate the argument expression first. Random 10 happened to give us the answer eight. So now I compute zero of eight. Substitute eight for z in the body of zero: I get eight minus eight, which is indeed zero. Now, let's do normal of zero of random 10. Look at that.
In normal order, I substitute the actual argument expression for the parameter in the body. So that gives me, instead of minus z z, minus random 10 random 10. Okay, so now I'm doing minus, which is a primitive, so I need values. So I compute random 10 and I get the answer eight. I compute random 10 and I get the answer one. Eight minus one is seven, and that's my result. So here's a situation in which it does matter whether you use normal or applicative order. How come? Yes? Right, random doesn't always give the same answer every time you call it, or in other words, random is a procedure that is not a function. Right? You call random with the same argument, you don't always get the same answer, so it's not a function. This is just one example, but it generalizes, it turns out. If you write correct functional programs, purely functional programs, then you get the same answer no matter which evaluation order you use. If you do something that isn't functional, then all of a sudden it matters what order things happen in inside the computer. Okay? So again, we're gonna come back to normal order. We'll see uses for it later on, but for now the takeaway point is that functional programming protects you from having to think about what's going on, and when, inside the computer. All right, see you Monday.
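The zero demonstration above can be imitated in code. Python, like Scheme, is applicative-order, so this sketch simulates normal order by passing the argument as a thunk (a zero-argument lambda) that's re-evaluated at each use; that is an illustration of the idea, not how a real normal-order evaluator works:

```python
import random

def zero_applicative(z):
    # z is already a value: the argument was evaluated once,
    # before the call, so z - z is always 0.
    return z - z

def zero_normal(expr):
    # expr stands for the unevaluated argument expression; it is
    # evaluated separately at each use, like substituting
    # (random 10) twice into (- z z).
    return expr() - expr()

print(zero_applicative(random.randrange(10)))     # always 0
print(zero_normal(lambda: random.randrange(10)))  # can be nonzero
```

Because `random` is not a function, the two evaluation orders can disagree; with a pure function as the argument, both versions would always return 0.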