I continue talking about performance. We're going to go over Butler Lampson's famous paper on hints for computer system design. Just a couple of announcements. So we have our first members of the 100 out of 100 club: Ramya and Rakesh. Why don't you guys stand up so everyone can get a good look at you? Take a round of applause. Here are the people who know how to do assignment three, other than the TAs and the ninjas and everyone else. So please ask them for help. Great job, guys. OK, so let me also just reiterate what I pointed out on Piazza. This is the same thing I do every year. I want people to complete the course evaluation. I don't see any feedback on the evaluations until June 1. So this deal is not in any way predicated on how you complete the evaluation, although I hope you guys will do it thoughtfully and constructively to allow us to improve the course in the future. That's the goal. And we do take this feedback very seriously. I know that, sadly, Yubi has decided not to share this information with other students, which I think is incredibly stupid. I've asked them to do that, but they don't. But anyway, I look at it. The course staff look at it. We take the feedback very seriously, and we do try to improve things for the future. So here's the deal. When we get to 70% completion, I will release one of the short answer questions for the exam. You get to 80%, you get a medium answer question. 90% gets you a long answer question. So as soon as you guys cross these thresholds, assuming I've written the question, I will post it on Piazza. You guys can see it before the exam starts and get a sense of how to answer it. I will just point out as a caveat, even before we start this process, that in the past, the distributions on the questions that I've released have been very similar to the distributions on the questions that I have not released. I don't know why that is.
I would just encourage you, once you guys start discussing the questions online, to maybe think about that a little bit. OK, so does this make sense, everybody? Right now, I think you guys are at 5%. And I'm not even sure that's a real number. Actually, the new tool tells me that you're at exactly 5.62% or some similarly egregious abuse of significant figures, but whatever. I don't think that's even true. I think maybe they've miscounted or something. Anyway, periodically I'll update you guys on where you are, but please go and complete the form. I think you guys have a couple of weeks to do it, but obviously, the earlier you do it, the better. OK, so let's talk about this paper. Early on in the paper, Butler Lampson makes this very strong statement. He says, computer systems are harder to design than computer algorithms. I completely believe this. Other people in the department may disagree. But why is designing a computer system hard, or at least harder than designing a computer algorithm? There are a couple of different reasons. Rakesh. OK, so that's fair. You could potentially prove something about algorithms. But what else? Diana. Yeah, so there are potentially a lot of moving pieces, and those moving pieces have to interface with each other. As you guys are doing assignment three, I hope you're thinking about this, because one of the things that makes assignment three work or not work for people as they attempt it is whether or not you get the internal interfaces correct. For assignment two, one way to think about it is that there was this somewhat wide interface, the system call interface. But below that, there were a lot of functions that were almost ready to accept the arguments that were passed to the system calls. You just had to do a little bit of extra work, particularly for the file system calls. For assignment three, on the other hand, we give you one function you have to write. Just vm_fault. Just make it work. And the rest of it is up to you.
And the core map allocator, that's sort of another thing. But to get those to work, there are actually a lot of internal interfaces you have to design. And the more cleanly you design those interfaces, the better things will turn out and the happier you will be. So that's one reason. The other one he points out is that the requirement is more complex. The external interface is more difficult. Algorithms we usually try to formulate so that they're doing something very specific: an algorithm sorts its inputs, and it's either stable or not stable. A computer system, like an operating system or a database system, has a much more complex interface compared with an algorithm. If you just think about the system call interface you guys are familiar with, that requires the underlying system to do a lot of different things. The system also has much more internal structure. That's what Diana pointed out: a lot of internal interfaces that need to be designed, and that goes into the challenge of designing the system. And the measure of success is much less clear. To some degree, that's connected with the first point: I don't know exactly what the interface specification is. For an algorithm, I would say that the point of the algorithm is to sort some integers, and the way I evaluate the algorithm could be how much space it uses or how long it takes, things like this. It's very easy to specify the requirement, and it's very easy to specify how to evaluate it. Computer systems do not have these nice properties. So this is sort of the preface. Butler Lampson wrote this paper quite some time ago, and since then he has certainly continued to build computer systems. And I like this line: I have built a number of computer systems, some that worked and some that didn't. I've also used and studied many other systems, both successful and unsuccessful. So clearly, a paper like this has to be derived not only from successes, but from failures.
And so if you look through the paper, you can find plenty of places where he uses examples of things that were done incorrectly to illustrate his points about system design. He claims no originality for these hints. Nonetheless, it's useful for people to remind themselves of them. So I find it helpful to reread this paper every once in a while, because these things are easy to forget. And as he points out, after the second system comes the fourth system. So what does that mean? What happened to the third system? Did he just skip three because he liked powers of two? No. What happened to the third system? It stank. He tried to build it, and it just ended up as a disaster. So there's another famous book about computer system design that I encourage you guys to look at. It's called The Mythical Man-Month, by Fred Brooks. Has anyone ever heard of the second system effect? Anyone heard of it before? Oh, good. So this is sort of related to this. The second system effect says that when you get a chance to redesign an existing system, this can be a very difficult undertaking, because the people involved know a lot of things about the existing system that they want to fix. And in this book, he outlines many cases where attempts to build second systems failed, mainly just because they got too ambitious. They tried to do too much. When you build the first system, you're kind of aware of the fact that you don't know exactly how things are going to work, and that makes you a little bit more humble. When you try to build the second system, you think you know everything. And so that system can frequently spiral out of control and become a total disaster. So in this case, maybe the third system was that system for Butler Lampson. OK, so what are the three goals and the three parts of the design task that Butler Lampson focuses on in this paper? Three general requirements for the various parts of the system that he enumerates in the first two pages of the paper.
What are they? Ramya, what's that? OK, so now you're onto the hints. What are the goals? These hints are organized. Remember, there's this diagram. There are three parts of the design task and three goals, and the hints are organized into these categories. So keep it simple, certainly a hint that we can talk about later. But what are the goals? What are the goals for designing computer systems? Yeah, try again. Functionality, speed, and fault tolerance. So functionality means: does the system do everything it needs to do? Speed: does the system do it quickly? And fault tolerance: can the system handle problems? What does the system do when things go wrong? Can it continue on? How robust is the system to various types of failures? And then there were three parts of the design task, which is kind of interesting. Hopefully some of you guys have studied software design, so you may be aware of these three parts of the design process. What are the three things I have to do in order to design a system? And you guys are in the process of doing this for assignment three, or some of you have finished doing it for assignment three. The rest of you are still working on it. So what are those three steps? Well, you can think of them as steps, but they're really different parts of the design task. You're onto the hints again. Don't jump to the hints. Yeah, OK, completeness. So let me map these onto assignment three for you. You read assignment three, and you start coming up with your design, your method of attack. And part of your job is to make sure that you've covered the entire assignment. You don't want there to be some corner case or some important piece of functionality that you forgot. So part of designing a system is figuring out everything that the system has to do. The second part is designing interfaces, or as he puts it, choosing interfaces.
Now clearly, these two parts are related, because in order to make the system complete, I need to choose the right interfaces. But even constraining the system to be complete, I still frequently have a lot of choices in terms of how I design the interfaces. So in the paper, he makes an analogy between interface design and another task that a lot of people were thinking about at the time and still are thinking about. It's a very interesting analogy. He says, designing interfaces is a little bit like designing what? An interface is almost a tiny little what? An interface is almost like a tiny little programming language. So this is weird, right? Did you guys read this and think, what does that even mean? But a programming language gives you a set of functions and tools for accomplishing certain things: constructs for looping, conditional expressions, various ways of affecting control flow. And an interface does the same thing. It gives you a set of functions, but those functions, in a way, define a small programming language. So your core map interface defines a very small programming language. There's a set of functions that you have to use to manipulate it. And to the degree that you design that language effectively, you can make using the core map very intuitive; if you do it badly, it gets very ugly. So you guys have probably used a couple of different programming languages and have your preferences because of the ways that they force you to do things, and interface design, in his mind, is very similar. So I think this is a really cool analogy. And then, of course, finally, you actually have to build the thing. You have to design an implementation. So once I know the system is complete and I have a set of interfaces in mind, I have to implement those interfaces. So here is the diagram that lays out all of the hints for you: hints that help me ensure that it works, that it's fast enough, that it keeps working; hints related to completeness, interface, and implementation.
So this is sort of your guide. I don't know if this is in the web version of this paper, sadly, but if you want to see it, look at the PDF, because this is a really helpful tool, particularly when you guys are using this paper in the future. I hope that somewhere down the road you'll think, I have a problem making my interface fast enough; let me go back to Lampson's paper and see what sort of hints he has about how to make that happen. OK, so at this point, I don't have any more summary slides. I just want to talk about the paper. So the way I'm going to do this is a grab bag of the hints. Who has a hint that they want to talk about or hear about from this quite exhaustive list? Yeah, Isaac. Keep it simple. So keep it simple, where is that? Shed load, I think that's in do one thing well. So let's scroll down to that here. Keep it simple. So let's also do do one thing well. So here's the problem. The problem is that your interface can try to do too much. And this is a hint for interface design. Notice this: this is not a hint for functionality. There's a very important distinction here. I'm not sure there's a great mapping onto assignment 3 here, but let me try. You might be tempted to write one set of interfaces for everything that you needed for assignment 3, and then be forced to implement all of that interface. Because remember, the assumptions that people who use the interface make about your interface are bounded by the interface itself. So if I have an interface that involves accessing the core map and accessing the TLB, and I publish that as a single interface, I now have to maintain both of those things, even though they really don't necessarily have anything to do with each other.
So by keeping the interface simple, in this particular case, he's talking about designing an interface that accesses a single part of the system and only does the things necessary to interact with that particular part of the system. When an interface undertakes to do too much, its implementation will probably be large, slow, and complicated. And part of this, so we've talked about interface design before in this course, part of this comes back again to the contract that the interface represents with the people who use it. The interface says, here are the things that I'm going to do. And if that contract gets too large, you, as the implementer, may find it difficult to deliver on all of the promises that the interface makes. I think there's a little bit more discussion here, and there are probably some examples, which would be great. Here we go. OK, so this is a good example. And these examples are awesome because they're from the past. You guys have probably never heard of any of these languages. I haven't either. So he talks about a programming language that tried to provide consistent meanings for a variety of different operations across large numbers of types. And you guys have probably used languages like this. So I need to be able to add two objects together regardless of their types, or there's a method that every object has to support, even if that object doesn't really naturally seem to support it. So if I promise to do that, for example, if in an object-oriented language my base class has a method that every subclass has to implement, I had better be pretty certain that that method makes sense for every single object. Every one. It may feel powerful to require that objects that inherit from the super object implement a number of different features, because you start to think of all the cool things you could do. The problem is that unless you choose those operators very carefully, you can get into trouble.
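To make the do one thing well point concrete, here's a minimal sketch in C. All of the names (coremap_alloc_page, tlb_invalidate_all, and so on) are invented for illustration, and the bodies are trivial stand-ins; the point is only that publishing two narrow interfaces, instead of one wide one that couples the core map to the TLB, keeps each contract small enough to actually deliver on.

```c
#include <assert.h>

/* Hypothetical illustration of "do one thing well." Instead of one
 * wide interface that couples core map and TLB operations, publish
 * two narrow ones. The bodies below are trivial stand-ins. */

/* --- Core map interface: physical-page bookkeeping only. --- */
int pages_in_use;

int coremap_alloc_page(void)
{
    return pages_in_use++;    /* hand out the next page number */
}

void coremap_free_page(int page)
{
    (void)page;               /* a real core map would track which page */
    pages_in_use--;
}

/* --- TLB interface: translation-cache maintenance only. --- */
int tlb_flushes;

void tlb_invalidate_all(void)
{
    tlb_flushes++;            /* stands in for rewriting TLB slots */
}
```

A client that only allocates pages never even sees the TLB functions, so the TLB interface can change later without breaking anybody who didn't opt in to it.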
So that language is one example. And he also points out another consequence of this, which is that it was very difficult for programmers to understand the overhead of certain functions. On one type of object, a particular method would be very fast. On a different type of object, that same method would be very slow. He also points out here that there's a tradeoff that depends on how heavily the interface is used. An interface that gets used extremely often, like, he points out, the virtual memory interface, had better be very simple and very efficient. An interface that gets used less frequently might be able to trade off some performance for a little bit more generality. So the tradeoff here is: when the interface gets used a lot, make the language that the interface represents as simple as possible and as fast as possible. When the interface gets used in cases where performance isn't as critical, it's OK to make the interface a little bit more powerful. OK, so that's a good one. Let's go on to another one. There are some here that are, I think, potentially a little bit less obvious than that one, which I should say was fairly obvious. Who has another hint that they want to discuss? Don't hide power. That's a good one. There we go. OK, so I'll go over two of these that are related. The first one, for interfaces, is make it fast. And the reason for this is that if it's fast, then it's a lot easier for the client to use it to build the thing that they want. It's frequently very difficult to imagine what people will do with the interfaces that you build. But if the interface primitives themselves are fast and fairly predictable in their runtime, then it's a lot easier for someone else to come along and build something using your system that you could have never anticipated.
If you try to anticipate too much of what they're going to do with your system and build an interface that ends up being slower because it's doing a lot more, most of the time you're going to be wrong about what the person is going to try to do. And they're going to be really frustrated, because they're forced to use your slow interface, and it's still not doing the exact thing they want. So unless you know exactly what the person who's using your system is going to want, it makes a lot more sense to build an interface that is as fast as possible, even if the functions that it provides are a little bit simple. So the second hint, the companion to that one, is don't hide power. And remember, if you took every hint in this paper and tried to apply them all at the same time to a single piece of code, your brain would just explode. It's impossible. A lot of these, you'll notice, are self-contradictory. So part of the art of system design is understanding when to use the various hints and when not to. Frequently, the answer is it depends, and you have to make a judgment call. That's what's fun about designing computer systems. It's not just a cookbook of ideas that you can follow. There's a lot of human judgment involved. And keep in mind, you guys at this point in your life really only know about the systems that worked. You guys know about Windows, and you know about Linux, and you know about a bunch of computer systems. Maybe reading this paper gave you a little bit of insight into the fact that the modern operating systems, file systems, programming languages, and database systems that we have are the ones that survived. They're the systems that survived a long, hard path. And there are a lot of carcasses and corpses in the past. There are all sorts of systems that people tried to build that never worked. There are architectures that died out. Entire types of computers are gone. They're extinct, because they didn't work well enough.
There are programming languages that were popular and are close to being dead, if only we could finally kill them off, and there are some that actually did die off. Thank you. So you guys know the things that worked, but you don't know all the failures. And things have failed because they made some wrong choices. Sometimes they were built by really smart people who tried something, and it just didn't work. So that system is no longer with us. So don't hide power. Loosely translated, this means: if the system can do something really important or really cool, don't build a gross interface on top of it that makes that power hard to access. And so here's his example. The Alto disk hardware can transfer a full cylinder at disk speed. Doesn't that sound super exciting? So the basic file system can transfer a file's pages to client memory at full disk speed, with time for the client to do some computing on each sector. Thus, with a few sectors of buffering, the entire disk can be scanned at disk speed. So the idea here was that I can actually read or write the disk at full speed; the disk itself is not the bottleneck. And so don't introduce a bottleneck at the file system. The disk has this capability, it can actually read things at full disk speed, so let's not introduce any bottlenecks on the way out. Let's make sure that our file system design preserves this property and doesn't hide this power from other systems that are built on top of it. And he continues with some examples of different types of software that made use of this property. Can you guys think of other examples of this? Don't hide power. No? He doesn't have any more either. We'll go on. Other hints that people want to discuss? We've talked about keeping things simple. We've talked about not hiding power. We have a bunch more left. Anyone else? There are interesting ones up here. Use brute force. Oh, there we go. Now we have an implementation hint. I like that. So brute force.
We have keep secrets, plan to throw one away, divide and conquer, a good idea again. He talks about caches. He talks about hints. Static analysis, that's nice. Cache answers. Here we go. When in doubt, use brute force. So there's somebody in my scientific community who is determined, so I read papers that this person's group publishes, and every time he should write use brute force, he writes use brutal force. And that's really kind of a different thing. It always makes me laugh. We're not trying to kill the problem. We're not trying to use brutal force. You guys understand what a brute force solution is? For example, if I were doing a search algorithm, a brute force solution would just be to look at everything in order. Don't do anything fancy. Don't build any trees or try to build an index or something. Just look at every entry. And I think in many ways, use brute force is related to keep it simple, particularly when you're getting started. So what's an example of a brute force page table design, the most brute force page table design that you can think of? How would that work? A big array? The array is going to make it hard to implement large, sparse address spaces. That would actually probably be brutal force, as in it would kill your kernel. So we're looking for brute force. We want to use just a little less force. Don't kill it. What's that? In hardware? Oh, that's even more brute force. Unfortunately, your hardware is unlikely to provide the features that you would need to create this page table. Brute force. We've talked about many different page table designs. What's the one you would consider the most brute force? Some sort of list: a linked list, an unsorted linked list. I have to search through the whole thing every time. That's a brute force data structure that leads to a brute force algorithm. What's nice about it? It works. That's pretty much the only thing that's nice about it. So let's see.
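Here's what that most-brute-force design might look like in C, sketched with invented names (pt_map, pt_lookup): an unsorted linked list of mappings, where every lookup walks the whole chain. It's O(n) per lookup, but it obviously works, which is the whole appeal when you're getting started.

```c
#include <assert.h>
#include <stdlib.h>

/* A brute-force page table as an unsorted linked list. Every lookup
 * scans the whole chain -- slow, but simple and obviously correct.
 * All names here are invented for the example. */
struct pt_entry {
    unsigned long vpage;          /* virtual page number */
    unsigned long ppage;          /* physical page number */
    struct pt_entry *next;
};

struct pagetable {
    struct pt_entry *head;
};

/* Insert a mapping at the head of the list: O(1). */
int pt_map(struct pagetable *pt, unsigned long vpage, unsigned long ppage)
{
    struct pt_entry *e = malloc(sizeof(*e));
    if (e == NULL) {
        return -1;
    }
    e->vpage = vpage;
    e->ppage = ppage;
    e->next = pt->head;
    pt->head = e;
    return 0;
}

/* Look up a mapping by scanning every entry: O(n). */
int pt_lookup(struct pagetable *pt, unsigned long vpage, unsigned long *ppage)
{
    for (struct pt_entry *e = pt->head; e != NULL; e = e->next) {
        if (e->vpage == vpage) {
            *ppage = e->ppage;
            return 0;
        }
    }
    return -1;   /* not mapped: time to fault */
}
```

If profiling later shows the scan is the bottleneck, you can swap a smarter structure in behind the same two functions without touching any callers.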
I think that this point has actually become even more true in today's world, because since this paper was written, computers have gotten a lot faster and a lot more powerful. Computers have, for a while, been riding this incredible Moore's Law trajectory, which maybe has slowed down a little bit recently, but whatever; that's a very powerful curve. What hasn't changed is us: the amount of time it takes people to write complicated programs has not been improving at that rate. You guys just aren't getting better at programming. Maybe a little better, but not that much better. So what's a corollary of this that applies to how you design systems today? I've got these incredibly fast machines, and yet I still only have 24 hours in the day, and smart people still require a lot of time to design computer systems. So what should I do? Yeah, use tools that allow you to be productive. Don't start off saying, OK, I'm going to put up a website so that I have a web presence, and I'm going to write my website in C, because my website's going to be blazing fast. I'm going to implement everything in C. I'm going to implement my own libraries, and I'm going to implement a rendering engine for my own little language that's going to allow me to render the page. Don't do that. That's just dumb. Get something off the shelf and deploy it, and you'll be done 10 times faster. And even if your solution is slower, nobody cares, because the computer is so much faster. So the brute force suggestion here, I think, also really affects how you guys manage your time. Your development time is really expensive. Computer cycles are super cheap. So something that takes you 10 times less time to build and runs twice as slow? I will sign up for that in a heartbeat. And so will your boss in the future. They don't care. The computer is really fast. So this is another example.
But again, this is a place where, if you get to the point where you have a performance problem, you can apply all the nice tools of algorithmic understanding that you've learned in other courses to try to figure out what's going to help. All right. We are done with brute force, brutal or otherwise. Any other ones? There are still some very cool ones left up here. You guys understand all of these perfectly? If I ask about them on the exam, you guys are going to be able to give me an example of every one? Keep secrets. Ah, OK. This is another good one. I'm going to actually try to use the find feature here, so I don't have to scroll. That's not going to work. There we go. Keep secrets. So what does this mean? This is really interesting. What keep secrets means is: don't necessarily expose all of the assumptions about how your implementation works to the client. And why wouldn't I do this? I mean, shouldn't an interface provide as much information to the client about how my implementation works as possible? Isn't that useful? So why do I want to keep secrets? What does that allow me to do? This is related to one of the other hints. Yeah. When people start to rely on it. So another way of thinking about this is with respect to side effects. There are frequently functions that are part of an interface that have side effects, or there's something about their behavior that's not obvious to the caller. It could be that for certain inputs, they run a different algorithm. So imagine you have a sorting function that looks at the number of arguments, and if the number of arguments is small, it runs one algorithm, and if the number of arguments is large, it runs a different algorithm. That's not necessarily a bad thing to do. However, if you let clients know that this is the case, they may start to rely on this behavior. They may, for example, try to break up their arguments into small enough pieces so that you run the small case algorithm. That's one example.
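Here's a hedged sketch of that sorting example in C. The function names and the cutoff of 16 are made up for illustration; the point is that the published contract is just "sort_ints sorts the array," and the algorithm switch stays a secret the implementer is free to retune or delete later.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* "Keep secrets": the public interface is just sort_ints(). The
 * cutoff between insertion sort and the library quicksort is an
 * internal detail that callers must not rely on. */

#define SMALL_CUTOFF 16   /* private tuning knob, free to change later */

static void insertion_sort(int *a, size_t n)
{
    for (size_t i = 1; i < n; i++) {
        int key = a[i];
        size_t j = i;
        while (j > 0 && a[j - 1] > key) {
            a[j] = a[j - 1];
            j--;
        }
        a[j] = key;
    }
}

static int cmp_int(const void *x, const void *y)
{
    int a = *(const int *)x, b = *(const int *)y;
    return (a > b) - (a < b);
}

/* The published interface: sorts the array in place. Which
 * algorithm runs is a secret of the implementation. */
void sort_ints(int *a, size_t n)
{
    if (n < SMALL_CUTOFF) {
        insertion_sort(a, n);     /* cheap for tiny inputs */
    } else {
        qsort(a, n, sizeof(int), cmp_int);
    }
}
```

Because the cutoff is never promised, profiling can later change it, or remove the small case entirely, without any client noticing anything but the speed.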
So by not telling the client everything about the interface and its implementation, you retain the ability to make changes. In the future, you may say, you know what? I've done some profiling, and this approach really isn't working out. Or, I've looked at the inputs into my algorithm, and it turns out that I very rarely get to run the small algorithm, because most of the arrays I get are so big, so I'm just going to get rid of that feature entirely. If you've published that feature to clients, removing it can negatively affect their performance. So a lot of this is about classical good interface design. A good interface tells me enough about what the interface does for me to use it in a way that's powerful and effective, but not enough to let me abuse the interface, and not enough to prevent the implementer from making positive changes in the future. I want to be able to improve this. I want to be able to make changes and fix things down the road without worrying about the fact that a bunch of clients are making incorrect assumptions about it. And this goes back to the story we told about Microsoft as well. When you're just looking at the interface, it may be difficult to figure out what to do. But if you know the guy who wrote the interface, and you have access to the source code behind the interface, it may be easier to decide which particular version of the read system call from six versions of Windows is the right one to use in a particular scenario, and which one is actually even being maintained. That's a good one, though. All right, keep secrets. Onward. There are a bunch left. Yeah, Isaac. Use hints. Oh, this is a good one. OK, where is this guy? Use hints. So he points out that hints are like a cache. What's the difference between hints and a cache? It's on the slide. What's the difference between hints and a cache? It may be wrong. A hint may be wrong.
If I build a cache, the cache had better be correct. If the cache isn't correct, I have a problem. Hints, on the other hand, don't have to be correct. So we actually had an example in this class already, maybe on the midterm, or maybe we talked about it in class, where you were supposed to use hints. Does anyone remember how this worked? Yeah, the read, lock, re-read design pattern is an example of using a hint. Remember, when I'm looking for an available entry in an array that has one lock, I can read elements of the array to see if they're taken before I grab the lock. So that part doesn't have to be synchronized. That read provides a hint, potentially, that that slot is open. Now again, it's not a cache, so I don't know that it's correct. To confirm that the hint is correct, I have to grab the lock and check again. So this is actually a nice example of how to use hints. Now he points out something about hints, which is that, like a cache, if a hint is wrong most of the time, then it's not much of a hint. Imagine you're working on assignment three, and one of the TAs keeps giving you hints. You keep following their hints, and each time it's like, this is a terrible idea. Then you might stop following those hints. Hints are supposed to lead you in the right direction, even if they may sometimes lead you astray. So the read, lock, re-read design pattern works as an example of this, because the probability that the hint is wrong is bounded by the amount of time it takes me to acquire the lock and actually claim the entry. Because that's a small window, the probability that somebody sneaks in, grabs the lock, and claims the entry in between the time that I consume the hint and grab the lock is pretty small. Let's see. So I think this is talking about how the file system uses hints to map a file's name in a directory to its identifier, or to find the disk address of a page.
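The read, lock, re-read pattern described above can be sketched in C with a pthread mutex. The slot array and the function names are invented for the example; the important part is that the unlocked read of taken[i] is only a hint, so the code must re-check under the lock before claiming anything.

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Read, lock, re-read: the unlocked read of taken[i] is a hint.
 * It may be stale, so we verify it again under the lock before
 * claiming the slot. Names here are made up for illustration. */

#define NSLOTS 8

static bool taken[NSLOTS];
static pthread_mutex_t slots_lock = PTHREAD_MUTEX_INITIALIZER;

/* Returns the index of a claimed slot, or -1 if all are full. */
int claim_slot(void)
{
    for (int i = 0; i < NSLOTS; i++) {
        if (taken[i]) {
            continue;     /* hint says it's busy: skip without locking */
        }
        /* Hint says slot i may be free: re-check under the lock. */
        pthread_mutex_lock(&slots_lock);
        if (!taken[i]) {
            taken[i] = true;          /* hint was right: claim it */
            pthread_mutex_unlock(&slots_lock);
            return i;
        }
        pthread_mutex_unlock(&slots_lock);
        /* Hint was wrong (someone beat us to it): keep scanning. */
    }
    return -1;
}
```

The unlocked scan does almost all of the work without ever contending for the lock, and the brief locked re-check is what keeps the whole thing correct.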
I'm not going to go through the file system example in the paper; you guys can look at it. But it's a fairly drawn out example of how to use hints in the file system. There are other places where this works. Let me think. OK, so here's an example: internet routing is frequently based on hints. The routing table information that a particular node in the network has might be wrong. It might lead me down bad paths. Now, the whole system is designed to make sure that this is robust. So if a hint leads me down a path that doesn't work, I can retry the packet, and another path will work. But to some degree, routing tables are still designed this way. They're not guaranteed to be correct. They're updated in a best effort way. And there can be triggered updates based on failures: if I'm trying to use a particular entry and I can see that that routing path is failing, I can certainly use that as an opportunity to update my state, to update my routing information. So this is another good example. All right, he's got some other good examples of hints in here. OK, use hints was a good one. I still think we have some of my favorites left. Yeah? Keep a place to stand. This is an interesting one. This one always tests my ability to explain these. So the goal here, and this is really more of a software design suggestion, is for when you have to change interfaces. Remember, particularly once you publish your new JavaScript framework online, newawesome.io or whatever, people start using it. And then you think, you know what, I really don't like this one function I've provided. I'm going to change it. That's now very hard to do, because people are using it. You can't just change the function. It's going to break everybody's snazzy, interactive, MVC, JavaScript-based web pages. So you have to do something else. Essentially, you can think of this as providing backwards compatibility.
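A minimal sketch of that backwards-compatibility move in C, with invented names throughout: the deprecated entry point survives as a thin wrapper that maps old defaults onto the new interface and warns the caller until it is finally removed.

```c
#include <assert.h>
#include <stdio.h>

/* "Keep a place to stand": the old entry point lives on as a thin
 * shim over the new interface. All names here are hypothetical. */

/* The new interface we want everyone to migrate to. */
int render_page_v2(const char *title, int width, int height)
{
    (void)title;              /* pretend this does real rendering work */
    return width * height;    /* return the area as a stand-in result */
}

/* The old, deprecated interface: kept alive as a wrapper so
 * existing callers continue to work during the transition. */
int render_page(const char *title)
{
    fprintf(stderr, "warning: render_page() is deprecated; "
                    "use render_page_v2()\n");
    /* Map the old call's implicit defaults onto the new call. */
    return render_page_v2(title, 80, 24);
}
```

Because the shim is implemented entirely on top of the new interface, there is only one real implementation to maintain, and deleting the shim later is a one-line change.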
So in this case, when I do want to migrate an interface, I make sure to provide an implementation that allows clients that need it to continue to use the old system for some period of time. And frequently, this can be done by simply implementing the transition interface on top of the new interface. So I have the new interface that I want people to use in the future. And then I have a couple of functions that map the old interface down to the new one, and I start to warn people: by the way, I'm going to start taking these away, so please update your code. But this is a nice example of how to transition. Yeah. Yeah, basically it's deprecation. Or it's a way of managing deprecation to avoid angering people who are using your library. All right, I've got time for a couple more. Couple of other things. Oh, boy. You guys are picking the hard ones. Dynamic translation. Did I skip it? No, here we go. So what's an example of dynamic translation that you guys are probably familiar with? I wouldn't call that dynamic translation. It's more of a programming feature, but you're kind of barking up the right tree. What's an example of dynamic translation? What's that? I don't know if I like that answer. That's not quite what I'm looking for. Dynamic translation. What is a language that I know you guys have written code in? Yeah, the Java Virtual Machine. That is what it's doing. It's taking one set of instructions, written in Java byte code, and, at runtime, dynamically translating them into operations appropriate to the target machine that manipulate the state of the Java Virtual Machine appropriately. So when we talk about virtualization next week, maybe starting on Friday, we will also talk about another case where I need to do dynamic translation.
Because in order to virtualize the system, there are places where I need to take instructions that would not be safe to execute inside the virtual machine and dynamically, at runtime, rewrite them into safe instructions that allow the virtual machine to update its state appropriately. He talks about Smalltalk in the paper, which caches its translated code. So what is something that the Java Virtual Machine can also do to speed up execution? Yeah, right. Translating every Java byte code instruction one at a time can be kind of tedious and slow. But I can take a bunch of them, a whole function, translate that into a low-level implementation, and just run that. And once I've done that, I don't need to do it again, as long as I understand how to pass arguments in. So right, dynamically translating things into a faster low-level representation as needed, and caching the result, is a strategy that goes hand in hand with dynamic translation. If I have to do dynamic translation, there's usually some overhead. But if I do it intelligently, and this is the same thing that the virtual machines we'll start talking about on Friday do, I can avoid having to do it too often, and I can cache results from previous translations. This is a nice thing. All right, let's do one more. Last one, who wants it? Isaac, you've had two already. Anyone else? All right, we'll give Isaac the last one. What's that? Plan to throw one away. Oh, there we go. Yeah, this is a good one. You know what? I know what this one means. Where is it? Remember, he organizes these hints into categories: does it work, is it fast, does it keep working. This one, plan to throw one away, is an implementation hint. So this is really simple, but it's something I've had to do a few times, and as you guys do more programming, you will find that it can actually be kind of a fun way to program. Plan to throw one away. Allow yourself to do it wrong first. So you've sat down.
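Here is a toy C sketch of that translate-once-and-cache idea (the bytecodes and the `translate` and `run` helpers are invented for illustration; a real translator like the JVM's works on whole methods and emits machine code, and keys its cache by code address rather than assuming a single program as this sketch does):

```c
#include <stddef.h>

/* Toy bytecodes for a one-register accumulator machine. */
enum { OP_ADD1, OP_DOUBLE, OP_HALT };

typedef int (*native_op)(int);

static int add1(int x) { return x + 1; }
static int dbl(int x)  { return x * 2; }

/* "Translate" one bytecode into the host-level operation it performs. */
static native_op translate(int op) {
    switch (op) {
    case OP_ADD1:   return add1;
    case OP_DOUBLE: return dbl;
    default:        return NULL;
    }
}

#define MAXPROG 16

/* Run a program, translating each bytecode at most once and caching
 * the result, so re-executing the same program skips translation.
 * Simplifying assumption: run() is always called with the same program. */
int run(const int *prog, int acc) {
    static native_op cache[MAXPROG];    /* translation cache, keyed by pc */
    for (size_t pc = 0; prog[pc] != OP_HALT; pc++) {
        if (cache[pc] == NULL)          /* translate on first use only */
            cache[pc] = translate(prog[pc]);
        acc = cache[pc](acc);           /* run the cached translation */
    }
    return acc;
}
```

The first execution pays the translation overhead; every later execution just chases the cached function pointers.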
You've designed something, but here's the problem: you have not yet met the enemy on the field of battle, and there is a lot that you are going to learn as you start trying to build your interface. Now, that doesn't mean you have to throw it away. That's an important corollary. If you've designed a beautiful interface that's super easy to implement and it works, you're done. But the reason why you should plan to throw one away is that once you've started writing some code, you're invested in that code. Even if it's terrible, even if it's causing you all sorts of problems and it gets gross and convoluted and you're cutting and pasting code around and you just feel bad about yourself, you feel stuck with it. This is the code I have, and I'm going to finish it this way, come hell or high water. So what this says is, don't do that. Be ready to ditch it. Be willing to say, you know what? This was a bad idea. I'm going to start over. Because the thing is, particularly when you're redesigning small pieces of the system rather than the entire system from scratch, the second time you try to implement something, you know more. You just tried it. If it failed, you have some idea of what went wrong, and you can go back to the drawing board and come up with a solution that's more likely to work. But again, you have to be willing to walk away. You have to be willing to say, you know what? These design choices led me to a very dark place. I don't want to be in that place anymore. I'm going to unwind those choices, go back to the top, and try to redesign the interfaces of other parts of the system to get myself to a better spot. All right. So for Wednesday, please read the Scaling Linux to Many Cores paper. There's a lot of neat ideas in that paper about benchmarking and how to improve real systems. And we'll talk about it on Wednesday.