So our basic idea of performance was really simple: we just take one and divide by our execution time, and that gives us our performance. And that's really great if you have nice, simple problems. Maybe you're only looking at one task; you can run it on one machine, see how long it takes, and plug that into the equation. But quite frequently we're interested in running something more complex. Often we have lots of programs that we want to run, and we may know something about how frequently we run each of them. We want to build them together into a workload. So a workload is just a collection of programs we run, ideally along with something about how frequently we run each program in that workload. We can talk about two different types of workloads: discrete workloads and continuous ones. For the moment we're mostly going to focus on discrete workloads; we'll see continuous ones more in a bit. Discrete workloads are ones where we have nice fixed end points: some raw materials come in, and a nice finished product goes out at the end. You can think of the example of a car, where we start with some raw materials, some steel, some plastic, do a whole lot of processing, and at the end a car goes out the door. We can have the same thing for computer programs, where maybe we've just got a task that we need to run once in a while; it does some fixed set of steps and produces a result. That's really nice, and a lot of things do correspond to that, but it doesn't work for everything. Sometimes we don't really have a fixed set of steps that takes us from a nice start point to a nice end point. Instead we may just have some program that runs continuously. It does lots of work for us, but the granularity is really small, and maybe we're really just interested in how much work it finishes in some amount of time.
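The idea above can be sketched in a few lines of code. This is a minimal illustration, with hypothetical program names and made-up times and frequencies: performance is the reciprocal of execution time, and for a discrete workload we can weight each program's execution time by how often we run it.

```python
def performance(execution_time: float) -> float:
    """Basic metric: performance is 1 divided by execution time."""
    return 1.0 / execution_time

# A discrete workload (hypothetical): each program's execution time in
# seconds, and the fraction of our runs that program accounts for.
workload = {
    "compile": (10.0, 0.5),   # half of our runs
    "render":  (40.0, 0.3),
    "test":    (25.0, 0.2),
}

# Weighted-average execution time across the workload.
avg_time = sum(time * freq for time, freq in workload.values())

print(avg_time)               # 22.0 seconds per weighted run
print(performance(avg_time))  # higher is better
```

The weighting matters: a program we run half the time contributes far more to the workload's overall performance than one we run occasionally.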
In that case, we'd be more interested in specifying that workload in continuous terms. And that corresponds really well to a lot of servers, where the server is just running constantly. It accepts tasks whenever they're given, and tasks may not come in at a fixed point in time; they come in in waves, or come in randomly. We can model that and see how much work the server can process in a given situation. For the moment, we're going to focus on discrete workloads, just because those are really easy and they correspond really well to the idea of execution time that we've looked at so far. If we've got a continuous program, there's no good notion of how long it runs for. Well, we may plan to run it for the next 20 years. Some computers have stayed operational for over a decade, just running the same program; if it works, nobody bothers fixing it or changing it. But a machine that has been running one program for ten years would seem to have very bad performance compared to a machine that crashes every week, with a start point at the beginning of the week and an end point at the end of the week. The weekly machine has a much smaller execution time, and therefore would seem to have much better performance. But that doesn't really tell us anything about how much work either machine is doing. So we will look at continuous workloads a little more later. For the moment, we're going to focus on discrete workloads.
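The ten-years-versus-one-week comparison can be made concrete with a small sketch. The numbers here are invented for illustration: if instead of execution time we measure throughput, meaning work completed per unit of running time, the two machines come out the same, which is exactly why execution time alone is the wrong metric for continuous workloads.

```python
def throughput(tasks_completed: float, seconds_running: float) -> float:
    """Work finished per second of running time."""
    return tasks_completed / seconds_running

WEEK = 7 * 24 * 3600
YEAR = 52 * WEEK

# Machine A: up for ten years straight, finishing (say) 1000 tasks a week.
a_rate = throughput(tasks_completed=10 * 52 * 1000,
                    seconds_running=10 * YEAR)

# Machine B: crashes every week, but also finishes 1000 tasks per week.
b_rate = throughput(tasks_completed=1000, seconds_running=WEEK)

# Machine B's tiny "execution time" (one week) says nothing about which
# machine does more work: their throughput is identical.
print(a_rate == b_rate)   # True
```

Comparing by execution time would declare machine B vastly better, while the throughput view shows they do exactly the same amount of work per unit time.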