Welcome back. This time we're going to be looking at performance in computers. There are lots of ways we can document performance, and we commonly use benchmarks. There are two general types of benchmarks we can use to test a computer and see how well it performs, whether it's a good computer or a bad computer: synthetic benchmarks and real-world applications.

There are all sorts of synthetic benchmarks out there; I've got a few listed here. Each of those is good for some specific type of problem that you might be interested in running on your machine. If you're interested in running regular desktop software, chances are you'd want something like one of the first two, SiSoftware Sandra or PCMark. If you're interested in running a supercomputer, you'd probably be more interested in the SPEC benchmarks instead. 3DMark, as the name suggests, is for benchmarking 3D applications and 3D performance. Synthetic benchmarks are really good when you want to compare two computers that may not regularly run the same types of applications, or when you just want to know something about the raw performance of a computer.

Unfortunately, most of us don't run synthetic benchmarks every day, and we're not really interested in that kind of performance for its own sake. Instead, we have some tasks that we'd like to solve, and we'd like to be able to solve them better. So we like computers that perform well on the tasks we actually have, and we may not care about other tasks. Real-world benchmarks address this problem by actually running the programs that you're interested in. That could be a video game: lots of video games now come with their own built-in benchmarks that run certain levels with certain actions, and you can often see some really cool things happen in the game, but it's all scripted.
So any computer you run that benchmark on is going to run the same set of operations, but since you're running it on your local hardware, each machine will perform differently on it. Again, there are all sorts of real-world benchmarks. You may just have some data set that you want to run your program on to see how long it takes; that works for all sorts of different programs. You can try compressing a whole bunch of files, converting some music, or compressing a video file for your cell phone. Any task you think you'd be doing commonly is useful as a real-world benchmark. You just have to be sure it's repeatable.

If we want to actually document performance, especially on a real-world task, we need some measure of what performance is. We typically use this general formula:

    performance = 1 / execution time

It relates performance to execution time because, fundamentally, execution time is what we're usually interested in: we want our program to run quickly and finish so we can go on to something else. So computers with high performance finish their problems quickly. As you can see, execution time is on the bottom, so when execution time is large, performance is small, and vice versa. The basic idea is pretty simple, but we'll see some more complex applications of it when we start looking at what goes into our workload, our benchmark, and how we can really compare two machines.
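The two ideas above, a repeatable real-world workload and performance as the reciprocal of execution time, can be sketched in a few lines of Python. The compression task and the example times for the two machines are hypothetical, chosen purely for illustration; any task you can rerun on identical input would do.

```python
import time
import zlib

def run_benchmark(workload, runs=5):
    """Time a repeatable workload several times and return the best time.

    Taking the minimum over several runs reduces noise from other
    processes; reporting an average is another common convention.
    """
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        times.append(time.perf_counter() - start)
    return min(times)

# A stand-in "real-world" task: compress the same fixed block of data
# every run, so every machine executes the same set of operations.
DATA = bytes(range(256)) * 40_000  # ~10 MB of fixed, reproducible input

def compress_task():
    zlib.compress(DATA, level=6)

exec_time = run_benchmark(compress_task)

# Performance is the reciprocal of execution time: when execution time
# is large, performance is small, and vice versa.
performance = 1.0 / exec_time

# Comparing machine A against machine B then reduces to a ratio of times:
# perf_A / perf_B = time_B / time_A, read as "A is n times faster than B".
time_a, time_b = 2.0, 6.0      # hypothetical times in seconds
speedup = time_b / time_a      # here A is 3x faster on this workload

print(f"best time: {exec_time:.3f} s, performance: {performance:.3f} runs/s")
```

Note that the speedup ratio only tells you about this workload; a machine that compresses data three times faster may show a very different ratio on a game or a video-conversion task, which is exactly why the choice of benchmark matters.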