So this time we have a larger problem to work with. Before we had, say, two to three tasks that we were balancing. This time, instead, we have five different types of instructions, and these are common instructions that you'll see when you're writing assembly language programs. We have an addition instruction, a multiplication instruction, a pair of memory access instructions, one to load some data from memory and one to store it into memory, and then a conditional branch instruction. In this case, each of our instructions takes a different amount of time, and those times vary between the two machines that we're looking at. We also have five different programs, each of which uses a different mix of instructions. So again, we'll be figuring out how much time it takes to run each of these programs on each machine. Note that we don't know the overall number of instructions that each of these programs contains, so we can't say precisely how long a program will run for. But we can say something about the average instruction's execution time, and we'll be able to use that to compare the relative performance of the two machines on each program. Then at the end, we'll be able to say which machine we should run each of these programs on, if we have both of them, or we could use this to help inform a purchasing decision about which of the two machines to buy. We'll approach this problem pretty much the same way we have been: by calculating execution times. Again, since we don't know the overall number of instructions, we won't be able to calculate the exact execution time, just the average execution time per instruction. Program one is the easiest: all of its instructions are add instructions.
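Before working through the numbers by hand, the whole calculation can be captured in a couple of lines. This is a minimal Python sketch, assuming the per-instruction times quoted throughout this walkthrough (machine A: add 1 ns, multiply 3 ns, load 6 ns, store 9 ns, branch 4 ns; machine B: 2, 4, 4, 4, and 6 ns respectively); the dictionary and function names are my own, not from the lecture.

```python
# Per-instruction execution times in nanoseconds, as quoted in this walkthrough.
TIMES_NS = {
    "A": {"add": 1, "mul": 3, "lw": 6, "sw": 9, "beq": 4},
    "B": {"add": 2, "mul": 4, "lw": 4, "sw": 4, "beq": 6},
}

def avg_instruction_time(machine, mix):
    """Weighted average time per instruction: sum of (fraction of mix) * (time)."""
    return sum(frac * TIMES_NS[machine][op] for op, frac in mix.items())

# Program one: 100% add instructions.
print(avg_instruction_time("A", {"add": 1.0}))  # 1.0
print(avg_instruction_time("B", {"add": 1.0}))  # 2.0
```

Each program below is just a different `mix` dictionary fed through this same weighted average.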
So if I start with machine A, I know that 100% of the instructions are add instructions, and each of those takes one nanosecond to run. So the average execution time here is one nanosecond. For machine B, again, 100% of the instructions are add instructions, but this time they all take two nanoseconds, so the average execution time per instruction is two nanoseconds. So I can say that machine B is half as fast as machine A, or that machine A is twice as fast as machine B. So for program one, machine A is two times faster.

On program two, we have a mix of add and multiply instructions. On machine A, 25% of the instructions are addition instructions, and each of those takes one nanosecond. The other 75% of the instructions are multiply instructions, each of which takes three nanoseconds on machine A. So 0.25 nanoseconds plus 2.25 nanoseconds equals 2.5 nanoseconds on average. For machine B, again, 25% of the instructions are addition instructions, and on machine B those take two nanoseconds. The other 75% of the instructions are multiply instructions, and this time those take four nanoseconds. So I've got half a nanosecond there and three nanoseconds there, which gives me 3.5 nanoseconds. So again, machine A is taking less time than machine B, which means machine A is going to be faster. Taking the ratio of the times, I've got 3.5 over 2.5, which is seven over five. So machine A is seven-fifths times faster. Not as good as we had for program one, but still some speedup.

Program number three is longer. We've got three parts now, but we're still going to be doing the same thing: calculate how much time we spend on the addition instructions, how much time we spend on the multiply instructions, and then we'll have another column for how much time we spend on the load instructions. 25% of the instructions are addition instructions, which still take one nanosecond on machine A.
25% of the instructions are multiply instructions, which take three nanoseconds. And then the remaining 50% of our instructions are load word instructions, which each take six nanoseconds. We'll do the same thing for machine B: 25% of my instructions are addition, 25% are multiplication, and 50% are load word, and this time I have two nanoseconds, four nanoseconds, and four nanoseconds for each of those categories. So for machine A this gives me a quarter of a nanosecond plus three quarters of a nanosecond, which is one nanosecond, and then three nanoseconds for the load words. So machine A is four nanoseconds on average. For machine B, I have half a nanosecond plus one nanosecond plus two nanoseconds, which gives me three and a half nanoseconds. So this time machine B takes less time than machine A, which means machine B is faster. How much faster? Well, I have four nanoseconds divided by three and a half nanoseconds, so I get eight sevenths. Again, not a huge improvement, but there is some difference there.

For programs four and five, we're going to do the same thing; we'll just have even more terms than we did before. So for program four, 10% of the instructions are add instructions, and on machine A those still take one nanosecond. 20% of the instructions are multiply, and those take three nanoseconds. 30% of the instructions are load word, and those take six nanoseconds. The remaining 40% of the instructions are store word instructions, which take nine nanoseconds. So again, we're going to calculate the average execution time per instruction. We have 0.1 nanoseconds plus 0.6 nanoseconds, which gives me 0.7, plus 1.8 gives me 2.5, plus 3.6 gives me 6.1 nanoseconds for the average instruction in program four on machine A. Machine B, same thing: 10% of the instructions are add instructions, taking two nanoseconds. 20% of the instructions are multiply instructions, which take four nanoseconds. 30% of the instructions are load word instructions, which take four nanoseconds. And the remaining 40% of the instructions are store word instructions, which also take four nanoseconds. So here I have 0.2 nanoseconds plus 0.8 nanoseconds, so there's one nanosecond. Then I have 1.2 nanoseconds, which puts us at 2.2, and here I have 1.6. So I've got 3.8 nanoseconds on average. So machine B is again faster than machine A: 3.8 nanoseconds for the average instruction is clearly less than 6.1. But this time I'm not going to have a nice ratio to work with; machine B is 61 over 38 times faster, which is about 1.6 times. Still an improvement, better than what we saw with the previous program, but we're not up to that two times yet.

The last one is program five, which has the most terms. For machine A, program five starts with 30% of the instructions being add instructions, each of which still takes one nanosecond. Then I've got 10% of my instructions at three nanoseconds, 30% of the instructions at six nanoseconds, 10% of the instructions at nine nanoseconds, and the remaining 20% of the instructions are branch equal instructions, which take four nanoseconds. So here I've got 0.3, plus 0.3 gives me 0.6, plus 1.8 gives me 2.4, plus 0.9 gives me 3.3, plus 0.8 gives me 4.1 nanoseconds for the average instruction. Now we can look at machine B. For machine B, again, 30% of the instructions are addition instructions, each of which takes two nanoseconds, 10% of the instructions are multiplication instructions, taking four nanoseconds, 30% of the instructions are load word instructions, taking four nanoseconds, 10% of the instructions are store word instructions, requiring four nanoseconds, and the remaining 20% of the instructions are branch instructions, requiring six nanoseconds. So here I've got 0.6, plus 0.4 gives me 1, plus 1.2 puts me at 2.2 again, plus 0.4 gives me 2.6, plus 1.2 gives me 3.8. So again, machine B is slightly faster than machine A. How much faster? Well, 41 over 38 times faster.
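All five of these averages can be checked mechanically with the same weighted-average calculation. Here's a short Python sketch, assuming the instruction mixes and per-instruction times stated in this walkthrough (the names are my own):

```python
# Per-instruction times (ns) and program instruction mixes, as stated in the lecture.
TIMES_NS = {
    "A": {"add": 1, "mul": 3, "lw": 6, "sw": 9, "beq": 4},
    "B": {"add": 2, "mul": 4, "lw": 4, "sw": 4, "beq": 6},
}
MIXES = {
    1: {"add": 1.00},
    2: {"add": 0.25, "mul": 0.75},
    3: {"add": 0.25, "mul": 0.25, "lw": 0.50},
    4: {"add": 0.10, "mul": 0.20, "lw": 0.30, "sw": 0.40},
    5: {"add": 0.30, "mul": 0.10, "lw": 0.30, "sw": 0.10, "beq": 0.20},
}

for prog, mix in MIXES.items():
    # Weighted average: fraction of the mix times the cost of that instruction.
    a = sum(f * TIMES_NS["A"][op] for op, f in mix.items())
    b = sum(f * TIMES_NS["B"][op] for op, f in mix.items())
    print(f"program {prog}: A = {a:.1f} ns, B = {b:.1f} ns per instruction")
```

Running this reproduces the five pairs of averages worked out above, one line per program.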
So not a huge improvement, but there is some difference there. For the last question, we're looking at which programs each machine is better at, and we've pretty much written down that information as we went. Machine A is clearly better at programs one and two; those are really heavy on the addition and multiplication instructions. Whereas machine B is better at programs three, four, and five, because machine B is really good at these memory instructions. So much so that it outweighs any of the arithmetic or comparison instructions that we're doing in those three programs; there's enough improvement in just the memory column for machine B to be better in those cases. So we could use this information to decide which machine we should run each task on. If we've got a lot of program one, we'll want to run those on machine A, and the same goes for program two. If we've got program three, four, or five, we probably want to run them on machine B. But some of these ratios are pretty close to one: three and five aren't too far off. There is an improvement if you run them on machine B, but if machine B is already busy with lots of tasks, say from program four, maybe we should put programs three and five on machine A anyway. We could also use this to say something about which machine we would perhaps like to buy. If I know I'm going to be running lots of programs that have a strong memory component, I probably want machine B. If I'm going to be running lots of programs with a large arithmetic component, then I'm probably going to want machine A instead.
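The scheduling decision described here can be read straight off the averages we computed. As a closing sketch, here's that comparison in Python, using the per-instruction averages from the worked examples above (the numbers are from this walkthrough; the names are my own):

```python
# Average time per instruction (ns) from the worked examples: (machine A, machine B).
AVG_NS = {1: (1.0, 2.0), 2: (2.5, 3.5), 3: (4.0, 3.5), 4: (6.1, 3.8), 5: (4.1, 3.8)}

for prog, (a, b) in AVG_NS.items():
    winner = "A" if a < b else "B"          # smaller average time wins
    speedup = max(a, b) / min(a, b)          # how many times faster the winner is
    print(f"program {prog}: machine {winner}, {speedup:.2f}x faster")
```

The speedups this prints (2.00, 1.40, 1.14, 1.61, 1.08) match the conclusion: A wins programs one and two, B wins three through five, and three and five are close enough to one that they could overflow onto machine A if B is busy.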