Okay, so next up, after discussing inspections, is testing. Testing is essentially the execution of a program, of code, of a prototype, with artificial data, so some kind of test data that you feed in. You execute the system on that input data and you look for errors: the system somehow crashing, exceptions, anything that is obviously wrong. You also look for anomalies: is anything behaving in a strange way that you don't expect? It might not be an error, it might not crash, but it's still not the way it should be. For example, if we talk about quality requirement limits, a performance test might not reveal an error, but the system might be too slow for your taste or for the requirements. These are the typical things we're looking for in testing.

Now, Sommerville distinguishes between two kinds of aims tests can have. The first is validation testing. If you remember, validation is about the requirements: are we building the right system? So basically, you write a test for each requirement. The second is what he calls defect testing, where your aim is to find errors, bugs, in the system. They differ in that for validation, you often have a few tests per requirement that cover the success cases, checking that the system works as intended, so you're actually not expecting any errors. In defect testing, by contrast, you try to cover as much of the system as possible, as many strange situations as possible, so that you also find issues in the corner cases, when the user behaves in an unexpected way and so on. So they're rather different in purpose, and typically you use a combination of both: you want to show that your requirements are fulfilled, but you also want to make sure that there are as few defects as possible in the system (see the sketch below).

There is a classic quote by Dijkstra that is important in this context: testing can show the presence of bugs, but never their absence. So you can only show that there is a bug. If everything works, you don't know whether there are no bugs in the system or whether you just haven't found them. You can try to do as good a job as possible, but you cannot really prove that the system is bug-free, error-free. That's simply not how it works. And in practice, testing is typically an optimization problem. Testing is expensive: it takes time, it takes execution time, and it might be manual testing where someone essentially sits and clicks buttons. So the question is how much you can do, not when you are done. Usually it's just: now we have no more time, so we have to stop. That's quite often what it looks like in practice.

Apart from validation and defect testing, there are a couple of other ways to look at testing. One of them follows the different development activities: we start by specifying our requirements, then we go into design, then implementation, then what is called integration, where we put the different parts of the system together, and finally we operate the system, we run it. You can do testing at each of these levels. At the specification and design level, you typically don't have anything executable, so that's more a case for inspection. Sometimes you might have executable models or things like that, but that's rather uncommon.
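To make the validation-versus-defect distinction concrete, here is a minimal sketch in Python with pytest. The find_user function, the user data, and the 50 ms performance limit are all made up for illustration; they are not from the lecture or from Sommerville.

```python
import time
import pytest

# Hypothetical system under test: a lookup function (assumed for illustration).
def find_user(users: dict, user_id: int) -> str:
    """Return the name for a user id, or raise KeyError if unknown."""
    return users[user_id]

# Validation test: one test per requirement, exercising the expected success case.
# Assumed requirement: "Given a known id, the system returns the matching name."
def test_known_user_is_found():
    users = {1: "alice", 2: "bob"}
    assert find_user(users, 1) == "alice"

# Validation test for a quality requirement limit: "a lookup completes within
# 50 ms" (an assumed limit). There is no functional error here; the test fails
# only if the system is too slow for the requirement.
def test_lookup_meets_performance_limit():
    users = {i: f"user{i}" for i in range(100_000)}
    start = time.perf_counter()
    find_user(users, 99_999)
    elapsed = time.perf_counter() - start
    assert elapsed < 0.05  # 50 ms limit from the assumed requirement

# Defect test: deliberately probe a corner case, aiming to expose a bug
# rather than to confirm a requirement.
def test_unknown_user_raises():
    users = {1: "alice"}
    with pytest.raises(KeyError):
        find_user(users, -1)  # unexpected input a user might still produce
```

Note how the defect test feeds in input that the requirements never mention; that is exactly the corner-case probing described above.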
Then, once you have an implementation, you start doing testing proper, typically known as unit testing, on the single parts of your system; again, we'll cover this later. When you put multiple pieces of your system together, you have to test that the combination works. And finally, when you actually run the system, or right before you run it, you should test whether it actually runs productively, with users and so on. In Sommerville's terms, the first two stages, implementation and integration, make up development testing: testing during development, done by the developers. After that comes release testing: you test for the release, or right before the release. In practice, I think this has changed a bit; the distinction is no longer clear-cut. Nowadays, quite often things are implemented, integrated, and directly released in an iterative way, so these terms are actually not that important to know anymore. But they are good as a reference, so that you can at least place these different activities. Okay. So this is a rough overview. In the next part, we look at the different stages, in particular implementation and integration, and at the testing levels we have there. I've already mentioned unit testing, but there are a couple more that we will be looking at.
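As a small preview of those levels, here is a minimal sketch contrasting a unit test of a single part with an integration test of two parts wired together. The two components, PriceCalculator and InvoiceFormatter, are hypothetical examples, not from the lecture.

```python
class PriceCalculator:
    """Computes a gross price from a net price (one 'unit' of the system)."""
    def gross(self, net: float, vat_rate: float = 0.25) -> float:
        return round(net * (1 + vat_rate), 2)

class InvoiceFormatter:
    """Renders one line of an invoice (a second unit)."""
    def line(self, item: str, amount: float) -> str:
        return f"{item}: {amount:.2f} EUR"

# Unit test: exercises a single part of the system in isolation.
def test_gross_price_unit():
    assert PriceCalculator().gross(100.0) == 125.0

# Integration test: puts the two pieces together and checks that the
# combination works, not just each piece on its own.
def test_invoice_line_integration():
    calc = PriceCalculator()
    fmt = InvoiceFormatter()
    assert fmt.line("Coffee", calc.gross(4.0)) == "Coffee: 5.00 EUR"
```

The integration test would still fail if, say, the calculator returned a string instead of a number, even though each unit test passes on its own; that is the kind of issue integration testing is meant to catch.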