Welcome to this video on the validation of requirements, our third important activity in requirements engineering: making sure that the requirements we have elicited and specified are actually appropriate. What this means in practice, and here I'm using the list from Søren Lauesen, who has written a popular requirements engineering book, is that you should make sure the requirements have the following properties.

They should be correct. That means each requirement actually represents a stakeholder need; there shouldn't be anything in there that's not needed. They should be complete: all relevant stakeholder needs are actually there. They should be unambiguous: every requirement is clear, and you cannot understand it in two different ways.

They should be consistent, all requirements together. For instance, if it says somewhere that the response time should be less than 300 milliseconds, and somewhere else that the response time should be at least 300 milliseconds, that would be inconsistent, and probably also meaningless. In practice, even for the functional requirements, as the specification gets large it's really hard to make sure that you don't have things contradicting each other, so this is something you need to check.

They should be ranked for importance. This goes back to the prioritization I mentioned earlier: how important is each requirement, so that, for instance, if you run out of budget you can decide which ones to skip, and when you start implementing you can decide which ones to tackle first. They should be modifiable, which means you have written the requirements in a way that makes it easy to change things. And they should be verifiable: you have to be able to write, for example, a test that can check whether the requirement is fulfilled or not.
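As a small illustration of the consistency point: if requirements carry machine-readable numeric constraints, contradictions like the 300-millisecond example can be flagged mechanically. This is only a sketch; the requirement format and all names are made up for illustration.

```python
# Sketch: detect contradictory numeric bounds on the same metric.
# Each requirement constrains a metric with "<" (at most) or ">=" (at least).
from collections import defaultdict

requirements = [
    ("R1", "response_time_ms", "<", 300),   # "less than 300 ms"
    ("R2", "response_time_ms", ">=", 300),  # "at least 300 ms"
    ("R3", "startup_time_s", "<", 5),
]

def find_conflicts(reqs):
    by_metric = defaultdict(list)
    for rid, metric, op, value in reqs:
        by_metric[metric].append((rid, op, value))
    conflicts = []
    for metric, entries in by_metric.items():
        uppers = [(rid, v) for rid, op, v in entries if op == "<"]
        lowers = [(rid, v) for rid, op, v in entries if op == ">="]
        for rid_u, upper in uppers:
            for rid_l, lower in lowers:
                if lower >= upper:  # no value can satisfy both bounds
                    conflicts.append((rid_u, rid_l, metric))
    return conflicts

print(find_conflicts(requirements))  # R1 and R2 clash on response_time_ms
```

Of course, most inconsistencies in real specifications are buried in natural language, which is exactly why they are so hard to find and why reviews remain necessary.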
That goes back, for instance, to the quality requirements: if it says "the system shall be fast," you cannot verify this, because it's not clear what that means or how you would measure it. If it is clear what the scale and the metric are, and how you measure the response time, and it should be less than 500 milliseconds, then you have a verifiable requirement.

And finally, they should be traceable. That means, for instance, that you have connections between requirements. If you have functional requirements that are related, you should have traces that say: this requirement depends on that other requirement. For instance, if you have a statement that says "the system shall allow voice commands to switch on the radio," you might have to connect this to a more basic requirement, say "the system shall have voice command capabilities," with a trace saying the first depends on the second. As I mentioned, this tracing is very important, for instance, if you want to show how you fulfill regulations, or if you have to change something in the system and want to know: if I change this one requirement, what else might be affected?

So these are the properties that matter. This sounds difficult, and it is difficult. In practice it is hard, if not impossible, to achieve all of them. Making sure that you have captured all relevant stakeholder needs is probably one of the hardest problems in software engineering. Similarly, if you use natural language, it's really hard to get rid of ambiguity, and so on. But some things, like ranking for importance, should be fairly straightforward if you think about it.

Now, how do we make sure all this is the case? There are lots of different methods; some of them are mentioned in the Sommerville book. I will essentially mention three types. First, there are prototypes. You can build a prototype of the system, for example a paper prototype that just has the screens.
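The traceability idea, requirements connected by depends-on links, can be pictured as a small dependency graph that supports exactly the impact question raised above: if I change this requirement, what else might be affected? A minimal sketch, with hypothetical requirement IDs based on the voice-command example:

```python
# Sketch: requirement traces as a dependency graph, used for impact analysis.
# "R_radio_voice depends on R_voice_capability" means a change to
# R_voice_capability may affect R_radio_voice.
depends_on = {
    "R_radio_voice": ["R_voice_capability"],  # voice command switches on radio
    "R_nav_voice":   ["R_voice_capability"],  # voice command controls navigation
    "R_voice_capability": [],                 # basic voice command capability
}

def impacted_by(changed, graph):
    """All requirements that directly or transitively depend on `changed`."""
    impacted = set()
    grew = True
    # Iterate to a fixpoint: anything depending on an impacted (or the
    # changed) requirement becomes impacted itself.
    while grew:
        grew = False
        for req, deps in graph.items():
            if req not in impacted and ({changed} | impacted) & set(deps):
                impacted.add(req)
                grew = True
    return impacted

print(impacted_by("R_voice_capability", depends_on))
```

In practice this is what requirements management tools do under the hood when they show you the "suspect links" after a requirement changes.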
And then you can go through that prototype with, for example, the end users and ask: is everything there? Is there a button for the functionality that you would like to have? You can also check quality requirements: is this actually going in the right direction, and so on? So prototypes are one way to do this.

Then there are reviews, and they are in fact probably one of the most important techniques. One of the important points is that you should do reviews with different stakeholders. Just because the specification is clear to you, who wrote it, is not a good indicator of whether it is clear to, for example, the developers. So when you do reviews, include lots of different stakeholders who can tell you what is missing, where they see ambiguity, which parts they misunderstand, and so on.

The other important point is how exactly you conduct the review, and there are different techniques for that. You can start with informal reviews: you sit down and read, there are no instructions, you just read through and say whatever you notice. Then there can be checklists, which is actually the next validation technique: you get the specification, you read through it, and you have certain things to check off. For example: is every requirement ranked for importance? Is there a priority for every requirement? Is every requirement stated clearly? There might be different questions, and you check them off one by one.

Then there is something called perspective-based reading, which is a bit like role-playing. You give the specification to one person and say: imagine you are an end user, and read this specification from an end-user perspective. Again, you can combine this with checklists, for example. This is of course a bit cheaper, because you don't need an actual end user.
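A checklist like the one just described mixes questions a human reviewer must answer with checks a script could do mechanically. As a sketch of the mechanical part only, with a made-up requirement format:

```python
# Sketch: a machine-checkable slice of a review checklist.
# Only purely mechanical questions are automated here; judgment questions
# ("is this requirement actually a stakeholder need?") stay with the reviewer.
requirements = [
    {"id": "R1", "text": "The system shall respond within 500 ms.", "priority": "high"},
    {"id": "R2", "text": "The system shall be fast.", "priority": None},
]

checklist = [
    ("Is every requirement ranked for importance?",
     lambda r: r["priority"] is not None),
    ("Does every requirement have an identifier?",
     lambda r: bool(r.get("id"))),
]

def run_checklist(reqs, checks):
    """Return each checklist question together with the failing requirements."""
    findings = []
    for question, check in checks:
        failing = [r["id"] for r in reqs if not check(r)]
        if failing:
            findings.append((question, failing))
    return findings

print(run_checklist(requirements, checklist))
```

Here the script would report that R2 ("the system shall be fast") has no priority; whether R2 is also unverifiable is the kind of judgment only a human reviewer can check off.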
But of course, not everyone is able to play such a role convincingly, so perspective-based reading is also not easy to do well. And then, just for completeness, I'm not sure how common this is, but there is something called N-fold inspection. That basically means you break the review down into N groups: the groups read the specification in parallel, discuss within themselves, and then everyone comes together. The point is to parallelize the work and to have many eyes looking at the document, but also to avoid the groups influencing each other too early. So within their N groups they discuss the issues, and afterwards they discuss the issues all together. There is a specific protocol for how to do this; I won't go into detail, but now you have at least heard the term.

Let's talk a bit more about checklists and what could be in such a checklist. Typical points you might have in a checklist are, for example, a content check. If you have a standard format that says you need, for example, a section for scenarios and a section for quality requirements, then the content check is simply the question: are all of the following sections present in the document? And then you check off: yes, the introduction section is there; yes, the scenarios are there; and so on.

You might have a correctness check that asks: are the identified requirements actually relevant? Are they actually stakeholder needs? You might have a consistency check, asking: are there any conflicts between the requirements? Looking at the quality requirements and the functional requirements, are any of them in direct conflict? And you have to check that off as well.

And finally, you sometimes have something called a CRUD check. CRUD stands for create, read, update, delete, and it has to do, of course, with the data or the entities.
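A CRUD check can be pictured as a coverage matrix over the data entities: for every entity, is each of the four operations covered by some requirement? A sketch, with made-up entities and requirement IDs:

```python
# Sketch: CRUD check -- for each data entity, do the requirements cover
# create, read, update, and delete? (Entities and mapping are illustrative.)
OPERATIONS = ("create", "read", "update", "delete")

# Which requirement covers which (entity, operation) pair.
coverage = {
    ("credit_card", "read"): "R10",    # "read all the credit card entries"
    ("credit_card", "create"): "R11",
    ("user", "create"): "R20",
    ("user", "read"): "R21",
    ("user", "update"): "R22",
    ("user", "delete"): "R23",
}

def crud_gaps(entities, cov):
    """Return (entity, operation) pairs that no requirement covers."""
    return [(e, op) for e in entities for op in OPERATIONS
            if (e, op) not in cov]

print(crud_gaps(["credit_card", "user"], coverage))
# credit_card has no update or delete requirements
```

Each reported gap is then a question for the stakeholders: is this operation genuinely not needed, or is a requirement missing?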
So for the CRUD check: if you, for example, handle financial data, and you have users, you have credit cards, you have different data objects, then the checklist would ask: are there requirements for all the required functions on that data? For example, if there is a requirement that says you should be able to read all the credit card entries, then there probably also needs to be a function that creates them, because otherwise there would be nothing to read. So this is essentially a checklist that checks: can you do all the required things to the data?

So this is one way of validating requirements. There can also be other things. For prototypes, I mentioned paper prototypes, but of course you could also build a formal model that you can actually execute, so an executable model of the system, or you could implement part of the system and review that. This is a process you can continue while you are implementing, for example.

And maybe as a final note on validation: the purpose of all this is of course that we want to have good requirements, but also that the earlier we find an issue with a requirement, the cheaper it will be in the end. If you implement everything, and only at the very end, years later, realize that many of the assumptions you made and many of the requirements you had were actually incorrect, that it's not really what you needed, then you have to do all the work again. If you find this at the prototype level, or while just reviewing some text that someone has written, it is of course much cheaper. That's really what all of this is about.

Thanks for watching. In the final video, we look at requirements management and then wrap up the whole thing. Thank you.