Okay, now we have discussed different strategies for creating unit tests, like partition testing or following certain heuristics, but we still have to cover the concept of what it actually means to cover as much as possible. And for that I have drawn something here. On my left side I have a basic Python program, but even if you're not familiar with Python you should understand this. Essentially I have a function that takes an a and a b parameter. I assume they're integers, but it doesn't matter. And I basically have a nested if/else: if b is larger than a and b is larger than zero, then some code A should be executed, and there could be other statements here. If a equals b, then we should do B instead. And finally, if none of that is true, then we should do C. So we have three different cases, and after all of this we do D in any case. That's our program. It's quite simple, of course, but it's just to show you the idea.

And we can represent this as a sort of diagram of how the different conditions are evaluated. If this is our entry point, this is our function, then we can go through three different places in the code depending on the conditions. Our first part here, A, is executed if b is larger than a and b is larger than zero. If a equals b, we're in the middle; otherwise we're on the right side, and no matter what happens we'll end up at the bottom and do D.

Now on this graph people have defined a number of different criteria that represent how much of the code is covered, as a percentage from zero to a hundred. There are different of these coverages, and I just want to explain to you what they mean, what they represent, and how complex they are. The most simple one is something called function coverage: if each and every function is executed at least once, we have a hundred percent coverage. So it basically looks at the part up here: are we executing the function?
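A sketch of the program described here, under the assumption that it looks roughly like the lecture's diagram: the labels A, B, C, and D stand in for the arbitrary statements, and the function returns the labels of the statements it executed so the behavior is visible.

```python
def example(a, b):
    # Three mutually exclusive branches, then D in any case.
    if b > a and b > 0:
        result = "A"   # first branch: b larger than a and positive
    elif a == b:
        result = "B"   # middle branch: a equals b
    else:
        result = "C"   # right branch: neither condition holds
    return result + "D"   # D is executed no matter which branch was taken
```

For example, example(1, 2) returns "AD", example(3, 3) returns "BD", and example(5, 1) returns "CD".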
It doesn't care what exactly happens in the function, as long as it's executed. That's a very basic measure, but it's a start; this is function coverage.

Then the next one you might hear about is the so-called statement coverage, which means instead that we execute every single code statement at least once. In our case that means we have to get here, here, here, and here. And if you think about it, every execution can only go into one of these three places, so to get a hundred percent statement coverage we already need three different test cases. For a hundred percent function coverage we would only need one. But of course that one test would at most execute one of these three, so it doesn't give us as much information: if that one test passed, we wouldn't know whether the entire function is correct or whether there are obvious bugs hidden in the other parts.

Then there is another one, which in this program is sort of the same: edge coverage. It instead looks at these different arrows here: are all the different arrows visited at least once? In our case it's exactly the same; if we have three tests that go one, two, three, then we have visited all the edges and we have a hundred percent edge coverage. But in other programs this can look slightly different, so statement coverage is not the same as edge coverage.

And then finally, the most complex one: condition coverage. That actually looks at all the different conditions we might have in our code: if statements, maybe something like loops or switch/case statements, depending on what your programming language has. Each condition has to evaluate at least once to true and at least once to false. So this one needs to be true once and false at least once; this one needs to be true at least once and false once; once true, once false.
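One way statement and edge coverage can diverge, sketched with a hypothetical helper of my own (not one from the diagram): an if without an else. A single test that takes the true branch executes every statement, yet the edge where the condition is false is never taken.

```python
def clamp_to_zero(x):
    # `if` with no `else`: when the condition is false, execution
    # takes the "skip" edge straight to the return statement.
    if x < 0:
        x = 0
    return x

# A single test with x = -3 runs every statement (100% statement
# coverage), but the edge where `x < 0` is false is never visited,
# so edge (branch) coverage is only 50%. A second test with a
# non-negative x is needed to cover that edge as well.
```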
In our case it's exactly the same again as with statement coverage: if we do three test cases that go into these three different branches, we have each of these conditions at least once true and at least once false. But there is an addition to this. Sometimes people look at whether the condition is treated atomically or not: whether the entire condition only has to be once true and once false, or whether we also look into the actual predicates themselves, the atomic Boolean expressions. So basically this one needs to be true once and false once, and this one needs to be true and false once; nothing special here. But in this case we could actually require that the first predicate is true once and false once, and that the second one is true and false as well. Then suddenly we have more cases. And you can of course make this more complicated if you want: you could also say you want to cover all combinations, so true/true, true/false, false/true, and false/false, four cases. Then it already gets much more complicated.

We have these coverage criteria because you can calculate them automatically. If you have automated tests, tools nowadays are able to run all the tests and automatically record the different coverages, so you very quickly get a number that says you have covered 88% of the statements. This is not perfect, because coverage itself is not equivalent to having tested well. But at the same time it gives you some idea of where you are: there is definitely a difference between having 5% statement coverage and 90%, so it can at least indicate where you should do more testing. It gives you a quick number, and tools often break it down by class or by function, so you can locate things exactly. It's not only "your system has 88% unit test coverage"; it can also tell you that class A only has 20%, so maybe that's the one you need to test a bit more.
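For the compound condition b > a and b > 0 from the example, here is a sketch (with illustrative input values of my own choosing) of the four combinations that exercise each atomic predicate both ways, covering all four truth-value combinations:

```python
def guard(a, b):
    # The compound condition from the example program.
    return b > a and b > 0

# Each pair (a, b) annotated with the truth values of the two
# atomic predicates:
combinations = [
    (1, 2),    # b > a: True,  b > 0: True   -> guard is True
    (3, 3),    # b > a: False, b > 0: True   -> guard is False
    (-5, -1),  # b > a: True,  b > 0: False  -> guard is False
    (5, -1),   # b > a: False, b > 0: False  -> guard is False
]
# Note: Python's `and` short-circuits, so when `b > a` is False
# the second predicate is not even evaluated at run time.
```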
So you will hear that many companies use these, at least as an indicator, to tell them how they are doing, and they might have some goal, like "we want at least 30% statement coverage". These are definitely useful things, so it's good to understand what they are. And at the same time it's good to understand that they are not a perfect tool: not a perfect indicator of how well the system is tested.