Let's start from the user story. The user story is what the user would like to have from our system. In this case, the user story says: as an astronomer, I want to interact with the radio telescope through a command-line interface. When I execute a setup, I want to know if the setup is actually in progress. I also want to know when the setup is done, and in that case, I want to know if the telescope is ready to operate. Eventually, I want to get the actual and the commanded setup. So, are we sure we got the point completely? Maybe not. So we wrote a manual acceptance test. As you can see, this scenario is a command-line scenario, because our astronomer wants to interact with the system from the command line. We wrote the acceptance test, and we want the astronomer to tell us if we got the point completely. When he or she tells us, "OK, this is exactly what I want," we start writing the code. In this case, the acceptance test says: we issue a setup KK command. KK means we want a particular configuration of the radio telescope, for instance the K-band receiver. We want to know if the system is starting; in this case, it is starting. We want to know what the commanded setup is; in this case, it's KK. The actual setup is unknown, because the system is not yet ready. But when it's ready, the actual setup is equal to the commanded setup. This is a manual test, and from now on, we start writing automated tests. The first one is the functional test, and this picture shows the typical scenario: the astronomer, from the command line, issues a setup KK. The command-line parser gets this command and calls the CLI method of the component. What does that mean? Each component has a CLI method.
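The acceptance scenario above can be sketched as a minimal component exposing a CLI method. Everything here is an assumption for illustration, not the real telescope code: the Component class, the attribute names, and the 0.1-second simulated hardware delay are all made up.

```python
# Hypothetical sketch of a component driven from the command line.
# The real system takes minutes to complete a setup; here a background
# thread finishes it in a tenth of a second.
import threading
import time


class Component:
    """Minimal telescope component with a command-line entry point."""

    def __init__(self):
        self.commanded_setup = ""
        self.actual_setup = ""

    def is_starting(self):
        # Starting: a setup was commanded but has not been reached yet.
        return self.commanded_setup != "" and not self.is_ready()

    def is_ready(self):
        # Ready: the actual setup matches the (non-empty) commanded one.
        return self.actual_setup == self.commanded_setup != ""

    def setup(self, configuration):
        self.commanded_setup = configuration
        threading.Thread(target=self._apply, args=(configuration,)).start()

    def _apply(self, configuration):
        time.sleep(0.1)  # pretend the hardware needs time to move
        self.actual_setup = configuration

    def cli(self, command):
        """Entry point for command-line interaction, e.g. 'setup KK'."""
        verb, _, argument = command.partition(" ")
        if verb == "setup":
            self.setup(argument)
            return "starting"
        return "unknown command"
```

From the astronomer's point of view, issuing "setup KK" immediately reports that the system is starting, and after a while the actual setup becomes equal to the commanded one.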
And this is basically the interaction, from the user's point of view, with the component. So, to write our functional test, we should test the CLI method of the component. In this example, we are using the pytest library. You can see there are two functions. The first one, the component function, is what you probably know as a test fixture. Basically, it's just some code that the framework, in this case pytest, runs before the test to prepare it, or after the test to tear it down. In this case, it is executed before the test, and it returns a reference to the component. The test setup function is the actual test. It gets the component reference and performs the setup by calling the CLI method, passing it the "setup KK" string, exactly the string we got from the command line. If you notice, the asserts are exactly the checks we made from the command line before. We want to verify that the CLI method, which returns a string, reports that the system is starting, that the commanded setup is equal to KK, and so on. When the system is ready, we want the actual setup to be equal to KK and the ready flag to be true. The second kind of interaction with the component is from the other parts of the system, that is, from other components. That means we have to test not the CLI method but the whole API of the component. In this case, you can see that different components can call different methods of our component, and this is the integration test. As you can see, in this test we call the setup method directly, then we verify that is_starting returns true, we wait until the component is ready, and then we assert that is_starting returns false. But the most important thing in this slide, I don't know if you noticed, is that we are testing just the starting method. That's because we wrote just one acceptance test and just one functional test, but now we want to test the APIs, so we have to test the different methods.
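Tests along these lines might look as follows in pytest. The Component here is a simplified two-phase stand-in (the real one drives hardware and becomes ready asynchronously), and all names, including wait_until_ready, are assumptions for illustration:

```python
# Sketch of a functional test (through cli) and an integration-style test
# (through the API), using a pytest fixture, as described in the talk.
import pytest


class Component:
    """Two-phase stand-in: setup() commands a configuration,
    wait_until_ready() pretends the hardware has reached it."""

    def __init__(self):
        self.commanded_setup = ""
        self.actual_setup = ""

    def setup(self, configuration):
        self.commanded_setup = configuration  # hardware starts moving here

    def wait_until_ready(self):
        self.actual_setup = self.commanded_setup  # hardware has arrived

    def is_starting(self):
        return self.commanded_setup != "" and not self.is_ready()

    def is_ready(self):
        return self.actual_setup == self.commanded_setup != ""

    def cli(self, command):
        verb, _, argument = command.partition(" ")
        if verb == "setup":
            self.setup(argument)
            return "starting"
        return "unknown command"


@pytest.fixture
def component():
    # Fixture: pytest runs this before each test that asks for "component".
    return Component()


def test_cli_setup(component):
    # Functional test: drive the component exactly as the astronomer does.
    assert component.cli("setup KK") == "starting"
    assert component.commanded_setup == "KK"
    component.wait_until_ready()
    assert component.is_ready()
    assert component.actual_setup == "KK"


def test_setup_api(component):
    # Integration-style test: call the API directly, as other components do.
    component.setup("KK")
    assert component.is_starting()
    component.wait_until_ready()
    assert not component.is_starting()
```

Note that the functional test only exercises the strings exchanged on the command line, while the integration test exercises the methods that other components would call.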
So, we have more integration tests than functional tests. The last step is the unit test. The unit test is a test from the developer's point of view. For instance, inside the body of this setup function, there are several parts of the code we want to test. For example, we want to verify that at the end of the setup, the servomechanism position is exactly, or close enough to, the position we commanded during the setup, because at some point in the setup method there will be a set-position command. So, in that case, what do we do? We get the expected position for this particular configuration, we wait until the component is ready, we get the component's actual position, and we verify that the actual position is close enough to the expected one. Now let's see a brief snapshot of the code implementing the component, the code that allows us to pass this test. Here is the setup method of the component: we get the position we want to command, and then we set the position. Now, this is an important point, because the servomechanism probably needs some minutes to reach the position, and that means the test would run for minutes. But we want unit tests to run offline. We want them to fail only because of the unit of code under test, and they should be fast, since we run them continuously while programming. Therefore, they should be independent of external resources, and they must also test errors and other conditions that are hard to reproduce. That usually means unit tests require either simulators or mocks, also because sometimes the external resources are not available during development. So, let's see the same unit test using mock objects or simulators.
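The setup implementation and the position unit test described above might look roughly like this. The Device class, the KK_POSITION value, and all method names are made up for illustration; the stand-in device moves instantly, whereas the real servomechanism would take minutes:

```python
# Sketch of a setup() that commands a servo position, plus the unit test
# that checks the final position is close enough to the expected one.
KK_POSITION = 42.0  # hypothetical target position for the KK configuration


class Device:
    """Talks to the servomechanism hardware; here a trivial stand-in."""

    def __init__(self):
        self._position = 0.0

    def set_position(self, position):
        # The real device may take minutes to move; the stand-in is instant.
        self._position = position

    def get_position(self):
        return self._position


class Component:
    def __init__(self, device):
        self.device = device

    def setup(self, configuration):
        # Look up the target position for this configuration and command it.
        position = {"KK": KK_POSITION}[configuration]
        self.device.set_position(position)


def test_setup_reaches_position():
    device = Device()
    component = Component(device)
    component.setup("KK")
    # ...in the real test we would wait until the component is ready...
    assert abs(device.get_position() - KK_POSITION) < 0.1
```

Run against the real hardware, this test would be slow and would depend on an external resource, which is exactly the problem mocks and simulators solve.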
In the case of mock objects, the test is basically the same as the previous one, but if you notice, there is this line of code where we mock the set-position method. In the test, we go to the expected position, and then we verify that the set-position method was called with the expected argument. Once we mock the set-position method, whenever the method is called its argument is recorded, and afterwards we can check that the recorded argument is exactly what we expected. The other scenario is about simulators. As I told you before, in the setup method of the component there is a call to the device's set-position. The device is another component that interacts directly with the hardware using a certain protocol. If we write a hardware simulator, then we can execute both the high-level and the low-level tests fast and offline. Why is that? Because if we write a hardware simulator with the same API as the real hardware, then for the functional tests, the integration tests, the unit tests, the component, and the device, nothing really changes. So there is no need for extra code, like mock objects, that complicates the tests. All tests run the same way in simulation and in real mode, and we can verify that the real APIs behave as expected. What do I mean by this? I mean that if your colleague writes the real server, the real middleware that communicates with the hardware, and later changes the code to update the server, and you want to be sure he or she doesn't break the APIs, you can run the integration tests against the server and make sure everything is still working fine. So we wrote more unit tests than integration tests, more integration tests than functional tests, and more functional tests than manual tests. But is that a good approach for everyone? Not for everyone. For instance, this is the opinion of the creator of Ruby on Rails.
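The mocked variant of that unit test can be sketched with Python's unittest.mock. The names are the same assumptions as before; the point is that the mock records the call to set_position so we can verify its argument without touching any hardware:

```python
# The same unit test with the device mocked out, so it runs fast and
# offline: unittest.mock records the call and its argument for us.
from unittest.mock import MagicMock

KK_POSITION = 42.0  # hypothetical target position for the KK configuration


class Component:
    def __init__(self, device):
        self.device = device

    def setup(self, configuration):
        self.device.set_position({"KK": KK_POSITION}[configuration])


def test_setup_commands_expected_position():
    device = MagicMock()  # replaces the real device entirely
    component = Component(device)
    component.setup("KK")
    # The mock recorded the call; verify the argument is what we expected.
    device.set_position.assert_called_once_with(KK_POSITION)
```

With a simulator instead of a mock, this test would stay exactly as in the previous sketch: only the object behind the device reference changes, which is why no extra test code is needed.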
The current fanatical TDD experience leads to a primary focus on the unit tests, because those are the tests capable of driving the code design, the original justification for test-first. I don't think that's healthy. So: less emphasis on unit tests, since we are no longer doing test-first as a design practice, and more emphasis on, yes, slower, system tests. But there are also opinions different from this one. For instance: unlike unit tests, functional tests don't tell you what is broken or where to locate the failure in the code base. They just tell you that something is broken. That something could be the test, the browser, or a race condition. There is no way to tell, because functional tests, by definition of being end-to-end, test everything. And I agree with this opinion. This is a recent story. The Airbus A350 is a new airplane manufactured by Airbus, in service from the beginning of this year. At Airbus they use a testing approach called the testing pyramid, and the approach is exactly our workflow. Cover your code mostly with unit tests: if you look at the bottom of the pyramid, the unit-test layer is bigger than the others. Verify that the APIs behave as expected, which means writing integration tests. Ensure the user's expectations are met, which means writing functional tests, and reduce the manual sessions, our acceptance tests. And do test-driven development. So, what are the lessons learned? If you never see your tests fail, they may be useless or even harmful: before fixing a bug, always write a test that fails, to point out the bug. Use integration tests to establish component API contracts. Unit tests must be fast and selective, with ideally one assert per test. If the external resource APIs are stable, prefer simulators to mocks. Test-driven development ensures maximum test coverage. And don't be religious: there is no single approach that suits all contexts. And that's all. Thanks for coming. Any questions? How much time does it take to write the tests compared to writing the code?
I think maybe it's equal, but I don't know exactly; in any case, a lot of time. In the long term, of course, you gain a lot of time. In the first two years, when we didn't have regression tests and the code went into production, it was really a nightmare, because you can change just one line of code and not know whether you've resolved your problem, nor whether you've broken something else. Usually you do break something else, and maybe you realize it only after a month, because it shows up in a particular condition, and then you have to spend a lot of time localizing the problem. If you have only functional tests, it's the same thing even when a test fails after a month: you can't localize the error. You can just see there is a problem, but you don't know why. The unit tests allow you to localize the error and quickly patch your code.