Good morning everybody. I want to talk about tests today. Essentially I want to cover three different parts, starting with a little bit of background: what is a unit test, what is an integration test, and the theory behind them. Then, how we write unit tests and integration tests in KDE. And finally, how to integrate them with the build system and with the CI. I hope I'm fast enough to leave room for a lot of questions.

Okay, let's start with the motivation. Why do you want to write a test — and I mean an automatic test, something the CI executes and that reports the result to you? If you ever wrote a manual test and had to run it again and again after each change, you'll see what a time saver it is to have something that starts automatically and shows you that you did not break anything with your change. It's even better if you have a lot of tests that check tiny assertions, because that lets you find errors easily and fast. It's also important when you refactor your code: with your tests you can see that you don't get regressions — or, if you do, you know where your code regressed and can directly fix it. And two less common motivations, but both quite important to me: especially if you write libraries, you can use tests to document behavior, which is sometimes written down and sometimes not, and that helps you not to break unwritten contracts with your users. And another thing that in my opinion is extremely important when writing libraries: when you write a test for your library — at the latest with an integration test — you are exercising your API yourself, and you see whether it's actually usable or not.

There are different kinds of tests, and I will look at them from two view angles. One is the angle of white box versus black box — like the light is on, the light is off. When you are writing some piece of code and you're really inside the code, you know there's something complicated going on that can break; then you usually write a white box test, a test that pins down exactly this situation to ensure it's correct. In a black box test the view is different: you look from the outside — at the features of your library, at its API, at the features the user sees — and you check that the core functionality works as expected from the outside. It's always good to remember that these really are two different view angles, because you're testing different things. The other way to look at tests is to look at what you are testing, the so-called system under test. A unit test usually tests one class, maybe two, at most three classes that are really tightly coupled together — these are the red blocks on the slide. That can be application code or library code, and you test this one small thing in isolation to ensure its functionality. An integration test, in my view, tests mostly the black lines on the slide: the API, the access to a black box like a library from the outside, where you only use the library's API. In such a test you should never include any .cpp file; you should only link to the library and check that it works. And then we have subsystem tests, where you mostly look from the user's perspective and check that what the user wants to do works as expected.

One thing that should be mentioned in such a talk is test-driven development. If you never heard about it, you should Google it.
In a nutshell, it's a way of developing code where you start by writing a test before writing any code. Then you check that the test really fails. Then you implement just enough code to make that test pass — and only that test. Once the test barely passes, you write the next test that fails, and again write the implementation that makes it pass. So over time you create more and more tests, and your code quality improves because you're testing it really deeply. Usually you also end up with a better architecture, because you have to design your code to be testable. Often that means better decoupling of code units and the introduction of reasonable interfaces, where you split code into parts and structure it to really have a good architecture. And with test-driven development, you refactor during the whole time you're writing your code — refactoring is part of your development — and that works really nicely.

Coming to how to write a test: in KDE, I think nearly all tests, if not all, are written using QTest. It's a really lightweight framework provided by Qt itself. The idea is that you have a QObject-derived class, and every private slot of that object is interpreted as a test case. There is some magic behind it — you need a main() function, which is usually generated for you — but essentially you write the class, define some slots, and they are executed. QTest also provides several test macros, for example QVERIFY to check that an expression is true and QCOMPARE to check that two values are equal, with comparison support for a lot of Qt types. There is QSignalSpy to check that a signal is really emitted and received, and there is a lot more. You should look at the QTest documentation for the full list of macros and functionality, because there are some hidden gems that really make your life easier, and there's a good introduction on how to use them.

Let's look at an example of what a QTest looks like if you never wrote one before. We simply include QTest, create a simple class derived from QObject, and define two slots, one called initTestCase() and one called myTest(). initTestCase() is actually special because it has a reserved slot name. There are a few more reserved names, but these are the important ones, I think: initTestCase() is guaranteed to be called before any other test case, and similarly cleanupTestCase() is called after the last one — they help you set up and tear down. For the individual test cases you can also define init() and cleanup(), which run around each test case. In my test case I added two really trivial checks: I verify that an expression is true and that one equals one, so this test, if you compile and run it, should pass.
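Here is a minimal sketch of what that example could look like; the class and slot names are mine, not taken from the slides:

```cpp
// simpletest.cpp — minimal QTest example (illustrative names)
#include <QTest>

class SimpleTest : public QObject
{
    Q_OBJECT

private Q_SLOTS:
    void initTestCase()     { /* runs once, before any test case */ }
    void init()             { /* runs before each test case */ }

    void myTest()           // every other private slot is a test case
    {
        QVERIFY(2 > 1);     // check that an expression is true
        QCOMPARE(1, 1);     // check that two values are equal
    }

    void cleanup()          { /* runs after each test case */ }
    void cleanupTestCase()  { /* runs once, after the last test case */ }
};

QTEST_MAIN(SimpleTest)      // generates the main() function for you
#include "simpletest.moc"
```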
Moving on to slightly more advanced features: for example, if you want to check inside one of these test slots that a signal is received, I prepared a small code snippet with a push button (see the first sketch below). The push button has a clicked() signal, and you want to see that it's really emitted and received. You create a QSignalSpy, which looks really similar to an ordinary connection. You should always check that the signal spy is valid: that verifies that the emitting object is not null and, if you use string-based connections, that you did not make any typo. Then you emit the signal and check whether a signal was received, i.e. whether the spy's count is one or not. After that, you can even take the signal's parameters and check that they are what you expect. One important point — and here it's good to remember the training from David, if you were there on Friday: the signal here is a direct function call, so it has already been received before we check spy.count(). Be careful, because if you are testing signals between different threads, you need different methods — for example spy.wait(), which waits for some time and spins its own event loop to check whether the signal arrives. It really depends on whether it's a direct or a queued signal connection. In 95% of the cases it's totally valid to check directly; once you have threading, you have to be careful.

What you can also do with QTest is create data-driven tests (second sketch below). If you never did it: the idea is that you have one test case, but you run it with a lot of different input data. For this, you add a slot with the same name plus a _data suffix, in which you define columns and rows — it's really a table, if you think about it — and the test function is then run once per row, fetching the values from that table. Another nice thing I added here is QEXPECT_FAIL, which marks the next check as expected to fail. That is sometimes what you want, because the feature is not done yet and does not work yet. It's also nice because once you fix it in the future, the test will fail again — because the check does not fail anymore — and that reminds you to remove the marker.

Another thing I want to show is that you can also test QtQuick. Unfortunately, it's not that common to really test a QtQuick binding or a QtQuick class. There are different ways to do it; the way I prefer is to load your own QML component with your own QML engine (third sketch below). I have some test QML file — I don't show it here, it's simply a really small QObject-derived item with a property that I call testProperty. Then I create an engine, create a component, and request that it be created synchronously, because I'm in a test and that's much simpler. I check that it was created correctly — is there any syntax problem in the QML? — I verify that I got an object, and I verify that it's loaded, which should already be the case because it's synchronous. Then I can directly access the root object of the scene I created and test the functionality. That way you can look at properties, trigger even more complex QML operations inside the test QML class, and read properties back. Another way is to use Qt Quick Test, a QtQuick-based testing framework where you define test functions in QML/JavaScript — for example, you can say that you want to click at some position and check that it really worked. That's tricky sometimes, especially if dimensions change and you did not take care when creating the tests, but sometimes it's the right way to do it if you have a QtQuick UI.
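First sketch — the QSignalSpy pattern with a push button; the test slot name is illustrative:

```cpp
// Checking that QPushButton::clicked() is emitted (illustrative sketch)
#include <QTest>
#include <QSignalSpy>
#include <QPushButton>

void SimpleTest::clickedTest()
{
    QPushButton button;
    QSignalSpy spy(&button, &QPushButton::clicked);
    QVERIFY(spy.isValid());       // object not null, no typo in the signal

    button.click();               // emits clicked() as a direct call

    QCOMPARE(spy.count(), 1);     // the direct call arrived before this line
    const QList<QVariant> args = spy.takeFirst();
    QCOMPARE(args.at(0).toBool(), false);   // clicked(bool checked)

    // For queued or cross-thread signals you would use spy.wait(), which
    // spins its own event loop until the signal arrives or it times out.
}
```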
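Second sketch — a data-driven test with a _data slot and QEXPECT_FAIL; the tested behavior and the row names are made up for illustration:

```cpp
// Data-driven test: toUpper() runs once per row defined in toUpper_data()
void SimpleTest::toUpper_data()
{
    QTest::addColumn<QString>("input");      // define the table columns
    QTest::addColumn<QString>("expected");

    QTest::newRow("all lower") << "hello"   << "HELLO";
    QTest::newRow("mixed")     << "Hello"   << "HELLO";
    QTest::newRow("trimmed")   << " hello " << "HELLO";  // not implemented yet
}

void SimpleTest::toUpper()
{
    QFETCH(QString, input);                  // fetch the current row
    QFETCH(QString, expected);

    // Known-broken row: once trimming is implemented, the unexpected pass
    // makes the test fail again, reminding you to remove this marker.
    QEXPECT_FAIL("trimmed", "trimming is not implemented yet", Continue);
    QCOMPARE(input.toUpper(), expected);
}
```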
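Third sketch — loading a QML component synchronously from a C++ test; TestItem.qml and its testProperty are assumptions on my side:

```cpp
// Instantiating a QML component from a test and inspecting its properties
#include <QTest>
#include <QQmlEngine>
#include <QQmlComponent>

void SimpleTest::qmlTest()
{
    QQmlEngine engine;
    QQmlComponent component(&engine, QUrl(QStringLiteral("qrc:/TestItem.qml")),
                            QQmlComponent::PreferSynchronous);

    // Any syntax problem in the QML shows up here:
    QVERIFY2(!component.isError(), qPrintable(component.errorString()));
    QCOMPARE(component.status(), QQmlComponent::Ready);  // loaded synchronously

    QScopedPointer<QObject> root(component.create());
    QVERIFY(root);                                       // object was created

    // Directly access the root object of the created scene:
    QCOMPARE(root->property("testProperty").toInt(), 42);
}
```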
Some small lessons that I learned over the years. When creating a test, take care that one test slot does not depend on another, because it's much easier to restructure your tests later, and it's never a good idea to have them depend on each other. You should also not test against production systems. There are test objects for this — called fakes, stubs and mocks — that you somehow inject into your setup with expected behavior and expected answers; they allow you to test something in isolation and make your test reproducible. You should also ensure that your test is not slow, because otherwise nobody will execute it. You should not test third-party code: if a third-party library is not behaving correctly, you should either not use it or fix it and contribute tests to that library — it's open source, after all. You should not create a test that includes dozens of .cpp files into the test case, because that usually points to an architectural problem you have to fix first; such a test will not survive any future refactoring. Also, a good piece of advice when talking about tests: look into software design patterns. There are a lot of good books, and they help you write better and more testable code — code that lets you add stubs, mocks and fakes, mostly by introducing interfaces and other nice tricks.

Okay, that was a really short run over what QTest does. Now, how do you get the test into the build system so that you can really execute it on your system? We use CMake as the build system, and CMake brings CTest, which is mostly a tool to execute tests. So you can use CTest to execute all of your QTests and get meaningful results from it; the documentation is linked on the slide. I will only show a little bit about how to integrate everything into your application. The easiest way is to include the ECMAddTests module from extra-cmake-modules and then just use the ecm_add_test macro (see the sketch after this paragraph): you define the sources of your test — for example the simple test class .cpp from before — and the libraries you want to link to, for example Qt5::Test; you give it a name and maybe a name prefix, which we will see on the next slide why it's important, and you tell it whether it's a GUI test or not. There is documentation on how to use ECMAddTests. If you did all of that and compiled it, you can go to your build directory and run ctest -N to see a list of all available tests; run ctest to run all tests, or ctest -V to get the detailed test output where there is some important output to look at. You can also ask for output only when a test fails (--output-on-failure), because by default you only get a list of results. For daily work the most useful one, in my opinion, is ctest -R <pattern>, which runs just the tests whose names contain that word — and that's really, really important: just run the tests for the code you're working on at the moment. For convenience we also have make test, which simply runs ctest with all the tests, and that's it.
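A sketch of the CMake side, assuming extra-cmake-modules has already been found via find_package(ECM); the target and file names are illustrative:

```cmake
include(ECMAddTests)

ecm_add_test(simpletest.cpp
    TEST_NAME simpletest          # the name you will see in ctest -N
    NAME_PREFIX "myproject-"      # run the whole group via ctest -R myproject
    LINK_LIBRARIES Qt5::Test      # plus the library you are actually testing
    # add GUI here if it is a GUI test
)
```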
Okay, and not only you are running tests — our CI is running them too. At build.kde.org you can see the test runs for your project, except for Playground projects, and they are run on different architectures: on SUSE and on FreeBSD, and on several SUSE versions. That gives you good coverage of systems, to see whether everything works nicely or whether it possibly only runs on your own system because you have some test data lying around. A good thing to do today: if you have a project that you maintain, just go to build.kde.org and check that all your tests are green there. It's really important to actually look at that system, because we have it. And as far as I know, in the future you will have the possibility to get results even earlier, directly in GitLab, but that's an ongoing effort. Another nice thing you can use at build.kde.org is the gcov coverage. gcov is a small tool that simply logs which lines of code — which possible code paths — are executed by your tests, and if you go there you see which of your modules have good test coverage and which don't. The most important point here, I think, is to spot the completely red corners, the corners where you may have forgotten to write any test at all; that's where you can start improving.

Okay, that's what I prepared for now, and I think we have some minutes for questions. Are there questions?

Yes indeed we have. Thank you very much for this session — applause in the chat, please. And we have a few questions, starting with: in Ubuntu we spend lots of energy running tests when we build packages, but whenever something went wrong, the developers would say those tests are more for them. Is it useful for distro packagers to run tests? — In my opinion yes, but we have to ensure in KDE that tests are always green, and right now that's complicated because of the split between pull requests and the CI. I think it will be much better in the future, when we have test reports directly in GitLab before we can complete a merge. Then we see that everything is green and that it should stay green, and at that point I think it's reasonable to tell packagers that tests are guaranteed to be green and that when they run them again, they will still be green. At the moment it's hard — for some projects they are always green, for others not, and that's not good.

Okay, thanks. Next question: do you have experience with other popular C++ test frameworks — GTest, Catch2, doctest — and how do they compare to QTest? — Not much. I looked a little bit at GTest, but I'm quite happy with the combination of CTest and QTest, except for one point, and that's the Qt Creator integration. In KDevelop we have a really nice integration where I can run all the CTest tests individually; in Qt Creator it's really hard, because the CTest test properties — which you can define to adapt your test behavior a little bit — are completely ignored; you only get the executables to run, and other test frameworks have better support there.

Okay, thanks. Now someone with a Python and Java background is asking for a good solution for mocking. — Sorry, I don't have experience with mocking frameworks; usually I write my mocks by hand. I know there's gMock, which is used by several people. Sorry for that. — Yeah, I think the person is asking for C++ mocking solutions. — It's the same answer for me: I don't have much experience with mocking frameworks; I write my mocks by hand.

Okay, and I think we have time for one more: does it matter if a test is reproducible? — It's really important that a test is reproducible. If it's not reproducible, it doesn't make sense, because sometimes it behaves one way, sometimes the other.
So you should really make sure that a test is reproducible, mostly by ensuring that there are no dependencies on your test system. Don't rely on anything you set up by hand — that's not a good idea; set everything up automatically, clean the directory where you run the test, and so on.
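As one way to do that — a sketch using the init()/cleanup() slots mentioned earlier together with QTemporaryDir; the m_dir member and the test slot are assumptions of mine:

```cpp
// Fresh, self-contained state per test case keeps the test reproducible
#include <QTest>
#include <QTemporaryDir>

void SimpleTest::init()        // runs before every test case
{
    m_dir = new QTemporaryDir; // a fresh, empty directory each time
    QVERIFY(m_dir->isValid());
}

void SimpleTest::cleanup()     // runs after every test case
{
    delete m_dir;              // removes the directory and all its contents
    m_dir = nullptr;
}

void SimpleTest::writesOutput()
{
    const QString path = m_dir->filePath(QStringLiteral("output.txt"));
    // ... exercise the code under test against 'path' instead of a
    //     hard-coded location on the developer's machine ...
}
```

That way the test behaves the same on every machine and on every run.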