So Daniel Kulesz of the University of Stuttgart will be presenting to us on the System Test Portal. We have 15 minutes. At the conclusion of the 15 minutes, we'll end the presentation. He then has an optional demonstration for those of you who would like to remain. But intentionally, we'll keep to the schedule of 15 minutes for the presentation and call stop at that point. And then if you'd like to see a demo, he has a separate demo available. Daniel, thank you.

Thank you for the introduction. So, the System Test Portal. I think some of you might think that test automation is great. Who would agree that test automation is great? The majority. And actually, yes, test automation really is great. But there's a small "but" about test automation.

First, as we've seen in some of the talks in the devroom today already, to actually get to test automation, you usually have some complicated tool landscape that you have to set up and maintain. Then you have to switch the test framework, or whatever. It's a huge effort.

The second thing is that when you do test automation, you have to define the expected results, so you have to provide some kind of oracle. Now, what people usually do in test automation is that they really think about the result and specify it. But the oracle that evaluates this result is pretty stupid. For instance, say you test the login screen of your web application, and you just check that after you fill in your username and password and click on login, you're logged in. The test oracle will say everything is perfect even if there was a dancing gorilla on the screen; it will not see it, because it only sees what you told it to look for (a small sketch of such a naive oracle follows below).

And the third limitation is: how suitable is it for the embedded world? Here are some free software projects that I would consider in this embedded domain, like postmarketOS, LineageOS or coreboot. These things are pretty close to the hardware. Now, when you want to test systems at the hardware level, it's not so easy with test automation. I mean, the people in the automotive domain do a lot with HiL (hardware-in-the-loop) stands and so on to somehow get there. But if you just run, say, a normal three-developer open source project, you probably can't build such a stand. So how do you test your system?

Given this, you can just test manually. But in the free software world, at least my impression is that nobody wants to test manually. Why? Because it's boring. You'd rather do some coding to push the project further, and testing is just a repetitive, annoying activity that's below your level, so you don't want to do it. So how do people in the commercial world solve the problem? Well, in the commercial world, you just pay the testers enough compensation for all the pain and suffering of going through this repetitive stuff. And actually, if you pay them enough, or you just find the right people, they will do it.

And why is this the case? Why is testing not fun? Well, because of the lack of tools. What I see in the commercial world is office documents: people tend to write lengthy Word documents about which test steps they executed, or they have horribly huge Excel files where these test cases are maintained, and oh my gosh.
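To make the oracle point concrete, here is a minimal sketch in Go (the portal's own language). The endpoint, credentials, and cookie name are hypothetical, purely to illustrate how narrow such an automated check is:

    // A deliberately naive test oracle: it asserts only the one condition
    // it was told to check, so any other problem on the page (the
    // "dancing gorilla") goes unnoticed.
    package login_test

    import (
        "net/http"
        "net/url"
        "testing"
    )

    func TestLogin(t *testing.T) {
        // Hypothetical endpoint and credentials, for illustration only.
        resp, err := http.PostForm("http://localhost:8080/login", url.Values{
            "username": {"alice"},
            "password": {"secret"},
        })
        if err != nil {
            t.Fatalf("login request failed: %v", err)
        }
        defer resp.Body.Close()

        // The oracle: "logged in" means we got a session cookie.
        // Nothing else on the page is inspected, so the test passes
        // no matter what else is rendered.
        for _, c := range resp.Cookies() {
            if c.Name == "session" {
                return // passed
            }
        }
        t.Error("expected a session cookie after login")
    }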
What I also see sometimes, which is really better than the office documents, is markdown and wikis. There are different efforts to put these test cases in wikis or in markdown. But then, depending on your editor, editing tables in markdown is not so much fun. And there are some professional tools, but most of them are either commercial or they are complicated. They are maybe not the right thing: you can't just ramp up and start using them, and they're not really for end users.

And this is where the System Test Portal comes in, since with the situation we have, it's really hard to do any manual system testing. You ship some system and you don't actually know whether it works. What I have experienced: I really love these things like, say, LEDE or OpenWrt. So I bought one of these platforms where it's supposed to work, and it didn't boot up. So obviously nobody tested it; it's stated as a supported platform, but it doesn't work. I bought one of these nice Chromebooks which are supposed to run with coreboot and the mainline kernel. They did three months ago; now they don't. And I have no way of finding that out.

This is the problem the System Test Portal tries to solve. It's a web application with just some basic functions. It helps you organize your test suite and execute the tests — of course, it's only an application for managing. It helps you log your executions and analyze them. And there are some cool features apart from that. We have step-by-step execution of test steps. We have quite fancy system-under-test version and variant management: you can log that you executed this test case on this system under test in whatever browser variant; you can define whatever variants you want, and it's traceable. You can even tell which version of the test case it was when you executed it. This is pretty important if you want to prove that you tested your system enough before shipping; otherwise you don't know how the test case looked when you executed it (a small sketch of this idea follows below). And we have some reporting, a sort of dashboard. So you can see — for instance, say you want to buy a new smartphone, you want it supported by LineageOS, and you have some features you care about: you don't care about Bluetooth, but Wi-Fi is pretty important for you — which smartphones would pass which tests.

The platform in general should be useful for test designers, the people who think about how to specify test cases. For testers, the people who just execute the tests — and I think it's a really good opportunity to get some people from the community on board: even if they are not developers, they can still be good testers. They can just follow the steps that the test designers wrote and say, hey, this and this feature works on this and this platform. For test managers who want to know how well tested the product is — is it shippable, which test cases pass, which fail. Of course for developers who want to see which tests fail. And for the end user who just looks at the matrix on the dashboard to see, hey, how is the situation, which device should I get — it could be useful for them too.

This is actually how it looks. It's a pretty simple-looking web application, maybe a bit inspired by Gogs and Gitea — it's trying to go down the same simplicity track. We have a simple navigation bar with a couple of buttons, and this is just what you do.
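A minimal sketch of the traceability idea described above, in Go. These struct and field names are hypothetical, not the portal's actual schema — the point is that an execution record pins down both the exact revision of the test case and the variant/version of the system under test:

    package model

    import "time"

    // TestCaseVersion is one revision of a test case's text.
    type TestCaseVersion struct {
        CaseName string   // e.g. "Login works"
        Revision int      // which revision of the test case
        Steps    []string // the steps as they read at this revision
    }

    // Execution records a single manual run: the test case *as it
    // looked* when executed, plus the exact SUT variant and version.
    type Execution struct {
        Case       TestCaseVersion
        SUTVariant string // e.g. "Chrome" or "Pixel 3"
        SUTVersion string // e.g. "v2.1.0"
        ExecutedAt time.Time
        Verdict    string // "passed", "failed", "passed with comments", ...
    }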
It also has some other characteristics apart from functionality. It's GPLv3 licensed. It's a pure web app: you don't need any plugins or whatever to run it. It's mobile friendly. As I said, it's written in Go. It's lightweight, it's cross-platform, and you can host it yourself. You can even use some low-end hardware that was too slow to be your router to run the system on; it has really low hardware requirements. Our target platform was the EOMA68-A20; since it wasn't available, we used a Cubietruck instead. But yes, the focus is to really not use too many resources.

Just some words about the project. This is actually a student project: we have seven very motivated students and three supervisors, and I'm one of the supervisors. We have a model where we simulate the customer, so I was the customer in this project. I had the vision for this project, but I didn't do any coding — so please don't ask me why we chose this library or whatever. The project has been running for 10 months now; there are two months left. And I hope — and I'm quite confident — that we'll be able to ship a good, working 1.0 release at the end. Of course, we welcome community contributions, and our follow-up project is also planned already. There is a lot of future work, like the dashboard that is not merged yet. We are really thinking about more social functions to foster interactions between the different parties who are involved in the testing process.

And one mistake — or not a mistake, but one thing that we didn't do, and that really prevents it from being just installable right now — is that we didn't eat our own dog food. We don't have a test plan written in the System Test Portal for the System Test Portal. We know that it runs with Docker and so on, and we really want to do that; just our persistence framework is not merged yet. We are about to do it — I think our students are really busy; I saw a lot of commits today as well. So expect that in the near future; then we will create a proper test plan, and then we'll sort out most of the issues. As of now, when you try to build the system according to the build instructions, it won't work. But you can use the binaries, or you can figure out how we do it in our CI pipeline. There is some stuff that we need to sort out, but I think it's still something you can already take a look at if you're interested. So that's it for this part, and I think now you have some time for questions, or a demo, or you can try it yourself.

So, the question was about the workflows we support. Actually, we don't have any prescribed workflow where we say you have to test it this way; we try to keep it open. But what you can do: we have the concept of test sequences. You can have several test cases and group them in one test sequence, and the test sequence says in which order you actually execute your test cases. So this is one way you can drive the workflow: you group test cases in these test sequences. We also have the concept of labels, which you might know from GitLab and others. You can just give labels to test cases, and then you can, for instance, search for a label and find the test cases that are grouped there (both concepts are sketched below).

And the question was whether I have heard about IO Portal — no, I didn't hear about this, but thank you. Actually, it's also really hard to find comparable tools by searching, because everybody uses different terms. But we can chat afterwards.
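A minimal sketch of the test sequence and label concepts just mentioned, again in Go with hypothetical names:

    package model

    // TestCase carries free-form labels for searching and grouping.
    type TestCase struct {
        Name   string
        Labels []string // e.g. "smoke", "wifi", "release-blocker"
    }

    // TestSequence is an ordered group of test cases; the slice order
    // is the order of execution, which is how a workflow is expressed.
    type TestSequence struct {
        Name  string
        Cases []TestCase
    }

    // ByLabel mimics the label search described above: it returns the
    // test cases carrying the given label.
    func ByLabel(cases []TestCase, label string) []TestCase {
        var out []TestCase
        for _, c := range cases {
            for _, l := range c.Labels {
                if l == label {
                    out = append(out, c)
                    break
                }
            }
        }
        return out
    }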
Definitely have a look. I think one of the things that I didn't see in other systems is that we really wanted to provide the easiest way for an open source or free software project to start testing. The ramp-up time from installing this thing to getting your test suite and managing it should be as short as possible, and the hardware requirements should be as low as possible. That was our main goal.

The question was if there is a backend database. Yes — and no, it's not merged yet. We use xorm for the persistence layer, and we plan to support SQLite. But if you have some bigger deployment, you can also use Postgres, and we try to stick to SQL-92 to make it as compatible as possible. But even with SQL-92, you can't create your schema universally, so this is where some database-specific code comes in. Apart from that, we try to keep it as simple as possible (a small sketch follows after the demo step below).

The question was why we use a relational database. Well, we have a lot of relations between the objects that we manage, and we thought it would be the easiest way to keep things simple and keep queries easy to formulate. But I'm sure the system could support other backends as well.

So if you want, I can just give a short demo of how it looks, so you get some feeling for it. Very much? Then it's demonstration time — I'm not afraid of live demos today. So actually, this is what the system looks like when you first access it. There are some features, like the dashboard, where it says this will be coming in a future version. It's almost done, we just haven't merged it; I'm showing the stable branch here. So one thing you would do here is log in as admin. Then you have functions like creating a new project. We already have an example project, which is about testing the DuckDuckGo search engine, which most of you might know. And this is what it looks like: these are the test cases in this project. We also have some labels — you can easily change these labels and so on — and you can click on one label and you will only see the test cases carrying it. So this is about the grouping. Then we have test sequences; these are the test sequences, and here we see one test sequence that has one, two, three, four, five test cases attached to it.

And we have these protocols. Maybe one thing we can try here: if we go to test cases, there is a simple test case — I don't know if it's the best one, but we can try it. We see it has three steps, and here we can click on start to actually begin the execution. The first thing is some description and such: this is what the test designer has put in, so the tester knows which preconditions must be met and so on. Since we can use different SUTs in this test, the tester has to state which variant — here, Chrome — and which version of the SUT was used. Let's just keep the defaults here. And now we start with the testing, and this is done step by step. The first step: type "system test portal" into the search bar and press enter; the expected result is that the search results appear in a list. Let's assume we did that. Here we can describe our test result and say whether the step passed or failed. Let's assume the sidebar opens — great, this worked. Then we did some change, and this didn't work, so let's say the step failed. And we get a summary.
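On the persistence question above: a minimal sketch, assuming a current xorm import path and a hypothetical Protocol struct (this is not the portal's actual code), of how one engine setup could serve both SQLite for small installs and Postgres for bigger deployments:

    package main

    import (
        "log"

        _ "github.com/lib/pq"           // Postgres driver
        _ "github.com/mattn/go-sqlite3" // SQLite driver
        "xorm.io/xorm"
    )

    // Protocol is a hypothetical execution record; xorm derives the
    // schema from the struct, which hides most dialect differences.
    type Protocol struct {
        Id         int64
        CaseName   string
        SUTVariant string
        Verdict    string
    }

    func openEngine(driver, dsn string) (*xorm.Engine, error) {
        engine, err := xorm.NewEngine(driver, dsn)
        if err != nil {
            return nil, err
        }
        // Sync2 creates or updates the table; this is where the
        // database-specific DDL mentioned above gets generated.
        return engine, engine.Sync2(new(Protocol))
    }

    func main() {
        // Small install: engine, err := openEngine("sqlite3", "./stp.db")
        engine, err := openEngine("postgres", "user=stp dbname=stp sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        defer engine.Close()
    }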
And now, let's say two steps were okay and the third step failed: it's still up to the tester to decide whether this test case as a whole was successful or whether it failed. We also have the verdicts "not assessed" and "passed with comments". So even if, for instance, the third step failed, the tester can say it passed with comments, because the last step was not so important. We can finish this, and we're done with executing this test case. We can go to the protocols and select a test case, and there we now see, for this test case, which variant we used, which SUT version we used, and when. We can of course open one of these protocols, expand the entries, and read the expected and observed results and so on. So it's really simple, I would say.

So, the question was whether we have some sort of hooks — for instance, if you use GitLab, you could say that passing our system test is a condition for some feature being merged; and also maybe a hook for when you merge a new feature, since a new feature may need additional test cases and then you should extend your test suite. Right. No, we don't have that at the moment, but we actually plan to offer an API so external tools can hook into our application. We also thought about using the APIs of other external applications, for instance to deal with failure reports, so we can open bugs and such.

The question was about identity management and external authentication backends. At the moment we only support the local authentication backend, but of course it's one thing we are considering for the future.

The question was: if a test failed, is there a planned shortcut to generate a bug? Again, the same answer: we don't have it yet, but it's of course a cool idea.

Questions? No? So thank you.