Okay, well, let's start. My name is Elio Martinez, and I'm part of the StarlingX project. A little bit about myself: I have more than 12 years of experience in testing, working on graphics, on kernel, and so on, and I joined the StarlingX team probably eight months ago. I'm really excited to talk about this because it seems to be the most challenging and interesting project I have been part of.

So let's start with the presentation. At the beginning, we need to know some basic concepts in order to understand what we are trying to do with Zuul. First of all, we need to know what StarlingX is. It is a complete, fully scalable software platform capable of many good things. The most important, I think, is that it is easy to deploy; it has low-touch manageability, rapid response, and fast recovery.

Okay, we are planning to implement Zuul as the main tool for continuous integration. But what does a continuous integration system need to be? It seems really straightforward, right? But it is not. First, it should be compatible with the OpenStack continuous integration. Our point is that it should be easy to maintain, it should be capable of regression testing, and it needs to keep your testing fast. As a disclaimer, everything I'm going to tell you in this presentation is only about the testing phase. We are not going to talk about the development cycle; I think that is a topic for the CI folks. We are thinking one step ahead of the developers.

Okay: StarlingX, continuous integration, Zuul. But what exactly is Zuul? Quoting from the website, Zuul is a pipeline-oriented project gating system. It facilitates running tests and automated tasks in response to Gerrit events. Zuul is a program that drives continuous integration, delivery, and deployment systems with a focus on project gating and interrelated projects.

But is that it? We could choose something else, right? We could choose another tool. Why Zuul? First of all, and I think this is the most important part, we want to be completely aligned with the OpenStack Foundation. As a StarlingX project, we want to maintain our four principles: open collaboration, open design, open development, and, as a consequence, a completely open source project. Another good part of using Zuul is that we can use it on two different fronts: one as a gatekeeper, and the second as a job orchestrator.

What are the advantages that Zuul can bring to the project? First, it is really easy to configure. Second, as I mentioned before, it can be used as a gatekeeper or as a job orchestrator, both in the same project. It is completely Gerrit friendly. It is Jenkins compatible through the Gearman plugin. And it is optimized for Ansible playbooks as well.

So we know what StarlingX is. We know that we need to implement a continuous integration infrastructure using Zuul. We know the advantages that Zuul can bring to the project. But that comes with three main questions. First, what is StarlingX's place in relation to the OpenStack community? We don't want to duplicate the existing test cases that all the projects are executing already. Second, what can be tested on the infrastructure that already exists? Again, we have to save time; we don't want to duplicate jobs. And the most important: what kind of infrastructure do we need for those specific features that are going to make StarlingX unique?
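Before we dig into those questions, let me make the "pipeline-oriented gating" idea concrete. Here is a minimal sketch of what a Zuul pipeline definition looks like in its YAML configuration; this follows standard Zuul v3 syntax, but it is only an illustration, not our actual configuration:

```yaml
# A "check" pipeline: runs jobs on every new patchset and votes back in Gerrit.
- pipeline:
    name: check
    manager: independent
    trigger:
      gerrit:
        - event: patchset-created   # react to Gerrit events
    success:
      gerrit:
        Verified: 1                 # vote +1 when the jobs pass
    failure:
      gerrit:
        Verified: -1                # vote -1 when they fail
```

Jobs attached to a pipeline like this run automatically for every change, which is exactly the "running tests in response to Gerrit events" behavior I quoted above.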
With these three questions, we are creating a plan in order to accomplish our goal. I'm putting emphasis on the specific features that make StarlingX unique because we need to understand that these features have to be tested in a different way, with a different infrastructure added to the existing one.

So, answering these questions, the implementation will face the following challenges. First, StarlingX is completely new to the OpenStack community. It will introduce additional functionality, as we know. All of the testing phase should be completely automated. And from the start, we already have more than 3,500 manual test cases that need to be automated as soon as possible.

Talking about configurations, we know that we have several. The first is a single server that contains everything the edge cloud system needs: the compute, the controller, and the storage parts. The second is a dual server, which is not just mirroring from one server to the other: if one server fails, the other one takes over all the functionality. And the third is a multiple server setup that hosts all the different roles of the edge cloud separately.

Getting back to the implementation, we are trying to implement Zuul, as I mentioned before, in two different ways: first as a gatekeeper and second as a job orchestrator. If I'm talking too slowly, just throw me something, please. As I'm showing on the slide, as a gatekeeper, Zuul needs to solve all the dependency issues around the test cases that are going to be developed. And as a job orchestrator, it must cover all the testing phases and needs to be designed for different test suites.

So let's start with the first phase: Zuul as a gatekeeper in the test code development. Just letting you know, we are using Robot Framework as our automation framework. Why Robot? First of all, it is completely Python based. That's a good point, because if we need to create a certain validation, we can write a simple Python script and Robot will pick it up straight away without any kind of dependency issues. Second, Robot has plenty of good libraries that we can reuse in order to save time. There is already a Selenium library to test everything related to Horizon, and there are libraries to work with the command line interface.

Another important part of our test cases is that we are including tags. I know tags by themselves don't solve anything; we need to be really careful using them, because we have to maintain them and check what the functionality is for each one. And at the very least, our testing should be smart enough to check the returned value and confirm we are exercising what we really want. We don't want to take return code zero as the only pass criterion.

With this in mind, how are we developing our test cases so far? Up to today, we are following the monolithic way: one developer working on a single test case, submitting the code to Gerrit, then going through review until it reaches qualification, and then the code gets merged. So far so good, right? But what happens if you have a large group of developers working on all the test cases? You create a bottleneck effect. Why? Because there are going to be dependencies. If one developer touches the same file as another, you create merge conflicts, and out of that team, only the developer whose change doesn't touch the shared files gets merged.
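This serialization problem is exactly what Zuul's dependent pipelines are designed to address, which is what I'll describe next. As a rough sketch, again in standard Zuul v3 syntax with illustrative values, a gate pipeline that tests and merges approved changes one after the other looks something like this:

```yaml
# A "gate" pipeline: the dependent manager tests approved changes in merge
# order, each one on top of the changes ahead of it in the queue.
- pipeline:
    name: gate
    manager: dependent
    trigger:
      gerrit:
        - event: comment-added
          approval:
            - Workflow: 1        # a change enters the gate once approved
    success:
      gerrit:
        Verified: 2
        submit: true             # Zuul itself submits the merge on success
    failure:
      gerrit:
        Verified: -2
```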
So, what problems come to the surface? The development will be really slow because of those merge conflicts I mentioned before, and as a consequence the testing will be executed really, really slowly. And we don't want to waste time.

So then Zuul comes to the rescue and says: you know what, I can create a system for you that avoids those dependency issues. Let's take a simple example: two different patches, both touching the same file. Zuul is going to pick one after the other and then complete the merging. Please remember that Zuul works from the reviewed revision of your code; as a matter of fact, all the contributors in the StarlingX community already go through this review. We only need to add the conditions that make handling the patches a little bit smarter.

But what exactly does a single gate contain? It is made of three main components. First, it has all the code already merged in a temporary workspace. Second, you can create and modify the conditions that validate your code through pipelines, written in YAML format. And the most important, the job scheduler, because it is going to merge one patch after the other, avoiding those merge conflicts.

But what happens after the merging part? We need to identify the test cases in order to split them by configuration and by component, and, if we want, we can identify whether a test case is going to be executed on a virtual environment or on bare metal.

So I think we are done with the first phase, the test code development, using Zuul as a gatekeeper. We can move to the second phase: using Zuul as a job orchestrator. We have a large number of test cases that we need to identify and classify according to the component, according to the configuration, and according to whether we are introducing special features. Again, I'm really putting emphasis on the special features because they are the most important part from Zuul's perspective. With this kind of organization, we can create different suites according to our needs. What are those special features? I'm not going to get deep into this, because we are going to have plenty of sessions talking about the special features that make StarlingX unique.

So let's move on to the execution. Again, we are not going to talk about development; we are going to consume already compiled ISOs. We are going to execute the sanity testing, a special suite that contains all these special features, and a full testing cycle. I'm including Jenkins on this slide because we already have one Jenkins job executing the sanity testing; we want to develop the Zuul jobs using playbooks, and in future slides I'm going to tell you why. But what is inside an Ansible playbook? Again, it is really straightforward for us. It is written in YAML format, it contains all the instructions that need to be executed, and it is similar, but not equal, to a batch script. So it's pretty easy to configure.

What is the execution timeline? As I mentioned before, we are going to consume ISOs. We are creating one ISO daily that contains all the changes that came from the community. We are also going to create special feature ISOs; we don't have any estimated time for those, because it depends on how the developers are progressing on the changes that StarlingX needs.
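Coming back to the playbook structure I just described, here is a minimal sketch of what one of our test playbooks might look like. The Ansible modules and the Robot command-line options are real, but the paths, tags, and layout are only illustrative assumptions:

```yaml
# A sketch of a playbook that runs the sanity-tagged Robot tests on the
# test node and pulls the report back for publishing.
- hosts: all
  tasks:
    - name: Run the Robot test cases tagged as sanity
      command: robot --include sanity --outputdir /tmp/robot-results tests/

    - name: Collect the Robot report from the test node
      fetch:
        src: /tmp/robot-results/report.html
        dest: reports/
```

This is also where the tags I mentioned earlier pay off: `--include sanity` selects only the test cases carrying that tag, so the same test repository can feed the sanity, special feature, and full suites.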
And of course, we will have our official release. This official release needs to be tested in every way: using all the configurations, against all the possible test suites.

So let's introduce our testing scope. As I mentioned before, the sanity testing will only exercise the basic features that an ISO needs to be tested on, such as boot-up, live migration, instance creation, and tests like those. So far we have about 30 test cases just to validate that our ISOs are stable. The open circle on the slide contains all the special features that we want to introduce. Again, we need to be really careful with these special features, since we don't know how they are going to affect the other OpenStack components; the scope will depend on the feature, so it can be bigger or smaller. And of course the full testing, which needs several test suites, because we want to test all the OpenStack components and we want to get performance values in order to see if we are making progress with the new features. Again, the special features need to be tested against the other components. We want to be compliant; that's why we are going to use Tempest. And why not, we can add stress testing to the scope.

Now, reporting. As a matter of fact, Robot can give you all the reporting part, but we need something that can organize those reports as soon as the execution is done. For the sanity testing, we are going to have one report daily. For the special features, we are going to have the full report once we get the ISO. And the full testing needs to split its reports according to the test suites that we are going to execute.

Now you can imagine the full picture. We are going to work in parallel: on one side, the test development cycle, creating all the test cases using Zuul as a gatekeeper; on the other, we are going to consume the ISOs, with Zuul working as a job orchestrator, and it will be the main bridge for uploading all the reporting.

So let's go back in time and see where we started. At the beginning, we had a lot of manual test cases that need to be automated and no CI/CD infrastructure at all. What are we going to do? We are going to implement Zuul as a gatekeeper for the test code development. We are going to implement Zuul as a job orchestrator for the different test suites that we are creating. And we want Zuul as the main bridge for reporting as well. The full expectation is to have Zuul working on every single phase, as a gatekeeper and as a job orchestrator, without any kind of diverging jobs. Why is that? Because we want a homogeneous environment; we don't want to maintain two different kinds of jobs.

So, as a conclusion, we plan to use Zuul as our main CI tool because the community feels comfortable with it and we have a lot of support on the subject. We don't want to reinvent the wheel. We want to save time; we don't want to waste time just investigating more tools. If the community feels comfortable with it and it's already running, use it. And of course, we want to maintain our four principles: we want to keep StarlingX completely open source. I know that this way of solving the issue can raise a lot of questions, but I think that the best way to solve them is as a community.
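Before I wrap up, one last sketch to tie the daily cadence together: in Zuul, the daily sanity run against the fresh ISO could hang off a timer-triggered pipeline. The pipeline and trigger syntax are standard Zuul v3; the job and playbook names are purely hypothetical:

```yaml
# A timer-triggered pipeline for the daily sanity run against the daily ISO.
- pipeline:
    name: periodic
    manager: independent
    trigger:
      timer:
        - time: '0 2 * * *'     # fire once a day, cron syntax

# A job that runs our sanity playbook (names are illustrative).
- job:
    name: stx-sanity
    parent: base
    run: playbooks/sanity.yaml

# Attach the job to the periodic pipeline for this project.
- project:
    periodic:
      jobs:
        - stx-sanity
```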
I will upload all the progress during this implementation, and if you have suggestions, if you want to comment on something, or if you see any opportunity, just let me know. For that, we have several ways to communicate: we have our IRC channel, #starlingx on Freenode; we have our mailing list; and you can join the weekly meetings through Zoom.

Okay. I think I spoke too fast. Questions? No questions? Well, I will give you back 20 minutes if you want to go enjoy the outside. Thank you so much.