My name's Stef. Good morning, everyone. I'm involved in the CI and CD objective in Fedora, which really has the potential to change a whole bunch of how we work, remove a lot of the maintenance burden, and help us move quicker and get past that flat part that Matt showed in his dinosaur graphs. There's more of a talk on that tomorrow at five in Grand 2, I think. I wish these talks were the other way around, so imagine you've come back in time and now you're diving deep into the tests. In that other talk I want to share more of the vision of why tests are so fundamentally important and what they enable. But now we've transported ourselves here to look at the tests themselves: an introduction to the tests in Fedora, the goals, how they're laid out, how they work. And then there's a workshop by Fred that goes into actually doing these things; we can get started on some of that here if I run out of words to say during this talk. We actually want to hack on real tests, curate tests, and work on bringing tests into Fedora. What time is that? Tomorrow, Thursday, at 10 a.m. Cool.

All right, so what is the goal of what we're doing here? The goal and end result is that we curate tests the way we curate source code in a distro. We take this for granted now, but it really was an awesome thing: a distro took source code from all over the internet and brought it together in a standard way. Some people have weird tarballs, some people distribute a shell script that dumps code on your system, some people have pure git repos, and so on. We bring it all into dist-git in a standard way, and we're then able to build something off it and do weird and different things with it: use it for different systems, for containers, and so on. We're going to do the same thing with tests. It ties in very much with open source: bringing together the efforts of the open source community and curating them.

Tests, in this context, are not unit tests. We're talking about tests that validate a complete system. That might raise a question in your mind: what is a complete Fedora system? It's a system that is set up the way a user or customer or cloud would use it in production. Rather than a build root where you test stuff, or a single package where you test stuff, it is a complete system. A good way to think of this is that you don't get to put your hand inside your software like a sock puppet and make it do things. You have to interact with it from the outside, through its tools and commands and so on. Let's take curl as an example. Say you want to test curl. You don't get to go inside the internals of curl and figure out whether it's making its HTTP requests right. What you do get to set up is a full system: how it uses OpenSSL, how it uses certificates, how it maybe uses various dependencies such as SSH, and then set up a little server for it to interact with and check that it works. You can also break things in that setup, as you would find in real life in production, and of course make sure that the tests fail.
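To make that concrete, here is a minimal sketch of what such an outside-in test could look like as a plain shell script. This is not from the talk; the port, paths, and expected content are invented for illustration:

```sh
#!/bin/sh
# Hypothetical outside-in test for curl: no poking at curl internals,
# just a real server and the real command a user would run.
set -e
mkdir -p /tmp/curl-test
echo "hello" > /tmp/curl-test/index.html
# Start a throwaway local web server for curl to interact with
python3 -m http.server 8000 --directory /tmp/curl-test &
SERVER=$!
trap "kill $SERVER" EXIT
sleep 1
# Drive curl exactly as a user would and verify the observable result
test "$(curl -s http://localhost:8000/index.html)" = "hello"
```

The point is the shape: set the system up, interact from the outside, assert on what a user would see, and exit non-zero on failure.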
So I have a question: I know that curl already has pretty extensive tests. What's the point of creating new tests if those already exist?

That's a good question; we're going to come back to it, but let's make sure we understand that earlier slide: we're curating tests. Sometimes we invent them ourselves, the same way we sometimes write source code in Fedora ourselves, pretty often even. But most of the time we curate source code from elsewhere, build it in a certain way, and make it predictable how it runs through the process and gets out to users, and it's the same with tests. So if curl has an extensive test suite, which it does, then we would curate it and invoke it in a standard way.

Really important is to make the tests as easy to update as the package itself. It's not okay for the tests to be a mysterious thing that hides off somewhere else. They need to be updatable as easily as the software we've been working with in Fedora this whole time.

Oops, there we have a duplicate slide. We want to find and fix bugs while the code is fresh in our minds, rather than months or years later. If someone tests a change that you've brought into Fedora as a package maintainer much later, months later, or maybe after it goes downstream years later, and finds a bug there, then as the maintainer, or even as a developer of that code, you're going to have forgotten why you made that change. It's going to be really hard to remember: oh my God, what happened, why did I do this? Whereas when it happens immediately, right away, with the tests right there together with the code, we can run them and get feedback immediately, and it's really fast to fix things. You have an idea of what's going on. You may have just seen the release notes from curl, you know why you're bringing in that new version or why you applied that patch, and you can tell: oh, it used to work, now it doesn't.

So how do we pull these goals off? Here's where we go into the details. This is something that's been happening for a while, and many of you have been involved in pushing for exactly the right decision. I'm glad we chose, as a Fedora project, to put the tests in dist-git along with the source code: along with the spec file, sometimes the source code itself, along with the choice of which tarball, which version you use, so that they can be updated together. Right now this is in progress. Tests are going into dist-git. There's one test in dist-git so far, but there's a whole bunch more; I think there's another 80 that are ready to land. And obviously, as we bring tests into dist-git, what happens? That's a very good question; we have more about this later, and I'll share a link where you can go and look at the tests that are queued up.

Oh, certainly. The question is: where are these tests coming from? The one test that's in dist-git so far has come from an upstream project, similar to what Tomáš highlighted: an upstream project has a comprehensive test suite, and we execute it in a standard way. That one is rpm-ostree; thanks, Jonathan, for making that happen. A whole bunch more tests are coming from the Red Hat QE folks, who have been writing tests, but we had not curated them in a standard way before. Now they're being curated in a standard way, brought here ready to land in dist-git and be invoked in the standard way, similar to how we package source code to a standard. I'll show you the link to that.
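For orientation, the layout ends up looking roughly like this inside a dist-git package repo. The file names below follow the tutorial shown later in this talk; treat the exact names as illustrative:

```
gzip/                    # dist-git repo for the package
├── gzip.spec            # spec file, curated as always
├── sources              # checksums of the upstream tarball
└── tests/               # curated tests live right next to the code
    ├── test.yaml        # entry-point playbook the spec prescribes
    └── test-simple.sh   # the actual test suite, whatever form it takes
```

The tests directory is versioned, reviewed, and branched together with the spec file, which is the whole point.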
So how do we make this happen, where we have different people touching different parts of a dist-git repo? Initially it might be a bit of a change in thinking, but where we get to is that we share the responsibility for the tests and for the code. Oftentimes, if you're a package maintainer and you find that bringing in a new package breaks the tests, that the tests don't account for a new feature, that an additional test is needed, or that one is broken in some way, you get to change it at the same time. Just like we change the checksums for that new tarball. Sometimes the spec file needs to be updated, old patches need to be thrown away, perhaps a new one brought in, maybe some changes are merged upstream, and so on. We do the same thing with the tests; we start to care about the two together. Basically: here's the software itself, and here's the definition of how it should work. They're updated together with the source.

And we started to think about this: in order to make this actually work, we need this curation of the tests to outlast the testing system that invokes them. RPM, whether you love it or hate it, and you're in front of Fedora people, so you'd better love it, has outlasted all sorts of stuff. In fact, it's even outlasted yum; it's working with DNF, it works with OSTree. It has outlasted and grown beyond tons of different technologies that use it. That's what we needed to do with these tests, and what we've done is split the idea of the test away from the testing system that invokes it; split the idea of the test and its framework and how it wants to do things. Some of them are really nice and cool, well written, understandable; some of them are arcane and strange. It doesn't matter. The testing system, the CI pipeline, any other automated testing system, or even humans using the tests, should be able to invoke them in a standard way and not have the two bound to each other forever after. You can have Jenkins invoke these tests, you can have Taskotron invoke these tests, you can have all sorts of different systems invoke these tests. And that is really important.

So there's a spec, and it's at this link. We're going to open it in the one browser we're sharing so you don't have to be furiously typing the link. This spec is about how to discover tests, how to stage them, how to invoke them, and how to gather their results. We posted it on the Fedora CI mailing list and had a discussion about it; clicking on from this page, you can see the options that were considered for this spec. I think there were three: an RPM-packaged, shell-script-based spec, where we talked about how to invoke the tests using a shell script; an Ansible-based spec for invoking the tests; and an autopkgtest-like spec. After the discussion, and you can see all the logic, you can see everyone who weighed in and helped make this choice, and a lot of those people are here in this room, the Ansible one was the one that was chosen.

So here's what a testing system is responsible for doing. This is the part that uses the spec to invoke things. Sorry, the slides render rather poorly here. It's responsible for building the test subject. The test subject is the thing being tested. In some cases that's a package: build a package, I'd like to test this package. It's a bit of a gray area.
Like I said, we need to test that package in the context of a full system. It turns out you can tell yum: hey, install all the dependencies for this, and you can hope to get close to how that package would behave on a complete system. But more specifically, some of the test subjects are really coherent, integrated subjects, such as a container image, a qcow2 image, or an OSTree tree; those are complete systems where you'd say: I'd like to test this, I'd like to test this Atomic Host image. So the testing system is responsible for building that. It's responsible for deciding which test suites, looking at which test suites in dist-git, we run against this thing; for scheduling jobs, maybe it needs to start up some nodes or some OpenStack nodes. It stages the test suites, and we'll look at the details of that, and invokes the tests in a standard way. It gathers the results that those tests put out, and it relays the results to something that's going to use them. And if we don't relay test results back into someone's workflow, some human's workflow and process in Fedora, then the tests are sort of useless; invoking them is useless. We're going to talk more about that on Thursday. But those are the responsibilities of the testing system.

The standard interface is then the thing that the testing system uses. It's this standard interface that uniquely identifies a test suite; stages a test suite and its dependencies, and tells us how to do that; provides the test subject to the test suite, and tells us how to do that; tells us how to invoke the tests in a consistent way; and tells us how to gather the results. All the things the testing system needs to do.

A question: the first bullet point was build the artifact. Why wouldn't it get the artifact from Koji? Usually you want to build a thing in Koji and test that, not build it again. Okay, so we're going back in time here, so I will refer to my future self tomorrow. When you do continuous integration, if you look up continuous integration in any definition, old-school continuous integration basically means: bring the whole thing together. Just the act of bringing it together had these amazing results. And it turns out it was humans doing it; remember, this is the 1800s now. When you bring automation into this, which is really important, machines and so on, you needed a way for the machine to tell: I integrated this thing; now, is it good or is it bad, is it right or is it wrong? And that was the test. That's why continuous integration is now so tightly bound to tests: they are fundamental to it. But even just the act of bringing it together had massive results. That tells us that part of continuous integration is building the thing, composing the thing, preparing it the way a user would use it in production, and then running the tests on it. Now, it can call out to Koji to do that, or it can use something else to do the compose and so on; that's legit. But this is part of the entire CI process; part of what you think of when you talk about continuous integration is bringing it together.
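Mechanically, what all of that boils down to for a testing system might look something like this sketch. The TEST_SUBJECTS and TEST_ARTIFACTS variables and the subjects/artifacts playbook variables show up later in the walkthrough; the paths here are invented:

```sh
# Hypothetical testing-system invocation, per the spec: stage the test
# suite from dist-git, point it at the built test subject, run, collect.
export TEST_SUBJECTS=/var/tmp/compose/Fedora-Atomic-26.qcow2   # the thing being tested
export TEST_ARTIFACTS=/var/tmp/artifacts                       # where results land
cd gzip/tests
ansible-playbook -e "subjects=$TEST_SUBJECTS" -e "artifacts=$TEST_ARTIFACTS" \
    --tags atomic test.yaml
# afterwards: relay $TEST_ARTIFACTS and the playbook's exit status onward
```

Everything above the ansible-playbook line is the testing system's business; everything below the interface is the test suite's.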
So lastly, and this is something you have to remind yourself of a little, because there are some surprises in here for people who have been involved with tests: the test suite, which is the thing that sits below that standard interface, is responsible for bringing in any dependencies it needs, such as its test framework, whether that's the Modularity Testing Framework, or BeakerLib, or Python unittest, or whatever libraries it likes to use. It's responsible for executing that framework. It's responsible for provisioning any containers or VMs, or maybe a little cluster if the test needs one, itself; very likely via its framework, so not every test has to reinvent this. Sometimes, although this is typically less reliable, it may call out to another cluster or do these things somewhere else. That's fair; just make sure that's reliable. But all of that is within the purview of the test suite and its framework. And it's responsible for providing the test results and test artifacts in the way the standard describes.

By the way, this is the framework for writing an Ansible module. It's called AnsiballZ. I kid you not. So if you want to go write an Ansible module, you're going to have to type the word Ansible. There is some, let's say, string-grepping of your module going on, looking for strings like this, to know that it's an Ansible module. It is a ball; it's one of those balls made out of rubber bands.

So Ansible actually wasn't my first choice for this, but I think it's very much the right choice that we've chosen here. As we built this out and got more experience with Ansible, some aspects of Ansible really fit very well. The way Ansible has dynamic inventory fits super well with the fact that this thing is supposed to launch its own containers and its own VMs and so on. The tagging in Ansible fit well. And Ansible lets us call out to any shell-based test suite or bring in any of these things. It really is a good tool for this job. Keep in mind that if you don't like the way Ansible represents its worldview, its playbooks and its tasks and so on, it's really easy, and we'll look at an example, to have the test suite and test framework all be written in shell script, or however you like, with literally this much Ansible to just call it. Ansible is the standard that we use to invoke these things; we'll look at how we use it. But if you're allergic to Ansible, it's not a blocker, it's not a big problem. If you compare Ansible and RPM spec files on, you know, how many obscenities they generate from package maintainers trying to change them, RPM spec files probably weigh that scale down quite a bit more. So it's not perfect, but it does work well. And for extra kicks, since the rendering is not working here: just install cowsay on your computer and Ansible will say everything like this. So if you're getting pissed off at Ansible, just do that for a little bit. Really.
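To show how little Ansible "literally this much" can be, here's a hedged sketch of an entry-point playbook that does nothing but run a shell-based suite. The file and script names are invented for this scratch example; the real gzip playbook we look at later also copies the script onto the test subject and pulls logs back:

```sh
cat > tests/test.yaml <<'EOF'
# Illustrative minimal entry point: all the real test logic stays in
# the shell script; this playbook just calls it.
- hosts: localhost
  tags:
  - classic            # the test context this suite is valid for
  tasks:
  - name: Make sure the artifacts directory exists
    file:
      path: "{{ artifacts | default('/tmp/artifacts') }}"
      state: directory
  - name: Run the wrapped test suite
    shell: ./test-simple.sh > {{ artifacts | default('/tmp/artifacts') }}/test.log 2>&1
EOF
```

If the shell script exits non-zero, the task fails and so does the playbook, which is exactly the pass/fail signal the spec wants.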
Now, you can imagine that some of those things we ask the test suite and test framework to do are things that not everyone should have to reinvent over and over and over again. So there's this package called standard-test-roles: Ansible roles, Ansible inventory, and various scripts. Now, these are not in the spec; these are ways of implementing the spec easily, and we've made sure those are separated. This is akin to a test framework, God help me. Every time you say "framework", a butterfly somewhere dies. But this is a way of making it easy for tests not to have to reproduce logic all over the place. So if you find stuff in your implementation of tests that seems to be repetitive between different dist-git repos, that's where it goes. Merlin has done a really good job of maintaining this: Merlin Mathesius, and he's on #fedora-ci, like all of us; you'll see that at the end, on Freenode. That's where the development happens. And this is getting pretty stable; I think people are putting it to good use. One of the criteria for calling it stable, and I think we're almost there, is to have it itself be tested. Then we're ready to say: yep, now it's stable.

All right, so what we're going to do: we can ask questions, we can dive into the actual spec, read through it, look at the different parts. We're not going to read every single word of it, but you'll get an idea of the kinds of things that are involved. In addition, we have a tutorial that we'll work through here, given sufficient bandwidth, that actually takes you through some examples. Then there's this link here, which has a body of tests being upstreamed, which Tim's actually going to talk about more. When is your talk? Yeah, so he's going to dive into this project, which is where a lot of the tests are coming from. Existing tests, remember, we're curating them: some of them are coming from upstream, some of them are coming from downstream and are being upstreamed, so to speak. We can dive into, perhaps here, or perhaps in Fred's workshop, adding inventory for different things like modules or new kinds of test subjects. We could look at how to invoke non-Ansible stuff via this standard interface. We can look at how to bring upstream tests in; we can have examples of that, and so on. I would like to dive quickly into the spec, unless you have another suggestion; we'll look through it and get you familiar with it. Any questions before we dive in there? Yeah?

The more tests the merrier, but in my experience, in any project, there's only time to run so many of them. So is there any means of definition, maybe in the spec, of how to categorize them, so people can run some quick tests, anything like that?

That's a good idea, and a good point. So far we've been so gung-ho about the fact that we have no tests at all. In this spec, and let's look at this, there is a way to state the test context, the kind of expectations for the setup and so on, and we could use that to distinguish deeper testing from more initial, gating testing. In fact, again invoking my future self, talking about the pipeline that runs these things: there's a portion of the pipeline that gates, that prevents a broken change from affecting other developers, and then there's a portion of the pipeline that tests further, batches updates, and does maybe day-long tests and so on. If those break, it will ask for things to be reverted, with the assumption that the first part caught everything that would affect progress in Fedora in general, with people's machines rebooting on Rawhide and those kinds of things, and the second part would actually find deeper bugs or performance bugs and so on.
So I think that's where the split comes; I think there's a place for that, but we should actually form an opinion about how that's tagged and marked. All right, let's look at this. Oh, sorry, is there anything else before we dive in? Okay.

So what we can expect is here on the Fedora wiki, and you can see it's written in quite some detail. Let's zoom out a bit, because it turns out this screen is zoomed in just fine. It's written in quite some detail: the definitions of what we can expect, the terminology that we use. We talked about test subjects; test suites; test results, which are a Boolean pass-or-fail; test artifacts, which we'll keep referring to, which are the ways a test produces output; and the testing system, which, as we said, is the CI system that's coordinating all this. Again, we talked about these responsibilities already; they're listed here much more verbosely. If, in reading this, you find stuff that's confusing or fails to be specific, I'd like to know; that's a bug if it's not specific. I'm not expecting us to read through all of this right here, just to get familiar with it. The requirements are all written in that "should, should not, must, must not" type of language. And here we see how the testing system should stage stuff, where it should look for things, what needs to be pre-installed, what a job or runner of this thing should have ready for the invocation.

Now, you can see here that there are different kinds of test subjects, and we pass the test subject to the playbook. Different kinds of test subjects have different identifiers; we use a kind of simple globbing to tell what is what. Further ones can be added here. One area where this isn't covered yet is modules: whether they fit into the repo bucket, or whether there are other ways to describe to the testing system and to the tests that I want to test a module. You can see there's some basic stuff already there: Docker images, qcow2 images, and RPMs; those are the ones the tests typically use a lot already.

Then, in addition to the test subject, there's what we call the test context, which is the environment the test can expect, the environment the test is written for. A certain test might be testing: I'm writing to /usr, and I'm doing this and that; that's part of the test, so it doesn't work on Atomic Host, where you have a /usr that's read-only; it just cannot work. Regardless of the kind of test subject, regardless of the dependencies installed, it's just not intended for Atomic Host at all. So we use Ansible tags, and you'll see those when we walk through the tutorial, to tag the test to say what it is intended to test. In some cases it's intended to test everything; in some cases only a few contexts. As we get more familiar with these tests and start to find new test contexts, we can add further ones. And this may be where the earlier point about categorization comes in: we might have some tests with special tags that say this is a performance test, or this is a long-running test, to give some expectation of what the test is.

Maybe a small note here: there is an initiative around test metadata for upstream, but that's probably another project, and it should also address these differences between test contexts and subjects. So hopefully we can represent that with these two concepts.
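Going back to the identifiers for a moment: since subjects are told apart by simple globbing, TEST_SUBJECTS values end up looking roughly like this. All names below are stand-ins; the docker: prefix form shows up later in the demo:

```sh
# Illustrative test-subject identifiers, distinguished by simple globs:
TEST_SUBJECTS=/path/to/gzip-1.8-2.fc26.x86_64.rpm           # an RPM (installed with its dependencies)
TEST_SUBJECTS=/path/to/Fedora-Atomic-26.qcow2               # a complete bootable image
TEST_SUBJECTS=docker:registry.fedoraproject.org/fedora:26   # a container image to pull
```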
So then, here's the kind of stuff that the testing system does to execute the tests. And what's interesting is that you can run any of these tests on your own machine, just like you can build from a spec file on your own machine. It's fundamental that you can run any of these tests, all of them, on your own machine. Even if they call out to do things elsewhere, they should call out from your machine. When we walk through the tutorial you'll see us taking some shortcuts, obviously, because we're not as rigorous and strict; we don't have to follow the spec exactly when we invoke the tests by hand. But this is the basic stuff you'll see us doing, such as setting the different variables, and you need both Ansible variables and environment variables, for where the test subjects are and where to put the artifacts; those tags we just talked about, the ones up here, choosing which to invoke the tests with; and where Ansible should find its inventory. One interesting thing is that the test itself can specify its own inventory, and its inventory is a great way of starting the test subject up as a VM; you'll see this in practice in a second. But if a test doesn't care, then there are standard ones that fall into place, and the spec describes how that works. Tests are always invoked as root, and then, if the test wants to test something as non-root, which it should as well, it should test both, it should add a user, su to that user, and run the further tests as that user. The playbook itself, the Ansible playbook, exits with a code for success or failure. Okay, yeah, Mike?

So does number five mean that tests are not executable in OpenShift? That's an interesting topic to dive into. In general, a lot of the stuff here needs a pretty privileged OpenShift container. We are testing an operating system here. That means we're going to need all sorts of privileges, and new kernels in many cases, a Docker service; we're going to test all sorts of aspects of a system. It is possible, we have done this testing inside of OpenShift, but you need sufficient privileges to do it. And I've done some interesting work on getting KVM inside a container so that you don't need a privileged container, but you do need root access in the container. Otherwise, yeah, a lot of this doesn't work.

I would just clarify: root becomes a separated concept when you're talking about containers, right? That's correct. A lot of stuff will work without CAP_SYS_ADMIN, right? Basically, default Docker privileges are very different from true privilege; there's a big difference between default Docker and docker run --privileged, I guess is what I'm saying. That's right, root inside a container is not the same. The caveat is that in many cases we want to invoke a VM, so we need /dev/kvm. You don't need root to access /dev/kvm on a typical Fedora or RHEL system, but you do need it in a container; it's not there by default. That's the one wrinkle: typically, if you're doing this today inside OpenShift, invoking these tests in OpenShift, you run a privileged container only for the sake of getting /dev/kvm. Other than that... Is /dev/kvm not a thing you need? It is. And I have some work on an OCI hook that actually just gets KVM inside the container, and that's interesting; hopefully we can finish that up. Yeah, that'd be cool. I have a card for OpenShift up there; I'll CC you on it, and we can have some discussion. All right.
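On the invoked-as-root rule from a moment ago: the add-a-user pattern might look like the fragment below, appended to the minimal playbook sketched earlier. The "tester" account and the task names are invented:

```sh
cat >> tests/test.yaml <<'EOF'
  # Sketch of the non-root half: tests start as root, so add a user
  # and drop privileges for a second pass over the suite
  - name: Add an unprivileged user for the non-root test pass
    user:
      name: tester
  - name: Re-run the interesting parts without privileges
    become: true
    become_user: tester
    shell: ./test-simple.sh
EOF
```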
Put the results in the right place. And there is some discussion with folks who want to use this for logging and data aggregation, and actually apply some big-data concepts to the build results and artifacts, about naming the results in certain ways: if they came from restraint, or if they have xUnit output, or TAP output, they could be named a certain way. But that's beyond the spec here. All right. So the spec goes into more detail and so on. We're going to skip ahead and jump over to the tutorial, and walk through a few examples so you can see how approachable this is and get familiar with it. Are there any questions on the spec before we do that? Well, let's go on then.

So here we have, and I would encourage you to open this URL if you like and follow along on your own machines, this is all literally doable on your own machines right now, given internet access; there's nothing magic here. All the commands you need to get started are in this tutorial. A lot of this is packaged in Fedora 25 and 26. I am one guilty soul who still has Fedora 24 on this machine, so I've installed some of this manually from RPMs, but it's all in the standard repos now. You can just do sudo dnf install and get this stuff in place. That is really hard to read from back there, so let's fix that. There we go. A little better? Bigger font? Yes. So, as you can see, on my Fedora 24 system that last little package isn't there, but on your systems it should be, because you all look great, right?

But why are you doing the development inside a whole system instead of a container? That's like one of those guilty questions, like: do you floss? Yeah, exactly. And I agree, we should do that; I'm just not set up for it.

So we can clone the gzip package. Again, just to clarify here: you can see this is work in progress; the upstreaming of tests is currently ongoing. Like we said, Jonathan put the first test into dist-git, which means that using fedpkg to clone the gzip package won't get you the tests right now. I want to make that change in the next few days, but for now we need to clone them from this other repo, where they're staged and ready for inclusion. So let's do that. And now we can see this other repo, from upstream first, which Tim is going to talk about tomorrow. 10:30 tomorrow. 9:30? 10:30 tomorrow. The entire contents of this repo are the contents of the tests directory. So map it in your mind: imagine we are in a dist-git repo, in the tests directory.
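For following along, the setup from this part of the demo condenses to a few commands. This assumes Fedora 25/26 repos, and the clone URL is the upstream-first staging instance as best I recall it; treat it as a stand-in and use the link from the tutorial:

```sh
# Pull in Ansible and the shared roles/inventory scripts
sudo dnf install ansible standard-test-roles
# Clone the staged gzip tests from the upstream-first instance, where
# they live until they land in dist-git proper (URL is a stand-in)
git clone https://upstreamfirst.fedorainfracloud.org/gzip.git
cd gzip
```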
Let's look at rpm-ostree quickly. Right off the bat you'll see that there is a tests directory inside the dist-git repo; we go into tests, we see this content, we see test.yaml; this is a little file here, as is this one. But in the gzip repo we're in the tests repo already. You see the test.yaml; that's the entry-point playbook that the invocation specification prescribes. Let's take a look further.

So we become root, and now we can run the tests. The basic way to run the tests is just to run them like that. What it does, let's, um, I jumped ahead there; I was set up from trying this out earlier. Let's do that again; I had an environment variable in place that told it to do something different from what you'll see. You'll see those environment variables in a second. Here we go. As you can see, we have cowsay, very nice, and we're green, we're good, so this actually worked.

Let's see what's in this test, what it looks like. If we look at test.yaml, we can see it's just including this other file; let's look at that one. We can see there's a bunch of code here to invoke a shell script, and the shell script could be an entire test suite. This is the kind of code that will move into the standard test roles; it's not there yet, but that's where we'd put a lot of these tasks. We can see there are variables that go into the playbook, like telling it about the artifacts directory. There are tags, like we talked about before: this test is valid across Atomic Host, classic Fedora, and containers. We have some extra stuff here to invoke the script. Why? Because we cannot assume the script is always local; we know that, for example, in the Atomic case and in the container case, we need to copy the test into the machine to actually execute it. So we have: executing the test here, and pulling out the logs here. This is an example of a completely standalone, spec-compatible test that doesn't use the standard-test-roles helpers to make its job easier; we'll bring those in in a second.

So here, let's try putting the artifacts in a different directory, just to show you how that works. This is how you pass a variable to Ansible, and we'll tell it to put the test artifacts in /tmp/output. We can see there are some test artifacts; in this case it's a very simple test, and we can see there's a little log for diagnosing any problems. Typically this gets very complex and big, but we're starting with a very simple example. And again, we can specify tags like this, to say we'd like to only run the test if it's compatible with the classic context. You can do this for atomic, you can do this for containers, you can do this for a classic system; and you can write it like that, or with the dashes like this. Cool.
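The two invocations from this part of the demo, roughly as typed. The artifacts variable name matches what the playbook consumes; --tags also has a short -t form, which is the "like that, or with the dashes" remark above:

```sh
cd tests
# Plain run, as root, against the local system
sudo ansible-playbook test.yaml
# Same run, but redirect test artifacts to /tmp/output and only run
# the tests tagged as valid for the "classic" context
sudo ansible-playbook -e artifacts=/tmp/output --tags classic test.yaml
```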
So let's start bringing the inventory and the test subject into it. So far we were assuming that gzip was already installed on the system; we were really testing the system in situ, testing the system as it exists. That's not the intent here. A lot of tests can do that, but imagine you had a test that needed to reboot, or a test that needed to destroy an LVM disk or something. The whole idea is to start something, smash it up, verify what happened, and say yes or no, right or wrong, about the system the test is being invoked against. But let's start with something simple.

Okay, so here we are. This makes me blush a little; I wish this was simpler. But the intent here is that the testing system will check whether this inventory file exists in the tests, and if so, it will use it. This is an Ansible inventory: it tells us how to launch something for Ansible to then run against. So it starts up a system, and Ansible can then test it. If it doesn't exist, we're going to use the defaults. A shorter shell snippet to put in this tutorial? I would love that.

Okay, now let's get an RPM. So now we have an RPM to test. The first kind of inventory is a really simple kind: it's going to use that RPM as the test subject, with the inventory scripts that come with standard-test-roles. Given that there's no inventory directory here, it will use the ones in /usr/share/ansible. Doesn't the tutorial say that that's not a real file? I did download it to that file; actually, no, it doesn't exist; we cut that to make the tutorial shorter. Maybe that's confusing; maybe we can put it back.

So these standard inventory scripts know how to start a Docker container, how to install RPMs and their dependencies, how to start a qcow2 image. There can be more of these, there should be more of these; maybe we'll invent one tomorrow in Fred's workshop and contribute it back to the standard-test-roles RPM.

So, that's a good question: inventory files in Ansible can take on all sorts of forms, and it depends on the content. If it's a file, Ansible reads it; if it's a directory, Ansible enumerates it; if it's an executable, Ansible executes it. In this case they are executables; in this case it's actually a directory of executables, that's right, and it will execute them all. We can look at them. These are inventory scripts, and they have lots of stuff in them: ready to launch a qcow2 image, set it up with a certain port so SSH access works, then output its coordinates so that you can access it with the right keys. A lot of work has gone into this over time, by different people; some people here have contributed to it too. There's also a debug flag, which we'll turn on in a second.

But let's start simple. There's a standard way that a dynamic inventory executable puts out JSON. So let's execute this, yeah, let's execute it just as a plain script and not via Ansible: /usr/share/ansible/inventory/standard. What is it going and doing? It's going and installing that RPM, and then it outputs, on standard out, a bunch of stuff that Ansible can consume, saying: here's the host. In this case it tells Ansible to really just connect to the local machine. Let's come back to this; we'll look at it for qcow2 and Docker images, where it'll make more sense.

And now, when I execute Ansible like this, given that Ansible inventory file and the test subject environment variable, it does those tasks and then invokes the playbook on the system where the RPM was installed; in this case that's just the local system. This is really the dumb case. Let's move on to Atomic Host; this gets more interesting. We already have the Ansible inventory; good, let's paste it in here. So let's get a, if this takes too long, I have one that I prepared earlier, as you do, and this is taking too long, that's minutes, so let's copy the Atomic qcow2 in. Right, so here we are, and we can tell the playbook that the test subject is this Atomic image that we totally downloaded, ahem, downloaded before. That's a stock image that came from Fedora, and I don't think there's anything fancy inside it; this is just normal stuff.
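As an aside, running one of those standard inventory scripts by hand, like we just did, looks roughly like this. The script name is how I recall it shipping in standard-test-roles, and --list is the usual Ansible dynamic-inventory convention; the image name is a stand-in:

```sh
# Point the standard inventory at a subject, then run the script
# directly (outside Ansible) to see the JSON host record it prints
export TEST_SUBJECTS=$PWD/Fedora-Atomic-26.qcow2   # stand-in image name
/usr/share/ansible/inventory/standard-inventory-qcow2 --list
```

The JSON it emits is exactly what Ansible consumes: a host, a port, credentials, and connection variables.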
And again, let's execute that playbook and see what it does. In this case it launches a virtual machine, which takes a moment, and waits for SSH to come up, and then it will output JSON that Ansible can use to connect to that machine. It's taking longer than I thought.

It's interesting that there isn't any pre-made playbook responsible for provisioning. We went through that before; we actually had the playbook responsible for provisioning, so we had to resort to having one playbook execute another playbook inside of it. And Ansible already had this concept baked in with dynamic inventory, so we started just using it. That was one of the changes that came after the initial version of the spec.

But shouldn't it be the default, because I'm guessing the standard test roles will be installed by default on the testing machine, right? Yeah, so in the staging part of the spec, and this is the one place where what I said before was not completely honest, it says that when you stage for executing tests, you have to install these packages. Typically, when you run Ansible on a Fedora or RHEL system, you must install these packages; this adds one more to the list, so that these standard roles are available.

But the default should be for the testing playbook to do the provisioning; like, instead of overriding the behavior by putting in your own inventory file, you should have the opposite semantic of: I want this to be taken care of by the roles. We'll look at a test where we do exactly that, and a test can take over its own provisioning completely. I think you did this in your work on rpm-ostree; the test can still do what Ralph is saying, no problem. It can do its own provisioning, but we moved a lot of that into the defaults, so that we use inventory; it's much easier and much saner for people to comprehend. They can still do what you're talking about, Ralph, and still do what you did in rpm-ostree, with a module or however you want to do your provisioning of the test subject. Those are legit choices. In fact, we haven't yet, but when we bring in the Cockpit tests, Cockpit has hundreds of CI tests that go all over the system, and we have our own provisioning of a VM, launching it and interacting with it; we'll just tie that in with this, either with the inventory or in the playbook.

So let's run this while we're talking; that's useful. As you can see, that standard inventory script output a whole bunch of stuff about how to connect to the thing: you connect on this port and this IP, with this password, with this SSH key, and that allows this playbook to do exactly that. In fact, let me cancel this; I'm going to show you one more thing. If you put TEST_DEBUG in front of this, you get a lot more output for actually debugging things, and it puts breaks in. In theory, a static inventory, in theory you can make those scripts in your test be a non-executable file at tests/inventory; I don't think that works, I have not found a way to make it work. If you do, and it's standard Ansible, then that would be cool; I'd want to talk about that. Let's do it either now, if we have time, or in the workshop; we can explore different things like that.

So as you can see, we ran it against the Atomic Host that was launched, and we checked that gzip worked in an existing Atomic Host. And because I had that TEST_DEBUG set, one thing is that the VM is actually still running, if you like, and even after everything else went down, I can actually SSH into this VM and so on; I can see that this is a host. We could do that? We could do that, yes.
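The debug knob from the demo, spelled out. TEST_DEBUG is the variable as shown on screen; the rest of the invocation follows the earlier pattern:

```sh
# TEST_DEBUG keeps the output verbose and turns off the cleanup logic,
# so the launched VM stays around after the run for post-mortem poking
sudo TEST_DEBUG=1 TEST_SUBJECTS=$PWD/Fedora-Atomic-26.qcow2 \
    ansible-playbook -i /usr/share/ansible/inventory --tags atomic test.yaml
# ...then ssh in using the port, user, and key from the inventory's
# JSON output, and inspect the failed machine by hand
```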
Yeah, it's true. Typically, when you're doing something like SSH, just keep in mind, it would be a more complicated example where you launch something inside there: you'd probably start SSH on a different port, or you'd launch it in such a way that the default SSH was on the port Ansible connected to and another SSH process ran inside, or you'd figure out some other way. Those are the awkward networking situations you get when you're doing CI. We've managed to test these awkward situations really well, but each of them requires a little bit of stretching and scratching and figuring stuff out. Okay, let's disconnect from here.

Either you're going to like this answer, or you're just going to go like this with both hands. What happens, what we do in the inventory files: typically, inventory files in Ansible are meant for existing infrastructure, for me to find my cloud, to find my Amazon VMs and so on. What we do is, and I'll put this to you, we watch the parent process that invoked us, and when it goes away, we die. So if you look, we can see that it's still running; that's the inventory script. And if you look for the qemu process, it could be that there's more than one, maybe I did this before: I started a qemu, and now there are two of them, right? And if I exit my shell, because I invoked it from my shell and not from Ansible, one of these should go away when I do that. The second one, because I used TEST_DEBUG, won't go away, because that turned off the cleanup logic. But if I go like this, then, you can see that one of the qemu processes went away, the one from that previous invocation I did. I launched it right from my shell and it output the JSON, you remember; that one didn't get cleaned up because it was watching my bash process, and now, when I exited bash, it got cleaned up. The implementation details are kind of scary, but I think we have a solid concept underneath, on which all sorts of stuff can be tried; that's the point, right?

So again, we already looked at TEST_DEBUG. Let's just look at one more example here, of a container; this may take a little longer, jeez. There's a test subject saying: this is a Docker container, to be pulled from the Docker registry at that location. Now, other containers, OCI containers, some of them are addressed differently; I don't think the inventory script for them is written yet. It needs to be written, so that an OCI tar-type file can be imported and tested; that would be great. But so far, a lot of what's happening uses this kind of registry content.
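What we're about to invoke, for reference. The docker: prefix is the identifier form from earlier; the image path is a stand-in, and sudo -E just preserves the exported variable:

```sh
# A container test subject: the docker: prefix tells the standard
# inventory to pull and start this image, then run the suite against it
export TEST_SUBJECTS=docker:registry.fedoraproject.org/fedora:26
sudo -E ansible-playbook -i /usr/share/ansible/inventory --tags container test.yaml
```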
So let's invoke that. And that didn't work. Why? Because I exited my shell and didn't have my inventory environment variable, so I actually tested on the local system. So now you can see it's actually starting a Docker container. The container folks have done a really, really nice, great, amazing job of making the container so small that we have to install stuff in it in order to run Ansible against it. Which is good, except that, because of the Flock conference network and because of the problem that was brought up in the State of the Union, we have to download about 200 megabytes of bullshit metadata in order to get any packages installed, which is, again, bigger than the container itself. Anyway, lots of little things to work through and solve, but this is not something that purely affects tests; it affects a lot of these concepts in general. That's why it's slow; it's actually blocked on yum right now, and you'll see the yum output come up. Just the metadata download takes this long.

You said that the first run of Ansible didn't work, but it still exited fine? That's correct. So, let's just make it clear: as we're doing these commands, we're invoking a subset of all the things the spec says you must do. We are not must-musting; we are not following the spec exactly. The spec says you must set the test subjects environment variable, and you must set it as an Ansible variable as well; you must do various things that we're not doing. We forgot to set the test subject environment variable, and in addition we forgot to tell Ansible about the inventory, which you can do either by environment variable or on the command line with -i; how a testing system does that is up to it. We chose that big command line to set an environment variable. So we did not follow the spec, and therefore it did not act the way we'd expect. It used the default behavior from standard-test-roles to launch a qcow2, since it does not have its own inventory, but the variable was not incorporated. I think it worked; it just executed against the wrong system. Does that make sense?

I'm just afraid that basically means you can run the tests and every single one passes, except you didn't test what you were supposed to be testing; the CI system is lying. That's a good point. Yesterday, Jack had a good idea: that we should pass through the CI system an invalid set of conditions that we know will fail, through the entire thing, not just this part, through the pipelines and so on. That helps two things: it shows up in the reports and data that the system works, and it also alerts us to false positives. I think that's a good idea, and I hope it will cover that case sufficiently well. That is testing the spec, and testing that the CI system is following it. We can look at each of these musts, we can probably do a wrong thing for each of them, and we can make it work that way. I was going to talk with Ari about that and see if it's something we can work on.
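Putting the musts together, a fully spec-conformant invocation would look less like the shortcuts above and more like this sketch. The variable spellings follow the requirements read earlier; double-check them against the wiki page:

```sh
# All the "musts" in one place: subject as env var AND playbook
# variable, an explicit inventory, and an artifacts directory
export TEST_SUBJECTS=$PWD/Fedora-Atomic-26.qcow2
export TEST_ARTIFACTS=$PWD/artifacts
sudo -E ansible-playbook -i /usr/share/ansible/inventory \
    -e "subjects=$TEST_SUBJECTS" -e "artifacts=$TEST_ARTIFACTS" \
    --tags atomic test.yaml
echo $?   # the playbook's exit code is the pass/fail result
```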
It is 9:59; you have one minute. Okay, where are we? The tutorial goes through adding tests; it goes through how to work with upstream first, although Tim is going to describe that in more detail, so I'm glad we can move past it; writing a new test; where to put stuff; how to execute a shell command, and hopefully we can get that simpler; how to tag a test to run in different situations, the test contexts; and wrapping a script, again using some of the same examples. I would like to have this tutorial also go into wrapping an installed test, which is a GNOME or GLib concept for installing unit tests and then running them locally, sort of integration tests that really exercise the libraries being used; wrapping a Beaker test that Red Hat QE wrote, and how to actually make those work; there are also restraint tests, which is another test framework; and I'd like to add module tests to this tutorial, and a few more things.

So we're all done. Anyone who wants more information: go to Tim's talk, go to Fred's workshop, and come to my talk.