Recording, good. So, a few lines about me: I've been a Fedora ambassador and contributor for a long time, and I work with CPython, the Python programming language. Fedora Cloud Engineer, that's my official work title; it means I work with the community to build the Fedora cloud images. But coming directly to the topic: anyone here who does Fedora QA stuff? I can see you. And anyone else who ever participated in any Fedora test day, or did any kind of testing in Fedora land? So we have a few people, almost a big number for this group. So the question is: how do we currently test our cloud image? What do we do right now? Anyone participated in a Fedora cloud test day? Anyone? So what do we do there? To put it simply, we run a few commands and check the output. Am I correct, Tim? Well, that's what happens, yeah. So the whole idea is, we have a couple of pretty good, self-explanatory wiki pages written for testing the cloud image. Right now there are four tests you have to run, and each test has its own wiki page where it is explained. Things like an SELinux check: you run getenforce and see what the status is. And there are tests about services: you disable a service, reboot the image or instance, and see whether the service is still disabled or not. Then you start it again, re-enable it, do another reboot, and see whether it's running or not. So basically, at the end of the day, we are just running a few Linux commands and checking the output. I never participated in a cloud test day before; I never tested the cloud image before I joined this team and started working with the Cloud Working Group. But when I did it for the first time, I found, oh yeah, it's very simple for me to do these things. But first I had to find out where I could boot a cloud image.
So at the beginning it was difficult for me, because I never had access to any OpenStack or AWS or anything. But one of our contributors, roshi, had made a very simple tool called testcloud, which boots up a cloud image locally on your system. So I used that, booted up a cloud image, and then went to the wiki page. What's next? The next step was to log into the system, run these commands, and check the output. And I did that; I did that happily for the first few days. Then, while testing the nightly builds, the first thought that came to my mind was: this is 2015, and my computer should be able to do this instead of me running the same thing every time. And I know there are these things called CI, and there are many different systems available in the industry which we use regularly. I used Jenkins before, with very little experience, but I used it, and I looked at a few other CI systems. The biggest problem I had, or I should say still have, is that I'm not a very good sysadmin. I can make things work somehow, but I'm not that confident about myself. So what I was looking for was a simpler way to just test my cloud images; that was the first thing I had in mind. Tunir was the answer for me at that moment. I never called it CI at the beginning, but later I thought, OK, it can be used as a CI. The idea was to have something very simple, with minimal configuration, which can help me test my cloud images. And if I just remove the cloud-image part, it can help me test whatever application I put in there. So, what Tunir can do right now: it can boot up a cloud image, basically a qcow2 image. That code is not written by me; I'm using a very old version of the same testcloud project from roshi, and it directly uses QEMU/KVM to boot up the qcow2 image.
And I modified it a little bit so that I can have multiple instances running if required. At that time that wasn't in testcloud, but testcloud is coming up in Fedora 23; you should have a look if you haven't already. It's an amazing application, very simple, very little code, but it can do many things for you. So Tunir can boot up cloud images, and it can log into pre-installed systems, basically using Paramiko, the Python SSH library. At the beginning I was thinking about whether I should use something like Ansible directly, but then I thought, oh, I want to keep it as simple as possible. And people can use Ansible inside Tunir right now if they want. And yeah, smallest config, I talked about that a minute back. Is the font size OK? Yeah? OK. So each Tunir task, or job, contains two different files. Most of my talk, by the way, will be demo, directly showing you stuff rather than just talking over slides. So this is the configuration file required, for a cloud test example. You just have a name, which I'm not using much so far; a type, "vm", which means it will boot up an image; the path of the image, which is basically the only thing a user needs to modify if they want to test; the amount of RAM I want to provide; and the default username and password we are passing to testcloud. That's it. And along with this, we have a file with the same name but a .txt extension, which is almost like a bash script, but not a full script: one command per line. And you'll find two things in it which are not part of bash: the word SLEEP in capitals, and lines which start with a double at sign (@@). So, running a command on a different system is easy, correct? Just use some library, say Paramiko, log into that box, and execute the command. But part of our cloud test required me to do a reboot.
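As a rough sketch, the JSON configuration described above might look like the following. The field names and values here are reconstructed from the description in the talk, so treat them as an illustration rather than Tunir's exact schema:

```json
{
    "name": "fedora",
    "type": "vm",
    "image": "/home/user/Fedora-Cloud-Base.x86_64.qcow2",
    "ram": 2048,
    "user": "fedora",
    "password": "passw0rd"
}
```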
After the reboot, I have to wait for some time to make sure the system has booted up again properly, and then start executing the commands again. So SLEEP does nothing but make Tunir sleep for the provided number of seconds and then start executing the commands again. And if any command starts with @@, that means the return code of that command should be non-zero. Sometimes we know that a test or a command should fail, and if it does not fail, that means there is a problem. So I thought it would be a good idea to have that for a few cases. I never knew before that sudo reboot returns a non-zero return code; that's why this was added. So for any job, if you want to use Tunir for your own testing on your laptop or desktop, you may just want to put the required commands directly into this .txt file; it won't require anything else. And this is how you execute the actual tests, the job. The first argument, --job, is the name of the job; that's how Tunir knows which JSON file and which .txt file to read to get the commands. At the beginning I had an idea about saving the results in a database. The code is in there, but I'm not using it at all; I never tried it after that. So I have another argument, --stateless, which means it will just execute and print everything on the console. I hope this will work and not break. So it's basically just booting up the image. It waits by default around 60 seconds for the image to boot up, and then starts executing everything. [Audience] Is that a timeout or a sleep? Couldn't it just retry every two or three seconds? [Kushal] No, no; at that moment I found it simplest to just put in a sleep.
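To make the commands-file conventions concrete, here is a small, hypothetical Python sketch of how a runner could interpret such a file: plain lines must exit 0, lines prefixed with @@ are expected to fail, and SLEEP N pauses the runner. The function name and structure are mine, not Tunir's actual code, and the real tool runs each command over SSH rather than locally:

```python
import shlex
import subprocess
import time


def run_commands_file(lines, sleep=time.sleep):
    """Interpret Tunir-style command lines; return True if the job passed.

    Conventions (as described in the talk):
      - "SLEEP N"  -> pause for N seconds (e.g. to wait out a reboot)
      - "@@ cmd"   -> cmd is *expected* to return non-zero
      - "cmd"      -> cmd must return zero
    """
    for line in lines:
        line = line.strip()
        if not line:
            continue
        if line.startswith("SLEEP"):
            # Wait for the given number of seconds, then keep going.
            sleep(int(line.split()[1]))
            continue
        expect_failure = line.startswith("@@")
        cmd = line[2:].strip() if expect_failure else line
        rc = subprocess.call(shlex.split(cmd))
        failed = rc != 0
        if failed != expect_failure:
            # A command failed unexpectedly, or an @@ command succeeded.
            return False
    return True
```

For example, `run_commands_file(["true", "@@ false", "SLEEP 1"])` passes, because `true` exits 0 and `false` is marked as expected to fail.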
The other reason was, there are cases when our image might be completely wrong, so it will not boot at all. So even if after, say, 60 seconds it isn't up, I'll retry once more, another 30 seconds, and if it doesn't come up by then, that means there is something wrong with the image. That was my idea. You can see that it executed the commands now. So if I go back up here: the first command was ls -l, then sudo cat, sleeping for some time, and then the last command. So job status is true. The output looks like this because, if anyone wants to do anything on top of this, since this is not a full CI system, you may want to write some kind of glue script which can then parse through all of it: for each command, the status and the output of the command. And this was the case I was talking about: I knew this one was supposed to fail, because there is no such subcommand, and it failed. [Audience] Were you just trying to demonstrate that? Do you actually count that as a test? [Kushal] Yes. So, the default code believes that if everything is OK, a command should return 0 as its return code, correct? But while doing sudo reboot, I found out that, nope, it doesn't. And there are a few cases, I know at least a few of my old applications, where we were running things which should fail. [Audience] So is pinging a dead domain something you would actually do? [Kushal] That's true. Actually, I don't do ping; I use dnf to install a package. I just want to make sure that the cloud image can install packages. Many ISPs do DNS hijacking, and I don't want to have to work around that. [Audience] Can you already declare what the output should be, so it just checks whether it matches? [Kushal] Yeah.
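The "glue script" idea above can be sketched very simply: walk the per-command results and summarize pass/fail. The result structure here (a list of dicts with `command`, `status`, and `output` keys) is invented for illustration; Tunir's real stateless output is plain console text, so an actual glue script would first parse that:

```python
def summarize(results):
    """Summarize per-command results from a Tunir-style run.

    results: list of dicts with "command", "status" (bool), "output" keys
    (an assumed shape, for illustration only).
    Returns (job_passed, human-readable report).
    """
    failures = [r for r in results if not r["status"]]
    job_passed = not failures
    lines = ["job status: %s" % job_passed]
    for r in results:
        lines.append("%s  %s" % ("PASS" if r["status"] else "FAIL",
                                 r["command"]))
    return job_passed, "\n".join(lines)
```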
That would require a little bit of work. So now, coming back to Fedora land. Our official QA application, which we use for all our automated QA, is Taskotron, written by Tim and others. And the idea is, as I said at the very beginning, what we do in Fedora land is just check the output for some particular values. Right now I have another GitHub repo called tunirtests, which basically contains a few Python unittest cases, nothing else. I'm just opening them; basically two files for now. As you can see, it doesn't have anything fancy; it's standard Python unittest. And if you look at the tests, these are the tests we actually run for the Fedora cloud image: testing the SELinux state, then testing the journal logging. I added something extra which is not in our official tests: actually testing whether I can install a package or not. And I skip it if the image is an Atomic image, because I cannot just dnf install something randomly on an Atomic image. And this will run just as a normal Python script. So the idea is, my plan when writing Tunir was to just use it on my laptop, or maybe on a remote computer. But when we have the latest version of Taskotron deployed, I want to reuse these same tests if possible; maybe it will require some changes. Tim can confirm whether we can just execute the same Python scripts in Taskotron and find out what the cloud image can do there. Tim, is it OK? Do you think it's possible? [Tim] I'll tell you the same thing I've told you over and over again: we're not done with the features you think are going to be there, and we'll make something happen. I'm not guaranteeing that the stuff you've written is going to work as-is. [Kushal] That's OK. [Tim] But it is on the roadmap, and we will make something work. [Kushal] Yeah, so we'll get something working. But the idea will be the same, correct?
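A test in the tunirtests style is just a standard Python unittest that shells out and asserts on the output. Below is a hedged sketch; the `system` helper and test names are mine, not necessarily tunirtests' code, and where a real SELinux test would run `getenforce` and assert on "Enforcing", this sketch substitutes a portable `echo` so it stays runnable on any machine:

```python
import subprocess
import unittest


def system(cmd):
    """Run a shell command, return (stdout, returncode)."""
    proc = subprocess.Popen(cmd, shell=True,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    out, _ = proc.communicate()
    return out.decode(), proc.returncode


class TestCloudImage(unittest.TestCase):
    def test_selinux_style_check(self):
        # On a real cloud image this would be:
        #   out, rc = system("getenforce")
        # followed by assertIn("Enforcing", out). We fake it with echo
        # so the sketch runs anywhere.
        out, rc = system("echo Enforcing")
        self.assertEqual(rc, 0)
        self.assertIn("Enforcing", out)
```

Run it like any unittest module, e.g. `python -m unittest <modulename>`; that is what makes the same file usable both locally under Tunir and, potentially, under Taskotron.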
So we'll execute some commands and we'll know the output, correct? [Tim] You're ahead of the Taskotron group. I know what we have on our roadmap, and we will make something work. Something will work eventually. [Kushal] That's perfect. The formal automated tests which Fedora runs will be on Taskotron, using whichever mechanism it provides. The whole idea behind Tunir was that I just wanted to do this for my own system. And because it can SSH into any other pre-installed box, I can actually use the same Tunir to test the Vagrant images, or even AWS or any other provider. It just requires that, instead of a password, you provide a key, the SSH key to log into the box, and it can run the tests for you. So in tunirtests I have a file called fedora.txt and the JSON. This is the actual thing I'm doing right now inside the test: I just download the tarball of the tests, or in this case clone them, untar it, and then run it. First, run the tests I just showed you. Then, inside the cloud image, I use crond as an example service: stop it, disable it, reboot the instance. As you can see, both of those return a non-zero return code, hence the @@. Then sleep for 30 seconds; it should have rebooted by then. Then check that after the reboot crond is still disabled. Then re-enable crond and reboot again, that's line number nine, and after another 30 seconds test whether crond is up and running or not. So this is, I think, what you asked: we are not checking the output directly inside that Tunir .txt file; instead, you can write your tests in whatever language you want, and Tunir will just execute them. So if you already have, say, a test suite for your application, you can just install or build the test suite here and execute it. That was the idea I had. So any kind of input on these things, or anything you may want a tool like this to do, would be useful.
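Pieced together from the walkthrough above, the commands file for that crond job would look roughly like this. This is a reconstruction from the talk, not the verbatim fedora.txt, and exactly which lines carry the @@ prefix is inferred (`systemctl is-active` exits non-zero for a stopped service, and `sudo reboot` exits non-zero as mentioned earlier):

```text
sudo systemctl stop crond
sudo systemctl disable crond
@@ sudo reboot
SLEEP 30
@@ sudo systemctl is-active crond
sudo systemctl enable crond
@@ sudo reboot
SLEEP 30
sudo systemctl is-active crond
```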
[Audience] I just have a point from my experience testing the cloud images. We're talking about roshi, the creator behind testcloud. One of the things I learned is that people used to take the Fedora cloud image and boot it in something like virt-manager or GNOME Boxes, and there's no user, no password set, so you can't log in when it boots. And one of the things testcloud gives us is that it provides the cloud metadata interface. [Kushal] Yeah, we create a seed image in testcloud and it just provides the metadata. [Audience] And the point is that testcloud actually sets a user or root password. Because I don't want to be the one who says, "I want to try out this cloud image," boots it in virt-manager, and then, "oh, I can't log in." [Kushal] Yeah. As I said, I never even wrote that code. I found testcloud was amazing, and it was already doing that, so I'm using it. Tunir has the same license as testcloud because I'm using the same code base, which was written a long time back, one of the first versions, one single small file. And testcloud has improved a huge amount since then. [Audience] Because the version of testcloud you bundle with Tunir is not what we're trying to package. [Kushal] No, no, correct. Almost everything has changed since then. [Audience] You were saying earlier that testcloud can't do multiple instances? [Kushal] That was long ago, when I had just started working on Tunir. Testcloud now can do multiple instances; you can list and stop instances. That's why I'm saying testcloud can do a lot of things; it's perfect in that way. When I say perfect, obviously it doesn't mean perfect-perfect, but it can do a lot of things. So yeah, it's using the testcloud code base.
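The seed-image trick described here is standard cloud-init: testcloud builds a small seed image containing meta-data and user-data files, and the user-data is what sets the password. A minimal sketch of such a user-data file follows; the values are illustrative, not necessarily testcloud's exact defaults:

```yaml
#cloud-config
# Set a known password so the image is usable even in virt-manager
# or GNOME Boxes, where no cloud metadata service exists.
password: passw0rd
chpasswd:
  expire: false
ssh_pwauth: true
```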
So it actually works the same way: it creates a seed image and passes the username and password there. [Audience] So I guess the question is, what would be the ideal test suite to run in Tunir? Do you want to have a large number of regression tests? [Kushal] OK, so I think that question is more about what we want to test in the image than anything else. I think roshi is working on bringing the t_functional tests from the CentOS team into Fedora land. That's something I would love to see running for all our tests, not only the cloud image; I want it to run on the server side too, or even, if possible, on the workstation, because there will be developers who want to do similar kinds of things. Those are the tests I would really love to see. Other than that, for the Fedora Atomic test day we have a test where we do the Docker storage setup, and we also run an actual Docker image and check the output: whether it can actually start the container, execute a command, and exit, that kind of thing. So yes, I want something more along those lines, and maybe adding more non-gating tests, meaning, for now, things which can fail but we are still OK with it. But for an Atomic image, if Docker fails, that is not non-gating, correct? It would not make any sense for Docker to fail to work on an Atomic image. [Audience] I have a bigger question. It sounds like you're only testing the cloud images on physical hardware with QEMU and KVM. Can you point it to Amazon or Rackspace? Can Tunir work with those cloud providers? [Kushal] Yes. [Audience] OK, good. [Kushal] What Tunir does is just use Paramiko. It doesn't care whether it's Amazon or a local box or anything; it just SSHes into the box. While writing Tunir, my idea was to have a single way of executing tests, and only if the type is "vm" does it boot a cloud image.
Otherwise, it just assumes that whatever IP is provided is reachable, and that the given username and SSH key (or password) work. That's how I actually tested the Vagrant images, the new images we generated for Fedora 22. I had a small script which was just downloading the Vagrant image and doing the vagrant up part, and after that I just used Tunir manually to check that image at that IP. That's it. [Audience] I have a comment. Recently, people were criticizing the Fedora Workstation team for releasing the Fedora Workstation release ISO and never releasing an updated version with the updates applied. One critic said, "I can't install Fedora 21 because it doesn't have the driver; the updates are available, but the ISO doesn't have the driver." Whereas cloud images, like the official Amazon images for Ubuntu or whatever, are updated every two months or so. [Kushal] So I can answer that from the Cloud Working Group side. We had a proposal, and we were working on releasing updated cloud images. If you look, we have a ticket open against release engineering; I opened that ticket a few days back. We pointed to a particular release tree and said that we tested this tree, and the tree and the image work fine according to our official tests. Right now we are in the process of doing that. If it comes out, and I hope it comes out soon after Flock, that would be the first updated release. The idea is that every month, by the end of the month, almost in the last week, we want to produce the latest updated image. We have the images; we just need to formalize the full procedure, and QA should tell us, "OK, we are OK with this image being released." We have to make sure that we test it enough. [Audience] Right, that's not really the message, as I understand it.
My understanding is it works fine, but it's up to you guys to be testing it with other people. [Kushal] Yeah. [Audience] The problem with the ISOs, anyway; I think the ISOs are much more difficult, I know that. It's just the amount of testing that goes into it when we say it's GA and it's released. To redo all that in the middle of working on the next release's ISOs is a giant time sink, and I don't think anybody is willing to do it. [Kushal] Right. But one thing we are doing: for the cloud images, and also for Fedora Atomic, we're working on two-week release cycles. So every two weeks we will have an updated Atomic image. At this point, there are a lot of efforts like this to make things as up-to-date as possible. So, what I was talking about, the two-week Atomic thing: we are coming up with something for that automated testing. Whenever there is a new cloud image built on Koji, right now we are using Tunir; in time, a new version of Taskotron will be deployed, and then we'll figure out how to do it on Taskotron, but we will be testing those images automatically. And I have a working system set up. I just got new hardware while I was coming to Flock; Smooge arranged it for us. So we'll be using that hardware, and we'll have that workflow and pipeline up, and anyone can go and view which image worked, which failed, and why it failed. And we don't want to do it only for Atomic, because it's basically almost the same kind of test; we'll be running it for all cloud images built on Koji. So it will work for Atomic, for the base image, and for the Vagrant images too. And Ralph opened another ticket there: he wants to make sure there is some way to mark whether an image managed to get into Amazon as an AMI. He just filed that ticket; we will look into it later on.
But yeah, I can tell you that it's on our plate right now, and we'll make sure it happens properly. Because now we are getting those nightly builds done in a pretty awesome way; it's not that difficult. And for the two-week Atomic, he is working on those things. Right, I think I already covered this. As for the future: it's not super big in any way. I don't want to keep adding random things to this tool; I wanted to do one thing and do that one thing very well. That one thing was: boot up a system if required, or SSH into somewhere, and execute some commands. At least one thing I'm adding pretty soon: as I said, there should be some non-gating tests, where we really don't care whether the return code of the command is zero or non-zero, but we still want to see the output. So I'm planning to add one more super-stupid syntax, maybe a pound and at sign (#@ or @#) in front of a command, meaning we don't care what the return code is. That is the one thing in my mind to add. For the rest, it's working perfectly fine for my use case, and for the people I know who are using it for smaller things, like smaller Python projects. But if there's any feature simple enough to belong in a tool like this that you want, tell me. This is not a big tool; the whole code base is at most maybe 200 lines of Python code. It's very, very small. Any such feedback would be lovely to have. [Audience] I think what you're probably looking for is a regular update pipeline, so people can go and do things on Amazon, register the images. [Kushal] Yeah, we'll have that by the time Fedora 24 comes out; we'll have them properly coming out every month. You can actually look at that ticket, which already has the first image details.
I think Colin Walters added a comment that he wants another new tree, so that we can have a, I forget the name of the package, but he found a bug in some package. [Audience] Just a question: for issues with Tunir, or with the cloud image process, are you mainly using a task tracker? How do you track tasks for the infrastructure you're using? [Kushal] For Tunir I'm not using anything but GitHub, so if you want to report anything, just file an issue there. But for, say, the two-week Atomic, we have our own kanban. [Audience] Taiga? [Kushal] Correct. For the two-week Atomic, and also the layered image build service we're working on, we have a kanban board in a web app called Taiga; I don't know how to pronounce it. [Audience] We want to advertise it to the point that, yes, it's publicly accessible, but we don't want people to bookmark it, because it might disappear in six months and be replaced with something else. But if you're interested, there's a link on our team page where we work on this stuff. And we're also doing public demos; the demos are over Hangouts. We do sprint demos every two or three weeks or so. We actually have someone from Vienna who's herding the cats for us, which is kind of amazing; before, we always just ran around trying to get things done. [Kushal] You can join those Hangouts if you have any comments, or if you want to see how we're doing things. [Audience] I'm at work from nine to five and I can't use Hangouts at work. [Kushal] OK; the YouTube recordings are public, so you can watch them later. [Audience] Yeah, we do Hangouts on Air and record them to YouTube.
Yeah, they're published after the meeting. The first one we did is public, and many people actually watched it later and talked about it. So yeah, I think I don't have anything else. That's the documentation: tunir.rtfd.org, that's Read the Docs. And one thing I never talked about: Tunir can actually do things using Docker containers too. But I never talk about it, because I'm really not sure that what I'm doing there is OK. I remember somebody asked me, "oh, this is nice, can you do Docker?" I said, "OK, give me two hours," and within another three hours containers were working the same way. But because I spent only three or four hours on it, I'm not really sure about what I'm doing there. [Audience] Are you using the docker-py API? [Kushal] No. I looked into the docker-py API, but at the time I just wanted to finish it off in two hours. I don't want to look at it further unless somebody comes and tells me there are real issues. So: VMs, remote boxes, and then containers; it can do similar kinds of things with all of them. But I believe the Docker support might be broken; I never tested it afterwards. Tunir's Docker handling may be broken, I should rephrase. [Audience] I'm not on top of this stuff, but if I wanted to start writing new tests, where would I start? [Kushal] For the cloud image, or? [Audience] Atomic specifically. [Kushal] So feel free to add them now, because then we can just add the tests, or make sure we run them in the two-week Atomic pipeline. My plan is, hopefully after I go back; I can't do almost any work next week because I'll be traveling a lot, and after going back to India I will still take three more days to recover from the jet lag. After that, within two sprints, I want to have at least a staging instance up for the automated testing of the cloud and Atomic images.
So if you add tests in the formal Fedora way, great; or if you don't want to do all of that, just write a blog post about what should be tested, and we can take it from there and move it into the formal Fedora tests. [Audience] We should actually open up the Atomic tests and document them somewhere so that people can send them forward to us. So basically, since you can run Tunir on your laptop, you can run Tunir to test the Atomic cloud image. I think... Pagure? Pagure, yeah. What is that? It sounds very PG-15. [Kushal] It's an open-source way of doing what GitHub does; an open-source alternative to GitHub. It's at pagure.io; Pingou wrote it, and it's very good. [Audience] Some people have a lot of adverse reactions to the concept of using GitHub, because it's closed. Yes, but there are also a lot of Red Hat folks on Macs, and we don't talk about that, because there are people who are very passionate about it. [Kushal] I'm one of those guys who uses GitHub a lot, but now, with the newer projects coming up, I'm slowly moving my old projects from GitHub into Pagure, because it just works for me. [Audience] There are varying degrees of commitment to the open-source movement in all populations, and the population of Red Hat employees is not excluded from that. I'm just saying, I think the thing with Pagure is that it's pretty new. Yeah, it's only been in existence for the last few months. And I've heard Pierre say that people have come to him with both views, but he's not sure it's ready for all this open-source challenge. That might be true. I mean, are there other projects that call themselves GitHub alternatives? I don't want to get into all that right now. [Kushal] So this is Pagure, just in case you haven't had a look at it. There are also the docs. Yeah, here we go, here it is. The web UI, apparently, is going to get a refresh.
Yeah, click on... go to Pungi for no reason at all. Where is Pungi? It's on the web as well; I can see Koji. Sure. Yeah, oh man. So actually, Pagure is self-hosted; it hosts its own code. So here's the overview: you can click on forks, issues, tags, releases. It's very similar. Scroll down; that's the README.md. I just heard this morning that there is a new feature: even if your Git repository is on some other application, some other place, you can still send PRs from there, which is fantastic. [Audience] An e-mail interface? [Kushal] Well, it does some kind of syncing through the API. [Audience] So when something happens in one place, it syncs between the two based on API calls. I don't fully understand it and won't pretend to, but he was explaining it to me last night; it's pretty cool. Doesn't he have a talk? [Kushal] He's giving a lightning talk today. Yeah, a lightning talk. But if you want to play around with it, just install it; it's available in Fedora, 22 onwards. Very simple; I think I already pushed the EPEL 7 build, if I remember correctly. Oh, what? This is not Tunir... this is Tunir. I thought I clicked it. Gotcha. So yeah, feel free to use Tunir for any application you want to test, maybe on your laptop. Because it boots up a fresh system every time, you can test from scratch whether your project actually builds, and then run the test suite, that kind of thing. [Audience] Does it cache the image when it does the download? I noticed that once it's been downloaded, the image could sit there so it doesn't download again. Is it caching it somewhere? [Kushal] Oh, OK; I missed that part for some reason. It goes to a temporary directory, and after the work is done, the directory is cleaned out. [Audience] That's probably solved in the newer version, I think.
[Kushal] So, the newer version of testcloud is much more efficient. It actually uses proper paths, like /var/lib/testcloud/instances; it has proper handling for instances. [Audience] You know, the version bundled in Tunir uses qemu-kvm directly; the mainline version of testcloud handles that differently now. [Kushal] Oh, yeah. No, I figured. [Audience] What does the name mean? Where does the name come from? [Kushal] Let's end the talk, and the recording, and then I can talk about that, maybe. Anyway, thank you for coming.