Hello, everybody. Yes, it's me, Mike. Yeah, I'm good, I'm good. Excuse me, guys, please stand by. Hello. I think so, that's why you sent this to me. It's okay. Oh, okay, that's fine, I'll just come to the room. Hello, guys. Yeah, Ryan, nice job. Oh, thank you. Awesome job. What do you want me to do? Do you want to see what you're doing? Watch out for your feet. Oh, cool. Thank you. Wow, it went from an empty room to pretty much a full one. I'm here for automation and OpenShift, I guess. I can't see. I'm here for automation and OpenShift. What's your shirt say? Sorry? What's your shirt say? [inaudible crosstalk about the shirt and a conference] I just want to add a word: at least switch to Alpine Linux. Is this a tutorial or a talk? A tutorial. Awesome. I'm here to learn practical things, probably most of the stuff on here. And since the network is so awesome here... Yeah, I was just trying to get on the network, and no. I saw there's also a talk on BadUSB here as well, so that's... It was my computer, so you can choose to... I'm just putting bad stuff on yours, that's all. Thanks. It's not a matter of fast. We're using the gateway, and the gateway thing's not doing its thing. And you were going to tell me something about automated testing in Fedora. As soon as... It worked earlier. It worked just an hour ago. Maybe there's some automated test for that. You know, maybe there should be, but I think that's in the backlog, I guess. Can I use the HDMI? Nope, I forgot all my adapters at home. Do you need one?
Do you have Mini DisplayPort to HDMI? Yes. Because I tried that before, too; for whatever reason, it doesn't work. But that's what I did my presentation on earlier today. It worked? Yeah. Well, you saw the slides, didn't you? Yeah. Because I tried that before with the Apple adapter and it was good. Maybe there are different versions of these adapters around. People actually wanted to hear what I have to say. I'm here for you. Sweet. We've got at least one person; I feel better now. Plus one in the back. I suppose it is probably about time to start. Alright, so... I want to share the slides, because there's a bunch of links embedded in them that you're probably going to want to click on. The one USB stick I have has all of the files on it. There's a bunch of stuff to download and repos to clone. What I'm planning to do... well, the original plan was to go through and teach everyone how to write tasks for Taskotron in particular. And before I start, I do want to say that we are just getting into having people that aren't the core developers writing tasks. So when you hit pain points, please tell us, because these are things that we want to fix, things that we need to fix. I mean, even just preparing for this talk, I hit about ten things that were just head-scratchers. It's like, well, why do I have to do it this way? So if you find things, please tell us. That's actually strange. I swear this worked yesterday. I blame you, Max. That's fair. Yay, it ran. I never remember until I'm sitting up here: because white text on a black background is so easy to read from a projector. It's okay, you don't need to read it; this bottom part doesn't matter. I have all the stuff ready. All right. So hopefully these slides are the same as the ones I put online. Like I said, there's a bunch of links in here, and I suppose I should probably start with telling people the stuff to download.
Because being at a conference and relying on the network is always such a great idea. So, where did the USB stick go? If anyone wants to actually write the task, there's a bunch of files on there that I would suggest getting. It's kind of slow because it's three gigabytes. The files on there are a virtual machine image, Git repositories with tasks in them, and the contents of the COPR repository, in case for some reason your machine can't get to the network. Most of what you need is this command here. Oh, where did that slide go? For right now, it's only going to work on Fedora 23 and newer, or at least the exact way I'm going to be telling you to do it. Does someone have something like Fedora 22, Fedora 21? Can you go back? Fedora 22? Okay. Quick, dnf upgrade. No, well, I'm not going to stop you from doing that, but I would heavily recommend not doing that. I'm just trying to think of the best way to do it. Okay, the reason it's 23-plus is that the way we spawn virtual machines is written in a way that requires newer libvirt bindings than what's in Fedora 22. I think it's possible to fix it; it's just one of those "it works for us, it works on my dev machine, what's wrong with yours?" things. So you can either create the virtual machine yourself, and I will show you somewhat, and point libtaskotron at that pre-created virtual machine. That will work. It's just the spawning that isn't going to work on Fedora 22. Oh, sorry, you were raising your hand. So just for my reference, who all is trying to follow along to create the task? What about the others? Checking your email. Checking your email. I figured you guys were here to throw vegetables at me. Can I just follow along? The slides are complete enough for me to do this today; they're not really complete enough for someone to follow along at home. That will be changing.
The link will stay the same, and I will update it with the remainder of the instructions. If all else fails, the full example is in a Git repository — a Bitbucket repository — and the link is also in the presentation. I think it says something about cheating. Yep, the link here. It's already in Git if you want to cheat, or you can write it from scratch; it's up to whoever. But if that USB stick is making its way around, I'll get started. What's making that cool little flippy thing in your slides? Why can't I think of the name? It's JavaScript... it's been a long time... Reveal.js. That's what it is. I'm not sure if you know about it. Do you know what it is? Yeah. Okay, so I know there's a group of slides going around which uses Reveal.js. I've never seen you run it before. Are you sure you work here? You're a community guy, man. I'm one of the sneaky people. Alright, so, getting into this. My name is Tim. I'm going to be talking about automated jobs in Fedora. This, more than most other things, I plan to be very informal. If you have a question, please get my attention, even if that means doing something you might consider rude. I would much rather answer the question at the time than find out about it later, because odds are, if you have a question, someone else is going to have the same question. So, that being said, some of the stuff I want to go over today: just a little bit of introduction; talk about some of the systems that are available in Fedora, which I'm probably going to breeze through because I think most of the people, if not everyone, either already knows it or was in my presentation earlier; and then basically start getting into more of the guts of how Taskotron works, specifically the runner within Taskotron, getting into writing a task to test the Apache httpd package. So, at the risk of sounding like a pitch from Zombo.com: you can do anything with Taskotron. Anything at all. The only limit is yourself. And it is kind of...
Taskotron itself is designed to hand off to other things. I am not naive enough, and I am not full enough of myself, to say that I know how to test everything better than everyone else does. So the idea is to have Taskotron coordinate: it will schedule the jobs, it will provision the virtual machine, and then hand off to someone else's tool. You know, if Matt wanted to test Docker, or if Kamil wanted to test framerates on X — they may know more about those things than I do, as someone who is administrating or writing Taskotron. So when I say it can do anything at all, that is what I am getting at: the idea is to hand off to something else and get the testing tools into the hands of the people who know how to test the things that need to be tested, rather than assuming we know better than they do. Oh, I didn't take that slide out. So, I have a couple of definitions, because the way that I define these things isn't common. I do not like the term "automated test". I think it is overused and I generally do not use it. The reason is that I am a big believer in the value of human testers, and equating the things that a machine can do with what a human can do sort of denigrates what that very capable and smart human can do. So I tend to use the words "task" and "check". A check is a kind of test, but the idea of a check is what most people think of as an automated test: it is something where you run it and it gives you a binary pass or fail. So all checks are tests, but not all tests are checks. In most cases when we are doing test automation, what we are interested in is checks, because we want to be able to feed that binary pass/fail answer through a system and use those results for something. And "task" is purposely vague. One of the reasons this is called Taskotron is that it's not limited to what we would usually consider automated tests.
One of the big things behind it was that, especially in Fedora, everyone has more to do than they have time for. And instead of having, you know, ten different automation systems set up by ten different people — one to do the kernel, one to test the graphical stuff, and then another one to do all this other kind of stuff — that's a lot of overhead. So the idea was that the system will work on tasks. Whether that be static analysis of code, or running things that we would traditionally think of as a test, any of those things are tasks and pretty much can be automated with this system. So, I'm just going to breeze through this, because I'm pretty sure — is there anyone who doesn't either know these three systems or was not in my presentation earlier? Okay, I will go through it quickly. Can you go quickly through how Beaker is available with Fedora? Because I didn't know we had that already. Yeah, that's a messaging issue, and most people say that. Is it ready now? It is mostly ready. Because I know you had given me access; I could get in, but I think I was trying to request a system and I don't think I ever got one. I did not know about that. The staging system should be working. The production system looks like it's working, but it really doesn't. Maybe that's what I was hitting there. So it's a matter of me finding the time to redeploy the system; production is pretty much what it's lacking. But staging, when the other systems it depends on aren't broken, works fine. Right now it doesn't work because for some reason the staging auth systems are down. I think maybe that's the one where you told me to ignore an error when you first log in, or something like that. But if we, as part of this at the very end, just went through getting a system, then you'd be clear. I've been having network problems all day, so we can try, but I make no guarantees that it will actually work.
So, Beaker is a system originally developed by Red Hat, and it's still maintained by Red Hat. It is used extensively to test RHEL. It is a combination of a test coordination/test automation system and lab management. The idea is that you have a lab of test systems, and Beaker can manage them: okay, this one's getting this test, now this one's getting this test, and run things through to try to make everything as efficient as possible. It is, in my mind, very good for tests that require bare metal and things that generally run a bit longer. Most Beaker workflows work like this: you are given the machine, it does an install with Anaconda, you get the machine, you can prepare it, then you run the tests. You know, if you have something that takes 30 seconds to a minute and you are installing an OS via Anaconda every time — I'm sure there might be a use case or two where that makes sense, but it seems a bit silly. That being said, I've done hardware automation before, and I have no intention of ever doing it again if I can get away with it. So they're doing it, and as long as I can piggyback off their work — he works on Beaker — I am very happy to do that. I will get more into Taskotron later. openQA was originally from the openSUSE people. In a nutshell, it takes screenshots, and it has pre-recorded screenshots. So it goes through running a test, takes a screenshot of the graphical output, and you tell it: okay, there's this little button I'm looking for, and here's a picture of it; go find it, and if you can find it, click on it. And there's a tree of steps that make up the test. It's something that's very good for graphical testing, and very good for environments where you can't have a testing interface or testing backend — where something like Dogtail can't look at the accessibility information, where you can't put in some sort of test interface to facilitate your testing.
Any questions so far? If you write a test for openQA, does it have to be in a certain language? Does it have to be created through the openQA web interface? Not necessarily. Ask him — he knows more than I do about openQA. So, I'll try to describe it basically. What do you need to do? You need to provide the screenshots. It doesn't necessarily need to be a whole picture, because it's basically fragments; but we usually do it in a way where we take a screenshot of a machine and then say, okay, this part here is a button, this part here is something like that. And then you can easily reuse these fragments between the tests. Yeah, exactly. So, it's mostly programmed... Okay. You can do it in the web interface, which is very user friendly, you can do it by hand, or you can use some other tools that I don't use. And then you just write a test in Perl, but you are not using Perl too much, because you are mostly just using a library — method calls. Right. It's as simple as anything else; you just call some methods, so you don't necessarily need to be able to deal with Perl as such. Okay. So you don't have to write your test in Perl; you just have to hook up your test. That was a misconception I had when I looked at it originally: oh, I will have to write all my tests in Perl. You basically just write the script that does the steps in Perl, but you can provide the data, like the images with the fragments, separately. Okay. Thank you. This is just an example of the front-end. This is a failed run from last night. I don't know enough about openQA to tell you exactly what went wrong, but that's what it looks like. If you have more questions, we can answer them. Yeah. openQA questions are best sent to him or to Adam Williamson; those two are the best people to ask openQA questions.
Any other questions before we get into more detail? The USB stick — where is it? Here. Oh, absolutely. All right. So, does anyone still need this URL? I suspect that's going to be our own stuff. So, within Taskotron we have three different ways to execute a task, and we call them local, ssh, and libvirt. Local execution is pretty much what it sounds like: it reads in the task and it will run things on the system from which it was executed. So if I was in local mode and I ran it from my laptop, it would run those commands on my laptop. It's good for quick local execution, and for development in certain cases — things where you need it to be quick, it's not destructive, it's not going to cause any problems. That's usually when it's useful. Then we start getting into the more complicated stuff, which would be ssh. This basically delegates the execution to a remote machine. So if I were to use ssh mode on my laptop and I had a virtual machine: I start the task, it says, oh look, I have a virtual machine right here, I can log in. And then instead of running things locally on my laptop, it runs them on whatever that remote machine is. And when it's done, it knows where the output is supposed to be; it gets the output, extracts it, and puts it back on the machine from which you started. That way you can run things on a remote machine, so you could, you know, delete the root filesystem, you can install packages and not have to worry about side effects. You can run on different versions of Fedora — you could run it on other things or other versions than what your starting system is. It just gives you quite a bit more flexibility. In our mind, it's generally used for development, which will make more sense in a second. But as far as how this actually works — does this make sense? You start from the machine.
It sends commands to wherever your target is, and when all that back and forth is done — it runs all the commands, does the task — then when it's finished, we know where the output is supposed to be, we extract it and bring it back to where we started from. Does this process make sense? So then we get into what we call libvirt mode, and this is what happens in production. Basically, we start the execution — say I'm on my laptop and I'm going to start this in libvirt mode. I point it at a task, and it looks at it and says, okay, I need a virtual machine for this. So it finds an image, it spawns a virtual machine using that image, gets the IP of it, and then basically does the same thing as the ssh execution. Instead of the VM being there to start with, we create the virtual machine, but then the process is the same: it sends the commands to the virtual machine, does all the tasks, gets the stuff out of it when it's done, and then, when all of the task execution is done and all of the output is out of that virtual machine, it kills that virtual machine, leaving the system you started on in basically the same state as when you began. Are these basically plugins for different modes of execution? How easy is it to add another one? I'm assuming you're talking about the cloud use case? Yeah, or just a test-in-a-container use case, for something faster than a VM? Right now, containers aren't really possible. The reason we went in this direction is that there are cases where a container won't work, but there are no cases in which a virtual machine won't work. There are a lot of times when you want that isolation at this level. So we may end up doing containers in the future, but that's not on the immediate roadmap. As for selecting another target virtual machine, that is not yet supported; it will be coming relatively soon. How do you handle multi-machine tests?
That falls under "we don't support it yet". It's on the roadmap, but we're trying to take baby steps. Now we've got the ability to isolate the task execution, which was a big blocker for us; multi-host and selecting images are going to come later. Where do the images come from for the virtual machines? They can come from just about anywhere, as long as it's a qcow, as long as it's... It's my problem. I'm saying it can be flexible. I do publish the virtual machine images — we have virtual machine images and we publish them when we rebuild them. That's what was on that USB stick: one of those virtual machine images. It's basically a minimal Fedora install, it has the Taskotron packages on it, and that's about it. We pre-populate the DNF cache. So you can do it either way. You can use any of the Fedora cloud images — anything that Testcloud supports, which is what we use to boot the virtual machines. And for now, that's Red Hat land, so one of the Red Hat derivatives. So there are client packages that need to be inside of... I know you don't have to have them, but you mentioned a rebuilt VM. Are there client Taskotron packages, like something that can talk back to the host, or what? Kind of. They don't have to be there. As part of the spawning process, the runner will check and see if Taskotron is installed. It's relatively simple. Say this is a separate computer — a representation of that remote machine — and this is where I'm starting from. So I start here, this spawns this computer, and then it checks to see if Taskotron is installed. And then it SSHes into here, runs Taskotron in local mode, does all of that stuff, and then extracts the output. So it is a client in a manner of speaking, but it is basically just the same thing as local mode, driven over SSH. I was just wondering what the benefit is — whether it was necessary or not.
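[Editor's note: for readers following along, the three execution modes described above roughly correspond to different invocations of libtaskotron's `runtask` command. The flags, item name, and addresses below are an illustrative sketch, not an authoritative reference; they may differ between libtaskotron versions, so check `runtask --help` on your install.]

```shell
# local mode: run the task on the machine you are sitting at
runtask -i httpd-2.4.18-1.fc23 -t koji_build runtask.yml

# ssh mode: delegate execution to an existing remote machine
runtask --ssh root@192.168.122.50 -i httpd-2.4.18-1.fc23 -t koji_build runtask.yml

# libvirt mode: spawn a throwaway VM, run there, pull results back, destroy it
runtask --libvirt -i httpd-2.4.18-1.fc23 -t koji_build runtask.yml
```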
What you're saying is it is necessary, but you don't have to have it in there when you start, because it will install it for you. But could you not do something like what Ansible does, where it just copies over the things that it's going to execute? That way you could just use the cloud image and not have to rebuild it. You don't have to rebuild it. We do it because it's faster — we don't want to be updating the DNF cache every time we spawn a virtual machine. So we want the DNF cache, we want to have libtaskotron in there already. We probably could do it that way. It would just be kind of cool, because then literally you could bring whatever image you build and run it through. I think there's this conception that Taskotron is not a client, it's just a runner — just a binary that you run and give the test to. So, in all honesty, we could do that. It's not something we've designed for; it's not something we do. At some point I want to support Ansible playbooks as part of this, but now we're getting further down the roadmap. But to answer your question: we could have done it that way; we did not. What is Testcloud? Testcloud — I suppose I probably shouldn't have assumed everyone knew what that was. Testcloud started life as a script written by one of our co-workers because he wanted to test cloud images. Because the raw and the qcow2 images that get produced for Fedora — you can't just throw those at Virtual Machine Manager and boot them. Yes, it's in Fedora 23. Is that a piece of infrastructure? No, it is designed to avoid that. It is for when you want a cloud but have no interest in either depending on or installing OpenStack. You feed it a cloud image, and it prepares the disks in such a way that cloud-init will work, injects a password, increases the size of the disk if you want, and then boots it as a local virtual machine.
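[Editor's note: a minimal sketch of booting a cloud image locally with Testcloud, as described above. The subcommand syntax, instance name, and image URL here are assumptions based on typical Testcloud usage and may vary by version; `testcloud --help` is the authority.]

```shell
# create and boot a local VM from a Fedora cloud image (URL is illustrative)
testcloud instance create demo -u https://example.org/Fedora-Cloud-Base-23.qcow2

# list running instances (shows the IP you can SSH into), then clean up
testcloud instance list
testcloud instance destroy demo
```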
It's sort of the answer to a question we get a lot, because the cloud image does not have a password; you cannot log into it unless you do something special. And one of those special things is to seed metadata — any real cloud environment has that very easily accessible. Yeah, the cloud pre-does it, but your local machine doesn't. So we were getting a lot of questions about that. So it's really just a convenience layer — it started off as a convenience layer for booting virtual machines, but we use it because it is simple and doesn't require much. It still lets us have this paradigm where you can install Taskotron, you can install Testcloud, and you can run it basically on your local machine without the need for production infrastructure. It's trying to keep with that: if we were depending on OpenStack, we would have to tell people to install OpenStack, and I think that would be a bit much and somewhat naive. That's really why we went with Testcloud instead of OpenStack, as one of the better-known things. So it's an implementation detail. Yeah. Can you do parts of a task locally — parts of an execution locally? Right now it is all or nothing. We don't support the concept of some things locally, some things remotely. It would be cool, for example, if I wanted to check out the latest commit for, let's say, some ARM application, then cross-build it, and then install that and test it. I don't want to build it on something slow. So, to test something that would run on a machine which doesn't have enough resources to build it fast, I would rather build it locally. So you could cross-compile a binary and copy it over. And I would want the cross-compile to be a task as well, because I would like to see whether it passes or fails. Just saying that would be cool.
If you have suggestions like that, let us know, because like I said before, we want to know what the pain points are. There are parts of what we have set up that we have deliberately left as simple as possible because we want to see what people do with them. Going ahead — can you chain tasks? Do you have dependent tasks? Yes and no. You cannot do it from the task itself, but every time you put a result into our results system, that emits a fedmsg, and everything in the production Taskotron system is triggered off of fedmsg. So you could do it with that infrastructure: you could have your first task be to build the thing and the second task be to use the thing. There is one other thing — I hope I am not giving you bad advice — but Beaker, for example, uses a test harness, and at the moment that test harness is in Python. But there is another one called Restraint — see beaker-project.org if you want to check it out — and with Restraint you can also run the tests locally. Maybe something to check out; not sure if that will help you with that. Thank you. Any other questions? Is it making sense so far — the difference between local execution, remote, and then this libvirt mode that spawns a virtual machine for the purpose of running a task? Cool. So, I am repeating a few bits here, but one of the things I want to emphasize — because as we are getting to the point where we can accept tasks from other people, I really want to make sure I am clear, so this won't be the last time I say it: if you want to write something for Taskotron, you do not need to install the entire system. At a minimum, the only things you need are libtaskotron-core and probably libtaskotron-fedora, both of which are a couple hundred kilobytes in size and have one or two dependencies.
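[Editor's note: based on the package names mentioned above, a minimal task-development setup would look something like the following. The package names are as spoken in the talk; their availability in Fedora repositories at any given time is not guaranteed.]

```shell
# minimal pieces needed to write and run tasks locally
dnf install libtaskotron-core libtaskotron-fedora

# optional: add disposable-VM support (this is what pulls in libvirt)
dnf install libtaskotron-disposable
```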
When you start getting into libtaskotron-disposable, that is going to pull in libvirt, which, depending on what is on your system, may pull in other things — but at most you are going to be pulling in libvirt, which is probably, like on my laptop, already there. Once you have that installed, you can do pretty much anything that the production system can, with the obvious exceptions: you can't access the same machines, it doesn't have credentials, that kind of stuff. That is one of the bigger design paradigms we were going for — that it's easy for people to write tasks for it. If you want to develop something for it, install one or two RPMs, and then you can go off and do pretty much what the production environment does. I will probably repeat myself again. The other thing that we went for was keeping things as loosely coupled as possible. One of the precursors to Taskotron was very tightly coupled. We had to change everything at the same time: you couldn't mess with this part, because as soon as you messed with this part, you would have to do something over here to mirror that change, and it got difficult for us to maintain this really tightly coupled system. We have libtaskotron, which is complex enough on its own, but that is only the runner: it reads how you've written the task, goes through it, spawns the virtual machine, runs everything, and then pulls back the data. It doesn't serve results, it doesn't have an auth system, it doesn't schedule jobs — it doesn't do any of that. The Taskotron system as a whole is made up of many parts, of which libtaskotron is one. We use Buildbot for delegation. We have a system called ResultsDB, which is, on purpose, very simple: it is a database with a RESTful interface that you can put results into and query results from, and it all comes out in JSON. And we have what we call ExecDB, which is, for lack of a better way to put it, a reference point.
The purpose of ExecDB is so that when we start all of this, we can say: we got the signal that there was a new build of httpd, and when that happened, we scheduled this task, this task, and this task, and these are the URLs to get to them. It's so we can swap out any single one of these pieces without having to worry about some of the problems you get with really tightly coupled systems. And the last thing is, we don't want to restrict people to a single language or framework. I can sit here and list off several frameworks, several harnesses, several things that people use, not even getting into different languages. We do use Python — libtaskotron was written in Python — so you will get the most convenience methods if you also use Python and our utilities. But as long as what you're running spits out something we can understand — be that xUnit XML, TAP, or our results YAML format — then it can be understood and reported to the right places. It can show up in Bodhi, it can show up on dashboards, it can do all of those things. The emphasis there is: as long as that results interface is there, you can use libtaskotron for whatever you have. Any questions at this point? So, getting into writing tasks. As I mentioned before, as far as spawning virtual machines goes, Fedora 23-plus is needed; libtaskotron itself should work with anything Fedora 21-plus — that's what we've tested it with — but it is also Python 2 for now. Just looking at some example jobs: I'm going to skip this one for now, because basically that's what I'm going to walk people through. Here's one of the other ones that I wrote recently — hopefully I have the network to actually load it.
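[Editor's note: as an illustration of the results interface mentioned above, a check's output in the results YAML format looks roughly like the fragment below. The field names and values are a sketch of the convention, not an authoritative schema; the NVR and check name are made up for illustration.]

```yaml
results:
  - item: httpd-2.4.18-1.fc23        # what was tested (illustrative NVR)
    type: koji_build                 # what kind of item it is
    checkname: httpd_simple_check    # which check produced this result
    outcome: PASSED                  # the binary-ish answer downstream systems consume
    note: all simple tests passed
```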
So, for Docker, there's their upstream test suite that is run in Autotest — it's something they call Docker Autotest, because that makes sense — and what this task will do is spawn that virtual machine, install Docker, install all the dependencies, run through the entire upstream test suite, and then translate that into a pass/fail that we can put into our systems. Which interface is this? This is Bitbucket — which we will be migrating off of soon. It was just one of those things; we didn't have a good option at the time. Bitbucket is just as good and as bad as GitHub, but yeah, this is Bitbucket. Where are you migrating to? Either Pagure or our Phabricator instance, which wasn't originally set up for Git hosting. But yeah, this is all public. Everything is, for now, under the Fedora QA project on Bitbucket, so you can see all of our current production tasks. There's libtaskotron. This is for building images. This is the check that we're going to work through writing. And here's the Docker Autotest task that I was talking about. Okay, so before we get into the details, I just want to talk more about the idea behind what I'm going to try to walk you through. I chose httpd somewhat at random, because we had a Fedora test case for it and it's a package that most people are at least familiar with, even if they haven't used it before. And I wanted to demonstrate that we can take something like this that was written before we had much of an automation system, and without too much effort, have a wrapper around it and have that run for every build of the package. You are welcome to type along. Some of it may get a bit long, so like I said, it's already in Git, and it was on the flash drive that went around. Alright, so let's try to get started. For all the people who are trying to follow along: do you have libtaskotron installed?
Is anyone following along who does not have libtaskotron installed? Alright, we'll take that as a no. And does everyone have the disposable image? Sorry? Is this in Copr, or is it going to be? It's on our to-do list; it just hasn't been a priority. When it was just us writing tasks it didn't matter all that much, and we haven't gotten to it yet. As far as I know, there's only one thing we need to change before it'll pass review for Fedora. And the thing that's on the drive is a gzipped file that will need to be extracted before it's actually useful. Oh no, that's right — I extracted it before I put it on there. Which is actually bad, because it's slower to copy the whole uncompressed thing than it would have been to extract it. Sorry — I was hurrying, and I unzipped it, and then all of a sudden it's like, oh well, gzip needs -k. So, as a summary of what we want to do: we're going to download and install the latest httpd build, run some simple tests using the scripts that were already written, report the results we find, and, if it will play nice, generate an HTML report. A side effect of not having taught many people how to do this before is that I am missing a bunch of steps, so let's do those before I confuse people more. There's a bunch of configuration that's going to have to happen, and I am writing these things down so that in the future it's not quite so painful. But for now it's still worth learning. A quick question: is this distinct from the directory on the USB drive that says task-httpd-check, or is that the same thing? Basically the same thing — it is the same thing. I wrote it and figured that's a good way to do it: if you choose to, you can follow along and write it yourself, or you can look at what I've already written. But configuration-wise — I don't know if you guys take notes — I'm not sure all of this stuff is in the documentation. Thank you. Find a terminal... there we go.
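(As an aside, the gzip flag he's referring to: by default gzip deletes the input file after compressing, and `-k` / `--keep` keeps it, which is what would have avoided the slow uncompressed copy. A self-contained illustration with a stand-in file:)

```shell
# demonstrate gzip's -k (--keep) flag: compress without deleting the input
echo "stand-in for the qcow2 image" > image.qcow2
gzip -k image.qcow2
ls image.qcow2 image.qcow2.gz   # both the original and the .gz now exist
```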
So, after all that's installed, the first thing that we need is in /etc. What do we want to start with? Let's start with taskotron.yaml — well, the problem is if I do this... oh, because I'm an idiot. Wrong machine; that was on a virtual machine, not on my local one. This is the main configuration file for Taskotron. The way we have this written out, the defaults are in here already, or at least mentioned. The thing we want to change at the moment is closer to the bottom: the image URL. For now, Taskotron needs to be pointed at exactly where the image you downloaded (or copied off the USB stick) lives, in the form of a file:// URL. That's where I happen to have mine; wherever you put yours, it will differ depending on the exact location in the filesystem. Is this configuration file normally system-wide, or is it just a convenient place to put a default that could have a per-user one? We don't support a per-user one — that's something that, I don't know, is going to depend on how we go forward. Honestly, this is a temporary thing. Eventually what we want is to say: here's a place where we put images; find the most recent one of the type you're looking for and go boot it, for this particular piece of configuration. The assumption so far is that it's one configuration file per machine, but I don't think we're tied to that; that's just how it works. You might be able to use Glance for that — the image store from OpenStack. You might be able to set that up standalone. I'm trying my very hardest not to use anything from OpenStack, because it scares me. It's beautiful as long as you're a user of it and you're not maintaining it. It used to be that Glance could be installed standalone, just a little thing by itself, but it might have grown tentacles and everything else by now. I don't know if that's a good idea or not. Would you mind just sharing what file that is again?
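The setting being edited here is, roughly, the following (the exact key name and path are reconstructed from the session, so double-check against the installed file):

```yaml
## /etc/taskotron/taskotron.yaml -- sketch, not a verbatim copy
## Point Taskotron at the downloaded disposable image, as a file:// URL;
## the path depends on wherever you put the extracted qcow2.
imageurl: file:///home/user/images/fedora-23-taskotron_cloud-x86_64.qcow2
```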
I missed it. Just to be clear, when you run a command, can you say -f and then a path to an alternative file? I thought it would look in your local directory first. Is it local directory, then /etc? I don't remember. A second ago, when he asked whether this was a global configuration file or whether you could have alternate ones, it seemed like it was just this one location; but then I asked if you could pass -f and give it an alternative, and you said it will look in the local directory first, which solves the same problem, I think. I don't remember the exact path. We don't have the command-line option for that; we'll look at the one in the local environment. There's one extra location: when you check out the repository, you can just use the conf directory that's in there — but you have to know about it, and we don't have it in the documentation. If there's demand, that option could be added, because it's just a configuration setting. I'm becoming aware of the irony, given that I said this was supposed to be easy to set up. I have a question about the intent here. Say I want to test different distributions, with different dependencies installed in the packages. Is this like a permanent staging image that I go into, to do nested virtualization and start other VMs? Or would I have one of these images for each of the configurations, and change the configuration to reflect that? Say I'm testing Fedora 23 and Fedora Rawhide — that's what I want to test my packages against, in a simple case. Do I have different configurations for those? Or do I have one Taskotron base image, and within that I stage nested virtual containers for Fedora 23 and Rawhide? The way we had in mind was to have basically the same task executed for different versions. Honestly, this starts getting into details that we haven't actually implemented yet, so if you have ideas on how to make it better, I'm definitely all for it.
But the idea I had in mind was to keep it like this, so that you have a task file in a master branch, an f23 branch, an f22 branch, an f21 branch — just because the concept is there, and I think it's going to be somewhat rare that the exact same test, with the exact same arguments and the exact same everything, is used for the Rawhide version, the F23 version, and the F22 version. So, to answer your question: yeah. I think what startled us the most is that this is in /etc — so this is a global configuration file — and to me it looks like it should be a parameter of the test. It will be, eventually. It's one of the next features on our list; the hard-coding of the image location is a very short-term thing. The local running is basically a hacky thing you do for development — largely for development of Taskotron itself; the real tests run in a different environment. They run in a similar environment; for right now we don't support specifying your environment, but that's what's coming: you'll be able to say, I want Fedora 23, I want the Server edition, I want the Workstation edition — specify that, and it will be smart enough to go look in what is basically an image directory and find the newest image that is, say, F24 Server. It does not do that now; that's on the feature list. Will it be able to bisect and figure out where my test started failing? I have no idea. Wouldn't that be cool? It would be — as long as you don't have to write it. How close are we to my dream of having, right in dist-git, literally the tests checked in next to the packages? I'm just trying to think of how to answer that. Do you know what I'm asking?
Yes, I do, and I'm trying to — I just learned something new today that changes my answer. Actually, not that far away, because there have been some changes in PackageDB and changes in how they're doing dist-git, so that they already have multiple repositories grouped under the same umbrella with separately controlled ACLs. So it shouldn't — and I know, as a tester, "should" is the s-word — it shouldn't be too bad, because the Git side is already mostly taken care of. The hardest parts are going to be figuring out a format and conventions, and how we want to show the task Git repo as part of the package Git repo, if that makes sense. It makes a lot of sense. There are questions like: do we have it as a submodule, do we have it as another checkout that fedpkg can manage — all those kinds of things we haven't quite figured out the details of. But the Git infrastructure is mostly there; it would not take much to add a task Git repository attached to each package. That's what I've been told about PackageDB — that feature was added recently. If you do a package clone now, it will print a message about how this isn't really the repository you cloned; it was supposed to be something like rpms — I don't remember exactly what it was; it just happened when I cloned a repository recently. So when you do a fedpkg clone for a package, it does some switching behind the scenes to point you at a specific package repository, instead of just mapping the name to a URL. You're already using that, in a sense. Any other questions? That is the only thing we need to change in here. The other thing — this is why all of this needs to be written down — the other thing we're going to need is an SSH key, and it does have to be passwordless. Just do — wow, my brain is not working right now — ssh-keygen. I can type, I swear I can type. Just do ssh-keygen, name the key something like id_rsa_task, and — emphasis — no password. The way things are set up right now, if you have a key with a password, it will not work. Is that true even if my key is in my keyring? Yes — we don't support keyrings; it has to be a file, and it has to be passwordless. The important part is right here; the rest is for convenience, you can set it to whatever you want. I wish I had the original file. This is /etc/testcloud/settings.py, and the important thing is that this is basically a template. What we need is the users section — is this readable? It's this part right here, the users part of this YAML: the SSH key that you generated goes in, in single quotes. And that thing up there that looks like a '#cloud-config' comment — don't be fooled, that's cloud-init being stupid; it is actually configuration. Also, if anyone knows of an alternative to cloud-init, I am all ears. I am not a huge fan of cloud-init, but I don't know of alternatives. The problem is, we needed the alternative before cloud-init became prevalent, and now it needs to be not just less crappy — it needs to be compellingly better. And cloud-init is fine in most cases. Yeah, right, exactly — when it starts getting in the way of my stuff booting, I start running out of patience with it. It's a market problem. Why do you bug it? It's good, right? Yeah. So I'll just wait until I stop hearing typing, and I'll assume that means people are mostly done with their configuration. Is anyone having trouble with this? Any questions while the typing happens? What's going to be the best way to make this easier — any ideas? Yeah, it should all have an argument; I'll look into the details. We should generate the testcloud init file in libtaskotron. Yeah — or change testcloud
to support passing the configuration in, so it's not tied to a single configuration file — scratch that — that would make a lot of sense, and the only piece of information you'd need is the path to your SSH key. Or even allow generating one; we could generate one on the fly. The %s is also important: it's a quirk of testcloud — if there isn't at least one %s in there, it will fail. I like how you have a super secure password. Yep. Is there a need for a password, since you have the keys? No, there is no need for it, but testcloud will set it, so if you don't have it in there, testcloud will explode. So, do we need to use it? No. Does it need to be there? Yes. Alright, is everyone done with this part of the config? Then we can actually do something — we can go back. I'm going to choose not to do all the fancy remote stuff, just because the network is so bad here; I've not been able to get one run to finish today before the network times out. So we're going to go about it a slightly different way and create a VM ourselves. We can do testcloud instance create, give it a name, and then give it the URL — but this task installs httpd, so... how about I just give you the complete command. The format of this is: this is the name of the virtual machine you're creating, and here's the URL of the file it's going to use to create that virtual machine. The first time you use an image it's going to take a little longer, because of the way testcloud works: it copies the image into its internal cache so it can create a backing store from it. Here's the command — does that make it harder or easier to read? It's neutral. Okay. Oh, more notes: you need to be added to the testcloud group. Like I said, I'm learning as I go as far as the stuff we know internally and haven't written down. In order for testcloud to work, you have to be part of the testcloud
group in order to spawn virtual machines. Or, if you have some virtual machine handy, that will also work; you don't have to spawn it with testcloud. But do you need to be part of the testcloud group, or — if you're already part of a group that can spawn VMs, will that work? No, you have to be in the testcloud group, because of the way it copies files, and because everything is awesome. If you are in a graphical environment, that means you have to log out and log back in. You can use su - in a terminal and you will get the new group for that terminal. You will? Yeah — anything that gives you a new login. So, su - with your user. The files in the local directory and all that don't need to be in the other group. What? Like the file you're trying to create — you could just use the working directory. We tried to do that, and it has its own set of problems. That's how this whole thing started: it would put stuff in the home directory, and libvirt starts freaking out and does weird things when it can't own everything, and then you have directories in your home that libvirt needs to own and changes the ownership of. Not the user session — the system session of libvirt. But if you use the user session, it's a networking thing: if you want network access, now you don't have it. Then again, maybe that's something that's changed — if you look at Cockpit... I remember going through this; maybe something has changed, but that was one of the problems we had with the user session. There was another reason, too: we looked into using the user session, and you can't get at the network information you need if you want the IP address. For me it tries to use the user session now, here, with the testcloud thing, but fails. That's part of creating the virtual machine: when you boot it, it ends up using the
system session, but libguestfs — we use libguestfs to prepare it — will use the user session. Okay, but it doesn't work. What is it saying? It says ~/.cache: no such file. Well, it's weird that it tries to access home; it should use libguestfs. Oh, libguestfs is what's doing it. I'm going to be learning a lot today. You're very welcome to contact us directly — guestfish, of course. So these are the two different runners; one just does it locally. For the tutorial, next time, just do the local one. Yep. Like I said, some of this is me learning, because I didn't think about that until after I wrote the task — and the problem is that the task I want people to write installs httpd, downloads stuff, changes a configuration file, and really needs root access, which I didn't want running on your laptop and I don't want running on my laptop. I'm not going to stop you from doing it, but... okay, good. Alright, so does everyone have a virtual machine, ish? No? Yep. So make note of the IP address — you're going to need that. Okay, let's get to actually creating the task itself. Because one of the things we designed for is having everything in a Git repository, it's going to help you out later to create a directory for the entire task; that will eventually be the Git repository. This is just my suggestion — these are the names I used; you're welcome to use whatever you'd like — but just create those files. Is anyone still typing? Alright, it's hard to tell when I can move on. So, getting started with this, I'm going to switch over to — is this readable? Okay. The thing that every task basically needs is this part at the top. Going through it: each task needs a name; a description, which is really just something that's going to be shown to humans; and the same with the maintainer — if someone sees it and it's broken, they know who to at
least start talking to about getting it fixed, or what something means. The other metadata in here describes the arguments, and there are only certain types of item that libtaskotron works with: right now we support Koji builds, Bodhi updates, composes — there's another one, what am I forgetting — oh, Koji tags. There will be more in the future; again, the purpose of this is to serve people, so if there's stuff we don't support, tell us, so it can work for what people want to use it for. One of the other things you can do — I'm just going to show this; for this particular task you don't need it — is describe the environment and the RPMs. By describing the RPMs that need to be in the environment, libtaskotron will go try to install all of them, if they aren't already present, before the task is executed. But in the case of what we're doing here, we're going to download all the RPMs anyway. Does it get them from Koji or elsewhere? We can do both. So, for better or worse, the tasks are basically YAML, which I imagine isn't anyone's favorite, but I'm not sure I've ever heard anyone go into a fit of rage about having to write YAML. If we had chosen to do this in Perl or Python, I imagine there might be fits of rage. So it's bland and neutral, and it's enough to get the job done. One of the concepts we have is variables, and as far as I know this is in the documentation. When you start, there are certain variables already available to you. This workdir: every time you start a task, libtaskotron creates a temporary directory for you and basically runs everything in there, trying to keep things separated. That's one of the variables. Another variable is whatever was passed in: because this is an item of type koji_build, you pass in the value — whatever that Koji build is — and it's available itself as a variable. If it were a compose,
the name of the variable would be compose; if it were a Koji tag, the variable would be koji_tag. Those are the variables you start off with. So, each task is made up of our metadata and our actions, and actions are directives, as we call them. We have a koji directive, we have a mash directive — all of these are in the online documentation as well. For the koji directive we give an action, which is download or download_tag. There are some instances where we want to download an entire Koji tag, and download_tag will do that; but in this case we only want the one build — the build we triggered on — and we want the x86_64 arch, and we want to download it to this directory. That gets all the subpackages of that architecture; it gets everything from that SRPM. Does that answer your question? So, every RPM that was part of that build — in this case we're just grabbing the x86_64 and noarch ones. It won't grab SRPMs, it won't grab 32-bit x86, and it won't grab ARM. If for some reason you wanted the source RPM, you can use 'all', or you can use 'src'. The koji directive has documentation online, so you can see the available arguments — which I really hope I put in the presentation. I need to run off, so thank you very much, Tim. One of these days... what was it? Oh, thank you. If you guys are done in 20 minutes, come join me in the Fedora QA session. There's a Fedora QA session?
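Pulling together the metadata and the koji directive just described, a minimal task formula might look roughly like this (the overall structure follows the talk, but the exact directive arguments and file name are assumptions to verify against the libtaskotron documentation):

```yaml
## task.yml -- sketch of the httpd check's formula
name: httpd_simple_test
desc: run the existing httpd test scripts against a new build
maintainer: tflink

input:
    args:
        - koji_build        # e.g. httpd-2.4.18-1.fc23, supplied by the trigger
        - arch

actions:
    - name: download the build under test
      koji:
          action: download            # or download_tag / latest stable
          koji_build: ${koji_build}
          arch: ${arch}
          target_dir: ${workdir}/rpms # ${workdir} is created fresh per run
```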
Yeah, Fedora Council Q&A. Q&A, not QA — sorry, wrong context in this room. But there are docs — oh yes, and I also have them built locally. So we do have directives; for example the koji one — this is what we were just using. These are all online. If you want to use koji, it will tell you: you have an action, which can be download, download_tag, or download the latest stable build. For example, one of the questions that came up earlier today was about comparing the latest build with the last stable build — you can download the last stable build from Koji when it hands you the new build. All this documentation is available online, when you have competent network. I guess I could just check the documentation, but what's the mash action doing? That's another thing that is very much a Fedora-ism, and it doesn't always occur to me that it's not commonplace. mash is a tool to create repositories; it's sort of a superset of createrepo. The reason I used mash here instead of createrepo is mostly that it was already there, but one of the things mash does is handle multilib: it handles getting the 32-bit bits onto an x86_64 machine. With plain createrepo you'd have three separate repos — the 32-bit, the 64-bit, and the source — whereas you point mash at the entire directory and it removes the 32-bit RPMs that would have conflicted, if that makes sense. In this exact case it shouldn't have been necessary to have the directive here, because ideally we would just install the packages — but we discovered some kind of bug or something. I have a huge list of RFEs; this dog-fooding experience of writing the task has been incredibly instructive, because it's like, this is stupid, why does it work this way? We found out that we have to create a repository just to install the packages, for some reason, but it should not have to be there — I tried downloading the packages and then just doing dnf install and pointing
it at the files, but if those weren't in the right order, DNF would refuse to install them. Hence creating the repository, and doing nastiness we shouldn't have to do. So — how much time do I have left, 30 minutes or something? Eight minutes. Eight minutes! So I think we're going to wrap up the official part of this; I'll just go through what it's supposed to do, and then I'll show what it looks like when it runs. What we're doing — and this will change, because it's a pain in the butt — is downloading the new RPMs, making them into a repository, and then delegating out to the python directive. The python directive loads this other Python file and calls prepare, and these are the variables that get passed into the callable as arguments — it can be a method, it can be a function. In this case it goes out to this prepare function, where you can see some of the insanity. We have two arguments here, which match up with these two, and they're passed as keyword args matching the names that are here. Then it goes through and does all of this, which is basically some nastiness that shouldn't have to happen: installing our packages from the local repository using DNF, starting httpd, and restarting it to make sure it caught SSL. The next thing we do is actually run the tests. Again we use the python directive, delegating to httpd_test.py and this callable. We have the original test case's shell scripts — basic auth, PHP, serve HTML, serve SSL, and virtual host — and this is just a Pythonic way of going through all those shell scripts and creating CheckDetail results. This is one of the advantages you can get by writing things in
Python: we have a bunch of convenience methods already there. For example, with CheckDetail you can change the result, you can have multiple CheckDetail objects, and when you're done you just pass them into check.export_YAML. It's relatively future-proof, because that's the interface we use, so if there are any changes in the future this will still work, versus manually constructing something. Then it just runs through these things and shows the results. The command to do this — runtask is the entry point for the runner. Part of what Taskotron uses is this key/value pair: as we were saying, koji_build, koji_tag, bodhi_update, and compose are what we support for now. So this is the type — we're giving a type of koji_build — and this item I just grabbed out of Koji; it's the most recent staged httpd build. We're using SSH for the pre-existing virtual machine, so I give it credentials — root at that IP address — and, for the sake of completeness, I'm telling it (this is something you can put in the configuration file, but I'm putting it on the command line as well) to use this private key when opening the SSH session; if you keep the key in ~/.ssh it should be found automatically. Then I point it at that YAML file — which, from where I happen to be, is in the parent directory — and give it the YAML file. So I hit enter and it goes through. I don't think I need to narrate the output going across the screen, but it tries to install the stuff — nothing to do now. It's funny: this is going to fail, because I already ran the task in here. Oh well, let's see what the failure looks like. Oh no, they all passed! Cool — I like being wrong in those cases. Going through this: it's pretty much in debug mode, so it's relatively verbose, but it's going through and saying what it did. We downloaded all the stuff
for the RPMs, created the repository, went through and installed the packages we needed, and then started running the actual tests. Here's what would have been reported if we actually had this hooked up to a reporting mechanism: basic auth passed, PHP failed, serve HTML passed, serve SSL passed, virtual host passed, and the artifacts from the run are stored in this directory. I realize — I think I'm over time, aren't I? Not yet, three minutes. So I thought I was out of time already. I realize I breezed through that, and I appreciate you staying here, paying attention, asking questions, pointing out where the weaknesses are, because I find I can be way too close to things — all of a sudden it's, "what is mash?" Of course everyone knows what mash is. I have a huge list of RFEs just from writing that simple task. If there are things you want to do that you find you can't, please let us know. If you run into problems, please let us know. Please come find me: we're in #fedora-qa on Freenode. I'm Tim Flink — tflink@fedoraproject.org, tflink@redhat.com, IRC nick tflink. Come find me and ask me questions; I am more than happy to help. I want people to start using this; I want to help get this into people's hands. So please let us know if you have questions or suggestions, and where the malformed bits are. That's basically what I said, and the things I've learned are: dog-fooding is a good idea, and don't count on the network for a demo. Those are the things I have learned and relearned. Yeah, malformed dogs, awesome. So that would be pretty much it. Does anyone have questions, comments, thoughts about vegetables? Hopefully not. Well, thank you very much, and again, if you have questions or comments, please let me know. Can I get a response? Yeah, they're not finished. What's your question? Have you heard about Avocado? Yes. I think what you're trying to do is — I agree
that there are a lot of parallels, and there is some intersection, but I don't think it's the same thing. Yes, I agree there's a decent amount of overlap. I'm trying to think — we are a bit more language- and test-agnostic; they have a lot of deeper, really cool testing features. One of the things I'm interested in doing eventually is integrating with Avocado, so that we can use it as a runner. That's what I was thinking. One of the biggest differences is the number of tasks and different things involved — that's where I see the future: that list of actions expands, and you get a higher level of abstraction for the tasks. Do you mind that you're using just a small bit of the whole of Taskotron? So, what makes up the whole of Taskotron: we have the trigger, which listens for events and fires the relevant actions for the tasks — hopefully task authors won't have to care about it. So libtaskotron might have some overlap with Avocado, but it's just a small part of the Taskotron framework as a whole. We definitely think about things like integrating Avocado into it while keeping all the other pieces of Taskotron. How much infrastructure needs to be set up for running tasks, if I need to test different distributions? Well, let me say: you should just need libtaskotron, and then you can write tasks, and if they work for you, you can submit them — into some repo, I don't know — and ask us to execute them on every new package build, or something like that. That should be everything you need — just libtaskotron — and we handle all the rest. Oh, sorry, I just remembered what has to be done there. That's great, so I can't touch the machine I have to test, and I get a dedicated machine. Although, even with just one — say I want to test whether the channel is blocking my access, how do I test that? The status: for a while the ABI package was not — I know it's done now; as far as I know, all the features that we needed are in Taskotron
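The per-script loop described earlier — run each of the old httpd shell scripts and translate their exit statuses into pass/fail results — could be sketched like this using only the standard library. In the real task you would build libtaskotron CheckDetail objects and hand them to check.export_YAML(); the script names and helper functions below are otherwise hypothetical:

```python
import subprocess

# The original Fedora test case's shell scripts (names assumed)
SCRIPTS = ["basic_auth.sh", "php.sh", "serve_html.sh", "ssl.sh", "virt_host.sh"]

def outcome(returncode):
    """Translate a script's exit status into a result outcome string."""
    return "PASSED" if returncode == 0 else "FAILED"

def run_scripts(scripts, run=None):
    """Run each script and collect {script: outcome}.

    `run` is injectable so the loop can be exercised without a running
    web server; by default it shells out to each script.
    """
    if run is None:
        run = lambda script: subprocess.call(["sh", script])
    return {script: outcome(run(script)) for script in scripts}

def overall(results):
    """The check as a whole fails if any individual script failed."""
    return "PASSED" if all(r == "PASSED" for r in results.values()) else "FAILED"
```

Wired into the task, the python directive would invoke something like this, and the exported results are what a command along the lines of `runtask -i <build> -t koji_build ... task.yml` reports at the end of a run.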