Hey, thanks for coming down after lunch, I know it's a difficult time slot. I'll be talking about Fedora Cloud. A little bit about me: that's me flying, so cloud is something I've been doing for a long time, as you can see, maybe up in the actual clouds. Other than that, a few lines: I'm a Fedora ambassador and contributor, working with Fedora for a couple of years now, and working with the Python project. Officially I work at Red Hat as a Fedora Cloud engineer, which means I'll be happy to take orders and requests from the rest of the community to do things for all of us. And we have most of our cloud players here, a lot of people, I can see. Is anyone here not familiar with the Fedora Cloud working group? Anyone? Oh, we have at least three people. Two. That's good; otherwise it would be too boring for everyone else, and I could have skipped the whole talk and gone straight to the discussion part. So what I'm going to talk about is the Cloud Working Group itself, not any one particular product: exactly what we do, how we do it, and how you can help us or try the same things we are working on. Which brings us to the question of what we do, which is, next slide: a pretty old answer. We do cloud. It's too large. Which brings us to the Cloud Base image, the qcow2 image you can get if you go to getfedora.org and click on the rightmost cloud icon. You'll find different kinds of images you can download there, and the Cloud Base image is basically a generic qcow2 image which you can use on OpenStack instances or any other local private cloud you are running. We also build AWS AMIs from the cloud images we build. We have an application called fedimg, which is connected to the same workflow.
So whenever there is a new build of the Fedora cloud image in Koji, fedimg takes that build, creates an AMI out of it, and publishes it in all the available regions. So if you want, you can try out the latest nightly builds on AWS right now as public AMIs. I'll go into these smaller parts of the project later, in the discussion, if anyone wants to talk about them. Another thing we are working on is the container side, because yes, cloud and containers are at a similar level of buzzword now; maybe containers are ahead at this point. We maintain something called Fedora Dockerfiles. Anyone here know about Fedora Dockerfiles? I'm not asking you. Fedora Dockerfiles is basically a GitHub repository containing a lot of different example Dockerfiles for applications: say MySQL, say Apache, many things, including Firefox. These Dockerfiles are made and maintained by the Fedora Cloud Working Group and the community, and you can use them as a starting base for any container you're going to build of your own, or you can try the resulting images from the Docker registry. I think, Joe, you were mentioning that you are also using one of the images, correct? You, or someone else? In Cockpit, you wanted to update something in one of the Dockerfiles? There's a bunch of updates to be done. One thing people were talking about was making sure we fix the yum-to-dnf issues, because almost all the Dockerfiles were written before dnf existed. Yeah, so that's another issue; I was actually talking about something else, but anyway. I think it was Stephen Gallagher, maybe. Yeah, not you, sorry. So the idea is that we have these Dockerfiles as examples you can build on.
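Trying one of those Dockerfiles is only a couple of commands. A rough sketch; the directory and tag names here are illustrative, so check the repository for the actual layout:

```shell
# Grab the community-maintained example Dockerfiles and build one of them
# as a starting point for your own container.
git clone https://github.com/fedora-cloud/Fedora-Dockerfiles.git
cd Fedora-Dockerfiles/apache

# Build the example and run it in the background, publishing port 80.
docker build --rm -t local/fedora-apache .
docker run -d -p 8080:80 local/fedora-apache
```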
And they're not only examples to read: you can actually use the resulting containers to do real work for yourself. There are Fedora Magazine posts about the same Fedora Dockerfiles if you search for them, and obviously the wiki pages, where you can see how to use these Dockerfiles for something you want to build. Another new thing that came up in the last release is the Vagrant images. Anyone here use Vagrant? So many of you. There used to be no official Fedora image in Vagrant land, but now we are actually building two different kinds: one targeting the libvirt-based Vagrant, and the other one, obviously, VirtualBox. We have both images up and ready for you to download and try out. They come out of the same pipeline, so it's basically the same kind of image, built in the same Fedora infrastructure, which you can use as the base for your own Vagrant boxes. The Cloud Working Group also makes sure that the latest Fedora cloud image is available on the different public cloud infrastructures. For example, Dusty worked with the DigitalOcean team to make sure the latest Fedora is available on DigitalOcean, so that when you create an instance there with Fedora, it is the same Fedora image coming from the Fedora project. The same goes for Rackspace and AWS; and as I said before, for AWS it's an automated process now. So what happens when we say we built a new cloud image? It's basically a Koji task. If you look in Koji, there is a task type called createImage; that is the Koji task for the build which actually creates the images for us. And in the background it uses something called ImageFactory. Anyone played with ImageFactory here?
So, at the beginning I found there are enough documents about ImageFactory itself, but I didn't find much about how we use it to build our own cloud images. So one thing I'm trying, and I don't know how good or bad it is, some people have talked about it: whatever I learn from our work in the group, I try to put up at this particular URL. It's a GitHub repository, open, just text basically. So if you want to learn about what we are doing, how I build the cloud image, or how I do my testing and so on, you can go there and check it out. I've heard from several people about this: don't underestimate it. Thanks. I know at least one thing, that this is the only document available on the internet which actually explains how to build up the image, and I'm going to demo that. The other reason I maintain it is obviously that I keep learning and forgetting things. If I write it down, then whenever I forget something I can always go back to my own work notes and see, oh, how did I do that three months ago? Anything that I know just goes in there. So, here is the demo. The idea is the same: you can use the same tools and start building your own images if you want to, if you want to play around or see how we do it. I'm going to go into demo mode now and show you what we do. The first thing about building a cloud image is where we build it from. We have a repo called spin-kickstarts; it's on fedorahosted.org. If you go to that project, you'll find it contains various different kickstart files, and one of them is the Cloud Base one. The kickstart files for the other images, like the Vagrant ones, are also there. So that's the first place to start. I'm actually going to... you are being connected, thank you. I haven't presented at a conference in a couple of years. If it's working, then...
Yeah, no, it doesn't. Yeah. So, by the way, this is the createImage task which I was talking about a few minutes back. You can see it built the Docker image too, and the Fedora Cloud Atomic and Vagrant images, and the Base images are there, one of these. One of the easiest ways to start, even if you don't want to mess with the kickstarts from the spin-kickstarts repo directly, is to just go to one of these tasks. Let's say I go to this Cloud Base Rawhide one. This is the output; the primary output we look for is this one, the qcow2 image, but the task also contains the kickstart file used to build this image. That was my first place to start, so we can actually look at this particular kickstart file. Here is one point you should remember: some of the nightly builds contain repositories whose URLs start with infrastructure.fedoraproject.org. Please remember to replace that infrastructure part with alt.fedoraproject.org so that you can reach the same repository; it's nothing hidden, it's just not reachable from outside the Fedora infrastructure's production network. Now, my work notes list all the packages you are required to install. So the first step is just to install the dependencies, and then, after starting libvirtd... Please work. No? Anyone know how to use this? Trying to move the terminal from here seems to be way too difficult. Please move. Seriously? I'm trying to figure out how. Here. Oh, finally. Is the font size okay? Okay. No, that's fine. Okay, just kidding. So, I actually have a small shell script so that I don't have to type this every time, but basically this is the command we run.
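With the kickstart and the TDL template pulled from the Koji task, the build boils down to roughly this; the flags are as I remember them from the work notes, and the file names are illustrative:

```shell
# The nightly kickstarts point at the internal mirror; swap in the public one.
sed -i 's/infrastructure\.fedoraproject\.org/alt.fedoraproject.org/g' \
    fedora-cloud-base.ks

# Build the base image: --file-parameter feeds the kickstart in as the
# install script, and the TDL file describes the target (OS, version, arch).
sudo imagefactory --debug base_image \
    --file-parameter install_script fedora-cloud-base.ks \
    tdl-x86_64.xml
```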
So: imagefactory, --debug, base_image, --file-parameter with the install script, then the kickstart file, and the tdl-<arch>.xml file. You can download that same tdl-x86_64.xml, in our case, from the Koji task; it's available there. But again, if you see anything pointing to infrastructure.fedoraproject.org, replace it with alt.fedoraproject.org. That part is important. And after this, I can execute it like this. I'm using the Fedora Cloud kickstart file which I downloaded before. If it breaks, that means this might be one of the old ones; it's supposed to break. After lots of, you know, maybe-not-so-garbage text, you'll find yourself stuck here, waiting on this countdown, which runs up to ten minutes. And now, okay. So it will wait up to that long at most, but this is the place where, if you have any issue with your kickstart file, it will fail. It fails if it cannot see any disk activity in the installation within five minutes, that is, within 300 seconds. That's where the people who actually tried this said, oh, all I can see is this error: no disk activity found in 300 seconds, task failed, and then lots of error messages. Not that useful. My way of debugging was to look at what's going on right now in that VM. So I enabled VNC on it, and I can connect to it. I have an existing tunnel to the local box, so I'll use that. Not this one. There you go. You can see the kickstart is working. Generally, if it crashes, it also captures an image from the moment of failure, where it's supposed to capture the error messages, but in my experience, most of the time you will find a completely black screen as the output. Which is not that helpful.
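For reference, the VNC trick is nothing special; a sketch, with a hypothetical builder hostname:

```shell
# Forward the build VM's VNC display from the build host to your machine,
# then point any VNC viewer at the local end of the tunnel.
ssh -L 5900:localhost:5900 builder.example.com
vncviewer localhost:5900   # watch Anaconda run (or fail) live
```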
So I know there is upstream work going on to have some kind of recording done, a video recording of the whole installation, so that if required we can actually watch the screen session and see what's going on in the installation and where exactly it failed. Most of the time, for me, just looking at this screen was helpful enough. If you do the install in console mode and do everything non-interactively, you can just grab the serial output as well. Yeah, but this is not me doing it, it's ImageFactory. No. Yeah, I agree. So what is ImageFactory giving you over, say, virt-install? What extra? Yeah, that's the question. I've honestly used virt-install to do the exact same thing before, because you just use the install tree, so it doesn't do a whole lot extra. It has capabilities we're not using: it can do some of the virt-sysprep stuff as part of it, it can output a lot of different formats at the end, and it can also push the result out to cloud providers. The main reason we're using it is that there was already a Koji plug-in for it, and rather than inventing a new thing, we wanted to use the thing that existed. So running ImageFactory on your own system mirrors what Koji does; you have a similar build model. And the other thing it does, at the end of the installation, when the installation is done, is to remount the whole image and check things inside. If you look at the way we build the Docker image, the Docker image actually gets built in the same way, except that you pass a separate parameter and it mounts the result as a file system for the Docker build. From a user's point of view, typing the command, that's the only difference between the cloud image build and the Docker image build.
There's supposed to be a plug-in for ImageFactory in Koji, or an enhancement to it, that will actually make a little MPEG movie of the entire installation as it goes. Yeah, that's what I was saying, the work is going on. But what's the benefit of actually making a movie versus having the console output and saving it? Like, have you ever used script? Yeah. I don't know; it ended up being very confusing to me, but it doesn't have script support. Yeah, but I'm not sure. Some of the places I was getting stuck in early debugging, the console output didn't have what I needed; it wasn't quite designed with that in mind. And what's the policy on keeping the videos? We don't want them to be big, right? That's why I was thinking: what if you just have the script output plus the timing information? Yeah. And also, I kept changing the kickstart to say command line, and Dennis kept putting it back to the GUI, so... I don't know. One thing I found is that right now the only thing it does is the screenshot, correct? So only one screenshot can catch the error message, and the rest of them are all black screens. Yeah. So, if nothing else, if not the text, I'd love to have at least that. Oh, yeah. I was just wondering, you know, video versus text plus timing information; the size actually really matters. Yeah, and it's searchable, right? You don't have to watch the whole video. And it's something which I can download from India, unlike a video, no? Yeah. It seems like a useful thing to look at, at least the sizes. So, yeah, you can build the Fedora Docker image the same way, and my work notes contain the other, missing piece of the command for that. The final output of this command is a raw file, which you'll find named after the build UUID, as <uuid>.raw.
Then what it actually does is use, I think, the qemu-img command to convert that raw file into a qcow2 image, and you can play around with that. So if you want to submit any kind of patches to the kickstart files for Fedora Cloud, I would request that you at least do one local build at your place and check whether the image works or not, because none of us can actually do a real build anyway, except release engineering. So that's one point. So what you're saying is we need scratch builds for images? That's something we've been asking for for a long time. Yeah, part of the problem is that in order to do any image builds at all with Koji, the permissions are not fine-grained: you have to be an admin, and they didn't want to give that out. And I think Koji image builds were, for a while, also broken for scratch builds because of some other issues; I think that's fixed now, or should be. But that was kind of an implementation detail, really. So you still have to be an admin to do a scratch build, or not? Yeah, we can't do scratch builds because this particular task doesn't have that; you have to be an admin. Okay, so that is Koji. And talking about Koji fixes, we actually have an open ticket on our side, I mean the cloud side, and also on the Koji side: the Koji deployed in production actually creates a qcow2 version 3 image rather than version 2, so it's the v3 variant of the qcow2 format. The problem with that is that the qcow2 image we get out of Koji is one you cannot simply download and upload to an older cloud, say one running on RHEL 6.
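The conversion step, and the workaround for older hosts, look roughly like this; the file names are illustrative:

```shell
# Convert imagefactory's raw output into a qcow2 image.
qemu-img convert -O qcow2 Fedora-Cloud-Base.raw Fedora-Cloud-Base.qcow2

# If the target host's qemu is too old for qcow2 v3 (e.g. RHEL 6),
# write the older v2 layout instead by asking for compat=0.10.
qemu-img convert -O qcow2 -o compat=0.10 \
    Fedora-Cloud-Base.raw Fedora-Cloud-Base-compat.qcow2
```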
So fixing that requires a Koji patch, and the biggest problem is, I think, not writing the patch but having a place where I can test the patch; I've installed Koji only once in my lifetime, and I don't want to do that again. Never. Another thing I see is that when it reboots the image after installation, cloud-init actually waits a long time trying to get the metadata, and sometimes the build just fails because of that; if I restart the build, do a fresh build again, it just works. I think it's some kind of race condition where cloud-init gets stuck and then ImageFactory gives up. I think you can see the 300-second counter going down now. Yeah, correct. It is finalizing now. Oh, okay. You win. So is this booting the image that it just built? Yes. Why? Because it just booted an image we're supposed to ship; that's something I don't want. Okay. Yeah, because that means first boot happens during the build, and that's actually not what you want. You don't want to bring it up at build time, because that's what's supposed to happen when people bring it up in the cloud. That would be much more useful. Yeah. So one of the things I do in this particular case is pass the command-line argument, like I do for the Docker one, so that it doesn't boot it up. But that's more of a hack than having the actual solution in place. So I think I can just reconnect. Yeah. So here you go, I was doing that. Every time I see this, I start hoping: cloud-init, please finish. Yeah. This build is broken and unusable anyway; nothing wrong, that's why. But you've got a thing that runs it; you have a metadata-writer thing. Like testcloud: what it does is create a seed.img and just attach it as a drive. What's it using for that? Yeah, it fakes up a little minimal metadata source.
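That seed trick is the standard cloud-init NoCloud recipe; a sketch, where the hostname and password are placeholders:

```shell
# cloud-init's NoCloud datasource reads user-data and meta-data from an
# attached volume labelled "cidata", so no metadata server is needed.
cat > meta-data <<'EOF'
instance-id: iid-local01
local-hostname: fedora-test
EOF

cat > user-data <<'EOF'
#cloud-config
password: passw0rd
chpasswd: { expire: False }
ssh_pwauth: True
EOF

# Pack the two files into a small image and attach it as a second drive
# when booting the qcow2.
genisoimage -output seed.img -volid cidata -joliet -rock user-data meta-data
```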
There's an RPM built somewhere for that, I think; it comes with it. The other thing I'm trying to write is a better service for this which you can just plug in; I'm working on a separate one. One mistake I made was downloading the cloud image, especially the raw image, on a slow or metered connection, and then trying to boot it in virt-manager. So I'll say it again: the solution there is that you create a text file with the metadata information, then create an image file out of it; the work notes explain that step by step. The other option, as I said in the morning talk, is testcloud. Yep, that's right. It's in COPR, by the way, right now. I tested it before I came here, even. Okay, so, cool. I use testcloud for all the local things. It would help to update the wiki pages to make it clear that you can use these images with any virtualization platform, not only a cloud. Yeah, that's the whole misconception about "cloud" images, that you cannot just start using them locally. Let's update the wiki page. Great. Yeah, so this worked. So now this is where I pray. 40, 30. I still don't know if it will pass or fail. Yeah, this is terrible. I hope it will last. I think it will fail. 10. Last 10 seconds. Yeah, it's not going to fail. It's booted, but the metadata somehow is not there. Timed out. Hey. So, yeah. Many times when I see this, I try once more, and if it doesn't work, I just pass that parameter as true. Yeah. The description of this talk said we were going to discuss the Working Group and how to get involved, and I think maybe some folks would like that. Yeah, so, the talk description. One of the points is that we are looking for help; that's this talk. The talk description actually came from oddshocks, David Gay. So, yeah, he submitted this talk.
He asked me, can I go ahead and submit it? I said okay, I'll not talk about it then; but since he's not here right now, I'm giving the talk. So, yeah, let me jump from the what-we-do side to the other side: how can I help? That's what I think you want to ask me. That was just the next slide; I didn't even plan that. So: docs. I know we have a lot of wiki pages, but we really need updated information, because cloud and containers are things that get updated hourly, maybe. We need more blog posts and more user docs. They don't have to show everything to everyone; maybe it's just a document which shows how to do one particular thing using the cloud images, or, say, using the Fedora Docker image. Fedora Dockerfiles, maybe, is another place: if you can use them, or you are using them in one of your projects, you may want to write down how you are using them, so that anyone else with a similar kind of problem can start from there. As I mentioned, we do not have many documents explaining how things work. You may think that everyone knows about this stuff, but if you still write about it, it will be helpful. Just to give you an idea of how docs help: another change we are working on, for Fedora 23 or 24, is having systemd-networkd inside the cloud image, and for the first few days it was really difficult to find any information on how to actually use it properly. It has really good documentation, just not from Fedora's point of view, and I was thinking I was the only person experimenting; but then Major Hayden, who gave our keynote, wrote a really good blog post explaining how the thing works. Then it becomes almost a single configuration file with four lines: two actual lines and two section headers, and that's it.
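A sketch of what that file looks like; the interface name is an assumption, and on a real image it lives under /etc/systemd/network/ (written to a relative path here so the snippet is harmless to run):

```shell
# Minimal systemd-networkd config: match the NIC, ask for DHCP. That's it:
# two section headers, two actual lines.
mkdir -p etc/systemd/network
cat > etc/systemd/network/eth0.network <<'EOF'
[Match]
Name=eth0

[Network]
DHCP=yes
EOF
```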
It's the easiest network configuration you'll ever do on a server. Any kind of documentation help is really going to help us. And obviously the next big thing is quality assurance, which is actually doing the testing. As I said, Koji builds nightly images automatically, so if you want, you can test the nightly builds on your own system, or if you have OpenStack or Eucalyptus, or an account on AWS. The same nightly builds also go to AWS as nightly AMIs, so you can boot them up. And as I said about the networkd image, I have public AMIs in, I think, two regions right now: one in Singapore, I think, and us-west-1, wherever us-west-1 is. You can play around with those images too; they boot up with networkd inside AWS. We do have a test wiki page: just search for the Fedora cloud test wiki and you'll get to the page with the kinds of tests we run right now and what we test. Maybe that's all of it. The first thing I see is your advertisement: say what? Yeah, we have another one of those test days coming up soon. Yeah, the day I am flying. Right now the test matrices that we have are not as comprehensive as they should be, but, you know, they test things like making sure you can reboot, making sure you can log in or install packages. These are the must-pass tests right now: journald logging is working, enabling and disabling services, a reboot in between, SELinux enforcing, and the last one is service manipulation: there should not be any failed services on the system. For this particular set of tests, I have a project which I talked about in the morning, Tunir, which basically helps you if you don't have a cloud: you can just dnf install tunir and use Tunir to boot any of the local cloud images on your system, so that gives you complete automation for the whole thing.
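Those must-pass checks are all trivial to run by hand; roughly, inside a booted image (the package and service names are just examples):

```shell
# The manual smoke tests, more or less, as shell commands:
getenforce                          # SELinux should report "Enforcing"
sudo systemctl --failed             # expect no failed units listed
journalctl -b | tail                # journald is actually logging
sudo systemctl disable chronyd      # disabling and enabling a service works
sudo systemctl enable chronyd
sudo dnf install -y tmux            # package installs work (non-Atomic images)
sudo reboot                         # and it should come back up afterwards
```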
Otherwise, this means you manually SSH in and type those commands. testcloud is the way to just use the cloud images on your system: testcloud brings up instances as VMs, where you can log in over SSH and use them as normal VMs, but from the cloud image. Tunir is the automated way: it can either boot up a cloud image, or drive any remote system over SSH; it will automatically execute a series of commands and give you the output, whether they passed or failed. All these tests are in a GitHub repository; I'll just show it here. None of the test cases we have right now do anything that couldn't be done better by automation; basically, we require a human to log in and do tests that a computer could do better than we could. We should have more test cases, but there are probably a hundred different things I can think of that we should add as automated tests before we add more human test cases. And this is where I actually converted those human tests into simple Python unit test cases, and I use these in Tunir. As you can see, say, the login test or the SELinux test. This is basically because, obviously, I am a lazy guy and a Python programmer, so I really don't want to type these things myself; it automatically runs them. And we really want to add as many things as possible. I'm already adding one which applies to all cloud images, if it's not Atomic: I actually install a package and make sure the package got installed. The same idea goes for Atomic: Docker should work, because an Atomic image where Docker doesn't work doesn't make any sense. We also do a lot of tests for cloud-init, yes. So what we need is actually some way to spin up those instances in different clouds. As of now I can only speak for Tunir, because it's working for me: Tunir can just use SSH, so it doesn't matter whether it's a
Vagrant image or a cloud image on AWS or Rackspace or anywhere, or your local cloud; it can actually run the tests. Following the same line, for Fedora 23 one of the major changes is the two-week Atomic, and for that we are working on something called Autocloud, which is basically automatic testing of the cloud and Atomic images. If I am on the VPN, I hope it's okay to show it. Could you show a picture of that? That's on there, yeah. Two-week, yes. I also have some statistics from my talk yesterday morning that are more specific to cloud stuff, which we could show if you want; I can pull up the slides. Yeah, here you go. That's super big, wow. Now go to this side, click on view image. Click on it. Is it view image? There. I think it was just trying to point out the scheme that people are working on; I drew up that scheme and threw it at the people who make this happen. Me and a couple of other Fedora contributors are working on this particular part, and we are actually using Tunir in it: basically, listen for the fedmsg that an image is done, download the image, boot it up, test it, and then release the information based on that. The idea is that every two weeks we'll have a page somewhere, our Atomic page, that has the latest image that passed the tests. This is completely work in progress, by the way. Another thing: we had an online hangout demo, and by mistake I ran it against the production data instead of test data, but it worked exactly as it's supposed to work in production, so I was so happy. So these were the three cloud image Koji task IDs; it says success, and if I go to the output, you'll see which tests it actually ran. These are the Python unit test cases which I wrote, converting our normal test cases. Everything worked perfectly, which is good for a demo. Autocloud itself is quite practical: there are a couple of worker processes, which listen for a
message, a fedmsg; they use Tunir to boot and test, and they save the data in a database. Autocloud is that whole group plus a Flask, what do you call it, frontend in front of it, and it will fire off fedmsgs based on the results. I'm mostly managing it, not writing much of the code for this, and I'm getting lots of good Fedora contributors who are working on it from the community side, so that's pretty good. Can you go to the stats graph? So, this is an additional slide I showed you. There are two spikes on the right; those spikes are basically the Workstation Fedora 21 and 22 releases, and below, the red line is Cloud. You can see that going onto the new getfedora.org site really increased its visibility, because it was that little red line. On the previous release, Fedora 20, it went from nothing to something; that's when we first had a cloud download page, before that you had to go hunt for it. So having a cloud download page, and then bumping it up onto getfedora.org, made it a visible line rather than just hugging the bottom. But we don't have the strong upward trajectory which, in an ideal world, I would kind of like to see, and we don't have much of an increase from 21 to 22; it's still fairly flat along the bottom. Of course, with our current system we can't count EC2 at all, so who knows. And these are image downloads, which says very little about whether they launched a million instances or downloaded it once and went, what's a cloud?
It's going to be even more distorted than the ISO numbers; super hard to understand what this means. You were talking about install counting yesterday morning? Yeah. So one thing that we're sneakily doing, which every other popular cloud image in existence also does a little bit: when you log into it, it shows you a message of the day with the number of pending security updates. That also has the incidental effect of causing the images to hit the update server, so we can count them. That will give us more than the nothing we have now. It refreshes in the background, and right now I have the strong suspicion that a lot of the images are booted up and never touch updates, so how would we ever count them? I've been installing it on all my EL server machines. Finding out how many instances actually start that task and check for updates would be great, right? So it should be doing that. One of the other things in the pipeline is updated cloud images; as I said in the morning talk, we already have a ticket open for an updated image, so we would be able to release updated cloud images at least once every month. That's in progress, and when we test those images, we need more people to test and verify that they actually work well. If you switch to the other graph, this is still downloads, but it shows just the Atomic and Cloud Base ones. The green line is the Cloud Base; we do see it move a little. And Atomic, this last release, is way up from here, so that's pretty cool. And on the general topic of marketing for the cloud image: Josh posted to the mailing list a while ago, and I think, especially once we get the two-week Atomic thing actually working, I would like to see us moving towards making Atomic the main offering for cloud. I think we still need the this-is-Fedora-that-runs-in-the-cloud Cloud Base image, because that's useful for a lot of different things, but I'd like to move that off to the side and make the
Atomic one the lead, because I think there's an interesting story around Atomic, and I've really had a hard time telling people why they should use Fedora beyond "Fedora is awesome and here's a cloud version of it." That's not a very good hook for people who don't already know Fedora and why people are so enthusiastic about it; it's not a very good sell. Whereas Atomic has an exciting story about containers, and it's a new offering. My only point would be to put a clear enough message on that page, or wherever, that it's not like we stopped making the base image, and here is how you can download it, because many of those users will still be looking for the plain updated cloud images. That would be my point. We are going to have a very good hook now. Another thing: I'd like to know about registering these official images with the Vagrant and Docker tooling, so that my base image of Fedora is the official one. For Docker I know that if you do, what do you call it, docker pull fedora, you get the official Fedora base Docker image from us. It's just that the process is manual right now; on the Docker side they're not that good about automation, so it's still one person doing it manually. It's Adam right now, huh. And our basic problem with putting cloud images into services is that we are not prepared for it from a legal point of view; most of our stuff about redistributing Fedora on the legal page is focused on older ideas. So, thank you. Sorry, it's just a discussion; I need to stop.