Here's a little bit of housekeeping if you want to follow along. I don't have credits to hand out today, but I have discussed this with my business team and can pass credits back to you after the conference. So if you're participating and want to follow along in your own account, let me know, and I can provide you with a modest amount of credit for running a couple of instances for a short period and figuring out what it's like. I understand that not everybody has access to cloud accounts, so if you would like, I can help you out with some credits today, the 18th of February 2021, for the duration of the conference, and help you get started. It's faster to just use your own account; let me know that you've participated in the workshop, and we can go from there.

I'll also apologize: I'm here in Austin, Texas, and not closer to you, but it looks like a lot of the Czech weather has come this way. You may have read about the inclement weather and the problems we've had, so if I suddenly disappear from the screen, it's because we're having intermittent power outages. That won't happen today. Cool.

So, I started putting this together as a workshop because I want to know about the use of Fedora CoreOS, and it's super exciting to me to understand the immutable operating system a little bit better.
Unlike everybody else, I'm not a daily developer. I'm a solutions architect who works on value stream mapping and participates in discussions around quality of information: making sure that we have a good product line, and that customers understand how they're going to plumb together the things that more advanced technologists have built, so that they have strong solutions for their business models and their requirements. I put this together so that we can explore those basic steps together. I don't want to say that I am an expert, and I'm not saying that the people I'm teaching this to are less knowledgeable than I am. I'm no Colin Walters, and I'm not Dusty Mabe; I'm someone who wants to know more, and I worked really hard to see how much of this I could put together in a workshop and deliver to you today.

There's another thing I think is really important, and I'm trying to take these words right out of Matthew Miller's mouth: as members of the Fedora community, one of the things we have a strong responsibility for is building solutions, not just resting on the laurels of the day-to-day operations of building Fedora as a distribution, and CoreOS, I think, really speaks to that as a directive. Fedora CoreOS is a really mature offering today; it has a long history from Atomic and lots of other projects associated with that, and it is fed by the desktop experience that we have with Silverblue. I think this is a really important thing for all of us to get an understanding of, and to recognize that these are instances we want to build and destroy as quickly as possible. So that's a little bit about why me, why I'm the one doing this.
Well, I need a better understanding of, and a better relationship with, the operating system and the people who are building it, so I wanted to take an opportunity to deliver some content related to that. Not knowing a whole lot about what an immutable OS does, I thought maybe the thing we could do is talk about where it comes from and the experience we expect to have.

First off, Fedora CoreOS provides a series of streams: next, testing, and stable. The URI for the stable stream points at metadata describing the current releases. When you're building a pipeline, a continuous process around an immutable OS and any kind of container-based workload, it's really important to be able to stand up and destroy everything you have in a relatively short fashion, and that's made possible by having a fast way to determine exactly what it is that you want to deploy. Building in a way that leverages tags makes it super easy to collect that information. I wanted to give people with as little experience as possible an opportunity to learn about this, so I started off by just looking at the images: how those images are defined and how they're provided.

So I'm going to quickly share something different. Let's look at that stream. I think I'm sharing that. Yes, okay, good. Just looking at the content that's out there, if I want to look at information associated with the architecture itself, just to see how thick this is, I can look at the top of the stream to show that we're not just talking about one location.
The team responsible for building this, the Fedora CoreOS team, builds images across many different platforms and many different environments, and those are all easy to get to. There is quite a bit of detail here. The first thing I was looking at is in the exercises, and I think I'm just going to share that whole screen. In terms of the exercises, I wanted to look at what we could see from just that one URI. From one URI, we know that we have stable, next, and testing, so we can easily interchange which one we want to get.

Something I thought was very interesting is that if I decide I want to pull this information into a variable, I can look at my release and use tools like jq, integrated with the output of a curl against the stream, to get more detail. So what can I do with this release information? Well, first I can just echo it and get back the result. Looking at that: here's my current next release, most recently built on, looks like, February 17th, just yesterday. And because I know I want to use that information in AWS, which is where I live most of my daily life, I can take that release detail and add it into a filter configuration. So here I am with this first exercise, using the release information to pull detail directly from the AWS API.

Let me just quickly say: I would love it if you have the opportunity to do these exercises with me, and even if you can't follow along, I'd love for you to take a few minutes and go on this thought experiment with me, right?
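As a sketch of what I'm doing on screen: the stream URL is the metadata endpoint published by the Fedora CoreOS team, and the jq paths follow the stream JSON schema as I understand it (the region here is my choice, not anything from the talk):

```shell
# Fetch the stable stream metadata published by the Fedora CoreOS team.
STREAM_JSON=$(curl -s https://builds.coreos.fedoraproject.org/streams/stable.json)

# Pull the release version for the x86_64 AWS image in one region.
RELEASE=$(echo "$STREAM_JSON" | \
  jq -r '.architectures.x86_64.images.aws.regions["us-east-1"].release')
echo "$RELEASE"

# The stream also carries the AMI ID directly, per region.
AMI=$(echo "$STREAM_JSON" | \
  jq -r '.architectures.x86_64.images.aws.regions["us-east-1"].image')
echo "$AMI"
```

Swapping `stable.json` for `next.json` or `testing.json` follows a different stream with the same query.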
So, the AWS API calls allow you to build an instance in AWS, and the call you would use to create an instance is run-instances. Knowing what you've seen now, how would you put together a call like that? First, let me follow this link for you. Now, knowing what we've put together here, let's go back. I have a describe-images call; I want to know if we have any takers. Here's a hint: I'm just looking for a short piece of this, because all we have right now is the ability to look at the artifacts. Inside of my request we have this URI; I'm just going to push that into the chat.

I did something kind of interesting there: I pulled the detail for the images that I knew were associated with the Fedora account, based on a Red Hat article about how the configuration of the accounts works. I know there's a specific owner for the Red Hat images and an owner for the Fedora images. But there's an easier way in that call. What I did originally in the release call was grab the artifacts, but it turns out that if you look at the information that's available, I should be able to get it straight from the stream. If I make a curl, I can modify the object path to get directly to the image for any location. So if I need the image ID to boot or run an instance, how might I go about getting that image, knowing what I know from that request? We're going to go super fast through the material and keep it simple. That's okay, we can do that.

Let's go back and look. Sure enough, there's more detail here than you might have thought. If we look, we can see that just for AWS alone there's a lot of detail.
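A sketch of that discovery call. The owner account ID here is what I understand the Fedora project's AWS publishing account to be, and the name filter is my assumption; treat both as placeholders to verify against the article mentioned:

```shell
# List official Fedora CoreOS AMIs in one region by filtering on the
# publishing account (assumed Fedora account ID) and a name prefix.
aws ec2 describe-images \
  --region us-east-1 \
  --owners 125523088429 \
  --filters 'Name=name,Values=fedora-coreos-*' \
  --query 'Images[].{Name:Name,ImageId:ImageId,Created:CreationDate}' \
  --output table
```

This is the long way around; as noted above, the stream metadata hands you the AMI ID directly, so the describe-images route is mostly useful for auditing what the account has published.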
There's a reason for that, but most importantly, we're looking at the official images published by the Fedora CoreOS team. So now I've been able to easily discover, using stream information published directly by the Fedora CoreOS team, a machine image that I can boot on Amazon EC2 to run a CoreOS instance. And that is all from the one stream, stable in this case, though it could have been next. Using detail that I pulled from here, I could say I want to do this in Stockholm, so eu-north-1, and you can see the special characters in the JSON path have to be quoted. That's not enough. Or is it? Regions? No, it's not. Now we're down to just the release information and the image detail. So there's the image.

If I wanted to run this on AWS, it looks something like this: profile (the profile is not important, but it helps me keep my accounts straight), then image. Let's look back here: the image ID is something I can build, so I can put that in as a dynamic parameter. Let's just add the whole thing. Now I've got my run-instances call with the image ID specified in this evaluation. I told the stream query where I was going, but I didn't tell the run-instances call what region that was in, and it turned out that was in the wrong spot. That's great. Now, I didn't specify much of anything here, really, to just get an instance running. But the question is: is this really what I want to be running, and what's inside? That's the important part.

Here's an instance ID that essentially tells me all the information associated with it. I could describe that, but what I want to say is that this instance should have a public IP address associated with it, I believe, and I'll need to find that now. So now I have these instance IDs. What did I miss?
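A sketch of that boot-and-inspect sequence. The profile name, region, and instance type are placeholders of mine; `AMI` is the image ID pulled from the stream query earlier, and `INSTANCE_ID` is what the first call returns:

```shell
# Boot a Fedora CoreOS instance from the AMI discovered in the stream.
aws ec2 run-instances \
  --profile mylab \
  --region eu-north-1 \
  --image-id "$AMI" \
  --instance-type m5.large \
  --query 'Instances[].InstanceId' --output text

# With the instance ID, look up the public DNS name and address.
aws ec2 describe-instances \
  --profile mylab \
  --region eu-north-1 \
  --instance-ids "$INSTANCE_ID" \
  --query 'Reservations[].Instances[].[PublicDnsName,PublicIpAddress]' \
  --output text
```

The `--region` flag is the easy piece to leave out: the stream query knows which region's AMI you grabbed, but the CLI call doesn't inherit that, which is exactly the mistake made twice in the demo.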
Anybody care to venture a guess? I did it twice: it's the region information. So look at that, I have an associated hostname, a DNS name, with the public address. I should be able to just log right in, right? What do you think might be stopping me? I mean, I just started the instance; it should be easy as pie, right? Looks like not. It looks like I don't have a route, and it also looks like I don't have any user information. So how do I get that? On a virtual instance, or really any kind of preconfigured virtual machine like this that I'm booting, either on OpenStack or in a standard environment with the virt tools, how would I get an instance or an image equipped with additional information? At some point we have to be able to inject that information. What's a great tool for doing that in the context of any kind of cloud environment? Anybody have a guess? I'm going to spoil the surprise: it's cloud-init.

Cloud-init is the way we've done this for quite a number of years now, and it has been shipping by default for a long time. There are interesting things about cloud-init. It has taken on a sort of saving role for figuring out how to inject information into instances, but it has also taken on a sort of crippling role in terms of boot time, and people have long criticized cloud-init's interpreted, parse-at-boot approach and want a way to speed it up. Speeding that up was one of the goals of the CoreOS team, and they built a project called Ignition: a way to do this with a compiled application.
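As a quick sketch of what Ignition consumes: a config starts life as simple YAML that a transpiler converts to Ignition JSON. The containerized transpiler invocation follows the project docs of the time (the tool was called fcct, the Fedora CoreOS Config Transpiler, and has since been renamed Butane); the SSH key string is a placeholder:

```shell
# Write a Fedora CoreOS Configuration (FCC): configure the default
# 'core' user and inject an SSH public key for it.
cat > example.fcc <<'EOF'
variant: fcos
version: 1.3.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA... user@example.com
EOF

# Transpile the YAML to an Ignition JSON file using the containerized
# fcct (Fedora CoreOS Config Transpiler).
podman run --interactive --rm quay.io/coreos/fcct:release \
  --pretty --strict < example.fcc > example.ign
```

The resulting `example.ign` is what the instance actually reads at first boot; the YAML is only ever an input to the transpiler.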
One of the most important parts of building out a configuration environment, like on KVM or with the QEMU images, is to leverage an Ignition file from some location: either by identifying it with a pointer as a kernel parameter, or by having that configuration brought in for you from some metadata location. So really, the thing that brings you from elephants to ants, as has been said in Red Hat circles before (thank you, Landon): I found the image, I have all the details, now what am I going to do? I need to create a configuration that tells me what I'm going to boot and what's going to be in that boot process, and that's the Ignition file.

If you're building a CoreOS image, you need to create what is effectively similar to the cloud configuration in the cloud-init world, cloud-config, which is the YAML equivalent there. You would create that YAML file, base64 encoded, and deliver it to an Amazon EC2 instance, or you would deliver it as a plain-text file; you give it that file and it does the work for you. But of course with cloud-init the application is not compiled: it's parsing that with kind of a slow parser (the YAML parsing is slow), and then the additional effort is slow. On Amazon EC2, the user data is stored at a link-local address. Any idea what a link-local address is? Thanks. A link-local address is an address that is not available to anyone else; it's isolated to a single hop from the instance itself. We're using that to slurp up into the instance what we wanted it to have from that Ignition file. The Ignition file starts off as a simple Fedora CoreOS configuration file, and then you use what's called a transpiler to create the Ignition file.

So let's think about this as an exercise: build your own CoreOS configuration file and then convert it to JSON for use as an Ignition file. What you're going to get is something that looks like this. You'll start off with a file in simple YAML syntax, where first off we're looking at what we're doing in passwd for users: we're creating, in this case, the default user, core, and we're injecting a public key into the authorized keys for that user, which means we'll be able to log in as core. But let's change that up a little bit. It's easy to write a YAML file; what I'd like you to think about is: how do I add a second user, or configure a different user as my primary user? In the build process, I'd like you to add that transpiler to the installation of the instance using Ignition, and then ask yourself some questions.

I think we're wrapping up; we've got just a little bit of time for questions, so let's start there.

Thank you so much, David. There was a question from Pavel in chat about having access to this presentation; I was wondering if you could potentially share it somehow.

Yeah, I'll put it on davdunc.fedorapeople.org, so I'll add it to my fedorapeople space so that you can get to it, and I'll just stick that in chat.

Thank you. If anyone else has any questions, please add them to the Q&A. I don't see any questions at this point; do feel free to reach out on Discord to David. Apologies for calling you Duncan earlier, sorry for that. And Arnold Relp has a question, there's a question in chat, I don't know if you can see it, David.

I can. I'm sorry, I don't see your question... I can see it in the Q&A. So he's asking: thanks, any plans to be able to create CoreOS in Cockpit via a web interface? I don't know, but I can tell you that CoreOS with Cockpit might be a little bit overboard. The things I want to be able to do and configure in Cockpit are done through the Ignition file, and they're expected to be completed at boot. One of the things we talked about that was a big part of the concepts here was the pipeline, and the pipeline build process is done through that Ignition file; passing the details of that Ignition file in a GitOps-style configuration model is a big goal. Does that answer the question?

Yeah, great.
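Tying the pieces together, the pipeline flow described above ends with the Ignition file handed to the instance as EC2 user data, which the instance fetches from the link-local metadata service at first boot. A sketch, with the same placeholder region and instance type as before and `AMI` coming from the stream query:

```shell
# Boot CoreOS with the transpiled Ignition config delivered as user
# data; the AWS CLI base64-encodes the file contents for you, and
# Ignition applies the config from the initramfs at first boot.
aws ec2 run-instances \
  --region eu-north-1 \
  --image-id "$AMI" \
  --instance-type m5.large \
  --user-data file://example.ign

# Once the instance is up, log in as the user the config created:
#   ssh core@<public-dns-name>
```

In a GitOps-style model, `example.ign` would be generated from an FCC file kept in version control, so tearing down and rebuilding the instance reproduces the same configuration every time.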