Well, hello and welcome everybody to another OpenShift Commons briefing. This is the last briefing of 2016; it's been an epic year, and we have a wonderful speaker today, one of my favorite people, Adam Miller, who was one of the key folks on the OpenShift team for many years on the ops side, helping us get it up and running. He has since moved over to Fedora, and today he's going to talk to us about the Fedora layered image build service, one of my most anticipated things. So without further ado, I'm going to let Adam introduce himself and take it away; we'll have his presentation and then open it up for Q&A afterwards. If you have questions, post them in the chat, just remember the speaker can't see the chat while he's talking and demoing. So take it away, Adam.

Thank you, Diane. Hi, my name is Adam Miller. I work on the Fedora engineering team at Red Hat, focused primarily on release engineering tooling, automation, those kinds of things, and what I've been working on most recently is the Docker layered image build service. Today we're going to go through what that is and what that means, but first I want to do a little background for anyone who's not completely familiar with all the lingo and jargon. I want to define what containers are, do a very quick brief history to level set, and discuss what Fedora is targeting today and what we'll probably be looking toward over the next year or so as a lot of this stuff solidifies out in the community ecosystem. From there we'll talk about Docker specifically, and Docker build; in there we'll discuss the differences between base images and layered images, and instances of those images, how they become containers, those kinds of things. I want to quickly define release engineering, because that provides perspective on why the system was designed the way it is and why certain attributes of the system were chosen. Then we'll talk directly about the Docker layered image build service and the building blocks it's built on top of: OpenShift, and a utility called OSBS (osbs-client upstream), which provides a pre-prescribed recipe for how to use OpenShift such that you can turn it into a build service. While it is limited in some ways, because you ignore many very powerful features of OpenShift, for our specific use case we're able to leverage the build component of OpenShift very well. Then Koji container builds; I'll briefly discuss that for those familiar with Koji in Fedora land. Koji is the canonical source of truth for all artifacts produced by the Fedora project. Then the actual implementation of Fedora's layered image build service, and how it ties together with all of our disparate systems and services within Fedora. And then hopefully we'll have time for a quick demo and Q&A. Really quick, before we get into containers: if any questions pop up, maybe in the chat or otherwise, and Diane, I don't know who on the BlueJeans session has the ability to interrupt me, please feel free to interrupt and ask questions if that would prove helpful.

Awesome, wonderful.
Feel free to hop in, and if there's a dialogue we can generate, great.

Okay, so really quick: what are containers? This actually goes back and forth depending on who you talk to, but at the end of the day there is a formal definition around operating system-level virtualization. This is a concept that's been around for quite some time, and we in the greater Linux community like to call them containers; that has become the de facto way of referring to this thing more formally known as operating system-level virtualization, and it has been evolving for years. There's a diagram down here, and basically what it shows is how we're able to wrap up an application, its libraries, and its runtime context, and decouple it loosely from the host OS and the hardware underneath. So the concept is not new, and I would argue, as would a few others out in the community, that the original container was the chroot. If anybody's familiar with Bryan Cantrill, a great speaker with a huge lineage in Solaris and nowadays illumos, he has a great talk that echoes some of this and has very interesting citations back into the history of computer science. Basically, it's the idea of lying to a program about its reality, such that you can have an instantiation of that program in a runtime thread where the context of the world around it is different from the reality of the system below it. And chroot is very primitive: you basically just say "this is your root filesystem," even though it's not actually the root filesystem of the machine it's running on. It lacks many sophisticated features we got later in life: copy-on-write filesystems, quota enforcement, rate limiting, any kind of resource constraint.
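As a minimal sketch of that "lying to a program about its reality" idea, here's what a classic chroot looks like; the /srv/jail path is illustrative, not from the talk:

```
# Populate a minimal root filesystem under /srv/jail (hypothetical path),
# then start a shell whose "/" is really /srv/jail.
mkdir -p /srv/jail
dnf --installroot=/srv/jail --releasever=25 install -y bash coreutils
chroot /srv/jail /bin/bash
# Inside, bash sees /srv/jail as the entire filesystem: no resource
# limits, no namespaces, just a remapped root directory.
```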
So, really quick for those who aren't familiar with the history: FreeBSD had a more sophisticated technology called jails, and then came Linux-VServer, Solaris Zones, OpenVZ, and LXC. Linux-VServer and OpenVZ, while they ran on Linux, never made it upstream and never saw wide adoption. LXC is where things got really interesting. For those not familiar, LXC stands for Linux Containers, and because acronyms are fun, there's not actually a third word; the C is just the end of the word Linux. It was basically a set of userspace tools that wrapped kernel namespaces and cgroups. Those are capabilities provided by the kernel, implemented over the couple of years leading up to 2008, and they gave us the ability to do many of these sophisticated things we want. In 2011, systemd grew nspawn containers; systemd-nspawn was originally created just for testing systemd, so the developers didn't have to reboot computers all the time just to test the init system, and why that gets interesting I'll tell you in a minute. In 2013, dotCloud released Docker; dotCloud is the company now known as Docker, Inc. They originally based their implementation on LXC; it has since been replaced by libcontainer, and nowadays it's built on top of runc and containerd, so it's not quite as monolithic in nature. Then in 2014, CoreOS released Rocket, or rkt. Rocket was originally an implementation of their appc specification and container image format, and it was built on top of systemd-nspawn, which was very interesting: somebody took what was originally a primitive thing created to test an init system, added the necessary features, and extended it to a much broader use case. Then in 2015, and this is where the history lesson comes to a point, the Open Container Project, now known as the Open Container Initiative (OCI), emerged as the culmination of a lot of different technology companies in the ecosystem coming together to standardize on container formats and runtimes, such that any person, company, or open source community project could build tooling or an implementation that produces container images and runtimes that can then run on any other implementation of the specification. Also in 2015, runc came out; this was the standalone tool for spawning containers per the OCI spec, donated to the community by Docker. More recently, containerd was created as the piece that manages runc (or other OCI runtime) containers underneath it. So if you're running modern versions of these technologies, chances are you are running an OCI-specified container image, and it's just being handled by various technologies under the hood. Those are the building blocks for what we now consider Linux containers, with Docker being the de facto standard, the most popular thing in the ecosystem these days for the build pipeline at minimum; beyond that, many people also use it as a runtime. That is what Fedora targeted originally: from the community space, that's what our users are using, that's what users want, and that's what the build pipeline is most familiar with. There's been a lot of really interesting tooling coming around, because this isn't just Dockerfiles specifically; there's the ability to build with, you know, CoreOS's rkt and the appc tooling and all these things, and because they're all interchangeable thanks to the OCI, we have a lot of cohesiveness in the ecosystem. But with Docker being the most popular one in Fedora's user space, that's what we targeted originally, and that's where we're going to go from here. So, Docker itself has a client-server model. There is the Docker engine, which is the daemon, a single point of entry; it has language bindings, and it's an API that can be accessed locally or remotely. A concept we need to understand is that containers are instances of images, very similar to how Infrastructure-as-a-Service cloud environments have images and then instances of those images; you create instances based on an image. In Docker space, images are built in a standard way using a Dockerfile. For Fedora's context, as well as many other institutions', there is SELinux support upstream in Docker, provided by Mr. Dan Walsh; he's known in the community as Mr. SELinux and has been the curator of that for many years now. There are also a lot of pluggable backends for isolation beyond SELinux sandboxing.
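To make the image-versus-instance distinction above concrete, a quick sketch with the standard Docker CLI; fedora:25 matches the release discussed later in the talk:

```
# Pull the image once...
docker pull fedora:25
# ...then create as many container instances of it as you like.
docker run --rm fedora:25 cat /etc/fedora-release
docker run --rm -it fedora:25 bash   # a second, independent instance
```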
That extends out to storage and networking as well: you can change the backend that provides your storage, your networking, those kinds of things. So, conceptually there is a difference between a base image and a layered image, and that needs to be defined, because the build system we've created, which is now available to Fedora contributors, is for layered images. I'm sorry, I'm getting feedback; somebody said we've lost a link or something. Can we mute folks online? Okay, awesome, thank you. So, layered container images will basically be built by the greater community, while a base image traditionally comes from the distribution, much like operating system releases do. If you see an ISO image for installing an operating system out in the world, and you download and install it, it normally comes from the distribution project's release engineering team. They have a process in place that sends it through QA of some sort; it goes through validation, and you end up with a signed and verified artifact. That's the released entity. Well, the base image for Docker, and for containers in general, is very similar: it's something you'll see come from distributions directly, and this is no different for Fedora. We have a process internally within the release team where the base image goes through Koji, our internal build system; it's produced there, and then we import it into a Docker registry and provide it to community members. Layered images, on the other hand, are for when you need to pull in some functionality on top. Oh, I just realized there's a typo on my slide: in the center you have a container, and it says Fedora 25 base image; we'll say Fedora 24 base image, such that the app layer and the base image match one another. Bummer. Okay, so the idea behind this is that you have your base image, and your base image provides your base runtime, much like your operating system does in a traditional environment. So if you need application libraries and those kinds of things, you can add them.
We can start by having building blocks for the runtimes of dynamic programming languages, or even just build tooling for compiled languages. So we could have Python, Ruby, Node.js, PHP, and all those things as layered images built on top of the base image, and then other images, for other software components or services that need those things, could be built from them. We can cascade layers upward, providing more and more functionality as we go higher in the stack. The reason we want to do that is that it lets us share the lower layers: we don't have to duplicate storage for the base image, we don't have to duplicate the build, and the end systems don't have to carry that content multiple times. It gives us a nice way of managing the relationship between the layers instead of encapsulating everything in each image, and it also ties back to the microservice architecture, the idea that each container should do one thing and provide only as much functionality as necessary to perform that one thing. We're going to try to mirror that along the way as best we can. There we go. So, really quick, the Dockerfile, for those who aren't familiar: a Dockerfile is almost syntactic sugar on top of a shell script. There's a handful of directives available, and they mean different things, but at the end of the day it has the flexibility of a shell script; you can do all kinds of things to what becomes your container image via RUN stanzas, which are just shell commands executed within the context of a layer during the build. The first line is FROM, and that tells your image what to build from: that is the base image I build on top of. Your base image does not actually have to be a base distribution image; it can be another layer, so you can build on top of those layers. This one just starts directly from fedora. You can specify a MAINTAINER, so if something goes wrong, somebody can look in the metadata and know where to report issues. Then, as part of the build process, we can RUN commands that perform actions. The items in all caps on the left are predefined directives; the Docker project documentation explains them, and there are many more than what I've listed here, this is just a small example. Then we can ADD some startup scripts, run actions on those, provide a default CMD or ENTRYPOINT, EXPOSE ports, and so on. You can also add very interesting storage directives; there's a lot you can accomplish in a Dockerfile that will provide a very nice default setup and deployment.
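The slide itself isn't reproduced in this transcript, so here is a hedged reconstruction of the kind of small Dockerfile being described; the package and script names are illustrative:

```
FROM fedora
MAINTAINER Jane Maintainer <jane@example.com>

# RUN stanzas are shell commands executed in the context of a build layer
RUN dnf -y install httpd && dnf clean all

# ADD copies files from the build context into the image
ADD start.sh /usr/local/bin/start.sh
RUN chmod +x /usr/local/bin/start.sh

EXPOSE 80
CMD ["/usr/local/bin/start.sh"]
```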
Also, there are wrapper programs out in the ecosystem that use labels: you can apply LABEL data, and those wrappers will use it to perform an action when you run the container in a certain context. Label data can be used inside of OpenShift, so you can specify certain labels and actions will take place when those container images are used inside an OpenShift environment. Similarly with the atomic command-line tool from Project Atomic: if you do an atomic install of a containerized application that has certain specified label information, those labels will be actionable. So it's very flexible and very powerful; there's a lot you can do. But the main thing I want to point out here is the FROM line, and the fact that all of these things run inside the context of the image itself: each step creates an instance of the image, which runs inside a container, and the result is saved back as a layer. Then you docker build, and you apply a tag to it. Tags are arbitrary strings, and they're two-part: one part is effectively the name, and the other is, ironically, called the tag. So this is fedora-httpd, and then you could put a colon and some tag after it. That tag is similar to a git tag, for anybody familiar with git versioning; the trick, however, is that here it's common practice to change which hash value these tags point to. That does happen sometimes in git, but it's generally frowned upon there; here it's common practice, so just note that the thing a tag points to will sometimes change, namely latest. There's a special reserved tag called latest, and if you don't provide a tag, it's inferred: I could have written the command at the top as "docker build -t fedora-httpd:latest" or just left it as "docker build -t fedora-httpd" and it would do the same thing.
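A quick sketch of the tagging behavior just described; the extra v1 tag is illustrative:

```
# These two builds produce the same result: "latest" is inferred.
docker build -t fedora-httpd .
docker build -t fedora-httpd:latest .
# A name:tag pair can be repointed later, which is why "latest"
# (and any tag) is a moving target in registries.
docker tag fedora-httpd:latest fedora-httpd:v1
```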
So, with our whirlwind tour of containers and Docker done, let's move on to release engineering. What is release engineering? Effectively, it's making a software production pipeline that is reproducible, auditable, definable, and deliverable; and, just as a note, it should be automated to the greatest extent possible. There's a very nice formal definition, the most formal definition I was able to find, by Boris Debic from Google. I won't read it to you because it's a little lengthy, but it's very well stated, and if anybody's interested, please feel free to read it.

Moving on: OpenShift. This being OpenShift Commons, I assume everybody here is familiar with OpenShift, but I like to run through this for the sake of covering all the bases of the things that go into our build system. For those who are not familiar, OpenShift is an open source container orchestration platform, and there are two main pieces of the project: OpenShift Origin, which is the upstream, community-led and community-powered release, and OpenShift by Red Hat, which is the productized version that people often run in their data centers and that gives them somebody to call if something goes bump in the night. Fedora, being an open source project that is entirely community-based and community-powered, focused directly on OpenShift Origin, and we have participated upstream with Origin, as well as with its installer, the openshift-ansible installer, on various things to make sure we can enable OpenShift Origin on Fedora. In this diagram I want to focus on the green section, on the left side, second from the bottom: build automation. This is the piece we're really interested in. Oops, not yet, sorry. So OpenShift has the concept of a build, a primitive type, a REST API endpoint, and there are different strategies for those builds. One of those strategy types is "custom," and for what we do in Fedora for layered image builds, we provide a custom build strategy, such that we can define what it will run and the kinds of triggers or actions it will respond to. What's very powerful about this is that it has given us as much flexibility as we could ever have asked for, as well as the ability to very rapidly scale our build infrastructure: once this is all set up, at the point where we decide we need more capacity, thanks to the openshift-ansible playbook repository we just add a few systems to our inventory file, re-execute our playbooks, and it's done. What's really cool is osbs-client, which is what defines that custom build strategy type. It's effectively a template for that build, combined with a Python API that lets us tie it into a lot of other things; for those not familiar, most of Fedora's infrastructure is written in Python, so we were very grateful for that. I'm going to go through a bit more and then round back to that.

Again, because this is an OpenShift Commons briefing, I suspect most people are familiar with the next couple of slides, so we'll run through them quickly. OpenShift is a container orchestration platform built on top of Kubernetes; for those who didn't know, the mystery has been unveiled. OpenShift participates heavily upstream with Kubernetes. Kubernetes is the basis, and OpenShift adds a lot of functionality on top of it for build pipelines, CI integration, application lifecycle management, those kinds of things, but at a broad overview the main architecture is very similar, because OpenShift is built on Kubernetes. This is basically the breakdown: a client talks to a master over a REST API; the master has a scheduler that talks to nodes; your containers run inside a thing called a pod; and pods are scheduled on nodes. The details cascade out in many directions from there, but that's the base overview. From there, OpenShift, built on top of Kubernetes (I should probably rearrange those two slides), has a bunch of advanced features, and the build component is what we're directly interested in. There's also a very nice REST API, a command-line interface, IDE integrations, a web UI, an admin dashboard, those kinds of things. The triggers on builds are very interesting to us, because right now automated rebuilds are being written upstream, and we're going to use a combination of our custom build strategy and OpenShift triggers to make that happen; I'll explain more about that in a minute.

So the Docker layered image build service is built using a handful of components: OpenShift Origin, which we just covered; a tool called atomic-reactor, covered in a minute; the osbs-client API, which we touched on briefly and will talk about again; and then a registry of some sort, which is where candidate images, stable updates, and so on live. In Fedora's implementation we actually split this out: we have two registries.
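As a rough idea of what that custom strategy looks like at the OpenShift level, here is a hedged sketch of a custom-strategy BuildConfig; the names, builder image, and environment variables are illustrative, not Fedora's actual template:

```yaml
apiVersion: v1
kind: BuildConfig
metadata:
  name: cockpit-layered-build            # hypothetical name
spec:
  strategy:
    type: Custom
    customStrategy:
      from:
        kind: DockerImage
        name: buildroot:latest           # the build root that runs atomic-reactor
      exposeDockerSocket: true           # let the build root drive Docker
      env:
      - name: SOURCE_URI                 # illustrative input osbs-client templates in
        value: "https://src.example.org/docker/cockpit.git"
  output:
    to:
      kind: DockerImage
      name: candidate-registry.example.org/f25/cockpit:latest
```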
The osbs command line is how you then talk directly to OpenShift, again via those Python APIs, and with our wrapper around this custom build strategy, it gives us the ability to provide a command line that offers end users who want to build container images a very focused entry point into what has effectively become a build system. Atomic-reactor is actually the tool that performs the build, and, I'm pretty sure... okay, OSBS goes first and then atomic-reactor, all right, sorry. So, the OpenShift build service, OSBS: this is where we take advantage of OpenShift's built-in primitive for the custom strategy, and we provide our templated build config, which then gets populated by osbs-client. We rely on OpenShift for all our build scheduling, which is what I talked about earlier for scalability and those kinds of things; whenever we want to expand, it's quite simple with the way the whole system has been built, and this presents a well-defined component to developers and builders. OSBS itself enforces the inputs, which rounds back to the concept of release engineering and why we built the system this way instead of letting it be the wild west. OSBS does input sanitization and enforcement, and Fedora's system is even a little more locked down than upstream OSBS, which I'll explain as well. We can say where the git repo for the sources comes from and which git commits are used, and all builds are centrally logged. Then there's the build root: inside OSBS, each build happens on top of a "build root" Docker container, and what's very interesting about that is that's where atomic-reactor runs. Atomic-reactor actually performs the build itself; it has plugins and a lot of very interesting features, which I'll talk about in a moment. Inside the build root, where the build actually happens, things are firewall-constrained, we can run an unprivileged container runtime with SELinux enforcing, and the inputs are sanitized. So, consider curl: for those familiar, the curl command-line utility on Unix-style systems basically lets you grab a URL and whatever's at the end of it, and a common paradigm that's been floating around for a while is the idea that you should curl some web URL and pipe its output directly to a shell, as root, and that's your install process. And that's fine; it's fast, it's easy, and a lot of people have had great success with it. The kicker, from an auditable-toolchain aspect, is that's not an attribute we desire, because what happens if that endpoint disappears? Let's say the URL was, I don't know, awesomeproject.io: if you curl awesomeproject.io/install and pipe it to a shell and it runs and does some things, number one, we don't have a good understanding of what those things were unless we've somehow stored the install script and whatever artifacts it installed in the lookaside cache or some kind of archive, so we can audit it later. Then we have to version it, and we'd almost have to try to police the internet, and that just gets unwieldy.
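To illustrate the pattern being discouraged versus the vetted path (awesomeproject.io is the speaker's made-up example, and the dnf line is an illustrative stand-in for installing curated RPMs):

```
# The convenient-but-unauditable pattern:
curl -sSL https://awesomeproject.io/install | sh

# The auditable alternative inside a Dockerfile: install only vetted,
# versioned, signed packages from known repositories.
RUN dnf -y install awesomeproject && dnf clean all
```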
So, for lack of a better term, we have curated content that has been deemed okay to use; we already have that audit pipeline, that trail of information about it, and we know it's been checked and patched, CVEs addressed, all those kinds of things. So we make sure all inputs are vetted. From there, OpenShift image streams are used as input sources for the builder, so when image streams get updated they're automatically pulled. I'm sorry, I'm reading a comment in the chat. Oh, cool, okay: somebody says they don't have something quite as automated yet, but they are migrating an app that's a combination of MariaDB, ActiveMQ, and Unison. It could be interesting to chat about approaches and see if maybe we can share lessons learned and offer improvements to one another. Okay, so from there, the service utilizes OpenShift triggers to spawn rebuild actions. This is not yet in production, it's currently being developed, but we'll have the ability to set trigger actions inside OSBS such that, let's say the base image, in our case Fedora, say Fedora 25, our latest stable release, gets an update. Say we updated our Fedora 25 base Docker image because glibc had some messy security vulnerability and we had to issue an update for it. OSBS would notice: hey, the base image was updated; find out which layered images use it as their FROM and rebuild all of those; then take those, find out which layered images require them, and rebuild all of those in turn. So on the back end of the system, you should be able to update anywhere in the stack of layered images and it will cascade-rebuild, so that the next time your end users or end systems pull those images, they get a fully updated stack of layered images, and their running container instances are completely patched and updated without the need to go around running dnf update all over the place; effectively, that gets done inside the system.
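A hedged sketch of the kind of OpenShift trigger that drives this cascade, using standard BuildConfig trigger syntax; whether Fedora's templates express it exactly this way is an assumption:

```yaml
# In a layered image's BuildConfig: rebuild whenever the image stream
# tag it builds FROM (here fedora:25) gets updated.
spec:
  triggers:
  - type: ImageChange
    imageChange:
      from:
        kind: ImageStreamTag
        name: fedora:25
```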
So, atomic-reactor, which I alluded to earlier: it is a very powerful single-pass Docker build tool, and it automates all kinds of things. I have a short list here; it does much more than this, but these are the big-ticket items Fedora is using it for. It can push images to a registry when they build successfully. It can inject yum/dnf repositories inside the Dockerfile, for when you need a custom repository that provides the content the build is looking for. In Fedora's context, this is an internal local mirror of all of Fedora's repos; external users don't necessarily need that, and it's the exact same content available on the public mirrors, but our builds take place on systems inside the protected network, the isolated environment, so that content needs to be exposed into the Dockerfile, into the build environment, without Dockerfile maintainers having to supply that information themselves and then figure out a way to remove those repositories at the end of their build process. In similar fashion, it can change the base image, the FROM, in your Dockerfile. The idea, again with internal registries, is that people can keep the public-facing registry in their FROM stanza, and the base URL is replaced with the internal one during the build. So you can say "FROM fedora" or "FROM registry.fedoraproject.org/fedora," and the internal registry inside the build system is substituted; it's all the same, because we have checksum validation all the way through, so we can verify that the images available externally match the registry available inside that isolated build root. It can also run simple tests after an image is built ("run simple" is not supposed to be a new bullet point there), so when an image is built, we can actually run tests internally with atomic-reactor. And there are a lot of plugins available; I meant to put in a note about the plugins. There are somewhere in the ballpark of 40 plugins currently available for atomic-reactor, and which plugins to run is specified as part of the templated build specification for our OpenShift custom build strategy. There's documentation on how to customize that: if you want to add a new plugin, you can add it and just change the config for site-specific implementations, without needing to modify the global configuration. We can do gating of updates as well; that's not an attribute of atomic-reactor but of OSBS. When builds are done, they land in effectively a candidate registry or under a candidate tag. You can do either; it's a configuration option, you tell the system whether you want a tagging strategy or separate registries, and then at your discretion you tag and move images around between registries.

Fedora's implementation is a little more complicated, because we tie in a lot of different systems; this is actually a somewhat stripped-down diagram, and at the end of my presentation there's a set of links where the full picture lives. Basically, Fedora layered image maintainers interface with a thing called dist-git, and for those familiar with how current RPM maintainers interact with Fedora, this will seem very natural. From the RPM standpoint, dist-git is where you put your spec file, your patches, and a checksum reference to the source code you use; the source code itself is uploaded to the lookaside cache and referenced inside the build system, and to build an RPM you issue the command "fedpkg build." Well, for containers, we have our Dockerfile inside dist-git, along with our service and init scripts, tests, documentation, those kinds of things, and we do a "fedpkg container-build." Much like our RPM builds happen in Koji, the container builds then happen in Koji, with the exception that container builds technically don't happen in Koji itself. There's functionality in Koji called content generators, which means an external system can actually generate the content; however, there is a metadata requirement, a specification of what information must be provided, such that Koji then knows everything required to rebuild the artifact it stores.
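A sketch of the parallel workflow just described, using the commands named in the talk; cockpit is the component used in the demo below:

```
# RPM maintainers do this today:
fedpkg clone cockpit && cd cockpit
fedpkg build

# Container maintainers do the same thing against the docker namespace:
fedpkg clone docker/cockpit && cd cockpit
# ...edit the Dockerfile, fedpkg commit...
fedpkg container-build
```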
That metadata is then re-imported, because in the event that OSBS were to somehow disappear off the planet, we could reinstall it, use that metadata as input back into OSBS, rebuild, and get the same resulting image, so much so that we can actually audit it and say it contains the same things it did before. So we do our builds via Koji; Koji schedules them in OSBS; OSBS does the build. And you'll notice two arrows going up out of OSBS, to the registry and to Koji: Koji stores the single-file, flattened representation of the image, and OSBS also uploads the resulting image to our registry implementation. Like I mentioned before, Fedora has two registries: registry.fedoraproject.org and candidate-registry.fedoraproject.org. Both are publicly available; it's just that the candidate registry is where images land as soon as they're done building, so they can immediately be tested and immediately be downloaded by the broader community if they want to participate in the testing, and once validated they're moved over to the stable registry and we release them.

So, dist-git: distro git. Each branch is a Fedora release, and in the future that will also map to Fedora Generational Core as we move toward the Fedora Modularity project and that grand vision of breaking the operating system down into something modular. Also to note, modules will be distributed in multiple ways, and one of those ways will actually be container images, so this system will be used to produce things that become core building blocks of the next generation of Fedora. fedpkg is the Fedora packager and maintainer helper tool: manage dist-git branches, initiate builds, those kinds of things. Koji, as I mentioned, is the authoritative build system; live USB images, DVD ISOs, everything is built in there, and now we're adding container builds. koji-containerbuild is a plugin to Koji that allows us to orchestrate the builds between Koji and OSBS. And then, I'll maybe make a rash assumption that everybody here knows what a Docker registry is, but it's just the upload and download destination, the point of distribution for container images. So, we'll quickly do questions, and if there aren't many, or if we're short on time, I'll also do a quick demo.

Right. I'm not seeing any real questions in the chat room here. Folks, if you would like, I will unmute you all, and if you have a question you can unmute yourself and ask. So I'm thinking maybe run through your... well, there's... my goodness, could you have more references? I don't think so.

I think my references page is a little outdated; my newer one has a second column. But I'll happily make this available in PDF format.

Yes, send me the PDF, absolutely; I'll attach it to the post with all of this. I'm not seeing or hearing any questions, I'm just unmuting everybody, so why don't you run through your demo, because we're getting close to the end of the hour.

Okay, let me share... they can ask a question, right? Just a moment, I'm sorry, I'm changing a couple of things around. All right, let me go ahead and share this terminal. Okay, can everybody see a terminal?

Yes, and I love the font size.

Good. Okay, that was my follow-up question: how's the
font size? Because it is very large on my screen, and I was hoping it would be good enough for folks on the video. Okay. So, if you notice my current path, I'm in my home directory, and I have a layout for fedpkg, because fedpkg, sorry, dist-git, is now namespaced. (I probably shouldn't have named that directory the way I did, but anyway.) So we'll go ahead and remove this directory, and if I do a "fedpkg clone docker/cockpit," this will get the cockpit dist-git repository from the docker namespace, because there's also the rpm namespace. We'll just clone that really quick, and now we have cockpit. For those not familiar, I'll show what Cockpit is in a minute, when we go over to my web browser.

We love Cockpit!

Yes, I'm a big fan; it's very cool. I went to the GitHub page earlier, and, no, no, these are pre-staged. Okay, so in here you will find our atomic-install, atomic-run, and atomic-uninstall scripts; these are just small scripts that do things for the atomic command, which, like I mentioned before, is a wrapper. And there's also a Dockerfile. If we go ahead and look at the Dockerfile, in any editor of your choosing (use whatever you find productive), there's a handful of things you'll notice. First off, we have our Bugzilla component, because we need to know where to find this thing in Bugzilla. Then we have our name, and you'll notice something interesting about it: there's $FGC, which stands for Fedora Generational Core, and it's actually inherited from the parent image. This is something we have in our Fedora guidelines for container images; it's how we namespace these things in our registry, and it also provides context for which underlying Fedora generation or Fedora release this correlates to. Then we have our version, which is pulled from an environment variable, and our release, which is half pulled from the release environment variable and the other half from the disttag; again, $DISTTAG is inherited from the base image. For those familiar with building RPMs, this is the same idea; it's just that Docker does not have a mechanism similar to RPM macros, so we move things around, implement it with environment variables, and apply them to labels, so it's persistent in the metadata Docker carries around with the image, and we can then use it. You'll see here that we take the atomic install, uninstall, and run scripts and ADD them into the container, and then we have labels that do something with those in the end; that label data is what gets told to the atomic command. As this evolves in the Fedora project, we're going to develop guidelines around adding functionality for both the atomic wrapper command and OpenShift, because from the standpoint of enabling containers on a single system, atomic install is the direction we want to go, but for enabling containers in a scaled environment, OpenShift is the direction Fedora as a project is going, as, you know, a fellow open source project that we work with upstream as much as we can. We'll see more of that unfold as we look forward.
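A hedged reconstruction of the kind of Dockerfile being walked through, based on the description above; the exact label names, values, and script paths are assumptions and may differ from the real cockpit Dockerfile:

```
FROM registry.fedoraproject.org/fedora:25

# $FGC and $DISTTAG are inherited from the parent image's environment
LABEL name="$FGC/cockpit" \
      version="125" \
      release="1.$DISTTAG" \
      BZComponent="cockpit"

RUN dnf -y install cockpit && dnf clean all

ADD atomic-install atomic-run atomic-uninstall /usr/local/bin/

# Labels consumed by the atomic command-line wrapper
LABEL INSTALL="/usr/local/bin/atomic-install" \
      RUN="/usr/local/bin/atomic-run" \
      UNINSTALL="/usr/local/bin/atomic-uninstall"

EXPOSE 9090
```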
Those labels are able to coincide with one another because there are no name conflicts or anything like that; we just take it all in stride and get it done as we're able to. So anyway, we can make edits to this, let's just bump the release a little bit, and then we do a fedpkg commit ("bump release for demo build"), and then we can fedpkg container-build. That schedules it out there, and it will take anywhere from, I don't know, two to five minutes to run, depending on how intensive a task it's doing. However, I already did one up here so we'd have it done. I ran the fedpkg container-build, it created this task, we watched it, and in the end we popped out with a Koji build, a metadata import, and a set of repositories the image can be pulled from. You'll notice there are many, and the reason is that we want to allow users as much granularity as they want. The first one is a unique identifier, and that is purely used for iterative testing, for every single build, not necessarily builds that could be candidates for release. We have an idea inside the system called a scratch build, which is effectively a throwaway build: it doesn't need an actual version applied to it, but we need to build it so we can test it, and it will only be given this tag in the repository; it will not receive any of the other ones. And all of these are live; I mean, if anybody cares to type this out on your command line, you can go ahead and docker pull these images right now and play with them. Then we have the full version-release-disttag identifier, which is for testing the actual release candidate builds; we want to be able to identify them all the way down to the release number. Then we have the major version identifier without the release: if somebody out in the ecosystem just cares about, say, version 125, but is not concerned with which release of it (they're not worried about what patch level it's on; they want the latest of that version no matter what), then they can reference that. And then latest is just the default: you want the latest version-release of whatever this component is that's being distributed as a Docker image.
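A sketch of that tag granularity; the repository path and exact tag strings are illustrative, patterned on what the demo shows:

```
# Unique build identifier: every single build, including scratch builds.
docker pull candidate-registry.fedoraproject.org/f25/cockpit:f25-docker-candidate-20161215120000
# Full version-release-disttag, for testing release candidate builds:
docker pull candidate-registry.fedoraproject.org/f25/cockpit:125-1.f25docker
# Major version only: the latest release of version 125.
docker pull candidate-registry.fedoraproject.org/f25/cockpit:125
# Default:
docker pull candidate-registry.fedoraproject.org/f25/cockpit:latest
```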
So we will, just for grins, go ahead and pull it... ah, that did not copy. That's cool... yay, we're getting data. Okay, so really quick, I'm going to switch over to my web browser. Okay, pretty sure it won't stay tiny. So, really quick, I said I'd mention what Cockpit is: it's a very pleasant web UI that gives you all kinds of fun functionality, nice graphs and metrics, and things for interacting with your Linux systems. It can manage containers, it can manage system storage and container storage, you can interact with networking, and I think you can actually do application deployment on both Kubernetes and OpenShift, if you have a pre-existing application you want to deploy. So it's very cool; I highly recommend everybody check it out.

I think we need to get a briefing on that sometime soon; we'll get that one in the new year. Yeah, let's look into that.

Cool. So this is Fedora's build system, koji.fedoraproject.org, if anybody's not familiar. Really quick, there we go: state closed, buildContainer. This is the parent task, and the reason there's a parent and a child task is that right now, today, we're only targeting the x86_64 processor architecture, because that is primarily what upstream Docker and most OCI-compliant runtimes target directly. However, we do produce alternative-architecture base images in Fedora, and we'll later be adding more architectures here as well, namely PowerPC and ARM, both 32-bit armv7 and aarch64. That's something we aim to target, so each build will eventually happen on all supported architectures, and that's why there's a parent and then a child. Okay. Oh, really quick, show results: this is what we saw there, our list of repositories and that kind of thing. So we'll go into the createContainer task, and this again gives us our results, the repositories, the task ID, and the incremental log. The reason it's titled "incremental" is that it actually is updated incrementally; you can tail it in the web browser and it will scroll and offset. This is the log file, and it's very verbose, on purpose. I'll slowly scroll through here; I don't expect anybody on the video to fully understand what the log file is telling you, or necessarily to be able to read it, because it's a lot of text scrolling relatively quickly, but the idea is just that we have a lot of information. And you'll notice here: this is the Dockerfile we saw earlier, except here we have the SHA checksums. Like I said earlier in the presentation, we actually do checksum verification during the build; that's what's being done here, instead of just pulling from some arbitrarily named URL. And here you see it's actually doing the install portion; this should be pretty familiar to anybody who's done a yum install or a dnf install of a package on an RPM-based system. And then all of the plugins run down here, doing fun stuff. Okay. So: very verbose logging, incremental logs; if something goes wrong, you can find out what it was. When it's all said and done, you also have a link to this, our content generator metadata import, and this is what rounds back to the release engineering concept of allowing us to actually audit and verify these things. Technically this doesn't have RPMs, because its artifact comes out as a tarball, but we can go into the info for this tarball, that's our archive, and if you download it, you can do a docker import and it will give you a resulting image; the name of the image will just have a checksum on it.
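A sketch of that import step; the archive URL and the resulting ID are illustrative:

```
# Download the flattened tarball that Koji stores, then import it.
curl -O https://kojipkgs.fedoraproject.org/.../cockpit-docker.x86_64.tar.gz
docker import cockpit-docker.x86_64.tar.gz
# docker import prints the new image's checksum-style ID, e.g.:
# sha256:4f8b1a6c...
```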
So this is all the information about it: all kinds of information about the layers involved, their checksums, the git URL it came from, the git hash that was used, those kinds of things, so we can actually verify where the inputs came from. Here's the list of files included, as well as the installed RPMs. Right now, the only things Fedora allows into container images are RPMs. We hope to expand that later, you know, Ruby gems, pip installs, PECLs, PEARs, Node.js npm packages, everything, but we need to find a good way to curate that content, so we can have a trusted, verified source of those things instead of just opening the floodgates to the internet. But we have this manifest, and we can go through it; you'll see this is one through 50 of 178, so there are 178 RPMs installed in that image. What's great about this is it gives us the ability to go back and, if we need to, recreate our repository with just these packages, rebuild this image from the checksums in the previous page's metadata, and reproduce the exact same image, such that we can verify and audit everything that goes into it, in the event we ever needed to. And that, in a nutshell, is the demo.

Awesome. That nutshell is a lot to digest, and 178 RPMs is a lot of RPMs; it's phenomenal what goes into these things. I think we've reached the end of our hour.

Yeah, I ran over by one minute; I apologize.

That's okay; we started two minutes late, so it's actually dead center on time. You did great. For those of you who are listening, I'll grab the slides from Adam and post this on blog.openshift.com shortly, probably not until after the Christmas shutdown break we do here at Red Hat. We'll be back the first week of January, and I think that's when our blog web editor will have this up and running, but I'll make the video available to you privately in case you want to review it over your holiday break, because, you know, it is "the night before Christmas" kind of reading. So thank you so much, Adam, for doing this today. I'm looking forward to seeing some discussion about how other people are implementing their build services, and seeing if we can learn from each other and make some best practices around this, and maybe even have folks reuse the work you're doing in Fedora land. So thanks again; we really appreciate you taking the time today, and have a great holiday.

Absolutely, I appreciate it, thank you. And I would love to get conversations going, because in the world of best practices around this stuff, it's still evolving and there's a lot to digest. So thanks again, and we'll see you all in the new year.