Hello. Please welcome our next speaker, Adam Miller from Fedora Engineering at Red Hat, who will be talking about deploying the OpenShift Build Service, aka OSBS. Thank you.

Oh, clicker. Okay. So I want to start with what the presentation is going to cover and the format we're going to go through, and if any of this does not look interesting to you, please feel free to go find a talk that is interesting; you will not offend me. This should be fun for everybody. I'm going to start off with a fun history lesson, a little bit of a narrative of how this came to be in the Fedora space and the lessons learned there, which is kind of the premise for the talk. From there I'll go a little bit into the technology involved. I want to take a quick moment to define what containers are and what we mean by container build images, and layered images versus base images. I'm going to very quickly go over what OpenShift is and describe it from an architectural standpoint and why that is important. I also want to define what release engineering is, because that will provide context around why certain decisions were made the way they were and why the image build system was designed as it is. And then I'll actually get into what OSBS is, the Koji container-build plugin, and atomic-reactor, and how those all feed into and power what became the layered image build system.

So, the history lesson. Once upon a time, a very long time ago — like 18 months, so not that long ago — Matt Miller, our fearless project leader, randomly said in a meeting somewhere, "Hey, so there's this open-source layered image build system; we should deploy one for Fedora." I don't remember the exact quote, but that's roughly the paraphrase. The misconception was that this was a finished product that somebody had completed and released upon the world, ready for use. So we sat around, talked about it, looked over the architecture design, discussed it at a high level of abstraction, and said it was probably going to take about four weeks to accomplish: we first need to deploy it in stage, we need to figure out where the integration points are for our message bus and our build system, make sure it ties into our testing pipeline and our release tooling, that kind of stuff. Four weeks. This was an incorrect assumption, because the layered image build system and its various components were, rightfully so, following the release-early, release-often model.

So this is what actually happened. We started off with a whole bunch of GitHub issues and tickets, and in the early days, when we as a Fedora group were not yet familiar with the code base, we were frantically submitting issues because we had no idea what was going on and why nothing was working the way we thought it would. It wasn't so much that it wasn't working; it's that we were trying to introduce new use cases and new workflows to an environment that the team that had originally written it had not taken into account or was not necessarily targeting. So as I flip through here — wow, that jumped — okay, we have a bunch of issues, mixed in there are pull requests, more of those, yay, and we have a production environment now. We actually released just this last December, so it's been live for about a month.

Phase one was the single-node builder, and this was finished last November — and I don't mean three-months-ago November, I mean a-year-ago November. It went pretty well, it went pretty quickly, things worked.
However, then image format v2, registry v2, and manifest v2 happened, and this broke the original implementation. The reason is that fundamental components of the API, of the specification for the image, changed, and they changed in non-backwards-compatible ways. Then, during the journey of refactoring and reorienting the build system to handle those, more changes happened that were also incompatible. But because the changes to the architecture and the code had already been made to account for the fact that we'd been broken previously, it was much easier to pivot this time and pick up the new changes; we did not collectively make the same mistake of relying on fundamental truths to remain true, because we'd already been bit. What is it — once bitten, twice shy? I forget the saying, but we tried very hard not to make the same mistake twice.

Phase two was the scale-out deployment, and this is what finished in December and what we were able to get out and publish to the community so people can start building containers. For those who aren't familiar, the Fedora Atomic Working Group — which is focused on all things container: Atomic Host, container runtime technology, container orchestration within the Fedora project — is going to host our first virtual Fedora Activity Day this coming Thursday. Yes, this coming Thursday. Our initial goal is to move roughly 50 container images from the current Fedora Cloud GitHub workspace into the official Fedora build system. So we have all of that done; it's fully compatible with all the new stuff, and we can run it on the latest versions of Docker and runc and anything that can handle OCI-compatible images. We have automated tests, currently in staging, that will be going out to production very soon so we can automate the pipeline, and from there we have the ability to promote images to a production release.

In the future we want to do image registry scale-out. Right now our registry is not as scaled out as we'd like: we're going to create a front-end endpoint and allow multiple registries on the back end to answer for registry.fedoraproject.org, which will give us more site reliability. We're also looking into ways to handle GeoIP, so when you do a pull operation on an image you can get a local instance of a registry, or a local mirror, to fetch the content from — something geographically close, to improve download speeds and lower latency, those kinds of good things. Then there's search and advertisement of images: there are very few options for searching and advertising images out of a registry, and zero of them are native to the registry, so each one carries that fear of the future where something will change again, some fundamental truth we want to accept as true will be pulled out from under us, and we'll have to pivot again. That's being discussed, and we'll have something coming up very soon.

We're also trying to figure out a good way to do CVE and security metadata release and distribution. When you're on your Fedora system and use DNF, or on your RHEL or CentOS system and use yum, you can check security info: you can run updateinfo, check security, see CVE data, and read changelogs. Right now there's no way to do that for Docker or any of the OCI runtimes, with docker pull and docker search being the de facto standards in the community space for those kinds of things.
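Just to make the RPM-side workflow I mean concrete, it's roughly something like this (a quick sketch; exact flags vary a bit across dnf and yum versions):

    # List pending security updates, then read the advisory / CVE details
    # for one of them (the advisory ID is a placeholder here).
    dnf updateinfo list --security
    dnf updateinfo info <advisory-id>

    # On RHEL/CentOS the rough equivalent is:
    #   yum updateinfo list security

There's no equivalent of that today for content you pulled out of a container registry, and that's the gap we want to close.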
And it's difficult, because this is a desired feature, but we want to find a good balance: doing it without trying to lock anybody into a specific toolchain, and doing it well. So that's something we're trying to figure out as well, because we absolutely want it; it's just an interesting problem space, and anybody who has a suggestion, please come participate in the Atomic Working Group. A container image is basically an arbitrary pile of metadata attached to a tarball with some JSON, and there's no concept of versions or releases, so we're trying to define abstractions for that.

So, lessons learned. I want to take two line-item moments to tip the hat and say some thanks. The upstream OSBS team is fantastic. In the first few months, when we were doing this and had no idea what we were doing in terms of touching the code base — and there's a pretty respectable amount of code involved in all of this — we were frantically running to them thinking the world was on fire and everything was falling apart, the wheels were falling off the cart, that kind of thing. Again, we later learned that it wasn't that the wheels were falling off the cart; it's that we were trying to make the system do something it wasn't originally designed to do. So we dug in, rolled up our sleeves, learned the code, and started trying to contribute patches. Aside from that, they were very gracious in originally taking on more work to cater to some of the early requirements for the system, and I want to say thanks for that publicly. Also the OpenShift team: we had a similar scenario with some of their stuff. A lot of the work was targeting sets of containers and network components that weren't necessarily all that stable yet in Fedora, and all of our build system stuff runs on Fedora, so I want to say thanks, because they were very responsive to us, again, running to them and saying oh my gosh, the sky is falling.

And it was really powerful — and I don't mean that as some weird endorsement, I'm not trying to sell you anything — but it was very fascinating to actually get to the point where we were deploying a system that could take a custom build specification: you have this pile of JSON and some YAML files, you throw it at the system, and within the cluster it provides you everything you wanted out of it. That was just really cool. But with that power came a bit of a learning curve: figuring out how to handle it as a new workflow, how to handle the system. I have some slides a little later on that explain what OpenShift is, with more specific examples of how that went.

For those who don't know, I used to work on the OpenShift team, but back when architecture v2 was a thing. I had actually moved over to the Fedora team around the time architecture v3 was being hammered out, when they were starting to develop on it, basically rewriting everything and basing it on Kubernetes, so it was a huge learning curve for me. It went from this Ruby-broker-based thing with SELinux sandboxing and their own implementation of cgroups management with MCollective, and all of a sudden it was Kubernetes and all these things. So it was really cool to learn how the build pipeline worked and be able to integrate that into the build system.
Another thing we learned, almost in a trial by fire, is that the container ecosystem moves fast. Nothing is guaranteed, nothing is set in stone. The underlying technologies for what I'll consider a traditional application runtime are pretty standard, pretty settled; there are certain components of the system, certain things about general GNU/Linux distributions, that you can rely on being fairly stable. None of that is true in the container space. Everything is constantly evolving, and it's very exciting, very fun to be a part of, but it's also very difficult to track when you're trying to build something. So we learned the hard way that APIs are not always going to stay stable, and they will not always stay relevant. There was a point in time where there was an announcement about an API specification change and we had three months to update, because the old version of the API was no longer going to be supported by the upstream. And don't expect backwards compatibility; that's the follow-on to that: with the API breakage, there was a cutoff point where image format v1 was simply a thing of the past.

So, containers, really quick: what are containers? I've talked about them in the abstract and talked about building them, but what are they? Operating-system-level virtualization. If anybody attended my talk last year, this is actually a repeat slide, mostly because I'm a huge fan of it; I borrowed some of the content from other people much smarter than I am, and I've been trying to level-set what a container is, because a lot of people out in the ecosystem have what I'd call a marketing opinion of what containers are. They're basically operating-system-level virtualization: the idea of multi-tenant isolation, the ability to present an environment to an application runtime such that it thinks the environment in which it exists is a full system. It has a root filesystem, it has the various libraries and utilities it needs, it can inspect the process tree or the /proc filesystem however it wants to. But it's not a full system: you couldn't boot it, and most of the time it's going to be lacking things — it probably doesn't have an init system in it, those kinds of things.

Containers are not new; they've been around for a long time. I always like to argue that chroot was the original container, in the sense that you can lie to the application runtime and it will execute and run like it otherwise would on a base system, even though it isn't one — sure, it was very unsophisticated. There's a brief lineage of Unix-like container, or operating-system-level virtualization, technology: LXC was kind of the first point where this got really interesting on Linux, with IBM releasing a toolchain that provided it somewhere around 2011. systemd-nspawn was created — as I understand it, just so the systemd team could test systemd without having to reboot systems all the time — and it added a lot of sophisticated features we didn't previously have. dotCloud, now known as Docker Inc., then released Docker; a year later CoreOS released rkt.
A year after that, the open container project, also known as the OCI, the Open Container Initiative, happened and started defining specifications, because a lot of this technology was in many ways trying to describe similar patterns, similar execution environments, similar images, but there was no real consensus around it. So a bunch of companies got together, including Red Hat, to try to standardize on that, so that we can get to a point where we have image formats and APIs that are stable, and bring some cohesion and, I guess, stability to the different areas. The same year — yeah, the same year, shortly after the OCI was created — runc came out; it's kind of the reference implementation for executing images from the OCI specification. Then, just this past year, containerd came out, which is an OCI-compliant daemon to manage runc execution — runtimes, sorry.

That was just a brief history to make sure it's understood that there's not just one runtime in town, there's not one solution for this. Something Fedora is aiming for is providing OCI-compliant images such that our end users can run and use these images in any way they choose. We in Fedora are all about freedom, all about friends, and all about trying to be first, to bring the new features first; so if we can give our end users the ability to run our images in any way they want, all the better.

Really quick, the reason it's called the layered image build system is that the idea is to allow Fedora contributors to maintain and build layered images much like they build RPMs today. But first you need to understand the difference between a base image and a layered image. A base image is the thing that provides the core platform shim — for lack of a better vocabulary term — to a layered image. That base component provides the things you would expect from a containerized runtime that come from a distribution: probably things like glibc, maybe a shell (bash, for the Fedora use case), probably DNF — you probably want a package manager in there — those kinds of things. We'll provide that in as small a package as we can; we want to keep it as lightweight as possible. A layered image then builds on top of that and adds functionality for a specific use case. Let's say, for example, you want a web server in a container: Apache httpd, or nginx, or lighttpd, or whatever's cool these days — we'll go with Apache, I'm going to stick with it. So you run Apache, and you basically have a build configuration that says "I want to add Apache to this, and I want it to be based on a certain layer."

What's interesting about that is it enables us to decouple the desired runtime from the base platform operating system, so we could potentially run a piece of software from Fedora 24 inside a container on Fedora 25. What's cool about that is, if you're running certain application stacks — say Ruby on Rails or Node.js or Django, some framework — and there's a giant leap, like Rails 3 to 4 a few years ago, which was very non-trivial for most people to port across, this kind of thing would have made that transition period a lot easier: you could go ahead and upgrade the host to the newer operating system, take advantage of the new functionality, the speed increases, the new kernel support, the new libraries and all that, but bring your existing image along with you.
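To make the base-versus-layered split concrete, a layered image recipe is really just a short build file on top of the base image that release engineering ships. A minimal sketch of the Apache example might look roughly like this (illustrative only, not the exact Dockerfile layout we carry in dist-git):

    # Sketch of a layered image: start from the Fedora base image and
    # add one piece of functionality (Apache httpd) on top of it.
    cat > Dockerfile <<'EOF'
    FROM fedora:25
    RUN dnf -y install httpd && dnf clean all
    EXPOSE 80
    CMD ["/usr/sbin/httpd", "-DFOREGROUND"]
    EOF

    # Build it locally for a quick test; in the official pipeline the same
    # input goes through fedpkg container-build instead of a local build.
    docker build -t layered-httpd-example .

The FROM line is the base image release engineering produces; everything after it is the layer a contributor maintains.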
So what we're targeting here with the layered image build service is the layered images. Fedora release engineering provides the base image, much like we provide ISOs and cloud images and those kinds of things, and then contributors throughout the community add to that. Also, a side note: release engineering isn't some kind of weird closed-room cabal; it's all open as well, and we have community contributors there too. There's just a bit more structure in the toolchain around how the base image is produced.

OpenShift, really quick: OpenShift is a container platform built on top of Kubernetes. It has advanced features such as build pipelines, image streams, application lifecycle management, and various other things that are great for developers. The piece we care about is build pipelines. There's a concept in OpenShift of a build — it's kind of a primitive data type to OpenShift — and it provides the ability to use prescribed build strategies, or you can provide a custom one; we do a custom one. So on this huge layout slide, in the green section, second from the bottom, to the left — build automation — that's the piece we care about. In Fedora we run OpenShift Origin; that's the upstream project for OpenShift, and that's where we participate.

This is the basic architecture overview: effectively, your client talks to the REST API, there's a scheduler on the master, and the master talks to the nodes; inside the nodes there are pods, and inside the pods are multiple containers. The reason that's important, or at least noteworthy, is that with this architecture, by using OpenShift, we can scale our build system pretty easily. We use Ansible to deploy the entire build system internally, so if at any point we realize we're low on capacity, all we need to do from an administrative standpoint is add nodes to the inventory and re-run our Ansible playbook, and it will provision more capacity for us — the capacity being the nodes, so we provision more nodes.
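From the admin side that scale-out really is about that simple. A rough sketch — the hostname, inventory layout, and playbook path here are made up for illustration, not our actual Fedora infrastructure layout:

    # Add another builder node to the Ansible inventory
    # (hostname and paths are illustrative).
    echo "osbs-node03.example.org" >> inventory/osbs-nodes

    # Re-run the deployment playbook; the new node gets provisioned,
    # joins the cluster, and adds build capacity.
    ansible-playbook -i inventory playbooks/osbs-cluster.yml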
So, release engineering: what is release engineering? I want to go through this very briefly, just to explain why some of the things we do are the way they are, instead of just saying "run docker build on your laptop and ship it," because, you know, why not? It's making software in a production pipeline that is reproducible, auditable, definable, and deliverable. The reproducible and auditable components are the important ones here, as opposed to the others: definable and deliverable you can do on your laptop, but reproducible and auditable are the pieces we care about so that we can maintain a well-established trust chain with the user base — when we say we've shipped something and it contains these things, we can verify that.

So, the layered image build service. OSBS, the OpenShift Build Service, is actually a loosely coupled set of components, the main one being the osbs-client. The osbs-client takes advantage of the OpenShift build pipeline by creating a custom build strategy, and we have a build configuration that defines the inputs we must provide. On top of that it also provides a Python API, and that Python API is also presented to the user as a command line, so you can enter the inputs you want through the command line or via the API; we chose the API so we can automate things programmatically.

OSBS enforces the build inputs: if you attempt to do something you're not supposed to, the build will fail. We have a build root, which currently is a limited Docker runtime; we're also toying with prototypes of other runtime backends for the build pipeline, again trying to explore more of the container ecosystem, find things that are potentially more performant, and make sure we're providing the best solution we can. So it is fairly well constrained, and we also have input verification for the sources of content. Say you do a docker build and you've been reading Stack Overflow — it's four in the afternoon, you haven't had enough coffee, Facebook got boring — and somebody says "hey, you should just curl-pipe this to bash as root," and you think sure, that sounds like a great idea, let's slap that in a Dockerfile and ship it to Fedora users. While that's debatable as a deployment and install strategy, the real problem is verification and reproducibility. Let's say the endpoint you did the curl from disappears: we end up with a source input that basically says "curl this thing, pipe it to bash, and it's going to do some stuff." We might have some build logs, but we lack the ability to audit the trail of content and find out where it came from and what it was actually doing, because we may never have stored the actual install script. So if you attempt that curl-pipe-to-bash inside our build root, the build will fail, because the input source cannot be verified.

We use OpenShift image streams to detect the input sources to a build, so we know any time things change there, and this is actually going to be the inflection point in the future where we do automated rebuilds: in the event that we update the base image, things that directly need the base image will automatically be rebuilt as layers. Say, for example, you want to put WordPress in a container and you based that WordPress image on the PHP container. You're not directly depending on the base image, but at some point up the chain you are, so in the event that the PHP container gets updated, the build system would know to cascade a build up to the WordPress image, and so on. That's not there as it stands today, but the work is underway and we'll be getting there soon.

We're also going to have a Factory 2.0 component. For those who have heard of Factory 2.0 or seen the Factory 2.0 talk: there will be a component of the build environment that keeps a record of all the content that goes into a layered image — I'm sorry, there will be a manifest of the RPMs for a layered container image — and in the event that one of the RPMs that goes into that layer gets an update noticed in the Fedora updates system, the system will automatically notify the various tools that need to know and update things.
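Coming back to the image stream piece for a second, on the OpenShift side it looks roughly like this — a simplified sketch with illustrative names; the real build configs are generated by osbs-client and carry a lot more than this:

    # Create an image stream that tracks the base image (illustrative name).
    oc create -f - <<'EOF'
    apiVersion: v1
    kind: ImageStream
    metadata:
      name: fedora
    EOF

    # A build config can then declare an image-change trigger against that
    # stream, so pushing an updated base image kicks off rebuilds of the
    # layered images that consume it, e.g. in the build config:
    #
    #   triggers:
    #   - type: ImageChange
    #     imageChange: {}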
Right now we basically just need to update the base image and cause a cascading rebuild, which pulls that in, and we do that on a regular cadence — except when there's a security update; then we break cadence and get it out as soon as possible.

So, atomic-reactor. Inside that build root I talked about, there's a component called atomic-reactor. It's a single-pass build tool that does all sorts of really cool stuff. It's got a pile of plugins — we wrote some of our own — and this is what allows us to tie the build directly into the rest of our environment. We're also able to gate updates, so again, we can promote images when the time comes. All of the builds, as they go through our system, automatically land in a candidate registry, and you can pull them as soon as we're done building; you can test them, you can even share them out with friends and say "hey, check this out," and then we gate and verify, and once that verification and validation happens, we can send it out. Something else to note: when a build completes, there's actually a metadata import that contains information about how the image was built, the sources that went into the image, and all of the content in its manifest. Given that, we can actually reproduce it — not bit-for-bit, we haven't mathematically proven it will be bit-for-bit — but we can verify that the manifest of name-version-release for every RPM that goes into an image can be reproduced, and each of those RPMs can itself be checksummed and verified, because they're signed and all that.

So anyway, this is the system, and this is why I wanted to mention the OpenShift architecture earlier: in this diagram it's just a box, so you can't really see how well scaled out the environment is. For those who are Fedora contributors, the idea of coming into dist-git and putting in a spec file, submitting a build to Koji, having the Koji build go through, then submitting an update to Bodhi and having that go out to users — that's pretty par for the course; you're used to that workflow. So what we did is try to mirror that as much as possible, to keep this as consistent as we could for Fedora contributors. A layered image maintainer comes in with their Dockerfile and potentially their service and init scripts, those kinds of things, plus tests and documentation; and instead of doing a fedpkg build, they'll do a fedpkg container-build. That fires off in Koji, Koji schedules the build inside the OSBS system, and OSBS performs the build and the import. The logs are actually brought into Koji on the fly, pretty much as the build goes, and when it's done all the metadata is imported. From there a message gets sent out — I don't have that in this diagram, I probably should have — but a fedmsg gets sent out, Taskotron picks it up, and we'll actually be able to test everything from the candidate registry before we distribute to users.
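From the maintainer's chair, that whole flow looks roughly like this — a sketch; the repository name and dist-git namespace are illustrative, and the exact commands may differ a little:

    # Clone the container's dist-git repository (name/namespace illustrative),
    # edit the inputs, and push the change.
    fedpkg clone container/my-webapp
    cd my-webapp
    $EDITOR Dockerfile        # plus tests, unit files, docs, etc.
    git commit -am "Update to new upstream release"
    git push

    # Instead of `fedpkg build` (RPMs), kick off a container build; Koji
    # schedules it in OSBS and streams the logs into the Koji task.
    fedpkg container-build

    # Watch it like any other Koji task.
    koji watch-task <task-id>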
So, dist-git, for those who are not familiar: this is the distribution git setup used by Fedora. It provides versioning for all the input sources that end up becoming Fedora; there's a branch for each release, and the master branch is Rawhide, our development version. For those who don't know, fedpkg is the maintainer's build tool; it gives us some niceties around dist-git and lets us initiate builds locally or remotely — if anybody wants to do mock builds locally, you can do that. Koji is Fedora's authoritative build system: everything we could potentially need to know is in Koji. Why that's an interesting point is that if at any time we want to just blow away the entire OSBS system, we can, because we have one-touch deployment in the Ansible playbooks inside the Fedora infrastructure. This is something we do in staging periodically, just to make sure it still works and we haven't broken anything. Because Koji is authoritative and contains all the metadata required to reproduce any of the layered builds, we can just remove OSBS, and since it's disjoint from the registry we don't lose anything, and since all the logs are imported into Koji we don't lose anything there either. I wanted to point that out because it's interesting to note that Koji is where everything happens, not just for the layered image build system but for all artifacts of Fedora: RPMs, live images, cloud images, ISOs, the distribution stuff, the things you get on the little USB sticks at conferences — all of that. koji-containerbuild is the plugin that enables fedpkg container-build, the thing that gives us the integration point in Koji with OSBS. And then the registry is our upload destination.

So, I spoke really fast, and we're actually done. If you have any questions I'll be happy to field them; if not, I'll give you a little bit of time back. I ran through this in 40 minutes last time, and I did it much quicker today. Yes?

Okay, so the question was: are there hooks to get notifications on build failures in the event of an automatic rebuild? That is, when an automatic layered rebuild happens because the base image changed, will you as a layered image maintainer get a notification in the case of a failure, or will those be validated in some way — can you get feedback on the new rebuild, or just a notification, period? So, two things. One: you can, because fedmsg will be capable of that; we would have to define something that contacts you as an image maintainer. But two: you can't yet, because the automatic rebuilds don't fully exist yet. That's something we should absolutely do; I think it's a good idea, good feedback, in terms of something that would probably be useful to maintainers. As far as validation, we absolutely will: the fedmsg goes out, Taskotron picks it up and runs the tests. We actually have a document that lays out how you should define tests within our environment so they can be run, so at least once the layered rebuild happens the tests run, and the tests define what is released. When we do the gated release we can query the historic test data, and we won't push anything out that didn't pass the tests. But yes, notifying you when an automatic layered rebuild causes a failure is probably something we should look into; so, not today, but we should, yes.

Yes — take us through the process of the next Shellshock and how long that will take. So, Shellshock happens; we'll call it Shellshock 2, something super inventive.
That fix will go into the RPM, the RPM will go into Bodhi, and the RPM will come out of Bodhi into the updates repository; then a build will be triggered automatically — well, okay, today somebody in release engineering will go trigger a build; hopefully in a few months it will be triggered automatically. That will go through and cascade rebuilds throughout anything that relies on it, which is going to be everything, because that fix would be in the base image. Those will chew through, and as they come out of the build system they go through the testing and verification. Right now builds take on average two to five minutes, depending on how big a task list is in the Dockerfile; the defined tests are again going to depend on the specific images and what level of testing they require, so we'll say ten minutes. Once those are done we can trigger an automatic release based on the latest tested images, and those can be done in parallel: right now we can build 16 images concurrently, and we can scale up from that pretty quickly if we need to — within 20 minutes we can add multiple extra nodes. We're currently averaging eight concurrent image builds per node, and we have two nodes, because there are only five layered images in dist-git at the moment, so we haven't felt the need to increase capacity yet.

So that would be the workflow. As far as the timeframe, a lot of it has to do with how many layered images we have. Let's guesstimate that we have a hundred layered images: divide that hundred by 16 and we're looking at, what, about six and a quarter passes through the build system. We'll say worst case ten minutes per build, which is potentially realistic given the added load even though it's a little outside the average, so that's about an hour for the builds; then as they get done we kick off tests, again worst case ten minutes apiece, so we're looking at essentially two hours once the updated RPM hits stable. And all of that uses the internal RPM repos, so we don't have to wait for the mirror network to update. Let's say four hours, just to be on the safe side and not lie to you terribly: four hours is probably the turnaround time we could do on something like Shellshock in the future. Yes?

Okay, so the question was: how many image flavors do we plan to have, and what's the goal for the applications? Initially this was kicked off through the Fedora Atomic Working Group, which used to be the Fedora Cloud Working Group; we moved towards the idea of targeting Project Atomic as our upstream and trying to deliver that across cloud platforms and native hardware. So the original target is to bring container applications on top of the Atomic Host as a runtime platform, with the subtext that it doesn't have to run on Atomic Host — it can run on bare Fedora, realistically anywhere you can run OCI images. The goal, or the initial hope, is that we can deliver something that is very useful as a server platform, either in cloud environments or locally, for container workloads, whether single-host or multi-host orchestrated, those kinds of things. The initial step is this coming Thursday, our virtual activity day: we're going to move roughly 50 of the various services that have historically lived on GitHub and been distributed through Docker Hub.
We're going to try to bring those into our build system and actually work on that. It's very standard stuff: httpd, nginx, PHP, Python — I believe there's a Python Flask one, there's a Django, there's a Rails — just trying to get those application stacks into container images, plus rsyslog and the kinds of things you'd probably want in an environment. That's the initial goal. From there I think a lot of it's going to evolve, but something we're currently scoping out with the Workstation group is Flatpaks. One of the things Flatpak is aiming to do is to also be able to distribute its content as OCI-compliant images, so if we can get there, we can probably integrate some of that with this same build pipeline; testing would need to be a little different just because of the nature of Flatpak, but we want to be able to leverage the work done here for some of the future things we want to do. Similar thing with Modularity: the Modularity group is hoping to distribute modules both as traditional sets of RPMs and as images, and we'll have those in there as well. So, kind of a lot — and by a lot I mean spread out amongst different initiatives — but the initial goal is to target Atomic Host and provide content for, I guess the phrase I'm looking for is a useful solution to the problem, or the desire, of having a fully containerized environment.

So the question was: is shipping Firefox in a container something we want to do? For desktop applications, for graphical environments, Flatpak is the preferred solution from a Fedora standpoint, so that is something we absolutely target; we're aiming for it. I don't know how soon on the timeline it's going to be, but it's something we're definitely working towards. In fact, a handful of us from the Atomic Working Group, along with members of the Fedora release engineering team, met with the desktop team just earlier this week to try to come up with a precursor plan for what work needs to be accomplished to allow exactly that: Firefox, LibreOffice, those kinds of things, distributed as containers and run via the Flatpak runtime. Yes?

Okay, so the question was: everything I talked about in the Shellshock scenario was targeted at a released Fedora version; the follow-up to that premise is, are we doing any of these rebuilds for Rawhide? The answer is not yet. That's something we actually need Factory 2.0 to exist to power, because of the tracking of the whole funnel of content coming into Rawhide — we need something that has an idea of what's going on there, and right now we can get that from Bodhi for releases, but there's no Bodhi for Rawhide. So not yet, but it is planned; we absolutely want to keep Rawhide in scope.

There was — yeah, okay, so there was a comment and then a question. The comment was that right now with the Red Hat images there are roughly 120, so that's probably a good estimate for a relatively initial offering; since this is available and a lot of people in the Red Hat community participate in Fedora, we'll probably be in that ballpark pretty soon. The question was: for builds, are we hooking into things like Maven?
The answer is no, not yet. Right now we're only allowing RPM content in the Dockerfile for the image build, and we plan to do better than that in the future. The big thing is the ability to audit and reproduce the build: we need some way to curate that content stream, and with RPMs we have that, because we have the tooling, because we as a distribution have been doing it forever. But for things like Maven, npm, Ruby gems, PyPI, Perl, all of those things, we want to get to a point where we have that; we just don't yet. It is on the roadmap, though. So if you go — my laptop's sleeping, maybe... there we go — I have a reference page, and once this talk is up in PDF format, or however the DevConf folks want it exported, the wiki page for the initial proposal linked there actually talks about the future of wanting to do curated content; we just don't have it yet. I didn't mention Maven, but yes, Maven along with npm and Ruby gems, all those things — we absolutely want to do that. Yes?

Retention policy for images: the question was, what is the retention policy for images going to be? We're going to keep N and N minus one, so the latest release of a particular image and its previous release will be in the registry. We'll be continuously cleaning out the registry, because it's just a lot of data, but we want to keep the current release and minus one to allow for quick rollback, because that's a big piece of new functionality we now have: the ability to atomically update our operating system with Atomic Host and also our application stack with the container images, because they're immutable. So yes, we'll keep the latest and minus one; there have been discussions of possibly doing minus two, and we can hash that out in the working groups as a community — we're open to it, it's just that to start off with we figured current and minus one.

Okay, I believe we're pretty close to time and I don't see any other questions. Thank you all very much.