So hello and welcome, everybody, to another OpenShift Commons Image Builders SIG. I'm Diane Mueller, and I'm really pleased today to have Adam Miller and Matthew Miller, not brothers, but huge Fedora folks, here to explain a new initiative at Fedora called the Fedora Docker Layered Image Build Service, and to talk a little bit about Fedora Cloud. So I'm going to let the two of them take it away. What I'd like to ask is that if you have questions, ask them in the chat. Let Adam and Matthew get through the presentation and the talk, and then we can have a conversation afterwards. Since we're recording this for posterity, it will be posted up rather quickly on our YouTube channel, and you can find more information at the OpenShift Commons site for SIGs, commons.openshift.org/sig. Look for interests in the menu and you can put in whatever you're interested in, whether it's Image Builders, operations, education, or perhaps propose another SIG. So without further ado, I'm going to let Adam take it away and start us off this morning.

Thanks, Diane. Hi, I'm Adam Miller. I'm part of the Fedora community, I'm part of the Fedora Engineering team at Red Hat, and I work primarily in release engineering and cloud things. One of the areas where those intersect ends up being the Docker Layered Image Build Service. So we're going to go through what Fedora Cloud is, what the Layered Image Build Service is, and then I'll do a bit of a demo to show how that all works. There will be a little bit of background information as we go, just because I don't want to assume that everybody who is watching this later or participating today is intimately familiar with how Fedora contributors actually contribute code and content. So for starters, what is Fedora Cloud? Once upon a time, Fedora as a distribution had one thing. We had one deliverable that we created, a single ISO installer.
It was just a giant, honking DVD that had all kinds of configurable options and those kinds of things. And as part of that... actually, Matt Miller, who is on the call, is the Fedora Project Leader, and I probably should have taken a moment to introduce him. So let's do that now. Matt, do you want to talk about all the things that you do in Fedora space?

Oh man, I don't even know what all the things I do are. But yeah, I am the project leader, and I guess the main thing I do is try to build community consensus around what we're doing and where we're going, and then try to keep everybody on the same page about all of that. So this was definitely one of the things that came up as part of the editions idea, where I tried to help shepherd the project into the next generation of where we want Fedora to be for the future.

Right. So that's what brings us to this: there was an initiative to create three editions. There's Workstation, Server, and Cloud, and Fedora Cloud is the edition that targets cloud infrastructures. That's a little bit vague on purpose, because there are multiple kinds of cloud infrastructures: infrastructure-as-a-service, platform-as-a-service, and software-as-a-service. We generally hang out and try to deliver solutions in the former two, not necessarily the latter. But there are things that Fedora Cloud does provide a platform for, you know, similar to ownCloud or Nextcloud and those kinds of things. So what we produce is an edition, and it's an image that can be run in different environments, as well as the basis for the platform to actually spin up clouds. The Cloud Working Group is a subset of the Fedora community that works to deliver the cloud technologies and those kinds of things. And there are two relevant links. One is actually where to get the cloud content.
And that will include, you know, being able to spin it up on infrastructure-as-a-service environments like Amazon, as well as downloading images to import if you have an on-premise environment, those kinds of things. And then the Fedora Project wiki page you see there is actually about the Cloud Working Group: what we do when we meet, the current initiatives we're working on, and kind of what we're doing in that realm. So one of the pretty hot topics in the world of cloud right now is containers, and the front-runner in that world is almost unanimously going to be Docker. One of the things that we're focusing on right now, and where release engineering and cloud intersect, is delivering a platform upon which Fedora contributors can build and maintain Docker images in a reproducible and auditable fashion. What that means is that if you provide the same set of inputs, we should be able to prove that we get the same set of outputs. For various reasons, in release engineering this is very desirable. It's something that we are working on, the solution is in place, and I'll go through it in a demo. But one of the main goals that we had was to keep the process very similar to how Fedora contributors today create and maintain RPMs. In the history of the project, the primary build artifact that came out of our build system was RPMs. We didn't have anything else. As we move forward, we're trying to make the build system and our processes more modular in nature, more flexible, to allow us to deliver new things in the future. So today we're working on Docker. Next up, we're probably going to be working on other container technologies as well as new application technologies such as Flatpak, for those familiar with that project. So we just kind of want to move into that world.
And this also goes back to the Fedora.next initiative, which talks about trying to make components of the operating system more flexible and allowing certain components to exist on different life cycles, those kinds of things. So we want to kickstart that here. The Fedora Docker Layered Image Build Service, and I'm probably just going to call it the image build service. It's not the most inventive name, but it's descriptive. So, excuse me, drinking coffee. It's a build system that will automatically rebuild layered images. And what's important about that is... actually, let me first define what a layered image is. In the Docker space, there are two kinds of images. There's a base image and there's a layered image. The base image is created from a kind of special directive called FROM scratch. You build the base image, effectively in a chroot, and provide a root filesystem to Docker such that it can then use that as the basis for other containers. So it needs to be a self-contained runtime, those kinds of things. Then, if you in Fedora space wanted to create something on top of that and provide a service, for example a database or a web server, the first directive in your Dockerfile would be FROM fedora, and the Fedora base image would then be used. Well, these layered images need to be maintained in some way, and that's what this build system, as well as the pipeline in general, is for. There are a lot of components, and I'll show in a moment how they all tie together and everything that's going on. The automatic rebuild happens if something in the base layer changes. So let's say there's a glibc update, or some security vulnerability somewhere in the stack of dependencies that one of the layered images relies upon. First, that will get updated.
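To make the base-versus-layered distinction concrete, a layered image's Dockerfile might look something like the sketch below. This is a hypothetical illustration, not an official Fedora Dockerfile; the package name, port, and script path are made up:

```dockerfile
# Hypothetical layered image. The FROM line is what makes this
# a "layered" image rather than a base image (which uses FROM scratch).
FROM fedora

# Install the service this layer provides (illustrative package name)
RUN dnf install -y mariadb-server && dnf clean all

EXPOSE 3306

# Application initialization script, maintained alongside the
# Dockerfile in dist-git by the layered image maintainer
COPY start.sh /usr/local/bin/start.sh
CMD ["/usr/local/bin/start.sh"]
```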
The update will go into the layer that depends on it, and then all dependent layers will be rebuilt automatically by the system, such that the next time somebody in the Fedora community were to do a docker pull on the image they were using, they would get the new, updated layered image. And that brings us to OpenShift: the build component is built on top of OpenShift v3. The reason we use OpenShift v3 is that the concept of image streams allowed for creating this dependency graph, as well as the relationships that define when things need to be rebuilt. Beyond that, OpenShift in general, with the custom build types, allows us a lot of flexibility in what we can actually do. So we created it on top of OpenShift v3 and then tied it into all of Fedora infrastructure, again going back to maintaining consistency. So this is the architecture, and I know there's a lot going on; I will try my best to distill it down to something that's consumable. On the far right, there is a box that says Fedora Layered Image Maintainers. In the world of Fedora right now, there are Fedora RPM maintainers, often called package maintainers. In the future, there are going to be different kinds of maintainers for different kinds of artifacts, RPMs being one of them. Now, with layered images, we will have layered image maintainers. These will be the people that maintain things like the Dockerfile, the application initialization scripts, any kind of documentation, as well as tests that are needed. If you follow the arrow, you'll see that the things I mentioned are listed there, inside of an outer box called dist-git. And dist-git is a Git-based system that has certain branching conventions enforced such that each branch correlates to a release of the distribution. So in the demo, when I work through this, you'll see there's an f24 branch, which specifies Fedora 24.
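The automatic rebuild behavior described above, where a change to a base layer triggers rebuilds of everything built on top of it, layer by layer, is essentially a breadth-first traversal of the image dependency graph. Here is a minimal sketch in Python; the graph and image names are invented for illustration, and the real service derives these relationships from OpenShift image streams rather than a hand-written dictionary:

```python
from collections import deque

# Hypothetical parent -> children edges: each entry lists the
# layered images built FROM that image. In the real service this
# graph comes from OpenShift v3 image streams.
DEPENDENTS = {
    "fedora": ["fedora-httpd", "fedora-mariadb"],
    "fedora-httpd": ["fedora-wordpress"],
    "fedora-mariadb": [],
    "fedora-wordpress": [],
}

def rebuild_order(changed_image):
    """Return every image that needs a rebuild after `changed_image`
    is updated, in breadth-first (layer-by-layer) order."""
    order, queue, seen = [], deque([changed_image]), {changed_image}
    while queue:
        image = queue.popleft()
        for child in DEPENDENTS.get(image, []):
            if child not in seen:
                seen.add(child)
                order.append(child)
                queue.append(child)
    return order
```

So a glibc fix landing in the base image would queue up `fedora-httpd` and `fedora-mariadb`, and then `fedora-wordpress` in the next layer, exactly the cascade Adam describes.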
This way we can gate different versions and different modifications for new features or newly enabled components in newer versions, as well as keep sanity between releases. In a traditional RPM sense, this would be where you'd have your spec file and your source listings as well as your patches, and you would commit them to Git. You can do so much like you do with Git in general, but there's also a command-line helper tool that assists with things. For example, switch-branch displays what branches are available, so you can see how many versions of Fedora the item you're working with supports. That command-line tool is called fedpkg, or "Fed package." If you follow the arrow from this box that we're looking at, you can see fedpkg container-build. And container-build is the directive that tells fedpkg that this is a container, it is a layered image build, and it needs to be handled as such. Whereas if you were building an RPM, it would just be fedpkg build. One thing to quickly note about dist-git is our Git layout. We actually had to modify the Git layout, as well as add functionality to fedpkg, to handle the difference between layered images and RPMs, mostly because in the past there were only RPMs. The only thing you ever had to deal with was RPMs; it was the only artifact that we delivered. So we now have what we are just calling namespaces. We're namespacing dist-git such that it correlates to the artifact that we're delivering. So rpms is now a namespace, and it is the default namespace; if you don't specify, that is what you get, and the reason is backwards compatibility. And then we've also added docker. So the fedpkg container-build command will then kick off to the next... it will follow the line to the next box, and this is Koji. Koji is Fedora's build system.
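The maintainer workflow just described looks roughly like this from the command line. This is an illustrative session, not copied from the talk: it assumes fedpkg is installed, that you have commit access, and that the repository path uses the docker namespace as described; exact arguments may differ:

```shell
# Clone the layered image's dist-git repository (docker namespace)
fedpkg clone docker/cockpit
cd cockpit

# See which Fedora releases this image supports, and pick one
fedpkg switch-branch        # lists available branches
fedpkg switch-branch f24

# Edit the Dockerfile, then commit and push as with any Git repo
fedpkg commit -m "bump release"
fedpkg push

# Kick off a layered image build in Koji
# (versus plain `fedpkg build` for an RPM)
fedpkg container-build
```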
The build system is what has been used forever to build RPMs, to build our live images, to build our ISO and DVD installers, all of our cloud images, the qemu raw and qcow2 images, those kinds of things. Koji depends on a lot of other technologies to satisfy all these different needs. It's not an all-encompassing, do-everything tool, but it's the control point. It has the database backend that stores all of the information, and it maintains metadata about the distribution such that we can produce things like yum repositories directly out of the contents that have been built inside of it. So this is where the container build plugin is installed. And this is where we'll do... instead of actually doing the builds there, we offload them, and we also do our RPM builds there. I just realized that I'm missing an arrow directly from Koji down to OSBS, and I apologize for that. But at the bottom, pardon me, you see a box called OSBS, and this is the OpenShift build system. That is an upstream project that has actually been renamed to OSBS Client, mostly because that more accurately represents what's happening: OpenShift is the build system. OpenShift provides all of the primitives that are required. It provides the image streams, it provides the custom builder type, and OSBS effectively just sits in front of OpenShift and uses it in a very specified, predefined fashion, with a custom builder type and a set of Python APIs and things like that, so that we can tie it into other aspects of the build system, which are also written in Python. You might also notice in OSBS, pardon me, there's atomic-reactor. Atomic-reactor is what we actually use to do the Docker image builds, and it's a standalone utility that gets run inside of OpenShift v3 as part of the custom builder type.
We use that mostly because it affords us certain luxuries: being able, programmatically or from the command line, to pass overrides for injecting certain yum repositories or changing aspects of the Dockerfile purely for the build. And this is good because, in our environment, we want to isolate what we're effectively calling the build root, which is basically just a Docker container that is used for the purpose of building other Docker containers. The build root needs to be isolated from things like the Internet, so we have to be able to specify exactly what content sources it can access. The main reason for this is, let's say somebody wanted to put in their Dockerfile a curl piped to a shell from somewhere on the Internet that installs some software. Now, this might not be malicious in nature, but it does remove the ability to audit or reproduce, because that thing on the Internet could disappear, or what results from that thing on the Internet could change. Or it could be malicious in intent: somebody could decide to inject some bad payload into the image that would then cause our users issues. So from there, OSBS interacts with our registry. For those who have used Docker, this should be very commonplace in terms of something you interact with. Most people interact directly with the Docker Hub, which is probably the largest and most popular Docker registry in the world. But what we are going to do is have a Fedora-hosted registry, and there are actually going to be a couple of them. There will be the intermediate build registry, and then there will be the production registry. The intermediate build registry is where all the builds will land. So if somebody wanted to pull a build that has just been created so that they can test some feature, they are actually able to do that. As soon as the build lands, they pull it down and run their tests on it.
This also allows our automated CI to do the same thing: to pull that image as soon as it's created and perform tests on it. And then for production, what we are planning to do in the Fedora 25 cycle is actually have, every two weeks, production releases of all currently available layered image builds. That will be based on automated testing. So all of the images that have passed the automated testing will get a release; they will have a new update pushed to the production registry. And people can pull that from registry.fedoraproject.org, which doesn't exist today. That is in the works; we will have it in the coming months. But if you right this second tried to pull from there, you won't get it. In the future, we actually want to allow for automated gated tests so that we can iterate even more rapidly and allow Fedora layered image maintainers to opt into something like a continuous delivery mechanism. So we have plans to make components of Fedora more rapidly deployable and more rapidly iterated upon, not necessarily having to exist in the same six-month life cycle as the rest of the operating system. Some of that actually rounds out to a whole different, deeper conversation about a Fedora initiative called Modularity, which I will make sure to provide a link to at the end of the slides for anybody interested. Another thing to note is that when we push to registry.fedoraproject.org, the production registry, we will also be mirroring that content up to the Docker Hub, for people who are just used to using the Hub, or maybe aren't using Fedora directly and don't know specifically about our registry and those kinds of things, and found Fedora by doing a search on the Hub. We want to make sure that they're also getting the latest content. And that, in a very, very large nutshell, is the layered image build service. So, demo time. I want to go through a really quick demo.
And there's a project called Cockpit. Actually, just really quick, for those not familiar, Cockpit is a web interface admin portal for Linux servers. It can do multi-node, multi-host, it can manage clusters, it can do Kubernetes, it can manage Docker, all kinds of really interesting things. And what's interesting about it beyond that is that it can, by default, run out of a Docker image itself. That will be something we are going to deliver as a layered image build initially. So what we showed earlier was that we have this thing called dist-git, and that's what I'm looking at here. This is actually a Git repository. So let's do a fedpkg switch-branch. We have different branches: we have f24 and master, and master is Rawhide. Fedora 24 is branched. Something else to note is that I'm actually working in stage; this is not production yet. Fedora infrastructure has a staging environment and a production environment, and we kind of fiddle and experiment in stage. It looks like somebody is playing with the concept of a vast future where Fedora 26 exists already. But anyway, let's do something else. So what I would do as a maintainer of a Dockerfile or a Docker layered image is make certain changes in the Dockerfile. Let's just bump the release by a small increment. Then I can do a fedpkg commit with a message, "bump release for demo," and then we can do a fedpkg push, and this is going to push out to Git again. The output here should look very familiar for those who know Git. We will then do a... actually, I need to point it at stage so I'm sure we're not hitting production. Okay, so we'll do a fedpkg container-build. This is going to go off, and like we saw in the diagram before, it will send a message out to Koji and schedule a build that will then happen out in the environment. It's going to take a few minutes.
So instead of staring at this output, I've actually pulled up an old build that was already done for Cockpit, and it was successful. What we can see here is that we have our Git SCM URL as well as the exact checksum of the commit that we're trying to pull. So this specifies the exact commit that we sent as the build, which provides our input into the build system. And then this is a parent task, and the parent task will handle multiple architectures. Right now we only have x86_64 enabled, but as Docker gets more widespread support for other architectures, we will see this pick up other things like ARM, ARM64, PowerPC, those kinds of things. So we can then look at the subtask, which is the actual build, if it will load. Okay, here we go. In here we can see a handful of things, including the result and the build log. These are all of our logs; it's very verbose, there's a lot going on, but if something were to go wrong, this is the place to look. One thing to note is our registry. Here we have three different listings for the registry. One of them is a build UUID based on date and timestamp. Another one is based on the version-release tag, so it's actually tagged in the registry. And then another one is just release... no, I'm sorry, just version, and then latest. That will be pushed to latest. So we can actually go over here and do a docker pull on that image, and it will pull it down. I've already downloaded it; it's pre-cached on the machine just for the sake of time. What's interesting about that is the atomic command, and this might get off in the weeds, but atomic is a command wrapper for many things Docker, and it correlates to Project Atomic. We can atomic run that command. And now... nope. Okay. Well, something went wrong, because live demos. But the end goal is to allow people to directly pull from this registry, run it, and consume it that way.
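Pulling those registry tags down locally, as in the demo, would look something like the following. This is an illustrative session: the image name and tag values are made up, and (as noted in the talk) registry.fedoraproject.org was not yet live at the time, so treat the hostname as the planned endpoint rather than a working one:

```shell
# Pull by the floating "latest" tag, or pin to a specific
# version or version-release tag (values are illustrative)
docker pull registry.fedoraproject.org/cockpit:latest
docker pull registry.fedoraproject.org/cockpit:122-3

# Or run it via the atomic wrapper from Project Atomic,
# which honors run metadata baked into the image
atomic run registry.fedoraproject.org/cockpit
```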
For Fedora contributors to be able to contribute much in the same way that they do for RPMs. And one thing I can show really quickly, just because why not make it full circle: if I were to go over to the RPM side, I could do a fedpkg scratch-build, which is an unofficial build. But this, again, is actually an RPM build, and as we can see, the output here is very similar to what we saw with the container build. The command was very similar, and it's just the same workflow. And here, again, we have this Koji task that has been created. We can exit that so we don't have to stare at it. But we'll see here that this is the spec file for the RPM, the systemd service unit file, and those kinds of things. So the workflow is very similar, and it's just meant to provide a really good experience for people in Fedora who want to contribute. I talked earlier about images, and for anybody who's interested, there are a number of links here, and I will make sure to go ahead and add a couple that we discussed during the presentation to this page. You can see, I don't know, the fifth one down, the fedoraproject.org join page, which provides information on how to actually join the Fedora project, how to get involved, how to contribute code and content. Everything that we're working on here directly is related to the Cloud Working Group and the Change for the Layered Docker Image Build Service. And I do want to add links for our Fedora Docker registry, our plans to actually scale that registry and other items in the area, as well as the upstream projects for the technologies that we are using to deliver the layered image build service. So at this time I'm going to stop sharing my screen.

Actually, I'm going to ask you to go back to your screen, and ask a question from a community point of view.
One of the things that I'm interested in is getting lots of different open source projects to create containers and images that they can use to run on OpenShift. On the slide that you had, where you had the Fedora image maintainers, whatever that tag was...

Fedora layered image maintainers?

Image maintainers. Explain to me a little bit more about that role. So, for example, if I'm someone like the open source project lead for MariaDB, or in the example you used, Cockpit: from the Fedora perspective, who would you expect to maintain that image when the next release is out? Do you expect someone from the Cockpit project or MariaDB to have someone assigned to be a Fedora layered image maintainer and be part of the Fedora community, or how are you envisioning people using this?

So, this is kind of twofold. It can go either direction. Somebody from an upstream community... so, for example, the Cockpit team upstream does actually have somebody designated to doing their Cockpit RPM builds, and they will again for their layered image containers. However, they completely automated that; the thing that actually manages it these days is a bot that they wrote. But on the flip side, most of the RPM content that goes into Fedora today is actually maintained by specific Fedora RPM maintainers. They are generally people who are users of a technology and maybe are involved with upstream in some form or fashion, and they will take on that responsibility. By and large, I would say probably 90% of those people are just volunteers. It's something they do in their free time because they are passionate about that particular piece of software, they are fans of what we do in Fedora, and they are contributors to Fedora who help make the distro as well as all the things in the repositories. So, it could go either way.
If somebody in the upstream community would like to designate somebody, because it's something they would like to engage with the Fedora project on and actually have their content in the Fedora registry or build pipeline as official participation from the upstream project, that would be great. But it's not a requirement. We also have a wish list. We actually have a package wish list, and we will establish a very similar thing for layered images: just things that people want to see. Those could even be filed by the upstream teams that want to see something offered; they just add their names to the list, and people who are interested in that technology will generally pick it up and add it to the distribution. And it could very easily go either way. Really quick, actually, let me go over here. For those not familiar, there is a website called release-monitoring.org, and this is using a tool called Anitya. This was written, developed, and maintained by people in the Fedora community, but it's not specific to Fedora. You can add projects here to have their upstream releases monitored, and in the event that they produce a new release, things happen in the Fedora environment, including messages sent over fedmsg, which is our federated message bus. So when software upstream is updated, people who are involved in the Fedora project in correlation with that software get notified about it. It's not a direct manual process where somebody in the community needs to constantly monitor that upstream software for a release; they can elect to get notifications about it. Currently, as you can see on the screen here, we're monitoring just over 10,000 upstream projects, and it should be noted that anybody can create an account, log in, and add things to this. So, for example, MariaDB, since we were talking about MariaDB, we'll just go ahead and do that one.
So, MariaDB is actually already monitored, and its latest versions are reported on, and we can actually see... oh, this just mentions fedmsg. So, since we're going down the rabbit hole, I want to show something else really quick. It's an interesting rabbit hole. Yeah. Aside from fedmsg, there's also a tool called datagrepper, and datagrepper grabs every single fedmsg message that goes across the message bus and keeps it persistently. You can monitor the feed and you can query it. To date, apparently, 60.5 million messages have been received and stored in this service. We can actually just monitor the feed, and if you just kind of want to see all the things going on, they'll fly by, or you can limit your search down, and there's documentation on how you can do that. But if you wanted to perform searches against datagrepper for messages emitted by release monitoring, to query in a programmatic way what the latest version of something is, we could actually automate the incrementing of a version and the build of that, which we have. So there's a tool called Koschei, which actually keys builds off of release monitoring and will just attempt to increment and do a build. If it passes, it will alert the maintainer that there is a passing build, and these are the modifications that were made to create that build. Then maintainers basically just need to do a sanity check, make sure that what was done in the automated fashion didn't cause any issues, and then actually submit an official update. What I would like to do is something very similar to this, but for Docker layered image builds, so that a maintainer can participate without a large time commitment. In the normal cycle, obviously, sometimes major upstream releases will happen that have incompatible changes.
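Querying datagrepper programmatically, as suggested above, amounts to hitting its HTTP API with the right filters. A minimal sketch of building such a query follows; the endpoint and parameter names follow datagrepper's query interface, but the topic string and filter choices here are assumptions for illustration, so verify them against the live service before relying on them:

```python
from urllib.parse import urlencode

# Public datagrepper endpoint for raw message queries
DATAGREPPER = "https://apps.fedoraproject.org/datagrepper/raw"

def monitoring_query_url(project, rows=10):
    """Build a datagrepper query URL for new-upstream-release
    messages emitted by release-monitoring.org about `project`.
    The topic string is an assumption based on Anitya's naming."""
    params = urlencode({
        "topic": "org.release-monitoring.prod.anitya.project.version.update",
        "contains": project,       # filter messages mentioning the project
        "rows_per_page": rows,
    })
    return f"{DATAGREPPER}?{params}"
```

A tool like Koschei could poll such a query (or, more efficiently, subscribe to fedmsg directly) and kick off a trial build whenever a new upstream version message appears.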
And that will require a little more care to do the upgrade. But yeah, we are doing everything we can to make the entire release process, the entire build process, as automated as possible, with automated tests and sanity checking by human hands when needed.

So there's one other question in the chat, and I think you're kind of answering it in a way, and it is: are the Docker image updates aligned with Fedora releases and updates? What I take from all this automation you have is that when the base Fedora image changes, that's going to trigger a lot of image rebuilding as well.

Yes. So there's a little bit of a long-winded answer to that, but the basic takeaway is yes, we plan for it to follow normal updates. For those who are not familiar, we have a system called Bodhi, and Bodhi is the Fedora update system. Today it only handles RPMs; we are going to add the functionality for it to also handle Docker images. In the initial offering, Docker images that come out of Fedora will only be comprised of Fedora RPMs. In the future we want to loosen that constraint and allow all kinds of things to go into the Docker images, not just RPMs, but initially it will be just RPMs. So the plan is that there will actually be a relationship between a Docker image and the RPMs it is comprised of, and when an RPM in the list that a layered image is comprised of gets updated, that should trigger a series of events, and one of those events will be a rebuild. Then there will be an automatic entry in Bodhi, and Bodhi will then trigger what's called Taskotron.

I love the names that you guys give to things.

Okay, so Taskotron is our task execution framework, and what I mean by that is it literally can execute anything. One of the things that it does is automated tests.
So, you know, much in the way of CI, it's built on top of Buildbot, it has all kinds of fancy features... yeah, anyways, there's a lot it can do. So what we will do is Bodhi will set up this update, there will be a message sent across the fedmsg bus, and Taskotron will pick that up and say, hey, I need to run tests. It will run the tests and then submit pass or fail back to the update entry, and if it passes, then there will be a new Docker image release. That will be tied to the Fedora RPM content, and that is the long-term goal. Now, the amount of development effort required for that is not small. So in the short term, in the Fedora 25 time frame, what we are going to do... and note that Fedora 24 comes out in a week or two, depending on blocker bugs, so Fedora 25 is the next six months in terms of development effort. In that time, we are going to set up an automated system that will key off of fedmsg messages and those kinds of things to kick off automated tests, and lay much of the groundwork for this, but the release schedule is just going to be a static two weeks. Every two weeks there will be a new cut of all of the images, with the caveat being anything security-related. If there is a critical security thing... you know, pick the latest CVE that has its own website, logo, t-shirt, and music video, and if the next one of those happens, we will absolutely address that immediately and push out updates. But in terms of just standard bug fixes, new versions, feature releases, those kinds of things, it's going to be a standard two weeks until we have all of the work done in the update system to handle this.

So it sounds to me like, once again, Fedora is going to have all of the bleeding-edge latest Docker images for the ones that are on your wish list and the ones that people are willing to maintain. And will you become the sort of go-to place to get those latest images?
Based on all the automation that you have, as opposed to having to wait until a project (I'm not picking on MariaDB or anyone else) gets their images built and puts them up on Docker Hub. Once this is all in place and all the automation is working, the Fedora registry for images should become the go-to place to get the latest and greatest of everything. Is that a safe thing to sort of predict? We would like that to happen. You know, who knows what will happen in the grander scheme of the world and what communities are going to want from whom, but we would love for that to happen, and basically what we're trying to do is deliver the best that we possibly can with the latest, you know, cutting-edge or leading-edge technology that we can. And just to round back to the OpenShift point: another thing that we want to do in the Fedora Cloud space is actually work on a set of guidelines that can allow applications that go through our build pipeline to be automatically imported and run in OpenShift. And that is kind of another stretch goal that we have, and that's more focused directly on the working group as opposed to release engineering. A lot of what I've focused on is the release engineering back-end work, because that's what I'm involved in day to day, and that's kind of what, I guess, always comes to mind first when I start discussing these things, but in the working group that is something that we want to do.
And under the umbrella of the upcoming Fedora initiative, or, I'm sorry, the Fedora incubator project, we have a new goal of actually targeting OpenShift as our container runtime platform for the things that the Fedora Cloud working group will focus on around multi-container services and scaled-out deployments and those kinds of things. And so we actually have plans to start a more formal relationship with the upstream of OpenShift Origin and try to provide something that's useful to people who want to use OpenShift, as well as Fedora as a platform for people who want to run OpenShift. And again, keeping with the leading edge, we are going to ship the latest upstream release as quickly as we can. Sounds like a huge, huge gain for the OpenShift community, and I'm hoping that we can get some of the folks interested in working with you, so that you're not resource constrained, as I know you guys always are. Well, actually, just a quick note: last week we had the Fedora Cloud working group activity days, and we had, you know, a little over a dozen of us meet and just kind of do sprints and work on things, and during one of those times we were actually fortunate to grab a few people from the OpenShift upstream development community and kind of propose what we want to do and see if there was anything from that that would be useful. So one of the things that we want to target is actually doing a fully containerized deployment of OpenShift Origin on top of Fedora Atomic Host, which is a whole different rabbit hole to go down, but for anybody who is familiar with Fedora Atomic Host, that is a target for what the Fedora Cloud working group wants to ship as the primary deliverable in the future.
As we kind of believe that that more immutable-infrastructure style of deployment in the cloud space is the future, so we are actively working towards all the things as fast as we possibly can. You know, Matthew's on the call as well, and he's been terrifically silent. Is there anything that you'd like to shout out there, Matthew, to get people more involved in Fedora? Yeah, I've been silent because Adam's been doing a terrific job and has pretty much said everything I'd like to say, so I didn't have much to add. One of the things, as a follow-up to that Cloud FAD meeting: I actually just posted something to the Fedora Cloud mailing list, which you can get to, I think, from that second wiki link you're seeing on the screen. I posted something about the future of Fedora and Fedora Cloud and OpenShift and where we're going to go with that, so that would be a great place for interested people to jump in. All right, well, we'll see if we can gather some forces behind it. It sounds to me like it's going to be one of the go-to places for getting images in the not-too-distant future, so thank you very much for all the work that you've been doing. It's hugely appreciated. And I'm glad to hear some of the OpenShift Origin engineers jumped on that working hackathon day to help you out; we need to get more of that going, I know. So, thanks again, and I'm not seeing any other questions. So, I'm going to say thank you.