Sure, my name is Steve Milner, and this is... Hi, I'm Jeff Liggett. Nice to meet you. We're both part of the Red Hat CoreOS team, at different levels, and we're going to be going over what Red Hat CoreOS is. So before we get started, I have a couple of questions for the audience. First off, how many people here have used Container Linux, a.k.a. formerly CoreOS? Okay, we've got a handful of folks. How about how many people here have used Atomic Host, whether that's Fedora, RHEL, or CentOS? Okay, slightly more people. Cool. Now how many people here have installed OpenShift? Okay, so not used, but actually installed it. How about Tectonic? How many people here have actually played around with and/or set up a Tectonic cluster? Okay, we've got one person here. These questions are just a gauge to see how deep we should go into some of these topics, but we'll move forward here.

So, what we expect you to learn from this talk: when and why did container-focused distributions start appearing; a history of Container Linux plus Atomic, so the acquisition and what happened after that; the use cases, or use case, of Red Hat CoreOS, depending on how you look at it; and foundations for the following talks that are going to happen in this room. We have them listed up here, and of course you can see it on the schedule, but there are a couple of other talks coming directly after this one, in the same room, around Red Hat CoreOS and immutable-host-related items. So we're hoping to set up a foundation so that you can follow those talks as well. The plan is for this one to be very high level and hand-wavy, not too nitty-gritty, and then later on other speakers are going to go into deep detail about things like Ignition and other technologies. Exactly.

So this is kind of where it started. Before there was a CoreOS, most people were utilizing containers, and at that time that was pretty much Docker on general-purpose operating systems. We're talking about RHEL, we're talking about Fedora, we're talking about other distributions. And this made total sense, because that's what we had at the time. However, once CoreOS popped up, it became really, really obvious that having this larger infrastructure to essentially run your applications on a host that you really don't want to be modifying kind of makes sense. If you think about it from the point of view of travel, you have a lot of options. Here in Brno, you can get to and from the conference a lot of different ways: you can take public transportation, you can get in a car, you can even ride a bike. All those things make a lot of sense. But you have other options as well. You could use a seaplane, you could drive a tank. These things are options, right? But they kind of don't make sense for those use cases, or at least they don't make as much sense. And that's what we have here: container-focused operating systems, the first being CoreOS. So I believe this was actually the very first release of CoreOS. The version is 94.0.0. Does anyone know why it was 94.0.0, not 1.0 or 0.1 or anything like that? Any ideas? Other than Andrew, but I will call on Andrew if nobody else knows. Go ahead, Andrew, why is that the case? That is correct. Day 94: this was the first release, right? And it really took off pretty quickly.
Once people started playing around with the idea of having this immutable host to run their containers on, with very light management that they actually had to do, it was simple. I mean, it really blew up. It went everywhere. So we're going to talk a little bit about the projects and the tooling that went along with CoreOS, so not necessarily directly in CoreOS, but either directly in or around it.

The first one I think pretty much everyone is going to be familiar with, and that's etcd: a distributed key-value store, used a lot with Kubernetes, OpenShift, and a heck of a lot of other stuff. I won't jump too far into that. Ignition: how many people here are familiar with Ignition? All right, cool. It's actually extremely foundational to both Container Linux and Red Hat CoreOS. There's going to be a talk on this a little bit later, but basically think of cloud-init, but way earlier in the boot process. So you can actually provision disks and do all that kind of good stuff that you can't do post-installation with Ignition. And as we go through an update flow, you're going to see where this shows up in terms of updating Red Hat CoreOS, and how it can be used for other processes as well.

Mantle was also a big one, and this is one that we use in Red Hat CoreOS. It's kind of a catch-all of tools that are used to either build items, test operating systems, or push images around. I think there are about seven or eight projects underneath the Mantle GitHub repository; they share each other's code, which is why they're all in one location. kola is a big one, ore is a big one: kola is kind of the unit testing of the operating system, and ore is a big one for moving images around into different clouds and that kind of good stuff.

fleet: anyone here use fleet? Have used fleet, I should say? No? Okay. So fleet was an early one with CoreOS, and this was about scheduling systemd units onto a cluster so that you could run things. So it wasn't just, you know, Ansible putting systemd units on all your machines and running something; you could actually schedule where things were running within that cluster, and it also acted as a distributed systemd, in a sense. It is not really used much any longer. Part of that was a stampeding-herd type issue: if you had a whole lot of machines, they'd all be checking into etcd, and even though etcd is really resilient, if you have thousands and thousands of machines just hammering and watching and holding connections open, even etcd can't always keep up with that. So there was a move away from it.
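To make that concrete, here is a rough sketch of what scheduling a unit with fleet looked like. The service itself is a made-up example; the fleetctl commands are the standard ones from the fleet docs:

```sh
# Hedged sketch: scheduling a systemd unit across a cluster with fleet.
cat > hello.service <<'EOF'
[Unit]
Description=Hello World

[Service]
ExecStart=/usr/bin/docker run --rm busybox /bin/sh -c "while true; do echo Hello; sleep 1; done"

[X-Fleet]
Conflicts=hello*.service    # keep multiple matching units off the same machine
EOF

fleetctl start hello.service   # fleet picks a machine in the cluster to run it on
fleetctl list-units            # shows which machine the unit landed on
```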
Torcx is actually a really interesting one as well. If you think about OpenShift and Kubernetes, they are tied very strongly to the container runtime. If your focus is on running Kubernetes or OpenShift, you need a very specific version of, at that point in time, Docker, but whatever that container runtime is. Users who wanted to run container workloads outside of, say, OpenShift or Kubernetes generally wanted a different version of Docker, the latest supported version from either RHEL or maybe from the company itself or some other application, while Kubernetes tended to hold back on older versions of Docker. Torcx was a way within CoreOS and Container Linux to essentially say: okay, we're going to ship with a version of Docker so that our customers will be happy, but if they install Tectonic, we can switch it over to an older version which can be used by Tectonic, so that the cluster works at that level. Docker was not the only example; there were some other things that you could also swap around like that, but that was the biggest one. On the Tectonic side, we're going to see that the Tectonic team actually figured out a similar process, but using a different tool set.

rkt (Rocket): another container runtime. Toolbox: since most of these types of operating systems don't come with a lot of developer tools, or, I'm sorry, not developer tools, debugging tools, on purpose, to keep the size small, Toolbox is a container-based system to pull down debug tools on these systems if that becomes necessary. And the last two here that we're going to talk about: Flannel, essentially a network fabric for containers, and Operators, which came later. There have actually been a decent number of talks here at DevConf on Operators, but the high level is that Operators essentially extend Kubernetes or OpenShift, providing more functionality through the Kubernetes API and then being able to act on those items within the cluster itself. So obviously there's a ton more, but those are the more relevant items.

So when CoreOS was releasing this, there was plenty of concern that this new way of doing immutable infrastructure was going to be so exciting and so interesting, and everybody would want to do it, that Red Hat really needed some kind of response. And the direct response that came out of Red Hat was Colin Walters' OSTree project, which created Atomic Host. As Atomic Host got a little more mature, we added more and more items to the umbrella organization of Project Atomic, and it just became "everything that's container-related should kind of be there." This was also at the same time that OpenShift was getting really big. There were container-related things in OpenShift, but it wasn't really a good idea to put OpenShift under the Atomic umbrella, and so they kind of merged and came together as Atomic OpenShift, which is a little bit closer to where we are today.

For Atomic Host, we had three different versions. We had Fedora Atomic Host, which spins out upgrades every two weeks. We had RHEL Atomic Host, which has a faster life cycle than RHEL itself and comes out every six weeks. And we had CentOS Atomic Host as well, which is basically a rebuild of the RHEL Atomic Host work we were already doing. All three of those share OSTree as their foundational immutable-infrastructure solution, where it locks down the packages that are there and only allows you to install new ones with rpm-ostree. The big thing about OSTree is, next slide, transactional updates. If anybody went to Colin's talk yesterday, his dream of an update is: as the host is updating, somebody can pull the power and it's still going to be okay. You will still have a host that you can boot into, just by rolling back to the previously installed operating system. And we like to throw around the metaphor of git for the operating system, where you have snapshots of the state of the entire OS as an image, and you can move between those snapshots and move to different sets of packages.
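As a rough sketch of what that git-like flow looks like day to day with rpm-ostree (an illustration, not a full walkthrough):

```sh
# Hedged sketch of the transactional update model on an OSTree-based host.
rpm-ostree status      # show the booted deployment and any others on disk
rpm-ostree upgrade     # stage the new OS version alongside the running one
systemctl reboot       # boot into the newly staged deployment
rpm-ostree rollback    # if something is wrong, point back at the previous one
```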
And on top of that, you can also add new things to make a "dirty tree," right? You can add in a couple of different versions of Docker; if you'd rather run with a very, very recent version of Docker, you can, if you can get the repo pointing at it. And at any point, you can always roll back to a previous version that you know works.

So real quick, I wanted to go back and note how fast the response was. This was Tuesday, April 15th, 2014, which I believe was roughly seven months after the first CoreOS release. This was announced, and we were already trying to compete with our current coworkers. "You were that big of a threat?" Yes.

So Project Atomic did a lot of the same type of stuff. We just talked quickly about OSTree and rpm-ostree, but System Containers were very similar to Torcx, in that the attempt there was to be able to switch to different runtimes, or use a different kubelet, or things like that. But instead of doing it through moving files around or pointing something at a link somewhere else, it was done using runc. This was before Podman: using runc directly, not a full container runtime beyond runc itself. The point to really hit there is that any time you're doing immutable infrastructure, it's not that you want everything locked down forever. You want controlled mutability, right? You want to be able to change a couple of small things, and it's a matter of what you want to allow people to change, without turning the whole thing back into vanilla RHEL, where you've got everything installed that you could possibly use. But there are cases where sometimes you are going to need one more thing for one more customer.

Other things we had there: the Atomic CLI, which sat in front of a lot of other tools and was meant to be a management-type tool. An administrator would log into these nodes and run the Atomic CLI to do things like pull down and install pods, pull images, install System Containers, do an update of the system, et cetera, et cetera. Buildah, which I think people are pretty familiar with, for building OCI containers; skopeo for moving them around; CRI-O, which is an implementation of the CRI, the Container Runtime Interface, for Kubernetes; and Cockpit, which actually still is a really awesome UI and integration for management of systems. And again, lots and lots of other tools that our teams owned and still own, but we won't jump into too much of that.

So last year, like two days after DevConf, we flew home and started getting calls saying, "Hey, did you hear the news?" Because we bought CoreOS. Yeah, there was a guy we saw on Twitter who cut off his beard; it was pretty interesting. But after this happened, so did this: OpenShift 4.0 development kicked off, and what we knew was that the Atomic team and the Container Linux team were coming together, and we were going to be helping significantly on OpenShift from that point forward. So no longer was there going to be an operating system that somebody installs OpenShift on; there's this combined thing.

Actually, before we go any further, the steps to install OpenShift previously, or currently, are: you have hardware or a cloud, right? You install your operating system, whatever that might be. We'll use RHEL as the example. So you install RHEL, get your keys in there, you're all ready to go, everything's great. You get OpenShift, and you install OpenShift using, in this case, an Ansible installer. So you have OpenShift on top of RHEL, and your next move is, well, day two: I need to apply some updates. I need to get this machine updated because there's some CVE, or there's some update that I'm required to have. Great, you update the operating system, you verify that works, you update OpenShift... oh, something's not matching now. Or maybe it is, and everything is great, but there's a lot more management overhead in all of that.
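For reference, that Ansible-driven flow looked roughly like this. A hedged sketch: the inventory path is a placeholder, and writing the real inventory (hosts, node groups, variables) was most of the work:

```sh
# Rough sketch of the old OpenShift 3.x install flow with openshift-ansible.
git clone https://github.com/openshift/openshift-ansible
cd openshift-ansible
ansible-playbook -i /path/to/inventory playbooks/prerequisites.yml
ansible-playbook -i /path/to/inventory playbooks/deploy_cluster.yml
```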
A big jump here was that we were going to start combining these things together. No longer would it be multi-level, where you have to manage everything at multiple levels, but instead: you've got your cloud or your hardware? Great, install OpenShift. Okay, great, now manage OpenShift. It's all one thing now. There are no more of these multiple layers that you have to deal with. Everything is managed through the cluster, not at every single level where you must manage those things individually. And that really came from the outstanding work that the CoreOS folks had done with Tectonic, making sure that the version of Tectonic really defined which version of Container Linux was running underneath. They were really married and really in sync, because the developers could just go downstairs and say, "Hey, I need this from you guys."

So as the teams came together, we had to come up with a philosophy, and a lot of the philosophies we already had matched right off the bat: the first three there, minimal, immutable, and effortless management. Minimal, in this case, means things like dynamic languages are not desired on the image, nor are extra development tools, or extra debugging tools, or anything else that's not going to be used on at least a monthly basis. It's just not needed, and all it does, at least for updating, installation, and management, is cause more download time, more packages that need to be revved, more things to track, all that kind of stuff. So just like Container Linux, just like Atomic Host, we wanted to keep those things as small as we possibly could.

Another big thing is the simplified install. With Red Hat products we tend to have Anaconda, which is actually really, really awesome. However, when we pulled CoreOS and Atomic together, we realized we have a very, very specific use case for installation, and that use case is: get it on disk, don't touch it. Giving people a lot of options to mess with the installation was counterproductive for us, at least at this time. So we actually ported the Container Linux installer (is that what it was called, or is it coreos-install? Yes, coreos-install), which is a shell script that essentially dd's the image onto the drive. The biggest portion of this installation script is a GPG key; pretty much everything else is five, maybe ten lines of code to get this onto the disk. Now, that might sound a little weird, right? Okay, you've got it on the disk, but the disk might be significantly larger than what you just dd'd down. That's what Ignition is there for. Once Ignition starts running on first boot, it will start provisioning those disks, growing things; basically, it takes over from that point. If anything needs to be laid down, if anything needs to be modified, that is where it happens. And you're going to see a little bit later how that ties back into OpenShift as well.
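The essence of that script, heavily simplified: the URLs and the target device here are placeholders, not the real values.

```sh
# Hedged sketch of what coreos-install boils down to; the embedded GPG
# public key is the bulk of the real script.
curl -fsSL "$IMAGE_URL" -o image.raw.gz
curl -fsSL "$IMAGE_URL.sig" -o image.raw.gz.sig
gpg --verify image.raw.gz.sig image.raw.gz      # verify before writing
zcat image.raw.gz | dd of="$TARGET_DISK" bs=1M  # write the image raw to disk
# On first boot, Ignition grows partitions and applies its configuration.
```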
The immutable thing, as Jeff said and as Colin said yesterday, is more about controlled immutability: not everywhere, but in certain locations. You can still change configuration files. You can still update SSH keys. But if you're trying to mess with system-related items, it's not going to work; it's not going to allow it. Unless you take extreme measures to do so, of course, at which point it's kind of sort of not Red Hat CoreOS any longer.

And the effortless management: for those who have used, or not used, Container Linux in the past, it's kind of like a continuous rolling release that just keeps on going, and you can opt in or opt out of these updates being applied to your machine. So you're not sitting around going, "Oh, well, I just read the release notes of XYZ and it had this issue, and I need to go ahead and install these updates." Instead, it's either applying automatically, or you're sitting there going, "Okay, I got an email," or "I saw that a CVE was posted online," or whatever, and you go ahead and say, yeah, apply this update. On the Atomic Host side, we ended up adding a timed update mechanism that acted similarly, but not exactly the same; you could also opt in and consistently get new OSTree updates.

And the last two here. Opinionated actually came from Container Linux: Atomic Host was not nearly as opinionated as we should have been. We started off pretty opinionated, but over time allowed it to become a bigger set of things. We were focused, and then over time it was, "Well, okay, yeah, for that use case we'll go ahead and add those libraries. Yeah, all right, for that other use case we'll go ahead and add that service." And so instead of being focused on either running the cluster or running container workloads directly, it started becoming a container-optimized operating system that does a lot of other things, that maybe you shouldn't do but you probably will, as opposed to being very, very opinionated. So when our friends from CoreOS came over, they nicely called us out on it, and we made sure to stick with being opinionated from that point forward. Everything that we choose is for a very specific purpose, and we're not choosing multiple items: when you see the runtime, we've got one; when you see the cluster, we have one set of cluster components; when you see any of those items, we have one, for one specific reason.

And the last thing here is focusing on the cluster. We embed the OpenShift packages, and as I said before, we're pushing management to the cluster. But it's not just that; we're also versioning with the cluster. Both Tectonic and Atomic had issues when trying to work, I'm sorry, both Container Linux and Atomic had issues when trying to work, with the clusters above them. Part of this was that they were their own operating systems with their own use cases, with the schedulers on top with their own use cases, and so the runtimes didn't always match, the libraries didn't always match. Basically, there were a lot of hoops that had to be jumped through to get these things to work. By versioning the container runtime, the operating system, and the container scheduler all together, we can continuously ensure that they always work in lockstep, and they all feel like one entity, not multiple entities that are separated.
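Going back to that controlled-immutability point for a second, here is roughly what it looks like in practice on an OSTree-based host. A sketch; the paths and the config change are just illustrations:

```sh
# Hedged sketch: the OS content is read-only, the config areas are not.
findmnt -no OPTIONS /usr    # shows "ro,...": /usr is mounted read-only
touch /usr/bin/example      # fails: Read-only file system
echo 'PermitRootLogin no' >> /etc/ssh/sshd_config   # /etc (and /var) stay writable
```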
So a lot of the things that we had to choose from came from both Container Linux as well as Atomic Host, right? One of the choices we were forced to make was: are we going to stick with the updater mechanism that CoreOS used, initially the Chromium OS work, which eventually became a fork of that, or are we going to stick with OSTree, which was more native to Atomic Host? Luckily, we both came to the decision fairly early on that OSTree was a little bit easier to use, we had a little bit more control over what was going on with it, and we really liked the ability to have multiple different images on the disk, not just two. There had been cases where CoreOS had updated, then updated the other partition, and at that point you kind of run out of backup plans; if both of them don't work, you wind up with a bricked machine. With OSTree, you can lay down as many updates as you need, and as long as you've got the previous one that worked, you can keep adding more until a new one works. So a little bit more flexibility and a little bit more resiliency, right?

The toolbox we thought was amazing, it had some really cool tools, and so we really wanted to move that over. And we made some other decisions about the stack: we're going with CRI-O, for reasons that I can't really get into, but I'm sure other talks have covered why we love CRI-O. Ignition was kind of the way that Tectonic installed, and since OpenShift and Tectonic were working very closely together early on, the architects said, "Well, we're going with the Tectonic installer route, and therefore Ignition had better be on whatever host you choose." So that was laid down, and we've learned to love it; it's come a long way, and it's super, super useful. Buildah is what we use for building container images, but we don't actually have the Buildah package on the host; it's used as a library inside of the OpenShift packages. And let's see: skopeo, I believe, is also included; it's been vendored into Podman, so it's there, but it exists as part of Podman, right?
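Going back to the toolbox for a second: a hedged sketch of what that flow looks like on a host with no debug tools baked in. The tools run inside the container are just examples, not a fixed list:

```sh
# Hedged sketch: pulling debugging tools down in a container only when needed,
# instead of baking them into the minimal host image.
toolbox            # pulls a tools container and drops you into a shell in it
# inside the container, the usual debugging tools are available, e.g.:
tcpdump -i eth0    # example: capture traffic
sosreport          # example: gather diagnostics
```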
So we picked up a couple of other things here: Podman, which there have been a lot of talks around, obviously; the kubelet and other tools from OpenShift; and CoreOS Assembler, which is what actually creates Red Hat CoreOS as well as its image payload, which we'll talk briefly about later. We also, well, we don't include the machine config daemon in the image, but it does run on the host, and it's specifically made for Red Hat CoreOS. It's something that senses changes (we're going to go through a flow of that shortly) and is able to intelligently apply updates as well as changes to configuration files and all that kind of good stuff. And then pivot, which is sort of a layer in between a lot of these other items, which we'll also see in that flow.

So we can talk very briefly here, we'll go through these pretty quickly, about the differences. Container Linux, right? Designed for general container workloads, just like Atomic Host, like RHEL: literally all the things. You can do whatever you want, but it's on you to make sure those things are updated and set up the way you want them to work. And Red Hat CoreOS, which is designed to power a container scheduler, in this case OpenShift specifically; that's kind of what it boils down to, being part of OpenShift now. We are really the abstraction layer between the hardware and everything above.

So we are a component of OpenShift, no different than the kubelet is a component of OpenShift, no different than the logging layer is a component of OpenShift. It is all together; it is all included. Obviously hardware and cloud are not, but everything from Red Hat CoreOS up is all one unit when it comes down.

We have some other items here. Container Linux provides multiple runtimes; with RHEL, whatever you install is there. Red Hat CoreOS has specifically one, and it's CRI-O, which is an implementation of the CRI, like I said before, so it is specifically meant for Kubernetes and OpenShift, not for other container workloads, I should say.

Some of the other choices for operating systems: a lot of people really loved the Gentoo-based kernel and the content that came with Container Linux. We are a RHEL shop, so we ship RHEL content, and so Red Hat CoreOS is going to have the RHEL kernel in it, it's going to have RHEL RPMs, it's going to have OpenShift RPMs, and it's going to be managed, if you need more packages, by rpm-ostree, though generally nobody gets to choose that except OpenShift. [Audience question.] Yes, sorry: the question was, are we going to have support for bare metal for Red Hat CoreOS? And yes, that is one of the things we are working on; that is one of our items to deliver.

So this is basically the comparison between the three operating systems, and we've got a couple more to go. This is an interesting one, in my opinion. Container Linux used Omaha for its update mechanism. Is anyone here familiar with Omaha, or the Omaha protocol? Of course, yes. So it's actually the same thing that Google uses for updating Windows components in some cases, and it was used (I don't think it still is, or at least not as much) with Chrome OS for delivering updates to Chrome OS boxes. Essentially it was reused, along with CoreUpdate, to keep Container Linux updated. We know how RHEL is updated, right? It's the tried-and-true RHEL repositories and/or Satellite, et cetera. For Red Hat CoreOS, it's something called the MCO and the MCD. The MCO is the machine config operator; again, that word "operator" that you've probably heard a heck of a lot while you've been here. It lives up in the cluster, and it's in charge of passing items off to the MCD, the machine config daemon, which will then consume that information, make an intelligent decision about whether it's possible to actually act on it, and if it is, update the system: update files, update configuration, and restart, so you have a brand-new, well, not brand-new, an updated system coming back into the cluster. Or it will say, "No, I can't do that," and respond back up to the cluster: "You need to destroy me and reprovision me, because what you're asking isn't possible."

On security: the security model for Red Hat CoreOS comes from RHEL. The main reason we're using the RHEL kernel and RHEL content is that we want to make sure all the features we support in RHEL are available in Red Hat CoreOS. We want to make sure SELinux is in enforcing mode; Container Linux has SELinux on it, but it's in permissive mode, so it might as well not be. And we want to make sure that any CVE updates that we're getting from RHEL can be easily applied; having to do all that work twice for every CVE we find just seems like madness. So that is another reason for the content choice, and another reason to keep the package set small.
One of the reasons we want to get things off the host as much as we possibly can is to make sure we've got a minimal footprint: fewer places where somebody can come in and try to find one of those CVEs. We want to make sure that the cluster is in control, OpenShift is in control, of exactly what goes onto the host. Having one big button that appears in the OpenShift console that says, "You are able to update; when you are ready, push this button," and everything just rolls out to every single node on the cluster, is just the experience we want to deliver to customers. We know that's a big deal, because some updates for OpenShift in the past have gone a little less than ideal, and so we know we have to land this correctly, and this is our mechanism for making sure we get it right.

And real quick, before we move to the next slide: SSH access is something that is more or less optional for Red Hat CoreOS. The only time you should ever actually SSH into a Red Hat CoreOS machine is if something has gone terribly wrong; that is, you need to go in there and debug something, or there is literally some unknown, weird thing occurring that just could not be foreseen. Other than that, there is very little reason to actually get onto those machines, and that is by design. If you do SSH into these machines, that's okay; nothing bad is going to happen to you or to your machine. But what will occur is that the cluster will note: this machine has been accessed, so it may not be exactly the way it should be. In the future, what we are looking at doing is starting to taint these machines after SSH access, because, hey, something was wrong, right? So it shouldn't be part of the cluster; you're accessing it, you're looking at something that went wrong. We don't want to destroy the machines, but we want to make sure they're not continuing to run pods in the future if people are accessing them.

So, we actually already went through that one, so we'll hit this. This is how the updates work, in a very simplified version, and I'm going to hand-wave at the higher-level OpenShift stuff. Think of what I refer to as the MCO, the machine config operator: that is an operator running within OpenShift that is in charge of knowing what should be on these hosts, what should be on my workers, what should be on my masters. It ends up providing, well, holding, I should say, a document called a machine config, and this is an Ignition configuration file plus an OS image URL. The Ignition configuration covers things like dropping files on the system, modifying configuration, adding or removing SSH keys, that kind of thing. The OS image URL is literally an OCI container reference, using not a tag but a hash, so that we know exactly what we are getting, to a container in a container registry that houses the OSTree content the machine is supposed to update to. So this is not "we're going to fetch some repos and do something," or "hey, there are some files sitting on some server somewhere that we're going to download." No: this is updates coming through OCI containers to the host itself. It's basically containers all the way down with OpenShift from this point forward.
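As a hedged sketch of what such a machine config roughly looks like: the name, file contents, and image digest here are all made up for illustration.

```sh
# Hypothetical MachineConfig: an Ignition config (files, units, keys)
# plus an osImageURL pinned by digest, applied through the cluster.
oc apply -f - <<'EOF'
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 50-worker-example
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 2.2.0
    storage:
      files:
        - path: /etc/example.conf
          filesystem: root
          mode: 420
          contents:
            source: data:,example%20setting
  osImageURL: quay.io/openshift-release-dev/machine-os-content@sha256:<digest>
EOF
```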
So the MCD, the machine config daemon, senses, "Hey, this machine config that I'm attached to has changed. Can I actually lay these down? Can I make these Ignition changes I'm being asked to make?" If the answer is no, it essentially tells the cluster: "I can't do what you're asking; it's impossible for me. Get rid of me, reprovision me the way you want, and continue from there." And when the answer is yes, it makes those changes to config files, does whatever it needs to do, and then passes off to pivot, a command which pulls down the image referenced in the OS image URL, extracts that content, and passes it to rpm-ostree, which then basically installs the OS side by side with the currently running one. So there is no modification to the current operating system: it updates the bootloader, hits reboot, and reboots into the new system, which sits side by side with the old one. If there is anything wrong, it can go right back to the original one and report to the cluster, "I'm sorry, I couldn't do it; you need to reprovision me." But if everything goes as planned, which it does, it reports back up: "I have switched over, I have made the changes you wanted, I am in the state you want, I match that machine config." That update could include the kubelet, security updates, all sorts of things. Now, the information coming down comes from what we talked about earlier, Cincinnati, which has a graph of all the components and the versions that match each other, and that essentially gets distributed down to the different operators, flows down, comes through here, and the updates happen from that point. I know it's simplified, but this is as simple as we can get it.

So, another way to look at it: over-the-air updates work the way we explained on the last slide. Basically, we have a running system here, and we pull down a container out of Quay. We mount that container; there's no process running inside of it, it's basically just a flat container with some files. We pull in the updates that the operating system needs for the next version, then we point at that next version and reboot. Simple as that. We call it the pivot, and it's super easy and fast.
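From the node's point of view, the step the MCD kicks off looks roughly like this. A sketch; the image reference is hypothetical:

```sh
# Hedged sketch of the pivot step on a node. pivot pulls the referenced
# container, extracts the OSTree commit it carries, and hands it to
# rpm-ostree, which stages the new OS next to the running one.
pivot "quay.io/openshift-release-dev/machine-os-content@sha256:<digest>"
systemctl reboot   # boot into the new deployment; the old one stays for rollback
```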
So we only have a couple of minutes left before questions; let's jump through the roadmap quickly. Some things we want to do that we haven't gotten to fully yet: streamlining, which is always happening with every project and product. We want to remove more packages that we know are not absolutely required. At this point, what we have is Python and other dynamic languages that have snuck in through other packages. I'm a big Python fan, but we don't want people SSHing onto the machines to run Python; you can run Python on Red Hat CoreOS in a container, and that's what we'd rather you do if you're going to do it at all. And of course, we want to continue to shrink the operating system image down.

We want to get to Fedora CoreOS. We want you to be able to use the same technology that we're using, in a community-supported version that does what the community wants. Fedora CoreOS is trying to fulfill those roles for the people who were using Container Linux and were not just running Tectonic. Some people actually had it as their desktop for a little while, and there were plenty of other use cases, like Mesos, or just a general container runtime. We want to enable that for people in the Fedora space and allow them to help guide things that will then eventually become part of Red Hat CoreOS; this is that entry point. And we want to make sure that we form a bridge with the Always Ready OS work that Stef Walter and Don Zickus are working on, because the way they are gating packages and the way they are building things is really amazing, and if we get it optimized for the smaller package set of Fedora CoreOS, we think we can really build something bigger from that.

Plug-in support for kola: we want to make sure there are other things you can do with the testing framework that is part of Mantle. And we want to increase the usability of Ignition; Colin called out that hand-editing a couple of files is going to throw a big configuration problem down the road, because it's human-readable versus machine-readable. The last items we have here (obviously there are more things we want to do): essentially, we want to continue expanding to other platforms, so bare metal obviously is a really big one, and we also want to expand our CI and OS-level testing. Right now we mainly test in AWS, with some prototype testing in other clouds and other locations, and we want to make that a lot stronger and a lot better. We want to add more tests to kola as well; we do a lot of testing with kola, but there can always be more. And on top of that, we want to port kola to more clouds, since it currently has a limited set of clouds it tests on, so we have some work around that as well.

These are just some sources, a tiny list of things that we own; we own probably about seven more pages of these things, but these are some of the ones we talked about today. And really, do we have any questions? We covered a lot; we didn't go super deep. But yes, sir. So the question was: do we have a real release date for bare metal for Red Hat CoreOS? Yes, we do have that release date; I'm not sure that we can talk about it at DevConf, though, sorry. Sorry, that sounds like a fancy party; I hope I get to be there. Other questions? Yes, sir.

[Inaudible audience question about running a legacy security scanner that relies on SSH access to the hosts.] That's a really good question, so there are two things that come to mind. The question is: you've got an ancient security scanner that you need to run against the host, and it uses SSH access to do that today. How would we suggest going about that? And that's a great question. First off, I would say that with the version we're pushing out now, they can still do that, and that's okay; it's going to show up as an alert that someone accessed these machines, and it's not going to stop anything. Obviously, in the future, we want the machine to be tainted because it's been accessed. But there are a couple of things. Number one, it sounds like an opportunity to have a security scanner that doesn't rely on SSH or things like that, though obviously that's not totally realistic. So even when we go down the path of tainting these machines and pulling them out of the cluster temporarily, tainting does not mean the machines are unusable. It just means that, okay, it's been accessed, it took a set of actions it's not really supposed to, so the security tool actually set off a security event. And to get machines back in, let's say you did this to half your cluster, it takes a command to get them untainted and back in the cluster; in fact, you could do that quickly along with the scanner if you needed to. And since the tainting is future work, my guess would be that there would also be a way to
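For reference, the kind of taint being described maps onto standard Kubernetes taints. A hedged sketch with a made-up taint key:

```sh
# Hypothetical example of the described behavior using standard taints:
# NoSchedule keeps NEW pods off the node; running pods stay where they are.
oc adm taint nodes worker-0 ssh-accessed=true:NoSchedule
# and the single command to bring it back (the trailing '-' removes the taint):
oc adm taint nodes worker-0 ssh-accessed=true:NoSchedule-
```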
temporarily disable the tainting for a certain amount of time for something like that. But we definitely want to push people towards: don't access these; they are only for the cluster, and any SSH should be treated as some sort of a security event. The other half of that is that we need to be very vocal about the fact that we're going to be doing it this way, rather than just hiding it in a doc, so an admin doesn't just run into it down the road and can't figure out why no new containers are scheduling to that node. The taint that we're applying is a "no new scheduled containers" taint, so the running containers will still be there, and that's even more mysterious to somebody. So we want to make sure we're very vocal, we talk about it a lot, and it doesn't just come out of nowhere, which is why it's future work. [Audience question.] Sorry, we have to repeat the question. The question was: is there any support for, or thought of supporting, things like VMware? In this case it's an internal installation of OpenShift, and things like AWS or an external cloud are not an option. So we do have a roadmap to continuously add new clouds, whether that's external or internal. I don't know exactly where each one falls on the roadmap, but VMware is very popular, so I tend to think it's going to show up; I just don't know exactly when. If you talk to a PM, that would probably get you a better answer than we can give. Anybody else? Okay? Thank you!