Hello, everyone. I assume that most of you will know this guy. He used to be known as Mr. SELinux, but he is now Mr. Containers at Red Hat, and he will tell you something about containers in production. So welcome, Dan Walsh, and enjoy the talk.

OK. So the talk is called Containers in Production, but really what we're going to do is talk about some of the stuff the container team is working on to make containers work better in production, and basically improve the overall container landscape.

So first of all, look at this word: PDF. What does this word mean to people? Nothing to do with containers. But when we see the word PDF, do you think of Adobe? Some do, some don't. For the most part, when we see PDF, we just think of it as a standardized format for viewing documents. Would it have been as successful if the only way to view these things was Adobe Reader, and the only way to create them was Adobe Acrobat? The beauty of PDF is that I can look at a PDF inside any web browser, or in my mail. I have lots of different tools, like Adobe Reader and Evince, that I can view PDFs with. And I can create PDFs from just about any application: I just pull up the print window and say "create a PDF". It's a standard. I'm not tied to one product chain to create these things.

Let's look at another example: Linux. Would the Linux operating system have been as great if it was just Red Hat Linux? I argue no. If the only Linux was Red Hat Linux, we wouldn't be anywhere near where we are now. Competition makes us stronger. The ability to have multiple distributions makes us stronger. Ubuntu, SUSE, Android, Debian, Fedora, Alpine: everybody's learning, everybody's innovating. The innovation of Linux, and the innovation of PDF, came from making lots and lots of tools able to use them. And the way this happens is that things get standardized. The Linux kernel is, in a sense, the standard of Linux. PDF was the standard way to view and print documents.

Now let's look at containers. In every talk I go to today, I want to hear the word containers, clear, just containers. Just like Linux, just like PDF, you shouldn't have to say a particular company's name before "containers". Containers are a Linux construct, so we should just call them containers. We have to move to that. So how do we make Docker containers into just containers? We need to make containers as generic as PDFs. Containers need to be open.
There can't be one entity controlling containers. So how do we do that? First of all, we have to create the equivalent of Adobe Reader: a standard way that everybody can run containers, run these applications. Everybody's building these applications, so once we have them, we need a standard way of running them. Luckily, that's already happened. The Open Container Initiative (OCI) started a couple of years ago and defined the standard way for an application to be defined and executed: what the layout on disk is supposed to look like, and a JSON file that describes what the application is. That's the OCI Runtime Specification. A whole bunch of community teams are on it, and it's under the Linux Foundation.

A lot of people have probably heard of runc. runc is the standard way to run Linux containers, the namespace and cgroup type of containers, and it's the default implementation; again, it's under the OCI. As of Docker 1.11, Docker uses runc underneath the covers. Right now Docker 1.12 is what we're shipping in most of our products, as well as in Fedora and CentOS; just about everybody's shipping Docker 1.12, and Docker 1.13 came out about a week ago. But every one of these tools is using runc by default for running Linux containers.

There are also emerging technologies for running containers under VMs: virtualization-based, or KVM-based, containers. runV, from a company called Hyper.sh, and Clear Containers, from Intel, have also built mechanisms for running containers, and they can run the exact same images. Because we have a standard, they can run those images underneath KVM, in a very different way, but they can look at the JSON, look at the rootfs on disk, and boot it. So there are now multiple ways to run containers. Basically, these are the alternatives to the default Adobe Reader.

Next, application definition. The real critical thing, and it's really the innovation that Docker came up with originally, is that they came up with a standard format for images that everybody more or less agreed to, and they put images out on container registries. The Hadoop application and the Fedora distribution are different types of applications, but they're all in the same standard bundle format. We have to get a standard on that. Right now the de facto standard people talk about is the Docker v2 image format, but the OCI has been working on a formalized version of that, and that's the OCI image format. Release candidate 4 of version 1.0 is available right now, and it's critical that this gets standardized, because if this thing starts to change, all of a sudden we end up with incompatible applications; we could end up with the equivalent of deb packages versus RPMs. This is the real critical standard, and we're real close to having it right now. Sadly, the holdup at the moment is caused by Red Hat. But it lets us use standard images across container registries.

So we're real close to having those two things: the standardized way to run these applications, and the standardized way to store them. Those are the two really, really critical items.
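To make the runtime half of that concrete, here's a minimal sketch of what an OCI bundle is and how a compliant runtime executes it. The rootfs is populated here by exporting an existing image, and the container name is just a placeholder:

    # An OCI bundle is just a rootfs directory plus a config.json.
    mkdir -p mycontainer/rootfs
    docker export $(docker create busybox) | tar -C mycontainer/rootfs -xf -
    cd mycontainer
    runc spec        # generates a default config.json describing the app
    runc run demo1   # any OCI-compliant runtime can execute this same bundle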
The next one is not so much a standard as an open way of doing it: an open transport we've been working on. When you think about it, how do I pull these images? If images are stored on a container registry and I want to get them onto my machine so I can run them under runc or runV, how do I get them from point A to point B?

A few years ago, Antonio Murdaca started building a tool called Skopeo. Originally we had opened a pull request against Docker. If you want to inspect an image on a registry right now using Docker, you have to pull down the entire image. All we wanted to do was go out to the registry and pull down just the JSON, and read some of the content, basically the labels: what does this application do, what's its entry point? We just wanted to pull down a little thing instead of megabytes or even gigabytes of data just to look at the application. Basically, we wanted a docker inspect --remote. The pull request got rejected. Docker came back to us and said, well, you could build tools to do that outside of Docker; we don't want to build it into Docker. So we said fine, and we started building Skopeo.

Skopeo was originally written to pull down just the JSON from a container registry. Skopeo comes from a Greek word for remote viewing, so it's kind of a fitting name. In Project Atomic we use Skopeo a lot underneath the atomic CLI for viewing a container's JSON on the registry. But over time, since Antonio is into these things, he said, well, if I can pull down the JSON, why don't I just pull down the image? So we slowly evolved the package to do the equivalent of a docker pull. Skopeo can now pull and push images to and from registries, and it's a fully separate tool from Docker.

Then we got together with CoreOS and started talking about this. They were looking at how they could do more standardized tooling inside of rkt, and we said, why don't you use Skopeo? They said they didn't really want to use a command-line tool for this; why don't you split the ability to move images around into a separate package? That became containers/image. So now containers/image is a library that people can use for pulling and pushing images. We'd like it to become the standard image-moving technology, and we're getting people to work with us on standardizing at a much smaller level than the big daemons. Right now you have to work at the Docker daemon level, or at the containerd level; we want to work at a much smaller library level for these base functions, and moving images around is one of them. CoreOS is still investigating whether they're going to use this technology inside of rkt, by the way.

So we have an image on disk that we can execute, we have images on registries, and we have a tool that can move images back and forth.
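A quick sketch of the two Skopeo operations described above; the image name and destination path are just examples:

    # Inspect an image's JSON (labels, layers, etc.) without pulling the layers:
    skopeo inspect docker://docker.io/library/fedora:latest

    # Copy an image from a registry to a local directory, roughly a docker pull:
    skopeo copy docker://docker.io/library/fedora:latest dir:/tmp/fedora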
But after we pull an image down, we have to put it onto disk. Skopeo pulls the image down, but it has to be stored on disk somehow so that runc or whatever else can execute it. So we also wanted to look at how we store these things on disk: where do I store, or explode, container images after I pull them?

A little history. A few years back, under Project Atomic, we built a tool: atomic mount CONTAINER_NAME MOUNT_POINT. I don't know if anybody's played with it, but it's really kind of cool. The reason we did this is that we wanted to be able to examine an image stored inside of Docker on disk without running a container. Most people, when they want to examine what's inside of a container, start the container. But as soon as you start the container, if there's hostile stuff in there, it has most likely taken over the process that went into the container. So if you want to examine or scan container images, you really want to just mount the container image somewhere on disk. To build this tool, we actually go into device mapper or overlay and mount the image up. So we inspect what's going on inside of Docker and mount it, but it's not race-free. For instance, if I mount an image onto disk, I can then go into Docker and remove that image, and Docker gets all confused, because the image is being used somewhere it doesn't know about. The problem is that Docker does all of its locking in memory. The Docker daemon, or the containerd daemon in the future, has no concept that someone else might be using the same storage at the same time.
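Here's roughly what using that tool looks like. A sketch; I believe the unmount subcommand is spelled atomic umount, but treat the exact spelling as an assumption:

    # Mount a container image's file system for scanning, without running it:
    atomic mount fedora /mnt
    ls /mnt/usr/bin      # inspect or scan the contents safely
    atomic umount /mnt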
So we had an effort over the last year to take that storage outside of Docker, use file locking instead of in-memory locking, and allow multiple applications to share the image store at the same time. As we went down that road, we found that trying to get this to always work with the Docker daemon was going to be much more difficult, so what we wanted was a proof of concept of storage outside the Docker daemon, on the host, using file locking, allowing multiple tools and applications to share the storage. Eventually we took the Docker graph driver code and built a new package called containers/storage. containers/storage handles the copy-on-write file systems, overlay, overlay2, device mapper, all the different back ends currently used in Docker, and basically lets us use them outside of the Docker daemon.

So now we have the basic components: we can pull an image from a registry, store it on disk, and run the application. We have the standards built, so now let's innovate.

The first thing we're introducing over the next few months is the concept of system containers. Why system containers? First of all, on an Atomic Host you're not able to install packages. I know Colin's going to say you are able to install packages, but let's say for right now that we want to leave the Atomic Host the standard way it comes in the distribution, where all you can do is upgrade to new versions of it. So if we want to ship new software onto an Atomic Host, it has to come in the form of containers.

Well, what a lot of people want to run is Kubernetes, and Kubernetes requires two services: etcd and flanneld. The key thing about etcd and flanneld is that they need to start and run before the Docker daemon runs. They set up the network and the network connections that the Docker daemon is going to use, so they have to run earlier than Docker. But we can't install them on the system unless they come in the form of containers, and they can't be started by Docker. So what do we do? These containers can also be run from read-only images; I'm going to talk a lot about read-only images later in the talk, but since they can be run read-only, we can treat them differently. Finally, upstream Docker has an ordering problem, which is why it can't run them: upstream Docker has no real way to say that this container has to start before that container. If you have one container providing a service for another, Docker has issues with that. systemd does not have that problem; systemd was designed from the ground up for starting services in the correct order. We need etcd to run first, flanneld second, the Docker daemon third, and Kubernetes fourth. System containers provide that.

The atomic command lets you install system containers. It uses Skopeo to pull the container image from your favorite registry; any registry you want, you can put system containers on it. It then stores the content on top of OSTree, which lets us share storage between container images without wasting a huge amount of additional space. It uses systemd unit files to start the container, and it can use runc to run the container. runc is on every host, so if you want to run the image inside of a container, runc is there to do it. But it's optional: there's no reason that, after I pull a container image to a host, I have to run it inside of a container. You can just run a standard chroot.

There are also lots of containers we might bring down to a system where we actually want to modify the system. A couple of years ago I gave a talk where I called those SPCs, super privileged containers. Basically it's just software running on the host that comes in the form of a container image. You can use system containers to install content on the host. One of the things we get asked about is kernel modules: package them up into a container image, use atomic install --system, have them come down, run a single command that modifies the kernel, and you're done. Those are system containers. This is how you would install a system container.
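A hedged sketch of that install flow; the exact image names depend on which registry you point atomic at, and the service names are what I'd expect the generated unit files to be called:

    # Pull etcd and flannel as system containers (stored via OSTree,
    # started by systemd unit files, executed with runc):
    atomic install --system etcd
    systemctl start etcd
    atomic install --system flannel
    systemctl start flanneld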
So if you want to install etcd: atomic install --system etcd, enable the service, start etcd; atomic install --system flannel, start it, go. systemd makes sure etcd starts before flannel, and systemd makes sure both start before the container runtime.

We're coming out in the next month with Docker running as a system container; we're going to run Docker inside of a container. The reason is that lots of people are asking us to run different versions of Docker. Some people want the latest Docker, some people want older versions. So we looked into it: can we run a full Docker client and Docker daemon inside of a container? Turns out you can. So we can do atomic install --system for it, just like we did with etcd.

At Red Hat we've also been discussing a new thing called standalone containers. Standalone containers and system containers are, for all intents and purposes, the exact same thing; it all depends on whether you want to give them full privileges on the system or no privileges on the system. The idea of standalone containers is packaging standard RHEL content, and you could do the same with CentOS or Fedora content, as containers. Think of a MariaDB database or an Apache web service: you could package those up as containers. Everybody understands that. A daemon runs in the container, on standard ports and volumes, prepackaged for standard use cases. The idea is that in the future you don't run Apache or MariaDB locally on your system as an RPM. If you want to run a single instance of Apache on your system, have it come down as a container and run it. It'll listen on port 80 on your system, full service. If you're running just one Apache, why not run it inside of a container? Apps can be packaged in the OCI image format rather than as RPMs. Examples: MariaDB, Postgres, Apache.

We're also looking at things like languages. In RHEL we've had Software Collections, but now we're looking at standalone containers, so you'll have versions of PHP: you can go out and get version A of PHP if that's what your application wants, or version B, and there will be a whole library of different container images you can use. A goal with these standalone container images is that they can work both in standalone mode and in orchestrated mode. Whether we can achieve that with every one of them, we don't know. But again, we're looking at a future where, if you want to run just an Apache web service on a single node, you pull in this container, run it locally, done. As we move forward in RHEL, these types of containers could potentially have a different release cadence than the host operating system. So start to think about separating out the host operating system and having it advance at a different cadence than your software applications, your software collections. It's sort of the next generation of Software Collections.

Let's continue looking at the container image store, and copy-on-write file systems. Copy-on-write is a key thing for people developing containers, but it has a lot of problems. On RHEL systems, up through the current release, the default has been device mapper. Btrfs has a similar problem, in that they both break memory sharing.
When you run an application on a system, it loads, say, a shared library into memory, and the kernel knows, based on the inode of the object you're loading and the file system it's on, that if two programs open the same file and load it into memory, it's the same file. The problem is that when you run on Btrfs or on top of device mapper, the kernel does not know they're the same, because the device nodes change. Even though two copies are physically the exact same content, device mapper and Btrfs fool the kernel, so the kernel can't share the memory. For those of you who use Java: JREs are massive, and they use a massive amount of memory. On a standard system, if you loaded ten JREs, they'd be shared. Running on top of a copy-on-write file system, the kernel can't tell that the same things are being loaded into memory, so you use up ten times as much memory. This becomes really critical for large app-hosting environments like OpenShift, so we had to find a different way of doing copy-on-write file systems for something like OpenShift to share memory.

The one everybody's excited about right now is OverlayFS. It fixes the shared-memory problem, but it had an SELinux issue. I worked hard with a kernel engineer on my team, and we fixed it, so overlay now works with SELinux on Fedora; we have to backport that to RHEL. So over time we're probably going to move the standard back end; in fact, we have a proposal for Fedora 26 to move from device mapper to overlay as the default.

The problem with overlay is that it's not a POSIX-compliant file system, so there are potential problems it can cause applications running on top of it. The most critical one we found was RPM. The way overlay works is that there's a lower level and an upper level: the lower level holds the read-only content, and the upper level is where you write content. When you open a file for read, if it's not on the upper level, overlay goes down to the lower level and opens the file descriptor there. When you open a file for write, the first thing overlay does is copy the file up from the lower level to the upper level. Where RPM got in trouble is that it opened its database for read, getting the lower-level copy, and later in the same process reopened it for write. A copy-up happened, and RPM ended up with two file descriptors pointing at different files that, in RPM's mind, were the same file. Things went wrong when that happened. That's the kind of behavior that makes it not POSIX-compliant. There are other issues, but we had to change RPM to stop doing that. I fixed it for RPM, but we don't know what other applications have the same problem.

Another potential problem with copy-on-write is performance: every time you write, there's this whole code path you go through for the copy-up. Whether it's a huge performance hit is probably debatable, but you are going through a whole bunch of code to do copy-on-write.
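You can see the copy-up behavior described above directly with a plain OverlayFS mount; a small sketch, which needs root and an overlay-capable kernel:

    mkdir lower upper work merged
    echo original > lower/db
    mount -t overlay overlay -o lowerdir=lower,upperdir=upper,workdir=work merged
    cat merged/db              # the read is served from the lower layer
    echo changed > merged/db   # the first write triggers a copy-up
    ls upper/                  # 'db' now exists in upper/ as a new inode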
Another problem with running images the standard way is that every time you bring up a new node, it has to go out and grab all these images, whether with docker pull or Skopeo or something like that, pulling gigabytes of images down to every node for any image it wants to run. Anybody who's played with Docker for any amount of time has sat there for 20 or 30 seconds the first time they run an application, just waiting for the image to pull down.

Yet another problem with copy-on-write file systems is that the process inside the container can rewrite its own executable. If I'm running Apache inside of a container, Apache can rewrite the Apache binary. If it gets hacked, the attacker can rewrite the Apache service, and the next time you start the application it comes up pre-hacked, ready to go. The hacker has basically put a back door into the application.

So we're proposing that in production we move to read-only containers. The reason you need copy-on-write is that while you're building the application, it needs to be able to write to /usr. Once you go into production, you don't want to be writing to /usr; you really don't want to be writing to the container image at all. You want to be writing to volumes that are mounted into the container. Read-only gives you better security. In production, most images should be immutable; they shouldn't be changing. So as we move to production, we should be moving more and more to read-only containers, going from the dev part of DevOps to the ops part of DevOps.

And if we move to read-only containers, we get the potential for network storage support: if I'm not writing to the content, I can start to share it. We have a proposal right now around the OpenShift registry. We built this last summer: a service that watches images arriving at the registry. As soon as an image arrives, the tool explodes the image onto a rootfs and scans it for vulnerabilities. So if someone does a docker push to the registry, we scan the image and look for vulnerabilities. If there are vulnerabilities in it, if someone accidentally tried to ship a vulnerable application, we block it from being pulled: basically put it into quarantine and tell the user to clean it up and fix the problem. If it passes, we can leave it on storage and make it available to any container runtime out there via network storage: NFS, Ceph, Gluster, OSTree.

So imagine a world where all your nodes share container images via NFS or some other kind of network storage. Kubernetes says, I want to fire up 15 new instances of this application. It doesn't have to pull the entire application down to disk anymore; it just starts it up. All of a sudden, all this pressure for smaller and smaller container images, because it takes 10 or 30 seconds to pull one down, goes away: the image is always there. Not only that: right now, when there's a vulnerability, everybody asks, where are the vulnerable images, how many containers do I have with that vulnerability? If I had network storage for all my images running in production, I update the image once, and instantaneously all the nodes have the fix.
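You can already approximate the read-only model with today's Docker CLI; a sketch, where the image name and paths are examples, and some images need extra tmpfs mounts for pid or lock files:

    # Immutable image: the root file system is read-only; anything that
    # legitimately needs writes comes in as a volume or tmpfs.
    docker run --read-only \
        --tmpfs /run \
        -v /srv/www:/var/www/html \
        -p 80:80 registry.example.com/httpd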
So moving to read-only containers in production, and moving to network storage for our images, suddenly solves a huge number of problems. Now people out there are saying, well, that means I have to manage network storage. Most likely, if you're running in a microservices environment, you already have to have shared storage, because you want those containers to move from one machine to another. They have to use NFS, Ceph, Gluster, some kind of shared backing store. So why are we treating images differently than volumes?

Improved storage: one of the things we want to do, now that containers/storage is separate, is support shared file systems, as I said, and instantaneous updates, which I already covered. The bottom line is: get rid of copy-on-write file systems in production wherever we can. There's a demo I'm not going to have time to show, but it basically shows everything I just explained working.

So let's take a look at container image development tools. How do you guys build containers right now? How does everybody build containers? Everybody in the world builds containers one way: a Dockerfile and docker build. That's the standard way. But who cares how you build a Docker-format image? Who cares how you build an OCI image? Docker images are basically tarballs with JSON files associated with them. I should be able to build those with hundreds of different tools. Yet the only way to build them right now is docker build with a Dockerfile, and I think Dockerfile is a very, very, very crappy version of bash. That's the only tool we have, and we're stuck in that world. It's been out three and a half years, and almost nobody's changed that. Why? Because there was no way to get at the container storage. Container storage was locked in under this big honking daemon. Who wants to reimplement how overlay works, how device mapper works? If we break container storage out from the daemon into containers/storage, all of a sudden we free that up: you can build container images without a container runtime daemon. If I'm building an image, a tarball of a rootfs plus a JSON file, why do I need a client-server operation to do that? If we just explode our content onto a rootfs, tie it up into the standard format, and push it to an image repository, isn't that a huge step forward?

Ansible containers. Ansible has a totally different way of describing an application: Ansible playbooks and Ansible roles. I'm no expert on Ansible, but a lot of people really like it, and it's a lot more descriptive than a Dockerfile. The Ansible team is working on containers right now, and sadly, Ansible Container at this point has to talk to the Docker daemon, because there's only one way in the world to build these things: you have to talk to that client-server operation to build a container. We're working with them right now to say, let's look at this differently; let's look at lighter-weight tools using containers/storage.
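Since these images really are just tarballs plus JSON, a lighter-weight tool only has to produce that layout. A sketch of looking inside one, assuming a reasonably current Skopeo with the oci: transport:

    # Lay an image out on disk in OCI format and look inside it:
    skopeo copy docker://docker.io/library/busybox:latest oci:/tmp/bb:latest
    ls /tmp/bb/blobs/sha256/   # layer tarballs plus JSON config and manifest
    cat /tmp/bb/index.json     # JSON pointing at the image manifest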
OpenShift's source-to-image is another way to build applications. It's really cool, in that a whole ton of people really don't care what a Dockerfile looks like. If I'm a Node.js developer, why do I have to care about yum install lines in a Dockerfile? So OpenShift has source-to-image, which basically says: as a Node.js developer, you build your application and check it in, and OpenShift in the background goes off and builds it. Every time something gets checked in, OpenShift takes the application, recompiles it, packages it up into a container image, and pushes it out to Atomic Registry or whatever your favorite container registry is.

We're also building, and some of it was actually built yesterday by my buddy up there, a thing I call core-utility containers; it's not actually called that right now. The basic idea is: instead of using a crappy version of bash, let's use real bash for building these things. There are only a couple of primitives you need. You need a "from", basically the FROM line: what image am I basing my application on? And you need a "commit". So I pull down the image I'm going to base things on, I add content, commit, add content, commit, looks good, and Skopeo pushes it out to a container registry. Done. I can optionally run that whole process inside of containers if I want, or I can say the heck with that, I trust my code. I can run commands like dnf or yum install with --installroot pointed into the rootfs, which means I can build an application container without having dnf inside the application. Right now we ship all of our containers with the build tools that built them. It's as if you shipped an a.out file and had to include all of GCC with it. That's what we're doing right now, and it's one of the reasons these applications swell: they all have to carry dnf.

I'm already running out of time, so, moving along: we don't care what the output of your build process is, as long as it's an OCI image.
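A sketch of that daemon-less flow with stock tools; the dnf step is real, while the final commit step is stated conceptually, since the tool being described here hadn't been released yet:

    # Populate a rootfs on the host; dnf never needs to exist inside the image:
    rootfs=$(mktemp -d)
    dnf install -y --installroot "$rootfs" --releasever 25 httpd
    # "Commit" is then just tarring up the rootfs and attaching the JSON
    # metadata (entry point, ports, labels) before pushing with Skopeo:
    tar -C "$rootfs" -cf httpd-layer.tar .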
Signing: I'm going to skip over this section, but basically, because we have containers/image, we can start to innovate on the way you sign containers. We're not locked into one entity and one way of doing it. You can read the slides on your own; there are a couple of YouTube videos on how we're doing signing with this stack. I wanted to get to this last part, which a lot of you have probably heard about, with the controversy and the different stuff going on.

We've started an effort called CRI-O. When we look at what components you need in order to run OpenShift and Kubernetes in an environment, when I say I want to run containers in OpenShift or Kubernetes, what has to happen? I need container image transport, container image storage, and an OCI runtime. I need those three key components to be able to run my Kubernetes environment. Finally, I need some kind of management API to trigger all this stuff. We started this as OCID a while ago; the name has since changed to CRI-O, but the daemon is still called ocid, which stands for the Open Container Initiative daemon. I like to call it OCD, which in America means something a little different. Here's the GitHub site. The ocid daemon implements the Kubernetes Container Runtime Interface, the CRI.

A little history here. The first version of Kubernetes only talked to Docker. CoreOS came out with rkt, went to Kubernetes and Google, and said, hey, we want a different container runtime than Docker underneath Kubernetes; take all that code where you call the Docker Engine API and add in the rkt API. Google said, time out: there are going to be other container runtimes, so let's standardize on what we're willing to talk to; we'll define an API for talking to container runtimes. That started the CRI effort. CRI is basically the mechanism by which Kubernetes talks to these container runtime daemons.

So when OpenShift gets a request to run an application, it tells Kubernetes to go execute a pod. A pod is a fancy way of saying one or more containers running together in a single unit; Kubernetes doesn't talk about containers, it talks about pods. Kubernetes communicates with ocid and says, hey, I want to run the nginx pod, or Apache, whatever. ocid goes out and pulls the image using containers/image, stores it onto disk using containers/storage, and then starts the pod using runc. A standards-based container runtime for the Kubernetes workflow, an alternative to Docker and rkt. It is not a Docker fork, and it is not a Docker replacement. We built a tool that's optimized for running Kubernetes.

In the future, hopefully the very near future, we're going to start building kpod. kpod is a tool that runs behind the scenes and lets you manage the whole environment: management tooling for administering your container runtime. We hope, within the next six months, to move OpenShift and Kubernetes to at least allow an alternative to running Docker underneath the covers, namely CRI-O. The ocid package is now available in Fedora Rawhide; as of last Thursday it actually worked for me for the first time. So we're going through the testing phase right now. Kubernetes has a really good container runtime test suite that we have to pass. After that, we have to pass the OpenShift test suite, and the OpenShift test suite is insane: they start hundreds of thousands of containers simultaneously. So we want to pass those as we move forward with CRI-O.

By the way, going back a little bit: as we start to swap out the Docker daemon for CRI-O, if we go to system containers, we can package ocid up as a system container, just like Docker as a system container. So on your Atomic Host: oh, I don't want to use Docker anymore, I want to try that new thing called CRI-O. You turn off the Docker service, you atomic install the CRI-O service, bring up your Kubernetes again, and you're ready to go.

There are two must-see talks today. There's a talk by Giuseppe Scrivano on atomic system containers at 12 o'clock, and there's a full introduction to CRI-O at 17:00 by Antonio, which goes much deeper. So if you're interested in what I just talked about, you'll get a deeper dive into each one of those.

I have five minutes left, so I'm going to open it up to questions. Yes. [Question from the audience.] That's what all the cool kids use now, yes. Yes, it's written in Go. The question was whether the libraries, and ocid, CRI-O, are all written in Go. All of these tools are written in Go right now.
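A hedged sketch of that runtime swap, assuming CRI-O ships as a system container; the image and service names here are assumptions:

    # Swap the container runtime under Kubernetes on an Atomic Host:
    systemctl disable --now docker
    atomic install --system crio
    systemctl start crio
    # ...then restart the kubelet, configured to talk to the CRI-O socket.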
The atomic tool is written in Python, because that's what I do. So you get one of these. Next.

Yes: is CRI-O going to use containerd underneath? If you diagrammed what containerd looks like, or what Docker envisions containerd looking like, and you took our earlier diagrams of what CRI-O looks like, I think they've done some copying. So why would I want that? Right now, the kubelet talks to the Docker daemon, which talks to the containerd daemon, which talks to runc. If I run Swarm, Swarm talks directly to containerd, which talks to runc. Our vision of the world is that the kubelet talks to ocid, which talks to runc: we eliminate a layer. We have proposed to the containerd team that we give them our containers/storage and containers/image, moving those out so they're not locked into that huge frigging daemon; we don't want to lock them into our big daemon either. So far Docker has turned that down, because they say they have some other features, but we're going to keep trying to work with them. If we could work at that level, perhaps; but we don't want a daemon that's under the control of anybody. This is fully open; there are many of us working on ocid. In my worldview, in my perfect future, I would get rid of ocid and I would get rid of the Docker daemon: Kubernetes could use containers/storage, containers/image, and the open container runtimes and do it all itself. One of the problems we have right now when we get bug reports is that everybody points fingers: that's the kubelet screwing up; no, that's ocid, or that's Docker screwing up, and no one knows which one it is. If we could get rid of those layers, I think that would be good. But right now, no, we don't plan on using containerd.

Yes. Well, the contract is the file; the Dockerfile is your contract, I guess. Yes, absolutely, a shell script could be your contract. It's going to specify which ports the container listens on. Remember, there are two primitives: commit and from. The commit is where you specify the JSON. All the other syntactic stuff inside of a Dockerfile is basically what ends up in the JSON file associated with the container bundle, and you'd specify those on the command line with the tool that does the commit: who's the maintainer, what's the entry point, what ports are open, what environment variables are set. So it's all going to be just bash. Or an Ansible playbook could serve the same purpose. Either way, there's a file that says: this is how I built it.

Anybody else? Yes. I can't hear you; could someone repeat what she said? OK, I have to repeat the question. The question is: could I start any third-party application in a read-only container? Any application that needs to write somewhere would have to write to a volume. But what I'm really talking about keeping read-only is /usr, and most applications don't write to /usr. If you have an application that writes to /var/log and /var/lib, MariaDB for example, then you would volume-mount those directories in and be able to write to them. The only time you're really writing to /usr, writing to the image, is when you're building the application.
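To make that "just bash" contract concrete, here's a purely hypothetical sketch: from and commit are the two primitives named above, not released commands, and I'm assuming from hands back a writable rootfs path:

    #!/bin/bash
    # Hypothetical build script; the script itself is the contract.
    rootfs=$(from registry.example.com/fedora)      # hypothetical 'from'
    dnf install -y --installroot "$rootfs" httpd
    commit "$rootfs" \
        --entrypoint /usr/sbin/httpd \
        --port 80 my-httpd:latest                   # hypothetical 'commit'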
Yes. Repeat the question: the question is whether we will support remote, federated base layers, like the patches submitted to Docker to support that for Linux. If you noticed, the person pushing that pull request is the guy who wrote Skopeo, and I think he's going to support it inside of his own tool. In containers/image, we want to be able to give that to everybody. Right now Microsoft does not want its base image to be available anywhere except through microsoft.com, and Red Hat wants the same basic thing for Linux: Red Hat basically says we don't want the RHEL 7 base image to be available to anybody unless you pull it from redhat.com. The reason is that's how we enforce our licensing, and that's how Microsoft wants to enforce its licensing. Docker has said yes to Microsoft and no to Linux, even though it's pretty much the same code path. They might have reasons, they might not, but right now that's the standoff, and we've been trying to work with Docker to get that capability in for a while.

Yes. Did I just say that on video? Did I just say that CRI-O will be ready in six months? That's our goal. Our goal is to have CRI-O available; well, it's available right now in Fedora 26. We need people to play with it, and we need people to contribute to it. There's a demo; come to the demo later on and you'll actually see something happening in CRI-O that doesn't happen in Docker right now: Clear Containers running underneath CRI-O. So it's working now, yes.

[Question about builds.] Repeat the question: what tools are available right now to build CRI-O images? There's no such thing as a CRI-O image. A CRI-O image is an OCI image, and an OCI image is a Docker image. We can't change that format; the format needs to be standardized, so I can pull an image from docker.io, from redhat.com, from Fedora, from my Artifactory, from my Quay.io, however you pronounce that name. Whoever has a registry, we need to be able to pull from it. Those things have to be standardized; if they start to fragment, the whole container world starts to fall apart.

That's it. Thank you for coming.