Hello everyone, and thank you for coming to this talk about containers and standards in Cloud Foundry. There are three of us speaking today. My name is Konstantinos Karavolias and I am an engineer on the Garden team in London. [The other two speakers introduce themselves — partly inaudible.]

We are first going to talk about the situation before standards and what the pain points were. Then we will go team by team and discuss which standard each team adopted and how the team changed as a result. Finally we will conclude with what we have gained and enabled at the platform level.

So, from users to containers. User processes run inside containers in Cloud Foundry, and under the hood the flow is the following. When a user does a cf push, the CLI talks to the Cloud Controller, which in turn asks the Diego brain to schedule a long-running process. The Diego brain picks a cell inside the cluster and places the request there. Locally on the cell, another Diego component called the Rep picks up the request and instructs Garden to create the container that will finally host the user process.

At a high level, creating a container requires a few things. First we need an isolated container file system. Then we need a configuration file, which in container terminology is called the runtime spec. Next we invoke a runtime engine, which takes these two and creates the container environment. We also have to make sure the networking is in place so the container has connectivity, and finally we start the user process.

After 2012 and before 2015, containers started being adopted heavily. Almost all companies across all industries started experimenting with them, because they bring huge benefits to both developers and operators: great performance, portability, efficient resource management, and many other advantages. Docker allowed people to start experimenting with this old technology, and I say old because container technology had been more or less clear since 2008: a container is an isolated process that runs using kernel features like namespaces and cgroups, plus security mechanisms like capabilities and seccomp.

So on one side there was a very strong incentive to put containers into production as fast as possible, and on the other side it was clear how to create them. The problematic part was the software that spins up the container. There were not many implementations out there, and the ones that existed were not open and not standardized. Each company that wanted to adopt containers had two options: write such software from scratch, at high cost, or use a non-standard implementation and essentially opt in to vendor lock-in. That was one of the reasons containers did not move into production systems faster.

Cloud Foundry, of course, picked the first option. A team was spun up in London, called Garden, and the component it built was called, back then, Garden Linux. On one side, Garden Linux implements the Garden API that the rest of the Cloud Foundry components speak to; on the other side, it implements everything required to spin up a container.
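To make the "containers were already clear since 2008" point concrete, here is a minimal sketch of that idea in Go: an ordinary process started in its own kernel namespaces. This is only an illustration, not anything Garden ships — a real runtime also sets up cgroups, capabilities, seccomp and a root file system — and it is Linux-only and must run as root.

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

// A container, at its core, is an ordinary process started in its own
// kernel namespaces. This sketch launches a shell in new PID, mount, UTS,
// IPC and network namespaces.
func main() {
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWPID |
			syscall.CLONE_NEWNS |
			syscall.CLONE_NEWUTS |
			syscall.CLONE_NEWIPC |
			syscall.CLONE_NEWNET,
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```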
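And here is a hedged sketch of the creation recipe described above — an isolated root file system plus a runtime spec, handed to a runtime engine. It uses the OCI runtime-spec Go types and shells out to runc; the bundle path and the container ID "demo" are made up for illustration, and this is not how Garden itself is wired internally.

```go
package main

import (
	"encoding/json"
	"os"
	"os/exec"

	specs "github.com/opencontainers/runtime-spec/specs-go"
)

// Sketch: write a minimal config.json (the OCI runtime spec) into a bundle
// directory that already contains a rootfs, then ask runc to run it.
func main() {
	spec := specs.Spec{
		Version: specs.Version,
		Root:    &specs.Root{Path: "rootfs", Readonly: true},
		Process: &specs.Process{
			Cwd:  "/",
			Args: []string{"/bin/sh"},
			Env:  []string{"PATH=/usr/bin:/bin"},
		},
		Linux: &specs.Linux{
			Namespaces: []specs.LinuxNamespace{
				{Type: specs.PIDNamespace},
				{Type: specs.MountNamespace},
				{Type: specs.NetworkNamespace},
			},
		},
	}

	data, err := json.MarshalIndent(spec, "", "  ")
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("/tmp/bundle/config.json", data, 0o644); err != nil {
		panic(err)
	}

	// The runtime engine consumes the bundle (rootfs + config.json)
	// and starts the user process inside the container.
	cmd := exec.Command("runc", "run", "--bundle", "/tmp/bundle", "demo")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```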
This Garden Linux component was large, because it had a lot of functionality built in — runtime, networking, file system — and it was very complicated, because it interacted with the kernel, which has always been a tricky part of software engineering. So, without standards, we were very busy maintaining this component and very busy patching it. When there was a security vulnerability it had to be patched instantly, and without any external help. On top of that, other tasks like porting to Windows were hard, because you could not actually reuse the code. From an engineering perspective there was not a lot of room inside the team for innovation and creative work.

Of course this problem was not unique to Cloud Foundry; many other companies were facing exactly the same problem. That is why the Open Container Initiative, OCI, was founded. Its mission is to standardize container technology and make it not vendor specific. It was founded in June 2015 by Docker and around forty other members, and it is a Linux Foundation collaborative project. It started with the runtime specification. The runtime spec defines the life cycle of a container — how you start, delete, and kill a container — and its environment: namespaces, volumes, networking. Two years later it reached its first stable version. runC is the reference implementation of the runtime specification, donated by Docker. The GitHub repository is on the slide if you would like more information.

The Garden team saw the benefit of using this OCI standard and was one of the early adopters. The architecture changed: we renamed the component from Garden Linux to Garden runC, and we delegated a large part of it to this external reference implementation, so the component became smaller and more flexible. One part was rewritten to support OCI, and in general it was great, because support from the community came with it — when there was a vulnerability or anything else, there were people all over the world working on it. This has already been in production for a year, maybe more. And with that, I'm giving the floor to my colleague.

Hey, how's the sound? Okay, sounds okay. So: Garden and CNI. As a Cloud Foundry operator — and if you were here for the previous talk you might have heard about the container networking project — you want your containers attached to existing networks. You would like fine-grained network policies between those containers, and maybe from containers to other things. You might want to use advanced features of your software-defined network, if you have something like NSX-T. And as engineers working on Cloud Foundry, as members of the teams working on this, we want to avoid reinventing the wheel. We want to enable vendors with their fancy technology to add that stuff to Cloud Foundry without us having to do a whole lot of work. We would like to build our software around open standards, and that is the motivation for making use of the CNI project.

CNI was originally developed by CoreOS as part of the rkt project, and it is now a Cloud Native Computing Foundation project. There is a small but growing community, with contributors from many different organizations — Pivotal, CoreOS, Red Hat, Tigera, and Weaveworks are all maintainers. The idea behind it is that it is the simplest possible interface between a container runtime, like Garden, and a network implementation.
In practice this means that the runtime is provided with some configuration — in Cloud Foundry that configuration comes from BOSH or Diego. That config gets transformed into a representation that is understood by the network plugin, and the network plugin then does the hard work of actually setting up the network. It's that first arrow on the slide, the one that curves down, that is the CNI boundary.

Before I get to what's common in CNI plugins, the actual interface looks like this: a few environment variables get set, which tell the plugin which container this is and where its network namespace lives, and then more configuration is passed in as JSON on standard in. So it's a very simple, UNIX-style format.

You can also chain these plugins together. A common pattern is that the high-level config is passed to the first network plugin, which then delegates to a second plugin that might be doing IP address management or other things. The same parameters, maybe with some new network config, are passed along. The IP address management plugin perhaps goes off and talks to a database, talks to DHCP, or looks at a file on disk, figures out which IP address to use, and returns it to the main network plugin, which does the work of setting up network devices, routing rules, and all those sorts of things.

The core repo is here on the slide. It specifies the API boundary — what the runtime should do, what plugins should do — and it has conventions for optional, more advanced features. It makes it easy to implement a CNI plugin and to write a runtime that consumes one. There is also a separate repo, which we recently split out, with a bunch of reference implementations of network plugins: things for interface creation, for IP address management, for tuning and other meta-type things. There's a new port-mapping plugin that's pretty cool, and there's a sample if you want to build your own.

In addition, there's a big ecosystem beyond the reference implementations in our own repos: Cloud Foundry, Kubernetes, and others consume CNI as container runtimes. On the other side of the API boundary you've got plugins — some open source, some proprietary — that tie into various software-defined networking systems. This is really where the power of the whole thing comes from.

Within Cloud Foundry, as a result of this work, we got to shrink Garden again. We removed a bunch of functionality from Garden Linux, where it used to set up networks itself, and now there's a thin layer that just calls out to CNI, and the CNI plugin goes and does the hard work. What this lets us do is swap in various network plugins. What's pictured here is Silk — if you were at the previous talk you might have heard about it — which is the batteries-included core CNI plugin, and it does all of that as a very simple plugin. Within Cloud Foundry as a whole you actually have a whole stack of plugins delegating to each other through API boundaries: Diego calls the Garden API; Garden runC has its own little API to call the garden-external-networker component, which then exposes the CNI API; and in red on the right you've got the swappable CNI plugins that you could tear out and replace with your own network plugin. This is the path towards integrating third-party network solutions into Cloud Foundry.
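To make that calling convention concrete, here is a hedged sketch of how a runtime might invoke a CNI plugin: a few CNI_* environment variables identify the container and its network namespace, and the network configuration goes in as JSON on standard in. The plugin path, network name, container ID and netns path are all made up for illustration; the "bridge" and "host-local" plugins referenced in the config are two of the reference implementations.

```go
package main

import (
	"bytes"
	"os"
	"os/exec"
)

// Sketch of the CNI calling convention from the runtime's side.
func main() {
	netConf := []byte(`{
	  "cniVersion": "0.3.1",
	  "name": "example-net",
	  "type": "bridge",
	  "ipam": { "type": "host-local", "subnet": "10.255.0.0/16" }
	}`)

	cmd := exec.Command("/opt/cni/bin/bridge")
	cmd.Env = append(os.Environ(),
		"CNI_COMMAND=ADD",
		"CNI_CONTAINERID=example-container",
		"CNI_NETNS=/var/run/netns/example",
		"CNI_IFNAME=eth0",
		"CNI_PATH=/opt/cni/bin",
	)
	cmd.Stdin = bytes.NewReader(netConf)

	var result bytes.Buffer
	cmd.Stdout = &result // the plugin prints its result (IPs, routes) as JSON
	if err := cmd.Run(); err != nil {
		panic(err)
	}
	os.Stdout.Write(result.Bytes())
}
```

The "ipam" block in the config is also where the delegation described above happens: the bridge plugin hands that section to the host-local IPAM plugin and gets an IP address back.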
If you're interested in integrating your own network solution like that, you can learn a lot more — read our docs, there's a link at the bottom of the slide. The basic idea is that you make a BOSH release for your CNI plugin and then update your deployment manifest to set some properties, and now Cloud Foundry will deploy with your networking plugin. This is how we're going to be integrating other plugins into Cloud Foundry, which I'll talk about in a second. For the wider CNI community, we're polishing up support for IPv6 right now — it's mostly done. We're talking about adding a new verb to the standard, and about adding conformance test suites so that we can know plugins are doing the right thing. Then we're going to be doing some release management improvements. In Cloud Foundry there are going to be more CNI BOSH releases coming out: Calico, I think, is working on one, and VMware is working on one for NSX. This is kind of exciting. So that's the end of the CNI stuff.

All right. The last bit is to talk about OCI images in Garden. To start with the definition of what an OCI image is: Konstantinos introduced OCI, the Open Container Initiative, before. The OCI image spec came a bit later, around the end of 2015, and it essentially describes and standardizes the container image format. At the time OCI images became a thing, there were already two main standards. You are probably familiar with the Docker image format, which has been around for a while and is very popular — there are a number of different variants of Docker images out there — and there is also the appc standard coming from CoreOS. On the other hand, the OCI image standard does not define distribution. With Docker you get the image format but you also get the distribution mechanism; OCI does not have that in its scope. So we are only talking about what an image looks like, not how you download an image.

An OCI image is structured very much like a Docker image. The layers themselves are fully compatible with Docker image layers. The metadata are fairly different, but there is a very well-defined process to convert a Docker image to an OCI image, and also to convert an OCI image into an OCI runtime bundle — which essentially means you can take an OCI image and create a container out of it.

A few more things about layers. Essentially, an image layer is a set of files and directories. You can think of it as a diff, and you apply all these diffs on top of each other to get an image. If you are familiar with a Dockerfile, the slide is like a reversed Dockerfile in which the first line is at the bottom. You start from a base image, as you might call it — in this case Python 3.5 — and when you say FROM you are essentially pulling the layers of that base image into the container image you are about to create. The next line adds your code: it adds the contents of the current directory to /myapp. This creates another layer which contains all the files and directories that compose your code base. The last line is pip install — pip is a dependency management tool for Python — and it installs all the dependencies of your application, which again go into a different layer: all the files and directories pip downloads and installs end up in a separate layer that is applied on top of the existing layers.
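As a rough illustration of "layers are diffs applied on top of each other", here is a hedged sketch that assembles a root file system by unpacking layer tarballs in order. The digests are placeholders, and real tooling also has to honour whiteout files that mark deletions between layers, which this sketch ignores.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Sketch: an OCI/Docker image is an ordered list of layer tarballs, each one
// a filesystem diff. Unpacking them in order onto the same directory
// reproduces the image's root filesystem.
func main() {
	layers := []string{
		"blobs/sha256/aaaa...", // base image, e.g. python:3.5
		"blobs/sha256/bbbb...", // ADD . /myapp
		"blobs/sha256/cccc...", // RUN pip install -r requirements.txt
	}
	rootfs := "/tmp/rootfs"

	if err := os.MkdirAll(rootfs, 0o755); err != nil {
		panic(err)
	}
	for _, layer := range layers {
		cmd := exec.Command("tar", "-xzf", layer, "-C", rootfs)
		if out, err := cmd.CombinedOutput(); err != nil {
			panic(fmt.Sprintf("unpacking %s: %v: %s", layer, err, out))
		}
	}
	fmt.Println("rootfs assembled at", rootfs)
}
```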
In terms of metadata, we have a few JSON files. The first one is index.json, which defines an image. You can have one or more manifests — you can see this is actually an array — and a manifest is essentially another JSON file; every file in an OCI image is identified by a content address, a SHA-256 content address. The manifest then points to a configuration file, another JSON file, and to a set of layers — again an array — and the layers are essentially tarballs. The configuration has its own data: it essentially tells the runtime which platform the image is for, for example that it can only run on Linux on amd64, plus some metadata like the environment variables and the command to run when you actually start a container based on this image.

We would like to reduce Garden runC even further and also use OCI, which is why GrootFS became a thing. It is an image plugin for Garden, and it makes Garden even smaller because it removes the previously built-in file system management. To say a few more things on why we did this work: the previously built-in component is called garden-shed — you may have heard of it — and it has largely been an implementation detail of Garden runC. The problem with garden-shed is that the codebase is now about three years old, and it would be really hard to refactor it to add OCI image support. OCI image support, on the other hand, is very beneficial for us because it unlocks a lot of features — you may have heard of OCI buildpacks — and treating all the different container workloads the same way, the buildpack-based ones and the Docker-based ones, is very beneficial because we can do various optimizations in how we handle containers in Cloud Foundry.

So what is GrootFS? It is not a file system, even though its name suggests that it is. We like to call it a container image management tool, which is not really a term you will find elsewhere. It is basically a tool that deals with the file system isolation of containers, so each container gets its own isolated file system. It deals with caching container images, so you don't have to download the same image twice. And it also deals with disk quotas, because Cloud Foundry has the peculiarity that it cannot really use user-based quotas — all the containers run as the same vcap user — so it has to use directory-based quotas for its containers, and this is hard, very hard. We had a few iterations: we started with BtrFS, which did not work very well at Cloud Foundry scale, and what we ended up with, our current implementation, is XFS for the disk quotas — it has a very mature implementation of directory-based quotas, which it calls project quotas — and overlay for the layers and for caching these layers.

In terms of adoption and rollout, we have been running with GrootFS in PWS, Pivotal Web Services, since April this year, I believe, and it is shipped with PCF 1.12. We are now looking into making it GA in the open-source world: there is an experimental ops file for cf-deployment and also an experimental flag for the Diego release manifest generation scripts, and we would like to make it the default. If you have any concerns about adopting GrootFS in your installation, please come talk to me tomorrow at the GrootFS office hours. And with that, I am going to give you to Gabe, who is going to talk about the future.
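To make the metadata chain described at the top of this section concrete, here is a small sketch that walks an OCI image layout on disk: index.json points at a manifest by content address, and the manifest points at the config blob and the layer tarballs. The image path is a placeholder and only one manifest is considered.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// blobPath maps a content address like "sha256:abc..." onto the on-disk
// layout blobs/sha256/abc...
func blobPath(image, digest string) string {
	parts := strings.SplitN(digest, ":", 2)
	return filepath.Join(image, "blobs", parts[0], parts[1])
}

func mustRead(path string) []byte {
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	return data
}

func main() {
	image := "/tmp/python-oci" // placeholder path to an OCI image layout

	var index struct {
		Manifests []struct{ Digest string } `json:"manifests"`
	}
	if err := json.Unmarshal(mustRead(filepath.Join(image, "index.json")), &index); err != nil {
		panic(err)
	}

	var manifest struct {
		Config struct{ Digest string }   `json:"config"`
		Layers []struct{ Digest string } `json:"layers"`
	}
	if err := json.Unmarshal(mustRead(blobPath(image, index.Manifests[0].Digest)), &manifest); err != nil {
		panic(err)
	}

	fmt.Println("config blob:", manifest.Config.Digest)
	for _, l := range manifest.Layers {
		fmt.Println("layer blob: ", l.Digest)
	}
}
```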
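And here is a minimal sketch of the overlay layering just mentioned: the read-only image layers become lowerdirs and a per-container writable upperdir captures everything the container changes. All paths are made up, it must run as root on Linux, and real GrootFS also applies an XFS project quota to the writable layer, which is not shown here.

```go
package main

import (
	"fmt"
	"os"
	"syscall"
)

// Sketch: mount a container rootfs as an overlay of read-only layers plus a
// writable diff directory.
func main() {
	base := "/var/lib/demo-store/layers/base" // e.g. unpacked base image layer
	app := "/var/lib/demo-store/layers/app"   // e.g. unpacked application layer
	upper := "/var/lib/demo-store/containers/demo/diff"
	work := "/var/lib/demo-store/containers/demo/work"
	rootfs := "/var/lib/demo-store/containers/demo/rootfs"

	for _, d := range []string{upper, work, rootfs} {
		if err := os.MkdirAll(d, 0o755); err != nil {
			panic(err)
		}
	}

	// In lowerdir, the leftmost directory is the topmost read-only layer.
	opts := fmt.Sprintf("lowerdir=%s:%s,upperdir=%s,workdir=%s", app, base, upper, work)
	if err := syscall.Mount("overlay", rootfs, "overlay", 0, opts); err != nil {
		panic(err)
	}
	fmt.Println("container rootfs mounted at", rootfs)
}
```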
With all these standards it is pretty great: we get a much smaller Garden, and a much more extensible Cloud Foundry. Because we comply with the OCI runtime standard, we can run containers on runC on Linux, on winc on Windows, maybe on Clear Containers, maybe on other container runtimes in the future. Because we have CNI support in there, we can use all these various networking plugins. And we can plug in various images: legacy droplets for Cloud Foundry, Docker images, OCI images, next-generation buildpacks, all sorts of other cool stuff. This is the story of moving towards standards and making it so we have less code to support, and that is really great. It encourages innovation, it makes cross-platform support much easier, portability is better, and there is less vendor lock-in — these are all great things for an open-source project. And we will take some questions, if you have any.

[Audience question, off microphone.] I did not actually mention a tool — there are many tools, and one of them, for instance, is skopeo, which you could use. What I tried to convey is that there is a defined, standard process for doing the conversion; you can find many implementations of it. skopeo is the one I would suggest, because that is what we use to convert Docker images to OCI images, but there are more than that. Cool, thanks for your time.
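As a purely illustrative follow-up to that answer, a Docker-to-OCI conversion can be done by shelling out to skopeo; the source image and output path here are examples, not anything Cloud Foundry ships.

```go
package main

import (
	"os"
	"os/exec"
)

// Sketch: copy a Docker image into an OCI image layout on disk using skopeo's
// docker:// and oci: transports.
func main() {
	cmd := exec.Command("skopeo", "copy",
		"docker://docker.io/library/python:3.5",
		"oci:/tmp/python-oci:latest")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```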