All right, so maybe let's get started. Hi, everybody. I'm Patrick Chanezon from Docker, and today we're going to talk about the Moby project. We're lucky because many of us from Docker came to KubeCon, so we'll have Stephen Day, who's going to talk about containerd, and Justin Cormack, who's going to talk about LinuxKit.

So let's talk about the Moby project. Docker itself is a platform that sits above your infrastructure, any type of infrastructure, and then runs any type of application, both Linux and Windows. It runs on premise and in different cloud providers. So it provides you that insulation and future proofing, allowing you to add new types of infrastructure and keep that same layer of isolation on top, and then run new types of workloads as the way we build applications evolves. That's what Docker is.

If we look at the details, there are four layers in there. On top of infrastructure, you have the core container runtime, which is called containerd. We're going to talk about that; it's one of the key pieces of the Moby project. And actually, yesterday was a great day, because after one year of work on the project, we announced containerd 1.0. Then on top of that, you have orchestration; in Docker, orchestration is done with a component called Swarm. On top of that, you have Docker Community Edition, which is the developer tools, the tools you use to do docker build and docker push, to build your images and run them. And then on top of that, we're building Docker Enterprise Edition, which is a full-blown platform for enterprises to run and manage containers.

The Enterprise Edition and Community Edition ship on different types of distros, the most common ones that people are using. The Community Edition you can find on Mac and Windows, on your laptop, for development. The Enterprise Edition runs on any type of infrastructure that enterprises typically use. And one of the use cases we most commonly see enterprises use Docker for is to modernize traditional applications. Initially, when containers appeared four years ago, a lot of people were using Docker for CI/CD or microservice-type applications. Nowadays, all the rage is to take your old .NET or Java applications that are running in VMs or on bare metal and containerize them. Once you've containerized them, you can use Docker to set up a full CI/CD workflow and run them on different types of infrastructure. That's a very important use case that we see more and more enterprises adopting.

One of the big pieces of news we announced at DockerCon this year, and this is why there are so many Docker engineers coming to KubeCon, is that in the next version of Docker Enterprise Edition, in addition to Swarm, we're also going to support Kubernetes as a choice of orchestrator. That gives you the best enterprise container security management with Docker Enterprise Edition, the best container development workflow with Docker Community Edition, and a choice of two orchestrators. And it's the native Kubernetes bits that are in there, which means the whole ecosystem of Kubernetes projects can run with it. And this is backed by the core container runtime, containerd, which is now 1.0. Congrats, Stephen and team.

To do that, we followed the traditional Docker innovation model: we developed all our components upstream in the Moby project.
There we collaborate with 9,000 other open source contributors; there are 8,800 PRs per year, so it's a very active set of projects. We then use that as the upstream for Docker Community Edition, targeted at developers, and Docker Enterprise Edition, targeted at enterprises.

The Moby project has lots of different components. SwarmKit is for orchestration. InfraKit is for infrastructure management. LinuxKit is a toolkit that lets you create your own Linux distributions that are very small and very secure. containerd is the core container runtime, which is now 1.0, and it implements the OCI standards for the image format as well as the runtime. runc is the reference implementation of the OCI runtime spec, and it's used by containerd. VPNKit is a toolkit used for network management in Docker for Mac. Registry, libnetwork, there are tons of projects in there. In today's session we'll focus on three of them: containerd, LinuxKit, and, if we have time, InfraKit.

So how did we integrate Kubernetes in Docker? Actually, it's work that happened in the open for more than a year. It started with the containerd 1.0 roadmap, defined with the Kubernetes community at KubeCon in November 2016. Then in March, we contributed containerd to the Cloud Native Computing Foundation, so it's a CNCF project. In April, at DockerCon, we showed LinuxKit and Kubernetes working together with the help of Ilya from Weave, who's sitting right there. Thank you, Ilya. Then we worked with Lantao and the Google team on cri-containerd, which is an implementation of the Kubernetes Container Runtime Interface in terms of containerd instead of dockerd. libnetwork started to implement CNI in September; we talked about that at the Open Source Summit. Then in October, we gave Notary to the CNCF; there was a talk about Notary earlier this morning by David Lawrence and Ashwini. The beta of Docker with Kubernetes support is starting, probably in December, as a private beta, and the GA will be in the first half of 2018.

So containerd and Notary are now CNCF projects. We really love the CNCF. There's a whole community of developers who participated in that. Ilya, your picture is a little bit small there. What we're really excited about at Docker is what will happen for containers, in terms of innovation, when the two largest open source projects around containers, Moby and Kubernetes, join forces to innovate further. At the end of the day, we are one big community trying to get containers to be more mainstream.

One change that happened that's pretty important, and that Michael Crosby led, is that we had a BDFL clause, Benevolent Dictator For Life, in our governance for the Moby project. The BDFL was Solomon Hykes, the founder of Docker. What BDFL means is that when there are technical disagreements between maintainers, they can escalate them to the BDFL. We replaced that with a technical steering committee, which was elected last month, I think.

So let's talk about containerd, LinuxKit, and InfraKit. For containerd, Stephen, who's one of the containerd maintainers, is going to explain how we got to 1.0.

Hey, everybody. Did Patrick or anybody mention that containerd is at 1.0? So this is pretty exciting. We honestly didn't plan for it to be released during KubeCon; that's just the way the dates worked out. But I think it's great to be sharing that news here. It shipped yesterday. Check out the blog link.
You can read all about the details there. I'm not going to go into what was and wasn't in 1.0; I'll just talk a little bit about the project. Oh, yeah, if you're taking a picture, go back. Yeah, there you go.

All right, so the early history of containerd is basically this: Docker had a thing called libcontainer, and that was eventually spun out into a project called containerd; there are a few steps in there that I'm going to omit. It was released in Docker 1.11, and it acted as a manager, or supervisor, of OCI runc executors. Then, over time, due to requests from people in the community, and based on the architecture, we decided that the containerd project was a good place to expand some of the core functionality of Docker that people wanted, into a simple core container runtime that people could rely on: Docker with minimal features, but with full capability over the entire operating system.

So this is the timeline. December 2016 is when we first spun it out of Docker; I think 1.11 came out in the spring of that year, April 2016. We then donated it at CloudNativeCon in March 2017. We had our first containerd summit in April of this year, and that's where we collected information from the community. We brought a lot of people together and said, hey, if we could expand the scope of containerd into just what's needed as a single-host container runtime, what would it look like? We took a lot of feedback there, and that informed what became containerd 1.0. In November 2017, we released our first Kubernetes 1.8 integration, so you could try out the betas with Kubernetes and use containerd as the base runtime, through the cri-containerd project. And then yesterday, we released containerd 1.0, and that's now generally available. A little piece of trivia: if you're running Docker 17.11 or a 17.12 RC on your laptop, you're already running containerd 1.0. So you may be running it and not even know.

So again, why did we do this? Why all this work, and why the big hubbub? We wanted to extend Docker into a platform and focus on developer experience, and we found those goals conflicting with the goals of a well-behaved container runtime. This affected our engineering processes, and the engineering processes of our partners as well. So we wanted something that could be used beyond Docker and fulfill the role of the 2014 view of Docker as a single-host container runtime. We donated it to the CNCF, and history was made.

The goals of the project were to have a very small, lightweight, GRPC-based API and a heavy client library that minimizes the amount of abstraction. We didn't want you to have to plumb a single flag all the way through from the client to the runtime; we wanted to make that functionality available directly in the containerd client, making it a lot easier to add new features. So if you upgrade runc, you don't also need to upgrade containerd to get access to new features in runc. The other part of this was designing for stability and performance. That means that when we add something, we know we want it to be there forever; and when we do add things, we design them so that they can be stabilized over time. If we're unsure about something, we hold off on putting it in.
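To give a flavor of what that heavy client library looks like, here is a rough sketch in the spirit of containerd's getting-started guide. The client API surface has shifted between releases, so treat the socket path, namespace name, image reference, and container IDs here as illustrative:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to containerd's GRPC API over its unix socket.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Everything in containerd is scoped to a namespace.
	ctx := namespaces.WithNamespace(context.Background(), "example")

	// Pull an image and unpack it into a snapshot.
	image, err := client.Pull(ctx, "docker.io/library/redis:alpine", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Create a container whose OCI spec is derived client-side from the
	// image config; no daemon-side flag plumbing involved.
	container, err := client.NewContainer(ctx, "redis-server",
		containerd.WithNewSnapshot("redis-rootfs", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// Create and start the task, i.e. the actual running process.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	// A real program would task.Wait() and handle the exit status here.
}
```

The upgrade point Stephen makes falls out of this design: options like WithNewSpec are composed on the client, so new runc features can be exposed without re-plumbing flags through the daemon.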
We also have lots of things that help us do this, such as API-based integration tests rather than the CLI-based integration tests we have in Docker. As well, we take the entire API, scan all of our protobuf files, and put the result into a giant API stabilization file. That helps us identify API changes over time, and it's all generated and automated, so it'll prevent a lot of the problems that we had with Docker.

The other thing to note is that it's highly decoupled. It's made up of several microservices that you compose together. If you're curious about the details of that, you can come to the containerd salon and I will talk your head off about it. The other huge thing about containerd is that we focused on an à la carte design: if you don't like the way we do image storage, or the way we do push and pull, or anything else, you don't have to use it. You can use only the parts of containerd that you actually want, and you can disable most of the parts you don't need (there's a sketch of what that looks like after this section).

We also focused on using known good technology from the CNCF and the Linux Foundation: the OCI runtime and image specs, GRPC, which we use heavily, and a project called Prometheus for exporting metrics. Those are great CNCF projects; you should definitely check them out.

This is my list of use cases. The slide's a little old, and I hope the list keeps expanding. The big one is Docker; again, you're using containerd in Docker today. The next one, which we'll hopefully see going into beta in the next few weeks, is cri-containerd: you'll be able to use containerd under Kubernetes. We also have an experimental SwarmKit implementation, but it might not compile anymore. The other two big ones are LinuxKit and BuildKit. LinuxKit uses containerd to package its system services, and BuildKit uses it to control builds. And this is one thing I'll highlight about containerd: it's meant to be a critical piece of infrastructure, from your dev machine all the way out to your cloud or your own hardware. When you're running containers on containerd, there's no such thing as "oh, it works for me." It's end to end: you use it for the build, you use it on your dev machine, you use it in production. And there are potential future uses, like an OpenFaaS integration. Alex Ellis is using it; he has a talk where I think he's integrated with cri-containerd. He's always doing really cool stuff, so I don't know what the current state of that is. And IBM Cloud (Bluemix) will use cri-containerd. And hopefully you will too. You can integrate it; let us know what you're doing, and let us know if you're having problems.

A few facts and figures about the project. We have 1,994 GitHub stars, 401 forks, and 108 contributors; I think roughly 87 of them contributed to containerd 1.0, which is really cool to see. There are a lot of different people contributing. I was looking at the list of names the other day, and it's really, really cool to have so many people interested in this project, such that they're making code contributions. We also have eight maintainers from independents and member companies, including Docker, IBM, ZTE, and ZJU, is that right? As well as 3,030 commits and 28 or 29 releases.
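On the à la carte point above: one concrete form this takes is containerd's daemon configuration, which lets you switch off plugins you don't use. This is a hypothetical sketch; the exact config keys and plugin IDs depend on the containerd version you're running:

```toml
# /etc/containerd/config.toml (illustrative; keys and plugin IDs vary by release)

# Switch off whole subsystems you never use, e.g. a snapshotter.
disabled_plugins = ["io.containerd.snapshotter.v1.btrfs"]
```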
Anyways, this is a lot of links, and if you go to that blog post, I think you get most of them, but I'll highlight the most important ones. Check out the Getting Started guide. It's really good: it shows you what containerd is, what it does, and what it doesn't do, that kind of thing. Then you can look through the scope table and the roadmap to see where we're going and whether there's stuff missing. Reach out to us through an issue or on the Docker Community Slack, and you should be able to push us in the direction that you want; we can help you with a proposal or anything else you need. You can also read a little bit about the architecture of containerd and the approach we took implementing it, which can help you understand how to use the APIs a little better. And there are lots of talks where you can learn more. There's the LinuxKit and Kubernetes talk at the Austin Docker Meetup, but that's already happened. Oh, OK, so Phil's talk has already happened too. But tomorrow we have the containerd salon, where I think we have a large slot, 70 minutes or something like that. We'll go into details on the architecture, show you a little bit of how to use the client, and how to get started with containerd. I think Jess has created a great getting-started guide, so it should be low drama.

Anyways, now I'm going to pass it over to Justin Cormack for a presentation and demo on LinuxKit.

Yep, there you go. OK, so I'm going to talk to you about LinuxKit, and a bit about Kubernetes as well, because we're at KubeCon and we've been doing a load of cool stuff with Kubernetes. So, why, and what is it? The strap line is: a toolkit for building secure, portable, lean operating systems for containers. It's a kit that gives you the ability to build Linux in exactly the way you want. We give you a load of pieces that help you build Linux the way we've been using it for Docker's purposes, but you can totally replace any of them with anything else you want. And the pieces that you build it from are containers, so you can build them with your normal workflows for building containers. You can build Linux in just a minute or so; it's designed to be built in a CI pipeline. You can test it locally and then deploy it wherever you like: in the cloud, on bare metal, on your laptop, whatever you want. So it's a very different, lightweight, cloud native approach to dealing with Linux.

I think the whole immutable delivery idea we have with containers is very important for system images as well, and we've really gone with that. The idea is that you build a new image and redeploy if you want to update things, rather than having a really complicated desired-state management thing going on inside your image. Security-wise, it's designed to be minimal and run exactly what you want. The NIST Container Security Guide recommends that you use a container-specific OS for running containers, and that's one of the important use cases for LinuxKit.

How does it work? It's got a very, very simple architecture: exactly the same design as a pod in Kubernetes. Kind of coincidentally, but it's also convenient to understand.
So the life cycle is that you start a few containers on boot, sequentially, which we run through runc, and then we bring up containerd and run all the services you want at that point in parallel. It's a very simple, understandable architecture, and it corresponds to a YAML config file. You can see here you've got onboot, with DHCP, and then this is running Redis. It's a really simple example; I'll show you that in practice in the demo in a minute, and there's a rough sketch of what that YAML looks like below.

The other important difference is that it's all designed around immutability. There's no package manager or anything at runtime; you just build it and replace it. If you want dynamic services, you can run Docker or Kubernetes on it. That makes life much simpler and easier, and it gives you all the advantages of building fast and running fast. We have a very container-like build, push, and run workflow, with support for pretty much every kind of cloud provider and disk format: you can build ISOs, or you can build VHDs. The community has added support for all these different platforms people like to use, everything from OpenStack to VirtualBox to KVM and so on. So there's a really wide range of platforms you can run it on, with really simple tooling. Or you can use the native provider tooling: you can build an AMI using our tooling and then run it using the Amazon tooling, or whatever you want to do. We designed the tooling to give you a starting point; for production applications, you can obviously use more advanced tooling.

How does it fit in with other CNCF projects? We use a whole range of them. Obviously containerd is really important, because it's the key bit that's actually running the containers. We use Notary and TUF image signing to sign the packages we build. Kubernetes and CNI I'll talk about in a second. And gRPC and Prometheus are used in containerd, and you can use them to get system stats and so on out of LinuxKit, which is really cool. So it's very much a cloud native way of doing things. It's now in the CNCF landscape, as of the new version today.

Kubernetes. We've been working on Kubernetes support, actually with Ilya, as Patrick mentioned, for seven months now. We did a Kubernetes demo when we first launched LinuxKit at DockerCon, and it was the first time we showed Kubernetes at DockerCon, apart from the original launch of Kubernetes, which was at DockerCon. We've worked together with the community to build everything out. It's now got its own repo. It's a really straightforward, standard, understandable, unmodified version of Kubernetes, using kubeadm to run it. It supports both the Docker runtime and cri-containerd; we've been using it for testing cri-containerd. It's designed to be customized: you can add different networking, whatever. And it's also the upstream for the Docker desktop editions, where we're shipping Kubernetes on the desktop any day now. You can sign up for the beta at beta.docker.com, and it will probably ship in the next few days.

This is the UI for Docker desktop, which I'm really excited about. It's very simple: there's a tick box saying "I want Kubernetes," and then you have Kubernetes on your desktop. It's just there, and you can just run Kubernetes. It's all enabled by the LinuxKit support underneath, and it's all exactly the same code as we're using in the upstream open source project, so if you want to make different customizations, you're using the same code.
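Here is a rough sketch of the kind of YAML file described above, modeled on the examples in the linuxkit repo; the image tags are placeholders:

```yaml
# Illustrative LinuxKit config; pin real image tags in practice.
kernel:
  image: linuxkit/kernel:4.9.x
  cmdline: "console=ttyS0"
init:
  - linuxkit/init:<tag>
  - linuxkit/runc:<tag>
  - linuxkit/containerd:<tag>
onboot:
  # onboot containers run sequentially, via runc, before services start.
  - name: dhcpcd
    image: linuxkit/dhcpcd:<tag>
    command: ["/sbin/dhcpcd", "--nobackground", "-f", "/dhcpcd.conf", "-1"]
services:
  # services run in parallel under containerd once boot is done.
  - name: redis
    image: redis:alpine
```

The onboot/services split is exactly the boot-sequentially-then-run-in-parallel life cycle described above.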
But it's just designed to be really, really easy to use. We're incredibly excited to be able to ship Kubernetes on the desktop to millions of users of Docker for Mac and Windows. We think that's really going to help people get their hands on Kubernetes for the first time, understand how to use it, and have a really easy development experience.

So I'm going to show you a few demos; we've got a few minutes for them. Here's the Mac build that's about to ship: Docker is running, Kubernetes is running, and here's the preference pane with the checkbox that enables Kubernetes. You click that button, it takes a minute or so, and it just restarts with Kubernetes support. When it's running, you've got a normal Kubernetes right there. kubectl is already installed; it tells you it's Docker for Mac, running a LinuxKit kernel, and this is Docker 17.11. So it's all really, really straightforward: you can just use Kubernetes as you normally would.

Now I'll show you a quick example of how easy it is to build something with LinuxKit. This is a really simple LinuxKit YAML file that we have right at the top of the repo. We say which kernel we want, which version of containerd, whatever we want, and then, apart from a few other things, it just runs the nginx image from Docker Hub, the Alpine one. So it shows that you can run services from straightforward, normal upstream containers. We can just do a linuxkit build with the YAML file (there's a sketch of the commands below). And I need to update my cert; we always check the certificates when we're building everything, and we did a lot of integration with Notary there. So that's how long it takes to build. Then we can just run the image we built locally, and there's Linux booting. As you can see, I've got quite big text on this thing. So we've just booted up LinuxKit, got a console, and we're running only a very few processes. We can connect to nginx and get the hello world page. So that's how simple it is to build a very simple service.

Kubernetes is obviously slightly more complicated, but not much. This is a console, actually on Docker for Mac, and you can see it's running docker, containerd, kube-dns, Weave networking, kube-proxy, everything, the pause containers. I should make this text a bit smaller, shouldn't I, so you can actually see more of it. So it's just a very simple Kubernetes install that we built with LinuxKit, basically exactly the same code as the LinuxKit Kubernetes repo. I made this all very large for a meetup earlier when the screen was a bit small. So this is the LinuxKit Kubernetes repo, which has a very simple build process where you can build your own Kubernetes, customize it, configure it in different ways, and get started with that. It's a really simple way to build Kubernetes images, for local use or for production use. And we've got a lot of people working with us on this who want to use it in production. We were talking about integrating it into kops yesterday, for example, which I think would be really cool to do, so that you can deploy it on AWS easily with kops. And we've got lots of people with real interest in putting this into production in large companies.
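For reference, the build-and-run loop demoed above looks roughly like this with the linuxkit CLI; subcommand names and flags have shifted between versions, so treat this as a sketch:

```sh
# Build boot artifacts (kernel + initrd) from the YAML config.
linuxkit build linuxkit.yml

# Boot the built image in a local VM (HyperKit on Mac, QEMU on Linux).
linuxkit run linuxkit
```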
So we're really quite excited, particularly about the Kubernetes integration, because a load of people have come to us and said that's what they want to use LinuxKit for. It's been a very successful project, and there are lots of demos to get you started, so if you want to get involved, come and chat.

OK, so we just have a few minutes left, and I wanted to talk to you about one last project from the Moby project, which is called InfraKit. Oh, no luck. OK, I'll do it without slides. So InfraKit is a declarative management system for managing your infrastructure, and it's all based on containers. Oh, it works now, very good. We launched it at LinuxCon in Berlin in October 2016. One of the differences between InfraKit and other infrastructure management systems is that it does active monitoring of the infrastructure: if the description you give it of what you want your infrastructure to be differs from reality, InfraKit takes active action to reconcile that. We proposed it to the CNCF last June, and it's a little bit too soon for them to accept infrastructure management projects, but these are actually some of the slides we used to present it to the CNCF. It focuses on the provisioning aspect: you could use it to provision your Kubernetes cluster or your Docker cluster. In the CNCF landscape, it tackles that part at the bottom there. The kinds of use cases it serves are your day-zero install and your day-one configuration of container orchestrators. But where InfraKit really shines is in day-N automation of infrastructure: how to provision new nodes, how to update your nodes with new versions of the orchestrator itself.

Yeah, actually, time is up, but one thing I wanted to mention is one of the really interesting recent developments in InfraKit: David Chung, who's working on it, has been building a cluster autoscaler for Kubernetes. So your Kubernetes cluster, connected to InfraKit, could autoscale itself based on metrics that you define and on the load. So there are lots of interesting developments tied to Kubernetes there. These are the details about the autoscaler, and you can get involved on GitHub at docker/infrakit.

You can learn more about all the projects on the Moby blog. We had a Moby summit at DockerCon this year with tons of talks; all of them have been recorded, and all the slides have been published on our blog, blog.mobyproject.org. Or you can go on GitHub, find the code, and play with it. Thanks very much. Many of us will be at the conference, so you can find us either at the Docker booth or in the corridors. And there's one more session that should be pretty interesting: the OpenFaaS session by Alex Ellis. OpenFaaS is a serverless platform, and he's doing some really interesting tie-ins with some of the Moby projects. For example, he has a fantastic demo of using functions to automatically deploy LinuxKit images. All right, thanks very much.