I think we're starting. Are we good? We're good to go. Hey, thank you for joining us today. We're going to talk here about running Docker containers using Kubernetes on OpenStack. I hope that's the talk you're here for. So my name is Craig Peters. I'm a product manager at Mirantis, and I'm really happy to welcome Kit Merker to join me. Hi there. I'm Kit Merker. I'm a product manager at Google for Container Engine, Container Registry, and Kubernetes. I'm just going to give a very brief introduction to Kubernetes, and then I'll play around with a Kubernetes cluster later, running on OpenStack. Just before we get started, a quick show of hands: how many people here have deployed Kubernetes in production? We got a couple. Three. Okay, nice. How many of you have used Kubernetes at all — built a cluster, played around with it? Nice. How many people here have heard of Kubernetes? Oh, look at this. Look at all that. I mean, before you walked into the room and we said that word. And does anybody here know the original origin of the word Kubernetes? Does anybody know where the word comes from? It's Greek, right? Not Patrick. You know? Right there. Shout it out, man. There you go. Yeah. The ancient Greek word for the helmsman of a ship. Anyway, like I said, I'm going to give a very brief overview. Before we get into Kubernetes, though, you sort of have to back up and say: okay, why containers? It's the hip new hype in the technology space right now to run containers, but they actually provide a lot of benefits to you as you're running an infrastructure. First of all, performance. You can spin up a container much faster than a VM, so you can spin them up and tear them down quickly. You can deploy containers repeatedly. They're instantiated from a sealed image, and you can push that image repeatedly to different environments and not have to worry about installing bits that might fail midway through deployment. You get isolation.
So if you have noisy neighbors — two containers running side by side, one of them's noisy — the other one's isolated. They can't reach into each other's space or consume each other's resources. You can get a consistent quality of service across your environment because you have container runtimes that are working in the same repeatable way. You also get an accounting of what's actually running in your environment. So if you think about it: you've got all this infrastructure, you're running different applications, and you can see everything that's running and exactly what version of what code is running, which is also a way of getting visibility into your environment. But one of the most important features of containers is portability — being able to take code that you wrote, package it into a container image, and move it between different environments. Whether that's an OpenStack environment on-premises or a cloud provider like Amazon, Google Cloud Platform, or DigitalOcean, you can move that same code and not worry about what specific machine or infrastructure you're running on. You can just kind of let it run. And for enterprises and companies today, being able to move between different infrastructures is really important. Things change, and people are migrating to cloud. People are taking things that are in cloud and moving them on-premises for performance reasons or security reasons. Being able to have that choice and portability is really important. So when you think of running your code in containers, it's fundamentally a different way of building applications than on bare metal or in a VM-only style environment. Talking a little bit about sort of a spectrum of tools, right? So we think of Docker. The Docker project is really the packaging and runtime portion of the container.
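As a minimal illustration of that packaging piece — a hypothetical image, not something from the talk — a Dockerfile seals your code and its dependencies into one portable artifact:

```dockerfile
# Hypothetical minimal image: package a static site into a sealed,
# portable container image that runs the same on a laptop, a server,
# or any cloud. The base image and paths here are illustrative.
FROM nginx
COPY ./site /usr/share/nginx/html
```

Building this once (`docker build`) produces an image you can push to a registry and run unchanged in any environment with a container runtime.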
So the Docker format is a great way to run a container: take your code, share a kernel with the host rather than booting a full VM, share it on Docker Hub. You can get code from other people that have already pushed out images. So that's the Docker piece, and they've really solved the imaging, packaging, developer-experience piece of getting a single container running. You can deploy that onto your laptop, onto your server, onto your cloud, and it runs consistently and repeatably. Kubernetes is the open-source project that Google created last year — we're almost at our one-year anniversary. The idea behind it is cluster-oriented orchestration of containers: multiple containers working together that can scale up and down, where you can easily update and deploy, not have to worry about your infrastructure, really focus on your code and on the operations of your infrastructure, and also manage it declaratively. You define what you want, and Kubernetes tries to fulfill that desire. You're not giving it a series of instructions — do this, do that, run this here, run this there. You let the scheduler do the work. I'm going to show a little bit of that later. And because Kubernetes is open-source and we designed it to run anywhere, it really fits not just Google infrastructure but any public cloud, private cloud, on-premises, et cetera. And then Google also offers Container Engine, which is the hosted version of Kubernetes that runs on Google infrastructure. This, again, is a cluster-oriented service that lets you run containers. You get the full power of Google infrastructure and Google Cloud, and it's powered by Kubernetes. All right, and just by way of background: Google's been running containers for many, many years. Every single service that Google runs — whether it's Gmail, Search, YouTube, Hangouts, et cetera — all of it runs inside of containers.
And that's run on an infrastructure — we've recently shared some of the details of our internal infrastructure, called Borg. Borg is the container management infrastructure that inspired Kubernetes. The same people that developed Borg, which runs all these at-scale services, also built and designed Kubernetes and are working on it today. So they took the concepts from the massive learning Google had to go through to get to the scale we're currently at, and turned that into a streamlined open-source project that anybody can run, even for smaller applications. Not everybody here is running a Google-sized infrastructure, obviously, but for you or your customers, getting those design principles in, even at a smaller scale, gives you a lot more power and gets you all the benefits I talked about earlier. And we launch two billion containers a week, which is just an impressive number, so we say that a lot. Two billion! Hopefully we'll... well, I should figure out what the decade number is. I think it would be a really big number. Greek word for helmsman — also the root of the word governor. You know: container orchestration, runs Docker containers — actually, we recently announced early support for rkt containers as well, but we really want to provide choice; any container runtime that the community wants to contribute, we want to make it run in Kubernetes — multi-cloud, bare-metal configurations, inspired by our internal infrastructure, written in Go. Really, what it comes down to is we want you to manage your applications, not the machines. And that's where Kubernetes' value really comes in. Let's see. I want to give just a very brief overview of the concepts of Kubernetes. Most of you may have heard about these. I'm going to try to do it as eloquently as I can, but I'm also pressed for time, so I'm going to do my best here. I'm kind of joking.
I'll take questions later too. So, container — we've talked about containers. That's the single unit of runtime. We also have this concept in Kubernetes called pods. A pod is for when you have containers that work together very closely, that have shared fate and a shared life cycle. They can communicate with each other as if they're on the same network — they have the same IP address, they treat each other as localhost — and they work together very closely. Pods can have one container; that's fine. But it's actually very powerful to have two or three or four containers where you have reusable libraries, or where you want to do application composition and not do that earlier in your build process — do it at runtime instead. One example we use here: you have a content server that's serving static content, and maybe you have a syncer service that goes and grabs that content from some data store somewhere. You put those two containers together. If either one of those pieces of the application changes, you rebuild just that container — you don't have to rebuild an entire application. So you get that nice separation of concerns and the ability to compose the application, but at the same time the pieces get to run together very closely, with shared fate. We have this other concept of a controller, the primary instance of this being our replication controller. What a controller does is fulfill that whole declarative-management idea. You define what your desired state is. Maybe you say: I want to have five of these containers running at any given time. The controller will look and see — okay, are there five? Are there five? Are there five? If suddenly there are four, because a VM went down or a hard drive got lost and your infrastructure is impacted, it'll go ahead and find a new place to run one and add it. Or maybe your desired state changed: instead of five, I want to have ten. It'll go find resources to spin up ten.
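That content-server-plus-syncer composition can be sketched as a pod manifest along these lines. The names, images, and paths here are illustrative, not from the talk — a minimal sketch of the idea, not the actual demo artifact:

```yaml
# Hypothetical pod composing a static content server with a syncer
# sidecar. The two containers share fate, network (localhost), and
# a volume; either image can be rebuilt without touching the other.
apiVersion: v1
kind: Pod
metadata:
  name: content-pod
  labels:
    app: content
spec:
  volumes:
    - name: content
      emptyDir: {}
  containers:
    - name: content-server
      image: nginx                 # serves the static content
      volumeMounts:
        - name: content
          mountPath: /usr/share/nginx/html
    - name: content-syncer
      image: example/syncer        # hypothetical image that pulls content from a data store
      volumeMounts:
        - name: content
          mountPath: /data
```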
There are also those interesting corner cases where the VM goes away, I spin up new work, the VM comes back, and now I've got more than I wanted. Kubernetes will notice that state as well and spin down containers until you get the right number. So the control loop is really about observing the truth, measuring that against the desired state, and then taking action to fulfill it. It takes a huge burden off your back as an application administrator because you don't have to implement that yourself. You can just take advantage of it. We have this really unique word we invented called a service — it has exactly one meaning. It's basically this idea of a group of pods, and being able to address those pods by one IP address, one handle. You think: I have all these different pods, and any one of them could do the work. I don't want to address an individual pod or an individual container; I just want to point at that group of pods over there, that herd. A service lets you do that. It acts as a load balancer in front of a set of pods that can all fulfill your work. You can use it to front one container replicated many times with a single pointer. We have two more concepts I'll go over — labels and selectors — which are very closely related. In microservice-style applications, hierarchy is bad. You want all these loosely coupled services that can talk to each other; they each do a job, and then they work together to create the application. What we have in Kubernetes is this concept of labels, where you can take a key-value pair of whatever you want and label things in your app. Kubernetes uses this internally to find and address different portions of the app. The replication controller will use labels to find the pods it manages in its control loop.
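Putting the controller, service, label, and selector ideas together, a sketch might look like this. The names and image are illustrative assumptions (this is the v1 API of that era, not anything shown in the talk):

```yaml
# Hypothetical replication controller: declares the desired state
# "five replicas of this pod"; its control loop finds and counts
# the pods it owns via the label selector.
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend
spec:
  replicas: 5
  selector:
    app: frontend        # label query the controller reconciles against
  template:
    metadata:
      labels:
        app: frontend    # label stamped onto every pod it creates
    spec:
      containers:
        - name: web
          image: nginx
---
# Service: one stable handle that load-balances across whichever
# pods currently match the selector, wherever they are scheduled.
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend
  ports:
    - port: 80
```

Clients address the service name; neither they nor the service care which machine any individual pod lands on.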
You might say: okay, this set of pods over here, these are all front-end pods; these ones over here are all back-end pods; or these are all part of this one application. You can use whatever key-value pairs you want and describe your environment and your application in a way that the Kubernetes API can understand and use for addressing. Finally, the selector is basically the query you use against the labels. So that's just a way of finding anything in your application by label, which means you don't have to worry about which machine it's running on or anything else. You just write a query and address things through it. Would a client use the selector to find the right services they want to consume, or would other pods use the selector? Either way. So that was my brief overview of Kubernetes. It's not that complicated. It's not that hard. I'm going to hand it back to Craig now, and he's going to talk about OpenStack, Murano, and Kubernetes running on OpenStack. Here we go. So I'm going to start by setting the stage for a little demo we're going to do, which actually shows Kubernetes running on OpenStack, harking back to what Kit had to say about portability. If you think about trying to use one of these systems, here's a simple example you might have: I want to set up a monitoring system that uses some simple components I can get off the shelf — some nice open-source tools like Grafana and InfluxDB — but I want to do it in an HA way. I want to wire them up, I want to make sure they're always available, and I want to take advantage of an orchestration engine like Kubernetes to make sure all these connections are always available to the other parts of the service. That's probably a common premise for all of us, that we need to do something like this. So we kind of have several choices.
One choice is to look at all the documentation for these tools, figure out how they should be connected together, configure literally thousands of parameters after I've installed them, and do lots of testing to figure it all out. That's the left column here on this slide. Another choice is to use somebody who's already packaged this kind of thing as a preconfigured app in a hosted environment, so I'm not even running it locally — I'm outsourcing all that infrastructure — and I can point and click and go through and host it there. That's an awesome solution, but not for every scenario. Sometimes you also need that stuff in-house, and so what we're going to talk about here is another option: how you can do it on OpenStack using a technology called Murano. What Murano does is essentially a lot of the same kinds of things the hosted service providers do in packaging up their applications into a kind of marketplace — but in your on-prem cloud, so you have a much easier time integrating with your existing infrastructure, complying with regulatory requirements, or taking advantage of the flexibility you need in your underlying infrastructure to serve an application-specific service level agreement. So we'll take a little look there. I want to introduce the notion of Murano a little bit here because it serves as the glue, or the underpinnings, that makes it really easy to run Kubernetes and other orchestration engines — and I'll be quite frank there: OpenStack is designed to run any kind of infrastructure, so it supports all kinds of PaaSes, other kinds of orchestration, containers. What we've done is — we've been really lucky to collaborate with Google on creating an integration that shows how you can easily run Kubernetes on OpenStack. So Murano is a way to do application management in the cloud.
It's a way to package things up in a user-provisioned way and provide repeatability. It provides a list of applications and exposes a set of APIs that can then be consumed by automation infrastructure for things like CI/CD, and you can implement really interesting use cases. For example, when tests fail, you can automatically take snapshots, and when the developer comes in in the morning, they can recreate that environment and do the debugging in situ instead of just looking at logs and trying to figure out what happened — so you can get the real picture of what happened in the cloud. The whole idea here is to provide a way for operators to create consistency in the way their applications are run across tenants, and to have a degree of control. Say, for example, when you deploy a certain application you always want to instrument it for monitoring in a certain way, and you want to automate how that monitoring is used for billing, showback, that kind of thing. Essentially, Murano does this by being an application abstraction, presented as a catalog. It has an application object model, which keeps track of application state, and then there are events that occur around applications. Those take advantage of the application state, and they're exposed in the UI, or you can consume them from any API endpoint — they just extend the OpenStack APIs. The way you configure it is essentially a domain-specific language for those event-driven workflows, and if we have time after the demo, we'll spend a little time digging into that. It's a very powerful concept. So what I want to do is show you how you would provision a Kubernetes cluster. And this is actually kind of awkward — excuse me while I bend over my demo machine; it's not exactly the ideal setup. There's a few of them here, right? I can pick one.
So, one of the things we introduced earlier this week — on Tuesday I was lucky enough to be invited up on stage to launch the app catalog. What we'll do is go get a dockerized application from the app catalog and configure it to be deployed in my OpenStack instance here. I can go to my packages and see I have a bunch of tools already available for users of this tenant, but I want to add to it. I want to go get that Grafana tool I've been hearing about and see how I'd use it. So I'm going to go to Import Package, and I'm going to go find the repository and a list of things I could use. I'm going to go find a Murano package — let's see, let's go find Grafana, see if this search works. Ah yes, I can. That would assume I know the command for that. If it were a Mac it would be true, but actually I'm running Oracle OS here. Shift-Control-Plus. There we go. So I found docker-grafana — is that readable now? So in the community app catalog, essentially, I did a search, I found the artifact I want, I got a description of it — I already know what it is, so I'm going to use it. I can see who created this thing and get in touch with them, and, interestingly, I can see what it depends on. In this case it's a dockerized instance of Grafana, so it depends on either a Docker host or a Kubernetes pod — and obviously that's what I'm interested in showing here — but it also depends on a backend database, right? In this case this packaging says it's going to use InfluxDB. So I'm going to go ahead and copy the package name into Horizon, paste it here, and — assuming all my network-setup magic that was happening while you guys came in worked... it did — now it's imported a bunch of packages into my environment here. I'm going to kind of categorize this — I wish I had a monitoring category, but I don't, so I'm just going to call it databases — and create. So what it's done is it's added additional packages to my list of
packages. Which means — I'm looking at this as an administrator — I've published those now, so users who come to the catalog go here and see the list of tools that are available for them to self-deploy. So let's go and see what it's like to configure an environment to deploy Grafana. I'm going to add this and do a quick deploy on it — let's call it Docker Grafana. This is a Docker container, and the Murano packaging knows that a Docker container depends upon a container host. Murano has this notion of dependencies and abstract ways to satisfy those dependencies, and in this case I've got two ways to do that: I can use the Docker standalone host, which in this case is implemented as a VM that runs the Docker service, or the Kubernetes pod, which is what I'm going to choose. I'm going to give the pod a name, so I'm just going to call this my Grafana pod — you can see in a previous instance I misspelled it. And, as Kit so quickly explained about pods, pods actually depend on the Kubernetes service itself to run, so I'm going to create a cluster for the Kubernetes service — I'm actually just going to pick the default. The Kubernetes cluster actually has a pretty sophisticated set of configurations to make sure that the declarative state of the cluster is maintained; it essentially maintains its own high-availability infrastructure, it's got minions and things like that. And this packaging implements something called gateways, which provide easy access to the internet by creating a public IP address so you can reach that API endpoint. I'm just going to choose all the defaults there, and then I get to choose my flavor and make sure I can do an SSH connection to it, and I'm done with the Kubernetes cluster. I'm going to finish the pod, and I can deploy that. And... there was an error. Uh oh, that's a problem. Happily, this is a baking show: pulling out of the oven, I've got one that I baked yesterday afternoon.
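In Kubernetes terms, the kind of pod being configured here — Grafana together with the InfluxDB backend it depends on — would come out roughly like the sketch below. The images and ports are illustrative assumptions; this is not the actual definition Murano generates:

```yaml
# Hypothetical manifest for a shared-fate monitoring pod: Grafana
# and its InfluxDB backend scheduled together, reaching each other
# over localhost inside the pod.
apiVersion: v1
kind: Pod
metadata:
  name: grafana-pod
  labels:
    app: monitoring
spec:
  containers:
    - name: influxdb
      image: influxdb
      ports:
        - containerPort: 8086   # Grafana would talk to localhost:8086
    - name: grafana
      image: grafana/grafana
      ports:
        - containerPort: 3000   # Grafana UI
```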
So my server is right here, and it says I couldn't use it. So here I've actually got a cluster already running — let's take a look at that. In this case — one of the things I went through really quickly — oh, actually, that's the problem: there was some problem with the dependency, the InfluxDB. I didn't ask about that, so that was the error that came up, and I'm not sure why; we'll find out why. But what I did when I configured this one is I chose to have both the InfluxDB and Grafana run in the same pod, so that they had this shared fate, right? And I chose to only have one cluster of that, because I'm running it all on my little machine here. So what does that look like? Here I've got a topology that shows what I've got. In this case I've got my Grafana Docker pod that's running on a — and this is actually the InfluxDB; I actually see now exactly what happened, the InfluxDB didn't come down for some reason. They're both dependent on this Kubernetes pod, which depends on the Kubernetes service, which has various minions and gateways, and those can scale up and down dynamically in this infrastructure. So what we can do now is actually look at what Kubernetes does for us on there. Should I drive, since I'm sitting down? Sure, why not. I can just do a docker ps. docker ps — so I've got some containers running here. One of the things we talked about is that Kubernetes is focused on maintaining the state of these things, right? So let's kill one of these guys. Make the screen bigger — we probably can't see it. Do it again, please. Do it again. Again. Again. How do I make the font bigger on this sucker? Just press — sorry, Control-Plus. Here we go. Is that readable? You're going to have the same problem where — sorry, it's truncating your table. sudo. There we go. All right, the important thing here, though, is look at the created times. We're going to pay attention to that. Yep. So you see one of them — we did this just before
the demo — we did it 24 minutes ago; it came to life. So Kubernetes is watching these containers, and what it's going to do — I'm just going to kill one of these. Control-Shift — sorry, Control-Shift-C. All right, so we're going to docker kill, and then Control-Shift-V. So that's the same ID as the one that was created 24 minutes ago, right? So we're going to kill it, and then let's take a look at what we've got. It's not there — it's dead. Uh oh, I killed it. Oh wait, there it is. That was a little slow; the server is under some load. So you'll notice it brought it right back to life, and so Kubernetes is watching the — I want to do this again; it was like instantaneous last time. Let's kill another one — I'm going to kill the other one. It's probably confused because you killed the same one. Yeah, sure, let's see. Ready? There it is — one second ago. So, yeah — hang on a second, can I do the resize? Because I had one other thing to show, but we're not going to. Anyway, we had a little networking fail. So, do you have anything else? I do — let's just take a minute to think about what it is we've seen. We have seen Kubernetes, which is a really sophisticated orchestration engine, running on top of IaaS, running on my stupid laptop simulating a cluster of machines. And essentially what we've got inside OpenStack, in the Murano project, is a way to build an infrastructure that essentially mimics what Google Container Engine does for Google Compute. So I just wanted to give you a little insight into what that means. We talked a little bit about packaging applications, which allows you to have some control over who can do what. You as a user can create a package, and there are a whole bunch of packages now, as you saw, published up on apps.openstack.org, and we invite you all to contribute more to them. That's actually where I want to go with the rest of the talk: we as a community can build a community of best practices around how we
do this kind of container management on all kinds of infrastructure, to make sure we can make this portability a reality. Murano is just a tiny little piece of glue that makes sure these layers can work really well together. From a user-experience standpoint, I can just do this drag-and-drop experience to configure very sophisticated infrastructures of clusters, and that works because there's a little magic that happens behind the scenes. This is some of the MuranoPL — I've got the link here, and you'll be able to get to that; it's just out on GitHub, and you can see how it works. I just wanted to walk really quickly through this thing that actually goes and does all the work to create a pod. It has two major parts. There's the setup: there's a name — obviously it's called create pod — and it takes some arguments, and the arguments are a contract. So what's in there? A version, for example; it's got a kind — what kind of data are we dealing with — and metadata. That's stuff that's passed in to give it context, and then it declares a few things about that, so you've now got information about what's going on; this is what comes in. The next part says: let's check the state of the system, let's see if it looks like what Murano thinks the system should look like — because it's true that Kit could have gone and run some commands against that cluster that Murano wouldn't have known about. So there's this dollar-sign deploy. It'll run that method — that's a separate method that has its own things — and what it does is check whether the cluster is in the same state that Murano thinks it's in. It's not tightly bound; it's loosely bound. Once it's done that, it knows what the current state is, and that's loaded into the environment. Resources is then a set of associated things that are part of this package, so this just loads those in. For example, for Kubernetes there's a whole bunch
of scripts that come with this to do all those kubectl commands that you'd otherwise have to do, and it automates that for the user, or for whoever's calling the API. And then finally it does basically template search-and-replace: those scripts have variables in them, and you've got all this context about the current application state and what data has been passed in; it does the search-and-replace and then executes them. So this is just one example — in this case it's just calling shell scripts to do work on the cluster. Murano is a very powerful infrastructure for doing all kinds of things in OpenStack. It leverages Heat, so you can have Heat templates that it works with dynamically, and all kinds of stuff that's also done as part of the package to implement Kubernetes. So that's just a little peek in there. There's a lot more to learn, of course, but I wanted to give you a feeling for how straightforward it really is. I wanted to take just a minute here to talk about future things — we have five minutes left, so we're wrapping up with the last two slides. One of the things we want to do is make sure we invite everybody to help us keep up with all the stuff that's happening in Kubernetes. It's a really fast-moving project — I'd say on the same order as how fast OpenStack itself is moving — and that's a challenge. Our packages are open source, and we invite all of you to learn about them and help contribute to them. Some of the things it lacks right now: really robust error handling. It's kind of a first version, kind of a preview — I mean, Kubernetes is still in preview, and this package is in the same state. One of the things you talked about with services, I think, would be really interesting: if you think about Murano as an application catalog, it's a registry of services that are available to users of the cloud, and Kubernetes represents those as its services that are available as microservices — and cross-registration of those things, and understanding how that should work,
is an interesting thing. How do you handle autoscaling of clusters from external events related to OpenStack? How do you deal with multi-tenancy, multi-region? How do you deal with alternative overlay networks? And to me, actually, the most interesting thing — we did some experimentation, and we wanted to show it, but it wasn't quite ready to share yet — is how do we, at the push of a button, say: export this whole configuration and application and run it on Google Container Engine? So I think there's a lot of great work to be done in these areas to make it awesome for all of us, and we invite all of you to participate in doing that with us. I've got some links here for obvious next steps — how you can do this yourself in your own labs and how you can contribute — so I hope these links are useful. Do you have anything more to share before we ask for questions? I think we could take some questions, if there are questions. Patrick? There is an open-source web UI — if you go to the Kubernetes project, in the www folder, I think it is. Right here. So the question was: how often is the API going to change? It's a great question. While we've been pre-1.0 it's changed pretty rapidly, but we're putting in place — we have a governance model and deprecation policy; I believe it's either a one-year or an 18-month deprecation policy for every feature of Google Cloud Platform. So we're getting much more serious about making things reliable for the long term, so you can take good bets on the technology. But yeah, it's a great question. Over here? One pod goes to one host. Anyone else? Right here — go ahead and talk into the microphone, since you're right there. So OpenStack up to now has been only IaaS; with this it's going into the PaaS layer also. So is there going to be more investment — is there not going to be any more separation between IaaS and platform-as-a-service, or is OpenStack going to be all with Murano? Is that a question about the OpenStack project, about Murano? So Murano is
a layer to facilitate the integration between OpenStack and other kinds of services. It is not meant, of itself, to be a PaaS or a container orchestration engine — it's there to facilitate other tools that do that. Other questions? Yeah, go ahead. So, the demonstration — that is actually a preview of Mirantis OpenStack 6.1, which is still based on Juno; that will be out in a couple of weeks. Wait — is there one at the microphone? We have Magnum also to deploy Kubernetes, so what's the difference between Magnum and Murano? There are lots of differences between Magnum and Murano beyond both just taking Kubernetes as a base. Right — that's actually a much more complicated subject than we can cover in a one-minute answer, so you can reach out to me after and we can chat here if you want; I don't have all the answers. In short — thank you — there is a fairly simple answer: Borg has lots and lots of features that wouldn't make sense — if you read the paper, there's lots of stuff in it that wouldn't make sense — so to keep it clean and easy for people to contribute to, we wrote Kubernetes in Go, and we simplified and streamlined a lot of the concepts. It was just easier to start from scratch — licensing and legal issues, and it's the same people that built it — never mind that it takes six months to learn how to use Borg. Other questions? Thanks so much. Reach out to us after. Thank you.