This is Containing Chaos with Kubernetes. I'm gonna talk about containers and an open source project named Kubernetes. If that's what you're here for, great; if not, I'll understand. By the end of this, I hope you all have a clear understanding of what Kubernetes is and why you might use it, and maybe know a little bit more about the container ecosystem as a whole. Before I get started, I would like to know who you all are, so let's start. Who here is a developer? Okay. Any systems and ops people in the room? Any people that do both, so they're DevOps, right? Okay, anybody here tell systems people or programmers what to do? Okay, so no boss jokes. Is anybody here not one of those roles? Maybe a content creator, separate from all this? Okay, all right. And then how many people here are playing around with Docker? Awesome. How many people have not just played around with it, but actually use it on a regular basis? Okay. Is there anybody here, and it's okay, everyone else will close their eyes, who doesn't really get why you would use containers in the first place? Okay, all right, so I'll do a very quick "here's why you use containers" and then move on to Kubernetes. So, a little pre-introduction: why containers? Everybody's familiar with the matrix from hell. You've got multiple stacks of software and multiple places where they can run. They all come from different places, with different versions, with different install locations. And so you often run into the problem of "well, it works on my machine." And people love hearing that: well, back your shit up, because your laptop's going into production. So we've tried to solve this problem with virtual machines, and virtual machines do handle this problem, right?
You can heavily customize an OS that runs inside a virtual machine, and then you can move that virtual machine around from machine to machine. There are some drawbacks to this approach, though. VMs tend to be pretty heavy on the host machine, especially if you're running several of them on your laptop, and they're not something you generally move around trivially. So I like to liken this to: your laptop case can't hold everything, so you solve the problem by strapping a handle onto an oil drum and carrying around all your stuff in that. It works, but it is often very unwieldy. Containers take a different approach. Containers all share the same kernel, and so there's no, I should say, no forcibly pre-allocated memory. Resources get split up among containers a lot closer to the bone of what your process actually needs to run, and therefore you can run a lot more containers on the same hardware than you can run VMs. You also get the plus side that, since there's no real operating system to boot up, the container is just kind of fooled into thinking it's its own system, so these spin up really, really fast. A matter of seconds for many, many container applications; if you start running more complicated things, it takes a little bit longer, kind of obviously, but that's what containers do. So you have a Dockerfile where you define your system. You say, I wanna start with a base image, in this case PHP plus Apache, then you run some updates on it, and then you've got a packaged image that you can very easily share across all of your environments. And that makes the matrix from hell maybe a matrix from purgatory instead. So that is a quick rundown of containers. Any questions about that before I move forward? All right, good. So why Kubernetes? What problem are we trying to solve here?
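As a sketch of what's being described, a minimal Dockerfile of that shape might look like this (the image tag and file paths here are illustrative, not from the talk):

```dockerfile
# Start from a base image that already has PHP and Apache set up.
FROM php:apache

# Run some updates / install whatever your app needs on top of the base.
RUN apt-get update \
    && apt-get install -y --no-install-recommends git \
    && rm -rf /var/lib/apt/lists/*

# Copy your application code into Apache's document root.
COPY src/ /var/www/html/
```

Building this once gives you a packaged image you can ship to every environment, which is the whole "matrix from purgatory" win.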
So suppose I have a system: I've got a front end that's running PHP, serving up static content and services all in one box, and then I have a MySQL server that's doing all the data persistence. So I have a two-layer application here, I install it in containers, and I get something like this. Then I get some more load, some more need for resources, so I just scale out. Pretty easy, pretty simple so far. Then someone comes along and says, hey, you know, this is really poorly architected. You really should split your front end from your services: serve up the HTML, JavaScript, and CSS separately, and have the service layer only run PHP to do services. So you split out the front end, you create a new container that's just nginx, and you now have an environment that looks like this. Okay, still manageable. Then someone comes along and points out that, hey, one of the key words you hear with containers is "ephemeral." And "ephemeral" and "database" are not two words that we really like hearing in the same sentence. So okay, I need to add some separate external storage to this. So now my environment's looking like this: I can add some more resources on the front end, I can do some replicas for the persistence layer, and that's great. But all of this is still running on a machine that has finite resources, memory and processing ability, and is now a single point of failure for this great environment which is supposed to get rid of single points of failure. So you take all of this and you split it up across multiple machines, leaving the volume outside of it, and if one of those machines goes down, you then have to move all your processes over, and that's really a lot to manage, right? We've gone from "containers make our lives easier" to "I'm just managing a whole bunch of containers now instead of a whole bunch of machines."
So Kubernetes comes along and says, hey, you give us some resources and you tell us how much of this stuff you want running, and we'll just keep it running for you, and if you need to change things, if you need to auto-scale, we can do that too. So Kubernetes is a container orchestration system, and if anybody here knows Greek and knows that I'm mispronouncing Kubernetes, I apologize; I just can't do the real Greek pronunciation. It is open source. It was started by Google, but it is now contributed to by many, many others. Big contributors are Red Hat and CoreOS, and we also have some contributions from Docker as well. So it's not just us doing this. So let's talk about some concepts you need to keep in mind when doing containers in general and Kubernetes specifically. The first concept is cattle, not pets. Has anybody heard this metaphor for containers before? Cattle, not pets, okay. So I'll go through it real quick. A pet has a name, it's unique or rare, it gets personal attention, and if it gets ill, you stay up all night making it better. And we all have servers like this. We all sat around and said, I need 12 servers, so I'll name them after the Zodiac constellations and it'll be awesome, right? We all did that, and it's okay. But those are pets. Cattle, on the other hand, have a number. One is pretty much like any other one of the same type; they're interchangeable. You don't manage them or move them from place to place separately; you run them as a group, and if one gets ill, you make hamburgers. Here's where I'll point out that I don't know much about animal husbandry, but you probably don't wanna make hamburgers from an ill cow, right? So make yourself a bitchin' jacket or something. The next concept is desired state, which is declarative as opposed to imperative. What's the difference?
Imperative is a build script: you're gonna create your Docker images, you're then gonna launch three front ends, two service layers, and one persistence backend. Okay, but if something happens and one of these machines were to blow up, and you do nothing, someone's gotta intervene, or some process has gotta intervene. Usually it's a someone, right? Declarative, on the other hand, is desired state: there should be three front ends, two services, and one backend. You don't do them sequentially; Kubernetes makes them happen. And if one of them blows up, because we said there should be three front ends, when there aren't three front ends, it makes three front ends happen. So I liken this to the difference between employees and children. You go to an employee and you say, hey, we had a tough day, go home, get some rest, get some sleep. Those of you that have children know that instead it looks like this: go upstairs, get undressed, put on pajamas, brush your teeth, pick out two stories. Now, those of you that don't have children are wondering if you really need to tell them to go upstairs; it seems like a superfluous step. But since the next step is "get undressed," if you do not specifically lay this out, you'll end up with a naked child in your living room. So we want our servers to be like employees and not like children. We want to just say, run this, and have it keep going, instead of having to walk through imperative steps. So now I'm gonna go through a whole bunch of the pieces that make Kubernetes work, so that when I talk about stuff later on, it'll make sense when I demo it. It is a lot of definitions, and I've tried to figure out ways of making it more interesting; I'm just gonna burn through it and hopefully it'll be compelling enough as we go. First thing: pods. Everything in Kubernetes runs inside a pod.
You'll notice I have multiple containers in a pod, and that's because pods are the atomic component of Kubernetes. So everything you do in Kubernetes, when you have these larger replica sets that run 4, 10, however many instances you want running all the time, they're gonna run pods and not individual containers. Pods are made from one or more containers. Why? Because the containers in a pod share a namespace and IP address, so they'll each see the others as localhost, every one of them if you've got more than two. It's important to note that it's okay to have just one container, and in fact most of the time I'm running one container within a pod. So why would you want more than one container in a pod? The canonical example we use is a web server plus file sync. Suppose you've got a web service, you're serving up files that are created elsewhere, and you always want them to be in sync on this container: you'll run a file sync container and a web server container, because from a strict container-purist point of view those should be very separate things. In practice, are they always? Not necessarily, but if you're doing it the right way, you would split them into two processes, and this way you can still have them run on the same box. The other big case I see is, let's say not any of you in this room, but we all know people that maybe put up a LAMP stack machine once as a proof of concept, and a year later it's now this vital thing that everything has to run on. And you're like, them, not you, not anybody in this room, right? Not me, certainly, not more than twice or three times. You're up at night because this is now production. So you want to move over to containers to make it more manageable, but all the code is written for localhost and it all kind of talks to one another that way. You put it all onto one box in one pod, and then you can start the process of slowly separating them into more correct configurations. So, okay, I'm gonna show config files.
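As a sketch, a two-container pod along the lines of the web-server-plus-file-sync example might look like this (the names and the file-sync image are hypothetical, not from the talk):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sync        # arbitrary name
  labels:
    app: web
spec:
  containers:
  - name: web                # serves the files
    image: nginx
    ports:
    - containerPort: 80
  - name: file-sync          # keeps the content up to date
    image: example/file-sync # hypothetical sidecar image
```

Both containers share the pod's IP and namespace, so they see each other on localhost and can share mounted volumes.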
I'm gonna try to breeze through them pretty quickly. They're all YAML, but I feel like everybody's comfortable with YAML. You can also use JSON, but I find YAML to be a little bit more human readable. The key parts here are: I'm telling it it's a pod, I'm giving it an arbitrary name and some labels that I'll talk about a little bit later, and then I'm pulling down an image. Now, this particular image is being pulled from my own Docker repository, but you can pull straight from the main Docker registry and Kubernetes will understand that. Containers are the subatomic particles of Kubernetes: Dockerfiles just like you're used to. If you've been using Docker and building stuff on your local machine with Docker, you can use those Dockerfiles straight in Kubernetes, in most cases without altering them. The times I've had to alter them, it's because I didn't fully understand the way Kubernetes did things, and once I did, I was able to unalter them and just use them the way they are. The next concept is controllers. They handle turning current state into desired state. So you have a statement: I would like there to be four front ends. Something happens and one of the machines goes down, kernel panic or something, so you now have three. The controller observes that there are three, does a diff, says there should be four and there are now three, and then it takes action to push up another one. The example of this is replication controllers, which used to be the only type of controller; we've since renamed a lot of this stuff, and I will explain that. But replication controllers are basically how you set up a group of pods all running the same thing. Now, here is a config for that. It's a little bit longer; there's a little bit more to it. But the key thing you see on the right is that it contains a pod: it basically contains a pod specification, exactly the same.
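A replication controller config of the shape being described, with the pod spec nested inside it, might look roughly like this (names and the image are hypothetical stand-ins for what's on the slide):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend
spec:
  replicas: 3                # desired state: keep three of these running
  selector:
    app: frontend            # which pods this controller owns
  template:                  # this is a pod spec, exactly as you'd write it standalone
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: php
        image: example/frontend:v1   # hypothetical image
        ports:
        - containerPort: 80
```

The `template` block is the same pod definition as before; the controller's job is just to keep `replicas` copies of it alive.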
And the reason for that is most people don't run straight pods, because pods don't have the instruction to keep themselves up. If a pod goes down, it goes down. The only way a pod keeps staying up is if you build it as part of a replication controller. Now, there's another thing called replica sets, which do everything that a replication controller can do but have a slightly different, superset way of selecting things into the replica set. The reason I point this out is that this is a relatively recent change, and if you go looking in the docs, if you go further with Kubernetes, a lot of the docs still reference replication controllers; they haven't been updated to talk about replica sets. But you can use them interchangeably, and replica sets are where it's at now. Which is, hint hint: replication controllers are slowly gonna be deprecated at some point. Actually, there are no plans to do it, but I mean, it's sort of obvious. So, deployments: an improvement over the previous rolling updates. You wanna update containers, and you wanna do it slowly over time, to see the changes happen and make sure there are no problems. In the past this was done imperatively: you said, there were this many before, make there be this many now. Now you change your desired state and Kubernetes makes it happen. It's a lot easier for updating and just makes things a little bit easier overall. You'll notice that deployments are a lot simpler: this is a deployment of a replica set, and you'll see that it's a lot simpler than that replication controller config, which we like. It also has the pod specified in it. So the last thing, or I don't think it's the last thing, one of the last things, is services. We've got all these ephemeral pods, and we don't care about an individual pod; if it goes down, we don't care. But then how do you map that this service is being provided by this set of pods?
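A sketch of a deployment of that shape, again with the pod spec nested inside (the `apiVersion` varies by cluster version, older clusters used `extensions/v1beta1`; names and image are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:             # replica sets support richer selectors than RCs did
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: php
        image: example/frontend:v1   # hypothetical image
        ports:
        - containerPort: 80
```

The deployment manages a replica set for you, which in turn manages the pods, so you never create the replica set directly.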
Services are what do that. You basically define a service as all the pods sharing the same role. It gets a virtual IP address, and the DNS server inside Kubernetes will match whatever you name this service to the pods that are serving it up. So for example, in the case of the front ends, I label all of them with a service of "frontend," and whenever a call comes in to the front end, it'll get served by one of those pods. Services are also useful for exposing Kubernetes services to clients outside Kubernetes. These configs are pretty easy. You see this one has a load balancer on it, which means it's public, which means it'll get a public IP address that is load balanced and will spread the load among all the containers in the setup here. Labels and selectors: you can add whole bunches of arbitrary metadata to your configurations, and you kind of saw that in my config files. They allow you to select things into services and other roles, but services primarily. So for example, I could take all of the prod machines and run them as one service. I would probably want to further specify it and say I want all prod machines that are also front end to be one service, and all staging front ends to be another, but I could also do it by tier and say these are all the prod machines. This basically means you can run your whole environment in the same cluster, so you get near-perfect parity of almost all the variables, so that you can test in environments that look a lot like each other. Okay, the last one of these is networking. In Kubernetes, all pod IP addresses are routable. Docker, by default, gives a container a private IP, so if you want to talk to another container, you have to go up through the main Docker host. With Kubernetes, pods can talk to each other directly, and they can do it across physical machines.
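A service like the public front-end one being described might be sketched like this (names are hypothetical; the `LoadBalancer` type is what makes it public):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer   # public: provisions an external, load-balanced IP
  selector:
    app: frontend      # any pod carrying this label backs the service
  ports:
  - port: 80           # port the service exposes
    targetPort: 80     # port the pods listen on
```

Dropping `type: LoadBalancer` gives you the private, cluster-internal flavor, like the MySQL service in the demo.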
So if you've got 12 machines in a Kubernetes cluster, no matter where the pods are, they can all talk to one another. So, to sum up all the stuff I just went through: Kubernetes works on containers, but you never actually use a container directly, you use a container inside a pod; but you never actually set up a pod directly, you only set up pods that are created by things called replication controllers; but replication controllers are sort of on the outs, so you want to use a replica set; but you don't set up a replica set directly, you set it up through a deployment; and that deployment exposes everything through a service; and those services are defined by labels and selectors. Simple, right? The reason for all this, and the reason I take the time to explain it, is that Kubernetes is a very active open source project, there are a lot of voices driving it, and there are changes happening. We've tried very hard to make sure those changes are not backwards incompatible, but that means there are a lot of things to go through, and the documentation is still in flux. So bear with us; we're doing some cool things, the team is working very hard on this, but sometimes the documentation is a little bit hard to navigate. So, I feel like I should show you Kubernetes in action. Now, before I get into the demo, I want to explain that I Julia Child-ed some of this. I pre-baked some of it, and I'm going to explain exactly what. I built a Kubernetes cluster. I installed all the stuff on the cluster. I built a volume to store the MySQL data that I'm going to show off. I built several Docker images to use in this demo, and I pushed those images up to a Docker repository. So why didn't I do this live? Mostly because I just went through that whole really dry part and felt that showing you a process that takes anywhere from two to ten minutes, depending on network connectivity, is probably not the way to kick things off correctly.
When I'm at my desk at Google, it takes two minutes. Starting up a cluster doesn't really take that much time; it's really about how fast you can push the images across the wire. Pushing the images up to the repository is the part that takes the longest. At my desk that takes a relatively short period of time, but in a hotel room over hotel Wi-Fi, or over conference Wi-Fi, it's going to take ten minutes, and I figured that was probably not a good use of our time. So all of that has been done already, and I'll just show Kubernetes in action. This app is a very simple app: a PHP and Apache front end plus API. I did not split it into three tiers, for simplicity's sake, but I also have a MySQL backend running. It is, shockingly, a to-do app. I know you've never seen anybody do a demo with a to-do app, but I'm gonna try it now. Simple PHP with Bootstrap on the front of it. So now I'll switch over to demoing this. The first thing I have running here is this visualizer, which is gonna help show all the pieces spin up, so you can kind of get a visual feel for them. The other thing is, instead of typing out a whole bunch of stuff that's really easy to misspell, I'm going to use a makefile. It is running commands under the covers, and you should see them pop up. So I am just gonna go and do the whole thing: make kube, great. You'll see I'm running a couple of gcloud commands to set the environment up, and then it's basically taking my configuration files and just applying them. You'll see that the front ends, which are PHP, the gray boxes up the top, spin up relatively fast, because PHP and Apache are relatively quick to spin up. MySQL takes a little bit longer, just because it takes a little bit longer. The green pieces are the services. Can everybody see in the back, by the way? It's not too small? Yeah, okay. So the green are the services, and the MySQL service is private; it doesn't get exposed publicly.
The front end one is public, so in addition to that IP address, it's gonna pull down a load balancer IP address in a moment. While I'm waiting for that, I'm actually gonna kill off some of these containers and you can see what happens. So that gives me a list of the pods I have running. You'll see that I've got two pods for the front end deployment and one for the MySQL deployment. Now I'm going to simulate, I don't know, some catastrophic problem, or one of the pods erred out or whatnot, and I kill it. You'll see it goes red, but immediately a yellow one pops up, because yellow is the startup state. Kubernetes said, wait, I'm supposed to always have two running, one's gone, so start another one. It doesn't even ask, it just does it. And that's the whole joy of the replica set working for you. We now have a public IP address, so I'm gonna call that. And we'll see there's test data in there from me testing this out in various places. So I'll just add one more. There we go, it's persistent. All right, so we started up a Kubernetes cluster and we spun up an application, and it's working. Now someone, the developer or the designer, comes along and says, hey, this light front end is crap. We don't do light; to be cool, we have to do a dark front end, so that people can't read what's on the screen. So normally, with containers in general, you'd have to build another image with the new code in it, not too crazy, but then you'd have to do some sort of rolling update, some sort of script, to make it happen. With Kubernetes, I can just point this deployment at the new image and have it use the new image. So I'm gonna show that happening. I'm gonna roll the update here, and you should see, there's the command session, that it's killing off the old ones and starting up new ones. And there we go. Now, that was a very fast rolling update because, again, I'm doing a demo and I want it to be fast. I don't want us to wait.
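The "point the deployment at the new image" step can be done declaratively by editing the deployment and re-applying it. A sketch of the relevant fields (image tags and names are hypothetical; the strategy block is what controls how gradual the rollout is):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one old pod down at a time
      maxSurge: 1         # at most one extra new pod during the rollout
  template:
    spec:
      containers:
      - name: php
        image: example/frontend:v2   # changed from v1; applying this triggers the rollout
```

Changing only the image tag in the pod template is enough: the deployment notices the desired state changed and swaps pods out at the pace the strategy allows.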
You can set it to take much longer than that. You can do A/B-type testing, you can do a whole bunch of things, but I just kind of wanted to show it happening really quickly. If I refresh, we'll see we've got our cool dark UI already ready. So that's a quick tour through Kubernetes. I'm gonna tear down my cluster here. So, I was torn. Part of me said, you don't have to set up Drupal on Kubernetes, because people kind of understand, right? And then I was like, all right, well, maybe I should. And I was like, eh, is that pandering? You know, like, oh look, I can do Drupal on Kubernetes. But I figured it was worth showing that it can be done. So I'm waiting for this MySQL instance to die and then I'll run it. The hard part: I did it last night, because I was finally like, no, you have to do this, you have to get it working. So I started last night, and it took me two hours to get it working. A big part of that was, quite frankly, that I don't do a lot of Drupal administration, so figuring out how to create a Drupal instance that was already pre-started, already configured, took me a while, because I didn't realize that my MySQL setup had a whole bunch of drop-table-and-recreate statements in it. So all my data kept getting dropped, which sort of hurt the whole "see, it can persist between sessions" bit. So I got that working, and it took me about two hours. Overall, it's relatively easy to get started with Drupal in containers. The public, canonical Drupal container is actually pretty nice to work with, at least from the ops side; I don't know how it is for administering Drupal long term, but I just thought I would share that. So let me go and do the same thing, but for my Drupal install. Like I said, I just grabbed the Drupal install that was set up, and the only thing I have to do is wait for a load balancer.
So I'm waiting for the load balancer to start up so I can show the front end for this. Any questions? Yes: networking between nodes you typically do either directly or through calls to DNS entries that are in the Kubernetes DNS setup. Correct, you would not be doing that at all. All right, any other questions? I saw one other hand. Yes, you in the back. Yes. Oh, so all of this is running on Google Cloud Platform, so under the covers this is running on Google Compute Engine instances. As part of Google Compute Engine, you can spin up arbitrary disks and then connect them to individual machines. I'll explain a little of what I'm hand-waving away here in a minute, but basically it's a shared disk that any one of the machines can connect to. Any one of the machines can connect to it to read; there's only one write master, but the cluster setup takes care of all of that, because there's some under-the-covers magic working there. All right, we have our public IP address, and we'll see I have a slight caching problem. That is my problem and not this setup's, and I can fix it if I go through and clear the caching. I got it working and then didn't notice the caching problem until this morning, and I was like, you guys would be cool with me recognizing it's a caching problem and the content is all there, right? Or do I need to clear the cache and show that it's really all there? Yeah, all right. So, hold on. I don't know where it is if it doesn't style the right way. What's that? Configuration. "Configuration," top. Oh, there we go, okay. See, when you're in front of a crowd, you get nervous. Okay, there we are, see: Performance, clear all caches, save configuration. Waiting, and there we are. Is the content there? You'll see the entry I made in my hotel room, and it persists because the disk is persistent between Kubernetes deployments. Yes, that's a really good question.
I haven't done it, so I'd have to look into it. I know it's possible, I just don't know off the top of my head how to do it. So I'll give you a card; drop me an email and I'll get back to you with a full answer. Okay, so with that, I'm gonna kill my Kubernetes cluster here, but we're happy, we've got Drupal on it, all right. Was it pandering, or should I have done it? It was the right move? Okay, all right, good. So let me just kill this cluster. And WordPress would have been funny, but the WordPress Docker images are, well, this is recorded, so I can't say the word that rhymes with "duster duck," all right? I have tried that in the past, and it is not a day in the Magic Kingdom by any stretch of the imagination. All right, so some of the other things: we talked a little bit about rolling updates, and, almost as predicted, we talked a little bit about persistent volumes, which I'll talk a little bit more about in a minute. There's more to Kubernetes than just what I'm showing here; I just sort of wanted to touch on the big features. There's a secrets API for storing sensitive things like application and database passwords, instead of putting them in environment variables. There's logging, there's monitoring, there's events, there's a web interface, not what I showed you here, that's a specific thing we have set up to do this kind of visualization for demos. It can handle a jobs-queue sort of idea, where it will spin up pods just to deal with very specific jobs. And another one of the things that I didn't show off here is horizontal pod auto-scaling, which is kind of cool. Let's say you've got five pods and they're starting to get resource constrained, they're starting to be very highly utilized. You can set up the original replica set to say, if utilization gets above a certain metric, spin up more pods. Now, you have to be careful with this, because it doesn't do anything with the underlying architecture, but I have an answer for that in one of the other pieces.
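A sketch of horizontal pod auto-scaling as described, using the `autoscaling/v1` API shape (the target name and thresholds are hypothetical):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: frontend
spec:
  scaleTargetRef:            # what to scale: a deployment in this sketch
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80   # add pods when average CPU goes above 80%
```

Note this only adds pods, just as the talk warns: it does nothing about the capacity of the underlying nodes.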
So the actual hardware it's running on won't necessarily scale up, but you can scale up pods as long as you have capacity. So a couple, all right, I'll take one question, what's up? Well, for me, Kubernetes just takes care of it. I don't have to set up anything other than labels and selectors to say "this service, call that"; the service handles the load balancing. You don't really mess with any internals of that unless you really get deep into it. Yes? Currently, no, but when I get to the roadmap section, I'll talk about that. I'll take one more question and then I'll get into the rest of this. Yes? No, I won't go deep into ingress, but feel free to talk to me afterwards, okay? So, comparisons. When you start talking about Kubernetes and the container ecosystem, a lot of the same questions come up: how does this compare to other tools in this space? So, Docker Compose allows you to put multiple containers on the same host, the same local host, much like the pod architecture. Docker Swarm allows you to build clusters of container hosts, and Docker Machine allows you to manage remote container hosts. So Docker has a very Unixy sort of philosophy here: let's build small tools and connect them together, and they'll all work together. Kubernetes is sort of, let's do all of that in one shot, and it also adds routable networking, replication controllers, replica sets, job sets, and auto-scaling. Now, there may be some pieces that Docker has added relatively recently to cover some of this, but I don't know of them off the top of my head. Also, Docker's logos are much cooler than ours, I really have to admit that; they just really are. I gotta give credit where credit is due. The second question I usually get is Mesos, how we compare to Mesos. Mesos is a multi-machine kernel; it turns a data center, or all the installed machines, into one logical system.
It certainly has the ability to handle containers, and it does it well, but it can do other distributed-job-type things too. Containers are something that's been added to Mesos as containers have taken off; it does it very well, but that's certainly not all it does. Kubernetes, on the other hand, is specifically designed for container management. It has very strong opinions on how to do service discovery and logging. It can run on top of Mesos, and in fact some people have been using Mesos to get auto-scaling Kubernetes clusters working in advance of some of the things that we've added. So if you want a cluster that will auto-scale the hardware, the underlying nodes of the Kubernetes cluster, you can use Mesos to accomplish that. You can also use some other things to accomplish that, and I'll talk about that in a second. So now, here's the part where I pay the piper to my employer. We have something called Container Engine, which is hosted Kubernetes. Up till now I've been talking about developing on Kubernetes, about just creating an application on top of Kubernetes; that's really all I've shown off. I yada-yada'd the whole "let me build the cluster" part, right? So let's talk about that process. What does setting up a cluster look like? Well, you have to first choose an infrastructure, and it can be Google Cloud, sorry, Google Compute Engine, or AWS, or Azure, or Rackspace, or on-premises. I certainly have strong opinions about where this should be, but we run plenty of Kubernetes on top of AWS; as you'll see in our roadmap, which I'll get to, we are adding a whole lot of support for AWS features. After you choose your provider, you then have to choose an OS. I think we go with straight Debian by default, but I'd have to double-check that. CoreOS is also a really popular one; CoreOS is an OS designed specifically for hosting containers. But you can choose any one of these. Then you have to provision the machines, so you create and boot your VMs.
You then install and run all the kube components — there's a master and multiple nodes. After that, you configure networking: IP ranges for pods and services, and service discovery. Then you start cluster services like DNS, logging, and monitoring. And then you manage the nodes: kernel updates, OS upgrades, hardware failures, all of that jazz. Or: you go to Google Cloud Platform, you go to Cloud Console, you go to Container Engine, and there's a little button right there that says Create a Container Cluster, and we'll just do it for you. You say this is how many nodes I want, this is how beefy I want them, and off you go. So here's the thing. We want you to run Kubernetes wherever you want. We think we have a good solution, and we're happy with that. But one of the things we do is offer $300 of free credit for two months when you sign up. So no matter where you're thinking about running Kubernetes, if you just want to try it out but the setup work is an obstacle, let us yada-yada the work for you. Learn whether you want to do this thing first, and then go through the pain of the learning process — Freudian slip there with "pain" — of setting it up on-premises or on AWS or anywhere else. But learn it on us, for free. And this is where I point out that I'm a developer advocate and not a sales guy. I don't care if you do it for free and then do it somewhere else. I just want you to use Kubernetes, because it's cool. So, Container Engine is hosted Kubernetes, as I pointed out. It's got a few smart defaults set: things like DNS is already set up for you, logging is already set up for you. Kubernetes has a whole bunch of logging options available; if you go with Container Engine, you're on our solution, which may or may not be good for you.
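The hosted flow I just described really is about that short. Here's a sketch of the equivalent on the command line, assuming you have the gcloud CLI installed and a project configured — the cluster name, node count, machine type, and zone are made up for illustration:

```shell
# Create a hosted Kubernetes cluster: pick how many nodes and how beefy they are
gcloud container clusters create demo-cluster \
    --num-nodes=3 \
    --machine-type=n1-standard-2 \
    --zone=us-central1-a

# Point kubectl at the new cluster
gcloud container clusters get-credentials demo-cluster --zone=us-central1-a

# Sanity check: the nodes should show up as Ready
kubectl get nodes
```

DNS, logging, and the master itself come preconfigured, which is exactly the smart-defaults point above.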
So another thing to keep in mind: if you want to go on-prem, or run your own hosted Kubernetes somewhere else, Container Engine lets you dip your feet in first. You just want to try it, see if it works for you — it's a great way of doing that. It's also a great way of just running Kubernetes; if you do get to that step, we're happy to have your hosting business. And one of the things Container Engine has, being part of our platform, is node auto-scaling. So you can set up your pods to auto-scale: as demand goes up, you spin up more pods. And then, when utilization on the underlying machines gets high enough, new machines get spun up and added to your cluster, giving you more capacity. So in theory, you can have a self-healing cluster: the more load you get, the bigger it gets, and you don't have to manage that process. We also have Container Registry. Container Registry is a hosted Docker image registry. You can set it up to be public, but you can also set it up to be private, with credentials, so that only certain people can pull your images. You can use it with Container Engine or not — I use it for hosting some of my images that I pull down in various places using Docker, without Kubernetes involved at all. So, we're going to get to conclusions. First off, someone reached out to me from Wodby. They said, hey, could you mention us? They do Drupal on Kubernetes on whatever hosting solution you want. So if you want to host on AWS or Google Cloud Platform, these guys will help you do that — check them out. They do applications other than Drupal too, but in this context it made sense to point them out for that. So, Kubernetes is open source and they want help — and help isn't limited to one skill set.
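To make the pod auto-scaling I described above concrete: the horizontal autoscaler's core rule is roughly "scale the replica count by the ratio of observed load to target load." Here's a simplified sketch of that rule — the real controller adds tolerances, cooldowns, and min/max bounds, so treat this as an illustration, not the implementation:

```python
import math

def desired_replicas(current_replicas: int,
                     current_utilization: float,
                     target_utilization: float) -> int:
    """Simplified horizontal autoscaler rule:
    desired = ceil(current * observed_load / target_load)."""
    if current_replicas == 0:
        return 0
    ratio = current_utilization / target_utilization
    return math.ceil(current_replicas * ratio)

# CPU at double the target: replica count doubles.
print(desired_replicas(4, current_utilization=100, target_utilization=50))  # -> 8
# CPU at half the target: replica count halves.
print(desired_replicas(4, current_utilization=25, target_utilization=50))   # -> 2
```

In Kubernetes you opt into this per deployment — it's off by default, as I mention in the Q&A later.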
Kubernetes itself is written in Go, but there's documentation, there's everything, so they want all the help they can get. It's on GitHub at kubernetes/kubernetes. There's an IRC channel — there's also a Slack channel, but the engineers prefer IRC, and I won't go into why. And then there's the Twitter account and kubernetes.io. Roadmap. So Kubernetes 1.3 is the next version coming out; the current version is 1.2. Its target is June. I'm going to go out on a limb and say that's a soft target. We usually hit within a month, so take it with a grain of salt — it's probably June or July, as opposed to when we say Q1 or Q2, when it could be anywhere. But I think targeting June or July is probably safe. The big piece coming in 1.3 is something we affectionately call Ubernetes. Someone brought up having multiple data centers earlier. Right now, a Kubernetes cluster can't talk to another Kubernetes cluster. Ubernetes will allow that, so you can have Kubernetes clusters running in multiple data centers. And here's what's kind of exciting for us: multiple cloud providers. You could run part of your Kubernetes setup on AWS, part of it on Azure, part of it on Google Cloud Platform, and have us fight for your business to the death. So that's really exciting — it's a really cool thing coming down the pike. As part of that, we're adding more and more AWS support to Kubernetes. It runs fine on AWS, but it could run better, with a little less configuring. So a lot of those features are about working with AWS's native pieces: we have it pretty seamless on the Compute Engine — sorry, Google Cloud Platform — side, so let's make it a more enjoyable experience on AWS using the native pieces AWS already has. So, if you like Kubernetes but are wondering, what the hell am I getting into here?
Like I pointed out before, Container Engine can make dipping your toes in a little less painful — you can start actually playing with it to see if it solves your problem, instead of spending time configuring it first. Here's where I need to say that we have been using containers for the last 10 years. At Google scale — this is still mind-boggling to me — we had 10x growth in the first six months we were in existence, and that growth stayed pretty high for a pretty long period of time. If you think about how you have to handle growing by a factor of 10 every six months, it's challenging, and some pieces we kind of learned along the way. So today everything at Google runs on containers: Gmail, web search, Maps, MapReduce jobs, Colossus — even Google Cloud Platform VMs run in containers. Actually, VMs are VMs, but containers tell them what to do. Actually no, VMs technically run in containers underneath everything. I went back and forth with engineering on what is actually the truth here, and I got different answers from different people. So I'm going to go with: yes, everything at Google runs on containers. Underneath it all, we're not running Kubernetes internally — we're running Borg, and Kubernetes is not an open-source version of Borg. What happened is, we spent 10 years learning a whole bunch of stuff, with a whole bunch of "why did we do this?" in there, and Kubernetes started with that base knowledge of "you should never do this." So that's where a lot of our stuff is coming from. Technically, underneath everything, it's all running on containers. We launch two billion containers a week. So we stand by this, we do this, it is a viable way of running at scale. We think containers are a way to manage scale, and we think it's an important way of doing it.
But I'll make it really clear that you should carefully consider whether running everything on containers is right for you. If you've got apps that are running fine — not using up the full capacity of your bare-metal setup, or your on-prem machines, or your VMs, or wherever — and you're not having to deal with massive changes of scale or load, it may not be right for you, and that's totally cool. It's not for everybody, and that's fine. But I just want to make sure you get this warning from me before you walk away saying, "the guy from Google said you should run everything on containers." No — you should carefully consider whether running everything on containers is right for you. It works for us; it's worked for a lot of our customers, a lot of our partners, even some of our competition it's worked very well for. But make sure it is right for you before trying to commit to it whole hog. I know there's been a lot of pushback against the "docker, docker, docker, docker, docker" hype — all you need to do is go to a conference and say Docker and you get a talk. There's been a lot of pushback from that, so I want to make it very clear: containers are great when they solve your problems, but they are not a solution to everyone's problem. So with that, I'm going to say thanks. I'll take time for some questions — I think I have some time, yep. My presentation is up at bit.ly/tprion-chaos; you can pull it down. If you want to get in touch with me, the best way to ask me questions is here — I'll be around. But if you think of something later, feel free to reach out to me on Twitter and I'll usually get back to you. So, any questions? Yes, your hand was raised. I'm sorry, what was that last part? Okay — I have not done it on AWS, so I can only talk about the Google Cloud Platform version of it.
So basically, when you set up the job — sorry, the deployment — you add auto-scaling. It doesn't happen by default; you have to add it specifically. And then, based on certain metrics, it will scale out those pods. But eventually you'll hit the limits of your capacity. So what you do is set up an instance group in Google Compute Engine with auto-scaling rules on it. When the number of pods gets high, it drives up utilization on the actual machines, the metric takes over, and it spins up another instance that gets automatically added to the cluster. Now you have space for more pods — basically, the utilization of the entire cluster goes down, so you get extra capacity to move some of the pods over to the new machine. Does that answer the question? Yes? Yep, that's how it works. Okay, any more questions? No — oh yes, one more. So no, you wouldn't manage them through the same setup. At that point, what you would do is set up that external service and just call it the way you would normally call it from anywhere else. So — are you trying to call a pod from an external service, or call an external service from a pod? In that case, the external service should have a URL or an endpoint that you can call from those pods, because pods can still pull from external services. In what sense? Correct — no, I get it, the auto part, that's great. The other thing I was going to suggest is that there is a DNS server in Kubernetes, so you could inject that into the Kubernetes DNS server if you wanted to.
So you could inject the external IP addresses into the DNS setup. It'd be extra setup for you, and there's no way to do it through Kubernetes itself — you'd have to do it through the Kubernetes DNS service. But if you really want to go into that, drop me a line; I'll give you a business card and get you a better answer than that. Okay, all right, with that I'm going to say thank you. I'll be around tonight — you can usually catch me smoking — and thank you guys very much for your time.