So, my name is Konstantin Semenov, I'm a principal software engineer at Pivotal, and today my colleagues from Google, Jeff Johnson and Megan Cailin, and I will talk about Kubo. So, what are we going to talk about? First, I will give a brief introduction to BOSH. Some of you may know what BOSH is; some of you might be interested in what problems it solves for us. Then Jeff will talk briefly about Kubernetes, explain what problems it solves and how it could be useful to the Cloud Foundry community, and then Megan will explain how Kubo brings both of them together to our mutual benefit. Then we'll give you a demo of how Kubo handles VM failures with an example application, and how that propagates through the whole stack. While we're doing this there will be some waiting time, so we'll discuss the project roadmap, and then we'll tell you how to contribute to the project. So let's kick off. These days, developers use different technologies to deliver the functionality they're working on. These could be anything from event-driven functional systems to microservices. These could be the app-centric systems that we all know and love, the 12-factor apps we use in Cloud Foundry. They could be more complicated, stateful applications, for example something that can be run in Docker via Kubernetes, or even data services like database management systems and the like.
All of them run on top of some sort of infrastructure, and when it comes to connecting them together and then maintaining that, it almost always shoots the operational cost through the roof: either you'll be left with unpatched versions of different environments, because they're all running on different systems that have to be patched separately, or you have to manage the credentials when the applications are talking to each other, and all of that grows exponentially with the number of systems you have to handle. So what do you do? Well, Cloud Foundry has been using BOSH for a long time to manage all of that. It is used to create all the virtual machines that Cloud Foundry runs on, make sure that they're up and running, and enable scaling and updating, and now with the new CredHub product it can also store and rotate your credentials and secrets securely and automatically. So what is BOSH? It's an automated, cloud-agnostic platform. It was inspired by Google's Borg system, a cluster manager that runs at tremendous scale and handles hundreds of thousands of jobs in the Google infrastructure. It's an open-source tool chain for release engineering, deployment, and lifecycle management that manages your infrastructure for you, as I explained before. So without further ado, I'm handing over to Jeff to talk about Kubernetes. Thank you, Konstantin. Does this work? Yes. Okay, so what is Kubernetes? If you've been on Hacker News before, you've seen the ship's wheel. There's a lot of talk about it, but if you land on kubernetes.io, you'll get a pretty dense but comprehensive explanation. Kubernetes is all about deploying, scaling, and managing containerized applications. I like to think of a containerized application as just a binary blob plus some metadata that you can run in a pretty reproducible way. But you don't just have one container.
There's not just a single instance or a single service; it's a whole network of interacting pieces that work together, and they raise a lot of questions around how you access them, manage them, and update them. Kubernetes is all about that orchestration and management. Where you would use a system like Kubernetes varies quite a bit, but a few use cases really stand out. An excellent one is commercial off-the-shelf applications. Say you have a C++ program with a long tail of binary dependencies: sticking that in a container is a much more convenient way to run it reproducibly, and if you can just pull it off Docker Hub, that's a very easy way to get the software. Some apps have very specific hardware, scheduling, and networking requirements. A cf push would not suffice if I need to say that my workload needs 10 GPUs and had better not run on the same machine as some other node. So those are very specialized applications. And the last one is data services with persistence. This is an evolving area in Kubernetes, but we'll show you how you can do that today. So this is a bit of an eye chart, and I apologize; I'm not a graphic designer. Here are a few core concepts of Kubernetes that we're going to look at, just to get some of the terminology and understand how Kubernetes talks about these containers and connects them together. If you look at the bottom row, we have three pods going across. A pod is a set of running containers. Our pods here run a CockroachDB image, just an image from Docker Hub, and they have some metadata on them: app=cockroachdb, just an arbitrary key-value pair. Above those three running pods we have a service. The service is how we're going to expose these applications. The service gets a name and a selector, and the selector matches that same app=cockroachdb label, which is no coincidence.
And it's got a name. What this allows us to do is dial it from within the cluster, get DNS resolution, and route traffic directly to a healthy pod. If we go up a tier, we've got our front end, which you can just think of as a stateless app. It's able to access those services over the container network. In our case we use Flannel, but you can use anything here; Kubernetes is not that opinionated about it. One more layer above, our guestbook pods, which are just like the ones below, sit behind a service with a NodePort. Now, Kubernetes has a lot of ways to expose services externally to the cluster, and we're not going to go into a lot of detail about that, but NodePort is one that says: if you hit any VM on this port, I will route you to this service. Somehow you do ingress into there, and you've got a service. That's how the concepts play together, but in practice Kubernetes is just a collection of running user-space applications. If you look on the far left, we've got our control plane. The control plane has a master node. That master node runs an API server, as well as a scheduler, which decides where the work is going to run. That's all stateless and connects to etcd, which stores all the actual state for the cluster. We then have a set of workers who actually do something in the cluster: they're actually running your work, and they also run a few services, like the container overlay network, Flannel, and kube-proxy, which helps set up some iptables rules. Somehow that gets to the internet, and folks can consume your Kubernetes services. So that's the whirlwind tour of Kubernetes. I'm going to pass it off to Megan, who's going to talk about how you can combine BOSH and Kubernetes. Thanks. So if you Google Kubo, you'll actually get this movie called Kubo and the Two Strings. Have any of you seen it? Well, if you came here to hear about that, we're not talking about that, so I'm sorry.
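The pods-plus-service wiring Jeff describes can be sketched as a manifest. This is an illustrative fragment, not the actual demo configuration; the name and node port are assumptions, though 26257 is CockroachDB's real default SQL port:

```yaml
# Hypothetical sketch: a Service that selects pods labeled app=cockroachdb
# and exposes them on a fixed port of every node in the cluster.
apiVersion: v1
kind: Service
metadata:
  name: cockroachdb        # DNS name resolvable from inside the cluster
spec:
  type: NodePort           # hit any VM on nodePort and get routed to a healthy pod
  selector:
    app: cockroachdb       # matches the pods' arbitrary key-value label
  ports:
    - port: 26257          # CockroachDB's default SQL port
      targetPort: 26257
      nodePort: 30257      # illustrative; must fall in the cluster's node-port range
```

The selector is the whole trick: the service doesn't list pods by name, it just routes to whatever healthy pods currently carry the label.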
We named it that, surprisingly, not after the movie but after Kubernetes on BOSH, shortened. As we said already, BOSH was inspired by Google's Borg system, which is our internal cluster manager, and you might also know that Kubernetes is based on the same system, but in a different way. Kubernetes does management of containers and clusters of containers, and BOSH does management of VMs and clusters of VMs. So they're actually very complementary systems, in my opinion, because they're based on the same underlying Borg system. This is a project that Google and Pivotal have been working on together for about six months. The reason we started it is that we found Kubernetes has some unsolved problems, since it's meant to be a manager of containers, not VMs. It doesn't have some things like, for example, health checking and healing of the VMs themselves. If those Kubernetes nodes go down, Kubernetes will reschedule the pods onto other nodes, but it won't bring back the nodes themselves. Also, in terms of HA, there's no support out of the box for multiple master nodes or etcd nodes. Then scaling: if we want to add additional nodes to our cluster, we'd like a way to do that, and Kubernetes isn't providing that today. And then upgrades: that's both upgrading the Kubernetes software you're running and upgrading your operating system. If there's, for example, another Heartbleed bug, you'll need to update the stemcell version of the operating system for your cluster, and you'd like to be able to do that without taking the cluster down. You might have noticed that those are the exact problems we said earlier that BOSH solves. So Kubo is meant to solve those problems using BOSH. Our goal is to give you a uniform way to deploy and manage your Kubernetes clusters, and we do that with BOSH. And since BOSH works on any cloud, Kubo should also work on any cloud. How do we do that?
We break it up into two things. Day-one activities would be deploying a cluster. We have a repo with deployment scripts and documentation on how you can deploy a cluster using a BOSH Director. And we're working on integrating that with Cloud Foundry, so you could type something like cf create-service kubernetes to get a Kubernetes cluster up and running. In terms of day-two activities, for the most part we just rely on BOSH to do that for us; we're testing it and making sure any kinks are worked out. BOSH does the self-healing of VMs and monitoring via the BOSH agent, elastic scaling for clusters, and rolling upgrades. We're continuously updating the Kubernetes version: we're on the latest major version right now, and we're working on updating to the latest minor version. And then high availability, and we'll be working on multi-zone support, so you could have a cluster that spans multiple zones. Kubo OSS is the joint project that Google and Pivotal have been working on together. We have two different tracks of work. One is the pure open-source solution, which has no dependency on Cloud Foundry: you deploy a Kubernetes cluster using BOSH, and then you interact with it the same way you'd interact with any Kubernetes cluster, using kubectl. But we also have another track of work to integrate this with Pivotal Cloud Foundry, so we do things like share a routing layer with Cloud Foundry and, hopefully, integrate support through the CF CLI. Now it's time for a demo. We have a cluster already deployed; it takes about 20 minutes to deploy, and we don't have that much time. We've deployed a cluster that has two master nodes, three worker nodes, and three etcd nodes. The first thing we'll do is look at the nodes Kubernetes is aware of right now, and we'll do that using kubectl. So we have three worker nodes, as you can see. And then let's look at the VMs BOSH is managing.
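The cluster shape just described (two masters, three workers, three etcd nodes) is exactly the kind of thing a BOSH deployment manifest declares. This is a hypothetical, heavily simplified fragment, not the real kubo-deployment manifest; the group names, job names, and release name are illustrative:

```yaml
# Hypothetical BOSH-manifest fragment: the demo cluster expressed as
# instance groups. Scaling the cluster later is just editing an
# `instances:` count and re-running `bosh deploy`, and BOSH reconciles
# the difference between desired and actual VMs.
instance_groups:
  - name: master
    instances: 2          # two Kubernetes master nodes
    jobs: [{name: kube-apiserver, release: kubo}]
  - name: worker
    instances: 3          # three worker nodes running kubelet + kube-proxy
    jobs: [{name: kubelet, release: kubo}]
  - name: etcd
    instances: 3          # three-node etcd quorum for cluster state
    jobs: [{name: etcd, release: kubo}]
```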
We'll just look at the workers, since that's what we're demoing right now. You can see the IP addresses of the nodes are the same, so these are the exact same nodes. Now we will deploy an application to Kubernetes, which is CockroachDB. You'll have to excuse us if it's a little buggy; it's a cockroach. Thank you. Oh, can you make it bigger? The font. Yeah. OK, so let's deploy our database application to Kubernetes. And yeah, we're going to watch the pods that are created on Kubernetes. OK, so we're creating a CockroachDB pod. We're going to create three, but they get created sequentially. The reason we're deploying this to Kubernetes is because Cloud Foundry doesn't support CockroachDB, but we want to use it for our application, so we're going to use the two together. And we're going to deploy a front-end application to Cloud Foundry in just a minute. OK, one of our pods was created, so that's good. Now we're creating the second one. Once all the pods are created, we'll also run a script to create a database within those pods and a table in the database. It takes a second. Cool, so now we can create our database and the table. And then we will push our Cloud Foundry app that uses CockroachDB to Cloud Foundry. This application is the Kubernetes guestbook application; if you've used Kubernetes, you've probably seen it. All it is is a store for text that you type into a guestbook. The reason we're deploying the front end to Cloud Foundry is because we think it's a good candidate for a Cloud Foundry application, but like I said, CockroachDB is not supported, so we need some way to deploy that if we want to use it. We think of them as complementary: you could deploy part of your app to Cloud Foundry and part of it to Kubernetes. But if you're migrating to Cloud Foundry, you might have applications that have dependencies that are hard to move.
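The split described here, front end on Cloud Foundry talking to a database on Kubernetes, could look something like the following CF application manifest. Everything in it is an assumption for illustration (app name, route, and the environment variable); it is not the demo's actual manifest. CockroachDB does genuinely speak the PostgreSQL wire protocol, which is why a postgresql:// URL makes sense:

```yaml
# Hypothetical Cloud Foundry manifest for the guestbook front end.
# The app reaches CockroachDB through whatever ingress the Kubernetes
# cluster exposes (for example a NodePort on a worker VM).
applications:
  - name: guestbook-frontend
    memory: 256M
    instances: 2
    env:
      # illustrative host/port; points at the Kubernetes-hosted database
      COCKROACHDB_URL: postgresql://worker-node.example.internal:30257/guestbook
```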
So you can use this as a stopgap solution if you want, or you can use it as a full solution if you have a dependency that really needs to run somewhere like Kubernetes. It takes a couple of minutes; everything in this demo takes a couple of minutes. OK, there we go. Now let's look at our application. We're going to pull up two windows just so you can see the persistence. Oh, that's an old deployment over there. Cool. So now if we store something in the guestbook, it should be stored in the database, and we can see it is. We can write something else. Darth Vader was also there. They're not writing much about what they thought, but that's OK. Cool. So now let's see how these two systems play together if we delete one of our worker nodes. We'll run a watch on the BOSH CLI command that lists the VMs it's managing, so we can see how BOSH deals with this failure, and we're already running the watch on get pods down here, so we can see how Kubernetes deals with the failure. We'll delete one of the nodes, one that has the dashboard running on it and one of our CockroachDB pods. We have to get its name from this; the VMs aren't named very memorably, but usually they don't need to be. I don't think people usually delete their VMs on purpose, but we do it a lot. Cool. So this is going to take about two and a half minutes to delete. Google has really fast VM boot times, but not really fast VM delete times, which is probably the place you want to optimize, but it depends on what you're doing, I guess. Not in this demo, but that's OK. So let's talk about what's happening here while it's deleting. This is the current state of our cluster. We have two masters, but you can ignore that part. We have three workers, and the tiny green boxes that hopefully you can see are our CockroachDB pods. And then I put this purple pod in here to represent the Kubernetes dashboard. Right now we are blowing up that worker node.
So we're going to end up in this state, which is not where we want to be: two replicas of our CockroachDB and zero of our dashboard. What will happen is that Kubernetes will notice those pods are gone, so it'll reschedule the dashboard onto another node, which is great. We asked the CockroachDB pods to all be on different nodes, so it won't reschedule that one; it'll just be mad for a little while. Then our BOSH Director will activate. It'll notice that the VM is gone because it can't contact the agent running on that VM. It'll wait to see if the VM comes back on its own; I think it waits something like 30 seconds, and I believe that's a setting in BOSH. Then it'll start creating the VM. Like I said, it takes about 30 seconds to create a VM. Then it takes a couple more seconds to install the BOSH agent on the VM, and then it has to run all of the jobs, like kube-proxy and the kubelet, on that VM to make it part of our Kubernetes cluster again. Once the node is back, Kubernetes will notice that it's there and be like, yay, I can schedule that pod again. And it will. Then we'll be back in the state we were in before. Our dashboard will probably stay on that other node, because it doesn't matter where it's running, but we'll be in a happy state again. So while we're continuing to wait, do you want to talk about the product roadmap for a bit? Yes, thanks, Megan. So while we're waiting for BOSH and Kubernetes to notice that our machine is actually gone, I can briefly talk you through the roadmap, what's at the top of our backlog right now, and I will point out when something fails over. As you can see now, the VM that we killed is no longer recognized by BOSH as a running VM, so BOSH now suspects that something went wrong. It will disappear in about 20 seconds, and a new one will boot in another 20 seconds.
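The "all on different nodes" constraint Megan mentions is the kind of thing Kubernetes expresses with pod anti-affinity. The field names below are the real Kubernetes API; the label is illustrative, and this is a sketch rather than the demo's actual pod spec:

```yaml
# Hypothetical pod-template fragment: require CockroachDB replicas to land
# on different nodes, so losing one VM costs at most one replica. Until a
# distinct node is available again, the displaced pod stays unscheduled
# ("mad for a little while").
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: cockroachdb                  # don't co-locate with pods carrying this label
        topologyKey: kubernetes.io/hostname   # "different node" means different hostname
```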
In the meantime, I can tell you about the networking that we have planned. So many... Oh, before you start: if you notice, the CockroachDB pod is now unknown to Kubernetes, and if we could see the rest of this, you'd see that the dashboard was recreated on a different node. Also, gcloud has finally reported that it has deleted the VM. Yes. Okay, so networking. Most Kubo users would be really interested in exposing the applications they deploy to Kubernetes outside of the Kubernetes cluster. At the moment we have two ways of doing that: one is through the CF routers, and the other is through IaaS load balancers. The thing is, they both have shortcomings and both need to be developed further. And yeah, now our node is missing from BOSH, so that means BOSH is currently creating a new VM to be that worker node. Great. Next up is high availability, which should rely on BOSH functionality but really needs to be thoroughly tested. We have a few experimental multi-AZ deployments that seem to be working, but we need to do more thorough testing on that. So the VM has come back up again, and it's waiting for the agent to be installed before it can be running, and before it can be recognized by Kubernetes. Oh, it is running. Yeah, so now it's running, but it's installing all of those Kubernetes jobs on it so that it becomes a true worker node. The next feature that is in really high demand is persistence, because in many cases, when people use Cloud Foundry and want Kubernetes, it's because they want stateful applications, so they need persistence. Kubernetes handles persistence through the platform, but Kubo has to be able to configure it properly for it to be reliable. Next, migration to the latest core components. Kubernetes has a very fast update cycle: version 1.6 came out two months ago.
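The persistence Konstantin refers to is what Kubernetes surfaces as persistent volume claims: the pod asks the platform for durable storage instead of using node-local disk. A hedged sketch, where the claim name and storage class are assumptions that would depend on the IaaS-specific provisioner Kubo configures:

```yaml
# Hypothetical claim: request 10Gi of durable storage from the platform,
# so the data survives the kind of VM recreation shown in the demo.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cockroachdb-data
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: iaas-ssd     # illustrative; backed by a cloud-specific volume provisioner
  resources:
    requests:
      storage: 10Gi
```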
Version 1.7 will probably come out in about a month or so, and there are a lot of differences between those versions, so we need to stay on top of that. Currently we don't support rolling upgrades, but it's something we're really looking forward to. It looks like our VM is now up and running, and we have a CockroachDB pod that is initializing. It takes a few minutes to initialize, so I think we should just switch back to the slides, but we are now back in the state that we'd like to be in, thanks to BOSH. I'll just close that. The last feature that is also very important is multi-IaaS. BOSH does support multiple IaaSes, which is really great, but we also need some of our own custom configuration to enable Kubernetes to run there, so this is really tied to persistence as well. And as you may have seen yesterday, Kubo was recently accepted into the Cloud Foundry Foundation. We're celebrating, yay. And this cat is really happy. So we have links to our repos here, but we're about to move them to cloudfoundry-incubator. I believe these links will still work, but if they don't, just swap the organization to cloudfoundry-incubator. This is kubo-deployment, our helper scripts and docs; if you're interested in deploying a Kubernetes cluster using BOSH, that's a really good place to start. If you're really good at BOSH, you could go directly to our BOSH release, kubo-release. I wouldn't recommend it, because it's hard to configure a manifest, and we have one already configured in kubo-deployment, but if you're really adventurous, you could do that. And we also have a Slack channel that I have a link to here. Thanks. I think we have a few minutes left for questions, if anyone has questions. Can you state your name and affiliation really quickly? Sergey Madochkin, Comcast. A question about persistency and volume management. Last time I checked, you were focusing on a cluster, right?
Is there a plan to make it more pluggable, specifically for installations inside data centers, not in a cloud? Something like ScaleIO and other backends. So at the moment we're focusing on the native volumes that are provided by the IaaS platforms, and we're not focusing on other solutions right now. It might be in the pipeline, but it's for later. What version of Kubernetes are you allowing us to deploy? Sorry, can you speak up, please? Can you hear me? Louder. What version of Kubernetes are you allowing us to deploy through CF? And there were two statements you made. Okay, can everybody hear? My question was: what version of Kubernetes are you allowing us to deploy through CF at this time? Because you made two statements, that you can scale masters and that you can scale easily. That is totally wrong. I mean to say that version 1.6.2 allows you to scale masters onto multiple nodes, and then you can also scale the etcd quorum onto multiple nodes or multiple masters. So if you could answer that question. The current version of Kubernetes, I think, is 1.6.1, but we're planning to update. Okay, additional questions? Yes? [Question about auto-scaling, partially inaudible.] You'd update the BOSH manifest with an additional number, and then BOSH will do a diff of what you want and what you have and scale that way. We haven't talked about auto-scaling, but that would be cool. Yeah. [Follow-up, partially inaudible: the questioner asks whether the scaling can be done automatically.] Also, our prioritization process is based a lot on what people are asking for, so the fact that you're asking means we should probably talk about it. There are a couple more questions. One here. Hi guys, from Accenture. I think this is great and solves a really big problem, but I have two questions. How do you keep it from falling behind Kubernetes?
As you mentioned, it's fast-paced. And the second part is: how do you prevent it from exposing only a limited set of features? Like you mentioned, persistence is still not there, and other things, so I think those two problems will continue. What is the overall strategy to make this kind of the default way of deploying Kubernetes? Yeah, since we're in a pre-alpha state right now, we're obviously missing some things. In terms of keeping Kubernetes up to date, right now we're just manually updating the binaries, but I think it'd be cool if we had an automatic system that does that, like a Concourse pipeline or something, which I think we've started talking about a little bit. I don't think you could do that for major versions, maybe, but at least for minor versions. And then your second question was? Oh, yeah, right. So I think the thing we're missing in Kubernetes right now is that we don't have these cloud-provider packages, so it can't provision cloud resources right now, but we're actively working on that, like, this week. Hopefully once we have that in place, we'll be able to provision things like load balancers or the persistent volumes. One last question. Edith? Okay, Jeff has to go, but Konstantin and I are going to stick around after if you have questions too. So, quick question. There is BOSH, which I'm really a big fan of. And then there is Infracrate, and there is also kops. So my question is: does that mean that kops is done, is gone? Can you expand? Oh. Did you say kops? Kops, K-O-P-S, the way you actually install Kubernetes? They're different installers. Not necessarily; I mean, some people, I think, are using Kubernetes in different ways. This is really more for production workloads, I think, and things that really need something to manage Kubernetes. I don't know that every use case requires that, though.