Okay, good afternoon. Good to see a lot of people here, and a lot of excitement around Kubernetes. Please raise your hand if you're using it, or have downloaded and installed Kubo, or just Kubernetes. Okay, got some people there. So, this talk: we know there's a lot of excitement about Kubo, and if you really don't know what Kubo is, we're going to explain it. But there's a lot of excitement about it, and generally about Kubernetes in the CF community, which is great. Two very strong communities, both embracing multi-cloud. And what I really wanted to do with this session is explain what those different abstractions are, how they can play well together, what the best use cases or the best workloads are to deploy on each, and how we can make this all work in the end. And hopefully we're going to see some good demos coming out of it at the end as well. So let's try to save the questions for the end, but if there's anything that can't wait, just raise your hand and I'll call on you.

So what's the context for Kubo, and what's the context for Cloud Foundry? I've been using Cloud Foundry for quite a while, been on this journey since the VMware days. And once you've been there for a while, and once you look at what Cloud Foundry became over all those years, and all the customer adoption and use cases we have, you realize companies have many, many different ways to package and run their workloads in the cloud today. If you're just looking at greenfield applications, you see most customers building microservice workloads, and Spring Boot is definitely a clear standard, a de facto standard in the market for those workloads. But people also have their own containers. A lot of times those are coming from ISVs. We've seen people actually shipping their software as tarballs and zips, and sometimes just FTP servers with the source code you compile. A lot of those third-party software vendors are now just shipping a Docker image.
And we talk to customers and they're like, what am I supposed to do with this? And those are not single, very simple images that you can just push. A lot of times they require stateful storage. They're clustered workloads that expose specific ports, and they use those specific ports to make the cluster awareness work. Customers also have their old batch-based workloads. They have data services that are going to be stateful and clustered most of the time. They do have monolithic applications, right? We've all been there. And monolithic applications, a lot of times, are being replaced by new apps, but other times they're stable enough that they're just going to stay there. Some customers just want to lift and shift and keep them in the cloud for some reason, until they refactor them into maybe brand-new Spring Boot applications. And we now have customers talking about event-driven functions: how they can trigger these functions based on specific events, and how to support architectures like what AWS Lambda and Google Cloud Functions, for example, are doing.

If you look at all those workloads, you see the large number of abstractions that are available in the cloud today. Of course, we have infrastructure as a service underneath it all; somebody can just get a VM, install everything, and everything will work fine. But we have different abstractions that sit on top of infrastructure as a service anyway. We have containers as a service, or CaaS. We have application platforms, which used to be called PaaS. And now with serverless, we have serverless functions, or functions as a service. So where am I supposed to deploy this, and how do I choose between them? Let's start by taking a very clear look at what those abstractions are, right? Starting at the very left, we have containers, or containers as a service. That's usually a container orchestrator platform.
And basically the main abstraction, the main unit of deployment, is the container itself. The developer provides a container, and all the tooling or the platform does is schedule that container and provide primitives for network routing, logs, metrics, and the additional common services you need. It just provides primitives; there's a lot of do-it-yourself here. We'll get to the examples, right?

Then there are application platforms. This is the next level up. Instead of worrying about building a container yourself, and everything that goes inside a container (what the dependencies are, how to build it, how to keep it updated, how to update a component inside it when there's a security patch to be applied; the component that goes inside the container is your responsibility), you can just give the platform an application if you don't want to handle all that. You just push the source or the binary, and the platform takes it from there. It will build a container for you, it will schedule that container to run, and it will provide full-fledged services that support that container. That's very clearly the Cloud Foundry experience we have today, right? Just give it an application, and it makes everything else work.

And then we have serverless functions, an even smaller grain, where you provide just an application function, a part of the application, and specific rules for when it should be triggered and what you want to do with that function, or the context around that function, once it's triggered. We'll leave that for later and focus this talk on the first two abstractions. Of course, all of this runs on top of an infrastructure as a service, IaaS, which is the lower-level abstraction underneath it all.
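To make those first two abstractions concrete, here is a hedged sketch of the two developer experiences. App names, the registry URL, and file paths are illustrative, not from the talk:

```shell
# Application platform (Cloud Foundry): hand over source or a binary;
# the platform builds the container, schedules it, and keeps it patched.
cf push my-app -p target/my-app.jar

# Container orchestrator (Kubernetes): you build and publish the
# container yourself; the platform only schedules it.
docker build -t registry.example.com/my-app:1.0 .
docker push registry.example.com/my-app:1.0
kubectl run my-app --image=registry.example.com/my-app:1.0
```

The difference in what the developer hands over (an app versus a finished image) is exactly the trade-off discussed next.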
You're also going to notice that as you go more towards the right, you usually lower the complexity for development, because the developer is responsible for a smaller grain. If you take just the first two to compare: you're responsible for building the container yourself, and everything that goes around it, if you want to deploy it to a container orchestrator. If you're targeting an application platform, you're only worried about the application. So it's less complex, and you actually get higher efficiency. It's easier to enforce standards, because the platform is building the container and enforcing those standards for you, and taking care of updating the containers if needed. But if you need higher flexibility, or have a higher desire for customization, you might want to do some of those pieces yourself. Specifically, if you're getting a pre-built or pre-packaged application from an ISV, or from someone who already built a container, and you just want to run that container, maybe that problem is already solved and all you want to do is schedule that container to run. And if you want to put everything on an application platform, we'll see that since it takes care of building the container and doing all the work with networking and everything for you, it might be a little more constrained in the workloads you can put there. It's not going to be 100% of the workloads. So there are trade-offs to consider here.

A different way to see this is as concentric circles. At the very bottom you have hardware, and you can deploy anything to hardware: if you want to go bare metal, I can deploy any workload there. Go up a level, and maybe I can fit 90% of the workloads on infrastructure as a service by provisioning a VM. And then maybe 70% of the workloads you can run on infrastructure as a service you can fit in a container.
So basically, the restrictions on what you can run grow as you go towards the top; you can fit fewer workloads as you move up. But as you move up, you lower the development complexity, it's easier to enforce standards, and you get higher operational efficiency as well, because the platform operates more, or automates more. And how does that work when you look at a company's portfolio, an enterprise portfolio, with all the applications and all the workloads they have to run in the cloud? Well, if you want to leverage the best tool for the job, your strategy might be to push as many workloads as possible, or as technically feasible, to the top of this hierarchy. For the ones that fit up there, you get the most efficiency, and whatever doesn't fit, you deploy to the abstraction immediately below it.

All of this is just to say that these are different abstractions, specifically a container as a service and an application platform. We're going to see that this maps directly to Kubernetes and Cloud Foundry. And no single tool is going to fit all the workloads. They're just different abstractions, and most companies will actually need more than one abstraction for their workloads. I think it's our job, as the people helping implement this, to help them choose the right tool for the job. Right? So let me hand it over to Megan, who's going to talk a little bit about the operational challenges of any platform they might choose.

Yeah, so no matter what platform you choose to run a specific workload on, there are operational challenges with those platforms. And we found that a lot of the challenges are similar across platforms. We break them up into two categories. First is day one, which is deploying the platform itself. And one of the common challenges we have there is multi-cloud.
So you'd want to be able to run on any cloud environment; you might have specific ties to one cloud versus another. And having a consistent setup across clouds is important too, so you could have multiple clouds running the same platform and performing in the same way. Open APIs are an important challenge as well: if you want to automate how the platform deploys, for example in a CD pipeline, you need open APIs to be able to do that. And then setup time. Of course, we'd like to set up fast, not in weeks, but maybe hours.

Day two is operating the platform once it's already deployed. So if you need to make updates to the components of the platform, like patches or upgrades, or if you need to scale it to meet changes in demand, you'd like those things to happen automatically and with minimal manual intervention. We think that if you have to manually intervene a lot, you're reducing the benefits you're getting from running on a platform in the first place.

I usually say that too. We do have a lot of cloud platforms that people need to leverage, and a lot of scripting and runbooks to install, to update, to upgrade, to do any kind of operation. I think it's not only that it's going to reduce operational efficiency. We have talked to customers who have teams of 40 to 60 people to operate a small platform, and that doesn't change the status quo. But more important than that, any time you have manual intervention, you're adding possible errors, and you're adding lag in updates.
Every time you need to stop for a week to apply a patch, to install a security update, or maybe to upgrade something that's released every three months, you're not only adding complexity and time; you're also a week behind on a version, and we've seen recent security issues everywhere around our industry. There's also a huge security problem here. That's true.

Let's talk about the specific platforms that are the title of our talk. Cloud Foundry Application Runtime: if you're someone who's saying, "run this app for me, I don't care how," this is a good place to run a workload like that. 12-factor apps, like Fred mentioned, are really good to run here. The platform itself will build the containers, manage them for you, and make sure they're always up to date. Logging, monitoring, metrics, tracing: these things come preconfigured, so you don't have to do any additional configuration. You can get services on demand; my team at Google actually built the GCP service broker, so you could provision a Spanner instance and bind it to your application very easily. It has fully automated ops, and the way it does that is by using BOSH. BOSH is an open source tool for release engineering; it does deployment and management of distributed systems. It takes care of a lot of the challenges we talked about on the previous slides: scaling, upgrades, things like that. Another thing I wanted to mention is that it does health monitoring of the servers and processes of the distributed system. So if one of the components of your platform dies, it will bring it back and make sure that you have the correct state.

I think we can arguably say that BOSH is the platform here. BOSH is the one that stands up everything, watches that everything you're deploying is healthy, and makes sure that when you do an update, you have no downtime; you can patch things on the fly and everything works smoothly, right?
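To make the BOSH side concrete, here is a minimal sketch of a BOSH v2 deployment manifest. The release, job, and VM type names are hypothetical; a real manifest depends entirely on the releases you deploy:

```yaml
# Illustrative BOSH deployment manifest (names are placeholders)
name: my-distributed-system

releases:
- name: my-release
  version: latest

stemcells:
- alias: default
  os: ubuntu-trusty
  version: latest

instance_groups:
- name: worker
  instances: 3              # BOSH keeps exactly 3 healthy instances; the
  azs: [z1, z2]             # health monitor resurrects any that die
  jobs:
  - name: worker-job
    release: my-release
  vm_type: default
  stemcell: default
  networks:
  - name: default

update:                     # rolling updates: one canary first, then the
  canaries: 1               # rest one at a time, so there's no downtime
  max_in_flight: 1
  canary_watch_time: 30000-300000
  update_watch_time: 30000-300000
```

The `update` block is what gives you the no-downtime patching just described: BOSH updates a canary instance, waits for it to report healthy, and only then rolls through the rest.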
That's the operational control plane. And the platform in this case, the distributed system, does similar things for the applications you're running on it. Exactly.

Kubernetes: if you're saying something like, "run this containerized application, and let me tell you how," this is a good platform for you to use, a container orchestrator. It's ideal for packaged applications; like Fred mentioned, ISV-packaged apps run really well and easily here. Or things that use persistent storage, like MongoDB, and we're going to show an example later with Elasticsearch. And things that need customization. So if you need to specify how your app is deployed and operated, let's say you have a specific networking need, or maybe you need to expose multiple ports, that's something you can do on Kubernetes very easily.

Kubernetes has a lot of the same challenges we talked about before. Scaling, and then health checks and healing: it will do that for the applications you're running there. It will make sure you have the correct number of pods, as they call them, and it'll do health checks on those, but it won't do it for the nodes themselves. So if one of your worker nodes dies, it won't recreate it for you. High availability is important too, and there's no high availability of the Kubernetes API out of the box. And upgrades, of course; we need upgrades easily and quickly, for security reasons.

Correct me if I'm wrong, but Kubernetes was first created inside Google, and it was created like that because it was relying on Borg to do all the high availability of the masters and the etcds, and to do rolling upgrades and make sure this all works smoothly, operationally. So if you're deploying it by yourself, you actually need a layer to do that. Right? And that's perfect for the next slide. That's why we started working on Project Kubo, which is a BOSH release for Kubernetes.
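Going back to the pod-level healing mentioned a moment ago: here is a hedged sketch of how you'd ask Kubernetes for a replica count and a health check. The app name, image, and `/healthz` endpoint are assumptions for illustration, and the API version should match whatever your cluster supports:

```yaml
apiVersion: apps/v1          # use the apps API version your cluster supports
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                # Kubernetes keeps 3 pods running; if one dies,
  selector:                  # it is rescheduled. Dead worker NODES are not
    matchLabels:             # recreated -- that's the gap BOSH fills.
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-app:1.0
        ports:
        - containerPort: 8080
        livenessProbe:       # pod-level health check
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 15
```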
Kubo provides a uniform way to deploy and manage highly available Kubernetes clusters, and it works on any cloud, since BOSH works on any cloud. Basically, it just uses BOSH to do all these things: to deploy the cluster and to manage it once it's deployed. This is a project that my team at Google and Pivotal started working on in December, and then it was donated to the Cloud Foundry Foundation in June at the last CF Summit. And as of this morning it became Cloud Foundry Container Runtime, right? I knew I was going to forget to say that. Cloud Foundry Container Runtime. So it's now Cloud Foundry Container Runtime and Cloud Foundry Application Runtime. But Kubo is easier to say; it's shorter.

So it becomes really simple to understand: with application platforms, all development needs to provide is an application. Specifically, if you're targeting workloads that will run well on the Cloud Foundry Application Runtime, as we would now call it, just provide the source code or the binaries, and the platform will build a container for you and manage that container for you, based on the buildpacks. It's a great model: a developer can push any workload that works on one of the provided buildpacks, and from an operational point of view, the operator can also constrain what workloads or stacks the platform's buildpacks support; those are the kinds of stacks and workloads it supports. But if you want to go beyond that and push your own containers (bring your own containers to the cloud and schedule them; I'll make sure those are safe and secure; I'm going to patch them and build those containers, as ISVs are doing), or if you need to specify anything more specific, you can just use Kubernetes, backed by the same operational control plane, which is BOSH, now deploying Kubo, or Cloud Foundry Container Runtime. Right?
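For a sense of what deploying Kubernetes through BOSH looks like in practice, here is a rough sketch based on the open source kubo-deployment repository. Script names, manifest paths, and the director alias vary by release version, so treat every name below as an assumption to check against the project's docs:

```shell
# Hypothetical sketch; exact paths differ across kubo-deployment versions
git clone https://github.com/cloudfoundry-incubator/kubo-deployment.git
cd kubo-deployment

# Deploy a Kubernetes cluster as a regular BOSH deployment:
# masters, etcds, and workers become BOSH-managed instance groups
bosh -e my-director -d cfcr deploy manifests/cfcr.yml

# From here BOSH health-checks, resurrects, and rolling-upgrades the
# cluster VMs like any other deployment
bosh -e my-director -d cfcr instances
```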
And what we did at Pivotal: we looked and said, well, if you really want to provide an enterprise-class product based on this container runtime to our customers, there are a few specific components missing, right? If you want to use a container scheduler, you probably want a container registry that you can use as a private registry, with role-based access control on the images there, and security scans on all those images. That's something that's missing. The other thing we mentioned is that a container orchestrator provides primitives for how to do a few things. One of those primitives, for example, is networking. If you've used Kubernetes before, you know that the Kubernetes networking model is super simple: a flat network, everything accesses everything. And if you want to do multi-tenancy, if you want to do role-based access control for networking, if you want to do network micro-segmentation, if you want to provision load balancers on the fly, you actually need a network overlay that does that. So you need software-defined networking on top of that.
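As a hedged illustration of the micro-segmentation the flat Kubernetes network doesn't give you by default, here is a standard NetworkPolicy sketch. It only takes effect if a network plugin that enforces policies (an overlay such as NSX-T, or something like Calico) is installed; the namespace and labels are made up for the example:

```yaml
# Only pods labeled app=frontend in namespace tenant-a may reach
# the api pods on port 8080; all other ingress is dropped.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: only-frontend-to-api
  namespace: tenant-a
spec:
  podSelector:
    matchLabels:
      app: api
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```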
So we partnered with VMware, who were very interested in this project and worked with us from an engineering and product management point of view to provide some of those components. And that's what we've been calling PKS, or Pivotal Container Service, which is basically Project Kubo for enterprises, with: a container registry called Harbor, an open source project from VMware; NSX-T software-defined networking from VMware, which overlays on top of the Kubernetes networking; the GCP service broker, so it can take that Google Container Engine experience (if you leverage, for example, the Google Machine Learning APIs) and bring it to you anywhere you go, even outside of GCP; and a control plane. That control plane is extremely important, because we figured out that there are some customers who really want to have one cluster, sliced up, that they give to different tenants as they wish, as other Kubernetes distributions already do. But there are some customers who are actually interested in having many different Kubernetes clusters: maybe a different cluster per tenant, maybe a different cluster per environment, maybe a different cluster for a specific set of applications that behave similarly. So with this you have a service broker API where you can say: hey, PKS, give me a Kubo cluster, give me a Kubernetes cluster, and I need, let's say, two masters, two etcds, four nodes; give me a cluster, and I need to bind it to this specific availability zone. So with this controller you can provision, deprovision, and scale clusters as you wish, and it provides a nice API that you can call from the Elastic Runtime (or Application Runtime, now) service marketplace, so it just becomes another service you register there. Or you can call it from a command line if you're deploying PKS yourself, or you can call it from a continuous delivery pipeline and make it a continuously delivered Kubernetes cluster, with an upgrade and
update experience, right? So this is what an enterprise version of Kubo looks like; we'd be calling it PKS. The other interesting thing that comes out of the Google partnership: we understand that for customers, running the latest and greatest Kubernetes version is actually a big deal. Kubernetes releases very frequently, at least every three months, maybe less. So making sure you're always running the latest and greatest is not only about updating continuously, not only about making sure you do rolling upgrades without downtime, not only about making sure you have security patches and updates available as soon as they're out. It also means that we have Google, which contributes most of the engineering of Kubernetes, saying "this version is well hardened and ready to go for enterprise customers on GKE," and at the exact same time we're going to roll that out to PKS. So we're constantly compatible on Kubernetes versions: what runs in one place is going to run in the other.

Okay, so we talked about Kubo and we talked about Kubernetes; how would those things work together? That's a big question. There are a few very interesting integration points here. The first thing I mentioned is that we have the controller. The controller allows you to provision and operate Kubernetes clusters, either from a command line or from the CF marketplace, because it implements the Open Service Broker API. That's a big deal: you can provision your own Kubernetes clusters, with different plans, from your marketplace, if you have the Elastic Runtime. The second important thing to mention is that you can have integrated networking and routing. There's a specific component in the Kubo project called CF route sync (okay, got it right) so that when you expose a load balancer in Kubernetes, if you tag that service with a specific label, it creates a route on the Go Router in CF. So the exact same way you push an app to CF and get a URL like myapp.myapps.mycompany.com, for example, you
can get the same by pushing your Docker image to Kubernetes. And that's something you don't get from Kubernetes out of the box; there you expose a load balancer and get an external IP, but now you can actually get a route in CF, which is really cool. I was going to mention too: if you're already used to the typical Kubernetes routing features, those also work with Kubo-deployed clusters.

So what's a typical usage scenario, a typical use case? That's what we're going to deploy as a demo: both platforms working together. A single BOSH layer, and that BOSH layer is the multi-cloud operational control plane, making sure those workloads are running, health-checking everything, upgrading everything. You can have two BOSH deployments here, for example: one is the Elastic Runtime (or Application Runtime) release, and the other one is the Kubernetes release, Kubo, deployed by BOSH. And those are all going to be managed by BOSH. Then you can have, for example, your Spring Boot applications (in this case we're going to be leveraging Spring Cloud Data Flow, which is built on top of Spring Boot) running in your Elastic Runtime, or Application Runtime. If you're not used to Spring Cloud Data Flow, it's built for integration pipelines, and we're going to show it soon. Usually, in an integration pipeline, you read data from somewhere, you do a few steps in the pipeline (parsing data, splitting that data, enriching some content, doing some transformation), and then you land the data somewhere, you write it somewhere. A very common use case is to write to an Elasticsearch cluster, making that data searchable. It turns out that Elasticsearch needs multiple ports to be exposed. It turns out that Elasticsearch is cluster-aware. It turns out that most of the time you want stateful storage. So it's something that really doesn't fit well in the Elastic Runtime. So why don't we have our Spring Boot applications in the
Elastic Runtime, and then we can have the Elasticsearch cluster running in Kubernetes, and both things work together? That's what we're going to set up now. Let's go.

So, first thing: we go to the Kubernetes cluster (we only have time to do it here) and we're basically going to deploy a new container running an Elasticsearch image that we have. Let's do a kubectl and see if we have the pods running. Oh, that was fast. We also have a Maven server running there. Our Elasticsearch is running; let's create the indexes. It's still coming up, a few seconds, let's hit it again. Okay, there we go. This script basically does a curl that goes to Elasticsearch and creates the index we're going to use. If we take a look, it's a curl that just describes what the data is before we can insert it. Then we go to Kibana and make it aware of the index we just created, because we're going to see a visualization of the data we're inserting, live, and we're going to import the dashboard that we created, just to make it pretty. What we're basically going to do is get a data set with the earthquakes that happened around the world in the last month, so hopefully we'll see some really nice hot spots on the map. We're getting this from the ANSS, who actually provide the data in the US; it's a file. So let's go to Spring Cloud Data Flow, deploy that pipeline, and see the Spring Boot microservices doing it. Let's also open the file and take a look, just to see we're not faking anything. It's just a bunch of data: CSV-style records, but separated by a star, one record per line. So what we're doing in Spring Cloud Data Flow is saying: hey, I'm going to deploy this pipeline. And let's open it to see what it is. It means: every 3,000 milliseconds, so every 3 seconds, create an event. Out of that event, we're basically going to do an HTTP request to gather that file. Next step, we're going to split it by star; that means we're going to split the lines, because they
have a star instead of a line break character. And then we're going to persist to Elasticsearch, and this custom Spring Boot application that we created is basically going to persist to that route that was synchronized with CF. Once you deploy it, we now see four apps starting up, and before they're running, they look like trash; it's normal. So let's click again and refresh. Instead of one app, we should now see a few running. Each step of the pipeline is just a microservice deployed on CF. If you click on the services tab, each of those is bound to a RabbitMQ server. So the first step creates a timestamp and posts that to RabbitMQ. The second application takes that input from RabbitMQ, makes an HTTP request, and posts the result back to RabbitMQ. The third step takes that, splits on the stars, and puts it back on RabbitMQ. The last one takes it and persists it. So it's a pipeline built out of Spring Boot microservices, backed by RabbitMQ; it could be backed by Kafka too.

It seems to be running now. Let's open that earthquake map that had no results. Data is starting to come in; let's wait a few seconds. There's actually a ton of data, it's too quick. Let's refresh it a bit. More data is coming. And if you go directly to Elasticsearch, you can actually do a count. Do a refresh there (there's an extra dash) and then you can do a count. We can see we now have a little over 4,000 objects, good. Do it again a few seconds later and we'll see 6,000. It's counting, and it's basically going to keep pulling data.

Pretty simple to set up. Basically: deploy Elasticsearch to Kubo, or to Kubernetes, and make sure it's working. You're already managing that image yourself, so keep that image running. Deploy an application to the Cloud Foundry runtime. You can either use the external IP you exposed on that service on Kubernetes, or just use the route that was created in CF automatically, if you integrated the routing. Use that route; it uses the same router, goes there directly, and
persists the data. I think we might have five minutes for questions from the audience. Do we have a microphone to hand around? Cool, thank you. Question right here. Checking that this works for you.

Hi, does it work with CF networking? Can you use private Cloud Foundry apps without any routes?

Not sure I understand the question; can you say it again?

Does Kubernetes work together with private apps, for example?

So you want to isolate the networks, is that what you're saying?

Do you have to expose them with a route or something on Kubernetes too?

If you want to use the Cloud Foundry routing, you have to set it up with our route-sync job, and then you have to tell it... well, we just use a label right now. You just label it and say, I want a route named this, and then it will automatically add that to the Cloud Foundry routers, both the Go Router and the TCP router. So there's a job watching Kubernetes. You deploy something and you get private, internal IPs; that's what you're saying, right? Then you'd go to kubectl and do an expose deployment, and you'd get an external IP, a load balancer. Once you do that, if you have the Cloud Foundry route sync and you label your service correctly, it's going to take a route and synchronize it with the Go Router, so you have the Go Router forwarding to that external IP, right? Yeah, and actually you don't have to have an external IP; it also works with the internal IP of the worker node, and the route sync integrates both. You absolutely don't need to do it; it's just if you want to use the name of the route.

Can I get a clarification: is that the only way that a Cloud Foundry Elastic Runtime app can connect to a Kubernetes app?
Absolutely not. You can expose them using external load balancers on Kubernetes services and then contact them using that IP address. You'll have an external IP address for that; you need a dynamic load balancer set up. That's something NSX-T can provide, or if you're running on GCP, you can use the GCP external load balancer.

What's the forecast for runC and volume manager plugins? Basically, with Container Runtime... I mean, runC is already able to run those images, and there was some work with volume manager plugins to be able to have stateful block devices inside the container. So does it mean that Container Runtime will pull in the work that has been done on runC and volume manager plugins?

I don't know. The only volumes that we support right now are the native cloud ones for each cloud that you run on, so I think we support GCP persistent disks and vSphere persistent disks. I'm not sure about that specific one; maybe there's no point in asking the question right now. This is currently being developed, so we expect to have a GA product by December, late in the year, and I'm happy to give any updates on that.

Actually, can I ask a question on that? You mentioned a persistence interface when you went off on your proprietary product tangent. Does that mean we're going to have second-class persistence features in the open source, and all the good stuff will be in the proprietary product? Or what was the persistence interface you referred to?

I think we can take this directly to the Kubo project managers, but the idea is actually to have a fully working open source Kubernetes managed by BOSH as part of the open source, with a lot of the open source goodness plugged into that as Kubo. What we're doing as part of PKS, like you said, is basically adding NSX-T for now, and then a container registry, and everything else will likely go to Kubo. But we can ask Colin, the product manager, directly. We probably
have time for one more.

So, all of this integration between Cloud Foundry and Kubernetes kind of comes from the bottom up: Kubo, with BOSH managing both of them. That's clearly a topic for service providers. But as a customer, on the developer side of all of this, when I heard today about the Elastic Runtime and the Container Runtime both coming out of Cloud Foundry, my first thought was that maybe this means that in the future the cf CLI and the kubectl CLI will kind of merge, and I will have both capabilities at my fingertips in a single place. And I like the orgs-and-spaces concept; give me that on top of Kubernetes when I do a Docker image deployment, as opposed to doing buildpack deployments. Is this where this could be going in the future?

I think those are two questions in one, so let me answer the first part. The idea right now is to keep using the kubectl CLI as the developer interface to talk to Kubernetes. That's what most of the Kubernetes community endorses, that's what they want, that's what works best for most companies to talk to, and it's just operational, it just works. A lot of people already have that scripted into their continuous delivery pipelines. Talking more about integration, like "I want to have this in my spaces and orgs in CF, I like how multi-tenancy is done there": that's where the service marketplace comes in really handy, because you can say only the developers in this org and this space have access to this plan of the service and can provision this cluster. And once this cluster is provisioned, if it's bound to this specific space and org, it only has visibility within that space and org, so it becomes a multi-tenancy division by itself. You can also do that with namespaces in Kubernetes, if you prefer to do it that way. It gives a lot of flexibility and possibilities. As we said, it's a project and a product under development; I think we're still figuring out a lot of those things, and we're taking customer feedback to see what's
the best way going forward. I think we have to leave the room, because there's another talk in here. All right, thank you very much, everyone. Appreciate it.