Hello, everyone. My name is Ten, and today I'm going to talk to you about installing CF Application Runtime on CF Container Runtime, which also means CF on Kubernetes; back then, that naming convention wasn't there yet. Before I begin, please allow me to introduce myself. I am a software engineer at the Dell EMC Dojo, and we sit under a big umbrella called the Technology Research and Innovation Group. In 2015, I attended the San Francisco Dojo, and I have been a Cloud Foundry contributor since then. Over the course of three years, my team and I have been working on the persistence capability in Cloud Foundry and the bare metal CPI. We also did some experimental projects, like blockchain as a service on Cloud Foundry, among other things. For the agenda today, I'm going to go over some terminology in case you don't know it. Then I'll introduce the Kubernetes CPI from SAP, and then we'll look at some of the problems we are trying to solve. Finally, some architecture, a demo, and future work, with some time for questions at the end. So, how many of you here are familiar with these terms: BOSH, BOSH CPI, Kubernetes, and applications? Okay, nice. And how many of you are confident that you can do all sorts of magic with them? Okay. So that's why I'm going to try my best to explain how these technologies relate to each other and can be used together. So what are BOSH and the BOSH CPI? Basically, let's say I'm a developer at my company and I want to use the cloud. I go to some cloud providers, like VMware, Virtustream, or Google Cloud, and they give me an IaaS (infrastructure as a service) interface to create or delete VMs for my computational purposes. This is fun when I have one or two VMs to spin up. But say I have 20 or 30 VMs; that's a pain. That's why I need BOSH in between.
So BOSH is basically just a VM with an interface called a CPI that talks to my cloud's interface, and it then exposes an interface that I talk to in order to deploy things. This is especially useful because, say I have two different IaaSes: using a different BOSH director, I can just move the deployment manifest to the new environment and spin up the entire cluster of 20 or 30 VMs again. Now let's talk about what the container runtime, or Kubernetes, is. To me, it's just a platform for managing container workloads. The reasoning is this: if I have one VM, I can run one or two containers myself for fun. But say I have an entire data center and I want to spin up hundreds of containers; I need to distribute those containers across the data center. That's why I need Kubernetes. Kubernetes will deploy workers on bare metal or on infrastructure as a service for me and expose an API through the master, and I just need to talk to that master to schedule my container workloads. That's what Kubernetes is good for. To put this in terms of BOSH and Kubernetes: BOSH is the VM management for the Kubernetes cluster. Simple. So let's move on to the third definition, which is CF Application Runtime. Very similar to Kubernetes, CF Application Runtime has something called the Diego cluster, which is roughly equivalent to Kubernetes in terms of spinning up and orchestrating containers. But CF adds some flavor to it: it has other CF components that build the container for you automatically, so you just need to worry about your code. Putting this in terms of BOSH and CF Application Runtime: I have a BOSH director managing the VMs for my CF deployment. That's it. So let's recap a little bit. With CF Container Runtime, we have a Kubernetes cluster. You have your code, you build your Docker container manually, and then you push it to the Kubernetes cluster to run.
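To make the portability point concrete, a BOSH deployment manifest looks roughly like this. This is a minimal sketch with hypothetical release, job, and network names, but the key observation holds: nothing in it mentions the underlying IaaS. That knowledge lives in the director's CPI and cloud-config, which is why the same manifest can be redeployed against a different cloud.

```yaml
# Minimal BOSH v2 deployment manifest sketch (all names are illustrative).
name: my-deployment

releases:
- name: my-release        # hypothetical BOSH release
  version: latest

stemcells:
- alias: default
  os: ubuntu-xenial
  version: latest

instance_groups:
- name: web
  instances: 3            # BOSH keeps three instances of this job alive
  azs: [z1]               # availability zones defined in the cloud-config
  vm_type: default        # resolved against the director's cloud-config
  stemcell: default
  networks:
  - name: default
  jobs:
  - name: web-server
    release: my-release

update:
  canaries: 1
  max_in_flight: 2
  canary_watch_time: 30000-60000
  update_watch_time: 30000-60000
```

Handing this same file to a director backed by a different CPI (GCP, vSphere, or the SAP Kubernetes CPI) redeploys the whole cluster; only the director's cloud-config changes per environment.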
Then for CF Application Runtime, you have the Diego cluster. You have your code, you push it to CF, CF builds a Garden container for you, and finally it goes to the Diego cluster. So the architectures are very similar. Now let me tell you the story of why I want to put CF on Kubernetes. Once upon a time, my product manager came to me and said: hey, Tim, the team from SAP developed something called a Kubernetes CPI that allows you to deploy a BOSH director on Kubernetes. This is extremely helpful, right? Because now I can run the director as a container on Kubernetes, and using that BOSH director I can control and spin up containers in the Kubernetes cluster. So instead of using kubectl commands, I can just use the BOSH director. We thought it was very cool, and we thought it could actually solve some of the problems we were facing in our data center. So what problems are we trying to solve? The first one is deployment time. Because CF typically consists of about 20 to 30 VMs, spinning up all those VMs usually takes a lot of time. A container is usually faster to start than a VM because you don't have to boot a whole guest OS and kernel again; you share the OS and the kernel with the host system. That's why containers beat VMs on deployment time, and it can save us a lot of time deploying the CF cluster. The second reason I want to run CF on Kubernetes is resource usage. For the same reason, containers use less memory than VMs, because they don't spin up a whole guest OS and kernel. A guest OS takes roughly 500 megabytes of RAM, so you save approximately that amount per instance. And a typical CF deployment, as I've heard, uses about 30 to 40 gigabytes of RAM.
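The two workflows just described can be sketched side by side. On the Container Runtime path you build the image yourself and describe it in a Kubernetes manifest like the one below (the image name and labels are hypothetical); on the Application Runtime path, the equivalent of all of this is generated for you when you run `cf push`.

```yaml
# CF Container Runtime path: you build the image and describe it yourself.
# (Image name, labels, and port are illustrative.)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-app:1.0  # built manually with docker build + docker push
        ports:
        - containerPort: 8080
```

On the Application Runtime side, this whole file collapses to `cf push my-app`: a buildpack builds the Garden container from your code and Diego schedules it.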
So you can reduce that number significantly for your CF deployment. The third reason is adaptability. If Kubernetes is the middle layer in this equation, then I can move CF to any environment Kubernetes runs on. And as you know, Kubernetes can be deployed on bare metal, on OpenStack, and in many other environments, so I can bring CF anywhere Kubernetes goes. Those are the three reasons I want to put CF on Kubernetes. So let's come to the architecture slide. No more jokes. Just kidding. At first, the picture is pretty simple, right? I deploy a BOSH director in my IaaS, which is GCP. Then I deploy the Kubernetes cluster, and then another BOSH director on the Kubernetes cluster to deploy the CF components. It looks pretty simple. However, we actually ran into several issues, and this is the first one. With this architecture, we have the Garden container inside the Diego cell, which is itself inside a Docker container, and we ran into some humongous errors, file system errors and the like. So, first lesson learned: don't put the Diego cell inside a Docker container. It's a pain. How can we solve this? We thought: how about we separate the Diego cell cluster out to sit alongside the Kubernetes cluster, and have the CF API components talk to that Diego cell cluster? And after we deployed it, it actually worked. So this is the second lesson learned: you can actually separate the Diego cells from the Diego brain inside CF, and with some networking work you can make it function, because Consul will propagate the events through the network. So here's the demo for that. I have a Kubernetes environment in GCP, and I'm going to use a kubectl command to get all the pods. As you see here, the machine names are bosh-vm-something.
These were deployed by the Kubernetes CPI from SAP. Now I'm using the bosh command to target the same Kubernetes cluster and list all the VMs. It says it's listing VMs here, but they are actually containers. Let me pause the video here for a moment. You can see that the VM CIDs actually match the pod names in the Kubernetes cluster. And notice we don't have any Diego cells in here, because we separated them out into the GCP environment. So now I'm using the bosh command to target the GCP environment, and there are in fact two Diego cells. Even though the networks are different, the two sides can communicate with each other through a logical route; the GCP environment actually handled this for us, so we didn't have to worry much about it. Now I'm going to target the GCP environment and resolve the DNS IP for the app. You can see that the DNS IP matches that of the router in the Kubernetes cluster, so traffic routes to the router in the Kubernetes cluster and then back to the Diego cells. Now I'm just going to do the classic CF thing, which is pushing an app. I type cf push and fast-forward the video a little. Now I have a running app, I open the UI for the application, and finally I just curl the app to see what it displays. So that's the deployment we did on Kubernetes. Now, some future work. We developed the bare metal CPI, right? And the GCP environment is still sitting on bare metal somewhere. So why do we need this abstraction layer in between? How about using our own data center to deploy Kubernetes on bare metal? That's why we're thinking about eliminating that layer.
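The demo steps above, reconstructed as an annotated session sketch. The pod, director alias, and app names here are illustrative placeholders, not a verbatim capture of the recording.

```console
# Pods backing the BOSH-on-Kubernetes deployment
$ kubectl get pods
NAME              READY   STATUS    AGE
bosh-vm-3f2a...   1/1     Running   2h

# The director on Kubernetes reports the same IDs as "VMs",
# though they are actually containers (VM CID matches the pod name)
$ bosh -e k8s-director vms

# The Diego cells live in the separate GCP environment,
# reached through a logical route between the two networks
$ bosh -e gcp-director vms

# DNS for the pushed app resolves to the router in the Kubernetes cluster,
# which forwards traffic back to the Diego cells
$ dig +short my-app.example.com

# The classic CF workflow still works end to end
$ cf push my-app
$ curl http://my-app.example.com
```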
And you can actually use BOSH with the bare metal CPI to deploy these components inside your own data center. That's the future work we're aiming at. To put it in an animated perspective: I would have a BOSH director interacting with my entire rack, and I could distribute the Kubernetes cluster and the Diego cluster across my data center. This is very powerful, because the Diego cluster can take care of my application workloads, the Kubernetes cluster can take care of my container workloads, and I can scale the clusters up and down very easily by just adding hardware and calling it a day. Even more futuristic, we could eliminate the Diego cell cluster entirely. When we looked into the Diego cell code, we discovered that it actually uses runC to run the Garden container. So you could hook runC into the Kubernetes environment running Docker and spin up the CF app instances inside the Kubernetes cluster yourself. The ultimate goal is to run on bare metal using Kubernetes only. So that was the talk. Any questions? Was it too fast? Was it too slow? I don't remember the specific error, but it was about the file system. We actually talked to the SAP folks at the time, and they said they had the same problem. After we deployed this new model, we gave them the manifest so they could do the same thing. I don't think they were trying to fix the container-in-container problem; they were trying to work around it. No, I haven't explored that part yet. But to your question: because I'm doing some work with vSphere and NSX-T, I think it's totally possible to do it the way you suggested. NSX-T would be the load balancer in the vSphere environment, routing all the logical traffic to the Kubernetes cluster and to your CF environment. Yes. In CF, no, we didn't get that far. We were like, okay, deploy CF. Yes.
Not that I know of, but we have been communicating with the SAP team on this. When I attended their talk, they described something very similar, a similar vision. It seems like a lot of people at this conference are trying to do this right now. Maybe that will become a proposal; I'm not sure, but currently I don't know of anything off the top of my head. Right, Ansible, Puppet, and things like that? I typically like BOSH because of the lifecycle management for your VMs, and because I get a single pane of glass for talking to the data center. I've used Ansible and Puppet before, but I guess I like BOSH more. I think those tools are popular in the community partly because the learning curve is usually lower, since you can just use them against your data center's API. For BOSH the learning curve is a bit higher, but it will save you more time in the end. Thank you.