Hi, everyone. Thanks for coming. Sandy and I are going to talk a little bit about the long journey of Cloud Foundry into the containerized world. It's something both of us have had experience with over the years, and it's an interesting replay of a presentation that Sandy gave here last year, which we'll come back to as we go. This is not going to be as deeply technical as the buildpacks talk that was just before us, which was amazing stuff, but it will tell the story of how we got where we are and where we're going with the containerization of Cloud Foundry. When we first brought the proposal to containerize Cloud Foundry to the wider community on the mailing list, one of the first questions we got from a lot of longtime Cloud Foundry contributors and users was: why are you doing this? Cloud Foundry has a perfectly acceptable way of deploying itself in VMs using BOSH, so why spend the effort? Well, the original answer from the people working on this project was: because that's awesome. Why wouldn't you do this? There's this great technology available now, containers are wicked cool, let's do it. The marketing departments of our respective organizations were also pretty excited, because in the initial stages Docker was a word you would hear a lot; it was Docker, Docker, Docker at every tech talk a few years ago, and now it's Kubernetes, Kubernetes, Kubernetes. So they liked the idea. But that's not a good justification for spending thousands of developer hours on actually making this work. So, seriously, what it is about is that we noticed a trend: container schedulers like Kubernetes are replacing virtualized infrastructure. Our big concern, or my big concern, is that if Cloud Foundry as a community doesn't move with this change, we run the risk of becoming irrelevant, of losing market share for the companies we work for, but also mind share. We also run the risk of people going off and reinventing the wheel, with all the hard work we've done on Cloud Foundry being redone somewhere else. The next question is invariably, what's in it for me, and the very Cloud Foundry-ish version of that question is, how does this affect the users? Well, it has no effect on the users. It should not have any effect on the users, because the heart of the Cloud Foundry contract is: here's my code, run it for me in the cloud, I do not care how. If the developer does not care how, and the developer's experience is the same, that's exactly what we want. For the operators, though, and by operators we mean the sysadmins who look after the foundries, but also the people who plan the deployment and the people who pay for the compute cycles, there are some really good benefits, and it's a very low barrier to entry. It's very easy to install. Running it in pods gives you a much smaller memory footprint, so there are some savings there. Pods recover more rapidly than VMs, so there's increased resiliency over VMs; a BOSH deployment has ways of compensating for this, but it's really nice to have that little bit of extra speed when recovering failing roles. And portability: our distribution can run on just about any Kubernetes, provided it satisfies some minimum requirements. And for some operators, a lot of operators, the BOSH learning curve, which comes up and looks like a cliff, is pretty intimidating.
Especially if they've already made an investment in learning Kubernetes tooling, it's really nice for them to be able to use those tools again and not have to learn a new thing. So most of the BOSH learning curve is avoided for those operators. So where did we start? Troy referenced a presentation that I gave last year, and there were others of its ilk too. We started off with a number of different approaches, and for one initial approach I'm actually going to give the microphone back to Troy. So I was with a team that started at ActiveState and went through Hewlett Packard Enterprise, and we set about containerizing Cloud Foundry because it's cool, but there were a couple of reasons why we didn't do that directly on Kubernetes. Stackato 4 had a proprietary abstraction layer called the Helion control plane, and that was there for a couple of reasons. One was that Kubernetes was not quite as mature then and required a few extra features, which we would layer onto this control plane. But the other, very ambitious goal was that this control plane would be able to run on any container scheduler; at the time we were looking at Apache Mesos and Docker Swarm. We only ever got as far as Kubernetes, and what we found in trying to maintain this was that the development pace of Kubernetes took off so fast, with new features landing so quickly, that the control layer, the fabric layer over top, couldn't keep pace with that development. So it was very difficult to keep going. But out of that whole thing we did get a tool called Fissile, which has served us very well, and we'll talk about that as we go. Right, after that small orchestration snafu. So at least initially, people talk about Kube-native, and originally that isn't where we started. We started off more with Kube-compatible: trying to take existing deployment methods and existing deployment models and say, okay, how can we refit this so that it plays nicely in a Kubernetes environment? We took tools such as BOSH that were not developed in the context of a containerized infrastructure and said, even though these tools aren't natively designed to work with containers, is there something we can do, because this is what we know, to make them work nicely with containers? We ended up with a lot of projects like that, and the Kube CPI project that I led last year is an example, where we basically had to build a shim layer, not just the CPI but other projects as well, where we had to map: here's the world that we know, here's Kubernetes, and we have to translate between them. The paradigm was that because IaaS and virtual machines were what we knew, we would just treat Kubernetes as yet another IaaS, which, as we'll talk about, has a number of shortcomings. So, the Kubernetes CPI projects: there were a couple of these. Yann from SAP had one he was working on, and I led a team at IBM where we were developing another. Again, if you look at the presentations from last year, I did one on this specifically. We started off very optimistic: we all know BOSH, we love BOSH, BOSH does great things for us, we'll just make BOSH work in the Kubernetes world.
A lot of that was the big mapping problem: BOSH understands IaaS, it understands virtual machines and works really well with those, so we needed a mapping layer that says, okay, virtual machines are represented this way in the Kubernetes world. And what does a virtual machine actually mean in the Kubernetes world? We spent a lot of time on that. Is it a deployment plus replica set? Is it a pod? Is it just a container? We weren't wholly in one world or wholly in the other; we ended up with a hybrid approach, and a lot of the problems we encountered, and the reason why we stopped working on the CPI, derived from that approach. I generalized it as split-brain issues. Take the issue of resiliency: Kubernetes does a really nice job with its scheduler. If you've got a replica set, you declare the number of replicas, and if you then go out and delete one, it will instantly spin one back up; it does that really, really nicely. But BOSH also does the same thing: if BOSH thinks you're supposed to have four cloud controllers and one of them drops off the map, BOSH will detect that and spin another one up. So it isn't just a deployment problem, it's also a management problem. Once you've deployed Cloud Foundry, who owns bringing it back up? Early on we'd say, okay, a cloud controller drops off the map and Kubernetes spins one back up, but what if in the meantime BOSH has detected that and tried to bring one back up too? Who, basically, is the source of truth for what the infrastructure, or the deployment, should look like? We ran into multiple variations on this theme, and a lot of them came back to the fundamental problem that BOSH understood VMs, and containers were not first-class entities in the BOSH model. Also, because we were trying to work directly with the Kube APIs, and Kube, while it's certainly enterprise-ready and quite mature, is still evolving, we were frankly having trouble keeping pace with changes to the Kubernetes API. Deployments and replica sets, I believe, are still technically part of the beta API, and there were small changes to them that we had to keep up with, at the same time as we were trying to develop this new CPI layer. It got problematic really quickly, so we said this is probably not the most fruitful approach, and it just wasn't comfortable. Everything we tried came with a level of pain, and pain has to be worth some level of gain, and we really weren't realizing that gain. One of the approaches that was taken was, what if you dumb down Kubernetes? And I'm just leaving these pictures up here because hopefully they're making you squirm. I've got to tell you, it is really hard to find a picture representing an uncomfortable chair and still keep your job; I spent a lot of hours going, no, I can't use that, can't use that, can't use that.
You're welcome, by the way. So, having said that, you can take some of the problems away by dumbing down Kubernetes, by saying, I'm not going to let Kubernetes do any of what Kubernetes does, I'm going to treat it as a dumb IaaS. And I don't mean dumb to sound derogatory or disparaging; I mean it in the sense of lacking a lot of the function that Kubernetes has natively. But then you end up asking, well then, why am I doing this? What's the point of all this? You're spending all that effort to develop new layers to let something deploy into Kubernetes, but you're not letting Kube be Kube. We kept asking: why? So it was uncomfortable, and we weren't getting anything for the discomfort. I had to move it. No, just kidding. All right, so where are we landing today? We're going to tell you about the approaches that are under way now; Troy's going to spend some time on those, and I'm going to give a plug for the next two talks. If you're interested in this material, definitely stick around for them. Bernd from SAP and Simon from IBM are going to lead a panel discussion next, and after that Julz and Julz are going to talk about Project Eirini. Those fit really nicely into this current-and-then-future discussion. Obviously we embrace that Kubernetes is not just an IaaS. Some people talk about it as an IaaS-plus; it's not really a PaaS; fine, it's its own thing, but it has a lot of strengths, so let's work within that paradigm and just accept that it is what it is, it works a certain way, and containers are not VMs, and go from there. We also said: Kubernetes has a lot of adoption out there. Let's face it, it's winning in the virtualized infrastructure world. It has a ton of mind share, and that mind share is not decreasing, it's increasing. It has a big ecosystem of tooling, and the people who are familiar with it know how to deploy things on Kubernetes. So let's just accept that as a given; instead of trying to take the world we know and map it on, let's work within the context of what people know and are comfortable with. Okay, another given: Cloud Foundry follows a standard packaging format. It comes packaged as a set of BOSH releases; that's just the way it's packaged, and people who work on community projects are familiar with this. They know how to do a bosh init, how to create releases, and where things go. Fine, accept that; the community is comfortable with it, so let's work within it. These are things we treat as givens; we can't change them, and we don't try to change them. Which brings us to Helm. The Kubernetes community doesn't have an official package manager, but then again, neither does Linux; there are just a couple of important ones. In the Kubernetes world, the de facto application deployment manager that operates like a package manager is Helm. It's certainly the most commonly used deployment orchestration tool for applications, and it does that fire-and-forget thing: deploy it and let Kubernetes keep its state. It has some magic over and above what a typical kubectl apply would do, and yes, it's like apt or RPM for Kubernetes.
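To make that package-manager analogy concrete, here is a minimal sketch of the kind of workflow being described, using Helm 2-era command syntax (current at the time of this talk); the repository URL, chart, and release names are placeholders rather than anything mentioned on stage:

```bash
# Add a chart repository, roughly analogous to adding an apt or yum repo.
# The repository URL and chart/release names below are placeholders.
helm repo add example https://charts.example.com
helm repo update

# Find and install a chart, roughly analogous to `apt-get install`.
helm search some-app
helm install example/some-app --name my-app --namespace demo \
  --values my-values.yaml

# Later, inspect, upgrade, or remove the release much as a package manager
# would upgrade or remove a package.
helm list
helm upgrade my-app example/some-app --values my-values.yaml
helm delete my-app
```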
Helm is likely to stay important in the community because it has a lot of support from Microsoft now that Microsoft is sponsoring it, and they're building their own ecosystem around the tool, so it's around for the long haul. So, what makes people use tools? All things being equal, most of us will generally use what's most comfortable: what's easy to use, what fits well, what just feels easy. It's not always a set of precise calculations; it just feels right, it just feels comfortable. Why do we use cf push? Because it's the easy button. So we accepted that going in: what's going to get people using this? Fissile, back from way back when we'd done this crazy containerized deployment on a proprietary control plane, is this great tool for taking BOSH releases and turning them into container images and Helm charts. Back at the beginning it was turning them into something slightly different: the container images were the same, but the configuration that came with them was different. Initially it made a Kube config, and now it makes Helm charts. This works at the largest scale, for the most complex thing, which is probably the most complex Helm chart I've seen, the one for Cloud Foundry. But the tool also works for all of the other BOSH releases that have grown up around the Cloud Foundry community. So chances are, if it's a fairly straightforward BOSH release, it can go through Fissile and come out as a containerized application that you can deploy with Helm. It's not pronounced "fizzle", if you've ever heard Chip say it that way; that's not how we say it. Here's the GitHub repo; it will be on the slide at the end as well. The end product is a Helm chart and some container images. The container images get uploaded to a Docker registry, or any OCI image registry, and the Helm charts go to a Helm repository, which is just a tar.gz or a directory structure. A lot of them are hosted on GitHub; ours are hosted on our own servers, in our own Helm repository. In an idealized deployment you would need just two charts, or even one if we packaged them together, to deploy Cloud Foundry. In ours, you deploy UAA first, set some options, and then deploy Cloud Foundry. Typically it's a little more complex than that: in a standard installation there's a copy-paste step where we actually move a secret around between the two. If we packaged them together we could get around that, but it's still pretty easy. And you can put a UI in front of it, as we saw this morning when you provision this on IBM's cloud. But even if you're just using the CLI and configuring the values in the YAML files, it's a very, very easy way to get up and running. So, the inevitable comparison: how does BOSH work, and how does this approach work? BOSH is a single tool that does pretty much everything: it handles the packaging, it handles the deployment, and it also handles some aspects of day-two operations. This approach is a tool chain. We took this approach because, once you've decided that you're going to deploy into Kubernetes and operate in a Kubernetes manner, you have to accept that some things are done for you and that parts of the ecosystem are already there. We're not trying to reinvent an entire BOSH replacement; we're asking what already exists that we can leverage, and how we can put it together to make a nice experience for deploying onto Kube.
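As a rough sketch of that tool chain end to end, assuming SUSE-style UAA and CF charts like the ones described here: the fissile subcommand names follow the project's README from this period, and the repository URL, chart names, namespaces, values file, and secret/key names are illustrative assumptions, not commands quoted from the talk.

```bash
# Packaging: fissile consumes BOSH releases and emits container images plus
# a Helm chart. The role-manifest and flag configuration fissile needs is
# omitted here for brevity.
fissile build packages    # compile the BOSH release packages
fissile build images      # bake each role into a container image
fissile build helm        # render a Helm chart describing those roles

# The images are pushed to a Docker/OCI registry, and the chart is published
# to a Helm repository, which is just an index file plus .tgz chart archives.

# Deployment: install UAA first, then Cloud Foundry (Helm 2-era syntax).
helm repo add suse https://kubernetes-charts.suse.com/
helm install suse/uaa --name uaa --namespace uaa \
  --values scf-config-values.yaml

# The "move a secret between them" step: the CF chart needs the internal CA
# certificate that the UAA deployment generated, so it is read back out with
# kubectl and passed to the second chart as a value (the secret and key names
# here are assumptions).
CA_CERT="$(kubectl get secret uaa-secrets --namespace uaa \
  -o jsonpath="{.data['internal-ca-cert']}" | base64 --decode)"
helm install suse/cf --name scf --namespace scf \
  --values scf-config-values.yaml \
  --set "secrets.UAA_CA_CERT=${CA_CERT}"
```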
So we've got packaging, and that's Fissile. I'm a big fan of small tools doing small jobs: Fissile does a certain thing and does it well. For deployment you've got Helm; a lot of people, not everybody, but certainly the majority of people deploying complex applications onto Kube, are at least somewhat familiar with Helm and have some experience with it. And then for management, just back away: deploy it in a Kube-correct manner and then let Kube do its job. But all is not completely rosy. Cloud Foundry has become very attached to BOSH, BOSH does things a certain way, and Kube doesn't always play perfectly with that. So fire-and-forget doesn't work for every aspect of CF. One problem we hit was the generation of secrets: Kube didn't have a way for us to generate certificates the way Cloud Foundry needed, so we had to create a special pod, in addition to the pods we got out of the BOSH releases, to handle that. Also, the update orchestration you get from BOSH makes sure you don't get any downtime and gives you all sorts of great features like rollback. That's not something Helm can do perfectly with Cloud Foundry; there's a little bit of downtime involved. We've got ways around this, and we've done a lot of work to make it work, but it would be nice to have something that could handle it a little better. Another instance: if you want to change a setting in a containerized Cloud Foundry deployed this way, you have to do a Helm upgrade with a changed value for Helm to pass on to Kube. But that actually does an upgrade. It can be an upgrade to the same version, but it's a clunky way to just tweak one setting; it would be nice if an operator could just make that setting change. We've both had, I think, problems with the Kube API changing out from under us, and that makes certain operations in the Helm world difficult. One specific example: our Cloud Foundry distribution is based on SUSE Cloud Foundry, so we both hit one of these at the same time, when there was a very minor change from Kube 1.8 to 1.9 and we woke up one day going, hmm, things aren't working. Kube is Kube, and for the most part it's pretty transparent, but occasionally we hit these. That was with security groups... which one was it, do you remember? Pod security policies, that changed. So we had to make some interesting changes, and a lot of release notes were involved in making sure people could do that upgrade. One of the hardest things was getting Diego to run in a containerized environment, because it is itself a container scheduler. So you've got containers in containers in this style of deployment, which architecturally looks a little funny. It works okay, but it was hard to get right, with file system issues, and that's why both of our groups are really looking forward to Eirini: we can then use the Kube scheduler directly for Cloud Foundry applications. So we've both got skin in the game; both our companies have products in this space. Do you want to talk a little bit about yours? Sure. So you've already seen, hopefully, Tammy's demo this morning, where she demonstrated Cloud Foundry Enterprise Environment. It's a self-service deployment of Cloud Foundry on top of our cloud, on top of a Kubernetes cluster.
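Coming back to the configuration-change point a moment ago, here is a minimal sketch of what "tweak one setting" amounts to with this approach, again in Helm 2-era syntax; the release name, chart reference, and value key are placeholders rather than values from the talk:

```bash
# Changing one property means re-running the whole release as a Helm
# upgrade, even when the chart version has not changed. The release name,
# chart reference, and value key below are placeholders.
helm upgrade scf suse/cf \
  --values scf-config-values.yaml \
  --set "env.SOME_SETTING=new-value"

# Helm re-renders the manifests and hands them to Kubernetes, which rolls
# the affected pods; there is no lighter-weight path for changing a single
# knob until a runtime operator can watch for that kind of change.
helm status scf
```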
And despite whatever shortcomings Fissile and Helm may have, we don't think they are inhibitors to use in a production deployment, because we're building a commercial product that's GA; you can go out and buy it today. So we believe in it enough that we think we can actually sell it as a product, and it has worked well for us. One of the things that I like about this, whether it's right or wrong, it's an opinion, take it for what it's worth, is that at its core we're treating Cloud Foundry as just another application that runs on our cloud. We're deploying it the same way we deploy any other application or any other workload. So when a customer comes along and says, I want to get some of that onto the IBM cloud, this fits into that paradigm. We're not doing anything all that different for this. I mean, there's Fissile and Helm, but at the end of the day Helm is how we deploy things onto Kubernetes, and this is just another workload on top of Kubernetes. And it's a more cloud-native approach in the sense that things like Postgres and Cloud Object Storage are not provisioned as part of the infrastructure; we provision them as cloud services and bind them to Cloud Foundry, the way you would with an application. So end to end it feels like a more cloud-native experience to me. And our product is a cloud application platform. It's built on SUSE Linux Enterprise. It has been a certified distribution, and hopefully it will continue to be so. We support it on the SUSE CaaS Platform, our distribution of Kubernetes, which is actually bundled with the product. More recently we've been getting a lot of interest in running this on public clouds, and a lot of the public cloud providers have a Kubernetes that works just fine with it; AKS and EKS are the two supported ones. It's 100% open source, you've seen other people using our stuff throughout this conference, and we hope people continue to use it. Now, as I mentioned at the beginning, a year ago we started discussions with a small working group of people from IBM, SUSE, and SAP, immediately following your talk, in fact. This group, and props to Bernd from SAP for doing a lot of the legwork on this, drafted a proposal that would properly introduce this containerization of Cloud Foundry as a project for incubation. Initially it was drafted very much as a fresh-start approach: we're going to break off teams of people from all these companies and work on a brand new project. But as we started working on it, and as we kept working on the products we were building with Fissile, we realized that everything we had to do to make a production-ready Cloud Foundry using Fissile was a step towards the goals of this project. So we revised the proposal to include a plan for getting from where we are now to where we want to be, and we're very happy that it was accepted for incubation this summer. Fissile will continue to evolve. The rather provocative subtitle we used to have for the Fissile project was "the BOSH disintegrator", and the irony is that it's now part of the BOSH PMC, so it is, in fact, the BOSH integrator; that's a better way to describe it. Phase one of this project is mostly done. I can't get into all of the details here because, A, we're running short of time.
And B, I'm a product manager and I didn't write this stuff. But we are basically trying to reduce the amount of work we have to do to produce this Helm deployment, and instead inherit as much as we possibly can of what comes in from BOSH. We will then move on to phase two, which moves to BPM, the BOSH Process Manager: making sure the jobs are all in separate containers, using the BPM and link information, and making sure we use the BPM entry-point information rather than relying on monit. We also want to make sure that we can consume cf-deployment. And eventually we want a fully fledged Kubernetes operator at runtime that handles some of the things that have been handled on the BOSH side. One of the things this will allow us to do, of course, is to feed it BOSH manifests that any BOSH admin would recognize. So on a running system we can feed it a BOSH manifest; even though it was built by Fissile and deployed with Helm, we can still treat it in a lot of the same ways we would treat a regular BOSH deployment. There's a lot more information on the details of this in the document; you can all memorize this URL, and I'll post the slides as soon as we're done. There's also material in the GitHub repos, the fissile repo and a related repo called configgin; fissile is probably the main one. And we'd love to have you join us in the CF containers channel on the Cloud Foundry Slack. So, any questions? Any questions from anybody not named Julz? Amazing. Then, before I say thank you for coming: please do stick around for the next two talks if you can. Bernd Krannich from SAP and Simon Moser from IBM are going to be leading a panel discussion about containerizing Cloud Foundry; it should be really interesting. And then Julz and Julz are going to be presenting on Project Eirini, which is, I think, really, really cool. If you've got time, and you're interested in the whole containerization ecosystem, I think the next two presentations are both must-visits. Thanks all for coming. Thank you.