Thanks. Welcome, everyone, to Cloud Foundry Day at the OpenStack Summit. Today I'm going to be talking about containerized Cloud Foundry on Magnum: mostly about containerizing Cloud Foundry, and then about getting that onto Magnum in particular. My name is Jeff Hobbs. I'm the director of engineering in the SUSE cloud platforms area, focusing specifically on Cloud Foundry and container technologies.

A little about what we'll be going over: first, why we're here; then containerizing Cloud Foundry; then using Magnum as a base and putting Cloud Foundry on top of it. After that I'll talk a little about our current state and what we'll be doing in the future. I've set it up so I can talk mostly through the slides (there are about 35 of them), and then I've got a canned demo that we'll get to, because unfortunately the whole thing, front to back, doesn't all happen in the five minutes I claimed it took earlier today.

So first, why are we here? Everyone's looking at this: a lot of people are working on transitioning their applications and environments from the regular data center out to the cloud, with apps moving from physical servers to VMs to containers. In the long term, all of this is hopefully about microservices and an improved development process. And since I'm at SUSE, where does SUSE fit in? Today we're really only talking about the top right corner of this picture, where it says Cloud Foundry, but all of these are elements of the SUSE stack, and all of them are open source. So all of these are things you can play with right now if you want, and we'll talk a little more about some of them as they touch on containerizing Cloud Foundry. And here's the obligatory container picture.

So first off, who here actually runs a Cloud Foundry installation at any level? So, most of you. Cloud Foundry is usually deployed using BOSH. Put very simply, BOSH is a tool chain that tells you how to organize all the code for your software so that it can be easily deployed. The specific aspect that matters here, though, is that it deploys to VMs. If you look at Cloud Foundry, ever since its version 1 instantiation it has had a pretty clear delineation of its processes; in fact, you could argue many of them were almost written in a nice 12-factor way. Yet the standard way Cloud Foundry deployments happen is still onto VM infrastructure, and we wanted to deploy to Kubernetes.

So we wrote a tool. It's called Fissile, and it converts all the VM-centric BOSH bits into a container-specific implementation so they can run on Kubernetes, which means Fissile knows how to automatically compile, configure, and run all of Cloud Foundry. A nice side effect of using Fissile is that it gives you the mechanisms to make your deployment easier for the end user as well, and it lets you deliver all those compiled bits as images in a much more ready-to-run fashion; we'll talk a bit more about that. It lets you filter down to what the user really needs to configure, rather than the hundreds of things BOSH exposes, where some knobs are important to twiddle and others you shouldn't touch. And the images, once Fissile has built them, end up being configured using just environment variables, like a 12-factor app would be.
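To make that last point concrete, here is a minimal sketch of what running one of these role images looks like. The image name, variable names, and port are hypothetical examples, not the exact contract a particular Fissile build produces:

```bash
# Minimal sketch: image name, variable names, and port are hypothetical,
# not the exact names a particular Fissile build produces.
# DOMAIN and NATS_PASSWORD stand in for user-facing values exposed via the
# role manifest; everything else is already baked into the image.
docker run -d \
  --name cf-router \
  -e DOMAIN=cf.example.com \
  -e NATS_PASSWORD=changeme \
  -p 80:80 \
  registry.example.com/fissile/cf-router:latest
```

Everything that is not exposed as an environment variable has already been pre-computed and baked into the image, which is the "layered static" configuration approach discussed a bit later.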
The other aspect is that everything done here is done in a way that maintains Cloud Foundry Foundation certification. I'd note that Fissile, the "BOSH disintegrator," has had a lot of people look at it and conclude it's kind of anti-BOSH. It's not, really. It's an alternative that builds off a certain state in the BOSH lifecycle, the configuration lifecycle, and stops where BOSH starts to think in VMs, breaking things apart to think in containers instead. It would be great to bring that further back upstream, but I'll mostly talk about where we are now.

So first off, why can't we just use BOSH? Well, BOSH is about virtual machines, and that's not what we're targeting. If you look around at the interest level of other people at this conference, you see that containers are becoming more and more interesting and more and more important, and we wanted something for containers and container-based workloads. There's nothing about Cloud Foundry itself that prevents you from operating it with a containerized control plane, but BOSH does not operate that way. And also: for science, because we like to play with things.

So let's get into what it takes to tease all these bits apart. First of all, it's about separating all the things. Cloud Foundry existed before BOSH did; Cloud Foundry was originally Chef-deployed, then BOSH came into being, and the two became very tightly coupled. Cloud Foundry components are generally composable and have well-defined boundaries and APIs, but ironically those lines between BOSH and Cloud Foundry have blurred over time as people have become dependent on BOSH for deploying Cloud Foundry. The other aspect is that ERB templates and Monit have become deeply embedded in the whole deployment side of a Cloud Foundry system, and we don't like Ruby bits sitting over and through the code that has to be used. Those ERB templates also contain control scripts, essentially templatized Cloud Foundry code built on BOSH primitives, and they're not always the most cleanly written logic; some Ruby class definitions have even seeped into the templates over time. So all of a sudden way too much about deployment is intertwined with Cloud Foundry itself that doesn't need to be there, as we'll show.

This also makes porting to other systems difficult. Now, that's a debatable point; we're looking at containers and saying containers have become more portable, but really we're just abstracting things up to another, higher level. And the fact that Monit is the only service management you can use is undesirable. It's a piece we haven't completely teased out of the system yet; ideally you would do that in a truly containerized setup.

So first off, what is in the container? We're basically starting from an Ubuntu 14.04 Docker image, and I say this ironically, knowing that I'm wearing the green shirt. We actually do it from other images, SUSE images as well, but the current certified system is Ubuntu-based, from Cloud Foundry heritage. Then you create your stemcell-like layer on top of it, very similar to the way BOSH operates. At that point, though, the adding of packages and jobs happens at an entry point, where we've now separated things out. Has anyone actually dealt with BOSH at the template level? Oh, you're all lucky.
Because that's where we have to separate the things that happen at compile time from the things that happen at runtime. Then you can take what's in that container and decide to deploy it wherever you want.

A little more about building the world. Again, we pick up at a certain step: the BOSH templates from the BOSH releases are done, and we then take what happens at build time and separate it from the runtime aspects. These are actually already separated in the BOSH YAML files, where certain things happen at certain parts of the system's lifecycle; we've taken the parts that need to happen at runtime, plus some of the compile-time stuff, and moved everything forward into container-only operation. So basically all the compilation is already done when you're ready to run the system. In our case it's baked into the container, contrary to BOSH, where it happens when you use your BOSH Director to do your deployments. This all happens in parallel, too, because we're leveraging containers and Go to compile everything with pretty much all of the cores on your machine when creating these images. And it builds all of the compilation dependencies directly into the container. So those pieces of a BOSH system that say "you have to do this, that, and the other": we're doing essentially the same thing, except we're removing some VM aspects, and it really isn't that far from being VM-based to being container-based aside from a few key assumptions. We're just making those separations cleanly, compilation when you're compiling and runtime configuration when you're actually starting up each container instance.

On top of that we added a lot of smart detection of the various dependencies. So even if you feed in several BOSH releases with a lot of jobs, where you don't care exactly how they're specified, Fissile will correctly build only the things required for that particular role manifest. In a plain BOSH deployment, dependencies that never actually run in a particular VM (or container) sometimes end up in there anyway; Fissile does a true dependency check and only works on the things that matter to the jobs. And when I say "jobs," I would say processes, except it's not broken down to single processes but to the collection of processes that define the job of a Cloud Foundry component.

Next is the assembly line, where you have the role manifest. But before detailing what all the configuration looks like, let's talk about the input. As I mentioned, we take things in the BOSH release manifest format, the same format BOSH takes as input when it sets out to deploy and manage lifecycles in VMs. We run that through Fissile, which makes the modifications necessary to say "not VMs: containers," and then we put the container on the shelf and pull it off the shelf when we want to do our actual cluster creation. You could say we're creating an extra intermediate step, but that intermediate step is much more ready to run: a lot of the assumptions that BOSH asks you either to define or to leave alone (and be confused by) are already baked in as opinions, ready for use. And again, this works across different BOSH releases; we were originally building things off the standard cf-release format.
Now we're moving to the newer cf-deployment, and these exist as multiple releases: you can have a routing release and a different garden release depending on how you're doing your Diego, and so on. That just reflects the composable architecture behind Cloud Foundry, and it remains composable in this setup. All of that gets run through Fissile, as I said, and then you get the Docker images.

A little more about the configuration, and I hope this is legible up there. The role manifest contains a list of all the Docker images we want to build. You can co-locate more than one job on a Docker image, and I'll talk about why you might do that in a second. A job here is a single process or a collection of processes that might otherwise run on a VM; here we're running them on a container. It is not single-process, so it has unfortunately drifted away from being purely 12-factor, but this is potentially a step in that direction for Cloud Foundry component definitions. You can even co-locate jobs on one image from different BOSH releases. In the configuration section we map environment variables to the necessary BOSH properties, so the environment fed into the container at runtime defines only the runtime BOSH properties you need for those jobs. The opinions files are essentially BOSH deployment manifests with only the properties section; they contain the defaults for all the things that, for example, a customer would never need to touch. The dark opinions, shown in the bottom right corner, are basically a failsafe: we use them to make sure the defaults are erased for anything that shouldn't keep a default on the system.

And this is how it operates at runtime. The container comes up and executes the hook scripts, the hook scripts from the BOSH side of things that need to run at startup. Then configgin, which is basically a sub-project of Fissile, processes all those templates and turns that large pile of properties into only the properties you care about, plus the environment variable handling, so the environment variables are the only things you really have to set, and it leaves the runtime environment configured and ready. Then it starts syslog and cron, and then Monit. As I mentioned before, we'd like to get Monit out of this system; you could rely on container-level management of readiness and the like, but that would mean changing the orchestration around, or fully giving up the assumption of being able to run on bare metal or VMs, and that's obviously a larger discussion.

I already mentioned configgin, but a word on configuration generally. One of the common complaints about Cloud Foundry is just how difficult it is to set up. There used to be simpler ways, but Cloud Foundry grew and grew, and now you can barely get a system going without 20 VMs and 60 gigabytes of memory. There is BOSH Lite for a simplistic version of it, but there's still a lot of configuration going on.
So the BOSH manifests are large and troublesome to deal with, and I don't think they're the most user-friendly; the community would do itself a favor by separating out what actually matters. It's like working in one of those properties dialogs that have an advanced settings tab: there are maybe two or three things you really care about, and then there's expert mode. That's what we're trying to do with this system, because there are very few things you actually need to touch. In fact, I'd say less than 10% of the BOSH YAML properties you could be faced with are ones that end-user customers generally need to touch, and it's probably less than 3% for very simple dev-level setups. configgin exists to augment the whole BOSH template setup and provide other sources for pulling that configuration together, from that giant JSON payload plus the environment variables, and we use the BOSH templates themselves to eliminate some of that configuration complexity.

A little on the configuration approach. We looked at two ways: layered dynamic and layered static. In the dynamic approach, we could have had a Consul keyspace with all of this stuff in it. But there are a lot of problems with that, mostly that it's slow to run and you might have to restart things anyway, so it's not as dynamic as you'd think, and it requires yet another key-value Raft process. If you've set these up a few times, you know that underlying pieces like Consul and etcd are some of the most complicated parts to get right in your distributed management, and everything on top of them just tries to cope. So we removed that by focusing on layered static: environment variables for the user-supplied values, with everything else essentially pre-computed and stored in each container. The only real downside is that if you discover you actually did need one of those other values, you have to go modify Fissile, or configgin, to mark that property as user-exposed so it can be set via an environment variable, and then you rebuild the world. But in BOSH you do the same thing, and you rebuild a much larger world when you have to.

All that said, we have pull requests upstream, and they focus on only a few areas (these and a bit more): DNS lookup changes, hard-coded values that were subtle BOSH-isms, and touching /proc without restraint, which is a VM thing and something you wouldn't do in containers. Most of these have been accepted, removing those VM assumptions and dependencies from the system, so for the most part this stuff is ready and you'll see those changes move forward. And you can see the Fissile project itself; it's open source, on the SUSE GitHub.
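If you want a feel for what's in that repository, here is a rough sketch of the workflow. The role-manifest fragment below only approximates the real schema, the job and release names are made-up examples, and the fissile subcommand names are an assumption from memory, so check them against the project's README:

```bash
# Illustrative only: the YAML approximates the role-manifest shape described in
# the talk; the job/release names are examples, not the real schema.
cat > role-manifest-sample.yml <<'EOF'
roles:
  - name: router                 # one Docker image, co-locating two jobs
    jobs:
      - name: gorouter
        release_name: routing
      - name: metron_agent
        release_name: loggregator
configuration:
  templates:
    # the ROUTER_STATUS_PASSWORD environment variable feeds this BOSH property
    properties.router.status.password: "((ROUTER_STATUS_PASSWORD))"
EOF

# Assumed subcommand names (verify with `fissile --help`):
fissile build packages    # compile the BOSH packages, in parallel, inside containers
fissile build images      # assemble one ready-to-run Docker image per role
```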
There's another side project of this called CF Solo. I mentioned before that you can co-locate more than one job in a container, and CF Solo takes Fissile to the extreme: it applies all of the configuration changes and BOSH configuration simplification, but co-locates all of the jobs onto one single Docker container. Yes, it's one really fat container, but it's much thinner than all of the VMs you might otherwise need. That's what CF Solo represents.

However, it's not just about the container side. We're at the OpenStack conference, so enter OpenStack and Magnum. Everything I showed so far gets us to containers; I could run that on a machine that only has the Docker daemon running, which is simple, but obviously it's not going to get me anywhere near production scale. We've been looking mostly at Kubernetes, as I mentioned before; that's our target, and from a corporate perspective we want to raise things up so that Cloud Foundry can be retargetable in that sense, and Magnum is one of those target areas.

So how does it work? Well, you get a little duct tape and a little baling wire, and then you'll have a system that works, because it's not always as easy as it seems. First off, what do we start with? Our OpenStack base is SUSE OpenStack Cloud 7, which has Magnum as one of its core components. I'm not going to go too deeply into that; if you want to see more, I know there are other talks going on this week from the folks on our OpenStack Cloud side. It's based on upstream Newton, in case you're curious, and it includes Kubernetes as a service. The Kubernetes used in this demonstration is 1.5.3, and Docker is 1.12.

Then enter Magnum. I say all of this in a few slides to make it look easy; it's not quite that easy, and I'll get to the quirks in a bit. Basically we're using a Magnum Heat template with a DNS server added. It's important to choose an appropriate OpenStack flavor and Docker volume size so that your cluster can grow as necessary, and one little command is pretty much all we used on top of a stock SOC 7 system. Well, a couple of things precede that command. First, you grab your SUSE Linux Enterprise image, or whichever one you pick, as long as it's a Kubernetes-enabled image, and upload it to Glance: glance image-create, and so on. The LBaaS side is not strictly necessary, but if you're going to do anything with public IP exposure, you'll need it. And then make the requisite local DNS corrections. After that comes the command that kicks it off, creating the cluster template.

So, we've created a template; now let's create a Kubernetes cluster. You choose the previously created cluster template, whatever name I gave it on the previous slide (I've already forgotten), give it a number of master nodes, and adjust the command as necessary. That's the command-line version; I'll show it through the demo in a second. I make it sound like it's that easy. Well, it gets to be that easy after you rebuild your cluster about ten times, figuring out what you didn't do correctly along the way. We did have to do that ourselves.
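For reference, the two Magnum commands just described look roughly like this. This is a sketch only: the exact flag names and values (image, flavors, node counts) depend on your environment and your Magnum release, so treat them all as placeholders.

```bash
# Sketch of the cluster-template and cluster creation steps described above.
# Flag spellings follow the Newton-era Magnum CLI; every value is a placeholder.
# The image is the Kubernetes-enabled one uploaded to Glance, the DNS server is
# the extra one mentioned above, and the Docker volume is sized generously
# because the Cloud Foundry images are large.
magnum cluster-template-create --name k8s-cf-template \
  --image-id sles-k8s-image \
  --keypair-id my-keypair \
  --external-network-id ext-net \
  --dns-nameserver 10.0.0.2 \
  --flavor-id m1.large \
  --master-flavor-id m1.medium \
  --docker-volume-size 80 \
  --network-driver flannel \
  --coe kubernetes

# Three worker nodes, as in the demo.
magnum cluster-create --name cf-k8s \
  --cluster-template k8s-cf-template \
  --master-count 1 \
  --node-count 3
```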
Some of these were kernel option fixes: depending on what you're doing, parts of Cloud Foundry are sensitive to what's available in the kernel, so we had to make sure we were setting everything up right. We had also chosen the wrong flavor for the core nodes, because we needed a larger root volume, or rather we needed good volume management, and we chose something with faster local disk rather than slow volumes. That's not a statement on what OpenStack configurations you may need; we were doing this with machines sitting under desks, not machines connected to fibre channel arrays and all that other lovely stuff. You could use volume management if it had the right level of IOPS. We also stuck with device mapper; we tried to use OverlayFS, but the configurations weren't available to us on the restricted hardware we were using for these OpenStack setups.

Another thing is that, in this particular case, Magnum wasn't configuring Flannel correctly for etcd, and it didn't set up the master correctly for the kubelets, so we had to change some things in the /etc/kubernetes configs to address that. It might have "worked" in the sense that Magnum says the cluster is ready, but it wasn't ready for Cloud Foundry, which is itself a large and complex workload that wants to run its own distributed configuration management and other things on top. We also needed the hostPath provisioner, because Cinder volumes weren't attaching; again, we're pretty sure this was mostly limited to our hardware, and if anyone would like to loan us some more hardware, I'm sure we'd get around all these problems. Kubernetes DNS configuration changes also had to be added for a service network, because we needed one for the monster service we were about to drop on top of it. And one last thing: TLS wasn't configured, and that was all our fault, because we were using self-signed certs and didn't want to deal with all of that. So sprinkle a cube of salt here.

Now let me jump to the demo; this is a time-compressed demo, and I'm not mirroring my screen, so I'm going to look this way, sorry. This is setting up Kubernetes on OpenStack: you go through the creation of the cluster template, as I mentioned; the command line was shown on the slide. This is picking your image, your flavor, your key pair, all the usual stuff, and you can actually do all of this from the SOC 7 interface as well. So we've created the cluster template, and it shows that it's created. As I said, it's not that complicated as you go through it all, and there's a link to the YouTube video that you can pause and step through. Then this is creating the cluster; that was the second command line I showed. We picked a couple of nodes. I'm basically compressing this; we were trying to do it as a live demo but then realized that was not going to happen, especially given the time it takes to download all the images as you build this. So that was the cluster creation, and we have our Kubernetes set up on three nodes.
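At that point you can sanity-check the result from both the Magnum side and the Kubernetes side before putting anything on top. These are standard commands; the cluster name follows the hypothetical one used in the earlier sketch:

```bash
# Standard verification; "cf-k8s" is the hypothetical cluster name from above.
magnum cluster-show cf-k8s          # wait for the status to reach CREATE_COMPLETE
kubectl get nodes                   # the three worker nodes should report Ready
kubectl get pods --all-namespaces   # core add-ons such as kube-dns should be Running
```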
Okay, so: a three-node Kubernetes, ready to go. Next we go into the DNS piece I mentioned. This is just one of those weird configuration items, and you've got to have DNS working. It's one of the quirks, especially between the configurations expected by OpenStack itself, Kubernetes, and Cloud Foundry: you have to make them all consistent, and they all kind of disagree on what the DNS configs should say by default. Here we're basically exposing a port so we can get at the Kubernetes dashboard, and then we have a tunnel to get us going. Ooh, it's going pretty fast. Let's pause that, go back to the slides, and talk about the Cloud Foundry piece.

So, Cloud Foundry on top. I've compressed what was probably 20 to 30 minutes; the system has to pull the images and then bring everything up to a stable ready state, and I took a pause there to say, okay, now we need Cloud Foundry. Cloud Foundry is itself quite a large and complex system, but aside from going through and learning what some of those other configuration quirks were, those two compressed minutes genuinely represent "I now have a Kubernetes cluster that is ready to take Cloud Foundry," without any other confusion or sleight of hand going on. Most of the issues were related to service discovery and DNS. There's an assumption that cluster.local is your default discovery domain, but Magnum imposes something else, and then which side do you impose the configuration on? At this point we're adjusting things by hand in YAML files; if you were to harden this, you'd want to focus on better end-to-end configuration control for these things. There were also some Kubernetes namespace assumptions we had to change, which really just represented more stuff we needed to make configurable.

So, going back to the demo: what does the Cloud Foundry setup look like? First we set up UAA. For those of you who know Cloud Foundry, in an automated configuration this might all look like one step, but you really have to have your user authentication and authorization set up first, making sure it's talking to the right things from a service discovery perspective. UAA has to know it's talking to the right services, and then Cloud Foundry needs to find UAA as its first piece. So here we are setting up UAA, and as you can see from the clock ticking away at the top (which is why we compressed this), it takes a little while to get it all up; it showed about five minutes. Then, since we knew our internal IP, we need to expose it; this is the dependency chain you work through. Again, this could be completely automated, as it has been before, but here we're showing that it's up and working, and we were able to ping it. Now that we have UAA, we can deploy the rest of Cloud Foundry: create a namespace for it in Kubernetes, and then apply this configuration file, which is all the Fissile-generated configuration, and that was pretty much it. Again, this is slightly compressed in time, and it assumes you've already downloaded the images, because there's probably 30 gigabytes' worth of images across all these things.
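In kubectl terms, that deployment step is roughly the following. The file and namespace names are hypothetical, since the actual spec files come out of your own Fissile run rather than being shipped under these names:

```bash
# Hypothetical names throughout; the real spec files are generated by your
# own Fissile build rather than shipped as "uaa-kube-specs" or "cf-kube-specs".
kubectl create namespace uaa
kubectl create -f uaa-kube-specs/    # bring up UAA first
kubectl get pods -n uaa -w           # watch until the UAA pods report Ready

kubectl create namespace cf
kubectl create -f cf-kube-specs/     # the Fissile-generated Cloud Foundry specs
kubectl get pods -n cf -w            # readiness probes gate each component coming up
```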
There's a lot of layer sharing between those images if you do it right, but it can still take a while to download them all. And here we can see the readiness and liveness probes being used properly as the system comes up: everything from the API to Diego to the routing and the other bits lands on the cluster, and it takes about five minutes again until they've all reached the ready state. With the post-deployment steps done, we expose it. This is local, so we've got to grab the right IP address, and then we look at the dashboard; this is the Kubernetes dashboard that was set up earlier, all running on Magnum, and now we can see everything that's running. Then we look inside a container, just to see that, okay, this is actually Diego running, and everything is operating properly.

And that was kind of it. Again, it's only time-compressed; there was no cheating going on. You can run these things straight from those Docker containers. We only had to make some slight changes, compared with the ones we'd published before, for these Magnum quirks, and then you can go ahead and deploy something on this Cloud Foundry. So let's use it. We target the system; this is our CF Solo setup, and that call was just confirming we're pointing at the right thing. Then we log in and create our orgs. This is all the usual Cloud Foundry stuff, nothing too interesting here; the only thing worth pointing out is that we push both a Docker app and a buildpack-oriented app. The usual Cloud Foundry push, the usual waiting, and everything's running. That's pretty much it: it works. Now, back to the slides.

So where are we currently? The current working state: Magnum, obviously, we just saw that. We also have this running on Google Container Engine, Minikube, Hyperkube, and SUSE CaaSP. If you haven't heard of that, it's a relatively recent project, the SUSE Container as a Service Platform. So there's SUSE OpenStack Cloud with Magnum running on it, and there's SUSE CaaSP, a Kubernetes system that can do, for example, pure bare-metal Kubernetes. Our intention is basically to run on any Kubernetes system, 1.5 or up. One note for those of you who may be playing with Cloud Foundry in general: we did have to make some changes. We're using GrootFS, which is not currently a mainline feature; it's an extra optional one, but it's the one that lets you avoid using AUFS in your system. That's a very important aspect of being able to run on Kubernetes clusters where you can't touch the kernel, because otherwise you'd have to install the AUFS kernel module. GrootFS is an effort from the core team to use other backends, OverlayFS or Btrfs, and Overlay wasn't working for us, but Btrfs was. As for Hyperkube, we've deprecated that effort only because Minikube seems to be better supported in the community.

A bit about the current development state: GrootFS, again, to alleviate the current AUFS requirement in Cloud Foundry, and service discovery configuration improvements that we had to make inside the system and are now pushing back down to the Fissile and configgin level. That also touches on how you set up your Kubernetes.
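For completeness, the "usual Cloud Foundry stuff" from that part of the demo is just the standard cf CLI flow. The API endpoint, org and space names, and the Docker image below are placeholders, and the exact flag for Docker pushes varies a little between cf CLI versions:

```bash
# Standard cf CLI flow; endpoint, org/space names, and the image are placeholders.
cf api https://api.cf.example.com --skip-ssl-validation   # self-signed certs in this setup
cf login -u admin
cf create-org demo && cf create-space dev -o demo
cf target -o demo -s dev

cf push my-buildpack-app                      # buildpack-based app from the current directory
cf push my-docker-app --docker-image nginx    # Docker-image-based app (flag varies by CLI version)
```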
Beyond that, we want to leverage more Kubernetes features. The first four on the list, the ones not italicized, are already in use at some level. This is another advantage of moving to containers and container platforms for running Cloud Foundry: we're able to use things like readiness and liveness probes. Ideally we'd be able to get rid of the whole use of Monit and further reduce some of the weight that has crept into Cloud Foundry over time by leveraging these external platform features. Stateful sets matter because there is state involved in Cloud Foundry; Deployments and Kubernetes storage classes round out what we already use. The three things we're not using yet but want to are critical pods, so you can tag certain things as having a higher level of importance; pod affinity, because it can be very important to make sure some of your routing components don't get tangled up with your CPU-bound components, and you want to be able to assert that; and more network security sandboxing. In addition, Helm is something we're looking at: we've been playing around with charts and have some things deploying via charts, but not the entire system.

A little more about where we go from here. As many of you know, SUSE has been extending its community presence and growing its contributions in many other open source communities. It has obviously long been in the Linux community, as the first enterprise Linux offering, and followed up in OpenStack, but it just joined the Cloud Foundry Foundation board last December, and the Cloud Native Computing Foundation a couple of months ago. If you're interested in the Kubernetes efforts, as I mentioned, that's the SUSE CaaSP product, which is currently in beta. And we're founding members of other new projects, so this is SUSE branching out into other areas.

I showed this slide before and said you can catch all of this stuff and see where it is. Well, I highlight Cloud Foundry again because it's the one piece you're not going to find everything for today, whereas everything else is fully open and you can find it as released software. Why is that? For those who weren't aware, the source code assets and the team, myself included, were just recently acquired from HPE's Stackato group; I'm one day shy of two months at SUSE. And while HPE can be great about many things, open sourcing its software is not really one of them. SUSE, on the other hand, is very much an open source company, and we're now going through that process with a lot of the stuff we built. Stackato was a full, batteries-included platform of everything, mostly closed source. We're taking the time now to split all those pieces out, the Kubernetes control plane, Cloud Foundry, the service manager, et cetera, and they will all become open. The first two pieces were actually Fissile and CF Solo, from the Cloud Foundry bits, and more will be coming in the near future; everything you've seen today is, for the most part, in Fissile and CF Solo, and all of it will be in the open.

In terms of the next steps for features otherwise, it's about hardening the ability to target any Kubernetes deployment.
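One piece of that next-steps picture is Helm. The aspiration, purely illustrative here since no such published chart existed at the time of this talk, is that the whole stack eventually installs as a single chart:

```bash
# Aspirational and illustrative only: no such chart was published at the time,
# and the chart path, release name, and value keys are all made up.
helm install ./cloudfoundry-chart \
  --name cf \
  --namespace cf \
  --set env.DOMAIN=cf.example.com \
  --set secrets.CLUSTER_ADMIN_PASSWORD=changeme
```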
A lot of that is mostly around things like service discovery configuration and other setup, leveraging Helm charts throughout, as well as the other extended Kubernetes features I mentioned before, things like critical pods and pod affinity. We also want to leverage more of the new Cloud Foundry efforts, as well as contribute old ones that were developed previously at HPE and ActiveState and just never made their way into open source: things like application versioning, application single sign-on, and the backup and restore functionality that my team built over time.

The last item is an interesting side project we had along with this, called Furnace. Basically, it swaps out Diego for direct Kubernetes access, because why run containers in containers? That's exactly how Diego operates, and when you have an entire container control plane underneath you, it's not necessary. It's an experiment, and we proved it's possible, but right now we'll probably want to focus more on breaking some of the BOSH-isms and bringing Cloud Foundry back to being a true componentized, 12-factor-style system, and then we'll address those other aspects.

And with that, I say thank you. Were you curious about that? It's all recorded, so I should be able to catch it anytime. Any questions? I realize I'm spot on time and somebody is going to follow, so you can always ask me later; please step up to the mic if you'd like to ask a question, otherwise I'm generally around.

One quick question, then, since I realize we're holding up the next session: can you describe a little bit of the Helm aspect of this? Just amplify that for me, please.

Sure. One thing is that if we can truly bring Cloud Foundry to that containerized control plane nirvana, as it were, and make it easy to install, then it should be installable via a Helm chart by itself. The part I haven't gotten into is the entire service ecosystem that can go along with Cloud Foundry and its service brokers, as was mentioned in Christian's talk earlier. Whether those get deployed via Helm, and how you interact with them, is an entirely open space, one we're just now looking into and haven't made any hard commitments on. But Helm does seem to be the current package manager of choice, with growing interest inside the Kubernetes ecosystem, so if we're going to work there, we're going to work with Helm as well.

All right, thanks. I'll let the next person come up. Thank you, everyone.