OK, so I'm going to talk to you about how we do development at SUSE using fissile. Usually, when you think about developing for Cloud Foundry, you think BOSH Lite and the SDLC that comes with that. At SUSE, we're containerizing Cloud Foundry using this tool called fissile, which we basically use to turn BOSH releases into containers, and we've had to build other tooling around how we develop for Cloud Foundry. So that's what I want to talk about today. I'm basically going to walk you through our command line tools, the Vagrant box that we use, and how we think some of these ideas might feed into the upstream development process. (Do you want to go full screen? Yeah, sure. Thanks.) OK. I will switch to the terminal fairly often, so I'm not going to be in full screen mode all the time.

OK, so you can find fissile at SUSE/fissile on GitHub. We're trying to make our distro Cloud Foundry certified, which means we consume releases just like normal BOSH does, using the specs for packages and jobs and so on.

This is a typical BOSH Lite flow, simplified: you write some code, you do a bosh create release, you upload your release, you do a bosh deploy, you test, and then you keep doing that until your work is complete. Our workflow is similar, or at least we try to make it similar. You code, and you again do a bosh create release, because that's what fissile consumes. Then you do a fissile build, and there are a few commands there: you build packages, you build images, and then you build the Helm configs. These are actually three separate command line instructions, but we have tooling around them to make it easier for the developer. After that, you end up with Docker images on your machine that are essentially the equivalent of a compiled BOSH release sitting on top of Docker images, and you can helm install that. After you helm install, everything is up and running; you test, then you code, and you follow this cycle instead.

So just quickly, I'll go to the terminal. I'd like to show you what's inside our development machine. You have a Kubernetes deployment and a bunch of command line tools, like kubectl. We have this tool called k, which Aaron, the engineering manager on our team, developed, and which is just an awesome way to shortcut kubectl commands. What I did just now: the blue piece shows you what I would have had to type if I had used kubectl, but with k I can shorten everything, essentially. We have kube-dns running, and Tiller; those are the requirements. The VM is already configured with privileged mode enabled for running privileged containers on Kubernetes, and it has memory and swap accounting, so it has all the prerequisites required to run the SUSE Cloud Foundry distro.

OK, so I want to get into some details now. How does this actually work? How do we build the images? It starts out with essentially a stemcell. We create the Docker image from the exact same process you use to create BOSH VM stemcells. Almost at the end, just before the CPI-related stuff gets added on top of the VM image, we stop, and we create a root file system that we then turn into a Docker image. So at the bottom here you'll see openSUSE Leap; our stemcell is based on openSUSE. Then we have another layer that we put on top that has some tooling specifically for fissile-based things.
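Before getting into that tooling layer, here is a rough sketch of the inner loop just described. The three fissile build steps (packages, images, helm) are the ones named in the talk; the exact flags, chart path, and namespace are assumptions for illustration:

```bash
# Rough sketch of the containerized dev loop (flags and paths are illustrative).
bosh create release --force               # same starting point as the BOSH Lite flow
fissile build packages                    # compile the release packages into Docker layers
fissile build images                      # produce one Docker image per role
fissile build helm                        # generate the Helm chart from the role manifest
helm install ./helm/cf --namespace cf     # deploy; afterwards, test and go around again
```

In practice the Makefile targets shown later wrap these steps, so you rarely type them by hand.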
That tooling layer includes something we call configgin, which essentially allows us to process BOSH templates. This is a very important piece: we expose configuration to users as environment variables, and we have to turn those environment variables into BOSH properties that eventually make it into the templates. configgin is the tool that does that.

And then we have a packages layer. We experimented quite a bit with how we put the packages layer on top of the stemcell layer. First, we created a layer that was specific to each particular role. For example, if you have NATS, the packages layer would only contain NATS and whatever dependencies it has. Then we figured out that there is so much overlap that the total download size of all the layers would be around 30 gigs. So we decided it's best to have a single packages layer that contains all the packages in the system, and then differentiate with the final layer, where you add the job pieces on top. So we have this huge packages layer that contains every package in the system, and then the final layer differentiates each image. The jobs are basically the templates, the control scripts, and so on, plus our entry point, which is run.sh. run.sh does everything that needs to happen before you can start Monit: it processes the templates, it waits for dependencies, and so on, and eventually it starts Monit.

So I can show you here; you probably can't see this very well. Here we have the stemcell that we use as a base for everything, and here we have the actual images. At the bottom you'll see some packages layers; those two sit on top of the stemcell layer. And on top of those we have the differentiating piece for each role: diego-access, post-deployment, api, nats, and so on. So this is essentially what we end up with after we run through all the fissile steps and get our role images.

Now I want to take you through how we configure fissile to actually produce all these images. What we have is two YAML files that try to describe everything fissile needs in order to create the images, as well as the Helm charts. They describe how we collocate BOSH jobs into each container image, how secrets are generated, how our charts are created, and how the environment variables that we expose actually get turned into BOSH properties.

So I'm going to open up the role manifest and show it to you. I hope you can still read this. OK, so first we have roles: just an array of roles that we describe, and each of these roles will become a Docker image. For example, here we have NATS. We specify that when NATS starts up, we actually want to run some scripts to configure HA hosts, to forward log files, and so on. We tell fissile which jobs have to be put into that image. We describe the processes that are available, so once Monit is up and running, these processes have to be there in order for the container to be healthy. The tags here say that NATS is a clusterable role, which basically means you need a StatefulSet in Kubernetes to deal with NATS: you need nats-0, nats-1, nats-2, and so on. Then you have runtime information that feeds into the creation of the Helm charts: how much can NATS scale, does it need any persistence or shared volumes, how much memory, and so on. We also have to define each exposed port.
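To make that concrete, here is a hedged sketch of what such a role entry might look like. The field names and layout are illustrative guesses at the shape just described, not the exact fissile role-manifest schema:

```bash
# Illustrative excerpt of a role entry; field names approximate the description, not the real schema.
cat <<'EOF'
roles:
- name: nats
  jobs:
  - name: nats                     # BOSH jobs collocated into this one image
    release_name: nats
  scripts:
  - configure-HA-hosts.sh          # run at startup, before Monit takes over
  - forward_logfiles.sh
  processes:
  - name: nats                     # must be running under Monit for the container to be healthy
  tags:
  - clustered                      # becomes a Kubernetes StatefulSet: nats-0, nats-1, ...
  run:
    memory: 256                    # sizing and persistence hints feed the generated Helm chart
    scaling: { min: 1, max: 3 }
    exposed-ports:
    - name: nats
      protocol: TCP                # every port must be declared so a service can be created
      internal: 4222
EOF
```

fissile reads this to decide both what goes into the nats image and what the generated chart for that role should look like.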
Communication inside of Kubernetes always happens via services, so we have to specifically instruct Kubernetes on how the containers will communicate. That means identifying all the ports in Cloud Foundry that have to be opened, as well as their protocols. We also have some hints here for affinity and anti-affinity. And that's essentially it: fissile gathers all this information, creates a container image out of it, and also creates a Helm chart.

So that's the image piece. Now I want to take you to the configuration piece, here in the role manifest. Let's search for cluster admin... cluster admin password. Now we're in another section of the role manifest. This is a pretty big file, but one thing to remember is that the customer never sees it; it's just a development tool that we use so fissile can generate everything we want. It's sometimes difficult to work with because it's so large, but it seems to do the job.

So here we have a definition for the cluster admin password. This is a setting that we actually expose to the customer inside the Helm chart; we'll see that in a second. And we have some information about it: it's a secret, it's immutable, we have a description for it, and it's required. Basically, that says we cannot generate this value; it has to come from the user. And since it's a secret, it has to be treated like one. We also have definitions for other environment variables, things like certificates. Let's find one: the console agent cert. We have a descriptor here that basically says we're supposed to generate this, and it's a cert. We're also working on a Helm plugin that will generate all the certs, self-signed, using this information.

So we've seen so far that the role manifest has information on how to build the images, and information on what configuration we expose to the operator. Now I'd like to show you how we tie those two things together: how we go from the configuration values that the operator specifies to the actual BOSH properties that have to end up inside the templates.

OK, you might recognize some of these; these are actual BOSH properties. Acceptance tests, admin password: that's a BOSH property from the Cloud Foundry acceptance tests. And we basically tell fissile that, hey, when the cluster admin password is set, you should put it into that admin password BOSH property. We need the cluster admin password in a lot of places, just like other configuration settings in Cloud Foundry, so we can expose one environment variable and use it in multiple locations inside these templates. These are Mustache templates, so you can have a bit of logic there, but not too much. Essentially, it allows us to simplify the experience for the operator.

So in the end, we distill all the configuration settings that come with BOSH properties down into what we think should be exposed to the operator. We don't expose everything that you could configure using BOSH properties; we expose things as environment variables, and then we turn those into BOSH properties. And you can see here that, because these are Mustache templates, you can do cool things and protect the operator from having to type weird things like this scim users property; we just feed the password into that string.

OK. Next, I want to show you the opinions file. So far, I've taken you through the configuration options that the operator sees.
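Before moving on to the opinions file, here is a hedged sketch of how one exposed variable might map onto BOSH properties in this part of the role manifest. The property names come from the talk; the exact section layout and Mustache usage are approximations:

```bash
# Illustrative sketch of the variables and templates sections; layout is approximate.
cat <<'EOF'
configuration:
  variables:
  - name: CLUSTER_ADMIN_PASSWORD
    secret: true                   # treated as a secret by the deployment
    immutable: true
    required: true                 # fissile will not generate it; the operator must supply it
    description: Password for the cluster administrator.
  templates:
    # one exposed variable can feed several BOSH properties via Mustache
    properties.acceptance_tests.admin_password: '{{CLUSTER_ADMIN_PASSWORD}}'
    # it can also be spliced into a larger string, e.g. the UAA scim users list
    properties.uaa.scim.users: '"admin|{{CLUSTER_ADMIN_PASSWORD}}|scim.write"'
EOF
```

The operator only ever sees the environment variable; the BOSH property names on the left stay a development-time detail.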
But there are other configuration settings that we bake into the images themselves, in the opinions file: essentially, things that we want to override when we build the images. For example, the CF admin username. We always want the first user that gets created inside Cloud Foundry to be admin, and for this setting we don't offer the operator an option to configure it to be something else. Or the CCDB: we always know it's going to be on port 3306, because that's what we configure it to listen on, we control the network, and we stand up the MariaDB Galera cluster ourselves. So the operator never needs to worry about that property.

OK, so that kind of explains where we gather all the information to generate the images and the Helm charts. Next, we have a Makefile. It has a bunch of targets, basically for everything that we need, and we use those targets to compile, to build releases, to do essentially all of the pieces I showed earlier. For example, I can do make compile. Now, I've pre-compiled this, because it takes a few hours the first time you do it. We do have a bunch of caching mechanisms, so every time you change something, we only try to figure out what the delta is. If you've just changed one package, first bosh create release will detect that only that package has changed, then we'll detect the same thing, and we'll just compile that one package for you.

After that, to speed up development, we try to figure out how to place that package on top of the packages layer so we don't have to rebuild it all. You can imagine that if you have that three-gig layer of packages sitting on top of the stemcell layer in Docker, and I change the cloud controller, for example, I don't want that entire layer to be rebuilt, certainly not while I'm developing. So what we do is figure out, based on labels that we set, what's the closest existing packages layer we could use so that the delta is minimal. We take that packages layer, build from it, put your changes on top, then the jobs, and then you can do make run. Our goal is to make development as fast as possible, so you don't have to wait for things to compile and so on.

So like I said, we have a lot of make targets, and one of them is make run. What make run does (and I should have shown you the actual running system before I did this, but now everything is going to get deleted and redeployed; hopefully DNS stays up) is this: we'll try to do it and deploy live. When you do a make run, a helm install occurs. Namespaces get created inside of Kubernetes; we have one namespace for UAA and one namespace for CF. Then all of the Helm templates get converted to Kube configs, and everything gets created at once, essentially. What we want to do in the near future is, when you do a make run after you've changed something (I'll keep trying here), we'd like just the delta to be deployed. Right now we have some issues with secret generation that basically rotate secrets every time you try to do a helm upgrade, so obviously that doesn't work well. OK, so I can't really show it running right now, but I can do make run.

Like I mentioned, we also have scripts for generating self-signed certs, based on the information that we put inside the role manifest. And we use a versions file, because we have a lot of dependencies for development too.
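As a rough sketch of that inner loop: make compile and make run are the targets named in the talk; the images target and the k invocation at the end are hypothetical stand-ins for the intermediate steps described above:

```bash
# Hypothetical shape of the Makefile-driven loop; only 'compile' and 'run' are named in the talk.
make compile          # recompiles only the packages whose sources changed, reusing the cache
make images           # assumed target: rebuild just the role images affected by the change
make run              # helm install of the generated charts into the uaa and cf namespaces
k get pods            # shorthand for 'kubectl get pods'; wait until everything is Running
```

The point is that after the first long compile, each of these steps only has to touch the delta.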
Those dependencies include the cf CLI, k, kubectl, Helm, and fissile, and we have this versions file that we use to basically pin the versions of all of that tooling.

The way we do development is typically a vagrant up from the SCF repo, which is public, so you could do it as well. You just vagrant up that thing; it shares the source code, and from there you're essentially two make targets away from getting SCF up and running inside of it. It will take a long time the first time, because it needs to compile everything and download everything, but once that's done, everything is pretty fast. We use a well-known IP to make things easier, with cf-dev.io pointing to it.

Next: we kind of figured out that it's all about configuration. We think the hardest part of Cloud Foundry is configuring it, and making sure that the configuration you expose to the operator is easy to understand. So we've done a lot of work, as you've seen, to distill what we expose down to the most relevant things. But this causes us a lot of hassle during development, because those Mustache templates you've seen are not easy to work with; it's easy to have typos and so on. So we did a lot of work to make sure those mistakes don't happen. We have linting code, and we have validation code built straight into fissile. So if I do a validate (I'd have to rebuild this; hopefully it'll behave), we do a lot of validation for everything, essentially. We look at which BOSH properties are defined in multiple jobs and which ones have conflicting defaults, and that actually happens: you can have the same BOSH property exposed with different defaults inside different releases. We make sure that all the environment variables exposed to the user are actually being used inside the Mustache templates that you saw. We make sure that all the scripts, all the startup scripts that we define for the roles, are actually being used. So there's a bunch of validation and linting code that is essentially reusable: if you were to dockerize or containerize another BOSH release, you'd get all these features.

So, to sum up: everything is open source, and we'd love it if other people tried this style of development and tried to work with a containerized version of Cloud Foundry. It's currently in beta. Oh, sorry: the SCF repo that you see there has some releases, and the latest release is a beta one. You don't actually have to run the dev version of it or go through all of the things I mentioned here; you can deploy it on a Kube if you have one. Just download the Helm templates from there and follow the instructions in the wiki. We currently use openSUSE Leap for the stemcells and the stack, so we added a new stack for the buildpacks; we went from one stack to two, because until now there was just cflinuxfs. And we also have a UI for Cloud Foundry that's going to be part of the incubator. It's a very cool UI; it works with any distro of Cloud Foundry, and it actually allows you to manage multiple Cloud Foundry endpoints at the same time. So all of this is open, you can test it out, and if you follow that wiki there, you'll be up and running pretty much in no time.

And that's it. Do you have any questions? (This is the... They're both working. Excellent.) Any questions? Thanks, first, for waiting until I got all the way over here. I thought there was somebody else, right?
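For reference, the vagrant-based setup described earlier might look roughly like this. The repo is assumed to live under the same SUSE GitHub org as fissile, and the make target names are placeholders for the "two make targets" the speaker refers to:

```bash
# Sketch of the vagrant-based setup; repo location and target names are best guesses, not verified.
git clone https://github.com/SUSE/scf.git && cd scf
vagrant up                                  # provisions the VM: Kubernetes, kube-dns, Tiller, privileged mode
vagrant ssh -c 'make compile && make run'   # the "two make targets": long first compile, then deploy
# once it is up, cf-dev.io points at the VM's well-known IP
```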
So it seems to me that some of the concepts you've just shown might have some commonalities with things I would find in BOSH 2.0 and also in CredHub. Are there any thoughts on how that fits with what you're doing in fissile? So there's no parallel with CredHub specifically, but for the rest of it, we definitely try to make sure that we're at least on par with what you can do with BOSH, and with what you can do with BOSH while developing. All the nice things that BOSH does, figuring out the delta when you create a release and only deploying the pieces that changed when you deploy, we want to have that as well. So of course there will be commonality, and it's by choice; that's what we want to happen. Okay, thanks.

Do you have any other BOSH releases that aren't the Cloud Foundry stack, smaller ones, say, if I wanted to get a feel for how I might do this with my own BOSH releases? Yeah, so we actually use a bunch of small BOSH releases from bosh.io, like the NTP release, as test assets for fissile. Are we good for time before I go on and ask questions like that? Could you just bring up an example? Sure. Because Cloud Foundry is huge; if you were trying to figure out how to use fissile, you might get lost in there. Uh-huh, I hear you. So yeah, we use the NTP release as a test asset, and also the Tor release. I don't think we have a role manifest in the form you're probably asking for, like a nicely put-together repo for you to test out, but that's a good idea, we could do that. I thought you were asking from a compatibility perspective. I'm asking purely because, you know, I manage many BOSH releases, and the idea of being able to describe a complex system in one format, like a BOSH release, and have it become available to people who are using Helm and haven't yet found the other reasons for using BOSH, that's better than having to recreate everything. Yeah, that's interesting. We'll definitely take a look at setting up a small sample for fissile.

Any other questions? All right, please put your hands together.