All right, guys, it's 4:55, so we're going to get the talk started. Welcome to the session for Fissile, a.k.a. containerizing Cloud Foundry, or BOSH releases in general. My name is Aaron Lefkowitz. I'm an engineering manager in the Helion Cloud Foundry group at Hewlett Packard Enterprise. I've got a background in SaaS; I've only recently come into the PaaS space, but I think it's really cool. I really like Go and networked systems, and I've got a collection of plushy gophers to prove it.

Hello, everyone. I'm Vlad. I'm the technical lead for Cloud Foundry at Hewlett Packard Enterprise. I've been working on Cloud Foundry projects since about 2011, and I'm a big fan of metal and StarCraft.

So I want to start with the question: why can't we just use BOSH? This is the question we've got to answer in order to proceed. BOSH is really about virtual machines. When you deploy with BOSH, you get virtual machines, but we wanted something for containers. As is apparent to everyone in this room, a lot of workloads are moving to container-based deployment mechanisms, so we wanted a piece of that cookie. But we also really like Cloud Foundry, so we needed something for both. And it's also for science: we have a peanut butter and jelly sandwich, and we want to see if we can take it apart, separate those two things into their distinct parts. In order to separate all the things, the first thing we had to realize as we started on this journey is that BOSH and CF are a little more tightly coupled than we had first imagined, and BOSH's technology choices also limit the implementations we could choose. For example, if you look at the diagram, on the left-hand side we've got the BOSH agent, stemcells, all the BOSH-related things; those are squarely in BOSH land. On the right side you've got jobs, configurations, packages, all in Cloud Foundry land.
But in the middle, where these things come together, we've got monit and these ERB templates, and those are a lot harder to deal with. In particular, if you look at Cloud Foundry, the components are composable: we've got well-defined boundaries and APIs. But somewhat ironically, the lines are a lot blurrier in this middle section. For example, the ERB templates are written in ERB, which means that if you don't have a Ruby parser, you can't use them, and that limits the technology you can use around them. The templates themselves also happen to contain entire Ruby classes and functions, which makes them really hard to emulate; we wish those kinds of things hadn't crept into them over time. So porting this to another system isn't really feasible. And then there's monit (I wish I knew how to use PowerPoint), and it's the only service manager you can use. That's it. It's undesirable; we wish we could switch over to systemd or whatever, but we're stuck with monit. So Vlad's going to get into what's inside now.

Okay, so given all that, how are we doing this? How is Fissile turning BOSH releases into containers? First we start from an Ubuntu Trusty base and we put a stemcell layer on top of that; hopefully you know what stemcells are. We create the stemcell layer much like BOSH does: we run some scripts and install dependencies on top of Trusty, and we get that stemcell layer. On top of the stemcell layer we have packages and jobs. Packages are compiled, and you have jobs with their templates, their configurations, and so on. And then next to all this we have some secret sauce, which is not that secret really, because we're going to tell you what it is.
It's configgin, a tool that deals with the BOSH templates, and run.sh, which is the entry point to every Docker image that Fissile creates. These images don't have a BOSH agent, and you can use basically anything that speaks Docker to deploy them, whether it's Compose, Kubernetes, et cetera.

Okay, so we just talked about the fact that we have compiled packages as part of our container images. How do we do those? Well, we do it at build time, contrary to how BOSH works. With BOSH, when you deploy the first time, you compile packages on the stemcells; we do it at build time, when you build your container images. We have a compilation layer, so we were able to separate the dependencies that you need at runtime in the Docker images from the ones you only need to compile the packages. Again, you start from Trusty, and you have this compilation layer, which is basically the dependencies for compiling things. Then, using Docker and Go, we parallelize everything and compile all of your packages using all the cores on your box. This also does smart detection of dependencies. For example, if I say I want NATS and the NATS Stream Forwarder on one image, it'll pick out just the packages that are needed for those jobs and won't compile anything else. Also, if you use multiple BOSH releases and the same package is used in more than one, we only compile it once.

Okay, so now we know how the layers are created for these images, and we understand how the packages get compiled. What else do we need in order for Fissile to do its job and give us Docker images for basically any BOSH release? We need BOSH releases that have been built (dev BOSH releases, not final BOSH releases), we need a role manifest, and we need opinions. These are all required at build time, not at deployment time. Using all this information, Fissile can output Docker images.
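The compilation behavior described above (pick out only the packages the requested jobs need, compile anything shared across jobs or releases exactly once, and fan the builds out across all cores) can be sketched roughly like this. This is a minimal Python sketch: the job-to-package mapping is hypothetical, and compile_fn stands in for the real Docker-driven compile step.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def packages_to_compile(jobs, job_packages):
    """Collect only the packages the requested jobs need, queueing each
    shared package exactly once (even across multiple releases)."""
    seen = set()
    needed = []
    for job in jobs:
        for pkg in job_packages[job]:
            if pkg not in seen:
                seen.add(pkg)
                needed.append(pkg)
    return needed

def compile_all(pkgs, compile_fn, workers=None):
    """Fan the package builds out across all available cores."""
    with ThreadPoolExecutor(max_workers=workers or os.cpu_count()) as pool:
        list(pool.map(compile_fn, pkgs))

# Hypothetical job -> package mapping for a NATS role:
job_packages = {
    "nats": ["golang", "gnatsd"],
    "nats_stream_forwarder": ["ruby", "gnatsd"],
}

# gnatsd is shared by both jobs but is queued only once; drop the
# forwarder from the role, and ruby is never compiled at all.
pkgs = packages_to_compile(["nats", "nats_stream_forwarder"], job_packages)
```

Dropping nats_stream_forwarder from the list leaves just golang and gnatsd, which mirrors what happens in the demo later, where disabling the forwarder means Ruby is never built.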
Next, we're going to see what these two configuration inputs look like. On the left, we have a role manifest, and again, I want to emphasize this is used at build time; the user that deploys these Docker images will never see it. You have a list of roles. For each role, you'll get a Docker image, and for each role, you specify what you want in it. In this example, we have NATS, the NATS Stream Forwarder, and the Metron agent, so at the end, if you feed this into Fissile, you'll get a Docker image that contains these three jobs. Then we have a configuration section. We wanted to do configuration through environment variables, because we've noticed that it's a best practice (we've seen it in things like twelve-factor apps), and environment variables are really easy to use with Docker. So we created the templates that you see on the left to map environment variables to BOSH job properties.

On the right, we have opinions. At the top, you see what is basically the properties section of a BOSH deployment manifest. Those are configuration defaults that will be baked into the images, things the user won't be able to change; they're basically your opinions of how the container should run. Then we have dark opinions, and this is something to make the security guys happy: anything in there won't be allowed to have a default. We enumerate all the secrets in the system there, so that none of those secrets can ever have a default baked into the image.

Okay, so now we understand how Fissile creates things, what the images look like, how packages get compiled, and the configuration we need to pass to Fissile. How does it run? When I docker run one of these things, what happens? Well, the entry point, which we call run.sh, will execute some scripts that are very useful for hooking into the process.
So if there's something that needs to change, if you don't like something this automated process creates, you can hook in at that point. It runs configgin to process all the BOSH templates. It'll then start rsyslog and cron, and finally start monit. Once monit starts, all the jobs that monit is monitoring will eventually start up. And finally, we trap the INT and TERM signals, so that when Docker tries to stop us, we can shut down gracefully. Now I'm going to pass it back to Aaron.

So I'm going to go into a little more detail. We saw how to configure Fissile itself and how to tell it which images to build, but I want to talk a little more about the configuration of Cloud Foundry, because it's a topic of its own, and we wanted to dive into it just a little here and see how we tackled this problem. Configuring Cloud Foundry is hard; there are a lot of values to configure, and we wanted to distill this into something simpler for the user. So we created configgin, which augments the BOSH templates: it does the same thing, but it can pull from multiple sources, such as the environment variables we talked about earlier. We also employ the mustache templates you saw a few slides ago to help eliminate complexity and redundancy in the configuration of Cloud Foundry.

Just to give you an idea of how many configuration values there are in Cloud Foundry: we have specs, which come from the job spec files, and there are 500-some of those values in cf-release and diego-release alone. Then we get to the level that we call opinions, where we can safely define defaults. There are about 200 of these, and for most deployments you don't ever really need to set them. Those are followed by what we call user-global values, the values users actually care about and want to change; there are about 90 of these.
These are host names, IP addresses, sometimes ports, and secrets, especially secrets. Then we have user-role values. User-global is for the entire system, whereas user-role is for a specific VM. For example, the API role, the Cloud Controller and its jobs, could have a specific value for nats.machines. There are about 20 of those.

So we had two schemes, and I'll go quickly through the one that didn't work for us and the one that did. The first one we tried was something I'm calling layered dynamic, where we kept values in four key spaces: spec, opinions, job, and role. But there were a few problems with this. We used Consul to do it, so each of those values lived in there, but every time we looked one up, we had to walk the fallback chain: is it this one? Is it this one? Does this exist? So it was actually quite slow to run. And we didn't actually gain anything from it, because we still had to restart the container; despite having this dynamic ability to configure things, nothing really changed. We also had to run yet another key-value Raft process in the cluster. As you know, cf-release already has a Consul instance, and Diego and Loggregator already have etcds, so we would be putting another one of these hard-to-configure-for-HA processes into the mix, and we just didn't need that. So we toned it down and went with something simpler, which I'm terming layered static here: everything is pre-computed inside the container based on the role manifest and the templates we've already seen, and everything else is provided through the environment, very easily, for user values. This worked out really well for us.
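The layered-static scheme boils down to a fixed precedence chain that can be resolved once, up front. A minimal Python sketch of that lookup follows; the environment-variable naming convention and the function shape are assumptions for illustration, not configgin's actual API.

```python
def resolve(prop, spec_defaults, opinions, dark_opinions, env):
    """Layered-static lookup: user values from the environment beat
    opinions, opinions beat spec defaults, and dark opinions (secrets)
    are never allowed to fall back to a baked-in default."""
    env_var = prop.upper().replace(".", "_")  # e.g. nats.user -> NATS_USER
    if env_var in env:
        return env[env_var]
    if prop in dark_opinions:
        raise ValueError("secret %r must be provided by the user" % prop)
    if prop in opinions:
        return opinions[prop]
    return spec_defaults[prop]

spec_defaults = {"nats.port": 4222, "nats.password": ""}
opinions = {"nats.port": 4222}      # safe default, baked into the image
dark_opinions = {"nats.password"}   # secret: no default may survive
```

Because every value is resolved up front rather than on every lookup against a Consul cluster, there is no fallback chain to walk at runtime and no extra key-value process to operate.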
In addition to this, here is a sampling of the pull requests our team has made to change things like DNS lookups, hard-coded values that are subtly related to BOSH in certain ways, and components that touch /proc with impunity. This is an ongoing effort to, like I said, separate that peanut butter and jelly sandwich a little bit more.

The next thing I want to go into is a demo. I'm going to show you Fissile in action, so I'm going to switch over to this. Let's see; I need to mirror my display, one moment. Hopefully we can see this. Yes. Hopefully the font is also big enough. Let's do better with that. Are we good there? Can we read it? OK, excellent. I'm going to do the same over here just so we get a clear view of what's going on.

So I put together this folder, and I've got a couple of things going on here. I have a CF release; inside cf-release is just a pre-created dev BOSH release, from bosh create release --force. I have a fissilerc, which basically passes command-line arguments into Fissile; there's a bunch of stuff in there. I have configgin, which is required because we bundle it into the image we're about to create. And then I have this config directory that has all the files we were just talking about. We can see these are my opinions, the things that are not likely to change and will be baked into the image, such as the port for NATS, or the fact that trace is set to false. The next thing I have is my dark opinions, which we also saw. These are the things that I don't want defaults for: even though I've specified values here, I'm not actually going to be able to use them, so this value will not go through. And the last thing we have is the role manifest. We saw an example of this; you can see here that I've defined a container, a role, called nats. It has one job.
It has the nats job from cf-release. I've also commented out a couple of processes here that I'm not going to need for this demo, such as the NATS Stream Forwarder and the Metron agent; we can control the processes that go into each container by modifying the role manifest. And lastly, you can see our configuration templates at the bottom, the mustache templates we were talking about.

With that, I'm going to actually build the image, or sorry, the packages first. This is the compilation step. And of course, I forgot to source my fissilerc for all my settings. You can see that because I disabled the NATS Stream Forwarder, Fissile intelligently does not build Ruby, which it does not need, even though it's part of that job's release. You can see it compiled golang and gnatsd. The next thing I'm going to do is actually build the image with these compiled packages. Now, in my docker images output, which looks lovely, you can see an image named fissile-nats. That's the image we just built. The fissile role base and the fissile compilation base that you see there are the ones we discussed earlier, and they take a little longer to build; those hold the runtime and the compilation dependencies.

Now I'm actually going to start this container up. So we're going to do one of these. And here I have, let's check, a script that's just curling the endpoint; it's looking for monit, actually. So when I run nats, you'll see that monit pops up, and on the right-hand side, monit reports that the NATS process does not exist yet; it knows that NATS is not ready. And now the NATS process has actually started up, and monit is reporting that it's up. This is just the regular CF monit that's in every VM you would produce with BOSH. And now we're ready to actually connect to NATS, just to prove that it's working. So I have a listen command here.
Sometimes it takes a little while to start. The process says it's up, but, Ruby, right? There we go. So we're subscribing to NATS, just subscribing on the wildcard here, and then we're going to send a plain hello world to it. You can see that it gets passed along to the server. And there you have it: we have NATS running in a container, in plain Docker, that you can use with anything. Kubernetes, Docker Swarm; the world's open, as long as it speaks Docker, which is what Fissile produces.

Now we're going to see another piece of this demo, so I'm going to pop that up here. Good to go. Yep. OK, so this is a video that we built for this, and here we see Cloud Foundry running on Kubernetes, basically. We took five BOSH releases: cf-release, diego-release, Garden, MySQL, and etcd. We turned them into Docker containers and deployed them on Kubernetes. And we went a bit further than that. On the right, what you're seeing is kubectl get pods. We wanted to make it HA, so you'll see that there's actually more than one API role running: you have three API workers, multiple cells, multiple MySQLs, multiple NATS, et cetera. At the top left, what you're seeing is a process that's making requests to an app deployed on this Cloud Foundry on Kubernetes. Right below that, we have a Chaos Monkey script. What that does (and hopefully you can see that the video has sped up, and it keeps going faster and faster) is kill something every minute: it takes one of the roles at random, like a cell or an API, and just kills it. And at the bottom, we have the Diego distribution, which shows how the app being hit by the process at the top is distributed among the Diego cells. And we can see that we basically get this for free.
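A Kubernetes spec for one of these HA roles might look roughly like the following. This is purely illustrative: the role name, labels, image name, and secret reference are assumptions, and the original demo predates this exact API version.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api                  # one fissile role = one image
spec:
  replicas: 3                # e.g. three API workers, as in the demo
  selector:
    matchLabels:
      role: api
  template:
    metadata:
      labels:
        role: api
    spec:
      containers:
        - name: api
          image: fissile-api          # hypothetical fissile-built image
          env:                        # user values arrive via environment
            - name: NATS_MACHINES
              value: "nats.service.cf.internal"
            - name: NATS_PASSWORD     # secrets get no baked-in default
              valueFrom:
                secretKeyRef:
                  name: cf-secrets
                  key: nats-password
```

Kubernetes restarts any pod the Chaos Monkey kills, and the replica count keeps each role HA, which is the "for free" part.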
We built the Docker images, we created some configuration for Kubernetes, and now we have an HA deployment of Cloud Foundry running on it. We ran this experiment for about 20 hours; about 900,000 requests were made, and in total there were about 1,200 killings of roles, and the thing stayed online. We're not done yet: not all of the roles are fully HA, and we still have some gaps, so it would be great if we could get some help to get to 100%.

Now that the video is done, I'd like to take you to the live thing. Can you switch me over to that terminal? OK, hopefully you can see this; the fonts are a bit smaller, but this is the system you saw earlier, and it's still up and running. We don't run the Chaos Monkey anymore, but you can see kubectl get pods is still running on a watch there, and the app is still making requests. And we have veritas here at the bottom. Just to show you what one of these things looks like, I'm going to exit veritas, exit this container that I'm in, which is a Diego debugger, and docker exec into one of the cells. Here you see the command line: it's docker exec, and then we get the ID of the first Diego cell. Basically, every image that Fissile spits out has a label with its role, so we can look it up easily. So we're just going to go in there, and I'm going to take you to a familiar place, probably. You can see here we're in /var/vcap. Because of the tight coupling between the peanut butter and the jelly that we talked about earlier, there are still some things we can't change: the templates still need a /var/vcap, and the way we load packages and run them still needs a /var/vcap. So when you go into one of these containers, you'll see the same structure. And that's about it; this is the live system running. Back to the presentation. That's the wrong view. There, I did it. OK, so, wrapping up here. Oh, this is your part. Yeah, so we still have work to do.
We want to add support for other types of base images. We want to improve layering: as you saw, we just have the base, then the stemcell, then packages and jobs. We could be much smarter there and take advantage of layering in Docker to reduce the amount of downloading we have to do. We also think that replacing monit is possible, so we would like to give that a try. And we want to continue the effort to decouple BOSH from Cloud Foundry.

With that: as of now, we've open sourced Fissile and configgin. The repos are available at github.com/hpcloud, under fissile and configgin, so you can go check them out right away. And what are we releasing with that? Just the tooling. There are no images; we're not going to be providing CF images for anyone. That's up to you. But hopefully, with the docs, or any collaboration you want to do, we can get images out of it very easily. That is, of course, the tooling's entire job.

I do want to say thanks to Hewlett Packard Enterprise for giving us the incentive and the time to work on this, as well as to the other members of the HCF team, who have also contributed to Fissile, and to the Cloud Foundry community, especially the BOSH project, for making this possible. Without the contracts they have laid out, we wouldn't have been able to do this, so it's really a testament to that. So with that, we'd like to open it up to Q&A. Thank you very much.