Hi, I'm Genevieve. I'm an engineer working on bbl at Pivotal Cloud Foundry. And I'm Nevin Farar, a product manager working on BOSH Bootloader at Pivotal Cloud Foundry. Maybe you've seen this before, but if you haven't, please make a note of the exits and use them should an alarm sound.

All right. So this is a talk about automating your life with bbl. More specifically, it's about how to bootstrap Cloud Foundry with bbl and Concourse. First, we'll start with a blank IaaS and describe how to get your first BOSH director running Concourse. Then we'll use that Concourse to make more BOSH directors to run Cloud Foundry. And finally, we'll configure Concourse to update everything, including itself.

So you might be asking: why is the BOSH Bootloader team talking about setting up your whole platform? We see people very early in the process of creating their Cloud Foundry environments, which gives us a lot of insight. We get to see a lot of different ways that people try to set up their environments, and we've realized there's not a lot of good opinion about how you should set up Cloud Foundry with Concourse, or how to distribute those deployments among multiple BOSH directors.

So let's look at one way you might set up your Cloud Foundry, and talk about why it can be problematic. Everybody starts with BOSH: you use bbl to get your BOSH director. This is natural; this is good. Then you might deploy Concourse after that. A lot of people will then take that same Concourse, point it back at the same BOSH director, and deploy Cloud Foundry. That's still all well and good. Maybe someday you get users, and you call this environment prod. Everything's good. You might want to create a staging environment too, so you deploy that to the same BOSH director. That's also fine. But one day, things catch on fire.
And you find yourself in a weird position, because now both environments are in a sorry state, and you have to do something special to bootstrap everything back from scratch. So how could we change that? How could we deploy this differently so we don't have this problem, so we have a better separation of concerns here?

Well, it all starts with BOSH, laid out the correct way. And of course you want Concourse; that was still a good move, not arguing there. But consider deploying more BOSH directors with Concourse, separating those directors by environment, by stage maybe, and then deploying a Cloud Foundry on each of those BOSH directors. In this manner, you have better isolation: if any one BOSH director, or maybe even a whole region, goes down, your production is still fine. And furthermore, even if production did go down, you have a well-tested procedure for stamping out more environments.

All right. So now we've told you what to do. But how do you actually do it? Generally speaking, you'll use bbl to deploy BOSH, BOSH to deploy Concourse, and then Concourse to deploy CF Application Runtime.

So let's start with bbl. What is bbl actually doing? The first command, bbl up, starts by generating a Terraform template. This Terraform template will create network resources, security groups, and potentially load balancers for either a Cloud Foundry or a Concourse deployment. bbl applies that template to actually create those resources. Then it translates the outputs from the Terraform apply stage into BOSH deployment variables in order to deploy your jumpbox, and then your director behind that jumpbox. Finally, it generates a cloud config for that BOSH director and uploads it. So where do these commands actually run?
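The artifacts bbl generates along the way all land in a state directory. As a rough sketch of what ends up in there (the exact layout varies by bbl version and IaaS, and names here are approximate):

```
bbl-state/
├── bbl-state.json       # top-level state: IaaS, bbl version, director info
├── create-jumpbox.sh    # wrapper around `bosh create-env` for the jumpbox
├── create-director.sh   # wrapper around `bosh create-env` for the director
├── terraform/           # generated template paving networks, security groups, LBs
├── cloud-config/        # the cloud config bbl uploads to the new director
└── vars/                # terraform.tfstate plus vars stores with generated credentials
```

This is the directory the rest of the talk refers to as the bbl state directory.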
For the first environment, your Concourse environment, you'll probably be running bbl on your local workstation. This will create a BOSH director and jumpbox in the cloud, obviously. You can then use the bbl CLI and the BOSH CLI to actually target that director and talk to it, and that's how you'll deploy Concourse. Then you can use the fly CLI to talk to that Concourse and add pipelines. With manifests in hand, you can now start to add pipelines that use bbl to deploy the BOSH directors that will deploy your Cloud Foundry.

All right, so let's look at some code and get some specifics. For the Concourse environment, the first command you'll run is bbl up, with an --lb-type flag specifying whether you want load balancers for a Cloud Foundry or a Concourse deployment. Once the environment is created, you can evaluate the output of a command called bbl print-env, which helps you set environment variables on your workstation so the BOSH CLI can talk directly to your director. That way you can go straight to the next step, which is running bosh deploy for Concourse.

All right, so let's go through the first pipeline. We're going to call this one the staging pipeline, and we're going to add it to our Concourse deployment. This is what it looks like. It's using cf-deployment-concourse-tasks, so we'll go through each of the jobs. In the first job of the pipeline, we're going to bbl up a BOSH director with all the goods we talked about before. The output of this step is a bbl state directory, which you'll want to store in a secure location like an S3 bucket or a private Git repo. All of the files in this directory were used to create your bbl environment, and they'll help you update and talk to that director later. You want this stored in a secure place; I cannot stress that enough. The next job uploads the stemcell to the BOSH director.
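Concretely, the workstation steps just described look roughly like this. The flags, stemcell URL, and Concourse manifest path are assumptions; adjust them for your IaaS and for whichever Concourse deployment manifest you actually use:

```shell
# Pave the IaaS, create the jumpbox and director, and upload a cloud config.
# --lb-type concourse provisions a load balancer suitable for Concourse.
bbl up --lb-type concourse

# Export BOSH_ENVIRONMENT, BOSH_CLIENT, BOSH_CA_CERT, and jumpbox proxy
# settings so the bosh CLI can reach the director through the jumpbox.
eval "$(bbl print-env)"

# Deploy Concourse from a BOSH deployment manifest (repo and path assumed).
bosh upload-stemcell https://bosh.io/d/stemcells/bosh-google-kvm-ubuntu-xenial-go_agent
bosh -d concourse deploy concourse-bosh-deployment/cluster/concourse.yml \
  --vars-store concourse-vars.yml
```

After this, fly can target the new Concourse and you can start setting pipelines.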
And then you can just run cf-deployment against that BOSH director. In this same pipeline, you can also do the teardown side of things. I do not recommend triggering that automatically; I do recommend triggering it manually. The first job in the teardown process deletes the cf-deployment. Then you delete any orphaned stemcells or releases that are no longer in use. And finally, you tear down the BOSH director, the jumpbox, and the IaaS paving, in that order.

Okay, so now we have another pipeline: your production pipeline. This one is actually so similar to staging that I didn't even run it; the staging one went green, so it was fine by me. The really cool thing about these pipelines is that you can pull down all of the latest releases, stemcells, and vulnerability fixes, and automatically roll them out to your environments.

So finally, the last step is to use Concourse to update Concourse. It's a little complicated to wire the bbl environment and the Concourse deployment you made from your workstation into the Concourse deployment itself, but it would look something like this. The first thing you'll want to do is persist the bbl state directory from your local machine to a secure location; that will be the input to this pipeline. The pipeline then goes through the same stages we talked about before: bbl up, which applies any updates to that director and jumpbox or the network configuration, and then a bosh deploy for your Concourse, which picks up any updates, say the latest release of Concourse or CredHub, whatever that might be.

Yeah, so that's a good starting place, right? You now have an automated process for spinning up environments. You have some BOSH directors. This is a baseline that everybody could start with, and it should be applicable to almost anybody.
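Pulling the staging jobs together, a skeleton of that pipeline might look like the following. The repo URIs, task file names, and params are approximations based on the cf-deployment-concourse-tasks repo; a real pipeline needs IaaS credentials and more wiring:

```yaml
resources:
- name: cf-deployment-concourse-tasks
  type: git
  source: {uri: https://github.com/cloudfoundry/cf-deployment-concourse-tasks.git}
- name: cf-deployment
  type: git
  source: {uri: https://github.com/cloudfoundry/cf-deployment.git}
- name: bbl-state            # private repo holding the bbl state directory
  type: git
  source: {uri: ((bbl_state_repo_uri)), private_key: ((bbl_state_repo_key))}

jobs:
- name: bbl-up
  plan:
  - in_parallel: [{get: cf-deployment-concourse-tasks}, {get: bbl-state}]
  - task: bbl-up
    file: cf-deployment-concourse-tasks/bbl-up/task.yml
    params: {BBL_IAAS: gcp, BBL_LB_TYPE: cf}
    ensure:                  # persist the state even if bbl up fails partway
      put: bbl-state
      params: {repository: updated-bbl-state}

- name: upload-stemcell
  plan:
  - in_parallel:
    - {get: cf-deployment-concourse-tasks}
    - {get: cf-deployment}
    - {get: bbl-state, passed: [bbl-up], trigger: true}
  - task: upload-stemcell
    file: cf-deployment-concourse-tasks/bosh-upload-stemcell-from-cf-deployment/task.yml

- name: deploy-cf
  plan:
  - in_parallel:
    - {get: cf-deployment-concourse-tasks}
    - {get: cf-deployment}
    - {get: bbl-state, passed: [upload-stemcell], trigger: true}
  - task: bosh-deploy
    file: cf-deployment-concourse-tasks/bosh-deploy/task.yml
    params: {SYSTEM_DOMAIN: staging.example.com}
```

The manually triggered teardown jobs would follow the same pattern in reverse: delete the cf-deployment, run a BOSH clean-up, then bbl destroy.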
But let's talk about next steps if you're going through this process as an operator, a little extra credit at the end of this. Try running this pipeline against another region or another IaaS. That will shake out any hard-coded variables you might have introduced while developing it, and it gives you a more robust platform that can stay up in the face of failure of entire regions or IaaSes. You also might want to set up SSO integration, so that once you do spin up an environment, it's ready to be logged into by your users, by developers. That way you're prepared for easy failover, should you need to bring an environment back or start a whole new one from scratch, say a fresh staging environment every week. Similarly, any steps you find yourself doing by hand after you've set up your environment, consider adding them at the end of your pipeline, right after the initial install. You might want to seed orgs and spaces and configure your Cloud Foundry further, the administrative tasks. And finally, to really be ready for disaster, consider adding another pipeline that uses BBR, BOSH Backup and Restore, to back up all of the databases to an external S3 bucket, so that you can bring them back in the case of disaster recovery. You can use the time resource, which is a Concourse resource, to trigger that pipeline every night.

All right. This has been a really high-level overview of how to bootstrap Cloud Foundry with bbl and Concourse, how to configure it to update automatically, and some ideas you can experiment with given that setup. Any questions? Thank you.

When you're running bbl for Concourse, does it spin up a BOSH director first and then Concourse, or does it just do Concourse? We spin up a BOSH director, and then use the Concourse deployment manifest to install Concourse. And how far along is your story with vSphere and bbl? bbl works on vSphere.
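A nightly BBR pipeline of that shape could be sketched like this. The time resource is the real Concourse resource doing the scheduling; the backup task and bucket resource names are hypothetical stand-ins for whatever wraps `bbr deployment backup` in your setup:

```yaml
resources:
- name: nightly
  type: time
  source:
    start: 1:00 AM
    stop: 2:00 AM
    location: America/New_York
- name: backups-bucket      # e.g. an s3 resource holding the backup artifacts
  type: s3
  source: {bucket: ((backup_bucket)), regexp: backups/cf-(.*).tgz}

jobs:
- name: backup-cf
  plan:
  - get: nightly
    trigger: true           # fires once inside the window each night
  - task: bbr-backup        # hypothetical task that runs `bbr deployment backup`
    file: ci-tasks/bbr-backup/task.yml
  - put: backups-bucket
    params: {file: backup-artifacts/cf-*.tgz}
```

The important property is that restores are exercised by the same machinery that does installs, so disaster recovery is just another pipeline run.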
We are working on NSX-T support for bbl in order to have a story around load balancers and VPCs. But right now, if you preconfigure your network and have a vSphere environment set up, you can install a BOSH director with bbl on it. Any other questions?

Yes, maybe some detail for the audience; we had this conversation just before the talk: the customization process for bbl for different CPIs, the range of customization that bbl supports. Yeah, we've recently exposed a lot of the internals of bbl, so you can add multiple ops files, use multi-CPI by adding ops files to the BOSH deployment manifest, and by adding Terraform overrides you should be able to accomplish pretty much anything you want. You had asked about adding multi-CPI before the talk, and that's an example of one thing you might want to do: configure a BOSH director to talk to more than one IaaS, especially for on-prem, where regions sort of need their own CPI configuration.

Any other questions? All right, that's it; you're free. Yeah, find us on Cloud Foundry Slack too; we hang out in bbl-users if you have any further questions. Thanks.
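As a concrete illustration of that customization flow, the sequence is plan, edit, then up. The file names here are hypothetical; the plan-then-edit step is what bbl exposes for this kind of customization:

```shell
# Write the plan (Terraform templates, create-director.sh, etc.)
# into the state directory without applying anything yet.
bbl plan --lb-type cf

# Terraform overrides: a *_override.tf file dropped into terraform/
# is merged into the generated template on the next bbl up.
cp my-extra-firewall-rules_override.tf terraform/

# Director customization: append ops files (for example a multi-CPI
# ops file) to the `bosh create-env` invocation in create-director.sh.
# Edit the script by hand; shown here conceptually.
vim create-director.sh

# Apply the customized plan.
bbl up
```

Because the scripts and templates live in the state directory, these customizations persist and are replayed on every subsequent bbl up.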