So, okay, thanks for joining us. We know it's the end of the second day and everybody's probably excited but exhausted, so we're really grateful that you're all here. We're going to talk about continuous delivery of Cloud Foundry: keeping your cutting-edge platform cutting edge. We think Cloud Foundry is the cutting-edge platform, of course, so how do you keep it that way once you install it? I'm Cora Iberkleid. I work for Pivotal out of New York. I'm a platform architect. I'm a platform architect as well, and I also work at Pivotal out of New York. So the objective we want to talk about is improving the ease, the quality, and the consistency of your platform. Platform meaning not just one Cloud Foundry foundation, but all of the foundations that compose the platform for your enterprise. This is what's known as the platform lifecycle management experience, and we're going to look at it from the angle of continuous delivery. We're used to talking about continuous delivery from an application delivery perspective, for software development, but we want to look at our platform, Cloud Foundry, from an operations perspective: this is the software that we're delivering to the enterprise. So we want to apply the same continuous delivery principles to the installation and maintenance of the foundations themselves. This is one piece of the puzzle of making the whole platform lifecycle management experience easy and consistent, but it's an important piece. So if we can do it well, if we can do it right, what are the outcomes we can expect? One of the main ones — there's a bubble at the bottom of the slide — is a consistent configuration and version parity across all of our foundations. There are enterprises running tens of foundations, right? Some report over 30 or over 40 foundations.
So how do you make sure that they're all running the same version as each other, and that you have a consistent configuration across them? That's one important outcome. Also, quick adoption of features and security updates. You might have experienced today how much trouble it is to install Cloud Foundry manually, or even to introduce your own automation. Once you've done that and you have an environment that's working well and reliably, new features keep coming out, and you can start to fall three months, six months behind, so you're no longer offering your developers the latest features of the platform. Or as security patches come out: the more time it takes you to install them, the wider the window where you might have a security breach, right? We saw recently in the US with Equifax — there was a Struts patch they didn't install quickly enough, and they were... Hacked. Hacked, exactly. So by freeing up our platform engineers from these more mundane tasks that really can be automated, we're empowering and enabling them to work on more creative activities. And finally, it's also a question of confidence, right? Confidence in the automation pipelines as an autonomous process that polls for product changes and continuously, automatically applies them, and also confidence in the platform, in the quality of the platform we're providing for our developers, without necessarily having to intervene manually. Yeah, so that's the outcome we think can be achieved through pipelines, through continuous delivery of your platform. What I'm going to introduce now are some basic concepts around pipelines — as in, why pipelines — and then we'll talk about the open source CF pipelines that are available, then the PCF pipelines, and then a quick summary. So that's the overall talk we're going to do today.
So let's take an example of an organization that automated their CF environment, where they were really focused on saying: all right, we're not going to deploy CF or BOSH manually anymore, we're going to write some automation for it and make sure we have automation in place for those tasks, right? And this is an actual customer example — one of our customers who went through this process of writing automation for their Cloud Foundry environments. What they did was use Git for storing all their manifest files, credentials, and configuration files — in this case Bitbucket repositories, but it could be something else — and then they wrote some automation scripts, in this case Ansible playbooks, though again you could use something else, to install, deploy, and update BOSH as well as Cloud Foundry, right? And it was going pretty well for them. They had this workflow — I don't want to spend too much time on it — but they had a development box, and then their development or production environment. They would make some changes on their box, push them to Bitbucket, and that would update their BOSH manifests and ultimately deploy the BOSH Director, CF, or whatever broker releases. That all worked in the happy path scenario. In the diagram, the smiley face marks the manual part and the rest is the automated way of doing it. All right, but what happened when something went wrong? What happened when something went wrong in one of the BOSH-deployed deployments, or maybe at the BOSH level itself? They would go into the jump box, edit the Ansible scripts or playbooks they had there, and trigger another set of generated BOSH manifests. So it would update the BOSH manifests and then do a bosh deploy, which is great.
It fixed the problem, but now I've got to make sure the engineer who fixed it goes back and updates the Bitbucket repositories, so they're up to date and good. And not just for this environment, but for all the other environments, so that everything is consistently configured across all of them, right? Now let's use a Concourse pipeline-based workflow. I'm definitely biased towards Concourse. I think you should use Concourse for any kind of pipeline, not just for Cloud Foundry — anything from app development to operational delivery pipelines. But what is Concourse? Concourse is a pipeline-based CI — really more a CD — system that provides a declarative configuration for your pipelines, which means I can take that declarative pipeline and run it anywhere, on any Concourse in the world. The more important feature, though, is repeatable builds. Repeatable builds let you say: I'm going to run my pipeline once, a second time, a third time, and I know those are all fresh builds with no residue from the previous build — so if it doesn't work, there's a particular change in that build that's breaking it. Just using Concourse this way, your happy path is still great: I still edit my manifests on my development box, I commit those changes to Bitbucket, and that triggers the whole workflow where the Concourse pipelines run, update the manifests, and finally do a bosh deploy. But what's more interesting is when things go wrong. When things go wrong, I'm not editing Ansible scripts on some particular jump box; I'm still editing my manifests on my dev box, I execute a task, and if it runs fine, I just commit it, right?
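To make the declarative idea concrete, here's a minimal sketch of a pipeline of the kind described above — the resource name, repo URI, and task path are all made up for illustration:

```yaml
# Hypothetical minimal Concourse pipeline: watch a Git repo of manifests
# and run a BOSH deploy task whenever something changes.
resources:
- name: platform-config            # assumed name for the config repo
  type: git
  source:
    uri: https://bitbucket.example.com/platform/config.git
    branch: master

jobs:
- name: deploy-cf
  plan:
  - get: platform-config
    trigger: true                  # every commit triggers a fresh, repeatable build
  - task: bosh-deploy
    file: platform-config/ci/tasks/bosh-deploy.yml
```

Because the whole pipeline is just this one YAML file, you set it on any Concourse with `fly -t <target> set-pipeline -p deploy-cf -c pipeline.yml`, and the same file runs unchanged anywhere.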
So I've now ensured that I don't have inconsistent configurations, by maintaining one place that I run and trigger everything from. I just wanted to give you a quick view of it. I don't expect everyone to follow the whole workflow right now, but the general idea — using pipelines, Concourse pipelines, for automation of your platform — is the key message I want to send. So from here, we're going to focus first on the open source CF pipelines. OK, so for an open source pipeline, you have to build the pipeline itself. This example is very simple: three jobs. The first one installs BOSH for you, the second uploads the stemcells, and the third installs or updates Cloud Foundry. We've put the GitHub references throughout the presentation for anything you want to look up. But let's go through this and focus a little more on some of the subcomponents. Before that whole pipeline runs, what about the infrastructure, right? We have to configure the infrastructure before we actually deploy the BOSH Director. So there's a cool tool called bosh-bootloader — bbl, pronounced "bubble", for short. And again, it's open source. The command bbl up will actually provision the infrastructure for you and deploy the BOSH Director. Literally as simply as you see on the screen: by calling bbl up with a certain set of credentials — that's what you need for AWS. This also works for Google Cloud, and it will work soon for Azure as well. Optionally, you can create load balancers too. Just by running this command, you can do a bosh login when it's done. So you get the infrastructure on your IaaS, as well as the BOSH Director.
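The invocation on the slide looks roughly like this — a hedged sketch using the AWS flags bosh-bootloader supported at the time, with placeholder values:

```shell
# Illustrative AWS invocation; credential values and region are placeholders.
bbl up \
  --iaas aws \
  --aws-access-key-id "$AWS_ACCESS_KEY_ID" \
  --aws-secret-access-key "$AWS_SECRET_ACCESS_KEY" \
  --aws-region us-east-1

# Optionally attach load balancers (cert/key paths are placeholders):
bbl create-lbs --type cf --cert lb.crt --key lb.key
```

One command provisions the IaaS resources and stands up a BOSH Director, with no further prompting.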
Things that before involved following a whole bunch of documentation and a whole bunch of steps. That bbl up command will output a JSON file, a bbl-state.json file, that has a description of the infrastructure it just provisioned, which you can store — a private repo would probably be a good place for it. And that bbl-state.json has all of the information, including your BOSH Director address and the credentials to get to it. As you can see in the second box, to do that bosh login, for the values of all of those parameters we're using the bbl command — assuming that bbl-state.json is in the same directory we're working in, otherwise you point to it — and you can simply query for the Director address or the username and password. Literally, just by following those steps, you've logged into the BOSH Director in your newly set up environment. So really simple. In terms of the pipeline, we create a Concourse job that's ultimately going to call this bbl up command. It does that by executing a task. Even though you have to write the job, the task is provided for you. It's available in a community repository called cf-deployment-concourse-tasks. A task is simply the execution of a script in an isolated environment, with all the resources provided to it. In this case, we want to run that bbl up command, which is provided to us in a script. The isolated environment is a Docker image, provided through Docker Hub, that has, for example, the bbl binary. So the task.yml file, which is what Concourse interprets, the actual shell script that eventually calls the bbl up command, and that Docker image are all provided for you and referenced through the task definition in the cf-deployment-concourse-tasks repository. So this is what the repository looks like. On the left-hand side, you can see the second task listed is the bbl-up task.
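The login steps described here can be sketched as follows — assuming bbl-state.json is in the current working directory, and using the bbl query subcommands from that era of the tool:

```shell
# Query bbl-state.json for everything the BOSH CLI needs, then verify access.
export BOSH_ENVIRONMENT="$(bbl director-address)"
export BOSH_CA_CERT="$(bbl director-ca-cert)"
export BOSH_CLIENT="$(bbl director-username)"
export BOSH_CLIENT_SECRET="$(bbl director-password)"
bosh env   # should print details of the newly deployed Director
```

No values are copied by hand; everything comes straight out of the state file bbl wrote.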
So on the right-hand side, we're digging into that directory. It has the YAML file for Concourse to read and the shell script to be executed. That shell script is what calls the bbl up command, and the YAML file points to the Docker image that's provided through Docker Hub. So that's the whole definition — your job just has to call this task. Now, we might want a little bit of configuration, right? Because bbl, as you can see, is quite opinionated: it didn't ask us anything, it just set up our infrastructure and deployed the BOSH Director. So there's an opportunity to tell the bbl-up task to reference some ops files, which are basically YAML patches that get merged into your BOSH manifests. You supply that as an input resource pointing to a directory. There are some sample ops files inside the same cf-deployment-concourse-tasks repo, in an operations directory — things like scale-to-one-az.yml, because cf-deployment, which we'll talk about in a minute, follows a three-AZ configuration by default. If you want to scale it down, you simply point to this ops file and it'll deploy a single-AZ foundation for you. So this is what the job definition would look like; this is the piece that you might have to add. You can see the declaration of the job — the first job in our three-job pipeline is update-bosh — and near the bottom of the screen is the ops files variable, which lists exactly which ops files we've chosen to run from that directory of available operations files. You can also point that at your own operations directory and write your own variations of ops files. And you can see the environment variables, which I previously showed you on the command line, that are passed to the bbl up command itself. And at the end, that bbl-state.json file is stored.
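Pulled together, the job might look something like the sketch below. The input names, params, and resource names here are illustrative — the authoritative shapes live in the bbl-up task definition inside the cf-deployment-concourse-tasks repo:

```yaml
# Sketch of an update-bosh job wiring in the community bbl-up task.
jobs:
- name: update-bosh
  plan:
  - get: cf-deployment-concourse-tasks   # the community tasks repo, as a git resource
  - get: bbl-state                       # private repo holding bbl-state.json
  - get: ops-files                       # directory of operations files
  - task: bbl-up
    file: cf-deployment-concourse-tasks/bbl-up/task.yml
    params:
      BBL_IAAS: aws                      # hypothetical values
      BBL_AWS_REGION: us-east-1
      OPS_FILES: scale-to-one-az.yml     # which ops files to apply
    ensure:
      put: bbl-state                     # always persist the updated bbl-state.json
      params: {repository: updated-bbl-state}
```

The `ensure` step is the "stored at the end" part: whatever happens, the new state file lands back in the private repo.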
Also, you know, we could put that in a private repository. But it really is that simple — that will get you a BOSH Director. So in summary, our example update-bosh job declares the cf-deployment-concourse-tasks repository as a resource and executes the bbl-up task, whose script calls the actual bbl up command. bbl uses Terraform under the covers to set up the infrastructure — on AWS or Google for the moment, with Azure coming soon — and it also calls bosh create-env to deploy the BOSH Director, all by executing that job with one mouse click. So it's easy, predictable, and repeatable. Now, what if you want a little more control over your IaaS? Once you call bbl up, you've got your BOSH cloud config created, so you could export that cloud config, edit the YAML file, and then update the cloud config back on your BOSH Director — that's one way to do it. If you want even more control, you can skip bbl: create the infrastructure yourself with your own Terraform and then call bosh create-env, which again is what bbl does under the covers, right? So we move forward: our pipeline had three jobs, the second one was upload-stemcells, so assuming that runs fine, let's focus on the third, the install-or-update-CF job. How do you decide exactly which Elastic Runtime sub-component versions — sorry, Cloud Foundry Application Runtime component versions — you're running? Previously you might have used cf-release to define that. cf-deployment is a more flexible replacement for it: it has a canonical manifest declaring the different versions — again, cf-deployment is an open source GitHub repo that provides this for you — and it's just more readable, and it emphasizes security and production readiness by default.
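The export-edit-update loop for the cloud config mentioned above is just three BOSH CLI steps — environment alias and filename here are placeholders:

```shell
# Pull down the cloud config bbl generated, tweak it, and push it back.
bosh -e my-env cloud-config > cloud-config.yml
# ...edit cloud-config.yml (VM types, networks, AZs)...
bosh -e my-env update-cloud-config cloud-config.yml
```

That gives you IaaS-level control without abandoning the rest of the bbl workflow.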
On that production readiness: for example, I mentioned that by default you'll get a three-AZ deployment, and you can use the ops file scale-to-one-az.yml to scale that down, but by default you do get that production-ready three-AZ deployment — that's all defined in cf-deployment. So we create a job that's going to use cf-deployment as our BOSH manifest to deploy the runtime. Again, you can still use ops files to configure things at this point, in the same way you did for the bbl-up job; you can declare them here as well, and common configurations are included in the cf-deployment repo. And finally, for credential management, this leverages BOSH 2.0 constructs. Previously, all the credentials were interpolated into the manifest file; BOSH 2.0 decoupled the credential store from the manifest, and this leverages that. So when the deployment runs, it stores all of the credentials in a separate vars store file. It generates strong passwords and certain keys and stores them separately, so you could put that in a private repo, or in CredHub, or something like that. Awesome, so thank you, Cora. So that was the open source pipelines. I'm going to quickly describe what the PCF pipelines look like. There are two benefits to knowing this: one, if you're a PCF customer, it's great — you know what pipelines to use. But also, these pipelines have been used in several customer scenarios at this point, and they have a level of maturity, so you can adopt those patterns even in the open source world; I think it's extremely valuable to understand them. You could either download them from network.pivotal.io — you can sign up and access the pipelines there — or you could access the pcf-pipelines repo, which has all these different pipelines available. So what are PCF pipelines?
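The deploy that this job runs boils down to one BOSH CLI invocation, roughly like this — environment alias, domain, and file locations are placeholders:

```shell
# Deploy cf-deployment with an ops file and a local vars store.
# The --vars-store file is where generated passwords/keys land,
# kept out of the manifest per the BOSH 2.0 split.
bosh -e my-env -d cf deploy cf-deployment/cf-deployment.yml \
  -o cf-deployment/operations/scale-to-one-az.yml \
  --vars-store deployment-vars.yml \
  -v system_domain=sys.example.com
```

Each `-o` flag layers one ops file onto the canonical manifest, and `deployment-vars.yml` is the generated-credentials file you'd keep in a private repo or move into CredHub.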
PCF pipelines are for installing and upgrading BOSH and Pivotal Cloud Foundry, as well as the data services — RabbitMQ, Redis, MySQL, whatever that may be. They adhere to a reference architecture defined by Pivotal, specifying things like, as Cora mentioned, multi-AZ environments, but also a particular number of instances for, say, the Cloud Controller or the router — a more HA configuration. The pipelines are available for AWS, GCP, Azure, and vSphere. They also have offline options, which are really more for government organizations that want everything locked down, with nothing reaching outside the organization. There are other important things, but for me the most interesting one is the jobs for wiping environments. As I go through the pipeline itself — this is the install pipeline — you'll see something called the wipe-environment job, which sits almost even before create-infrastructure, really. What the wipe-environment job is super useful for is this: I create my infrastructure, I see something going wrong, and I don't think it's configured the right way — I can use wipe-environment to just wipe out the environment and start from scratch. Or you've deployed your BOSH Director and your Cloud Foundry and you're still not happy with it — it'll do a bosh delete-deployment and also remove the infrastructure for you. I think it's a pretty cool pattern to have as part of the pipeline. It's also a little risky, so don't expose it for just anyone to click and wipe out your environment. The other jobs here are similar to what we had in our open source pipeline. The one additional one is create-infrastructure. The pipeline we showed in the open source case had update-bosh, upload-stemcells, and then update-cf.
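In open source terms, a wipe-environment job boils down to roughly this — environment alias and deployment name are made up:

```shell
# Tear down the Cloud Foundry deployment, then the Director and IaaS infra.
bosh -e my-env -d cf delete-deployment -n   # -n: non-interactive, pipeline-friendly
bbl destroy --no-confirm                    # removes Director and Terraformed infrastructure
```

Exactly because it is this destructive, you'd restrict who can trigger the job.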
Here you have a create-infrastructure job, which uses Terraform — for, say, AWS or GCP — to bootstrap all the infrastructure required for setting up BOSH and Cloud Foundry. After that, the configure-director and deploy-director jobs set up your BOSH, and the upload-ert and deploy-ert jobs upload and set up your Cloud Foundry runtime. You can have similar patterns for installing, say, RabbitMQ, Redis, or MySQL, so this is a repeatable pattern you can use across other deployments as well. The upgrade pipelines are quite simple. There's something called the upgrade-tile pipeline, which is a generic upgrade pipeline. A tile is nothing but a BOSH deployment converted into a somewhat fancier, easier-to-use interface. But really, at the end of the day, each of them — Rabbit, ERT, MySQL — is just a BOSH deployment. The good thing here is that there's something called a regulator, which is really a way for me to say: every 30 minutes, check for something new, and if there's a new version, go fetch it for me. It has two inputs: the tile and the schedule. So it triggers either every 30 minutes or when there's a new version of the tile. Then it does the upload-and-stage-tile, and the apply-changes is really the bosh deploy. The additional bit about apply-changes is that if I've configured all of, say, three deployments — Rabbit, ERT, and MySQL — and I run apply-changes once, it deploys all three at once. That's pretty interesting, because what you can do is set up your pipelines so everything stops at the second job. Once it's all staged and ready to go, you click one apply-changes and all of your BOSH deployments go update all the different components at once. And then lastly, there are the upgrade-buildpack pipelines.
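The regulator's "two inputs" map naturally onto two Concourse resources — a timer and a product feed. A hedged sketch, with resource names and product slug chosen for illustration:

```yaml
# Sketch of the regulator idea: trigger on a schedule OR on a new tile version.
resources:
- name: schedule
  type: time
  source: {interval: 30m}        # "every 30 minutes"
- name: mysql-tile
  type: pivnet                   # the Pivotal Network Concourse resource type
  source:
    api_token: ((pivnet_token))
    product_slug: p-mysql

jobs:
- name: regulator
  plan:
  - get: schedule
    trigger: true                # fire on the timer...
  - get: mysql-tile
    trigger: true                # ...or whenever a new tile version appears
```

Downstream jobs (upload-and-stage-tile, apply-changes) then hang off this job in the pipeline.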
It looks extremely complex, but it's actually pretty simple. It's all the system buildpacks — right from Go to Java to Ruby to the binary buildpack — and all of them follow the same structure, from a regulator, to staging the buildpacks, to ultimately promoting them in your CF environment. All right, so those were the pipelines we talked about today, the open source ones and the PCF ones. Those are the basic starting-out pipelines. There have also been some really advanced patterns built by some of our teams, as well as in pairing with customers. The first one is yaml-patch. It's similar in spirit to ops files. Without yaml-patch, imagine this: there's a team that maintains pcf-pipelines, building and releasing new versions of the pipelines, but you're using those pipelines and you have some customizations you want to add — which makes them your own Cloud Foundry pipelines, right? Now, when the team that builds these pipelines releases a new version, how do I make that merge? I run into compatibility issues. Either I drop my customizations, which doesn't make sense, or I hit compatibility challenges — and if there's a problem, I don't know where it is, in my customizations or in the PCF pipelines. With yaml-patch, what you can really do is keep these patches — similar to ops files, a set of YAML files that describe the customization you need — and every time there's a new version of the pipeline, you apply those patches to that version. You stay in sync with the latest, but you also keep your own customizations.
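The workflow can be sketched as follows — a hedged example in which the pipeline path and target name are illustrative; the yaml-patch binary reads the original YAML on stdin and writes the patched result to stdout:

```shell
# Re-apply local customizations to a freshly released upstream pipeline,
# then set the patched result on Concourse.
yaml-patch -o my-customizations.yml \
  < pcf-pipelines/install-pcf/aws/pipeline.yml \
  > pipeline-patched.yml
fly -t my-concourse set-pipeline -p install-pcf -c pipeline-patched.yml
```

Upstream upgrades become mechanical: pull the new release, re-run the patch, and only touch `my-customizations.yml` when your own needs change.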
I introduce this because I think it's useful even in the open source world, where you're saying: all right, I have this set of environments — a development environment, an engineering environment, maybe a production environment — and I have certain customizations for production that may not apply to my dev or sandbox environments. The second advanced topic: everything we spoke about today was about a single environment. How do I build pipelines in a way I can use for multiple CF environments? This could be for different reasons: you may have more than one data center, different geographical locations, or different development and production environments. So you may have multiple CF environments. What this describes is an approach, a pattern, of using a bill of materials, where I define the things I'm providing as a platform and what versions they're at. When there's a new version of those, I release it through my development and production environments. So here, the inputs to Concourse are either configuration changes, pipeline script changes, maybe a new tile, or maybe new container images. Those all go to my sandbox environment first. I try everything there, and I may go through a few iterations — just like app development, right? As you write code, you test changes in development and make sure they're good, and then you release that bill of materials, with those versions, to your dev environments. If it's all gone well there, I can go ahead and do the same thing for production, right? There's actually a simpler version of this.
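A bill of materials in this pattern might be a small versioned file like the one below — entirely hypothetical names and versions, shown only to make the shape concrete:

```yaml
# Hypothetical bill-of-materials file: one pinned list of what the platform
# provides and at which versions, promoted environment by environment.
environment: sandbox            # promoted next to dev, then production
products:
- name: elastic-runtime
  version: 1.12.2
- name: p-rabbitmq
  version: 1.10.1
- name: p-mysql
  version: 1.10.3
```

Promoting a release then means promoting this one file: the same pinned versions that passed in sandbox are what dev and production receive.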
There's a more complex version of this as well, but again, I call it an advanced topic: once you get more maturity with your pipelines, you can start exploring these patterns, because you're definitely going to need them. All right, I think that's about it. This clicker has stopped working. All right, Cora, over to you. So the main takeaways from this talk. Number one, installing and updating Cloud Foundry has become really easy. Take advantage and enjoy it — it's pretty cool. And in the long run, especially as your organization grows and matures and the more foundations you're maintaining, it's really the only way to ensure that you're providing the most consistent, secure, and up-to-date platform for your organization. And finally, continuous automation of Cloud Foundry is relatively new — I mean, there are enterprise customers using it, for sure, but it's relatively new — and it's rapidly growing and improving, so stay engaged with the community. That's it, thank you. Thank you. Questions, please? Yeah. That'll be uploaded, with all the resources we presented. Yeah. Questions? Okay. Yes. We know the go-patch syntax that is used when patching BOSH manifests. Is yaml-patch the same? It looks like it's different. What is the first tool you mentioned? Well, the syntax used by ops files comes from a library called go-patch. So we call that go-patch — but is it the same? It could be using the same constructs; I'm not exactly sure whether it's using go-patch specifically. Oh, okay. So it's not the same. It's a different thing, because with bosh interpolate you can do this with go-patch. It's fundamentally the same idea — go-patch is essentially the same thing as Git patches, or standard patches. There might be alternative solutions. Yeah. I can tell you more. Cool. Any other questions? Thank you for the presentation.
Just one question about OpenStack — just to know if there's some work currently on this provider. I believe there are PCF pipelines for OpenStack as well; I'm not so sure about the open source world. It was not on the slide, I guess. Sorry? It was not on the slide. Good point — it's in the works, I think; that's why it's not on the slide. Yeah, some of the artifacts are in the pipelines, so if you look on GitHub you'll see some of the scripts for it, but I don't think it's at the same level of maturity as everything else. Yeah, so get in there. Thank you for the presentation, it was very interesting. So my question is, how long do these pipelines typically run? Because it seems like every time you do something, you make a full-blown deployment of Cloud Foundry. Is there a way to speed things up? Because I think the feedback loop is important, and it should be smaller. Is there maybe a way, before you promote the build, to have a more lightweight deployment of Cloud Foundry first, before you do the full-blown deployment? That's a really good point. There are some ways. One is when you make smaller changes: at the end of the day, it's doing a bosh deploy of CF, and if you're making smaller changes, it may skip through the rest and take very little time. The other is something we haven't mentioned here: smoke tests. If you have continuously running smoke tests, then you may not push changes as often — you may do it only once a week instead. That can help reduce the frequency. But yeah, this is just pipelining what exists; as the deployment of Cloud Foundry itself gets faster because of product improvements, you'll feel that here too.
So that is definitely a pain that we know when it comes to the builds — I agree. I have another concern that we faced when building the Orange pipelines for our Cloud Foundry OSS deployment: the versions of BOSH releases end up specified in the pipeline when you're using the standard task for bosh deploy. And that's a pain, because as an operator trying to test-drive a new set of versions in the sandbox environment, I just want to specify the new versions in a deployment manifest, not in my pipeline. I don't want to update the pipeline every time I have something new to test. So what is the state of the work you presented — are the versions of the BOSH releases and stemcells defined in the pipeline or in the BOSH deployment? I think the pipelines here do specify the BOSH release versions. That's in the actual BOSH manifest, but I feel those can be abstracted out into Concourse variables, which you update in your parameters file when you want new versions; that can trigger your Concourse pipelines, which trigger the deployments. Basically, you could parametrize the versions you want to update — was that the...? No: as an operator, I want the BOSH deployment manifest to describe the exact state of what I'm deploying. So I don't want the Concourse resources to fix the exact version that is actually deployed. It's just that the Concourse task was forcing us to say that we were deploying, for example, the PostgreSQL BOSH release at version 45, specified as a Concourse resource, and not as the version from the BOSH deployment. Is that still true? Maybe it's not the case anymore. Yeah, we can definitely talk after, once others have left. Any other questions? No? Thank you all for coming to the last session.
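One hedged way to reconcile the two positions discussed here — exact versions visible in source control, without hardcoding them in the pipeline — is to template the version in the manifest and pin it in a small vars file that lives next to the manifest in Git (release name and variable name below are illustrative):

```yaml
# Fragment of a deployment manifest: the release version is templated,
# not fixed in the Concourse pipeline itself.
releases:
- name: postgres
  version: ((postgres_release_version))
```

At deploy time you'd supply the pinned versions with something like `bosh deploy manifest.yml -l versions.yml`, where `versions.yml` contains `postgres_release_version: "45"`. A commit bumping that file both records the exact deployed state and triggers the pipeline.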
I hope you had a good CF Summit.