So all right everyone, I'm going to start my talk about Cloud Foundry platform automation with Concourse. My name is Dennis, I'm with anynines. I'm a platform engineer on the enterprise operations team. I do some support, some operations, and most of the time I try not to do anything twice, which means I'm heavily working on our automation concept for operations. Some of you might also know me as the valued underling of this guy: this is Sven, our technical lead for the platform. He made some advertisement for me in the CF operations channel.

Okay, what are we talking about? We have a product which is tightly connected to Cloud Foundry, and this means we are dealing with several problems. On the one side we have open source releases we need to update regularly, like your UAA release, your Garden release, your Cloud Foundry release. On the other side we have our own software releases, for example our data services, that we need to update. We have to update the BOSH stemcells, and we have to maintain our Cloud Foundry runtime.

This talk is about how we deliver a software release with all these components to our customer systems. And that's not a trivial task, because our customers are using different infrastructures. Some are using AWS, some are using Azure, some are already using Alibaba Cloud. They have multiple BOSH directors. They are running multiple systems at once, which means they have a staging environment, a second staging environment, a production environment, a second production environment. And they have multiples of these environments, which means they have a setup on GCP, a setup on AWS, a setup on Azure. We need to be able to constantly and continuously ship new software releases to all these systems every week, and the question of course is: how are we going to do it?
We're going to do it with Concourse. So let's look into our deployment process. On the left side we have our CI/CD system, which somehow gets some deployment manifests, mostly from a GitHub repository, and sends these deployment manifests to our BOSH director. The director downloads all the releases from the release repositories and S3 buckets into the director blobstore and then provisions our deployments.

But the tricky question is what we actually put into this deployment manifest, because we are not just supporting one infrastructure, we are supporting multiple infrastructures, and we also have completely different environments. One production system of customer A might need 10 Diego cells; another production system might require 50. So we are in a position where we cannot just hardcode everything into the deployment manifest.

What we figured out, which worked quite well for us, is to separate it. We have an environment-specific configuration: you have this specific production environment, and we split the configuration of this production environment into an IaaS config, into BOSH ops files, and into a cloud config. What this enables us to do is: we can specify how many Diego cells you want, we can specify which networks you want, we can specify which VM types and persistent disk types, and we are able to completely manipulate the deployment manifest with BOSH operations files.

Of course we are not saving the credentials in GitHub, so we are working with a credential store. For us it's CredHub, but you can use pretty much anything which is supported, for example HashiCorp Vault. The last thing is the bare-bones deployment manifest. This is just the deployment manifest which says: we have this deployment name.
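To make that concrete, a BOSH operations file of the kind described above is just a list of patch operations against the base manifest. A minimal sketch, with an instance count and VM type name that are purely illustrative:

```yaml
# Hypothetical ops file for one production environment:
# scale the Diego cells and pick an environment-specific VM type.
- type: replace
  path: /instance_groups/name=diego-cell/instances
  value: 50
- type: replace
  path: /instance_groups/name=diego-cell/vm_type
  value: large
```

Applied with `bosh deploy -o`, files like this let the same bare-bones manifest serve every environment.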
It has these instance groups, these jobs, and these releases, and all the rest gets enriched with the environment configuration and the cloud config, which is already at the BOSH director.

So that's how it really looks: our CI/CD knows we are now in the production environment of customer A, so we need to get the environment config of customer A. We take the BOSH manifest and we enrich the BOSH manifest with the IaaS config. We update the cloud config at the BOSH director, and we might also change something on the deployment manifest with BOSH operations files. We send this complete deployment manifest to the director, and the director will then go to your credential store of choice and take all the credentials.

So we can turn this into a CI/CD process. First we take all our resources, like the open source releases, the stemcells, and our own releases. We have a build step, and at the end of this build step we have something we call the release candidate, which is a release YAML file that contains all the manifests you need, for example to deploy your Cloud Foundry, to deploy your brokers, to register your brokers. Then we take this information and we deploy it on our staging environment, and afterwards we run a test suite and a security scan. After that you can tell that the deployment, first, is migrating, and second, is not completely broken. When we finish the security scan, we have something we call the final release.

But right now we still have one problem: we have different infrastructures. If we test this, for example, only on a vSphere infrastructure, it doesn't mean that it's working on Azure, on AWS, or any other infrastructure. So this concept right now does not have multi-infrastructure support.
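The talk doesn't show the file itself, but a release candidate of the kind described could plausibly look like this; every name and version below is made up for illustration:

```yaml
# Hypothetical release-candidate YAML: pins every component the
# downstream pipelines will deploy and test as one unit.
name: platform-release-candidate
version: "2019.18-rc.1"
stemcells:
  - os: ubuntu-xenial
    version: "456.30"
releases:
  - name: uaa                  # open source
    version: "71.0"
  - name: garden-runc          # open source
    version: "1.19.0"
  - name: data-service-broker  # our own release (name invented)
    version: "3.2.1"
```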
So what we do: we take the deploy step, the run-test-suite step, and the security scanning, and just put them into another box. We build our release candidate and then we distribute it to different pipelines, and the job of these different pipelines is to test it on AWS, on Azure, on vSphere, and to create a final release for this specific infrastructure. Afterwards I'm able to say, with the AWS release, that I'm able to migrate the software on AWS. We call this process the release builder.

But your problem is not only about deploying BOSH. We figured out that there are several layers of problems. It starts with your infrastructure automation. Then you have your BOSH directors: you need to provision them, you need to update the stemcells, you need to update the releases of the director. Then you have your deployments: Cloud Foundry is part of your deployments, the brokers are part of your deployments, and you need to update releases and stemcells again. Sometimes you also have default applications you push on your runtime, and when you run the pipeline the first time you want to make sure that this also gets carried out by the CI/CD system.

Looking at these different problems, we figured out that they are all basically the same. The only thing that really changes is the tooling. For example, for infrastructure automation you're going to use Terraform for Azure, or you're going to use CloudFormation for AWS.
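In Concourse terms, the release builder's fan-out might be sketched like this. Resource, task, and job names are invented, but the `passed` constraint is the real mechanism for chaining the build step to the per-IaaS pipelines:

```yaml
# Sketch: one build job, then one finalize job per infrastructure.
jobs:
  - name: build-release-candidate
    plan:
      - get: component-releases
        trigger: true                 # a new OSS or own release arrives
      - task: build-candidate
        file: tasks/build-candidate.yml
      - put: release-candidate        # e.g. an S3-backed resource
  - name: finalize-aws                # finalize-azure / finalize-vsphere
    plan:                             # follow the same pattern
      - get: release-candidate
        trigger: true
        passed: [build-release-candidate]
      - task: deploy-test-scan        # deploy, test suite, security scan
        file: tasks/deploy-test-scan.yml
        params: {IAAS: aws}
      - put: final-release-aws
```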
You're going to use the BOSH CLI to provision your directors. You're going to use BOSH and Cloud Foundry for your deployments, and again you're going to use Cloud Foundry to deploy your applications.

So what we did is describe a process which can be applied to all of these layers. In our process we always, at first, provide a release, which triggers the CI/CD pipeline. Then we provision something, and afterwards we test it. For example, if you provision your Cloud Foundry runtime, you want to make sure that your users can actually create an org and a space, that you can create your services with your brokers, and that you can actually reach the Cloud Foundry API. All of these steps can fail. So if you are migrating the Cloud Foundry manifest, you're deploying it and it's failing, there's a feedback loop back to the release: you run the CI/CD, you see it didn't work, and it breaks at this point. The operator needs to figure out why the release was not working, and then you need to cut a new release and start the process over. So it either fails on migration or on testing and goes back, or it continues.

Now you can build one big process out of all these layers together. First you automate the infrastructure, then you provision your BOSH directors and update them, then you do your BOSH deployments, and at the very end you deploy your CF applications.

Actually, it's not that simple. If you're familiar with Concourse: Concourse itself is also mostly a BOSH deployment. So when you want to run a pipeline, you need to have a Concourse first, which means you need a BOSH director, and you also need to have a running infrastructure underneath. We're dealing with a chicken-or-egg problem here. What we figured out makes this problem much easier to solve is to separate the Concourse environment from the rest of the environment. So we came up with something we call the Slave Concourse, and we'll explain a little bit more later why it's called that. In our case it has a credential store (we're using CredHub), it has a BOSH director, and it has the Concourse deployment.

This Slave Concourse is able to bootstrap all the other resources needed for your customer environment, which means multiple directors. At the very end we are also able to provision new directors with a BOSH director, which means all the directors you see underneath are actually deployed with the Slave Concourse's director. That makes it very easy to bootstrap new directors and to monitor them. So we've got this customer environment, and obviously it's pretty cool: we have a unified credential store, which means all these directors underneath use the CredHub of the Slave Concourse. Ultimately it gives us the opportunity to connect our Concourse directly with this CredHub and to get all the credentials we need in order to deploy all the directors automatically. Technically you could also give the Concourse just a credentials file, but think about it: what if you rotate some certificates in Cloud Foundry? You'd always need to change the credentials in the Concourse as well. Using the CredHub of all these directors directly makes it much easier: every time you run the pipeline, it's going to have the latest credentials from these directors.

So of course, when we have a Slave Concourse, we also have a Master Concourse, and what our Master Concourse actually is, is the one component which bootstraps the slave. It provides the infrastructure for the slave, it deploys the director of the slave, then it deploys the Concourse of the slave, and it uploads all the pipelines which the slave needs.

Why are we using this? This is what the complete system looks like: you have an AWS release and an Azure release, which are built by the release builder. These releases go, for example, into an S3 bucket. Then the AWS staging of the first customer gets triggered, and it's going to migrate the release and test it. When it's working, it's going to trigger the next stage, and the next stage is again going to migrate it, test it, and ship it to the production system. The same thing also works if you have one staging system feeding two production systems. At the very end it's just about how you trigger the releases. The process doesn't change; you just need to think about who's the next one to receive this release. So you have to play a little bit of release ping-pong at the very end.

Okay, we still didn't talk about why we need this Master Concourse in the setup. Think about this: we have this AWS staging, Azure staging, and all the others, but there must be one single instance which is providing all this and which is also maintaining the Concourse pipelines. That's what the master is about: it provisions all the directors and it also distributes new pipelines. For example, you create a new pipeline and you want to enable it in all the customer environments. How are you going to do it? You'd need to go to six, eight, or ten different environments, and every time you'd need to update the Concourse pipelines there, which can be really tedious.
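The release ping-pong between stages can likewise be expressed with `passed` constraints, so production only ever sees versions that went through staging. Again a sketch with invented names:

```yaml
# Sketch: the final release flows staging -> production(s).
jobs:
  - name: aws-staging
    plan:
      - get: final-release-aws
        trigger: true
      - task: migrate-and-test
        file: tasks/migrate-and-test.yml
  - name: aws-production-1            # a second production job would
    plan:                             # use the same passed constraint
      - get: final-release-aws
        trigger: true
        passed: [aws-staging]
      - task: migrate-and-test
        file: tasks/migrate-and-test.yml
```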
This is just an example of a single customer, but maybe you have five customers. Then you are facing, I don't know, 30 different environments, and every time you'd need to update the pipelines. That's why we have this Concourse master, and every customer system has one of these masters, which is able to distribute the pipelines just for this one customer system.

So yeah, that's about our CI/CD system. Thank you for having me here. This is actually a picture from a water taxi we experienced on Monday; I didn't know that something like this already exists. So, do you have any questions?

[Audience question about staging resources] Yeah, our own staging environment is running there as well, and we are kind of limited on resources. So it really depends on what you have, right? We have not only the runtime, but we also have all the brokers, which is right now, I think, eight brokers or something. Then we have something for DNS, we have Consul, and we have an NGINX. So it gets pretty heavy. But you can go and prune something: you don't have to have 50 Diego cells on your staging, right? At least make sure that you have some; try it with three or four. Just customize your staging system to the needs you have. Someone else?

[Audience question about director credentials] Yeah, so how are we doing it? I told you that the Slave Concourse is able to deploy the directors of this environment with its own director. We're doing kind of the same thing with our master. The master actually deploys the slave directors, which means you don't have to have the credentials lying in your 1Password or somewhere; the credentials are in the CredHub of the master. That's also the reason why we have one master per customer: we don't want to have our secrets from customer A at the master of customer B. With this, you basically don't have much trouble; you just go and update the stemcell, and the credentials are in the Master Concourse. Someone else?

Okay, so I guess we're going to end the presentation. If you have any questions left, just come tomorrow to our booth and we can have a chat. Hope to see you soon. Thank you.
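As an illustration of why the shared CredHub matters: with Concourse's CredHub credential manager enabled, `((...))` variables in a pipeline are resolved at build time, so a rotated director secret is picked up on the next run without editing anything. The variable names and task file below are hypothetical:

```yaml
# Hypothetical pipeline step: secrets are fetched from CredHub on
# every build, never stored in the pipeline definition itself.
- task: update-director
  file: tasks/update-director.yml
  params:
    BOSH_ENVIRONMENT: ((customer_a/director_url))
    BOSH_CLIENT: admin
    BOSH_CLIENT_SECRET: ((customer_a/director_admin_password))
```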