Welcome everyone, it's great to have you here today. We're going to talk about how to provision the platform and apps using Terraform with Cloud Foundry. I'm Guillaume Berche, and this is Mevan. I'll let him say his name, because I'm not sure I'd say it right. "So my name is Mevan Samaratunga." We'll take questions at the end. Let's start with the team introduction. Today we're presenting the work of a team of volunteers; it's been a great effort of their time to work on this. So this is Arthur, Guillaume, Janus, Jim, Mevan, Samet, Xavier. It's been great to have such a diversity of contributors and volunteers from different companies as well as different continents, three continents and several time zones, so it's been challenging as well. Now let's look at the agenda for today. We'll start with the why: what are the use cases, first for admins and operators, then for app developers. Then we'll cover how Terraform helps with those use cases: an introduction to the Terraform model and syntax, a demo, and some sample configs that match those use cases. We'll close with more details about the implementation: what's under the hood, the story of the provider, the backlog, and what's next. All right, let's get started with the why, and the admin use case. As an admin, my life looks pretty much like this: I get requests from users. Can you please create a space and update the security groups? Can you publish this buildpack? And so on. So I go off with the CLI and create them. But the number of requests becomes overwhelming: feature flags, isolation segments, environment variable groups, quotas, and all that. So, as a good engineer, I start scripting. But then come more requests, and requests for consistency, to keep things in sync between preprod, prod, and the different regions. Very soon it becomes a headache.
So there needs to be a better way of doing this. As a developer, I face similar challenges. The CF CLI is great for most use cases, but it doesn't fully cover all the resources I need to provision. With the app manifest, I cannot create a space or set user space roles; network policies might be getting there; service instances and user-provided services are not there; space-scoped service brokers are missing as well. And when I need to deploy a microservice-based application using CI/CD on multiple deployments, I need to perform other activities: download the app binaries, handle the potential dependencies among the apps, and manage the domains on each deployment. This is why the community came up with additional tooling. You might be aware of SAP MTA or push2cloud, which try to address those use cases, but with a different set of tools. Another use case: sometimes I need to reference resources from different systems, from CredHub, from the application runtime, from the container runtime, and cross-link them together. And I'm clueless with the existing CF manifest and with those other tools. Those are the problems we're trying to fix with Terraform, so let's look at how it helps. Most of us know Terraform as a way to provision infrastructure prerequisites: load balancers, networks, security groups. But there's more than that, and today we're going to show how you can provision platforms and apps using Terraform. We had a bit of fun with the HashiCorp website to illustrate this: we did a search and replace, and everywhere it mentioned "infrastructure" we changed it to "platform and apps". We'll go through this website to show that most concepts still hold when we provision platform and apps.
So the core principle of Terraform, for those who don't know it, is declarative config files that are saved into Git and shared among team members. And it's really code: we're used to coding infrastructure as code, and today we'll be coding platform and apps as code. There are three phases to that. First, we write the config. Then we plan: we ask Terraform to tell us what changes it would apply. And the last phase is apply, where we ask Terraform to actually perform those changes. Let's review that in more detail. In the write phase, we write code using the Terraform syntax. It's in Git, so I can collaborate with my team; I get history, tags, version control, like any other code. And since it's code, I use my usual IDE, so I get code completion, syntax highlighting, navigation, refactoring, everything I expect when working with a programming language. In the plan phase, Terraform also lets me see how my change might affect other resources: it maintains the dependencies across resources, which we can visualize as a dependency graph, and we'll see that in the demo. By being able to review the proposed changes before applying them, I get a kind of dry run, which I cannot do with cf push and a multi-app manifest: that's all or nothing, no dry run. Terraform allows me to do dry runs. And most importantly, the workflow is consistent across the different resources: the same workflow applies to all of them, be they in the application runtime we're most familiar with, or in Kubernetes, CredHub, or UAA. We'll review that. Finally, in the apply phase, since it's all code, it's reproducible, so I get environment parity.
Remember the preprod, prod, and the different regions I have to keep in sync: since it's the exact same code, it runs the exact same way. If I have repeatable patterns I need to use over and over, then, like in a programming language, where I would extract them into functions or methods, in Terraform I extract them into modules that I can share, in open source, and run again and again. And I can combine the different components together and cross-reference them, for example deploying a Kubernetes service and referencing it from the application runtime. Mevan will illustrate that in the slides after the demo. Okay, let's do a demo to make this more visual. It's a recorded demo, because I have fat fingers. In this demo we'll start with the setup: download Terraform and the provider. Then we'll do the write phase: configure the IDE to get the nice features we have for most programming languages, and create a hello-world app that uses a PostgreSQL service instance. We'll review the changes, and then we'll do the apply. There's a GitHub repo you can use to replay all this at home. Okay, so what did I just do? I installed the provider. You can think of it like most programming languages: there's a runtime and there are libraries. Terraform is the runtime, which you install on your system, and then you add libraries; in our case the library is the Terraform provider for Cloud Foundry. Notice that in the 0.9 version we just released, you need to rename the provider binary to terraform-provider-cloudfoundry; we'll fix that in the next version. Once I have this, I'm able to run Cloud Foundry config files. Next I need to configure my IDE; my preferred IDE is IntelliJ.
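As a sketch of the module idea mentioned above (the module path, input names, and output names here are hypothetical, not taken from the demo):

```hcl
# Hypothetical reusable pattern extracted into a module, like a
# function in a programming language.
module "team_space" {
  # Local path for illustration; a shared module could also be
  # sourced from a Git URL or a registry.
  source = "./modules/cf-space"

  org_name   = "myorg"
  space_name = "team-a"
}

# Other resources can cross-reference the module's outputs,
# e.g. to wire the space into further provisioning.
output "team_space_id" {
  value = "${module.team_space.space_id}"
}
```

The same module can then be instantiated once per region or environment to get the environment parity discussed above.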
And there is great support for Terraform in IntelliJ, with syntax highlighting, code completion, navigation, refactoring, extract variable, rename, all that. To install it into IntelliJ you do need a small workaround until the provider becomes official with HashiCorp: you need to install a JSON schema, cloudfoundry.json, into the schemas directory. That's part of the repo, and this step will be removed in the future. Then you restart the IDE. Okay, here's the skeleton of the spec. To make it fast, I didn't record live coding; there's a longer video with live coding, about 26 minutes, if you want to go more in depth. So we configure the library, the Cloud Foundry provider, and you see that many different libraries are available, with code completion in IntelliJ. Today we'll be using Pivotal Web Services, so api.run.pivotal.io, with a user and password, which I extracted into a separate file so that I don't leak my credentials live. You see the syntax for referencing a variable: the dollar sign with braces, ${var.user}. Then we go ahead and create a space; let's call this space demo. To create a space, I need to specify an org. On PWS I'm not able to create orgs with the CF CLI, I'd need to use the UI, so in this case I just looked up an existing org. The syntax here is a data source: a data source is a Terraform concept for looking up existing resources. In this case we look up the Cloud Foundry organization by its name, and out of that we get its GUID, the .id attribute, again with full code completion in IntelliJ. That's the requested state with which the space will be created. In addition, I need to add some developers. Since PWS doesn't support looking up users by name, I have a workaround here: I use the CF CLI to look up my user GUID, and I specify the user GUID directly in the list of developers.
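Roughly, the provider configuration, org lookup, and space resource just described look like this (attribute names follow the community provider of that era and may differ between versions; the org name and user GUID are placeholders):

```hcl
# Provider configuration: credentials come from variables kept in a
# separate, non-committed tfvars file.
provider "cloudfoundry" {
  api_url  = "https://api.run.pivotal.io"
  user     = "${var.user}"
  password = "${var.password}"
}

# Data source: look up an existing org by name to obtain its GUID.
data "cloudfoundry_org" "org" {
  name = "myorg"
}

# Requested state: a new space in that org, with a developer
# specified directly by user GUID (looked up with the CF CLI on PWS).
resource "cloudfoundry_space" "demo" {
  name       = "demo"
  org        = "${data.cloudfoundry_org.org.id}"
  developers = ["${var.my_user_guid}"]
}
```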
But at home, on-prem, where your admins give you the permission to look up user IDs on the system, you'll be able to look up users directly using a data source. Okay, so we've asked for a space; now let's ask for a service instance. Here you see completion over the different resources supported by the provider. This will be a PostgreSQL service instance, and I need to specify a service plan. I know there's an ElephantSQL service available in the PWS marketplace, and I need to look up a service plan GUID; there's a plan I know is free, called turtle. To keep it brief, I didn't show the cf marketplace CLI command with which I looked up this service plan; that's how I got the service name and plan name. From this, I request a service instance of the elephantsql service with the turtle plan. The data source we have on line 36 returns the list of service plans indexed by plan name, so I get the service plan GUID referenced on line 42. Okay, now that I've requested a service instance, I can request an app. I vendored my app in the GitHub repo to make it easier. This app comes from the platform acceptance tests: a simple Ruby hello-world app that lets us look at its environment variables. To access the app, I need to declare a route, so let's do that, again with a cross-reference by GUID, the route GUID. So let's define a route: a route is made of a domain, a space, and a hostname. I won't be creating a new domain here, so I look up an existing domain, the default domain on PWS, cfapps.io, using a data source, and I reference that data source on line 51. Then I reference the space whose creation we requested earlier, and I specify the hostname, dora-tfdemo. Finally, my app would like to use the service instance I created, so I specify a service binding.
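Putting the pieces just described together, here is a sketch of the service instance, route, and app (resource and attribute names approximate the community provider of that era and may differ between versions; `var.space_id` stands in for the GUID of the space created earlier):

```hcl
# Service offering lookup: returns its service plans indexed by plan name.
data "cloudfoundry_service" "db" {
  name = "elephantsql"
}

# Service instance using the free "turtle" plan.
resource "cloudfoundry_service_instance" "db" {
  name         = "db"
  space        = "${var.space_id}"
  service_plan = "${data.cloudfoundry_service.db.service_plans["turtle"]}"
}

# Existing default shared domain on PWS.
data "cloudfoundry_domain" "default" {
  name = "cfapps.io"
}

# A route is a domain plus a space plus a hostname.
resource "cloudfoundry_route" "dora" {
  domain   = "${data.cloudfoundry_domain.default.id}"
  space    = "${var.space_id}"
  hostname = "dora-tfdemo"
}

# The app: bits vendored in the repo (file URL), mapped to the route,
# and bound to the service instance.
resource "cloudfoundry_app" "dora" {
  name  = "dora"
  space = "${var.space_id}"
  url   = "file://dora"

  route {
    default_route = "${cloudfoundry_route.dora.id}"
  }

  service_binding {
    service_instance = "${cloudfoundry_service_instance.db.id}"
  }
}
```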
And I need the URL to the app bits, obviously. Since they're vendored in the repo, and the terraform command is executed from the current directory of this repo, I use a file URL. Now let's look at a nice feature. As a programmer, I expect to be able to do refactorings on my code, starting with basic rename refactorings. I don't like this "dora" name in the demo, so let's rename it. But what about the cross-references, will they all break? Well, they've been updated, as I would expect, so I feel comfortable as a developer: my usual refactorings work correctly. Okay, our write phase is about complete; we can go to the plan phase. Actually, rather than running the terraform plan command, I'll directly run terraform apply, which will prompt me for confirmation before applying. In the first part, Terraform refreshes state: it evaluates the data sources, looking up the org, the default domain, and the service. Then Terraform proposes for review the resources it would create: an app, the dora app, then a route, a service instance, and a space. Am I willing to continue? Yes. So Terraform goes off and creates the resources in parallel while preserving the dependencies across them; we see multi-threaded provisioning of the service instance, space, route, and app. It's going to take a minute to provision, so to keep you entertained during this time, let's look at the dependency graph. Terraform has a nice command, terraform graph, that outputs the graph as text; I pipe it into dot (Graphviz) and send the result to Firefox. Here we see our provider, our library, configured with user and password; then the data sources we configured to look up data in the platform, PWS; and then the resources that rely on those data sources.
So: space, service instance, route, app. This way I have a nice visual representation of my dependency graph, and I can preview dry runs when I make changes. Okay, would this dora app be up and running by now? Let's check: yes, it's complete. So let's see whether it's up. I need to look up the route to access it: dora-tfdemo. Let's copy-paste that into a browser. I see the greeting from dora: yes, it's up, good. Now let's check the environment variables to see whether our service is bound: in VCAP_SERVICES I can check whether the SQL service is bound. Yes, I see an elephantsql service with its URL, login, and password. Cool. And obviously I deprovisioned it afterwards, so no need to note down the password. Okay, let's move back to presentation mode. The slides are live, and there's a longer demo if you want more detail on the coding part. So, let's go through some more examples of how the use cases are helped by Terraform. First, a use case for application security groups and isolation segments. This is pretty simple stuff: I need to specify some application security groups, two of them, and bind them as defaults for running and staging. That's straightforward, we'll skip through the code. Another example is the isolation segment: I have an isolation segment that I call "public", I entitle a Cloud Foundry organization to the isolation segment, and then I assign a space to it. How does this look in code? Very simple: nothing special for the security groups, and the same for the isolation segment. The org, the isolation segment, the entitlement referencing the org, and then the space referencing the org and the isolation segment. Nothing special about that.
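A hedged sketch of these two admin configs (resource and attribute names approximate the community provider; the destination IP, port, and org variable are placeholders):

```hcl
# An application security group with a single egress rule.
resource "cloudfoundry_asg" "messaging" {
  name = "messaging"

  rule {
    protocol    = "tcp"
    destination = "10.0.0.10"
    ports       = "1883"
  }
}

# An isolation segment, entitled to an org, then assigned to a space.
resource "cloudfoundry_isolation_segment" "public" {
  name = "public"
}

resource "cloudfoundry_isolation_segment_entitlement" "public" {
  segment = "${cloudfoundry_isolation_segment.public.id}"
  orgs    = ["${var.org_id}"]
}

resource "cloudfoundry_space" "public" {
  name              = "public"
  org               = "${var.org_id}"
  isolation_segment = "${cloudfoundry_isolation_segment.public.id}"
}
```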
Where it gets interesting is when we mix different providers. This is a more complex example of an admin use case, for one of our teams at Orange. They deploy an app, as you've seen in the demo, with a route and a service instance bound to the route, which protects the route; nothing special here. Then they need to configure the app with some UAA clients, and in order to create a UAA client they need a dynamically generated username and password. So they use CredHub to generate the username and password, store them in CredHub, and use them in a client. How does it look? Very similar: in the same way I was using the Cloud Foundry provider for the application runtime, I can use a CredHub user resource to provision a user, specifying the password length, and then reference it in a UAA client resource, which references the username and password. This UAA client can then be used by my app. In this example, the app receives it through a user-provided service formatted as JSON, but you could use flat environment variables or whatever you like. It's also worth noticing that this app is deployed from GitHub: the resource fetches the app binaries directly from a GitHub zip, so I don't have to download the apps using a script; that's done by the provider. Okay, another admin use case mixing different providers. In this case we mix in an external SaaS, Cloudflare; some of you might know it. Here I have users asking for new domains on my Cloud Foundry setup, and each new domain needs to be fronted by Cloudflare. It might be for caching, providing a content delivery network; it might be for rate limiting, for analytics, or for security as well, providing a valid TLS certificate. I kept getting asked for this by my users. So in this case I use two providers: the Cloudflare provider, which is official, with which I create a DNS record.
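Before going on, here is a sketch of the CredHub and UAA wiring described above; the `credhub_user`, `uaa_client`, and user-provided service resource names and attributes are illustrative, since the exact schemas of the Orange providers may differ:

```hcl
# CredHub generates the password; it never appears in the config or in Git.
resource "credhub_user" "svc" {
  name            = "/demo/uaa-user"
  username        = "demo-client"
  password_length = 30
}

# UAA client referencing the generated credentials.
resource "uaa_client" "svc" {
  client_id              = "${credhub_user.svc.username}"
  client_secret          = "${credhub_user.svc.password}"
  authorized_grant_types = ["client_credentials"]
}

# Expose the credentials to the app as a user-provided service (JSON).
resource "cloudfoundry_user_provided_service" "uaa_creds" {
  name  = "uaa-creds"
  space = "${var.space_id}"

  credentials = {
    "client_id"     = "${credhub_user.svc.username}"
    "client_secret" = "${credhub_user.svc.password}"
  }
}
```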
Then I need to expose this DNS name, the FQDN, point it at my Cloud Foundry, and expose it to users through a Cloud Foundry domain. The apps can then consume this domain as a route with an empty hostname. So here again we're mixing two different providers, and the code looks very similar: a Cloudflare record, where I specify a CNAME aliased to my Cloud Foundry; then the Cloud Foundry domain to expose it to users; and then the Cloud Foundry route. And since I was asked repeatedly by users to create new domains backed by Cloudflare, I automated this into a service broker: instead of calling me, users can just run create-service against the Cloudflare broker, specifying the name of the route they want. You might want to have a look at this broker. And Mevan, would you present your use cases? "Certainly, I'll stand close to the mic. Thanks, Guillaume. We might run over, so I have a few quick slides to go through, and then I'll hand it back to Guillaume to take it forward. I will briefly showcase some application use cases. The first use case describes a solution we piloted for a telco in the Middle East. As you can see, we had a microservices landscape that was very well suited for CFCR, and these services depended on a set of backing services whose architecture was most suitable for Kubernetes. So for this pilot, we deployed CFCR alongside the production CFAR environment, and we used the Kubernetes Terraform provider to deploy these components over here. This is actually where I started working on a lot of the application and services side of the resources of the CF Terraform provider, and we used that to wire everything together.
So, like you saw in the previous slides and in these, the power of the provider is realized when you bring all these multiple providers together to put together a holistic solution. The next one is the high-level architecture of the SAP Leonardo Machine Learning Foundation, with very similar requirements to the previous one; you'll see a general pattern in these application use cases. On the top you have a few CF-based microservices; in the middle there are some Kubernetes-based services, mostly services that need GPU support; and at the bottom you have services running mostly in the cloud, like AWS S3. So SAP needed to deploy microservices across three different landscapes, and this is only possible with Terraform and its plugins. To do this, they used the CF provider's application and service resources, along with the resources from AWS and Kubernetes, to build all of this as one spec. One more thing: I really appreciate SAP's contribution to this, because I originally developed the CF provider and they extended it; they found a lot of missing features, which they added, and they're making it much richer. Okay, this slide shows the approach taken by Orange to integrate Terraform with Concourse pipelines. In this pipeline, Orange uses the Terraform Concourse resource to execute the Terraform workflow. We keep talking about "resources", but this one is Concourse terminology: the Concourse resource is the way the pipeline executes the Terraform workflow. It was actually written by a colleague of mine at Pivotal, and you can use this resource when you build your own pipelines. There are two steps to this pipeline.
The first step runs terraform plan to validate the consistency of the state, and the second is an apply step, which is run manually if the consistency check is good. Once this has run, the configuration state is persisted to CredHub using another plugin. This is the beauty of Terraform: everything is pluggable. Orange developed an HTTP Terraform backend to store the Terraform state in CredHub. The reason for doing this is that the Terraform state usually contains credentials in clear text, so it's important that when you persist the Terraform state, it goes to an encrypted backend. You'll see that SAP does it a little differently: they use an existing Terraform backend to put the state into Vault, as shown in the next slide. So, the next slide is a Jenkins pipeline; they use Jenkins. This is how the SAP Machine Learning Foundation uses Terraform to deploy new releases to the architecture I showed you previously. There are four steps to the pipeline. First, they get credentials from HashiCorp Vault. Then there's an apply step: Terraform discovers which applications have been updated and deploys them, also configuring any required dependencies. Then they run liveness and integration tests against the functional services to check whether the integrity of the landscape has been kept. And in the last step, they save the state to Vault; like Orange uses CredHub to save the state, they use Vault. This is something important for those of you starting to adopt Terraform: always keep your state in an encrypted backend, because it contains credentials in clear text. Now I'll hand it back to Guillaume, who will talk about the past and current state of the development of the CF provider. Thank you.
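The encrypted-state advice above can be configured through Terraform's standard backend mechanism; here is a sketch using Terraform's built-in HTTP backend pointing at a hypothetical CredHub-backed state service (the address is a placeholder, not from the talk):

```hcl
# Keep state off local disk and out of Git: persist it via an HTTP
# backend whose server stores it encrypted (e.g. in CredHub or Vault).
terraform {
  backend "http" {
    address = "https://tf-state.internal.example.com/states/my-deployment"
    # Credentials for the backend itself are typically supplied at
    # `terraform init` time rather than committed here.
  }
}
```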
All right, so what's under the hood, and what's the story of the provider: how we converged as a community, the status of each provider, and the backlog. Originally, Orange, with Arthur here in the room, started working on the Terraform provider in 2017, and Mevan had started a similar effort in parallel. We actually met in Basel last year and realized we were working on the same stuff, so we said, well, let's merge, we'll be stronger together. Then the SAP team came along and said, we can make use of this, and we can improve it; that was an awesome contribution from them. So this whole team got together and went into product mode. We are now close to becoming official: we're working with the HashiCorp team so that we become an official provider that will end up on the HashiCorp web page and releases; we hope to get that in the coming weeks. The Terraform provider for CredHub is in production at Orange, but it's not official; we may plan an official HashiCorp submission in the future. The Terraform provider for UAA is also used in production at Orange, but it's not feature complete yet; there isn't much missing, and with the new Golang UAA client that's out there now, it's going to become much easier. The Terraform secure backend for storing credentials is still in beta, and we don't yet use that feature in production. On the container runtime side, we use the official Kubernetes provider for pods. Worth noting as well is the Helm chart provider, which is not official but looks very promising; we need to work more with that repo. Now let's have a look at the backlog. Among the missing features, we still lack zero-downtime and blue-green deployment support, so some use cases will be limited by that. Network policies aren't there yet, and we don't yet support the V3 API.
In terms of challenges, one large one was reusing the CF CLI's Golang code as a library to interact with the Cloud Controller API. The CC API is quite complex, with complex workflows, and it's a very large effort to maintain that, so we'd like to leverage what the CF CLI team is doing. It's very promising to see that team extracting this as a library, and also helping us with the migration from V2 to V3. Another challenge is the acceptance test environment. Mevan has been donating his time and resources to maintain an acceptance test environment out of the Pivotal field labs environments, but it's been a lot of work and it's not at parity with upstream. It would be much easier if we could get some help from the Cloud Foundry Foundation release integration team to get a stable acceptance test environment, to run automated tests and validate incoming pull requests. Some other future work: you've seen that the Cloud Foundry app resource supports loading binaries from GitHub; we'd like to extract that into a different provider so that it can benefit other resources. And Mevan has some crazy ideas about a BOSH Director provider that he may be able to share afterwards, because we're running a bit short on time. To close, we'd like suggestions and comments from the community, and if you feel like contributing, you're very welcome. I think we're out of time, but we can take questions afterwards. You can reach us on Slack; we have a Terraform channel. Thank you very much for your time.