So, a little about me: I spent about five years in academia, lecturing at university, and for the last few years I've been in the technology industry, most of that time on the data science side. I've had a passion for data since the early days of my work in the lab. Currently I'm leading data and AI engineering teams at Fox. So, what is this talk about? We'll cover how to install Terraform, what it does and how it works — by the way, how many of you know Terraform? — and then we'll use Terraform to create a GKE (Google Kubernetes Engine) cluster. We'll keep the Terraform state, the tfstate files, in a remote backend in a Google Cloud Storage bucket, and we'll create separate projects for development and production. Initially I wanted to do a hands-on live demo, but later I realised it's just so much typing. So yesterday I recorded the whole run instead, and it will play back quickly — the recording is sped up. Okay, so to install Terraform: it ships as a single binary, you just need to download it and copy it into your PATH — I think that's pretty simple. Just follow the instructions. You'll also need kubectl, same thing. I used the latest Terraform release at the time, version 0.11.10 — I think a new version, 0.12, with a reworked language, will be released in a few days. For kubectl, again, it's the latest release, 1.12. I didn't include the installation of the gcloud CLI.
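The download-and-copy install described above might look like this — a minimal sketch assuming Linux x86-64; the URLs and destination path are illustrative, adjust them for your system:

```shell
# Terraform 0.11.10: download, unzip, put the single binary on the PATH
curl -LO https://releases.hashicorp.com/terraform/0.11.10/terraform_0.11.10_linux_amd64.zip
unzip terraform_0.11.10_linux_amd64.zip
sudo mv terraform /usr/local/bin/

# kubectl 1.12: same idea, a single binary
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl
chmod +x kubectl && sudo mv kubectl /usr/local/bin/
```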
For this presentation, you will need the gcloud CLI to create the Terraform service account; after that, you can use Terraform itself to deploy everything else. I didn't include how to install gcloud because it's very specific to your distribution and OS. So, let's start. In order to deploy with Terraform, you need a Terraform service account — a technical account that will be doing all the infrastructure deployments in the cloud on your behalf. Terraform is a deployment manager; it's very flexible, it supports most of the clouds, and it supports a lot of providers outside the clouds as well. We'll start by creating a project. As you know, every single resource in Google Cloud has to live in a project, and inside that project you create the service account. In this case we just export a few environment variables — assuming you're in bash or a similar shell — and then create the project. The second step is, once we have the project created, we create a service account, which we'll call terraform, and we'll need its key as well. We can use that credentials file locally from the project, or we can export it: in a pipeline, for example, you can base64-encode it, export it as a variable, and use it in an automated way. Then there are a few roles to assign to the newly created service account; in this case we give it the viewer and storage admin roles. Why storage admin? Because under this project we will store the tfstate files. The tfstate is where Terraform stores the state of past deployments, and keeping it in a shared bucket is much better than keeping it only on your local machine, at least in this case.
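The bootstrap steps just described could be sketched roughly as follows — the project ID, key path, and display name are illustrative placeholders, not values from the talk:

```shell
export TF_ADMIN_PROJECT=terraform-admin-demo
export TF_CREDS=~/.config/gcloud/terraform-admin.json

# Create the admin project that will hold the service account and the state bucket
gcloud projects create "${TF_ADMIN_PROJECT}"

# Create the Terraform service account and download its key
gcloud iam service-accounts create terraform \
  --display-name "Terraform admin account" \
  --project "${TF_ADMIN_PROJECT}"
gcloud iam service-accounts keys create "${TF_CREDS}" \
  --iam-account "terraform@${TF_ADMIN_PROJECT}.iam.gserviceaccount.com"

# Grant the two roles mentioned in the talk: viewer and storage admin
gcloud projects add-iam-policy-binding "${TF_ADMIN_PROJECT}" \
  --member "serviceAccount:terraform@${TF_ADMIN_PROJECT}.iam.gserviceaccount.com" \
  --role roles/viewer
gcloud projects add-iam-policy-binding "${TF_ADMIN_PROJECT}" \
  --member "serviceAccount:terraform@${TF_ADMIN_PROJECT}.iam.gserviceaccount.com" \
  --role roles/storage.admin
```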
You can always control where the state goes and map your state path to wherever you need it to be. Next, we need to enable quite a few APIs; if you just copy and paste the commands, it should work — we'll see how it goes. An additional permission we need to grant to the Terraform service account is, for example, project creator, because the Terraform service account should be allowed to create projects. And once a project is created, you need to be able to attach the billing account so it can spin up resources. For the last part, we use gsutil to create the state bucket, and enabling versioning on it is a good practice. So, this is a quick demo starting from the beginning — all the steps that we just went through, and I've attached all the sources, so you can replicate it yourself. It will take a few minutes — actually, the recording is sped up twice; if you do it manually it will probably be slower. So here we're enabling the APIs, then granting the project creator role, then the role that allows attaching the billing account; we create a new bucket, enable versioning, and export the service account credentials. Now, the first thing we do in Terraform itself is create a project. I created a separate set of source files for project creation, and we'll take a look at how it's organised. We have backend.tf, which points Terraform to the tfstate files in the bucket in the Terraform admin project; main.tf, which holds everything for the project creation; outputs.tf, with the outputs that we will later take and ingest in other Terraform configurations; terraform.tfvars, where you keep the secrets; and variables.tf, where you usually keep all the variables that you reference. Let's go a little bit deeper.
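The remaining bootstrap — APIs, organisation-level roles, and the versioned state bucket — might look like this; the API list, organisation ID, and bucket name are illustrative assumptions:

```shell
export TF_ADMIN_PROJECT=terraform-admin-demo
export TF_ORG_ID=123456789012   # placeholder organisation ID

# Enable the APIs Terraform will need to talk to
gcloud services enable \
  cloudresourcemanager.googleapis.com \
  cloudbilling.googleapis.com \
  iam.googleapis.com \
  compute.googleapis.com \
  --project "${TF_ADMIN_PROJECT}"

# Allow the service account to create projects and to link billing
gcloud organizations add-iam-policy-binding "${TF_ORG_ID}" \
  --member "serviceAccount:terraform@${TF_ADMIN_PROJECT}.iam.gserviceaccount.com" \
  --role roles/resourcemanager.projectCreator
gcloud organizations add-iam-policy-binding "${TF_ORG_ID}" \
  --member "serviceAccount:terraform@${TF_ADMIN_PROJECT}.iam.gserviceaccount.com" \
  --role roles/billing.user

# State bucket with versioning enabled, as recommended in the talk
gsutil mb -p "${TF_ADMIN_PROJECT}" "gs://${TF_ADMIN_PROJECT}-tfstate"
gsutil versioning set on "gs://${TF_ADMIN_PROJECT}-tfstate"
```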
The first good practice is the .gitignore file: if you use a version control system, you usually don't want to commit the tfstate and tfvars files; it's better to keep them out. Terraform.tfvars looks something like this: it holds the billing account and the organisation ID that will be referenced later. Backend.tf is where you configure where your tfstate file is stored — we already created the terraform-admin bucket, so that's what we point to, and "terraform-project" will basically be the folder in the bucket where we store all the tfstate files for project creation. Looking at variables.tf, we define a few variables here. Terraform — since version 0.11.9, I think — allows us to have a single source tree and create workspaces. What does that mean? You have a single trunk that you can use for development, staging, and production with the same source code, and all the environment-specific values can be defined per workspace; that's where we define production and development, and we'll see how this works. Again, the billing account and organisation ID refer back to what we set up earlier. The language these files are written in is HCL, a HashiCorp configuration language; it's very similar to JSON, so people who work in development will find it very approachable. Main.tf is where we actually create the projects. Terraform supports multiple clouds — Google is one of them — through a modular provider system, and Terraform downloads the providers it needs. It's a good practice to specify the version of the provider that you use, because sometimes new releases break things. Another provider I use is random: every time you create a new project, you need to assign a unique ID to it.
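A backend and provider setup along the lines described could look like this, in the 0.11-era syntax used in the talk; the bucket name, prefix, region, and version constraints are illustrative, not taken verbatim from the demo:

```hcl
terraform {
  backend "gcs" {
    bucket = "terraform-admin-demo-tfstate"
    prefix = "terraform-project"   # folder in the bucket for this state
  }
}

provider "google" {
  version = "~> 1.19"   # pin the provider so upgrades don't break the run
  region  = "asia-southeast1"
}

provider "random" {
  version = "~> 2.0"    # used below to generate unique project suffixes
}
```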
The idea is that you don't want to hard-code the ID; you let it auto-generate and attach it, so your project name gets a unique suffix. And here is how we use it: in this case, the project ID takes the Terraform workspace name and appends a random hex suffix — it will be four characters here. Again, for this project we enable multiple APIs — it's quite a list — but you may use different APIs for different projects depending on what you want to do: maybe you only need compute, maybe you need logging as well. The last piece of this configuration is the outputs: we need Terraform to export the project ID, once it's created, so it can be consumed downstream. This is where we export it, and we'll see how it's used. Once we've written all these files, the last piece is basically the workflow — this is how you run it manually, and you can definitely implement it in any CI/CD pipeline. First, terraform init, which initialises all the dependencies and modules — in this case it pulls the random and GCP providers. Then terraform workspace, which is where we segregate production from development. Once we have the workspaces, terraform plan shows us what will be created — it simulates the run — and terraform apply actually applies it. So let's take a quick look — again, it's at double speed. We init, we create a new workspace for production, then we switch back to the development workspace. A quick terraform workspace list shows where we currently are. Then the plan, which simulates what would happen, and the apply — this is exactly what we're creating. In this case we create just two projects, one for development and one for production, and attach the billing accounts.
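The project-creation pattern just described might be sketched like this in 0.11-era HCL; the variable names and API list are illustrative assumptions:

```hcl
resource "random_id" "suffix" {
  byte_length = 2   # 2 bytes -> a 4-character hex suffix
}

# Project named after the current workspace, with a unique generated ID
resource "google_project" "project" {
  name            = "${terraform.workspace}"
  project_id      = "${terraform.workspace}-${random_id.suffix.hex}"
  org_id          = "${var.org_id}"
  billing_account = "${var.billing_account}"
}

# Enable the APIs this project needs; the list varies per project
resource "google_project_services" "apis" {
  project  = "${google_project.project.project_id}"
  services = ["compute.googleapis.com", "container.googleapis.com"]
}

# Exported so later configurations can ingest the generated project ID
output "project_id" {
  value = "${google_project.project.project_id}"
}
```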
So, we just created the projects — we don't create anything else at this stage. Once that's done, the next step is the interesting part: creating the GKE cluster. It's the same logic, but more modular this time — we use a module structure instead of having all the source code in one big file, and we'll see what that looks like. Again, we use the same terraform-admin bucket, but since this is a different concern, we take a subfolder where we keep the GKE tfstate. In the main source code, this is what we're doing: we already created the new projects, but we need to pass the information about the projects created in the previous step into the current configuration. That's why we use terraform_remote_state: we import the project IDs into this new configuration, so the cluster gets created in the previously created project. Again, we use Google as the provider. Now, what do we do here? There are a few areas we focus on: we create a VPC, then we add the subnets, and we create the firewall rules. If you go into the module, let's take a look at the VPC creation. This is the file where the VPC is actually created. It's pretty straightforward: depending on which workspace you're in, development or production, the name is derived and the VPC is created — this will be the name of the VPC. With the subnets it's the same story: you create multiple subnets, but the difference is that, depending on which workspace you're in, the CIDR range of your subnet will be different, and it's defined per environment. You'll see where we define it.
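Importing the previous step's outputs via terraform_remote_state could look roughly like this; the bucket and prefix are illustrative, and note that on some 0.11.x releases the workspace argument was called `environment`:

```hcl
# Read the state written by the project-creation configuration
data "terraform_remote_state" "project" {
  backend = "gcs"

  config {
    bucket = "terraform-admin-demo-tfstate"
    prefix = "terraform-project"
  }

  # Pick the state of the matching workspace (development or production)
  workspace = "${terraform.workspace}"
}

# The network (and later the cluster) is created in the imported project
resource "google_compute_network" "vpc" {
  name                    = "vpc-${terraform.workspace}"
  project                 = "${data.terraform_remote_state.project.project_id}"
  auto_create_subnetworks = "false"
}
```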
If we are in the production workspace, this will be the production CIDR; if we are in the development workspace, this will be the development CIDR. So depending on which workspace you're in, it will create different, environment-specific CIDRs. Coming back to the main folder: once we have the subnets, we set up the firewall rules. Additionally, I added Cloud SQL — a PostgreSQL instance can be created as well — but what we focus on most is GKE, which is in the GKE source file, and there we open, again, three ports. This is how we create the actual GKE cluster. You choose the region — I chose Singapore in this case. There is additional configuration: the dashboard, for example, and similar features can be enabled or disabled. All of this configuration is code, so you don't need to click through a UI to change it. So again, the same principle applies: you need to initialise first, because this is a separate configuration; you create a workspace, you plan, and you apply. Let's see how it looks in reality. We init — you see it downloads all the modules and dependencies. We plan — this is what Terraform will do for us — and now we apply. This usually takes a while; the PostgreSQL database in particular takes several minutes, so be patient. Because the recording is at double speed, it should be fast here. At the end you'll see: this is our new project, created in the previous example; these are the endpoints; and this is the CIDR that we defined for the development project. A few additional capabilities when you work with Terraform: you can do terraform state list, you can do terraform show for different modules and inspect them. When you're done testing, the best idea is to destroy and clean up: you can remove specific components from the cluster, or you can destroy everything that Terraform previously created.
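The workspace-dependent CIDR selection described above can be sketched with a map keyed by workspace name; the ranges here are illustrative, and the snippet assumes a VPC resource like the one described earlier (named `google_compute_network.vpc`):

```hcl
variable "subnet_cidrs" {
  type = "map"

  default = {
    development = "10.10.0.0/16"
    production  = "10.20.0.0/16"
  }
}

# The same source code yields a different subnet per workspace
resource "google_compute_subnetwork" "subnet" {
  name          = "subnet-${terraform.workspace}"
  network       = "${google_compute_network.vpc.self_link}"
  region        = "asia-southeast1"
  ip_cidr_range = "${lookup(var.subnet_cidrs, terraform.workspace)}"
}
```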
Coming back to Terraform, one more good practice: each module can live separately — you don't need to have everything in the same folder. This allows different teams — for example, the database team — to work without interfering with another team. It's very manageable from that perspective. Again, very quickly: how to check the state. This is everything that we built previously; we can get details for specific modules; and here we're basically destroying everything that was created, since we can always recreate it again. Another good practice for development environments is to automate this: for example, if you want to keep costs down, you can create a script so that on Friday night everything is destroyed, and on Monday morning everything is recreated, so you don't pay for the weekend. There are multiple use cases like that. So, in this case, everything is destroyed, and we're pretty much done. As information: the presentation itself is available as a Docker image — you can just pull it and run it, the same way I did. For the sources, you can find the GitHub repository with all the documentation, so you can run it yourself. Any questions? Don't be shy. [Audience comment: you need quite a bit of setup to install a full Terraform environment — your Google credentials, the Google Cloud SDK, and so on.] Actually, everything that I use — Terraform, the Google Cloud SDK, kubectl — I didn't install any of them; I run them as containers. I just created aliases, which is basically that. In my case it's the same thing when you run pipelines: everything runs as containers. All you need is the credentials: you base64-encode them, export them, and everything runs in containers, so you don't need to install anything.
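The alias approach mentioned in the answer might look something like this; the image names, tags, and mount paths are illustrative assumptions, not the speaker's actual setup:

```shell
# Run terraform from the official HashiCorp image instead of installing it;
# the current directory is mounted as the working directory
alias terraform='docker run -it --rm \
  -v "$(pwd)":/workspace -w /workspace \
  -e GOOGLE_CREDENTIALS \
  hashicorp/terraform:0.11.10'

# Same idea for kubectl, mounting the kube config into the container
alias kubectl='docker run -it --rm \
  -v "$HOME/.kube":/root/.kube \
  lachlanevenson/k8s-kubectl:v1.12.0'

# Credentials are passed in as an environment variable,
# e.g. decoded from a base64 export produced in a pipeline
export GOOGLE_CREDENTIALS="$(base64 -d < terraform-admin.json.b64)"
```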
So, it's pretty straightforward. [Question: where do you keep your secrets?] In this case, the secrets are kept outside the container, and I'll show you approximately how. You keep the credentials file outside and just mount it into the container, or pass it in as environment variables. For example: I aliased the Terraform container here, and I exported an alias pointing to where my secrets are located. You just source the secrets file: I have all the files, and I export the environment variables so that Terraform can read them from inside the container. One more question, I think. Oh, yes. [Question: before you run Terraform, why did you separate the project creation from the cluster environment — couldn't it all be one configuration?] You can do everything in one, but I decided to separate them, and the reason is lifecycle control. The project creation is done once, and you don't touch it again, whereas your cluster may change all the time — you create and destroy everything else, but the project stays quite stable. So, to segregate the lifecycles, I keep the project creation separate: it happens once in a lifetime, while everything else changes frequently. You could do project creation in the same configuration, but this is why I chose not to. Additionally, from a permissions perspective: the people who are allowed to manage project creation and the people who actually deploy inside the project will be different people, different teams, and if you have separate source repositories, each team only works with its own files. So there are many reasons to keep them separate. Okay, thank you very much. Any more questions? We'll be back in a few minutes. So, now, I think it's