Execute some Drush operations, then run some automated functional, load, or smoke testing; we can do anything else, and I believe there are no limits. But we have only 20 minutes today, and I already spent two of them, so I'm going to pick just one workflow, and that will be creating a Kubernetes cluster, okay? We're going to create a Kubernetes cluster on Google Cloud. But in fact the specific workflow is not important here, because I just want to share the tools, and this is only an example. So please pay attention to the tools, not the exact specifics of Kubernetes or anything like that, because that's not the important thing today.

When I talk about a workflow, I imagine it as an evolution. First we have a very simple workflow, then we evolve it into something better and better, and in the end we have a production-ready workflow. It usually takes some iterations to get there.

So if we talk about creating a Kubernetes cluster, our first option is to create it manually through the Google Cloud user interface. We log in to Google Cloud with our browser, pick some parameters (the version of the cluster, the number of nodes, the type of those nodes, some other parameters), then we hit the create button, and in a couple of minutes our Kubernetes cluster is created. We're done; we already have a Kubernetes cluster up and running.

Is that okay? Yes, but it still has some cons. You can see the cons here; I'm not going to read them all. Congratulations: we just created our snowflake cluster. Does anybody here know what a snowflake server is? A snowflake server is a manually created server; in our case we manually created a cluster, a piece of infrastructure. And what is interesting about snowflakes? They're beautiful, right? Very beautiful, but they're all different. Every time we create a piece of infrastructure this way, in our case the cluster, it will be different each time, and that means it will be fragile. There is a smart guy, Martin Fowler (you probably know him), who wrote some smart words about the fragility of snowflake servers. The main idea is that a snowflake server, a snowflake piece of infrastructure, is hard to change, because you created it manually, and in a year you probably won't remember why you put these settings here, or what the reason was for choosing these exact network settings, and you will be in trouble.

So we'd better avoid this snowflake, and we try to do that in the next iteration. In the next iteration we create the same cluster, but with the gcloud command from Google's SDK, the software development kit. We install the SDK and then use a command something like the sketch below. We need to remember a lot of different parameters here, and in the end we still have this snowflake infrastructure. So we need to do something else, okay?
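A minimal sketch of what such a gcloud command looks like; the cluster name, zone, version, and machine type below are made-up placeholders, not the values from the talk:

```bash
# Create a GKE cluster from the command line; every value here is a
# hypothetical example, so adjust it to your own project and zone.
gcloud container clusters create demo-cluster \
  --zone europe-west1-b \
  --cluster-version 1.27 \
  --num-nodes 3 \
  --machine-type e2-standard-4
```

And this is exactly the problem: you have to remember, or look up, every one of these flags each time, and nothing stops two runs from drifting apart.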
So the next iteration is much better. In the next iteration we use the Infrastructure as Code principle, which means we put everything into code: all settings, all configuration, all logic. That means we can check this code into a Git repo, so we get all the good parts of Git, like history, logs, and a permission system. That's very good.

But we still have some cons. Let's imagine a real-life scenario: we created the Kubernetes cluster, we use it, we deploy some applications into it, but at some point we want to upgrade it or change something. And I believe you know Murphy's law: if something can go wrong, it will go wrong. So yes, you can just hit the upgrade button and see what happens, but that's kind of risky, because you don't know; maybe it will break something, maybe some application will not be compatible with the new version of Kubernetes. There is always some risk like that. The better idea is to create a test cluster, to test the change before you move it into production. In this case you should try to create this test cluster as an exact copy of the production cluster, or at least the main parameters should be the same. Some unimportant parameters can differ, like the number of nodes, but the main settings should be the same. So you need to create the same cluster with the same configuration files.

But how do you do that with the same configuration files? You copy-paste the configuration files into a different directory and change some settings, like the name. Now you have two copies of something, and now you have trouble: at some point you need to change something, you have to change it in two places, and you can forget to do that. That causes configuration drift: your configurations gradually go different ways and end up different, and then the whole testing makes no sense, because the clusters are no longer the same.

Okay, but we have a solution for that. The next iteration is to use a master configuration file and generate the environment-specific configuration files, like production and test, automatically from that master configuration. Then we can apply them and create our production and test Kubernetes clusters. But to do this we need to introduce some tools, and that's the main part, the part about tools.

First I want to talk a bit about the Unix philosophy. You can see here the main principles of the Unix philosophy, and the first one is "do one thing and do it well". You should try to find tools that do one thing and do it well; a tool should not try to do everything at once. Then you combine these tools to work together, and you use plain text as the interface of communication between them. This concept works very well in our case of workflows and CI/CD.

So the first tool is unicorns; I guess nobody knows about it, because it's an in-house product, and sorry, there is no documentation. You can still read the source code if you need to, but here I will explain how we use it. This tool is used to process configuration files in YAML format. On the left side we have the input file, and on the right side we have the output file. As you can see, we define some configuration on the left: it has common parameters for every cluster in our system, and then it has environment-specific parameters. On the right side, all parameters are combined into one list, roughly as the sketch below approximates.
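I can't reproduce the in-house tool's exact syntax here, so as a rough approximation of the same merging idea, here is how it could look with yq v4; the file layout, key names, and values are all invented for illustration:

```bash
# A hypothetical master config: common parameters plus per-environment ones.
cat > master.yaml <<'EOF'
common:
  region: europe-west1
  cluster_version: "1.27"
environments:
  production:
    num_nodes: 5
  test:
    num_nodes: 1
EOF

# Merge the common block with the production-specific block into one map,
# which is roughly what the in-house tool produces on the output side.
yq eval '.common * .environments.production' master.yaml
# region: europe-west1
# cluster_version: "1.27"
# num_nodes: 5
```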
So that's kind of the main purpose of this tool: to process the configuration files and not just extract, but collect, the data we need. There is some explanation here of how it collects the data, but I believe we don't have enough time to dig into the details, so you can check it afterwards, okay?

The next tool is jq. Does anybody here know jq? Cool, because it's a very, very important tool. If you work with JSON files, you really need this tool, otherwise you're wasting your time. Check the documentation; it's very good, with a lot of scenarios and examples. It's definitely one of the main tools in the DevOps world.

The next one is yq, which is the same as jq but for YAML files; it's basically just a wrapper over jq.

Okay, here we can see how we combine these tools to work together. The first tool processes and collects our data, and then it pipes it on; the pipe sign means we use the output of the first command as the input of the second command. In the second command we just extract some part of the YAML file. On the next slide you can see the output: if you remember, the configuration looked like that, and here we have the output, the configuration values.

Then we use the next tool, named gomplate. Has anybody used it, by any chance? No? Okay, check it out, because it's a very cool tool for rendering templates, and it supports local and remote data sources. You can use a lot of things as data sources: Vault, Consul, JSON files, XML files, anything like that. So check the documentation. Here is a small example of how we use it. The two commands on top are the same as before, and then we add the gomplate command and pipe the output of the previous commands into gomplate as input. Here you can see the template for gomplate: it has some placeholders, as you can see, and it can have some functions. The tool itself is written in Go, so it uses Go templates, and that's where the name comes from: gomplate, Go templates. So you can do a lot of things in a template, and then you generate any kind of text configuration file.

Okay, and here we generated the configuration file for Terraform, and Terraform is used to create our cluster, following the Infrastructure as Code principle. The sketch below shows roughly what this pipeline looks like end to end.
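A hypothetical end-to-end version of that pipeline. The collector script name, the yq path, the template fields, and the Terraform resource values are all assumptions for illustration; the real commands on the slides differ:

```bash
# cluster.tf.tmpl: a gomplate template whose placeholders are filled from
# the "cfg" datasource (the field names here are invented).
cat > cluster.tf.tmpl <<'EOF'
resource "google_container_cluster" "main" {
  name               = "{{ (ds "cfg").name }}"
  location           = "{{ (ds "cfg").zone }}"
  min_master_version = "{{ (ds "cfg").cluster_version }}"
}
EOF

# Collect the config, extract the cluster section, render the template;
# each command's output is piped in as the next command's input.
./collect-config production \
  | yq eval '.cluster' - \
  | gomplate -d cfg=stdin:///cfg.yaml -f cluster.tf.tmpl \
  > cluster.tf
```

In the talk, the generated file then goes to Terraform, which actually creates the cluster.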
Okay, so the next tool is very important, because we talked about how to avoid vendor lock-in, and this is the most important tool for that. This tool is used like glue, to glue together all the different commands in our workflow, in our pipeline. Some of you are probably using a Makefile to combine different commands into chains; this is like a Makefile on steroids. It allows us to put a whole chain of commands into one place and then execute it. Then we can execute this variant command from any CI/CD engine, like Jenkins or GitHub Actions or Drone, you name it, or locally. This means we invested in open source tools, not in Jenkins or GitHub, and we can switch engines easily, because our workflow is defined outside of those engines. This tool lets us do that in a convenient way: it lets us define parameters, it lets us include things, it has a lot of features. So check the documentation, please.

On the left side you can see the task definition for variant, and on the right side an example execution. As you can see, we execute it and pass the environment parameter, and on the left side you can see where that parameter is defined. I believe there is an error here, because the name should be the same in both places, so it should be "environment" in the definition too. Okay, but the meaning is the same. So we run this one command, and then all these other commands are defined once but executed automatically for us. Here is the same example in a bigger font, and you can see it's much more convenient to use the variant command than to try to remember all the parameters of the underlying tools.

Okay, so now we need to support a lot of tools, and that creates an issue for us called dependency hell. Some tools need Python or Ruby or Node.js, the versions can differ, and if we want to execute this workflow somewhere else, we need to install all these tools there, which is not okay. But of course the solution here is containers: we can put everything into a container and execute it from the container, and that solves the dependency hell issue.

The other thing is secrets management. If we want to avoid vendor lock-in, we need to find a way to provide our workflow with secrets, meaning passwords and keys to access things. In our case we need to access the Google Cloud API, so we need the Google Cloud secret key file on the local PC. Okay, so in this example we put every secret into the Git repo, but encrypted with git-secret, so check that tool out. To decrypt, we need the GPG key, and we store the GPG key in AWS Parameter Store. To access that, we need AWS credentials, and we store those locally, in an encrypted way, with the aws-vault tool. If you need more details about this part, please contact me and I'll show you the scenario we use, because 20 minutes is not enough to talk about it. A rough sketch of the moving parts is below.
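As a sketch of how those pieces fit together, assuming the standard git-secret and aws-vault CLIs; the key owner, file names, AWS profile, and parameter path are invented:

```bash
# One-time setup: register a GPG key and encrypt the GCP key file into the repo.
git secret init
git secret tell ops@example.com         # hypothetical key owner
git secret add gcp-service-account.json
git secret hide                         # only the encrypted copy gets committed

# At run time: fetch the GPG private key from AWS Parameter Store through
# aws-vault, import it, then reveal the decrypted Google Cloud credentials.
aws-vault exec ops-profile -- \
  aws ssm get-parameter --name /ci/gpg-private-key \
    --with-decryption --query Parameter.Value --output text | gpg --import
git secret reveal
```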
So now we can execute this workflow locally, but we can also execute it on GitHub. We have just a simple job defined here, with the execution of the commands, and we only need to pass the AWS credentials, the access key ID and the secret access key, and it will work.

So, demo time. I believe we have one or two minutes, so I'll try to show you a real-life example. It's more complicated, it has more commands than I showed you, but the principle is the same. We execute the GitHub job I just showed you, and in the end it executes all these programs: it processes the configuration, does the secret key decryption and things like that, and then it runs the Terraform commands, terraform init and then, in our case, terraform plan, and it says that some parts of the cluster should be changed. We executed it very easily in GitHub; we could just as easily execute it in Jenkins, and the same for any other CI/CD engine, and also on our local PC. Every time it will be the same; we just need to provide the access credentials to this workflow so it can connect to all the APIs, to all the other parties, okay?

So basically, I believe that's all. It's very slow; let me try to go back to the slides. Okay folks, sorry, it seems I can't switch back to the slides, so thank you very much. If you have any questions, I'll be here, and you can contact me by my name on Slack, the Drupal Slack, and I'll try to give you my email at least. I can't switch back to... oh, okay, you switched it back.

So here are the contacts. Please find me on the Drupal Slack and ask any questions there, because there is no time to do that now. Thank you!