We can now start. Hello everyone. My name is Alexei, I work at Adyax, and this presentation is about how to create an immutable infrastructure for continuous delivery. Alexander is the second speaker. I'm going to discuss some theory with you, then we will show you some examples, and if we have time Alexander will do a live demo of what this is all about. So let me introduce myself again: I'm a program manager at Adyax, so I manage different kinds of projects, and part of my scope is continuous integration and continuous delivery for some of our largest corporate accounts. What we are presenting here is what we actually do in our day-to-day work.

Okay, let's start with some theory. Hopefully after lunch people are not too sleepy. What exactly are we talking about? We are talking about infrastructure as code. As you can see, this is an approach to defining infrastructure using the normal paradigms of a developer. This is something most developers are getting into right now, because projects are becoming bigger and lots of people are starting to think outside the box. It's not only Drupal out there; we have lots of different things around it. And this is a very familiar approach for developers: a developer is used to writing code, and lots of people are trying to stop using Bash, because that is about the worst way to configure things on a server.

So why infrastructure as code? It's quite obvious. Most people actually want to be able to restart and recreate their infrastructure from scratch whenever they want. When you have a problem somewhere, for example you got hacked and you need to rebuild immediately, you can wipe out the whole system and start over; that's the core value of all of this. You know that everything you do can be easily automated. We prefer to use Git as the single source of truth; you will see this later in the presentation, but everything we do is always in Git. Why? Because it's kind of a standard now, right? We try to keep everything there: the code of the projects, the code of the system, and the whole infrastructure as well. As developers, we like to treat everything as code; that's how our heads work. And again, we want to be able to replicate a component whenever we want. This is quite useful in our case because we are a digital agency and we do repeatable work for different clients, so this is how we save time.

So what are the goals of this infrastructure? We want people to be able to change and improve the system a lot, and not be stressed about it. Most people want to deploy on Fridays. Most people want to change everything right now, not wait for some release window that happens in two weeks. And you want your developers to be more autonomous and more proactive on the projects they are working on. We really don't like it when a developer is blocked by something, by an environment that is not working, by anything like that. That is not cool; we think that is wasted developer time.

So what are the challenges right now? This is one of those first-world problems: everything is so easy now. You can start a droplet on DigitalOcean, Amazon Web Services, or Microsoft Azure; everything is so available. So you give this access to your whole team.
And now the team can create servers, start environments, et cetera. But the problem is that it's too fast; you cannot control it anymore. And you need to make sure that your infrastructure stays the same, that it's still secure, and that people can keep using it over time. That's a problem you are probably already aware of. At some point your server, your infrastructure, has some configuration done and it's working. You know, most people say: if it's working, don't touch it. You have probably already started with some automation; you have some Chef recipe or Ansible script, whatever. And at some point someone decides to do a hotfix. Something is broken on the production environment, and he changes it directly on the server. And now your automation is gone. That is configuration drift: you can no longer see the difference.

Then the snowflake servers. This one is quite problematic, and again it's the same problem: people don't want to touch what's working. Why do we say snowflake? Because no two snowflakes are alike. Every server is different, and we don't want that.

And the fragile infrastructure. This is a usual problem for overly busy people, because they think infrastructure is like water: you don't have to do anything, it just works. But it doesn't, and there are a lot of stories about that. One company had a server somewhere in a corner running something nobody actually knew anything about. After two or three years, people started to ask what the server was actually doing, because most of the people who had worked on it were no longer in the company. So they decided to test it and just unplugged it, and the whole infrastructure crashed. So they plugged it in again and never touched it again. This is the main problem: you want to understand what's happening. This is why we say you must use Git for everything. You have history in Git; you know exactly what's happening. If someone is changing the server, just force them to use version control for all the configuration, everywhere.

And again, this is what I have just been talking about: when your server is inconsistent, you are in a circle that never ends. You don't know what to do. If you launch your Ansible script now, it will probably break something, or not; you don't know. So you don't touch it anymore. That's a challenge. And of course, entropy and erosion. Let me give you a quick example with Jenkins. Jenkins is something we use a lot; we think it's a great and powerful tool. But if you forget to upgrade it for two years, it becomes impossible to do anything, because it moves so fast. You need to do upgrades much more consistently and regularly.

So what are the principles of infrastructure as code? Systems can be easily reproduced: whenever you want, whatever you want, you have your script, it's in the code, it's in the history, you can launch it, and you have your system exactly as you want it. Every system can be disposed of. This is very important, and it's what most people are scared of: they don't want to destroy servers. But yes, systems should be disposable; you can destroy one and launch it again, it's not a problem. We will show that later. We use Docker a lot.
We can destroy containers and start them again. We have dedicated data storage just for the data, so we don't really care about the server; a server is something that exists for ten minutes or for ten days, we don't care. Systems are consistent: everything is inside the code, so it should be consistent. And the processes are repeatable. Again, these are the core principles of infrastructure as code: you must be able to do whatever you want, whenever you want, and it must be consistent. We don't configure anything manually on the servers. I think the last time we configured something by hand on a server was probably two or three years ago; everything is automated, inside Ansible or something like that. And changes are welcome.

So what are the general practices of infrastructure as code? Most of the time you want some kind of definition somewhere; what you store in the code are definition files. We'll talk about the tools later; for example Terraform, which we use a lot, stores configuration in files, so there is a definition that explains what is going to be installed. We use Git. And whenever you do something, we encourage everyone to use a continuous integration and continuous delivery system. Changes are very frequent: you want people to be able to deploy whenever they want, and you don't want to be locked onto the particular knowledge of one particular guy in your company. So you must make sure that everything is continuously integrated and delivered.

Then, incremental changes. Some people prefer to build one big batch of changes and then deploy it, and most corporate clients actually work only like that, because they want it to be predictable. Which is good from one side, but from the other side you can break things. We encourage people to do incremental updates: change your infrastructure with small changes, don't do big updates.

And continuous improvement. You must regularly update Jenkins, regularly update the tools you use, and regularly think about what you are actually doing with your system and how it's deployed. Try to track some KPIs: how many deployments do you do per day, how many developers are working with your system, what is the average deployment time? All of that must be analyzed and continuously improved. If you see at some point that people do not use the system as designed, why did you build it?

So here is the list of tools we use a lot. A big part of the presentation, Alexander's part, uses all of them, so I'm not going to explain each one. The main thing is Jenkins, which we think is very powerful; it's moving fast, and it now has the very beautiful new Blue Ocean interface, so if you have never looked at it, have a look. We use GitLab, because we think GitLab is an amazing piece of software; it moves very fast, and it lets you install it on your own server and do whatever you want with it. But it could of course be anything similar: GitHub, Bitbucket; wherever there is a way to set up a Git hook, you can use it. And of course Docker, because we want everything to be disposable and launched whenever we want. We are big fans of the HashiCorp tools; they have an amazing list of things.
Most of all we use Packer, to bake the images for your containers. Then Terraform, which is the biggest part of the whole structure, because this is exactly where you describe your infrastructure as code. And Consul, because you need to do service discovery: you need to define your services to make sure all of them are working the way you want. Ansible; I think by 2017 lots of people already know about it; that's the way to automate things and to launch server provisioning. Kubernetes, because we don't have only one Docker image, we have lots of them, and we want to be able to manage and orchestrate any number of containers. And Helm, which is a package manager for Kubernetes.

So now we are going to show you how to actually write and build the delivery pipelines. We think you want your websites, or whatever application you're working on, to be continuously delivered. But what about the continuous delivery of the continuous delivery tool you use? Most of the time people don't really think about that. In a lot of cases, when you work with different clients, they have some kind of tool: a continuous integration system with Jenkins, with some code inside. But the actual system that delivers things is a snowflake server. No one actually knows what's happening there: it's Jenkins, some manually created jobs, some manually created pipelines, and the guy who configured them has already left the company, so no one knows what's happening. So we want the whole system, from the application down to the continuous delivery tool itself, to be inside the code. And I'll leave Alexander to continue the presentation.

Thank you, Alexei. My name is Alexander. Hello everybody, welcome to the practical part of the presentation. The biggest issue with this presentation was choosing the right material, because we wanted to show too many things, and that doesn't go well with this format. So I'm going to go into the core issue Alexei already mentioned: continuously delivering the continuous delivery system itself. First, very quickly, what is continuous delivery? It's a software engineering approach in which teams produce software in short cycles, ensuring that the software can be reliably released at any time: fast releases, testing, and reduced cost, time, and risk of delivering changes, because updates are incremental.

Our continuous delivery system handles lots of different projects. We have many instances of it, and some of them are installed on the client side. We want to apply the same principles we use to deliver projects to clients to delivering our own system to the servers we use. And how do we do it? Who will continuously deliver the continuous delivery system? Why did this question appear? Because at some point we can create a continuous delivery system that creates our continuous delivery system; but then who creates that new continuous delivery system? And so we came to the idea, beware, here be dragons, of the Ouroboros. The Ouroboros is an ancient symbol showing a serpent or dragon that eats its own tail. It is a symbol of infinity, and infinity is our continuous delivery. And the natural end of creation (we create resources, for example with Terraform) is destruction: systems are disposable, and we can destroy any component at any time.
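Since Ansible handles the server provisioning throughout what follows, here is a minimal, hypothetical playbook sketch of that kind of step. The host group, package names, and paths are illustrative assumptions, not taken from the talk:

```yaml
# Hypothetical Ansible playbook: prepare a host to run Jenkins containers.
# Group name, packages, and paths are illustrative assumptions.
- name: Provision a Jenkins Docker host
  hosts: jenkins_servers
  become: true
  tasks:
    - name: Install Docker
      apt:
        name: docker.io
        state: present
        update_cache: true

    - name: Ensure the Docker service is running
      service:
        name: docker
        state: started
        enabled: true

    - name: Create the directory backing the Jenkins data volume
      file:
        path: /srv/jenkins/data
        state: directory
        mode: "0755"
```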
So I'm going to move to the practical part now and show how we create our continuous delivery system from scratch. We have a milestone zero, where we have nothing except our local environment and some repository on GitHub, and now we're going to create everything. I'll switch to the console now to show that we have nothing. I'll go to Kubernetes, and we'll create the root environment in Kubernetes. I'm checking that I don't have a continuous delivery system installed; I check, and I only have Consul. Consul should also be managed this way, but how we create it won't be shown in this presentation; it's very simple. You see step one, where I create the Kubernetes cluster; I will skip this one because it takes too much time. Then I create the secrets. Secrets are just passwords, tokens, IDs, or keys, to be kept secret, of course. I need to create them before I can pull a repo and execute the next steps, because we need the secrets to access our external resources. I clone the repo (I can share it after the presentation if anybody is interested) and execute the install command for Helm. Helm is a deployment manager for Kubernetes; it makes life so much simpler, because I can do a whole Kubernetes deployment with one command.

Now I'm going to wait a bit, and while it's creating, we can check what's happening on the Kubernetes dashboard. We can see that some red messages appeared; that's because some resources are not visible to each other yet, and it will be okay. While it creates, I can show the Helm config. It looks something like this: I have one file that defines the chart, with a name and a version, and then it has a config map that basically defines the files that will be used to create Jenkins in our case: some init scripts and property files that provide Jenkins with its variables and properties.

Okay, the command has finished, and we can see some information here; I want to check the logs of what was created. For this I execute the describe pods command, check the name, then check the logs for that name. I can see that something is happening now: the init scripts for Jenkins are executed, some tokens are created, the SSH key is copied into the workspace, and now it says that Jenkins is fully up and running, so we can check if it actually works. I run the status command to check the service's external IP address so I can access Jenkins. This is a newly created Jenkins, but it already has our LDAP working, so I will authenticate with LDAP. And now we are at milestone one, the seed, where Jenkins is created and we're ready to create jobs inside Jenkins.

For this step we have the concept of a mothership seed job. Mothership means it can create many projects and the jobs for those projects, and it relies on a template that defines how a project looks. Now I'm going to execute this mothership job, and it will create the projects. This step should be automated too, because we want to automate everything we do, but for this presentation we keep it manual; otherwise everything would already have been created, and I wanted to show you the process itself. While it's working, I'll show you the mothership config. The main config file is a list of projects, and here we use only one project, cicd, because we want to create only CI/CD jobs on this Jenkins.
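To make the Helm part concrete, here is a minimal sketch of the chart layout described a moment ago: one file defining the chart, plus a ConfigMap template carrying init scripts and a property file. All file names and contents here are illustrative assumptions, not the real Adyax chart:

```yaml
# Chart.yaml -- the single file defining the chart's name and version
# (hypothetical values)
apiVersion: v1
name: jenkins-root
version: 0.1.0
description: Sketch of a chart that installs the root Jenkins
---
# templates/jenkins-config.yaml -- a ConfigMap with the init scripts and
# property files used to create Jenkins; contents are assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: jenkins-init
data:
  init.groovy: |
    // hypothetical init script: set the Jenkins URL on first boot
    import jenkins.model.JenkinsLocationConfiguration
    def location = JenkinsLocationConfiguration.get()
    location.setUrl(System.getenv("JENKINS_URL"))
    location.save()
  jenkins.properties: |
    # hypothetical property file read by the init scripts
    seed.repo=git@gitlab.example.com:infra/mothership.git
```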
Let's check that the job has finished. I can see that a new folder appeared with the name of our project, and there is a seed job inside this project as well. This seed job will create the project's jobs, and every project will have the same kind of seed job that creates every job related to that project. I just executed these steps. Again, a seed pipeline means a pipeline that creates other jobs and pipelines. When the job finishes, we will be at milestone two, the root. We call this Jenkins server we just created the root environment, because it will create the other infrastructure elements, other Jenkins servers, and so on.

Let's check what's happening here. Okay, the job is finished, and when we return to the folder we can see that some new folders and jobs were created. If we check some of them, we'll see jobs inside like apply, destroy, and plan; they correspond to the Terraform actions that can be performed on the infrastructure. At this point we can actually see how we do updates to our infrastructure and how they are handled by the root Jenkins server; it will create all the needed infrastructure components. Let me show it. I push a commit to our infrastructure repo and then create a merge request in GitLab for this commit; I assign it to myself and submit the merge request. Our new Jenkins server is already able to pick up this event from GitLab, because when we created the Jenkins it already created webhooks on the GitLab side, which trigger jobs on the Jenkins server on every event: pushes, merge requests, et cetera. And now we can see that the merge request apply job is in progress.

I'll switch to video format now, because this takes some time to finish; I believe we can do it faster with video. You can see that a pipeline job was launched for the new merge request, and it executes some actions, like installing a Jenkins server after we provision the server with Terraform. You can see the Ansible tasks being executed, and you can see the pipeline itself defined on the screen; it then goes on to the Ansible configuration. You can see the steps being performed: the Docker Compose file used, with a data volume and a master image, and the Dockerfile for the image we use, which is based on the Alpine Jenkins image. I'll fast-forward a bit, and at this point Jenkins is installed, with Ansible and Docker, on the disposable merge request environment. I can just copy this address and check that the new server was actually created. Now you see a brand new Jenkins server, created only for testing the merge request I just created. I perform the same steps to log in, because every Jenkins server is configured the same way, and we can see that some jobs have already been executed; their execution was triggered from our root Jenkins environment. I will fast-forward a bit.

Now we can see the actual actions performed during our pipeline build. It's our custom system, based on the YAML configuration format; we call it zebra internally, and now you see the GitHub repo that has all the action logic inside. It's open source, so you can check how it works. I'm going to return to the presentation now. We just saw how these steps were performed, and I'll recap them, because it was a bit messy.
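As a rough illustration of the Docker Compose file shown in the video, here is a hypothetical sketch of a Jenkins master container backed by a separate data volume; the image name, ports, and volume path are assumptions, not the real Adyax file:

```yaml
# Hypothetical docker-compose sketch of the described setup: a Jenkins
# master container plus a separate named data volume, so the master can
# be destroyed and recreated without losing data.
version: "2"
services:
  jenkins-master:
    # assumed to be built from a Dockerfile based on an Alpine Jenkins image
    image: registry.example.com/jenkins-master:latest
    ports:
      - "8080:8080"     # web UI
      - "50000:50000"   # agent connections
    volumes:
      - jenkins-data:/var/jenkins_home   # all Jenkins state lives here
volumes:
  jenkins-data: {}
```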
We created a merge request for the master branch in our infrastructure repo in GitLab, and it triggered, through the GitLab webhook, our merge request apply job on the root environment. The first step was a Terraform apply that created our master and slave Jenkins servers for the merge request environment. Here you can see the pipeline defined with blocks; these blocks are the main stages of the pipeline. This is the definition of a block: as you can see, we use Docker images for each block, and each block is executed in its own container. Each block has some stages defined, with some actions inside each stage, and there you can see the actual logic of how an action is executed.

Here you can see some Terraform config. We have a provider, and we use Consul as the external storage of state, because we want to be able to execute Terraform commands from any Jenkins or from a local PC without recreating the same environment each time; we needed an external database for that, and we use Consul as the remote state storage. You can see that we have two resources here, for the master Jenkins and for the slave Jenkins. It uses pre-baked droplets on DigitalOcean, and these pre-baked droplets were created with Packer, which Alexei already mentioned today; it bakes only the needed software into the image, so the image can be reused more easily and faster.

In the next steps, Jenkins was installed with Ansible, and then we ran some automated tests on Jenkins to make sure it works with the new changes. For now, the tests consist of two steps: mothership seed job creation and project seed job creation. After that we automatically accept the merge request on the GitLab side, and then we destroy our disposable merge request environment. Our master branch is automatically updated with the accepted merge request, and a new stable tag is created, to make sure we can follow the history of the releases. When the stable tag is created, it triggers the same apply job, but this time for the preprod environment, which should already be running; it's just the same process: Terraform apply, Jenkins install if needed, and Jenkins tests.

Now we have milestone three, called the tree, where we have a tree of Jenkins servers, as many as we need, and each of them can have its own environments, like dev, stage, preprod, and prod. If you remember, I started with the question of who will create the delivery system itself, and I created it from my local environment with some commands. At this point we want to move those actions into the pipeline itself and make sure they can be executed from any Jenkins server we have already created. When we do that, we reach our final milestone, the circle, when the circle is completed and we can create a Jenkins server for any environment from any Jenkins server. We can then add other actions, like health checks for the infrastructure, and add these steps as needed. I believe I can skip the technical details of, for example, the Packer or Terraform configuration, because everything is available on the internet and can be easily googled; I wanted to share our experience of how we integrated these technologies and tools to get what we have.
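The talk does not show zebra's actual schema, so purely as a hypothetical illustration, a block/stage/action pipeline definition of the kind described might look like this; every key, image name, and command below is an assumption:

```yaml
# Hypothetical sketch of a block/stage/action pipeline in the spirit of
# the YAML-based system described; the real zebra schema is not shown
# in the talk, so all keys, images, and commands are assumptions.
blocks:
  - name: terraform
    image: hashicorp/terraform:light   # each block runs in its own container
    stages:
      - name: plan
        actions:
          - terraform plan -out=mr.tfplan
      - name: apply
        actions:
          - terraform apply mr.tfplan
  - name: provision
    image: example/ansible:latest      # illustrative image name
    stages:
      - name: install-jenkins
        actions:
          - ansible-playbook -i inventory jenkins.yml
```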
Again, the core idea of the system is to have everything in code. We have the mothership config, which contains links to some projects; every project has a link to its project repo, and each project repo defines its own pipelines and configuration. Everything is created hierarchically from the top level, and changes cascade down to the lower levels. This is a high-level overview of the pipelines; we already saw some parts of it. The main idea here is that we have pipelines for each environment, everything is shared through Consul, and Packer provides the images for Terraform. I believe we can skip this part, so we can answer some questions if you have any; we have some time left.

Just to finish: I know it was quite technical, and I see that not everyone follows exactly what's happening, but what's important here is that in the end this matters for the business. When you can create a system which is disposable, you can set it up in different instances whenever you want: on the client side, on your side, on another client's side. And we actually had the challenge of not saying "Drupal" during this presentation, because Drupal is one of the things we deploy using this. As Alexander mentioned, when everything is in the code it becomes simple, because even a developer can understand what's happening. On our side, when we work on Drupal projects, and we have lots of them at Adyax, each project itself defines its deployment pipelines, so each developer can actually see and write what happens when the project is deployed, whether it's just clearing caches or launching some complex import operation. And everything goes from the top to the bottom: everything is in the code, the system itself is in the code, and each project describes itself in the code.

(Audience question about upgrading Jenkins itself.) Yes, we usually do this: as Jenkins is running inside a container, and the data part of Jenkins is in a separate data container, we can actually destroy Jenkins whenever we want and set it up again with the new version, and then of course it launches the Jenkins upgrade process. When everything is separated properly, you can store your important data in one place while your configuration and your infrastructure are in another place. That's the main idea of the system: to be able to destroy it and set it up again without losing your data, because of course you want to keep your logs in Jenkins, you want to know what actually happened, where the logs are, et cetera.

(Audience question about automating those upgrades.) It can be automated, yes, but we prefer to do it manually because, how to say it, we are in the open source world, right? With Drupal itself, when you update from 8.2 to 8.3, if your project is complicated it can create trouble, and I see most of the people here have already encountered that. It's kind of the same here: we are able to do that at the system level, but we consider it too dangerous to automate this kind of thing.

(Audience question about the root environment.) In the end you need to start somewhere, so you create a root; and then, if you want, you can dispose of it, because you don't need it anymore, you can set the system up from other servers.
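For reference, here is a minimal sketch of the mothership configuration recapped above: a list of projects, each pointing at the repo that defines its own pipelines. The keys and URLs are hypothetical, not the real Adyax schema:

```yaml
# Hypothetical mothership config: the seed job reads this list and
# creates a folder plus a project-level seed job for each entry.
# Keys and repo URLs are illustrative assumptions.
projects:
  - name: cicd
    repo: git@gitlab.example.com:infra/cicd.git
  - name: client-site
    repo: git@gitlab.example.com:clients/client-site.git
```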
(Audience question about backups.) That's a good question. For one of the projects we create backups of the data containers and store them in Amazon S3. It's not something we do a lot because, as we said, the system is quite complicated and is meant for corporate clients, and they usually have their own systems for backing things up, so we try to plug into the client's external backup system; and when we do it ourselves, we use Amazon S3. Any other questions? Okay, so thank you very much, guys.