The next talk is by Matteo, and he'll talk about immutable infrastructure and immutable deployments. Please go ahead.

Hi everybody. Before I start, let me show you a little disclaimer: the events and characters in this story are fictitious, and any similarity to any technology, living or dead, is merely coincidental. The approach illustrated here is nothing more than a little bit of untested research; as far as I know, nobody runs this exact methodology in production.

So why is the traditional methodology a problem? The first problem with this methodology is the snowflake server. A snowflake server is a server that is unique: you set everything up by hand, you configure it yourself, it is your own server and your own responsibility, and you treat it like a pet. The problem is that a snowflake server is a server you cannot reproduce, and that has several consequences. One of them is configuration drift.

Configuration drift is the delta between a well-known starting state and an unknown state, caused by manual configuration of the server, manual updates, and even automatic configuration. Configuration drift is present even if we use automatic configuration management, because the tools don't cover all the aspects of the server, and over time it becomes difficult to maintain all the manifests and configuration scripts. And when you make some manual modification to your servers, because backporting is time-consuming, it's very unlikely that you backport the modification to your configuration management. This brings us to a concept that I call the path of least resistance of server management: every developer or operator will always follow the quickest, simplest, and least costly way to fix a production problem, and then forget about it.

These two problems bring us another problem: the unknown unknowns. An unknown unknown is something that you would need to know, but there is no way to know it, because you don't know the state of your servers.

Let's see an example. Here we have a service at version A, with some input, which can be the configuration of the service, and some output, which is the service we want to offer to our customers. We deploy a version B, and what we obtain is not version B, but version A plus version B, because we deploy the changes of version B on top of the changes of version A, and so on: the next time we will have version A plus version B plus version C.

What is the proposed solution for this problem? One of the solutions is immutable infrastructure. The term immutable infrastructure was coined by Chad Fowler in a blog post where he described a situation with exactly the problems I just told you about, and he formulated a simple solution based on one fact: if you are certain that your servers are created in an automated way and never change afterwards, all these problems disappear. If you want to deploy a new version of your system, you simply build the new one from scratch and throw away the old one.

Let's see what a deployment looks like with immutable infrastructure. We have version A of our service. We deploy version B, and we obtain exactly version B, because every time we deploy a new version we build it from scratch and discard the old one.
A more formal definition of immutable infrastructure can be found in the Infrastructure as Code book, which tells us that immutable infrastructure is a methodology where every change is made by replacing the server, and that it requires sophistication in the server template management system. Note that it requires sophistication, not necessarily something sophisticated.

So what do we need to implement immutable infrastructure? We need an automated provisioning and configuration tool, an automated image generation tool, an orchestrator, and a system to keep track of all the changes; for this last one we can simply use git.

Now, what are the tools I chose for implementing the immutable infrastructure part? For the automated provisioning, I chose a simple shell script, because every developer can read and understand it; a shell script is simple but at the same time very powerful (I'll sketch one below). For the image generation part, I chose Packer. Packer has a JSON configuration file, so you can easily track it in git; it supports many provisioners, like Ansible, Puppet, Chef, and also shell scripts; and it can use multiple builders, so you can choose where to build your image. For the orchestration part, I chose Terraform. Terraform has its own DSL, HCL, which is a declarative configuration language: you declare what you want instead of the steps necessary to obtain it, as you would with an imperative language. I think it's simpler this way. It enables the infrastructure-as-code part and has multiple provider support, for example Amazon, Google Cloud, Azure, and also DigitalOcean. And as the cloud platform, I chose DigitalOcean, because it's inexpensive, simple, and has everything we need: an API we can use with Terraform, compute instances, snapshots, cloud-init support, floating IPs, and load balancers.

OK, let's talk a little about why I chose these tools. Sorry for the wall of text. I chose not to use containers, because for most companies a container is something they don't know, and it's not worth the work of learning how to operate containers. Why a simple shell script instead of a configuration management tool? Because, as I told you before, almost everyone can read a shell script, and on some occasions learning a new DSL can be a steep learning step. Why an infrastructure-as-code approach instead of a complex orchestrator? Because, again, for most companies an orchestrator like Kubernetes can be too much for what they have to do, and most of the time, when you have a complex orchestrator, you end up with two problems: one is orchestrating your services, the other is managing your orchestrator. Why a simple cloud platform instead of a full-featured one? Because most of the time you don't use all the features, only a small subset, and, most important, because management is more inclined to accept a simple and inexpensive solution with clear pricing. But in the end, the tools are not important; what is important is the people, as we can read in the findings of the Accelerate book: tools and technology are irrelevant if the people that have to use them hate them.

Let's see what an implementation of immutable infrastructure can look like. First of all, the application that we are going to deploy is a simple one: a single Go binary. It's published on GitHub releases, so we can easily download it.
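As a minimal sketch of what such a provisioning shell script could look like, here is one under stated assumptions: the application name, version, install path, and GitHub URL are hypothetical placeholders, not the actual ones from the talk. The steps mirror the ones described in the walkthrough that follows.

```sh
#!/bin/sh
# Hypothetical provisioning script run by Packer inside the build instance.
# App name, version, paths, and URL are illustrative assumptions.
set -eu

APP_VERSION="1.0.0"
APP_URL="https://github.com/example/app/releases/download/v${APP_VERSION}/app"

mkdir -p /opt/app                         # create the application directory
curl -fsSL -o /opt/app/app "${APP_URL}"   # download the Go binary from the GitHub release
chmod +x /opt/app/app                     # make it executable
systemctl daemon-reload                   # pick up the unit file copied by Packer
systemctl enable app.service              # start the application on boot
```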
It has one attached database and, most important, it is built following the twelve-factor app principles. In particular: codebase, so we have one codebase for many deployments; config, so we store all the configuration in environment variables; processes, so our application must be as stateless as possible; and disposability, so our application can be destroyed at any moment and has to bear with it.

Let's see the resulting git repository layout. Here we have our Packer configuration, the provisioning files, and the Terraform configuration; minimal sketches of these files follow below.

Here we can see a simple systemd unit file. The important parts are the EnvironmentFile option, which tells systemd to load the environment variables from that file into our application, and the After=cloud-init directive, because we are going to use cloud-init to write that file.

Here we have the Packer configuration. Packer will spin up an instance; in this case we use the DigitalOcean builder. When the instance is ready, it launches a series of provisioners. Here we use the file provisioner to copy the systemd unit file, and then an inline shell script, where we create the directory for our application, download it, make it executable, then reload the systemd configuration and enable the app on boot. When the provisioners have finished, Packer shuts down the machine and takes a snapshot. We can see the output of Packer with all the phases, like the creation of the instance, the provisioning, the shutdown, and the creation of the snapshot. At the end, Packer outputs the name of the just-created snapshot.

For the Terraform part, we can see the configuration of the droplet that we are going to create. The most important parts are the digitalocean_image data resource, where we import the just-created snapshot to use as the base image for our droplet; the user_data property, where we tell Terraform and DigitalOcean to populate the cloud-init user data with the provided configuration, which we'll see shortly; and the create_before_destroy property, because this way, when we replace our droplet, Terraform will first spin up a new droplet and, once the new droplet is ready, destroy the old one, to minimize downtime.

Here we can see the cloud-init template configuration. It's a simple configuration where we populate the cloud-init template with the properties exposed by the database resource created by Terraform, producing the file that systemd will then read.

Here we have the DNS configuration; it's a simple, standard DNS configuration. We create the domain for our application and the A record, but the important thing is that we also create a floating IP that we point to our droplet, and we use the floating IP in the A record. So when we deploy a new version, we don't have downtime caused by DNS propagation.

And finally we have our database configuration, which is standard: we use Postgres with one instance.

So what is the resulting workflow of an immutable infrastructure? If we want to release a new version of our application, we create a new snapshot, then add it to the Terraform configuration and apply it. Terraform will destroy the old droplet and create a new one.
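To make the walkthrough concrete, here is a minimal sketch of the systemd unit file described above; the unit name and paths are hypothetical. The EnvironmentFile option loads the variables that cloud-init writes, and the After= ordering makes systemd wait until cloud-init has finished writing that file.

```ini
# /etc/systemd/system/app.service -- hypothetical unit file
[Unit]
Description=Example application
# wait for cloud-init, which writes the environment file
After=network.target cloud-final.service

[Service]
# twelve-factor config: everything comes from environment variables
EnvironmentFile=/opt/app/app.env
ExecStart=/opt/app/app
Restart=on-failure

[Install]
WantedBy=multi-user.target
```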
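And a sketch of what the Packer template could look like, with the DigitalOcean builder, the file provisioner for the unit file, and the shell steps. The talk uses an inline shell provisioner; this sketch calls the script from the earlier snippet instead, and the token variable, base image slug, region, and size are assumptions.

```json
{
  "builders": [
    {
      "type": "digitalocean",
      "api_token": "{{user `do_token`}}",
      "image": "ubuntu-20-04-x64",
      "region": "fra1",
      "size": "s-1vcpu-1gb",
      "snapshot_name": "app-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "file",
      "source": "provisioning/app.service",
      "destination": "/etc/systemd/system/app.service"
    },
    {
      "type": "shell",
      "script": "provisioning/install-app.sh"
    }
  ]
}
```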
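On the Terraform side, a sketch under the same assumptions: the digitalocean_image data source imports the snapshot by name, user_data renders the cloud-init template with the database cluster's connection properties, and create_before_destroy minimizes downtime. Resource names, the snapshot name, sizes, and the Postgres version are hypothetical.

```hcl
# Import the snapshot that Packer just created (name is hypothetical).
data "digitalocean_image" "app" {
  name = "app-1617000000"
}

# Managed Postgres with a single node, as in the talk.
resource "digitalocean_database_cluster" "db" {
  name       = "app-db"
  engine     = "pg"
  version    = "13"
  size       = "db-s-1vcpu-1gb"
  region     = "fra1"
  node_count = 1
}

resource "digitalocean_droplet" "app" {
  name   = "app"
  region = "fra1"
  size   = "s-1vcpu-1gb"
  image  = data.digitalocean_image.app.id

  # Render the cloud-init template with the database properties.
  user_data = templatefile("${path.module}/cloud-init.yaml.tpl", {
    db_host = digitalocean_database_cluster.db.host
    db_port = digitalocean_database_cluster.db.port
    db_user = digitalocean_database_cluster.db.user
    db_pass = digitalocean_database_cluster.db.password
    db_name = "app"
  })

  lifecycle {
    # Spin up the replacement droplet before destroying the old one.
    create_before_destroy = true
  }
}
```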
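The cloud-init template could then look like this minimal sketch: it writes the environment file that the systemd unit reads, with the placeholders filled in by Terraform. The file path and variable names match the earlier snippets and are assumptions.

```yaml
#cloud-config
# Hypothetical cloud-init template (cloud-init.yaml.tpl); the ${...}
# placeholders are filled in by Terraform's templatefile function.
write_files:
  - path: /opt/app/app.env
    permissions: "0600"
    content: |
      DATABASE_URL=postgres://${db_user}:${db_pass}@${db_host}:${db_port}/${db_name}
```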
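For the DNS part, again as a hedged sketch: the floating IP sits between the A record and the droplet, so replacing the droplet only re-points the floating IP, with no DNS propagation delay. The domain is a placeholder.

```hcl
resource "digitalocean_domain" "main" {
  name = "example.com" # placeholder domain
}

# The stable address that survives droplet replacement.
resource "digitalocean_floating_ip" "app" {
  region = "fra1"
}

resource "digitalocean_floating_ip_assignment" "app" {
  ip_address = digitalocean_floating_ip.app.ip_address
  droplet_id = digitalocean_droplet.app.id
}

# The A record points at the floating IP, not at the droplet itself.
resource "digitalocean_record" "app" {
  domain = digitalocean_domain.main.id
  type   = "A"
  name   = "app"
  value  = digitalocean_floating_ip.app.ip_address
}
```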
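Put together, the release workflow described above reduces to a few commands; this is a sketch assuming the file names from the previous snippets.

```sh
packer build packer.json             # builds the image, prints the snapshot name
# update the snapshot name in the digitalocean_image data source, then:
git commit -am "release new version" # keep the change tracked in git
terraform apply                      # creates the new droplet, then destroys the old one
```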
If we want to modify the configuration of our service, we modify the cloud-init template; then, because the cloud-init user data can't be changed once an instance is created, Terraform will destroy the old droplet and create a new one with the updated user data.

In conclusion, what are the benefits of the immutable infrastructure part? One of the first benefits is the lowering of deployment pain, because we have a very simple provisioning: we don't have to care about the previous state, we don't have to care about idempotency, so even a simple shell script can do the work. We can easily roll back in case of failure, and most of the time it's a matter of doing a git revert or git restore. We can achieve horizontal scalability, because now all the servers are the same, all the images are the same, so we can easily put a load balancer in front of our application and scale up with the count property of Terraform. And we can easily reproduce our service, because we have all the snapshots, but also, most important, because we have automated all the provisioning process, we can use the same provisioning scripts to create a local environment, for example with Vagrant.

What is the impact of the immutable infrastructure part? If we are going to do immutable infrastructure, we are also going to have automated provisioning and an infrastructure-as-code part, and to do these two things we need source version control. But it also works the other way: if we have version control, we can easily adopt automated provisioning and infrastructure as code, and then immutable infrastructure.

What are the tradeoffs of immutable infrastructure? One of the most important things is to separate what is mutable from what is immutable. For example, the binary of our application is immutable, and maybe also the graphical assets. What is not immutable is the database, obviously. But another thing that is mutable is the HTTPS certificate, because if we request a new certificate every time we spin up our application, then every time we destroy our droplet we request a new certificate, and we will easily hit the Let's Encrypt quota limit of five certificates. Don't ask me how I know.

In conclusion, the further steps from here: because we can't store the logs on our machine anymore, since we are going to destroy it, we have to set up a centralized logging system, for example Graylog, the ELK stack, or also Loki from Grafana. And because most of the time, when you destroy an instance, you also lose all the metrics, you have to implement a centralized monitoring system, for example Prometheus with Grafana. A further step for debugging, if you have a complex distributed system, can be the introduction of a distributed tracing tool, like Jaeger.

So that's all. I'm Matteo Valentini, a developer for an Italian open source company called Nethesis. Thank you for listening. Questions?