Hello, and thank you all for coming. My name is Anton Sankov, and I am a software engineer. I work on cloud-native software, around Kubernetes and related tooling, where I am also a contributor. If you want to help us, please go to this link, install our GitHub app to your account, and play around with it; we will appreciate it.

So, today's talk is going to be about the 12-factor app. This is a methodology for writing software-as-a-service applications. It originated at Heroku back in 2011. Heroku, as you know, is a platform-as-a-service provider, and as such they have a lot of applications deployed on their platform. With time they realized that some of these applications are easier to scale, to deploy, and to manage than others, and looking at what sets them apart, they came up with these 12 factors. This was back in 2011, so it was before microservices were popular. However, I think the methodology has aged pretty well, and nowadays one can really see how it could help a typical microservices development. So here it goes.

The first factor is called codebase, and it states that your application should consist of one codebase only, which is tracked in version control and deployed to all the environments that you have. This is the codebase that our developers run locally when they are testing; this is what goes to our testing and staging environments; and this is what, at the end of the day, gets deployed to production. Now, because you are deploying one and the same piece of code everywhere, this automatically means that everything that is environment-specific should be separated from the codebase and put somewhere else. This is things like configuration, database credentials, addresses of third-party services, and so on.
Where they should be, we will get to later; there is a whole factor for that. Another thing that this factor advocates for is using a multi-repo approach, which basically means keeping the code for your microservices in separate Git or SVN repositories. So if our architecture consists of three microservices, we will have three Git repositories, one for each of them. This is opposed to the monorepo approach, which is the exact opposite: keeping all of your services in one repository, as folders of one mega repo. However, this monorepo approach is still quite popular these days across some big companies like Google, Uber, and Docker. So my personal opinion here is that you should use whatever works best for you. If you have a good reason to use a monorepo, use that. If not, use multi-repo.

The second factor is dependencies. The dependencies of our application should be explicitly declared and isolated. This means that we should be using some kind of dependency management tool, which allows us to list our dependencies and their versions in a dependency manifest and to interact with them through that. It also means that we should not be committing third-party binaries or third-party code into version control, and that, when we deploy, the dependencies of our application should be isolated from the dependencies of the other applications on the machine. As for tooling, there is such a tool for almost any modern programming language: you have Maven and Gradle for Java, pip for Python, npm for JavaScript, and so on. Of course, if you are a Go developer like me, you know that in the Go world, until very recently, it was common for developers to commit their vendored third-party code into version control. But even in the Go world, Go modules were introduced about two years ago, exactly to solve this problem and align the workflow with the one described here. And that is: we have our code and our dependency manifest.
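For a Go service, the dependency manifest is just a go.mod file. Here is a minimal, hypothetical sketch of my own (the module path and the libraries listed are illustrative, not from the talk):

```
module example.com/myservice

go 1.21

require (
	github.com/lib/pq v1.10.9         // PostgreSQL driver, pinned to an exact version
	github.com/sirupsen/logrus v1.9.3 // structured logging
)
```

Running `go build` resolves exactly these versions (and records their checksums in go.sum), so no third-party code needs to live in the repository itself.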
We use the tool of our choice to install the dependencies, and we package them together with our code into a package that is runnable by itself. This package should have everything it needs to run, and it should not depend on the implicit existence of any system-wide packages. If you can do this, it means you have the flexibility of running your application anywhere, from a bare-metal machine to a plain Alpine Docker container.

Configuration. As we already agreed, we will not be committing database passwords and things like that into our code; we should be keeping them elsewhere. The best place to keep them, according to the 12-factor methodology, is the environment itself. So all of our environments, from developers' laptops to our production environment, should be able to serve the configuration to our application when it is needed, and on the other hand, our application should be able to read it. There are many tools that allow you to do this, and which one you pick kind of depends on what you are already using. If you are using Kubernetes to deploy your application, you could use the Kubernetes key-value store; you could use Spring Cloud Config if you are using Spring; or you could use good old environment variables. And this is how it goes: we take the packaged application from the previous slide, and when it is deployed, the environment bundles it with the configuration, and it is able to run. In this situation, it is okay for your app to crash if the environment cannot provide this configuration.

Backing services, as the 12-factor app defines them, are all the services that our application consumes over the network. In a typical architecture nowadays you have a lot of these, and naturally some of them are developed locally and deployed somewhere close to your application, while others are third-party and live in the cloud.
The golden rule here is to treat them all equally: treating them as attached resources and communicating with them only over the network, through their APIs. I will tell you why. In this example, I have my service in the middle, and it has three backing services: another microservice and a database, which are deployed in my data center, and a third-party mail server, which lives in the cloud. So far so good. But if one day I wake up and decide that I want to migrate my database to the cloud, all I would have to do, apart from migrating my actual data, is to change this URL here. And when I restart my application so that it picks up the new configuration, I will be running my app with an entirely new database, which is in the cloud. And because we agreed that we will not be storing this in the code but in the environment itself, a change as big as this one involves no code changes. And this is good.

The fifth factor is called build, release, run, and it concerns your build and release pipelines. You should have a strict separation between these three phases. The build phase is when we take our dependency manifest and our code, install our dependencies, and package everything together into a package that is runnable by itself; you are already familiar with this picture from the slide about the dependency factor. The release phase is when we take that runnable package and deploy it to the given environment where we want it to run, and there it is sourced with the configuration and is able to run; again, you are familiar with this diagram from the configuration slide. And the run phase is when the app is already running and our clients are able to query it. Ideally, you would have one button which, when you press it, does all three of these things for you, but under the hood this is how they should be structured. Easy, right?
I suppose that even if you had never heard about the 12 factors before, this is more or less the way you are building software: you are using dependency management and version control, you are not hard-coding database passwords in the code, and so on. And actually, this was the easy part of the presentation. The factors that are coming next are not so obvious, and they are not so easy to implement if you have not taken them into account from the beginning.

So, port binding. This one states that our application should be self-contained. What this means is that a lot of legacy (and not only legacy) applications depend on being deployed into a web or application server. In that scenario, the web server is the part that handles the network requests and routes them to our application, where the actual work is done. The 12-factor app does not need that. Instead, it has the web server injected as a dependency, and by just running the executable package we get a working web service, exposed on a port that we have configured. This is the workflow: when we run our app locally, we take this runnable package, run it on our laptops, and we have a working web service which we can query on localhost. When we deploy to the cloud, we run the same package on a machine in the cloud, and we have a working web service on the host and port we have configured. Of course, in the second scenario you usually won't query your application directly, but would have it exposed through a gateway or a load balancer, and that is okay.

Processes. A typical application consists of one or many processes, and the golden rule here is that they should be stateless, which means they should hold no data. All the data that the application operates with should be persisted either in a database or in an in-memory cache which is shared across your processes and instances.
This makes scaling very easy, because if you want to scale, you can just start new processes, and you won't care which user or which request gets routed to which process, because all of them are equivalent and all of them have access to the same data, which lives in the shared store and not in the processes themselves. It also makes your processes disposable, which means that if one of them dies, you can just spin up a new one, and from the moment it starts up, it will be just as good as the old one, because it has access to the same data. If you do that, you will be able to scale out via the process model, and scaling out means dividing your workload. So if you manage to divide the workload of your application into tasks which are runnable by themselves, you can scale by just scaling the individual tasks and not the whole application. This is quite domain-specific, but let's say that a typical web application has web tasks, which serve the web UI of your application; API tasks, which serve the API; and worker tasks, which execute scheduled tasks and background jobs. Now, if at any given time I have high traffic on one of these, I can allocate new processes to exactly this part, and scale only it. In this scenario, I used to have two processes for each kind of task, but because my API is under higher load, I stop one of the web processes and allocate its resources to the API processes, thus scaling only my API. And here the number of processes that you can start is limited only by the resources you have, so you can have more than six if you have a more powerful machine.

Disposability. This boils down to two things: fast startup and graceful shutdown. Our app should be quick to start. The 12-factor methodology does not define what quick means, but ideally your application should start in under a minute. This is because when you have a high traffic peak, you will want to start new instances of your application so that you can scale.
But if starting an instance takes five or six minutes, then by the time you have started it, you may no longer need it, because the traffic peak is already over. So the faster your app starts, the better. In the reverse situation, when traffic is cooling down, we will want to stop some instances so that they are not wasting money, and that should be done gracefully. What graceful means is that our application should stop listening for new connections, but should finish serving all existing requests and return a response to the clients that have managed to open a connection. This is because you don't want to leave any of your customers hanging, waiting for a response they will never receive because the instance has died.

Dev/prod parity. This factor tackles the age-old problem of "it works on my machine," and it does that by telling you to keep your development and production environments as close as possible. For example, if you are using PostgreSQL as a database backend in production, that is the same thing you should be using for your development and testing environments as well. Because if you are using something else, like SQLite, because it is cheaper and easier to set up, then you have a part of your system which you are not able to test properly until you go to production. In this example with the database, you may have some SQL queries which run on SQLite but do not run on PostgreSQL, or which behave differently across the different database backends. But you have no way of knowing that until you go to production, because you are not testing with PostgreSQL. A bigger problem here is third-party cloud services, because you are usually paying for them, and paying for additional instances just so that your developers are able to test may not seem feasible. So what some people do is substitute them with a mock server or simulator which returns hard-coded responses in the testing environment.
But you should be really careful with those, because you may run into unpleasant situations in production. Actually, this is what happened to me at one of my previous companies. We had to integrate with such a third-party cloud service. We had documentation, but no testing environment, just a mock server which always returned some hard-coded response. So we integrated with that, we tested, everything looked fine, and we deployed to production. Minutes after we deployed, we noticed that some requests started failing. Looking into the logs, we realized that this new third-party service was failing, and the error was something like "malformed request body." So we had managed to get something as simple as the format of the request body wrong, but because we had no proper testing environment, we had no way of knowing that until we were in production. So the bottom line is: find a way to test your third-party services in a meaningful way. And the best way to do that is for your third-party service provider to expose a sandbox or a testing environment, where it would be cheaper for you to test.

The eleventh factor is called logs. Logs are really important for our application, because, as you have seen from the previous example, this is where we go when things go wrong and we want to know what exactly has gone wrong. But we should be treating logs in a special manner, and that is treating them as event streams and not as files. What I mean by this is that when we are looking at logs, we usually don't just go to the production machine and look into the console; instead we have them shipped to a third-party system like Splunk or Sentry or whatever your company is paying for. But the golden rule here is that our application should not care how these logs get from our application to that third-party service provider.
Ideally, the application should just write to standard out and standard error, and we should have another process, or a sidecar, which reads from standard out and standard error, formats the entries properly, and sends them to our third-party service provider. This gives us flexibility, because if we want to change what is on this side, we won't have to make any code changes in our application; we just have to change this process here.

The last factor is called admin processes, and it states that you should be running admin and management tasks as one-off processes: they should be stateless, and ideally they should be tracked in some way. Which admin processes you have is, again, quite domain- and context-specific, but I will give you an example with something that I think most developers are doing, and that is database migrations. There are a lot of ways to do database migrations. One of them is to log into the production database and execute SQL queries there. But if you are doing this, you are probably doing it wrong, because in this situation there are many things that can go wrong. A better way is to put the SQL that you want to execute in a file, commit that file into version control, and have it executed on your database as part of your deploy script. This gives you accountability and traceability of what has been executed and when, and it also gives you reproducibility, which means that if we want to execute it multiple times, across our different environments, we won't have to do it by hand; it will be done automatically for us. And this is how we should be doing admin processes: in a traceable and reproducible way.

And these are the twelve factors. I suppose that even if you don't find value in them exactly as they are defined, what you can take away from this presentation is that you should be as explicit as possible, you should be as decoupled as possible, and you should not be making any assumptions about anything.
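As a concrete illustration of the migrations workflow described above (the file name and schema are hypothetical, my own example), a migration is just a versioned SQL file, committed to the repository and applied by the deploy script in order:

```sql
-- migrations/0007_add_last_login.sql
-- Committed to version control and applied automatically by the
-- deploy script, never typed by hand against the production database.
ALTER TABLE users ADD COLUMN last_login timestamptz;
```

Because the file is in version control, every environment gets exactly the same change, and the history shows who changed what and when.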
And if you do this, it will give you portability, so you will be able to run your application anywhere, thus not being tied to a cloud or service provider; it will give you scalability, so you will be able to scale when you need to; and if you are not yet running in the cloud, it will take you one step closer to it. My name is Anton Sankov, thank you very much, and don't forget to install our GitHub app.

[Audience question, paraphrased: in the case where you have different databases, there would be integration tests, but shouldn't unit tests also run against the same database?]

Yes, if you are testing against an actual database backend in your tests, you can solve this problem. But in many situations people don't run an actual database in tests; they just mock it out, too. So yeah, this can be solved by testing, if you have the proper infrastructure. Good point. Anyone else? Okay, thank you, and I will be hanging around.