This talk will be about where to run the applications that you are building. First, about myself: I am now an engineering manager at Twisto, and I also do some DevOps consulting on the side, because I don't work full-time at Twisto. Before that, I co-founded a machine learning startup that still exists. If you have any questions, just hit me up on Twitter or write me an email.

This talk is mostly for software-as-a-service companies that are growing. I will show how we made our infrastructure choices at Twisto, and at the end I will share some practical tips on working with bare metal servers, and on combining approaches, if you want to go the same route as we did.

When you are making an online service, most developers focus on the shiny things: the software, the user experience. But there is a lot of hidden infrastructure and supporting work underneath. A diamond cannot sit in a void: your software needs servers, and those servers need to be in some data center. And just as the choice of where to display a diamond matters, the infrastructure has a big impact on how people perceive your service and how easy it will be for you to develop it. There are a lot of choices to make when you are thinking about deploying your service.

So we have established that when you are building software, you will need computers. But where do you get them? What is the best place? One option is platform as a service, for example Heroku, which is my favorite. There are many alternatives; AWS has something that tries to come close, called Elastic Beanstalk. All of these services aim to make it easy for a software developer to deploy their software to a public site, usually through some integration with the development tooling. With Heroku, you just push to a Git server, and Heroku does all the magic of installing dependencies, building, and redeploying the application in a minimally disruptive manner. It's similar with Elastic Beanstalk: there is a command-line utility called eb, you just run eb deploy, and it's all integrated into the service.

So if you are starting up, unless you have a specific requirement, I would definitely not recommend setting up your own servers or virtual machines; just use Heroku or something like it. It will save you quite a lot of time in the beginning, and it's not that costly. But a year or two later, you may find that you have more specialized requirements or that you need more power, and it starts getting expensive. Then you may think about other options.

So what are the other options for running your service? Well, you could go shopping for a server. I would not really recommend it: there is a lot to know about servers, there are the trips to the data center to set everything up, and it's usually not worth it. If you do go this route, as I have seen at some companies I worked for, at least use server housing, and don't try to build your own data center; it's rarely worth it. You could rent virtual machines in the cloud, or you could use more sophisticated services. If you rent plain virtual machines, they may become quite costly compared to running your own servers; you are paying several times as much.

And if you use the services with higher added value, like load balancers and so on, they are not as costly. But when you combine these services, a lot of the system lives in the interactions between them, and you will need people who understand the strengths and weaknesses of each service and how to put them all together. For example, this screenshot is taken from an Amazon blog post on recommended architectures for running WordPress, and it's not something that you, as a developer, would just start doing out of the blue; there are a lot of things to think about. So if you are in the cloud, either you use virtual machines, which are overpriced, or you use these higher-level services, but then you need to understand them, and there is hidden complexity.

There is also the option to rent dedicated servers, physical hardware, from several companies: Hetzner, for example, is a big one, and there are multiple local data centers (in Prague there are at least four or five) with reasonable offers. You don't get as many managed services, although with the small companies you can usually negotiate, for example, load balancing as a service. But you get the best price for raw power, and you can build a lot on top of servers like this.

So these are the options, and now I'll talk about what we did at Twisto. First, what is Twisto? Twisto is a fintech company that has been around since 2013. We issue Mastercards, we operate in two countries, the Czech Republic and Poland, we have integrations with Apple Pay and Android Pay, and we move a serious amount of money for our customers.

So what did the infrastructure look like in Twisto's case? In the beginning, seven or eight years ago, there was a cheap hosting, just something to get the service up and running; no customers yet, so basically no worries about data. Since then, the company moved to a dedicated server, and quite soon after that added a backup server. In the early days, as I said, there are many business risks that can kill a company even if IT makes no mistakes, so the company prioritized minimizing the business risks by spending the engineers' time on actually improving the product and iterating on the business quickly. Yes, we could have lost several days of uptime if the server had failed, and we could have lost some data, but it was a calculated risk, and this risk is not as big as you may think: servers are quite reliable, and many factors outside of IT made this risk acceptable.

Later we started to scale the platform up and split the application server from the database server. And from the single application server we built a Kubernetes cluster, mostly so that nobody has to be woken up during the night when maintenance needs to be done or some piece of hardware fails. This cluster works quite well, and we are now working on live replication of the database, to be able to do maintenance during working hours instead of taking the full system offline, as we do today when we need to do some upgrades. During all of this time we have had cloud backups: we use the cloud for durability, and we also use it for scaling up and down when we need it, so we have many staging and testing environments for the developers, on demand, in the cloud.

We use Amazon, and it's great for things where we don't want to predict demand, such as how many testing environments we will need in a month or two, and where we are billed by the minute or by the second. It's very easy for the developers, and we don't want to reinvent that tooling. But for production, we are very happy with the on-premise servers.

Every decision has a cost, so what did this one cost us? For a long time, only a few engineers could deploy a new version of the application. We were deploying daily or every few days, but it always relied on one of a very small group of engineers, because the deployment essentially required admin access to the server. There was a script based on a tool named Fabric, which logged into the server, checked out the latest Git revision that had been tested by CI, and then ran the database migrations, restarted some services, and so on.
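To make this concrete, here is a minimal sketch of that kind of Fabric deploy, written against the current Fabric API; the host name, directory, branch, and service names are made up for illustration, not our actual setup.

```python
# Minimal sketch of an SSH-based deploy in the spirit described above.
# Host, paths, and service names are illustrative assumptions.
from fabric import Connection

def deploy(host="app1.example.com", app_dir="/srv/app"):
    c = Connection(host)
    with c.cd(app_dir):
        # Move the checkout to the latest revision that CI has tested.
        c.run("git fetch origin")
        c.run("git reset --hard origin/master")
        # Install dependencies and run database migrations
        # (a Django-style manage.py is assumed here).
        c.run("./venv/bin/pip install -r requirements.txt")
        c.run("./venv/bin/python manage.py migrate")
    # Restart the app processes so the new code is picked up.
    c.sudo("systemctl restart app-web app-workers")

if __name__ == "__main__":
    deploy()
```

The exact commands don't matter; the shape does: log in over SSH, update the checkout, migrate, restart. That shape is precisely why deploys required admin access and one of a few trusted operators.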
Now we have better tooling that automates deploys, but because our infrastructure was quite simple, we lived with this for quite some time. We also had a few outages that could have been prevented by a setup with multiple availability zones, but that happened only about three times during these last seven years, and we had far more problems that we introduced ourselves in the application code than outages caused by the data center or the infrastructure.

On the other hand, there are benefits we gained from this choice. Development is easier: in the beginning, if you used Ubuntu on your development machine, you had pretty much the same environment as production, so it was easy to test what would happen if we changed something. We have faster CPUs than are available in the cloud. We have lots of RAM, so we don't have to do certain optimizations as early as we would have to if we were paying cloud prices for memory. The overall architecture of the application was really simple in the beginning. And because dedicated servers are cheaper for the raw power, we are saving something like two or three engineering salaries, and those salaries don't go only into infrastructure maintenance but also into building developer tooling: one-click staging environments, snapshots of the database in a few seconds, anonymization of the database so that we take less risk with the data, and many other things.

So what is the resolution of the old question of whether it's better to run software on-premise or in the cloud? We think it's an outdated question. We are happy mixing the two: dedicated servers for production; cloud VMs for supporting services and for things where we need higher availability, like card transaction authorizations; and software as a service mixed in. In my opinion, this is also how you avoid vendor lock-in: you can think about each of these services, each part of your stack, individually, and say, for example, that Firebase no longer suits us, so we are looking at how to replace Firebase, not how to replace an entire Google Cloud setup.

And now for something completely different: I promised you some tips for when you decide to go this route and use bare metal servers. The first tip, the most important one I would say, I learned by practice: there is a technology called KVM. Not the Linux virtualization one; this one stands for Keyboard, Video, Mouse, and it's basically remote access to the machine. If your server supports it, you can configure the BIOS remotely and install your operating system remotely, and you save a lot of calls to the data center's support people and a lot of trips to the data center. Just be sure it's integrated on the motherboard: some cheap servers don't have it, an attachable KVM is not as good, and the brand-name servers often license it separately. So talk to your data center about whether they support it before renting the server.

The second tip: get used to disks filling up much faster than you expect. Buy more disks in advance, and be sure to have free slots for disks in the servers. It's quite easy to extend the disks if you use LVM on Linux, which stands for Logical Volume Manager: you just add new disks, and the software running on top of them, like the database, doesn't notice anything except that there is more free space. So just be ready for the fact that you will need to add more storage.
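Growing the storage then boils down to a handful of LVM commands. Here is a sketch wrapped in Python; the device name (/dev/sdb), volume group (vg0), logical volume (data), and ext4 filesystem are all assumptions for the example, so check yours with lsblk and vgs first.

```python
# Sketch: extend an LVM logical volume with a newly added disk.
# Device, volume group, and LV names are assumptions for the example.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def grow_storage(new_disk="/dev/sdb", vg="vg0", lv="data"):
    run(["pvcreate", new_disk])      # prepare the new disk for LVM
    run(["vgextend", vg, new_disk])  # add it to the volume group
    run(["lvextend", "-l", "+100%FREE", f"/dev/{vg}/{lv}"])  # grow the volume
    run(["resize2fs", f"/dev/{vg}/{lv}"])  # grow the (assumed ext4) filesystem online

if __name__ == "__main__":
    grow_storage()
```

The database keeps running throughout; afterwards it simply sees more free space, which is exactly what makes "just add a disk" a routine operation instead of a maintenance window.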
The third tip is that virtualization, whether virtual machines or even containers, can really wait until you have multiple teams that need to share the servers somehow. We have been quite happy with the monolith; even on Kubernetes the application is still a monolith, and we can deploy it ten times a day without downtime, which really works for us. So if you introduce a new technology like virtual machines on your bare metal servers, or if you split your stack into microservices, be sure you know what you are getting as a benefit. We obviously use virtual machines in the cloud, but we don't use them at all on our bare metal servers.

The fourth tip: once you manage three or more servers, it's time to look at unattended installation. You write a short script or answer file containing the server name, the IP addresses, the packages you need installed (probably an SSH daemon, at minimum), and so on; then you boot from an installation ISO image that contains it, and the server installs itself. You don't want to spend time watching servers boot and installing them by hand, especially if you need to do it several times. Red Hat has Kickstart for this, Debian has preseed, and it can also be scripted with a plain shell script, which is not that hard; the Debian preseed is not as good as the Red Hat tooling, though. If you run more than about 20 servers, there are systems worth a look that manage all of this for you: there is Foreman, and Ubuntu has MAAS, Metal as a Service, but I don't have personal experience with them so far.
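The "short script" really can be that short. Below is a hedged Python sketch that stamps out one Debian preseed file per server from a template; the hostnames, addresses, and package list are invented for the example, and a real preseed file would answer more installer questions (disks, users, mirrors).

```python
# Sketch: generate one Debian preseed answer file per server.
# Hostnames, IPs, and packages below are invented for the example.
PRESEED_TEMPLATE = """\
d-i debian-installer/locale string en_US.UTF-8
d-i netcfg/disable_autoconfig boolean true
d-i netcfg/get_hostname string {hostname}
d-i netcfg/get_ipaddress string {ip}
d-i netcfg/get_netmask string 255.255.255.0
d-i netcfg/get_gateway string {gateway}
d-i pkgsel/include string openssh-server lvm2
"""

SERVERS = [
    {"hostname": "app1", "ip": "192.0.2.10", "gateway": "192.0.2.1"},
    {"hostname": "db1", "ip": "192.0.2.11", "gateway": "192.0.2.1"},
]

for server in SERVERS:
    path = f"preseed-{server['hostname']}.cfg"
    with open(path, "w") as f:
        f.write(PRESEED_TEMPLATE.format(**server))
    print("wrote", path)
```

Each generated file gets baked into, or served alongside, the installation image, and the installer then answers all of its own questions; Kickstart works the same way on the Red Hat side, just with a different file format.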
Before we move to the questions, there is one last tip: if you like what we do, if you would like to be part of a company that simplifies payments, moves real money, uses bleeding-edge technology, and runs on Python, then hit me up; we are hiring both DevOps people and developers. Thank you for your attention, and now it's time to answer some questions. Just one question: do you have entry-level positions open for DevOps? For DevOps, I think we currently don't have an entry-level position, but for development we do. But definitely hit me up; I'll be happy to have a call with you, and I think we can figure something out.