A bit of introduction about myself. I run a software company called Patri 365. We do chat aggregation for a lot of social platforms, like Facebook and LINE. In real terms, that translates to a lot of workload, and over the past seven years I've been experimenting with a lot of deployment solutions and operational stacks that can serve these fairly high requirements. When we started off, we were doing everything pretty manually, on a $5-a-month VPS hosted on DigitalOcean. Gradually that became insufficient, and we had to grow the server farm organically. Along the way we adopted a lot of open source technologies, so I'm going to talk about some of them. Some of the solutions here are things you can experiment with yourself, and some are platform services, like the Azure services we've also been fortunate enough to work with.

These are what the present requirements look like. We have about 600 requests per second coming in, hammering at the servers all the time. We also need to deploy new versions of our software constantly: they're updated about once a day, which translates to about five deployments a week, all with zero downtime. And the cluster needs to be fault-tolerant: if any one server goes down, the entire cluster must keep operating, and the load must shift to the other servers instead.

So those are the key challenges we face. The first is that deployment has to be repeatable, so you know exactly the configuration and dependencies that lead to a given setup. If you go back ten years, deployment used to mean installing whatever software and libraries on the server machine matched your development machine. If your software required a certain library, the exact version of that library had to be installed on the production machine as well, so that when your application code required it, it would load. The second, scalability, is also a demanding requirement. Suppose your traffic looks like this: during peak load, you want to scale up your server farm to accommodate more traffic; during idle periods, you want to scale it down to take advantage of the cost savings, which is one of the primary reasons for adopting the cloud in the first place. And the third challenge is how to engineer a cluster that is robust and fault-tolerant, to the degree that it detects a failure in any part of the system and shifts the traffic somewhere else.

So what's happened in the last ten years? Docker has been something of a revolution in this space. Before Docker, we had tools like Chef, Puppet, and Ansible, which do configuration management on a machine. This is a movement also called configuration as code: instead of installing software and libraries one by one until your server can satisfy all the dependencies, you list the exact configuration, dependencies, and versions required to bring the server up to a repeatable state that is right for your software.
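To make that concrete, here is a minimal sketch of configuration as code, assuming an Ansible-style playbook (Ansible being one of the tools just mentioned); the host group, package, and version pin are illustrative, not from the talk:

```yaml
# Hypothetical Ansible playbook: declare the exact packages and versions
# the server needs, instead of installing them by hand.
- hosts: webservers
  become: true
  tasks:
    - name: Install a pinned version of nginx
      apt:
        name: nginx=1.14.0-0ubuntu1   # exact version, so every rebuild is identical
        state: present

    - name: Ensure nginx is running and starts on boot
      service:
        name: nginx
        state: started
        enabled: true
```

Run the same playbook against a fresh machine and you get the same server every time, which is the "repeatable state" the talk is describing.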
And this leads to a second philosophy that never existed before: we treat servers as cattle, not pets. What's the difference? When your pet is sick, you take it to a vet. When one of your cattle is sick, you shoot it and move on. That's how we handle servers nowadays. Once you have a repeatable configuration, if a server starts to misbehave, its internal state has gone wrong, its file system has filled up, or something else you hadn't anticipated has happened, you just shut the server down, terminate it, and rebuild a new server from scratch to replace the one you lost.

With Docker, the container becomes the contract that defines the boundary between the developers and the ops team. Developers work with the stuff inside the containers, and the ops team is responsible for maintaining the uptime of those containers. Docker on its own is usually good for fairly small-scale operations, but once you expand from one machine to a cluster of 5, 10, maybe 1,000 machines, you need something more advanced.

A lot of people would say this talk is unnecessary, because in 2019 the enterprise has standardized on Kubernetes, which is a deployment manager, an orchestrator for Docker. If Docker defines the boundary between dev and ops, Kubernetes is the tool that lives exclusively in the ops realm and takes care of making sure your deployment runs smoothly. It has a plug-in for everything: you can make it handle large clusters, you can make it scale, you can integrate it with almost any software, service, or stack you have. But in my opinion, if you are anything short of a large enterprise, adopting Kubernetes as the first choice is overkill, because it creates more problems than you previously had. So in this talk I'm trying to explore the middle ground between a homegrown solution and enterprise Kubernetes, to see if there's something in between that fits most small companies and teams that don't have full-time ops people to run a complicated cluster.

Yeah, that's what Kubernetes feels like sometimes. If some of you have played with it before, you'll totally understand. Its configuration really takes the concept of configuration as code to an extreme: you have YAML files defined for everything. And it's worth noting that while Kubernetes takes care of orchestrating containers, it doesn't actually do auto scaling in the sense that it's automatic. You need to tune the scaling parameters manually: you tell it that when the CPU load gets high enough, it should add more instances, as in the sketch below. And you still need to interface with the underlying infrastructure API: if you're on AWS, you talk to the Amazon API to add new instances to the cluster; if you're on Google Cloud, it's similar. You need to talk to the cloud provider, because Kubernetes doesn't automatically abstract that away from you.
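Here is a minimal sketch of such a scaling rule, assuming a standard Kubernetes HorizontalPodAutoscaler; the deployment name and the thresholds are illustrative:

```yaml
# Hypothetical HorizontalPodAutoscaler: Kubernetes only acts on the
# thresholds you declare here -- nothing is tuned for you automatically.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: chat-api                 # illustrative name of the deployment to scale
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: chat-api
  minReplicas: 2                 # never drop below 2 pods
  maxReplicas: 10                # never grow beyond 10 pods
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU passes 70%
```

Note that this only scales pods within the cluster; adding the underlying machines still means talking to the cloud provider's own API, as just described.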
On the opposite end of the spectrum, there's another movement called serverless, which is kind of a misnomer, because everybody knows the servers exist somewhere. But the idea is that you don't need to think about them. If you come from a programming background, mostly what you care about is code, not configuration. Serverless fulfills that demand: you think only about your code and your functions, the parameters they receive and the output they give, and not about how they run or how they scale up and down. Think about how the cloud computing movement commoditized the physical infrastructure, to the degree that when you spin up a cloud instance, you don't think about where in the world it's happening. With serverless, it's the virtual infrastructure, the cloud platform itself, that's being commoditized. If you want to run a piece of code, you don't think about where in the world, or where in the logical cluster, that code is going to execute. The platform guarantees that the code will run under a well-defined contract, within a reasonable timeframe, and at the end of the month it bills you for exactly the seconds you consumed running it. Some of the most popular serverless platforms right now are AWS Lambda and Google Cloud Functions, and they fulfill this promise reasonably well.

I believe this movement is going to become very important, because as more people realize that the virtual infrastructure can also be commoditized, it scales down really well to solo developers or very small teams that don't want to manage their own infrastructure. I have deployed serverless applications that have been running for three or four years with essentially zero maintenance. If you've run any kind of infrastructure before, physical or virtual, you'll know how difficult that is to accomplish in practice, because in the real world there are things you must do to avoid platform rot: you need to keep updating your OS, maintain your firewall configuration, and make sure the disk isn't full and the server doesn't deteriorate to a state where it's incapable of running your app anymore. With serverless, all of that is abstracted away. You can write the code once, leave it running, and forget about its existence for years, until at some point you need to touch that software again.
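To illustrate how little you declare, here is a minimal sketch assuming the Serverless Framework deploying to AWS Lambda; the framework itself isn't named in the talk, and the service name, handler path, and route are hypothetical:

```yaml
# Hypothetical serverless.yml: you declare the function and its trigger;
# where and how it runs is entirely the platform's problem.
service: chat-hooks              # illustrative service name

provider:
  name: aws
  runtime: nodejs18.x            # you are limited to runtimes the platform offers

functions:
  handleMessage:
    handler: src/messages.handle # illustrative path to the function's code
    events:
      - httpApi:                 # invoke the function on an HTTP request
          path: /messages
          method: post
```

There is no server, OS, firewall, or disk anywhere in this file, which is exactly the point being made above.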
In between both extremes, there's something up and coming, as recently as last year, called serverless container platforms. That's not a well-defined name yet, but basically, if you marry serverless platforms and container platforms like Kubernetes, you get something like AWS Fargate or Google Cloud Run. They shift the boundary. The limitation of serverless is that you need to write code in the exact configuration the platform supports: it usually needs to be JavaScript, it usually needs to be Node.js, and it needs to run the versions the platform providers offer. With Docker, on the other hand, you get the freedom and flexibility to choose the versions of the software you want. As we talked about in the beginning, the exact stack of PHP, of Python, of whatever libraries your software requires is installed in your Docker container, because you wrote it just so. With serverless container platforms, you get that kind of freedom and flexibility, but you also don't need to worry about how the container operates on the virtual infrastructure. Once you point the platform at the container, the platform is responsible for scaling it up and down, for terminating instances that misbehave, and for billing you for the exact container-seconds your containers spend running. I think this is a very good middle ground between both worlds, for when you need some control over your deployment platform but don't want to micromanage the scaling parameters.

So essentially, to conclude, we covered different paradigms of managing deployments across the whole spectrum. One thing people don't do anymore is spin up servers and treat them as pets; that needs to go away. Moving onto the cattle side of the spectrum, on one end you micromanage and define the parameters manually: you use things like Docker, or once you scale up, Kubernetes. On the other end, you want a platform that exists only to run your code, so you don't have to think about anything else: you run serverless. And in the middle, you meet somewhere with containers, where you have the flexibility to define the stack your application requires but don't need to think about scaling or fault tolerance: you use serverless container platforms like Fargate and Google Cloud Run. That's it for my talk today. Thank you for coming, and if you have any questions, I'll be happy to talk about them.

Q: What is the advantage of serverless container platforms versus normal serverless? What is the use case?

A: With normal serverless platforms, especially if you look at some of the newer ones, like Azure Functions and Google Cloud Functions, they can be very limited in terms of the runtimes they provide. You are stuck with the Node.js versions they require. And dependencies can be tricky to install: if you have libraries that are pure Node libraries, they can execute, but if you have dependencies that are compiled executables, like .so files, they can be tricky to install.

Q: And what is the billing model of serverless containers? Is it billed per second, like serverless? How is the bill calculated?

A: It's also per second of runtime; usually they bill in small fractions of a second. But it's a bit different. With serverless, you're usually billed per second of function runtime. With containers, one container is usually running some web-serving platform, like NGINX, and NGINX can already serve more than one request at once. So if your NGINX is serving, say, five requests at once, you are billed for the container time from when the first request starts until the last request ends. But if you get to the point where the platform decides it needs to scale up, then your bill is multiplied by the number of containers it needs to run to serve your application.
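That concurrency setting is exactly what a serverless container platform lets you declare. Here is a minimal sketch of a Cloud Run service definition, assuming the Knative Serving YAML that Cloud Run accepts; the names and numbers are illustrative:

```yaml
# Hypothetical Cloud Run service (Knative Serving YAML): point the platform
# at a container image and declare how many requests one copy may handle.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: chat-api                      # illustrative service name
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/maxScale: "10"   # never run more than 10 copies
    spec:
      containerConcurrency: 5         # one container may serve 5 requests at once
      containers:
        - image: gcr.io/my-project/chat-api:latest  # any stack you can put in an image
```

Everything else, scaling between zero and ten copies and billing for the container time actually used, is the platform's job.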
Q: So with serverless containers, one instance, like on Google Cloud Run, can serve many requests at once. But normal serverless can run many instances too?

A: Yes, definitely it can run many instances. But with normal serverless, you are billed per request. If three requests can be served with one container, the platform doesn't care; it's going to bill you for three requests. With serverless container platforms, if three requests can be served by the same container, the platform bills you for that container time.

Has anybody got any other questions?

Q: From your experience as a CTO, what parameters did you use to decide what counts as small scale, medium scale, or large scale?

A: That is a very deep question; it actually goes deeper than this talk. But in general, I would think of it in terms of the people who need to manage that infrastructure. If it can all be managed by one person, you don't need a lot of explicit configuration; it can live inside your head. But if it gets to the point where you need a team, where you need a night shift and a morning shift, then you want that configuration to be visible, or you want to abstract it away entirely.

Thank you. Thank you very much to Mr. Patan Thananat. OK, thank you.