All right. Thanks for coming, everyone. I'm Hynek, and my goal for today is to make your life easier by doing less. I want you to stop worrying about a lot of things in your applications. I want you to stop worrying about where your logs go and how they are processed. I want you to stop worrying about where your configuration is coming from and how it's structured. And I want you to stop worrying about where your application is running and how it got there in the first place. Instead, I want you to think in different terms: about how to make your application play nicely with others. Because the tasks I just enumerated do not go away. Someone still has to do them. But it doesn't have to be you. Because if you stop thinking about the application as a Django web app that serves cat pictures and start thinking about it as a universal building block with clear interfaces that others can rely on, those others can start doing the work that you just dropped. So we are going to outsource complexity that is not inherent to the value of your application. In other words, we are creating a lot of SEPs (somebody else's problems) tonight. And as a side effect, by the end of this talk, your application will also be web-scale by accident, which is not too bad either. Now, as with actual building blocks, it shouldn't matter whether you're building a doghouse or an airport. And I'm from Berlin; I know everything about airports. Believe me, it's fine. Your app should look the same on a laptop or on a cluster. It shouldn't know the difference. And this dramatically simplifies development, testing, operations, scaling, and also moving to new platforms. And while it may sound a bit academic at first, platform agility may be a bigger issue than you think, because every infrastructure evolves over time until it goes extinct. So you will have to touch that CentOS 5 server eventually.
And even if you are happy with your state-of-the-art Kubernetes cluster right now, I'm going to bet you money that in five years, there's going to be a good business model around moving people off this legacy platform. So how do we get there, and what exactly do we gain? I'm going to show you a very simple web application. But before you start leaving the room: this talk is not specific to the web or anything like that. The web is just easiest to talk about, so I'm going to use it as an example. Most of the things I'm going to say apply just as well to other types of applications that run on servers. And while I'm talking meta: as with all of my talks, I will be using all of my time, because I have a lot of material, so there will be no Q&A. But I will be available throughout the conference. Feel free to come up and talk to me. I'm here to talk, not to sit around and play with my phone. I've also compiled a page with everything I'm going to talk about, with all the links and all the concepts, so you can dive into the topics you find interesting, because, as always, I can only scratch the surface. Now, behold our beautiful app. This is a very exciting Pyramid view. And the only reason this slide exists is so I do not have to tell you to imagine a simple view. This is it. We are going to play with this one. We are going to make this great. So to run this in the Python web ecosystem, you need to create a WSGI application. What that is exactly is not that important, but it's something you have to build, and how you do that depends on the framework. This is how it looks in Pyramid, and it's roughly the same for all frameworks: there's a function that does something, and in the end it returns something that a WSGI container, or a test framework, can work with. That's the nice thing about this. I like to put it in a file called app_maker.py, because it's a function called make_app, so why not?
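As an illustration of the factory pattern just described, here is a minimal, framework-free sketch. The names are made up for this example; Pyramid's real API goes through a Configurator instead.

```python
# app_maker.py: a minimal, framework-free sketch of the factory pattern.
# make_app() takes plain data and returns something a WSGI container
# (or a test) can call directly. Names are illustrative, not Pyramid's API.

def make_app(settings):
    """Build and return a WSGI application from plain data types."""
    greeting = settings.get("greeting", "Hello, EuroPython!")

    def application(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [greeting.encode() + b"\n"]

    return application
```

A test can call make_app({...}) directly and invoke the returned callable, no server involved.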
And this initialization code, once it gets a bit more complicated, is notoriously hard to test, because you have to simulate the environment it's running in. So the goal is to keep this code as simple as possible, with as few if branches as possible; in this case, if branches between testing, development, and production. And of course, this principle is not specific to the web or WSGI at all. You always want to isolate the creation of your app, where you get information from the outside, instantiate your classes, and so on. The rest of your application is then just classes and functions that take normal data types, which means it's much easier to test. So in Pyramid, you basically create a configuration object, you call a bunch of methods on it, and at the end, you call make_wsgi_app(). And my clicker is not working. This WSGI app is actually just a function. So if you've ever seen a tutorial for a WSGI container like Gunicorn, it's actually really simple; someone just has to build it. And speaking of Gunicorn, this is how you could use it. In this case, I'm going to assume our application lives in the namespace sample, app_maker is the module, and make_app is the factory that we are calling, and it returns something that Gunicorn can work with. It comes up, we can curl it, and we even get Apache-style log output, which is cool. Now, if we wanted to deploy this application as it is, you could, of course, take this command line and put it into your systemd unit file, for example. But that would bleed the choice of a WSGI container into the configuration, which, if you are following best practices, is probably not part of the application repo; it might be something in your Ansible or Salt or Puppet or whatever repo. So maybe you want to switch out your WSGI container at some point. Now you have to coordinate your changes between having the new container in your application and changing the configuration. This is not great.
It's error-prone, so it's better to avoid. Yay, my clicker works. So what are we going to do? Remember, we want a building block. We want a standardized way to start applications, and it shouldn't matter at all whether it's a Python application or a web application. It could be C++, it could be assembly, it could be whatever. So how do we do that? Well, we do it the same way as our ancestors in the 1970s: we write a shell script. And this shell script we just check in along with our application. You can call it whatever you want. In Docker, it's common to call it docker-entrypoint.sh. For this talk, I'm just going to call it run-app.sh, because that's what it does, and it fits my slides better. One thing I would like to point out is this exec thingy here. It means that the shell process that is started here is replaced by your application, which is very important if you want to receive signals, and you do want to receive signals; otherwise, you run into all kinds of trouble. This little tidbit, the 2>&1, also straight from the 70s, means that standard error goes to standard out, which means that you have one stream of output, one stream of logs, which is much nicer to handle. And this way, a shell script becomes the adapter between your application and its environment. It's a very simple adapter, but still: someone who wants to run your application at this point just has to run a shell script. And this works just fine in systemd, in Docker, or in Procfiles, which you may know from Foreman, or forego, or honcho; there's a bunch of those. They're quite popular because Heroku uses them internally for their deployments, but they're also very useful in local development. So what do we have? We have our black box, which is our application. It's a building block, because it's very easy to run. It exposes its service on localhost, which is the default for Gunicorn in this case. And it logs to standard out, which is great.
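A minimal version of such a script, assuming the sample.app_maker module and the Gunicorn invocation used as the example here, could look like this:

```shell
#!/bin/sh
# run-app.sh: the whole adapter between the app and its environment.
# The Gunicorn invocation is one plausible choice, not the only one.

# 2>&1: merge stderr into stdout, so there is exactly one log stream.
# exec: replace the shell with the app, so signals reach the app itself.
exec gunicorn 'sample.app_maker:make_app()' 2>&1
```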
We are done at this point, because in development, this is your terminal, which is exactly what you want. If you run it under systemd, it gets forwarded to syslog, which means that you get forty years' worth of Unix experience for handling logs. And cluster managers like Kubernetes or Nomad have, of course, first-class support for this kind of logging. You can have streaming over the network, so you can watch live what's happening on your servers. And more importantly, they will help you aggregate your logs in something like Logstash or Graylog. So I would like to stress here that you should not try to log to files. And more importantly, do not try to rotate them. That is something that will eventually make someone very mad. Anyhow, you have clear interfaces, I think. Now, in the next step, my goal is to make this example more realistic by adding more features, while staying as close as possible to this ideal. And let's tackle the most glaring problem here first, which is how the service is exposed. Listening only on localhost is useful in exactly two scenarios: either you really just want to access the service on localhost, or you have a local nginx that will expose it to the network or wherever you want. That's good, but we need to do better. So let's shed those shackles and talk about configuration. Here I find it important to stress that there's a big difference between application configuration, meaning your application, and the configuration of general-purpose software. Because general-purpose software, like Apache or nginx, needs to accommodate everyone. They have to make everyone happy. Your application only has to make you and your coworkers happy. So you have to change the question from "what could be configurable?" to "what varies?". What varies between deployments, between environments? And it turns out it's very, very little.
So there are things that some people put into their configuration that don't really belong there, like the routes configuration or middleware configuration; that's still quite common in the Pyramid world. Or logging. Let's talk about logging for a moment. When you configure logging, there are basically two things that matter. First is the log level: in development, you probably want more logs, and in production, you want fewer logs. OK, that's simple enough. Then there's the log format, which is a bit more complicated. But again, you really just want two options: one human-readable format with colors for your terminal, and one easily parsable format for production, which can be some key-value-pair thing or JSON. So what you do here is define those two configurations inside your application, and then switch between them using an option that you pass into your application. This way, you can test your logging configuration very easily, because it's right there; it's not in a different repo, it's not living over in the Ansible code. What you do need to make configurable, though, is of course how the service is exposed: you want to be able to tell your application where to listen. And also external resources, like web APIs or databases; these are things that may change or that need credentials. So these are genuine configuration, things that should be configurable. Now, once you've identified those few options, you need to pass them into your application. So how do you do that? You could put them into an INI file, which is simple enough, because Python has had support for INI files since the 90s, I think. But that has not one but multiple downsides. First of all, in some environments, maybe not in your current one, but maybe in one of your future ones, it's hard or even impossible to inject files. That's just a matter of fact. And some of those options belong to the WSGI container, while some belong to your app. So how do you separate them?
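As an aside, the two-canned-logging-configs idea from above can be sketched with the standard library's dictConfig. The configure_logging name, its arguments, and the format strings are made up for illustration:

```python
import logging.config

# Two canned formats: one for humans in a terminal, one that machines
# can parse. The switch between them is a single option passed into the
# app; nothing about logging lives in an external config repo.
FORMATS = {
    "console": "%(levelname)s %(name)s: %(message)s",
    "json": '{"level": "%(levelname)s", "logger": "%(name)s", "msg": "%(message)s"}',
}

def configure_logging(fmt="console", level="INFO"):
    logging.config.dictConfig({
        "version": 1,
        "disable_existing_loggers": False,
        "formatters": {"app": {"format": FORMATS[fmt]}},
        "handlers": {
            "stdout": {
                "class": "logging.StreamHandler",
                "stream": "ext://sys.stdout",  # always standard out
                "formatter": "app",
            }
        },
        "root": {"handlers": ["stdout"], "level": level},
    })
```

Because both configurations live in the code, a unit test can call configure_logging("json") and assert on the result.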
Back to separating those options: you could, of course, parse the INI file in your shell script and pass the values along. But I think we can agree that this is not great. What you really want is just to pass key-value pairs between processes. And if only there were a simple, reliable, and portable way to do that... there is. It's called environment variables. And they are not relevant just for our simplistic problem here; they are universally supported. So again, systemd and every other process manager supports them, Dockerfiles have first-class support, and every cluster scheduler under the sun does too. There are so many tools by now that will help you with them. For example, direnv will set variables when you enter a directory, and there are like 5,000 others that do the same thing. Service discovery tools like Consul or etcd have companion tools that will fetch the data you want, set the environment variables, and run your application. And of course, Python has access to the environment: it's os.environ. And if you really, really need a file for whatever reason (there can be legitimate reasons to use files), there are solutions for that too. For example, gettext, which is actually for translating software, ships with a tool called envsubst, and it does what it sounds like: it allows you to do very simple templating using files and environment variables. So you can get there. There are more powerful tools for that too, like confd, which supports backends like Redis, so you can pull values out of Redis and put them into environment variables, which is kind of cool. Or consul-template, which is the official one from HashiCorp. So there is very broad support for this kind of thing. Now, back to our concrete problem. It turns out that the host/port problem is kind of common. It has happened to people before that they wanted to configure the host and the port, so a standard, a best practice, emerged: there are two variable names that are called, very appropriately, HOST and PORT.
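Reading those two variables by hand is as plain as it gets; this bind_address helper is just an illustration:

```python
import os

def bind_address(environ=os.environ):
    """The HOST/PORT convention, with development-friendly defaults."""
    host = environ.get("HOST", "127.0.0.1")
    port = int(environ.get("PORT", "8000"))
    return f"{host}:{port}"  # e.g. what you'd hand to a server's bind option
```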
These two are supported by most servers nowadays. And this is great. These are conventions, and conventions are amazing. For example, the aforementioned Foreman: if you define multiple applications in one Procfile, it will enumerate the ports for you, so you don't have to do it by hand. You can start multiple applications at once, and you don't have to fiddle around with the ports to avoid conflicts. The log level is easy enough too: you just reach into the environment and do a little bit of getattr magic. But this is a little bit tedious and ugly, and it doesn't get any more global-state-y than the process environment. So it wouldn't be me if I gave a talk without plugging another project of mine. Let me introduce you to environ-config. I do realize that there are similar projects on PyPI, but at least when I started the project, there was none that did the same thing. What it allows you to do is declaratively define your configuration, including nested groups; as you can see, there's a subclass inside a class. When loading, those names get concatenated, along with an optional prefix, and loaded from the environment. And once it's there, you can just access it like normal nested classes. The Law of Demeter be damned: you just use a lot of dots. Since environ-config is based on attrs (and I have new stickers, by the way, so if you want some, talk to me), you get a lot of stuff for free, like default values, validators, or converters. For example, I love using enums to make sure that I get valid values into my applications; it will just explode and not start up if you pass something illegal. Now, I like to put this thing into a file called config.py, but the declaration by itself does not load anything automatically. So where are we going to load it?
We could load it in the make_app function I showed you before, the one that creates the application, but that's not great, because make_app is sometimes used by tests, and you don't really want to mock out a genuine process environment just to create an application. So instead, let's create a new file, a file that will do the dirty work of grabbing the environment and then just pass an AppConfig instance into your make_app. This one I like to call wsgi.py, which I've seen other people do too, so I guess it's a best practice now. And this is how you load it; it's really simple, just a one-liner. Now this module is the ultimate interface between your application and its environment. And make_app, at this point, only deals with a well-known class, a Python class. So if you try to grab something that doesn't exist, you get an AttributeError. This is so great. And it also gives you full control over your app instantiation: you can create WSGI apps for your tests without too much pain. What's important to note here is that this allows you to use the lowest common denominator on the outside, key-value pairs in environment variables, and structured, well-known, validated data on the inside, once it's past this file. You may have noticed that I now put the WSGI app into a global variable. That has multiple reasons. It's more flexible: not every WSGI container supports this function-call thingy. Also, we would now need to pass an argument to make_app, which makes things even more complicated. And I call it application, because that's another convention. Because Gunicorn assumes it's called application, we only have to pass the module name. So our run script got even simpler: we just pass the name of the module, and Gunicorn reaches into it and pulls out the application thing. Now, in all this talk about environment variables, there's one thing I've left conspicuously out.
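To recap the pattern from this section in one self-contained sketch: all names are illustrative, and the hand-rolled AppConfig stands in for what environ-config automates.

```python
import os

# config.py part: a plain, validated configuration class.
class AppConfig:
    def __init__(self, host, port):
        if not 0 < port < 65536:
            raise ValueError(f"illegal port: {port}")
        self.host = host
        self.port = port

def load_config(environ=os.environ):
    """The only place that reaches into the process environment."""
    return AppConfig(
        host=environ.get("HOST", "127.0.0.1"),
        port=int(environ.get("PORT", "8000")),
    )

# app_maker.py part: make_app() only ever sees the well-known class.
def make_app(config):
    def app(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [f"serving on {config.host}:{config.port}\n".encode()]
    return app

# wsgi.py part: the module-level name Gunicorn looks for by convention.
application = make_app(load_config())
```

Tests can call make_app(AppConfig("127.0.0.1", 8080)) directly, with no environment involved.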
The thing I left out is that time has shown again and again that certain things just do not belong in environment variables. Some things you want to whisper gently into the ear of your application, not make global to your whole process tree. Because environment variables can leak. There are many ways they can leak. They have leaked before. They have leaked from very smart people in very sophisticated environments, so it can happen to you. It may not even be your fault; it may be some weird package that you didn't even install yourself, because it was a dependency of something else you used, and it just dumps the environment, and now your AWS keys are on the internet. So let me be very clear here: I want to ask you to ignore the Twelve-Factor App manifesto on this point, because they got this one wrong. And that's not just my personal opinion; that's quite widely accepted by now. So thank you, Christian. But now things get a little bit hairy, because solutions to this problem are platform-specific. Every platform has its own best way, and it's kind of part of the lock-in too; everybody wants to give you the best possible way. So all I can really tell you is to use your platform-specific thing and leave it at that, because even a rough overview of all these things is a talk by itself. But luckily for you, this intro exists. It's a talk by my friend Noah Kantrowitz, which she gave two years ago, actually also at EuroPython, and I will link it from my talk page. It's interesting if you want an overview. For the sake of completeness: we run Vault. It's vendor-independent and it's quite nice, so whatever you do, you can use it too. It can be a lot cheaper than things like AWS Secrets Manager, where you pay per access and so on. Anyhow, now I'm going to make Christian sad. Since we run Nomad and Vault, which are both by HashiCorp, we get built-in templating for free.
Nomad will just mount a special-purpose file system called /secrets into the Docker containers, and I can template my secrets into a file there. Now, this is not perfect. There are ways for files to leak, too, but those ways are a lot harder than with environment variables, so I personally consider it a decent trade-off. OK, Chris is nodding, so it's fine. OK, thanks. But the safest way to do this is always platform-dependent, because you want to use the best features that are available. And if you want dynamic secrets, you definitely have to do it the native way. So what do we do in programming when we want to hide away an implementation detail? Well, we write a facade, right? And I click too much. Again: write a facade. What I'm asking you to do is just to wrap your secrets client (for Vault, that would be hvac) and build a nice API around it. And now you can decide when you load your secrets: do you do it on instantiation, or do you do it on access? It's your call; your application will not know. And if you switch out your secrets backend, because that can happen, then you just rewrite this one class. The rest of the application will not know. And of course, this is also very easy to replace with a fake in local development: you just return static strings, and you are done. So yeah, your application should not care about the secrets backend. You will have to write more code than for the other things I've talked about, but it's still attainable, and you should really do it. Now, I'd like to point out really quickly that I like to encode credentials as URLs, which is widely supported. Again, you prevent the problem of having to coordinate changes between configuration and secrets. If you have everything in just one place, it's one transaction when you change it, and you don't run into the problem that you changed the host in the configuration, but the credentials are still the old ones, or something like that. All right.
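Such a facade, with a static fake for local development, might be sketched like this. A production implementation could wrap the hvac client for Vault; all names and the example URL here are made up.

```python
from urllib.parse import urlparse

class Secrets:
    """The interface the rest of the application programs against.
    Only implementations of this class know where secrets actually live."""
    def database_url(self):
        raise NotImplementedError

class FakeSecrets(Secrets):
    """Static strings for local development and tests."""
    def database_url(self):
        # Credentials encoded as one URL: host, user, and password
        # change together, in one place, in one transaction.
        return "postgresql://dev_user:dev_pass@localhost:5432/app"

def db_connect_args(secrets):
    """Turn the single URL into keyword arguments for a database driver."""
    u = urlparse(secrets.database_url())
    return {
        "host": u.hostname,
        "port": u.port,
        "user": u.username,
        "password": u.password,
        "dbname": u.path.lstrip("/"),
    }
```

Swapping the backend means writing one new Secrets subclass; db_connect_args and everything behind it stays untouched.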
We still have a nice building block that is easy to run. We inject the essential information that varies across deployment environments using environment variables. Secrets aren't as elegant, but with some effort they're manageable; it's good enough. We expose the service based on the configuration that comes from the variables, and we log based on this configuration. This is still pretty good. Now, keen listeners may have noticed that it's impossible to reload this kind of configuration. If you go down the road of environment variables, that's right out. I would like to reframe this ostensible downside as something good, because it forces you to rethink and reconsider. For example, for things that change not too frequently, let's say once a day, you can just redeploy your application. And now this makes you think about zero-downtime deployments very early on. As someone who has been through this a few times, I can tell you that caring about zero-downtime deployments early on will pay off really, really big time in the end. And the longer you wait to think about these kinds of things, the harder it gets. So it's kind of nice to have it from the first moment, so that you can deploy any time and nothing breaks. And the good thing is, thanks to the fact that we now have a building block, it's actually very easy to attain. Instead of one instance of your application, you have two. You can run them on the same host, and you put a load balancer in front of them, for example nginx. Now, the only new duty for your application is that it has to know how to extract the metadata about the client that the load balancer passes along, because the app is not talking to the client directly anymore. So it has to know about things like the X-Forwarded-For header, or about the proxy protocol from HAProxy if you are writing a TCP-based application. But this is it. That's all your app has to care about.
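Extracting that client metadata can be sketched like so. A framework usually does this for you; this illustrative helper trusts only the right-most X-Forwarded-For entry, the one appended by your own load balancer:

```python
def client_addr(environ):
    """Best-effort client address for an app running behind a proxy."""
    forwarded = environ.get("HTTP_X_FORWARDED_FOR", "")
    if forwarded:
        # Each proxy appends the peer address it saw, so the right-most
        # entry is the one added by our own load balancer.
        return forwarded.split(",")[-1].strip()
    # No proxy involved: fall back to the direct peer address.
    return environ.get("REMOTE_ADDR", "")
```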
And if you use standard headers, chances are that your framework and your WSGI container will take care of this by themselves. At this point, you are ready for rolling updates. So what you do now is, basically, you just tell your load balancer to ignore one instance, and now you can do whatever you want and take as much time as you want. It doesn't matter; nobody will know. First, you will probably want to stop your app without disrupting anyone, and to do that smoothly, you need clean shutdown. I would like to use this moment to reinforce that you should make sure to handle SIGTERM, because Python is not great about this. A lot of applications and frameworks just wrap themselves in a try/except KeyboardInterrupt, which is Ctrl-C, and do not care about SIGTERM. But SIGTERM is the standard signal for terminating processes. It's used by default by systemd, by all cluster managers, by Docker. Make sure you handle it, or make sure that you configure your process manager or Docker to use the signals you do handle, which is possible. In the worst case, if you get this wrong, your app receives a signal and ignores it; your process manager waits for a timeout and then kills it with SIGKILL, which you cannot handle and cannot block. You just get shot in the head. This means you get a very slow shutdown, because of that timeout, and you get no cleanup. That's bad. Anyhow, let's assume you did everything right and your application is down. Now you can deploy your code and take your time. You can change configuration; also take your time. You can edit it on the server if you want. Please don't edit it on the server. Once your app is ready, you put it back into rotation, and you're done. You've deployed something, and nobody noticed. This way, you also leave the dynamic reloading of configuration to your load balancer. Load balancers are really good at being reconfigured while running.
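The SIGTERM point above can be sketched in a few lines: re-raise SIGTERM as KeyboardInterrupt, so one shutdown path handles both Ctrl-C and the process manager. This is one possible approach, not the only one; your framework may offer its own hook.

```python
import signal

def _terminate(signum, frame):
    # Translate SIGTERM (what systemd, Docker, and cluster managers
    # send) into the exception Python already raises for Ctrl-C.
    raise KeyboardInterrupt

signal.signal(signal.SIGTERM, _terminate)
```

Now a `try: serve_forever()` / `except KeyboardInterrupt: cleanup()` block also runs on SIGTERM, instead of the app waiting out the process manager's timeout and being SIGKILLed without cleanup.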
Dynamic reconfiguration is kind of complex to do yourself in your app, so it's great to move this task from your app to someone who can do it better. Another upside: if the deploy does not quite go as planned, the server's on fire, what do you do? Well, first you breathe, and then you don't do anything, because that's the beauty of it. As long as you don't return this instance to the load balancer, nobody will know that you screwed up or that something is broken. You can take your sweet time to do a rollback or a reconfiguration or whatever. Even if you deploy using git pull on production servers like an animal, you will benefit from this. You laugh, but I know some of you do it. So everyone benefits from this approach. It doesn't matter whether you have a super-sophisticated cluster scheduler from the future or an intern with an SSH client: no one will know you screwed up. Now, there's one thing I hand-waved over that I need to talk about. When I said that the instance is added back to the load balancer once it's ready: how does the load balancer know that your app is ready? For that, you have to add another interface. We need introspection. And introspection is an incredibly powerful concept. The default way it's done nowadays is to just expose a web endpoint that reaches into your application. And again, it doesn't matter whether your app is a web app in the first place; you can always expose a web endpoint. It's even in the standard library: http.server. So you don't even need an external dependency for that. What the load balancer cares about is called readiness. So what is readiness? Readiness means that your application is ready to serve, ready to be added back to the load balancer. So you expose an endpoint that checks all the resources the app needs to do its job. If everything is fine, you return a 200. If it's not fine, you return a 500, and the load balancer will not add you back.
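A readiness view in that spirit, sketched as a plain WSGI callable; the checks would be things like a SELECT 1 against the database pool, and all names here are made up:

```python
def make_ready_view(checks):
    """Build a readiness endpoint from a list of dependency checks.
    Each check is a callable that raises if its dependency is broken."""
    def ready(environ, start_response):
        try:
            for check in checks:
                check()  # e.g. lambda: cursor.execute("SELECT 1")
        except Exception:
            start_response("500 Internal Server Error",
                           [("Content-Type", "text/plain")])
            return [b"NOT READY\n"]
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"READY\n"]
    return ready
```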
Unfortunately, there is no clear standard for how to structure or name this kind of endpoint. Most of you, or some of you, may have heard about healthz, which comes from Google. I've heard multiple legends about the z. My favorite one is that it's Google-grade security by obscurity, but it's probably just name-clash avoidance; I heard they have a whole namespace of words ending in z. Mozilla goes a bit Pythonic: they use dunders, so __heartbeat__. Another common approach is to literally take the word and put it into a dash namespace: /-/ready is from Prometheus, /-/readiness is from GitLab. And I personally really, really like the dash namespace, because it makes it super easy to block in an edge load balancer. With HAProxy, it's one line, one rule, and you can just have it as part of your app and nobody will ever see it. Now, the downside of this readiness check is that it's expensive, because you ping your external dependencies; you have to ping your database with a SELECT 1. That's not something I want to do every second, so you shouldn't do it too often. And sometimes you just want to know: is this application healthy? Is it still alive? That's where liveness comes into play. Liveness is very cheap. Liveness just proves that your app is not deadlocked and is running at all, that it reacts to requests. This one is relevant to process managers and cluster managers, so they can detect that your application is deadlocked and needs to be restarted, or that the startup failed and it needs to be rolled back. Again, there are multiple common names. One is /-/healthy from Prometheus, which I find very unfortunate, because it's as close to healthz as it can be and means the opposite. GitLab uses /-/liveness, and Mozilla uses __lbheartbeat__.
This is kind of interesting, because it seems (I didn't double-check with the Mozilla people, but at least according to the documentation) that they use liveness for the load balancer. I personally do that too, because it's cheaper, and all the resources I'm using in my applications are lazy, like database pools. So what I actually use for the load balancer is this. This is straight from production: no serialization, just plain text, no permissions. We block it at the edge anyway, so nobody will ever see it. You return a status 200, and your load balancer knows that your app is alive. You add it to your routes, and once your application is capable of routing this view, you know it is alive. And at this point, my pool is initialized, so the app knows how to connect to the database, and it can either start serving, or start serving errors; but I feel like my errors are better than a 502 Bad Gateway from HAProxy. Your mileage may vary. I do have the traditional expensive endpoint too, but I personally use it for monitoring, to check that the app is healthy and to alert when it isn't anymore. It's a trade-off. One of the reasons is also that I use Nomad with Consul, and Consul has only one type of check, so I have to choose one of the two, and I prefer the liveness one; I don't want Nomad to roll back my deployments just because the database is down or something. OK, but why should we stop here? We have an interface into our app. This is a powerful thing, so what else is possible? Mozilla has __version__, which contains version and deployment information. If you use pull-based metrics, like I do and love: Prometheus and /metrics. And logging; there it is again, there's always logging. You can set your log level at startup, but sometimes you want more information about what is happening right now, and if you redeploy to change the configuration, maybe the problem you're trying to investigate goes away. Well, there you go.
Just allow POSTing a log level to an endpoint and reconfigure your logging. Basically, you can use such endpoints for anything you used Unix signals for before; it's a powerful concept. Now, we've taught our app to talk to a load balancer, and it's incredible how much freedom we've gained by this, because it can now scale up and down as needed; it's like magic. And you can, of course, take this further, because if you have so many instances, how about we distribute them over servers? Sure, why not? The only difference is that we no longer dispatch over ports but over internal IPs, and the load balancer runs on a separate host, but the principle is the very same. But once you distribute your app over multiple servers, you run into a problem, because they all have separate file systems, and while you could use something like NFS, or God forbid Samba, plus some weird locking mechanism, it's probably better to step back and embrace a rethink. In this case, the result of the rethinking is: the file system is lava. One of the classic problems is logging, but we've solved that. We log to standard out, and it's the environment's problem to deal with it. If it wants the logs, it has to catch them and send them somewhere. You don't care; it's not your problem. The other classic is state, like user sessions, and that state is shared and/or permanent, so it needs to be accessible by all instances. You don't want your users to be logged in on only one of your backends. So what you need to do is embrace other services in your app. No SQLite for you, no files that are needed by others (except for temporary files), and learn to love the elephant: Postgres is great for data of any kind. Redis and memcached are great for sessions and caching. Consul and etcd are great for service discovery and for key-value storage, like dynamic configuration, these kinds of things. So let's have another look. There's one more problem with this thing.
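The POST-your-log-level endpoint mentioned a moment ago can be sketched like this; the plain-text body protocol and all names are invented for illustration:

```python
import logging

def set_log_level(environ, start_response):
    """WSGI view: POST a level name (e.g. DEBUG) to change the root logger."""
    if environ.get("REQUEST_METHOD") != "POST":
        start_response("405 Method Not Allowed",
                       [("Content-Type", "text/plain")])
        return [b"POST a level name, e.g. DEBUG\n"]
    size = int(environ.get("CONTENT_LENGTH") or 0)
    level = environ["wsgi.input"].read(size).decode().strip().upper()
    logging.getLogger().setLevel(level)  # accepts names like "DEBUG"
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"root log level set to " + level.encode() + b"\n"]
```

Like the other introspection endpoints, you would block this one in the edge load balancer so only operators can reach it.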
Now we have four instances on four servers, and managing this can become a pain. For simple setups you can use a bunch of shell scripts, or you can use something like Ansible — and people do, and it works great — but at some point there's a threshold and it gets very complicated. So let's get some help. You didn't think it would take this long for Docker to show up on a slide, huh? I think I'm on record saying that Docker is kind of meh, and I stand by it, although by now it has a more positive vibe for me, because it's just boring technology that works. Docker by itself is a rather low-level concept, but there's something great it did. First of all, it caused an industry packaging standard — well, not one, by now it's like five or something — that abstracts away even more of your application. But more importantly, it created an ecosystem, and this ecosystem is what I'm going to talk about now, because part of it are, of course, cluster managers. And cluster managers are game changers. If you don't know them all: top left Kubernetes, top right Mesos, bottom left DC/OS, bottom right Nomad. Once you get them running — which, depending on which one you choose, can take anything between one hour and one year — you're in a good place, if you also know how to keep them running, which is another question. Because now you can say: this is my container that I built, it's none of your business what's inside, and these are my hosts. Run this container on one of those hosts; it needs this much memory and this much CPU. And it just happens. This means that your applications become ephemeral. It's possible that their lifetime will be very short, because maybe they get shuffled around, maybe some cluster nodes go up and down, maybe there's some rebalancing. Once you have this kind of freedom, you can deploy every 10 minutes and nobody will notice, so why not? And this is another reason why the file system should always be lava for you from now on.
But our application is ready for a multi-datacenter cluster without knowing about it. All it knows is how to start and communicate readiness, how to serve and communicate health, and how to stop and clean up behind itself. So basically, we are web-scale by doing less. And this is the point where I could and should start talking about things like service discovery, or meshes like Linkerd or Envoy/Istio, but I do not have the time, and they also don't change anything fundamental about your application — they're just add-ons on top of something you've already attained. So let's have a final look at our application. Our app is a black box that is easy to start and self-sufficient. It has very few configuration knobs, which are set using environment variables. It does a clean shutdown when signaled using standard signals. It magically retrieves its secrets from wherever. It keeps data that needs to be shared or permanent in external services like databases. It exposes its services as configured, where they are picked up by a load balancer that exposes them to the world or to your company. It also exposes its state using a well-known endpoint. And it logs to standard out, where it's picked up by the environment — or your terminal — which then does whatever is best in that moment. Your app does not know, and it does not need to know. And through all of this, our web view hasn't changed at all. Our app maker just takes one or two classes; it doesn't know where they are coming from, it just knows that they're the configuration and the secrets it needs, and that's all. The interaction with the environment is limited to one file — one file only. And the same application works the exact same way on your notebook, on a platform-as-a-service, or in a cluster; it's just a matter of how you start it. The heavy lifting is done by decades-old Unix tools or the bee's-knees container orchestrator du jour. So: success. One final thought.
If we squint at what we've tried to achieve here — and what I think we've achieved — it's that we want to see our application as a black box with clear interfaces that enable loose coupling with other components. We separate I/O from logic. We log to standard out. We push configuration in from the outside and transform it into a class. And we isolate our process-global state to one spot. These are all practices from software engineering. It especially reminds me of the hexagonal architecture by Alistair Cockburn, who talks about ports and adapters. So I guess the lesson here is that your application's boundary is just another boundary, and you should treat it as such. It doesn't matter that there's a process ending there; it's still just a boundary like any other. Because your application is — or could be, or will be — part of something much bigger. But as with software architectures, what I've shown here is an ideal. Not every application fits the constraints. Not every application can run in a cluster. I have plenty of applications that run on one server because they need to do something on that server. Turns out someone has to write to a disk at some point if you want to keep that data. And finally, some of what I said today directly conflicts with my advice from last year. Neither is wrong; they're just two solutions to two very different problems. What we're doing here is engineering — we are making trade-offs. But to make trade-offs, you have to know the consequences of the actions and choices you make. So in other words, what I'm saying is that you should come to all of my talks and make informed decisions. One more thing. Don't we all crave realistic examples? However, companies tend not to expose how they deploy things concretely. They just give you a soup of buzzwords about continuous delivery and ChatOps, but in the end it's just a bunch of shell scripts from 2002.
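"Push configuration in from the outside and transform it into a class" can be sketched like this. The variable names and fields are hypothetical; the point is that reading the environment happens in exactly one spot, and the rest of the app only ever sees a plain object:

```python
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class Config:
    """All interaction with the environment, isolated in one place."""

    database_url: str
    redis_url: str
    log_level: str

    @classmethod
    def from_environ(cls, environ=os.environ):
        # The APP_* variable names are illustrative, not a convention
        # from the talk.  Missing required variables fail loudly here,
        # at startup, instead of deep inside a request.
        return cls(
            database_url=environ["APP_DATABASE_URL"],
            redis_url=environ["APP_REDIS_URL"],
            log_level=environ.get("APP_LOG_LEVEL", "INFO"),
        )
```

Passing a dict instead of `os.environ` also makes the configuration trivially testable — the app maker just receives a `Config` and never touches the environment itself.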
Written by someone who's not working at the company anymore, and everybody's afraid to touch it. But it turns out we do have a great open source example of a Python application that is modern and that I come back to whenever I want to learn something. And of course, it's PyPI. Kind of perfect timing after that keynote, huh? It serves — and these are old numbers I got from Ernest when I was preparing this talk in March — back then it was something like six billion requests per month and 1.5 petabytes of data, and most importantly, it's fast. I mean, when Ernest asked me to test the new PyPI for the first time, I thought it was some kind of joke, because the page just appeared. And it turns out this is how a modern Python web app can look if you do everything right. And everything you need to know about it is on GitHub — all the operational stuff, the Docker stuff, it's there. And the tooling around Kubernetes that Ernest wrote is also on GitHub, just somewhere else. So if you want to learn something, have a look. And that's all I have for you today. So please: this is the page I talked about; the QR code will get you there too. Follow me on Twitter, get your domains from Variomedia if you speak German — the reason why I'm here. Have an excellent EuroPython. I'm Hynek, thank you very much.