Thank you. So yeah, I'm Honza. It was supposed to be two of us up here, but unfortunately my colleague Flavio couldn't make it, which is bad, because he's Italian, so he talks really fast, so I'm not sure how we'll do on time. But it's also good, because I can blame everything that goes wrong on him. Thank you, Flavio.

In this talk I want to talk about what it takes to create an application the right way, with the focus on the DevOps part: deploying, creating your infrastructure, and making sure you set yourself up for success far into the future. So not the quick and dirty things you can do in a day, but something that hopefully won't take that much longer, yet will stay with you and with the project for much longer. It is an opinionated talk. I'll be presenting some concrete tools and solutions that work for me, and that's just a personal choice. The more important part is why I do some things, not necessarily exactly how, because there are many different tools, people have different experiences, and that is great.

So that was some setup; let's get started. First, some assumptions. Let's say you want to build a website, something you want to share with the world. This talk is about running it in the real world.
So again, nothing quick and dirty. Also nothing free, but we'll try to keep the cost as low as possible. The idea is that this is about setting yourself up for the long term, not something you would do for a project that you want to just have up over the weekend and then tear down. This is more about how I believe projects should be run when you need them to be alive, and when you want to build upon them and expand them. So if you're starting a new project in your company, or a personal project, or you just want to learn how things are done.

Some disclaimers. So far, that's all the things I've been doing, but there are definitely easier ways to do this: both ways that compromise on some of the things I deem important, and also cases where you might have something already prepared by a third-party library or a framework. This talk is more about how it works underneath. If you can jump-start three steps out of the five that I describe by using a freely downloadable template or something like that, 100% go for it, and hopefully this talk will give you the tools to determine whether what you're about to do is a good thing or a bad thing.

And the last disclaimer, I promise: before you start writing an application, think twice about whether you need it. These days, when you have things like GitHub Pages, when you have static site generators, you can get by with no code or very little code for a lot of use cases out there. You can usually cobble something together with freely available components and services that will run them for you, either for free or for some small amount. So always keep that in mind; that should always be your first option. The best code is no code at all. I think that's kind of obvious.

So now let's get started. We are starting a fresh project.
The first thing we need to do is create some code. For me, if you don't know, or you're on the fence about what to use for a web application, the answer is Django. It is the default for a reason: it has the most resources out there, and it is the most common denominator in the Python world, at least. So if you're unsure, if you don't know 100% that you want to use, I don't know, Flask or something else, use Django. That's the easy part.

Now, what other parts are we going to need? We're going to need to set up the local development. First, we need something to install Django and our other dependencies. I use Poetry to do that, but again, there are so many other tools, and for the use case we're talking about, pretty much all of them work just fine. So use something that works for you personally. The reason I chose Poetry, the "why" that you should remember, is that it does two important things for me: it allows me to track dependencies exactly, with pinned versions, and it allows me to keep things clean. If I remove a dependency, it will remove everything that came with it, and it's much easier to manage these things because those are first-class commands in the Poetry ecosystem. That's why I chose Poetry, but again, your mileage may vary. Just use something that will do these things for you and allow you to very easily add and remove dependencies.

I probably touched something; my apologies. The other part we're going to need is some configuration. In this case, it's pretty easy.
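The slide isn't reproduced in this transcript, but a minimal sketch of the environment-variable approach described next, in a Django-style settings.py, might look like this (the variable names and defaults are illustrative, not the slide's exact content):

```python
import os

# Defaults that work locally; production overrides them via environment variables.
DATABASE_URL = os.environ.get("DATABASE_URL", "postgres://localhost/myapp")
SECRET_KEY = os.environ.get("SECRET_KEY", "dev-only-insecure-key")

# The one exception: anything security- or debug-related defaults to OFF,
# and a developer must switch it on explicitly for local development.
DEBUG = os.environ.get("DJANGO_DEBUG", "") == "1"
```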
I will stick to the simplest thing possible, which is environment variables. Whenever you're creating configuration for your Django project, or anything else, just stick in the values that will work for you locally, and allow them to be overridden in production by setting environment variables. You can do it manually, as I show here, or there are plenty of tools that will do it for you, more or less sophisticated depending on your taste. Me personally, I'm a very simple person; this works super well for me, because everybody can see what it's doing. Some people prefer nice abstractions; there was a talk about Pydantic and other tools that can serve this purpose as well. It just depends on your appetite for complexity versus having something ugly like this in your code.

When I said you should default to the local environment, there is one exception: anything that deals with security or debugging should default to off. You should never ever have to deal with the problem of accidentally deploying something to production in debug mode. Whatever tool you're using, make sure that's the one exception, where the developer has to override it very explicitly when developing locally, to prevent accidents in the future.

Then there are some tools, and I put them here at the beginning of the talk because I think this is something you should do first: make sure that all of the good things you're trying to do will stay good. Something to keep you honest. Also, just for the psychological effect, it's much easier to add things to already existing tooling than to introduce a new tool. That's why I like every new project to start with pre-commit installed and configured with my favorite linter, formatter, and
also tests running, even if you just put in a dummy test that asserts that one plus one equals two. That's a great test to have there, because it's so much easier to add another test compared to "oh, now I need to introduce testing and do some bootstrapping". No: make sure all of it is there from the very beginning, even if it's just dummy things. This will help you keep things clean as you go.

Also, don't trust yourself to actually run it. Make sure there is an independent party, in this case a CI, a continuous integration tool. In my examples I'll use GitHub, because that's probably the most common one. That gives you two major benefits. First, the obvious one: even if you forget to run the tests and push some code, they still get run. Second, it verifies your setup. It verifies, for example, that the dependencies you have declared are sufficient, that you haven't accidentally installed something without committing it into the code, where it would not get transferred to a new environment. Those are the two major benefits of having CI, even when you're working alone. Once you add other people, it's even more beneficial.

Nowadays, adding CI is not a big investment. With GitHub, all you need to do is create a file like this. This is everything you need to run pre-commit, as configured in your local configuration, the way you use it locally; on GitHub you just need these three predefined actions, and you're done. It is not complicated. If you want to add another job to run your Python tests: pytest is my library of choice, and with Django there is pytest-django. Pretty obvious, it does what it says on the tin. And this is how you run it.
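The workflow files from the slides aren't reproduced in this transcript; something along these lines is what is being described, with one job for pre-commit and one for pytest, the latter with a throwaway Postgres (action versions, image tags, and names are my assumptions):

```yaml
# .github/workflows/ci.yml
name: CI
on: [push, pull_request]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      # the same hooks you run locally, from .pre-commit-config.yaml
      - uses: pre-commit/action@v3.0.1

  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_PASSWORD: test
        ports:
          - 5432:5432
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install poetry && poetry install
      - run: poetry run pytest
        env:
          DATABASE_URL: postgres://postgres:test@localhost:5432/postgres
```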
This more complete example even runs a database for you. This is pretty much enough for 99% of the applications out there already, and you can get it done in a few hours, even if you're completely fresh. You just Google how to run Django tests in GitHub Actions, or pytest, or something like that, and you come up with this. Not to mention that all of these commands are something you should be familiar with anyway, because that's what you had to do locally to get the tests running. So there is really no excuse not to have these things, and they will give you peace of mind, because now you know that whatever happens, linting, code formatting, testing, etc. will be taken care of; you'll never have to think about it again. It's there; it's your safety net. When you add things, all you have to do is add the test, or configure a new linter, for example if you introduce a new file format, or if you discover that something doesn't work well for you. Again, it's much simpler to add and tweak things than to introduce entirely new concepts. That's why I say it's nice to have this done before you start any coding.

That gives you, again, two things (I like to say that everything benefits in two ways; you can really subdivide it any way you like). First is what I described: the project setup is done, and I can switch my mind to thinking about the application. There is a mental clarity in knowing that I don't have to worry about this. Second, because this is done before any application code, you can reuse it: you can save it as a template, or, more realistically, you can find a template that already does all of these things. This is how I would judge a good template when I'm looking at how to start a Django project. Does it have all of these things?
Does it allow me to do all the things I mentioned, with all the benefits? Does it give me the peace of mind? Is there a place where I can just add another test, without having to figure out anything?

So this is where we are at the beginning of the project. Now we can write some code, and usually that starts with an important question: where and how are we going to store our data? Typically, we need a database; that's the established default in the industry. Every tutorial you read, for Django or anything else, tells you: this is how you start a project, this is how you set up your database. It even assumes the database is going to be a relational database. That might be the case, but again, you have to think about it. Oftentimes the data don't really change that frequently, and they only come from certain sources, like from you; at that point, a YAML file in the same git repo is probably much easier than having a database and managing a moving part. Maybe I just need a few counters, and Redis would be much simpler than a full-fledged relational database, which is a very complex piece of software. So always think about what it is that you need, and when thinking about this, think about how you are going to use it. Many people fall into the trap of trying to model the data: "I have these entities, they have these attributes, and this is what the data is." I don't care. I care about what you want to do with the data. What are the queries you want to run? What are the operations you want to perform? Then choose your database
and then design your schema accordingly, to make your life easier. Unless you're writing this for a school project on database design, nobody cares whether you're using first normal form or third. There are some useful principles there, and you should be familiar with them, but ultimately what matters is how you are going to use this data and what will make your life easier. Be lazy; that's ultimately what the job is.

So once you select a database (or, hopefully, not), you need to run it somehow, locally first. For this, Docker is your friend, and in particular Docker Compose, because it allows you to just grab something and run it. Nowadays pretty much every database or service comes with a pre-built Docker container that you can grab, run, and configure super easily, and all you have to do is create a Docker Compose file for those dependencies that you have. This would be my example, where I'm just running Postgres and Redis, and that's it. Notice one important thing.
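The slide itself isn't reproduced here; a docker-compose.yml along the lines described might be (image versions and the password are illustrative):

```yaml
# docker-compose.yml -- dependencies only; the Python code runs outside
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: dev
    ports:
      - "5432:5432"
  redis:
    image: redis:7
    ports:
      - "6379:6379"
```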
I'm not running my Python code in there. This is purely for the dependencies. The Python code I still prefer to run manually in my virtual environment, on the command line or from your IDE. It makes things so much easier, because you don't have to deal with another layer of abstraction that you don't really need for development. You just develop locally, and you have these services up through Docker Compose. If something goes wrong, say the database gets into an unknown state or something, just kill the Docker Compose and start it back up again: clean slate. It will work for tests, and it will work for some of your manual testing. It won't work so well if you're used to testing and developing things while looking at the browser; to me, that's a feature, because it will force you to do things the right way, which is: just use tests, just use repeatable code, as opposed to doing things manually. That's why we created computers in the first place, so that we don't have to do things manually. Use them.

So that's it; now we're done developing. We've created our setup, we've created our feedback loop with the tests and the Docker Compose, and we wrote all our code. Now we're ready to go live, to share something with the world and make sure that it stays there. How do we go about that? The first thing we need to do is create some infrastructure, and just like with the tests, just like with anything else.
We want to do that through code. We never want to do anything manually. When I show you the examples, it might look overwhelming, and there are definitely a lot of things there, but it's usually roughly the same complexity as trying to figure out where to click in the cloud console UI. And it is repeatable and, most importantly, iterable: you can start with a small thing, the first example you find somewhere, copied from some slides or downloaded from the internet, and then gradually improve on it, and just keep running it over and over again until it gets good. You never end up in that situation where you don't know how things got where they are and you don't know what the next step is. Because it's just code, you can iterate on it, test it, validate it, and cooperate on it in the future.

For me, the weapon of choice here is Terraform. Again, there are many other options; I like Terraform because it's the default, and as I said, I'm a simple person. It will have support for any kind of services and tools you decide to use, and you don't have to think about it too much. If you ever need help or want to hire somebody, chances are it will be much easier to find somebody with Terraform knowledge.

Then we need some of the services, the databases, etc. I think this one is pretty clear: you don't want to run your database. That's not your job. If you're in this talk, you want to develop your application, you want to work on the idea; you don't want to babysit a database. That's what we invented managed services for; that's why we have Google, otherwise what's the point? So just use the pre-built services. Again, you can use Terraform to manage them.
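The Terraform slides described next aren't reproduced in this transcript; their shape would be roughly the following GCP sketch (resource names, tiers, regions, and image paths are my placeholders, not the exact slide content):

```hcl
# A small managed Postgres instance -- resize later by editing one value
resource "google_sql_database_instance" "main" {
  name             = "myapp-db"
  database_version = "POSTGRES_15"
  settings {
    tier = "db-f1-micro"
    backup_configuration {
      enabled = true   # automated backups: use them, and verify them
    }
  }
}

resource "google_sql_database" "app" {
  name     = "app"
  instance = google_sql_database_instance.main.name
}

# Terraform generates the password; no human ever sees it
resource "random_password" "db" {
  length = 32
}

resource "google_sql_user" "app" {
  name     = "app"
  instance = google_sql_database_instance.main.name
  password = random_password.db.result
}

# ...and it is stored in Secret Manager, never in a file
resource "google_secret_manager_secret" "db_password" {
  secret_id = "database-password"
  replication {
    auto {}
  }
}

resource "google_secret_manager_secret_version" "db_password" {
  secret      = google_secret_manager_secret.db_password.id
  secret_data = random_password.db.result
}

# Cloud Run exposes the secret to the container as DATABASE_PASSWORD
resource "google_cloud_run_v2_service" "app" {
  name     = "myapp"
  location = "europe-west1"
  template {
    containers {
      image = "gcr.io/myproject/myapp:latest"
      env {
        name = "DATABASE_PASSWORD"
        value_source {
          secret_key_ref {
            secret  = google_secret_manager_secret.db_password.secret_id
            version = "latest"
          }
        }
      }
    }
  }
}
```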
We'll see how that works. And finally, for your code, Docker is the perfect vessel. We're past dealing with virtual machines and running things yourself. You just package things up and hand them over to something that understands how to run a container; that was the entire idea behind containers in the first place, and that's what we're going to use.

One technical point here. Throughout all of this process, keep in mind that whenever you see something potentially sensitive, like a password, it should never ever be written down anywhere, and you should never ever see it. That sounds counter-intuitive: how do I work with something that I don't see? We'll see.

Okay, so this is an example of Terraform working with GCP, the Google Cloud. Again, you can work with any other cloud; it will be very analogous. Me, I'm familiar with Google Cloud. This is the basic setup: I'm creating a project, and I'm creating a database instance, quite a small one in this case. And that's the beauty of Terraform: if I find out in the future that I need it to be bigger, I'll just change that one value from f1-micro to something extra large. I'll end up paying more money, but I'll have a bigger instance, and Terraform will do all of that for me. And I'm creating a database, and you can see that I'm just referring, as a variable, to the instance where I want the database created.

And speaking of secrets, this is how you do it. You ask Terraform to generate a random password, then you refer to it when you're creating the database user, and then you store it in a Google secret store. You're telling Google: here, I have a secret,
I want you to keep it safe for me, and also use it to create this database user. Then, when you're handing your container over to something that will run it for you, you say: and there is a secret that you have in storage for me; I want you to expose it to this container under the name DATABASE_PASSWORD. So it will be there as an environment variable, but as you can see, we've never seen it once, and it will never be written in any file. And that's a good thing. Agreed? Awesome.

By the way, Cloud Run is Google's service where you just hand it a container and it will run it. It's that simple; it's what Google does.

All good there, and now we just need to wrap it all up. We need one command to do all of this, because we need this to be easy. We need this to be so easy that we can run it several times an hour. Every time you reach a state that works, you should feel absolutely safe and comfortable just deploying and pushing it out, to get you into the pattern of pushing often, as opposed to putting things off and saying: "Oh, I'm not sure whether this fully works, and I don't want to go through the trouble of deploying it because I might have to change something." If that's your thinking, that's a clear signal that you've probably done something wrong, because this process should be effortless. You should always feel perfectly free to just deploy; it should take a few minutes at most. That's how you know your system works: when you feel comfortable doing a deploy essentially every time you do a git push. This would be the script in our example. It's really this simple.
You build the Docker image, you push it into the registry, you run terraform apply, and then you update the Google Cloud Run service. Once everything is set up in Google, after you've run this once, you can actually skip the terraform apply, because that's the part that takes the longest; it's the one that creates all the databases and makes sure they're up to date. It typically only has to be run once, or after you change anything in your Terraform, so you can comment it out, or have two different commands: one for the initial setup and one for just the deploy. With the docker build, the push, and the gcloud run deploy, it should really not take longer than a minute or two. And it is not super complicated; would you agree? Awesome.

Notice that I have omitted a few things here. First is a staging environment. Usually you want to have two different environments: one where you test and experiment with things, or where you maybe share features with somebody that are not yet ready for prime time. There was a reason why I felt comfortable skipping it: it's easy. When we have everything in code, we can run the code twice and get two different environments. All we need to do is define some variable declaring the domain or the project name or something like that, to keep them separate. Or I could do the same thing just by specifying two different sets of credentials, for two different Google accounts for example, and just run the same code.
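As a sketch of such a deploy script, including the environment parameter just discussed (every name, project, and region here is a placeholder):

```sh
#!/bin/sh
set -e

ENV="${1:-staging}"   # "./deploy.sh production" for the real thing
IMAGE="gcr.io/myproject-$ENV/myapp:$(git rev-parse --short HEAD)"

docker build -t "$IMAGE" .
docker push "$IMAGE"

# The slow part: only needed on first setup or after infrastructure changes
terraform apply -var "environment=$ENV"

gcloud run deploy "myapp-$ENV" --image "$IMAGE" --region europe-west1
```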
That's the major benefit of having the infrastructure as code: not only can you iterate on things, you can also parametrize them, and have multiple environments, or use the same code for multiple different projects. It being code, you can just define a variable, pass it in from the command line, and say: now I want to go into that environment, in that region, and I want to use that Docker image with that application. It's as simple as defining a variable, which, as programmers, we're all very familiar with. All of the benefits of code work here. The simplest way to do it is to just parametrize the deploy.sh file and make sure it knows what to do and where to do it.

Now, to address the elephant in the room: usually when we're talking about deployment, it's the unwritten law that somebody will mention Kubernetes sooner or later. No. No. Kubernetes is something that other people use to build the services that you then use. Unless you're in that business, in which case I don't know what you're doing in this talk, you don't need Kubernetes. It is way too complicated, it is way too low-level, and it is just: no, please. Every time you use Kubernetes to deploy a Python application, a kitten dies somewhere, I'm sure. This deserves a few more "no"s, because this is important, and I don't know for what reason it became this idea that Kubernetes is the way you deploy things. Everybody clear on this? Thank you. Okay.

So then, what happens next? What happens when you want to add more people to the team, or you want to automate things even further? Well, we have a word for that, and that's continuous delivery. We already mentioned CI, continuous integration.
This is the other two letters in the CI/CD acronym. Continuous delivery is where you automate even your deploy.sh script. That's really all it would take in this case, and it would be really simple, because all we have to do is take those three commands that we had in deploy.sh and put them into a GitHub Action. We already know how to do that, because we've done it for pre-commit and we've done it for our tests. All it will need is some credentials, to make sure it can communicate with Google to do the deploy, upload the image, and all of these things. And we know how to do that as well, because we just use a secret: we manually upload a secret into GitHub, and then we can use it in the GitHub Action, so that the action can run with the appropriate privileges to do all that is needed.

That would be a great step to take when you decide to add another person to your development team, because then you don't have to worry about who runs the deploy, and when, and from what state. Otherwise it can happen that somebody runs it, but maybe they haven't updated to the latest version, they haven't pulled your latest change, or something like that. There might be some miscommunication, or just concurrency issues where two people are trying to deploy at the same time, etc. Just not worth it. There is no reason to have to communicate with that other person; just deal with computers. That's what we do, you know, computers over people, right? But in this case, yes, because it makes things easier, and when we're communicating with humans, we can focus on the important stuff, not "Can I deploy now? Have you pulled everything?" Nobody wants to have that conversation. It's boring.

And finally, what's next? Once you have all of these things, what are the next steps in your long-running project?
Well, the first one is backups: make sure that you have all of the things. Luckily, because we are using infrastructure as code, all you really need is the data, since you can recreate everything else. You have a script to do that; you have one command you can run that will create everything from scratch. So you're safe there: if Google in Europe goes down, you can just change one variable to us-east, to run in the US region of Google Cloud, run the same command, and it will recreate everything there. All you then need is to populate the data. So make sure you have the data; it's as simple as that. Typically, all of the services we talked about using, like Cloud SQL, will have options for automated backups. Use them, and verify them. Make sure you go through this exercise at least once every year, so that (a) you know it works and (b) you know how to do it, because you don't want to be trying to figure this out on a Sunday afternoon when you desperately want to be doing something, anything, else. That is not a conducive environment for learning how to restore from a backup. Make sure that's second nature to you, and that it's documented somewhere.

You also want to have monitoring, to even know that something happened. Usually, for the first few years of a project, it's fine to just have an external monitoring service that tells you whether the system is still up; that's it. And if it's not up, you want to know why, so use some error reporting, maybe some APM in the future, some logging, etc.
I believe there are a few companies here who do that, like Sentry and others, which again is a free service for smaller accounts, or you can run it yourself, but typically you don't want to. So these would be the steps to make sure that once you're live, you also stay live.

That was pretty much it. A little light on the concrete examples, because we had to oversimplify a lot of things, but the general principles remain: stick to the philosophy that everything should be repeatable and iterable, so that you can start small and build on top of it, and you will never have to stop and redo everything. It will always just be another small step, and you will have tools that help you make that step. If you want to add a replica to your database, you just add one more resource to your Terraform, run terraform apply, and it will happen; you don't need to figure out how to do it. If you want to resize your database, or suddenly have multiple different deployments, you can do the same thing: just run terraform apply, and it will do it. If you want to move from this to something the size of, you know, millions of users, there is a clear path. You will never have to scrap everything and start over. So this is really setting you up for success, and it always highlights what the next step is. You should never be in a position of "okay, now I don't know what to do; I need to get there and I don't know how." With this approach, you should always know how, or be able to figure it out.

So, thank you so much, and now we'll have some time for questions.

Thank you, Honza. Please use the microphones to ask the questions; we have a little time.

Thank you for your talk, very interesting.
I would like to ask you about logging. Do you have any advice, any good practice, on how to manage logging in production? Because sometimes it's a little bit tough to find useful information in logs. Thank you.

Yes, I have an unlimited amount of information on logging; that's my specialty, and I tried to stay away from it here. So: make sure that the logs are captured somewhere, and that they're not stored in files but shipped somewhere else. In our example, using Cloud Run etc., they will automatically be collected by Google and you will see them there. You might have to configure that; again, you can do it using the same Terraform file. And it will be there: you will have an interface, and you will have a search, so you can search through them. The most important part is how you generate those logs and what kind of information you put into them. The best advice I can give you there is: always add structure and context to your logs. Use structlog or some other library that allows you to do structured logging, play around with it a little, and make sure your logs are in JSON or some other parsable format, so that you can work with them. Again, you can do this in an iterative approach. The first version is to just set everything up at the very beginning so that you can do structured logging; that should be in your template, before you start touching application code, and then you know there is a path forwards. First it will just be locally, in a file; in Cloud Run, it will be in the cloud console.
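structlog gives you this out of the box; purely to illustrate what "structured, parsable" means, here is a minimal stdlib-only sketch (the JSONFormatter class and the field names are my illustration, not structlog's API):

```python
import json
import logging


class JSONFormatter(logging.Formatter):
    """Render each log record as one JSON line: the event plus any context."""

    def format(self, record):
        payload = {"event": record.getMessage(), "level": record.levelname}
        # Arbitrary structured context attached to the record
        payload.update(getattr(record, "context", {}))
        return json.dumps(payload)


# Demo: one record carrying structured context, not just a message string
record = logging.LogRecord("app", logging.INFO, __file__, 10,
                           "payment_failed", None, None)
record.context = {"user_id": 42, "retries": 3}
line = JSONFormatter().format(record)
# line is a single JSON string that a log pipeline can parse and index
```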
It will be in the cloud console And you can log there and search through it when you grow even further You can start parsing the logs and and creating dashboards based on based on those structured data into it and and Alerts and and things like that Even further down the line you can then capture it into some system like like elastic search the ELK stack or Or or anything else again tools like sentry or datadog will allow you to do the same thing The primary aspect is you need to know what you look what you're logging Why and that it contains all of the information that it that is relevant No, just you know oops something happened But this particular thing happened and this was what I was trying to do and these are the parameters And this is the time of day and you know, this is the phase of the moon right now All of the information Make sense Awesome More questions. Yeah Good talk actually you covered pretty much Everything from start to end up to going to live. So really good talk just one question. You mentioned about Storing secrets as github secrets when a new developer joins in It's easy for them to like, you know run through the github workflow But we had a problem. So say for example, I'm in the development team and I'm adding the github secret The new developer comes in they don't have to bother about anything they run it But when I leave the secret remains secret So we weren't able to extract that secret from the github because we can add it But we can't retrieve it. How would you solve that problem actually any suggestions on that? I would not And the reason for that is to me secrets are right only I never want to read a secret as a person if if something like that happens You leave and somebody else they should just create a new secret and replace the one that is that was there without trying to retrieve it So so again, ideally I would have some something that would generate that that that secret for me and push it there Without anybody seeing it. 
But even if it's just a copy-paste for a one-time setup, I'll allow it. But yeah, don't try to recover the secret, just create a new one, because that's better anyway: you want to rotate them in case something leaks. Especially when a person leaves, that secret should no longer be valid, because it was tied to the person who created it.

But the project might have, like, twenty or thirty AWS roles and other things it's responsible for. We can't always rotate them, can we?

Well, that's why you want to automate that, that's why you want to have it as code, so you will be able to rerun it or update it and not worry about these things. For example, when we create the GitHub secrets and so on, that's also run from Terraform, which will create a service account in Google, generate the key, and push it into GitHub without anybody ever seeing it. So it should be independent, and it's a secret like any other; it's not special, and it should not be treated that way. If in doubt, kill it and replace it. Don't worry about it, it's not a person. It's fine.
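A sketch of that Terraform flow, assuming the google and github providers are already configured; the account, repository, and secret names here are invented:

```hcl
# Create a dedicated service account for CI deploys.
resource "google_service_account" "deployer" {
  account_id   = "ci-deployer"
  display_name = "CI deploy account"
}

# Generate a key for it; the private key lives only in Terraform state.
resource "google_service_account_key" "deployer" {
  service_account_id = google_service_account.deployer.name
}

# Push the key straight into GitHub Actions secrets, so no human ever sees it.
resource "github_actions_secret" "gcp_key" {
  repository      = "my-app"
  secret_name     = "GCP_SA_KEY"
  plaintext_value = google_service_account_key.deployer.private_key
}
```

Rotation then becomes mechanical: `terraform apply -replace=google_service_account_key.deployer` generates a fresh key and pushes it to GitHub in one step.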
Thank you. Sure.

Hi, first of all, great talk. I would like to ask about one little detail you mentioned: pre-commit hooks. I like to use pre-commit hooks, but lately I've been noticing that they are doing more harm than good, and I especially noticed that where you were setting up the CI, you are actually running pre-commit there as well. So unless you are actually pushing to main, is there any benefit to always running a pre-commit hook?

So, first, by running it you mean running it locally? Yeah. So, you want to shorten the feedback loop. You don't want to push something and then figure out later that it's wrong, when you've already switched to something else; you want to get the feedback that something is not ideal as soon as possible, and you want to do it before anybody else sees it. Because otherwise it goes like this: I push something where the pre-commit checks don't pass, let's say I messed up the formatting or something like that, and now somebody has to go back and say, "okay, no, that's wrong, you need to fix it." We should not have to have that conversation as people; again, delegate that to the machines. And you want that feedback as soon as possible, so the feedback loop is as short as possible and you avoid mind-switching and task-switching and doing something else. That's why you run it locally. Makes sense?

Awesome, thank you for your talk. The question: do you have a public repo with all of this in one place, like some kind of template that can be used?

No, I don't, and that's a good question. There are plenty of other examples out there. Honestly, I didn't have the time. I wanted to, but a lot of that is also super opinionated and wouldn't be super useful. But for things like Django, for example, the cookiecutter that you can find out there is pretty advanced; it does even a lot more things than I went into. So there are tools already out there; I was focusing more on the principles here.

Yeah, the Django cookiecutter. So cookiecutter is a tool for creating projects, and other things, from templates, and it was created by two Django people. So the Django cookiecutter, which is a template for a new Django project, is actually quite nice and has a lot of features. It doesn't... yeah, that is true, that's just for the Django part. But again, if you just google "Terraform Django Cloud Run", you will find a lot of things. But that's just an excuse, I know, I didn't have time, I apologize. True.

We will have time for one last question. Yes: you showed how to create and deploy a site and have all the infrastructure as code, and then you subtly mentioned that after the first time you should delete the terraform apply step. Is there a better way to check if that's already done? Is it important, or what does terraform apply do to the infrastructure? How does that work?

Good question, thank you. No, the terraform apply is important; you can keep running it. I mentioned that you can kind of omit it to save time. So what I typically did in that situation was I would have a single if around it that says: if not the initial run, then skip it. Or: if the fast-deploy flag is turned on, skip it. And I would use that if I needed to deploy a hotfix or something like that. But you should always be able to run the terraform apply, and it should tell you there's nothing to do, and that's a good thing.

Thank you. So, thank you so much; if you have any more questions, I'll be around.