Thank you for making it to the last session of the day. I know that the only thing between you and beer now is me, so I'm going to get through it quite quickly. Well, yeah, exactly. Why doesn't everyone else have beer? It's just a fine example. Hopefully you are in the right room. You're here to learn about metrics and application logging and how they can help you. This is me. I'm Michael. I'm @mheap on Twitter if you want to say nice things about the talk. If you want to say bad things, don't put that on the internet. Just come and talk to me afterwards. I was a PHP developer for about ten years, then recently moved into operations land, where monitoring and metrics are a lot more important. And that's why I've come to give this talk to developers: I was one of you once. I didn't really get it, but now that I'm on the receiving end, it helps so much. We've got quite a lot to get through, so we're going to dive straight into it. But I wanted to give you a quick rundown of what we're going to cover. We're going to start by thinking about why we need to log, what logs we currently use, and things to consider when putting a log together ourselves: how to get started, what to log, what to log to, how to add it to your application. We'll cover the ELK stack, which is Elasticsearch, Logstash and Kibana. It's the de facto standard for application logging. We'll take a look at what each component does and how it's useful to you. Logs and dashboards go hand in hand. Sometimes you don't have hours to read 10,000 log messages; you just want to see a nice graph that shows you what's happening. There are a few pitfalls when it comes to log management, but don't worry, we'll run through a couple of the most common ones. I made those mistakes so you don't have to.
Finally, almost finally, we're going to take a look at some supporting tools, things like Beats and PagerDuty, before rounding it off with a conclusion. Sound good? Cool. So, logging. Why do we log? All the way through this, feel free to shout it out. Why do we log? Debugging. Working out what went wrong. That's always the first one that comes out. Why else do we log? Metrics. Who visited us? How many people visited us? Any more? I heard accountability. Audit logs. Who gave Michael access to production? Because, well, then he's getting fired. If you don't have an audit log, you don't know why decisions were made, why things happened. And I like to think of it as runtime documentation. Your application log tells you what your application is doing right now. Without it, you have no idea what your app is doing. Once you've shipped it to production, it's just a big black box. Being able to log into a server, run tail -f, and just read what your application is doing, what queries it's running, what pages are being hit, things like that, is so useful. Imagine if someone said, hey, it's not working, but gave you nothing to go on. Where would you start? That's why an application log is so important. So, I can see it in your eyes: we're done, you're all sold. You're going to go to work on Monday and start building an application log. But I know what your next question is as well. It's always the next question: can I have it for free? Can I have it without doing any work? And actually, yes, you can, for a lot of it. You use Apache. You use MySQL. You use PHP. You use Chef, OpenSSH. All of these libraries, all of these tools have application logs. They write out what they're doing to disk, and you can just go and read it to find out what these applications are doing. You want to enable the slow query log in MySQL to find out which queries are taking the most time.
You don't have to change your application at all. It's basically free. But, you say, that doesn't help my application. Unfortunately, you are going to have to do a little bit of work here. There's no easy fix where an application log just magically appears for you. You have to put in the work. But we're going to do it together. We're going to go through all the different things you might need. Before we get started, I want to make sure that we know that there are two different types of logs, both very important, but both very different. The first is the log for humans. Ideally, this should be silent. This is really for things like "the database has gone away" or "I ran out of disk space and couldn't save the image". This is your error log. These are things that you really care about, and you don't want them mixed in with tons of other debug information. The other kind is machine-readable. This is probably going to be JSON. It has a defined schema, ideally versioned. This is for things like "a media object has been played". I'm going to log that it's been played, and you can use that for accounting purposes later, whatever you want. I used to think this was true. But since I moved from development to operations, I've realised that there aren't really two types of log. There are two different purposes. One is for humans to consume, to work out when things go wrong, and one is for machines to consume: "a media object was played". But it's actually just one type of log, and that should be both machine- and human-readable. I'm sure everyone here has sat down, taken a piece of JSON and said, well, it's a little bit messy, but I know what it's doing. By using formats like JSON, or logfmt, which is gaining popularity at the minute, you can read the machine logs. You might not be able to do it as easily. You might not be able to do it at a glance. But we're programmers: we can write tools that consume JSON and output human-readable information.
It's much easier to go from machine-readable to human-readable than it is the other way around. So my advice would be to always make sure that every log you emit is machine-readable. So what's an application log? What should it tell us? It tells us what's going on inside our application. It gives us information to debug an issue. It gives us narrative information: calls to methods, event triggers. It tells us when a user sends the Upgrade header and changes from an HTTP to a WebSockets connection. It tells you what's going on with your code. Then we've got business events. I mentioned a media object being played. You can bill on that. Every time someone plays a media object, you get ten cents. Or a tenth of a cent. It doesn't matter. What if you bill per login? Every time someone logs in, you want paying. All of these events come down through your logging system. And what about when it's not just for accounting? What if you also log how many people successfully complete a checkout on your e-commerce website? It averages around 50 to 60 a second. That's not bad. Then you deploy, and suddenly it drops to zero, or it drops to five per second, or three per second. Wouldn't you want that information? That historical data to say we average about 50 to 60, and to be able to see that immediate drop? That will show up on the graph instantly. No need for a customer to report, hey, only one of my cards worked when I tried three, and then you have to try and work out what's going on. Seeing on the graph instantly that something has changed is invaluable. I like to think of an application log as a story, a story that signposts every twist and turn through your code base, through your application. When I started at my last job, it was a fairly complex system and I had no idea where to start. Fortunately, it had a great application log, so all I did was boot up the service, make a couple of requests and read the log. And I noticed, oh, when I go to this endpoint, this happens.
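To make the machine-readable-first idea concrete, here's a minimal sketch of emitting a structured JSON log line from PHP. The field names (ts, level, event, context) and the file path are my own illustration, not from the talk:

```php
<?php
// Emit one machine-readable log line per event.
// Field names (ts, level, event, context) are illustrative, not a standard.
function log_event(string $level, string $event, array $context = []): string
{
    $line = json_encode([
        'ts'      => gmdate('c'),   // normalised to UTC, as the talk recommends
        'level'   => $level,
        'event'   => $event,
        'context' => $context,
    ]);
    file_put_contents('/tmp/myapp.log', $line . PHP_EOL, FILE_APPEND);
    return $line;
}

$line = log_event('info', 'media.played', ['media_id' => 42, 'user_id' => 7]);
echo $line, PHP_EOL;
```

A human can still skim these lines, and a tool can consume them without any grok-style parsing later on.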
When I go to this endpoint, this happens. And I very quickly worked out that if I wanted to do a specific thing, I had to look in a certain area of the code, because I could search for the log messages. They helped me really zero in on what code was actually running. So that was a quick introduction to why we need logging and how we should do it, but really we want to get started. We want to go back to work on Monday and build it ourselves. An application log always starts with the four Ws. What happened? When did it happen? Who did it happen to? Where am I sending this information to? It could be a log file on disk. It could be an email to your CEO. If you're in an early-stage startup and someone deactivates their account, that log message probably wants to get sent to your CEO. They want to see that information. If the database goes away and there's an error, you probably want to get notified somehow as well. You don't just want it to go into a log file on disk. And, perhaps most importantly: why did it happen? Make sure to log the reason why it happened, not just the raw data. When your log says that something's happened, it's really useful to be able to say it happened because the user did this or the user did that. One of my favourite stories to tell here is about when we were trying to debug an issue where something worked sometimes, but not others. Well, if it's working sometimes, it must be a race condition or something like that. What it actually ended up being is that the code wasn't sending the message that we expected at all. We had incompatible bindings, and the message just wasn't being sent. But the reason it worked sometimes is because, unbeknownst to us, there was a cron job somewhere that ran every minute and pulled all the information, just to synchronise it.
And when we saw it working, it was actually because we'd tried to send the message ten seconds before the minute ticked over. So we added a reason: "regenerating the user information because I received a message" versus "regenerating the user information because of the cron job". And that problem went away. Honestly, we spent hours and hours on this, when just logging the reason why it happened would have made it really clear to us. So, getting started. It's really easy to get started. Who knows this function? Yeah. All you need to do is take your code. This is my code. It counts the number of consonants in a word. I dropped in this error_log function call. It writes simple lines to wherever they need to go. By default, in a CLI app, it writes to standard error, which is fine. It's better than using echo. For web pages it's better still, because it will log to the Apache log. If you don't want it to go there, you can change it. You can set this INI setting and say: write to /var/log/myapp.log. And every time you call error_log, you get a line in there. There aren't many pros to this approach beyond the fact that it's built in. And there are plenty of cons. Is it really semantically correct? My message was more informational than it was an error, and it was getting mixed in with the error messages here. It's just not as powerful as we'd like. But it's still a lot better than nothing. So if you can go away and do this, start with this. The other option is to use a logging framework, and this next slide used to be a slide that had KLogger, Zend\Log, all of those on it. Just use Monolog. I used to talk about all the different ones, but it's really not worth it. Just use Monolog. It's the best out there. You trust Composer? Well, Jordi did Monolog too. Same guy. It takes a little bit more work to get it running. First, we need to instantiate a Monolog instance and give it a name. Then we need to add a handler. A handler is what tells Monolog where to send the messages.
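A minimal sketch of the error_log approach being described. The consonant-counting function is reconstructed from the talk's description, and the log path is just an example:

```php
<?php
// Redirect error_log() output to a file of our choosing instead of the
// default SAPI log (stderr on the CLI, the Apache error log on the web).
ini_set('error_log', '/tmp/myapp.log');

function count_consonants(string $word): int
{
    $count = strlen($word) - preg_match_all('/[aeiou]/i', $word);
    error_log("count_consonants('$word') = $count"); // one line per call
    return $count;
}

echo count_consonants('logging'), PHP_EOL; // prints 5
```

Every call to error_log() now appends a timestamped line to /tmp/myapp.log, with zero dependencies.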
In this case, we're sending it off to a file in our /tmp folder, and we want to write anything at the debug level or above. We'll cover the logging levels in a little while. We make a call to $log->info(), which adds a log line at the info level. Info is higher than debug, so it gets logged. And it looks like that: it's got a timestamp, it's got your app identifier, it's got the level and your message. Those brackets at the end are for additional metadata, which we didn't actually use. Job done. We're now using Monolog. Once you've done the bootstrapping, that single call is all it takes. But I'd hate to talk about Monolog and miss out one of my favourite parts of it, which is the fingers crossed handler. What this does is wrap another handler, and it takes control of when messages get flushed to the wrapped one. It's a little bit hard to explain, so we'll go through an example. Here, we create a stream handler that says: I want debug and above, so I want everything, and I want to write it to a file. And then you've got this fingers crossed handler that says: I'm going to take control of that stream handler you've got, but I only want to flush when something at error level or above comes through; I don't want debug or notice or anything like that on its own. We buffer everything up, including debug messages, but we only write them out if an error-level message is triggered. Like I said, it gets confusing, so let's just go through an example. In this situation, we say we want error and above, and we log an info message. And our log file looks like this: empty, because info wasn't high enough to trigger it. But what if an error happens? Something bad happens immediately after the info. If we were just using the stream handler set to error and above, you'd only ever get the errors, and you wouldn't even need fingers crossed. But what the fingers crossed handler will do is say: OK, I'm set to error. There has been an error.
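A sketch of the Monolog setup described above, assuming monolog/monolog is installed via Composer. The file path and channel name are just examples, and the constants are the ones Monolog exposes for the syslog levels:

```php
<?php
require 'vendor/autoload.php';

use Monolog\Logger;
use Monolog\Handler\StreamHandler;
use Monolog\Handler\FingersCrossedHandler;

// The wrapped handler accepts everything (debug and above)...
$stream = new StreamHandler('/tmp/myapp.log', Logger::DEBUG);

// ...but FingersCrossed buffers it all in memory and only flushes the
// whole buffer once a message at ERROR level or above comes through.
$log = new Logger('myapp');
$log->pushHandler(new FingersCrossedHandler($stream, Logger::ERROR));

$log->debug('connecting to database');   // buffered, not yet written
$log->info('user 42 logged in');         // buffered, not yet written
$log->error('database has gone away');   // flushes all three lines to disk
```

If the error never happens, the buffer is simply discarded at the end of the request and the log file stays quiet.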
And it will log everything up until that point. That's why it's called fingers crossed: you're hoping that nothing goes wrong, and if nothing does, your logs stay empty. But the instant that something goes wrong, you have all of that debug information, all of that info-level logging, and it helps provide context for what's actually gone on. Monolog has rather more pros than error_log. It's an object, so we can inject it into our code if we want to. It supports multiple log writers: I showed you writing to a file, but you can send to an HTTP endpoint, emit to syslog, send an email, do whatever you want. And it has log level support. It lets you say: this is informational, this is an error. Again, we'll get to that in just a second. The only con for me is that instantiating an instance can be quite complicated, and passing it into all of your code can be quite complicated. So if you've got a huge legacy project and it would be really difficult to use this, you might be better off with error_log. For everyone else, just use Monolog. And I mentioned the log levels. You can use whatever you want. Any PHP-slash-JavaScript developers in here? I don't know, not so many. You use npm. Npm has its own set of logging levels, completely different from anything else, and it actually has one lower than debug called silly. And honestly, people use it so much, and it just spams and spams and spams. If you do C++, you might be used to trace-level logging, things like that. Everyone picks and chooses their own, but they shouldn't, because there's already a standard for this. We should just use the syslog levels. There are enough levels there to express your intent with reasonable granularity, starting all the way at the bottom: debug, info, notice, warning, error, critical, alert and emergency. I'm doing this from memory. That wasn't bad. Most of yours will be somewhere in the middle, somewhere between info and notice up to error and critical.
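The syslog severities map to numeric priorities, with 0 the most severe. As a rough illustration (this is my own sketch, not code from the talk), filtering for "error and above" is just an integer comparison:

```php
<?php
// Syslog severities (RFC 5424): lower number = more severe.
const LEVELS = [
    'emergency' => 0, 'alert' => 1, 'critical' => 2, 'error' => 3,
    'warning'   => 4, 'notice' => 5, 'info'     => 6, 'debug' => 7,
];

// Should a message at $level be written when the handler threshold is $threshold?
function should_log(string $level, string $threshold): bool
{
    return LEVELS[$level] <= LEVELS[$threshold];
}

var_dump(should_log('critical', 'error')); // bool(true)  — more severe, log it
var_dump(should_log('info', 'error'));     // bool(false) — buffer or drop it
```

This is essentially what a handler's level threshold does in any syslog-style framework.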
You can imagine you can't connect to your database: that's a critical error. A user signs up: that might be a notice. Alert and emergency are really reserved for the server administrators. For example, emergency is "the server is physically on fire". So when you say, well, why can't I use emergency in my application? It's an emergency, I can't get to the database! It's not. That's a critical fault. It needs fixing. It might even be an alert if you push it, but it's definitely not an emergency. Monolog supports the syslog levels, and coming up on two years ago now, I think, this was actually a topic of discussion for the PHP-FIG, and they ratified it as PSR-3. So if you like following PSRs, this is in there. It provides a logging interface for you to work to that contains all of these levels. All you need to do is go through your application and decide what level each log message should be at, because you don't want too many at the top, or you'll just never read them. But think about it. Is this informational? Is it something notable that I should be aware of? That's a notice. Is it a warning? If I don't do something about this, is it going to turn into an error? Or is it critical? Can I not talk to my database? So, we've been through our code base. We've added a ton of logging. Now we need somewhere to send it. Writing to a file on disk is good, but no one ever reads it. You ship it to production, and you might not even have access to the server to see it. And this for me is the bit that's amazing. Imagine this. Everything is on fire. Servers, buildings, everything. Would you want to go into the office and log into each server one by one, then tail -f each log file on there, just trying to guess which server and which application you need to look at? Wouldn't it be great if there was just one place that you had to log into and look at? That's what the ELK stack gives us. ELK stands for Elasticsearch, Logstash and Kibana.
But really, if we think about the order in which they run, it's more like the LEK stack, and that doesn't have the same ring to it. I can see why they went for ELK. It starts with Logstash. Logstash sits at the beginning, and we can think of it as a pipeline. It has a range of inputs, some middleware and some outputs. And when I say it has some inputs: it has a lot of inputs. It can be a path to a file. It can listen on a specific port. It can pull data from S3 if you want. There are about 50 inputs, some more popular than others, but they're all available for you to use. Personally, I tend to use file the most. I also tend to use stdin, because who doesn't like piping data into a process? And also S3, actually, because we archive a lot of our log data. We don't want to see it all the time, but sometimes it is useful to re-import it and try to work out why something happened with a little bit more information. That's just the inputs; that's how you get your log data into Logstash. Once it's in there, you're really into the core of Logstash, which is its filter system. If you don't need this, you can use a more lightweight option, because Logstash is pretty heavy. It runs on the JVM and needs a lot of RAM to keep going. There are lighter options such as Beats, but they don't have any of this filter logic, or at least not to the same extent. A filter looks something like this. This is the easiest one, the JSON filter. It expects to receive a valid JSON document, it extracts each key from it, and you can add or remove fields as you want. Say you log sensitive information, but it shouldn't end up somewhere searchable by everyone: use Logstash to remove that field so that it never even gets to Elasticsearch. This is great if your application is already outputting JSON-formatted messages. Monolog can do this; it's got a JSON formatter. You can also use key-value logs, which is the logfmt style.
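As a rough sketch of the pipeline being described (the field names and paths are examples, not from the talk), a Logstash config with a file input, the json filter, and an Elasticsearch output might look like:

```
input {
  file { path => "/var/log/myapp.log" }
}

filter {
  json {
    source => "message"            # parse the raw line as a JSON document
    remove_field => ["password"]   # strip sensitive data before it's indexed
  }
}

output {
  elasticsearch { hosts => ["localhost:9200"] }
}
```

Every key in the JSON document becomes a searchable field in Elasticsearch, and the removed field never leaves Logstash.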
Logfmt is supported too, and you can manipulate all the fields and do things like set defaults here as well. But really, the power of Logstash's filters comes in here. It's called the grok filter, and this is a line from OpenSSH. Someone's logged into a machine, and the grok filter lets us tokenise arbitrary text and extract meaning from it. Here, I'm interested that a public key was used to log in as root, from this specific IP address, and that sshd was running on port 22. To parse this and get it into something that's machine-readable, you use something like this. It can look quite scary to start with, but it's really not that bad, because half of it is literal text; things like "Accepted" will match the word "Accepted" literally. And then we say we want to match a word, and I'm going to capture that as auth_method. We want to match a user. We want to match an IP. We want to match an integer. And at the other end, this will pump out a JSON document that has auth method, username, source IP and source port. These are just named regexes. Grok comes with a ton of them built in. You can write your own if you want, but this one uses WORD, USER, IP and INT, which are relatively simple regular expressions. There's a fantastic tool called the Grok Debugger to help you put these together. You put in your unstructured text, you put in your grok pattern, and it highlights in real time as you make changes. It's wonderful. If anyone wants to try writing these, I wouldn't try to do it without it. Once we've got that data out, we've manipulated it as we want and got it into a structured format, we need to send it somewhere. And again, there are dozens of options, but most people don't use, well, any but one, really. Most people send it to Elasticsearch. Every filter applies to every input, and everything is sent to every output.
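The exact pattern on the slide isn't captured in the transcript, so here is a reconstruction of the kind of grok filter being described, for an OpenSSH "Accepted publickey" line:

```
# Example input line:
#   Accepted publickey for root from 203.0.113.5 port 22 ssh2

filter {
  grok {
    match => {
      "message" => "Accepted %{WORD:auth_method} for %{USER:username} from %{IP:src_ip} port %{INT:src_port} ssh2"
    }
  }
}

# Resulting fields:
#   { "auth_method": "publickey", "username": "root",
#     "src_ip": "203.0.113.5", "src_port": "22" }
```

The literal text ("Accepted", "for", "from", "port") matches itself; the %{PATTERN:name} tokens are the built-in named regexes, each capturing into a field.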
We can do some clever things with tagging, but if you want different combinations of filters and outputs, the easiest way to do it is just to run separate Logstash instances. Logstash can be pretty slow. Well, sorry, it's actually really fast, but when you're throwing thousands and thousands of log lines a second at it and it needs to parse them and generate structured data, it can have a hard time. Use as few grok filters as you can. Make sure that your applications emit structured logs. Just minimise the amount of work that Logstash has to do. Like I said, you can send the output anywhere. There are dozens of options, but everyone uses Elasticsearch. Elasticsearch is a document-based search engine. It's got great full-text search capabilities, so it's ideal for log messages. It's great at aggregations as well, if you want to do things like build histograms. There's not actually much more to say about it other than that you install it and run it. There are a few performance tweaks you can make if you hit issues, but most people don't hit those issues, at least not initially. We'll cover those at the end when we discuss the common pitfalls. There's not much more to say about Elasticsearch: it stores documents for you to search later. Finally, Kibana. Kibana is your search interface. It's a quick way to query Elasticsearch, to build dashboards and visualisations. It's actually kind of hard to explain, so I figure the easiest way is just to show you Kibana. This is the bit where everything crashes and burns, hopefully not. Can you see that? This is Kibana. All of this is fake data that I imported. You can click into an entry and see that the JSON has been tokenised. I made a request to billing. You probably can't see that, actually. Let's make that bigger. Not that big. I made a request to billing. It came from this IP address. It was a 200. This was the original message. You can do all kinds of things in here. That's HTTP logs. I also have a lot of logs for business events.
So I also imported a set of bank accounts into here. I want to see the bank accounts for all males, and it returned them. I can see the balance, the address. But I want to do more. I want to say: show all males where the balance is greater than 20,000 and less than 35,000. And these are questions you can ask after the fact. You don't have to think about them in advance and specifically log this information. Just throw all your data into Elasticsearch, and Kibana will let you search and see it. So here we are. We can see that Mr Felt's balance... where's it gone? There it is, between our search criteria of 20,000 and 35,000. Now imagine this for SSH logins. You want to see when people logged in as root, when Michael logged in, when Wynn logged in, when people logged in using a password. All of that is now at your fingertips. You can search and find it. But like I said, sometimes you don't want to read 10,000 log messages. You just want to see a graph. Kibana has excellent visualisation tools as well. So, going back to our HTTP requests, I'm going to build a pie chart, and I'm going to say I want to split it by endpoint. And there it is. We can see at a glance how many requests went to each endpoint. But actually, that's not enough for me. I want to add another dimension to it. I want to split it by, let's say, HTTP code. And now, for each endpoint, we can see how many 200s, how many 500s, how many 400s. That's useful. I like seeing it by HTTP code, but I don't like the format it's showing. So instead I want one pie chart per endpoint. I want to split the charts. You can ignore the error message. This is by endpoint, and I'll just drag that to the top and get rid of all the other ones. And now I get five charts, all broken down by HTTP code. So this one's billing. Look how easy that was. That was 30 seconds. We can do bar charts. We can say that our Y axis is the average balance of an account.
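The "after the fact" questions being demonstrated are just Lucene query strings typed into Kibana's search bar. Assuming field names like gender and balance (the sample data's exact fields aren't shown in the transcript), they would look something like:

```
# All accounts for males:
gender:M

# Males with a balance between 20,000 and 35,000 (inclusive range):
gender:M AND balance:[20000 TO 35000]

# SSH logins as root using a public key (fields from a grok filter like the earlier one):
username:root AND auth_method:publickey
```

None of these queries had to be planned for when the logs were written; any field that was indexed can be filtered on later.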
And our X axis is the state that the person lives in. By default, it shows five buckets; there are 51, so we bump it up and get all of them. And there we are. We can see that Colorado has the highest average balance. These are insights that I'm pulling out just from the data that we're emitting in our application logs. And it's not just for application logs. If I change this, change the index to shakespeare: I actually ingested the entire contents of Henry IV. Every line spoken. And I can search for every line that Gloucester says, just like that. And the same with visualisations. We can build a tag cloud. We do it by speaker. Let's find out who has the most lines in King Henry IV. Oh, it's Gloucester. Hamlet has a lot. King Henry V has a few. Othello has a few. Imagine importing commit histories and things like that into this tool. You can do all kinds of things. And you can save these visualisations and build up dashboards. You can have 3, 4, 5, 15, 20 graphs that all match up, all showing the information. And at the top, you can't see it at the minute because I made it too big. To the right, it gives you a time frame. You can say the last 24 hours, the last 15 minutes. And as time shifts, your graphs will update. So you can just leave them on a screen and monitor. If you expect something to be zero over time and it ever changes to one, something's happened that you want to know about. These are my screenshots in case the demo didn't work. Ever the optimist. About 15 minutes left. Yes? [Audience question: externally, is there an API or something?] Well, you can put Kibana on any screen you want; people do have to be able to get to it. All of the data is stored in Elasticsearch, and that has an API, so you can use other tools like Grafana with it as well. Kibana is just the graph-rendering part. So, section 5: log management. Some of you may have heard of Asimov's laws of robotics. A robot may not injure a human being through... no, sorry.
A robot may not injure a human being or, through inaction, allow a human being to come to harm. I have a similar one for application logs: an application log may not injure an application's performance or readability. Logs are supposed to help us, not make it difficult to work on a code base. I remember once I was working on a project, I think it was using MongoDB actually, and we weren't getting the performance we wanted. So we started to log the time it took each query to execute, and we logged that to a MySQL database. And it started out great. We started to make optimisations. But then, over time, page responses started to get slower and slower and slower. It turned out that it was taking more time to log the time taken for the query to MySQL than it was to run the initial query. That was a fun one. And on another project I worked on, for every line of code there were two or three log messages, and they just made it a nightmare to work out what was actually going on. Logs are supposed to help, not hinder. So they may not injure an application's performance or readability. Those are the arbitrary rules. As your application grows, so does the amount of log data. It's easy to get started; it's harder to scale it. You need to make sure you can handle bursts of data. Because when do you get the most data? When something goes wrong. And when are logs the most useful? When something goes wrong. So if something goes wrong and your logging system blows up and you don't have any logs, what use is that? You don't care about the logs when everything's working. Disk space. This seems like a simple one, but it's the number one cause of application failure: you're logging to disk, and you run out of disk space. I alluded to this one earlier with Elasticsearch. Elasticsearch stores data in what's called an index, and it's normally date-sharded. But imagine that you've got one application that sends a ton of data that isn't really that important.
Or one application that doesn't really say much, but when it does, you really have to listen. You can treat Elasticsearch as a circular buffer to start with: as you get more data, the oldest data is erased, overwritten. Actually, I don't think that's the default; I think that's something we always just configured. But if you do have a high-volume feed, tag it with something and send it to a different index, or tag your important ones, and you can set custom retention policies. You can say: this high-volume feed, I only want to keep three days' worth of it, but this really important one, I want to keep for 12 months. By working out which logs are important to you and tagging them as such, you can make sure that you keep the information that you need. Ship what's relevant. This one's down to you. If your Logstash nodes are overworked, don't send them debug logs. This might involve writing two separate logs to disk, one for debug and one for error, so you can always go back and look on disk if you need to. Or, if disk space is at a premium, ship everything and let Logstash strip it down. It's a trade-off that only you can decide on. Every feature that goes to production must have a set of grok patterns if necessary; ideally, we're just emitting JSON anyway. It's got to have a dashboard that we can look at to work out whether it's behaving as intended. It's got to have a set of alerts. When devs are involved in thinking about how something will be monitored at development time, the instrumentation ends up being way better. If, before you start writing any code, you think: how do I know if this is working? What's going to happen when it's not? How will I know? Then you can instrument your code for that. Make sure each request has a unique ID so that you can trace that request through your entire system. You might generate this yourself at your entry point, at your API. If you make calls to internal APIs, pass it along.
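A minimal sketch of the request-ID idea (the X-Request-ID header name is a common convention, not something the talk mandates): reuse an incoming ID if one is present, otherwise mint one, and attach it to every log line and downstream call.

```php
<?php
// Reuse the caller's request ID if present; otherwise generate a fresh one.
function request_id(array $headers): string
{
    return $headers['X-Request-ID'] ?? bin2hex(random_bytes(8));
}

$id = request_id(['X-Request-ID' => 'abc123']); // passed along unchanged
echo $id, PHP_EOL;                              // abc123

// Attach it to every log line so all events for one request correlate.
$line = json_encode(['request_id' => $id, 'event' => 'checkout.completed']);
echo $line, PHP_EOL;
```

Searching Kibana for that one ID then shows the request's journey through every service that logged it.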
You want to correlate all of those log events to a single request. If you don't want to generate it yourself, use something like Varnish's request ID header. It doesn't matter where it comes from, so long as you have one. And finally, normalise time zones to UTC, or use Unix timestamps. I don't care which. This needs a second slide, just because people say, yeah, yeah, I've got that, job done. And then they come to use their logs and say: why did this happen? It says it did that, but that happened before it, and it can't have, because this event told that one to run. And it turns out the events are just in different time zones. One's running on the east coast of the US, one's running on the west coast. Stick to UTC, otherwise you're in for a world of pain. Almost there. Almost time for beer. I'm going to quickly run through a couple of supporting tools. I mentioned Filebeat earlier. It's a lightweight alternative to Logstash by the same company. It's written in Go rather than running on the JVM. As of version 5, it has some data manipulation tools, but they're nothing compared to Logstash's. So you can use it to ship logs off your nodes to a central Logstash instance, which then processes them and sends them on. Or, if you don't need to do that, you can use Filebeat to send directly to Elasticsearch. It's meant to run on every node, watching log files and shipping them off to either Logstash or Elasticsearch. Then we have Graphite, which is a time series database for recording events. I would like to change this slide and say use Prometheus instead, but Graphite's fine to get started with. And this is for things that don't need much context. We saw the log messages; they have a ton of metadata. This is for when you just need to know that something happened. You can send data to it from your application, or you can use Logstash's graphite output. What you end up with is something that looks like this. And it's great for spotting patterns.
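Graphite's plaintext protocol is just "metric value timestamp" lines sent over TCP (port 2003 by default). A rough sketch of formatting a datapoint from PHP; the metric name is an example of my own:

```php
<?php
// Format one datapoint in Graphite's plaintext protocol:
//   <metric.path> <value> <unix-timestamp>\n
function graphite_line(string $metric, float $value, int $ts): string
{
    return sprintf("%s %s %d\n", $metric, $value, $ts);
}

$line = graphite_line('myapp.checkout.completed', 57, 1500000000);
echo $line; // myapp.checkout.completed 57 1500000000

// In production you'd write this to a socket, e.g.:
//   $fp = fsockopen('graphite.example.com', 2003);
//   fwrite($fp, $line);
```

No schema, no metadata: just the fact that something happened, which is exactly the "doesn't need much context" case described above.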
You've all got no idea what this graph shows, but I know what it does: it tracks data input versus data output from a process. We expect every item that comes in to get a little bit of processing and augmentation, and then we want to send it out again. So we expect the blue and the yellow lines to match up. Not exactly, but close enough, and as you can see there, they pretty much do. If the yellow line dropped dramatically, I'd know something was wrong. This one's a graph of system load, per minute. If one service spikes, you'd notice it. The values are almost meaningless right now: you don't care that it's between 0.1 and 0.4, that's fine. But if something shoots up to 10, everything else shrinks down and you see one big spike. It's about the shape of the data, not necessarily the values. And seeing that shape makes you more informed. You can use Grafana to talk to Graphite; Graphite's just a data source. You can use Grafana to talk to Elasticsearch rather than Kibana. It's just a graphing tool, and as you can see, it can draw much prettier graphs. PagerDuty. This is important, but it doesn't do it all on its own. You want to trigger alerts based on data from Kibana and Graphite: if a system's load average goes above 10, you want to know about it. But you need to trigger that alert somehow, and for me, the best tool I've found is 411 from Etsy. You can set up rules that say: alert me when there are X events in Y time. That can be: alert me when there is one event, ever. Or: alert me when I see 10 "couldn't connect to database" errors in three minutes. You can set those thresholds yourself. You can say: alert me when requests per second goes outside of one standard deviation, or when it drops by more than 10%. It can send to a lot of alerting systems, such as email, Slack, and of course, PagerDuty.
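That "X events in Y time" idea is really just counting events inside a sliding window. Here's a minimal Python sketch of that counting logic; it's my own illustration, not 411's actual code, and the class name is made up:

```python
import time
from collections import deque

class ThresholdRule:
    """Fire when more than max_events matching events arrive within window seconds."""

    def __init__(self, max_events, window):
        self.max_events = max_events
        self.window = window
        self.times = deque()

    def record(self, now=None):
        """Record one matching event; return True if the rule should fire."""
        now = time.time() if now is None else now
        self.times.append(now)
        # Drop events that have fallen out of the sliding window.
        while self.times and now - self.times[0] > self.window:
            self.times.popleft()
        return len(self.times) > self.max_events

# "Alert me when I see 10 'couldn't connect to database' errors in three minutes."
rule = ThresholdRule(max_events=10, window=180)
fired = [rule.record(now=t) for t in range(12)]  # 12 errors, one per second
# The rule stays quiet for the first 10 events and fires on the 11th and 12th.
```

Setting `max_events=0` gives you the "one event, ever" rule; the fancier statistical triggers, like one standard deviation on requests per second, need a baseline rather than a simple count.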
It's not a question of "should we?"; it's a question of "how do we?". And the key to this is that developers are empowered. You should be the ones doing this work, not your operations team. Developers are the ones who know how an application should work, and the ones who need to expose the relevant information. All the operations team should be doing is setting the alerts in production, based on the values that the developers gave them. And we do need to be aware that logging has a performance impact. It's code that's got to run, after all. For example, we compile debug statements out of some of our core C++ systems, because even just doing the check to say "should I log?" slowed them down too much, and we actually had to compile that code out. But if you stick to info and above, that generally gives you enough information to work out what's going on when you need it, without having too much of an impact on your production systems. In the end, it's down to you to decide how much logging you need. Everyone's got a different situation; you've got to do what's right for you. But think about it. I flew here. I live just outside London in the UK, and I'm sure that some of you flew to get here too. Imagine, as you were getting on the plane, the steward said to you: we've got a choice for you. We're going to get you there, but we can either fly slowly or we can fly blind, and we'll get there much faster if the pilot flies blind. Which would you choose? You'd choose to fly slowly. Sure you would. And the same is true for logging. Without visibility into what our application is doing, we are flying blind. And with that, we're done. I've been Michael. You've been awesome. I liked the audience participation. I know it's late, so I won't do questions now; just come and find me afterwards. I'm sure everyone just wants to get out and get a drink. Thank you very much.