Hey, welcome back. Hey, AppSecVillage, we're getting down to the last few speakers of our time together here on day three of AppSecVillage at DEF CON 2020. I want to thank you for coming and for making this possible. I want to make sure to thank our sponsors and the folks that run AppSecVillage. They've done a wonderful job, both of the years that we've been here. This has become my home away from home when I come to DEF CON. I look forward to it every year and I hope you do too. And I hope to see you again next year when we do this, hopefully in person, but we're going to do it one way or the other. Our next discussion is going to be Secure Your Code, Injections and Logging by Philipp Krenn. Philipp lives to demo interesting technology. Having worked as a web infrastructure and database engineer for more than 10 years, Philipp is now working as a developer advocate at Elastic. Based in Vienna, Austria, he is constantly traveling Europe and beyond to speak and discuss topics such as open source software, search technology, databases, infrastructure, and security in general. Please welcome Philipp to the AppSecVillage stage. Hi, I'm Philipp. Let's dive into the next session. Let me share my screen and there we go. Secure Your Code, Injections and Logging. Let's see what we can do here. So when we talk about security, we normally keep telling ourselves that everything is fine, especially when we say everybody's job is security. Then it's normally nobody's job and we can all pretend that this is all good. And then you look at the OWASP Top 10 and you see that at number one it's still injections. So good old SQL injection, maybe shell injection, NoSQL injection, or whatever other form of unsecured user input you are throwing at your application that is wreaking havoc. And then you might reconsider and say, well, maybe this is not so fine anymore.
And then you look further down the OWASP Top 10 list and you see, for example, number 10, insufficient logging and monitoring, where you then realize that on average it takes something like 200 days or so until you find a breach. Maybe a bit quicker if the attacker asks for ransom money, but it might take a while. And then you might end up at the point where you say, well, this is no longer fine or good, and everything is terrible and on fire. And we don't want to get to this point. So what can we do about that? I'm Philipp. I work for Elastic, the company behind Elasticsearch, Kibana, Beats, and Logstash. You might have heard of us or might be using us already somewhere. My official title is Developer Advocate, so I mostly talk about the good stuff that we do, and today is no exception. So let's see what we can do here. I generally build highly monitored Hello World applications. Today is no exception, and we will pretty much dive into that right away. So like in any good cooking show, I have prepared something: a very bad application. xeraa.wtf is where this is running, and I will let this run. So while this is prerecorded, I will let the demo run so you can actually destroy it or play around with it live afterwards as well. If you're wondering where xeraa is coming from, it has something to do with my name. If you're bored, try to figure it out. And obviously .wtf is the right domain for any demo. So let's hope this works out and let's head over to my very bad and, well, average demo application. So we have some Elastic employees here, and I added our CEO Shay, one of our co-founders Simon, and myself. And, well, maybe we want to add more employees, and to do that, obviously you need to log in. If you want to regularly log in, my username is philipp and my password is secret. This works. And then we can add another Elastician who is, I don't know, maybe in New York.
And they have some salary and they also have a password of, let's say, secret. Okay, so that worked. But looking at this form, this is not secure, and by design. This is the Hello World of SQL injections. So we can just use one parameter like philipp, and in the other field we will just do a ' OR TRUE. So I will do a ' OR TRUE and try to log in like that. This worked, unfortunately, and I can now add a bad user in, I don't know, Gotham, with a much higher salary, and they also have some random password. Okay, so we have added the bad user, nothing super surprising here. This has the classic SQL injection problem: I don't escape my user input, I just pass in whatever comes here. I'm hashing my password, but I'm just breaking out of that and doing OR TRUE, for example, or whatever other variation of this you want. You've seen this plenty of times. It's called an SQL injection, or some people call it accidental GraphQL if that is more your thing. But classically, this is an SQL injection. And I'm pretty sure all or most of you have seen Little Bobby Tables: if you give your child the right name, like Robert'); DROP TABLE Students;--, you will have a lot of fun at school, or your school should just write better systems, but that's a different discussion. So let's have another look around. I can also display users. So for example, looking at our CEO Shay here, this looks all nice and good and you can go back and forth. What I'm doing here is I'm just passing the ID of my user directly in. So I could just change that to 2 and head over to Simon. This is insecure again, unsurprisingly, since this is a broken application. And before we try to actually throw some bad parameters at it, I want to quickly jump to sqlmap. If you've never seen or used sqlmap, it's an automatic SQL injection and database takeover tool.
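To make the login bypass above concrete before we let sqlmap loose on it: the broken login boils down to string-concatenated SQL. Here is a minimal sketch, using SQLite instead of MySQL and a made-up schema, of why ' OR TRUE gets you in:

```python
import sqlite3

# Toy stand-in for the demo's employees table (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (username TEXT, password TEXT)")
conn.execute("INSERT INTO employees VALUES ('philipp', 'secret')")

def login(username, password):
    # VULNERABLE: user input is concatenated straight into the query string.
    query = ("SELECT COUNT(*) FROM employees WHERE username = '%s' "
             "AND password = '%s'" % (username, password))
    return conn.execute(query).fetchone()[0] > 0

print(login("philipp", "secret"))        # legitimate login -> True
# The quote closes the password literal, OR TRUE matches every row,
# and -- comments out the rest of the query.
print(login("philipp", "' OR TRUE --"))  # injection -> True
```

Same idea as the demo form: the database cannot tell where the query ends and the attacker's input begins.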
You can basically say, hey, there is this parameter, the ID, and I would like to check that. And it will go through a lot of variations of SQL injections to say, well, this looks like it might be interesting or not. Okay, so let's open my shell and let me paste in my command, so you don't have to see me type this. So I'm running sqlmap against that URL that I've shown you. And at the end we add a --purge, because I've run this before and purge will just make sure we don't reuse any records that we've created before. So we will start fresh. So we will start running this and sqlmap will just figure out what is possible here. It will ask me a couple of questions about how it should treat this. You can see it's already trying to do stuff, and it figured out that I'm using MySQL here. So yes, this is the right database and I want to skip tests for other databases; since this is MySQL, we can focus on this. And I'm okay to run these risk-level queries here. So you can see it's trying out various things, and at the end it will tell me what is actually vulnerable. It will also ask me if it should check any other parameters, which we will not do here because there's only one parameter that is interesting; I don't want to test anything else. So we're done here. Looking at the result, you can see, for example, okay, this is running against nginx and against a more or less recent version of MySQL. We would have some logs in this folder here, though they're not super interesting; they're mostly what we have here, plus all the SQL commands that have been run. So we will stick to what we see here. You can see that in 81 HTTP requests, we figured out a couple of things.
So for example, we see a boolean-based blind attack, where you can do something like id 1 AND something equals something, or you can have a time-based blind attack, where we add a sleep and we see the query indeed takes that amount of sleep, plus maybe a little more, or you can have a UNION query, where you try to join data together. So this is clearly vulnerable to something. So let's have a look at what I actually have in here. Again, a very bad query. I'm just taking the parameter, I'm not escaping it in any way, I'm just trimming any spaces off of it. And then I take this query and I run it. The good thing here, by the way, is that I actually log the query that people are sending to my system. This will help us afterwards to figure out what has actually been happening here. So we log this into a /var/log/app.log file. The bad thing here is that I'm using MySQLi's multi_query here, which allows me to run multiple queries in one go. So besides that SELECT, I might do another INSERT, for example. So I add this to the end of the URL, and it will automatically be URL-escaped and work properly: I could just say INSERT INTO employees some bad actor. Before I run that, let's check in our application: right now we have a bad user, but we don't have a bad actor. So to that parameter here I just add, after a semicolon, the INSERT statement, run this, and it looks like nothing happened. But when I go back here, you now have a bad actor. So this works as expected, or not expected; that is mostly up to your expectation. By the way, this doesn't escape the visualization or the output in any way either. So you could run any random JavaScript here, which I didn't do because it would be kind of annoying to have various pop-ups, but you could break the entire site with some weird JavaScript. So there is no other protection here, we don't do any escaping, so this is easy to pick apart. So what's going on in our application?
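The stacked-queries problem can be sketched the same way. Python's sqlite3 has executescript, an API that, like the multi-statement call described above, happily runs every statement in the string it is given (toy schema again, not the demo's actual code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER, name TEXT)")
conn.execute("INSERT INTO employees VALUES (1, 'Shay')")
conn.commit()

# Attacker-controlled "id" parameter with a stacked INSERT after the semicolon.
user_id = "1; INSERT INTO employees VALUES (99, 'bad actor')"

# VULNERABLE: executescript runs BOTH statements, the SELECT and the INSERT.
conn.executescript("SELECT * FROM employees WHERE id = " + user_id)

names = [row[0] for row in conn.execute("SELECT name FROM employees ORDER BY id")]
print(names)  # -> ['Shay', 'bad actor']
```

One request that looks like a harmless lookup quietly writes a new row, exactly the bad actor effect from the demo.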
Now comes the interesting part: before I secure it, I want to get a bit of a better idea of what has even been going on in my application. And yeah, if you don't have any log files, nobody can remove your log files. So let's assume we want to have log files, and we also want to ship them off and have them, hopefully, in a secure location. So even if somebody breaks into that instance and manipulates it, they cannot actually do anything to your logs and you still keep some trace of what has been happening. So we want to collect all the things, and having a log file is a very good starting point. Maybe ship it off in case somebody breaks into the instance. The main downside is that these log files have the ugly tendency to get very large. So maybe at some point you end up like this little submarine, basically, and then you have gigabytes and gigabytes of log files in which you try to find something. Maybe successfully, maybe not so successfully. grep is great until you have multiple servers and various big log files, because then it's not going to be so much fun. So you probably want to use something else that makes your life a bit easier. One of the most widely used tools there is probably the ELK Stack. You can see Elasticsearch, Logstash, and Kibana, one sitting on top of the other, and that makes up the good old ELK. Elasticsearch is the thing that stores your data, Logstash is the thing that gets, parses, and enriches your data, and Kibana can visualize it. By the way, when I say parse: you normally have some log line, and out of that log line you want to extract some pieces of information. So for example, you have a log level, or you have the URL where something came from, or maybe you have some user information or a timestamp.
All of these you might want to extract to actually be able to filter down and then say, oh, give me all the logs from this specific timeframe, or for this specific user, or for this specific application or URL within the application, or any way you want to slice and dice your data. When I say enrich, it might be something like: you have the IP address of the person requesting the website, and then you could add the GeoIP information to that IP address, and then you could draw it out on a map and see where all your visitors are coming from, for example. Or you could just filter down and say, oh, give me all the users coming from Russia or China, because that's kind of unusual, I normally don't have any visitors from there; let's figure out what they have been up to. That works. The only thing is that by now the stack has evolved slightly. So we still have Kibana for the visualization and Elasticsearch that stores the data. Sometimes, by the way, people are slightly confused and ask, what is the database behind Elasticsearch? There is no other database anymore. There is the Apache Lucene library, which writes the data down to disk, but otherwise there is no other data store behind it anymore. Logstash is still that versatile, somewhat heavier ETL tool (extract, transform, load), so it can get data from various systems, change and transform that, and push it out to Elasticsearch or other systems. But to make life a bit easier and also slimmer, we have added Beats in various flavors so far. For example, for log files we have Filebeat, which is basically tail over the network and a bit on steroids. So we basically say: this is the log file that I'm interested in, take that, tail it, store it into Elasticsearch. To keep my demo simple, I will actually skip Logstash; Filebeat will tail my log file, put the data into Elasticsearch, and then we can visualize it in Kibana directly. So let's see what we actually have here.
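To make "parse" and "enrich" a bit more concrete before we look at the demo's configuration, here is a minimal sketch in Python. The log format is a made-up access-log line and the GeoIP lookup is a toy in-memory table, not a real GeoIP database:

```python
import re

# A hypothetical access-log line (203.0.113.7 is a documentation address).
line = '203.0.113.7 - - [11/Aug/2020:10:00:00 +0000] "GET /show.php?id=1 HTTP/1.1" 200'

# Parse: pull structured fields (ip, timestamp, method, URL) out of the raw line.
match = re.match(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<method>\S+) (?P<url>\S+)', line)
event = match.groupdict()

# Enrich: attach GeoIP information for the client address (toy lookup table).
geoip = {"203.0.113.7": {"country": "US"}}
event["geo"] = geoip.get(event["ip"], {})

print(event["url"], event["geo"])  # -> /show.php?id=1 {'country': 'US'}
```

In the real stack, Logstash or an ingest pipeline does both steps for you, which is exactly the point: you filter on `event["geo"]`-style fields instead of grepping raw lines.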
So maybe before we dive into the visualization, let's take a very quick look at what we have done here. This is the configuration of this entire demo, and this is all being set up automatically, so you can get all the configs afterwards. There are maybe two interesting things in this file, the Filebeat configuration, filebeat.yml. What is interesting at first: I'm getting some log files from the system, and there are so-called modules. These are for well-known things that you often have installed. So for example, I have nginx running here, as we have seen in sqlmap before, and for the server that I'm running, Filebeat will automatically know where that log file is located by default and what it looks like. So basically, by saying use this nginx module, it will pick up the right file, be able to parse it into its individual pieces, and then store it, and we're done. We don't need to care about any log file locations or anything anymore. That /var/log/app.log, which my application is writing, is not so standardized. So for this one I have this configuration here, where I basically tell Filebeat: this is the file that you need to collect. It is of the type log, and this is the path; you could also use a wildcard and point to a directory, for example. And then, what is also interesting, I will store this with an additional field called application, set to app. And this is what we will be able to filter on right away, because I'm collecting all these other logs too, so we might see a lot of logs coming together in Kibana right away. So let's take a quick look. Maybe you have used Discover before, which was kind of the old way to do that. We now have a dedicated Logs UI where you can filter down and live stream data. With the live streaming, this is a bit more like tail -f, where you just see all the log messages as they come in right away.
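Pieced together from the description, the relevant parts of the filebeat.yml shown on screen would look something like this. The exact paths, field layout, and output are my assumptions, not the demo's actual file:

```yaml
filebeat.modules:
  - module: nginx          # knows nginx's default log locations and format

filebeat.inputs:
  - type: log
    paths:
      - /var/log/app.log   # the application's own query log
    fields:
      application: app     # the field we filter on in Kibana later
    fields_under_root: true

processors:
  - add_cloud_metadata: ~  # AWS instance ID, AMI, region, ...
  - add_host_metadata: ~   # OS name and version, hostname, ...

output.elasticsearch:
  hosts: ["localhost:9200"]
```

The `fields` block is what makes the custom `application: app` filter possible; the processors correspond to the cloud and host enrichment shown later in the talk.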
So whatever happened here to MySQL, we might investigate that later on, but it will just live tail anything that is happening here. But what I'm interested in right now is the application field, as I mentioned before. And let me zoom in slightly. My application's name was app, so I just want to filter down only on the application that I'm running here; I don't want to see anything about Apache or nginx or anything else that I have running. Here we go. So this is the log file of my app that I have collected. You can see here, from the login page, this is where I was logging in successfully with philipp and secret. I can see that here I created, for example, that other Elastician that you saw. Then here you can see the SQL injection as I was running it, and then you can see here, these are the tests that sqlmap has been running, and you can see the OR TRUE variations and whatever else has happened here. You can also live stream these here, so you would just see any SQL queries as they come in. For example, this one here at the very end was the last query that I ran: when I added the INSERT statement to the id parameter and added that bad actor value to my data store. So this is what happened here in the very last statement. So it is all pretty easy to see what my application has been doing. As long as you write out the right log statements that make sense, you can then easily collect them and see what is happening here. And you can also correlate that, for example, with the nginx logs afterwards. I hope that makes sense so far. Oh, by the way, you can also search in this one here, with highlighting. Let's quickly do that: I want to search for INSERT, because I'm only interested in the INSERT statements right now. So we have one INSERT here. Highlighting, another point done, let's move on.
If you try to delete any record or drop the entire table, this will not work, by the way, because I am letting this run live, and it would be very annoying if suddenly all the data was gone and I couldn't demo anything anymore. So the user that is running the SQL queries doesn't have the permissions to delete or drop anything. Feel free to try it; you can see it in the logs, but your requests will fail because those commands are not allowed here. Now, we know that something bad is happening, but we also want to protect against it. And that is where ModSecurity comes into play. So what is ModSecurity, if you have never used it? It's an open source tool, a web application firewall. It's basically intercepting your traffic and either monitoring stuff or even blocking or fixing things as it goes along. And it is based on rules, where it, for example, figures out, okay, this is an SQL injection, so I will deny that request. It has two sets of rules. There is a free Core Rule Set that you can use, which I'm using in my demo here as well. For example, it can do a real-time blacklist lookup to deny certain things, it can try to figure out an HTTP denial of service attack, it can detect SQL injections, and other things. There is also a commercial version, which has a totally separate rule set with some additional or maybe more advanced features, though I'm not using that here; I'm sticking to the free Core Rule Set only. So to make this simple, I have the same vulnerable application running behind Apache as well, only that Apache is running on port 8080. The code is exactly the same, it's even referencing the same files and everything; it's just that Apache has ModSecurity added as well. So let's run the sqlmap command against Apache. I'm pointing it at port 8080 now. Let's run this. This won't be as successful as before. So you can see it is running.
It hasn't figured out the database yet, and it will not, because it cannot really see any of the typical MySQL things. So here, do I want to reduce the number of requests? No, I don't want to reduce the number of requests; it should run all the requests here. So please try everything you can. And it's actually telling me that 126 times it ran into a 403, a forbidden, and it says that potentially this is running behind a web application firewall. So with the --tamper option, we could run various ways to tamper with the requests so that maybe they're not detected by ModSecurity or a web application firewall in general. And sqlmap --list-tampers will show you all the different tamper scripts that are included to avoid detection. There are some which are specific to ModSecurity, for example, but they don't make any difference for our example here. So I will keep this simple and I will just add some tampering: I will use a random user agent, so if somebody tried to block on a specific user agent, we would avoid that, and then we add spaces to the commands to maybe avoid detection, which won't be successful here, but well, yeah. Now it did detect that this is protected by something. I still want to continue my investigation, but again, it won't be successful, and it won't circumvent ModSecurity in the current default configuration that I have. I don't want to reduce the number of requests again, I just want to run this plainly, okay. And we have the same result: 118 requests were forbidden, so we just get 403 results here. Well, that is what you get. Which leads us, by the way, to the two things we could actually look at here. So in the logs, let's trash this filter, and let me make this slightly smaller so I can see what I'm typing. We could, for example, filter on the Apache error module field and say I want any value.
And now you can see, these are all the Apache messages that we have collected. Here, now I'm in the live streaming mode, you can see these were rejected by ModSecurity already. We can, by the way, look at the details of one request here. You can see that Filebeat has collected it. I have enriched this, by the way, with the cloud information, so you can see that this is running in an AWS data center, you can see the AMI ID on which this is based, and the instance ID. So you could correlate this, for example, to a specific instance ID. I have also added the host information, which is like the operating system. So for example, if an attack is only successful against a specific version of Ubuntu, you could easily filter down on this as well. Here you could just say, give me everything that is Ubuntu 18.04, because I know only this one is vulnerable to some specific attack, and then you would see the logs only for this one. That is not the case here, but it's just an additional idea of what kind of information you might want to add. And then you can see this was an error, and you can see here that the error actually came from ModSecurity. And since I have live streaming on, let me run my tests once more, and we should see, let me close this one here, we should see stuff move on a bit from this timestamp here. I mean, it will take a little while: I run my attack, Apache is generating the log file, Filebeat is picking up the logs, and then it will be stored in Elasticsearch. And after that you can see it in Kibana, and you can see here, now it is appearing in Kibana, that this was the attack that has just happened. You can see the anomaly score is 65, and it was because of these individual pieces that ModSecurity figured out: we have an SQL injection, cross-site scripting, remote file inclusion, local file inclusion, remote code execution.
All of these factor into a total score, and then, based on the threshold, a request might be blocked or not. And this one here has been blocked because it definitely breached the rules here. The other thing that I actually wanted to show you is that in Kibana you can also have a dashboard where you see a bit of an overview of what is happening in your system. So for example, you could see here the general requests that have been running, and let me refresh this view because I think this data is slightly outdated now. This one here just shows you the number of requests. These were the requests that I have just been running with sqlmap, so that's why they are peaking here a bit. And the different shades of green are not ideal; let me change this one to more of a pinkish and, I don't know, this one I'll change to blue, so we can keep them apart a bit better. But you can see, for example, these here are the 403s, and we ran into a lot of 403s with ModSecurity now. Or you could see where the requests are coming from. So this here would be me. We surprisingly have some requests from the US as well. Nothing from Russia or China today, which is almost disappointing, but depending on the day, you might have more requests from there. And then you could, for example, see which URLs were hit the most, what response codes you got, what clients or browsers people used, and how many errors you had. And here you can see, these are, for example, the errors in the Apache log that have been generated by ModSecurity, and you can see all of those. By the way, you could filter just on the notices here, or just on the errors if you have too much noise, or just on one specific browser or user agent, just to drill down into that data. Since this is all based on a search engine, and that is kind of the value of having a search engine, you can easily drill down and filter into all of that data.
So this is what you can see in the dashboard here. But let's take another look at ModSecurity. In ModSecurity, you have the setting SecRuleEngine, and you have it either Off, DetectionOnly, or On. I have set it to On, and that's what blocked the requests. Especially when you want to roll this out to a production site, you probably want to start with DetectionOnly, to see what would happen if this were active. And then you might find interesting things. Like, in San Francisco there is Union Square, and I think ModSecurity had a bug at some point where, if it saw UNION anywhere in a POST request, for example, it assumed that this was an SQL UNION attack; even though it was a regular address that was just Union Square, it would block it. I think the rules have gotten slightly smarter now, so that this false positive is gone, but it's just one thing that you might run into, and you might create a bad user experience by being a bit too overambitious with your security settings. So that's why it makes sense to actually start with DetectionOnly and then review what is happening. You can also limit how big file uploads are, for example, or how big a POST request can be. These here would just limit how big requests are, so that nobody can overload your server with unnecessarily large requests, and obviously it would make sense to change those to some reasonable values for your setup. In terms of logging, what would be very helpful is to turn the logging format to JSON, because the non-JSON format is pretty terrible to parse and nobody wants to do that. By the way, how would you parse that? How do you normally parse anything like this? By writing a regular expression. Who enjoys writing regular expressions? I guess not so many.
And to those who say they do, I would always say that this is Stockholm syndrome: you got so used to writing regular expressions that you kind of accepted that this is the right way to do stuff. But I'm lazy and you should be lazy too. So if you can avoid it, don't write regular expressions. Maybe it's job security, so maybe that's why you would want to do it, but otherwise it's just a lot of work, it might go wrong, and there are too many edge cases. So if you can have something in a structured format, ideally JSON, since Elasticsearch also stores JSON, that would definitely be preferred. And that is what I have done here: I have taken that JSON format. Who remembers where, or as what, we have stored it? How would I see my logs for that? Let me switch back here and remove this. It was application, colon, modsecurity. And if I stop streaming and filter down on this one, it will show me just the logs from ModSecurity, which, even though it is JSON, still has a pretty terrible structure. So let's look at one of those here, at the details. By the way, the other option would be to see the surrounding messages: timestamp-wise, it could show you a couple of messages before and after, potentially from nginx or Apache or whatever other logs you collected around that time, which might be related, just to have this overview of this specific timeframe. That would have been the other option, but we don't need that here. So here, these are all the pieces of information we have collected. Application: modsecurity, that was the field on which I actually filtered down. And you can see this was the pattern which was used to figure out that this was this specific attack here. It was detected as a remote code execution, Windows command injection, which I'm not sure was really what happened, but it was detected as that.
And you can see, if you like writing regular expressions, if you're in that Stockholm syndrome, then you will feel right at home here. So enjoy the glory of the regular expression here. And then you can see all the details of this rule and how it is structured, which might not be the most readable format, but this is what is being used here. And you have various other fields where you can see what has been going on. Cool. And again, we added the cloud information and the host information, just in case that makes sense for you. If you don't use that, don't add it, because it will just add more weight to your logs in Elasticsearch. But it's easy to collect; just to give you an idea how I got those, I simply added add_cloud_metadata and add_host_metadata, and those two will add that to the logs. So here are the settings about Ubuntu, for example, and this one has the ones about AWS. And also, by the way, I just rename one field here, because otherwise I have a collision in how fields are named, and this is just an easy way to fix up some data when you collect it. And that's all there is to this. You can also write custom rules. I have added a custom rule, and this might be very useful if you have, I don't know, forum software and somebody is posting ads for Rolexes or Viagra or whatever other stuff you don't want to have there. I have created a different rule: on my create.php file, I'm adding a rule that will deny any request that contains a fake Shay. Shay is our CEO; let's assume there should be no other Shay in the company, otherwise somebody is trying to add a fake Shay. So when a POST happens, I check the body of that POST and say: if the string Shay or Banon is in the POST body, I want to reject that request. So let's head over to the browser to see how that is going. This one is not running against nginx, so let's head over to Apache.
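Reconstructed from the description, the custom rule, together with the engine and logging settings mentioned earlier, might look roughly like this in the Apache/ModSecurity configuration. The rule ID, message, and exact variables are my assumptions, not the demo's actual file:

```apache
# Rule engine: Off | DetectionOnly | On. Start with DetectionOnly in
# production and only switch to On once the false positives are reviewed.
SecRuleEngine On

# Cap request and upload sizes (example values, tune for your setup).
SecRequestBodyLimit 13107200
SecRequestBodyNoFilesLimit 131072

# JSON audit logs are much easier to ship and parse than the native format.
SecAuditLogFormat JSON

# Hypothetical reconstruction of the "fake Shay" rule: reject any POST to
# create.php whose body mentions shay or banon - there can be only one Shay.
<LocationMatch "/create.php">
    SecRule REQUEST_BODY "@rx (?i)(shay|banon)" \
        "id:10001,phase:2,deny,status:403,log,msg:'Fake Shay rejected'"
</LocationMatch>
```

Phase 2 means the rule runs after the request body has been read, which is what lets it inspect the POSTed form fields.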
So again, this is running exactly the same thing; it's using the same database, the same files. When I try to log in here, by the way, the SQL injection is not working anymore. So if I try philipp and I try ' OR TRUE, this will, for example, reject my request, because it detected an SQL injection, and it just kicks me out. So I would have to log in correctly; secret was the password. If I do that, I can now create a regular user. So let's say I want to create shay-other, and he is also based in San Francisco, and he has some salary and a password. And when I try to send in that other Shay, again I get a rejection, because the fake Shay rule kicked in here. If I stream the logs for this one, it should, as the very last entry, in a moment, or already now, show me that we just detected this fake Shay and rejected it. So this shows that my rule worked and we rejected the fake Shay. So even if you have some broken application where you cannot easily add a spam filter or fix something else, you could just write some custom rules in ModSecurity to reject certain things, or filter out certain things that you don't think should end up in your application. And then you have a more or less smiling Shay again, because there cannot be any fake Shays. And this was the log message that we have just seen, basically. ModSecurity is not 100% foolproof, though. For example, if you run this one here, with parentheses, OR (1=1), this will still allow you to bypass the login. So let me just quickly copy that and try it in the form. So here we have a forbidden. Let me try another random user that doesn't actually exist, so I get locked out first. So let's do that. Now I'm trying to log in again: foobar doesn't exist, but I add that OR query. And even though ModSecurity is at play here, I still bypass it.
So ModSecurity is not 100%; there are potentially ways to work around it. But it will filter out some things, and you might have to tweak your rules according to what might be valid input or not. Yes, everything is terrible, everything is on fire, and what the fuck were we even thinking? I guess that's a very good description of security. So to wrap this up: security incidents generally come in three levels, for your information, what the fuck, and oh my god. And today, I think we were only between for your information and what the fuck; it was definitely not the oh my god yet, though you frequently see those in the media anyway. Yeah, write better code than me and use the right libraries or frameworks, especially around injections. Nobody should have SQL injections anymore if they use the right tools or libraries, unless you just pass around plain strings like me, which is not a smart idea.

Is anybody using ModSecurity, for example? Yeah, quite a few people. For example, if you're using GitLab and deploy it in Kubernetes, ModSecurity has been enabled on the Ingress for almost a year now. So this is just one of many places where ModSecurity is being used as of today. And the Elastic Stack is also pretty widely used just for the security use case. Three companies which I think I can name publicly would be Slack (anything related to security around Slack), Mozilla, which has a project, and Cisco, which is also a pretty heavy user, just to get logs into the Elastic Stack around security. And that's not all: we recently added a SIEM and endpoint security, so those would play into that. And since I think I have a minute or so extra, let me give you a quick idea of what the SIEM might look like. So here, let me zoom out one level again. These are all the requests in the last hour, and you can see these are all the hosts; I have a single host here to keep it simple.
All my login requests into that one were successful, and I could then dive into all the hosts that I have identified. You can see it also gives you a good overview of what is running on that machine. And I might be interested in, well, not the login attempts, because they were all successful and that's not so interesting; let's have a quick look at the events instead. In the events, we can see everything that happened on that host. So for example, maybe somebody said the application is kind of fishy, and let's have a look at what is happening on that host. That is what we are trying to get here. These are all the events that we have collected over time: NetFlow, so any network connections that have been going on, user logins, processes that have been stopped or started, sessions, et cetera. And in the events further down here, you can see, for example, that this is already our application with Apache. So let's drag and drop this one into our timeline filter here, because I want to see anything that touches this port 8080. And then you can see, okay, this was Apache that was serving the request, you can see where it was coming from, how many packets, et cetera, and then dive further down into that information and figure out why maybe your Apache server is throwing so many errors (because ModSecurity is blocking lots of bad stuff, for example), or whatever else you might find interesting in here, and you can slice and dice your data any way you want.

Okay, enough of this, let's close this out. So ModSecurity and logging go very well hand in hand. If you just run ModSecurity without looking at what it actually blocks, rightfully or wrongfully, you're pretty much blind. So you want to combine this with some proper logging, as well as logging in general in your application, to figure out whether somebody has potentially breached something or not.
And especially if somebody breached it: when did that happen, and what were they actually able to do to your system? So you actually know afterwards, and don't just stir some murky water. If you want to try out all of this, the code is on my GitHub page in the ModSecurity log repository. If you want to try out the dashboards, I will also let this run; I hope nobody kills it. You will automatically be logged into Kibana with a read-only user; this is running a reverse proxy that automatically logs you in with that read-only user. So if you go to dashboard.xeraa.wtf, you will end up in Kibana, already logged in, and you can play around with the Logs UI or with the dashboards and just see what is going on there. And the application is running on xeraa.wtf, so any request that you generate, you should be able to find in Kibana as well. Just, if lots of users are doing that, it might overload the system, so no guarantees that it will stay up, or you might see a lot of noise because of everybody else doing it, but happy exploring around that. If you want to get the slides, that is the QR code, but I will also share the link live if I can. And with that, let's head over to the questions. Find me on the Discord; I'm happy to answer whatever you have, or ping me on Twitter, I'm also happy to answer there. Thanks a lot for joining.