Thank you very much and good morning, everyone. The conference has just started, but we have already had two interesting speeches. One was the opening speech by Mr. Jan, and the second one was pretty good as well. What I really enjoyed about it is that it is closely aligned with what I'm going to talk about now. If you think back to the speech of Mr. Jan, he was really upset with the way things are done in government, and he wanted to change it instead of just complaining. And in the other talk, we heard about new paradigms and new ways that engineering has to be handled, like fault injection, or how Netflix does things to expect the unexpected and make sure that application development can succeed at a high frequency. So I'm on a mission too: I want to change the way security is practiced in development teams and in companies. This led me to Vantage Point, a company based here in mostly sunny Singapore, where we focus, amongst other things, on enabling development teams, specifically those that practice agile and DevOps methodologies, to deliver software rapidly and securely. As all of you know, more and more development teams in companies have switched to agile and DevOps, which is great. But at the same time, security seems to be lagging behind a bit. The traditional approach that worked really well for waterfall does not work as well in agile and DevOps. For this reason, I want to give this talk and share my experience of how security can actually be done in development teams that follow agile and DevOps methodologies. So let's start with a very brief history of application security. Typically a good way to start is by making clear what application security is not: we're not going to talk about things like firewalls or antivirus or phishing or anything of the sort.
But we're really going to talk about what happens at the application level: what needs to be included in the application to make it not just great but also robust and secure. I found this picture, which I find very relevant, because typically the firewall is in place, right? You only have port 443 open. But similar to the talk we heard earlier, you kind of have to do fault injection; you have to expect the unexpected, because you don't know what's coming through that port. So what is application security? It really is a quality aspect of your application. Like UX design or performance or usability, it's a contributing factor to business success. In other words, application security is really about ensuring that the application works the way it was intended to, and about giving your customers and the users of the application the confidence that whenever they enter their personal information or their credit card details, or share anything with it, nothing surprising will happen down the line. So that's great, right? That sounds like something desirable to have. But why is application security always associated with pain? Any ideas? Traditionally, security in enterprises was managed by network people. As I mentioned in the beginning, that means firewalls, patch management, giving your road warriors access to your VPN, two-factor authentication, these kinds of things. That's fine, and it's still important. But suddenly these same people were also made responsible for the security of applications. And by applying the same principles that were used for network security, an audit-like behavior took place. That doesn't really work well; it gives you a false sense of security.
You've all been in a situation where you want to ship software and then: oh, we have to get sign-off from the security department. We tick the checkboxes, and now that we've got them, we can ship. Is that the right way? Probably not. So what happened then? I can talk from experience, because I was in that situation. I was a pentester before, which really focuses on looking at applications, looking at the network of organizations, and trying to find loopholes to attack them. This worked really well in the beginning, when things were very network focused. I can remember that in the early days, companies gave us the whole network range. We would scan it, look for applications, try to penetrate them, and give them a report. And then a year later, they would engage us again and say: hey, we fixed everything now, please do a regression test. Then that changed, and the focus shifted to applications. We did web application penetration testing, for example. The focus was to look at every release of an application and figure out: hey, is it secure? Does it tick all the boxes? If so, great, you can ship it. If not, ah, there are some things you have to rework; you go back to development, you go back to acceptance testing, then you ship. That kind of works. But for those of you who have worked with penetration testers, you might have had an experience similar to this: they come in, break everything apart, drop the mic, and leave you to it. That probably sounds familiar, right? And I'm guilty of that as well in the past. But the point is, that's not the solution; there's a better way to address this. Still, from a security point of view, we felt like we were doing well. We were testing the applications before they went live. We could identify all the issues, and they got fixed before the release.
And then we'd come back a few months later and do it for the next release. So we felt like we were climbing this mountain; we could see the top. It felt like we had a good understanding of the security aspects, a good understanding of how to identify these issues and what needed to be done. But in the meantime, something happened outside that security camp. I want to use the analogy of the frog in the pot of boiling water; I'm not sure if you've heard about it. It's been used for climate change, global warming, et cetera. The idea is that if the temperature of the water in the pot increases only slowly, the frog might actually be boiled alive without even noticing what's happening. And that is kind of what happened with the frequency of releases. When we started, like 10 years ago, there was waterfall: pretty much one release a year, and that was it. Then it increased to a couple of times a year, or a couple of times a quarter. But now the goal is to move towards continuous delivery, or continuous release, where you want every feature you commit to be shipped as soon as possible. And that's great, right? That's what we want to achieve: as close to single-piece flow as possible. Ideally we get the customer feedback and within a few hours return that feature into production so customers can benefit from it. That's great, but the security folks standing on top of the mountain are seeing this storm front coming up. It's not looking too good. Why are there so many releases? It turns out that while we thought we were climbing the mountain and were pretty much at the top, software development was actually climbing a different mountain that's far bigger, and we're not even anywhere near solving this problem. So the only way this can actually change is by doing continuous application security, or what I would like to call DevSecOps.
What is DevSecOps? It's essentially combining agile plus DevOps plus security. The main difference is that security people are no longer focusing on showing where the code is bad and identifying issues in each release and application; their goal now is to be part of the team and work towards the same goal as the development team, which is to ship great software, but to make sure that it's done securely from the beginning. That's the big change in paradigm. So how do we achieve that? We achieve it by starting to integrate security into agile. What does that mean? Take Scrum, for example, though this really works for any other process or framework you use, no matter how you implement it. The main point is that you have to start by understanding the process. It's not: hey, this is how you do security, now wrap yourself around that. It's exactly the other way around: hey, how are you doing software? How do you develop applications? Cool. How can we now splice security activities in there to actually help you succeed, without being disruptive, without any roadblocks along the way? OK, before we start with this, we have to do some general hygiene. First of all, no more PDFs, Word files, or XLS. That doesn't work. Those of you who have ever received a 100-page PDF report with the instruction "now go fix it" know it doesn't work. What do you do with it? It's useless. Instead, why not file issues directly in JIRA, and communicate exactly the same way you would expect any other issue or defect to pop up in the system? Also make sure that security uses the same language as the dev team. Ideally, everybody on the development team speaks the same language, and the security person has a development background so they can communicate efficiently with the team members.
They should not be somebody who doesn't understand what's going on and just tries to push security through. It's also important that security becomes part of the existing environments and workflows. Tools are great, and there are actually a lot of technologies that can be used, but the most power comes when you utilize things that already exist. So instead of having your own platform for security requirements, they should pop up exactly in JIRA, where they need to be. Or for tools, it should be a plugin in the IDE that just works in the normal environment without requiring any change. Importantly, we should also aim to complete security work within the sprint cycle, as opposed to delivering a piece of functionality and only having somebody look at it two or three sprints later. If we manage to do it during development, you all know that it's much easier to change the behavior or modify or fix the code than doing it a couple of months down the road. It's also important to understand that not every application has the same security requirements. For example, if you look at the coffee shop on the corner down there, where you just want to know what's on the menu, or a weather application, there are probably not as many security requirements as for a high-volume, high-transaction finance app with a large user base. One size does not fit all. Okay, another important aspect of understanding how application security can be integrated into agile is to understand the relative cost to fix a defect based on the time of detection. This is not specific to security; it's actually true for every defect. When you prevent or find an issue at the requirements or design phase, you pay, say, $1 to fix it; it will cost you 30 times that if it somehow makes it to production.
So our goal is to move away from this penetration testing that always happens at the end, right before things are released, to really shift left and embed security from the very beginning. Okay, so this is what a secure Scrum looks like. It looks exactly identical to whatever process you had in place before, but you start splicing in these different activities where they make sense. You obviously don't push them all in right away; you start by doing a little bit here, a little bit there, get early feedback, see whether it works well or not, and then you iterate from there. Before we talk about some of these activities and how security embeds into them, I want to say a few words about this big bar down here, which is security training. Security training is a very important part, and the main reason is that there's a big gap in security talent out there. Not just for application security; for network security and everything else too. There's a study that says there are at least a million security jobs that cannot be filled right now. And that's a problem of scale, because even if you had enough people to do this, which we don't, you cannot just throw them at the problem. The best way to solve this is to empower the development team members, the developers, QA testers, whoever is involved, and figure out how security can be addressed by them. I'm not sure how many of you have had instructor-led training for security, or this very stale computer-based training where you just click through and at the end you have a quiz. That clearly doesn't work well. The good thing is that this belongs to the past. There are new ways to train, for example micro-training, where you get what you actually need at the right time.
Let's say you're implementing a requirement that has a security aspect: you get exactly what you need to understand why it makes sense and how to implement it, and it only takes five minutes while you're working on it. Another interesting approach is a more war-game-style approach where you have to solve security challenges in the code. No more slide decks or anything like that; you have challenges, you find issues, and it trains you to figure out what the issues are and how to fix them. All of this is very important because it has to happen in parallel. Of course security knowledge is going to be transferred as part of the sprints, but the more the skill gap is closed, the better the quality of the software will be. Okay, on that note: are all security requirements non-functional? That's something we hear quite often. You might think yes, and that's a valid assumption, but there are also functional security requirements. For example, anything related to authentication and access control, data integrity, or wrong-password lockouts is still a function of the application. And we all know password policies, backups, and the characteristics of audit logs are typical non-functional requirements, as is availability, et cetera. So what do security requirements have to do with agile development? It really all starts with the backlog. You all have your user stories, but if you think back to the chart showing the cost to fix, this is where you have the highest impact. So let's look at one user story, or a fragment of one. As an anonymous user, I want to see the entire book selection of my local library so that I know what's available. Do you think this is somehow relevant to security?
Yeah, it could be, but in this case it's an anonymous user who wants to see the entire public book selection, which is public knowledge anyway. Of course you can add security here, but you don't have to spend too much effort on it. But let's look at this one: as a logged-in user, I want to see my entire purchase history so that I know how much money I spend on books every month. This is a little bit different, because suddenly, even though it's a normal feature that you want to have, you want to make sure that the acceptance criteria are also security relevant. You want to make sure the user has to be logged in before they can see the purchase history. You want to make sure users can only see their own purchase history, because maybe they don't want others to see that they have rented Fifty Shades of Grey for the third time in three months. So it's still very important to just add these security considerations in the same format you already use. And the third one: as a customer, I want my privacy ensured when using public Wi-Fi. Very relevant, especially at a conference like this; if you browse now, who knows what's happening, right? This is what I call a one-off security user story, because the good thing about it is you implement it one time and then it's done. If you have to enable HTTPS across the application, for example, you do it once, you don't have to care about it much anymore, and the requirement is fulfilled. Whereas what's required in the purchase-history story might be relevant for other user stories as well. Keep in mind that I have a tag field here, which comes in handy later. Okay, so requirements relate heavily to how things are designed and how they're going to be implemented and solved.
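The security-relevant acceptance criteria for that purchase-history story can be captured directly in code. Here's a minimal Python sketch; `PurchaseHistoryService`, `AccessDenied`, and the method names are hypothetical, invented purely to illustrate the two criteria (must be logged in, can only see your own history):

```python
class AccessDenied(Exception):
    """Raised when a security acceptance criterion is violated."""


class PurchaseHistoryService:
    def __init__(self, purchases):
        # purchases: dict mapping user_id -> list of purchase records
        self._purchases = purchases

    def history_for(self, requesting_user_id, target_user_id, logged_in):
        # Acceptance criterion 1: the user must be logged in.
        if not logged_in:
            raise AccessDenied("login required")
        # Acceptance criterion 2: users may only see their OWN history.
        if requesting_user_id != target_user_id:
            raise AccessDenied("cannot view another user's history")
        return self._purchases.get(target_user_id, [])
```

Written this way, the criteria are not a checklist in a PDF; they are enforced paths in the application that the unit tests can exercise directly.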
Another activity that we often do early on inside agile development is to think about how a hacker might exploit the application or try to break its design. There are a couple of activities here, which I'm not going to go into today. But the point is that for most of the problems we typically face in applications, whether it's mobile, web, or microservices architectures, there are well-known design patterns and well-understood ways to implement things correctly. And it's actually quite easy to implement them once you know that's the right way to do it. It doesn't cost more, and it really helps reduce the ongoing amount of work as well. Okay, now that we have the security requirements and the design settled, we can actually start coding. Here you want to make sure there's a general understanding of how to develop in the given framework and language used for this specific project. And instead of having 100-page secure coding guidelines for Java in a Word document sitting somewhere that nobody reads, and nobody can tell me they read it, because they're lying if they say they do, it would be a good idea to start putting this into the Git repo. Maybe the snippets that come up quite often are managed in there. When somebody finds a new, cool way to solve a problem, they just commit the change and everybody sees it in their environment. They pull it whenever they start coding again and see new ways to address common issues. And now think back to the tags we had for the user stories we talked about earlier. Imagine you're starting to work on a really complex one, one that's quite important for security, and you see it has the security tag.
Wouldn't it be great if you could talk to somebody who helps you implement it as you go, especially for the stories that are harder than the others, to make sure they are implemented right from the beginning? That's where pairing can come into play. Of course, this shouldn't be done for each and every story, because it doesn't scale, but especially when people start working in these agile teams early on, it's a great way to do knowledge transfer. And you can also look at pull requests and do code review for those security-tagged stories, which lets you scale this better. It's also important because these kinds of things early on make a big difference: you get to know the people on the team, you get to work with them, you get to give positive and constructive feedback, and over time the quality of the code and the security understanding grow, so the manual effort becomes less and less necessary. Yeah, unit tests, my favorite. Here you see what happens when 99% of unit tests pass: looks pretty okay, but not quite. Unit tests are really important, as you all know, because code coverage is a key aspect of the quality of the application. Having 100% code coverage should only be the beginning, the base camp, because the more you instrument the code and the more patterns you cover, the more confidence you have down the road. And this is where the security-related acceptance criteria that we discussed in the stories before really make a big difference, and not just for manual tests but for automated ones too. The more automation we use, the better it's going to be down the road.
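To make that concrete, here is a small sketch of what an abuse-case unit test could look like in Python. The `make_library_search_query` helper is hypothetical, invented for this example; the point is that tests should cover not only the happy path but also assert that hostile input stays safely inside bound parameters:

```python
import unittest


def make_library_search_query(title_fragment):
    # Hypothetical helper: builds a parameterised SQL query instead
    # of concatenating user input into the SQL string itself.
    sql = "SELECT title FROM books WHERE title LIKE ?"
    params = ("%" + title_fragment + "%",)
    return sql, params


class AbuseCaseTests(unittest.TestCase):
    # Beyond the happy path: assert that SQL metacharacters in user
    # input never end up inside the SQL text.
    def test_sql_metacharacters_stay_in_parameters(self):
        hostile = "'; DROP TABLE books;--"
        sql, params = make_library_search_query(hostile)
        self.assertNotIn("DROP TABLE", sql)  # input not spliced into SQL
        self.assertIn(hostile, params[0])    # it only travels as a bound value


if __name__ == "__main__":
    unittest.main()
```

Tests like this are exactly the "expect the unexpected" patterns that raise confidence in the automated suite.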
But it's also important to understand that the QA testers can contribute dramatically at this stage as well, because what hackers do is essentially QA, but with the kind of fault injection we heard about in the earlier talk: trying different ways to fool around, like the Simian Army and Chaos Monkey, to trigger something that's not expected. You want to address that as early as possible. There are also some interesting open-source projects you can take a look at. One is called Gauntlt and one is called BDD-Security; they essentially allow you to use BDD-style, behavior-driven development patterns with Cucumber to write security stories that you can then integrate with your CI server. Okay, another aspect, and this is really more of a cultural one, is that during the sprint review you want to start talking about security. You want to at least put security into the mix. It doesn't have to be all about security, that's not what's encouraged, but security considerations should be part of demonstrating the new attributes or features of the app and how they impact users. The same is true for the retrospective. It's very important to share the lessons learned during the last sprint and make sure everybody can benefit from these realizations. All right, so: is security hard? General question. Who thinks it is hard? Quite a few, okay. Well, the truth is, it's not. The good news is that it doesn't cost you more time to implement a feature securely than to implement it insecurely. What's really hard is cryptography, and I don't encourage anyone to try to come up with a new way of doing that. There are well-proven libraries out there; let's use the ones that have been vetted, so we don't have to worry about those aspects ourselves.
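As a small illustration of leaning on vetted primitives instead of rolling your own, here is a Python sketch of password storage using only standard-library building blocks (PBKDF2 via `hashlib`, constant-time comparison via `secrets`). The function names and the iteration count are my own illustrative choices, not a recommendation from the talk:

```python
import hashlib
import os
import secrets


def hash_password(password, salt=None):
    # A fresh random salt per password defeats precomputed tables.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                 salt, 100_000)
    return salt, digest


def verify_password(password, salt, expected_digest):
    _, digest = hash_password(password, salt)
    # Constant-time comparison avoids leaking information via timing.
    return secrets.compare_digest(digest, expected_digest)
```

A security champion reviewing code like this mainly checks that nobody has quietly swapped the vetted primitives for a home-grown scheme.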
But the point is that security can get better and easier over time and become second nature. To illustrate that, I've created what I call a security debt burndown chart. What you see here looks like a normal burndown chart, but it spans the life cycle of an application, not a single sprint. You start with the remaining security work, let's say 100 percent, where you have the one-off security requirements we talked about, and of course the user stories you have to add security acceptance criteria to; there will always be some of that. But the point is that over time, over the sprints, you keep nibbling away at this remaining security work until you get to a point where the application is designed with security in mind, security is considered from the very beginning, and everybody knows quite well what's going on in terms of security. You also see that the robustness of the application and the security skills of the team members improve dramatically. And this here is the sweet spot. Because once you are there, you can essentially deliver software as quickly as you want. You don't have to do a lot of rework, you understand exactly how to deal with new features, and there's no more holding back. Even here, this should not hold anything back; it's just spliced naturally into the environment. We had a couple of case studies where we used this kind of approach, and over time we realized that before the first release there was still a window of maybe a month allocated for security fixes and other things, but there was nothing left to fix.
So you actually had more time to work on other bug fixes for quality or other things, and that's great, because security was put in there from the very beginning. It didn't interrupt, it didn't impede the process; it was just part of it, and once you came to the final stage, the pen test came out clean and there was no extra work to do. That's really what this is all about: getting to a level where security is embedded so deeply that you can push your product out as quickly as possible. Okay. Which brings us now to part number two, DevSecOps. Who here has ever heard about DevSecOps before? A few? DevOps, everybody? Exactly right. So what DevSecOps is really about is automating all the things. You want to make what you have done in the first part automated, in a way that lets you move at the speed of light. Whenever you push something to your repository, it should trigger all the tests that give you the confidence that what you have developed is okay and you can go ahead, with as little manual intervention as possible. There is a really good talk on the seven habits of rugged DevOps; the link is in the references. What we have essentially covered in the first part already is this aspect: increasing trust and transparency between Dev, Sec, and Ops, because we are all working towards the same goal. It's also about discarding detailed security roadmaps and focusing on iterative change, slowly bringing more security in. The rest of DevSecOps, or rugged DevOps, is this area, and I just want to briefly touch on it: essentially, how to use the continuous delivery pipeline to incrementally improve security. What you see here, unfortunately not very clearly, is an AppSec pipeline, which is this part, and you also see the rest of development and the application in production illustrated here.
So we've essentially already talked about this part, where you come up with security requirements, do threat modeling, and do design review. But every time you commit code, something happens: your CI server spins up, and static code analysis runs over your code, telling you whether this commit introduced any new issues or not. For those of you who use Ruby on Rails, for example, there's a tool called Brakeman that gives you very quick feedback on how good the code is, whether there are any SQL injections or other problems in there. Such tools exist for pretty much every other language out there as well. And it's really powerful, because you get immediate, automatic feedback. Now, this works at the source code level, but once the application is on a server where you can actually run dynamic tests, there are also a couple of tools that can be automated to give you feedback from that side. And finally, in your staging or production environment, you want to practice infrastructure as code, or security as code, where you can make sure that whatever configuration is required is in line with what you expect. You want to automate all of this away, to have a golden image or something similar that you're really confident about, so you don't have to interfere with it whatsoever. An important aspect of this, encouraged in agile and DevOps environments, for example at Amazon or Netflix or Intuit, is that they actually have active attackers hacking their production systems day in, day out. Why is this relevant? Everything we have discussed here runs from being very security focused in development to putting changes out to production as quickly as possible, but it's also very important to be able to react to change, and to react to any attacks going on.
So there's this concept of a red team and a blue team. The red team essentially tries to attack the application, find flaws in it, and effectively run a penetration test against it. The blue team is supposed to identify what's going on and liaises with the development team to ensure there's no issue, and if there is one, that it can be fixed quickly. Why this is so important is simply that each and every application out there already has a red team right now; they're just not getting paid. They're the hackers out there trying to get into your application. Being able to build good, secure software is one thing, but being able to respond to change, get this feedback loop, and react as quickly as possible is another very important characteristic. All right, I'm not sure how many of you have seen this picture. Without changing our behavior, the unicorn just races ahead churning out its mess, and security is trying to catch up and clean it up. That doesn't work well. So let's do this instead: let's work together, let's feed the pony, let's make it faster, let's make sure this is a common way towards the same business goal, and we can achieve it together. If you'd like to hear more about DevSecOps, there's a conference coming up in February, and there's a regular meetup as well that you can attend for free here in Singapore. So if this topic is interesting to you, please go there and learn more. Okay, to summarize: how do you get from zero to hero? How do you get from having penetration tests that block your release to continuous delivery, having software ready to ship at a moment's notice, while having confidence that what's being developed is actually secure and ready to go?
It all starts with self-organizing teams where security gets a seat at the table as well, in a very collaborative, supportive way, without following a big-bang approach, but really going in there and helping change the way security is considered. As part of that, transfer the knowledge, because the goal is to scale this, and if one person is responsible for security, it won't work. It's about sharing this and making sure that what we call security champions are found within the team, people who can take over and help support this whole process. And as soon as that is done, the goal is to step back and let the team be self-organizing and do what they are good at: creating amazing applications with great features as quickly as possible. So keep iterating, keep adding automation, making sure that over time these things get closer and closer, faster and faster. And with that, once this is done, you can turn out awesome software and secure applications at the speed of DevOps. Simple, right? All right, and with that, we're coming to the question section. Thank you very much. Are there any questions? Yes? Okay, I'm going to repeat the question for the microphone. The question is: is this a custom, unique process that only your organization is doing, or is it something that other people are doing along those lines as well? Absolutely, yeah.
So the thing is that a lot of companies are actually doing something like this right now, mainly companies like Amazon and Netflix and all the unicorns. But even banks are now moving towards this more agile approach, and they have to address security because they're highly regulated. So people are looking for this. There's no "this is it" solution yet. It's still very much a work in progress, like agile and DevOps themselves, right? But many people are working on it. There's already a shared body of knowledge that people draw from, but it's still iterating, still getting better. So to answer the question: yes, there's actually a lot happening there.

How do you run these tests while you are in the sprint? Say again? Dynamic testing, dynamic security testing while you are in the sprint. How do you do it during the sprint?

So there are a few ways. I mentioned the security tech stories, right? For dynamic testing you typically need a system where the latest version is already up. This works really well if it's containerized, if you have, let's say, Docker instances that you can spin up with the latest code of certain feature branches, where you can then look at that code and test it without having to wait for the next staging release. Besides this, there are a couple of tools you can use, like Burp Scanner, which is a web proxy with a scanner, or BDD-Security, or WebInspect, or a whole raft of other solutions out there. Or you just do it manually, which of course doesn't scale as nicely.

Okay, I just had a comment on dynamic testing: in my opinion it's not that effective in the sprint cycle unless you do it across three or four cycles at once, or maybe when the product is near the end. But you didn't mention much about static analysis tools. Do you use those? Yeah, definitely, and they actually play a very important part.
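To make the in-sprint dynamic testing idea concrete, here is a minimal, self-contained sketch, purely illustrative and not from the talk: in practice you would point a scanner such as Burp or OWASP ZAP at a container running the feature branch, but this toy version just starts a deliberately vulnerable demo server and probes it for reflected XSS the way such a scan would.

```python
# Toy "dynamic test" against a running instance (all names here are
# illustrative assumptions, not a real scanner or a real application).
import threading
import urllib.parse
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    # Deliberately reflects the "q" parameter unescaped -- the flaw we detect.
    def do_GET(self):
        query = urllib.parse.urlparse(self.path).query
        q = urllib.parse.parse_qs(query).get("q", [""])[0]
        body = f"<html><body>You searched for: {q}</body></html>".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

def reflects_payload(base_url, payload):
    """Probe the target with a payload and check whether it comes back unescaped."""
    url = f"{base_url}/?q={urllib.parse.quote(payload)}"
    with urllib.request.urlopen(url) as resp:
        return payload in resp.read().decode()

# Spin up the demo app on an ephemeral port, like a per-branch test instance.
server = ThreadingHTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_address[1]}"

finding = reflects_payload(base, "<script>alert(1)</script>")
print("reflected XSS detected:", finding)
server.shutdown()
```

The point of the sketch is the workflow, not the check itself: because the instance is spun up from the branch under development, the probe runs inside the sprint rather than against the next staging release.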
This becomes even more important when you move to the DevOps way of continuous integration, because you can run these tools whenever code is committed. That's really important, because there aren't enough people to actually look at every single line of code, and it would be a waste of resources too. But the key is to understand what the tools are good for, and to customize them as well, because out of the box these tools are not very effective. Especially with more exotic frameworks, they're not good at giving you the right results, and then you get this overload of false positives; those of you who have run these tools have seen them. So it's very important that this is managed well. But by all means, like the graph you saw where the robustness of the application and the security skills get better and better, these tools get better as well. They get customized with every sprint: whenever a new issue is found, the tooling should be tuned to make sure that issue is identified from then on, and any false positives are suppressed, so you really only get the things that are useful.

That's a good point about false positives, though: you need someone reviewing them, which goes back to the very early stage of security education. How do you do this?

Exactly, and that's a very good point, because typically in the beginning, we as security champions or advocates go in and help set up the process, because for us it's second nature, we've been doing this for a long time, right? But ultimately the goal is to have a security champion who understands these security vulnerabilities, who can take the scan report and say, I know this one is not an issue, this one is, and actually do it themselves.
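One hedged sketch of the per-sprint tuning described above: raw static-analysis findings get filtered against a team-maintained suppression list of confirmed false positives, so triage decisions from earlier sprints stop resurfacing. The rule ids, file names, and `Finding` shape are all made up for illustration; real scanners have their own suppression formats.

```python
# Filter raw static-analysis findings against a suppression list of
# confirmed false positives (all identifiers here are illustrative).
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    rule_id: str
    file: str
    line: int

# Confirmed false positives from earlier sprints (rule id + file).
SUPPRESSIONS = {
    ("SQL_INJECTION", "reports/export.py"),    # query is parameterised upstream
    ("HARDCODED_SECRET", "tests/fixtures.py"), # test data, not a real secret
}

def triage(findings):
    """Keep only findings that are not on the suppression list."""
    return [f for f in findings if (f.rule_id, f.file) not in SUPPRESSIONS]

raw = [
    Finding("SQL_INJECTION", "reports/export.py", 88),
    Finding("SQL_INJECTION", "orders/search.py", 41),
    Finding("HARDCODED_SECRET", "tests/fixtures.py", 7),
]
actionable = triage(raw)
print([f.file for f in actionable])  # only orders/search.py survives triage
```

Checking the suppression list into the repository alongside the code is what makes the scan output shrink sprint over sprint, which is exactly the "tools get better with every sprint" effect the answer describes.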
But the goal is to reduce the effort at the beginning, when there are a lot of false positives and a lot of confusion, and to start the learning curve a little lower once this initial work has been done; then the training is much more effective. Okay, thanks. Thank you.

I've got a question about your security burn-down. You were burning down the remaining security work, but you were also burning up the percentage of app robustness and security skills, which looks nice on the chart, but I'm curious how you actually measure that burn-up of app robustness.

That's a very good question, thank you. So, assume you have a certain agreed-on set of security requirements per application, the one-off requirements like centralized authorization, good authentication, a clear strategy for error handling, secret injection, et cetera. Once you have these implemented in a good design, you know that as these things burn down, the app robustness actually goes up: once you've implemented the security user stories as well, you know the robustness of the application goes up. It's not entirely correlated, and to be honest, I made this chart because I thought it looks good and gets the message across, but it doesn't actually map back to a clear science yet of how this can be measured.

Yes, a few questions here. Hi, you mentioned we need to have some secure coding guidelines checked into our repository. Can you recommend where we can download these guidelines? Well, that's actually a good question as well: how do you get started with that, right?
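One naive way to put a number on the burn-up the answer gestures at is to treat "app robustness" as the percentage of the agreed one-off security requirements implemented so far. The requirement names below come from the talk; the metric itself is an assumption of this sketch, not the speaker's formula (he explicitly says there's no clear science yet).

```python
# Assumed metric: robustness = percent of agreed security requirements done.
AGREED_REQUIREMENTS = {
    "centralized authorization",
    "good authentication",
    "error-handling strategy",
    "secret injection",
}

def robustness_percent(implemented):
    """Fraction of the agreed security requirements in place, as a percent."""
    done = AGREED_REQUIREMENTS & set(implemented)
    return 100 * len(done) // len(AGREED_REQUIREMENTS)

# After a couple of sprints, say two of the four requirements are in place:
sprint_status = ["centralized authorization", "secret injection"]
print(robustness_percent(sprint_status))  # 50
```

This captures why the burn-down and burn-up mirror each other on the chart: every one-off requirement that burns down out of the backlog increments the same counter the robustness percentage is computed from.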
Commonly, I'm not aware of any repo as of now where this is freely available and you can just pull it, but people are working on it. And in your organization you might already have coding guidelines, so you can start by working through what you feel is a good, valuable one and putting it in. But if I find a resource that already has this and you can just pull it, I'll let you know.

Hi, one question here. I'm very open to testing, and I love unit testing and integration testing, but how do you do it for a legacy application? How do you drive this adoption in an existing organization that has been running the application for years and years? That's what I find challenging right now in my organization. How do you deal with it? How do you introduce unit testing or integration testing? How do you start, where do you start?

That's actually a tough one. The problem is that you will never just move completely to agile or DevOps; there will always be these legacy applications. And getting them up to speed and getting code coverage in there is really difficult, because they have essentially acquired all this technical debt over years and years of development, and just addressing it is probably not even a viable business option: it will cost a lot of money, it will take forever, and by the time you're done, the application is probably redundant. So my advice is to not spend too much time on these existing legacy applications; of course, find ways to be more time-efficient in how they're tested, but don't spend too much time trying to bolt security on after the fact. It never works, it will always be a hack. Focus on the horizon, on the applications that you actually want to make right from the beginning. Thank you.

Hi, Stefan. Thank you for the sharing.
Just for the discussion: when I look at DevOps with security, in practice there are some challenges teams may face, because security itself is a very specific area, and we all know that security specialists command extremely high pay. I don't think every team can afford a security expert inside. The second thing: you talked about the user stories. There are two parts. One is what we can derive from user requirements; however, I doubt our users will really know how, because they may not understand security. They cannot give you a user story saying there must be no SQL injection or no cross-site request forgery. That must come from a very specific expert. So maybe you can share some of your thoughts: how can this be overcome in a real environment? How do you get security expertise into the development team? Currently, a cross-functional, self-managing team is already very challenged to get a UX expert, developers, and a database expert inside. So how do we put security into this landscape?

Very good question, and actually something to think about a lot. If you look at how things developed over the last decades, quality was where security is now. People developed applications and nobody cared about quality; you just built it and put it out there, right? But quality was first to the table, and QA people started getting closer and closer, and they were among the first to be part of the squads you have as part of Scrum. They're on the team, working next to everybody. Performance is coming a little later now; performance people are also getting more and more into the picture, maybe not in the team yet, but at least you have resources for your product. And security is at the very other end of the spectrum, still trying to edge in.
The point is, it still requires knowledge, still requires understanding what can go wrong, et cetera. And that's where security champions come into play. These are people who hopefully already have some good development experience, but care about security and make it their responsibility, with support from their team and the organization, to take on research, go to trainings, and essentially become a security expert over time. It won't be perfect from the beginning, but it has to happen this way, because if I want to become a performance guru, I cannot just declare myself one, right? It's something you learn over many years of experience and being on projects. So I think it's good to have this kind of mentoring arrangement, to get somebody to help you out in the beginning; otherwise it's quite tough, unfortunately, at this stage. But there are actually a lot of free meetups in Singapore where you can also learn about security, and as part of the DevSecOps meetup, we're going to have our first workshop, also free, on how to integrate these kinds of tools, et cetera. So it's really our goal to bridge this gap, but it's going to be a long journey, that's for sure. Thank you very much.

All right, in the interest of time, we can't take any more questions. Let us thank Stefan again for his great, insightful talk. Thank you.