All right, welcome everyone. Thanks for joining in. It's a real pleasure to have Harni here from ThoughtWorks. She's going to be sharing her experience with building security practices into CI/CD pipelines. As you know, especially since COVID, security has become extremely crucial, and companies are trying to see how they can shift left and build security practices in early to get that feedback. So it's awesome to have Harni with us to share her experience. Harni, over to you.

Thank you, Naresh. I'll just share my screen now. Hi, everyone. I will be talking about how you can build security into your continuous delivery pipelines. A little bit about me: I have been a quality analyst, and I've been with ThoughtWorks for more than 11 years now, the last three of which I was playing the security practice lead, and currently I am the capability lead there. I'm also a core contributor and product owner of an open-source security tool called Talisman, and I love running trainings on quality-related topics, security-related topics, or anything I have experience in. And I'm definitely an open-source evangelist, so if you want to talk about some of the open-source work that goes on, or things in the security space that can be brought in, I'm your person. I'll be happy to talk about it.

So that's about me, quickly. I have a lot of slides, so I'll just start with the relevance of the topic itself. Why do we need to talk about it? There's one thing that has been in the industry, which is thinking about quality later. Then we became agile, we decided that was very waterfall-ish, and we moved everything left. But when it came to security, it's still in the hands of a very skilled, niche team which sits out there, and as an industry we really haven't done much to figure out how we move it towards the left, right?
So just a quick question, and people can hit the thumbs-up button on their screens: how many projects actually have some form of, let's say, penetration testing, or production security controls, or some level of security controls on the networks? How many of you have that? Can you do a thumbs up on your screens? A few of you, all right. So while we have that, which is great — you still need it, right? In no way does my talk say these are not needed. But if that is all you have, the same cost-of-defect argument that I'm sure all of you are well aware of applies: the cost, time, and risk of finding a defect that late is much higher. It's still reactive, right? And the question is, how do we become more proactive? There are many things you can do to shift left. One is continuous testing. You could be building a security architecture. You could be putting in proactive controls. That gives you more continuous feedback, more early detection, right? So the point of this talk is that while there's value in what we have on the right, at the end of the cycle, we still need to shift left. And within that, there are again so many things that can be done. Security is a very vast topic, so for today we will be talking about what we can do around the continuous testing piece, and within that, what we do in our pipeline specifically. Now, when you talk of application security itself, there are again many types of security. But if I concentrate specifically on application security, there is a model that helps me think better: the four Cs of AppSec are code, container, cluster, and cloud. Given the time limit for this topic, I will be talking mostly about code security and a little bit of container security — what you can do in the pipeline. I'll not be touching upon cluster and cloud.
But even within that, there are so many tools, right? So many of them. You go out and check anyone's website, and everybody will tell you that they are the best, which is fair — of course, while they're releasing a product, they would be looking at it from their side. But as a practitioner, what does it mean? So I have some experience in using these products and more — this would be something like 100 of the products out there, if not fewer. I have some experience using them, and some experience consulting with companies on what to use and how to use them. So I will try to use the 40, 45 minutes I have with you to give you a brief on how to select what you need: what products do you need, what categories of tests do you need, right? So for building secure pipelines, I'm going to give you a practitioner's approach. I'm going to be telling you about my experiences, and it is going to be opinionated, right?

Cool. So, a simple pipeline — very simplified, of course. On your project, you would probably have much crazier pipelines than this, but let's just simplify. Let's say you have your build, you have your code-level tests, you are packaging, deploying to a lower environment — test environment, dev environment, whatever it is. You run some functional tests, and then you deploy to a higher environment, correct? So let's say this is your simple pipeline. Now, I have to inject security into it. But before injecting security, one of the things to understand is why you are injecting it. What are you trying to achieve? And based on that, there would be different categories, okay?
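Put together, that simple pipeline with security checks injected into it might look something like this — an illustrative, GitLab-CI-flavoured config sketch only; every stage, job, and script name here is invented for the example, not taken from any real project:

```yaml
# Illustrative sketch only: all job and script names are made up.
stages: [build, test, package, deploy-test, functional-tests, deploy-prod]

secret-scan:          # pattern matching works on raw code, so it runs early
  stage: build
  script: ["./run-secret-scan.sh"]

sast-and-deps:        # SAST and dependency check, parallel to unit tests
  stage: test
  script: ["./run-sast.sh", "./run-dependency-check.sh"]

container-scan:       # image hardening + component analysis, once packaged
  stage: package
  script: ["./scan-image.sh"]

dast-scan:            # passive/active DAST against the deployed test env
  stage: functional-tests
  script: ["./run-dast.sh"]
```

Where each of these jobs sits, and why, is exactly what the rest of the talk walks through category by category.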
So first — again, a thumbs up if you can, please — how many of you have pen testers or a security team which comes in after you've performed all of this, somewhere closer to a release, and performs some testing on your code? Thumbs up, please. Okay, a few there too, all right. So what is it that they do? They would usually have some dynamic analysis tools, and they would be testing at multiple layers, of course. As I said, I'll not be getting into cloud and cluster and all of that. But as far as your code goes, they have dynamic analysis tools, and they would be trying out different scenarios against your application. It's a black box for them, and they try out different scenarios at that point. Yes? So what you can actually have, to replace some of their work, is called a DAST — a dynamic application security testing tool — and we'll talk about this in a bit. The idea of putting these tools in is not that you want to get rid of the pen-testing or security testing team. It's like asking: do you not need exploratory testing if you've got automated tests? You're freeing up the regression-style work which they would otherwise have done, so that their niche skills are actually utilized for more exploratory testing, finding out what other issues could be there beyond the ones they have been regressively testing all the time.

So, talking about DAST tools — this is your first category of tools. How does a DAST tool work? It's black-box testing. Between your browser and your application there are communications, right? A DAST tool essentially uses the concept of an intercepting proxy. It sits between the two, so it is able to listen in, and that's why it is also able to change requests — what we call fuzzing. So you can fuzz the requests that go through.
So basically DAST deals with the request-response layer, okay? The qualities of this: it is black-box testing of functional, integrated systems. You can find common vulnerabilities at the application level using it, and it can be used for either manual or automated analysis. Even our security testing teams use these tools for manual analysis, because they have to understand what your application is about, and you can use them to automate as well — we'll get to what you can automate and what that means. Oh yeah, and it can of course detect runtime flaws. So what would you do? There are tools — again, I'm only putting two or three of a whole bunch up there. But how do you analyze which tool you need to take in? One of the things to remember is that DAST can perform both passive scans and active scans. What do I mean by that? Intercepting via the proxy doesn't mean you have to change a payload. You could just be listening in, reading through the traffic, and understanding that it's missing some headers, or some of the payloads look suspicious, or there are open API tokens going through. So you could have some checks or some tests on that, correct? That is a passive scan: you're not interfering with anything the application does. An active scan is where you actually inject malicious stuff and see how the application responds. So you'll be injecting payloads for, let's say, injections — there are concepts like SQL injection or XSS — and you'll try to inject those and see if the application is built to be resilient to all of that. So you can do both passive scanning and active scanning with the tools you pick up.
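To make the passive side concrete: a check like "are the usual security headers present on a response" is pure inspection of the traffic, with nothing modified. A minimal sketch of that idea — the header list and function name are my own for illustration, not from any particular DAST tool:

```python
# Minimal sketch of a DAST-style passive check: inspect a response's
# headers without touching any traffic. The header list is illustrative.
EXPECTED_HEADERS = [
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "Strict-Transport-Security",
]

def passive_header_check(response_headers):
    """Return the security headers missing from a response."""
    present = {name.lower() for name in response_headers}
    return [h for h in EXPECTED_HEADERS if h.lower() not in present]

findings = passive_header_check({"Content-Type": "text/html",
                                 "X-Content-Type-Options": "nosniff"})
print(findings)  # the two headers the response did not send
```

An active scan is the same interception point, but with the request payload rewritten before it is forwarded — which is why it needs far more care in shared environments.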
One of the common things you're going to hear across all of these security tools is that there can be false positives. Why is that? Because all of these tools are built by the community to help and support you folks: "these are some regression-style checks, some common tests; here are our brains as security researchers, put into the tool as tests — go ahead and run them." But in your project context, a finding may or may not be true anymore, right? So you will have to deal with false positives across the different security tools. One of the things you should always look for is how easy it is to suppress a finding — to teach the tool that something is a false positive for you — so that it doesn't keep interfering with your pipelines. This is very important, and it applies across all categories of tools. Configurability is another thing you should be looking at, of the tests, or within a test itself, to make it more relevant. If you have a NoSQL backend, why would you want to run SQL injections, right? That kind of configurability is very, very important, and you should be able to set thresholds. We talked about test-data modification, fuzzing, and crawling — crawling is basically taking an endpoint and walking across all the other endpoints it is able to reach. This is something which actually sets different tools apart from each other. And when you look at reporting: this is a black box, right? So usually managers or higher-level folks might want to take a final look at what you're getting at the end. So this is one place where it makes sense to have dashboards or drill-down mechanisms, so that different people can look at it.
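Whatever tool you pick, the suppression mechanism usually boils down to a reviewed list of finding identifiers that the pipeline filters out before deciding pass or fail. A toy sketch of that idea — the finding structure and IDs here are invented for the example, not any tool's real format:

```python
# Toy sketch of false-positive suppression: drop findings whose IDs the
# team has reviewed and marked as not applicable in their context.
SUPPRESSED = {"SQLI-001"}  # e.g. a SQL injection check on a NoSQL backend

def actionable(findings, suppressed=SUPPRESSED):
    """Keep only the findings that have not been suppressed."""
    return [f for f in findings if f["id"] not in suppressed]

report = [
    {"id": "SQLI-001", "severity": "high"},   # known false positive here
    {"id": "XSS-004", "severity": "medium"},  # still needs a look
]
print(len(actionable(report)))  # 1
```

Real tools keep this list in a checked-in suppression file, which is exactly why "how easy is it to suppress?" belongs on your evaluation criteria.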
Talking about the tools: Burp Suite, OWASP ZAP — the Zed Attack Proxy — and Acunetix, and there are many, many more. ZAP is completely free and open source for anyone to integrate, and they're doing some really cool stuff. Burp Suite is usually a favorite with penetration testers, but if commercials are one of the things you're looking at: the free version of Burp Suite helps you do manual scans, but if you're trying to automate, you have to go for the Pro version, right? So think about that when you're thinking about the commercials as well. Acunetix is also paid, and there are a couple of others as well. My personal favorite — as I said, this is an opinionated talk — is OWASP ZAP, because it does almost everything any other tool would do, and it's open source, so I don't have to fight for commercials. It's just that you need a bit of a learning curve to get to it. Burp Suite is amazing with its GUI and the learning curve is really easy, but ZAP actually needs a little bit of a learning curve.

So, from my pipeline, I'll just quickly show what a ZAP report looks like. I did a passive scan on one of my pipelines, and this is what the ZAP scanning report actually looks like. It gives you the summary, and within each item it tells you what happened, what it found. I didn't have any high issues on my repo. It tells you what it found and where you would find it — it goes down to exactly the URL and method — and it gives you a possible solution as well. So it tells you how you could fix this, but again, you have to take the call for your own project. So that's a quick look at what the report looks like. Now, the fun part: practitioner's tips — things where you actually get down to doing stuff, and this is what you're going to be facing.
So, integration with CI is first and foremost. When you're looking at any of these tools, see how easy it would be to set up — not necessarily on your machine, but on an agent. And specifically, when you're using dynamic agents, how are you going to deal with that? Think about integration with CI, because not every DAST tool gives you integrations. Analyze the manual tests first: your penetration testers are finding issues, finding things that can be put into DAST. Think about that, then add them to automation. And look at writing your own functional tests for better coverage — don't rely only on DAST tools to do the functional testing for you.

Cool. The next thing is secure code reviews. There could be security-level proactive controls that some of you folks are looking at, or there could be a security engineer who actually does that for you. Anyone who has that, thumbs up — okay. So when you have security reviews, what are they looking at? Let's look at that. First, they would be looking at static analysis: if I look at the code itself, what would I be testing for, what are the common vulnerabilities in the lines of code themselves? The second thing they look at is the data: are you leaking any sensitive data? And the third thing is: are you inheriting anything from libraries which are vulnerable in themselves? So those three categories are SAST — static application security testing, for code reviews; secret scanning, which is to understand whether you're leaking any secrets or sensitive data; and dependency checking, which asks whether the dependencies I have — the components I depend on, the libraries or packages I'm using — have any vulnerabilities in themselves, yeah?
And why do they sit where they do in the cycle, in the pipeline? Why do I suggest that? Well, secret scanning can run on raw code. It doesn't need compiled or built code; it can run on normal raw code and do some pattern matching there. Dependency checking, however, needs built code — so if you don't put it after the build, it will do its own build, then figure out what the packages are, and work on that. SAST can run on both: on built, compiled code, or on raw code — but that differs for every SAST tool you pick. So it could be doing binary analysis or source-code analysis; it differs based on that. And very honestly, every vendor — if I'm doing source-code analysis, I'll say binary is bad; if I do binary, I'll say source code is bad. There is a big argument in the industry around this. But there are many tools which do their analysis after the code is compiled or built, and for that reason I've put it here, where it can run parallel to your unit or integration tests — because it needs built code.

So let's talk about SAST a little more. The way SAST works is that, given your binary or source code, there is a model extractor: it works out what your model looks like, and there's some intermediate representation it builds — does it form tables, does it have some kind of tree structure, what does it mean? — and then there's an analysis engine which finally spits out results for you. So it's white-box testing, doing code vulnerability analysis. It looks at data and control flows, or it could look at code execution paths, or it could also do some kind of pattern matching. So there are some static analysis tools which also do very basic secret-scanning work for you.
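To make the "model extractor plus analysis engine" idea concrete, here is a toy static check built on Python's own `ast` module: the parse tree plays the role of the intermediate representation, and the visitor below is a one-rule analysis engine flagging calls to `eval`. This is a deliberately tiny rule set for illustration — nothing like the breadth of a real SAST product:

```python
import ast

# Toy SAST sketch: parse source into a tree (the "intermediate
# representation"), then walk it looking for one risky pattern.
def find_eval_calls(source):
    """Return the line numbers of direct eval(...) calls in the source."""
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            hits.append(node.lineno)
    return hits

sample = "x = eval(user_input)\ny = len(user_input)\n"
print(find_eval_calls(sample))  # [1]
```

Real engines layer data-flow and control-flow analysis on top of trees like this, which is how they can tell tainted input apart from safe constants.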
So they match against certain patterns which are already fed into the tool, and the rule set it comprises actually has common vulnerabilities fed in as tests, so it can run those against your application as well. Again, the tools I've put up are a mix of free and paid ones. But the thing to keep in mind is that unlike DAST tools, this is tech-stack dependent, because it runs against your code. So if you're using Java, for example, you can use SpotBugs. If you're using, let's say, Python, you can use Bandit — but Bandit won't work for a .NET application; Security Code Scan is specific to .NET, again. But there are also tools such as Checkmarx and Fortify which go across tech stacks — they have support across, and they're also paid. So the commercials of SAST tools go from zero, because they could be open source like SpotBugs, up to almost the cost of a three-bedroom apartment. It really varies, because this is a place where people and companies are ready to invest — they really want early feedback. One very important thing, just because of the placement of where it goes in the pipeline, is the time it takes to execute — be very clear about that. And when you talk of coverage of tests, most of these tools do actually publish what types of tests they cover. So even if you're going for a paid tool, demand to see that, okay? Demand to know what kinds of tests, or at least what categories of tests, you'd be looking at. Again, suppressions, because you are going to have false positives even in this. And you could be looking for platform solutions: for example, you might already have some static analysis in place — try and see if you can integrate this vulnerability-detection-specific static analysis with that. This is not the same static analysis as linting.
That is not the same thing — that looks at code quality; this finds specific vulnerabilities, yeah? And for the reporting, when you're analyzing tools, I would suggest giving more importance to debuggability, because debugging is important. A pretty dashboard is not important here; it is more important that it is developer-friendly, because you want to react to it quickly too. So, a report from SpotBugs — again, this is a free tool. I ran it against a Java application I have; it didn't have too much code, so you can see there are fewer findings, but all three things it found were high-priority warnings. And if you come down, it actually tells you there are possible SQL injections in the code I ran this test against. It tells you a little bit about that too: the code you've written looks something like this. And why does it know that? Because it has got those analyzers in there — it has broken down the model and it understands what the code looks closest to. So it understood that the way I've written my code is vulnerable, and it actually gives me a better suggestion: use prepared statements, or parameterized statements, to fix that. So it doesn't matter that it's not a pretty dashboard or anything like that. I need to know exactly which line it found these things at, and what happened there. So if you click on a finding, it actually tells you where it found the issue, what type of bug it is, and what I can do about it. That is what I need to know as a developer. So concentrate on that when you're looking at reports — don't get carried away by pretty reports that might come in there. So, practitioner's tips on SAST: commercials, as I said, go from zero to a three-bedroom apartment, sometimes even more. So for what you want to spend on, think about what it is that you're trying to gain out of it.
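The fix suggested in that SpotBugs report — prepared or parameterized statements — looks much the same in any language. A small Python `sqlite3` illustration of the vulnerable string-building pattern versus the parameterized one (table and input are made up for the example):

```python
import sqlite3

# In-memory database purely for the illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

name = "alice' OR '1'='1"  # attacker-controlled input

# Vulnerable pattern: concatenating input straight into the SQL string.
# The injected quote turns the WHERE clause into a tautology.
rows = conn.execute(
    "SELECT role FROM users WHERE name = '" + name + "'").fetchall()
print(len(rows))  # 1 -- the injection matched a row it should not have

# Prepared/parameterized statement: the driver treats the input as data,
# never as SQL, so the injection attempt matches nothing.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (name,)).fetchall()
print(len(rows))  # 0
```

A SAST analyzer flags the first query precisely because it can see untrusted data flowing into the SQL string at build time, with no request ever sent.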
So for example, many of the paid tools actually charge you because they support multiple tech stacks. But if you're a product company, you might have a similar tech stack across; you may not be polyglot, and that's fine. If you're a services company, maybe it is better to think about cross-tech-stack support. So think, and choose. The commercials also vary in structure: license costs go from per user, to per line scanned, to per organization — there are multiple types of licensing you can choose from. And there are open-source tools as well. The difference I have seen is that the paid ones I've worked with have had fewer false positives to deal with, so you have to intervene manually less often — there are those value-adds. But again, not all open-source tools are the same either. Understand that there are some which are only linters. Many of you might be using SonarQube as your static analysis tool. SonarQube has some rule sets, but very honestly, it does code-quality analysis; it does not focus so much on actually figuring out that something is a vulnerability. So be very, very vigilant about that. And SonarQube is still something you can put in — SpotBugs can be a plugin onto SonarQube, so you can run both of these together. So think about that; go beyond it, right? And IDE support: if you want to shift even further left than the pipeline, there are some tools, like Checkmarx, which actually give IDE support. So if you have the plugin, you can figure out whether there's a problem while you're writing your code itself.

Cool, so talking about dependency checking: the way it works is that there are different types of dependencies you interact with. You will have libraries, you'll have packages, and then the libraries have their transitive dependencies again. So it's almost impossible to do this manually, right?
I mean, there are so many dependencies. And static analysis, all the other analysis, is on custom code that you've written — this is not even in your hands; you didn't even write it. These are libraries whose vulnerabilities you're inheriting for free, right? So think about how you can automate that part as well. And it is possible, because there are things called CVE feeds — Common Vulnerabilities and Exposures. There are different data sources out there, and they all maintain their lists of known vulnerabilities in known libraries. So these are all known vulnerabilities. Think about it: with custom code, at least an attacker has to do some work to figure out what the problem is. But if you're inheriting a library's vulnerabilities, and hence become vulnerable yourself, it's kind of stupid not to do this — this is the low-hanging fruit. So definitely look at it, because if you're not looking at it, an attacker is; all this information — exactly what the problem is, what the vulnerabilities are — is available to them as well. So it's software composition analysis: it understands vulnerable dependencies, transitive dependencies, and more components. And there are many data sources, as I said — CVE is a data source, so you can check out CVE from MITRE, or NVD, or the OSS Index; there are many others as well, yeah? Cool. So, parameters to analyze for dependency checking. Again, there are many tools. Some of them are paid, like Snyk; Dependency-Check from OWASP is free. Over here, again, it is tech-stack dependent, because it is close to your code: to understand whether it should be looking at a lock file, or a Gemfile, or requirements.txt, or Maven or Gradle, it needs to understand what tech stack you're on. So it is important to choose this also based on the tech stack you're working with.
And again, time — all of these concerns are common because they're all shifted left. You don't want your developers to keep waiting for this to run, so it needs to be quick. One thing you can do at the organization level, if you have repositories that you actually get your libraries from, is make sure those run some of these tools, so they know whether they have upgraded to a version which is not vulnerable. A CVSS score threshold is important: this is something these tools actually maintain, to tell you the severity — how critical something can get. So if I look at a report — this is a Dependency-Check report — you'll see that it actually found a lot of issues for me, with different severities, and how many issues it found in each library, which you see here. If I click on one of these libraries, it tells me more details: what issues it found. For the CVE details, if you click through, you'll be taken to NVD's database for more information about that particular vulnerability it found in Apache Struts — this is the Struts 2 library, and it found some issues there. It tells you what could happen if this CVE was exploited, and somewhere here it actually tells you the score. With some of these scores, the higher it gets, the scarier it is — 10 is the highest you can get, and those are super critical. For those, just go ahead and upgrade; there is no reason you should not be doing that. But for some lower scores, you might choose to say, okay, a threshold of seven and above is when you should fail the build; anything below that, I don't care. So think about those things — when you're setting your thresholds, you will be able to do that.
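That threshold logic is easy to picture as a tiny build gate. A sketch of the idea — the finding structure and the low-severity ID below are invented; in practice tools expose this directly, e.g. Dependency-Check's `failBuildOnCVSS`-style configuration:

```python
# Sketch of a CVSS-threshold build gate: fail the build only when some
# finding's score meets or exceeds the configured threshold.
def build_passes(findings, threshold=7.0):
    """Return True if no finding's CVSS score reaches the threshold."""
    return all(f["cvss"] < threshold for f in findings)

findings = [
    {"id": "CVE-2017-5638", "cvss": 10.0},  # the Struts 2 RCE: upgrade, always
    {"id": "example-low",   "cvss": 4.3},   # below threshold: tolerated for now
]
print(build_passes(findings))       # False -- the 10.0 fails the gate
print(build_passes(findings[1:]))   # True
```

The threshold is a team decision: a stricter gate catches more, but also blocks more builds on findings you may consciously choose to live with.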
Things like npm and Bundler actually come with their own audit. So if you're using Node and npm modules, you can just run `npm audit` and it gives you the same kind of report — you don't necessarily have to run Dependency-Check or any other tool for that matter. Dependabot is also interesting: if you have GitHub repositories yourself, you would probably know Dependabot — a bot which, after scanning, will even raise a PR for you saying you should upgrade to this version, because that is a less vulnerable one. Cool, going ahead, quick tips from here. The whole concept depends on data sources, right? So be very careful about how frequently those data sources are getting updated. There are some tools which use the NVD or CVE data sources; there are some which build their own — Snyk actually builds on top of what's out there. So definitely automate this. Zero-day attacks are still a reality — there will be things that are not reported, or not in the data sources yet, so you still need to work on those separately. And think about canary deployments as your place to keep checking upgraded dependencies, to see whether your application is still stable with them or not.

Quickly, the next thing: secret scanning. Basically, there are two types of secret scanning. You could have a secret scanner as a hook, really early — so even before you're committing, it triggers, does some pattern matching, and tells you there are some strings which look like secrets, or some file names which look like secrets. So it throws that at you. You want to prevent accidental check-ins, and those are the proactive scans. What does a reactive scan do? By then, you have checked the secrets in.
The secret is there in your repository, but you can still run a secret scan which goes through the whole Git history — because many people just think that if they've reverted, it's all good, but the Git history is still there. It'll still have the secrets, so it's not that good. A reactive scanner actually goes through the Git history and tells you if there are certain things you should have taken care of. So, parameters to analyze for secret scanning: it's basically pattern matching, so the better the patterns, or the more coverage the patterns have, the better the results you're going to get. You again have to choose whether you're doing reactive or proactive scanning. If you're doing a hook: git-secrets, Talisman, and Hawkeye are some of the hooks out there. Gitrob, truffleHog, Hawkeye, and also Talisman are scanners you can put on the pipeline itself. False positives are very, very possible here, because it's just doing a blind scan with patterns — so be really careful of that, and see if there are ways to suppress findings, okay? Again, give less importance to UI; developer-friendliness is what you have to look at. Cool.

[Quick interruption: a 10-minute time check.] Great, thank you, thank you so much. All right, so this is a Talisman report that we ran. It actually found a number of possible vulnerabilities, and it's telling me there are all these places with patterns that need to be looked at — there might be a possible secret there. And it also gives me the commits: this is the commit ID where I found this issue, right?
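The pattern matching these scanners do is, at its core, regular expressions run over file contents (and, in the reactive case, over every blob in the Git history). A stripped-down sketch with a couple of illustrative patterns — real tools like Talisman ship far larger rule sets, plus entropy-based checks for random-looking strings:

```python
import re

# Stripped-down secret-scanner sketch with two illustrative patterns.
# Real scanners combine many such rules with entropy-based detection.
PATTERNS = {
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
}

def scan_text(text):
    """Return the names of the secret patterns found in the text."""
    return sorted(name for name, rx in PATTERNS.items() if rx.search(text))

# AWS's documented example access key ID -- safe to use in samples.
snippet = 'aws_key = "AKIAIOSFODNN7EXAMPLE"\n'
print(scan_text(snippet))  # ['aws_access_key_id']
```

Because it is a blind pattern match, anything shaped like a key trips it — which is exactly where the false positives, and the need for suppression and custom patterns, come from.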
So this is how your tools would work on the pipeline. But, practitioner's tip: keep both proactive and reactive scans — you need both. Proactive is of course better, because you find out on the developer machine itself and you don't even create a commit with the secret; removing commits, dealing with all of that, and assuming that everyone makes a single-file change per commit — none of that is really reality, right? So you don't want to get to the point where the pipeline has to catch these secrets; it's better to put the proactive ones in. But do the reactive scans nevertheless, because they are hooks at the end of the day — someone does a `--no-verify` and it bypasses the hooks. Then at least you'll know when secrets have entered the repository, and you can take a conscious call going forward from there. Revisit your custom patterns. This is something I really encourage, because there is only so much intelligence these pattern-matching tools will have. You know what your secrets are going to look like, so look for options to put custom patterns into these tools as well. And better still, invest in secret-management vaults, and have access controls around who has access to these secrets — that is the best prevention ever. So put investment into vaults. But again, you never know: someone was copying something, forgot to remove it, committed and pushed, and you have a secret in your repositories, right?

Cool. So the last part I'll be talking about: apart from the code part, there are also the containers you have, and the images in the containers you're using. So your pen-testing team would also be looking at what that image is and how secure it is, and they will be looking at scanning some configuration-level stuff at that point.
So where would you put that in the pipeline? You could put it here, but it's too late because the image is already deployed on an environment. Can you do it earlier? Can you do it before it's packaged? No, because you need the image to be packaged first. So a container scan will come between your package and your deployment steps. There are two types that I'm going to talk about. One is image hardening, which is closer to SAST or static analysis for your containers, and the other is component analysis, which is closer to dependency checking for your containers. So within container scanning for image hardening, what it basically does is check configurations. This is from Dockle's site that I took. There are things like the CIS Benchmarks; if you're working with any kind of infrastructure, whether containers or clusters or cloud or networks or anything, do look at the CIS Benchmarks, they're excellent: free, open guidelines which list the best practices. Some people put in the effort and automated those checks. So there are a couple of tools in this area, and I think we are still getting better at the range of tools we have here. It basically looks at auditing, system hardening and compliance related things. There are multiple things it can check: for example, are you using a trusted image for your containers? Do you have access control set up? Are you using the root user to set the containers up, or do you have a separate user for it? So you have tools like Dockle and Clair. Dockle is new but it's growing, and I think it's growing really well, and you get a really good report out of it as well. So I'll just quickly show you a report of Dockle.
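The kind of report shown in the talk would come from an invocation roughly like the pipeline-stage fragment below. This is a sketch, not the speaker's exact setup: the image name is a placeholder, it assumes the Dockle binary is available on the CI agent, and the flags should be checked against Dockle's own documentation:

```shell
# Pipeline-stage sketch: lint a freshly built image with Dockle.
# "myapp:1.2.3" is a placeholder image name.
# --exit-code 1 with --exit-level warn makes the step fail when
# findings at WARN level or worse exist (root user, latest tag, etc.).
dockle --exit-code 1 --exit-level warn myapp:1.2.3
```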
Yeah, so I didn't have much in my container, and I didn't have anything critical either, but it says, for example, that I created the container using root, so it tells me: don't use root. Or it says avoid using the latest tag, because latest doesn't pin to a specific version, and things like that. So it actually scans the kinds of configurations I have, does a compliance audit at that level, and tells me how I can harden my image better. The next thing is component analysis, which is closer to dependency checking for containers. Trivy is excellent if you ask me. Trivy is open source again; it recently became part of Aqua Security, if you've heard of them. Now, there are many things you could be benchmarking these tools against. What I suggest in general: whenever you do any spikes, do them on actual code, not on benchmarks which are already out there, because all these vendors will always give you benchmark reports; do it on your actual projects and you'll see the difference. The CVE data sources that the tools use can also differ. Some might use the ones which are freely available, some might use only one source, and some might actually aggregate multiple sources and put them together. The data source is important because that's what the tool learns from; it's about dependencies, remember that. And there are other tools too: Clair is also good, Anchore is good as well, but you'll have a bit of a learning curve to get to it. A quick Trivy report would look something like this, where you see all the CVEs it found: what the CVE ID is, what happened, what the problem was. It spits all of that out right there. So that's Trivy for you. And some tips around this: consider time, and choose what you need to scan. Of the multiple things these tools give you, choose the ones that actually make sense in your context.
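For the component-analysis side, a pipeline step around Trivy might look like the fragment below. Again a sketch: the image name is a placeholder and it assumes the Trivy binary is available on the agent:

```shell
# Pipeline-stage sketch: scan an image's OS packages and dependencies
# for known CVEs with Trivy. "myapp:1.2.3" is a placeholder.
# --exit-code 1 with --severity restricts failure to serious findings,
# which helps keep the noise (and false-positive frustration) down.
trivy image --exit-code 1 --severity HIGH,CRITICAL myapp:1.2.3
```

Filtering by severity is one practical way of "choosing what you need to scan" that the talk recommends: start with only the findings you are prepared to act on.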
Scanning base images and orchestration scripts, if you have them, lets you push the test even earlier in the pipeline. Base images don't need you to package anything; they are already packaged, and you're probably building on top of them. So if you want to scan your base images, you can do that much earlier. Only if you have a completely custom image built from scratch do you have to test after you've packaged. And orchestration script level configuration tests you can always put earlier as well. For me it's just the last minute; yeah, just one minute is all I need. So we talked about this, right? But it can feel like a dream now, when we hear that there are so many false positives and so many things that can go wrong with it. So here's my point, and this is from experience that I'm speaking: it's very easy to get frustrated seeing so many false positives and say, oh my God, there's so much manual intervention. It does take manual intervention; it's not easy to set up when you're starting out. And if you have red builds, they're going to stop the build from going forward, and your developers are going to scream, your managers are going to scream, you are going to scream. So what I suggest is: catch the error. In Jenkins you can use catchError. Let the scans fail, but don't take them out. Our first instinct is to take them out of the pipeline; don't. Let them stay there, and don't stop the build until you're confident of that particular step. If you're confident of container component analysis, then start failing the build on it, then start blocking the deployments going forward. But if you're not confident, let it fail on the pipeline, let it be red, let it be a radiator, but don't stop the build from going forward, right?
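The "let it fail but don't stop the build" pattern is catchError in Jenkins; the same idea can be sketched in plain shell like this. Here `scan_cmd` is a stand-in for a real scanner call (trivy, dockle, talisman), hard-coded to fail so the non-blocking path is visible:

```shell
#!/bin/sh
# Sketch: run a security scan as a non-blocking pipeline step.
# scan_cmd stands in for a real invocation such as trivy or dockle;
# it is hard-coded to fail here to illustrate the non-blocking path.
scan_cmd() { return 1; }

if scan_cmd; then
  scan_status=passed
else
  scan_status=failed
  echo "WARNING: security scan failed; build continues (non-blocking for now)"
fi
# Surface scan_status on a build radiator so the red step stays visible.
# Flip this step to blocking (exit non-zero on failure) only once the
# team trusts its results, as the talk recommends.
```

The design point is exactly the one in the talk: keep the red step visible as a radiator instead of deleting it, and tighten it into a gate only after the false positives are under control.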
The other way is that you can short circuit, or just break the pipe over here, but let the step stay there, and have the patience and discipline to come back to it and figure it out. And just a few last notes: there are more things you can put in your pipelines beyond the ones that I talked about. I would recommend you go to my colleague's repo as well; it's there in this slide deck, and he has put in more types of tests that you can be doing. And this is the pipeline that I used for my demo. There are many more strategies which go into pipelines. If you've heard the very famous word DevSecOps, which is about embedding security into DevOps, it goes much beyond pipelines; putting tools in your pipelines is only the beginning, so keep that in mind. There are strategies you have to think about around how you release, how you deploy, what else you should invest in, and the whole culture and practice around that. So that's me, thank you very much. All right, awesome. Thanks, Harni, that was great. I know I let you run a little bit longer. I want to again thank Harni for sharing her experience with us today. She will be in her VIP room for the next 60 minutes, hopefully more. So please take your questions there; you can interact face to face with her.