Hello everyone. Please take a seat, grab a coffee, close the door. Do you hear me? Oh, no, I'm sorry, it's a pre-recorded session, so I can't answer at the moment. As you are listening, just picture me on holiday in the south of France in my swimming costume. So drop me a question in the chat room; I'd be pleased to answer you there.

Hello everybody, I'm Nicolas. Just a few words to introduce myself. I'm an independent penetration tester and security auditor. I've worked for a decade in the cybersecurity industry. I've now built my own company and I work in the internal red team of a financial institution in France. As you can hear, I'm French, so sorry for my French accent.

We'll talk about automating security operations, SecOps. What does SecOps mean to you and to me today? Everyone has their own definition. For me, security operations can relate to many business activities. For now, please consider at least these activities: penetration testing, security control assessment, vulnerability management, CTI, DFIR operations, security compliance, and everything around code review and web application testing. We'll see why and how to add automation to these activities.

We'll talk about PatrOwl. PatrOwl is an open source framework for automating and orchestrating security operations tasks. It provides a solution to get a continuous, full-stack overview of your cyber exposure. The solution lets you define your assets, your scan policies, and the scans you want to perform. The scans produce findings, and all these findings are collected, analyzed, and aggregated within a single database. We developed several engines that connect to existing security tools to assess risks across various security domains. The idea is to get a risk overview from the IP level down to the data level. Scans can be started one-shot, scheduled, or run on a regular basis.
And the final goal is to get continuous monitoring of our assets and our security posture across all stacks. A finding can be a confirmed vulnerability, a suspicious change in our systems, or a suspicious activity over the Internet. These findings are then contextualized and tracked across scans. And that's all for the tool itself: this is not a tool presentation. I just want to tell you about security operations and about automating scans.

As I told you before, I have worked in a CSIRT for five years. And from my window, there are two major factors that drive the current evolution of the IT landscape: acceleration and diversification. This applies to the assets, to the threats, and, consequently, to the security incidents involved.

We see an acceleration of digital programs and a diversification of assets. Thanks to digital transformation programs, we see an explosion of IT projects. Information systems are more and more open to the world, and therefore more and more open to hostilities. We have to deal with new technologies every day: new products, new frameworks, new libraries, and all these technologies are updated every day. We also see changes in the software delivery processes. Remember a few years ago, when it was about four or five releases to production per year. Today, with the rise of DevOps practices, we can see multiple go-lives in a single day. Each go-live means a new application and, with it, new vulnerabilities or newly exposed services.

The threats are growing too. The number of CVEs grows year after year. It's a metric; I don't know if the CVE count is a good metric, but it is representative. And the attackers do a great job: they are more and more organized and efficient. From the defender's point of view, we have to cover a quickly changing IT landscape. And at the end of the day, it's increasingly hard to get a realistic, comprehensive, and sufficiently up-to-date vision of our cyber risk exposure.
We also have to face another problem: the talent shortage. We don't have enough people to do the job at the moment. Lots of tasks are repetitive, and this leads people to lose their motivation and leave the team. We definitely believe it's time to adapt our cyber defense paradigm, to adapt the way we do our job and monitor our cybersecurity posture.

To face these challenges, in the team, we try to manage security incidents with two goals. The first one is for the red team: identifying vulnerabilities on our assets before attackers do. And for the blue team, it's identifying indicators of compromise, which could be precursors of a potential security incident.

To do this, we have to stay up to date on many, many things. The first is the continuous transformation of our assets, which in big companies can more or less turn into shadow IT. We have to stay up to date with the infosec knowledge base: new research publications, talks about new vulnerabilities or new ways to detect or exploit them, and all the security news. The spectrum of threat scenarios also changes every day. We have to manage many feeds of information daily.

Finally, we found that scanning our own assets is not sufficient anymore. We have to monitor external resources to detect leaks and attack signals, and to understand how they could practically affect our security posture. For day-to-day work, it can be very hard to manage all this information. In the cybersecurity industry, it's a race against the clock.

Another aspect we have to tackle is the window-of-exposure problem. It's all about our reactivity. Today, we know that attackers will attack us, not because we are a bank, or a gas company, or something fancy. No, it's just because we are on the Internet, and new vulnerabilities are found everywhere.
This simply increases the likelihood of attack scenarios. The window of exposure is a real problem and must be handled as a priority. We have to detect and fix vulnerabilities and suspicious activities as soon as possible.

Facing these challenges, we think about automation and orchestration. Just a quick reminder of my definitions: automation is setting up a single task to run on its own; orchestration is the coordination and management of many automated tasks.

Before we go further, let me share an experience. A few months ago, I set up a Kubernetes cluster with the default configuration, exposed to the Internet, unfiltered. Maybe you can guess what happened next. Only 24 hours later, I was hacked. A crypto-miner was deployed on my cluster, and my server started to mine cryptocurrencies. Then came time for my quick forensics. I definitely think I was not targeted because I am Nicolas. More likely, a scanner identified that an insecure service was exposed on a public IP, and the attacker automatically deployed his payload on my server. I don't blame him; from that point of view, he's doing a great job. The point to remember today is not that I'm a bad DevOps. We have to accept that attackers do automation, and do it better than us, in their field.

So why automate SecOps? The first reason, from a defensive point of view, is to do more checks: to cover a larger and more diversified scope, a bigger perimeter of assets, with more controls on each stack. The second reason is to do it more often, even continuously. That is very useful to reduce the window of exposure, to reduce the delay between the appearance and the discovery and fixing of a security issue. The third reason is efficiency. As I said before, we face the talent shortage problem. The idea is to reduce the time allocated to low-value tasks and to focus on more complex security cases.
To do this, we have to automate the simple tasks. It's also a way to reduce and manage costs, and to start following KPIs. It can also be very useful for your compliance and benchmark activities: define and execute the same control on a subset of assets, do it continuously, and see the trends and how far you are compliant with your security standards.

At this point in the presentation, you should all be convinced about automating. But there are several downsides we have to discuss now. Of course there are limits. Automation does not cover all of the risks by itself. If you automate your controls, you will have an increasing number of findings to manage, and an increasing number of false positives. We also found that automation is useless and inefficient for finding functional vulnerabilities, such as business-logic flaws. And we still have to qualify and contextualize every finding.

About the TCO: yes, we don't automate things by magic. We use tools that orchestrate other tools, so we also have to manage and operate those tools. At the end of the day, a tool is just a tool, and it is useless without a cyber-defense strategy. So if you don't have a strategy, don't try to automate things.

That said, we decided to build PatrOwl for automating and orchestrating SecOps, because we wanted to improve our manageability, become more efficient, and adapt our work. The core concept is to move efficiently from a reactive to a more or less predictive security posture, with the benefit of the power of automation. We also decided not to develop our own scanners, but to reuse, in priority, the best existing tools. Great tools exist, but they don't address all the stacks at the same level. Nessus or Qualys, for example, are very efficient at scanning for vulnerabilities and for misconfigurations of your infrastructure and cloud services. But their web application scanners, their container security assessments, and their anti-malware modules are not sufficient.
We found that no single tool can cover all the security controls we have to assess. We found that we have to support scan policies that are realistic from the attacker's point of view. And the idea was to take the best parts of several security tools, making it easier to define a scan policy and to plan it. That's PatrOwl. Let me talk about it a little.

PatrOwl is composed of two independent types of applications. The first one, on the left, is the manager. It's the front-end application where you have your dashboards, manage your assets, define your scans, review your findings, and manage the engines, which are the micro-applications that perform the scans. This application is open source and developed in Python. All features are reachable through the web UI or the REST APIs.

The PatrOwl engines are the micro-applications that perform the scans, then parse, analyze, and format the findings into a unique, pivotable format. They can be deployed on several separate networks, so you can segment your scans that way. For example, you can deploy probes on the Internet, probes on your internal network, and probes in your administration zones or DMZ, your restricted networks. They could even be deployed on your endpoints.

These PatrOwl engines scan the assets. For example, we developed an engine for Nmap. We don't redevelop Nmap; we made a connector to Nmap. This way, we can perform security scans on the same assets using Nessus, Nmap, OpenVAS, Qualys, and other security tools, all from the same cockpit. All the findings share the same format, so we can compare and track the findings on these assets issued by the several engines we use.

A bit deeper: PatrOwl Manager, as I said, is the front-end application. Here you define your assets and groups of assets. You can also define the scan policies, schedule the scans, and manage the scan results. The PatrOwl engines are the second component.
The PatrOwl engines are the connectors to the scanners that scan the assets, on the Internet or on your internal network. An engine can also be linked to an external scanning service, or to your CTI repository. You can also create tickets or raise alerts to your DFIR system, and you can inject data into your SIEM or your ELK if you want to analyze it or raise alerts in a different way.

As of today, we have developed a large range of engines in various domains. For each engine, we create a Docker image, including the tool, the dependencies needed, and the REST API to deal with them. So you don't have to install tools and dependencies or manage system requirements; it's as simple as a docker pull. That's true except for a few engines, like Nessus, Qualys, and OpenVAS: their Docker images do not embed the scanner itself but link to your own instance.

On this slide, you see many engines across many domains. The idea is that PatrOwl is a framework: you build your tooling around the controls you have to perform. I don't see many companies using PatrOwl with all of these engines at once; usage is more or less split between vulnerability assessment, analytics, and CI/CD, meaning static or dynamic code analysis. We also have many ideas for new native engines, regarding vulnerability management, paste sites, CTI, web application scanners, containers, analytics, and so on. The engines are also open source, and we welcome any contribution to create engines or share ideas for new ones.

Maybe we can start talking about use cases. The first one: I come from the red team, from penetration testing. The assessments always start the same way. The first steps of a test are the recon activities: we search for subdomains, we resolve IPs, we discover open ports, we fingerprint the services, we search for vulnerabilities, and so on.
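As a sketch of how such a manager/engines architecture could be driven programmatically, here is a minimal example of building a scan definition to submit over a REST API. The endpoint path, payload fields, and helper name are illustrative assumptions for this talk, not PatrOwl's actual API schema.

```python
# Sketch: defining a scan for a PatrOwl-style manager over REST.
# NOTE: the payload fields and endpoint below are illustrative
# assumptions, not PatrOwl's documented API.

def build_scan_definition(title, engine, assets, schedule=None):
    """Build a scan-definition payload for the manager's REST API."""
    return {
        "title": title,
        "engine": engine,          # a connector, e.g. the Nmap engine
        "assets": assets,          # asset identifiers (IPs, domains)
        # one-shot by default; a cron-like string would schedule it
        "schedule": schedule or "once",
    }

scan = build_scan_definition(
    title="External perimeter - TCP discovery",
    engine="nmap",
    assets=["198.51.100.10", "example.org"],
)

# The payload would then be POSTed to the manager, for example:
#   requests.post("https://patrowl.local/scans/defs", json=scan,
#                 headers={"Authorization": "Token <api-key>"})
```

The same payload shape would work for any engine connector, which is the point of normalizing scans behind one front end.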
For this, we use several tools with more or less the same settings on our assets. With PatrOwl, we run these security assessments as quickly as possible, and continuously.

The second use case is examining source code and running web applications for security defects. This is meant to be plugged into the CI/CD pipeline. On each commit we detect on the code repository, we can pull the project and start a static code analysis, using OWASP Dependency-Check, or Retire.js for the JS dependencies, as the code is built, and so on. Once the web application is deployed on the staging environment or in production, we can also orchestrate autonomous scans using Arachni, ZAP, and Nikto. PatrOwl is available through the REST API, and we also developed PatrowlHears4py(-style) Python client libraries, so it's very easy to integrate with other security tools.

The third use case is about phishing preparation scenarios. We use PatrOwl to search for early signs of malicious domains and websites being prepared. The idea is to search for suspicious domains or typosquatted domains. Once we identify them, we monitor them for changes. Are they still parked? Have they issued a certificate? What does the web application look like? Are there newly exposed services, and so on? If we see any suspicious change on these attacker assets, we raise alerts and manage them.

The fourth use case is code leaks on GitHub. Many, many times, DevOps or IT people leak something on GitHub because, I don't know, it's easy, but they don't really know how to use it, and they don't realize that a public repository is public. So we search for leaked internal resources: code, API keys, passwords. We developed a simple scraper for GitHub to monitor our keywords and patterns, just to detect whether we have leaked any sysadmin scripts, API keys, or other sensitive data, and to raise alerts.
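The GitHub leak monitoring described above can be sketched in a few lines. The watched patterns, helper names, and allowlist are hypothetical; in practice GitHub's code-search endpoint (`GET /search/code`) requires an authenticated token and is rate-limited.

```python
# Sketch of a keyword monitor for code leaks, in the spirit of the
# simple GitHub scraper described above. Patterns and helper names
# are illustrative placeholders.
import urllib.parse

WATCHED_PATTERNS = [
    "corp-internal.example.com",      # hypothetical internal domain
    "AWS_SECRET_ACCESS_KEY",
    "BEGIN RSA PRIVATE KEY",
]

def build_search_url(pattern):
    """Build a GitHub code-search URL for one watched pattern."""
    query = urllib.parse.quote(f'"{pattern}"')
    return f"https://api.github.com/search/code?q={query}"

def triage(items, allowlist=("github.com/ourorg",)):
    """Keep only hits outside repositories we own (potential leaks)."""
    return [i for i in items
            if not any(a in i["html_url"] for a in allowlist)]

urls = [build_search_url(p) for p in WATCHED_PATTERNS]
# Each URL would be fetched on a schedule with an authenticated
# session, and triage() applied to the JSON "items" list before
# raising an alert.
```

Anything surviving `triage()` is a candidate leak worth a human look, which matches the "detect, then qualify" workflow of the talk.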
We could keep listing use cases, but the point is this: when we automate and orchestrate all these security tools, we can address a lot of use cases. So let me step back a bit. If you orchestrate your security tools, you will perform more controls and do it more often. That will produce more findings and more alerts, and the security dashboard will look like a Christmas tree very, very soon. All these events are relevant, but we have to prioritize. And that's the key point today: we have the capacity to detect things and to filter out the false positives; we have the technology and the experience, but we don't have the time to manage every single alert. So we have to prioritize.

If you permit, I will share the morning routine of the CSIRT team I work in when a new vulnerability is discovered. Every morning we talk about it and we share questions. First, when a new vulnerability is disclosed, we look at the CVSS base score. We discuss whether we are vulnerable or not. Are we exposed on the Internet through this vulnerability? Has this vulnerability been identified on a critical asset? Are we aware of any functional exploit for it? Is there any patch or compensating measure available? Are there any likelihood catalysts? Has this vulnerability been exploited in the wild? What is the media hype level? Has the vulnerability been exploited by relevant threat actors? That question goes to the CTI team. We also ask: have we already been hit? The CTI team is in charge of investigating and getting back to us if they can. And the next question is: are we actually able to detect exploitation of this vulnerability? At the end of the morning, the managers decide: are we initiating a crisis and fixing this as a priority? It's definitely teamwork. It's not just between the CSIRT and the CTI teams; it's really teamwork.
All the IT and business service lines are involved. The second thing is that vulnerability metadata are not static. They evolve continuously over time. Everything changes when a new patch is available, when a new exploit is released, or when a new security research blog post comes out. A single event can change the way a security incident should be managed.

And as you remember, we started with the CVSS base score. So, is the CVSS base score sufficient on its own to be the primary factor of discrimination? Just a quick reminder of CVSS scoring: there are three metric groups. The base metrics represent the intrinsic, fundamental characteristics of the vulnerability. The temporal metrics represent the characteristics of the vulnerability that change over time. And the environmental metrics represent the characteristics of the vulnerability within the client's own context. CVSS is the norm, the standard, and it is widely adopted. But while the base score is usually provided, the temporal and environmental scores are left to us. We have clues, we have information from many sources, and they are not always on the same page. And it's just a score, a base score. One last fun fact: Heartbleed was scored at 5, and Spectre below 5. So if we take only the CVSS base score as the primary discriminator, we don't think it's sufficient to do our job.

So let's go deeper. We found that we have to manage multiple criteria for prioritization: the CVSS base score, the patch availability, the age of the vulnerability, the ease of discovery, and the ease of detection. All of this data is available from various sites or information feeds, publicly, and in quite good quality. The first pillar of the prioritization criteria is the threat, the temporal metrics.
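The temporal side of CVSS mentioned above is itself computable: in CVSS v3.1, the temporal score is the base score multiplied by the Exploit Code Maturity, Remediation Level, and Report Confidence factors, then rounded up to one decimal. Here is a small sketch using the multiplier values from the specification (the roundup is simplified to a plain ceiling, which matches the spec for these inputs):

```python
import math

# CVSS v3.1 temporal metric multipliers (per the specification).
EXPLOIT_CODE_MATURITY = {"X": 1.0, "H": 1.0, "F": 0.97, "P": 0.94, "U": 0.91}
REMEDIATION_LEVEL     = {"X": 1.0, "U": 1.0, "W": 0.97, "T": 0.96, "O": 0.95}
REPORT_CONFIDENCE     = {"X": 1.0, "C": 1.0, "R": 0.96, "U": 0.92}

def roundup(value):
    """CVSS 'Roundup': smallest number, to one decimal, >= value."""
    return math.ceil(value * 10) / 10

def temporal_score(base, e="X", rl="X", rc="X"):
    """TemporalScore = Roundup(Base * E * RL * RC)."""
    return roundup(base * EXPLOIT_CODE_MATURITY[e]
                        * REMEDIATION_LEVEL[rl]
                        * REPORT_CONFIDENCE[rc])

# A critical base score drops once an official fix exists and the
# exploit is functional (F) rather than weaponized:
print(temporal_score(9.8, e="F", rl="O", rc="C"))  # → 9.1
```

This is exactly the "metadata are not static" point: the same CVE re-scores itself as exploits mature and patches ship.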
It's about the vulnerability, the exploit maturity, and the ease of exploitation. The threat intensity is also very useful to know, because it's a sign of the maturity and of the likelihood of occurrence of the risk. We also thought that, with MITRE ATT&CK and CTI feed information, it could be useful to know about the threat relevancy. Sorry, that's a hard one to pronounce; I'm French, remember. That means: is it exploited by threat actors we monitor, or not?

And the third pillar is the asset itself, the vulnerable asset. Is the asset critical or not? What is its exposure? Is the vulnerable asset exposed to the Internet or on a restricted network? And what about the distribution of the vulnerability? A vulnerability with a high CVSS base score but no functional exploit, on a restricted network, is not the real priority. But a vulnerability with a middling CVSS base score that is exposed on the Internet, with an exploit available and a large distribution, is the top priority.

So we take all these prioritization criteria and we make a decision: do we have to remediate this as quickly as possible or not? The first level is: it's very urgent. We demand an immediate correction and we start a crisis, we open the crisis room. The second is: it's urgent. We ask for an immediate correction and we verify that the correction is effective. The third action is: it's a known vulnerability; apply the fix in the next patch campaign, but no more. And finally, if the vulnerability matches none of the top criteria, apply the fix if possible, but we pay no further attention to it. We have to choose our battles; this type of vulnerability, we don't actively manage.

For all of that, we are working on a new tool, PatrowlHears. Open-sourcing this application is currently under discussion; I hope it will happen.
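The decision logic above, where exposure and exploit maturity can outrank the raw base score, can be sketched as a toy rating function. The weights and scale are illustrative assumptions for this talk, not PatrowlHears' actual rating formula.

```python
# Toy prioritization rating combining the three pillars described
# above: the threat (exploit maturity), the asset (exposure,
# criticality) and the vulnerability (base score, distribution).
# Weights are illustrative assumptions only.
def priority_rating(cvss_base, exploit_available, internet_exposed,
                    critical_asset, widely_distributed):
    score = cvss_base                        # 0..10 baseline
    score += 3 if exploit_available else 0   # threat: exploit maturity
    score += 3 if internet_exposed else 0    # asset: exposure
    score += 2 if critical_asset else 0      # asset: criticality
    score += 1 if widely_distributed else 0  # vuln: distribution
    return score

# High CVSS, no exploit, restricted network:
internal = priority_rating(9.0, False, False, False, False)
# Middling CVSS, exploit available, Internet-exposed, widespread:
exposed = priority_rating(5.5, True, True, False, True)
print(internal, exposed)  # the exposed one ranks first
```

With these weights, the Internet-exposed, exploitable vulnerability (12.5) outranks the higher-CVSS internal one (9.0), which is exactly the ordering argued for above.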
The idea of PatrowlHears is to manage all these criteria, all these metrics. We make massive use of cve-search and VIA4CVE, which are open source tools released by CIRCL. The idea is to collect and clean data, CVEs, CPEs, cross-references, and to create and update vulnerability metadata from the NVD, the Exploit-DB, Packet Storm, Metasploit, Talos, the Tenable DB, and so on. From that, we compute a vulnerability prioritization rating using the vulnerability metadata plus the asset criticality and exposure. That context is known by PatrOwl Manager: in PatrOwl Manager, we know whether a vulnerability was found on an asset exposed on the Internet or on the internal network, and we know about the distribution too. The idea is to provide PatrOwl Manager a rating for vulnerabilities found by any scanner.

We can also use PatrowlHears to track changes on vulnerabilities, like CVSS updates or newly known exploits, and to perform alerting: if we find a vulnerability on a monitored product or product version, we automatically send an email, raise a TheHive alert, post to Slack, open a JIRA ticket, and so on. And finally, the idea is also to share feeds with other platforms. We'll come back to this later.

I've reached the end of the presentation. So let's step back and talk about automating, let's say, your SecOps. It's quite possible. The first point is cost-effectiveness: it helps rationalize the tool integrations, the product licenses, and all the skills needed to deploy and use the various security scanners. The second point is a turnkey solution: every component is available as a Docker image, very easy to use and to deploy, and we provide templates for scan policies and so on. The third point: PatrOwl and all of its tool galaxy are open source and easily customizable to your specific needs.
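The alerting step described above, notifying when a new vulnerability concerns a monitored product, can be sketched simply. The product watchlist and the notification channels here are hypothetical placeholders for the real mail gateway, Slack webhook, or ticketing integrations.

```python
# Sketch: fan out alerts when a vulnerability summary mentions a
# monitored product. Watchlist and channels are placeholders.
MONITORED_PRODUCTS = {"openssl", "apache httpd", "kubernetes"}

def matches_watchlist(vuln_summary):
    """Return the monitored products mentioned in a summary."""
    summary = vuln_summary.lower()
    return {p for p in MONITORED_PRODUCTS if p in summary}

def dispatch(vuln_summary):
    """Build one alert per (channel, product) hit."""
    hits = matches_watchlist(vuln_summary)
    if not hits:
        return []
    # In a real deployment these would call the mail gateway, the
    # Slack webhook, or open a JIRA/TheHive ticket.
    return [f"alert:{channel}:{p}"
            for p in sorted(hits)
            for channel in ("email", "slack", "jira")]

alerts = dispatch("Heap overflow in OpenSSL 3.0.x TLS handshake")
```

Real matching would key on CPEs rather than substrings, but the fan-out pattern is the same.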
We have documentation. We still have to improve it, but everything is available. Globally, it's a full-stack, continuous assessment: the idea is to get a 360-degree overview of your assets, to perform near-real-time assessment with relevant data, and to keep you updated from every source we can. Finally, it's made with love, by experts: by us and by the whole PatrOwl community, which is composed of security experts.

If I can summarize with two things: for big companies, it's an opportunity to aggregate findings from the different tools you already have in place. For newcomers and small organizations, it can bring new capacities to quickly improve your security maturity.

What's next? We have a roadmap, of course. We are working hard to improve our integrations with other tools, and especially two of them: TheHive, which is a security incident response platform, and Rudder, which is an IT automation and security compliance tool. Both are open source and very mature projects, and we definitely want deeper integration with them. We are also improving patrowl4py, the Python client API. We are currently redesigning the front end of PatrOwl Manager, we are endlessly testing new use cases, and we keep improving the quality and the overall security of the platform. We are also building an enterprise edition, but it is an open source project and it always will be.

Contribution is really, really needed. So if you have the opportunity to test it and give us feedback, that would be great for us. If you want to contribute, or just file new issues, we would be very happy to have them. So we are at the end of the presentation. If you have any questions, please ask them in the chat room. And finally, I know it's a bit late to say this, but I really want to thank the DEF CON organization. Thank you for having accepted my talk.
And thank you, guys and girls, for attending this session. Thank you very much. Now let's go to the questions. Thank you.