Good afternoon, everyone. I am Abubakar Siddiq Ango, developer evangelism program manager at GitLab, and I'm here today to talk about understanding the Sec in DevSecOps. Now, almost every other week, a new thing is added to DevOps: DevSecOps, DevSecMLOps. Almost every week, new ones are coming up. But today, I'll be talking about the Sec in DevSecOps and how it's beyond just shifting security to the left.

Now, a little bit about myself. Like I said, I'm a developer evangelism program manager at GitLab and also a CNCF ambassador. You can reach me on my website, abuango.me.

Now, the list of things I'm talking about today: why are we even shifting left in the first place? Why do we need to shift left? What are attack vectors and attack surfaces? What are common attack vectors that we see, and how do we mitigate them? What security scans can we use in CI to mitigate these attack vectors? And I'll also talk a bit about compliance and security policies, since those are some of the key things we worry about, especially in regulated industries.

Now, security is a huge thing, and over the years, how we build software has evolved. We all remember the days of the waterfall model: you start from gathering requirements and you work your way down until the project is finally done. Then Agile came, then DevOps, and different methodologies of building software have evolved over the years. But they all have one thing in common: security is an afterthought. It comes after the application has been built. That is when we call in the security engineers to test the application, to ensure it's secure, to ensure it behaves the way it's supposed to. Then when they find things, we have to go back to the beginning of whatever software methodology is being used and say, let's fix this, let's make sure this doesn't happen. And sometimes it's only when things have gone wrong that we discover we should have implemented some security best practice or feature.
So it's usually expensive and costly, because it might be a major security breach that adversely affects the organization or the project. Then we start to re-evaluate the whole software development process. That's why lately we've been hearing: why don't we prioritize security right from the beginning of the project? That's why we've been saying we are shifting security to the left. So it's no longer towards the end, or while we are building the application, but right from the beginning, from the planning stages of the application: how the application is to be designed, the security best practices that need to be implemented, introduced right from the beginning.

It also changes how developers work. These days, many companies are introducing remote development environments, because even the laptops and devices that engineers use are becoming an attack vector. Why not create remote development environments in a secure place where applications can be built? So security has shifted. We are no longer shifting; it has shifted to the left, because any organization that is not prioritizing security right from the beginning of the software development lifecycle is just preparing for disaster.

Now, one other thing: a lot of the discussion around DevSecOps talks about how our application needs to be secure, but it goes beyond that. Everything around our application needs to be secure: the devices we use to build our applications, the servers that build our applications in our CI builds, where we deploy our applications to, even the service providers we use. We need to ensure that security is covered 360 degrees. I think it was during the Cold War that the term "trust, but verify" became very popular. But now it's no longer trust, but verify. It's never trust, always verify.
So you don't trust that your developers will not use the wrong dependencies, or that the dependencies you've used are safe. We all know the famous Log4j. You don't say, oh, this provider is secure, so everything from the provider should be secure. What if something goes wrong? What if they get attacked? Anything can happen, and it will come back to you. Software supply chain security stories are common now, and every day we are seeing new security breaches happening, not because you are not secure, but because a dependency or a provider you are using gets hacked.

Now, if we look at our software development lifecycle, at almost every stage there are points where vulnerabilities can occur. Your developers push code, and they can push anything. Even if you have 20x developers, not just 10x developers, mistakes can happen. They can push the wrong things, or they can use things that at the end of the day turn out to be insecure.

Then, where do you push your code to? Some companies host their source code repositories on premise, while a lot of us use service providers. On premise, all the resources and environments are around your source code management. How secure are they? If you have a network breach, no matter how secure the source code management is, that breach introduces a whole new lot of attack issues to your organization.

Then when you are building your application: what build systems are you using? In what environment are they being built? Do you have noisy neighbors? Are you running containers in privileged mode? How is your build system handling your builds?

Then, deploying your application to production: how secure is your production environment? How are you sure that what is running in production matches what you've defined? How are you sure a change has not been introduced halfway?
And even dependencies, which have been one of the major sources of issues lately. We've had issues of namejacking in dependency repositories. We've had issues of the repositories themselves being compromised. We've had issues like colors.js and others that happened previously, or the kik incident, where the developer pulled his package from the registry and broke a lot of builds. So anything can happen. It's not just using the wrong dependency: even legitimate dependencies can have bad actors behind them. There is a security issue becoming more prominent called protestware, where people are not just putting bad code into an application, they are protesting, either via just a banner, or, in one case, I forget the name of the package, by checking the IP addresses of users. If it detected an IP address from a particular range, it executed code to delete files on the server or the system. So even legitimate packages or dependencies can go rogue, or they can be compromised. Ensuring that the entire software development lifecycle is secure is part of having zero trust across the organization.

But first, let's understand two terms. Attack vectors: these are the ways or methods by which systems can be compromised, or vulnerabilities introduced into an application, and they can come in various forms. It might be via a vulnerability or a bug, or a security breach in the network, or it can even be through humans, the usual phishing schemes. Someone sends you, oh, this is an invoice from the CEO of your company, please confirm these expenses. And you decide, let me open it, it's definitely coming from the CEO, it's important. Then before you know it, the system gets hacked. There's this popular YouTuber, I've forgotten his name. Linus Tech Tips, yeah.
So he got hacked, despite being one of the most prominent YouTubers telling people how to be secure. And how did it happen? One of his staff members received a file, supposedly from a partner. He opened it, nothing seemed to happen, so he discarded it, not knowing that session cookies and a lot of other cookies from the browser had been harvested. And before they knew it, their YouTube accounts had been hacked, their videos deleted and replaced with something else.

So there are several ways in, and almost every day there are new ones. The more complex your application is, the more attack vectors it can have. And the attack surface is how many of these vectors exist in your application, or in your systems, or in your organization as a whole. Because your build system and your application can be secure, but the behavior of your users on the network might compromise the system at the end of the day.

Now, let's look at common attack vectors. The most prominent one is the network. Yeah, a lot of us are at a conference here. Oh, there's free Wi-Fi! We connect to the free Wi-Fi. We all know the price we sometimes pay for free Wi-Fi. Or, these days, I was watching the news, and I think the FBI in the US was advising people not to use the USB ports in airports. We all knew someone who would see a random USB port and think, oh, I want to charge my phone or my device. That was a recipe for disaster. I've even seen a phone charger that, when opened up, wasn't just circuitry to power your phone. There was extra circuitry within it, doing whatever the person who built it decided. So there are a ton of ways that systems can be compromised over the network, over Wi-Fi.

Now, another thing is bad software development practices. A lot of times, how the code is written, or how configurations are written, can lead to a compromise.
I've seen scenarios where not doing the necessary checks, or not using the right conditions in certain places, has led to financial losses or to systems being compromised. Especially for folks who are not familiar with the technology or programming language being used to build an application, there's usually a tendency for things to go wrong. So bad software practices are also one of the ways in. We have things like SQL injection, cross-site scripting, a ton of them. Or a bug in a programming language that has not been fixed, or that has been fixed but your system has not been updated, and no one took cognizance of that. So it might not even be your developers that are wrong, or the way you build the application. It might be a bug in some software somewhere, or a dependency you did not put the necessary guardrails around, that compromises the system at the end of the day.

Badly configured systems, yeah. A lot of us might be guilty. You install a new application and you leave the defaults. PostgreSQL: you leave the default postgres user, or you set up Postgres and, yeah, it's just local. Then one day, for one reason or another, someone changes the configuration. It's now available publicly on the internet, and everyone has access to your data. Or, one I'm guilty of myself: I deployed a Laravel application with a .env file inside, configured Apache properly, but forgot that the default Apache configuration was still on the system, and it just exposed the .env to the world. So how systems are configured is also extremely important.

Compromised devices. A lot of spyware and other malware can infect laptops and other devices, not just build systems. Even your laptop. I think in the past there used to be viruses that looked for HTML files and automatically added JavaScript to them. That happens right on your own laptop, not in the build system, not somewhere else.
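To make that class of misconfiguration concrete, here is a toy sketch, in Python, of the kind of default-credential audit a configuration scanner performs. The risky key/value pairs are invented for this illustration:

```python
# Toy configuration audit: flag settings that are still on risky defaults.
# The values below are made up for the example, not any scanner's real rule set.
RISKY_DEFAULTS = {
    ("user", "postgres"),
    ("password", "postgres"),
    ("password", ""),
    ("debug", True),
}

def audit_config(config):
    """Return every (key, value) pair that matches a known-risky default."""
    return [(key, value) for key, value in config.items()
            if (key, value) in RISKY_DEFAULTS]

findings = audit_config(
    {"user": "postgres", "password": "postgres", "debug": True, "host": "db.internal"}
)
print(findings)  # [('user', 'postgres'), ('password', 'postgres'), ('debug', True)]
```

Real scanners apply hundreds of such rules across many file formats, but the idea is the same: known-bad defaults are cheap to detect automatically.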
Now, I've already talked about unmitigated vulnerabilities in your code or in the programming language you are using. Then there are vulnerable build systems, and the indiscriminate use of dependencies. You just use dependencies anyhow. You're looking for a specific Docker container that has this and that, and you grab some random Docker image somewhere that does exactly what you want. But are you aware of everything inside that container? Are you aware of the layers of images inside it?

Then, a questionable or vulnerable supply chain. Supply chain is a big deal lately, because our industry, or any industry that uses software, depends on a lot of services and a lot of software. Sometimes you have a chain of tools helping you achieve whatever it is you want to achieve, and any one of them can be compromised. We all know the SolarWinds issue affected a lot of industries, and there are a ton of incidents that don't even make the news.

Now, how do we mitigate all this? The first and most important one is ensuring zero trust across the entire organization. Not just your application, not just your build system, but also the systems your developers use, and not just the developers: the executives, the sales people, the marketing people. Because someone might be used as just a conduit to get to the right places. Oh, this is probably a salesperson: a PDF of an invoice is sent to him, it's forwarded here, it's forwarded there, and before you know it, it gets to the right target.
Then, of course, secure security processes, and secure code and build systems. The ones I will be focusing on in this talk are CI security scans and policies, because specifically for application security, we all use continuous integration, continuous deployment, and all the continuous whatever. This is how we automatically build our software, and using some of these tools is how we can ensure that we build our applications securely.

Now, the scans that are usually done as part of DevSecOps setups are things like static application security testing, dynamic application security testing (I'm going to look at each of them), container scanning, dependency scanning, license scanning, secret detection, supply chain security, fuzz testing, and infrastructure as code security scanning.

Let's look at static application security testing. Oftentimes, best practices in applications are enforced with SAST, static application security testing. How many of us have used tools that will say, oh, your function is too large, you have more than a certain number of lines of code, or you used this variable here without declaring it, and so on? These are best practices that can be enforced in the application. And not just that, but also identifying scenarios where, let's say, you have an SQL query, SELECT * FROM users WHERE username = $username, and the user is supposed to supply that value. That's a recipe for disaster: SQL injection. Someone can add whatever they want as their username, and at the end of the day, your SQL query will always return true. So there are tons of security vulnerabilities that can be found by just statically analyzing our application. SAST ensures that the right patterns are used, and that vulnerable things you've done within your application are detected before the code even gets to your source code management.
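To illustrate the pattern a SAST tool would flag here, this is a small self-contained Python sketch (using the built-in sqlite3 module) contrasting the interpolated query with its parameterized fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_unsafe(username):
    # Vulnerable: user input is interpolated straight into the SQL string.
    query = f"SELECT * FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def login_safe(username):
    # Parameterized: the driver treats the input as a literal value.
    return conn.execute(
        "SELECT * FROM users WHERE username = ?", (username,)
    ).fetchall()

payload = "' OR '1'='1"
print(login_unsafe(payload))  # returns every row: the classic injection
print(login_safe(payload))    # []: the payload is just an unknown username
```

With the interpolated version, the query becomes WHERE username = '' OR '1'='1', which is always true; the parameterized version never lets the input escape its role as data.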
And SAST can be run locally on your system, or it can be part of your build system: in our CI rules, every time a developer pushes code, the code is checked for vulnerabilities with SAST.

Now, the next one is dynamic application security testing. Yeah, you've scanned the application statically, it has been pushed, everything seems fine, no errors there. But what about those edge cases? What about the errors that can only be detected when using the application? If you enter the username this way or that way, or you forget to put a password and it goes through, and so on. These are things you might not be able to detect when you are scanning just the code of the application, but you can when the application is running. There are even DAST tools with features where you can specify a username and password they can use to log into the application, enter text into text boxes, click certain parts of the application, and see how it responds. That way, you are able to identify edge cases or other issues that might occur when the application is running in production. But it is always advised that you run these tests in a sandbox or staging environment. Running them in production might affect the performance of the application, especially where your users are active, because DAST scanners, I think they're usually called spiders, are automatically attacking your application, and there will be performance issues.

Now, the next one is infrastructure as code. Almost everything about our infrastructure can be automated these days: Terraform, Ansible. The beauty of infrastructure as code is this: you can go from zero to the full infrastructure, with everything running, in less than five minutes, because everything has been automated.
All the application resources and infrastructure resources have been defined and pushed to AWS, to GCP, to Cloudflare, and as soon as that is done, your Ansible script is already configuring the applications, and so on and so forth. You can even mix several tools: as soon as this is done, that runs. But we depend on providers when it comes to Terraform. We depend on playbooks when it comes to things like Ansible. We depend on Docker images that have been published online. Are we sure that none of these contain vulnerabilities, or vulnerabilities that have been identified and published but not fixed, because it's not a priority for the contributors of that project? Having infrastructure as code scans in your pipeline helps ensure that everything you're using is secure.

We could even count unnecessary spending on infrastructure as a vulnerability too, because almost all of us have had cases where you spend more than your budget on AWS or GCP and you are frantically looking for how to cut down those costs. There are lots of tools now that will show you how much you're spending, or, when you introduce a change to your resources, that you previously spent $300 and this change just added $10,000. What happened? So I think that, too, can be something you check. We have tools like KICS, and we have Trivy, which helps you scan your configuration files.

Then, container scanning. Almost everything now is containers. The previous speaker talked about containers being used in almost every part of infrastructure lately. And in building containers, images are used. Almost all images start with FROM: from this image, from that image. Some start FROM scratch, but a lot start from a particular image that has already been built somewhere, and add more layers to it. How are you sure of where that FROM is coming from?
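As a toy illustration of the kind of rule IaC and container-config scanners such as KICS or Trivy apply, here is a simplified Python sketch that flags Dockerfile base images that aren't pinned (the rule is reduced to the bare minimum for the example):

```python
def lint_dockerfile(text):
    """Toy check: flag FROM lines whose base image is unpinned or ':latest'."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        stripped = line.strip()
        if not stripped.upper().startswith("FROM "):
            continue
        image = stripped.split()[1]
        if "@sha256:" in image:
            continue  # pinned by digest: reproducible and tamper-evident
        if ":" not in image or image.endswith(":latest"):
            findings.append((lineno, f"unpinned base image '{image}'"))
    return findings

dockerfile = """\
FROM ubuntu:latest
RUN apt-get update
FROM alpine:3.19 AS build
"""
print(lint_dockerfile(dockerfile))  # [(1, "unpinned base image 'ubuntu:latest'")]
```

A mutable tag like :latest can point at a different image tomorrow than it did today; pinning to a digest makes the answer to "where is that FROM coming from?" verifiable.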
And even the layers you yourself are adding: what practices are you following? What things are you adding? What vulnerabilities are you adding? Because you added a script, or you installed a new binary into your own image, and at the end of the day, what vulnerabilities are in there? It is extremely important to scan containers especially, because any number of things can slip through. For example, as in the image here, a lot of these tools can show you summaries in your CI: these are the number of severe vulnerabilities detected in your container, this is the count of each, so that you can take action on them.

And it's not just about scanning your containers before you push them to production, but also while they are running in production. Okay, you've defined your Kubernetes resource file, you've done your kubectl apply, and you're happy. Are you sure that what you pushed to your Kubernetes cluster is what's still running? That it has not been compromised or changed? Scanning the images running within your cluster, within your production environment, is also crucial to ensuring that your cluster is safe.

Now, the next thing is dependency scanning. The almighty dependencies. We all depend on dependencies for almost everything we build, because nobody wants to build anything from scratch. We are an industry that stands on each other's shoulders. This person built this library, so I can build mine on top of it. I don't have to write a database wrapper just to query my database. I can use a dependency. I can pick anything from anywhere and build an application in a very short amount of time. But these dependencies introduce a ton of things. There are lots of stories out there of how dependencies get compromised, or how even state actors hijack dependencies and add bad code to them. We now have cases of namejacking.
Say I want to use a dependency called abubakar, for example, my name. But someone registers a lookalike where the u is removed: abubakr. You search, you find it, and you use it in your application, not knowing that it's a clone of the original with some extra interesting code. So make sure you are scanning your dependencies at every push, every commit, because a new commit might be the one that introduces a new dependency into your application.

And part of dependency scanning is license scanning. Licensing is a huge deal, especially if you are big on compliance. For example, HashiCorp changed their license recently. What if your organization doesn't accept that type of license? What if your organization, especially in a regulated environment, has a rule that you don't use the MIT license, or you don't use the Apache license? You don't want to get into a problem because one of your developers found an interesting dependency that solves your problem, only to add more risk to you as an organization. So license scanning is often done as part of dependency scanning, because every dependency comes with a license. And those dependencies have their own dependencies, so just because you pulled one dependency, you might end up with an extra 50, all with their own dependencies and licenses.

Now, the next thing is secret detection. This is a huge deal, though there has been a lot of progress here. I think years back, I read a blog post where some researchers spent time combing through the job logs of public projects on GitHub, and they were amazed by the number of API keys, usernames, and passwords they discovered. Because people just push secrets, or they don't mask secrets, especially in job logs. So you scan your application to ensure that .env files are not exposed.
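At its heart, secret detection is pattern matching. A minimal Python sketch follows; real scanners such as Gitleaks ship hundreds of rules, and the two patterns below are just well-known examples:

```python
import re

# Two illustrative rules; real secret scanners carry far larger rule sets.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text):
    """Return (rule_name, matched_text) for every secret-looking string."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings

# AWS's documented example key ID, safe to use in demos.
log = "export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE"
print(scan_text(log))  # [('aws_access_key_id', 'AKIAIOSFODNN7EXAMPLE')]
```

Running a check like this in CI on every push, and on job logs, is exactly how the leaks those researchers found would have been caught early.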
You also ensure that secrets are not hard-coded or exposed in your application, and that important data is not pushed in one way or another. A lot of secret detection tools now have a feature called auto-remediation. The tool automatically recognizes, oh, this is an AWS key; it logs into the AWS account and rotates the key. It has detected that the key is compromised, so it should be rotated automatically. The same goes for other platforms with easily recognizable passwords or keys. That way, the time between detection and mitigation is shorter, and there isn't too much time wasted in keeping systems secure.

The next thing is fuzz testing. Now, SAST statically scans your application. DAST dynamically scans your application with known data. But what about random data? What about other data that can be pushed into your application? Fuzz testing enters random, invalid data into your application to see how it responds. It's usually called fuzzing. Okay, what if I enter one plus one? What if I do this? What if I do that? It detects things like cross-site scripting, SQL injection, and other edge cases that cannot be found otherwise, because when you are testing with SAST or DAST, it's predictable. With fuzzing, you are trying to identify unpredictable scenarios. I added a QR code linking to a more detailed guide on fuzz testing.

Now, we've been mentioning a lot of scans, scans, scans, but how do you make sense of all of them at the end of the day? A lot of times you do SAST separately, you do DAST separately, you do license scans separately. How do you make sense of everything you've been scanning? That's where vulnerability management comes in. There are a ton of tools for vulnerability management, and basically what it entails is: okay, we've done, say, ten scans for this commit. How many vulnerabilities were detected?
How many of them are high? How many are critical? How many are low? Which ones have been detected before, and which ones are probably false positives? And this is important not just for the developer, but for the team, the company, the organization as a whole, so that the organization can see its security posture and see how security vulnerabilities are detected and mitigated.

Now, the other thing that comes with this is compliance. Compliance is a huge deal, especially with a lot of the security incidents that have been occurring lately. There are new regulations coming in, and compliance can vary by industry: regulated industries like healthcare and the financial sector are heavily regulated, with more compliance requirements. It can be self-imposed: maybe as an organization there are certain standards you want to maintain. Or it can be regional, like GDPR for the EU, or country-specific. Some countries have policies that if you manage data of their citizens, it has to stay localized in the country and not be moved out. I think there was a time the EU wanted to penalize Facebook for moving EU data to the US; I read about it once. So how do you ensure that, as you build your application, you don't violate these policies? It might not be a vulnerability, but it's a risk for the project or the organization. Compliance has become so huge that organizations prioritize meeting it at every stage of their software development lifecycle.

And alongside that, security policies. Okay, a vulnerability has been discovered. What happens next? Security policies can be put in place to ensure that, if this many CVEs are detected, or this many criticals, that PR or MR should not be merged until someone from quality assurance, someone from the security team, reviews it. Or, say we detected that a license changed: the legal team must approve.
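Such a policy gate boils down to counting findings by severity and comparing against thresholds. Here is a toy Python sketch; the thresholds, finding format, and the second CVE identifier are invented for the illustration:

```python
from collections import Counter

def merge_allowed(findings, max_critical=0, max_high=2):
    """Toy policy gate: allow the merge only if severity counts stay under thresholds.

    Thresholds here are made-up examples, not any product's defaults.
    """
    counts = Counter(finding["severity"] for finding in findings)
    return counts["critical"] <= max_critical and counts["high"] <= max_high

scan_results = [
    {"id": "CVE-2021-44228", "severity": "critical"},  # Log4Shell, for example
    {"id": "CVE-2023-0000", "severity": "low"},        # placeholder identifier
]
print(merge_allowed(scan_results))  # False: one critical exceeds the threshold of zero
```

In a real setup, the gate would also route the blocked MR to the right approvers, such as the security or legal team, as described above.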
Or we identified a new bug from somewhere: this particular group of people must approve. That way, the organization or project ensures it is always meeting regulatory compliance.

Now, the next thing is the software supply chain. This is a huge deal now, with the cases of SolarWinds, Log4j, and so on. A lot of organizations want to have more control over what they are consuming and how they are consuming it. Even when you are consuming something, you want to have the provenance: how was it built? What does it depend on? In which environment was it built, so that you can replicate building the application and ensure it's secure? You can also have a software bill of materials: you want to know everything that was used to build an application and all the dependencies it contains, so that you as an organization can check it yourself. Like we said, never trust, always verify. Check that the dependency or application you are using from your provider meets your security policies and is secure.

Now, those are all the things I wanted to mention. The key point here is that the attack surface is ever increasing for projects. There was a time I was doing some research for an article about protestware, and there was a dependency that had been modified so that, any time it detected certain environments or certain projects, it would overwrite code. A legitimate dependency, and it would go into the system and start deleting files. There was also the case of kik versus npm, where the developer of a package called kik was asked to rename it because the Kik messaging app complained to npm. Out of protest, he deleted all of his packages, and one of them was a very popular one: left-pad. A lot of people were using left-pad, and as soon as he removed it from the package registry, any pipeline that depended on left-pad just started failing.
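As a sketch of how a software bill of materials feeds into the checks described above, here is a Python example that walks a heavily simplified CycloneDX-style component list against a license allowlist. The SBOM structure is trimmed down and the allowlist is just an example policy:

```python
import json

# A heavily simplified CycloneDX-style SBOM; real documents carry far more metadata.
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "left-pad", "version": "1.3.0",
     "licenses": [{"license": {"id": "WTFPL"}}]},
    {"name": "lodash", "version": "4.17.21",
     "licenses": [{"license": {"id": "MIT"}}]}
  ]
}
"""

# Example allowlist only; every organization sets its own policy.
ALLOWED_LICENSES = {"MIT", "Apache-2.0", "BSD-3-Clause"}

def license_violations(sbom):
    """Return (component, license) pairs whose license is not on the allowlist."""
    violations = []
    for component in sbom.get("components", []):
        for entry in component.get("licenses", []):
            license_id = entry.get("license", {}).get("id", "UNKNOWN")
            if license_id not in ALLOWED_LICENSES:
                violations.append((component["name"], license_id))
    return violations

print(license_violations(json.loads(sbom_json)))  # [('left-pad', 'WTFPL')]
```

The same component walk supports the "verify" half of never trust, always verify: once you have the full list of what you consume, checking it against any policy is mechanical.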
Before long, it became an issue of how we ensure that packages that have basically become utilities are secure and available. There are lots of ways that systems can be compromised, and the attack surface is ever increasing. Our applications are becoming more complex, more complicated by the day, and as they become more complex, the attack surface grows. That's the end of my talk. I don't know if you have any questions.

Oh, okay. Thank you for your presentation. You mentioned fuzzing embedded into GitLab. Could you tell me more about the tools under the hood? Does it depend on the language, or do you use some common tools? What are the fuzzing tools?

Yeah, I can't remember the specific tool right now, but I think there are quite a number of them out there. I just can't recall any of them at the moment. Okay, thank you. Any other questions? Awesome. I think I'm right on time. Thank you very much.