Hey everyone, I hope you have been having a great experience at the Cloud Native Security Day Europe virtual event, and I'm here to try and add to that experience with this session, Integrating Security in the Build Pipeline.

Let me take a moment to introduce myself. My name is Anirvan, and I'm a member of the DevOps engineering team at Allianz Direct, based in Munich. Allianz Direct is the online insurance platform of Allianz, operating in a few countries in Europe. I have been working with cloud native infrastructures for almost 10 years, and I tend to automate solutions to make others' lives, and my life, easier. Also, with so many security incidents happening in our daily lives, I sometimes tend to get a bit paranoid about security in my projects.

I should also mention here that all the information, tools, strategies, opinions, and demonstrations presented in this session are entirely my own, and they do not represent the views of Allianz Direct or any other Allianz entity. Although we do a lot of cool stuff at Allianz Direct, the contents of this session are not really aligned with them.

This session ends with a live demonstration of a CI/CD pipeline with security tests enabled, and it involves a cloud native setup. I'll explain the tools that have been used here during the session, and for the attendees who are really interested in trying it out themselves, I have added the configuration files in a couple of GitHub repositories. The build pipeline repository also has general guidelines about the base components that we are using in the demo and the commands to set them up.
I understand that for some beginners it might be a bit much to grasp, but unfortunately, because of some time constraints, I'll not be able to demonstrate setting up the base components such as the Kubernetes cluster, the autoscaler, and the ingress controllers.

So we will first look at why we are here: understand the problem, analyze it, find out the reasons for it, and look at ways to mitigate it. We will look at how a vulnerability scanning workflow works, how scanners plug into the build pipeline, and the options that we have in terms of tools. We will analyze different strategies to integrate security scans into the build pipeline, and finally, we will watch one such pipeline in live action.

I know that we all hate staring at PowerPoint presentations. Believe me, I know; I really know that pain. So I'm going to finish up with the presentation as fast as possible, and we'll dive into the fun stuff. This is required as I really want to keep this simple, and I really want you to understand what we are doing and why we are doing it.

So why are we really here? What is the problem that we are trying to solve? In general terms, we are trying to implement software security. So, how do we do it? Generally, it starts with the business goals being defined, resulting in products to be launched, which happens at the management level. The architect designs the software as per the requirements, developers get on the same page with the architect, and they start creating some cool software. With modern software life cycle processes, testing the software is really essential.
We all know that once all of this is achieved, the artifact is created and the product is released. Generally, at this point, a security or penetration test is scheduled, where vulnerabilities are detected. Or sometimes, unfortunately, there's really no security test done at all. As the software has never been scanned before, this list of vulnerabilities is quite long.

Now think about a cloud native scenario where we are talking about more than 50 to 60 microservices, and the amount of time it would require to fix those problems. That's huge, right? Now, with product priorities and feature requests, it is quite difficult to devote dedicated time to such fixes. As a result, it gets postponed or neglected altogether, and before we know it, there is a breach taking place.

So how do we approach the problem? You might have come across the phrase "shifting left on security." This very simply means that in the left-to-right progress of the software development life cycle, we need to move the security testing towards the left, that is, early on in the development process. The basic idea is to detect and fix security problems during development, and not after the service is released. So how do we actually go about addressing the issue?
The final objective of any business is to create a trust relationship with the customers, the clients, and the registered users. To do this, you need to be able to convince them that when they provide you with their data, it is not going to be compromised under any circumstances. At present, we are providing our financial, insurance, medical, and personal data online to service providers every day. We would definitely not want to be associated with a service that is not able to protect our data.

To do this, the first and foremost thing to be done is to make each and every person responsible for security. When you leave your office door open by mistake, you're responsible for the safety of hundreds of your colleagues in the office building, and you can compromise it by letting in an intruder. This person can be an office administrator, an engineer, a manager, or even the CEO of the company. Similarly, in the software product life cycle, each person needs to understand that they need to be responsible for security, whether it is via software design, code creation, software deployment, or really site reliability. Developers really understand features and feature requests very well, right?
So start treating security fixes as features and integrate them into the development process. In large development teams, instead of asking and expecting everyone to fix their libraries, use a pre-built library repository which only contains pre-approved and vulnerability-free libraries. To fix problems, you need to detect them first and know how to make that part of the process, so go for professional help in such cases, or really hire dedicated security personnel. Train the teams so that they first understand what they are doing and why they are doing it.

It's not only important to decide to shift left on security, but also to understand what to scan for. Vulnerabilities are of different kinds, and there are different stages when they can be detected and fixed. The first step in this direction is to look for package or dependency vulnerabilities. They include base operating system packages and also application packages, such as Ruby, Python, Java, Node, or just any other programming language dependencies. This is a stage where the code is not involved yet.
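As a quick aside, a package or dependency scan like this can be gated in a pipeline step. The sketch below assumes Trivy as the scanner (the same tool the demo uses later) and wraps it in a small shell function; the image name in the usage example is purely illustrative, not one from the demo repositories.

```shell
# Minimal sketch of a dependency-scan gate, assuming Trivy is installed.
# With --exit-code 1, trivy returns a non-zero status whenever a finding
# matches the requested severities, so a CI step calling this function
# fails the pipeline instead of silently printing a report.
scan_packages() {
  local image="$1"
  trivy image --severity HIGH,CRITICAL --exit-code 1 "$image"
}

# Example (illustrative image name):
# scan_packages registry.example.com/test/example-app:001
```

The key design point is relying on the scanner's exit code rather than parsing its output, which is what makes the gate trivial to plug into any CI tool.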
It's just the system on which the code is supposed to run. Next, we need to scan for vulnerabilities in code which has not been compiled yet. These are situations when problems in the code can lead to unintended code executions. This is called static application security testing, or SAST, because the code is not running yet. Another type of security scanning is DAST, or dynamic application security testing. In this stage, the application needs to be scanned for vulnerabilities which exist when the code has been executed. This might need to be done against a running web application. So we can really see that the attack surface is quite huge.

Next, we will look at a general vulnerability scanning workflow. The CI/CD tool is probably one of the most important components in the setup; it orchestrates the entire pipeline. The code repository is used as the source for our code, and the image repository for image artifact storage. The security tool is really not mandatory in most cases. It is a central security platform or tool which coordinates the communication between the build pipeline and the vulnerability scanners. It is also used as a visibility tool to visualize vulnerability reports. Believe me, when you look at the command line and see all those vulnerabilities, it's not really that helpful; a dashboard can really help you here. Pipelines can also communicate directly with the scanners. The vulnerability scanners are the databases against which we compare our image packages. If all goes well, we can deploy our artifact on the container orchestration platform.

For our demo, we will use a similar setup, with GitHub as the code repository, Tekton and Argo CD as the CI/CD tools, Harbor as the security tool and the image repository, and Kubernetes as the container orchestration platform. There are a few open source vulnerability scanners available, such as Clair and Trivy, which are really, really powerful tools. The default deployment of Harbor ships with
Trivy as the default scanner.

We will fetch our application code and Dockerfile from the app repository, build the image, and push it to a test image repository. We will then scan the image from the test repository for vulnerabilities. If there are none to be found, the image is promoted from the test to a prod repository. Finally, we will deploy the app from the prod repository. We definitely wouldn't want vulnerable images to land in prod repositories, right? If the image has vulnerabilities, the pipeline will stop at that stage, and we can visualize the vulnerabilities in the Harbor dashboard.

For our demo, our app is a simple nginx container built from a Debian or Ubuntu base image. nginx is installed on this image, and a test web page is added. So really, really simple stuff. The Debian image has high and critical vulnerabilities, which we don't want in prod, whereas the Ubuntu image only has medium and lower vulnerabilities, which we will allow to be deployed. That will actually be our simulation.

We will use Tekton to create our build pipeline and Argo CD to run the pipeline and deploy the app on Kubernetes. Argo CD is a cloud native deployment tool based on the GitOps methodology, and it is also a CNCF incubating project. There are some tool-specific terminologies used in the session, but if you feel intimidated, please feel free to use any other CI tool of your choice. Harbor is a CNCF graduated project, which is an image registry and Helm repository. It can also add pluggable vulnerability scanners to be able to scan the uploaded images.

Although we will demonstrate an automated build pipeline in this demo, it is quite far from a production build setup. Advanced features such as triggering pipelines via events and passing parameters are not part of this demo. I have kept the app repo with the Helm chart and the pipeline repo separate to help you understand this better. However, they can be modified to create a more sophisticated workflow.

Okay, so we will now start with our
demo.

This is the Harbor dashboard that we have already installed in our Kubernetes cluster. We have created two different projects, one for prod and one for test. Each Harbor project acts as a Docker image registry and also a Helm chart repository. We have already gone to the configuration page for both projects and enabled the option to automatically scan images on push. If we go to Interrogation Services, we can see that Trivy is already installed as the default scanner. If we want, we can add a new scanner, like Clair. We have already set up Clair in the harbor namespace in Kubernetes. If we check the pods, we can see we have a Clair pod running and a Clair Postgres pod, which is the database where it stores the vulnerability data. We also have a Clair scanner adapter, which helps Clair communicate with Harbor. We add a new scanner, name it Clair, and put in the address, which is the service for Clair, and the port. There you can see we have added Clair as a secondary vulnerability scanner. For our demo, we will still use Trivy.

For our demo, we have created a couple of Helm charts which will help us run our pipeline. The first Helm chart is called app deploy, which is actually a simple Helm chart to deploy an application and create an ingress for it, to expose it to the internet. The second Helm chart is called tekton build pipeline, and it contains all the configurations to run a Tekton build pipeline. We have already built both of these Helm charts, and we have already uploaded them to the Helm chart repository in the prod project in Harbor.

If we check the build pipeline repository, we will find a directory called secrets, which contains some Kubernetes secret manifests that are to be applied to the Kubernetes namespaces before our demo can be run. They are just some basic authentication credentials and some Docker registry credentials.

Next, if we check our Helm charts, we will quickly go through the components that we have added here. We have added some pipeline resources, which have
our Git repository and the image registry. We have added some config maps, which hold the metadata for our build pipeline, like the Docker tag, the image name, the namespaces, and the Argo CD server and Argo CD application names. We also have a series of tasks that we are going to run in our pipeline, which we have already discussed: the Docker build task, the security scan task, the image promotion task, and finally the Argo CD sync task. We also have a service account, which is attached to one of the Docker credentials and helps provide authentication for our pipeline. We have a pipeline definition, which integrates all the tasks that we have already created, and finally, we have a pipeline run definition, which actually runs our pipeline using Tekton.

Our second repository is called example app, and it has the configurations for the application that we are going to deploy. The first directory is called app, and it contains the Dockerfile and the sample web page that we are going to deploy. We are going to start with the Debian image, which is supposed to have the high and critical vulnerabilities and will be breaking our pipeline. The second directory is called helm app, and it has the Helm chart specification to deploy the application; it uses the app deploy Helm chart as a dependency. The third directory is called helm pipeline; it uses the tekton build pipeline Helm chart as a dependency and will be used to run our Tekton pipeline.

In the build pipeline repository, there's another directory called argocd resources, which has some manifests for our Argo CD applications. The first one is used to add the Harbor registry information into Argo CD so that it can fetch Helm charts. The example app manifest is used to create the Argo CD application that actually deploys the application, and the third manifest is used to create the Argo CD application which will run our build pipeline.

We will now go to the command line, go to the argocd
resources directory, and we will apply each of these manifests. If we now go to the Argo CD dashboard, we will see that both our applications have been created, and if we go to the repositories page, we will see that our Harbor Helm repository has automatically been added and the connection is successful.

The example app build pipeline gets its configuration from the helm pipeline directory of the example app repository. The values file of this Helm chart has the corresponding values which help the pipeline determine its state. If we go to the Argo CD application, we will see that all of these resources are ready to be created, and it creates a dynamic name for the pipeline run with the help of the tag that we have added in the values file. In a more sophisticated workflow, we would not be adding this tag manually; it would be created when our automated build pipeline works via triggers and events.

So finally, it's time for us to run our pipeline and see the output. We will run the sync option, we will select the prune parameter here, and we will run synchronize. This will take some time.
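For those following along from a terminal rather than the UI, the same sync-with-prune action can be expressed with the argocd CLI. This is a hedged sketch: the application name in the example is illustrative, and it assumes you are already logged in to the Argo CD server with `argocd login`.

```shell
# Sketch of the UI "Synchronize" + prune action via the argocd CLI,
# wrapped in a function so the application name stays a parameter.
# --prune removes resources that are no longer defined in the source,
# mirroring the prune checkbox selected in the dashboard.
sync_with_prune() {
  local app="$1"
  argocd app sync "$app" --prune
}

# Example (illustrative application name):
# sync_with_prune example-app-build-pipeline
```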
So I will skip ahead in the video here. If we now check our pipeline, we will see that it has stopped at the security scan step, and we will see the reason why: it has found high and critical vulnerabilities.

So in the meantime, we have gone ahead and edited our Dockerfile to change the image from Debian to Ubuntu, and we have also bumped the tag for the image. If we go to Argo CD, we will see that the change has automatically been detected; the 001 pipeline run is to be removed, and a new pipeline run with 002 will be started when we synchronize the pipeline. So we will go ahead and run synchronize again, and we will select the prune option so that the old pipeline's tasks are removed and the new ones are created.

If we now check our pipeline, we will see that it has completed successfully, because the security scan step was not able to find high or critical vulnerabilities. If we now quickly check our Harbor dashboard and look at the test project, we will find our example app image here, and if we go into the details, we will see each of our tags with its corresponding vulnerabilities. Here we can see that tag 001 has critical vulnerabilities and 002 has medium and lower ones, which is why it allowed us to perform our deployment. If we go further, we can actually get details about all the vulnerabilities that were found.

If you're curious as to what we did for the security scan step, just go to the tasks manifest file in the tekton build pipeline Helm chart. Go to the app Docker security scan task, and there you will be able to see that we used the Trivy Docker image and ran the Trivy command to scan our image from the test repository. We checked for high and critical vulnerabilities, and we asked the command to fail if it did find such vulnerabilities.

So with that, we have reached the end of this session. I hope the session was able to help you get some understanding of how we can have better and more secure build pipelines and optimize the software development process. I have added
some links about this topic, so be sure to check them out. I hope to see you at another awesome event in the future. Thank you.