Good afternoon, everybody. Thanks for joining today. I'm Chris Van Tine, Chief Technologist for the West Region at Red Hat, and today I'm going to talk about a security state of mind: compliance and vulnerability audits for containers. First off, in regards to what's going on in the industry, a few years ago there were over a billion data breaches, and here are just some of the marquee brands that were impacted by these breaches in North America. And what were the leading causes of these breaches? First off, employees were not taking proper security measures in their environments. Secondly, there were outside breaches, and then unpatched or unpatchable servers. Typically I find when I talk with IT groups that it's those one-off servers on an older, unpatched version that turn out to be the root cause of the infiltration. Also, internal attacks by employees, something that's often forgotten about. When I first joined Red Hat about 13 years ago, I would talk with a lot of IT shops about security and what best practices they were following, and I usually got a lot of blank stares, because their security policy was the firewall, right? They set the policy on the firewall and that was their defense mechanism for their environment. With hybrid cloud, public or private, that wall is eliminated; the attack surface now spans multiple clouds and simply cannot be defined by a physical network firewall. The other point is that most of these discussions were with the operations teams. Security was not first and foremost on people's minds in the testing or development teams. And so the typical enterprise is really struggling between two vectors. One is innovation: how do they quickly innovate without impacting the overall execution of the corporation? In the real world, it's similar to the struggle of Ford versus Tesla, right?
Ford is great at executing an idea at large scale. They took the concept of a car and were able to mass-produce it. However, their innovation engine is challenged. Whereas Tesla, without a legacy, was built from the ground up for innovation, rapidly innovating. In fact, they're able to innovate so well partly because they've innovated in the ability to deliver software updates over the wire. However, they're having challenges executing at scale, mass-producing the vehicles. So how do you achieve that balance? That's really the promise of DevOps: enabling a corporation to rapidly produce new software at scale through cultural improvements, process improvements, as well as technology. From a cultural perspective, it really mirrors a lot of the attributes of an open source community, creating collaboration, transparency, and openness across the organization, typically in the form of small teams that own a particular service or application from development into production, allowing for streamlined efficiency in the development and operation of that service. Secondly, process improvement: rapidly being able to deliver and deploy new updates in an agile, continuous environment in a matter of hours or days rather than weeks or months. And this impacts the business, because they're able to rapidly innovate. Also, leveraging technology: early adopters of DevOps are typically embracing a lot of open source based solutions, whether it's the underlying OS, the infrastructure, or the development tooling. But what about security? You have this fast-moving process, and the typical thought is: wow, how do you make this secure? We have a manual approval process today that takes days for change control. I have a standby data center to test out a new change and then get feedback. How do I move to a DevOps environment that's rapidly moving, yet still maintain security? That's what's driving the evolution of DevOps into DevSecOps.
It means embracing security end to end as part of the culture: a security-first mindset. Also, in terms of process, integrating security in an automated way into the overall development process from dev into production; it's not just an operations concern. And it means embracing some technology, some tooling, to help automate the security in your DevOps environment. So what are the benefits of DevSecOps? First off, reducing the risk of deploying new updates, ensuring that they're secure and that you don't have vulnerabilities or a breach of your policies. Also, lowering the overall cost: if your environment has a low number of issues, it's going to be much more cost effective, because you're not constantly having to go back and fix things. Also, speed of delivery: the ability to rapidly deploy these updates at scale in an automated manner, and to react, because there will be issues, in a very responsive manner, in a matter of minutes or hours, to any vulnerabilities. We're going to do this through security automation, process optimization, as well as continuous security improvement. A key part of DevOps is continuous feedback: getting feedback to your developers, your testers, and your operations team, and being able to pivot and change course responsibly. Containers are a big part of the movement towards DevSecOps, because they enable the developer to package once and deploy anywhere. It's really a culmination of the shift to microservices. Decomposing that monolithic app into independent microservices allows you to quickly move and update a service without the huge bet of a monolithic release and the huge risk that a failure could impact the business. And DevOps enables the process of actually deploying that microservice quickly and rapidly into production across hybrid infrastructure.
Containers are a big part of that movement, and from a security perspective, they allow the developer to build their application or service with all the immediate dependencies inside the container image. I have everything that I need, I build it once in development, on my laptop or on shared infrastructure, and then it gets deployed into test and production as-is. I don't have to rebuild or reconfigure it as it moves from one phase to another, whether it's across public, private, virtual, or physical infrastructure. I can do that with a build file. It's a recipe, so I can reproduce that image and be consistent every time it's built. I can also share that knowledge, so I don't have to be the sole owner of it, and it's reproducible down the road. Also, containers give you the ability to ship and share the container image, whether that's across dev, test, and production or across different environments. That provides a single image that's the same across dev, test, and production, and the ability to run that instance on any type of infrastructure. It's very similar to the Java world, where you have a standardized archive, the JAR format, and a JVM as the standard runtime. In the Docker world you have a standard build file and a standard image format, and you run it on the standard Docker runtime, whether the app is Java, PHP, Go, etc.; any language, so it's polyglot versus Java-only. Secondly, in terms of application delivery via containers from a security perspective, the nice thing about containers is that they also abstract the developer from the underlying host. That frees developers up, from a security and operational perspective, to pull in the components they need for their application and not be tied down by the policy at the host level. This also allows the developer to have consistency of their application runtime across all their infrastructure.
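To make the "recipe" idea concrete, here's a minimal sketch of a build file; the base image tag, package, and application names are illustrative, not from the talk:

```dockerfile
# Base OS layer (pinned to an explicit version)
FROM registry.access.redhat.com/rhel7:7.4

# Middleware/runtime layer: install the Java runtime
RUN yum install -y java-1.8.0-openjdk-headless && yum clean all

# Application layer: copy in the JAR the developer built
COPY target/myapp.jar /opt/myapp/myapp.jar

CMD ["java", "-jar", "/opt/myapp/myapp.jar"]
```

Running `docker build -t myapp:1.0 .` against this file produces the same image every time, and that one image runs unchanged in dev, test, and production.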
Some of the key things when it comes to securing containers are, first off, images, builds, the registry, and CI/CD, integrating security into that process, as well as hardening the container host which the container instances run on. We'll walk through all of these. First off, container image security. As a best practice, as you move to containers from a monolithic application, you want a separation of your code, configuration, and data. Treat the container as immutable. What does this mean? It means the container image should contain your code and the necessary dependencies, the middleware or OS dependencies, and that gets built and put into the container image. Your configurations and your data should be abstracted out from the image. That way the image remains immutable, so you can scale it up or replace it very easily and efficiently. But also, from a security perspective, you may want to extract configurations such as passwords or your private and public keys. In Kubernetes, if you're using that to manage your environment, you could put configuration into a ConfigMap, or into a Secret for things like passwords. That keeps it separate, and when the container instance is launched, it can be exposed as an environment variable that your application consumes. From a data perspective, if you have persistent data, you can leverage your container application platform's ability to provision persistent storage, or leverage an external data service, and make sure your data is stored there. So: separation of your code, config, and data. The next important point in the move from a monolith to microservices in containers is that traditionally, if you're developing a Java application, the developer typically provides test and operations teams a JAR file with their code.
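To make the config/secret separation above concrete, here's a sketch in Kubernetes with illustrative names: a Secret holds the database password outside the image, and the pod injects it as an environment variable at launch.

```yaml
# Hypothetical names throughout; the password never lives in the container image.
apiVersion: v1
kind: Secret
metadata:
  name: myapp-secrets
type: Opaque
stringData:
  db-password: s3cr3t
---
# The pod consumes the Secret as an environment variable when it starts.
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: registry.example.com/myapp:1.0
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: myapp-secrets
          key: db-password
```

The image stays immutable: rotating the password means updating the Secret and relaunching, not rebuilding.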
As you move to containers, that process shifts to delivering a container image. And what's inside this container image now is not only the JAR, the application from before, but also the actual runtime, in the Java case the JVM, and the OS dependencies as well. So from a security perspective, who owns tracking when there's a security vulnerability, and who owns updating it in the container image? It's important to have that ownership and a process to address it, because there are ongoing security vulnerabilities for all the components being pulled into these container images. In this example, from left to right, you have a C, a Java, a Node.js, and a Perl/PHP application. The squares represent the different components that get pulled in as you build the container image. The little triangles with numbers indicate how many security errata notifications have been issued for that particular component; this is relative to the RHEL 7.0 Linux distribution. In the second column, you can see the JRE has a 66: there have been 66 and counting security notifications for the JRE. So it's very important to understand that security vulnerabilities are being disclosed all the time, and you need a process for them, which we'll talk about. The second thing is, as you're starting up container images in your environment, you want to make sure they're coming from a trusted source. You can address this by signing your images. Then you can have an automated check before starting something in your production environment to make sure it has a known signature. Now let's talk about container builds. We talked about the separation of responsibility for the different layers in the container image. In the build file, you may want to separate the ownership.
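On the signing point above: one way to enforce known signatures at pull time is Docker Content Trust. A minimal sketch, where the registry and image names are hypothetical:

```shell
# Enable Docker Content Trust so image pulls and pushes require valid signatures.
# (Assumes a registry with Notary-based signing support.)
export DOCKER_CONTENT_TRUST=1

# With the flag set, pulling an unsigned tag fails instead of silently succeeding:
#   docker pull registry.example.com/myapp:1.0

echo "content trust enabled: DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST"
```

Baking that export into your deployment tooling gives you the automated "known signature" check before anything starts in production.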
In a traditional environment, for instance, I have an operations team that owns the Linux Kickstart file for the base OS image. Then I have a middleware team that owns delivery of the middleware layer, maybe as a tarball. And thirdly, I have the application developers who are delivering a JAR file. Those are all different syntaxes, text files, or languages for delivering each component. With a container, I can now define those layers in the same build file, so I can collaborate easily and share and build from the same type of file. So even though the developer is defining the OS in the Dockerfile, operations can still own the security notifications and updates for that layer. My overall application is the combination of the OS, the middleware, and the application layer, yet I still have the separation of responsibility from a security perspective. Some best practices when it comes to container builds: treat the build file as a blueprint. It's the recipe for how the image was built, so down the road I can reproduce it if I need to. Also, when you're building and configuring an image, don't go SSHing into the instance, making changes, and then saving it, because then you no longer have a recipe or blueprint for how it was built. In terms of version control, make sure you're checking these build files into Git and have a consistent process around that; it also lets you collaborate, share, and reuse. Be explicit with the versions in the build file so that it's reproducible. And be aware that each RUN command generates a new layer, which adds extra weight to the image. In terms of registry security, here are some best practices. What's inside the container matters: we did a scan of the public Docker registry and found that 64% of the images had a high or medium severity known security issue.
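Two of the build-file practices above, explicit versions and layer awareness, in Dockerfile form; the package names and version numbers here are illustrative:

```dockerfile
FROM registry.access.redhat.com/rhel7:7.4

# Versions pinned explicitly so the build is reproducible down the road.
# One chained RUN instead of several: each RUN creates a new image layer,
# so combining related commands keeps the image lean.
RUN yum install -y \
        java-1.8.0-openjdk-headless-1.8.0.161 \
    && yum clean all
```

Checked into Git, this file is the blueprint: anyone on the team can rebuild the exact same image without ever SSHing into a running instance.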
Typically, as I talk with organizations, the development team starts out using containers by pulling down from the public repo. Eventually those images run in a production-like environment, yet operations is totally unaware that they were pulled from a public source and that they have security vulnerabilities, because there's no standard process for scanning them in the development cycle. So the first step in an organization's adoption of containers is typically setting up a private registry within the enterprise. This allows the operations team to create a trusted repo for containers within the enterprise. Another advantage is that when an issue comes up down the road with a legacy version, you can go back and have all the dependencies within your enterprise to recreate it. If you're building an image and it depends on third-party components stored in a public repo, how do you know that that version or that dependency will still be there 12 or 18 months down the road? So it's a good practice to make sure that all the dependencies necessary to build an image are actually stored in your private registry. Now let's talk about CI with containers. Typically in the DevOps world, you have the continuous integration / continuous delivery pipeline. The CI part involves the developer checking their source into Git, and then there's a continuous build where you're building the container image: first you take the source and build RPMs, then the RPMs produce the container images, and you store that artifact in the private registry within your enterprise. Then, when it comes to deploying that microservice out to the container environment, you pull the image from the registry and deploy at scale. So from a security perspective, how do you integrate security into the continuous integration aspect?
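The tag-and-push step into an enterprise registry might look like this; the registry host and image name are hypothetical, and the commands are guarded so they skip cleanly where Docker isn't present:

```shell
# Push a locally built image into the private enterprise registry so it
# becomes the trusted copy that test and production pull from.
REGISTRY="registry.internal.example.com:5000"

if command -v docker >/dev/null 2>&1 && docker image inspect myapp:1.0 >/dev/null 2>&1; then
  docker tag myapp:1.0 "$REGISTRY/myapp:1.0"
  docker push "$REGISTRY/myapp:1.0"
else
  echo "prerequisites missing; skipping push to $REGISTRY"
fi
```

Mirroring third-party base images into the same registry is how you guarantee a dependency is still rebuildable 12 or 18 months later.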
So first off, operations and your developers will be using the build file to define the image, checking it into a version control system such as Git so that you have a record and can reproduce it. But you also want to move towards reproducible builds for the container images themselves. You can accomplish this by leveraging a build image that's version-controlled as well, so you can go back in time and know exactly which build image was used. Also, by using a build image, you compile your application and, rather than leaving the application inside the build image, it creates a separate artifact, the container image, outside the build environment. So your resulting artifact doesn't have the build environment within it: it's lightweight, with a smaller attack surface, and it's reproducible as well. Reproducible builds. Once it's built, you store it in your image registry: build once, then distribute across your environments for testing and roll out to production. Another best practice as part of your continuous integration is to insert a security scanning phase into your CI process. Every time you build an image, put it through a scan to validate that there are no security vulnerabilities and that it conforms to your security policy. Another way to go about this is to use a private registry that triggers an automated scan when you upload the image. So how would you go about automating these scans? There are a variety of tools out there, whether it's Anchore or Black Duck. In this case I'm going to talk about OpenSCAP, which is open source, freely available tooling to help you with security vulnerability scans as well as scans against a security policy. OpenSCAP is a set of tooling, and first off it provides content: a list of known best practices for hardening your container image.
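One way to realize the build-image pattern described above is a Docker multi-stage build: the compile step happens in a version-pinned builder image, and only the built artifact is copied into the slim runtime image. Image tags and paths here are illustrative:

```dockerfile
# Stage 1: pinned builder image; all compilers and build tools live only here.
FROM maven:3.5-jdk-8 AS builder
COPY . /src
RUN cd /src && mvn -q package

# Stage 2: the shipped image carries the artifact but none of the build tools,
# so it is lightweight and has a smaller attack surface.
FROM openjdk:8-jre-alpine
COPY --from=builder /src/target/app.jar /opt/app/app.jar
CMD ["java", "-jar", "/opt/app/app.jar"]
```

Because both the builder tag and the runtime tag are pinned, you can go back in time and reproduce exactly what was shipped.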
It also provides tips for virtual and physical infrastructure as well, and content around known security vulnerabilities. With this content you can then leverage the tooling, the CLI or a daemon, to automate scans of your container instances or container images, as well as virtual and physical infrastructure. The result is a report showing you any known issues against security policy or known security vulnerabilities. You can then rectify an issue with a remediation script, whether it's a security policy issue or a security vulnerability. So let's drill down into some of the details of OpenSCAP. A first use case may be: hey, I want to scan for compliance. Are our password quality requirements set? Are obsolete services like telnet enabled? Is SSH properly configured? Is /tmp on a separate partition? I can use the oscap command-line tool to run an automated scan, and it's going to check, in this case, the image to see if it conforms with my security policy. Once the scan is completed, it generates an evaluation report showing me the different errors. In this case I can see that 34 checks passed and 33 failed, three of them high severity, and I can drill down. I can see all the different checks and whether each was a success or a failure. With each check I can drill down even further; in this case I want to take a look at the "set password strength: minimum digit characters" check, and I can see that this instance, or this image, failed. I get a script at the bottom that I could run against it to correct that configuration. Another use case is to scan for known vulnerabilities: which RPMs need updating, what is the criticality of each vulnerability, what is the vulnerability, and which CVE fixes have not yet been applied to this particular container image?
So again I can run a scan here: I run the oscap vulnerability scan against the image, and it generates a report showing me all the known issues. I can drill down into a detailed list and then actually apply the updated RPMs to that container image. Another use case is specific to containers: I can check whether a Docker image is compliant and whether it has been patched, or whether a running container is compliant or patched. I can run this against an image at rest or a running instance; I just install the Docker component of OpenSCAP and run the scan against the instance. There's also a workbench tool that lets me customize my security policy, so I can take a baseline default policy and customize it, adding or removing some of the checks. I can also integrate this with my physical or virtual machine installations of Linux by using the Anaconda add-on, so that from the get-go I can make sure an instance conforms to the security policy I've defined for my organization. Lastly, let's talk about application delivery with containers. One of the key differences here is that instead of passing a JAR file from dev into production, I'm passing the container image. I'm not rebuilding it; I build it once and then I put it through QA, staging, and production, and it contains my application, the OS dependencies, as well as the middleware dependencies. And in a CD process, over on the right side, when there is a security issue in production, how do I go about updating that component? Typically today I'd just go in and patch the production server, but in a container world you actually want to go back to development, regenerate that application as version N+1 containing the fix, and then push it through the entire process of testing and roll it out to production. So how do I deploy at scale?
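The two scan types above can be automated with the oscap-docker front end to OpenSCAP. The image name, profile ID, and content path below are illustrative, and the commands are guarded so they skip where the tooling isn't installed:

```shell
IMAGE="registry.example.com/myapp:1.0"   # hypothetical image

if command -v oscap-docker >/dev/null 2>&1; then
  # Vulnerability scan: checks the RPMs in the image against the CVE feed
  oscap-docker image-cve "$IMAGE" --report cve-report.html

  # Compliance scan: evaluates the image against an SCAP policy profile
  oscap-docker image "$IMAGE" xccdf eval \
      --profile xccdf_org.ssgproject.content_profile_stig-rhel7-disa \
      --report compliance-report.html \
      /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml
else
  echo "oscap-docker not installed; skipping scans"
fi
```

Dropping a step like this into the CI pipeline means every image is scanned on every build, not just when someone remembers.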
No longer do I have a single physical box or 10 VMs; I may have a hundred container instances, all individual microservices distributed across my infrastructure. I want to be able to take a version 1.0 that's out in production and rapidly respond by deploying an updated version without risking a failure of the overall environment. I can do this by leveraging various automated deployment strategies, such as a rolling update. In this case I have version 1.0 out in production; the developers found a security issue and produced version 1.2, and put it through CI testing. With the rolling update I can gradually roll out this new version to production, and as I gain confidence I can increase the number of nodes that are updated in my environment. So that's the rolling update. Another choice I have is a blue-green deployment. The blue environment may be installed with version 1.0, and there's a known security issue. Rather than replace my existing environment, I go ahead and set up a separate green environment, a logical environment that mirrors my production blue environment, so I can deploy with confidence and, if there is an issue, quickly roll back. So here I have version 1.0, and I go ahead and put version 1.2 in my green environment. I'm routing all my traffic to blue while the green environment is deployed side by side and I do my testing, and then I can switch the software load balancer over to version 1.2.
The downside of this is that I need more infrastructure to accomplish it, but the positive is that I now have high confidence, because this version is a mirror of my production environment. That reduces the number of issues; I can quickly test in a production-like environment and deploy, and if there is an issue I can roll back to version 1.0. So how would I go about automating the updates? Well, if I have a production instance out there built on layered images, my OS, my middleware, and then the top layer being my application, I can monitor my private registry to see if those dependencies are updated with a new version of the image, and then trigger an automated build, deployment, and rollout of that image update. As a developer, I don't have to worry about whether the OS image or the middleware image I depend on was updated. My operations team is tracking that and rebuilding the OS layer; my middleware team is tracking middleware security issues and updating that layer; and with this automation I can generate new versions of my service or application based on any updates to those dependencies in the private registry. I can make that as automated as I want: it can be an automated part of my CI/CD pipeline, leveraging a rolling update or a blue-green deployment in an automated fashion to push out that new security update.
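A rolling update like the one described can be declared on a Kubernetes Deployment; everything here (names, image, counts) is illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # take down at most one pod at a time
      maxSurge: 1         # add at most one extra pod during the rollout
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:1.2   # bumped from 1.0 to the patched 1.2
```

With this in place, `kubectl set image deployment/myapp myapp=registry.example.com/myapp:1.2` kicks off the gradual rollout, and the platform replaces pods one at a time.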
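The registry-watching automation can be sketched with an OpenShift BuildConfig whose ImageChange trigger rebuilds the application image whenever the base layer it depends on is updated; all names here are illustrative:

```yaml
apiVersion: v1
kind: BuildConfig
metadata:
  name: myapp
spec:
  source:
    git:
      uri: https://git.example.com/myapp.git
  strategy:
    dockerStrategy:
      from:
        kind: ImageStreamTag
        name: rhel7-base:latest   # the base layer operations maintains
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest
  triggers:
  # When rhel7-base:latest is updated in the internal registry,
  # rebuild the application image automatically.
  - type: ImageChange
    imageChange: {}
```

Chained with a deployment trigger, a patched OS layer flows all the way out to production with no manual steps.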
Container host security: some best practices around the hosts that you're going to run these container instances on. Linux is a key part of enabling security in a containerized environment. You have cgroups providing quality of service, and namespaces providing logical separation so that, for instance, the root user in a container is not the root user on the host; that's important so that if there's a security vulnerability, an attacker doesn't get access to all the containers on that node as well as the host. SELinux provides mandatory access controls, so that even a root process has to be explicitly granted access to objects on the system, such as a configuration file or /tmp. seccomp allows you to reduce the capabilities of a process with respect to the kernel, so that it doesn't have full-blown kernel access. And read-only mounts mean you don't have write access to file systems. So, some best practices: don't run your processes or container instances as root. Limit SSH access; if you need information from your container instance, create an API call. Use namespaces. Define resource quotas so you don't have the noisy-neighbor issue, and leverage cgroups to restrain each container instance's network, CPU, disk, and memory consumption. Make sure you're enabling logging and an audit trail, and applying security not just to your container instances but also to the host. Apply security contexts and seccomp filters so that you can reduce the capabilities a process has on the system. In terms of making sure you're making good progress in your evolution toward DevSecOps and its adoption in the enterprise, here are some things to track: first off, compliance score, deployment frequency, lead time, deployment failure rate, and MTTR, so that when you do have an issue you're able to respond quickly. And certainly be tracking overall service availability from a business perspective.
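Several of the host-protection practices above can be requested per pod in Kubernetes. A sketch with illustrative names: run as a non-root user, mount the root filesystem read-only, drop privilege escalation, apply the default seccomp filter, and cap CPU and memory so one container can't starve its neighbors.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  annotations:
    # Era-appropriate annotation for applying the runtime's default seccomp profile
    seccomp.security.alpha.kubernetes.io/pod: runtime/default
spec:
  containers:
  - name: myapp
    image: registry.example.com/myapp:1.0
    securityContext:
      runAsUser: 1001               # not root
      runAsNonRoot: true
      readOnlyRootFilesystem: true  # no write access to the image's filesystem
      allowPrivilegeEscalation: false
    resources:
      limits:                       # cgroup-enforced caps: no noisy neighbor
        cpu: "500m"
        memory: 256Mi
```

Combined with SELinux on the host, this layers the namespace, cgroup, and seccomp protections the talk describes onto every instance by default.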
All right, that's all I had today. If you have any questions, feel free to come down afterwards. You can also email me at cvantine@redhat.com, and I'm on Twitter as well. And tomorrow I have a couple of sessions going a little deeper on some of this from a Kubernetes perspective. Thank you very much. Any questions for Chris? Going once, twice... Right, that's a wrap, so that was our last