Good afternoon, everyone. My name is Chris Van Tijn, Chief Technologist for the West Region at Red Hat, and I've been with Red Hat for about 12 years. I work with strategic partners and customers up and down the coast who are adopting Red Hat's emerging technologies. Red Hat has certainly moved well beyond Linux into middleware, virtualization, containers, and cloud management. One of the hot topics is enabling companies to deliver capabilities to the marketplace more quickly, and that discussion usually leads to a talk about DevOps and enabling technologies such as containers and container management. With that transformation come a lot of implications for security, so we'll touch on some of those in today's talk.

First off, about 20 years ago, Andy Grove talked about how only the paranoid survive. That's not about security; it's about inflection points in the market. When there's huge disruption and huge amounts of change, a company has to make a decision: do they adapt, with the potential to thrive, or do they stay as is and potentially fall by the wayside? As we've seen in the marketplace, software is disrupting many different industries right now, whether it's retail, finance, media, or transportation, and that puts a lot of pressure on IT to keep pace and not become the bottleneck. So there's a movement towards DevOps in terms of how applications are developed, and a movement towards containers in terms of enabling applications to move easily from dev and test into production. DevOps is about a cultural shift, bringing together organizations that are typically siloed: dev, test, and operations. It's also about process, implementing a continuous integration and continuous delivery pipeline. And then there's the enabling technology for CI/CD; typically open source solutions are leveraged to build out this automated software factory that delivers software rapidly for the business and enables innovation.

In terms of security, though, that's a missing piece on that slide, and it's an important one. When I met with customers in my early days at Red Hat, they would say, well, we really don't patch our servers because they're all behind the firewall; we hide behind that. But if you've read the news in the last several years, you've seen a lot of brand names being compromised. The brands shown here represent over a billion data records breached in 2014 through security compromises. And if you look at the top reasons for security issues, about a third is employees not taking proper security measures, another third is outside breaches, and unpatched servers are a factor as well. So there are a lot of reasons behind these compromises, but security needs to be taken seriously, and that's why DevOps needs to include security too. There's talk now of DevSecOps: extending security into your culture and people, into your processes, and into your tooling. End-to-end security in the DevOps environment is critical.

Here are some examples. Does anybody recognize what this is a photo of? This is the latest Intel fab being built in Arizona. The reason I show it is that when I started my career at Intel about 20 years ago, they were going through the Pentium bug.
If you recall, the Pentium bug was a floating point unit error in the silicon, and it resulted in a lot of negative PR for Intel. It also resulted in almost a half-billion-dollar write-off, because they had to do a recall of the chips. So this was an inflection point for Intel, and they had to adapt and make a decision: how could they learn from this and get better as a company? They invested significant dollars in continuous integration and validation of their chip designs, not in the fab itself but in the software factory that front-ended it. As chip designers came up with a new design, their component of the overall chip, they would submit it to an automation cluster, which would schedule a job and validate that design with automated tests. Today they have tens of thousands of systems running these automated batch jobs to validate chips. So continuous integration is a key component.

Here's another factory. Does anybody recognize what this is a photo of? That's correct, that's the Tesla automobile factory, and the reason I show it is that not only is software automating the pipeline that produces these vehicles, but software is also transforming the vehicles themselves after delivery to the owner. For the first time you can go to an automobile lot and buy a car knowing that car will get better 6, 12, 18 months down the road. They're able to send software updates over the wire to the vehicle, improving the zero-to-60 time, battery efficiency, or the applications in the center console. How many of you have seen a vehicle recall in the news? Seems like every week. What if it was a security issue in an autonomous driving vehicle? You would want a way to deliver a security update to that vehicle quickly. So continuous delivery capability is a key component of this new software factory you need to build out: CI and CD. On the flip side of that, I own a Ford Fusion and just got a recall notice because the network that the onboard cell modem uses is being decommissioned after a certain point, so they will no longer be able to push those updates. To a certain extent, the safety exposure ends up shifting onto upstream providers.

In terms of DevSecOps and the CI/CD pipeline, there are a lot of ways you can go about it, writing scripts and doing it yourself, but what about getting that factory out of the box? That's what a container application platform gives you, and that's what Kubernetes is all about: automating this software factory so you have continuous integration and continuous delivery, not only of your software applications but of bug fixes and security updates too, and so you can deliver those updates in a matter of minutes versus the hours, weeks, or months a typical enterprise takes to deliver new capabilities. Containers are a big part of the security story because they enable the developer to package up their application with all of its immediate dependencies and abstract it from the underlying host operating system. This is important because it also allows the operations and security teams to keep applying compliance and security rules without keeping the developer from moving quickly with their application. So packaging your application once and then being able to deploy it as is in your dev, test, and production environments is a key part of containers.
If you take a look at Linux containers, the basic flow is build, ship, and run. From a developer perspective, you define a build file with the different components you want to go into the container, pull them from a trusted repository, and out comes an artifact: a container image. You can then share that container image via a registry, whether it's private or public, and deploy that artifact exactly as it is to your test environment or to production, across physical, virtual, private, or public cloud infrastructure (there's a small sketch of this flow below). And when it comes to this software factory, there are a lot of security aspects in terms of securing and hardening your container environment, from images all the way down to the hosts, the network, storage, APIs, monitoring, and federated clusters. In this talk I'm going to focus on that top level and talk about securing container images, registries, builds, CI/CD, and the container hosts.

So let's start with container image security. First off, as a best practice when you're building out your containerized applications, you want to architect them with security in mind: ideally, separate your code from your configuration and from your data. That lets you deploy these applications without hard-coding your security configuration or sensitive data into them, so you want to separate these out as much as possible. Now when it comes to the actual container, whether it holds a C, Java, Node.js, or PHP application, you want to look at what's going into it. If you build a Java application, for instance, it'll pull in the JRE, Bash, and glibc. When you pull that image down from your registry, it may have security issues; in this example, the little triangle next to the JRE has a 66 in it, which means there have been 66 security notifications for the JRE since RHEL 7.0 was released. So not only do I need to worry about what's in the container at the time I pull it into my environment, I also need a regular process in place to scan those images and know what new security issues are popping up, because security errata are released all the time. Also, as you deploy these container images out into your environment, you want to make sure they're trusted images: sign the images and then verify that signature before you run them in your container environment, so you know the image is authentic and hasn't been compromised.

In terms of container registry security, Red Hat scanned Docker Hub at one point in time and found that 64 percent of the images on the public hub have a critical, high, or medium priority security issue: things like Shellshock, Heartbleed, and POODLE. If your developers are pulling these into the environment, you're exposing yourself to a variety of security issues, so you need to work together with operations and have an overall security scanning process. A good best practice is to set up your own private registry in the enterprise. This lets you control which content is made available within your enterprise.
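To give a feel for that build, ship, run flow, here's a minimal sketch using the Docker CLI; the registry host and image names are made up for illustration, not part of any particular setup.

```
# Build: produce the container image artifact from the build file in this directory
docker build -t registry.example.com/myteam/myapp:1.0 .

# Ship: push the image to the enterprise's private registry
docker push registry.example.com/myteam/myapp:1.0

# Run: pull and run the exact same artifact in dev, test, or production
docker run -d --name myapp registry.example.com/myteam/myapp:1.0
```

The important point is that the tag pushed in the ship step is the same artifact pulled unchanged into every environment afterwards.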
With your own registry you can also keep track of the usage of these components, version control, et cetera, within your enterprise, holding custom-built images as well as third-party ones. So how do you integrate this with your continuous integration process? If you look at a typical CI/CD process in a container-based environment, it looks something like this. You have your typical dev, UAT, and production environments, but in the container world these aren't physical environments, they're virtual. Your dev environment is a mirror of production, and likewise your UAT environment, and containers allow you to set up these virtual environments using your container management platform. From a flow perspective, your developer checks code into source code management, maybe GitHub, and that triggers a build of the container image. The build pulls down the base images and builds out the target container image, putting the application together with the application runtime and the dependencies it needs, and the resulting artifact is stored in the image repository.

Now how do you go about managing the security of this container image? The image may have multiple layers: the base image, which is the core operating system, maybe RHEL 7; then your middleware layer, maybe a Tomcat application server; and then the third component, your actual application. So how do you go from the separation of responsibilities you have today to a container where it's all put together, and still have ownership and delegation for security tracking, security updates, and so on? With the container build file, you can still have the OS owner specify the operating system component, the middleware team own the application server layer, and the application developer own the application logic that sits on top of that platform. The difference is that instead of using a kickstart file, a tarball, and a JAR file, we're all now talking the same language, because we're using the Linux container build file as the blueprint (there's a small sketch of that layering below). This normalizes things and helps these cross-functional teams collaborate.

With this container model you can also trigger rebuilds on these layers. You can watch any one of these layers, and if there's a downstream dependency, trigger an automated rebuild. So maybe there's a security issue in the core build: you would watch for an updated core image upstream in your registry and, if there is one, automatically trigger a rebuild of any application that depends on that particular base image. You could then take it to the next level and automatically deploy the rebuilt application into production, with the updated version and security fix, after it goes through testing. Integrating security scans into your continuous integration pipeline is an important step, so you may want to add an explicit stage for it. This is a Jenkins CI/CD pipeline; in build one, build two, and build three you can see that as a change moves from commit to code review to unit tests, we've added a stage for security (also sketched below).
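Here's a rough sketch of how that split of ownership can look as a chain of container build files; all of the image names and paths are illustrative, and the exact packages would depend on your base OS.

```
# Dockerfile.base -- owned by the OS team; pushed as registry.example.com/base/rhel7-hardened
FROM registry.access.redhat.com/rhel7
RUN yum -y update && yum clean all

# Dockerfile.middleware -- owned by the middleware team; pushed as registry.example.com/middleware/tomcat
FROM registry.example.com/base/rhel7-hardened
RUN yum -y install tomcat && yum clean all

# Dockerfile.app -- owned by the application team; just the application logic on top
FROM registry.example.com/middleware/tomcat
COPY target/myapp.war /usr/share/tomcat/webapps/myapp.war
```

Each team maintains its own file, and a fix in the base layer flows down to everything built on top of it.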
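And here's a sketch of what adding that security stage to a Jenkins pipeline might look like; the stage names, image names, and scanner invocation are placeholders rather than a prescribed setup.

```
// Illustrative Jenkinsfile with a security scan stage added between test and push
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'docker build -t registry.example.com/myteam/myapp:${BUILD_NUMBER} .' }
        }
        stage('Unit Tests') {
            steps { sh 'mvn test' }
        }
        stage('Security Scan') {
            // Scan the freshly built image for known CVEs before it goes anywhere else
            steps { sh 'oscap-docker image-cve registry.example.com/myteam/myapp:${BUILD_NUMBER}' }
        }
        stage('Push') {
            steps { sh 'docker push registry.example.com/myteam/myapp:${BUILD_NUMBER}' }
        }
    }
}
```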
What this stage does is run an automated scanning tool, which we'll talk about in a minute, against the container image that was built, to make sure it doesn't have any known security vulnerabilities and that it conforms to your security policy. So how would you go about scanning for compliance and vulnerability audits? You could use an upstream tool such as OpenSCAP; there are other solutions out there too, Black Duck Software has one, and Anchore is a startup with a scanner as well. There are a lot of tools; the important thing is that you integrate some sort of automation into your CI pipeline so you're scanning for container image issues.

In the case of OpenSCAP, this upstream utility comes with a security guide for the target operating system, which has a list of security policies. You also have the CCEs, which give you a specific checklist that the government has issued as guidelines for hardening your particular operating system; this may be minimum password length, or making sure that legacy services like TFTP are disabled. And then there are CVEs issued for your target operating system, which correspond to security errata, notifying you that particular components in that operating system may have a security issue, what it is, and how to address it. So that's the content portion of OpenSCAP. There's also a set of tooling available, command line tools, daemons, and GUI tools, to build the checklists and the policies and then run the scans. And the third component is the reporting aspect, generating reports, which we'll go into.

Some of the use cases for OpenSCAP are scanning for compliance: are password quality requirements set, are obsolete services such as Telnet enabled, is OpenSSH properly configured, is /tmp on a separate partition? To use OpenSCAP you can use the command line utility, and this can be integrated with your CI flow (example commands are sketched below). When you run it, it will give you a list of what passed or failed on the target, whether that's a host operating system, a virtual machine, or a container, and then a report showing the status of that scan. You can drill down and see the overall scoring; in this case there were 34 checks that passed and 33 that failed, three of them high priority. I can then drill down into all the different checks that passed or failed and see what each check was about. In this case I'm looking at the minimum password length, and I can see that it failed, so it wasn't set properly in the image. Even more importantly, there's a remediation script to rectify the issue on the target image.

Another use case is scanning for known vulnerabilities: checking for known security issues, what RPMs need updating, what the criticality is, what the vulnerability is, and what CVEs haven't been applied yet. Likewise, you can run a command line scan, and it will go through all the known security vulnerabilities and see if the target has any hits. After that, you can go into the HTML report, see a summary of the scan, and then see a list of each individual failure and what the actual issue is. And then you can update your system as well.
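To make that concrete, here's roughly what those two kinds of scans look like from the command line on a RHEL 7 host. The profile name, content path, and feed location below are common defaults from the scap-security-guide package and Red Hat's OVAL data; treat them as examples and adjust for your own environment and policy.

```
# Compliance scan: evaluate the host against a profile from the SCAP Security Guide
oscap xccdf eval \
    --profile xccdf_org.ssgproject.content_profile_pci-dss \
    --results compliance-results.xml --report compliance-report.html \
    /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml

# Vulnerability scan: evaluate installed packages against the published CVE/OVAL feed
# (the feed location may change over time)
curl -O https://www.redhat.com/security/data/oval/com.redhat.rhsa-RHEL7.xml
oscap oval eval --results cve-results.xml --report cve-report.html com.redhat.rhsa-RHEL7.xml

# Generate a remediation script for the rules that failed the compliance profile
oscap xccdf generate fix \
    --profile xccdf_org.ssgproject.content_profile_pci-dss \
    --output remediate.sh \
    /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml
```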
In terms of containers specifically, OpenSCAP lets you check whether a Docker image is compliant or whether a Docker container is compliant, so offline or online: the container at rest, or actually running on a particular host system. There are scan commands you can run to see whether an offline image has a known vulnerability or a security issue against your policy, and likewise, for containers actually running on a target system, you can check in real time whether they have any issues (a couple of example commands are sketched below). So that's OpenSCAP from the command line. There's also a GUI workbench for a variety of operating systems where you can define your policy and your checklist, so you can remove or add particular checks depending on what your security policy is at your enterprise. And there's an installer add-on: if you're building Linux systems with Anaconda, you can apply this at install time. For instance, a default install only scored 64 percent, versus nearly 100 percent compliant when the system was installed with the policy applied.

We talked about continuous integration; what about continuous delivery with containers? Here we're focusing more on the right side: taking the new capability and moving it from source code into dev, into UAT, into production. How do I do that in a consistent, reliable, automated manner with containers? In a typical CI/CD flow with a non-containerized application, you move the JAR file from development into test and into production; with a container image, for that same application, you're moving the whole image. So you build it once and deploy it everywhere. I build it in development, then take that exact image the developers built and move it into QA, staging, and production. QA isn't rebuilding or reconfiguring it, and it's not rebuilt or reconfigured in production either.

If you're using something like Jenkins and Kubernetes, you can leverage the built-in capabilities to do deployments, and there are a few different deployment strategies that come out of the box with Kubernetes. One is the rolling update deployment. This is when you have a cluster of your application, version one running as multiple container instances on three different hosts, and you want to deploy the new version 1.2. In test, I've created a virtual instance of my test environment and run those tests. Then within Kubernetes I don't have to script anything; I can just issue a command and it will do a rolling update of my production environment (also sketched below). So think about a security patch: I need to get it out there quickly and roll that update across my cluster without bringing it down, and I can do that because the load balancer is automatically updated as each host picks up the new version. That's a rolling update.

Another deployment strategy is blue-green. Here, instead of doing a rolling update in place, you have one virtual instance of the cluster running version one, and when a security issue comes out that you want to roll out quickly, rather than updating the existing environment you create a whole new virtual environment for the new version with the updated security patch. So after the security patch goes through dev and test, it comes into production, and here it's being tested.
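For reference, these are the kinds of commands I mean, using the oscap-docker wrapper that ships with the OpenSCAP utilities; the image and container names are placeholders.

```
# Offline: scan an image at rest for known CVEs
oscap-docker image-cve registry.example.com/myteam/myapp:1.0

# Offline: check the same image against a compliance profile
oscap-docker image registry.example.com/myteam/myapp:1.0 xccdf eval \
    --profile xccdf_org.ssgproject.content_profile_pci-dss \
    /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml

# Online: scan a container that is actually running on this host
oscap-docker container-cve my-running-container
```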
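Before finishing the blue-green example, here's a minimal sketch of the rolling update strategy just described, expressed as a Kubernetes Deployment; the names and image tags are illustrative.

```
# deployment.yaml -- three replicas, updated a host at a time so the service stays up
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myteam/myapp:1.0
```

```
# Roll the security-patched 1.2 image across the cluster with a single command
kubectl set image deployment/myapp myapp=registry.example.com/myteam/myapp:1.2
kubectl rollout status deployment/myapp

# If something goes wrong, rolling back is just as simple
kubectl rollout undo deployment/myapp
```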
Coming back to the blue-green example: Kubernetes created this whole new cluster for version 1.2, and the old cluster is still virtually there, so if there were an issue I could roll back to it just by updating the software load balancer. So those are blue-green deployments and rolling updates: deployment strategies in your container farm for getting security patches out quickly.

Next, let's talk about container runtime security. This is a key component: you have these images that have been scanned and are ready for production, and you want to make sure they're running in a secure environment. Here are some best practices for running your container images on a container host in your cluster. First off, try not to run the container as root. If there's a security issue in a container that's running as root and it gets access to the host, it's root there too, and then it could get into other containers as well. You also want to limit SSH access to your containers; try to access everything through an API. If you need logs or any other information from these containers, try to get it through API access, so go SSH-less. You also want to use namespaces to set up good isolation and separation, and define resource quotas so you don't have a noisy neighbor consuming resources, whether that's storage, network, CPU, or memory. Then enable logging: Kubernetes has a way to turn on logging and aggregate it so you can feed it into Splunk or whatever system you'd like to use to analyze it, and that will help you identify security issues and can give you proactive notification. You also want a best practice around applying security errata, not just to the container images but also to the container hosts in your cluster, so it's very important to have a standard procedure for that. And then apply security contexts and seccomp filters; Kubernetes out of the box allows you to apply these so that you limit what a container can do on the host in terms of kernel access (a small sketch of this appears below). That provides an extra level of security and ensures that if there is a security issue within a container image, the authority and access available in that compromise are minimized, limiting the breach.

So in summary, in terms of securing your container environment: we talked about container scanning, really a critical thing to implement and integrate into your CI/CD process. Above and beyond that, there are different aspects of Kubernetes that give you security control around network isolation with network namespaces. There's storage, with the ability to mount both local and shared storage and to apply quotas and access controls so that containers can only access what they're authorized to; that's very important as you move your brownfield applications into a container environment. As we go to an API world where everything has an API, how are we controlling access to those APIs, and how are we limiting the volume of requests from a quality-of-service perspective? There are capabilities around that, along with centralized monitoring and logging to help with security proactively, so you get visibility into potential issues based on trends.
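To make the security context and seccomp point concrete, here's a minimal sketch of a pod spec applying several of the practices above; exact field names vary across Kubernetes versions, and the image name is a placeholder.

```
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  annotations:
    # Older releases enable seccomp via this annotation; newer ones use
    # securityContext.seccompProfile instead
    seccomp.security.alpha.kubernetes.io/pod: runtime/default
spec:
  containers:
  - name: myapp
    image: registry.example.com/myteam/myapp:1.2
    securityContext:
      runAsNonRoot: true              # refuse to start if the image wants to run as root
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]                 # limit what the container can do against the kernel
    resources:
      limits:                         # keep a noisy neighbor from starving the host
        cpu: 500m
        memory: 256Mi
```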
And then, finally, there's leveraging cluster federation to isolate applications from one another, even enabling you to run one cluster, or part of a cluster, in a public cloud and another in a private cloud, with an aggregated central API across those multiple clusters. So that's securing your container environment, and that's all I had for today. I want to thank you for attending today's session. That's my email address; feel free to send me an email if you have any questions. I'm also on LinkedIn under Chris Van Tijn, and I'll be around after the session if you'd like to ask anything. Thank you so much.