So we have Chris, who'll be presenting on security with Kubernetes. Thank you, Chris.

Good morning, everyone. Chris Van Tuin, Chief Technologist for the West Coast at Red Hat, and I'm going to talk about continuous security with Kubernetes. First off, Andy Grove, back in the 90s, talked about how only the paranoid survive. It wasn't about security; it was about strategic inflection points, when there's a massive amount of change in the industry and a company has to decide: do they pivot and change course, or stay as is? Many, many companies are having to make that decision across all different industries right now. But in terms of security as well, we're seeing a lot of disruption, and there's a need to make some changes in how we approach security in the enterprise.

First, there's certainly a lot of adoption of DevOps, right? The movement to speed up innovation and have a short feedback loop. This contrasts with security, which is often viewed as a bottleneck. So how do we keep accelerating delivery of applications yet continue to be secure? We'll talk about that. Second, there's the evolution in how applications are developed with cloud-native applications. These are typically distributed across one or multiple environments, and the network is increasingly important. So how do we address security in a highly distributed architecture with lots of microservices? And third, there's the movement toward hybrid cloud environments, where you may have your development environment in public cloud and your production environment in your private cloud. So how do we manage security? We no longer have control of the perimeter in the physical data center; it spans multiple environments. This has many security implications as well. 
And so there's a big movement towards DevSecOps, right? You've heard about DevOps; DevSecOps is about integrating security into DevOps, not slowing down the business and not putting the business more at risk, but integrating security into the culture, the process, and the tooling, end to end. The value of DevSecOps is all about reducing your risk, lowering cost, and speeding up your delivery and reaction time. That's done through automation, process optimization, and continuous security improvement.

Containers are a big part of this because they provide a standard runtime and a standard image format that let you package your application once in a container and then deploy it across any environment: physical, virtual, private, or public cloud. That gives you consistency and reduces the risk of a vulnerability or change being introduced along the pipeline. In terms of technology, we're seeing Kubernetes provide a lot of the value around DevSecOps: automating not only the delivery of your applications and microservices to a clustered environment at scale, but also becoming the security platform that can span physical, virtual, private, and public cloud.

Kubernetes itself provides automation, and some of its key characteristics and value are these. One, it provides orchestration of your microservices at scale. Two, it provides automated health checks to ensure a microservice keeps running, so you don't get the page at 2 a.m.; it will actually detect and replace a broken container or pod. And three, it will auto-scale your services to meet the load being placed on them. But what about security? Kubernetes can be viewed as a security platform because it can standardize your security practices across all the providers: bare metal, virtual, private, and public cloud. 
So rather than having security per provider, you can abstract your security processes and apply them consistently across all of these environments. Now, the key concerns with container security are two-fold. One is issues around container images: folks are pulling them down from public repositories, and for example, on the far left, there was a case where crypto-mining code was inserted into images that sat on the public repo for over a year, unaddressed. So folks are pulling down images that are tainted and have security issues. The other is issues with the configuration and setup of Kubernetes environments: Tesla and Weight Watchers both had vulnerabilities in their Kubernetes dashboards, potentially exposing proprietary data to the public. So it's very important to make sure you're addressing security in your container environment.

Today we're going to talk about some best practices for securing your environment, starting with builds, images, the registry, container hosts, and CI/CD.

So first off, container builds. The basic flow for building out your containers is build, ship, and run. You define the spec in your build file; this is your blueprint for how to build the container image. Then you share it by pushing it into a registry, and you can pull it down across your private, public, physical, or virtual environments and run it on a standard runtime. One of the key value propositions of moving to containers is getting everyone to speak the same language when it comes to producing your application. Today, you typically have your operations folks using a kickstart file, your middleware team providing a tarball with their middleware layer, and the application developer providing a JAR file. With containers, you can all adopt standard tooling, such as a Dockerfile. 
Now everyone can define their particular deliverable in the Dockerfile, and you can build the resulting container image from multiple layers, with each layer defined by that build file. This lets you collaborate, share, and use the same tooling and the same language, so you get better communication and faster delivery.

In terms of container builds, some best practices around security. First off, treat that build file as a blueprint. One thing to note is that when you pull down container images, most of them expect to run as root, and we certainly don't want to run root processes on our host systems, because they expose not only your container and the host, but other containers, to potential breaches. So specify a user in your build file. Second, don't log in to build and configure your container instance. Don't SSH in and then save the result, because then you don't have a recipe, a record of how and what change was made. Also, version-control the build file: put it into a source repository so you can go back in time and have a record of it. Be explicit with versions in your build files: don't use latest, actually put, for example, version 1.1 as the tag for that image. And keep in mind that each RUN instruction creates a new layer, so you want to limit its use from a performance perspective.

Next, container image security; let's talk about some best practices there. First, on the left: today the application developer is typically just delivering a JAR file. In the container world, they're delivering much more than that: the application, the language runtime, and the OS dependencies, right? So what does that mean from an ownership perspective? We talked about having the different layers and everybody owning their own layer previously. 
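As a sketch, the build-file practices above might look like this in a Dockerfile. The base image, package, and paths here are hypothetical, not from the talk:

```dockerfile
# Pin an explicit base image tag rather than :latest
FROM registry.example.com/rhel7:7.5

# Chain related commands in a single RUN: each RUN instruction creates a new image layer
RUN yum install -y java-1.8.0-openjdk && \
    yum clean all

COPY target/app.jar /opt/app/app.jar

# Create an unprivileged user and switch to it so the process never runs as root
RUN useradd -r -u 1001 appuser
USER 1001

CMD ["java", "-jar", "/opt/app/app.jar"]
```

Because the Dockerfile is version-controlled alongside the source, every change to the image has a recorded recipe rather than an undocumented SSH session.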
And so when it comes to security, those folks will be responsible for tracking the security issues for their particular layer.

Another best practice is to treat containers as immutable. What does this mean? The container image should contain just the application and its immediate dependencies at the app and OS layers; extract your configuration and your data out of that image. Kubernetes provides objects such as ConfigMaps and Secrets to store your configuration. A ConfigMap can be used to set environment variables or to provide a flat file that's mounted into the container at runtime. So if you want a string or an IP address set in an environment variable, you can do that with a Kubernetes ConfigMap, so it's not statically stored in the container image, which reduces security risk as well. For your data, you want to leverage a data service or a persistent volume. That could be a traditional SQL service, or a persistent volume can be used to store configuration and data that persist from one container instance to the next. And lastly, when it comes to container images, the best practice is to sign your images so you know where they're coming from and can validate whether they're from a trusted source. In Kubernetes, you can actually check that an image is signed before the kubelet will launch it and create an instance on a host system.

Next, container registry security, some best practices here. At Red Hat, we did a scan of the public Docker repo and found that about 64% of images had a high or medium severity vulnerability. So when developers pull these down in the enterprise, they're pulling in images that are already tainted. One of the best practices, and usually the first step we see with enterprises, is to set up a private registry: a trusted source for content within your enterprise, with an audit trail. 
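The ConfigMap pattern described a moment ago might be sketched like this; the names and values are illustrative:

```yaml
# A ConfigMap keeps configuration out of the immutable image
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  BACKEND_HOST: "10.0.0.12"
---
# The pod pulls the value in as an environment variable at runtime
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.1
    env:
    - name: BACKEND_HOST
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: BACKEND_HOST
```

Secrets work the same way for sensitive values, and mounting the ConfigMap as a volume gives you the flat-file variant.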
Also, a private registry allows you to go back in time and have all the dependencies, if you're storing them locally. If you're accessing dependencies in a public registry, you don't know whether, six or twelve months down the road, that dependency will still exist at that explicit version. Having a private registry lets you retain that as well.

Next, container host security. Containers are Linux. And the reason that matters is that, by default, containers are not very secure on their own; Linux provides many security features to improve security around your container processes, including cgroups, namespaces, SELinux, seccomp, and read-only mounts. We'll go through those.

First off, cgroups provide resource isolation. How do you provide quality of service in a multi-tenant environment? That's what cgroups deliver. They ensure that CPU, memory, network, and storage are guaranteed at certain levels to each process or container running on a host, so you don't have the issue of one process consuming everything and not leaving enough for the other processes or containers on that system. Namespaces are another security feature. With PID namespaces, a container only sees the processes within the container, not the processes of the host or other containers, so you get that isolation. Network namespaces provide isolation as well, so I can confidently deploy, for instance, 13 applications with 13 different security networks and have them isolated and living happily on one system in a secure manner. SELinux provides mandatory access control for a process. It essentially forces you to explicitly define what that process has access to on the system, so even if it's running as root, it doesn't get access to anything that's not explicitly stated. That provides a higher level of security for your container. 
And seccomp provides secure computing mode, restricting a process's capabilities by attaching a system call filter. There are many different system calls a process can make; if you limit these with a filter, you improve the security of your container environment and reduce the attack surface by taking those capabilities away from the process. And then finally, read-only mounts: the ability to mount file systems read-only so that a rogue process or vulnerability doesn't have the ability, for example, to change kernel parameters on the system in /proc or /sys.

So, some best practices around the host. Don't run as root. Limit SSH. Use the namespaces. Be sure to define resource quotas: Kubernetes allows you to define quotas to ensure you don't over-consume and end up with a security issue because the disk is out of space. Enable logging. Apply security not just to your container images and instances, but also to the hosts; you need to continue those best practices there. Apply seccomp filters, and make sure that in production you're running unprivileged containers, read-only as well, and we'll talk about that.

In terms of your container runtime engine, there are many choices out there. CRI-O is a relatively newer solution on the marketplace. The Container Runtime Interface, CRI, is the Kubernetes API for container runtimes to integrate with the kubelet, which runs on your nodes. CRI-O is an open source container runtime that is OCI compliant; OCI provides a standard image format and a standard runtime. But CRI-O is built from the ground up in lockstep with Kubernetes and the CRI, with security as a first priority. It's a community effort with contributors from IBM, SUSE, Intel, Hyper, and Red Hat. So this is an alternative runtime that's more security-minded, and one of the key things it addresses is that it provides a read-only mode. 
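Several of the host-level practices above (unprivileged user, seccomp filter, read-only root file system, resource limits) can be expressed directly in a pod spec. A sketch, assuming a reasonably current Kubernetes version for the `seccompProfile` field; names and values are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: locked-down-app
spec:
  securityContext:
    runAsNonRoot: true          # refuse to start if the image wants root
    runAsUser: 1001
    seccompProfile:
      type: RuntimeDefault      # attach the runtime's default syscall filter
  containers:
  - name: app
    image: registry.example.com/app:1.1
    securityContext:
      readOnlyRootFilesystem: true      # no writes to the root file system
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
    resources:                  # cgroup-backed guarantees and ceilings
      requests:
        cpu: "250m"
        memory: "256Mi"
      limits:
        cpu: "500m"
        memory: "512Mi"
```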
Traditionally, with the Docker runtime, a process basically has read-write access to the container's root file system. So if I'm a security vulnerability, I could install onto the root file system and potentially make modifications to it. If I use CRI-O in read-only mode, it mounts the root file system read-only; even if I tried to attack the root file system, I wouldn't be able to write, and the attack would be prevented from installing onto the root file system of my container. So that's another security protection available out there that you want to keep in mind.

Next, continuous integration with containers. We've talked about security best practices for images and the runtime, but what about inserting security into your CI/CD pipeline? One of the things we've found with security issues in containers is that it's not just a matter of whether there's an issue now, but on an ongoing basis: I need to be continually concerned about whether there's a security issue in my container. In this example, from left to right, we have C, Java, Node.js, and Perl/PHP applications, and it shows the dependencies being pulled in as I build each container image. The second column is a Java app; it pulled in the JRE. What you may not be able to see is that the triangle shows 66: there have been 66 and counting security notifications for the JRE since RHEL 7.0 was released. So for the layer I own, I need an ongoing process to scan that layer and make sure there aren't security issues there, and I want to integrate that into my overall CI/CD process.

So here's an example of CI/CD across dev, test, and production. I check in my build files and my source code, and version-control them. And I want a reproducible build process; the way I can do that is by leveraging a build image. 
A build image gives me a version-controlled build environment that I can go back in time and leverage, and that ensures I have a known state when I'm building my images. I can then produce a resulting target image containing my application and just the necessary dependencies, push that into the image registry, and build it once and push it to test and on into production.

Also, as part of this CI/CD process, here's an example of a Jenkins pipeline. I want to insert a security scan into my overall pipeline, so typically you add a security phase as part of the CI process: every time I build an image, it gets scanned for security vulnerabilities. I can do this in a couple of different ways. One is to leverage a scanning tool such as OpenSCAP, an upstream open source solution, which provides the ability to scan for vulnerabilities as well as check for security compliance. I can define a security standard, such as a minimum password length for a user, and make sure that's checked too, and I can get reports from an auditing perspective as well. So I may want to insert that into my Jenkins pipeline. I could also leverage the automated scanning in the private registry I'm using: with some of them, a push to the registry also triggers a security scan. Quay, for example, has the Clair scanner built in.

How about continuous delivery with containers? One of the key differences in moving to a container environment is that today, in a production environment, if there's a security issue, the operations team will typically go out and patch the running virtual machines or physical systems to eliminate the vulnerability. In a container environment, I now have a software factory that lets me push from development into production in a matter of minutes. 
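A security phase in a Jenkins pipeline might be sketched like this as a declarative Jenkinsfile; the image name and the OpenSCAP invocation are illustrative assumptions, not taken from the talk:

```groovy
pipeline {
  agent any
  environment {
    IMAGE = "registry.example.com/app:${env.BUILD_NUMBER}"
  }
  stages {
    stage('Build') {
      steps { sh 'docker build -t $IMAGE .' }
    }
    stage('Security Scan') {
      steps {
        // Scan the freshly built image for known CVEs; a non-zero
        // exit code fails the pipeline before the image is pushed
        sh 'oscap-docker image $IMAGE oval eval --report scan-report.html rhel7-oval.xml'
      }
    }
    stage('Push') {
      steps { sh 'docker push $IMAGE' }
    }
  }
}
```

Registry-side scanning (for example, Clair in Quay) complements this by re-scanning stored images as new CVEs are published.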
So rather than patching production, I go back to development and build version n+1 with that security fix. I build it once, put it through QA, put it through the security scan, and then push it into production. I don't patch the container instance in production; I go back to development.

And how would I go about deploying at scale? I no longer have one physical machine or ten VMs; I may have a hundred or more container instances, and I need help automating that at scale. So, continuous delivery deployment strategies: Kubernetes has automated strategies built in, such as Recreate, rolling updates, blue-green deployments, and canary with A/B testing. We'll talk about each.

First off, Recreate is the simplest way to deploy at scale with Kubernetes. You have your existing version out in production, there's a security issue, and I need to get version 1.2 out. I put it through my testing cycle and through the automated scan. Then, with Recreate, I just bring down the existing cluster, simple as that, and deploy a new one with the new version. Why would I do this? It's simple. I don't have to coordinate with the developers; they don't have to be aware of it, and they don't need to make sure their APIs or data are in sync. The negative is there's downtime. So what about when I need zero downtime? That's when I may want a rolling update or blue-green, so let's talk about those.

With a rolling update with zero downtime, I have my existing cluster out there, and I'm testing that new security patch in 1.2. I incrementally roll it out across my cluster one instance at a time, and I can leverage the Kubernetes health check to make sure a new instance doesn't come online before it's ready. For example, I might want a health check that makes sure the front end is receiving a response from the back end before it's added into the load balancer. 
So Kubernetes will add it into the load balancer once that health check has completed, and gradually roll it out across my cluster. I can control this, or have it done in an automated manner, until it's at 100%. The advantage of the rolling update method is zero downtime: I don't have to bring down the cluster and have a period of no response for my users. That reduces risk, and I can also roll back at any point if I see that the security patch has a negative impact on my business or the user experience. One thing I need to work out with my developers, though, is to make sure they're aware there are going to be two versions side by side, potentially for some period of time, in my environment. So there needs to be some backward compatibility, maybe in terms of APIs or the data as well.

I also have the option of doing blue-green deployments. With blue-green, you have version 1 out there in the blue environment, and all the traffic is going to it. I then deploy the new security update, 1.2, to my green environment. This is a virtually separate environment on the same physical infrastructure; with Kubernetes, I can create any number of mirrors of my production environment. I do my testing offline, and once the new version completes successfully, I adjust the software load balancer to direct all the traffic over to the green environment. The big difference here is that I didn't touch the blue environment at all, so it remains as is. Why would I want that? Well, if I need to roll back, I can confidently roll back to exactly what the environment looked like before green. This allows for zero downtime as well, but it does take more physical resources, because I have two logical environments side by side. I also need to be in sync with my development team, because they may need to do some data synchronization from blue to green, or vice versa if I do a rollback. 
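As a sketch, the rolling update described above is driven by the Deployment's update strategy plus a readiness check; the names, image, and probe endpoint are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 4
  strategy:
    type: RollingUpdate        # the alternative is type: Recreate
    rollingUpdate:
      maxUnavailable: 1        # replace pods one at a time
      maxSurge: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: registry.example.com/frontend:1.2   # the patched version
        readinessProbe:        # no traffic until this succeeds
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
```

A blue-green cutover, by contrast, typically keeps two full Deployments running and just flips a Service's label selector (say, from `version: "1.1"` to `version: "1.2"`) to move all traffic at once, which also makes rollback a single selector change.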
The last one I want to talk about is canary deployments with A/B testing. With the movement toward microservices, one of the key things around DevSecOps is to experiment and try things out, and if it doesn't work, you can revert and change course. It's no different with microservices: only about a third of ideas improve the metrics they were designed to improve. So we need to foster this culture of experimentation and have systems that enable us to pivot. Another key part of DevSecOps is the continuous feedback loop: you shorten the feedback loop so you get real-time feedback on how the system is doing, how the application is behaving, and how the business is functioning under the new version with that security patch. So I want to enable my developers to experiment, and Kubernetes does exactly that.

For example, on the left side I have version A of my application, and on the right, version B with a slightly different recommendation engine, and potentially a security patch in that version B. I want to do a canary deployment. What this means is that version 1 is currently accepting all the traffic, and version 1.2 has that security patch. I want to roll the new version out to a subset of the environment and monitor the impact. So right now version 1 has 100%, and I'm going to go to 50-50. As I monitor, I see that by deploying the new version, I've actually improved the conversion rate, the click-through rate; the recommendation engine and the security patch didn't have a negative impact, they actually had a positive impact on my business. So I go ahead and fully deploy the new version. If I had detected a decline in my business, a lower conversion rate, I could have reverted, rolled back, and eliminated the canary by directing everything back to the existing version. So that's canary deployments with A/B testing. 
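With plain Kubernetes, a canary can be sketched as two Deployments sharing the label that the Service selects on, so traffic splits roughly in proportion to replica counts. All names here are hypothetical:

```yaml
# Stable version: the Service selects on app=myapp, which both Deployments share
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-stable
spec:
  replicas: 2                  # 2 of 4 matching pods -> roughly 50% of traffic
  selector:
    matchLabels:
      app: myapp
      track: stable
  template:
    metadata:
      labels:
        app: myapp
        track: stable
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:1.1
---
# Canary carrying the security patch
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-canary
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      track: canary
  template:
    metadata:
      labels:
        app: myapp
        track: canary
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:1.2
```

Scaling the canary up and the stable Deployment down shifts the split; deleting the canary Deployment reverts everything to 1.1.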
Beyond deployments, there are many other aspects of securing a Kubernetes environment: network isolation, monitoring, storage, APIs, and federated clusters. So let's talk about those.

Network security. On the left, you have your traditional model: typically a three-tier network with the DMZ, the internal zone, and your database. Each layer is a zone in your network, and typically only the DMZ is exposed to the public internet. On the right side, Kubernetes uses a flat SDN model: all the pods get an IP from the same network and live on the same logical network, and it assumes all nodes can communicate with each other at the host level. One of the things Kubernetes provides is network isolation with network namespaces. That enables you to have multiple environments, dev, test, and production, all on the same physical infrastructure but logically separated through network namespaces. You can also have multiple business units or projects sharing the same hosts, with network separation through network namespaces as well. Kubernetes also gives you fine-grained control over service-to-service communication via NetworkPolicy: I can explicitly define a policy to control the communication from one microservice to another.

In terms of the network security models we're seeing with our customers, there are really three. One is setting up one Kubernetes cluster per zone. So in your traditional DMZ/app/database model, you'd have a cluster per zone, with egress and routers for inbound and outbound traffic. The second is one Kubernetes cluster spread across multiple zones. Here you have a single Kubernetes cluster, and traffic for applications A and B, for example, is routed through network zone A, while your C and D applications are routed through network zone B, via the egress and routers. 
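The per-service control via NetworkPolicy mentioned a moment ago might look like this sketch; the labels and port are hypothetical:

```yaml
# Only pods labeled app=frontend may reach backend pods, and only on port 8080
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```

Once a pod is selected by any NetworkPolicy, all other ingress to it is denied by default, so this acts as an explicit whitelist.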
And then the third is physical compute isolation based on network zones. This is where individual hosts are tied to a particular network zone. In this example, the A and B applications would be scheduled, based on tags, to the green nodes, the C and D applications to the blue nodes, and those nodes would be connected to the appropriate network zone on the left.

Monitoring and logging. As you move to a Kubernetes environment, there are many new considerations. In today's world, your monitoring concerns are the application and the host. In a Kubernetes environment, you want to monitor Kubernetes itself, but also the containers, and you want feedback on how all of that is behaving, so there's a variety of metrics you may be interested in. The application itself has evolved to be more distributed as well, so you want distributed monitoring tools: a tracing tool such as Jaeger, for example, and Prometheus and Grafana, providing application-level and Kubernetes-level monitoring in real time. So here are some recommended tools for monitoring your Kubernetes and container environment. And here's a more detailed view of what Prometheus and Grafana look like: you have customizable dashboards and the underlying infrastructure to monitor your environment in real time, plus alert management. In terms of logging, Kubernetes also provides the ability to aggregate and centrally store your logs, and plug them into Splunk, for example, or EFK (Elasticsearch, Fluentd, Kibana), to help you debug security issues and alert you to any anomalies.

What about storage security? Kubernetes is not just for stateless applications. 
Kubernetes provides persistent storage, so you can set up different tiers of storage and limit your services to only have access to specific tiers, which is a form of security, and provide resource quota limits to ensure folks don't fill up your disks.

Then there's API and platform access. Kubernetes has an API, and as we saw in some earlier examples, if you haven't hardened security on your Kubernetes cluster, you've exposed yourself to being penetrated by attackers. So you want to make sure you're locking down your environment. It's good practice to put in, for example, an API proxy or gateway to limit access to your APIs, provide quality of service and an audit trail of who's accessing them, and define users and groups to limit their access.

And lastly, federation. Federation is a new and upcoming movement in the Kubernetes community. It provides the ability to set up multiple clusters with a single API across them. From a security perspective, you might want this because, say, you have one cluster that's PCI compliant, and you direct those workloads to that particular cluster, while for dev and test you may reduce the amount of security around the environment so teams can experiment and try things out a little more.

So what's next? There's a lot of movement toward providing management at the service mesh layer with microservices. Istio, for example, is a service mesh management solution. It provides monitoring and metrics, access control; you can also inject faults, do traffic routing, and get encryption and authentication. From a security perspective, a few of the things it provides: it allows you to manage your keys for service-to-service communication and rotate them, and by default it controls egress through the proxy that gets attached to each service. 
So by default, nothing goes outside unless you whitelist it. That's another nice thing from a security perspective. In terms of routing, it allows you to route based on headers, and it lets you control traffic more finely than on a per-pod basis: you can actually get down to a percentage basis. So there's a lot more fine-grained control over your traffic with the service mesh. Istio, and I'm sure you'll hear a lot more about it, just released 1.0; that's a project out there now in the upstream community.

DevSecOps metrics, to close it out. Here are some metrics you may want to track to make sure you're on the way to a successful DevSecOps implementation. One is a compliance score: monitor the scans you're running on your containers and see whether they pass not only your vulnerability checks but also your security compliance standards. Deployment frequency: how often are you deploying, and is that improving? What is the lead time to get from dev into production? What's the failure rate when you do go to production, and the MTTR: how quickly can you recover when there is an issue in production and get the fix through your pipeline? And of course, service availability as well.

So with that, I want to thank you for attending today's session. My name is Chris Van Tuin; here's my email, feel free to reach out. I'm also available on LinkedIn, and I'm happy to connect with you there as well. Thank you so much. Any questions today? Great, thank you so much. Yeah, thank you, Chris.