Okay, let's get started. Welcome everyone, and thank you very much for taking time out of your day to join us. Welcome to today's CNCF webinar, The ABCs of Kubernetes Security. I'm Jerry Fallon and I will be moderating today's webinar. We would like to welcome our presenters, Roger Klorese, senior product manager at SUSE, and Danny Sauer, senior software engineer at SUSE. A few housekeeping items before we get started. During the webinar, you are not able to talk as an attendee. There is a Q&A box at the bottom of your screen, so please feel free to drop your questions in there and we'll get to as many as we can at the end. This is an official webinar of the CNCF and as such is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that is in violation of the Code of Conduct, and please be respectful of your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF webinar page at cncf.io/webinars. With that, I'll hand it over to our presenters. Okay, thanks very much. Oops, there we go. Going over it a little bit again: my name is Roger Klorese. I'm a product manager at SUSE, working on SUSE CaaS Platform, our containers-as-a-service product, better known as our Kubernetes distribution. I work with engineering and other constituencies to define the features and the roadmap for the product. And let me give Danny Sauer a chance to introduce himself. I don't have much of an introduction beyond that: I'm Danny Sauer, a software engineer working on CaaS Platform with Roger. Okay, cool. Let's take a look at the high-level agenda. We'll start by talking about how secure we feel about containers. Security can be objectively measured, but it's also a subjective sense, and we'll look at how that subjectivity affects containers and their deployment. We'll look at how you can address container security in many different areas.
We'll look at an idea that's closely related to security, which is governance. That is, how do you implement security policies? Not technically, but what organizational stances do you take? And if I'm going to go into the area of securing Kubernetes environments, where do I start? I want to say a little bit of level setting up front. You'll notice we call this the ABCs; this is a fairly fundamental view. There are more expert and more specific webinars that security-focused vendors have done. If you're already working in security, this might just be a pleasant set of reminders or a basis for discussion. But we are assuming that you have a basic understanding, or a little more than that, of Linux and some Linux security features, of containers, and of Kubernetes. So it's not Linux 101, it's not Kubernetes 101, but it is kind of a 101-level discussion about security in containers and Kubernetes. So let's jump into it. How secure do we feel about containers? Well, I'm going to ask you a question. True or false: containers are inherently secure. Sorry, inherently insecure. Well, as with almost any true-or-false question, the right answer is "it depends." This is why I might not have done so great on a lot of true-and-false tests when I was in school. There are ways in which containers are indeed inherently insecure. One of them is the fundamental assumption that everything is inherently insecure unless you work at it. Another is that there are aspects of Kubernetes, like networking, for example, that were originally designed assuming friendly environments. So right out of the box, basic Kubernetes networking assumes that everybody can talk to everybody. But it's also true that aspects of Kubernetes and the Kubernetes ecosystem are being designed, and have been designed, with an eye toward security from the ground up. So containers are inherently insecure: true. Containers are inherently secure: also true. Let's move ahead and look at how this affects how people feel about it.
Now, one of the most interesting things about this chart, I think, is that something like 18 years ago, at a certain virtualization vendor, when we were launching the first major server virtualization platform, I presented a chart that could have looked exactly like this with some of the labels changed: the top three concerns were indeed cultural changes, security, and complexity. These tend to be true for any new technology, but you'll still note that security is a pretty high-level concern for this audience. If 40% are concerned about security, security is an important component of their plan and their delivery, and may be holding people back from their implementation. Let's look at a few more things. When we look at where people are deploying today, the number one category is the public cloud. We would have seen a similar chart about public cloud probably four to eight years ago, so the same concerns carry over to the platform. A significant number are fully on premises, and a solid contingent are deploying in hybrid cloud, meaning in both of those in coordinated fashion. And then there's a small number who report "other," who are probably looking at custom managed services and environments like that. Container use over the past five years has been relatively stable in development and test: there was a jump at the beginning and then things leveled off. In proofs of concept, many people have already done theirs, so that's maybe dropping a little, but the ramp in production is the most significant factor. People reporting in this CNCF survey are now solidly moving into production environments, where four and a half years ago only about a quarter of them were running containers in any sort of production. ESG did a survey about three years ago whose learnings are largely still valid.
Luckily, almost every company they interviewed realized that containers had security implications. About a third were worried about the lack of mature security solutions. As with most new technologies, securing them generally starts out with new vendors, and there are some excellent container-specific security vendors. I want to point out that if I mention any of them by name later, it is neither a specific endorsement of them nor a lack of endorsement of anyone else in that space; I'm just going to use some examples. These two go together: notice that the same number of people were worried about the lack of mature solutions and about the fact that the current solutions they have don't support containers. So this sort of security split-brain was a worry for them. A quarter or more realized that a vulnerable container is an easy attack surface, and a smaller number realized that portability comes with a price: containers could possibly be more vulnerable to in-motion attacks. So what do we do about these concerns? Let's take a walk through a list. We're going to look at some security requirements now and drill down a little into each one. One of them is to start with a secure container image, a secure base image. We're going to look at applying access control to the platform and the containers; scanning for security, both at runtime and at rest; adding network security to the environment, as well as visibility into the network and the general environment. We talked about data-in-motion issues; we'll look at encryption there. We'll talk about secrets management so that we don't expose passwords and other security information. We'll look at restricting access using policies, as well as how the underlying operating system tools relate to the containers themselves. And then we'll look at monitoring the security posture using classical tools. So let's get started.
The first is the secure gold image. Danny, do you want to jump into this one? Yeah, thanks, Roger. Containers, like everything in computing, are built in a series of layers. Whether you're using a stripped-down OS as the base for your containers or a purpose-built base like the Python image, you want to start with a solid foundation. So one of the first steps in ensuring you have secure containers is ensuring you're building upon a secure base. You want a gold master image that you install your code on top of, so that you're not starting with any holes. In that build process, you want to be reproducible. So use your CI/CD pipeline to ensure that your builds consistently use that trusted gold master that's already been verified to comply with your organization's policies and all your governance rules. Then, when bugs are found and patched and images are rebuilt, everything happens the same way every time. And if you're using a commercial Linux distribution, by the way, pretty much all vendors who offer support also offer a base image, which is a solid starting point that is under the same rigorous end-to-end controls as other delivery formats such as ISO images. So if your OS vendor offers a base image, that's really a good place to start. You can add things, remove things, and secure your own base, but it's a great place to start. We can bring role-based access control into the environment: first to the platform itself, and then to the containers. For any application, the best approach to securing access is to create a service account that the application uses, with only the permissions it needs. So you're not only controlling the access of users to the system, but also of the application to the system environment.
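As a sketch of that least-privilege pattern, a per-application service account bound to a narrowly scoped Role might look like this (the namespace, names, and the specific permissions are all illustrative):

```yaml
# Service account used only by this application
apiVersion: v1
kind: ServiceAccount
metadata:
  name: payments-app
  namespace: payments
---
# Role granting only the permissions the app actually needs
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: payments-app-role
  namespace: payments
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list", "watch"]
---
# Bind the role to the service account
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: payments-app-binding
  namespace: payments
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: payments-app-role
subjects:
- kind: ServiceAccount
  name: payments-app
  namespace: payments
```

The pod spec would then set `serviceAccountName: payments-app`, so the workload runs with only these rights rather than whatever the namespace default carries.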
And through this approach, you can guarantee that different applications that may require access to different capabilities can't do more than they need. The next step down, which is inherently less secure because it assumes a greater set of things, is to take the default service account for a namespace and give it administrative control. That of course gives it access to everything in the namespace, and it would be preferable on a per-application basis to use something more restrictive. Less secure than that is to have a service account for the application but give it full admin access, all rights to that namespace. And the worst possible thing you can do is not use role-based access control at all, or to grant cluster-admin, the default cluster-wide admin role, to everything, giving it all permissions on all workloads. So strive for the most secure approach. That is one of the things that security benchmarks and benchmark suites provide for you (I'll get to the difference between those in a little bit): they examine how this is implemented and report on the extent to which you're sticking to best practices. Danny? So you've got your solid base image, and now you've added code on top of it. You've changed things, so you need some sort of automation that verifies the changes still comply with your local policies. You do that by scanning the generated images. There are a number of tools that do that; Aqua Security's Trivy is one of them. It runs through a checklist against the container before you do your deployments. Ideally that gets integrated into your CI/CD pipeline. People make mistakes, even with the best intentions, so taking people out of the equation generally gives more predictable results. In addition to running at deployment time, you really want scheduled scans; vulnerabilities are found all the time.
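One way to sketch those scheduled scans is a Kubernetes CronJob that re-scans an already-deployed image with Trivy every night (the image name, schedule, and severity threshold are illustrative; check the Trivy documentation for current flags):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-image-scan
spec:
  schedule: "0 2 * * *"          # every night at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: trivy
            image: aquasec/trivy:latest
            # Exit non-zero if HIGH/CRITICAL vulnerabilities are found,
            # so the Job shows as failed and can trigger an alert
            args:
            - image
            - --severity
            - HIGH,CRITICAL
            - --exit-code
            - "1"
            - registry.example.com/myapp:1.2.3
```

The point is that even an image nobody has rebuilt in months gets checked against the current vulnerability feed.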
Even if you haven't changed your container or the code in it, new vulnerabilities pop up. So scheduled scans on old containers make sure you're alerted to new vulnerabilities in old containers. There are also runtime scanners out there; Falco is one example. It does essentially the same thing, but it runs while your container is running. It's sort of your last line of defense: where maybe something popped up that hasn't been caught by your scheduled scan yet, it still alerts you that, hey, you've got something running that you might want to look into. Great. Well, I said early on that Kubernetes networking was originally, by design, flat and open so that everything could talk to everything. We've grown well beyond that. Especially in multi-tenant environments, but in others as well, we want to segment networks so that only things that need to talk to each other can, and then control which of them can talk to each other not just by segmenting, but with more fine-grained control of access within segments. That is something that has come along more recently in the Kubernetes networking space, and you find it in CNI plugins. CNI is the Container Network Interface, which makes it possible to have different implementations of networking in different clusters, or, with some extensions, conceivably in the same cluster at the same time. Calico is a very popular one. Cilium is a more recent one that leverages a kernel feature called eBPF, the extended Berkeley Packet Filter, to optimize performance and improve introspection by running in the kernel. You use these to control ingress and egress to the clusters, which typically involves an ingress controller, obviously, but also to control access between namespaces.
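A common pattern for the segmentation just described is a default-deny policy per namespace plus explicit allows; as a sketch, with made-up labels:

```yaml
# Deny all ingress to every pod in the namespace by default
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: payments
spec:
  podSelector: {}            # empty selector matches all pods in the namespace
  policyTypes: ["Ingress"]
---
# Then explicitly allow only the frontend pods to reach the API pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: payments-api
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```

Note that NetworkPolicy objects only take effect if the cluster's CNI plugin (Calico, Cilium, and so on) enforces them.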
If you have a distribution that doesn't support network policies, you may want to consider introducing third-party options such as container firewalls to introduce policy into the flat network, or use them as an extra layer of defense in depth. There are products and technologies out there that implement container firewalls as well as network policies, not only at the more traditional levels of L2, L3, and L4, but also with protocol- and application-specific policies at L7. Cilium and Calico, for example, do this, and container application firewalls do as well. So instead of setting a policy on what can be done on, say, port 80 or port 443, I could set a policy on HTTP, and whatever port was speaking HTTP, the system would recognize the protocol and apply the policy to it. Especially in cases where we use non-standard ports, for anything from administrative interfaces to application-specific protocols, being able to have protocol-specific policies is a very effective approach. Danny? Sure. You need to have visibility into what's happening within your cluster. A lot of the controls we're talking about are static; you're analyzing things to ensure they're happening the way you expect. Without any kind of observability, you have no way, one, to know what's expected, and two, to know when things are happening that are unexpected. So you have to expose as much as you can about what's going on within your cluster. Network traffic is the big piece there, because the data going in and out of your container is primarily going to be on a network of some sort. With your applications, you want to export metrics wherever you can. Prometheus is sort of the de facto place where all those metrics get gathered, so you can do your later analysis and see what's normal. At the network level, CNIs typically provide flow analysis.
So rather than just basic packet-level analysis, network flows are one level higher, where you see where the packet went and what the whole communication looks like, which is usually a little more useful for determining what's going on with an application. You can look at volume of traffic and the size of packets, so you can see when things have deviated and you might need to be concerned. Service meshes are another layer above that: where network flows are your layer 3/layer 4 stuff, service meshes operate up at layer 7, so you can see what's happening with your HTTP traffic and your database queries. The whole point here is to know what sort of communication is happening inside a cluster so you can identify when anomalous communication is happening; more data is generally better here. I just want to add that Cilium, for example, offers a tool called Hubble, which is its flow-analysis tool, but these exist for other CNI plugins as well. So we've talked about how we protect the environment, but the way we more specifically protect the data in motion is with the in-motion encryption that cluster-signed certificates deliver. Out of the box, Kubernetes components will encrypt traffic between themselves using certificates signed by the cluster. It's particularly good practice, though, to add trusted root certificates, that is, certificates issued by an external certificate authority, for the external interfaces. That would be places where things outside Kubernetes need to be spoken to, like a directory service, or for applications and administrative tools that talk through the Kubernetes API. We want to expose those with a trusted root certificate because it basically removes the need to distribute the cluster certificate authority's certificates outside of the cluster.
So anything outside should use external certificates, if that's possible in your system and through your practices. Clusters should integrate the ability to issue and renew certificates automatically. Often this is done through available interfaces to a service, specifically Let's Encrypt, to handle that auto-issuance; certificate rotation and renewal for the internal certificates is available through tools like kucero. And on to secrets, Danny. The whole point of secrets is that they have to be kept secret, so what you want to do is expose them for the minimum amount of time. In an ideal world, you have enough control of your application that you can use an external secret manager like Vault or one of those kinds of tools. Say your application needs to connect to a database with some credentials: it can make an API call out to Vault, get the authentication information, make the connection, and then free that memory, so the secret is just not exposed in the application at all. In a lot of cases you don't have that level of control over the applications you're developing. The next step down from there is to have the secrets available to your application through environment variables that get set up when the pod is started. That's not quite as ideal, because they exist a little longer in the memory of the process, but that's still not easy to get to. You can then move down from there to exposing the secrets within the container as virtual files, simulated by the container runtime rather than written to disk. And if you can't do any of those things and have to actually expose the secrets from a physical file in the container, at the very least you need some sort of encryption at rest so those files are less vulnerable. Great. Runtime security is an area that of course extends beyond Kubernetes itself.
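The middle options Danny mentioned, environment variables versus a runtime-mounted file, look like this in a pod spec (the secret and key names are illustrative; in practice you would use one approach or the other):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: db-client
spec:
  containers:
  - name: app
    image: registry.example.com/db-client:1.0
    # Option 1: secret injected as an environment variable
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password
    # Option 2: secret exposed as a file on an in-memory volume,
    # never written to the node's disk by the container
    volumeMounts:
    - name: db-creds
      mountPath: /etc/secrets
      readOnly: true
  volumes:
  - name: db-creds
    secret:
      secretName: db-credentials
```

Either way, the Secret object itself lives in etcd, which is why enabling encryption at rest for Secrets on the API server side matters too.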
We do have capabilities in Kubernetes to deal with it, starting with pod security policies. Pod security policies control how we use privileged containers and what privileges they can carry; they can restrict which host resources a given pod can use; they can disable or control privilege escalation; they expose Linux capabilities (that should probably be a capital C, the Linux feature called capabilities), as well as OS security profiles through things like SELinux and AppArmor. In addition, Danny mentioned Falco earlier; it's a good example of the kind of runtime security monitoring tool worth considering, so we see what's going on in our network and in the system environment at runtime, not just based on static analysis. Do you want to move on? Yeah, I can talk about this a little bit, I guess. It's important to remember that your container runtime runs on top of an OS. So just like your containers have to be built on top of a known secure base image, your runtime ideally runs on top of a known secure OS platform. This is generally a well-known, largely solved problem. There are a number of OS-level security tools: your system auditing, your OS firewalls, that kind of thing. Even within your container network, if you've done your CNI-level filtering and segmentation, you still probably want firewalls controlling what gets access to the nodes that make up your cluster; web application firewalls in general to filter what's coming in and ensure that you're not exfiltrating data; intrusion detection and intrusion prevention systems to, again, look at anomalous traffic and act on it; and anti-malware tools. The same goes for storage and cloud security: your enterprise should have policies that govern which nodes can access which storage and how they use it, and how you're interacting with an external cloud. Great.
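The pod-level restrictions mentioned above, no privilege escalation, dropped capabilities, and so on, can also be expressed directly in a pod's security context; a minimal sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-app
spec:
  securityContext:
    runAsNonRoot: true           # refuse to start if the image defaults to root
    seccompProfile:
      type: RuntimeDefault       # apply the runtime's default seccomp filter
  containers:
  - name: app
    image: registry.example.com/app:1.0
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]            # then add back individual capabilities only if needed
```

Pod security policies (and their successors) are essentially a way of forcing settings like these onto every workload rather than trusting each manifest to include them.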
So, let's step on from that to some of the specific things you can do to harden the host, in pretty much the order that you'll step through them in the life cycle. Start with secured sources. We mention our build service here; there are obviously other places to get secure packages, depending on whose OS and whose distribution you're running, but be sure that you can trust the package, not just the source you get it from, but the source it's built from. Secure Boot is an option: harden the host so that it recognizes valid signed images and won't boot images that have been tampered with. Implement automation of security standards and security profiles; tools such as OpenSCAP will do that. We talked about firewalls. Implement transport layer security, TLS, throughout the environment: not just the internal pieces, not just accessing the API server, but also exposing TLS for the applications. Enable auditing so that you have a path to assess accesses. Control the capabilities that processes have, and control their rights, through AppArmor or SELinux. Encrypt file systems for data at rest. One thing we strongly advise, both to reduce downtime and to bring in security fixes as soon as possible, is to implement live patching if your operating system supports it. And then be religious about security updates, vendors as well as yourself; be concerned about security where previous versions are running as well. Okay, so that's a step through all of the areas we're concerned with. How does this fit into implementation in a governance process? Here are some examples of policies that you can make and implement that will control your organization's ability to implement a secure environment. I'm not going to read them all, but an example is: we restrict containers from being started by users directly, and allow them only to be started by Kubernetes in a Kubernetes cluster.
We control where given workloads can run. Sometimes we do that for reasons like performance and availability, but sometimes we do it, for example, to keep things separate: here are the payroll and HR systems, here is engineering development; I want to keep them in separate places. Use of package managers, control of encryption; we talked about all of these areas. The idea here is: have policies, control your changes, keep them visible, and use the idea of least access rights. You want to be sure that nothing can do more than it needs to, and implement that first in human policies and then take it to automation. So that's a lot; we've talked about a big chain of things today. Where do you start? Start at the beginning, obviously. And when I say start at the beginning, I mean look at these things from the beginning of your implementation process. Let's talk about some of the low-hanging fruit. What are some of the things I can do most easily to have a secure environment? The first one, and this is one the benchmark suites are happy to expose and criticize you on, is to disable anonymous access. Some distributions and releases enable it by default; this is something you really don't want. Let me take a little step back: I said earlier that I would talk about benchmarks and benchmark suites. The benchmarks are actually the standards, not the tests. There is a well-known and well-used set of standards, or benchmarks, for operating systems, for containers, and for Kubernetes. There are several, but the one most people reference comes from the Center for Internet Security, or CIS. They have a set of standards, and there's an implementation that tests against those, developed by Aqua Security, called kube-bench, that should be integrated into all of your deployment processes.
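kube-bench can be run as a one-off Kubernetes Job on a node; the sketch below is along the lines of the upstream example manifest, simplified (check the kube-bench repository for the current version, which mounts a few more host paths depending on the distribution):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-bench
spec:
  template:
    spec:
      hostPID: true              # needed to inspect processes running on the node
      restartPolicy: Never
      containers:
      - name: kube-bench
        image: aquasec/kube-bench:latest
        command: ["kube-bench"]
        volumeMounts:
        - name: etc-kubernetes
          mountPath: /etc/kubernetes
          readOnly: true
        - name: var-lib-kubelet
          mountPath: /var/lib/kubelet
          readOnly: true
      volumes:
      - name: etc-kubernetes
        hostPath:
          path: /etc/kubernetes
      - name: var-lib-kubelet
        hostPath:
          path: /var/lib/kubelet
```

The Job's logs then report pass/fail/warn against each CIS Benchmark check.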
I generally insist that it should be run all the time: on upgrades and in any process that involves the CI/CD pipeline. Don't auto-mount the default service account token; we talked about restricting service accounts before. Use admission control so that even if you have privileged containers, you cannot escalate privilege with shell access. Restrict user impersonation. And don't use privileged containers, to the extent that you can avoid it; if you need individual privileges, grant those rather than giving full privileges to a container. Isolate namespaces. Use resource limits so that noisy-neighbor applications can't steal resources or interfere with the processing of other applications. As I said earlier, patch promptly is a key refrain. And most important, integrate security into all of your processes. Train your developers, train your DevOps people, and keep DevSecOps at the heart of the implementation process. Staying vigilant is critically important. Don't release, don't reconfigure, don't do anything without testing the security implications of it. I mentioned kube-bench: run it early and often. And don't do it by hand. The most vulnerable component is humans; as they sometimes say, the most dangerous part of the car is the nut that holds the wheel. Automate, verify the automation of this testing, and make assessments continuous. So that's it for us. I'd like to open the floor for questions. Okay, well, thank you both very much for a wonderful presentation. We have about 15 minutes before we wrap up, so if you have any questions, please feel free to drop them into the Q&A box and we will get to as many as we can. Our first question: is there any procedure to add security to containers that could be done already in the firmware boot phase? Well, for containers in the firmware boot phase, that's an interesting question for which I don't actually have details.
My email address is in the slides; drop me a note and I will check with people. Generally, the firmware process validates the underlying operating system, not the containers themselves. I'm not aware of anything. Danny, are you? No, generally that would be securing the OS that underlies the containers, not the containers themselves. So I'm in the same boat you are, not having a great answer. Okay, thanks. Sorry, folks, but we'll check; as far as we know, it's firmware to OS, and then OS to containers. Okay. Do we have anyone else who would like to ask questions? We have plenty of time, so if you have anything you'd like to know, please drop it in the questions and we'll get to you. Do you think that all security tools and benchmarks should be integrated more natively into containers and Kubernetes? That's a good question, because it's hard to say what "integrated into Kubernetes" means. As we go along now, there are two competing tensions. There are things that start as extensions and come into mainline Kubernetes. And then there are areas, like cloud controllers and storage interfaces, that started tightly coupled and have moved out of tree to provide alternatives. I think the challenge for bringing security tools natively into the core of Kubernetes is the question of which to pick; there are competing options with merits to all of them. So I think the point of integration has to be your evaluation and your choice of processes, not native Kubernetes. Now, in the process of building the Kubernetes ecosystem, security has very much been brought in, in that CNCF projects, at least, are now required to have security assessments, some of which are audits, some of which are examinations of their approach to security. So security is very much a component of the Kubernetes and CNCF development process.
I don't think we'll reach the point where there's one single built-in security approach, because the innovation is too fast and the variety too broad. Okay, thank you for that. Before we get to the next question: Jim, I want to address your question here. Yes, it will be possible to download the slides as a PDF for this webinar. As always, recordings and slides are posted to the CNCF webinar page after each webinar, so please look out for those later today. Our next question: is it advisable to run antivirus like Cylance on a Kubernetes cluster? Ah, good question. I think it's advisable, and I'm going to use a more general term, to use anti-malware everywhere. Whether that is implemented on the hosts, in containers, in the pipelines, or in appliances outside the cluster will vary. And I mention anti-malware because antivirus addresses one very specific sort of attack point. More and more, what we need to look at are analysis tools that use techniques like sandboxing for dynamic analysis, that is, looking at the payloads and what they would do, rather than conventional antivirus, which usually just looks at the fingerprints of known malware. It's the unknown malware that's at least as important, and we need things that can look at it behaviorally. There are more than a few technologies out there to do it. I'm going to name one just because I used to work there; it's a technology called... you'd think I'd remember where I used to work. I'll come back to it in a second. They were just acquired by VMware, a very sophisticated advanced anti-malware company. I feel really stupid right now; I'm having the memory lapse of all time. Well, I'd like to point out that Cylance is one of those anomalous-behavior-type programs that works like really any of these runtime security tools. It's billed as antivirus, but it fits in very well with, as we were saying, the runtime security monitoring.
So that's another... Lastline is the company I was looking for. As I said, they were recently acquired by VMware. Excellent. All right, next question. "Today, we have to switch over to learn a new tool, run it, and then link it somehow to whichever Kubernetes workload or container it applies to." Oh, that's not a question yet. Continue. Yeah. "Do you think that it makes sense for some of the benchmark reports to be stored as custom resources with standardized CRDs?" Danny, do you want to... I mean, if your analysis tooling is all running within Kubernetes, it makes sense that you could store some of your data alongside the applications. That's really more of an organizational question, how you want to handle operations. It's not a terrible idea. So it's whatever makes sense in your situation, I think. Yep. Okay. Are there any tools to integrate the default Kubernetes RBAC with LDAP? Is that possible? LDAP and Active Directory. Most distributions include some method of interfacing external directories and other authentication methods, such as OIDC and SAML, into the process. We at SUSE, for example, use tools called Dex and Gangway in our product, but there are other approaches as well. But yes, absolutely, standard directories are frequently used; not only is it possible, it's pretty much typical. Do we have any other questions at all? Anyone at all? We have five minutes, so please don't be shy. All right. Well, no one has any other questions, so I think we will call it a day. I want to thank both Danny and Roger for a wonderful presentation, and thank everybody for attending today's webinar. As I said earlier, the recording and slides will be available on the CNCF webinar page at cncf.io/webinars. Thanks again to everyone for attending today. Have a wonderful rest of your day. Take care, stay safe, and we will see you next time. Bye-bye. Thanks, everybody.