So, I'm Kirsten Newcomer, part of the hybrid platforms product management team. I focus on security capabilities for OpenShift and also lead the Red Hat Advanced Cluster Security product management team. Andy, would you like to introduce yourself and the topic a little bit? Absolutely. Hey everyone, my name is Andy Block. I'm a distinguished architect at Red Hat. I primarily work with customers on containerization, security, and everything OpenShift, as well as the cloud. So I'm happy to share with you some of the knowledge that we have around not just container security, but also with a focus on telcos. And this is important because when you think about the cloud and security, one single vulnerability is all the attacker needs. Look at some of the secure supply chain challenges we've had and heard about over the course of the last two years. There are companies that we may not have heard of before, but we hear about now, because that one single vulnerability is all it took for them to become national and international headlines. So let's talk about telco workloads first. Why does that matter? Because telco workloads are a little different. They have unique characteristics versus traditional containerized workloads. Number one, they make extensive use of host resources because they need access to device characteristics. They may have some advanced networking capabilities or use cases. In many cases, they are provided by vendors or third parties, so that is an area you need to be concerned about when it comes to security. And telcos, when you think about it, are usually in remote or edge deployment sites out in the field. They're not necessarily always going to be in the data center. And considering they are in remote locations, connectivity is intermittent, not guaranteed, so you need to consider that in your architecture design as well. Absolutely. And just a quick comment about this, right? 
So telcos have been running virtualized network functions, VNFs, for some time on platforms like OpenStack. And we're in the midst of a migration to containerized network functions, CNFs, running on Kubernetes. And so that's a key area of our focus today. Back to you, Andy. And one of the areas we're going to focus on is identity. We're going to have a couple of different themes in this presentation, and one is identity. When you think of identity, the first thing you think about is user identity and access management. So a user accessing Kubernetes, or any platform, typically needs to be able to log in, and you want certain characteristics and policies applied to them. When integrating with an external identity provider, make sure you always use strong, two-factor authentication, where the second factor is either a text message, Google Authenticator, or some other form of two-factor auth. When you integrate users into the system, make sure you separate cluster access from application access. Limit host access to break-glass solutions only, and finally, minimize privileged access with role-based access controls. We're going to talk about a concept called zero trust a little later in the presentation, but you definitely want to minimize the resources that those accessing the platform have access to. Now, we've talked about human users, but a newer concept that has emerged is workload identity. This separates the users that access your applications from the applications themselves. The applications sometimes need to communicate with third-party systems. 
You want to make sure you give them their own identity and then grant only the permissions they need. In a Kubernetes environment, this is typically accomplished using a Kubernetes service account. If we have multiple applications on the platform, we want each workload to have its own individual identity so you can minimize the permissions granted to it. One of the cool things about workload identity is that, when working with cloud providers, there are ways you can use workload identity to authenticate to cloud resources. So, using IAM policies in AWS, or similar policies in other cloud providers, you can integrate the identity of a workload running on Kubernetes, or any containerized platform, with those cloud resources. And there are newer frameworks, SPIFFE and SPIRE, that help implement capabilities to attest both the nodes, the hosts themselves running the Kubernetes environment, and the applications themselves. So a lot of cool things are coming out in workload identity, and it really highlights that we want to increase our ability to provide secure operating environments. Yeah, that's a great point. And one thing to think about with both user identity and workload identity is that, as more and more Kubernetes users move to working with Kubernetes in the cloud, as Andy mentioned, we need to think about the best ways to integrate not just with the identities those cloud providers offer, but also with solutions like secure token services that help manage and minimize the lifetime of token-based access. So lots of really great things emerging in that space. As we move on, we wanted to talk about integrity as well, right? You've now got identity for your users and identity for your workloads, but how do we maintain integrity, both for the host and for the workloads themselves? 
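To make the service-account idea concrete: a pod's projected service account token is a JWT whose claims carry the workload's identity, for example a subject like `system:serviceaccount:<namespace>:<name>`. The sketch below decodes such a payload; the namespace and account names are made up for illustration, and in production the signature must be verified (for example via the Kubernetes TokenReview API), never just decoded.

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode a JWT payload WITHOUT verifying it. For inspection only;
    real workloads must have the signature verified server-side."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# A hypothetical service-account token, assembled here purely for illustration:
header = base64.urlsafe_b64encode(b'{"alg":"RS256"}').rstrip(b"=").decode()
claims = {
    "sub": "system:serviceaccount:payments:api-client",
    "aud": ["https://kubernetes.default.svc"],
}
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
token = f"{header}.{body}.signature"

print(jwt_claims(token)["sub"])
```

In a running pod the real token is mounted at `/var/run/secrets/kubernetes.io/serviceaccount/token`, and it is exactly this per-workload subject that cloud IAM federation and SPIFFE-style attestation build on.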
There's a lot available in Linux today that can be leveraged here: secure boot, to be sure an unmodified kernel is loaded, and solutions like AIDE, the Advanced Intrusion Detection Environment, which we use in Linux to monitor file integrity and which helps us detect system intrusions, right? So if you've got a database of hashes for files that you expect to be on the system and don't expect to change, you can be alerted when there's an unexpected change. Similarly, you want to encrypt volumes. Again, if we think about telco and edge environments in particular, it can be very easy for a whole server, a whole deployment, to simply be stolen, right? The disk might be taken. So it's really important to leverage encryption for that disk, things like network-bound disk encryption with TPM and Tang endpoints, so Clevis and Tang for automated encryption and decryption. And remember that these environments don't necessarily have an administrator, IT admin, or IT ops looking at them very often. So whatever encryption solution you use, it needs to be enabled in an automated fashion, so that somebody doesn't have to be at the console to type in the key to decrypt the environment. One of, go ahead, Andy. This is also important when we talked about telco workloads being out in the field. You want each of these capabilities because someone can just walk off with your device. Obviously you want physical security, but if that somehow fails, you want additional security mechanisms on top of it. And these are two areas that are available to us. Yep, absolutely. And related to this, for Kubernetes, there's the ability to encrypt the etcd data store. etcd is where the known state for every cluster is kept. 
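The hash-database idea behind AIDE can be sketched in a few lines. This is not AIDE itself, just a minimal illustration of the mechanism: record a SHA-256 digest for every file, then report anything whose digest has changed, appeared, or disappeared.

```python
import hashlib
import os

def build_baseline(root: str) -> dict:
    """Record a SHA-256 hash for every file under root (the AIDE-style database)."""
    db = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                db[os.path.relpath(path, root)] = hashlib.sha256(f.read()).hexdigest()
    return db

def detect_changes(root: str, baseline: dict) -> list:
    """Return paths whose hash differs from the baseline, including
    files that were added or removed since the baseline was taken."""
    current = build_baseline(root)
    return sorted(p for p in set(baseline) | set(current)
                  if baseline.get(p) != current.get(p))
```

A real tool like AIDE also records permissions, ownership, and extended attributes, and protects the baseline database itself, which this sketch does not.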
That said, there's a trade-off between encrypting the etcd data store and performance. So again, in a telco environment, organizations might think about whether it's more useful to encrypt the volume and whether it's also necessary to encrypt the etcd data store. Both can be explored. The same goes for the cloud environment; this isn't just for edge environments where that physical risk is stronger. When thinking about etcd encryption, there's that performance trade-off to be evaluated versus encrypting at the volume level. Another thing that's really moving forward in this space is moving beyond secure boot to attestation, and remote attestation in particular, recognizing again that you're not necessarily going to have somebody hands-on with the server. So how can we attest that there have been no unexpected changes on the host? Keylime is a great open-source project investing in this space, and Red Hat is absolutely investing along with our colleagues in the open-source community. So things like measured boot and the integrity measurement architecture, not just for the host; are there also things we can do to encrypt the payload? Is there a useful story for Keylime there? And then of course, just as when managing certificates, you have to be able to revoke the attestation in the case of trust failures. Here we're able to leverage TPM and virtual TPM as part of that. So those are some of the areas. Similarly, when we look beyond the host, we've got workloads; just as we talked about users and workload identity, for integrity we need to think about host integrity and also the integrity of the workloads themselves. Starting with trusted sources is really important there. And this is a place where something like the catalog from Red Hat and Red Hat's offering of the Universal Base Image is a really good value add that we provide for anyone building container images. 
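The simplest concrete form of trusting content from a source is pinning it by digest, the way container images can be referenced as `registry/app@sha256:<hex>` instead of by a mutable tag. Here is a minimal sketch of that check; the registry name in the comment is a made-up example, and real verification would also check cryptographic signatures, not just the digest.

```python
import hashlib

def verify_digest(blob: bytes, pinned: str) -> bool:
    """Check downloaded content against a pinned digest, as in image
    references like registry.example.com/app@sha256:<hex> (example name)."""
    algo, _, expected = pinned.partition(":")
    assert algo == "sha256", "only sha256 pins are handled in this sketch"
    return hashlib.sha256(blob).hexdigest() == expected

layer = b"example layer content"
pin = "sha256:" + hashlib.sha256(layer).hexdigest()
print(verify_digest(layer, pin))          # intact content passes
print(verify_digest(layer + b"!", pin))   # any tampering fails the check
```

Signature verification adds the missing half: a digest proves the content you got is the content that was pinned, while a signature proves who published it.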
So you want to be sure that you are leveraging signed content so you can be sure of the integrity of that content as you download it from the service, and that you're downloading from a trusted source. Ideally that source is providing security advisories and updates for the content you're using, because as the opening quote said, one vulnerability is all that's needed to erode trust. In the world we live in today, new vulnerabilities are discovered regularly. If you look at Docker Hub, for example, there are so many vulnerabilities across a number of container images. So being able to trust the source of content is key, because new vulnerabilities come in on a daily basis. And for containerized workloads, that's UBI in particular, but also your runtime, whether that's a Java runtime or a .NET runtime; those runtimes are embedded in every containerized app you deploy. Integrity is also important for communication, and for managing communication. Again, in a Kubernetes environment with containerized workloads, you have applications interacting with each other on a regular basis on that deployed cluster. So we want to think about how we're securing communication both into the cluster and off the cluster, that's north-south, but also how containers and pods are communicating with each other on the cluster. There are lots of options available here, but whether you're using Kubernetes Ingress or OpenShift routes, be sure to secure those routes into the applications. Lots of options available: re-encrypt, et cetera. And if you have services that are going to communicate off cluster, as Andy mentioned earlier in the conversation, it's really important that you think about how you're going to manage that off-cluster communication. 
And this is not just about identity in this case; let's make sure we encrypt that off-cluster traffic so that it can't be interfered with as it moves from the cluster to the external service. For east-west traffic, again, there are lots of options available these days. Istio with Envoy to encrypt pod-to-pod communication is really important. And in telco, we have a strong request for IPsec to encrypt node-to-node communication. This comes out of certain standards that our telco providers need to follow; 3GPP explicitly references IPsec, for example. So we'll see that for our telco carriers and our telco radio access network solutions, IPsec becomes a key element, and they also emphasize IPsec for that initial ingress onto a remote cluster. Monitoring and observability, right? As many people say, you can't secure what you can't see. So you need to understand the workloads. We need monitoring capabilities for the workloads and for the platforms to maintain operational security, and that includes, of course, the communication between those workloads. So you want to audit absolutely everything you can. That includes auditing ingress, host events with things like Linux auditd, Kubernetes API server events, and application logging. Ideally, you forward all that data off cluster to a security information and event management system for analysis and alerting. When we think about telco workloads at the edge, one of the realities we have to consider is that they often have small footprints. That edge site will have a small footprint, and again, it has intermittent connectivity. So if you're not able to forward on a regular basis, you need to think hard about how you're going to configure log storage, right? How many logs are you going to retain? For how long? What's the size of those logs? All of that's important. And how are you going to manage when the system loses connectivity? 
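The retention questions above can be sketched as a bounded buffer: cap the bytes of logs held locally while the uplink is down, dropping the oldest records first, and drain everything when connectivity returns. This is a toy model of the sizing decision, not a real log collector; the class and its names are our own.

```python
from collections import deque

class BoundedLogBuffer:
    """Hold at most max_bytes of log records while the uplink is down,
    evicting the oldest records first."""

    def __init__(self, max_bytes: int):
        self.max_bytes = max_bytes
        self.used = 0
        self.dropped = 0          # count of records lost to eviction
        self.records = deque()

    def append(self, record: bytes) -> None:
        self.records.append(record)
        self.used += len(record)
        # Evict oldest records until we are back under the budget.
        while self.used > self.max_bytes:
            self.used -= len(self.records.popleft())
            self.dropped += 1

    def drain(self) -> list:
        """Forward everything once connectivity is available again."""
        out = list(self.records)
        self.records.clear()
        self.used = 0
        return out

buf = BoundedLogBuffer(max_bytes=10)
for rec in (b"12345", b"67890", b"abc"):
    buf.append(rec)
print(buf.dropped, buf.drain())
```

The `dropped` counter matters operationally: if it is nonzero after an outage, you know your retention budget was too small for that outage window.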
How much space can you allocate to store those logs before they're forwarded off the cluster when connectivity is available again? Similarly, you want to be sure you audit access to your audit logs, right? You want to know that nobody is stepping in and modifying those audit logs; again, the integrity of your logs matters. One way to ensure integrity is to forward them off cluster, but another is to monitor access to those logs. There's a new project called Observatorium that is designed to integrate multiple open source projects used for audit logging and monitoring, bringing together projects like Thanos, Loki, and Jaeger to provide a single, tenant-aware API and signal correlation capabilities across all of these solutions. So look for more to be happening in that space; take a look at what's going on with that project and check it out. Contributors would love additional feedback there. Similarly, runtime analysis is key, and this is a place where, again, there's a lot of activity. eBPF is a key area of focus for collecting deep system data and being able to do correlation across those events. Runtime behavioral analysis can help identify anomalies: privilege escalation, namespace or ownership changes within a container, unexpected network connections, execution of SSH binaries. These are all areas where runtime analysis and alerting on anomalous behavior is really key. There are some great projects in this space. The StackRox open source project has been announced, but the GitHub repos won't be made public until sometime in February; really looking forward to community growth and contributions around that. And similarly, Falco, the CNCF Falco project, is another great space where we have the opportunity as a community to invest. This combination of capabilities, I think, is going to be really important moving forward. 
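The behavioral-analysis idea can be reduced to its core: learn a baseline of what normally executes in each container, then flag anything outside it, such as the unexpected SSH binary mentioned above. This toy sketch operates on simplified event dicts of our own invention; real tools like Falco consume kernel-level events (via eBPF or a kernel module) and use much richer rules.

```python
def baseline_profile(events):
    """Learn the set of (container, executable) pairs seen during normal operation."""
    return {(e["container"], e["exe"]) for e in events}

def flag_anomalies(events, profile):
    """Flag any process execution not present in the learned profile."""
    return [e for e in events if (e["container"], e["exe"]) not in profile]

# Baseline period: only expected binaries run in the "web" container.
normal = [
    {"container": "web", "exe": "/usr/bin/nginx"},
    {"container": "web", "exe": "/bin/sh"},
]
profile = baseline_profile(normal)

# Live traffic later includes an unexpected ssh execution.
live = normal + [{"container": "web", "exe": "/usr/bin/ssh"}]
print(flag_anomalies(live, profile))
```

The same allowlist-then-alert structure generalizes to network connections, namespace changes, and privilege escalations; the hard engineering is in collecting trustworthy events, which is where eBPF comes in.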
So from here, isolation is another thing for us to think about, continuing that runtime theme and considering that in some cases, for some workloads, certain organizations believe that a shared kernel doesn't provide enough protection. Even with all the things that Linux offers, even with Linux namespaces, cgroups, and SELinux, there are times when it's important to have additional protection. Both KubeVirt and Kata Containers are solutions that offer that additional protection. As telcos migrate their VNFs to CNFs, often their first step is more of a lift-and-shift effort. They have strong dependencies on host environments, and the applications are often so comprehensive that it takes a long time to break them down into microservices and figure out the right isolation pattern for the different services so they can talk to each other. And this is a place where, because telco solutions are often run on bare metal, especially at the edge, KubeVirt, where you can run a VM in a pod on Kubernetes, is an option for more of a lift-and-shift approach. Then, as you break those VNFs down into more of a microservice-based approach, you still might want the protection that something like Kata Containers provides, where I've got a micro-VM that I'm able to leverage and I have more isolation from the kernel. Again, because we want to avoid nested virtualization, these are best for bare metal deployments, but that is a popular space for telco. As we mentioned at the beginning of the conversation, telco workloads often need to use host networks and secondary network interfaces. This can be enabled with the Multus CNI plugin. A couple of the use cases we see where these are particularly important are high-performance multicast workloads, for example, which might leverage SR-IOV or MACVLAN. There are links here if you want to learn more about those solutions. 
But these secondary network interfaces bypass the Kubernetes SDN. So we've got the Kubernetes SDN plugged into Multus, but we have these other CNI interfaces that are also plugged into Multus. Because traffic bypasses the SDN, you don't get the micro-segmentation provided by Kubernetes network policies or the encryption that Istio and service meshes offer. That means we need to think about how to provide some of those same capabilities on these secondary network interfaces. Frankly, this is a space where I think there's a lot more thinking and investment that we can be doing. There are things available, like netstat within the containers, pods, and the host, and intrusion detection solutions such as Snort. But again, when we think about the edge, this is a small-footprint environment, and telco workloads have high-performance requirements. So while there are things we can do to get visibility into this space, I think this is another place where attestation and workload identity are going to be key. Now, I'm going to jump into the next topic, which we need to zip through. Great. So we want to talk about zero trust. Zero trust is a different way of thinking about how you manage security. Traditional security models typically make use of perimeter-based security, where once you're inside a demilitarized zone, a DMZ, or within a firewall, you are given somewhat free rein. That causes challenges from a security standpoint, because once an attacker enters that region, they're able to move somewhat freely. Zero trust assumes that you have no trust: every single action must be authenticated and authorized to ensure identity, integrity, and isolation. So deny all by default, verify everything, and accept only as necessary. And this really comes down to: how do we enforce this? How do we manage this? This is where policies become so important. 
Really, it's the ability to implement rules for safely operating in a Kubernetes cluster. And there are several different components when it comes to policy management. A policy administration point, which is a centralized place for defining policies; these policies are typically backed by a Git repository or some other way to version control them. A policy enforcement point, which is where you ensure the environment matches the desired state defined by your policies, usually through a Kubernetes controller. A policy decision point, which is a policy engine like OPA Gatekeeper or Kyverno; those are two examples in Kubernetes. And then a policy information point, which covers the additional information your policy tool needs to gather to make those policy decisions. And what are some ways we can apply this? We can do it through some of the things Kirsten mentioned previously: ensuring that for network traffic we only have certain allow lists and deny lists, and making sure we only pull from certain approved registries. We can do this at pod admission time, where we validate image signatures. We can also ensure that we have appropriate permissions to authenticate to the Kubernetes API and perform operations on the cluster, through admission controllers, as well as govern the placement of resources in target environments, whether they be target clusters or target namespaces. And I'm going to include one final diagram, which is a policy architecture driven by the CNCF Kubernetes Policy Management white paper. I highly recommend taking a look at that; it's fresh off the presses in the last month. They describe all this in further detail to help you enforce and improve the security of your environment. 
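A policy decision point like the one described above can be illustrated with a toy admission check: reject any pod whose images come from a registry outside an allow list. This is a sketch of the decision logic only, not how Gatekeeper or Kyverno are written (those express policies as Rego or YAML rules); the registry names and pod spec are examples, and the sketch assumes fully qualified image references.

```python
# Example allow list; real policies would come from version-controlled config.
ALLOWED_REGISTRIES = {"registry.redhat.io", "quay.io"}

def admit_pod(pod: dict):
    """Toy policy decision point: return (allowed, violations) for a pod spec.
    Assumes every image reference starts with its registry hostname."""
    violations = []
    for c in pod.get("spec", {}).get("containers", []):
        registry = c["image"].split("/")[0]
        if registry not in ALLOWED_REGISTRIES:
            violations.append(f"container {c['name']}: registry {registry} not allowed")
    return (not violations, violations)

pod = {"spec": {"containers": [
    {"name": "app", "image": "quay.io/myorg/app:1.2"},
    {"name": "sidecar", "image": "docker.io/random/tool:latest"},
]}}
ok, why = admit_pod(pod)
print(ok, why)
```

In a real cluster this decision runs inside an admission webhook, so the rejected pod never reaches the nodes; the same pattern extends to signature validation and placement rules.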
So really, how do we ensure all of these processes, and ensure that our telco workloads are secure? That really comes down to enforcing configuration management. Use declarative configurations, use GitOps; the best part about GitOps is that it really enables collaboration between all your different teams to support DevSecOps. As a result, you can reduce configuration drift between your environments, and most importantly, you can improve your overall security posture across your clusters. Finally, for telco workloads, as I mentioned at the beginning, you have a lot of vendor solutions in the environment. Make sure you communicate with them. Understand expectations and set guidelines for operating in your environment. Provide documentation and examples; you don't want to just say, here you go, try to deploy your solution and see what happens. No, provide documentation, provide examples, make it so that they are able to understand the rules of engagement and be successful. If there are any specific platform requirements that might be unique, make sure those are documented and shared as well. And finally, open communication is key: ensuring that your customers, your workloads, your developers, everyone, are able to use shared channels for answering questions and being successful in the environment. Yeah, and what I love about this space in particular is that all of us use these environments all the time, right? We use cellular networks. We use the internet. This is such a great space where, as developers, as product managers, as consultants, we have the opportunity to contribute to the security of the solutions that we use on a day-to-day basis. So that's really it. Thank you. And we're happy to take any questions if we have time. Thank you, Kirsten and Andy. I think, since it's the last talk of the day, we still have some time for questions. Yeah, please feel free to... Yeah, we have a question in the Q&A section. So feel free to... 
So a question, Kirsten, that you might know more about than I do: are there any plans to use quantum computing in regards to security within Red Hat? Wow, so yes, we do have somebody looking at quantum computing within Red Hat. I think this is probably coming out of our Office of the CTO, although I'll be honest and say that I'm blanking on exactly where in Red Hat that's happening. But absolutely, we are looking at that, and it might be a good topic for us to tee up for a follow-up session. We'll figure out who we can get to come talk about quantum computing. That would be awesome. In 2022, or 2023, pardon me. Yeah, exactly. There are a couple of other questions. I saw the question about what workloads we expect to be running on Kubernetes. We do know that 5G core networks are running on Kubernetes today, and radio access networks are starting to run on Kubernetes. So really, all sorts of different types of workloads are running on Kubernetes for telco, but yes, 5G is really the key place where we're starting to see that happen. So absolutely: the amount of data, the performance, the critical applications, all of them are key. And the edge is where you find the minimal footprint, rather than the core network environment. So you can think of it as: the carriers might have a large Kubernetes cluster where they're deploying a whole bunch of services to enable 5G, and then at the edge there are other environments, especially for radio access networks, cellular. Real time does come up. I have not yet seen a ton of it, but it's absolutely part of the overall environment; it's mostly that some of our colleagues in Red Hat are more involved with the real-time requirements than I am. The last question I saw was: what are some of the areas where you see future opportunity? For me, one is short-lived tokens. 
More and more use of short-lived tokens. A lot of times we use long-lived service credentials; using, as Kirsten mentioned earlier, STS as a solution provides short-lived tokens for cloud resources, but we can also integrate other solutions, for example letting developers communicate with their Git repositories using short-lived tokens, and generally reducing the exposure that comes with long-lived credentials. Absolutely. And I think we talked briefly about image signing and signatures for workload identity. The work happening in Sigstore is an awesome solution there, right? As Andy mentioned, think GitOps: integrate signing into the pipeline, and help our customers integrate signing into the pipeline. This is an awesome opportunity. And finally, I think a real area of interesting investment is going to be how we provide the best security for those secondary network interfaces in an environment with a minimal footprint. Cool. I think that's it. I want to thank everyone for the opportunity to present to you today. It's happy hour over in EMEA; enjoy, everyone in the U.S. and other parts of the world. Have yourselves a wonderful, wonderful day, and hopefully we'll see you all in 2023. Yeah, we're hoping for in-person in 2023. Yep. Thank you very much. Yeah. Great. Thank you to the speakers and to the audience. And at this moment, we are saying goodbye, and see you next time, anywhere. Bye-bye. Bye-bye. Thanks, everyone.