Hello, KubeCon. Tracy from NeuVector here, bringing you a little bit of security as code: Egress Inferno, Stayin' Alive edition. What I'm going to be talking about today is Kubernetes security as code. We're going to take one specific use case and compare different ways of implementing it within a Kubernetes environment: Kubernetes network policies, OpenShift, Istio egress control, as well as NeuVector egress control, which is a commercial approach to egress control. If you're asking yourself, well, why "Egress Inferno, Stayin' Alive edition"? Well, that's what happens when you mix the conference theme with disco and throw in a little bit of copyright infringement on the side as well. Enjoy. First, let's take a very quick big-picture view of security issues for container and microservice deployments, and then we will quickly narrow in on our topic. Ideally, security is built in from the beginning of your pipeline. You want to prevent vulnerabilities from getting into that pipeline, so you're doing scanning. You also want to prevent misconfigurations of Kubernetes and of the hosts themselves. If you are using Kubernetes, then you are all too aware that you don't want network or security issues to negatively impact your velocity; everything needs to be as automated as possible. Once those containers are deployed, there are not only risks from the applications themselves, but also new attack surfaces such as the Kubernetes API, the orchestrator itself, and the container runtime. If you're using a service mesh like Istio, the control or data plane can also be a potential attack vector. And finally, we have all the traditional attacks against applications, just as we did before containers: vulnerability exploits, ransomware, crypto mining. These are all serious things to think about.
But today I am going to focus on one aspect of security, and that is the connections leaving the container workloads for destinations outside the cluster. As my granddaddy said, don't rob a bank on half a tank. Getting away is half the battle. So if getting away is worth their time, doing egress control is certainly going to be worth your time. When we're talking about egress control, just assume that's any connection that goes outside of the Kubernetes cluster or outside the namespace; OpenShift describes that as outside of the project. This could be a desired connection to the internet, an API service, a popular domain, basically anything outside of the cluster, even a legacy database or something like that. The reason egress control is worth at least 15 minutes of your time is that this is how someone actually makes off with your data. We too often focus on preventing attacks or vulnerabilities from getting into our environments, but the egress connections are literally the getaway car. In order to get the value of an infiltration, the attacker may need to use a reverse shell or connect to a command-and-control server, or in the case of crypto mining, you need egress to get your Bitcoin credit. With Kubernetes, egress control can be a challenge because the source IPs of the pods change dynamically when they're spun up or moved node to node. This makes it difficult for traditional firewalls or gateways that need a source IP address to base their policies on. If you try to funnel all your traffic through a gateway inside the cluster in order to egress, that gateway can become a bottleneck, creating performance issues, and it can also obscure the source or the content of the requesting service. For example, a certain service may only be allowed to egress from a certain pod, and that pod needs to be identified by labels or service names instead of IP addresses. So let's take a look at some basic egress controls built into Kubernetes.
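That label-versus-IP point shows up directly in the Kubernetes NetworkPolicy API: pods are selected by label, while external destinations can only be named by CIDR. Here's a minimal sketch; the namespace, labels, and CIDR are illustrative, not from the talk:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-db-egress          # illustrative name
  namespace: demo                # illustrative namespace
spec:
  podSelector:
    matchLabels:
      app: billing               # pods selected by label, not by IP
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 203.0.113.0/24     # example external range (TEST-NET-3)
    ports:
    - protocol: TCP
      port: 5432                 # e.g. a legacy database outside the cluster
```

Once a policy like this selects a pod, all other egress from that pod is denied by default, which is the behavior described next.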
This will be a very quick review. In Kubernetes, you must be using a CNI plugin that supports the enforcement of network policy, and enforcement has to be enabled; just using such a CNI without enabling enforcement will not allow your network security policies to take effect. This applies to inter-pod traffic as well as egress control. The default is that all pods can egress outside the cluster, but as soon as you apply a single network policy, egress control is then enforced. In Kubernetes, you can allow egress based on a destination IP address or port, or you can block all egress. Again, these are namespace-scoped policies. Now, there are some limitations. For example, you can only allow specific egress; there's no concept of a deny rule for a specific IP. So if you've got a block of IPs that are allowed, you can't then deny specific IPs within that allowed range. There's no concept of "external to the cluster," so you have to specifically allow connections outside the namespace and then separately allow specific services. There's no concept of rule prioritization or order; if you're familiar with firewall rules, you know that rule order is important for how the rules are hit, so that can be cumbersome. And there's no hostname or DNS support. Red Hat's OpenShift extends Kubernetes and adds some nice functionality through a custom resource definition, or CRD, called the EgressNetworkPolicy. An EgressNetworkPolicy deploys an egress firewall, which does recognize the concept of egress outside the cluster. You can enforce one EgressNetworkPolicy per namespace, but not on the default namespace, so everything needs to be in a designated namespace. The OpenShift CRD does support the resolution of DNS names and adds the concept of rule ordering. The different rules shown for egress in the example I have here on the right are enforced in exactly the order listed, much like a firewall.
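I can't reproduce the slide's exact YAML here, but an EgressNetworkPolicy of the kind just described, with ordered, firewall-style rules and DNS name support, typically takes this shape; the namespace, hostname, and CIDRs are illustrative:

```yaml
apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: default
  namespace: demo                  # one policy per (non-default) namespace
spec:
  egress:                          # rules are evaluated top to bottom
  - type: Allow
    to:
      dnsName: www.example.com     # DNS names resolved by the egress firewall
  - type: Allow
    to:
      cidrSelector: 203.0.113.0/24
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0      # deny everything else outside the cluster
```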
The EgressNetworkPolicy also supports deny rules in addition to the allow rules and has granularity at the layer-4 port level. A few limitations. First, the egress controls do not apply to routes. So if you're using routes to send traffic from the namespace through a router, the egress controls will not apply to that traffic. Second, the egress controls you see here apply to the demo namespace, but notice there are no pod selectors. In Kubernetes, you can use namespace and pod selectors, but in OpenShift, you can only apply namespace-wide rules. And neither Kubernetes nor OpenShift has the ability to verify the application protocol at layer 7 for egress. That's another layer of security that can prevent tunneling or other types of attacks which try to mask themselves inside a network protocol. So the ability to verify the application protocol being used, not just define it, say for MySQL, MongoDB, or SSL, is not supported. And as noted, the order of the rules here must be set and maintained in the YAML file. A third way of implementing egress rules might be through a service mesh such as Istio. If you're familiar with the architecture of Istio or other Kubernetes service mesh products, you know that a sidecar proxy is deployed next to the pod for every application workload, and connections go through that sidecar proxy. That's how you can implement egress control using a service mesh. In Istio, egress is allowed through the sidecar proxy for DNS lookups as well as HTTP and HTTPS traffic. The mesh example here on the right sets an outbound traffic policy allowing HTTPS egress. You do have the flexibility to bypass the Istio proxy for a specific range of IP addresses, and you can also implement egress control in Istio via an egress gateway, which allows access to external HTTP and HTTPS services from applications inside the mesh.
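As a sketch of the Istio side: the mesh-wide outbound traffic policy can be set to REGISTRY_ONLY so sidecars only permit egress to services the mesh knows about, and a ServiceEntry then registers a specific external host. The hostname here is illustrative:

```yaml
# Mesh config: block egress to destinations not in the service registry
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    outboundTrafficPolicy:
      mode: REGISTRY_ONLY
---
# Register one external HTTPS service so sidecars allow egress to it
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-example
spec:
  hosts:
  - www.example.com          # illustrative external host
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
  - number: 443
    name: tls
    protocol: TLS
```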
A specific use case for this approach might be a cluster whose application nodes don't have public IPs, so the in-mesh services that run on them cannot access the internet. By defining an egress gateway, you can direct all of the egress traffic through it and allocate public IPs to the gateway nodes, so application nodes can access external services in a controlled way. The limitations of a service mesh: first and foremost, and I'm taking this from Istio's own documentation, a sidecar proxy is not considered secure and can be bypassed. So whether it's bypassed deliberately for specific IP ranges or bypassed maliciously, Istio will not guarantee traffic is going through the proxy. And as I said before, other than HTTP and HTTPS, there is no verification of the application protocol. If you're thinking about using the egress gateway approach and funneling traffic through that gateway, again Istio's documentation says it cannot securely enforce that all traffic goes through the egress gateway. So if you really want to lock down your environment, you're probably going to have to combine Istio gateway policy with some basic Kubernetes network policies to enforce that all traffic goes through the gateway and to deny all other egress traffic, which is perhaps a little more prone to misconfiguration or potential unknown holes that allow egress you may not be aware of. And finally, there are the performance considerations of a sidecar on every pod enforcing egress, or of funneling all traffic through the gateway; depending on your application, that could be an impairment. So now let's take a look at a fourth way of doing egress control, and that would be with a commercial product called NeuVector. NeuVector is a container security product that can enforce network egress control policies based on source and destination IP addresses for applications running outside of a cluster, or on DNS names that can be resolved.
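The combination just mentioned, an Istio egress gateway plus basic Kubernetes network policies, can be sketched as a NetworkPolicy in the application namespace that denies all egress except to the istio-system namespace (where the gateway runs) and DNS. The namespace and labels are illustrative and depend on how your cluster labels namespaces:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-via-gateway-only
  namespace: demo                        # illustrative application namespace
spec:
  podSelector: {}                        # all pods in the namespace
  policyTypes:
  - Egress
  egress:
  - to:                                  # only the namespace hosting the egress gateway
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: istio-system
  - to:                                  # still allow DNS lookups
    - namespaceSelector: {}
    ports:
    - protocol: UDP
      port: 53
```

Note that `kubernetes.io/metadata.name` is only set automatically on recent Kubernetes versions; on older clusters you'd label the istio-system namespace yourself.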
NeuVector also verifies the layer-7 application protocol, with support beyond HTTP and HTTPS to include TCP, UDP, DNS, and others. It also provides granular egress rules beyond network IP addresses or hostnames, using the container-native deployment names, labels, or groups of namespaces. And these can be combined with process security rules and file access security rules. So if you're asking, well, wait, how can I combine network security policies with process and storage or file access protection, how does that work? Well, NeuVector also uses a CRD, much like OpenShift: the NvSecurityRule custom resource definition allows for egress and ingress control at the network level, with both allow and deny rules, the layer-7 protocol, as well as any ports. NeuVector also implements file protection for specific file locations and directories, as well as process protection, basically blocking any process that hasn't been seen before. NeuVector uses different policy modes; there's a discover mode that automatically learns all the processes and all the network activity, so you don't actually have to write this YAML file. It can be created automatically, then exported and applied to other clusters as security as code. So, a quick summary for NeuVector: security as code allows you to define specific application behaviors in Kubernetes-native YAML that can be exported and applied to multiple clusters running the same applications with the same rule sets; it automatically learns the processes and network activity; you can add file system activity protection as well; it integrates with RBAC, makes it easy to move rules from one environment to another, and supports OPA and other integrations. So here is a quick egress control comparison between the four approaches. Feel free to screen-cap this particular slide. And finally, in summary: you definitely should be looking at egress control. Again, it's half of the robbery.
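For orientation, an NvSecurityRule combining network, process, and file rules in one resource roughly takes the shape below. This is a sketch, not an exact schema: the group names, criteria, and field values are assumptions, so check the current NeuVector CRD documentation before using it.

```yaml
apiVersion: neuvector.com/v1
kind: NvSecurityRule
metadata:
  name: demo-app-rule              # illustrative name
  namespace: demo
spec:
  target:
    selector:
      name: nv.demo-app.demo       # assumed NeuVector group for the workload
  egress:
  - selector:
      name: external               # destinations outside the cluster
    applications:
    - MySQL                        # layer-7 protocol is verified, not just declared
    ports: tcp/3306
    action: allow
  process:
  - name: mysqld
    path: /usr/sbin/mysqld         # illustrative allowed process
    action: allow
  file:
  - filter: /etc/passwd            # illustrative monitored file
    behavior: monitor_change
```

In discover mode, a rule set like this is learned from observed behavior and exported, rather than written by hand.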
There are lots of good ways to enforce it via open source, and lots of good ways to enforce it with commercial products such as NeuVector. Definitely consider your egress as part of your segmentation strategy, and include that process and file protection as well. Basically, assess your risks and apply security where appropriate. So thank you so much for taking the time to get through this entire video. Much respect. Hope you're having a great conference, from everybody at NeuVector. Thank you.