Hey, my name is Natalia Venko. I'm a security engineer at Isovalent, and today I would like to talk about some of the concerns that every security team has, and how to overcome them with the power of eBPF. If you are running your application or your service in a cloud native environment today and you want to secure it, you have probably already realized that you're having a hard time. That's because the threat landscape is extremely wide and the attack scenarios are basically infinite. If you look at that landscape, you get overwhelmed: it's basically up to the attacker's creativity and imagination what they can and will do. But prioritizing the components where you have the least trust and the most risk is a good way to get started. Today I would like to focus on network monitoring and observability, and show you a couple of examples of things that every security team or SecOps team should monitor for. We have another talk at KubeCon which presents more specific attacks against a live cluster, but today I'm going to show you some quick wins that are not hard to implement. If you apply them, or teach them to your security team, you are going to be a bit more secure. So what's the main challenge with network monitoring and threat detection in a cloud native environment? First of all, traditional network monitoring tools are based on IPs and ports, and they are not really effective in cloud native environments like Kubernetes, where workloads are containerized, bundled together, and the IPs are constantly changing. IPs and ports are not meaningful there anymore. We need a solution that translates these IP addresses and ports into Kubernetes identities like pods, network policies, and services, and abstracts the observability and monitoring data easily and efficiently. There are tons of tools out there nowadays that a security team or SecOps team can use, but today I'm going to take advantage of the power of eBPF. So what is eBPF?
eBPF is really the optimal tool to provide this observability data for Kubernetes workloads like pods, because a pod is just a set of Linux processes running in the context of kernel namespaces, and that's where eBPF resides. eBPF is a virtual machine in the kernel with programmable logic, so one can attach eBPF programs directly to kernel code paths and extract meaningful Kubernetes identity information right from there. To achieve those quick wins that every security team should implement, I'm going to use Cilium and the power of eBPF. As a little bit of background, Cilium deploys as a DaemonSet inside a Kubernetes environment: there is a Cilium agent running on each Kubernetes worker node, and it communicates with the Kubernetes API server to understand pod identities, network policies, services, and so on. Based on the identity of each workload, Cilium generates a customized and highly efficient eBPF program to achieve, among other things, observability and security. Cilium is able to very efficiently observe what happens inside a Linux system. It can collect and filter that observability data directly in the kernel and then export it to user space, where it can be sent to a variety of log aggregation and SIEM systems, for example Elasticsearch or Splunk. This is how a raw JSON event looks after it has been exported to user space by an eBPF program. You can see Kubernetes identity information like pods, containers, and labels, and you can export it to any log aggregation or SIEM system out there; it doesn't really matter which, to be honest. For this demo, I'm going to use Splunk. We are going to see four examples of things that are really easy to monitor for but that people tend to forget or overlook. They are really specific to cloud native environments and fall under network monitoring. The first is to monitor for unintentionally exposed services.
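As a rough illustration, here is what such an exported event might look like and how its Kubernetes identity fields can be pulled out. The field names below are modeled loosely on Cilium's Hubble flow output, but treat them as illustrative assumptions rather than the exact schema of any particular Cilium version:

```python
import json

# A hypothetical flow event, loosely modeled on Cilium's Hubble JSON
# output; the exact field names in your Cilium version may differ.
raw_event = """
{
  "verdict": "FORWARDED",
  "source": {
    "namespace": "guestbook",
    "pod_name": "frontend-5fd859dcf6-x2k4p",
    "labels": ["k8s:app=guestbook", "k8s:tier=frontend"]
  },
  "destination": {
    "namespace": "guestbook",
    "pod_name": "redis-master-0",
    "labels": ["k8s:app=redis", "k8s:role=master"],
    "port": 6379
  }
}
"""

event = json.loads(raw_event)

# Report the Kubernetes identities instead of raw IPs and ports.
print(event["source"]["pod_name"], "->",
      event["destination"]["pod_name"],
      "verdict:", event["verdict"])
```

The point is that the event carries pod names, namespaces, and labels, so downstream queries in Splunk or Elasticsearch can match on stable identities rather than ephemeral IPs.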
The second and the third are related to the Kubernetes API server and the Kubernetes dashboard, and the fourth covers monitoring access to the cloud provider metadata service. So let's assume that your platform team is new to Kubernetes and they need a Redis service deployed, and they just put type LoadBalancer instead of type ClusterIP and, boom, your Redis service got exposed to the internet. So how can we catch this exposed service? We need to monitor for connections which are not coming from the internal network but from outside, from the internet. Here you can see the Guestbook application from Kubernetes that I set up on a live GKE cluster. You can see the frontend pods and the Redis pods, and you can see the services listed in the guestbook namespace: the frontend service and the two Redis services. Both of the Redis services are exposed to the internet on an external IP on port 6379. Here in the browser we can see the frontend service, and we can actually see that it's working. Now we can go back and generate some traffic; I can actually ping the database, and we see that it responds. Now I'm going to go to Splunk and use some signatures and the power of eBPF to actually see that traffic. Here in Splunk, I'm going to use a predefined signature to check for all the incoming connections from the outside world, grouped by Kubernetes destination labels. For example, here we can see the redis label, the network policy decision, whether the packet was forwarded or dropped, and how many requests were made. So basically here we can see the connections that I generated before. In the second demo, we are going to monitor access to the Kubernetes API server. In this scenario, we can assume that a pod got compromised, or somebody stole a token, or there is actually a vulnerability in the Kubernetes API server itself.
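The detection logic behind that exposed-service signature can be sketched in a few lines. The event shape here is an illustrative assumption; `reserved:world` is the special identity Cilium assigns to traffic coming from outside the cluster:

```python
# Minimal sketch of the "exposed service" check: flag flows that come
# from outside the cluster (Cilium marks these with the special
# "reserved:world" identity) and land on a sensitive label such as
# redis. The flow dictionaries below are illustrative, not a real schema.
flows = [
    {"src_labels": ["reserved:world"],
     "dst_labels": ["k8s:app=redis"], "verdict": "FORWARDED"},
    {"src_labels": ["k8s:tier=frontend"],
     "dst_labels": ["k8s:app=redis"], "verdict": "FORWARDED"},
]

def external_hits(flows, sensitive_label):
    """Return flows that reached a sensitive workload from the outside world."""
    return [f for f in flows
            if "reserved:world" in f["src_labels"]
            and sensitive_label in f["dst_labels"]]

alerts = external_hits(flows, "k8s:app=redis")
print(len(alerts), "external connection(s) reached redis")
```

In the demo the same filtering is done by a Splunk search over the exported events, but the grouping idea is identical: outside-world source, sensitive destination label.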
In this case, we should monitor access and connections to the Kubernetes API server, whether they come from the internal network or from somewhere else. So let's assume that I have a running web application; that's what we can see on the left terminal. I have multiple services, for example a recruiter and a job-posting service, and I have multiple databases like Elasticsearch and ZooKeeper. Let's assume that the crawler pod got compromised; that's what we can see on the right terminal. Now I'm going to generate some traffic to the Kubernetes API server. We can see that the status is 403, Forbidden, because I don't have the token, but for generating traffic it's actually good enough. Now, if we go to Splunk with a predefined signature, we can actually see which workloads are accessing the Kubernetes API server. Here we can see, in the signature, the Kubernetes API server IP, and we can also see the source names, the source pods, and the source labels, so basically where the connections were coming from, the network policy decision, whether the packet was forwarded or dropped, and the destination IP. And here we can see the crawler pod, which we assumed was compromised. In the third example, we are going to monitor connections and accesses to the Kubernetes dashboard. The Kubernetes dashboard is a really sensitive web UI which allows you to deploy and troubleshoot applications and to manage the cluster itself. But obviously it's also an open door for attackers if you leave it exposed. Here on the left side of the screen, you can see the pods in the kubernetes-dashboard namespace: the metrics scraper and the dashboard itself. If we check the services in the same namespace, we can see both the Kubernetes dashboard service and the metrics scraper service. We can also see that both have an external IP, which we can access from the browser.
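Returning to the API-server demo for a moment: the check behind that Splunk signature can be sketched in Python. The pod names, flow shape, and API server IP below are all illustrative assumptions for the sketch, not values from the live cluster:

```python
from collections import Counter

# Sketch: count which workloads talk to the Kubernetes API server by
# matching the destination IP of each flow against the API server IP.
API_SERVER_IP = "10.0.0.1"  # hypothetical; substitute your cluster's value

flows = [
    {"src_pod": "crawler-7d4b9", "dst_ip": "10.0.0.1", "verdict": "FORWARDED"},
    {"src_pod": "crawler-7d4b9", "dst_ip": "10.0.0.1", "verdict": "FORWARDED"},
    {"src_pod": "recruiter-66f", "dst_ip": "10.2.3.4", "verdict": "FORWARDED"},
]

hits = Counter(f["src_pod"] for f in flows if f["dst_ip"] == API_SERVER_IP)
for pod, count in hits.items():
    print(f"{pod} made {count} request(s) to the API server")
```

Even a 403 response shows up here, which is exactly why generating forbidden requests is enough to test the signature.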
So now let's try to access this external IP of the Kubernetes dashboard from the browser. We can see that it was successful: we can see cron jobs, DaemonSets, and deployments, we can manage the cluster, and basically we can do whatever we want. With the following predefined signature in Splunk, we can actually monitor who had access to the Kubernetes dashboard from the outside world. We can see the destination namespace here, the pod and labels, the network policy decision, whether the packet was forwarded or dropped, and the source. So basically here we can see all the connections that I generated previously. In the fourth example, we are going to monitor connections and access to the cloud provider metadata service. This service contains highly sensitive information about your environment and has a predefined IP for each cloud provider. The setup is going to be the same as it was in the previous demo: we can see a running application on the left terminal with multiple services, and on the right terminal I'm in the crawler pod. Now let's generate some traffic to the cloud provider metadata service. With the following predefined signature, we can actually look for the connections and accesses to the cloud provider metadata service. We can see the source namespace, the pod, and the labels, so basically where the connection was coming from. You can see the network policy decision, whether the packet was forwarded or dropped, the destination IP and pod, and how many requests were made. So basically these are the connections that I generated previously. And the last thing is to educate your team, because at the end of the day, security is all about people. It all comes down to your team, and the more you educate your team, the more efficient you are going to be, even if it's just with minor things and bullet points. Thank you for your attention.
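To round out the fourth demo, the metadata-service check follows the same pattern as the API-server one. A minimal sketch, where the flow shape and pod names are assumptions, while 169.254.169.254 is the well-known link-local metadata endpoint on the major cloud providers:

```python
# Sketch of the metadata-service check: flag any pod whose traffic is
# destined for the cloud provider metadata endpoint.
METADATA_IP = "169.254.169.254"  # shared by AWS, GCP, and Azure

flows = [
    {"src_ns": "jobs", "src_pod": "crawler-7d4b9",
     "dst_ip": "169.254.169.254", "verdict": "FORWARDED"},
    {"src_ns": "jobs", "src_pod": "recruiter-66f",
     "dst_ip": "10.4.0.12", "verdict": "FORWARDED"},
]

suspects = [(f["src_ns"], f["src_pod"]) for f in flows
            if f["dst_ip"] == METADATA_IP]
print("pods that touched the metadata service:", suspects)
```

Since almost no ordinary workload needs the metadata service, any hit from an application pod is worth an alert.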
I will have a Q&A for 10 minutes, and you can also ask me questions in the session now.