Hello and good morning, Dublin. I'm Liz Rice and I work at Isovalent, the company that originally created the Cilium project. Do we have any Cilium users in the room? I can see a few hands. OK. Cilium is built on a technology called eBPF, and I'm here today to talk about why eBPF is a really great platform for building cloud native infrastructure tooling like observability, security and networking. These tools have often been deployed using a model called sidecars. That model has got us a long way, but eBPF opens up a whole new approach that I think is better in many circumstances.

Before containers, we used library code to share common functionality across multiple applications. The library code that you wrote, or that you imported from a third party, had to be in the same language as your app. When we move our apps to Kubernetes, we put the apps into containers, the containers run in pods, and that gives us the opportunity to pull the common functionality out into a separate container that we inject into every pod and call a sidecar. This model has been widely used for logging, tracing, security and service mesh.

So that's sidecars. How about eBPF? The acronym stands for extended Berkeley Packet Filter, but honestly, the acronym is pretty much meaningless now because you can do so much more than packet filtering. What eBPF really means now is the ability to dynamically change the way the kernel behaves by loading custom programs into the kernel. It's a little bit like the way JavaScript lets us change how a web page behaves. Normally, as app developers, we write code in user space and we don't think very much about the kernel, but every time our user space code, our application code, wants to do anything that touches hardware, like accessing a file, communicating over the network or allocating memory, it needs assistance from the kernel. We don't usually think about it because it's abstracted away by higher-level programming languages, but our applications are asking the kernel for that assistance all the time.

When we use eBPF, we can write programs, load them into the kernel, and attach them to events in the kernel. Whenever that event happens, the eBPF program runs. Those events could be any function call, any trace point, or network packets arriving at certain points in the networking stack. All of these things can be used to trigger our custom eBPF programs.

In a Kubernetes environment, our applications are running inside containers, inside pods, but all of the containers in all of the pods on any given node, on any given host, are sharing one kernel. That means whenever our applications want to do anything interesting, that kernel on that virtual machine or bare metal machine is involved. The kernel is aware of everything that's happening in all of our applications on that Kubernetes node. If we instrument that machine with eBPF and attach to the right events, we can build tooling that's aware of all of those applications, and we don't need to change anything about those applications or how they're configured for this tooling to work. When we add instrumentation using eBPF, we only have to install one instance per host rather than one instance for every pod. That is one of the reasons why eBPF is a really powerful platform for building tooling in a cloud native environment.
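To make "attaching programs to kernel events" concrete, here is a minimal sketch (not from the talk) of an eBPF program written in C with libbpf-style helpers. It attaches to the execve syscall tracepoint and logs the name of every process that starts anywhere on the node, regardless of which pod or container it belongs to. The file name and the loading workflow are illustrative assumptions.

```c
// exec_trace.bpf.c -- minimal illustrative sketch (hypothetical file name).
// Attach to the sys_enter_execve tracepoint and log the calling process name.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("tracepoint/syscalls/sys_enter_execve")
int trace_exec(void *ctx)
{
    char comm[16];

    /* Name of the task that invoked execve(). */
    bpf_get_current_comm(&comm, sizeof(comm));

    /* Output appears in the kernel trace pipe
     * (/sys/kernel/debug/tracing/trace_pipe). */
    bpf_printk("execve called by %s", comm);
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```

Compiled with clang targeting BPF and loaded with a libbpf-based loader or bpftool, this single program observes every container on the node, with no change to any pod spec.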
For a sidecar container to observe or interact with application containers, it has to run inside the same pod and share Linux namespaces with the application containers. To get that sidecar into the pod, there has to be some YAML to configure it. Usually you don't write that YAML by hand; it's an automated process. Maybe an admission controller adds the sidecar YAML as your pods are being deployed, or maybe it happens earlier in the CI/CD pipeline. But if something goes wrong, or the pipeline is misconfigured, and the sidecar container YAML doesn't get added, then the sidecar container doesn't run and your instrumentation is not going to be instrumenting that particular pod.

In contrast, with eBPF we only have to instrument the node once. As soon as we load eBPF programs into the kernel, they can start observing and interacting with all the processes that are already running. You don't need to restart your pods, and there's no need to reconfigure them in any way for eBPF-based tooling to work. eBPF can see all of the activity on the node, so a malicious process is just as visible as the regular processes.

The sidecar model can also be pretty wasteful of resources. Every pod has to be configured to allow for the CPU and memory not just of the application container, but also of the sidecar container. And there can be duplicate copies of configuration and state information in every one of those sidecar instances. Pods are by design isolated from each other, so if those sidecar containers want to share information, they're really only supposed to communicate using network messages; that's how you have to share information between sidecars. But with eBPF, we have data structures called maps, which allow us to share information between eBPF programs, and between those programs in the kernel and any user space agent. So we can share information, and use resources, much more efficiently than in the sidecar model.

There's lots of really powerful tooling that's already been built using eBPF, and I'm going to show a couple of examples from the CNCF landscape. One is an observability tool called Pixie. This project uses eBPF to collect a whole variety of different metrics and present them to users graphically, like this example of a flame graph showing CPU usage across a whole cluster. This example shows Cilium's Hubble component, which gives network observability. You can see individual packets flowing, you can see how Kubernetes services are communicating with each other, and you can see metrics at layer 7 and layer 4.

Cilium also uses eBPF to provide networking capabilities. I mentioned before that eBPF programs can be attached to events in the networking stack, and Cilium can use this to bypass parts of that networking stack to deliver packets more efficiently. As well as connectivity, Cilium provides security based on eBPF, with things like transparent encryption in the kernel and dropping packets that are out of security policy. At the end of last year, we added additional capabilities to Cilium to make it a service mesh, or to make it service mesh enabled. A service mesh provides connectivity between applications. It abstracts away the underlying network and provides extra features if the network doesn't already provide them, like observability, security and traffic management.
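Both of those ideas, maps shared between the kernel and user space, and eBPF programs attached to points in the networking stack, can be sketched in a few lines. The following is illustrative only (not Cilium's code; the file and map names are assumptions): an XDP program that counts packets per IP protocol in a map that a user-space agent can read, and that a policy-enforcing variant could turn into a drop decision.

```c
// pkt_count.bpf.c -- illustrative sketch (hypothetical file and map names).
// Count packets per IP protocol in a map that user space can read,
// for example with `bpftool map dump name protocol_counts`.
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

struct {
    __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
    __uint(max_entries, 256);      /* one slot per IP protocol number */
    __type(key, __u32);
    __type(value, __u64);
} protocol_counts SEC(".maps");

SEC("xdp")
int count_packets(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;
    struct ethhdr *eth = data;
    struct iphdr  *ip  = data + sizeof(*eth);

    /* Bounds check to satisfy the verifier before touching headers. */
    if ((void *)(ip + 1) > data_end)
        return XDP_PASS;
    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;

    __u32 key = ip->protocol;
    __u64 *count = bpf_map_lookup_elem(&protocol_counts, &key);
    if (count)
        (*count)++;

    /* Observe only; a security policy program could return XDP_DROP here. */
    return XDP_PASS;
}

char LICENSE[] SEC("license") = "GPL";
```

One copy of this program per node sees traffic for every pod on that node, and the map is the shared channel between the kernel side and whatever user-space agent consumes the counts.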
But what was innovative about Cilium's approach to service mesh was that we built it without sidecars. All service meshes use a network proxy to handle and process traffic at layer 7, the application layer. For a long while, other service mesh implementations like Istio and Linkerd have used the sidecar model to inject that proxy into every application pod. Cilium's implementation allows us to share a proxy across multiple pods. Avoiding the sidecar model helps us avoid the complexity and the resource usage that's often been associated with configuring that proxy in every pod.

Combining the service mesh with eBPF also gives us a much more efficient networking path. In the sidecar model, a packet from the application has to traverse the network stack multiple times just to leave the pod. If we have two pods communicating, there's a proxy at either end, and the packet has to take that convoluted network path at both ends of the communication. Using eBPF, we can pass through a single proxy while making much more direct connections to the application container at either end. This can have a really significant impact on latency. We know from Cilium service mesh beta testers that the operational complexity of configuring the sidecars, the resource costs, and the additional latency of sidecar-based service meshes had been too high a price to pay for some organisations to adopt a service mesh.

Now, other service meshes are also recognising the benefits of removing sidecars. Istio introduced a sidecarless option just last week. There are a few differences in the way that Cilium and Istio are approaching this, but we're both taking sidecars out of the equation and using an Envoy proxy to handle the complex layer 7 parts. The quote there: "Sidecars have always been an unfortunate implementation detail. Mesh features will move to the underlying infrastructure." And that's really what we were asking ourselves in Cilium: can we move service mesh entirely, not just into the underlying infrastructure, but into the kernel? Well, we haven't moved everything into the kernel, but we've moved a significant part into the kernel using eBPF, delegating responsibility to the Envoy proxy in user space to handle that complex layer 7 processing. That's a pattern that's been used before. For example, Suricata makes security decisions based on network packets sent to it by the kernel over NFQUEUE, and when you plug in a hardware device, the kernel calls out to a user mode helper to deal with some of the complexities of configuring the kernel module that's being loaded.

In service mesh, there could still be reasons that you want to co-locate a proxy in your application pods. For example, if you've got an application that uses a complex custom Wasm filter in its proxy, you may want to isolate that proxy so that it can't affect other application pods. So I think we'll still see service meshes offering a menu of options for how you deploy that network proxy.
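Cilium's real service mesh datapath is considerably more involved than anything that fits on a slide, but to give a flavour of how eBPF can steer traffic towards a shared, node-local proxy without injecting a sidecar into every pod, here is a heavily simplified, illustrative sketch (not Cilium's implementation; the proxy address and port are assumptions). It attaches to the cgroup connect4 hook and rewrites outgoing TCP connections so they land on a local proxy.

```c
// steer_connect.bpf.c -- heavily simplified, illustrative sketch.
// Rewrite outgoing TCP connect() calls so they go to a node-local proxy.
#include <linux/bpf.h>
#include <linux/in.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

/* Hypothetical node-local proxy listening on 127.0.0.1:15001. */
#define PROXY_IP4  0x7f000001u
#define PROXY_PORT 15001

SEC("cgroup/connect4")
int steer_to_node_proxy(struct bpf_sock_addr *ctx)
{
    /* Only rewrite TCP connections. */
    if (ctx->protocol != IPPROTO_TCP)
        return 1;               /* 1 = allow the connect() to proceed */

    /* A real implementation would exclude the proxy's own outbound
     * connections and record the original destination somewhere the
     * proxy can look it up; both are omitted here. */
    ctx->user_ip4  = bpf_htonl(PROXY_IP4);
    ctx->user_port = bpf_htons(PROXY_PORT);
    return 1;
}

char LICENSE[] SEC("license") = "GPL";
```

The point of the sketch is the shape of the approach: one shared proxy per node, reached directly at the socket layer, instead of a proxy injected into every pod.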
But does eBPF mean the end of the sidecar model altogether? eBPF programming is essentially kernel programming, so that does create quite a high barrier that stops people from just rewriting all of their tooling in eBPF overnight. Sidecars running in user space are really great for experimentation and innovation. Also, in some managed container environments, you don't always have access to the underlying virtual machine or bare metal machine, and in those cases you may not be able to run eBPF-based tooling unless you have the co-operation of the cloud provider.

But more broadly, I think we're going to see a lot of infrastructure tooling implemented in eBPF, and one of the reasons is this ability to avoid sidecars and improve performance. It's the improvement in performance that creates the demand to implement more and more of the complexity within the kernel using eBPF. And because eBPF programs are loaded dynamically, we don't all have to run the same tooling, and we can innovate in the kernel without requiring everyone to use the same custom programs. So the sidecar model does still have its place, and we'll continue to see it used in some environments. But I believe eBPF has a huge role to play in the future of infrastructure tooling for observability, for security and for networking. Thank you very much.