All right. Welcome back, everyone. It's me again. I will be slightly less fun than last time, I'm sure. My apologies to anyone who arrived late and doesn't know what I'm talking about. Apparently there were some AV issues. I'm not sure whether it was recorded; that suits me just fine. We were here to break the ice, especially for the people in the room. But this afternoon I'm here to talk about something which is more important and more relevant to everyone, which is eBPF. Either it is the panacea which will solve all our woes, or it is something which just makes life a bit easier, is only for vendors to think about, and isn't relevant if you're an end user. It could be one of those things. There's an angel on this shoulder and a demon on that shoulder. First up, I would like to invite the angel to give a talk. Thomas Graf is the CTO and co-founder of Isovalent, the creators and sponsors of the Cilium project, of which he is a creator and maintainer. He is also chair of the governing board of the eBPF Foundation. He's going to give us the "for" argument in the case of eBPF over the course of the next 10 minutes. We will hear the case against after that. So please join me in welcoming Thomas.

Thank you, Greg. Hello, everybody. My name is Thomas. Maybe adding to the intro, my background is as a kernel developer. What the heck is a kernel developer doing at a service mesh conference? We have been creating Cilium, a cloud native CNCF project providing networking, security, and observability, and we're doing this with eBPF. Cilium is being run by lots and lots of different users. I've listed a couple of them here, including major cloud providers, GKE, EKS, DigitalOcean, Alibaba, large SaaS companies, and so on. And what we have been hearing loud and clear from our users is: can you bring this magical eBPF technology to service mesh? Can you make it a bit simpler? Could you potentially get rid of sidecars? We love service mesh, but could you simplify the data path piece? So maybe we can, and this talk is exactly about that.

So what is eBPF? I will keep this very quick. I have 10 minutes, and I don't want to bore you with low-level kernel details, but in a nutshell, eBPF makes the Linux kernel programmable. The easiest one-sentence description is: what JavaScript is to the browser, eBPF is to the Linux kernel. So we can load eBPF programs. In this picture, we're seeing an example where we run an eBPF program when a system call is being performed, but we can do this for a variety of other events: network packets, socket events, security events, and so on. There are entire security systems being built with eBPF. For example, Google is working on LSM BPF to secure Linux further. We've also released Tetragon a couple of days ago, which provides the runtime side of observability and enforcement. eBPF is definitely not something that is only used by vendors. eBPF is powering giant data centers, including, for example, Facebook/Meta's data centers, for DDoS protection and load balancing. If you have an Android phone, you're using eBPF every single day: all of the traffic accounting and traffic management is done with eBPF. eBPF is everywhere, but usually you cannot see it. And our goal is that with a service mesh on eBPF, you don't have to see it either. We want to optimize the data path. Something that's very, very important: Cilium does not require you to choose sidecar or no sidecar.
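To make the "what JavaScript is to the browser" analogy concrete before moving on: a minimal eBPF program that runs every time a particular system call is entered could look roughly like the sketch below. This is illustrative libbpf-style C, not code shown in the talk; the map and program names are invented for the example.

```c
// count_execve.bpf.c -- minimal illustrative sketch, not code from the talk.
// The kernel runs this tiny program on every entry to the execve() system
// call and bumps a counter in a map that a userspace loader can read out.

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_ARRAY);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, __u64);
} exec_count SEC(".maps");

SEC("tracepoint/syscalls/sys_enter_execve")
int count_execve(void *ctx)
{
    __u32 key = 0;
    __u64 *val = bpf_map_lookup_elem(&exec_count, &key);
    if (val)
        __sync_fetch_and_add(val, 1);   /* one more execve() observed */
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```

A userspace loader (bpftool or a small libbpf program) verifies, loads, and attaches this; the same load-and-attach mechanism is what lets programs hook network devices, sockets, and LSM hooks, which is what projects like Cilium and Tetragon build on.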
On the sidecar question: we have had an Istio integration since even before Istio went 1.0, several years ago, which allows you to run Cilium and Istio alongside each other, and the layer 7 policy that Cilium supports will be enforced in the Istio sidecars. In this model, we're also able to accelerate and secure Istio. What is new is option one, where we're looking to implement a data path that is built entirely with eBPF and Envoy and does not require a sidecar. It still requires a proxy at times, but not necessarily one proxy per pod. We'll talk about this.

One example where I can show how eBPF can add value to Istio today, and this is one of many, is sidecar injection: on the network side, there is actually clear text between the app and the sidecar. This is the arrow that we see going through the stack. For security teams, that can be a problem, because anybody with CAP_NET_ADMIN privileges will have visibility into that clear text. As we announced in 2018 at EnvoyCon, we built sockmap acceleration, which allows us to accelerate and secure the connectivity from the app to the sidecar, so the clear text never hits the wire. There's never a single network packet being created, and thus the compliance and security teams are fine with it again, and it allowed customers of ours to actually run Istio, even though there was previously clear-text information on the loopback device.

If you look at service mesh, what we want (observability, security, traffic management, resilience) has existed before on the networking layer, and I've listed a couple of tools here, but it's very clear that they fall short of cloud native principles and requirements today. What we need, in a nutshell, is to elevate all of that to layer 7, but at the same time we cannot forget about layer 3 and layer 4, the pure networking level. So I think what we want is a service mesh that combines the old world, the networking world, with the new world of the service mesh, while providing observability, security, traffic management, and resilience.

Service mesh origins: I think you all understand this. In order to fix these issues, the first approach was to build this type of functionality into the apps themselves. This was not really feasible, because you had to put it into every single application framework. So sidecars were an obvious way out of this. Sidecars have a couple of shortcomings: fairly complex network injection, and you have to run a lot of sidecars, which can become a problem at scale. We're not claiming, we're not saying, that sidecars are never an option. They're definitely a great option for many, many use cases. But in some cases the performance penalties and the efficiency can become a bottleneck for you, and you might be looking for an alternative. For those users, we want to provide a sidecar-free alternative.

So we're essentially introducing Cilium Service Mesh. What is the goal of Cilium Service Mesh? We want to provide a sidecar-free data path whenever possible: use eBPF natively, and I will talk about when to use eBPF, and if that is not technically possible, fall back to Envoy. We want native network performance and latency, and we want mTLS support that can support any network protocol. If we use mTLS or TLS as the protocol for both authentication and transport, we're pretty much limited to TCP. It can be done otherwise, but it's really hard. By separating authentication from transport, we can support any network protocol with mTLS.
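The sockmap acceleration mentioned above can be sketched roughly as follows: a sock_ops program records established local TCP connections in a sockhash map, and an sk_msg program redirects each sendmsg() from a tracked socket straight into its peer's receive queue, so app-to-sidecar traffic over loopback never has to become network packets. This is an illustrative sketch under simplifying assumptions (IPv4 only, invented names, loader and attach steps omitted), not Cilium's actual implementation.

```c
// sockmap_accel.bpf.c -- illustrative sketch only, not Cilium's implementation.

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

/* Connection key; fields are normalized to one convention so that the
 * sock_ops and sk_msg programs build identical keys for a given socket. */
struct sock_key {
    __u32 sip;    /* local IPv4 of the socket (network byte order)  */
    __u32 dip;    /* remote IPv4 of the socket (network byte order) */
    __u32 sport;  /* local port, host byte order                    */
    __u32 dport;  /* remote port, host byte order                   */
};

struct {
    __uint(type, BPF_MAP_TYPE_SOCKHASH);
    __uint(max_entries, 65535);
    __type(key, struct sock_key);
    __type(value, __u64);
} sock_hash SEC(".maps");

/* Attached to a cgroup: record every TCP socket once it is established. */
SEC("sockops")
int record_established(struct bpf_sock_ops *skops)
{
    if (skops->op != BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB &&
        skops->op != BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB)
        return 0;

    struct sock_key key = {
        .sip   = skops->local_ip4,
        .dip   = skops->remote_ip4,
        .sport = skops->local_port,               /* already host order */
        .dport = bpf_ntohl(skops->remote_port),   /* network -> host    */
    };
    bpf_sock_hash_update(skops, &sock_hash, &key, BPF_ANY);
    return 0;
}

/* Attached to the sockhash as a msg verdict program: on sendmsg(), look up
 * the peer socket (its local side is our remote side) and redirect the data
 * straight into its ingress queue, bypassing the TCP/IP stack on loopback. */
SEC("sk_msg")
int redirect_to_peer(struct sk_msg_md *msg)
{
    struct sock_key peer = {
        .sip   = msg->remote_ip4,
        .dip   = msg->local_ip4,
        .sport = bpf_ntohl(msg->remote_port),
        .dport = msg->local_port,
    };
    /* If the peer is not in the map, the redirect simply fails and the
     * message continues down the normal path. */
    bpf_msg_redirect_hash(msg, &sock_hash, &peer, BPF_F_INGRESS);
    return SK_PASS;
}

char LICENSE[] SEC("license") = "GPL";
```

Userspace would attach the sock_ops program to a cgroup and the sk_msg program to the sockhash as a msg verdict program; that wiring, along with IPv6 and error handling, is left out of this sketch.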
We are also offering a CRD where you can essentially inject Envoy configuration anywhere into your network. Very important: we don't want to invent yet another control plane. We actually want you to bring whatever control plane you have or want to use, whether this is Istio, SMI, maybe Gateway API, maybe Kubernetes Services and Ingress with annotations; even Linkerd could be made to work, although it's a little bit harder because it doesn't use Envoy. And then, also important, we want to integrate with the observability tools that exist, Prometheus and so on. Cilium, Hubble, and Tetragon already support this.

Essentially, what we want to achieve is something like this, as we look through this evolution: built into the app, then the sidecar, then built into the kernel. And we've kind of done that before. If you think about the move from virtual machines to containers, we've done this before: we made the Linux kernel multi-tenant aware. From that perspective, we're looking to apply the same principles and bring the same multi-tenancy controls into Envoy and into the kernel. How that will look technically, I cannot explain in all its detail in one minute, but essentially we will integrate Envoy as part of the kernel stack and run Envoy listeners per thread, in a separate cgroup of the pod, so CPU usage will still be accounted to the cgroup, to the CPU quota, of the pod. This gives us no sidecars, no network injection, and no need to start and stop Envoy on pod startup and shutdown, and so on.

I'm going to make this my last slide because we're already almost out of time. What can we do natively in eBPF, and where do we still need a proxy? There's a lot we can already do natively in eBPF: traffic management, L3/L4 forwarding, load balancing, canary rollouts, policy routing, multi-cluster, network policy, mTLS, tracing, OpenTelemetry, metrics, HTTP parsing. What we cannot do are things like layer 7 load balancing, retries, layer 7 rate limiting, and TLS termination and origination. For these things we still inject a proxy, but not two proxies: in most cases we inject one proxy, not two sidecar proxies. The benefits of this can be great. On the left here, we see the performance difference in HTTP parsing between native eBPF and a sidecar.

With that, I'm out of time, so I will leave questions, I think, to the hallway track outside. If you're interested in eBPF, or in running a service mesh without sidecars, feel free to catch me outside and I'm happy to answer questions. Thank you.
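As a closing illustration of the split described above, the kind of L3/L4 decision that can run natively in eBPF, with no proxy involved, can be sketched as a small XDP program like the one below. It is illustrative only and not Cilium's data path: IPv4 only, IP options ignored, and the map name is invented. Anything that needs full request/response state, such as layer 7 load balancing, retries, or TLS termination, is where a proxy is still injected.

```c
// xdp_l4_policy.bpf.c -- illustrative sketch of an L3/L4 decision in eBPF.

#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/tcp.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

/* Per-port verdict table, filled in by a (hypothetical) control plane. */
struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 1024);
    __type(key, __u16);   /* TCP destination port, host byte order */
    __type(value, __u8);  /* 0 = deny, anything else = allow       */
} port_policy SEC(".maps");

SEC("xdp")
int l4_policy(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end || eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end || ip->protocol != IPPROTO_TCP)
        return XDP_PASS;

    /* For brevity this assumes an IPv4 header without options (ihl == 5). */
    struct tcphdr *tcp = (void *)(ip + 1);
    if (ip->ihl != 5 || (void *)(tcp + 1) > data_end)
        return XDP_PASS;

    __u16 dport = bpf_ntohs(tcp->dest);
    __u8 *verdict = bpf_map_lookup_elem(&port_policy, &dport);
    if (verdict && *verdict == 0)
        return XDP_DROP;   /* L4 policy enforced entirely in the kernel */

    return XDP_PASS;
}

char LICENSE[] SEC("license") = "GPL";
```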