My name is Dan Papandrea, I'm the director of open source ecosystem and community for a company called Sysdig. I also have a show called The Popcast. If you all have seen it, it's pretty good, I guess. We're going to talk today. This is the first Cloud Native eBPF Day. We have a great set of talks, wonderful talks that are going to help you understand this magic that is the extended Berkeley Packet Filter. I'm going to hand it over to my man Duffie to talk a little bit more about the tech and what's going on today.

Hello? Mic check. Hey, that's better. Hey everybody, I'm Duffie Cooley, I'm mauilion online. I spend a lot of time in networking computers. I'm also a blogger and a vlogger and that kind of thing. I do a lot of broadcasts on Kubernetes, and now on eBPF: every Friday at 2pm we do an eCHO office hours where we're talking about some eBPF project or another, or just exploring some new aspect of the technology. It's a really great series, so if you're interested in eBPF, definitely check it out. I'm here to talk a little bit about the technology and get everybody excited about what we're here to do. I'm actually really excited to see what you are all here to learn. We have a number of really great presentations coming up today. For my part, the thing that attracts me most to the eBPF space is that it gives us the ability to think about the problems of networking, of events, of profiling, of understanding context and information about what processes are running, in a way that we've just never had, certainly not integrated the way it is now. So that's the piece that is probably really driving me. And just the fact that it's superpowers, right?
You have everything from the ability to profile applications and understand where those applications are spending their time, to making very complex decisions about how to handle routing, like whether to route memcached traffic to a local memcached process for any process running. I mean, I just saw the incredible paper from Orange on that. There's just a lot of really great stuff happening out there. So I hope that you're all really looking forward to this session.

So without further ado, everyone, our first panel: why everyone is excited about eBPF. When we were putting together this day, we wanted it to be for you all to look at this technology, a lot of the magic that's here, and be able to apply it to your day to day: from a networking perspective, from a security perspective, from a troubleshooting perspective, from a debugging perspective. So for the first panel, I'm going to introduce to the stage first the matriarch, Sarah Novotny. Welcome, Sarah. I guess we have a mic for you there. Next to the stage, and we also have some remote folks. I don't know if you know those two faces over there. They know a little bit about eBPF, I guess, a little, right? Coming to the stage, we also have one of the creators of Wireshark and Sysdig, Loris Degioanni. Did I get your name right? All right, good, good. Next to the stage, we have Andy Randall from Microsoft. All right. And live via satellite, we have Liz Rice and Thomas Graf. Hello.

So I'm going to ask the first question and we can go around the horn here. Why are you all excited for eBPF in cloud native? Let's start with the matriarch.

Hello. Why is it that we're excited about eBPF? Giving access safely and securely to kernel capabilities has always been a challenge, and eBPF has started to allow us all sorts of really interesting ways to do this safely.
And I have to say we're happy to have exciting, interesting ways to evolve this. And Loris can tell us more about it. Why are you so excited about it?

Does this work? Check. No, I think it works. Okay. Sorry, we're doing a mic check. To me, clearly the Linux kernel is the underlying engine for cloud native, and Linux in general is the operating system that runs our workloads, our applications, in the cloud. And eBPF makes it programmable, which is incredibly powerful. It's like the underlying engine that is powering our car: we can open the hood and, in a safe way, essentially work on it, make it more powerful, extend it in incredible ways. The first ten years of my career were in networking, and packet capture in particular. So I come from the age of the old BPF, without the "e". And it was already exciting at the time that you could filter packets with a virtual machine. What you can do today, the power, the tooling around it, the programmability, is just incredible for somebody who witnessed that evolution, and I'm super excited about what's happening.

So we're going to kick it over to our live-via-satellite guests. We'll have Liz speak about what she thinks is super exciting for eBPF in cloud native, and then Thomas, if you'd like to add points as well.

So, building on what the previous speakers have already said, I think the really interesting thing is that there is one kernel per host: when we have a cluster of machines, each machine is only running one kernel. And that means we can instrument all of the application code that's running on each host with one set of eBPF programs. We only have to apply our instrumentation, whether that's for observability or networking or security, to each host, rather than having to worry about instrumenting each individual application. And I think that's why, specifically for cloud native, this is a really interesting technology.

Yeah, I think those are all great points.
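As an aside on Loris's point about the old BPF, without the "e": classic BPF really was a tiny in-kernel virtual machine that ran small filter programs over raw packet bytes. The sketch below is a minimal Python illustration of that idea; the instruction names and encoding here are simplified stand-ins, not the real cBPF opcodes or register set.

```python
# Toy classic-BPF-style packet filter VM (illustrative, not the real cBPF ISA).
def run_filter(program, packet):
    """Run a tiny filter program over raw packet bytes.

    One register: A (accumulator). Instructions:
      ("ld", off)       load packet[off] into A
      ("jeq", v, t, f)  if A == v, jump t instructions ahead, else f
      ("ret", n)        accept n bytes of the packet (0 = drop)
    """
    a = 0
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "ld":
            a = packet[args[0]]
            pc += 1
        elif op == "jeq":
            v, t, f = args
            pc += 1 + (t if a == v else f)
        elif op == "ret":
            return args[0]
        else:
            raise ValueError(f"unknown op {op!r}")
    return 0  # falling off the end drops the packet

# "Accept only packets whose first byte (pretend: IPv4 version/IHL) is 0x45."
prog = [
    ("ld", 0),
    ("jeq", 0x45, 0, 1),
    ("ret", 65535),   # matched: accept (pass up to 65535 bytes)
    ("ret", 0),       # no match: drop
]

print(run_filter(prog, bytes([0x45, 0x00])))  # accepted -> 65535
print(run_filter(prog, bytes([0x60, 0x00])))  # dropped  -> 0
```

The real classic BPF machine adds an index register, scratch memory, and a fixed binary instruction encoding; tcpdump compiles filter expressions like `tcp port 80` down to programs of exactly this shape before attaching them to a socket.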
First of all, hello. As you can see, I'm calling you from the mountains here. It's nice and cold.

Are those Swiss mountains, Thomas?

They are. They are. This is actually the most famous north face in Switzerland that I can see in the background. For all the mountaineers and climbers out there, that's kind of the dream. So, I've been involved with eBPF since the early days, 2014, and eBPF is just massive, right? Before working on eBPF, I was working at Red Hat doing Linux kernel development. We tried to predict what customers and users would eventually need from the latest kernel. Why? Because it took so long, multiple years, for a Linux kernel version to get into the hands of users. You would not run the latest bleeding-edge kernel if you were a production-level user. So it would take years for a kernel feature to get into the hands of users. With eBPF, this is now different. We can now reprogram and change the behavior of the kernel in a safe way. That's so crucial, because technically this was possible before with kernel modules, but those have the potential to crash your kernel, right? Nobody wants that. eBPF gives us this sandboxed ability to run programs, very similar to how a web browser allows extension with JavaScript. Think back to pre-JavaScript days, when you had to install new versions of a web browser in order to view certain websites. We were basically there with standard operating systems. And with eBPF, all of a sudden we can innovate, because we can get a kernel change into the hands of users within hours or days. So we can use the high strategic ground of an operating system, which can see and control everything, and actually innovate. And that's just fascinating. And I think we'll see a ton of open source projects make use of that.

And finally, sir Andy Randall.

Yeah, I think all great points.
I'd like to step back a bit and say this is really about speed, the speed at which we can move. If you look at the pace of innovation that's happening in cloud native, how frequently Kubernetes releases are coming out, how many new features are in them, how quickly new projects are getting created around them, we're moving at incredible pace. The Linux kernel can't do that. It shouldn't do that. It needs to be the stable basis on which we're building. And I remember in the early days of Kubernetes networking, a lot of people were still on very old kernels, and we just couldn't get performance out of the kernel because they didn't have all of the latest ipset and iptables capabilities. The kernel got there eventually, but we need to keep moving forward at a pace that the kernel is not going to keep up with. And that's what eBPF does: it allows us to innovate at the kernel level at the same pace that we're innovating in cloud native. So that's one key piece. Another key piece is that it gives us visibility and observability capabilities, and the ability to actually debug and handle things at scale. It's fine if you're debugging a single machine, that's one thing, but if you've got a problem that's happening cluster-wide, these problems get really, really tough to troubleshoot in production. And eBPF gives you this kind of visibility. I remember at Kinvolk we had a customer who came to us with a CPU throttling performance issue in their Kubernetes clusters. These were smart people, and they'd been spending like three months trying to get to the bottom of this issue. We came in with some eBPF tooling, and basically in a day said, okay, here's the issue, and solved it for them. And they couldn't believe it. And it wasn't because we were much smarter than them. It was just that we had these eBPF tools.
And so I think those are two really crucial pieces that play to cloud native.

With that, I mean, all of the sponsors that have been out here have embedded eBPF in our tools, right? If you think about Isovalent, Microsoft, Sysdig, and Tigera, the solutions have basically made this bet on eBPF. So I want to ask the question, and I'm looking at my notes, you all, sorry: in terms of eBPF and your individual solutions, why did you go with eBPF? I'll start with Loris. I kind of know the backstory; I want you to share it with everybody else.

Yeah, you can tell the story if you want. At Sysdig, both from the open source point of view, in particular with Falco, which does essentially runtime security for cloud native applications, and also with the company's commercial products that are based around it, we decided right away to go with kernel instrumentation. There are multiple ways that you can instrument for security, in particular for the deep kind of visibility that you need for runtime security or for threat hunting. And one thing that Liz said that I agree very much with: the kernel of the operating system, especially from the point of view of security instrumentation, gives a big advantage, because it's essentially O(1) instead of O(N), where N is the number of processes, containers, applications that you're running on the machine. So it's just much simpler and much higher performance to instrument in the kernel of the operating system. When Sysdig started, eBPF was still not in the operating system kernel. Actually, even when we released Falco in 2016, those were the very, very early days of eBPF, and when we started looking at the technology and how it was evolving inside the kernel, we were like, that's it. That's how this should be done.
So forget about kernel modules and everything that has to do with executing components inside the Linux kernel; this is the perfect solution. It's sandboxed, it's safe, it's secure, it's verified, and it's high performance. So essentially, in the very early days, we decided to bet on this, and we never looked back, and we're still not looking back.

He went away for a summer in Italy and came back and there was Falco. So next I would ask Thomas and Liz the same question: why did you make this bet on eBPF for Cilium and Isovalent?

Should I go first, Liz? I think you know the history, so you should definitely answer this one. Yeah, for us the story is even simpler. We basically, I think, created huge portions of eBPF to then create Cilium. So everything we have built in Cilium, whether it's the networking pieces, the security pieces, the observability pieces, they're all done based on eBPF. And on purpose: the entire reason why Cilium exists is that we did not want to create yet another networking, observability, and security solution that just builds on existing Linux kernel abstractions, but instead to start with something completely new, eBPF-based, that, as Andy put really well, is able to cope with the pace of cloud native, because it's still evolving very, very quickly. There are a couple of pieces that are truly super interesting, and that I think make eBPF an obvious choice. On the networking side, it's all about abstracting away and caring less about traditional networking. End users care about services and connectivity, regardless of which cloud it is running in, whether it's on-prem or whether it's in a public cloud and so on. That requires the Linux networking layer, or Linux networking in general, to understand containers, services, API protocols and so on. On the security side, it's all about deep understanding of the system, really understanding what workload is actually, for example, making a network call.
It's not just about understanding which pod it is. You might actually want to understand the difference between whether this is the actual workload, or whether this is an application developer running kubectl exec and launching a bash shell or something like that. And on the observability side, yes, you still want some level of traditional network observability, but you also want the application protocol parts, and you need DNS understanding and so on, right? And eBPF is perfect because it basically allowed us to build a high-scale, highly efficient network observability and security solution that can keep up with that pace. And it's not bound to some use case that we defined a couple of years ago when we started, so we can continue to innovate and meet the latest end-user asks. So it's been great, and I think we would definitely do it again: extend eBPF to then start Cilium. Liz, do you want to add something?

The thing I would maybe add is a little story about Daniel Borkmann, who is one of the eBPF maintainers; he's on our team. And this is really just a little aside about how Cilium and eBPF developed hand in hand. I remember asking him about XDP, and he was talking about how it would be kind of cool if we could run eBPF programs on the network card, and, you know, that turned into reality. So this ability to innovate, this ability to pull things like kernel functionality into network cards, I feel like that's a very strong example of innovation and change that's being driven from this eBPF community.

So we have five minutes here, I think three minutes at this point, but I want to get to Q&A, so I want to ask this question as kind of a lightning round. Okay, real quick. So the question is, what do you see?
I can't believe I'm making this question a lightning round, but what do you see for the future of eBPF in your individual solutions? Just give us an elevator pitch. Maybe you want to go first, Sarah?

Sure. So the elevator pitch from Microsoft, because I get to do this occasionally, is that we're making a big bet on eBPF, because we think it's very important and it's one of those technologies that can help us leapfrog and innovate. And that is very much the space we're in, to the point that we've brought this Linux concept out to the Windows environment, and we're going ahead and learning and cross-pollinating that way.

By the way, how awesome are Sarah's shoes? Just want to throw that out there. All right.

From the point of view of Falco, the way I see eBPF support evolving is around Falco and security in general: more hooking points, more places where you can fetch relevant data, relevant information, security signals from the kernel of the operating system. Linux security modules, for example, are a good example. And, in a general way, broadening that. And from the vertical point of view, offering interfaces that can be clean and powerful for the specific use cases that have to do with security. For example, one thing that we did earlier in the year with Falco: we donated the libraries. So that's another direction, to essentially wrap around our eBPF probe, offering high-level state enhancement and decorations and stuff like that. So that's another direction where I see the community going: higher-level abstractions around eBPF to make it even easier to use.

Hell of an elevator. Over to you, sir Andy Randall.

I'll try to keep this quick. Actually, for those who don't know, I'm with the Kinvolk team. We were recently acquired by Microsoft. So we have insight both from what we were doing.
At Kinvolk we had an eBPF project around Kubernetes called Inspektor Gadget. It took a lot of the traditional host-based BPF tools and allowed you to deploy them in a Kubernetes environment. So we're going to be doing a lot more of that: taking many more tools, basically anything you could do on a host with eBPF, with the BCC tools and those kinds of things, and letting you do it in a Kubernetes environment. So that's one thing. And the other is that we're working across a lot of different teams within Microsoft, and there's a lot of different applications and innovation happening internally, which will see its way out into AKS and everything that underlies the services that we're deploying. So I think there's going to be a lot more that you'll see coming out of this.

If you haven't played with Flatcar, please do. It's pretty cool. It's really cool. Lastly, Thomas and Liz, again the elevator pitch, and then we're going to go into some Q&A. Duffie will bring the microphone around for anybody asking questions, but go ahead, Thomas or Liz, to wrap this up.

We're really looking forward to working with the Microsoft team to port Cilium to Windows. So I think that's a great step forward, bringing all the Cilium magic to Windows. I think the big one for me will be, and it's basically a user request, so that's what we're going to be looking into: everybody is screaming for an eBPF-based service mesh today. So that is definitely something that we will be looking into. We actually have a lot of this already, and a lot of our users are happy with our layer 7 load balancing, security and so on. But going further down that road, we're definitely hearing you, and we will try to make it happen: a sidecar-free service mesh entirely powered by eBPF. We could talk about deeper security, about better understanding where eBPF will make a difference, but I think the service mesh is probably going to be one of our biggest focuses.
All right, we have like a minute, I think, for the Q&A. Anybody have any questions?

I want to add one quick thing to that, because as well as service mesh being potentially sidecarless, and this isn't just about Cilium, this is about eBPF in general, I think we're going to see everything being possible to be sidecarless, because we can instrument, as I said before, at the host level. I think that's going to be a big performance improvement.

So my question comes from scaling. What's your name? I'm Gabe, by the way. Sorry, I'm a software engineer over at Snyk. Hi, Gabe. Hi. So one of the things we're finding is that the amount of data we collect for observability, especially at scale, is something that's a bit challenging to deal with. I mean, these are really incredible advances in introspection, but being able to store the information and then make sense of it at that scale is becoming a problem. What do you see as some of the challenges to make that easier to work with, not just by exposing the observability, but actually being able to take action off of it?

I think there's actually a very key answer here that is very unique to eBPF, and it's one of the reasons why eBPF got started with profiling and tracing. It was all about removing the requirement to send a lot of data from the kernel to user space and then make sense of it there; instead, make the kernel intelligent about what type of visibility it should offer. And I think the same applies to what you just described. That's the solution: understanding what is noise and what is real information at the source, where it happens, and using the eBPF runtime to do so.

Yeah, I tend to agree with that. eBPF essentially gives you access to everything in the Linux kernel, which is probably gigabytes per second of data if you actually collect all of the information.
But I think the philosophy is, and more and more in the future the philosophy will be, based on localized streaming decisions. Ideally, you look at the data and you summarize it in the kernel. If you cannot do that, you enrich it a little bit, you take your decision, and you do your observations on the local host. And you only stream to the central place the summarized data. That's essentially what eBPF gives you: the ability to program and take decisions as close as possible to the source. And the only way to survive the data volume is really applying this and being close to the source.

All right, so I'm sorry, hey, don't fire me, but I have to cut everybody off. We have to go to the next session; we're running a little late, everyone. So, Duffie, do you want to come up and introduce our next speaker and session? And thank you all. This panel was amazing. Thank you all for joining, also our remote guests Thomas and Liz.
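The "summarize at the source, stream only the summary" idea from that last answer can be sketched in plain user-space Python. This is purely illustrative: the event shape, bucket boundaries, and function names are made up for the example, and a real eBPF tool would do this aggregation inside an in-kernel map rather than in Python.

```python
# Sketch of localized summarization: instead of shipping every raw event
# off-host, collapse them locally into a few counters and stream only those.
from collections import Counter

def summarize(events, buckets=(1, 10, 100)):
    """Collapse raw latency samples (microseconds) into a small histogram,
    the way an in-kernel eBPF map would, so only a handful of counters
    ever leave the host."""
    hist = Counter()
    for latency_us in events:
        # Label the sample with the first bucket boundary it fits under.
        label = next((f"<{b}us" for b in buckets if latency_us < b),
                     f">={buckets[-1]}us")
        hist[label] += 1
    return dict(hist)

raw = [0, 3, 5, 42, 250, 7, 999]   # seven raw events...
print(summarize(raw))               # ...become four counters
```

The point of the design is the ratio: the raw event stream grows with traffic, while the summary stays a fixed handful of counters per host, which is what makes cluster-wide collection survivable.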