All right, everybody, let's go ahead and get started. First off, I'd like to thank everybody who is joining us today. Welcome to the CNCF webinar on how Cilium uses BPF to supercharge Kubernetes networking and security. I'm Chris, and I'll be moderating today's webinar. We'd like to welcome our presenters: Mark Darnell, who is the networking product manager at SUSE; Dan Wendlandt, who is the co-founder and CEO at Isovalent; and Roger Klorese, who is the Kubernetes product manager at SUSE. Before we get started, just a few housekeeping items. During the webinar, you're not able to talk as an attendee. There is, however, a Q&A box at the bottom of your screen, so please feel free to drop your questions in there, and we'll get to as many as we can at the end. With that, I'll hand it over to Mark, Dan, and Roger to kick off today's presentation. Roger, I think you're muted there. Thanks. There we go. What I said was: in a moment, I'm going to give you an idea of why SUSE is here talking about Cilium, along with Dan from Isovalent. Mark will give you some background on why SUSE chose Cilium: what people can do with it, the needs we wanted to address in the space of Kubernetes networking, and some of the use cases we see going forward. Then Dan will take a deep look at Cilium and BPF, the mechanism that enables a lot of the magic it can do. So, I'm SUSE's product manager for SUSE CaaS Platform, which is our Kubernetes distribution. You may know that SUSE has been in the open source business for over 25 years. Most people think of us as a Linux distribution, but we also have a lot of other software-defined infrastructure and application delivery solutions. Our team was looking for a way to up our game in our Kubernetes distribution in the areas of networking, and especially networking security, and we decided that Cilium was the perfect way for us to do this. Let me now present Mark Darnell, who is the product manager for software-defined networking across our entire portfolio, to tell you more about why Cilium and what we chose it to address. Mark? Thanks, Roger. Everyone hear me OK? Panelists nod yes? All right. OK. So let's cover why Cilium. As Roger said, we have an enterprise-grade, Kubernetes-based product. We've been in the enterprise space for a while, and we understand a lot of the issues that come along with trying to operate cloud-native software. We had a number of issues we really wanted to work through, and we wanted to make it as easy as possible for consumers of our product who use Kubernetes through us to solve those problems. We've become huge fans of Cilium for reasons we're going to show you. We're basically going to walk you through, a step at a time, what a traditional Kubernetes deployment looks like (if there is such a thing as traditional, given the fairly recent history of the project) in the networking space, and what kinds of problems you'll run into just trying to get jump-started. Those are obviously the first ones we want to hit. Then there are other use cases we'll walk through that we'd term luxury or future use cases, or what we see on the horizon. So let's first start off with a simple definition of Cilium. This is from the cilium.io website; pretty easy to Google.
And Dan and myself can easily point you at that. Cilium is open-source software that allows you to secure the network connectivity between application services hosted in containers. There are actually more use cases for it than that, but those are the ones we're specifically going to talk about today. And given that we're getting into school season (if you're not in school yourself, maybe you have kids in school, so you know we're kicking off right now), we've got a bit of an educational theme running here. So that's the 100-level course definition of Cilium. Let's move to the 400-level course. This is something you'll find, I believe, on the Cilium website: a composite definition of all the different things you can do with Cilium. It's an extremely powerful product with a lot of functionality, but we're going to start off with a couple of specific use cases. If necessary, toward the end in the Q&A session, we can come back and revisit this slide and potentially speak to other use cases you might want to use it for. So let's start off, like I said, with a traditional kind of Kubernetes app. This one is a tiny two-node, cloud-native web farm: standard Kubernetes. The legend down at the bottom tells you what you're looking at. The dash-dot line is node demarcation, so we have two nodes. The solid line is demarcation for a pod, in the standard Kubernetes seven-sided figure, and a container sits inside that. What I've done is two nodes, two pods per node, and two containers per pod, just to show the typical structure Kubernetes gives you. For a standard web farm, you'll have a front end and a back end, labeled FE and BE. The other containers are possibly sidecars, or additional container-based functionality you wanted to partition into containers inside those pods: a standard kind of structure for Kubernetes. Finally, you see the line with the circle end cap on it; that's intended to be the pod NIC. By default, every pod in Kubernetes has its own NIC, and that NIC is shared across the containers running inside that pod. So I wanted to make sure we start from the common, classic picture of what a Kubernetes app looks like in a multi-pod, multi-node, multi-container setup. All right: standard security inside Kubernetes is keyed to pod IPs. We've labeled each one of those. You'll see a red circle around the IP endpoints, labeled IP 1-1 for pod 1-1, IP 1-2 for pod 1-2, and so forth, so you can follow the progression. This should be pretty obvious; it's basic networking. Now, the standard kube-proxy leverages iptables. There are other modes as well, but historically iptables has been the major mechanism within Linux for enforcing this kind of security. Full disclosure: I've spent a lot of years working with iptables. It's a phenomenal firewall, and it's extremely feature-rich.
There's a large number of modules that let you do things like this: if you look at the fifth line from the bottom, the `iptables -m state` rule, that's a single line that says, once you've allowed the initial packet of a new connection through, any other packets belonging to that established or related connection (for TCP, and potentially even UDP) are allowed through, so you don't need explicit rules for them. For protocols like FTP, iptables is extremely powerful and very easy to administer once you know all of the magical incantations. Now, I'm going to give you the caveat. The caveat is that iptables was built for what I consider a more static world. I've run probably up into the hundreds of firewalls using iptables throughout my career, and those don't tend to change quickly. A service ticket comes through, someone requests a particular rule to be opened up, you open it up, and it stays like that almost forever, until that particular system is decommissioned. In other words, it was built for a physical-node or virtual-machine kind of environment. It becomes much more awkward as you start spooling up and down entities that need to be protected very quickly, and that is the cloud-native world of containers and Kubernetes. What I've done is create a sample rule set for how you would protect and enforce the traffic between these front-end and back-end pods: I want to allow HTTP to land on my front-end pods (that's pod 1-1 and pod 2-1), and I want my front-end pods to access Vitess MySQL, basically a cloud-native configuration of a MySQL database. All told, that's nine rules that let you enforce and constrain that particular network behavior, okay? You'll see in a later slide that you actually have to add a whole bunch more rules every time you add or subtract a pod; you'll add or delete rules and reload an entire iptables configuration. I'll show you some numbers that will scare you a bit in terms of why you don't want to do this in Kubernetes long-term. There is a better way to do this, and that is Cilium label-aware security. You use Kubernetes labels: for pod 1-1, or any of the front-end pods, you create a label and call it front-end; for any of the back-end pods, a back-end label. Then one single policy statement (the one on the left there) gives you the protection we're talking about. In addition, you don't only want security; you want visibility. And for any of you who have actually looked at iptables log output: there is a log module inside iptables that lets you dump to syslog or other log destinations in user space, while all the enforcement happens in kernel space. Those log files tend to be extremely detailed, and you really have to start carving, parsing, and using log processors to get what you want. The default here, with label-based security visibility logs, is much easier to read. I know that I'm allowing front-end to back-end, and I'm denying anything else going to my back-end. Simple, clear, and highly scalable. So the choice you have with iptables is that you can end up recreating the rule set each time, and there's a slide momentarily; actually, this slide is the one that shows you that.
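To make that concrete, here is a minimal sketch of the kind of per-pod-IP rule set being described. The addresses, chain choice, and ports are illustrative assumptions, not the slide's exact contents:

```sh
# Illustrative only: per-pod-IP rules of the kind described above.
# Allow inbound HTTP to the two front-end pod IPs:
iptables -A FORWARD -p tcp -d 10.0.1.11 --dport 80 -j ACCEPT
iptables -A FORWARD -p tcp -d 10.0.2.11 --dport 80 -j ACCEPT
# Allow a front-end pod to reach a Vitess/MySQL back-end pod:
iptables -A FORWARD -p tcp -s 10.0.1.11 -d 10.0.1.12 --dport 3306 -j ACCEPT
# Keep established/related flows working without per-packet rules:
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
# Default deny for everything else:
iptables -P FORWARD DROP
```

Note how every rule is tied to a concrete pod IP; that is exactly why the rule count grows every time a pod is added or removed.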
On the left, you create this rule set one time. On the right side, you can end up recreating this rule set each time, and this is just for scaling out your database side; if you want to scale out the web server side, you're going to be adding rules there as well. By the way, I do see one question popping up in the Q&A: am I referring to kube-proxy when I say Kube? For the particular piece that enforces network rules, since that's the default within Kubernetes, yes, I am: kube-proxy, which, as I mentioned earlier, supports multiple rule backends, of which iptables is the default, and some of the issues it has we'll speak to here. So, iptables scalability and performance challenges. With iptables in Kubernetes, if you use it via kube-proxy, like I just said, that's specifically what I'm referring to: with each pod create and delete, you may be adding and deleting iptables configs across all hosts. And in that lower-right red rectangle on the slide, you'll see a time measurement that was done: with 5K services at eight rules per service, so 40K rules total, it takes 11 minutes to add one additional rule. It unfortunately does not scale linearly, and that's a serious, serious problem, because we all know what the knee in a curve looks like as you begin scaling a system: you hit the knee and the whole system falls over, because retries start scaling exponentially. So 20K services with 160K rules, just multiplying by a factor of four, does not move to 44 minutes; it actually moves to five hours. Those are the kinds of issues we're trying to avoid. Due to the data structures used inside BPF, which Dan will speak to, we've got hash table functionality in there, which gives more of an O(1) kind of lookup, as opposed to this potentially exponential knee we end up encountering with iptables, specifically with highly ephemeral Kubernetes pods, which cloud-native architecture not only desires but effectively requires. That's the way Kubernetes is built: to spool lots of entities up and down quickly. The fact that iptables was not built to do this, in its data structures and its software architecture, is what's driven us toward BPF and Cilium. All right, Cilium scalability and performance. I want to point you to the two sections over on the right. I've already spoken a bit to scalability, and I'll point out one more scalability point here: this does scale. Right now, we've seen Cilium testing go up to 5K nodes with 100K pods and 20K services. And in the lower-right graph you see, as we scale services, the latency of kube-proxy versus BPF-based NodePort. We scale much, much better with BPF; we stay essentially linear, flat, compared to kube-proxy, where you really start seeing that exponential bend. That's definitely not a linear curve occurring there. All right, so I've spoken to label-based security. There is another option. Label-based security is what you'd use inside your cluster, to say: look, which containers do I want to be able to talk to each other? You have control over that as master of your own cluster.
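As a hedged sketch of what that single label-aware policy statement can look like as a CiliumNetworkPolicy: the label names and port are assumptions based on the front-end/back-end example on the slides, not the exact slide text.

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: backend-allow-frontend   # illustrative name
spec:
  endpointSelector:
    matchLabels:
      role: back-end             # applies to every back-end pod, present or future
  ingress:
  - fromEndpoints:
    - matchLabels:
        role: front-end          # only pods labeled front-end may connect
    toPorts:
    - ports:
      - port: "3306"             # MySQL port, matching the Vitess example
        protocol: TCP
```

Because the policy selects labels rather than IPs, it never changes as pods are added or removed; only the label-to-endpoint mapping does.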
For something that may be going on outside your cluster, however, you may want DNS-enforced security: I only want traffic coming from, or going to, a particular DNS name to actually be allowed. So DNS-based, or DNS-aware, security is another option that lets you address both internal and external needs for the cluster (a sketch of such a policy follows below). So, fundamentally, we started the session with: why did SUSE choose Cilium, and why are we talking about this along with Dan from Isovalent? The reasons are as follows. Number one, this identity-aware security reduces OPEX. You saw those simple policy declarations: around a five-line statement saying what I want to allow between the front end and the back end in this web farm. I can now scale this web farm out to hundreds or thousands of pods and containers without having to change that policy statement at all. The OPEX scaling I get (and I realize this is not a technical argument, but it's absolutely a business argument) means being able to get by with fewer administrators as my cluster scales. Now, what is a technical argument is that it also allows us to reduce CAPEX. The underlying tool used in Cilium, which Dan is going to speak to within about five minutes here, is BPF, and it is, in my opinion, architecturally superior and definitely more efficient for highly dynamic workloads. You reduce CAPEX because your overall hardware utilization for underlying network policy enforcement is reduced, and those cycles can be dedicated to more workload. More workload on the same amount of hardware is better CAPEX, and fewer boxes for administrators to maintain makes the technical argument solid there as well. That OPEX and CAPEX argument is why we adopted this within SUSE; those two alone, in my opinion, were enough to say that Cilium is the wagon we're going to hitch to. And there's advanced functionality and additional performance optimization happening rapidly in Cilium. I'm going to blast through those real quick so that I can leave Dan around 15 to 20 minutes to give you a deep dive on BPF and why that is such a cool technology. So, one: Cilium Envoy acceleration. You see on the left a diagram of what it looks like without Cilium, in the middle what it looks like with Cilium, and the graph on the right shows the number of additional requests per second that can be handled: like the title says, about a 3x gain in performance by accelerating Envoy. That's one of the things we're looking forward to deploying. Another one: multi-cluster service routing. When I was at KubeCon Barcelona, I counted approximately five service mesh solutions being discussed. And I don't have an issue with that; that's one of the ways open source works. We throw a lot of things at the wall and eventually merge things and reduce the count. But while that conversation is happening, we still need to handle multi-cluster setups. Cilium gives us multi-cluster service routing, allowing back-end workloads to be shared across multiple clusters, from a security perspective, with a small amount of administration.
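Circling back to the DNS-aware security mentioned a moment ago, here is a minimal sketch of what such an egress policy can look like in Cilium. The FQDN and labels are illustrative, and note the assumption that FQDN-aware enforcement also needs a DNS rule so Cilium can observe the lookups:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-external-api       # illustrative name
spec:
  endpointSelector:
    matchLabels:
      role: front-end
  egress:
  # Let the pods resolve names via kube-dns, so Cilium can see the lookups:
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: UDP
      rules:
        dns:
        - matchPattern: "*"
  # Only this external DNS name is reachable, and only over HTTPS:
  - toFQDNs:
    - matchName: "api.example.com"   # illustrative FQDN
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP
```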
And if you need some more info on multi-cluster service routing, I'm going to point you to Dan momentarily for that as well. Transparent encryption: this one, to me, is huge. The ability to enforce encryption at the host layer, without having to put encryption intelligence inside my pods and containers, is a huge thing that allows my applications to be completely ignorant of the enforced encryption. Layer seven. This is an area where, like I said as a long-time iptables fan, I would have loved to see more layer-seven functionality in iptables. But the problem with iptables and layer seven is the number of modules it requires and the rule syntax to do it; both are extremely challenging. If you go looking for layer-seven functionality in iptables, you're not going to see a whole lot of action. You're actually going to see a bunch of open source people telling you: use a hardware appliance for that. Well, I really don't want to do that. I really would like to have layer-seven enforcement inside my container workloads. So here are two different examples of layer-seven functionality. One of them is an API firewall that filters the verbs I can use within HTTP, whether it's gRPC or HTTP/2, from particular endpoints to other endpoints over particular ports. Being able to enforce that layer-seven functionality natively, software-defined, on the node is pretty huge. Next: avoiding illicit or illegal SQL access. I don't want particular applications touching anything but particular tables. I can obviously enforce that at the database level; however, some cloud-native databases are not yet as feature-rich in their security enforcement. So being able to enforce it at the Cilium level, saying I'm going to shut down any SELECT that doesn't go against a table I want my front end to be able to access in my back end: that kind of enforcement is extremely useful. So that's a sample of what I consider more advanced functionality that you can look forward to, and that I know is deployed in Cilium today.
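As a concrete sketch of that kind of API-aware filtering: Cilium expresses these as HTTP rules inside a CiliumNetworkPolicy. The labels, port, method, and path here are illustrative assumptions, not the webinar's slide content.

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: frontend-api-allowlist   # illustrative name
spec:
  endpointSelector:
    matchLabels:
      role: back-end
  ingress:
  - fromEndpoints:
    - matchLabels:
        role: front-end
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: "GET"          # only GET is allowed...
          path: "/api/v1/.*"     # ...and only on this path
```

Any other verb or path from the front end is rejected at layer seven, even though layer 3/4 connectivity between the two is allowed.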
Next, I'm going to turn the floor over to Dan. Dan, I've left you, I think, exactly the time I said I would. Dan is going to give you a deeper dive into Cilium internals; he's specifically going to talk about BPF and the underlying technology. Dan, the floor is yours. All right, just wanted to make sure everyone's hearing me; give me a thumbs up. Okay, good. So hello, everyone. My name is Dan Wendlandt. I'm a co-founder, along with Thomas Graf, of a company called Isovalent, the company that is both driving a lot of the upstream work in Linux around BPF and maintaining the Cilium community. I want to thank SUSE for inviting us to this webinar, and thank them because they're not only using Cilium in their product, they're also very active in contributing to the Cilium community itself. It's a great example of the open source model working as it should. A bit about why we founded Isovalent: my co-founder Thomas Graf and I both worked upstream on Linux kernel networking. He's been in the Linux kernel community for years; I worked on Open vSwitch before this. We've both been very involved in Linux networking and security for quite a while. And we saw this new technology, BPF, coming up, and we realized it was going to give us a dramatically better way of solving a lot of the challenges people have with networking and security as they move to microservices and platforms like Kubernetes. So I'll spend the next 15-ish minutes giving a deep dive on BPF and how Cilium uses it under the hood. I want to be very clear, though: you don't need to understand BPF, or most of the detail I'm going to go through here, in order to use Cilium. As you saw from Mark, Cilium gives you very high-level abstractions: these services can talk to these services. You don't need to touch BPF. But understanding a bit about BPF (a) is pretty cool from a technical perspective, and (b) helps you build a better intuition for why Cilium is so powerful, because it's leveraging BPF under the hood. First off, with any acronym, you've got to define the acronym, right? BPF stands for Berkeley Packet Filter. It's actually been in the Linux kernel, in a simple form, for a long, long time. In fact, if you've ever run tcpdump and specified a filter, you've actually used BPF before: that filter was compiled and run natively in the kernel to do packet filtering in a very high-performance way. So BPF basically lets you write a piece of code logic; it then makes sure that logic can't do anything bad in your kernel (it can't crash your kernel, it can't loop your kernel); and then, as we'll talk about, it's fully JIT-compiled to run at native speed in your kernel. BPF is a way of adding intelligence at different hook points in the kernel, letting you extend the behavior of Linux, but in a way that works with whatever standard Linux you're using today. You don't need a custom version of Linux for this; BPF lets you customize it on the fly. We'll dive in and talk about the technology in a minute, but first: we've mentioned BPF, and there's also something called eBPF. At this point we use those two terms interchangeably, but in recent years BPF has become dramatically more powerful as eBPF was fully merged into the Linux kernel. We've seen companies like Facebook, Netflix, and Google using this for everything from load balancing to security filtering to performance profiling. I definitely encourage you to check out these links: read about Katran, Facebook's BPF-based load balancer. Brendan Gregg at Netflix is a huge user and proponent of BPF for performance tracing; I always like to call out Brendan, because that's how I first learned about BPF three years ago. So it's deep technology being used by a lot of the people who understand where the world's going. And if you want to check out the Cilium website, we have extremely, extremely deep BPF documentation. I always tell people we literally get to talking about registers by about the fourth or fifth paragraph, so if you want to go super deep, that link is a great place to go. I'll keep this description a bit higher level, but I'm happy to handle questions in the follow-up section.
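As an aside you can try yourself, the classic-BPF lineage Dan just mentioned is visible first-hand with tcpdump (interface name illustrative):

```sh
# tcpdump compiles this filter expression into BPF and runs it in the kernel:
tcpdump -i eth0 'tcp port 80'
# Pass -d to print the compiled BPF filter instructions instead of capturing:
tcpdump -d 'tcp port 80'
```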
So there are two key concepts to understand with BPF. The first is the notion of a BPF program, and a hook point for running that program. You can think of BPF as function-as-a-service for kernel events. Basically, the kernel is a set of code, and there are a bunch of functions in that code that run every time I create a packet or every time I open a file; each of these things is actually a function that exists in the kernel. But what if you want to customize the behavior of what happens when you create a packet? The traditional Linux code does a standard set of actions when that happens. BPF lets us define a hook point at a certain function inside the kernel code, and then lets us write what is essentially pseudo-C code that is compiled and attached to that hook point, such that each time the kernel executes and calls this function, the BPF hook triggers and actually runs your BPF program. And once that BPF program completes, execution goes back to the kernel workflow. In this sense, there's nothing specific about networking or security in BPF. It's more that we, with Cilium, use the networking- and security-relevant hook points inside the Linux kernel to attach additional logic, adding very rich, Kubernetes-aware and API-aware intelligence into the Linux kernel by writing our own BPF programs. So BPF programs are essentially customized logic that you can safely run, with high performance, at different points in the Linux kernel. That's concept number one. Concept number two is the notion of BPF maps. Just running a program isn't very useful if you don't have meaningful state. If I'm trying to filter packets, how do I know which IPs should be allowed or denied? If I'm trying to collect visibility data, how do I know how many bytes have been sent? You need some notion of state that persists across different invocations of these BPF programs, which might run, for example, every time a packet is transmitted. BPF maps are that mechanism. BPF programs can read and write maps, and you can also read and write BPF maps via a pseudo-filesystem from any user-space tool. This is the way data gets in and out of an executing BPF program. And it's very important that, unlike iptables, as Mark was mentioning before, it's very efficient to incrementally update BPF maps: I can add or remove an individual entry, as opposed to loading and unloading the entire rule set. Similarly, there are hash-based semantics for looking up entries in BPF maps, so we can implement, for example, the kube-proxy-style load balancing in a way that's hash-based rather than a linear traversal of lists. Between being able to customize the logic and having very flexible BPF maps, this lets us be a lot more intelligent about how we implement the various bits of Kubernetes networking and security. If you want to read more about BPF maps specifically, LWN.net has a great article on them, and it's in general a pretty awesome resource for the Linux kernel. So let's walk through a more concrete example. This isn't Cilium; this is just how you might do really simple BPF-based network filtering. Obviously, you have an application workload, maybe running Node.js, running in user space, and every time it communicates on the network, the result is packets that go out the eth0 interface. So we can have some BPF-aware tool that writes, and often auto-generates, a BPF program.
The goal of that program is: I'm going to look at each packet, I'm going to look at the IP address in that packet, and based on the contents of a BPF map, I'm going to decide whether to allow or deny that packet. That's code I would write, essentially in C or a pseudo-C dialect. Then we need to compile this into BPF bytecode, so typically a BPF tool like Cilium works with a compiler like Clang to compile it. Then we make a syscall, the bpf() syscall, which basically says: this is the hook point I want to attach this program to. In this case, maybe we attach at a point in the kernel where, each time the TCP/IP stack processes a packet, we run this program to decide whether to forward or drop that packet. Now, before anything can actually be loaded into the kernel, it has to pass what's called the BPF verifier. This is what constrains what a BPF program can do, to make sure it never does anything bad in your kernel: it can never crash your kernel, it can never loop your kernel, and it can't access parts of kernel memory it shouldn't. This is why BPF is fundamentally a better mechanism for extending the Linux kernel than something like a kernel module. At the same time, after the verifier passes a program, it's compiled into native assembly code, which means that when it's actually running in the kernel, it runs with full native performance, as if it had been compiled into the kernel in the first place. And in this filtering example, the user-space tool would write into the BPF map, for example, the set of IP addresses that should be denied, with all other IP addresses allowed. Then, whenever the app workload calls connect and writes data to a socket, the data flows through the TCP/IP stack, each invocation of the function where we have the hook point runs this BPF program, and the BPF program decides whether to forward or drop that packet. So this is an example of how you can take the raw capabilities of BPF, which again are very generic, and use them to solve a basic network filtering problem. Cilium does that, and many other things, in a very Kubernetes-integrated way, so that you never have to deal with the details of these low-level BPF programs, kernel data structures, BPF maps, and all of that. You just write the high-level policies Mark was showing before, and they're implemented using the power of BPF. So, Cilium in general controls several aspects of Kubernetes networking. It can manage the main pod-to-pod connectivity: whether you want direct routing, overlay, whatever, it can do all of that. It does the service-based load balancing, which would traditionally be handled by kube-proxy; instead of kube-proxy using, for example, iptables, Cilium does it fully in BPF. And then, of course, the network visibility and security enforcement that Mark showed earlier. Cilium does all of this by building a BPF program that natively understands the identity of the Kubernetes workload and is able to see all traffic that goes in and out of that pod. It's important to point out that no changes are required to the pod itself; the workload just uses standard Linux sockets like it always does. It's the fact that that traffic is already flowing through the Linux kernel networking stack that allows us to tap in and transparently add all of this visibility and control.
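Pulling that walkthrough together, here is a heavily simplified sketch of what such a drop-by-IP program could look like in modern libbpf-style C. This is illustrative only: the map and program names are invented, the hook chosen is XDP (an ingress hook, the simplest to show) rather than the egress path described above, and this is not Cilium's actual code.

```c
// Illustrative only: drop packets whose source IPv4 address is in a deny map.
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

struct {
    __uint(type, BPF_MAP_TYPE_HASH);   // hash map: O(1) lookups, incremental updates
    __uint(max_entries, 1024);
    __type(key, __u32);                // IPv4 address (network byte order)
    __type(value, __u8);               // presence in the map means "deny"
} deny_ips SEC(".maps");

SEC("xdp")
int drop_denied(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;                    // bounds checks keep the verifier happy
    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;                    // only inspect IPv4 here

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return XDP_PASS;

    if (bpf_map_lookup_elem(&deny_ips, &ip->saddr))
        return XDP_DROP;                    // source IP found in the deny map
    return XDP_PASS;                        // otherwise, continue up the stack
}

char _license[] SEC("license") = "GPL";
```

A user-space agent would then add or remove individual entries with bpf_map_update_elem() and bpf_map_delete_elem() (or `bpftool map update` from a shell), which is exactly the incremental-update property Dan contrasts with reloading an entire iptables rule set.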
So Cilium typically plugs in as what's called a CNI plugin, the standard Kubernetes networking interface. Essentially, when kubelet spins up a new pod, it creates what's called a network namespace, runs that pod in the network namespace, and sets up a veth pair as part of that. Kubelet then invokes your CNI plugin, in this case Cilium, and says: hey, I just added a workload, go wire up the networking for it. Now, Cilium is constantly staying in sync with the Kubernetes API server, so it understands the workload identities, it understands all the services you've created, and it understands all the network policies you've created. It takes the combination of the data from the API server and from kubelet, builds a BPF program, and attaches it so that it controls all the traffic in and out of this pod. And again, we don't have to rebuild a BPF program every time that configuration changes, because nine times out of ten, we just need to update a BPF map, which is incredibly efficient. In more recent versions of Cilium, we've even added new capabilities that leverage BPF templating, so that we typically only do the BPF compile step once per host and reuse the same BPF program for multiple pods. This makes Cilium extremely, extremely scalable. Mark mentioned some of the numbers before, but scale and efficiency are a huge focus of Cilium, enabled by BPF. So overall, that's the nitty-gritty BPF detail. At the highest level, you still use Kubernetes, and a service mesh if you choose, as your orchestration interfaces to deploy your workloads and define Kubernetes network policies; Cilium is in the data plane. It leverages BPF quite heavily. It also leverages Envoy and, as Mark mentioned, is able to accelerate Envoy and make it much more efficient to get data in and out of it, whether you're using Envoy with Cilium standalone for our L7 policies, or using Envoy as part of a service mesh like Istio. So if this stuff is interesting to you, I really encourage you to check out the Cilium blog. There are highly technical blog posts for each Cilium release that go through all the details of what we built and why. The project is moving incredibly fast, it's deeply technical, and we're definitely looking for more people to get involved. I already mentioned that SUSE has been a great participant in the Cilium community, and we have a lot of Cilium users who contribute back. So again, it's a very highly technical community, with very active feature velocity and a very active and friendly Slack channel if you have any questions; I'd encourage you to hop on that if you're interested in updates on new Cilium releases, new blog posts, et cetera. And of course, we'd definitely appreciate you following us on Twitter. So with that, I think we'll do a quick pitch for KubeCon and then move over to questions. All right. Thank you, guys; that was an awesome webinar. To add on a little bit: both SUSE and Isovalent are sponsors at KubeCon San Diego, and it is right around the corner.
If you've never been, KubeCon provides a ton of value in terms of learning and simply advancing cloud-native technologies forward. This year, it's being held in San Diego, which should still be super sunny and very awesome in November; especially for me as a New Englander, that will be a welcome change. You can go to KubeCon.io to find out more, and definitely grab your ticket before they run out, because this is a flagship event for the Cloud Native Computing Foundation; it is big, and tickets do go. Yeah, and I want to point out that it's finally in a convention center that can support Comic-Con. It's a sign of KubeCon's growth that we're getting up there with the majors. Cool. So with that, we'd definitely like to open the floor for questions. There's a Q&A button where you can type your questions, and we'll try to pick some out. I'm going to start off with the softball question: is this webinar recorded, and can we get a link to the recording and slides? Yes, it's recorded. CNCF will be posting this on their website very soon after the close. So I think we've got a good number of questions; I'll start picking off a couple. One question is: is this like Aqua or Sysdig? In the sense that it's in the Kubernetes networking and security space, yes, but not really. If you look for closer comparables from an open source perspective, I would compare it more to something like Project Calico or Weave, other things that plug in at that CNI layer. So you can think of it as a CNI, but a CNI that's unique because it's powered by BPF, and so it's able to do a lot of things that another CNI based on another technology wouldn't be able to do. Yeah, and just connecting the dots between the use cases Mark talked about and the BPF discussion: probably the key factor in our selection of Cilium over a couple of other options, including Calico, as our primary CNI plugin of choice was the efficiency, scalability, and programmability of BPF, compared to approaches with similar capabilities that are built on iptables. Cool. All right, there are a lot of questions from one person, so I'll hop around, try to get a few from other people, and circle back. One of the questions is: does it support SCTP? Is there integration in BPF for SCTP? So it's a good question. Cilium itself right now doesn't have SCTP support, but it's actually a fairly common ask, so feel free to reach out or email me to follow up. And this is a good example of where the flexibility of BPF makes it very simple for us to add support: it doesn't matter whether your protocol was originally designed into the kernel; it's very easy to add that support with a BPF program, in a way that doesn't require you to upgrade your kernel, because Cilium is adding that logic, using BPF, to an existing kernel. So, good question; definitely follow up with me on the particular use case there. Next, a question on where to learn BPF syntax. That link I mentioned in the Cilium docs is probably the best way to learn about programming BPF. I wouldn't say there's a special syntax so much as a C-like programming language that most people use when they're building BPF programs. And I'll give the caveat that it is basically kernel-level programming complexity.
You're dealing with kernel data structures and you're dealing with the BPF verifier, so it's not for the faint of heart, I would say. But there's a BPF channel on the Cilium Slack and a BPF section in the Cilium docs that would be a great place to start. I'm going to jump in real quick while you're looking for the next question, Dan. The beauty of this system is that Cilium, as Dan mentioned earlier, really hides all of that complexity. You interact with the system using very simple policy statements. This is kind of the 80/20 rule, except it's really more of a 90/10 rule here. You'll be able to deploy and manage security very effectively using basic policy statements. And then, if you decide to move to the graduate-level material Dan's talking about here (learning BPF, getting down to the register level, learning about the virtual machine that's actually enforcing the security), there's that level of depth you can go into, but you don't have to know any of that to get started, run your cluster, and scale it easily. Right; there are people running thousand-plus-node Cilium clusters who have never written a BPF program in their lives. The two are entirely decoupled. We went into that level of detail on BPF mainly because people are interested in that kind of deep technology knowledge. But if you're asking about the syntax of the policies, I would check out the Cilium docs. We support standard Kubernetes network policies, so any standard Kubernetes network policy you find in an example anywhere will work out of the box with Cilium (a minimal example follows below). For certain features like the layer-seven visibility or the DNS-aware policies that aren't part of standard Kubernetes network policies, you can check out the Cilium documentation; there's a policy section there with a bunch of examples. Next question: could you please clarify security on services across multi-cluster? Yeah, that's a great question. So, multi-cluster: you're running multiple different Kubernetes clusters, maybe for fault tolerance, maybe for geographic location, maybe as part of a migration strategy. Traditionally, you would only be able to do label-based policies within a cluster: my front ends in this cluster can talk to my back ends in this cluster. But Cilium actually gives you a single networking and security plane across your clusters, so you can create a single set of policies. Even if you have a front end running in one cluster and a back end running in another, if your policy says only front end can talk to back end, we'll enforce that seamlessly. And that's because Cilium is natively built on label-based identity: it doesn't really care about the IP addresses underneath, and it doesn't really care which cluster a workload is running in, as long as it understands its Kubernetes identity.
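To ground the point that standard Kubernetes NetworkPolicies work out of the box, here is a minimal example of one; the names and labels are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend   # illustrative name
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: back-end             # policy applies to back-end pods
  policyTypes: ["Ingress"]
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: front-end        # only front-end pods may connect
```

Cilium enforces this as-is; the Cilium-specific CRD (CiliumNetworkPolicy) is only needed for the extras like L7 and DNS-aware rules.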
Another question, from an anonymous attendee: how is Cilium KVStore-free if it uses the K8s API? The "KVStore-free" is, I think, a reference to the blog post I had a screenshot of. Originally, Cilium required you to run your own etcd that Cilium interacted with directly; that's how we propagated the Cilium identity information. For our largest users, people running hundreds or thousands of nodes, we still recommend direct KVStore access, because the watcher semantics are more scalable. But for people getting started, or running smaller clusters, Cilium 1.6 now has a mode that just uses CRDs in the Kubernetes API server. Obviously, Kubernetes itself requires etcd; what we mean is that there's no Cilium-specific etcd required to use Cilium, starting with 1.6. Next: is this solution considered nano-segmentation? I guess, probably; you'd have to ask someone in marketing. It does let you define your segmentation policies very narrowly and with a lot of granularity, so I guess you can call that nano-segmentation. Is BPF a DPI? I assume that's referring to deep packet inspection. BPF itself is a very general-purpose technology, like we've talked about: it's the ability to run custom logic in the kernel at certain hook points. There's nothing specific about BPF that makes it DPI; looking deeper into network traffic is just one of the things you can implement with BPF, and which we're doing with Cilium. So I would say BPF is a fundamental Linux technology that can be used to implement highly efficient deep packet inspection. Can Cilium support multiple network interfaces (veths) in a single pod? Not right now. It's a question we've gotten from time to time, and we'd certainly accept contributions around it. We haven't yet found the killer use case for why that would be necessary, but we're open to feedback. In the short run, I think people would look at using it with something like Multus. Yeah, that's exactly what I was thinking of here. And in fact, with Cilium's chaining capabilities, you can basically get the benefits of Cilium managing the networking and visibility on top of another CNI plugin that does support that. So that's probably another reason we haven't implemented it natively: chaining is actually a pretty good answer. Well, thanks, Roger. True. All right: what are the key differentiators between Cilium and Istio? That's a tricky question. Cilium operates at the CNI layer, and you can actually run Cilium with Istio. Things like the Envoy acceleration are actually great reasons to run Cilium with Istio, because with Istio, all traffic goes in and out of Envoy, and if you're running that on top of Cilium, you get way better performance, literally about 3x better, than if you were using iptables to redirect that traffic into Istio. That's the simplest answer. The reality is a bit more complicated, because there are certain use cases, like multi-cluster routing and security, or transparent encryption, that Cilium does as a CNI plugin and Istio does as a service mesh, so there's some amount of functional overlap there. But Cilium's not trying to be a complete service mesh. There are tons of things, like retries and circuit breakers, where if you want that, you should certainly be using a service mesh, and Cilium's a great networking layer to run with that service mesh.
If all you're trying to do is get transparent encryption or something, and you don't really need anything else and want it done really efficiently in the kernel, Cilium's a great way to do that. Does it support IPv6? Oh, that's the favorite question of some of our developers. In fact, Cilium was IPv6-only at the start. So don't worry, it supports IPv4 as well, and has for quite a while now. In terms of CNI plugins, Cilium has really, really good IPv6 support: it was designed in from the beginning, not tacked on, and it's something people are actively using. So in terms of v6 support, I think we've got a really strong position from that perspective. Is there enterprise support? Oh man, it's almost like I paid you to ask this question. Yes, of course: Isovalent is offering enterprise support, and we have commercial offerings around Cilium as well, so feel free to reach out if that's of interest. And of course, we offer support at the entire-distribution level, including Cilium along with everything else in SUSE CaaS Platform. That's our combined fifteen seconds of commercial message. All right. And then: do you have anything for visualization? Do you leverage Grafana? So, cilium monitor is the generic, low-level visibility export mechanism. As part of some of our commercial offerings, we do have more: how do you visualize the service map, how do you do troubleshooting, how do you keep a history of the flows and what's been allowed and denied, all that kind of stuff. But Cilium itself is limited to: here's an open interface to get all the visibility data. If you want to plug it into one of your systems, plug into that; if you want something out of the box, we have that from a commercial perspective. All right, I think we already essentially touched on the service mesh question, so I think we're probably good. Any last call for questions? Let's see; there's one in the chat window: is a service mesh proxy mandatory in order to manage L7 filtering? This is actually a really good question, because yeah, it wasn't clear. Cilium does not require a service mesh to do L7 filtering, or transparent encryption, or anything like that. That's all a native capability within Cilium as a CNI plugin. So if you just want the layer-seven filtering, or just the transparent encryption, or just multi-cluster, you can get that with Cilium alone as your CNI. If you want the whole kit and caboodle of a service mesh, then, as we said, running Cilium with a service mesh gives you a lot of benefits from a performance perspective, et cetera. Cool. All right, well, thanks a lot for attending, everyone. Really appreciate the time. As just mentioned: Slack and Twitter are all great places to get a hold of us with other Cilium questions. Roger, do you want to mention good places to get a hold of you, beyond email? Sure. There are a bunch of SUSE Twitter accounts. There's a SUSE CaaS Platform channel on the Kubernetes Slack; that's probably the most public place. There are also some lists that are open to the public on lists.suse.com, or it might be lists.suse.de. Or you can just spam Roger, since he presented. Sure, yeah; it's in the presentation, which everybody forgets. But.
All right, well, thanks again, everyone. Really appreciate the time. Take care. All right. Thank you.