Really exciting first-ever CiliumCon, definitely not what we thought we would be doing when we started Cilium many years back. So today I want to talk about the story of Cilium, why we created it, and highlight the people behind it, because there have been a lot of friends, who have now become family, involved in this. So let's tell the Cilium story. I will keep this super brief because you will hear a lot of end-user talks today about Cilium, what it does, and so on. But for those of you who have never heard about Cilium, maybe this is the first time you even see this: Cilium is eBPF-based networking, security, and observability, widely deployed today, and we're a CNCF project. But this talk is really about the origins of Cilium. Where did it get started? Why did we even create it? This was the first commit, way, way back. So December 2015 we started the Cilium project with a GitHub commit, very innocent. We'll get to the founding team a little bit later on.

But first, before we could even create Cilium, we actually had to go and work on eBPF, because eBPF was not quite ready yet to implement what we had in mind with the Cilium vision. So we first worked on eBPF. So where did eBPF come from? What even is eBPF? If you have never heard of eBPF, it's actually pretty simple and basic. Think of it as the JavaScript engine for the kernel. It's kind of vague, but that's really what it is. It is programmability for an operating system. It originally started out on the Linux side; now it has even been ported to Windows. So it's becoming a generic programmability interface for operating systems. We can actually run programs and customize the operating system when certain events happen in the Linux kernel. And that's exciting.

How did this start? There was a startup called PLUMgrid, all the way back in 2014. You see the original team, the crew there, with Pere, Brenden, and Alexei Starovoitov. They essentially brought eBPF into the Linux kernel initially. It quickly became a topic of interest and focus. PLUMgrid was not the only company involved, and those were not the only team members involved. We also see, for example, Daniel here in the PLUMgrid office together with me discussing eBPF. We already had Cilium in mind at this point, but we had to first develop eBPF a little bit further. Of course, we cannot tell the story of eBPF without mentioning Brendan Gregg. Many of you have probably seen one of his talks. He was the person who coined the term eBPF superpowers, because it quickly became obvious how powerful eBPF would be. So the question really was: how do we actually use these superpowers? What do we want to build? One of the first use cases that really made sense and that really showed the value of eBPF was flame graphs. You see it on the screen here: the ability to profile an application and figure out how much CPU is being used in which part of the application. That was definitely one of the first big use cases of eBPF where it became end-user facing. But there was always a lot of community passion around eBPF. We see Daniel talking about eBPF. We also see Facebook being heavily involved in the early days of eBPF; they even had a booth at one of the open source summits. You see a couple of eBPF maintainers here discussing the complexity of the eBPF verifier. That was eBPF. Today eBPF is essentially part of a foundation, and as far as the runtime is concerned, a lot of it is being discussed in a variety of different groups.
We see one example here of the Linux kernel eBPF group. In this case it's actually still part of the networking track, so we see David Miller here together with the rest of the networking subsystem discussing eBPF topics.

What about Cilium? So when did Cilium really get started? We saw the date of the first commit. Who actually created it? This is the Cilium founding team. You see it's quite innocent, right? This is actually at the 25-year birthday party of Linux. You also see an early presentation at the IO Visor Summit. So on the founding team, from left to right: myself, Daniel Borkmann, Madhu Challa, and André Martins. And guess what? We also got photobombed on this first-ever picture. I actually only noticed this now when I looked at these pictures again. This was the first-ever Cilium design summit. We went to Switzerland, into the mountains, into the snow. And guess what? This is the moment we realized that observability really matters, because it was a horribly foggy day and the visibility was basically zero. So we figured we probably have to build some observability features into Cilium as well. This was the first conference talk, the real one, like a really big one, at LinuxCon in Toronto. Cilium was actually IPv6-only. We wanted to be very, very forward-looking and we wanted to build the next-generation networking layer. So it was intent-based, of course, IPv6-only, built for containers, identity-based, and so on. The IPv6-only part, we quickly learned, was a little bit on the extreme side, so eventually we had to add IPv4 support as well. But Cilium still looks like this in the end: the core design using eBPF is what we use today.

That brings us to Isovalent, right? How did Isovalent get started? How is it related to Cilium? The co-founders are myself and Dan. So let's look at this story, because it's actually quite interesting as well and it tells part of the Cilium story. My background is in Linux kernel development; I was at Red Hat for 10 years working on the Linux kernel. Dan was at a startup called Nicira working on Open vSwitch. I was actually working on Open vSwitch as well; that's how Dan and I met for the first time. We were essentially working on Open vSwitch together. Then eBPF started to appear in 2014. I joined Cisco for a couple of years, and then we started Cilium. So that's when we actually collaborated and then founded Isovalent, after having created Cilium, together with Andreessen Horowitz, who funded our first round. Liz joined, of course; that was a big moment in the company. Cilium became a cloud native, a CNCF, project, and then we had a variety of industry giants coming in and joining the story and the journey of Cilium.

In the beginning, when we started Cilium, we really focused on the core networking problem of Kubernetes: networking at layer 3 and layer 4, on all the cloud providers as well as on-prem. We went out into the world and essentially told this first story, and this was the first really big talk where I think a lot of people heard about Cilium for the first time, DockerCon 2017. This is also when Kubernetes started to go big. A lot of people still refer to the initial demo, the Star Wars demo, that we did back then. It was definitely one of the first big moments of Cilium. We quickly learned that we had to go beyond just providing networking, so we added network policy, or network segmentation, as well as transparent encryption. We also had visits; in this case Batman visited us at an open source summit.
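As a rough illustration of what that label-driven, identity-based segmentation looks like in practice, here is a minimal sketch of a Cilium network policy. The namespace, labels, and port are hypothetical and only meant to show the shape of such a policy, not a policy from the talk:

```yaml
# Hypothetical example: allow only pods labeled app=frontend to reach
# pods labeled app=backend on TCP port 8080. Cilium enforces this based
# on the workload identity derived from labels, not on pod IP addresses.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: default
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
```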
You see early team members, Cynthia Thomas as well as Dan and myself, taking a ride to a conference venue here. Of course the company started to grow, so we got our first offices. You see Martynas, actually the author of the kube-proxy replacement feature, ironing the Cilium booth backwall that we took back from one of the conference venues. We also changed our logo. In the beginning the Cilium logo was slightly different; at some point we figured this is getting bigger, we need to think about the real logo. So we designed the Cilium logo as we know it today.

I mentioned observability. So the next big feature we added is Hubble, which brings observability functionality into Cilium. With it we gained flow logs, metrics, and troubleshooting capabilities. With Hubble, of course, we gained Hubble UI, the graphical visualization of Hubble, as well as the network policy editor to manage, visualize, and actually maintain network policies. We kept on growing. You see the booth that we had; I think this was already KubeCon, like the second or third KubeCon we did. Cilium passed 2,000 stars, and because we're engineers we didn't actually celebrate the 2,000, of course, but 2,048. We are engineers after all, right? You also see Michi and Joe, two early Cilium maintainers, in this picture.

Cluster mesh: we added the ability to connect multiple Kubernetes clusters together, I think all the way back in 2018, when we first talked about this at KubeCon. It was a little bit early for its day. This is a major feature many Cilium users use. It's the ability to connect clusters without actually federating the Kubernetes clusters themselves, so they can stay separate and independent, and Cilium essentially creates a data plane below that allows all the pods to actually talk to each other, covering services, network policy, encryption, Hubble, and so on. We also added the standalone load balancer, a load balancer that can sit outside of the cluster and feed traffic into it.

Of course our Swiss roots always lived on, so we see a fondue that we did in Palo Alto in an Airbnb. We also see Glib, one of the Hubble maintainers, skiing. We see the famous snow chain incident on the Julier Pass. A couple of team members wanted to go skiing a little bit earlier, so they said, yeah, we will go really early, took a car, and then they had to mount snow chains. You also see the typical engineering setup here for mounting snow chains: two engineers are trying to figure it out without reading the manual, one is recording a video of it, and the product manager is looking up a YouTube tutorial on how to mount snow chains.

Then things got crazy: Cilium joined the CNCF, AWS picked Cilium for EKS Anywhere, and Google picked Cilium for Anthos and GKE. This is when we realized, well, this is actually becoming real, this is becoming big. We may have created something that actually works. This was the time when we talked about Tetragon and introduced Tetragon to the Cilium family: security observability and runtime enforcement. Tetragon is the ability to run an agent, eBPF-based, on all the nodes, gain security observability both on the network as well as on the runtime side, and also enforce runtime rules.
So essentially with this we added runtime enforcement, what a pod, what a container, what a process can do, as well as runtime observability, or security observability: what system calls are going on, what files are being accessed, what privilege escalations are happening, and so on.

Of course, team spirits were always high. We see Duffie Cooley barbecuing; you see the summer hike of the Cilium Europe team. The US team was taking it a little bit more chilled: CI testing, floating equipment in this case, on a river.

Big news: Cilium came to Windows and eBPF came to Windows, and I think that was the moment when we started to understand that the eBPF movement is now becoming industry-wide. It's no longer Linux-specific tech. So I think eBPF on Windows will definitely establish eBPF as, I would say, an infrastructure language overall, not just for Linux, and we're of course super excited about absolute first-class-citizen support on Azure with Cilium. We're also super excited about the Grafana partnership that happened last year. Grafana Labs and Isovalent partnered together, and this is bringing a lot of amazing Grafana dashboards to the Cilium ecosystem. We're a bunch of kernel engineers; we often struggle a little bit with dashboards and visualizations. We're really good at moving packets really fast, we're really good at gaining observability deep in the system, but we may be less ideal at creating the actual dashboards. So the Grafana partnership has really helped us to visualize all the data we are gathering. This is natively integrated with Hubble UI and the Hubble CLI, and of course it is part of the Grafana ecosystem.

Of course we kept growing as a company. When I first saw this picture I was like, wow, we have actually grown quite a bit, because this was just after COVID, so we were all used to just being remote. Actually seeing a group picture like this was like, wow. But we didn't just keep growing, we also kept skiing, of course. The team kept growing. You see Bill, Cilium community pollinator, in the official or semi-official Isovalent work uniform, skiing. But of course we also did other things. We watched Star Wars, we did cross-country skiing, and so on. So lots of fun was had while we grew the company.

Last year we also introduced Cilium Service Mesh. A lot of the feedback from the community was: Cilium is awesome, can you bring that awesome experience, the performance, the simplicity, the scale, to service mesh? I said, okay, but why? There are so many service meshes out there already, why do we need yet another one? And then the feedback was very clear: can you get rid of the sidecar proxy? We love the service mesh functionality, but we would love to have it more transparent, just part of the infrastructure, without a sidecar proxy. So we launched Cilium Service Mesh early last year and it was an immediate hit, to the point where we now have other sidecar-free service meshes out there as well, which is awesome. So service mesh really completed the picture, because we're now able to not only do layer-7-aware firewalling, which is what we had before; we can now also do layer 7 load balancing, tracing, mTLS, and API gateway functionality using the Gateway API Kubernetes standard.

So we don't always just ski. Sometimes we also bike. You see Liz biking here. And of course, the Cilium team ran in the Swiss mountains, as well as Sebastian, one of the Hubble maintainers.
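To make the Gateway API part of the service mesh story above a bit more concrete, here is a minimal sketch of the kind of configuration involved. It assumes a Cilium installation with Gateway API support enabled; the gateway name, route, and backend Service are hypothetical, and the exact API version may differ depending on the Gateway API release in use:

```yaml
# Hypothetical example: expose a Service through Cilium's Gateway API
# support instead of a sidecar-based ingress or mesh layer.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: web-gateway
spec:
  gatewayClassName: cilium        # GatewayClass provided by Cilium
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: web-route
spec:
  parentRefs:
    - name: web-gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: web-service       # hypothetical backend Service
          port: 80
```

The point is the one made above: the layer 7 handling is done by the node-level datapath rather than a per-pod sidecar proxy.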
If we cannot ski and cannot bike, we have to walk. So if everything else fails, we walk. You see Quentin here; you see the team after a nice group hike.

What is next? So we saw the story up to this point, and Cilium has grown quite a bit. What is the next step for Cilium? It's eBPF 2.0, right, with eBPF for Excel. I'm not hearing any complaints. No? Let's check the date. Obviously, that was our April Fools' joke this year. So what is it really? There are a couple of items that I think are pretty obvious. mTLS support for network policy is one of the next big features that is coming. This is the ability to seamlessly roll out mTLS across your clusters, SPIFFE-based, without any additional complexity. You essentially use network policy. Thank you. You essentially use network policy and you add two lines to your YAML which say that authentication is now required. And this triggers everything in the background: it generates the certificates, handles them, and makes sure that network segmentation is now not only implemented at the network level, the packet level, but also requires strong authentication. Of course, as part of this we're integrating with SPIFFE and SPIRE. This will also bring the ability to select workloads based on SPIFFE IDs. Right now we're of course supporting service accounts, labels, namespace names, and so on; with this integration we're now also adding the ability to select on SPIFFE IDs. And then a lot of day-two operational focus. Cilium is getting rolled out everywhere, which means we want to make sure that you have the right tools in your hands to actually operate Cilium on day two. Of course, the Grafana partnership and the Grafana collaboration help a ton just from an observability perspective, but we want to do more than this.

And there's one more big feature that we are working on and that we are announcing today. It's essentially a natural evolution of Cilium: Cilium Mesh, one mesh to connect them all. If you look at the history of Cilium, we had Kubernetes networking, then we added multi-cluster networking with cluster mesh, which was connecting multiple clusters together. The clear ask from users and customers was: bring this outside of Kubernetes as well. We want to use identity-based security, we want to use Cilium-based observability, but for virtual machines, for servers, everywhere. So we're bringing Cilium Mesh, which essentially involves a new transit gateway that allows you to connect virtual machines, servers, and existing networks into the Cilium Mesh. Before this, we were only able to either run on Kubernetes worker nodes or run the Cilium agent on a virtual machine. With Cilium Mesh, we can now actually deploy a gateway, a router, into an existing network, into a VPC, into an on-prem network, and connect your existing workloads and your existing networks along with it. So this essentially combines all the existing components of Cilium, Kubernetes networking, the CNI layer, cluster mesh, the ingress and egress gateway, as well as the load balancer and service mesh, into one mesh that is able to connect workloads with each other, whether they are inside of a Kubernetes cluster, on VMs, or on servers.

With that, I would like to thank you. And I think we have a couple of minutes for questions, because I just saw the five-minute warning. Questions? Yes. Just want to make sure the question mic is on. Can you? Yeah. I think it's on. Yeah. Great. Especially about Cilium Mesh, you left out GCP. Is that on purpose?
I left out which one? GCP, Google Cloud. GCP, right. It works on all the clouds; there's just limited space available. So the question was that I left out GCP. That was not intentional; I just used two examples. Now that I look at it, maybe I should have added it. But it's of course about more than just the three cloud providers; this is completely agnostic of all the cloud providers. Yes, we have a question in the back.

All of this has to run. Yeah. We are currently still on another CNI. I know that there are some small migration paths described, but it's very rudimentary, so not at that level yet. Is there any plan to improve this further? Because I think this would also grow the user base again. Absolutely. The question was: what are we doing about migration? It's an absolute key focus. So we just added node-specific configuration support, which really simplifies migration because you can now essentially migrate node by node. There was also a fantastic eCHO episode on migrating to Cilium by Duffie a couple of weeks ago, which shows one of the paths. Overall, migration is in the day-two operational bucket on our Cilium roadmap, so that's definitely a key focus. We'll make it as seamless and as simple as possible to migrate from existing CNIs. Also, maybe it's worth mentioning in this case that it's not always required to migrate away from your existing CNI. You can also run Cilium in chaining mode on top of your existing CNI. So if you want to benefit from Hubble, from network policy, from service mesh, from the load balancing, from cluster mesh, you can also keep your existing CNI and run Cilium in chaining mode on top.

Other questions? The mic is not working yet. Hi. So, a quick question regarding kube-proxy. What vision do you have for it, and in future deployments should we stick with it or remove it? Apologies, I didn't get the question at all. About kube-proxy. Yes, do you plan to stick with it or remove it in the future? Just to make sure I understand: is the question whether we are continuing with the kube-proxy replacement? Yes. So of course, Cilium can run with the existing kube-proxy, or you can replace kube-proxy and essentially use Cilium as the kube-proxy replacement. Both will continue to be supported. Oh, cool. Thanks.

I think we have time for one more question. I tried to deploy Cilium on-prem and I would like to know if there are any kernel version limitations, because I tried with, I think, Ubuntu 18.04 and that didn't work. Then I had to upgrade to 22.04 LTS, I think, and that worked better. But I just wanted to know if there are any hard limitations on the kernel version. Yes. So the minimal kernel version required depends a bit on the Cilium version that you're running. It used to be 4.9, and that should also handle the on-prem case. And I think in the latest Cilium release we bumped the kernel requirement a little bit, to 4.19. I don't think there is anything specific about on-prem. If you have issues, feel free to join the Cilium Slack and ask us; we're happy to help there.

All right. Thank you so much. We have run out of time. Thank you very much. If you want to learn more about Cilium, see us outside. Thank you.