OK. Hi, everybody. My name is Shedrack Akintayo. I am a community manager for eBPF and Cilium at Isovalent. And today we are going to talk about Cilium Hubble. So the title of my talk is, What's going on within my network? An introduction to Cilium Hubble. Quickly, we are going to cover these topics: eBPF, Cilium, Hubble, the Hubble architecture, and we're going to go a bit deeper into the structural components of Hubble. Before I start, is anybody here that knows Cilium or eBPF? If you do, can I see your hand? OK, nice representation. Awesome. So let's just get started so I don't take too much of your time. In terms of observability in the cloud native space, there are a bunch of things that we consider. I like to classify them in three levels: the application level of observability, the operations level of observability, and security observability. For the application level, we're talking about things like performance metrics, metrics based on your application's performance, then logs and traces. For the operations level, in the Kubernetes context, we're talking about cluster health, resource utilization, scaling events, et cetera. And for security observability, we're talking about network traffic and access logs. So all of these are some of the things we consider when talking about observability in the cloud native space. This is what brings us to the agenda for today: Cilium, eBPF, and Hubble. These are the three things I'm going to be talking about today, and I hope it gets you excited. So let's talk about eBPF. It looks like, from the question I asked earlier, a bunch of us have experience with or have heard about eBPF. For those of us who have not, eBPF is a technology that allows you to run sandboxed programs inside the Linux kernel without changing kernel source code or loading kernel modules.
So you can load programs into the Linux kernel without making any changes to the kernel itself. eBPF is really popular these days. There are various use cases for it, which we're going to talk about later, and there are already a bunch of projects in the eBPF landscape. The diagram I have here represents eBPF, the user space, the kernel, what the architecture looks like, and then some of the use cases, which are mostly around networking, security, and observability. There are other use cases for eBPF, but today we're just going to talk about those three. One of the interesting parts of eBPF, and I personally think eBPF has been a revelation for the cloud native space, is that in the past few years we've had a bunch of tools created on top of eBPF. This is because of certain advantages: performance, because it reduces performance overhead; deep insight, because of how close eBPF brings you to the kernel, it gives you deep observability into your applications; and wide availability, because it ships as a standard part of all modern Linux systems. In terms of tools in the cloud native space that are using eBPF at the moment, we have Parca, which is a continuous profiling tool. Then we have Cilium, which handles networking, observability, and security; Cilium Hubble, which is the observability layer of Cilium; and Tetragon, which is the security layer of Cilium. So what is Cilium? Cilium was created by Isovalent, the company I work at; we are the creators of Cilium and eBPF. Cilium is mostly used for networking, security, observability, service mesh, and ingress. And it is currently a CNCF graduated project.
I think it got its graduation last year, which is a really cool thing. In terms of technology, Cilium is mostly built on top of eBPF and Envoy. It provides security and allows you to observe the network connectivity between your workloads, and, like I said earlier, it's built on top of eBPF. So the main focus for today is Cilium Hubble, which is the observability layer for Cilium. It is a fully distributed networking and security observability platform, which is open source. It is built on top of Cilium and eBPF. You can use Hubble in two ways. There is the Hubble CLI, which I like to call a high-budget tcpdump. So think about tcpdump, but better. It connects to the Hubble API and renders your flow events in your terminal, which is a really nice experience compared to tcpdump. Then there is also a UI, for those of us who are more interested in the graphical side of things. Not everybody likes to look at the green screen or the black screen of the terminal, depending on whatever color your terminal is. The UI shows you a service map of all your services and pods talking to each other. I will demo that later in the talk. So what can Hubble do for you? Hubble provides metric collection, including L7 metrics. A lot of observability tools can only do metric collection at the L3 and L4 layers, which is IPs, ports, et cetera. But Hubble comes with an added benefit, which is L7 metrics. It also provides your network flows, so you can see the flow of communication between all your pods, services, et cetera.
Then a good thing about Hubble is that if you already use OpenTelemetry or any distributed tracing tool, you can take the metrics generated by Hubble and display them on those tools, for example Grafana. Which leads us to visualization with Grafana: you can take the metrics produced by Hubble and create really nice dashboards in Grafana. So where can Hubble help? There are certain questions that Hubble can answer for you, in several categories. For service dependencies, Hubble can help you answer which services are communicating with each other, how frequently, and what the service dependency graph looks like. Hubble can show you what HTTP calls are being made, thanks to the L7 visibility it gives you, and even your Kafka topics. In terms of network monitoring, Hubble can show you where certain communications are failing in your network and which communication is failing. And then, is it DNS? I mean, it's always DNS, right? It can help you answer that particular question, and it can tell you which of these services has experienced issues with DNS. So Hubble gives you the answer to "is it DNS?", which is a really cool thing if you ask me. For application monitoring, it can show you the HTTP status code for whatever request you make, and it can also show you the latency of HTTP requests. So Hubble gives you this full-blown observability layer for your cloud-native applications. And in terms of security, which is one of the things I talked about earlier as something you consider for observability, it answers which services had connections blocked due to a network policy. This is really interesting, because I've had people ask me over the years, how do you know your network policy is working? Hubble can help you answer that particular question.
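Going back to the service-dependency question for a moment, the idea can be sketched in a few lines of Python: aggregate flow records into (source, destination) edges. The record shape and names below are simplified assumptions for illustration, not the exact Hubble flow schema.

```python
# Minimal sketch: turning flow records into a service-dependency view.
# The record shape here is an assumption, not the exact Hubble schema.
from collections import Counter

flows = [
    {"source": "default/frontend", "destination": "default/backend", "verdict": "FORWARDED"},
    {"source": "default/frontend", "destination": "default/backend", "verdict": "FORWARDED"},
    {"source": "default/backend", "destination": "kube-system/kube-dns", "verdict": "FORWARDED"},
]

# Count how frequently each (source, destination) pair communicates.
edges = Counter((f["source"], f["destination"]) for f in flows)

for (src, dst), count in edges.most_common():
    print(f"{src} -> {dst}: {count} flows")
```

Hubble's service map is essentially this aggregation done continuously, with real flow data and much richer labels.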
For example, if you create a network policy that does a default deny, ingress and egress, so all traffic is dropped, Hubble can show you a flow of all this traffic being dropped, which is also one of the things I will demo today. It can also show which services have resolved a particular DNS name. So Hubble has a really interesting architecture that consists of several parts. The first interesting part is the Cilium agent. The Cilium agent acts as the CNI and manages connectivity for all its managed Kubernetes pods. So everywhere you have Cilium running, Hubble can get a lot of information from the Cilium agent already running there. Then there's the Hubble server, which runs on each of the nodes in your cluster and retrieves the eBPF-based visibility data from Cilium. This is what I like to call the engine: it is what takes all those eBPF events and gives you visibility. Hubble then exposes gRPC services from the Cilium process that allow clients to receive flows and other types of data. And Hubble Relay, which is also a very important part of Hubble, is there to provide you full network visibility across your entire cluster, and even across multiple clusters. So how do Cilium, eBPF, and Hubble relate? What's the correlation between all these components? The Cilium agent attaches eBPF programs to all Kubernetes pods. Cilium subscribes to the Kubernetes API for updates on certain resources; this is where the metrics start getting generated. Cilium then converts the resources it has received into eBPF maps that the eBPF datapath accesses across the entire cluster. Then the eBPF datapath inspects incoming connections to the pods.
If you look at the diagram, you can see there's a Cilium monitor there, which helps as the eBPF datapath inspects incoming connections to the pods. The eBPF datapath emits trace and policy events, which are the data that gets collected, and pushes them into an eBPF map that acts as a ring buffer. So now the metrics have been generated; now Hubble needs to take them and make them sensible to you, to give you access to them. A Hubble instance running inside the Cilium agent reads that data from the eBPF map and collects the networking events, storing them in a historical buffer so you can access them later. Hubble has a limit on how far back it can give you metrics, based on a specific period of time. Hubble then exposes the collected data and metrics through gRPC, via the Hubble API. The data is then accessible to other components like the Hubble UI, or you can take it to Prometheus for monitoring and analysis. So this is the point where you can actually export those metrics, maybe to the Hubble UI, or, if you want, to Prometheus and Grafana. Then Hubble has a bunch of components. Throughout the talk I've been mentioning the Hubble UI, Hubble metrics, the Hubble CLI, so what are these components? The Hubble CLI gives you detailed flow visibility in real time by just running the command hubble observe. It also allows you to do extensive filtering. You can filter to get the network traffic for certain pods in a namespace, filter based on certain labels, or on certain other criteria. You can do all that with the Hubble CLI. And if you need to export the data, you can also get your network traffic in JSON.
I will show how you can do that. The Hubble UI provides a graphical interface for all your network traffic; basically all the data that Hubble gives you, you can render with the Hubble UI. It gives you flow display and filtering, the same things you get in the CLI, but this time in a much more graphical format. You can also see your network policies and how they are working in real time. Then Hubble metrics: this is basically the engine for generating metrics, and with Hubble metrics you can export most of your data to Prometheus, Grafana, and any other OpenTelemetry-compatible tool. As for the Hubble CLI, this is what it looks like. It gives you information at the network and application level, and you can see things like your TCP connections, DNS queries, Kafka communication, and so much more. You can also carry out extensive filtering. I'm not sure this is visible enough, but if you look at the example, you can see that I am filtering based on certain labels. You can also filter based on ports, services, or IP addresses, if that's what you're looking for, and on HTTP methods: if you want to see all the GET requests or the POST requests you're getting, you can see that with the Hubble CLI too. And for JSON, if you wish to render your network traffic as JSON, you can just run hubble observe and add the flag -o json, and it's going to render every single flow you currently have in JSON, which is really useful. As for the Hubble UI, it provides a graphical interface where you can see all your network flow traffic in one dashboard. It also gives you visualization with service maps.
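To give a feel for that JSON output: each line of hubble observe -o json is a standalone JSON object, so it's easy to post-process. Here is a hedged sketch; the sample records and the field paths used (flow.verdict, flow.l7.http) are approximations of Hubble's flow format and should be checked against your Hubble version.

```python
# Sketch: filtering `hubble observe -o json`-style output for HTTP GETs.
# The sample lines and field paths approximate Hubble's JSON flow
# format; they are not guaranteed to match every version exactly.
import json

sample_output = """\
{"flow": {"verdict": "FORWARDED", "l7": {"http": {"method": "GET", "url": "http://app-two/"}}}}
{"flow": {"verdict": "DROPPED"}}
"""

get_urls = []
for line in sample_output.splitlines():
    flow = json.loads(line).get("flow", {})
    http = flow.get("l7", {}).get("http")
    if http and http.get("method") == "GET":
        get_urls.append(http["url"])

print(get_urls)
```

In practice you would pipe the real command into a script like this, or just use the CLI's built-in filters.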
You can filter network traffic by verdicts, a verdict meaning this connection was dropped, this connection was allowed, et cetera. You can filter by those in the Hubble UI. You can also easily switch between namespaces: if you have multiple namespaces in your cluster and you want to see the traffic going on in them, you can switch between them with the Hubble UI and see how the connections are going overall. As for Hubble metrics, Hubble exposes a metrics endpoint which can be scraped by something like Prometheus; you can scrape those metrics and use them however you wish. It has certain built-in metrics like HTTP, TCP, UDP, DNS, and ICMP, and another good thing about Hubble is that it is customizable and extensible, so you can write your own metrics. If there are certain things you want to look for or filter on, you can write your own metrics for them. You can also visualize this data from Hubble metrics with Grafana. Grafana is one of the most popular tools for visualizing and creating dashboards in observability, so if that's more your thing or what you're already using, you can do that directly in Grafana. This diagram showcases the whole Hubble, Prometheus, Grafana relationship. It's basically a Grafana server with a data source pointing to Prometheus, with the L7 HTTP metrics from Hubble. Grafana has a data source plugin for Hubble, so you can have this on your Grafana dashboards. So I've spoken a lot; maybe it's time for us to see Hubble in action. Let me just switch this. I need to figure out how to switch this, OK, one minute. OK, so if I can just switch this here. OK, have this. So this is a demo I prepared for this session.
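As an aside before the demo: the kind of verdict aggregation that a metrics endpoint performs can be sketched as a toy Prometheus-style counter. The metric and label names below are invented for illustration; they are not Hubble's actual exported metric names.

```python
# Toy sketch of flow-to-metric aggregation in Prometheus exposition
# style. Metric name and labels are illustrative assumptions, not the
# actual names Hubble exports.
from collections import Counter

flows = [
    {"verdict": "FORWARDED", "protocol": "TCP"},
    {"verdict": "DROPPED", "protocol": "TCP"},
    {"verdict": "FORWARDED", "protocol": "UDP"},
]

counters = Counter((f["verdict"], f["protocol"]) for f in flows)

# Render the counters as /metrics-style exposition lines.
lines = [
    f'flows_total{{verdict="{verdict}",protocol="{proto}"}} {value}'
    for (verdict, proto), value in sorted(counters.items())
]
print("\n".join(lines))
```

Prometheus scraping an endpoint that serves lines like these is all that "exposing metrics" means at the wire level.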
You can also find it on GitHub; I will share the link later. I just need to get my terminal here. Let me find these things. OK, can you see my terminal? Is it visible enough? OK, cool. So first: I currently have a kind cluster with two worker nodes and one control plane running. If you want to follow along, this would be really interesting to do. One minute. OK, sorry, is there a way I can mirror my screen? I don't like extending it. Technical support, is there any way I can mirror my screen? OK, there we go, cool. All right, found it. So the first thing we need to do is to create a kind cluster running with two worker nodes and one control plane, which I currently have running. If I cat the kind config YAML, I have that particular config here. I have also disabled the default CNI, because Cilium is a CNI, and if you want to bring your own CNI, you have to disable the default one that comes with kind. Then the next thing we need to do is to install Cilium. It's always advisable to install the stable version, which is version 1.15. So if I check my Cilium installation, hopefully I'm following best practices. Yeah, I currently have Cilium version 1.15 running. The next thing after the installation is to check that Cilium is ready. We can do that with cilium status. It says Cilium is OK, the operator is OK, and Hubble Relay is also OK. That's because I enabled Hubble earlier, because conference Wi-Fi is not the best thing to demo on. So the next step is to enable Hubble, which I have done already, with cilium hubble enable. To check that your Hubble installation is running properly, you can just do hubble status.
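For reference, a kind config matching that setup, one control plane, two workers, default CNI disabled, would look roughly like this (a sketch, not necessarily the exact file used in the demo):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
networking:
  # Disable kind's default CNI (kindnet) so Cilium can be installed instead.
  disableDefaultCNI: true
```

You would create the cluster with kind create cluster --config pointing at this file, then install Cilium on top.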
If you check that, you can see that it's connected on all three nodes and there are flows; the health check says OK, so we are good to go. Then, to make sure that ports are not clashing, we need to port-forward Hubble in a separate tab, which I have done here. This is what is going to act as our server for Hubble Relay. I've done that here with kubectl port-forward on the Hubble Relay service. After that, we need to enable the Cilium Hubble UI, which is also done. You can do this with cilium hubble enable --ui, and I have done that already. So if I check my Cilium, I can see that the Cilium Hubble UI is running. The next thing we need to do is to start the Hubble UI, which I also have running. Let me just kill the server, then run cilium hubble ui again. Hopefully nothing breaks. OK, awesome. So this is what the Hubble UI looks like. Like I said earlier, it gives you access to all the namespaces in your cluster, so you can switch between namespaces. Currently I have the regular namespaces, and I also have the default namespace, which is where we're going to be doing most of the demo. I'm just going to leave this open. Right now nothing is rendering, because we've not made any requests; there's no traffic, there's nothing going on. So we need to find a way to get this to pick up a request; let's simulate some traffic. To do that, we need to deploy an application, right? There's no demo without an application. So let's deploy one. I already have the YAML file for the application on my machine, so I can just do kubectl apply -f on it. It's basically two applications. Let's just see what that looks like.
Nothing special, nothing too complex, just a regular setup. We have a deployment of one application with an nginx image, and the same thing for another application with an nginx image, with one replica each. The next thing is to check that every pod is running. We can do that with kubectl get pods, and it's still creating. Hopefully this creates fast enough. OK, we have to wait a bit for that. Hopefully the Wi-Fi lets us. OK, it's still creating; the containers are still creating. Sorry guys, the Wi-Fi is not great. OK, this is taking too long. I probably should have done this earlier, but I wanted to take everybody along. So for the next step, we will be exposing the deployments, then going into one of the pods and making a curl request to simulate some traffic, and seeing if Hubble picks that up for us. Then we're also going to apply a Cilium network policy to drop traffic from our pod and see if Hubble can pick that up and show it to us as well. OK, it looks like I don't have enough time, but let's see how it goes. OK, we have that running already. The pods are running. So let's quickly expose our deployments. We've exposed them, so kubectl get svc. OK, everything is good. The next thing we need to do is to get one of our pods, let's just take the first pod and see that it's running. Let's exec into that pod and make an HTTP request. I'm just going to copy this real quick, then paste it, then curl, for example, google.com. So we've made a request, and if you check the Hubble server we have running, you can quickly see that this traffic has been picked up by the Hubble CLI.
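The application manifest is along these lines, a sketch with an assumed name (app-one here; the second deployment is identical apart from the name):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-one
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-one
  template:
    metadata:
      labels:
        app: app-one
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
```

Exposing each deployment as a Service (for example with kubectl expose deployment) then gives the in-cluster names the demo curls against.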
And if you go back to the Hubble UI, you can see the request we've made. App one made a request outside the cluster, to "world", and you can get more information: the traffic direction is egress, the verdict is forwarded, and you can see the destination IP and all the labels. Now let's apply a network policy that denies us from doing something like this. So let's quickly apply a network policy. All this network policy does is prevent us from making any request outside our cluster. It's basically just a Cilium network policy that denies all traffic but allows kube-dns, because we need to be able to resolve DNS. So if I go back to that pod real quick and make the same request, you can see that we can't make that request, and Hubble has picked up that this particular call was denied. And if we go back to the UI, you can see that a lot of it says dropped. The verdict here is dropped because the policy has denied it; you can also see that the drop reason is that a policy denied it, and the traffic direction is also egress. So this is some of the capability that Hubble can give you. Now, quickly, let's enable Hubble to show us the kind of requests we've made, because right now we are not getting L7 info, so we need to tell Hubble to actually do that for us. We can do that by applying a policy. So let me quickly get the policy. First, let me just delete the network policy. OK, we've deleted the network policy now, so we can actually make certain requests again. So let's give Hubble the L7 visibility capability. If we go back to our pod and make another request, so if I curl google.com, it takes a couple of seconds, but you should be able to see the L7 info; give me one second, it takes a bit.
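A default-deny egress policy of the kind described, deny everything except DNS lookups to kube-dns, might look like this. It's a sketch following the common Cilium pattern; the name and selectors are assumptions, not necessarily the exact policy from the demo:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: deny-all-egress
spec:
  # Empty selector: the policy applies to every pod in the namespace.
  endpointSelector: {}
  egress:
    # Only DNS to kube-dns is allowed; once an egress section exists,
    # all other egress traffic is denied by default.
    - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: UDP
```

For the L7 visibility step, a similar policy would add an HTTP rules section under toPorts for the application port, which is what tells Cilium to parse requests at L7.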
And if you see this red line, this is because of the policy we applied earlier; it shows that these particular components could not make this particular request. So it also renders that for you to see. Let me refresh this real quick and see if I can make another request. OK. Oh, OK, I've not deleted the network policy I was supposed to. OK, that's it. So if I go back and do this, it should work now. Yes. OK, looks like it's still being denied. Yeah, everything looks good. The problem with demos: they don't always work. But let's see how it goes. OK. So if I curl, one minute. OK, looks like something is wrong. So let's debug it together. Let's find out what's going on. What did we do wrong? Sorry? Yeah, let's try calling the other pod. So let's see, curl HTTP to the service, right? OK, the DNS resolver, kube-dns, is not resolving this. And we've deleted the network policy, so this shouldn't be happening. But yeah, that's how demos go. Everything looks good otherwise. OK, so let's just make a request between the pods instead. Let's take the pod IP and curl between the pods. OK. Sorry. Thanks. OK, so that works. Thank you. So if you look at the request we've made, you can see that we made a GET request, and Hubble was able to quickly show us that, and it can show you things like the HTTP method, the path, et cetera. So basically, at the base level, that's what Hubble can do for you. And that's basically the end of the talk. So quickly, if you are interested in doing this outside the conference, you can go to this link and do the lab yourself. And if you're interested in learning more about Hubble, you can check the official Cilium and Hubble documentation.
We have some labs from Isovalent, and there are also certain talks that I personally recommend you watch if you are interested in Hubble. Thank you so much, and thanks for coming. Once again, sorry about the demo; demos never work at conferences, even though I checked this before I left. Thank you guys. Thank you so much for coming once again.