Good morning, everyone. Buenos días. I hope everyone is having a great time. Today I'm giving the first lightning talk, about connecting Kubernetes clusters on the edge using Cilium Cluster Mesh. The agenda for today starts with an introduction to Cilium and eBPF, to understand how eBPF works and how Cilium uses it to publish services across clusters and to provide identity-based security across clusters with Cluster Mesh. Then I want to do an overview and deep dive of Cluster Mesh use cases, and talk a little about how Cluster Mesh can be used on the edge to support edge workloads, in a design or topology that extends services across multiple Kubernetes clusters. My name is Raymond De Jong. I'm a solutions architect working at Isovalent, the company behind Cilium.

So let's start with eBPF. What is eBPF? It stands for extended Berkeley Packet Filter, which doesn't tell you a lot. What it basically provides is a sandboxed virtual machine for running custom code in the kernel, triggered by events, without actually updating or changing the kernel. This means that if you have a recent enough kernel, you can leverage eBPF to do a number of things based on events. eBPF has a user space and a kernel space: when an event occurs, for example a packet arriving at the network interface, an eBPF program can be triggered on that event and run custom logic. Today we're talking about networking, so that means every time a process connects, a TCP retransmit happens, or a packet leaves or arrives at the network interface, Cilium can use eBPF to act on it and inspect that traffic, to provide rich observability or services across clusters.

Cilium is software for providing connectivity across Kubernetes clusters. It provides security, connectivity, services, and observability for your container workloads, and it uses eBPF technology under the hood. It abstracts eBPF away from you as an engineer interacting with the Kubernetes cluster, so you don't have to know eBPF or program eBPF; Cilium just leverages it under the hood to provide network connectivity and rich observability.

So, a number of things we are currently doing with Cilium, and especially Cluster Mesh. We provide identity-aware network policies across clusters: we can see identities from other clusters and secure those workloads based on their metadata, that is, their labels. We provide services and load balancing, which in terms of Cluster Mesh means we can provide services across clusters. There is bandwidth management, flow and policy logging, and operational and security metrics. We also have Hubble, a component which talks to the Cilium agent to inspect and show traffic leaving and arriving at your workloads.

In terms of Cluster Mesh, you can use Cilium for availability, security, and manageability use cases. First of all, you can run services across your Kubernetes clusters: you can connect clusters on-prem, in the cloud, or in a hybrid combination using Cilium Cluster Mesh, and you can define global services, which basically make a service and its endpoints available across clusters. You can also build centralized-services topologies, which I will show a bit later. And while you're doing so, you can enforce security using Cilium network policies.
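To make the identity-aware policy idea concrete, here is a minimal sketch of a cross-cluster Cilium network policy. The policy name, the pod labels, and the cluster name cluster2 are illustrative assumptions, not from the talk; in a cluster mesh, the cluster an endpoint belongs to is exposed as the io.cilium.k8s.policy.cluster label, which the policy below matches on.

```yaml
# Sketch: allow "backend" pods in the local cluster to accept traffic
# from "frontend" pods running in a remote cluster named "cluster2".
# All names and labels are illustrative.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend-from-cluster2
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend
            io.cilium.k8s.policy.cluster: cluster2
```

Because the policy selects identities by labels rather than IP addresses, it keeps working when pods are rescheduled in either cluster.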
So like I said, each unique set of labels on a pod creates a unique identity, and using Cilium you can create Cilium network policies, or clusterwide network policies, to enforce that security across your clusters. And then finally, manageability using Hubble: we provide visibility for the workloads across your clusters, so you can inspect traffic across your clusters. For example, when a given pod in one cluster reaches an endpoint in another cluster, you can inspect that traffic, and the Hubble UI understands where it comes from and which identity it has.

So let's dive a bit deeper into the Cluster Mesh use cases. The first use case is obviously high availability, where you have two or more clusters connected with Cilium Cluster Mesh. How it works is that in each cluster you create a namespace and a service, and then you annotate the service with the service.cilium.io/global annotation to make it available across clusters (a sketch of such a service follows below). What happens, using eBPF, is that Cilium knows the endpoints in each cluster and advertises them across clusters, so in each cluster you will see the endpoints of both clusters. That means that if, for example, the pods in one cluster are misconfigured, being redeployed, or otherwise failing, the service can fail over to the other cluster.

Another use case is shared services, where you centralize certain services in one cluster and keep smaller clusters elsewhere, for example on the edge. You most likely have things like DNS, logging, or some kind of storage where you need to configure and store the state of your workloads, and you run those in a shared-services cluster. You don't necessarily want to manage them in your edge clusters, because you want to be flexible in the lifecycle of those clusters, or you want to keep them as small and as agile as possible. So what you do is expose the service in the front-end clusters while running it in a single shared-services cluster. Each front end in this example connects, for example, to a vault service in its own cluster and gets redirected to the shared-services cluster. This also works very well for providing segmentation, for example between tenants, or separation between security levels: in a high-security cluster, you don't necessarily want to run that given service's workload locally.

And then finally, a very useful use case for edge clusters, I believe, is splitting services. This is about creating stateless versus stateful clusters. On the edge, and in some other Kubernetes topologies as well, stateless clusters give you the option of more flexible workloads on the front end, which also perform better, because you're not storing state on the front end; you want low latency and quick responses. But if you need to store data or consult data stores in another cluster, you can create a stateful cluster, expose that service across your front-end clusters, and make that connectivity available. That also makes it easier to lifecycle your front-end edge clusters and maintain just a single stateful cluster, which obviously needs more configuration and care to keep available.

So if you want to know more about Cilium Cluster Mesh, I recommend going to cilium.io to learn more.
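As an illustration of the global service annotation described above, here is a minimal sketch. The same Service would be applied in each connected cluster; the vault name, namespace, and port are illustrative assumptions, echoing the shared-services example from the talk.

```yaml
# Sketch: applying this Service in every connected cluster and marking
# it "global" lets Cilium load-balance across the healthy endpoints of
# all clusters. Names and ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: vault
  namespace: shared-services
  annotations:
    service.cilium.io/global: "true"
    # For the shared-services topology, a cluster that should only
    # consume the service can opt out of contributing endpoints with:
    # service.cilium.io/shared: "false"
spec:
  type: ClusterIP
  selector:
    app: vault
  ports:
    - port: 8200
      targetPort: 8200
```

In the high-availability case you would run backends in every cluster; in the shared-services case only the central cluster runs backends, and the front-end clusters simply resolve the same service name.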
There's a lot of documentation on the cilium.io website to get started with Cluster Mesh, with plenty of examples showing how to create a service, make it a global service, and see how it fails over across clusters. What's also coming: we're close to releasing a feature that gives you affinity. For example, you may want only the local endpoints to respond while they are available, and fail over to another cluster only when the local endpoints are not available. Or there's the option to expose only remote endpoints, which makes sure that when a front end hits a service, it gets redirected to another cluster where the workloads run. This also gives you a lot of flexibility when you're doing maintenance on a specific cluster, by having that service redirect traffic to another cluster. There's a sketch of what this could look like below.

There are also weekly interactive Cilium introductions and live Q&As with Thomas Graf, the Cilium co-creator. And we have installfests to get you started with Cilium, where some of us walk you through installing Cilium on a kind-based setup, for example. That's what I had for today. We also have Slack, GitHub, the website, and obviously Twitter to follow if you want to know more about Cilium. If you have specific questions about Cilium Cluster Mesh, I will be outside; please ask me, I'm happy to help. We also have a Cilium booth in the expo starting tomorrow, so myself and other Cilium contributors will be there to answer any questions. Thank you for spending time with me. There will be another tech talk right after this, so thank you very much.
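For reference, here is a sketch of the service-affinity feature mentioned above. Since the talk describes it as close to release, treat the annotation name service.cilium.io/affinity and its values as assumptions to check against the current Cilium documentation; the service name and ports are illustrative.

```yaml
# Sketch: a global service that prefers endpoints in the local cluster
# and only fails over to remote clusters when no local endpoint is
# healthy. Names and ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: frontend-api
  annotations:
    service.cilium.io/global: "true"
    service.cilium.io/affinity: "local"   # assumed values: local | remote | none
spec:
  selector:
    app: frontend-api
  ports:
    - port: 80
      targetPort: 8080
```

Setting the affinity to remote would cover the maintenance scenario from the talk, steering traffic away from the cluster being worked on.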