All right, good morning, everyone. Let's get started. Welcome to this session about Cilium cluster mesh, and also a bit about service mesh and how you can use them across your Kubernetes clusters and on the edge. How many of you know about eBPF or Cilium? Good. Cool. That's great. For those who don't, I will introduce the technology before we get started with Cilium cluster mesh. My name is Raymond De Jong. I'm a Solutions Architect working at Isovalent. Isovalent is the company behind Cilium and is also part of the foundation contributing to eBPF. Isovalent created Cilium, open-sourced it, and donated it to the community; it's an incubating project. So today we're focused on Cilium cluster mesh.

But before we get started, to explain what eBPF is: we like to say that what JavaScript is to the browser, eBPF is to the kernel. It makes the kernel programmable in a very efficient way without changing the actual kernel. It allows us to attach eBPF programs to kernel events. Today we focus on networking, but this can also apply to things like files, processes, and so on. A kernel event can be something like a process opening a socket or a network interface sending a packet on the wire. All of those events can trigger an eBPF program, and Cilium is built on that technology. The good thing is that you don't have to know everything about eBPF to work with Cilium. Cilium abstracts that technology for you, and depending on the flags you set in your Helm values file, for example, it will mount the right eBPF programs to work for you.

So Cilium is built on eBPF, and it provides advanced networking capabilities for the cloud native age, especially for Kubernetes clusters, but also beyond. Today we'll focus on the networking piece, which is mainly about load balancing across multiple clusters, a bit in combination with the service mesh provided by Cilium as well. On top of Cilium, we also have an observability platform, which is also useful for tracking workloads across clusters, observing flows between services within your cluster and across clusters, and making informed decisions about network security for these workloads.

Now an overview of cluster mesh. First of all, cluster mesh gives you the ability to connect multiple clusters together into a unified data plane, so you can route, connect, and load balance workloads across clusters. On top of that, we provide observability for those clusters using our identity-aware solution: instead of tracking IPs, we create identities, attach them to the workloads, and can follow and observe those identities across clusters. Now that we have released service mesh, you can also use service mesh across clusters, and encryption and pod IP routing are available as well.

To get started you have to have, at the moment at least, non-overlapping IP ranges in your clusters, and you need to be able to connect the clusters together using, for example, a VPN or some kind of physical wire, depending on your topology. It doesn't matter if you run in the cloud, hybrid, or on-prem. It can run on OpenShift, but also on GKE, and if you want you can also run it on a bring-your-own distribution, let's say a Debian release, as long as you have a fairly recent kernel for the eBPF programs to work.
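To make that concrete, the cluster identity and the non-overlapping pod IP ranges are typically set when you install Cilium, for example through Helm values. The snippet below is only a minimal sketch: the cluster name, ID, and CIDR are placeholders, and the exact value names can vary between Cilium versions.

    # Hypothetical Helm values for the first cluster (values-cluster1.yaml)
    cluster:
      name: cluster1          # must be unique within the cluster mesh
      id: 1                   # small numeric ID, also unique per cluster
    ipam:
      mode: cluster-pool
      operator:
        # pod CIDR must not overlap with the pod CIDRs of the other clusters
        clusterPoolIPv4PodCIDRList:
          - "10.1.0.0/16"

The second cluster would get its own name and ID and a different, non-overlapping CIDR.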
A little bit more about the architecture. It works in such a way that each cluster runs a cluster mesh API server, which keeps track of its identities and exposes those identities to remote clusters. Each cluster has read-only access to the other clusters, which means the agents in each cluster can read and know about the identities in the other clusters, and that provides all the security, connectivity, and observability across clusters. The best practice is to expose the API server through a load balancer, because that's obviously highly available, but if that's not available you can also expose it through a NodePort.

So let's go through some of the use cases. The main use case is obviously high availability, where you want to expose or load balance traffic to a given service across clusters. Cluster mesh works with the concept of a global service, which means that in each cluster you create a namespace with the same name and a simple ClusterIP service with the same name, and you annotate it with a special annotation, which I will show in a moment. That triggers Cilium to advertise its endpoints across clusters, and, depending on the configuration, traffic is then load balanced across clusters by default. It means that if endpoints fail in a given cluster, traffic can be load balanced to the remaining endpoints in your other clusters.

Another very useful use case for cluster mesh is shared services. We see more and more, also on edge clusters, smaller clusters or multi-tenant clusters for which you don't necessarily want to set up shared services in each individual cluster. Using cluster mesh, you can create a centralized services cluster for things like monitoring, secrets, and DNS, expose those services as global services, and let the remote clusters consume or connect to those services in your central services cluster. This obviously allows you to be more agile with your tenant clusters or edge clusters.

Another use case is splitting services, and this is about stateful versus stateless. Similar to shared services, in this case you would have a centralized cluster in which you want to store data and keep state, but you want to keep your edge clusters or remote clusters as nimble, agile, and small as you can, because you want to be able to lifecycle them more easily. Again, the same principle: you expose a service, in this case a data store service, for example, as a global service across clusters so they can connect and be routed to that cluster.

Then a little bit more about the actual core of this session: local and remote service affinity. We like to call it topology routing. What helps a lot with the previous use cases is that you can engineer the preferred way of routing to your endpoints across a cluster mesh topology. In this simple example, we again have a backend service, and in this case we obviously want to prefer local endpoints over remote endpoints. This allows you to create high availability across clusters, but only when you need it, only when all endpoints in a given cluster are failing. This is also very useful in multi-cluster topologies where you want to avoid cross-cluster traffic, because that adds latency and cost. A sketch of the annotations involved follows below.
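To make the annotations concrete, here is a minimal sketch of such a global service. The namespace, service name, and port are hypothetical; the annotation keys are the ones Cilium documents for cluster mesh, but double-check them against the version you run.

    apiVersion: v1
    kind: Service
    metadata:
      name: backend
      namespace: demo                        # same namespace and name in every cluster
      annotations:
        service.cilium.io/global: "true"     # advertise this service's endpoints to the other clusters
        service.cilium.io/affinity: "local"  # prefer local endpoints; "remote" prefers remote ones
    spec:
      type: ClusterIP
      selector:
        app: backend
      ports:
        - port: 8080

Leaving out the affinity annotation gives you the default behavior of load balancing across all clusters.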
Using local service affinity makes that more efficient and reduces latency. The other way around is remote service affinity. The most useful case here is to temporarily annotate a given service to prefer remote endpoints instead, so that all traffic by default is forwarded to the remote endpoints. This is very useful during rolling updates, upgrades, or restarts of workloads in a given cluster.

And this is how it looks, as in the sketch above: a simple example of a Service of type ClusterIP, a backend service, with two annotations. The first annotation, the global service annotation set to true, triggers that service's endpoints in that cluster to be advertised across clusters, so you have to set it in each cluster whose endpoints you want to expose. The second annotation, service affinity set to local, triggers the preference for local endpoints. The other way around is remote service affinity: it looks very similar, but in that case you annotate the service with service affinity remote to prefer remote endpoints instead of local ones. If you don't set a service affinity annotation at all, traffic is load balanced across clusters by default.

Now a little bit more about how you can leverage Cilium cluster mesh in combination with Cilium service mesh to do more advanced things. Cilium service mesh was released with Cilium 1.12 last spring, early summer. It allows you to configure, for example, resources such as Ingress, Layer 7 path-based routing, percentage-based routing, canary rollouts, and things like retries. So in this example you can use an Ingress resource to attract traffic into your cluster and do some kind of Layer 7 routing, in this case percentage-based routing. But the services that traffic is forwarded to can also be global services, and you can do the same traffic engineering with those services as well.

This is another example where we have topology-aware routing, the service affinity across clusters, where each front-end service prefers local endpoints and each back-end service prefers local endpoints, but again, in case of failures, traffic is forwarded to the remaining endpoints in other clusters.

Another example is that you can even use cluster mesh in combination with service mesh to spin up new clusters and do, for example, canary rollouts to that new cluster. There may be reasons why you want to migrate to a new cluster: perhaps it's a new cloud provider, or you want to move from on-prem to the cloud. Your applications are already running, and you may want to migrate them step by step, slowly introducing more traffic, or even deploy a new version of your application on a new cluster and then slowly introduce traffic into that new cluster using both Cilium service mesh and cluster mesh. This is just one example; it's super flexible in terms of how you architect it. Also, Cilium doesn't introduce sidecars in its service mesh solution, which means that at the node level, with endpoints already in the cluster, we reduce latency something like three to four times compared to a sidecar implementation.
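As a rough sketch of how the Ingress part could look when combined with cluster mesh, assuming Cilium's Ingress controller is enabled: the Ingress itself is a standard resource with the cilium ingress class, and the backend Service it points at can itself carry the global service annotation shown earlier. Names and paths here are hypothetical, and percentage-based or canary routing needs additional Layer 7 configuration on top of a plain Ingress like this.

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: frontend
      namespace: demo
    spec:
      ingressClassName: cilium          # handled by Cilium's embedded Ingress controller
      rules:
        - http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: frontend      # this Service can be annotated as a global service
                    port:
                      number: 80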
This concludes my lightning talk about cluster mesh. First of all, if you want to know more about how it actually works, or you want to try it in a lab, the hands-on labs on isovalent.com are a great resource for learning more, also diving a bit into how you can, for example, use network policies across clusters with Cilium cluster mesh. On the open-source side, we have the cilium.io website, with getting-started guides for cluster mesh and also service mesh. And our Slack channel is open, where we support the community and people are happy to answer questions. We also have a booth, both a Cilium booth and an Isovalent booth, so if you want to visit us, please feel free to do so. We also have Liz Rice with us, who's signing the eBPF book, so you can get a signed copy from us; it's a very good book to learn more about how eBPF works. And with that, I'm happy to take any questions. Thank you.