With that, let's get going and look at what's new in Cilium 1.12. First of all, a very high-level overview of what Cilium is, for those maybe completely new to it. Cilium, originally created by Isovalent, our company, is eBPF-based networking, security, and observability, and new in 1.12 also service mesh and ingress; we'll obviously talk quite a bit about both. Cilium is a CNCF project at incubation level, and we're currently completing all the requirements to graduate, hopefully later this year or early next year. Underneath Cilium are two core layers of technology: eBPF, on which we'll do a super quick deep dive, and Envoy for layer 7 processing. Cilium is used across the world by lots of different users, many of them CNCF end users, but obviously also many enterprises: on-prem, cloud, telcos; Cilium is very, very universal. Cilium is also used heavily across managed Kubernetes platforms, such as Google's Anthos and GKE, EKS Anywhere, which went GA a little bit earlier, as well as slightly smaller cloud providers like DigitalOcean and so on. So you may, in fact, already be using Cilium if you use one of those Kubernetes distributions.

So let's dive in: eBPF in one minute. What is eBPF? It is what's driving Cilium underneath. It's a powerful technology; you don't really need to understand it to use Cilium at all, but it is very exciting because it's the enabler for what Cilium does and what makes Cilium unique. eBPF is essentially a programmable engine at the operating system level. It was originally written for the Linux kernel, but Microsoft has since ported it to Windows, so it's now also available there. In a nutshell, eBPF is to the operating system what JavaScript is to a browser: similar to how JavaScript makes a web browser programmable, eBPF makes the Linux kernel, or the operating system, programmable. We can run programs when certain events happen, such as when network packets are processed, when a system call is made, when trace points are invoked, or even when applications call certain functions. This is how Cilium is implemented, and it is what many other eBPF-based projects leverage to provide infrastructure or solve user use cases. If you want to learn more about eBPF, feel free to go to ebpf.io. It's a website where we host community-oriented content around eBPF: tutorials, documentation, videos. You will also find the link to the eBPF Summit, with lots of recordings of former eBPF Summits. We actually have the eBPF Summit coming up in just a couple of months, so if you're interested in diving deeper on eBPF itself, feel free to sign up; it's completely free. And if you are really into eBPF and want to submit a talk, reach out to Bill or Liz Rice on our Slack; I think it's not too late to still put a talk in.

Let's start diving into Cilium 1.12: what is new? One of the most exciting features is that Cilium is now a fully capable ingress controller as well. As you install Cilium 1.12, you gain the ability to enable the ingress controller. This means that if you define Kubernetes Ingress resources, layer 7 load balancing, you can now implement them directly with Cilium and don't need to install an additional ingress controller. This allows you to do canary releases or canary rollouts, path-based routing, TLS termination, and so on.
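For illustration, here is a minimal sketch of such an Ingress; the service names, paths, and ports are hypothetical, and `ingressClassName: cilium` is what hands the resource to Cilium's ingress controller:

```yaml
# Hypothetical path-routing Ingress handled by Cilium
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  ingressClassName: cilium    # implemented by Cilium, no extra controller needed
  rules:
  - http:
      paths:
      - path: /details        # requests to /details hit one set of backends
        pathType: Prefix
        backend:
          service:
            name: details     # hypothetical service
            port:
              number: 9080
      - path: /               # everything else hits the default backends
        pathType: Prefix
        backend:
          service:
            name: frontend    # hypothetical service
            port:
              number: 8080
```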
Some of these features have been available in Cilium before, with manual configuration or with our policy configuration. We now have a fully conformant ingress controller for your Kubernetes cluster: we pass all the Kubernetes ingress conformance tests. This applies to traffic into your cluster and within the cluster, and you can even use ingress across clusters in a multi-cluster scenario as well. This is an example of how it looks; many of you have probably been using ingress before, so no surprise here. You can define an Ingress and, for example, hit different load balancing backends based on the URL or the path. In this case, when a user hits /details, they hit a different set of backends than when they hit the root of the URL. Ingress is implemented using the existing Envoy integration, so it uses the Envoy proxy underneath and has all the functionality and performance characteristics of the Envoy proxy.

Switching gears and talking about service mesh: 1.12 is the first release where we have an exciting new sidecar-free data path. So far, we have been offering option number two on this slide, which is the Istio integration. Some of you may have been using Istio and combining it with Cilium. In this integration mode, you essentially make Cilium aware of an existing, independent Istio installation, and Cilium provides a couple of additional pieces of functionality to Istio, such as removing unencrypted traffic, as well as the ability to enforce Cilium's own layer seven policies in the Istio sidecar. This is what we had so far, and you can of course continue using it; it remains fully supported in 1.12. We have added an additional option: you can now run a so-called sidecar-free data path. And I see questions along the lines of "I don't understand the expression sidecar-free", so we'll talk about what this means right now.

What is this sidecar? In general, service meshes so far have been implemented with a so-called sidecar proxy. This is the green box on the left side here: essentially a proxy that runs one instance for every pod. So for every blue pod you're running on the right side, you need a sidecar proxy, whether this is Linkerd or Envoy or NGINX or some other proxy; you need another green box, this so-called sidecar proxy. You end up with a lot of sidecar proxies. These sidecar proxies are injected through an iptables rule: you see this first blue arrow going across the network stack here, and an iptables rule transparently redirects all the traffic into the sidecar proxy. This is the so-called sidecar model; it's how service meshes have been operating so far, how Linkerd operates, how Istio operates, and so on. When we now talk about sidecar-free, we have an option where you don't have to run one sidecar per pod, but can actually run a sidecar-free data path. This operates in two modes: it uses eBPF, so no proxy at all, whenever possible, and we'll look into when that is the case; and when the required functionality is, for some reason, not possible in eBPF itself, it uses a per-node Envoy.
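To give a feel for the split between those two modes: plain L3/L4 rules stay entirely in eBPF, while a layer seven rule like the following needs the proxy. This is a hedged sketch using the long-standing CiliumNetworkPolicy CRD; the labels, port, and path are hypothetical:

```yaml
# Hypothetical L7 policy: only GET requests on /details/* are allowed
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-get-details
spec:
  endpointSelector:
    matchLabels:
      app: details            # hypothetical workload
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend         # hypothetical caller
    toPorts:
    - ports:
      - port: "9080"
        protocol: TCP
      rules:
        http:                 # the HTTP rule is what pulls in the proxy
        - method: GET
          path: "/details/.*"
```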
Right now, this is a per-node Envoy, but the proxy could also run at a different granularity, for example per namespace or per service account, or you could even go back to per pod if you really wanted that. So far, we offer running it at the granularity of one per node, very similar to how the ingress controller does it. This gives essentially native network performance for a lot of the use cases, and we'll look at those use cases where we don't need a proxy at all; it even improves performance when a proxy is needed.

We're also introducing (this unfortunately didn't make the cut for 1.12, so it will be included in 1.13) a new way of doing mTLS that supports any network protocol. mTLS will no longer be limited to TCP; you can essentially apply it to any network traffic, and again without proxies. This will be another massive gain, because we can keep native network performance without having to terminate connections with a proxy, but still offer mTLS-based authentication. We're also introducing, and we'll look at that, an Envoy CRD: a new CRD that allows you to use raw Envoy configuration for your service mesh needs, in particular for advanced use cases. And then a choice of additional control planes: what we have implemented so far is Istio in the sidecar model, ingress and services in the per-node model, and the Envoy CRD in the per-node model, and we're now completing Gateway API. That will come as part of 1.13 and will probably be available independently before we even release the full 1.13. SPIFFE is also underway; unfortunately it didn't make the feature freeze for 1.12. It will be merged, hopefully in the next couple of months as well, allowing you to use SPIFFE identities, SPIFFE service IDs, in our policies and bringing SPIFFE certificates to the mTLS model. For observability, you can leverage the existing integrations that we have so far, what you're used to: Prometheus, Fluentd, Grafana, Elasticsearch, OpenTelemetry, and so on.

I said we'd look a bit deeper into this sidecar-free mode and what it brings. Obviously, it brings a reduced footprint: you run fewer proxies. Instead of having one per pod, you run one per node. This can massively reduce both the memory and compute resources you require, because you don't need to bootstrap a new proxy every time you start a pod, and you don't need to shut the proxy down when the pod terminates. But it also has performance benefits, and we'll get to that. Before we see the performance impact, this shows what can be done completely in eBPF, so no proxy, no sidecar, nothing, and what still needs a proxy; and when it needs a proxy, that could be the per-node proxy or the sidecar proxy. Things we can do entirely in eBPF: traffic management, obviously anything at layer three and layer four, any load balancing at layer three and layer four, canary rollouts, policy routing (I'll have an example later on), as well as all of the multi-cluster capabilities, obviously network policy, and then, very interestingly, layer seven observability for the protocols HTTP, TLS, DNS, TCP, and UDP. This data can be exported as OpenTelemetry traces and metrics, as Prometheus metrics, or as JSON tracing data as well, so you can feed it into existing dashboards or existing tracing utilities that you have.
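As a hedged sketch of wiring that up for Prometheus and Grafana, Hubble's L7 metrics can be enabled via Helm values along these lines (verify the exact keys against the docs for your version):

```yaml
# Sketch of Helm values enabling Hubble with DNS and HTTP metrics
hubble:
  enabled: true
  metrics:
    enabled:
    - dns            # DNS visibility
    - http           # HTTP requests/responses, return codes, latency
  relay:
    enabled: true    # cluster-wide Hubble API
  ui:
    enabled: true    # optional service-map UI
```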
When you configure things like traffic management, meaning layer seven load balancing, so path-based routing, header-based routing, SNI or hostname-based routing, as well as ingress, this is when we still use a proxy, and it will inject Envoy. The same goes for all the resilience features, retries, and layer seven rate limiting: you can limit the bandwidth of a pod without a proxy using the bandwidth manager, but if you want to limit the number of HTTP requests per second, you need a proxy. The same applies to TLS termination and origination. Important here: this is not mTLS. This is the case where you actually want to terminate TLS on behalf of the app, or originate the TLS connection on behalf of the app. This is where you also still need a proxy.

Is mTLS available for the Cilium community edition? We have not decided this yet; we'll get to that as we release it. So, on mTLS: if you want to learn more about this new mTLS model that essentially decouples the payload, the data path, from the actual authentication piece, there is a detailed blog post linked here. As we share these slides, you can click on this link and get the details. The great benefit will be that you get full mTLS for the authentication piece, using integrated SPIFFE or cert-manager, potentially even something like Istio or Vault in the future, while supporting any network protocol and not having to terminate those connections. That means traffic will flow on the network like it did before: same performance, same latency, no additional overhead. You can still optionally secure and encrypt it with IPsec or WireGuard, and then gain the authentication piece, decoupled from this. For more details refer to the blog, and reach out on Slack if you have more questions.

So, I promised a couple of benchmarks that show the impact. The left side is a case where we can do something entirely in eBPF: HTTP visibility or tracing, so parsing HTTP and emitting traces. What is the HTTP request, what is the HTTP response, what is the HTTP return code, what is the latency between request and response, and so on. Blue is no visibility enabled, so no parsing. Red is with parsing enabled: just a tiny bit of added latency when we do this HTTP parsing in eBPF. This is for HTTP/1 and HTTP/2. Yellow is the sidecar; this added latency is almost the same regardless of the specific sidecar, as they often use the same HTTP processing library, and often the overhead comes from injecting the proxy rather than the proxy itself. On the right side, we see the difference in performance that can be achieved when you do need a proxy but you're not running sidecars. Again, blue is no overhead, no HTTP authorization. Red is Cilium Envoy with the Cilium Envoy filter, and green is the Istio filter, which has a couple of additional capabilities compared to Cilium's Envoy filter but is also more complex. In the green case, the Istio case, we're running two sidecar proxies; in the red case, we're running one per-node proxy, because to gain HTTP authorization we don't need two sidecars. Again, we can do mTLS without proxies, so we actually don't need two proxies just to gain HTTP visibility.

Last but not least, the Cilium Envoy config, or Envoy CRD. This is a new CRD that allows you to bring essentially raw Envoy configuration: it could be listener configuration, retries, termination, even tunneling TCP and HTTP, or other very advanced Envoy functionality that is available. You configure that directly in the CRD and can redirect any connection you want into that Envoy and apply that configuration. So if you, for example, have an existing Envoy-based service mesh, and maybe you have written your own control plane, you can actually port that over and potentially use our data path while you provide the Envoy configuration; or if you have been using Envoy with manual configuration, you can now automate that via the CRD. This is a way for advanced users to make full use of Envoy's capabilities in Cilium. Some of the higher-level control plane integrations, like Ingress and Gateway API, actually just map to this Envoy CRD, so it is the lower-level construct that the higher-level service mesh control plane integrations leverage as their implementation.
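Here is a heavily hedged sketch of what such a CiliumEnvoyConfig can look like; the exact schema is best taken from the docs, but the idea is that spec.services selects the traffic to redirect and spec.resources carries raw Envoy configuration:

```yaml
# Sketch: redirect a service's traffic into a custom Envoy listener
apiVersion: cilium.io/v2alpha1
kind: CiliumEnvoyConfig
metadata:
  name: custom-listener
spec:
  services:
  - name: echo                # hypothetical service to intercept
    namespace: default
  resources:
  - "@type": type.googleapis.com/envoy.config.listener.v3.Listener
    name: custom-listener
    # ... raw Envoy listener and filter-chain configuration goes here ...
```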
Last but not least here on the service mesh side: Hubble integration with OpenTelemetry. We have massively extended this in 1.12, so you can use both the network visibility and the service mesh visibility at layer seven, export metrics and traces in OpenTelemetry, and feed that, for example, into a UI or a dashboard like this one that can show spans.

I see one more question here: can you explain why Istio needs two proxies, as you referred to in your performance comparison slide? Yes. In the Istio case, what Istio does, and also what Linkerd does, is start one proxy for every single pod you start, and force all the TCP traffic of that pod through it, in and out. You can limit that to certain ports, but by default you redirect all the traffic, both traffic leaving the pod and traffic entering it. So in a service mesh, you essentially go through two proxies, unless you find a way to somehow disable one of those two sidecars. As soon as you run mTLS in the sidecar model, you need two proxies, because the TLS connection is between the two sidecars: the apps themselves cannot or will not talk TLS. You need the originating sidecar proxy to initiate the TLS connection, and you need the receiving sidecar proxy to terminate TLS and perform mTLS, in front of the actual app that receives the traffic. There is a Cilium service mesh sidecar-free blog post on isovalent.com; I will also list that in the chat, I think Cornelia can take care of that. It goes into the details of how this works, how a sidecar operates, and how we can remove sidecars in certain situations.

Switching gears a bit and talking about cluster mesh. Cluster mesh is a capability of Cilium; it's not service mesh, it's not sidecar-related, and it has actually existed for several years now. It's the ability to connect multiple Kubernetes clusters together, purely at the network level. It essentially enables you to connect them, enforce network policies across clusters, gain visibility across clusters, and also do service discovery or global load balancing across multiple Kubernetes clusters as well.
An exciting new feature in 1.12 is cluster mesh service affinity. You can take your existing Kubernetes service and keep the existing annotation, io.cilium/global-service; this is the annotation that marks a Kubernetes service as global, meaning it should be load balanced across multiple clusters. You can now also add a service affinity and say: please prefer a local backend. What this does is that, if there are backends available in both the local and the remote cluster, it will always prefer the local backend, the local instance, until no local instance is available, and only then go to the remote instances. You can obviously also turn this around and say: please prefer remote.

To illustrate: this is how it looked before, what you have been able to do so far. You can define, let's say, a backend service backed by pods in both clusters, and a frontend talking to this backend service would balance equally across them; you could not define a particular affinity. What you can now do is set the affinity to local, and then it will stay local to that cluster whenever possible. But as soon as the backends in that local cluster die or become unhealthy for some reason, it will fail over and actually go to the backends in the remote cluster. This is a great way to do HA, to make services highly available with a backup cluster: you can stay local, benefit from the local latency, and avoid cross-region or cross-AZ traffic, which is expensive, but have the opportunity to fail over into another cluster that may be running in a different region. Likewise, if we set it to remote, it will always prefer remote first and only use the local instances when required. This can be useful if you, for example, want to log to a centralized logging service but have a backup available in the local cluster: if for some reason the remote cluster is not available, you fail over and actually log to a local logging instance.
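Concretely, the two annotations together look roughly like this (a sketch; the service itself is hypothetical):

```yaml
# Global service that prefers local backends, failing over to remote
apiVersion: v1
kind: Service
metadata:
  name: backend
  annotations:
    io.cilium/global-service: "true"     # load balance across the cluster mesh
    io.cilium/service-affinity: "local"  # new in 1.12: local | remote | none
spec:
  selector:
    app: backend
  ports:
  - port: 8080
```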
Another exciting feature around cluster mesh is the ability to run essentially minimized or miniature clusters and connect them together. You no longer need to bring the full cluster mesh control plane into every cluster; you can have small edge clusters. This is particularly useful for external workloads or edge use cases where you maybe only want to run a couple of pods: these edge or remote clusters can leverage the control plane of a centralized cluster, or of several centralized clusters. This allows you to run cluster mesh, in particular for edge use cases, minimizing the footprint at the edge or in the remote clusters, and centralizing more of the control plane infrastructure if you want. For more details on this, the release blog has plenty of information.

External workloads: we had massive improvements around the integration of external workloads. External workloads are non-Kubernetes workloads. You can integrate them into your Cilium cluster or Cilium mesh, so you don't need to containerize everything or run everything as part of Kubernetes: you can run Cilium on a virtual machine or on a bare-metal machine and integrate the workloads on that machine directly with a Kubernetes cluster.

What have we improved in 1.12? First of all, the egress gateway was promoted to stable. The egress gateway is the ability to take pods, which have constantly changing IP addresses per Kubernetes IPAM, and bind them to stable, fixed, or static IPs, which means you can use those static IPs and allow them in traditional firewalls. This lets you essentially add network policies in your existing traditional firewall, which does not understand Kubernetes, and allow only certain pods instead of the entire IP range of a cluster, because you can now, for example for pods with a certain label or for an entire namespace, map all of that traffic to a specific IP or a set of IPs.

We have added NAT46/NAT64 support to the load balancer. Very exciting: you can now translate between IPv4 and IPv6 at the service level. You can have an IPv6-only pod, or even an entire cluster running only IPv6, and then expose those pods with an IPv4 service IP; at the service IP level, at the load balancer level, Cilium will automatically translate from IPv4 to IPv6. This is what the feature was developed for: you can have a cluster with hundreds of thousands of pods, where IPv6 is incredibly useful because you have a virtually unlimited amount of IP addresses, but still expose those pods with an IPv4 address. You don't need to give the pods themselves an IPv4 address; you can do that at the service level, where you need far fewer IP addresses. Very exciting; this is supported for the standalone load balancer.

BGP enhancements: BGP has been supported in Cilium for a couple of releases. In 1.12, we have added full IPv6 support, and we have also changed the underlying mechanics of the BGP implementation to support additional BGP control planes. This means we no longer depend only on the MetalLB implementation: we can actually go from MetalLB to GoBGP, which has additional functionality, including IPv6. With this move, we have also become more pluggable, more modular, so we can support additional BGP implementations if required.

VTEP integration: a great contribution that lets us integrate Cilium with existing VXLAN tunnel endpoints, potentially running in a traditional data center. If you have existing VXLAN tunnel endpoints, whether these are routers, advanced switches, or other network appliances, you can now terminate or create a VXLAN connection from a pod directly to a VTEP.

Security. I think most of you have probably not missed this: we have released Tetragon. Tetragon is essentially our entry, Cilium's entry, into runtime security, from both an observability and an enforcement perspective; we're bringing our knowledge of eBPF to the runtime security world. Initially this was part of our enterprise distribution; we have open sourced large parts, vast parts, of Tetragon and contributed it to the CNCF. We'll now look a little bit at Tetragon. At a high level, very similar to Cilium, you have a user space agent, the Tetragon agent, which runs as a DaemonSet, and a kernel portion that extracts the visibility and does the enforcement. On the observability side, we can look into a vast range of things with very deep visibility into the system: network packets, network protocols, file systems and files accessed, namespace boundaries, privilege escalations, the TCP/IP stack, system calls being executed, process execution, forks, new sub-processes being introduced, and so on, as well as even function call tracing on the user space side. All of this is collected based on so-called tracing policies, which are CRDs.
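To give a flavor, here is a hedged sketch of a tracing policy, adapted from the Tetragon examples (a kprobe on the kernel's fd_install function to observe files being opened); check the Tetragon docs for the exact schema:

```yaml
# Sketch: trace fd_install to observe which files processes open
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: fd-install
spec:
  kprobes:
  - call: "fd_install"   # kernel function invoked when a file descriptor is installed
    syscall: false
    args:
    - index: 0
      type: "int"        # the fd number
    - index: 1
      type: "file"       # the file being opened
```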
The resulting events are then exported either as Prometheus metrics or OpenTelemetry metrics, or via JSON or Fluentd into, for example, a SIEM. So you can have logs, traces, and metrics for all of this security-relevant observability data. It's obviously very deep visibility. It's completely transparent, because it's using eBPF, so no app changes are needed; the applications don't even notice when they are being observed. It's low overhead thanks to eBPF, and it brings all of the existing integrations we had so far on the observability side: if you are already running Grafana and Prometheus for your Cilium deployment, you can neatly integrate Tetragon observability as well.

Tetragon also offers enforcement. Here, the big difference to existing solutions like Falco is that the enforcement happens in the kernel itself. The rule engine, what should be allowed, what should not be allowed, and how to react when certain things are observed, is all in the kernel. Other systems like Falco export the visibility to user space and run their filter engine in user space, which means they have to react asynchronously from user space. Tetragon essentially enforces and reacts almost in real time in the kernel and can thus prevent a lot more attacks instead of merely reacting to them. Integrations: right now there is the Kubernetes CRD and obviously JSON export. Tetragon is not Kubernetes-specific; it is aware of Kubernetes, but you can run Tetragon outside of Kubernetes as well if you want. And we're working on an Open Policy Agent integration, as well as tooling that will allow converting existing Falco and pod security policies into Tetragon, or simply enforcing pod security policies with Tetragon. On the left is what is available in Tetragon OSS, so visibility and enforcement; on the right side are the extensions available in the enterprise distribution, Isovalent Cilium Enterprise.

We have also improved Cilium's own security model in 1.12, actually massive improvements. We've worked with several external contributors who were very interested in improving the security posture of Cilium itself. This means we actually removed a lot of privileges that Cilium previously required. The most notable one: Cilium can now be run unprivileged, so you no longer need to run it as a privileged container. It still needs the CAP_NET_ADMIN capability, and, if available, CAP_BPF, but that is obviously a much smaller surface than a fully privileged container. We've also massively reduced the number of Kubernetes privileges we need in terms of objects we modify or even have read access to. For a variety of objects, we have been able to either remove modifications completely or move them from the Cilium agent, which runs per node, to the Cilium operator, where you have just a few instances; it's a Kubernetes Deployment, so you run one, two, or three replicas with leader election. So you can actually run those on particular nodes where you may have no untrusted users, for example, and lock down Cilium further from that perspective. We will continue to improve on this side based on what we learn and what we hear back; as Cilium grows in terms of the number of deployments, we obviously have more and more people looking into this as well. Now switching over to the networking side.
So, BBR for pods. BBR is a very exciting, relatively new congestion control algorithm for TCP. It was primarily developed by Google, and it essentially allows massively improving latency on a lossy network like the internet. So if you are exposing Kubernetes services publicly on the internet, which I assume many of you are, you can now use BBR and actually improve the latency of those services massively. One of our engineers gave an extensive talk about BBR at the last QCon, including a very impressive demo where a video was being streamed over the Wi-Fi at QCon, and it was lossy, of course; it was Wi-Fi. We then enabled BBR, and the quality of the video improved significantly, because we were able to reduce packet loss and improve latency for the video stream.

I saw that I missed one question on Tetragon. Let's do this: does Tetragon have an impact on application performance? Yes, it does. It will differ greatly based on the level of policy you implement. If we go back to here: if you want to log every single system call being invoked, every single write system call, every single open system call, and so on, then writing all of those logs, even to a local JSON file, will consume CPU cycles. In fact, writing to file and doing the JSON encoding will actually be the biggest portion of the overall overhead. So the first factor is the amount of observability data you want, the number of log lines per second. And metrics are more efficient than logs: if you can accumulate metrics in eBPF, that's a lot more efficient than having to emit a log line 10,000 times per second. So it will really depend on the level of observability you want, the granularity, and essentially how many events per second this results in. The great news is that Tetragon allows you to export a lot of high-level signals, such as: log every time a pod changes privileges, for example becomes privileged or gains capabilities, for example when somebody invokes sudo, or when a process changes namespace boundaries, for example when a process enters or leaves a container namespace. These events are high-level, so you won't have many of them, but they're very relevant. You can monitor at this level, and there the overhead is absolutely minimal. But as soon as you start exporting a lot of data, that's when the overhead comes in. I hope that was a good answer to show that the impact can be anywhere from 1% to 20%, really depending on what type of observability you want to get out of it.

So, going back to the networking piece: we talked about BBR, and the slide deck here links to a video with a full-blown demo of this feature, as well as the QCon recording. We've also promoted the bandwidth manager to stable. The bandwidth manager is able to control the network resources of a pod: you can limit the traffic of a pod, for example, to five megabits per second or to a certain number of packets per second. So you can actually do resource control, very similar to how you do it with CPU and memory. This supports the standard annotations available for Kubernetes deployments and pods, and it can also automatically configure and optimize the TCP congestion control algorithm: Cilium will automatically enable a variety of kernel-level TCP optimizations that improve network performance for your particular pods by understanding what the pods require.
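In practice this looks roughly as follows: the bandwidth manager (and optionally BBR) is enabled via Helm, and the limits themselves are the standard Kubernetes annotations. A hedged sketch with a hypothetical workload:

```yaml
# Pod with an egress bandwidth cap enforced by Cilium's bandwidth manager.
# Assumes Helm values along the lines of:
#   bandwidthManager:
#     enabled: true
#     bbr: true      # opt-in BBR congestion control for pods
apiVersion: v1
kind: Pod
metadata:
  name: limited-pod
  annotations:
    kubernetes.io/egress-bandwidth: "5M"  # standard Kubernetes bandwidth annotation
spec:
  containers:
  - name: app
    image: nginx     # hypothetical workload
```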
Dynamic allocation of pod CIDRs. This is exciting: instead of doing pod CIDR IPAM with one big block per node, we can now hand out multiple smaller blocks, which means we can, for example, assign a /26 or /28 or /30, and nodes, as they run out of IPs, will go grab a new block. This is the middle ground between, say, a /24 pod CIDR per node, where you will quickly run out of your total IP space, and individual /32 IPs, which are very expensive because the control plane needs to do a lot of work. So dynamic allocation of pod CIDRs is the middle ground. This is in particular great for the BGP integration, because you need to announce far fewer prefixes on the BGP network.

Quarantining service backends. With Kubernetes services, you can now quarantine a particular backend, which means this backend will stop receiving new connections. This lets you gracefully shut down pods. What most people do today is just scale down the deployment, which cuts off connections to that pod: if this is a publicly exposed service, whatever customers or clients are accessing the backend being scaled down will simply have their connections cut off as the pod turns off. With quarantining, you stop redirecting new connections to a backend, and as soon as no existing connections remain, you can safely shut down or scale down the pod.

Improved multi-homing for the load balancer: we are adding the ability for the standalone load balancer to support multi-homing. This was not really optimized so far for multi-homed nodes; we've now added the capabilities for this. If you have nodes in a multi-homed architecture, so they're part of multiple networks and have multiple network interfaces, you can now define exactly how the load balancer should behave in this environment: for example, should client connections coming in on one interface pick backends on another interface, and so on. This is very useful in particular if you're on-premises or if you have more traditional network use cases.

AWS ENI prefix delegation: we can now finally really scale with our AWS ENI integration, as we now support the new prefix delegation mode of AWS ENIs. This means you can run a lot more pods per node and have a more efficient IPAM mechanism if you're running on AWS with the ENI IPAM mode enabled. Also, we have a new EC2 instance tag filter for IPAM with additional capabilities; details are on the slide and in the release blog. Nicely, this feature was contributed by an end user from the New York Times; we love seeing new features added by end users.

And I see another question from Jerry: how to accomplish a zero-downtime upgrade from 1.11 to 1.12, either via the Cilium CLI or the Helm chart? Not all of the Cilium agent pods or the Cilium operator will come up properly. That sounds like a bug; approach us on Slack, and we're also happy to hook you up with one of our solution architects. Cilium supports rolling upgrades, rolling pod upgrades, and you should have near-zero or zero-impact upgrades; there are a few components that will be impacted, such as proxies, or, in the open source version, the DNS proxy. But in general you should not have drops, and all the Cilium pods should of course come up when you upgrade. So reach out to us.
And I see we also have a bug report: one of the getting started docs does not exist. We will get that fixed; you may need to change the URL from latest to stable, as that may actually just be a stale URL we need to fix. So whoever wrote that question: if you change that link, or if you just go to docs.cilium.io and click through to service mesh, you should find the right page.

Can a CVE be a trigger for quarantining a pod? Yes, you can use whatever trigger you want. The typical use case is scaling down, but of course, if you believe a pod has been compromised, you can quarantine it so it doesn't receive any new connections; you can keep the existing connections alive, including the connection of the potential attacker, and maybe start investigating what is going on.

All right, user experience. One ask we have received a lot: the Cilium CLI is great at automatically deriving the ideal configuration of Cilium for a particular environment. It will automatically detect that your cluster is running in AWS, or that it's an AKS cluster, or GKE, or Rancher, and generate the right, the best possible, configuration for running Cilium. With the latest 1.12 release, this feature can now emit Helm values. So instead of auto-detecting the values and simply installing right away with the CLI, you can auto-detect the values, emit them as the ideal Helm values for a new cluster, and then install Cilium using your own Helm pipeline or your existing Helm workflow.

Azure Bring Your Own CNI: Azure added a new feature called Bring Your Own CNI, which essentially allows you to create AKS clusters on Azure without any CNI installed. This massively simplifies the installation of Cilium. We have a before-and-after screenshot here: before, you had to do a lot of different steps to install on AKS, including creating node groups and tainting nodes to make sure Cilium is installed on a node before any other pod gets scheduled, which was very complicated. With Bring Your Own CNI, this becomes very trivial: you create a new AKS cluster in Bring Your Own CNI mode and simply install Cilium using the default installation path. No special workarounds required anymore. Unfortunately, Bring Your Own CNI is only available for new AKS clusters at this point. I know several of you have already asked Microsoft to support this on existing clusters as well; hopefully we'll have news on that later this year.
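As a hedged sketch: once the AKS cluster is created with no CNI, a default-path install is driven by Helm values along these lines (verify the exact keys against the AKS installation docs):

```yaml
# Sketch of Helm values for Cilium on an AKS BYOCNI cluster
aksbyocni:
  enabled: true      # configure Cilium for AKS Bring Your Own CNI mode
nodeinit:
  enabled: true      # prepare AKS nodes for Cilium
```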
All right, a couple of things on the Isovalent Cilium Enterprise side for 1.12. We have released an extension, a new version, of Hubble Timescape. Timescape is the time series database we offer where you can store all the observability data: of Hubble, of service mesh, of the networking side, of the Tetragon runtime side, and so on. You feed it into a time series database so you can store that data persistently over time and then query it. Hubble Timescape is based on ClickHouse, so it's a modern analytical database, and you can use the existing Hubble CLI, the Hubble UI, or the ClickHouse query API directly to query the data: to generate metrics, to look for particular events or logs, or even just to store, for example, your network metrics, network logs, runtime logs, or security events for, let's say, the last three months, with the ability to go back in time, recalculate metrics, or find particular events. Great for security use cases, and also great for network troubleshooting and network operations use cases.

OpenShift certification updates: with 1.12, of course, we have renewed all the certifications for both enterprise and OSS, so we remain completely certified on OpenShift, as a certified operator as well as a certified CNI. We have also added offline installation support in 1.12; that's a new addition with the latest release.

Network visibility: this is new functionality that comes via Tetragon Enterprise. It's the ability to look into a lot of very low-level network metrics and network data completely passively. Passively means we're not actually parsing the network traffic directly, so we're not really in the data path and add no additional latency; instead we're essentially observing what the TCP/IP or Linux networking stack does itself and can extract, for example, round-trip times of connections, understand TLS handshakes, or measure the amount of traffic by looking at sockets and socket counters instead of trying to parse and count network bytes. This gives you the ability to monitor your network even in a super-low-latency environment. This particular feature was implemented for environments such as financial transactions, with very low, fixed latency guarantees that need to be met. So it's a feature that lets you monitor your network and gain visibility into it at extremely low overhead, without introducing additional latency in the actual workload traffic.

FQDN proxy HA: this is now fully stable and has been released as part of the enterprise version. This concerns DNS policies, the ability to define network policies based on DNS names with wildcards, for example "pod X can talk to *.twitter.com"; Cilium will automatically allow only connections for which DNS resolution has happened for this allowed pattern, and then only to the IPs returned by the DNS. This feature is now available in a highly available fashion, which means that if you upgrade Cilium, if you restart Cilium, or if Cilium is down for some reason, the FQDN or DNS proxy is separate and highly available: pods can continue to resolve DNS even as Cilium is restarted, and connections are not impacted.
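The underlying policy type is the familiar toFQDNs rule; here is a hedged sketch, with hypothetical labels, of the "*.twitter.com" example just mentioned:

```yaml
# Sketch: pods labeled app=podx may resolve and reach *.twitter.com only
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-twitter
spec:
  endpointSelector:
    matchLabels:
      app: podx                            # hypothetical pod label
  egress:
  - toEndpoints:                           # allow DNS lookups via kube-dns
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: UDP
      rules:
        dns:
        - matchPattern: "*.twitter.com"    # the DNS proxy inspects and records lookups
  - toFQDNs:
    - matchPattern: "*.twitter.com"        # only IPs resolved for this pattern are allowed
```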
This was the overview of 1.12. We now have more time for Q&A, and I will go through the list again; if you have more questions, feel free to start listing them in the Q&A. In the meantime, if you want to learn more, we have a couple of options available. First of all, if you want to see a demo of anything we have just talked about, you can schedule a demo session with one of our Isovalent experts, one of our solution architects, or even one of the feature owners. Cornelia just posted a link in the chat; simply click on it if you're interested, pick a time slot that works for you, and we will schedule a demo session during that time with you.

We also have labs, hands-on tutorials. They're Instruqt-based, which is amazing because it gives you, within a couple of minutes, a full Cilium environment where you can play around with new features, for example the service mesh piece, without having to install Cilium yourself. So you can get hands-on and actually walk through some of the Cilium features and try them out without running minikube or kind on your own laptop, and without bringing up a GKE cluster; a great way to get hands-on very quickly, within minutes. And last but not least, we have a webinar series around eBPF. If you want to learn more about eBPF and what is going on around this exciting technology, maybe you have seen Liz Rice talk about eBPF and you really want to learn what is under the hood of Cilium, we have an eBPF webinar series, "How the Hive Came to Be", where we specifically deep dive on the eBPF side. Cornelia has just posted the link there as well. If you're interested in any of these, feel free to click on the links and sign up.

Then I saw I missed one question from Jim: "Hello, thanks for this awesome webinar. For some of us who use managed Kubernetes services like GKE, the Cilium capabilities we get via Dataplane V2 seem limited and unconfigurable, in such a way that we can't utilize all the capabilities we have seen in this webinar, especially since GKE doesn't allow bring your own CNI (it uses, for example, legacy-mode host routing). I read somewhere in the docs that the only way to run Cilium in eBPF mode on Google is by bootstrapping your own cluster via kubeadm, which we may not want to do. How do we navigate this challenge?" So, what we can offer: Isovalent does have a partnership with Google. If you want to run Isovalent Cilium Enterprise on GKE, we can help you there. Unfortunately, Google does not make it easy to run Cilium OSS on GKE itself right now, so I cannot give you a better answer: if you want the full feature set of Cilium, your best option right now is to come talk to us, and we can bring you onto the Isovalent Cilium version on GKE. I know that's not the perfect answer, but it's the best answer we can provide right now.

Let me double-check whether I got all the questions. I saw that, regarding the service mesh, there was one question: can you explain option two a little more? So let me talk a little bit about that. That's the Istio integration, which has existed for several years. In this mode, you install Cilium as you normally do, no change required; then you enable the Istio integration; and then you install Istio as you would normally do. There are instructions in the docs. What this results in is that it changes the specific Envoy version running as part of the sidecar, bringing Cilium's Envoy filter into Istio as well. This means you can now run Cilium and Istio side by side, with Istio running a sidecar proxy. In this mode, whenever you define a Cilium layer seven policy or layer seven observability, and the policy contains layer seven aspects, those aspects will be enforced via the Istio sidecar.
So Cilium will not start its own Envoy proxy; it will use Istio's sidecar proxy. This mode also optimizes the data path, and I'll quickly go back to the slide because it will make this a lot clearer. If we go all the way back to the complicated injection strategy of Istio, or of service meshes with sidecars in general, there we go: this initial hook, the blue hook going all the way down through the network stack and into the service mesh sidecar proxy. With the Cilium Istio integration, we can have this blue line connect at the socket level right away; we essentially short-circuit the two sockets directly together. This actually works for other node-local traffic as well; it's not sidecar or service mesh specific: any connection that remains on the node can be optimized this way. It means there is no real network connection, no network payload anymore, for the path from the app to the sidecar, which also means nothing is left unencrypted. Because if you run mTLS in sidecar mode, this initial blue hook is actually unencrypted: the sidecar proxy is what starts TLS, which means only the connection leaving the sidecar is TLS-encrypted. With the Istio integration, because that blue hook gets shorter and there is no actual TCP/IP involved, there are no unencrypted network packets on even a virtual wire. For more information, there is an Istio guide in the docs that you can follow to get started with this; otherwise, reach out to us on Slack and we're happy to help.

Going back to the Q&A, and we'll check one more time whether any more questions came in. All right, we're right there; lots of slides, here we go. Upgrades, we covered. CVE, we covered. Yes, I think that was all the questions. So again, thank you very much for joining this webinar; I hope this was useful. Please give us feedback. Again, if you want to learn more, feel free to check out one of the links here. Of course, also feel free to join Slack if you have not done so already: simply go to cilium.io, click on the Slack link, and you can get in contact with the team, with the devs, with me, and so on; reach out with follow-up questions. And with that, I would like to thank everybody for participating, and I'm looking forward to the next release webinar, 1.13. Thanks a lot, everybody.