I'm Rahul Jadhav, engineering lead at AccuKnox. I'll be talking about our journey of integrating SPIFFE identity into Cilium. Before I go into the details of what that integration looks like, I need to give a brief idea of what Cilium is. Cilium is a Kubernetes-native CNI which leverages eBPF for networking, observability, and security. eBPF allows Cilium to dynamically insert control logic into the Linux kernel at runtime. Using this, Cilium handles network policy enforcement in a much more performant and scalable way. It does away with iptables and netfilter, which can severely limit the scale of operations once the number of nodes or pods grows beyond a certain point: the number of iptables rules can severely limit the packet processing your network can handle.

Apart from that, Cilium has a unique concept of using identities for policy authorization; I'll talk about that in the next slide. With all this, Cilium allows you to do advanced network policy enforcement at L3, L4, and L7, and it gives you detailed flow-level visibility, which you can use for advanced observability.

I need to talk about Cilium's identity mechanism, because that is what I'll be comparing against what SPIFFE identity allows you to do. Cilium today has a notion of identity in which a set of Kubernetes labels is mapped to a numeric identity. That numeric identity is then used in the eBPF data plane to enforce L3/L4 authorization, and this can be done on a per-packet basis. Since the authorization is done per packet, you cannot use Kubernetes labels directly as identities in kernel space: it would carry significant performance overhead, and it would not be feasible to manage those labels in the kernel.
So for that, Cilium converts the label set into a numeric identity, which in turn is used for authorization in kernel space. With this, Cilium does away with iptables and netfilter. Obviously, once you have an intermediate mapping to this identity, it needs to be synchronized, and that synchronization is achieved using a KV store. Another advantage of using such a numeric identity in place of IP addresses is, first of all, that it completely decouples your IP addressing from the identity solution. And when the number of pods or nodes increases, the number of rules used for policy enforcement does not increase exponentially. With iptables-based rules, adding a single node or a few pods to the mix can make the number of rules explode; identity-based rules do not have this problem.

Now I'll compare, at a high level, the components of an identity solution and how Cilium identity and SPIFFE identity fundamentally differ, and then I'll go into the integration details and integration challenges.

Identity attributes and attestation: that is the first component of any identity solution. In the case of Cilium, Kubernetes labels are used, and there is no explicit attestation procedure; the Kubernetes control plane takes care of label management, and Cilium simply leverages that. In the case of SPIFFE, there is a Kubernetes plugin which allows it to attest Kubernetes labels and other Kubernetes attributes, and that attestation logic can in turn be extended to handle other attributes, such as container attributes — the container image name, for example — or the location of the cluster. There is an explicit attestation step to verify these attributes.

Then there's identity mapping. Cilium maps the label set to a numeric identity, an unsigned integer.
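To make the label-to-identity mapping concrete, here is a toy sketch in Python. This is not Cilium's actual allocator — the class name, starting ID, and data structures are illustrative — but it shows the core idea: every unique label set gets one stable numeric identity, so policy state scales with the number of label sets rather than the number of pods or IPs.

```python
# Toy sketch (NOT Cilium's real allocator): map a set of Kubernetes
# labels to a stable numeric identity, which is what the eBPF datapath
# would then use for per-packet authorization.

class IdentityAllocator:
    """Allocates one numeric identity per unique label set."""

    def __init__(self, first_id=256):
        self._next_id = first_id      # low IDs reserved, as an assumption
        self._by_labels = {}          # frozenset(label items) -> numeric id

    def lookup_or_allocate(self, labels):
        key = frozenset(labels.items())
        if key not in self._by_labels:
            self._by_labels[key] = self._next_id
            self._next_id += 1
        return self._by_labels[key]

alloc = IdentityAllocator()
id_a  = alloc.lookup_or_allocate({"app": "frontend", "env": "prod"})
id_b  = alloc.lookup_or_allocate({"app": "backend",  "env": "prod"})
id_a2 = alloc.lookup_or_allocate({"app": "frontend", "env": "prod"})
```

Note that two pods with identical labels share one identity (`id_a == id_a2`), which is exactly why adding pods does not add policy rules.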
And this mapping is kept in the KV store, which implies that there is synchronization involved. In the case of SPIFFE, the SPIFFE ID is essentially a SPIFFE URI, which is carried as part of X.509 certificates or JWTs. Because of the attestation logic, SPIFFE involves a separate control plane; in the case of Cilium there is no separate control plane — the Kubernetes control plane itself serves as the control plane for Cilium identity.

When it comes to carrying the identity across peers, Cilium achieves it with the ipcache, a mapping table from pod IP to identity. The ipcache is what peers use to identify the identity of the remote entity. In the case of SPIFFE, an mTLS handshake is used: at the end of the handshake, both peers know each other's identity, which is carried in the certificates.

Identity derivatives: does the identity allocation result in any derivatives, such as tokens or credentials, which can in turn be used for other purposes like authentication and encryption? In the case of Cilium, there are no such derivatives; the Cilium identity mechanism can be used only for Cilium policy authorization. In the case of SPIFFE, X.509 certificates are provisioned, and these certificates or tokens can in turn be used for other purposes — for example, as credentials for IPsec or WireGuard tunneling.

What was our need for SPIFFE? Our primary use case was that we didn't want our solution to be bound to Kubernetes only. We wanted a consistent identity that could span ecosystems, not just Kubernetes workloads — in the example given below, IoT edge, 5G, bare metal, and virtual machines — and we also wanted to federate our identity solution with third-party service providers. SPIFFE gives us the flexibility to consider all of these scenarios in the future.
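The ipcache idea above can be sketched in a few lines. This is a conceptual toy, not Cilium's eBPF map layout — IPs, identity numbers, and the allow-list shape are all made up — but it shows how a receiving node can derive the peer's identity from the packet's source IP alone and make a per-packet verdict.

```python
# Toy sketch of the ipcache concept: a node-local map from pod IP to
# numeric identity, consulted on the receive path for per-packet policy.

ipcache = {
    "10.0.1.12": 1001,   # identity of the frontend pods (illustrative)
    "10.0.2.7":  1002,   # identity of the backend pods (illustrative)
}

# Allow-list keyed by (source identity, dest identity, dest port).
allowed = {(1001, 1002, 8080)}

def policy_verdict(src_ip, dst_identity, dst_port):
    """Decide per packet, the way the eBPF datapath does conceptually."""
    src_identity = ipcache.get(src_ip)
    if src_identity is None:
        return "DROP"  # unknown peer: no identity, no access
    if (src_identity, dst_identity, dst_port) in allowed:
        return "ALLOW"
    return "DROP"
```

Contrast this with the SPIFFE side: there the peer identity arrives in-band via the mTLS handshake, so no shared IP-to-identity table is needed.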
The other point is that we wanted a common identity solution that we could leverage across all our policy enforcement engines. We have several of them — network policy enforcement, which is where Cilium comes in, as well as system policy and data policy enforcement engines — and we wanted a single identity base covering all three. There are also advanced use cases, such as the use of TPMs or enclaves for security attestation. These are use cases required by some deployments, and SPIFFE gives you the flexibility to write plugins to do that kind of security attestation.

Now I'll jump into the integration challenges. SPIFFE is already integrated with most of the service meshes out there, for example Istio. All these service meshes have one thing in common: most of them operate in a sidecar model, which means the Envoy proxy is located inside the user pod. In that case, Envoy can attest on behalf of the workload because it is part of the same cgroups. Cilium deploys Envoy in a very different model: it uses a node-singleton model, where there is just a single Envoy proxy on the whole node, and all the user pods' traffic is redirected to it via eBPF logic. This is a significant design difference in how Cilium is deployed, and it had major implications for the SPIFFE integration for us.

So what are the implications of the Envoy node-singleton model used by Cilium? The first is that SPIRE's Kubernetes workload attestation model currently expects that the attestation APIs are called from within the same cgroups as the workload. In the case of Istio, Envoy is part of the same pod as the workload, being a sidecar, so it is able to use the attestation APIs.
But in the case of Cilium, Envoy is no longer co-located within the workload pods, and thus it has no access to their cgroups. This led to the development of the delegated identity APIs. The delegated identity APIs allow a privileged process — I'll talk about what a privileged process means — to fetch an SVID on behalf of workloads outside its own cgroup. In this case, the Cilium agent is the privileged process, and it can request an SVID on behalf of the workloads from the SPIRE agent.

Now, these are high-security-risk APIs, so how do you ensure appropriate access to them? What guardrails does a user of such APIs need to put in place? First, only node-local access is allowed for these APIs, by using a simple Unix domain socket. Second, the caller has to be registered with the SPIRE agent; that is a requirement here. And third, as a user of these APIs, one should ensure that only the privileged process is able to attest for the selectors it uses for attestation. That is something the user of this API should keep in mind.

Given below are the details of how this delegation API is provisioned. In this case, the Cilium agent is marked as an authorized user of the delegation APIs, and you can see that the Cilium agent's entry is a child ID whose parent ID is the SPIRE agent.

Using the SPIFFE ID for L3/L4 authorization: in all the service meshes, the SPIFFE ID is used for L7 authorization. In Cilium, we wanted to use the same SPIFFE ID even for L3/L4 authorization. One design decision we made was that when the Cilium agent retrieves an SVID on behalf of a workload, the Cilium agent will in turn create a Kubernetes label, based on that SVID, on behalf of the workload.
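The guardrails just described can be illustrated with a small sketch. This is hypothetical code, not the SPIRE agent's implementation — the function name, the authorized-delegate set, and the return shape are all assumptions — but it captures the three checks: node-local transport only, an explicitly authorized caller, and the operational requirement that only the privileged process may attest for the selectors it presents.

```python
# Illustrative sketch (hypothetical names; NOT real SPIRE agent code) of
# the guardrails around a delegated identity API.

AUTHORIZED_DELEGATES = {"spiffe://example.org/cilium-agent"}  # assumption

def handle_delegated_fetch(transport, caller_spiffe_id, workload_selectors):
    # Guardrail 1: node-local access only, via a Unix domain socket.
    if transport != "unix":
        raise PermissionError("delegated identity API is node-local only")
    # Guardrail 2: the caller must be registered/authorized as a delegate.
    if caller_spiffe_id not in AUTHORIZED_DELEGATES:
        raise PermissionError("caller is not an authorized delegate")
    # Guardrail 3 is operational, not code: the operator must ensure only
    # the privileged process can attest for these selectors.
    return {"selectors": workload_selectors,
            "svid": "<x509-svid-for-workload>"}  # placeholder payload
```

In the real deployment, the Cilium agent plays the role of the authorized delegate and the SPIRE agent enforces these checks on its side of the socket.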
What this means is that once the SPIFFE attestation and registration are done, you have the SVID as well as a corresponding Kubernetes label for the workload. This Kubernetes label can in turn be used for L3/L4 authorization. Given here is an example which shows that you can use the matchLabels construct to specify the SPIFFE ID and base your L3/L4 authorization on it. This makes the L3/L4 authorization very different from service mesh solutions, which only do L7 authorization.

Another use case for us was to ensure that all non-secure connections are upgraded to secure connections. We are primarily a zero-trust security solution, and we want to ensure that all workload traffic in transit is secure. One of the advantages you get with SPIFFE is a certificate provisioned for the workload, and we could use that certificate for TLS origination and termination. Given here is an example of a policy which could be used for TLS origination and termination. Here the typical x-wing and deathstar example is used: both are insecure applications, meaning they use plain HTTP for communication. The Envoy proxy will transparently upgrade the connection to a secure one and use the SPIFFE certificate for authentication.

Other perks of using SPIFFE: SPIFFE has an integrated certificate management solution, which is a big plus. It integrates with other CA providers, which essentially allows us to integrate with third-party service providers. Another point is that we are a security solutions provider with a multi-tenant SaaS environment. We want hard isolation across tenants, and some of the concepts in SPIRE, such as nested SPIRE, allow us to do that hard isolation of resources. SPIRE also readily integrates with Vault for secret management, which is a big plus for us.
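The SPIFFE-ID-to-label rationale above can be sketched as follows. The label key and the encoding of the SPIFFE URI into a label value are illustrative assumptions, not Cilium's exact scheme; the point is that once the SVID is projected into a label, ordinary matchLabels selection gives you L3/L4 authorization keyed on the SPIFFE identity.

```python
# Hedged sketch: derive a Kubernetes-style label from a workload's SPIFFE
# ID so the same identity can drive L3/L4 policy via matchLabels.
# Label key and value encoding are illustrative, not Cilium's real format.

def spiffe_id_to_label(spiffe_id):
    """e.g. 'spiffe://example.org/ns/default/sa/xwing' -> one label."""
    assert spiffe_id.startswith("spiffe://")
    # Encode the trust domain + path as a single label value.
    return {"spiffe-id": spiffe_id[len("spiffe://"):].replace("/", ".")}

def match_labels(selector, workload_labels):
    """matchLabels semantics: every selector pair must match exactly."""
    return all(workload_labels.get(k) == v for k, v in selector.items())

labels = spiffe_id_to_label("spiffe://example.org/ns/default/sa/xwing")
policy_selector = {"spiffe-id": "example.org.ns.default.sa.xwing"}
```

With this projection in place, a policy selecting on the derived label admits exactly the workloads that were attested to that SPIFFE ID.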
And the developer community is extremely vibrant. The kind of design discussions we had on Slack and on GitHub pull requests were amazing, and it was a lot of fun to interact with this community.

To sum up, SPIFFE does provide a strong identity base, flexible enough for all these scenarios, and there are certain advantages to integrating SPIFFE natively in Cilium. One advantage I've already mentioned is that, apart from L7 authorization, you can also have L3/L4 authorization based on the same SPIFFE ID. The integration we did had no impact on the data path — the eBPF datapath handling in Cilium — and that's a big plus: it means no additional overhead is introduced by this identity solution. SPIRE now supports the delegated identity APIs. Users should be very cautious when making use of these APIs, because they are high-risk: you should make sure the processes which can access them use appropriately scoped selectors. Only the privileged process can request an SVID on behalf of the workloads, and the privileged process needs to be on the same node, but not in the same pod.

What are some of the next to-dos for Cilium? One is to use the same SPIFFE-provisioned certificates for IPsec and WireGuard; that is still not done and is a work in progress. We are also hoping that in the future we can integrate with JWT tokens, apart from the X.509 certs we have integration with today.

Credits: many thanks to all the code contributors and reviewers. We had extremely good reviews from both the SPIRE and Cilium communities, and I'm looking forward to working more closely with these teams to get the pending PRs handled. All the work done in this context is available in open source, including all the design documents.
There is a GitHub repo with Cilium-SPIFFE tutorials, which allows you to deploy the Cilium-SPIFFE integrated images and try out the policy examples I've given in these slides. Thank you, that's all from my side. Any questions?