Great. So, the topic we're presenting today is securing microservice interactions in OpenStack and Kubernetes. It's going to be presented by Yoshio, my co-founder, and me; we started Banyan about two years back. Just one slide about Banyan and I won't bother you with it anymore: we were founded in 2015 in the San Francisco area. We come from VMware, HP Labs, and a PaaS company called MooWeb, and we've done a lot of work in virtualization, network security, and big data. We were incubated at Stanford's StartX, and our product is currently in private beta. If you have questions, I'm happy to answer them after this. Before I go to the outline, a show of hands: how many of you are actively using Kubernetes or containers right now? Okay, great, good to know. About 50% of the audience. So, this talk is going to be a little broader than purely OpenStack and Kubernetes interactions. We're going to talk about what microservices are, why they create a new type of attack surface, how the existing solutions in both OpenStack and Kubernetes try to address it, and why those solutions aren't at the right level, for reasons that will become clear. Then we're going to talk about a new approach that we think is the right way to go about it moving forward, show a simple demo of our prototype, and close with a conclusion and discussion. So, introduction: what are microservices? Microservices is a kind of software architecture that takes big applications and lets you break them down into small, function-specific pieces, each of which can be developed, managed, and deployed independently. The benefit is that each component now specifies an API through which the other components talk to it, so each individual component can by itself be developed and enhanced independently, very quickly.
To formalize the three high-level benefits of this architecture, they fall into the typical people, process, and technology domains that underpin any big organizational shift. On the people side, it allows different teams to build their own single-purpose application components as fast as they can, completely independently; as I said, they just need to specify the APIs through which the other components will interact with them. On the process side, it enables really fast deployments, because people can take a single component, tailor it to their needs, have experts in that area work on it, and release it very quickly. What people have noticed is that what used to take several months to deploy now goes out in days, a matter of hours in many cases, and minutes in some cases we've heard too. Finally, on the technology side, it's also critical because each team, for each component, can choose its own language, development framework, and deployment framework, and really focus on making it really good for that particular application component. With all these great things happening, one caveat to note is that not all applications fit this architecture. Applications that are closely coupled, or applications with tight latency requirements, are classic examples that are not well suited to this kind of architecture. That said, it is gaining a lot of momentum; a lot of companies out there are deploying microservices in some form or other, and OpenStack itself is an example of a microservice architecture at many levels, because it has Neutron and Barbican and Firewall-as-a-Service and so on: different components that are developed independently and fast, and they all interact with each other.
With time, though, we are noticing that microservice architectures are getting more and more complicated. Here we see one application that was broken into multiple pieces, but typically what we are seeing is that these applications are now getting spread across different clouds. In this case you have AWS, something on-prem, and some Azure, and this microservice application also has interaction points with the Internet at not just one place but multiple places. The other thing to note is that these microservice applications don't live in a silo of their own; they are present in the context of other applications in the system. In this case, for example, this microservice is interacting with a SQL database and with a Kafka messaging system, which are by no means microservice architectures themselves. We see this over and over again, so any solution we come up with needs to address not just pure microservices but should go beyond that. So, what is the problem with microservices in terms of security? That's the primary question we were trying to answer. When we had monolithic applications, it was relatively easy to secure them. You had a perimeter, and you basically divided up your security space into AppSec and NetSec components. Just to clarify: AppSec typically refers to application firewalls and similar systems, but we are treating it as a much broader term here that includes API gateways, TLS proxies, and so on, security mechanisms at the application level. Then there are NetSec solutions people deploy on top of that, which are more for segmentation, firewall rules, IPS/IDS systems, and so on. And this was relatively okay for single-tier, three-tier, and even somewhat more than three- or four-tier applications.
For moderately multi-tier applications, it was working okay. But once we start going into the microservices domain, things get much more hairy and complicated. To see what new types of attack surface microservices introduce, look at this earlier picture. The first thing you see is that rapid deployments are happening to each of these components: each component is individually developed and pushed at a rate commensurate with that team's velocity. That makes it very hard for a security product to track what's going on at any given time. The other interesting bit is that before, you had just one application, and it exposed a few APIs. Once you break it down into small pieces, like taking a sphere and breaking it into small balls, the number of entry and exit points suddenly blows up out of proportion. That's what happens with the exposed APIs. And not just that: as I said, these pieces also interact with existing SQL and Kafka and other systems, so the attack surface of even those systems blows up because of this. The other aspect is the communication channels. An interesting part of microservices is that they take function calls, so to speak, and convert them into network traffic and network API calls. So you need to worry not just about data flowing within your application, but about data flowing on the network, and potentially across clouds. That opens up one more attack surface in the whole system. And finally, these apps and the infrastructure are also becoming less trustworthy, because they are developed so fast, and many times they're just pulled off the Internet as open-source components and run without proper vetting.
So it ends up that if one of them goes wrong, it can attack your other applications, and this can spread very quickly. The infrastructure, with people using different types of clouds and so on, becomes less reliable as well. With that, why do we care about this attack surface becoming larger? The major thing is that it puts your sensitive data at risk, and that's the key problem with going to microservices. To put a little more meat on it: say you have an on-prem system with OpenStack and Mesos deployed on premise, and then you have AWS and Azure running Docker Swarm and Kubernetes deployments. Communication is now not just within a cloud; it's across clouds. And look at the communication: here there's an orange service talking to a blue service, and if you actually expand it out, each of these services is potentially spread across multiple hosts, with communication not just across hosts but also within hosts. So now you've entered a world where you have communication all the way from intra-host to cross-cloud, and this is where most organizations we see are moving. Cloud-native and microservice architectures are turning function calls into network traffic, causing this intra-host to cross-cloud communication. These agile container deployments also introduce, in addition to the dynamism we talked about, with new containers and ephemeral infrastructure coming up all the time, privileged components into the system, like orchestration engines. And those can themselves be the source of vulnerabilities: at the recent RSA Conference, I don't know if you saw it, they showed how a Mesos cluster can be compromised by leveraging the fact that some part of the orchestration system wasn't secure enough.
And finally, as we move to public, multi, and hybrid clouds, the other thing that happens is that it gets really hard to have global identities for all the different components in the system. People used to rely on IP addresses and so on for identifying any given service, and as you go to multiple clouds that gets really hard. With this in mind, if you look at the evolution of compute and cloud over the last three decades or so, you see that physical machines turned into virtual machines, and containers are the faster, arguably better, versions of virtual machines. That has led to cloud deployments moving from physical to private clouds, public clouds, and hybrid clouds, and applications moving from monoliths to three-tier and now to microservices and function-based applications. It's interesting how security has evolved in two dimensions here, in silos: NetSec and AppSec have been evolving at their own pace, but they've stayed in silos. Network appliances, overlays, VPCs, and security groups are popular in different environments, and SDN and micro-segmentation are the newer kids on the block, so to speak, trying to do finer-grained network-based isolation. On the AppSec side it's physical appliances, virtual appliances, and more recently SDKs and RASP, which are trying to secure your application. But all of these were fundamentally built pre-microservices, before this dynamic, sprawling infrastructure era. So the question is: what is the next thing that could actually address these new attack vectors opened up by microservice architectures? Yoshio will now go over some existing solutions and talk about the trade-offs.
Thanks. So we looked at this problem and tried to figure out the requirements for a good solution, and this is what we came up with. The first thing is that it actually has to solve the problem we laid out: this sprawling and dynamic attack surface has got to be secured. The solution has to be as dynamic as the infrastructure you're trying to secure, and as distributed as that infrastructure, and it should place minimal trust in the applications and the infrastructure it's running on. The second requirement is that it should work across both traditional and modern environments, because it's just a fact that microservices need to interact across these different environments. As a consequence, the solution needs to be independent of the underlying infrastructure, like the network frameworks you're using, and it should work in polyglot environments: it shouldn't lock you into some set of programming languages or frameworks. And finally, to actually be secure, it has to be easy to use, because if it's unusable, it's not going to be secure. So it needs to express policies at the application level, something humans can easily reason about in their deployments, rather than having to deal with low-level infrastructure constructs like IP addresses. It should provide single-pane visibility and control, so it's easy for a security team to come in and get an overall picture of what's happening and control it. And it needs to be self-verifying, so that you can set it and forget it and have confidence that it's going to do the right thing. So let's look at OpenStack and Kubernetes and see what kind of tools you have out of the box that you can put together and configure into a solution that gets as close as possible to this.
In OpenStack, we actually have a very rich set of network APIs in Neutron that allow you to configure virtual network topologies and then provide connectivity between them through virtual routing and Firewall-as-a-Service. At the instance level, or even the virtual interface level, you can attach security groups, which are sets of rules that define the kind of ingress and egress traffic that's allowed. These are expressed in terms of the IP protocol (TCP, UDP, and so forth), the address range (CIDR), and the port range, and you can also specify the peer's security group ID as a condition. There's also a role-based access control mechanism now that allows different OpenStack projects to share access to a Neutron resource, say a virtual network that you share between multiple projects. So these are all very rich mechanisms that you can drive through programmable APIs, which is very nice, but it's at the wrong level of abstraction. What we really want is to control microservice-to-microservice interaction, and what this gives you is network primitives, so you have a semantic gap that you have to bridge yourself. At the application level, OpenStack gives you a couple of things. There's Barbican, which provides a secret-store API that lets you store secrets, certificates, and so on in things like hardware security modules through a plug-in mechanism. Your applications or microservices could then be developed to consume those secrets and exchange them, and use that as a basis for policy, like controlling which APIs something is allowed to access. And Neutron provides the Load-Balancing-as-a-Service plug-in; some LBaaS plug-ins can do L7 policies as well, to control which APIs things can access.
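To make the "network primitives" point concrete, here is a minimal sketch of how a Neutron-style security group rule gets evaluated. The field names mirror the Neutron API (direction, protocol, port_range_min/max, remote_ip_prefix), but the matching logic is our own illustration, not Neutron's implementation:

```python
import ipaddress

def rule_matches(rule, packet):
    """Return True if `packet` is allowed by this security-group rule."""
    if rule["direction"] != packet["direction"]:
        return False
    # A missing protocol means "any protocol", as in Neutron.
    if rule.get("protocol") and rule["protocol"] != packet["protocol"]:
        return False
    lo, hi = rule.get("port_range_min"), rule.get("port_range_max")
    if lo is not None and not (lo <= packet["port"] <= hi):
        return False
    prefix = rule.get("remote_ip_prefix")
    if prefix and ipaddress.ip_address(packet["remote_ip"]) not in ipaddress.ip_network(prefix):
        return False
    return True

# Allow ingress TCP 443 only from the 10.0.0.0/24 subnet.
https_rule = {
    "direction": "ingress",
    "protocol": "tcp",
    "port_range_min": 443,
    "port_range_max": 443,
    "remote_ip_prefix": "10.0.0.0/24",
}

print(rule_matches(https_rule, {"direction": "ingress", "protocol": "tcp",
                                "port": 443, "remote_ip": "10.0.0.7"}))    # True
print(rule_matches(https_rule, {"direction": "ingress", "protocol": "tcp",
                                "port": 443, "remote_ip": "192.168.1.5"}))  # False
```

Notice that everything here is expressed as IPs, ports, and protocols; nothing in the rule says "the payment service may call the catalog service", which is exactly the semantic gap described above.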
From our perspective, this doesn't really meet the requirements I set up front, because in the case of secrets it interferes with the application developer: we have to rely on them to do the right thing, and there's no self-verification mechanism. In the case of Load-Balancing-as-a-Service, you've got to get the networking right to make sure traffic cannot simply evade the load balancers, and the load balancers are fundamentally separate from the applications, so they don't have the application context they need to always make the right decisions. If we look at Kubernetes and see what's available there: Kubernetes has a much simpler networking model, a flat networking model by default, and the network policies used to control reachability between Kubernetes pods are defined at a more abstract level than what's currently available in Neutron. Here you define reachability based on the labels assigned to pods and the values of those labels, and you can define very complex label selection rules that determine what traffic is allowed into a particular pod that has a network policy attached to it. Segmentation is also possible in Kubernetes installations through various CNI (Container Network Interface) plug-ins, like OpenContrail and Calico, but it's not a first-class citizen of Kubernetes itself. At the application level, there's been some recent work: there's a fairly new secrets management API, and there's a lot of discussion in the community about adding service-level policies in addition to the network policies I described. So again, these solutions sit at different levels of the stack, the network level and the application level, and they still require the application code to do things.
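The label-based reachability model can be sketched in a few lines. This is an illustrative simplification of Kubernetes NetworkPolicy semantics: a policy selects target pods by label and allows ingress only from pods matching one of its peer selectors. The pod and policy shapes below are simplified stand-ins, not the actual Kubernetes objects:

```python
def selector_matches(selector, labels):
    """matchLabels-style selector: every key/value pair must be present."""
    return all(labels.get(k) == v for k, v in selector.items())

def ingress_allowed(policy, src_pod, dst_pod):
    # A policy only constrains the pods it selects; in this simplified
    # model, pods not selected by the policy are unaffected by it.
    if not selector_matches(policy["podSelector"], dst_pod["labels"]):
        return True
    return any(selector_matches(peer, src_pod["labels"])
               for peer in policy["ingress_from"])

# Only the api tier may reach the database pods.
db_policy = {
    "podSelector": {"app": "db"},
    "ingress_from": [{"app": "api"}],
}

api_pod = {"labels": {"app": "api", "tier": "backend"}}
web_pod = {"labels": {"app": "web"}}
db_pod = {"labels": {"app": "db"}}

print(ingress_allowed(db_policy, api_pod, db_pod))  # True
print(ingress_allowed(db_policy, web_pod, db_pod))  # False
```

This is closer to the application level than Neutron's primitives, since labels can name roles like "api" or "db", but the enforcement still happens at the network layer and says nothing about which APIs or resources the client may use once connected.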
So there's no easy way to both bridge that semantic gap and make sure you've got everything configured coherently, in harmony, to achieve the high-level goal you really want for microservice-to-microservice interaction and security. I should also mention that in the OpenStack world we have some higher-level tools that can coordinate across the different levels: there's the Heat project, which allows you to create templates for reusable deployments, and the Congress project, which allows you to specify policy invariants that should hold in your configurations. But they don't really solve the problem of having to get all of those low-level details right in the first place. So how do people combine OpenStack and Kubernetes? There are two fundamental ways to do it. One is to deploy them side by side as separate clusters. The other is to deploy Kubernetes on top of OpenStack Nova instances, i.e., VMs, and there are a couple of alternative ways to do that: one is through the kube-up script that Kubernetes provides, and another is to use OpenStack Magnum. Either way, whether you're doing side by side or Kubernetes on OpenStack, there's the issue of what to do about the network: what level of integration do we want between these two networks, and what implications does that have? This is a highly oversimplified explanation; you can get super complicated with all the networking options available for these two environments. One approach is to just treat them as completely separate networking frameworks and not try to unify them at all. Then microservices deployed in the two different environments can communicate, but only through the cluster's exposed external IP addresses and the ports that services can export from their cluster.
But then you lose some of the advantages of network integration, like the ability to provide segmentation and so forth. So there is a project in OpenStack, Kuryr, which you've probably heard about. This is a very promising project that allows you to have Neutron spanning both of these environments, OpenStack and Kubernetes. It's basically a controller that listens for Kubernetes events, like new pods being launched, and then assigns Neutron resources to them; that assignment is accomplished through a CNI plug-in that sits on every Kubernetes node. The end result is that you get all the Neutron functionality, the virtual topologies, security groups, Load-Balancing-as-a-Service, all of that, applied to Kubernetes pods as well as your OpenStack Nova instances. Okay, so there is at least some way to integrate them, and then, as long as you again get all the details right in that low-level networking, you can achieve your high-level goals with OpenStack as well. It's a little unclear how this will work with the Kubernetes network policy, which specifies connectivity at a different level than this. In terms of application-level integration, I haven't seen very much; as far as I know, the secrets management frameworks are separate things, and as I mentioned before, secrets impose requirements on applications that can be burdensome. Similarly for L7 authorization: if you're using Kuryr, then I guess you can use Load-Balancing-as-a-Service across those environments, but otherwise you have to have some kind of custom solution. So I think the takeaway here is that you have a lot of knobs you can turn, a lot of configuration at different levels.
Both of these systems, OpenStack and Kubernetes, are quite rich, but there's still a huge gap between the capabilities that are there and the objectives we really want to achieve. And even if you could achieve that with these two environments, it's just a subset of the general problem that Jayanth laid out, where you have traditional infrastructure as well as private cloud and public cloud resources, and you want to control microservices deployed across all of these environments. That's what motivated us to look at whether there's a fundamentally different approach we can take, and Jayanth will lay that out. Thanks, Yoshio. Yeah, all I heard was that everything is very complicated. All right, so all of this indicates that we need to think about this problem in a slightly different way. One key new approach that's gaining a lot of momentum for securing microservices that are distributed, dynamic, and spread across all these different environments is something we're calling security micro engines. You can think of security micro engines as miniature versions of the AppSec and NetSec tools we have, spread or distributed across your entire environment. The idea is that they can be very specific to the given microservice you're trying to secure, and you can invoke exactly the functions needed for securing that specific microservice. For example, what you want for an HTTP-based microservice might be very different from what you want for, say, a SQL server, and so on. So you can create really tailored solutions for the different parts of the equation.
You can think of this as going one level further: as we moved from physical to virtual we got some level of programmability, and now we're adding a new level of programmability by having these micro engines spread across your entire infrastructure. We believe that's probably the right way to go from the current set of security tools to the next generation, where security micro engines will play a really key role. We envision security micro engines as converging the application and network security layers, so to speak, into a brand-new microservices-level security layer that lets you achieve a lot of these objectives. One way to think about it is that a lot of the intelligence for security used to be placed either in SDKs and applications, or in the underlying network in terms of security groups and so on. Now you can imagine yanking a lot of that complex functionality out and placing it at the microservices level, so you can talk and reason about it at that level. That's what we think the world is moving towards, and based on this we've built a platform, which we call the Banyan security micro engines platform, that tries to bridge the gap between traditional and modern by creating a layer that's universal across these different environments. We've come up with a layer called the Cryptovisor; I'll go over a little bit of what it is. It's a virtualization layer that gives you centralized policy management, with which you can control the activities happening across these different environments, by providing a distributed control plane, enabling transparent enforcement of all these policies, and letting you visualize it.
You can think of the Cryptovisor as a programmable security virtualization layer that's present on a host or a VM, depending on what you have. It sits between the application and the network stack, and it's completely independent of both the application and the underlying network stack, so you can plug in your favorite network infrastructure and have your applications in any languages you please. This layer is a very thin virtualization layer that builds security into the ecosystem. Any traffic that goes between applications goes through this layer, so it knows exactly what is going in and out of the different applications, and it uses that knowledge to program different levels of security into your system. As an example, say you have a web browser and a mobile app accessing front-end services like a web portal and a catalog service, some middle-tier services like a payment service and a user service, and then some database services like MySQL, Kafka, or Redis. Traditionally, with perimeter-based application security, either a microservice going rogue could access other applications, or an intruder could access any given microservice or the traffic between two microservices. We envision the final system looking more like this, where we can program or configure the interactions between microservices in a way that's appropriate for each microservice. For example, for the payment service, you can specify policies saying it can only be accessed by the web portal service, and only with mutually authenticated TLS.
And something like Redis can only be accessed by the notification service, in a way where you identify the client as the notification service not by its IP address but by more powerful cryptographic mechanisms, which we'll get into in a bit. Finally, you can imagine this layer protecting applications like Kafka: Kafka can only be accessed by the user service and the web portal, and in addition to requiring mutually authenticated TLS, you can provide differentiated access depending on which resource, which for Kafka means topics, is being accessed. So this is where we want to end up, so that we can thwart all these intrusion attacks, so to speak. What are the key attributes of the solution I just described? One is that it's a transparent security layer: you don't need to change your application or infrastructure, and it works across modern and traditional IT, so you can insert it and automatically upgrade a connection to, for example, TLS quite easily. We provide a global identity to all the endpoints in your system, and this identity is cryptographically secure and signed, so we can be sure that whoever we are interacting with is what it says it is, not just some IP address and port. And finally, this is a high-performance, low-latency, thin layer, and we can provide the desired level of security depending on which service you're trying to protect. We'll show an example of the RBAC and ABAC policies we use to make this happen.
This is just a very high-level picture of how this works; I don't want to go into too many details, but in this case you have two VMs, one running service S1 and the other running service S2, trying to communicate with each other, potentially from two different clouds. Our Cryptovisor layer sits between the application and the network stack on each, and all of these are controlled using a centralized dashboard that provides all the different features we talked about. We provide cryptographic identities to all the endpoints, and when they want to communicate with each other, we automatically create secure trusted channels between them based on TLS. This lets them communicate securely, and we know exactly who we are talking to on both ends. Yoshio will now show a demo of some aspects of this. Okay, so this is the dashboard showing an overall snapshot of the current state of all the clusters you've got us deployed on. We can go to the network map and see all the individual clusters: here is a Kubernetes cluster we've deployed and the applications on it, and we have a databases cluster and a Kafka cluster with one application for Kafka. If we go to the all-clusters view, we've grouped by cluster name, so we can see the clusters we've defined, and we can see, for example, that the Kubernetes cluster is talking to the database cluster. If we go to one of the applications in Kubernetes, we have HelloCab, the microservice application we developed, and here we've chosen to group by service name so we can see all the services. Oh, that's the wrong mode, okay. So here we see all the services, and the numbers indicate how many instances there are of each service.
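The mutually authenticated TLS channel described above can be sketched with Python's standard library. This only shows the generic standard-library configuration for requiring client certificates, under the assumption that each service has a CA-issued certificate; the file names are hypothetical, and this is not Banyan's actual implementation:

```python
import ssl

def make_server_context(ca_file=None):
    """Build a TLS server context that demands a client certificate."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # CERT_REQUIRED turns one-way TLS into mutual TLS: the client must
    # present a certificate, so its identity is established
    # cryptographically rather than by IP address and port.
    ctx.verify_mode = ssl.CERT_REQUIRED
    if ca_file:
        # Trust only client certs signed by this (hypothetical) CA.
        ctx.load_verify_locations(cafile=ca_file)
    # The service's own identity would be loaded here, e.g.:
    # ctx.load_cert_chain("service.crt", "service.key")
    return ctx

ctx = make_server_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

After the handshake, the server side can read the verified peer certificate (via `SSLSocket.getpeercert()`) and map its subject to a service identity, which is the kind of cryptographic client identification the talk describes.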
For example, this passenger management service has three instances; we can expand that and see those are three different containers on three different machines, Kubernetes pods actually. We can click on the links and see the kind of traffic they're generating; we support a few different L7 protocols. Basically, the Cryptovisor layer sees all the traffic, and we accumulate that information, compress it down, and send summary statistics up to this service. Let's jump to the Kafka one. Here we've grouped by host name; you can see there are two client hosts accessing the Kafka service. What we've done is set up a policy for this Kafka service, and the policy has two parts. One says that KClient1, which is that first client, has access to all resources (that's the star) and can do any operation; here the resources are Kafka topics. The second part says KClient2 has access to topic1 and topic2, and it turns out those are the only topics it's accessing. So if we go to the events page and select just the unauthorized access events for the Kafka service, we see there's nothing. Okay, now let's take a risk and change the policy. Go to the JSON view, and I'm going to remove topic1 from the list of resources that KClient2 can access. Now it should only access topic2, and if it accesses anything else, that's an error. I clicked update there, and then we go back to the events tab. Oh, okay, yes, and we see an unauthorized access; sometimes it takes a couple of seconds. We can click on this and see the details: a client with role KClient2 has tried to access topic1, and that is not allowed; it's an authorization failure.
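The topic-level check from the demo can be sketched as a simple role-based authorization function. The policy shape below is modeled on what the dashboard showed (roles, resources with a `*` wildcard, operations), not an exact Banyan schema:

```python
# Demo policy: KClient1 may access any resource with any operation;
# KClient2 may access only topic1 and topic2.
policy = [
    {"role": "KClient1", "resources": ["*"], "operations": ["*"]},
    {"role": "KClient2", "resources": ["topic1", "topic2"], "operations": ["*"]},
]

def authorize(policy, role, resource, operation):
    """Return True if any policy entry for `role` permits this access."""
    for entry in policy:
        if entry["role"] != role:
            continue
        if "*" in entry["resources"] or resource in entry["resources"]:
            if "*" in entry["operations"] or operation in entry["operations"]:
                return True
    return False  # no matching entry: raise an unauthorized-access event

print(authorize(policy, "KClient1", "topic9", "write"))  # True
print(authorize(policy, "KClient2", "topic2", "read"))   # True

# The demo's policy edit: remove topic1 from KClient2's resources,
# after which KClient2's next access to topic1 is denied.
policy[1]["resources"] = ["topic2"]
print(authorize(policy, "KClient2", "topic1", "read"))   # False
```

Because the enforcement point parses the Kafka protocol, the role comes from the client's authenticated TLS identity and the resource from the topic named in the request, which is how a single policy update immediately produces the authorization-failure event shown in the demo.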
So that was all a very simple user interaction, but under the hood the crypto visor has identified all the workloads in all of these different clusters, has assigned cryptographic identities to all of them, has transparently encrypted the traffic to the Kafka service, mutually authenticated so we know who the client is, and has been parsing the Kafka protocol and applying the policy, looking for changes to that policy in real time and adjusting its enforcement accordingly. All that has happened with a click of a couple of buttons, so that's the feature set that we wanted to show you today. So, okay, let's just go back. This is where my lack of PowerPoint skills comes in, can you help? Okay, yeah, so just to sum up: we have described a new attack surface raised by microservices, containers, and hybrid clouds. I've described the mechanisms you have available in OpenStack and Kubernetes to deal with that, showed you why we think there are shortcomings with them, and motivated the development of a new solution tailored to these microservice architectures and working across modern and traditional environments. The solution we've come up with is based on a security virtualization layer that brings together AppSec and NetSec and bridges these different application environments. So we think this is a very powerful layer that you can use to embed programmable security functionality over time. So with that, I'd like to open it up to questions and discussion; you can come to the microphone. Hi gentlemen, Scott Fulton from The New Stack. I'm curious, first of all, we've talked a lot today about the two ecosystems, the OpenStack ecosystem and the Kubernetes ecosystem, kind of overlapping one another.
Do you gentlemen foresee a third ecosystem emerging, an oversight scenario where you take something like you've shown here in the demo and integrate it into a separate management platform that's used by a separate group of InfoSec personnel, or do you see this being integrated somehow into either OpenStack or Kubernetes? So I think you could do it either way. The approach that we've taken is the first: to have it be a separate ecosystem, so InfoSec people could come in and apply their policies across a diverse environment in an enterprise. And we tried to design it in a hierarchical way, so that the administrators of a Kubernetes cluster could control just the things that are relevant to them, and a broader InfoSec team could control things across different clusters or across clouds. One other thing I'd like to add is that a lot of platforms are usually responsible for the infrastructure and network layers, and a lot of what we're doing is at the next level up, the application layer. So a lot of this functionality arguably falls in a completely different domain. How do you solve the performance problem, especially with microservices? The transaction gets smaller, but the overhead remains the same, so the ratio rises with the amount of compute resources you need to verify whether a particular transaction is authentic or not. So I think the overhead doesn't really grow like that. It's going to scale proportionally to the throughput of your system. So if you have N transactions per second and you have some overhead of, say, 2% for every transaction, it's still 2% scaled by N. Whether that's on one node or spread across multiple nodes kind of doesn't matter, because we don't have an additional communication or coordination overhead; it's just a per-transaction cost that you're paying. So it kind of doesn't matter how it's split up, if I understand your question, right?
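The scaling argument in that answer can be made concrete with a tiny numeric model. The 2% figure is the speaker's example; the function and numbers here are otherwise illustrative, assuming a fixed fractional cost per transaction and no cross-node coordination traffic.

```python
def security_overhead(transactions_per_sec: float,
                      per_txn_overhead: float = 0.02) -> float:
    """Total security cost per second, in 'transaction-equivalents':
    a fixed fraction of extra work added to each transaction."""
    return transactions_per_sec * per_txn_overhead

# Splitting the same load across nodes doesn't change the total overhead,
# because this model adds no extra messaging or coordination between nodes.
single_node = security_overhead(10_000)
four_nodes = sum(security_overhead(2_500) for _ in range(4))
assert single_node == four_nodes
```

This captures the claim being made: the overhead is local and linear in throughput, so the split across nodes is irrelevant. It also captures what the questioner pushes on next: if adopting microservices increases the total number of transactions, the total overhead grows with it.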
Well, I was thinking of a typical DNS transaction. The reason we stack things together and get rid of security between them is that one transaction actually multiplies by three when you want to do a secure DNS request as opposed to a non-secure one. Oh, I see, okay, yeah. And in general, microservices are increasing the amount of communication you have, so your communication overhead in general is going up because of your adoption of microservices. You know, that might impose a limit on how much you distribute, and on what the proper grain size is that you want to support. But I think you can think of a system like ours as a local overhead: it's processing local traffic and it's not adding additional messaging, it's just adding some cost to every message that occurs there. But there is a fundamental problem with microservices, that as you scale out more, you're going to have communication that probably scales super-linearly with the number of nodes. I was thinking, if you have any kind of data on the latency that's added, the system might decide that something is not good to be separated, that it's more expensive to separate. Oh, to help you do capacity planning or something like that, so when you scale out to a particular size, it's bad to scale this way. Yeah, so we haven't gone in that direction of helping you optimize performance and re-architect your applications. We're just saying: given the application you have, how do we help you secure it? But I think in the future you could look at systems like that. Yeah, and one point to add there is that the fact that you can tailor the security solution for each microservice means that, let's say you figure out that parsing your L7 protocol is adding a lot of overhead but you don't really care much about it, you can just turn it off.
Things like that are much more feasible if you have a programmable architecture, and that's where we were going with the whole thing. Can we take one more question? Yes, I was actually going to ask the same question as the gentleman over there, but let me ask it a bit differently. So when you establish the connection, you do the negotiation with TLS or whatever it is; the expensive operation is the asymmetric encryption and the negotiation of the symmetric key. So is this an operation that you do for every connection, or once it gets negotiated and you're operating with a symmetric key, do you just keep using that? Yeah, so I think the answer to that is probably session reuse. I mean, a lot of applications are smart enough not to generate tons of connections between microservices; some are not. And for those that are not, you need some kind of session reuse, and TLS has support for that, so we can leverage it pretty easily. The challenges with that come in when you have to load balance, where separate connections from one client might go to different instances on the back end; then that session-reuse state has to be portable across all of them, right? There are techniques to deal with that. So I think we're out of time, but we'd love to talk more if you want to come up here. Thanks. Thank you.
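The load-balancing caveat in that last answer can be sketched with a toy model. This is not real TLS (actual resumption uses session IDs or session tickets negotiated inside the handshake); it just illustrates the state-portability point: if the session cache is shared, a resumed connection can land on any backend and still skip the expensive asymmetric handshake. All names here are invented for illustration.

```python
import secrets

# Toy model of TLS session resumption behind a load balancer. The cache maps
# session IDs to negotiated symmetric keys; sharing it across backends is what
# makes the cheap "resume" path work no matter where a connection lands.
shared_session_cache: dict[str, bytes] = {}

class Backend:
    def __init__(self, name: str, cache: dict[str, bytes]):
        self.name = name
        self.cache = cache

    def full_handshake(self) -> str:
        """Expensive path: asymmetric crypto, negotiate a fresh session key."""
        session_id = secrets.token_hex(8)
        self.cache[session_id] = secrets.token_bytes(32)  # symmetric key
        return session_id

    def can_resume(self, session_id: str) -> bool:
        """Cheap path: reuse the cached key, if this backend can see it."""
        return session_id in self.cache

backends = [Backend(f"b{i}", shared_session_cache) for i in range(3)]
sid = backends[0].full_handshake()           # client's first connection
assert all(b.can_resume(sid) for b in backends)  # any backend can resume
```

With a per-backend (unshared) cache instead, only `backends[0]` could resume, and every connection routed elsewhere would pay the full handshake cost again, which is exactly the portability challenge the speaker mentions.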