All right everybody, welcome back to yet another OpenShift Commons briefing. Today we're going to talk about building trust-based authentication with a whole bunch of open source projects, SPIRE, SPIFFE, NSM, OPA, and two OpenShift Commons members, Frederick Coutts from ..ai and Bobby Samuel from Anthem AI, are going to walk us through some really interesting content here and lay out what all of those wonderful open source projects are that they used to make this happen at Anthem and other places. So I'm going to let Frederick and Bobby introduce themselves, take it away, talk for probably 30 to 40 minutes depending on how much content they have, and then give us some time for a conversation at the end. So Frederick, take it away. All right, Frederick, do you mind if I start? Sure, I think it's good, you first. All right, thanks. So good morning, everybody, and thanks for taking time to join us today. I hope we all had a lovely Memorial Day weekend. Frederick and I are going to tag team here. What I'll do is set up the business context, the why. Why did we start this? Why are we on this journey, and why is it important to all of us, as well as to people that carry insurance and people that work in this space? Let me set that up, and then Frederick will walk into some of the technologies that we're working on. So Frederick, if you could hop to that first slide. I don't know where your eyes go when you look at this slide, but I was here at Anthem and I only see three words on here: 2015 data breach. I lived through that here at Anthem, and it was not a great time. Some of you saw it in the news, you can go search on your favorite search engine: around 25 million members had their private health information taken, and we're still facing the repercussions from that.
And so as a result, Anthem's reaction was to beef up traditional security. At one point in time, I believe it was a little over seven hops to get through our firewalls to reach an external website. So we took a traditional model and bolstered it, galvanized it even more. And as we've looked at that, it's not sustainable long term, so we're taking a second look at this. Then there's the cost of health care itself. For those of you in the US, US health care spend is about 18 percent of our gross domestic product. It's a huge amount. To give you an idea of how this compares to other Western countries, European health care costs are about 9 percent. So stark, stark differences in how much the US is spending on health care. So we've bolstered up our security, we've seen health care costs rising, and we know that the model we have today is likely not sustainable for the generations to come. As we look at this, we've been partnering with many open source partners, everyone from SPIFFE to SPIRE; if you saw the last CNCF presentation, we were talking about it there. In our world we're also using OPA, NSM, and Keycloak to bring together a zero trust environment and ecosystem. The why is really driven around this: we believe the cost of health care shouldn't continue to rise. We believe that data should be shared and should be in places where it doesn't take entire teams of hundreds and hundreds of people watching firewalls and watching security protocols to make sure that our data is secure and our transactions have integrity. So we've started to work with various groups.
We're working with cloud native engineers and developers, and I know most people don't think of Anthem as a place where technology is happening, so what better place to do it? Frederick, many other folks on Frederick's team as well as my team, we've all gathered with the fundamental belief that we can shift the way that we do security, which then fundamentally shifts the way that we conduct our business, for the better of the people. So this is our approach. This is what we're doing to set up a new way of doing business. And we believe it's not just for our industry; several industries are already working in this direction, and we're partnering and moving along with them. We want to be not just at the forefront, we want to be part of the change that's happening. So with that, Frederick, I'll turn it over to you, and I'll be available for questions as we walk through this. Right, so let's go ahead and get started, with a recap on some history. In order to really understand zero trust, we have to look at where we're coming from, and most security postures historically focus primarily on what we call perimeter defense. You have something untrusted that goes through some security device, often a firewall, into some trusted environment. Very often it would look something like this: maybe you have the Internet, you have some DMZ where you have your application gateway, and then another firewall again to your corporate network. This also goes in the opposite direction; maybe you want to connect from your corporate network to the Internet, and you have these multiple hops that you end up going through. And the connections from the Internet to the corporate network and back are implicitly denied.
You have to go through the firewalls. There's a variation on this, a lower cost, maybe simpler to manage version, where you have one single firewall and your DMZ, and you're a hop closer to the Internet and the corporate network. So a little bit easier to set up. I'm not going to say it's less secure, but you have fewer places where you can defend, and it's still perimeter defense. We also have this in Kubernetes: when you connect with a client, you have some ingress controller that forwards you into your pods. Once you're in the pod network, there are ingress and egress controls you can put on which help tie it down, but you're still very much in a perimeter defense model. Multisite is another variation: we have one trusted network and another trusted network, with nodes on one side and services on the other that the nodes want to connect to. You can see this building on the previous set of defenses. However, the details matter, and the reason I'm getting into the details is that producing a good security posture, a good security paradigm, is not just locking everything down, because you can lock things down very easily. The problem is that you have to be able to lock them down while still providing your service, and have the capability to scale that across a large number of systems. If you're focusing on a single computer, sure, you can do a lot of things, lock it down with hardening techniques and so on. But when you're starting to say, hey, we have an entire company that we need to lock down, or maybe you're working with Fortune 500s, and a lot of Red Hat customers can appreciate this, where you have thousands, tens of thousands, a hundred thousand systems, then these details matter; the security system has to work in such a way that it integrates at that scale.
And so in this scenario, going back to this example, we have two trusted networks. There's a public set of IPs and some private set of IPs. A very common thing is to say, hey, let's allow outbound from this network over to this set of IP addresses, and allow from this IP address to these particular services. So we have some form of network authorization going on here. A few questions then start to pop up, like: what if I want to differentiate between node one and node two? I'm turning them all into one address here when I do my network authorization. Or what if I want to connect to multiple services here? That's a little bit of an easier use case, but discoverability is still important. On the server side, you also have questions like: how do you trust a client, how do you expose your services out to others, and how do you differentiate between node one and node two if it matters? And there's a whole other set of questions about how you populate and rotate your certificates over time so that these nodes over here are able to connect in, once you start to bring in encryption. One potential answer to some of these questions is: well, let's bring in a VPN. What we'll do is say this entire trusted network and the one on the other side will trust and connect to each other, and they will inject routes so that they're capable of communicating with each other. When you start adding a third one, you end up with this mesh type thing. You can sometimes reduce it down to a hub-and-spoke model, but from a mental perspective, this is what people tend to think of. A little bit of an aside: we're going to deal primarily with layer 3 VPNs, and why layer 3 rather than layer 2? It all comes down to scalability. I'll publish these slides later.
I don't want to go too much into this, but basically layer 2 tunnels are difficult to scale. Layer 3 is built on IP addresses, which are designed to scale; we use them to scale the internet. But let's talk about scaling subnets. When you have two networks, you're looking at one possible connection; three networks, three; four networks, you're looking at six connections, or six subnet pairs you end up having to de-conflict. Over time, what you have to do is make sure that all these subnets that you're hooking up to each other, using private networks, either you've de-conflicted them or you're doing some form of network address translation in a very careful way to make sure they don't conflict with each other. So this becomes a global problem very quickly. In terms of handling the conflicts, you end up with very careful planning, but maybe you've done a lot of planning and you didn't expect the type of growth you were getting, or maybe you brought on a new partner or new customer who has a larger network than you were expecting, or whose requirements conflict with yours. What happens when things begin to break? You end up bringing in your firewall experts, things like Check Point or Cisco firewalls and so on, and you end up building an IT army to start handling all of these edge perimeters and all of the network access control lists. Maintaining those lists becomes very expensive very quickly. So back to the original question: how does all of this stuff that I describe end up improving your security? Because you end up with a lot of complexity. You end up with fragile configurations, and you also have entire sections around observability and debugging in terms of how you deal with that?
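The subnet-scaling arithmetic above can be sketched in a few lines of Python; the site CIDR blocks here are made up purely for illustration:

```python
# Illustration (not from the talk) of why pairwise subnet de-confliction
# blows up: every pair of meshed networks is a potential address collision
# you have to plan around.
from ipaddress import ip_network
from itertools import combinations

def pairwise_links(n: int) -> int:
    """Number of network-to-network links in a full mesh of n sites."""
    return n * (n - 1) // 2

def find_conflicts(subnets):
    """Return the pairs of CIDR blocks that overlap and would need
    re-addressing or careful NAT before the sites could be meshed."""
    nets = [ip_network(s) for s in subnets]
    return [(str(a), str(b)) for a, b in combinations(nets, 2) if a.overlaps(b)]

sites = ["10.0.0.0/16", "10.0.1.0/24", "192.168.1.0/24", "172.16.0.0/12"]
print(pairwise_links(len(sites)))   # four sites already mean 6 links to de-conflict
print(find_conflicts(sites))        # 10.0.0.0/16 contains 10.0.1.0/24 -> conflict
```

The quadratic growth of `pairwise_links` is exactly why this "becomes a global problem very quickly": every new site has to be checked against every existing one.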
And also we're still dealing with the assumption that the attack vector is coming from the outside, and I would say, out of the entire set of slides, that's the most important line right there. In other words, we are defending our infrastructure with 11th century technologies. What if the attack comes from in here? So back to perimeter defense: this is the world that many of us currently live in, and what we are pushing towards is a zero trust environment, and I'll leave it here for a brief moment. What we're doing is saying the network is untrusted; it is no longer the core trusted thing that you have. And when you stop trusting the network, that means you have to shift the trust somewhere else, and where we are shifting this trust is to the workloads themselves. We're saying we can secure the workloads, and we can then develop secure connections between the workloads. Now to be clear, just because I say that this network is untrusted does not mean that it is not private. It could still be private; you still have layers of defense that you're building. But if someone were to breach your network, you have other things to mitigate. And the secure connection could still be a VPN, but you're not relying on the security of that VPN for the majority of your security. So when it comes to untrusted networks, please don't think that I'm saying everything has to be public. Although, if you want a really good test of how well you're doing, you can play this as a thought experiment: what if I take my network and make it fully routable on the internet? Will I be able to sleep at night? If the answer is yes, then you're probably doing zero trust right. The idea is that if your attacker gets in here, they're not able to connect to other things simply by being a member of that network. So the question then becomes: how do we achieve this? We'll start with establishing a trust domain, we will
attest the workloads, we will establish policy between the workloads, and we will also show an example of establishing trust between organizations. For the trust domains, we're going to use the examples from SPIFFE and SPIRE. They're both CNCF projects, and SPIFFE is a specification on how to get identity to workloads and how to rotate that identity, the certificates they're assigned, and do so in a hierarchical manner. We start with the CA, which is based on the X.509 infrastructure, and what we do is attest the applications underneath it. Of course, there may be multiple layers in here: you might attest a sub-organization, which then attests a cluster, which then attests your application gateway. But for the mental model, think of it like this: you have attested a workload from some common root. Once we've established that identity and each one has received X.509 certificates, we can then create policy. This policy is a modified example from Open Policy Agent. The important thing in this scenario is that we consume the identity from the X.509 certificate, what SPIFFE calls an SVID, within OPA itself. And once we have the identity in here, we can check: is this ID equal to this? So in the storage API, where OPA would be sitting, or your proxy right outside of it: is it receiving a request from this workload here? If it receives something that does not have this identity in its certificate, then it will reject that. Once you have that policy set up, we can have a second organization that follows the same set of patterns, and they attest each other. If we want the two to work together, we have the two organizations trust each other at the top: the two roots trust each other, and org one says I trust org two to attest properly, and vice versa, and you also scope it down to their domains.
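In the talk that policy is written for Open Policy Agent in Rego; as a rough illustration of the decision it encodes, here is a plain-Python sketch. The trust domain `org1.example` and the workload names are invented for this example, not taken from the talk:

```python
# Hedged sketch of the allow/deny decision an OPA policy would make when
# the storage API receives a request: extract the SPIFFE ID (SVID) the
# peer presented in its X.509 certificate, and allow only the expected
# workload identity. All SPIFFE IDs below are hypothetical.
ALLOWED_CLIENTS = {
    # destination workload -> set of SPIFFE IDs allowed to call it
    "spiffe://org1.example/storage-api": {
        "spiffe://org1.example/frontend-app",
    },
}

def allow(peer_svid: str, destination: str) -> bool:
    """Mimic the policy check: the caller's SVID must be on the
    destination's allow-list; any other identity is rejected."""
    return peer_svid in ALLOWED_CLIENTS.get(destination, set())

print(allow("spiffe://org1.example/frontend-app",
            "spiffe://org1.example/storage-api"))   # True  -- expected caller
print(allow("spiffe://org1.example/payments-api",
            "spiffe://org1.example/storage-api"))   # False -- wrong identity
```

The point is that the decision keys on cryptographic identity rather than source IP, which is what lets the same rule survive re-addressing, NAT, and federation across trust domains.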
And so this means that when the front end app server tries to connect to the storage API, it knows exactly who it's connecting to, and vice versa, the other side knows where the connection is coming from, using mutual TLS; there are libraries to make this easy, and you can also use Envoy for applications that are not aware of these types of concepts. From the policy perspective, we're saying allow connections to, and we set a destination, matching on the destination of where it's connecting to; and this side says allow connections from, and we specify the source of where it's coming from. This will allow those connections to persist and to run whatever set of HTTP verbs you allow. So let's drive this down a little bit deeper. When you're connecting two organizations together, you cannot assume that the underlying network exists. In this scenario, we have a character named Sarah who's trying to connect to your secure corporate intranet using her application. Just because you have the policy set up, and the networks and your identities are set up, doesn't necessarily mean you have a path to actually connect. So you talk to your ops people, and the ops people say, well, in order to connect we really need a firewall, an intrusion detection system, and then this VPN. In that scenario you have to think: how do I actually go about doing all of this stuff? What we're pushing towards in Network Service Mesh is saying, well, let's separate out the data plane, which is the things that Sarah wants for her application, and let's put a northbound control plane over these elements. You can think of this as similar to an SDN in some ways; the primary difference with NSM is that it's more like an SDN for SDNs. It'll talk to your Kubernetes, and it'll talk to your Check Point firewall
or whatever you're using, talk to your intrusion detection system, and talk with your VPN endpoint, and make sure that it passes information in and out. I won't go too deep into that; I have a lot of other material I can point you all towards for the internals of how that happens. The important part is to think of it as a control plane and a data plane, and when these connect and communicate with each other, that allows us to configure the things within the data plane. What we do is we actually give each of these an identity: your firewall, intrusion detection system, and VPN all get specific X.509 identities. This allows us to know that when the pod is negotiating with the firewall, it knows that it is talking to the firewall, using a zero trust model, and similarly down the chain. Once we have this identity, that allows us to build policy. We can say this pod is only allowed to talk to this firewall, this firewall is only allowed to talk to the pod and the intrusion detection system, and vice versa, all the way up to whatever is terminating that connection for you. So it allows us to build paths that translate into your data plane, using SPIFFE and OPA for those pieces. In essence, we want something that looks like this: Sarah's app gets wrapped into some VPN gateway, goes over some clouds, some internet, to your VPN concentrator on the other side, to your API. Pushing this up, it's the same concept as we've had on the application side: we have the same set of attestations on each side, and we have the trust that is put on top. OPA implements policies at each of the relevant locations. And a side effect of this is that there are no more application-specific network access control lists, in the way you traditionally do it, because we're saying this application is allowed through some path to
communicate with this API using Open Policy Agent, and vice versa. So we've actually lifted that into something that is more declarative, something that's human readable, that does not require you to put down: this is the IP address of the API server, this is the IP address of where the VPN gateway is coming from. Because right now we're relying on cryptographic identity to do that, rather than IP. One important detail to unify this is that these identities can be cross-cutting. We can say this identity is the same one that is used by your app, which is used by your service mesh, which is used by your pod infrastructure, Kubernetes, and eventually we'll get it down to the hardware TPM, so that we can say this hardware came from a location that you have authorized, that you have yourself deployed. So even if rogue hardware comes in and manages to replicate your stack, even if it manages to gain access to some of your identity infrastructure, the fact that it doesn't have the right hardware TPM means that it would not be able to fulfill the attestation when it's asked what its identity is. We also have identity in the infrastructure, and the other side has its set of identities as well; we have this cross-cutting identity for each of these. So when we put it all together, we have the status quo, which is: the internet, some client connecting in, connecting into your application and database. I apologize, this is a slightly different use case from what we were showing before; the previous one showed it being in a private network, and I wanted to show off one more use case. As we put it together, we have a client that connects into your firewall or so on, to the application, to your database. And so the question that I would like to ask people in this scenario, to ask yourself, is: if you have an attacker here, how much access does this
attacker have? There have been a number of very high profile break-ins where the attacker got in through Apache Struts or other similar systems that were unpatched, or even worse, zero days. Once they got in here, they were able to do scans on the network and connect to the database, or even connect to a database that the application already had a connection to, and start asking queries that are above and beyond what should have been asked for. When we start driving this towards a zero trust model, one modification we can make is to say: we have a client that we don't really know on the outside, but we trust that if the user has logged in properly, they'll have a JWT that is cryptographic. So when we pass it in through the firewall to the application server, this application server already has a zero trust chain between your application server, your database server, and the database, and we're already saying that the sources and destinations for each of these have been established properly. That limits the ability of this attacker to connect to some random database that's outside of this chain. And then we can scope it down further and say, well, we have a cryptographic token that comes from the client. We receive it on the application server, or we've validated it at the inbound API server before it hits the application server. What if we were to pass that on to the database server, or to your database proxy, and say: in order to unlock this API right here, this database server, the request not only has to come from the app server but must carry a valid JWT, so we can check whether this ID here in the HTTP request path actually matches what is in the JWT? But do this not at the edge of your infrastructure coming in, but actually further down, and every time it passes through your infrastructure it'll check: does this JWT match, does this JWT match? And
so this means that you have two conditions that you have to pass: where is it coming from, the application server in this scenario, and who authorized this request? Well, the client did, a client who was authenticated. An important aspect of that is what a policy violation means. If you have a violation in policy, that usually means one of two things: either you have an active attack going on, or you have a programmer who made a mistake that is causing the violation. In either scenario, you have to get somebody to look at it and deal with the problem. So where does that JWT come from? In the open source path, we bring it in using something like Keycloak. Keycloak communicates with our identity provider, and that identity provider, you'll notice, is not part of this infrastructure; it is actually something that is managed separately in this example. It could be in here, it could be part of your application infrastructure, but it may also be a good practice to actually separate it out, so that even if these happen to get compromised, your identity still has some defense in it, some increased defense I'll say, because you already have the zero trust model that's in there. So the user logs in, and they receive a JWT from Keycloak. Keycloak itself allows you to have single sign-on and LDAP, it's horizontally scalable, and you're able to log in with OIDC, OAuth2, SAML, and social networks. It was originally created by Red Hat, and I believe it's been donated to the CNCF, perhaps somebody can help me out there after we're done with the slides. But the important part in this scenario is that it has the relevant pieces to allow us to integrate with identity providers in the back end, give us a uniform interface for that, and handle the login and the receiving and transferring of the JWT, which then gets passed into your
infrastructure. And so this allows us to lift that zero trust up, to try to defend up to the client level, even though there are limitations on what you can do on the client at this point. With that, we have some locations that you can go look at for some of this: Network Service Mesh, SPIFFE, Open Policy Agent, Keycloak, Envoy, and Kubernetes. A couple of areas where some interesting work has been done: Network Service Mesh is continuing to build out this stuff at the layer 2 and layer 3 level, and SPIFFE is working on something called transitive identity. For transitive identity, what I described with passing the JWT down is like a pseudo transitive identity. But think of transitive identity as the client itself having an X.509 certificate; instead of receiving a JWT, it receives a certificate that has been designed in a way that it can say: application server, I'm going to give you the authority to perform some action on my behalf against the underlying infrastructure. Sort of like when you want a lawyer to do something on your behalf, so you give them permission through some legal agreement, through some transitive authorization; think of transitive identity as something similar. And with that, I think we have a little bit of time for questions. We certainly do, and because Luca asked a whole bunch of questions, I'm going to try and unmute him and let him just ask his questions directly here. There you go, Luca, hit it. You guys hear me? Yep. Yes. Yeah, so thank you for the presentation first of all, it was really nice and really interesting. I myself work as a solution architect in API management and I touch on some of these topics, so I find this particularly relevant. So when you were explaining the zero trust scenario, I think you were mentioning the fact
that VPN and IDS are still relevant, but do I understand this right, that in this case you would basically just need to configure them to connect entire networks without worrying about subnets and ACLs? Is that a correct assumption? So that's a really good set of observations and a good set of questions. In terms of intrusion detection systems, I put that in there because some of the companies that I've worked with have a current policy that requires an intrusion detection system. In fact, at the most basic level of intrusion detection, I personally don't think it's going to buy you that much. Some more advanced versions of these things could maybe help you out a bit: if you had something capable of analyzing HTTP requests that come in and looking for anomalies in the types of requests being made, things that are more in-band or that are working at the L7 layer, then I think we can gain some significant benefit in the model that I showed with those types of things. The traditional L2 and L3 model, there may be some areas where it's still useful, but because you're limiting where your workload receives messages from at such a fundamental level, it does reduce the total amount of value that you would get out of an IDS, even though there's still value there. Okay, and the other question I had was, when you were showing the JWT validation, I think with OPA you're getting a little bit into the API management field, right? Because typically that's one of the tasks that the API gateway can do. So I don't know if you guys already had an API management solution and managed to use that as well, or...
I mean, let me go a little bit more into this particular path. Open Policy Agent, all it really is, you can think of it as its own server, and you can embed it into an application if you choose to do so. You send it a string of JSON, and it responds back with a success or failure, and maybe an explanation why, depending on how you've configured it. It's capable of consuming, within those requests, things like the headers, or you might pass in some of the SVIDs or JWT tokens and so on. So it's not actually sitting here controlling the access in between; that is actually something we would want to rely on something like Envoy proxy for, and Envoy has the ability to inspect and enforce that. Interestingly, a lot of service meshes can help there, because they already have connections and the capability to configure Envoy. There are still some gaps, though, because many of the service meshes don't implement the SPIFFE protocol, or if they do implement it, they've implemented it in a very opinionated way that may not make it as easy to integrate with other systems. So one of the things I've been looking at is working with some of the organizations; for example, I've been having some initial conversations with some of the individuals over at Kong, in collaboration with Network Service Mesh, because we have a core committer who is both part of Kong and part of Network Service Mesh. We've been discussing that, since we use SPIFFE and SPIRE within Network Service Mesh already, it would be good to be able to get that identity from SPIRE both in NSM, which we already have built out, and also to get that identity pushed into Kong, so that you can reuse those identities at that location, rather than having two different identity solutions that we have to work out how to make play nicely with
each other. And so Istio also has SPIFFE support through a component called Citadel. I have not looked at Citadel, so I don't know how well it integrates: if I were to stick SPIRE on top, could I have SPIRE control Citadel? Can I do nested Citadels? Can I get Citadel to work standalone if I don't want to bring in Istio and I want to start bringing in things like the patterns that I showed off? You could easily run them in OpenStack or other environments, given a little bit of development in that direction. The reason I was looking at something like SPIRE for this was because SPIRE runs standalone; it does not require Kubernetes, but it integrates very well with Kubernetes. Right. Just one last question: you were showing at the beginning, when you were explaining the setup for zero trust, the trust between org one and org two, if I remember right. Yeah, this slide. Is this a typical scenario, or in practice would you actually have something more granular, maybe just a department or even a development group trusting another development group, to make it more strict in terms of validation, so that nobody can actually reuse the certificate and access somebody else's back end? Or would you limit that in the policy with OPA?
Okay, so there are a couple of things here. The way SPIRE works, these certificates are very short-lived. Out of the box, the default is that the northbound CAs have lifetimes of a day and the southbound workload certificates have lifetimes of an hour, and they're actually rotated at half that: workload certificates are rotated every 30 minutes with lifetimes of an hour, and CAs are rotated every 12 hours with lifetimes of a day. So if someone were to compromise a system and extract a certificate, they'd have a very short window to perform their attack. That's the first mitigation.

The second mitigation is the way SPIFFE does the attestations. The attestations go up the chain until you reach something that can grant the actual approval, and you're able to scope what type of approval that is. Say I had a third system, like a payments API, and I tried to attest it through the front-end app server path when they're not supposed to be in the same cluster: it would not be allowed to perform that attestation, because it would not meet the requirements for that particular system. So we can scope things down so that a given sub-cluster is only able to receive certificates relevant to its needs, and prevent it from attesting other things.

The reason we establish trust at the very top, with the top-level CAs, in this scenario is a trade-off to some degree: you want to minimize the total number of connections. We don't want to say, every time the front-end app server rotates its certificates, let's send them all over to the second organization, and vice versa. By establishing trust at the top, you reduce the total quantity of communication, and you're not telling them, hey, here's the structure of my organization; you're just telling them enough information so they know who they're allowed to accept connections from.

The other thing is that it also gives us a single break-the-glass location. If org one and org two decide they don't want to communicate with each other, we can literally destroy the trust at the top, and that will propagate through the system relatively quickly, because when the next trust bundle is requested, the revoked bundle ends up removed. Compare that to the status quo in most organizations: trying to perform that style of rotation without SPIFFE can often take weeks or months if you need to go configure and redeploy a bunch of software. So it gives us a dynamism that helps mitigate some of the concerns you were describing.

Right. So just one last question, given the explanation about SPIFFE: is the whole rotation managed by SPIFFE itself? Because certificate management is always a headache in general.

Yeah. SPIFFE is a specification; it's a CNCF project now, but it is a specification designed specifically for doing the attestation of workloads and then rotating those identities over time. It's a gRPC-based API, very easy to build against, but it's designed specifically for solving that rotation problem across a large organization.

Okay, thank you. Sure, my pleasure.

If you're done, Luca, I have a question. A lot of this zero-trust model is really about communication between services, between your internal systems and services, but maybe this is for Bobby: how do you communicate this new model to your internal compliance, audit, and IT people so that they trust that you're implementing this correctly? That's got to be a big issue at a company like Anthem, especially after the breach. How did that conversation go down, and how do you do that with multiple acronyms
thrown at the compliance officers and your IT audit people? How does that fall out at Anthem?

That's a very insightful question. It has not been as straightforward as it might seem, but here's how we've started. Just to be clear, today our chief information security officer, our CISO, is one of our key sponsors for making this change, and that was something Frederick and I took on from day one as we were talking about this and socializing it. You're right, it's a lot of acronym soup, and it's a very new model compared to traditional infrastructure. So what we've started with is doing this outside of the traditional systems that are really running today. We're going to start by proving this out in systems that are, let's call them, less core: if they go down, things are okay; if they get hacked or breached, it's okay. As we prove it out, the CISO's team is working with us to watch how it works and to do pen testing. In fact, we're making our source code public. Their eyes were kind of wide in horror, like, are you serious? Yes: we want you to know every hole, every weakness. We'll even make it public to the people you pay to come in, the white-hat hackers; just come on in and do this, because we want to prove this out, and we might as well prove it out in the most dramatic, open, and transparent fashion, because that's what this technology is about. So that's how we've started. Give us about six months, and Frederick and I will have our internal, not prototypes, actual functioning apps up and running; we're past the prototyping stage.

Anthem in general has not been a fast follower, but as we look to move more of our core on-prem applications into the cloud, they will simply come onto our stack. We're even building the entire stack so that core applications can bring their old security paradigms with them, but they will be on this new stack; they'll just be moving to the new stack. And one of the issues, Diane, that people run into in insurance, or in any old traditional brick-and-mortar business that has evolved into using technology, is that if there are no customers, it's hard to get people to change. Well, here they're actually bringing their customers with them. It's not a migration from an old app to a new app; they're moving their entire customer base onto the new platform, along with the apps they're moving from on-prem into cloud native. So it's a big strategy. It takes a lot of time and effort, and a lot of, let's call it kissing hands and shaking, sorry, shaking hands and kissing babies, at least once Covid is over; for now it's keeping the connections warm, reminding people what we're doing, and keeping this at the forefront of their minds. But we also have some tailwinds, some wind in our sails, in that it just costs a lot to do it the way we're doing it today, and that's not sustainable. Those are all helpful things that work in our favor, and so we've been welcomed with open arms.

Well, that's good news, and in six months we're going to have you back to talk about where you are in production readiness and whatever lessons you've learned over the past six months. And, Luca, I think you had one other question, and let me see, Eric asked a question a little bit ago: does this rotation itself represent an attack surface?

That's a great question. I would argue that anything you can communicate with represents part of your attack surface, and that includes the rotation endpoints, which means you have to have the proper security audits done on them and make sure you're securing those endpoints properly. So I would argue yes, that is part
of your attack surface, in the same way that your firewall or your virus scanner can be; there have been scenarios where virus scanners had bugs that allowed people to compromise systems. So please, in fact I insist, treat it as part of your attack surface: analyze it to make sure it's being secured properly and that problems within it are also mitigated. SPIFFE and SPIRE were designed with that in mind, in terms of reducing the total quantity of privileges granted toward the southbound, similar to what Kubernetes does with pods, where it can expose portions of the API but limits what the southbound side can do based on its capabilities. But yes, make sure it's considered.

Well, we're almost out of time, and this is all new to me and to a lot of the people on this call, so we're definitely going to have to do some more follow-up conversations around this so that we can all learn from your experiences and your expertise. Thank you very much, everybody, for sharing this; we really hope that you got something from this conversation. I'll be posting it on YouTube, and I'll get the slides from Frederick and post them as well on the openshift.com blog. And please reach out to both Frederick and Bobby. On that very first slide, if you throw it up, we didn't put your emails up there. Good, that's a trust surface we didn't want to touch, or something; no spam. But we'll figure out how to get you connected. And if you come into the Slack for OpenShift Commons and you're not there yet, let me know and I'll add you in, and we will continue this conversation online. So thanks again, Frederick, for reaching out, and Bobby, for taking the time today, and as always for being participants in the Commons; we truly appreciate that. So thank you all very much, and take care.

Fantastic, thank you very much. Thank you, take care. Bye bye.