Well, hello everybody, and welcome again to yet another OpenShift Commons briefing. We're really thrilled to have a new OpenShift Commons member with us today, Aporeto, to talk about a topic that's very new to me: application segmentation for anything running on top of Kubernetes or anything that's Kubernetes-ready, which definitely means it works with OpenShift. So we're real pleased to have them come on board and explain what they mean by application segmentation and how it adds value to the ecosystem. Without further ado, I'm going to let Dimitri and Bernard walk us through what they are doing, and you can ask questions in the chat. We're going to let them do their demo and their talk for about 20 to 30 minutes, then there'll be Q&A at the end, and this whole thing is being recorded and will be posted Monday morning on blog.openshift.com. So Dimitri and Bernard, welcome to the OpenShift Commons, and please tell us all about what you're doing at Aporeto. Thanks, Diane, thanks for inviting us and giving us the opportunity to share with the community. As Diane mentioned, we are from Aporeto. I am Dimitri Stiliadis, and I have with me Bernard Vandeval, who is going to help us with the demo, and we're going to talk about the problem of application segmentation. As Diane said, this might sound like a new term. It's not so much a new issue; let's say it's probably a new term for an old issue, and the main problem we're going to discuss is that of application security as applications are deployed in an environment like OpenShift or Kubernetes, or, in the more general sense, in a distributed environment. What we're going to talk to you about today is Trireme. Trireme is an open source project; it's essentially a library that allows you to implement this type of security function, and we'll explain it in detail. It also provides a Kubernetes integration, and because it's transparently installed in Kubernetes, it's also very useful to the OpenShift community.
As you will see, one of the key value propositions, one of the key benefits of Trireme, is that one can actually achieve security on an OpenShift deployment with very little operational complexity. Probably the biggest benefit of the Trireme approach is exactly that: it simplifies the operational deployment of large OpenShift clusters while at the same time providing security. Let's first give a little overview of what we are going to talk about. We'll explain what the security problem is and what is happening in the Kubernetes community to address it. Kubernetes introduced, in the 1.3 release, an alpha/beta API for network policy. A network policy essentially allows someone to partition workloads and create a policy that defines when workloads can exchange traffic and when they are not allowed to, and what this does is limit the attack surface of workloads. It can be used for governance reasons, because you want to separate a dev application from a production application, and it can be used for security reasons, because you want to isolate a public-facing workload from an internal-facing workload. Now, although Kubernetes introduced the API for network policy, and I believe this API is slated to become official, to move out of the beta phase, in the next Kubernetes release, and I believe also in the next OpenShift release, it is just an API; it doesn't describe the implementation. The implementation is actually left up to third-party integrations. Trireme is one such implementation, and the key idea of Trireme is that instead of using networking tricks, networking techniques like VXLAN or network ACLs, it introduces a transparent end-to-end authentication and authorization function inside Kubernetes.
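To make the idea concrete, here is a rough sketch in Python of what such an identity-based authorization decision looks like. This is an illustration of the concept only, not Trireme's actual code; the label names and the policy structure are invented for the example:

```python
# Sketch: authorize a connection based on workload identity (labels),
# never on IP addresses. Policy structure is illustrative.

def matches(selector, labels):
    """True if every key/value pair in the selector appears in the labels."""
    return all(labels.get(k) == v for k, v in selector.items())

def authorize(policy, sender_labels, receiver_labels):
    """Allow the connection only if the policy targets the receiver
    and the sender satisfies one of the allowed selectors."""
    if not matches(policy["pod_selector"], receiver_labels):
        return False  # policy does not apply to this receiver
    return any(matches(sel, sender_labels) for sel in policy["allow_from"])

policy = {
    "pod_selector": {"segment": "backend"},
    "allow_from": [{"segment": "frontend"}],
}

print(authorize(policy, {"segment": "frontend"}, {"segment": "backend"}))  # True
print(authorize(policy, {"segment": "external"}, {"segment": "backend"}))  # False
```

The point is that the decision depends only on the workloads' attributes and the policy, never on IP addresses or network topology.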
So what this function does, essentially, is provide an identity to every workload in Kubernetes, and it guarantees that two workloads can only interact, can only exchange traffic if you want, if there is a policy that allows these identities to exchange traffic. The underlying network assumptions are very simple: it requires no complex networking in order to deploy something like this; it just assumes the very simple Kubernetes model with a subnet per host, and we'll discuss this in more detail. Because it significantly simplifies the network requirements, it makes the deployment of large clusters very simple. And last but not least, exactly because the technology doesn't depend on infrastructure networking techniques or anything like that, it can be deployed seamlessly whether it's on a private cloud, on Google Compute Engine, on AWS, or anywhere else, because it doesn't really interact with the infrastructure or do any routing tricks in order to achieve the security isolation. So what is the application segmentation problem?
The application segmentation problem is actually something very simple, and it's probably one of the most used security policies in any environment. If I have two applications, A and B (and each application can be not just one instance; obviously one of them can consist of hundreds of containers running all over a cluster), I want to guarantee that they can only exchange traffic, they can only communicate, if they are explicitly allowed by policy to communicate. There are two reasons people choose to do this. One reason is isolation: I want to isolate my dev environment from my production environment. The other reason is reducing the attack surface: essentially, if application A for some reason gets compromised and somebody takes it over, they don't have access to talk to everything in my data center; they have very limited access in terms of where they can go and what they can do. Now, in order to solve this problem, over the years we have looked at the network and networking techniques. In virtualization environments and in private data centers we have used VLANs, we have used VPNs, we have often used network virtualization with VXLAN, we have used SDN techniques, AWS has security groups, Docker introduced libnetwork; there is a whole set of networking-level techniques that have been used to provide this isolation. A classic method, which we'll describe in a little more detail, is: I put application A in one virtual network domain, a VXLAN domain, I put application B in another VXLAN domain, and they cannot talk to each other.
Now, all these techniques fundamentally identify applications or application components by IP addresses, and then they use rules around those IP addresses to try to achieve this isolation. Although this is viable and possible, it has some significant scaling issues and some significant operational issues, and I'm going to talk a little bit about the scaling and operational issues of these traditional networking techniques. But before I do that, let me explain a little bit the network policy API that is becoming mainstream in Kubernetes; it's already available in a beta version. Kubernetes introduced this API as a mechanism for a user to define the set of pods or the set of services that can interact with each other. A client can use this API to define a policy on the Kubernetes master, and then Kubernetes has the ability to push this policy to external policy controllers, and the policy controllers will implement the security policy down on every host that is deploying Kubernetes pods. In this architecture, the Kubernetes upstream tree includes the capabilities for the API, for storing the API objects, and for providing the notifications; it doesn't include the policy controller. The policy controller is something that is usually delegated to SDN systems, and there are several solutions out there that implement this type of network policy using some of the networking techniques I described earlier. As you will see, with Trireme we provide a native implementation with Kubernetes, without an external policy controller. That's already simplification number one, because there are fewer components to manage to begin with, but we also provide better security by doing this. So what is an example of a network policy in Kubernetes?
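As a reference point, a policy of the kind described next might look roughly like the following under the beta API of that era. This is a sketch only; the `apiVersion`, resource names, and labels are illustrative, not taken from the talk:

```yaml
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: backend-policy
  namespace: demo
spec:
  # The pods this policy applies to
  podSelector:
    matchLabels:
      segment: backend
  ingress:
    # Accept traffic on port 80 only from frontend-labeled pods
    - from:
        - podSelector:
            matchLabels:
              segment: frontend
      ports:
        - protocol: TCP
          port: 80
```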
I'm showing here a standard API example, a YAML or JSON file that describes a network policy. What we describe in this API object is that there is a network policy, it has some metadata, and then there is a specification, and the specification has essentially two components. It has the allow component, which says from which pods a set of pods is allowed to receive traffic; the pods are defined through a label selector, so for example here we say that we're willing to accept traffic from all pods that are on the front-end segment. It also says on which ports we are willing to accept traffic; here, for example, we say we're willing to accept traffic on port 80 from these other pods that are tagged with these labels. And then, finally, there is a pod selector that says to which pods this policy applies; in this particular example, we say that this policy will apply to all the pods that are labeled as segment backend. So this is a very generic policy framework that essentially allows somebody to define, through pod selectors and label selectors, sets of pods that can communicate on particular protocols and ports with specific destination pods. You can understand now why this type of policy enables a security framework in Kubernetes, a very strong security framework, because now applications can be separated arbitrarily based on the user requirements. As I mentioned, this is an API, a policy definition. The question now is: how is the policy going to be implemented underneath, in a Kubernetes cluster? In order to understand how this can be implemented, and there are several networking techniques to achieve it, one has to go back and see a little bit how networking is done in containers, and in Kubernetes more specifically. As I mentioned earlier, one of the techniques that can potentially be used here, and what Docker provides with libnetwork, is the ability to create multiple virtual networks that are attached to every host. This is a classic technique that actually comes from the OpenStack inheritance, if you want, where OpenStack has similar mechanisms: essentially there are multiple virtual subnets, the virtual subnets use VXLAN to isolate themselves, and as containers get activated, they get activated on different virtual subnets. Now, although this can provide some type of isolation, it turns out that, first of all, it's not very compatible with Kubernetes, because Kubernetes has a different model, and in addition it creates a lot of complexity. It creates complexity because in order to send traffic between the subnets somebody needs gateways, somebody needs complex routing tricks; alternatively, somebody can put multiple interfaces on every pod, and this creates other complexities around applications needing to know their default route, and so on. Maintaining state for the subnets can become very complex; a lot of solutions use routing protocols like BGP to achieve it, in other solutions somebody can potentially use etcd, but maintaining convergence for this becomes a challenge. So although it's a viable solution, it ends up being a very complex solution, and exactly for this reason the Kubernetes and OpenShift models are actually much simpler than that. The classic Kubernetes networking model says that for every host we allocate a subnet, so we give, for example, a /24 or a /26 subnet to every host; every container, every pod on the host gets an independent IP address out of that subnet; and then we have static routing throughout the data center infrastructure that forwards traffic destined to these particular pods over to the specific host. The infrastructure becomes extremely simple; it's actually very static, and I'll show you an example of this infrastructure, but there are no route updates; nothing needs to happen in the network every time a
container is started or stopped, every time a pod is started or stopped; everything on the network is pretty much static. The network is very simple: it's just one additional route per host. So everything is very nice; however, every pod in this architecture can talk to every other pod. There is essentially no isolation, no security, no partitioning; every pod or every container can directly interact with every other pod, irrespective of whether they belong to the same Kubernetes namespace or to different Kubernetes namespaces. So there is no isolation in this type of approach, and this is the challenge the Kubernetes network policy tries to address: can we add a form of isolation for the workloads on top of this simple infrastructure, without making the infrastructure very complex? An example of this infrastructure is shown in this picture, where I have a leaf switch or a top-of-rack switch and I have hosts, and every host just gets a subnet; they get a /24 subnet out of a much larger subnet, and every pod gets associated with an IP address out of that subnet. Everything in my network is layer 3: load balancing is layer 3, high availability can be done with layer 3 techniques. I don't need to worry about bonding, I don't need to worry about layer 2 tricks, I don't need to worry about anything in my infrastructure. This is the simplest form of infrastructure for supporting a Kubernetes cluster, and it works equally well in an environment like Google Compute Engine, or in an environment like AWS, as well as on a physical infrastructure. So, since the infrastructure is so simple, how do we deal with the network policy, and what are the challenges there? The classic approach, the approach most networking techniques use to solve the segmentation problem, or the network policy problem, is access control by using ACLs. In the case where you try to use ACLs, if you assume that application 1 consists of a number of pods, let's say 100 pods, and application 2 consists of another 100 pods, then the network-based solutions end up introducing ACL rules that say this particular pod can talk to pod 1, and it can talk to pod 2, and pod 3, and so on. Essentially, in order to implement application segmentation using ACL rules, somebody ends up deploying an n-squared number of rules: if I have n pods, I will end up deploying a quadratic number of ACL rules. This can work for an application with 5 or 6 pods, but as you scale up and you have hundreds of applications with hundreds of pods, it becomes a significant convergence challenge. To illustrate why, assume that an application scales up: application 2 scales out and adds a pod. The moment we add this pod for application 2, what we need to do in the infrastructure is go back and update the ACL rules on all the other pods, saying that now this pod can talk to the new pod, and this other pod can talk to the new pod, and so on. This creates significant challenges, both in terms of control-plane complexity and in terms of the number of ACLs somebody has to deploy. If you look at how this is done on a Linux host, it's usually done using something like iptables, and iptables has this locking property that every time you write something you need to read the whole table and write it back. As you add a very large number of ACLs, the update of iptables can take several hundred milliseconds, so this becomes a problem: the control-plane convergence can take a lot of time. Assuming you have thousands of pods, or a large cluster, telling every host what new ACLs to add or remove every time a pod gets activated becomes a big control-plane problem. So we fundamentally think, and we have seen this in real deployments, that such techniques cannot really scale, and they don't match, if you want, the intent or the mentality of cloud-native, scale-out applications. So what is the fundamental problem here? And this is a bit of a philosophical point we wanted to discuss: there is an inherent assumption everywhere that network reachability means authorization. Essentially, if two pods can reach each other, through ping let's say, then this means they are authorized to talk to each other. What we realized is that we actually need to decouple these two very important concepts. Network reachability is something that is related to the network; it has to do with how the network is set up, and it can be completely decoupled from how pods are authorized to exchange traffic with each other. The authorization problem is essentially an application problem, not a networking problem, and by decoupling the two we can come up with much simpler mechanisms that allow us to achieve security while maintaining a simple network implementation. So what is the solution? The solution is actually very simple: every time we have a service (and by service it can be a Kubernetes service, or a set of pods implementing the service) that is trying to interact with another service, we introduce an end-to-end authentication and authorization step between the services. The underlying assumption then becomes that the internet is the network: we don't care, in other words, what the network is. The services can be in different clusters or on the same cluster; somebody can even deploy Kubernetes federation with completely isolated clusters; the clusters can be in the same data center or in different data centers. We don't care where they are. What we care about is that every service has an identity, or a form of an identity, and every
interaction between services is authorized based on the identity of the services. The mechanism for introducing this end-to-end authentication and authorization is transparent to the workloads, so we don't need to modify the workloads in order to achieve it. Now, the reality is that a lot of the cloud-scale providers are actually using very similar techniques inside their data centers for isolation, because they have realized that the networking techniques just cannot scale to the cloud-native environment. So how does Trireme now get into the picture and provide this transparent application authorization step? The Trireme approach is actually very simple. Every application, every pod if you want, that is activated through Kubernetes is usually associated with a set of labels. When we create a pod through OpenShift or through Kubernetes, we associate a set of labels: the application type is production, the instance is web, the image is nginx. These are classic labels, information that we can immediately obtain, whether from the YAML file that describes the pod or from the Docker metadata, for example. So every time a pod gets activated, we receive these attributes that describe the pod. Similarly, for every pod that is activated for application 2, we receive the attributes that define this other pod. And then a Kubernetes policy, if you think about it, says for example that these pods in application 2 are willing to accept traffic from other pods that are tagged with the label instance equals web. That's essentially a Kubernetes policy, and it needs to be loaded only once. The way Trireme works is that when a pod initiates a TCP request to access another pod, we grab this metadata, essentially the attributes that describe the pod, we sign them with a local key, and then we attach them as payload on the SYN packet that initiates the communication between the two entities. So now this identity of the pod, a signed and verifiable identity, is transmitted over the network. It doesn't matter what network technique is used to transmit it, whether it's a tunnel or not a tunnel, whether it's the internet or whatever; the SYN request is going to arrive at the host on the other side. When it arrives at the host on the other side, we validate the signature of the identity that has been sent to us, we check the identity locally against the policy we have from Kubernetes, and if the policy allows this communication to happen, we propagate the SYN packet to the application. In the reverse direction we do the exact same thing, and we'll discuss the protocol in a little more detail: when the receiver sends back the SYN-ACK packet, it carries the identity, the set of labels that describe the receiver, signed; the transmitter looks at that identity and validates: is this somebody I really meant to talk to, and is this somebody I am really allowed to talk to? If the validation succeeds, the handshake completes, and that's how a connection is established. Now, the big benefit of this is that we have completely decoupled this security policy, this end-to-end authorization, from the underlying network connectivity. We don't look at IP addresses anywhere in this infrastructure in order to achieve it; we don't care if there are 7 layers of network address translation in between; we don't care if one pod is in a cluster in AWS and the other pod is in a private cluster. We really don't care about any of the network details, and that's exactly the operational simplification: instead of doing security based on IP addresses and port numbers, we do security based on identities. So, for someone that wants to understand the operational side: if I add a node
on the system, this is an O(1) operation: I don't need to go and update any other nodes or anything like that. When the node is activated, we know the labels of this node, we know the policy for this node; we don't need to go to any other host and install any ACLs in order to enable communication to this node. Now, going into a bit more detail for somebody that wants to understand how this basic protocol for mutual authorization works: it is actually overlaid on top of the TCP Fast Open option. TCP Fast Open is something that was released fairly recently; it's been part of the Linux kernel for maybe the last couple of years, and what it essentially allows you to do is send payload on the TCP SYN and SYN-ACK packets, so you don't need to wait for the full TCP negotiation before you send payload. What we do in our case is use this capability, but instead of sending application payload on the TCP SYN and SYN-ACK packets, we exchange authorization information. The way we do it is: we take the labels, we call them attributes, and we also create a random number, and I'll explain why we create a random number. We take the labels and the random number, we sign them on the transmitter, and we piggyback them on the TCP SYN packet using the TCP Fast Open option. This arrives at the server; the server validates the signature and checks the attributes against policy; if it's authorized, it returns a SYN-ACK packet. The SYN-ACK packet again carries the attributes, of the server in this case, together with a second random number, and all these attributes and the random number are signed. The transmitter is now going to validate that the signature is correct, it's going to validate the attributes against the policy again, and then the transmitter is finally going to send, on the ACK packet, a signature over the two random numbers. Why do we do that? By signing these random numbers we can actually achieve protection
against man-in-the-middle attacks, replay attacks, or any other mechanism where somebody sits in the middle and spoofs packets, and note that this is an attack vector that is simply not covered by any technique based on ACLs. Now, interestingly enough, for somebody that is a little bit of a history buff: a similar type of technique has been possible for several years by using the CIPSO option, out of the IP options, obviously without the signatures and without the flexibility that these environments offer. And somebody that is very familiar with Red Hat systems will know that SELinux has some network extensions that can actually use the CIPSO option, although I don't think many people use them other than, potentially, some government organizations. Finally, the payload can optionally be encrypted, so the solution can also do transparent encryption if we decide by policy that we want transparent encryption. So, the benefits of this approach: there are no complex ACLs; the network is very simple; it doesn't require anything from the switches or routers in the network; and the threat model is much stronger than a simple firewall or a simple ACL-type mechanism, because it protects against man-in-the-middle attacks, replay and spoofing attacks, and TCP sequence number attacks. On the implementation front, it is implemented by encapsulating these tokens as JSON Web Tokens. Essentially, we use an OAuth-like mechanism for processes: if you think about it, JSON Web Tokens are the way identity information is communicated in OAuth, but instead of using them at the application layer, we use them at the infrastructure layer, for processes. It can be configured either with a pre-shared key or through a PKI infrastructure using elliptic curves. Now, when it comes to Kubernetes, the implementation and integration of Trireme with Kubernetes is actually very simple. A Trireme pod can be deployed using a Kubernetes DaemonSet on any Kubernetes cluster, and the Trireme pod then goes into the middle, traps the SYN, SYN-ACK, and ACK packets of any communication between pods, and introduces the end-to-end authentication and authorization on any interaction between pods. There is no controller in this implementation, no policy controller or anything like that. The Trireme instances that get deployed through the Kubernetes DaemonSet subscribe to the Kubernetes API, they get the policy-update and pod-creation notifications, and based on these they implement the end-to-end authentication and authorization policy. So, just to summarize the benefits of the solution: we essentially decouple security from the network. This allows us to maintain a simple network, without infrastructure complexities, and at the same time get strong security, and that's kind of like having your pie and eating it too, essentially. The approach is very simple, it doesn't require any shared state, it doesn't have any control plane if you want, and for this reason it's very scalable, and very easy to deploy and use. In order to illustrate some of these things, I will pass the screen over to Bernard now, and Bernard is going to show you a demo of how you can deploy the Trireme security solution in a Kubernetes cluster. So, Diane, if we can switch over to Bernard now, please. Okay, can you see me? Hi, everyone. What I want to show you today is basically an illustration of what Dimitri just explained. So I'm going to use, not quite sure, there you go, here we go, okay. So I want to demo basically what Dimitri just explained, and to do this I'm going to use a typical Kubernetes cluster. My cluster is running in AWS; it's a typical kube-aws cluster, and if you use OpenShift it would be the same thing underneath. Okay, so just to illustrate this, I'm going to use kubectl, and I currently have two nodes ready to be scheduled and one controller node. The idea is to show the
typical steps needed to deploy Trireme with Kubernetes, then start a couple of pods, implement network policies, and show how Trireme makes this super simple. So the first thing I want to do, I mean the first thing you will need to do if you want to deploy Trireme, is to deploy Trireme itself. To do this we implemented what we call a DaemonSet; this is standard Kubernetes, and it's a construct that allows you to deploy one pod, and only one pod, on each node that is part of the cluster. As Dimitri explained, what you want is the Trireme agent running on each node, so it can trap the SYN, SYN-ACK, and ACK packets of each TCP connection that is initiated; those packets are redirected to the Trireme agent, and that's why you need one of those agents on each node. And again, we don't have a central policy controller, so that agent is the only thing you really need to install, and that's made super simple by just having to deploy this simple DaemonSet. So this is basically the DaemonSet: it deploys the Trireme Kubernetes image, and we give it a couple of parameters; this is pretty standard. I'm just going to do a kubectl create, and by the way, as Dimitri said, we support encryption; you can do it with a pre-shared key or a PKI. To keep it simple, I'm going to use the pre-shared key DaemonSet. Here we go. And now, if I check my pods, I can see Trireme started to schedule, I mean Kubernetes scheduled one Trireme agent on each running node. So right now we've got a Kubernetes cluster up and running with Trireme installed; you just have to start pods and implement network policies, and Trireme will automatically pick up those constructs and start enforcing those network policies in the background. Yes, so here I created a new namespace, which is my demo namespace, and one of the things you need to do, and this is following the network policy specification in Kubernetes, is define your namespace as default-deny; if you don't do this, by default everything is allowed. So on my demo namespace I simply added an annotation, because this is still beta today, and that defines that by default every packet will just be dropped unless you explicitly allow the pods to communicate, and the allowed flows will be defined in your network policies themselves. Okay, so the next thing I want to do is start a couple of pods, so I'm just going to start a typical three-tier application. In a three-tier application you typically have an external tier, which is completely outside the system, this pod here; you have a frontend pod, which will typically represent your load balancer; and then, let's say, you have a backend pod, which is your business backend, and it should never be allowed to communicate directly with any external pod. So what you want to do is start those three pods, and then I will have policies that implement a very simple rule: in order to reach the backend, only the frontend should be able to open that TCP connection; basically, any external pod should not be able to directly reach the backend pods. Okay, and again, this is made very simple by using the Kubernetes and OpenShift labels. Everything is a label in Kubernetes and OpenShift, so instead of saying that you match explicitly on a specific pod, you match on a set of labels. In this case we will basically match on the whole business backend, and you will specifically define that only the frontend label, so the word frontend, will be allowed to communicate with that backend. Okay, so in order to do this I'm just going to go to my policy examples, and again deploy my namespace and my three pods. There we go. So at this point all of those pods are running in my namespace, and let's do a very simple test, because by default we defined that the demo namespace should not allow traffic between the pods in that namespace: I'm simply going to try to communicate between frontend and backend, and this will be completely denied. Okay,
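For context, in the beta network-policy model Bernard is referring to, default-deny was switched on per namespace with an annotation rather than a policy object. A sketch of what that namespace definition might look like (the namespace name is illustrative, and the exact annotation key belongs to the beta API of that era):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo
  annotations:
    # Beta-era annotation: drop all ingress unless a policy allows it
    net.beta.kubernetes.io/network-policy: |
      {"ingress": {"isolation": "DefaultDeny"}}
```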
So let me just grab the IP of the backend pod. There we go — you can see the labels and a specific IP. I'm just going to do a kubectl exec, which should let us issue a simple wget to my backend pod. There we go, and it's denied. This is Trireme right now blocking those TCP connections; later on, when I create the network policies and we explicitly allow that TCP connection to be opened, you will see that it gets allowed.

OK, so let me go back and show you what the Trireme policies are — those are just the standard Kubernetes network policies. I'm matching on the web frontend as a label — so not as a pod, but as a label — and I'm describing that the frontends should be able to communicate with each other; that's this policy. Then I have my backend policy, describing that when you match the business backend label, both the frontend and other backends should be able to open connections — that is, TCP connections — to the backend. And we have an external pod to demonstrate the other side: from external, which is outside of the backend tier, you should never be able to open a TCP connection to your backend.

So to do this I again need to create — the API is a bit slow — did I lose my... yeah. Well, if I don't have access to my cluster, I will probably not be able to get the demo to you, because it's a live demo. Should I do it on your cluster? But in this case it's my AWS one... I also lost it... wait, I got it back, I think. Let's see — yeah, here we go. OK, perfect. Live demos! There you go.

OK, so right now I've created all my network policies in Kubernetes. Here we go. And now if I go back to my test, it's working: by explicitly allowing it in the network policy, we allow only those specific flows from the frontend to the backend to be opened. But if I go, for example, to my external pod and try the same thing — there we go — it's still denied. So what you see here is Trireme implementing those network policies in a very efficient way: you don't have any central controller; each Trireme agent on each node opens a watch — basically an API connection to the Kubernetes API — and only gets the information related to its local pods. So we have a completely distributed, efficient implementation of those network policies. That's pretty much it for the demo.

All right, well, thank you, Bernard. Perhaps, Dimitri, you could go back to sharing your screen so that we can have the end slide up while asking some questions — and hopefully your last slide has your contact information on it; if not, start typing. Seriously, I think Marc had a question or two, so that's what I might do now, unless there are some other points you wanted to make, Dimitri. Just one point: the code for all of this is available on GitHub, the information is on this slide, so feel free to visit it. There is also a Slack channel — if you have questions, please don't hesitate to ask. So right now Bernard is still sharing his screen, so he has to stop and you have to share yours, Dimitri. OK, hold on, just a little BlueJeans technical step — we get a beautiful shot of Bernard's face right now — and go back to your end slide. And Marc, I'm going to unmute you; if you would like to ask your question, you just have to unmute yourself.

Perfect, can you all hear me? I can hear you, Marc. Sweet. This is cool stuff, guys. The policies and the identities — is it all static right now, through the YAML configuration, or is there some way to either assert the identities or store them in some kind of repository? So, in the implementation that is out there right now — at least the Kubernetes integration — the identity is auto-created based on the Kubernetes pods. However, the way that Trireme is designed,
if somebody looks at the library, it allows what we call an external metadata extractor — we call this function a metadata extractor — and this extractor can be anything; it can get the identity from anywhere it likes. You can pretty much run a script there, or integrate the library with other code and fetch other things. It's very open and modular as to where the identity comes from and what exactly the identity is. The key thing it does is grab all this identity information, encapsulate it in a JSON Web Token, and sign that JSON Web Token — so the identity is ultimately captured as a signed JSON Web Token.

OK, because the kind of use case I'm thinking something like this would be really interesting for is: say you had a Mongo database and you want to control access to it. Something like this would be really cool — you could have a team whose whole job is Mongo databases, deploy it on Kubernetes because Kubernetes is awesome, and control access to it by authorizing applications via a service like this, locking down who can get to it. If there was a way to do that dynamically — and, even more importantly, on the de-authorization side: "oh, your pod is doing something weird, I don't like that, I want to stop that access." That would give you a layer of protection, even beyond something like a service account at the database level, so if somebody compromised that pod, you could just cut them off at this layer without shutting down the entire application.

Exactly — you got the point. Those are the exact use cases we're thinking about. Right now, in the Kubernetes integration, this can be done by modifying the labels. One thing the Kubernetes network policy doesn't have yet — it's a whitelist policy, right, so the one thing that is missing, and we hope we can work with the community on it, is that you cannot fence off an application with this policy; you have to explicitly say what is allowed. Potentially you can do some fencing-off by using the not operator, but that can get a bit tricky. So the policy model can be extended, but I think it already gives you a lot.

And as you said, that's one of the classic examples. I think last week somebody discovered that a whole set of — you mentioned MongoDB — MongoDB databases had been pwned: somebody removed the data and is asking for ransom. Why? Because the MongoDB databases were listening on 0.0.0.0, and it was just too complex to deal with ACLs and all those kinds of things. With a solution like Trireme, the entities that can access MongoDB are explicitly authenticated and authorized based on their identities at this layer — at the infrastructure layer, if you want — so the developers don't need to worry whether they deployed MongoDB with the 0.0.0.0 option or with the right IP address. And deploying MongoDB with the right IP address when you use Kubernetes is, for example, quite tricky to begin with.

Mm-hmm, yeah, exactly — that's the exact use case I was thinking of. Yeah, and the MongoDB apocalypse is top of mind for a lot of people; it's a great example. And you know, it's no fault of MongoDB — it's just that these guys get the publicity now. I feel bad for them, but I think it's good, at least, that it exposed a weakness in people's deployment models. So I think this is a good thing: it raises awareness and gives us a way forward. Did you have another question, Marc? No, that was the big one — like I said, it's really cool stuff.

Yeah, it's actually really interesting. I have to frankly say that I didn't have a very good understanding before the talk of what you were trying to accomplish here, and it seems to me this
is one of those projects that might benefit from being under the auspices of something like the CNCF. Have you thought about approaching them and having the project there, so that — I'm going to say it wrong, I'm sure — Trireme would get more eyeballs and more resources working on it from Kubernetes and all the other cloud-native folks? Yeah, this is actually something we are in discussions about; we're in the process of doing the paperwork to join the CNCF ourselves first, and we'll take it from there.

Perfect. It does seem like it could really benefit from having more Kubernetes engineering resources looking at it. Have you done integrations with other orchestrators, or is this strictly something that runs on top of Kubernetes? Are you working with Mesos? It will actually work with pretty much anything. We have done integrations with Mesos, and it can even work with Docker Swarm if somebody wants to use it there — it doesn't really depend on the orchestration framework. The advantage Kubernetes has is a mechanism to express network policy; Mesos is still figuring out how to express this kind of policy, but I think they will follow shortly, and I'm sure Docker Swarm will too. And by the way, the way it's designed, it doesn't really depend on Docker either — somebody can use OCI or any other container format and achieve the same functions.

That's why it really seems to me like something that's very useful for lots of cloud-native applications and folks in this sphere, so I'm looking forward to seeing more of it, and hopefully we'll see some of you at KubeCon EU in the coming months — I think KubeCon is in March, in Berlin — and I'm hoping we'll see some of this and where you've gone with it by then. We'll also hopefully get you to come and talk at the upcoming OpenShift Commons gathering, which will be in Berlin as well, co-located with KubeCon. So hopefully we can get you talking there. And I think what I'd like to do is get you, Aporeto, and others more involved in a security special interest group within the OpenShift Commons, and see if we can get some more eyeballs from the OpenShift ecosystem on this project as well.

So I really appreciate you taking the time today. I'm looking to see if anybody else has any other questions — just raise your hands. Otherwise, I'm sure you can find the contact information on their GitHub links or on their website for this project, and we look forward to you being incredibly successful. So thanks again. Thanks, Diane, for organizing this. All right, you take care. Thank you. Bye.
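For readers who want to picture the DaemonSet install step Bernard walked through at the start of the demo, a minimal sketch might look like the following. The image name, namespace, secret, and environment variable are placeholders for illustration, not the project's actual manifest.

```yaml
# Hypothetical DaemonSet running one Trireme agent per node (beta-era API)
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: trireme
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        app: trireme
    spec:
      hostNetwork: true            # the agent must see the node's SYN/SYN-ACK traffic
      containers:
      - name: trireme
        image: example/trireme-kubernetes:latest   # placeholder image name
        securityContext:
          privileged: true         # required to intercept packets on the host
        env:
        - name: TRIREME_PSK        # hypothetical variable for the pre-shared key option
          valueFrom:
            secretKeyRef:
              name: trireme
              key: psk
```

Because it is a DaemonSet, the scheduler places exactly one agent pod on every node, which matches the design described above: no central policy controller, just a per-node agent watching the Kubernetes API for its local pods.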