So, I'm JJ, and the person who was supposed to be presenting, Pavlovsky, works on this project and did all the preparation, so he has all the context. I wanted you to hear from him; if for some technical reason that doesn't work out, then you get to hear my version of the story.

So, a short description of what we're trying to accomplish and what we think needs to happen. We're calling it Padme. Today when we talk about hybrid clouds, we're talking about having some resources in the data center, some in a cloud, some across multiple different cloud providers. We call this problem heterogeneity. And in our current context this problem is not limited to clouds; it exists up and down the software stack. Chances are, for example, if you've acquired a company, their technology is built on something that isn't what you've built in-house. So now you're dealing not just with a different cloud provider, or their own data centers, or whatever the case may be; maybe they have a different flavor of web server than the one you're using, or an older one. If you've got network infrastructure of any kind, chances are the pieces are all configured differently. Forget about your switch being configured the same way as your router; every vendor has its own proprietary way of configuring all this stuff.

Why is this important in a security context? Because all of this is difficult to configure and all configured differently, it's difficult to understand what's going on at any given point in time, and it's really hard to roll out a policy across this myriad of different things. That's the heterogeneity problem, and everybody basically has it: what you've got going on in Google Cloud is different from what's going on in Amazon.
In my experience, when you start to configure something, it's basically three minutes with the code and then three hours to three days with IAM, messing around with various other things as you try to figure out which permissions you need to make everything work together. So the heterogeneity problem is still a problem in that context too.

Another problem, which people are slowly becoming aware of, is the temporality problem. It has always existed, but we haven't really thought much about doing anything with it. Consider a case from what we might call the old economy, and I don't mean that pejoratively. If a company has a batch job that, say, produces billing reports and runs from midnight to 2am, it probably needs a bunch of permissions. Those permissions were probably configured once, and now they're open the whole time the system is running, even when the reports aren't actively running. Those are security holes.

Take a different example: you spin up a temporary QA environment for testing. The policies for that environment have to come and go with it. They should be configured programmatically; no one should have to care; it should just happen and work. And when the environment is no longer needed, they should be torn down. Now, you could build all of this on top of IAM, and I'm assuming some of you have started that investment and maybe gone part of the way, but you're not the only ones who need that sort of infrastructure. Everybody should be able to leverage it. You should all be able to say: I need this resource on a temporary basis; okay, done, gone; and here's your policy.
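The batch-job example above can be sketched as a permission that carries its own activity window, so it is closed outside the hours the report actually runs. This is a hypothetical illustration; the class and field names are mine, not Padme's actual API:

```python
from datetime import datetime, time

class TimedGrant:
    """A permission that is only active inside a daily time window."""

    def __init__(self, resource, start, end):
        self.resource = resource
        self.start = start  # e.g. time(0, 0) for midnight
        self.end = end      # e.g. time(2, 0) for 2am

    def is_active(self, now):
        """True only while the window is open; closed the rest of the day."""
        t = now.time()
        return self.start <= t < self.end

# The billing batch job from the example: permissions open midnight-2am only.
billing = TimedGrant("db:billing", time(0, 0), time(2, 0))
print(billing.is_active(datetime(2024, 1, 1, 1, 30)))  # inside the window
print(billing.is_active(datetime(2024, 1, 1, 9, 0)))   # outside the window
```

The same shape covers the temporary QA environment: create the grant when the environment spins up, and let its window (or an explicit teardown) close it again.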
If I'm setting up a new service for release, I should be able to configure the policies in advance, make sure everything works, and then turn them off until the service is ready to go live. When it is, I should be able to bring them back up: I've scheduled it all for deployment on Tuesday, and it should just happen. I shouldn't have to worry about it; I have better things to do than babysit my system. Those are the sorts of things we need to deal with in terms of temporality.

There's another part of temporality that comes through once you start thinking about that aspect: to a large extent, we don't really think about how policy distribution factors in. In a distributed world, it takes time for policies to get from point A to point B. You have to understand how a policy is going to behave if there's a network partition; is the policy still active? If you want to revoke a policy, how is that going to work? How long will it take? Most of us don't think about these things yet, but we will, because once you can create policies on demand, what "on demand" really means becomes important.

These are the problems we are trying to address with Padme. We're aiming to build a reference architecture that provides this kind of global IAM that anyone could use. Our goals in this context, and I'm just going to read them off here, are: provable, composable security; simplicity; ease of use; and well-defined behavior in a distributed environment. That last one, given what I just said, should be fairly apparent: we want to know when policies get where they're going and how they're distributed, we want that behavior to be understood in terms of the CAP theorem, and we want to be able to reason about it.
That reasoning is important because, for example, the reason RBAC is so pervasive is that it's intelligible. You define a policy, add attributes, and it's a static-time binding; everyone can look at it and understand what's going on and what should happen as a result. In a dynamic world, we need the same thing: our policies must be intelligible. And once we've defined a policy, we need to be able to put it together with another policy like a Lego brick, in a fashion that's also intelligible. We need to know how they're going to behave together, how they're going to be distributed, and what they're going to do if something happens. We need to be able to test the effects of these policies on our traffic before we deploy them, and to see after the fact what may or may not have happened. Those are the issues we would like to address, condensed into this short version.

With respect to our approach to this problem, I'm going to talk a little about the approach itself and its details: the basic understanding of what we mean when we say "policy", our architecture, and then I'll come back to RBAC to tie everything back together again.

The way we've chosen to approach these problems of temporality and heterogeneity is, first, to define a common expression syntax, so that we can work in an abstract universe away from a lot of the details that live in the specifics. How do you configure this Kubernetes container to do foo? How do you configure nginx to do bar? What exactly is the ipchains syntax? Oh, and I've got a hardware firewall here somewhere, or my router is configured to do the same thing.
How do I make that apply the same policy I've got here in ipchains? A common expression syntax. The next thing, which we've already alluded to, is composable policies: policies that fit together like Lego bricks, so that you can understand what's going on and reuse a policy you know works. You know this policy blocks port 80; great, fine, use it as a building block of a larger policy that covers, say, port 80 and only this service URL, and off you go.

A distributed architecture falls right out of this: the architecture for policy deployment and policy enforcement must be distributed, so that you can address the issues that arise in distributed systems. It's almost tautological, but that's where we are.

Last but not least, component plugins. We have a common expression syntax, but we are obviously not capable, by ourselves, of handling every piece of hardware and software out there. We may handle some of the major ones, but for this to fulfill its promise, you have to be able to bring your own, and other vendors have to be able to bring theirs and proceed from there. So a component plugin architecture is required. Now, we understand that our common expression syntax may say one thing, but if you botch the configuration of the plugin, you've negated its impact; that's a necessary evil, and we have to be careful with it. Our assumption is that because we've designed this distributed system to handle these cases, we will at the very least be able to correct the problems we have en masse. Everybody's going to have this problem to one extent or another; assuming the underlying implementation is correct, you will have your provable security.
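The Lego-brick idea above can be sketched with policies as predicates that compose with AND and OR. This is a minimal illustration in Python, not Padme's real expression syntax; the `Policy` class and the request shape are assumptions of mine:

```python
class Policy:
    """A policy as a predicate over requests, composable with & and |."""

    def __init__(self, check):
        self.check = check

    def __call__(self, request):
        return self.check(request)

    def __and__(self, other):
        # Both bricks must pass for the combined policy to pass.
        return Policy(lambda r: self(r) and other(r))

    def __or__(self, other):
        # Either brick passing is enough.
        return Policy(lambda r: self(r) or other(r))

# Reusable building blocks you already know work:
port_80 = Policy(lambda r: r.get("port") == 80)
service_url = Policy(lambda r: r.get("url", "").startswith("/billing"))

# A larger policy built from known-good bricks: port 80 AND only this URL.
combined = port_80 & service_url
print(combined({"port": 80, "url": "/billing/report"}))  # True
print(combined({"port": 443, "url": "/billing/report"})) # False
```

The point of the combinators is exactly the intelligibility the talk asks for: each brick is understandable alone, and the combined behavior follows mechanically from the operator.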
So: the common expression syntax should be intelligible and should deal with the heterogeneity problem; composable policies should be intelligible; the distributed architecture deals with both heterogeneity and temporality; and the component plugins deal with heterogeneity. We'll deal a little more with temporality in a minute when I talk about what actually goes into a policy, but first let's talk about the common expression syntax.

To think about the expression syntax, we first need a common model for where we apply policies. We call this the enforcement surface onion. How many layers this onion has in its final implementation, I don't know; this is an example of what's possible. Basically, it tells you that your policy infrastructure has to be able to address any layer where you can make a policy decision. You might ask: why is the network physical layer in there? What does that mean? Well, turning off a port on a switch can be a policy decision. It definitely has security implications, and it is sometimes your last and only line of defense, short of ripping the plug out of the wall, which, I have to admit, sometimes happens. These are the sorts of things you have to be able to handle. Even though you may be interested particularly in, say, the container aspect, or in dealing with all the network gear in your network, or in something else, we would like our system to be able to address all of these layers, even though some of them may never become the common use cases. We want that generality.

But let's take things that are more common and more immediate. Network protocol: if you want to define something that sits in ipchains, that's the layer of the onion you'd be dealing with.
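The onion layers from the examples above and below can be sketched as an ordered enumeration. The layer names and their count are illustrative only; the talk is explicit that the final layering is not settled:

```python
from enum import IntEnum

class Layer(IntEnum):
    """Example enforcement-surface layers, innermost first.
    Names and count are hypothetical; the real onion is not final."""
    NETWORK_PHYSICAL = 0   # e.g. turning off a switch port
    NETWORK_PROTOCOL = 1   # e.g. ipchains/firewall rules
    HARDWARE = 2           # e.g. per-process memory limits on a box
    CONTAINER = 3          # e.g. deciding where a container runs
    SERVICE = 4            # e.g. disabling a particular endpoint
    APPLICATION = 5        # e.g. which MySQL tables are queryable

# Policies at an inner layer are considered before outer ones,
# matching the idea that the switch port is the last line of defense.
print(Layer.NETWORK_PHYSICAL < Layer.APPLICATION)  # True
print([layer.name for layer in sorted(Layer)][0])  # NETWORK_PHYSICAL
```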
If you wanted to set, for example, physical limits on the amount of memory a process can take on one of your boxes: the hardware physical layer. Deciding where your container is going to run: probably the container layer. Disabling a particular endpoint: possibly the service layer, though that might fit in the network protocol layer; we're still working some of those details out. If you wanted to configure which tables and which queries are possible in MySQL: probably the application layer. And it goes on from there.

Our intent is to give this layering a very definite structure, in some cases within a given layer as well. In the networking layer, for example, it's fairly obvious what the precedence is. But there's also a final arbiter, which I think is going to end up being alphabetical, for which policy gets applied first when all else is equal. You may not like the ordering, but at the very least you need to understand what's going on, so you get that intelligibility: so you can reason about whether a given request will pass through the system, and which policy it will hit. In the grand scheme of things, the way this will work is that you define a request in terms of our language, pass it through the policies, and get back an up-or-down answer as to whether it made it.

That said, let me move on to what comprises a policy in our universe. In our view, a policy has a set of rules that identify resources, where a resource is something that can be accessed. And because we expect these policies to be updated programmatically, and that other programs will be making the requests, a policy also identifies who's making the request.
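The up-or-down evaluation with an alphabetical tie-break, as described above, might look like the following sketch. First-match-wins and default-deny are my assumptions here, chosen only to make the ordering observable; they are not stated in the talk:

```python
def evaluate(request, policies):
    """Walk policies in name order (the hypothetical final arbiter:
    alphabetical) and return the first matching decision, else deny."""
    for policy in sorted(policies, key=lambda p: p["name"]):
        if policy["matches"](request):
            return policy["decision"]  # "allow" or "deny"
    return "deny"  # assumed default when nothing matches

policies = [
    {"name": "allow-http",
     "matches": lambda r: r["port"] == 80,
     "decision": "allow"},
    {"name": "deny-admin",
     "matches": lambda r: r["path"].startswith("/admin"),
     "decision": "deny"},
]

# "allow-http" sorts before "deny-admin", so a port-80 /admin request
# hits the allow rule first; this is exactly why the ordering must be
# intelligible, even if you don't like it.
print(evaluate({"port": 80, "path": "/admin"}, policies))  # allow
print(evaluate({"port": 22, "path": "/home"}, policies))   # deny
```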
I don't remember the exact terminology here, but the requester is itself a resource, and the requestee will basically be able to say which requesters are allowed or disallowed, and so forth. That's the deal with the rules. The rules are, of course, composable: I should be able to say "this IP range and this IP range", or "this IP and TCP", and so on within a given layer. That's what allows you to build more complex policies without necessarily going from a little policy that does port 80 to another policy that does something larger; it just gives you more granularity. We'll get to bundling policies together in a minute.

Provable identity: wherever possible, the system uses provable identity, whether that's something like SPIFFE, OAuth, or Kerberos; pick your poison. Obviously, if you're using SSL, Kerberos, OAuth2, and so on, those systems are built around a model of per-connection security, and if you're doing something like a muxer, we have to provide a way for you to do per-request queries. But that's the universe we expect to be moving into. Obviously, at a per-IP-address level you're not going to have provable identity, so there you're no better and no worse off than you are today.

Because we want to address temporality head on, time must be a first-class citizen. Time as a first-class citizen allows you to turn policies on and off at specific times, but it also allows you to deal sanely with propagation delay. It lets you engineer around how much you want to pay for your policy distribution architecture: need something that's guaranteed consistent?
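The within-layer rule composition described above ("this IP range, or this IP range and TCP") can be sketched with small rule combinators. A hypothetical illustration; the helper names and the request dictionary shape are mine:

```python
import ipaddress

def ip_in_range(cidr):
    """Rule: the source IP falls inside the given CIDR block."""
    net = ipaddress.ip_network(cidr)
    return lambda req: ipaddress.ip_address(req["src_ip"]) in net

def proto(name):
    """Rule: the request uses the given transport protocol."""
    return lambda req: req["proto"] == name

def all_of(*rules):
    return lambda req: all(rule(req) for rule in rules)

def any_of(*rules):
    return lambda req: any(rule(req) for rule in rules)

# "this IP range, OR (this IP range AND TCP)"
rule = any_of(ip_in_range("10.0.0.0/8"),
              all_of(ip_in_range("192.168.1.0/24"), proto("tcp")))

print(rule({"src_ip": "10.1.2.3", "proto": "udp"}))     # True
print(rule({"src_ip": "192.168.1.5", "proto": "tcp"}))  # True
print(rule({"src_ip": "192.168.1.5", "proto": "udp"}))  # False
```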
Well, you're going to pay for that, and then you can set the delay on when the policy is deployed, or the time at which it becomes active, much lower than someone who hasn't paid and instead says: it's going to get deployed in half an hour, not a big deal; or it's going to get activated in half an hour, not a big deal.

The next thing, something that comes out of modern needs, is that location has to be a first-class citizen. In a universe where you have IP mobility with containers, which I think is something Azure was talking about (I'm pretty sure they do it now, but it's been a while since I've looked), an IP address can move with a container within a data center, or, who knows, between data centers. So you had better be able to say: I don't want that container to be moved outside of Dublin; I don't want that to be moved outside of Germany; and so on. Location must be a first-class citizen. You cannot get away with a mapping that says those IP addresses are only assigned to that area; that's too difficult to deal with, and it's going to break.

To deal with heterogeneity and the miscellany of plugins we may have to support, a policy can carry information for one or more plugins. So a policy might have some configuration for nginx, some for IIS, some for Apache, and whichever one is being dealt with gets its implementation. Obviously the elephant in the room is how you make those snippets work together and how you cut and paste different parts of the policies, but that's something we have to deal with, and it's a better problem to have than not being able to configure or distribute these things at all.
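The per-plugin payloads described above might look like this sketch, where a single policy carries backend-specific snippets and each plugin picks out only its own. The structure and snippet contents are illustrative assumptions, not Padme's real schema:

```python
def select_snippet(policy, plugin_name):
    """Pick the configuration snippet intended for the plugin at hand;
    snippets meant for other backends are simply ignored."""
    return policy["plugin_data"].get(plugin_name)

# One abstract policy, with concrete implementations per backend.
policy = {
    "name": "deny-admin-paths",
    "plugin_data": {
        "nginx": "location /admin { deny all; }",
        "apache": "<Location /admin>\nRequire all denied\n</Location>",
    },
}

print(select_snippet(policy, "nginx"))  # the nginx snippet
print(select_snippet(policy, "iis"))    # None: no snippet for this backend
```

The open question the talk flags, keeping those snippets semantically in sync with each other and with the common expression syntax, is exactly what this lookup does not solve.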
Last but not least, as I mentioned before, policies have to be able to bundle together, combined with AND and OR. Obviously there have to be rules; for time and things of that nature, in an AND it's the most constricting of the policies that takes effect. This logic has to be there so people can understand what's going on. And that's roughly how we think policies work.

Now let me show you how these policies fit into our general architecture. In the picture on your left we have an area called a zone. This is an administrative region, and a zone shares all its policies and all its bundles. A zone has a number of controllers, which are responsible for distributing policies. Note that in this model we make no assertions at all about where the policies actually live. Take the Netflix case, for example, whose model was the inspiration for the general Padme effort: their policies live in RDS, and RDS takes care of distribution for them. That works for them, but it may not work for you, because you may not be Netflix; you may be somewhere else. So we have to be able to support hybrid models. But the controller lets you distribute policies.

The enforcement component is called the enforcer, a very imaginative name in this context. We haven't decided yet whether the enforcer should use push, pull, or both for getting policies. Each model has its own implications: the pull model, where the enforcer pulls the policies, is a little more robust in that the controllers don't need to know about the enforcers; on the other hand, it makes it a little more difficult to understand where a policy has been distributed within your network. So there are tradeoffs to be made.
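The "most constricting wins" rule for AND-bundles can be made concrete for time windows: the effective window of the bundle is the intersection of the members' windows. A minimal sketch under that assumption, using plain hour numbers for readability:

```python
def and_bundle_window(policies):
    """AND-combining time-scoped policies: the effective window is the
    intersection, i.e. the most constricting start and end win."""
    start = max(p["active_from"] for p in policies)
    end = min(p["active_to"] for p in policies)
    if start >= end:
        return None  # windows don't overlap: the bundle is never active
    return (start, end)

a = {"active_from": 0, "active_to": 12}  # active 00:00-12:00
b = {"active_from": 9, "active_to": 17}  # active 09:00-17:00

print(and_bundle_window([a, b]))  # (9, 12): the overlap only
```

This is the intelligibility argument in miniature: given the members, the bundle's behavior is derivable by a rule everyone can state.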
We haven't come to a design decision there, but it will come. The enforcer itself can either make decisions on behalf of a component, or it can configure other components to go off and make those decisions; this could be done either via plugins or directly. And in our context, the thing that actually fulfills the request is called a resource. The picture on your right gives you an idea. If, for example, the enforcer is configuring ipchains, it writes the ipchains rules from the policies it knows about and then lets ipchains do the business. This works for cases where the enforcer can't handle every request. You could imagine this kind of architecture in a software-defined networking universe, where the switch pings you back and asks: should I let this flow go? That's great, but the switch can't make an HTTP-routing-level decision on a single packet, because it doesn't have the whole request. So you have to tell it: okay, go do this, pass this traffic, and let something else handle it.

In the model on the right-hand side, you have bits and pieces that can be configured by plugins. nginx can call us back and ask, "should I let this request go through?", and the enforcer answers yes or no, up or down, and you proceed from there. Very obviously, you could delegate that check to a plugin. You could also have multiple plugins handling different layers, depending on what they're doing: this one configures buffers, that one configures which ports you're allowed to handle.
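The two enforcement modes described above, answering per-request callbacks versus rendering configuration down to a component once, can be sketched side by side. Everything here (class, rule text, request shape) is an illustrative assumption, not Padme's implementation:

```python
class Enforcer:
    """Sketch of the two enforcement modes: answer callbacks itself,
    or translate policies into component config and step aside."""

    def __init__(self, policies):
        self.policies = policies

    def check(self, request):
        """Callback mode: e.g. nginx asks 'should I let this through?'
        Every policy must allow the request."""
        return all(p["allows"](request) for p in self.policies)

    def render_firewall_rules(self):
        """Configure mode: emit rules once and let the packet filter do
        the business (rule text is purely illustrative)."""
        return [p["rule_text"] for p in self.policies if "rule_text" in p]

enforcer = Enforcer([
    {"allows": lambda r: r["port"] != 23,  # no telnet
     "rule_text": "-A INPUT -p tcp --dport 23 -j DROP"},
])

print(enforcer.check({"port": 80}))  # True: allowed through
print(enforcer.check({"port": 23}))  # False: denied on callback
print(enforcer.render_firewall_rules())
```

The same policy object drives both paths, which is the point: the abstract policy is the source of truth, and the mode is a deployment detail per component.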
We could delegate bits and pieces, for example where containers go, to the people doing OPA, because that's in their core feature set. And that's essentially how we expect these bits and pieces to interact with one another. We are obviously trying to build a system that is fast enough for multi-hop in-the-data-center authentication, so the checks are going to have to respond in the sub-two-millisecond range, so they don't eat up your overall SLA. That is the system architecture; that is how we expect enforcement to happen, to one extent or another.

Now that I've given you a tour of what we've done, how we're going about it, and what we think the problems are, these problems of temporality and heterogeneity, let me come back to RBAC, because from the point of view of intelligibility it is the gold standard. RBAC does a static-time configuration binding: attributes are added to roles. In our universe, rules are defined by the attributes you give them; when you define a rule that identifies a certain piece of traffic as matching a policy, you're doing the inverse operation. The binding of who is allowed to do what happens at runtime, when the request comes in and matches those policies. And we have to understand, too, that it's distributed: your policy may address multiple layers, and where it does, it may be enforced at multiple different stages in your network. So the binding is at runtime. The only difference between the two is when the binding is done, and we're striving to be as intelligible as the RBAC model about what's going on in my network.
I don't know how many of you have taken a CS class recently, but if you have, you know you can move back and forth between a condition-variable/mutex pair and a semaphore; the concepts are equivalent. We think the same thing holds here: moving from RBAC to Padme and back is basically the same kind of equivalence. That's what we're striving for. We are aiming to build a secure system that we can understand and that will let us address these next-generation problems.

To that end, let me talk a little about what we're actually doing. We are building a reference architecture. We have a team put together, with resources borrowed from various places; for some of us, like myself, this is a side gig on top of full-time work. We've got our policy definitions and the architectural structure in place, and we're working on building some prototypes. If you want to catch up with what we're doing and see where we are, we have something on GitHub; you can find our documentation there, and you can find us at Padme.io. Again, I apologize for not being there in person. Thank you for your time; I hope this has been useful and that you've enjoyed it. Thank you very much.

So, just to emphasize the point: this is a full-on volunteer effort. When I started proposing this problem, a lot of people registered interest in it. The reason I wanted him to present, despite his not being able to fly over, is that this was fully his effort, his side gig; he works until 2 a.m. to finish this off. The problem is real. People are facing it, and there isn't a clean-cut solution for how you'd marry RBAC to your existing infrastructure.
So you can actually solve this problem for administrators, not just for infrastructure, because infrastructure can solve it by itself in many different ways. That's the reason why, despite all the hiccups, I wanted him to present. And I'll be here too. We did have a demo put together, but I think we're running out of time, so we'll post the demo. It's basically an Envoy authorization plugin. We have written the plugin: you specify your policy once, and it gets applied to either MySQL or your HTTP RESTful endpoint in one single way, with the common expressions and tags we were talking about. So you can already do that. We are working on it; it's still in a private GitHub repo. Once we get it to a state where it's okay, it may require some pull requests to Envoy, but that's where we are right now. If anybody is interested, I'd be happy to chat about the fundamentals of where we got to. There's a lot of history about RBAC, ABAC, and the evolution to where we are right now; I'd be happy to chat about that too. And if you're interested in collaborating or contributing, we can consider how to set that up. Thank you.