Okay, sure. So, hi all. My name is Tim Hinrichs. I'm the PTL of the OpenStack Congress project. I've been working in policy for a dozen years, and I was around at the beginning of the Congress project. So what I thought I'd do today is give you a brief overview of Congress to sort of set the stage. Then I'll talk a little bit about the progress we've made toward the overall Congress vision as of the Kilo release, and then I'll dive into some of the updates that we're trying to do for Liberty. Okay, so great. For those of you who don't know, because Congress is a relatively new OpenStack project, the goal of Congress is that we want administrators of a cloud, cloud operators, to be able to explain to Congress how the data center is supposed to behave. And then Congress's job is to work with the administrator to make that happen, to make the data center actually behave the way it ought to. The way this works is that Congress is just another service that runs on a server, and you interact with it via a RESTful API. When you're interacting with Congress, you really start by giving it two inputs. The first input you give it, if you hit the next slide, is the collection of services other than Congress that are running in the data center. These are all the standard OpenStack services you're familiar with: Nova, Neutron, Cinder, or Swift. The idea here is that these other services that you tell Congress about are describing to Congress the actual state of the data center. This is what Congress is going to use to understand what's really happening in the data center. The second input that you give to Congress, if you hit the next slide, is the policy. The policy describes how that data center ought to behave. And so now you can see you've given Congress two inputs: the policy that says what should happen, and the collection of services that Congress can use to understand what actually is happening.
And now Congress can do a variety of things with those two inputs. But at the end of the day, what you give Congress as input is some description of the desired state of the data center, as well as a collection of services that allow it to understand what the actual state of the data center is. All right, so if we go to the next slide. This is a fairly standard story that you hear about policy systems: tell me the desired state, tell me the actual state, and now we'll go off and make it happen. There are two design goals that Congress sets forth that make it different, that make it fairly unique in this space. The first is that Congress ought to let you hook up any service running in the data center. Why? Because those services reflect the actual state of the data center. One of the things that you can hook up to Congress is obviously all the OpenStack services, and we have a number of those that we support today. The list is always growing. But we're not talking about just the OpenStack services; we're talking about any service. Proprietary services in particular are ones that customers seem to care about. They want to write policy not just about the OpenStack services like Nova, Neutron, Cinder, and Glance, but also about their proprietary services. And you can see that this is an important feature of any system like Congress, because if you can't hook up any service, then you may not be able to give Congress the ability to understand what's actually happening in the data center. So what we do is we have a plug-in architecture for services. The second design goal that we've put out there for Congress is that it ought to support any policy. By this I mean that you ought to be able to write policy about compute. You ought to be able to write policy about storage, about networking. You also ought to be able to write policy about how all three of those interact.
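The plug-in architecture for services mentioned above can be sketched roughly as follows. To be clear, the class names, method names, and the fake Nova data here are all illustrative, not Congress's actual driver API:

```python
# Minimal sketch of a data-source plug-in architecture, in the spirit of
# Congress's drivers/adapters. All names here are hypothetical.

class DataSourceDriver:
    """Base class: each driver translates one service's state into tables."""
    name = "base"

    def poll(self):
        """Return {table_name: set(row_tuples)} describing current state."""
        raise NotImplementedError

class FakeNovaDriver(DataSourceDriver):
    name = "nova"

    def poll(self):
        # A real driver would call the Nova API; here we fake the result.
        return {"servers": {("vm1", "tenant_a"), ("vm2", "tenant_b")}}

def collect_state(drivers):
    """Merge every driver's tables into one view of the data center."""
    state = {}
    for d in drivers:
        for table, rows in d.poll().items():
            state[f"{d.name}:{table}"] = rows
    return state

state = collect_state([FakeNovaDriver()])
print(sorted(state["nova:servers"]))
```

The point of the base class is that hooking up a new service, proprietary or otherwise, only requires writing one new `poll`-style translation, not touching the policy engine.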
You also ought to be able to write policy about all those other services that you've hooked up. And I'm not just talking about infrastructure policies here. I'm talking about application policies. I'm talking about business-level policies. Any sort of domain that you want to write policy over, you ought to be able to write that kind of policy in Congress. And really these are two sides of the same coin. If in fact you can support any service, then you had better be able to write policy over that service. And if you can write any policy, well, then you need to be able to support any service in order to understand whether or not that policy is being obeyed. Okay, next slide. All right, so here's an example to give a grounding to these relatively high-level ideas that I've been talking about. Here on this slide you see both the inputs: you see the policy and you see the cloud services. The cloud services in this example are Nova, Neutron, and Keystone. And the policy that we want to describe is very simple. It says that every network that's attached to a VM either needs to be a public network, meaning it's shared and anyone can use it, or it needs to be the case that the owner of the VM and the owner of the network belong to the same group, where here a group is defined maybe via LDAP and given to the system via Keystone. What's important about this policy, as you'll see, is that it's a cross-domain policy. It talks about how Keystone, Nova, and Neutron must all interact in order to satisfy the cloud operator's desires. Now I'm not going to go into the details about how we write this policy or about how we set up cloud services. If you'd like information, there's plenty on the web, and there are pointers at the end of the talk for that.
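To ground the example, here is a rough Python rendering of that cross-domain policy over made-up Nova, Neutron, and Keystone tables. Congress itself would express this in Datalog; the table contents below are invented for illustration:

```python
# Hypothetical tables: VM owners from Nova, networks and attachments from
# Neutron, group membership (e.g. via LDAP) from Keystone.
vm_owner = {"vm1": "alice", "vm2": "bob"}
net_public = {"net_pub"}                       # shared/public networks
net_owner = {"net_pub": "admin", "net_priv": "carol"}
attached = {("vm1", "net_pub"), ("vm2", "net_priv")}
groups = {"eng": {"alice", "bob"}}             # group -> members

def same_group(a, b):
    """True if users a and b share at least one group."""
    return any(a in members and b in members for members in groups.values())

def violations():
    """Each attachment must use a public network, or the VM owner and the
    network owner must belong to the same group."""
    return sorted(
        (vm, net) for vm, net in attached
        if net not in net_public
        and not same_group(vm_owner[vm], net_owner[net])
    )

print(violations())  # -> [('vm2', 'net_priv')]: bob and carol share no group
```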
But if we go to the next slide, what we'll see is that once you've given Congress these two inputs, the policy that you care about and the collection of cloud services, there are a number of things that Congress tries to do to help the administrator, the cloud operator, make the actual state of the data center satisfy that policy, to behave the way he or she would like it to behave. The first capability is monitoring. This is a very simple idea. Congress takes the policy, compares that to what the services tell it the actual state of the data center is, and it flags mismatches, right? Mismatches between the desired state and the actual state. This is a pretty simple piece of functionality, but it provides a great deal of visibility into what's actually happening in the data center, all in one place, all with a single interface. The second capability that Congress provides is enforcement. And this is, at the end of the day, probably what most people want from a policy system. They want the system to go off and actually force the desired state and the actual state to be one and the same. Here what we're talking about is really three different kinds of enforcement paradigms. The first is what we call proactive enforcement. For proactive enforcement, what Congress is trying to do is prevent violations before they occur. This is the same kind of paradigm that Keystone uses. The idea here is that if there's some other service running in the data center, such as Murano today, and that service wants to ensure, before it takes an action, before it actually makes changes to the data center, that it's going to do so in a way that's compliant with policy, then it can send an API request to Congress and ask: if I were to execute this change, if I were to change the data center in this way, would that cause any new policy violations or would it not? And Congress can give a thumbs up or thumbs down for that.
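That proactive-enforcement question, "if I were to make this change, would it cause any new violations?", can be sketched as a what-if check on a copy of the state. The fact encoding and the example policy (a VM may sit on at most two networks) are both hypothetical:

```python
# Actual state as a set of facts; the (hypothetical) policy says a VM may
# be attached to at most two networks.
state = {("attached", "vm1", "net1"), ("attached", "vm1", "net2")}

def violations(s):
    """Return the set of VMs attached to more than two networks."""
    counts = {}
    for kind, vm, _net in s:
        if kind == "attached":
            counts[vm] = counts.get(vm, 0) + 1
    return {vm for vm, n in counts.items() if n > 2}

def would_violate(change):
    """Simulate the change without touching the real state; report only
    violations that are NEW relative to today."""
    return violations(state | {change}) - violations(state)

print(would_violate(("attached", "vm1", "net3")))  # -> {'vm1'}
```

A caller like Murano would get back an empty set (thumbs up) or the set of new violations (thumbs down) before committing the change.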
And keep in mind that, again, those policies can be written over any collection of information available in the data center. That's a key differentiator there with Keystone. The second kind of enforcement that we're talking about is reactive enforcement. Here what we're talking about is Congress going out and finding a policy violation, a mismatch between desired and actual state, and then trying to correct it, trying to eliminate that violation after the fact. Now, it may seem that reactive enforcement is really just a stopgap, something that you do if you fail to prevent a violation. But in actuality, there are many policies that can only be reactively enforced. The standard example here is a policy that says all the operating systems have to have the latest security patches in place. At any point in time, the only way for that policy to become violated, let's assume, is for Microsoft, Apple, or Red Hat to release a new security patch. As soon as they release a new security patch, the policy is violated: there's some operating system that has to be out of date. And so the whole point of writing this kind of policy is to have Congress go off, identify the violation, and correct it. So that's reactive enforcement. I'll have more to say about that in a little bit. The third kind of enforcement that we're talking about is delegation. This is a pretty simple idea, based on the fact that there are a growing number of policy engines showing up both within OpenStack and outside of OpenStack. These policy engines are often domain-specific, meaning they are tailored for networking, or they're tailored for compute, or they're tailored for storage. So the idea behind delegation is that it would be nice if an administrator could come to Congress and say: here is my policy, and this policy spans networking, compute, and storage.
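The security-patch example of reactive enforcement can be sketched as a simple find-and-fix loop. The state layout and the "patch" action are made up for illustration; a real engine would call out to service APIs:

```python
# Reactive enforcement sketch: detect a violation, then execute a
# corrective action. All names and the state layout are illustrative.
state = {"hosts": {"h1": "v1", "h2": "v2"}, "latest_patch": "v2"}
log = []

def out_of_date(s):
    """Violation check: hosts not running the latest patch level."""
    return [h for h, v in s["hosts"].items() if v != s["latest_patch"]]

def apply_patch(s, host):
    """Corrective action; a real system would call a config-management API."""
    log.append(f"patching {host}")
    s["hosts"][host] = s["latest_patch"]

def enforce(s):
    """Scan for violations and run the corrective action for each one."""
    for host in out_of_date(s):
        apply_patch(s, host)

enforce(state)
print(log, out_of_date(state))  # all hosts brought up to date
```

Note how the violation only appears when `latest_patch` changes, which is exactly why this policy can only be enforced after the fact.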
And then Congress goes off and says: well, here, I've identified the compute-relevant portion of this policy, and I'll hand it off to the compute policy engine, let's say, in Nova. Here's the networking-specific portion of the policy, and I'll hand it off to the networking policy engine, let's say, in Neutron. And here's the storage-relevant sub-policy, and I'll hand it off to the storage policy engine, perhaps in Swift. The idea behind delegation is very simple: the right way to do enforcement in this world of many different policy engines is to give each policy engine the piece of policy that it's best suited to actually enforce. This is really an interesting and exciting kind of enforcement, one that I don't think anyone really knows exactly how it's going to work. Again, I'll talk more about that in a little bit. And then the last capability for Congress is audit. Here the idea is that we want to give administrators the ability to chronicle the history of policy violations: what the policies were at that point in time, if there was an exception, what the administrator did to actually try to remediate it, and so on. Okay, so I think that gives a pretty good overview. I've talked at this point about the inputs that you give to Congress, remember, a policy and a collection of services, and I've also talked about what Congress does, what capabilities Congress provides once it's given those inputs. All right, next slide. So the current status as of the Kilo release, I've captured it pretty succinctly on this slide. Kilo was the first time that Congress was officially part of OpenStack, and I'm happy to say that we're now officially in the big tent. Congress has all the usual features that you'd expect from an OpenStack service. It's got a RESTful API, a CLI, Horizon integration, Heat integration, DevStack integration, and Tempest tests.
It does have a full-blown policy engine in it; it's a variation on Datalog. In terms of the services we've integrated, you'll see the list there; I won't bother reading through them. There's no real rhyme or reason to the list of services. Whenever we have someone new join the group, what we always recommend is that they write a data source driver and hook a new service up to Congress. In so doing, they learn a great deal about how Congress works, but they also extend Congress's reach in terms of the services that are integrated. And then finally, as I mentioned, of those capabilities that I talked about, in Kilo we had monitoring in place, we had proactive enforcement in place, and we had a version of reactive enforcement in place. Okay, let's go to the next slide. What I'm going to do for the duration of the talk is go into a little bit more detail about the updates that we're hoping to get done through Liberty. The first update in Liberty that we're focusing on is reactive enforcement. This is a feature that we just barely got in for Kilo. Remember, reactive enforcement is the version of enforcement where Congress is watching the state of the data center, identifies a violation, and then goes off to try to correct it by executing some action. In Kilo, what we managed to get in place was a version of this reactive enforcement where the policy writer, the administrator who's providing the policy, has the ability to codify, right within the policy, statements that say: if some condition holds, then execute this action. The intuition here is that the condition represents a violation of the policy the administrator would like to hold, and the action is the thing that will go ahead and correct it. So this is just a very basic, fairly straightforward kind of declarative statement, and we got it in place in Kilo.
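Those declarative condition/action statements can be sketched like this. The rule format below is invented for illustration; Congress itself writes them in Datalog, along the lines of `execute[action] :- condition`:

```python
# Sketch of declarative "if condition, execute action" policy statements.
# The action name and state layout are hypothetical.
state = {"servers": {"vm1": "error", "vm2": "active"}}
executed = []

def in_error(s):
    """Condition: servers whose status indicates a violation."""
    return [vm for vm, status in s["servers"].items() if status == "error"]

# A policy statement as data: when the condition yields rows, run the
# named action on each row. A real engine would dispatch to a service API.
rules = [{"condition": in_error, "action": "nova:delete_server"}]

def evaluate(s):
    for rule in rules:
        for row in rule["condition"](s):
            executed.append((rule["action"], row))

evaluate(state)
print(executed)  # -> [('nova:delete_server', 'vm1')]
```

Keeping the rule as data rather than code is what makes it a declarative statement the administrator can write through the API.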
What we're hoping to do in Liberty is refine what we have there. In particular, we're trying to add a number of administrative controls so that operators can protect themselves against action executions that may be dangerous. We need to add some API-level introspection to allow an administrator to understand what actions are actually available to them. And the last thing that we're hoping to do is enlarge the number of services that are actually capable of executing these actions. At the end of Kilo, we only had Nova hooked up to actually execute actions. So you could do things like create servers, delete servers, and pause servers, but nothing in the networking space or the storage space. In Liberty, we're hoping to improve on that. All right, next slide. Also in Liberty, we're hoping to extend, or actually implement for the first time, the high availability architecture. The idea here is that some customers have asked us to ensure that they can run multiple instances of Congress, so that if one of them happens to crash, or the machine goes down, they can immediately swap over to one of the other instances and in so doing ensure high availability of Congress. The architecture that we made progress on in Kilo, and with any luck at all will have in place in Liberty, is a very simple version of a high availability architecture where you run N completely identical instances of Congress. They do exactly the same thing; the only thing that they share is the database. Any API call that comes in that actually creates a policy statement, or that hooks up a new service to Congress, will write to that database. And then all the other instances of Congress will periodically synchronize their state with that shared database. We're hoping to get this in place probably in the middle of the cycle for Liberty. All right, next slide.
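That shared-database design can be sketched as follows. The class and the in-memory "database" are stand-ins for illustration only:

```python
# HA sketch: identical Congress instances share nothing but a database.
# API writes land in the shared DB; every instance periodically re-syncs.
shared_db = {"rules": []}

class CongressInstance:
    def __init__(self):
        self.rules = []

    def create_rule(self, rule):
        shared_db["rules"].append(rule)   # the write goes to the shared DB
        self.sync()

    def sync(self):
        """Periodic job: replace local state with the database's view."""
        self.rules = list(shared_db["rules"])

a, b = CongressInstance(), CongressInstance()
a.create_rule("no_vm_on_private_net")
b.sync()                                  # b catches up on its next sync
print(a.rules == b.rules == ["no_vm_on_private_net"])  # -> True
```

Because the instances are interchangeable, losing one simply means pointing clients at another; nothing needs to be failed over except the database itself.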
Another architectural improvement that we're hoping to make by Liberty is something that enables scale-out. On the last slide, in the high availability architecture, we just had N different instances of Congress all running on different machines. In this scale-out architecture, what we're hoping to do is take a single instance of Congress and split it across machines. To understand exactly how we're going to do that, I have to spend a minute here describing the current architecture that's under the hood. Within this blue box on the slide, you'll see the internal workings of Congress. That blue box, Congress, is still communicating with Neutron, Nova, Cinder, and Swift at the bottom of the slide. But inside of Congress, what we see is that there's a policy engine, and then there are drivers, or what we call adapters, for each of the services that Congress has integrated with. That policy engine and all those drivers interoperate via a message bus. So what we're hoping to do for Liberty is make that message bus operate across different machines, across different boxes. If you hit the next slide, what we might do in the future is, say, put the Swift driver on one machine and put the remaining drivers, as well as the policy engine, on a different machine. That kind of functionality, having a message bus that spans different boxes, is what we're hoping to use to build our scale-out architecture in the near future. All right, next slide. Probably the final big thing that we're hoping to make progress on is delegation for Liberty. Delegation is one of these really difficult problems to work on, so what we've decided to do is spend some amount of time every cycle trying to grapple with this particular problem. One idea that we kicked around for the next cycle, for Liberty, is to look at delegation with Keystone.
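The message bus that ties the policy engine to the drivers might look like this in toy form. The `Bus` class here is an in-process stand-in; a real deployment would use a bus that spans hosts, such as oslo.messaging:

```python
# Scale-out sketch: the policy engine and data-source drivers talk only
# through a message bus, so components can later live on different boxes.
class Bus:
    def __init__(self):
        self.subs = {}

    def subscribe(self, topic, fn):
        self.subs.setdefault(topic, []).append(fn)

    def publish(self, topic, msg):
        for fn in self.subs.get(topic, []):
            fn(msg)

bus = Bus()
engine_tables = {}

# The policy engine subscribes to table updates from any driver.
bus.subscribe("tables", lambda update: engine_tables.update(update))

# A driver (possibly on another machine) publishes its latest rows.
bus.publish("tables", {"swift:containers": [("c1",), ("c2",)]})
print(engine_tables)
```

Since neither side holds a direct reference to the other, moving the Swift driver to its own box only changes the bus transport, not the engine or the driver.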
And here we haven't really worked out the details, but it seems there's a lot of work going on about having each service provide an API call that exposes, effectively, the policy.json file that is running and that is relevant to that service. So Nova will have an API call, let's say, that returns the policy that it is implementing, and likewise Neutron, and Cinder, and Swift. And so you can imagine a world in which each of those policy.json files is rolled up into a single pane of glass, so that the administrator can look at that policy, maybe reason about it, and try to understand how that policy and the Congress policy interact in interesting ways. This is all very speculative, and we haven't yet decided exactly what we'll do, but I'm sure we'll make some progress on something along the lines of delegation. All right, next slide. Okay, so that pretty much wraps it up. I gave a quick overview of Congress, talked about our progress toward the overall Congress vision as of Kilo, and then gave you a few ideas about what we're hoping to get done for Liberty. You're always welcome to follow up with questions. My email is on the very first slide, but you can also log on to IRC in #congress or join our weekly IRC meeting, which is Tuesdays at 10 a.m. Pacific. And then our wiki is always there; there's a lot of good information there. So that just about does it for me. Thanks for tuning in.