All right, hello everyone. My name is James Slagle and I'm the TripleO PTL. I'm going to be updating you on what we're hoping to accomplish in the Liberty cycle, and on what we've already accomplished as well. Next slide, please.

So just real briefly, I want to talk a little bit about the TripleO project's mission and how that relates to some of the big tent governance changes that have gone on in OpenStack. Our mission has always been to deploy OpenStack in production using OpenStack itself wherever possible. What that means in the big tent model is that we have a lot of new services to deploy as new projects come under the OpenStack umbrella, and we want to be able to deploy those. But I think what's more interesting is that there's also a lot of new deployment tooling and infrastructure tooling that we can actually deploy with, so we're able to make use of those OpenStack projects as they come into the big tent model. Slide three, please.

So at a high level, during the Liberty cycle we hope to complete the full Puppet-based implementation. We made a lot of progress on that during Kilo, we've continued to make a lot of progress, and that should be wrapped up during this cycle. We're making use of the Puppet OpenStack project, which is a new OpenStack project. These are all of the Puppet modules that used to live on Stackforge before, so all of the modules and manifests that you would use to configure your OpenStack services. We've integrated with those modules and we're able to deploy a cloud using Puppet now. We do have a very lightweight layer on top of some of those modules; we've tried not to put a lot of logic in that layer and to keep it as simple as we can. We've also continued to invest heavily in Heat itself. We're using Heat and its template language to build a complete declarative model of the cloud that we're going to deploy.
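The declarative model just described can be sketched, very loosely, as a Heat template like the following. This is a minimal illustration, not the actual tripleo-heat-templates content, and the image and flavor names are assumptions:

```yaml
heat_template_version: 2015-04-30

parameters:
  ComputeCount:
    type: number
    default: 1

resources:
  # A group of identical compute nodes; scaling the cloud is just
  # changing the count in this declarative description.
  Compute:
    type: OS::Heat::ResourceGroup
    properties:
      count: {get_param: ComputeCount}
      resource_def:
        type: OS::Nova::Server
        properties:
          # Image and flavor names here are illustrative only.
          image: overcloud-full
          flavor: baremetal
```

The point is that the whole cloud is expressed as data like this, and Heat reconciles the deployed state against it.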
We're also making use of the environment feature and the resource mapping feature as well. Likewise, we've invested heavily in Ironic. Obviously we're using it for bare metal provisioning, but we're also using it for ready-state configuration, so things like BIOS config before you actually deploy to a node, and we're using the Ironic vendor passthrough APIs to do that. We're also making use of Ironic Inspector, which is formerly the ironic-discoverd project. Ironic Inspector allows you to discover hardware attributes of nodes that are already known to Ironic, so you don't have to enter all of those attributes into Ironic manually; you can actually discover them now. All right, slide four, please.

Okay, so just diving into a little more detail about some of the changes in tripleo-heat-templates directly. I mentioned earlier that we're making heavy use of the resource registry, which allows us to map Heat resources to different backend implementations. This really allows us to enable and disable different features on demand. One of the things we were able to do was work on the new Puppet-based backend in parallel with our existing templates, while keeping the top-level template interface the same. And we're using this same feature to enable and disable features such as Pacemaker, network isolation, and deploying in containers as well.

Parameter defaults. The way the resource registry actually gets implemented is that your backend resources are implemented as nested stacks, and often these nested stacks have a different set of parameters themselves. Parameter defaults give you a way to set those parameters in the nested stacks without having to modify your top-level templates directly.
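Putting those two pieces together, a sketch of an environment file using both sections might look like this. The mapping target and parameter name here are illustrative assumptions, not the exact tripleo-heat-templates contents:

```yaml
resource_registry:
  # Swap the backend implementation of an abstract resource type
  # by pointing it at a different nested-stack template.
  OS::TripleO::Compute: puppet/compute-puppet.yaml

parameter_defaults:
  # Provide a value to parameters inside nested stacks without
  # exposing them through the top-level template interface.
  NtpServer: pool.ntp.org
```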
So Heat environments are really just saved YAML files that combine parameters, parameter defaults, and resource registry sections into a specific environment that you want to deploy. We have several of these examples checked into the tree directly, and these are our recommended configurations for the different ways that we can deploy a cloud. Of course the model itself is very flexible, so we offer a lot of choices and options for the changes that people are able to make. All right, slide five.

So this is just a high-level overview of what a deployed cloud might look like. We have an undercloud node in the first column, and several controller nodes that are part of your overcloud. Then in the second column we have several compute nodes, and in the last column we have block storage, Ceph storage, and object storage nodes as well. If we go to the next slide, slide six, you can see how that same deployment overview is actually modeled in the templates themselves. We're making use of the OS::Heat::ResourceGroup resource, and those resources are implemented as the different role types that you see here: controller, compute, and the different storage node types. You can really start to get an understanding of a high-level declarative view of your cloud if you start diving into some of the templates that we have. Slide seven.

All right, so I mentioned this a little bit earlier, but just to offer a little more detail: one of the features that we found a lot of folks working on is network isolation. What this allows you to do is define additional dedicated networks based on their traffic type. This is important because you can provide needed isolation among these different traffic types, so obviously tenant traffic versus storage traffic versus internal API traffic.
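Those dedicated traffic-type networks are themselves expressed as template resources. A simplified sketch, assuming illustrative names and an illustrative CIDR, might look like:

```yaml
resources:
  # One dedicated network per traffic type; names and addressing
  # here are assumptions for illustration.
  StorageNetwork:
    type: OS::Neutron::Net
    properties:
      name: storage

  StorageSubnet:
    type: OS::Neutron::Subnet
    properties:
      network: {get_resource: StorageNetwork}
      cidr: 172.16.1.0/24
      enable_dhcp: false
```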
You can separate this network traffic out onto different networks. These additional networks are actually defined in Neutron itself on the undercloud; they're created via Heat and they're all template-driven. We're using static IPs from these networks to configure the actual interfaces on the deployed overcloud nodes themselves. All right, next slide.

So on slide eight, this is just a diagram showing what the network isolation looks like. We're actually able to isolate up to six networks right now; these are the most common networks that people are really deploying with. The model itself is pretty flexible, so if you only wanted three networks and wanted to share the traffic across just those three, you could do that as well. It also shows that obviously not all your nodes are going to be connected to all the networks, and this is important because you don't want all your nodes connected to the external network if that's not an actual requirement. So again, all of this is modeled in our templates directly, and it allows you to customize things based on your deployment needs. All right, slide nine.

So we've continued to work on being able to deploy a full HA cloud, and we're also using Pacemaker for cluster management. We actually always deploy HA with all of the templates that we use, but the Pacemaker part itself is optional. Again, this is enabled via the resource registry feature, and it's just a one-line change in the Heat environment file that you're using to deploy. That shows how powerful that feature is and what it's enabled us to do: we can toggle these different recommended deployment scenarios with simple changes in the Heat environment. Slide 10, please.

So we've also continued to refine the upgrade story.
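To make the one-line Pacemaker toggle from slide nine concrete: it comes down to remapping a single entry in the resource registry. This is a hedged sketch; the exact resource and file names are assumptions and differ by release:

```yaml
# Environment file enabling the Pacemaker-managed variant by
# remapping one abstract resource type to an alternate
# nested-stack implementation (names illustrative).
resource_registry:
  OS::TripleO::ControllerConfig: puppet/controller-config-pacemaker.yaml
```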
One of the ways we're doing that now is to focus on a solution for package-based upgrades. We have a couple of new resources in the templates themselves. One is called UpdateDeployment; it's one of the software deployment type resources, and when you update this resource, that triggers Heat to go run a script on your deployed nodes that executes either a yum or apt type package update. There's a lot of complexity here in which packages get updated and in what order, and a lot of ordering problems and dependency issues around service restarts, especially as it relates to the OpenStack services. So one of the things we're doing to mitigate that is that packages managed by the Puppet OpenStack manifests themselves are actually excluded from the update deployment. Later on in this upgrade scenario, we rerun the Puppet apply with the ensure-latest flag set, and that lets Puppet OpenStack itself update the Puppet-managed packages. So all of the dependency and ordering logic, which already exists in the Puppet modules themselves, we're able to reuse without having to re-implement that same logic. That's what this scenario allows us to do. All right, slide 11, please.

So we also have some folks working on deploying an OpenStack cloud where all the services are containerized. We're actually reusing a lot of the container content from the Kolla project: we're reusing their container build scripts, and we're able to deploy those containers using Docker Compose. We have resources in the templates themselves, and we basically just substitute the Docker Compose-based ones for the Puppet ones instead. This has enabled us to iterate pretty quickly on getting a containerized cloud deployed.
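As a rough sketch of what the Docker Compose side of that substitution might look like for one compute service, assuming a Kolla-built image (the image name and volume paths here are assumptions, not the project's actual compose files):

```yaml
# Sketch of a single containerized compute service; image name is
# an assumption based on Kolla's naming conventions.
nova-compute:
  image: kollaglue/centos-rdo-nova-compute
  # Host networking and privileged mode, since nova-compute needs
  # direct access to the host's network and devices.
  net: host
  privileged: true
  volumes:
    - /var/lib/nova:/var/lib/nova
```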
We've focused on just doing the compute nodes at first, because there are fewer services on the compute nodes themselves, so there are fewer things to containerize and orchestrate there. We've actually been able to deploy a cloud where the compute nodes are container-based but the controller nodes are still Puppet-based, and we're working on switching the controller nodes over to be container-based during the rest of the cycle. Obviously, using containers offers a lot of nice features, such as atomic upgrades and rollbacks, and it also speeds up the deployment process quite a bit as well. All right, slide 12.

That's pretty much it for the overview. We're working on a lot of other stuff as well, so if you'd like to connect with us, these are probably the easiest ways to do so: on the openstack-dev mailing list, and we're also on Freenode in the #tripleo channel. Thanks a lot.