All right, welcome to my talk on TripleO: bridging the gap between deploying OpenStack as a cloud application and as a traditional application. My name is James Slagle, and I'm a principal software engineer at Red Hat.

Just to quickly review TripleO's mission statement: we're an OpenStack deployment tool that uses OpenStack to deploy OpenStack itself. By that I mean we really focus on using the OpenStack projects that expose end-user APIs. So you end up with two separate OpenStack clouds: an undercloud and an overcloud. You start by installing the undercloud, which is a single-node installation of OpenStack. On top of that you deploy an application, and that application happens to be your overcloud, your end-user, tenant-facing cloud.

When we talk about deploying applications on a cloud, these are the characteristics most people tend to think of. You have an elastic and resilient infrastructure, and it's usually hybrid: you want multiple deployment targets, whether those are physical or virtual. Typically you take a microservices approach, so your services expose an API and are loosely coupled. You also want your services to be composable, so that they work together to form a larger application.

Those characteristics are often contrasted with how you would deploy a traditional application. In those cases you often have specific physical storage requirements, and you're deploying to large pools of heterogeneous hardware. You also typically have entrenched management tools that you need to integrate with, whether those tools are for provisioning or network configuration, entrenched configuration management tools like Puppet, Chef, or Ansible, and of course monitoring tools.

What we really try to do is bridge the gap between these two extremes and meet both sets of requirements. So how do we do that? The first way is our approach to bare metal management. Since our deployment tool uses Nova and Ironic to deploy, we're able to treat hardware as generic pools if you want to deploy your instances that way. However, we can also address the typical operator requirements, such as using predefined host names and IP addresses, and we model all of the physical networks that your servers are connected to.

We're also able to do predictable instance scheduling with Nova, Ironic, and Ironic Inspector. We actually boot the nodes into a discovery ramdisk and do a lot of benchmarking and hardware data gathering. You can then define a set of rules based on the data you've gathered and tag your nodes; those tags are associated with Nova flavors, and those flavors are associated with the roles you actually want to deploy in your cloud. So you know exactly which roles are going to which hardware. We can also validate the entire physical hardware environment, so you can exclude certain hardware from your deployment if you know there are problems with it.

We also take a hybrid approach to infrastructure: we can deploy to bare metal, virtual machines, or containers, and the entire deployment model is abstracted so you don't have to use TripleO to actually provision the hardware itself. You can use an external tool if you want to, whether that's provisioning bare metal or virtual.
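Nothing like the following appeared on the slides, but as a rough sketch of the rule-based tagging described above, introspection rules of roughly this shape can reject unusable nodes and stamp a profile capability on the rest. The thresholds, messages, and profile names here are made up for illustration; the rule file is typically JSON and imported with `openstack baremetal introspection rule import`.

```yaml
# Hypothetical introspection rules (values are illustrative only).
# Conditions are evaluated against the data gathered by the discovery ramdisk;
# actions either fail introspection or set a "profile" capability on the node.
- description: "Fail introspection for nodes with too little RAM"
  conditions:
    - {op: lt, field: memory_mb, value: 8192}
  actions:
    - {action: fail, message: "Node has less than 8 GiB of RAM"}

- description: "Tag large-disk nodes for an object storage role"
  conditions:
    - {op: ge, field: local_gb, value: 1024}
  actions:
    - {action: set-capability, name: profile, value: swift-storage}
```

A matching Nova flavor then carries the same profile (for example, `openstack flavor set --property capabilities:profile=swift-storage swift-storage`), and the deployment maps that flavor to a role, which is how a given role ends up only on the hardware tagged for it.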
We also have a hybrid cloud approach where you can use resources that are on- or off-premise, depending on how you want to scale. If you have an existing cloud and you want to experiment with, say, a new API service, you could actually run that on off-premise resources if you wanted to.

We also have a modular template design for each service, and this is where we get our microservices approach: each service is responsible for describing how you install it, how you scale it up or down, and how you upgrade it, and that's completely separate from the way you deploy the infrastructure side of the stack.

Since we actually control installing the undercloud, we're able to be specific about which OpenStack services we include in that piece of the deployment tool. If you wanted to include projects like Ceilometer or Zaqar on your undercloud to manage your overcloud, you're able to do that. You could have Ceilometer emitting events to a Zaqar queue, Heat consuming those events, and then triggering a Mistral workflow, so you can set up a whole chain of event subscription and processing using just OpenStack. You can also scale the undercloud itself using those services: if you're going to deploy a lot more bare metal, you could add additional Ironic conductors and scale out your undercloud.

So where are we going with all this? We're looking at tighter integration with Ansible in the future. Right now we use Puppet quite extensively, but Ansible is proving to be a common way people deploy all types of applications. You'll be able to execute Ansible playbooks for the service configuration side, and you'll be able to apply those via Heat or via Ansible directly, which gives operators more flexibility in exactly how the configuration is applied. And then, of course, Kubernetes. The combination of OpenStack and Kubernetes is obviously very attractive for cloud applications, and most OpenStack deployment tools are moving to use Kubernetes. You want to be able to take advantage of what that tool offers, scaling, scheduling, and self-healing applications, and apply that to your OpenStack cloud itself, not just to the applications you run on the cloud.

So just real quickly, in the last few minutes I have: this slide looks a little busy, but these are some of our service configuration templates. You can see where we're including Puppet here, and in this specific one we actually have some Ansible tasks to upgrade this service, which is the Glance API. In the same tree, we also have a Docker version of that service, so it's a really pluggable model, depending on how you want to deploy that particular service. This is a Heat environment file that shows how we wire up these services to however they're implemented, whether that's Docker or Puppet in this case. The other thing I wanted to show is how we model the bare metal network configuration. In this example, this is the network configuration we would apply for the controller role. You can see how we have the network modeled in this file, and that configuration gets applied to the bare metal itself, depending on which physical networks are actually attached to the node. So thank you very much.
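The actual templates shown on the slides aren't reproduced here, but the two fragments below sketch, following the usual tripleo-heat-templates conventions, what that wiring and the NIC modelling can look like. The paths, interface names, and addresses are placeholders, not taken from the talk.

```yaml
# 1) Environment file fragment: map a composable service to one implementation.
#    Swapping the mapping switches Glance API between the containerized and
#    Puppet-based templates without touching the rest of the deployment.
resource_registry:
  OS::TripleO::Services::GlanceApi: docker/services/glance-api.yaml
  # OS::TripleO::Services::GlanceApi: puppet/services/glance-api.yaml

# 2) Controller NIC config fragment (consumed by os-net-config on the node).
#    Addresses and interface names are examples only.
network_config:
  - type: interface
    name: nic1
    use_dhcp: false
    addresses:
      - ip_netmask: 192.0.2.10/24      # predictable control-plane address
  - type: ovs_bridge
    name: br-ex
    use_dhcp: false
    members:
      - type: interface
        name: nic2                     # interface attached to the external network
```

The point of both fragments is the same pluggable model described above: the service templates and the network templates are swapped or edited per role, while the rest of the deployment stays unchanged.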
We are an upstream OpenStack project, so we'd love to get more contributors. If you're interested, this is how you can get in touch with us, and some of my details are at the bottom. I'd love to answer any questions in the one minute we have, or you can catch me at the end. Yes? So yes, the question is how the Ansible and Puppet integration is going to work. That will be orchestrated via Heat: Heat is actually what drives either Ansible or Puppet. There's an agent on the instance that pulls the data from Heat and then, depending on what's in that data, executes either the Puppet or the Ansible. Right, so they're not going to run at the same time, but you'd be able to use either. We're actually using Puppet for the configuration of the services in Docker containers right now, just because the Puppet modules are very rich in the configuration they can do.
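As a minimal sketch of that answer (not something shown in the talk), the same Heat software-config mechanism can carry either tool; the in-instance agent dispatches on the `group` field, and the manifest and playbook paths here are hypothetical.

```yaml
heat_template_version: 2016-10-14

resources:
  # The os-collect-config agent on the node polls Heat for this data and hands it
  # to the matching heat-config hook, so only the referenced tool actually runs.
  glance_puppet_config:
    type: OS::Heat::SoftwareConfig
    properties:
      group: puppet                                          # run by the puppet hook
      config: {get_file: manifests/glance_api.pp}            # hypothetical manifest

  glance_ansible_upgrade:
    type: OS::Heat::SoftwareConfig
    properties:
      group: ansible                                         # run by the ansible hook
      config: {get_file: playbooks/glance_api_upgrade.yaml}  # hypothetical playbook
```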