OK, hello, everybody. My name's Mark Goddard. I work at StackHPC, and I'm here today to talk about how we deploy scientific OpenStack. This is one way of thinking about a cloud at a very high level. We have some physical infrastructure, including switches and servers. We have an OpenStack control plane, with services such as Nova, Keystone, Glance, et cetera. We've got some resources registered within projects for our users: images, flavors, networks, those kinds of things. And then we have some workloads that run on top of the cloud and actually do the useful work. They might be running in server instances, in containers, or in clusters built using Heat stacks. And of course, we can't forget monitoring and logging, which should be applied at all levels of the stack.

But try to scrub that image from your mind for a while, and imagine you are a cloud deployer who has been tasked with building the Super Science Cloud. You're quite happily doing your work one day, and you get this email. The data centre operator has racked and cabled your servers, your switches, everything you need. They've installed a Linux OS on one of them and given you a spreadsheet. Uh-oh. What do we do now? This doesn't look anything like the cloud in our architecture documents.

This is where Kayobe comes in. We can use Kayobe to provision the physical infrastructure, to configure the control plane hosts, and then to lay down a set of containerised OpenStack services using Kolla Ansible. It's written in Python and Ansible, and it encourages you to version control your configuration in a YAML format. One of the big dependencies of Kayobe is Bifrost. Bifrost is a tool for deploying a standalone Ironic service, and we use it to discover the control plane hosts, to inspect their hardware configuration, and then to provision them with an image. It's just a set of Ansible playbooks.

So let's apply a little order to our system. We've got our laptop, on which we install Kayobe. We use Kayobe to install Bifrost on the server in the middle, which we call the seed. If you're familiar with TripleO, it's similar to the undercloud, the director node. On the right-hand side, we've got our pile of bare metal servers, and we use Bifrost to discover, inspect, and provision those servers with an OS. We then use Kayobe's Ansible playbooks to configure the OS and the services running on it, and to deploy Docker. We can also use Kayobe to configure the physical network devices, the switches and routers in the system, if you so choose. If you have a network administrator who won't let you do that, then that's perfectly fine as well.

Kolla and Kolla Ansible take over at this point. Kolla is a project for building Docker container images for many different OpenStack services. Kolla Ansible is another project which takes those images and uses a set of Ansible playbooks and roles to deploy them. Kolla Ansible can scale out to multiple hosts and provide you with a highly available control plane configuration.

Of course, let's not forget monitoring and logging. We tend to use Monasca, an upstream OpenStack project. It provides metrics, alarming, and notifications, and it also allows you to aggregate logs, import them via an API into an ELK stack, and process them. It's a multi-tenant solution, meaning you can monitor and aggregate logs from both the control plane and the workloads running on the cloud. And it's scalable: it's built around a Kafka message queue, so you can scale it to pretty high data rates.
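To make the version-controlled YAML idea a little more concrete, here is a minimal sketch of the kind of Kolla Ansible globals.yml fragment you might keep under version control. This isn't taken from a real deployment: the values are illustrative, and the availability of individual options (enable_monasca in particular) depends on your Kolla Ansible release.

```yaml
# Hypothetical fragment of a version-controlled Kolla Ansible globals.yml.
# Option names are standard Kolla Ansible variables; the values are examples only.
kolla_base_distro: "centos"              # base distro for the container images
openstack_release: "rocky"               # which OpenStack release to deploy
kolla_internal_vip_address: "10.0.0.250" # VIP for the highly available control plane
enable_haproxy: "yes"                    # load balancing across the control plane hosts
enable_monasca: "yes"                    # monitoring, logging and alarming
```

In a Kayobe-managed deployment you would normally express this in Kayobe's own YAML configuration, which it then passes through to Kolla Ansible, rather than editing globals.yml by hand.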
At this point, we use Kayobe to install Kolla on the seed. We build our container images there if we want to build them locally, or you can just pull them down from Docker Hub if you prefer not to. You then install Kolla Ansible on your laptop, and we use Kolla Ansible, via Kayobe, to drive the deployment of the container images into containers on the control plane hosts. This includes Monasca, which provides our monitoring and logging system.

Once this is all done, we've got a fully functional OpenStack cloud, but it doesn't really do very much yet. This is where we have not so much a project as a pattern. In this case we'll call it SSC config, but typically it's the name of the project plus "config". It's a set of Ansible playbooks and roles, and we use it to configure the project resources for a cloud. That includes projects, users, roles, images, flavors, networks, subnets, and routers, all defined declaratively in YAML and applied to the APIs using Ansible. There's a cookiecutter repository, so you can stamp these out for different clouds, and there's really not much to them. The core logic is housed in Ansible roles that we share on Ansible Galaxy, so anyone is free to use them.

Once you've got project resources registered, there's still not really any useful work going on. So we have a similar repository following a similar pattern: SSC appliances, in this case Super Science Cloud appliances. We use this to create clusters running on OpenStack. For clusters of virtual machines or bare metal servers, we use Heat stacks to define those clusters. For containers, we typically use Magnum. We have Ansible roles to drive the creation of those clusters, and we then use Ansible to do the post-configuration that turns them into useful clusters: installing packages, configuring user accounts, and configuring them to mount storage shares. We have various different appliances in these playbooks to support things like a Slurm batch queue, a Dask processing engine, Redis, and BeeGFS storage. They're all written using Ansible. Some of the roles are included within the appliances repository; others are similarly shared on Ansible Galaxy.

So now we've got our cloud. We've got SSC appliances installed locally, we've got the control plane running Heat, Magnum, and other OpenStack services, and we can use this tool to deploy various workloads, including those on the right-hand side. There's a common theme here, if you haven't spotted it yet: it's Ansible. We really do use Ansible from top to bottom to provision our clouds and to automate everything.

Taking these project names and applying them back to the diagram we started with, which you were supposed to forget: we have the physical infrastructure managed by Kayobe, with its configuration, and Bifrost. We have the control plane managed by Kolla and Kolla Ansible. We have the OpenStack resources defined by the SSC config repository, and the user workloads defined by the SSC appliances repository, with Monasca on the right-hand side providing monitoring and logging.
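To ground the config pattern just described, here is a minimal, hypothetical sketch of how project resources might be declared and applied with Ansible. The module names come from the openstack.cloud collection; the cloud name "supercloud" (an entry in clouds.yaml) and the resource values are purely illustrative, not taken from a real SSC config repository.

```yaml
# Hypothetical sketch of the "<project>-config" pattern: project resources
# declared in YAML and applied to the OpenStack APIs with Ansible.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Ensure the science project exists
      openstack.cloud.project:
        cloud: supercloud            # assumed clouds.yaml entry
        name: astronomy
        description: Radio astronomy pipelines
        state: present

    - name: Ensure a compute flavour is registered
      openstack.cloud.compute_flavor:
        cloud: supercloud
        name: compute.large
        vcpus: 16
        ram: 65536                   # MiB
        disk: 40                     # GiB
        state: present

    - name: Ensure the external network exists
      openstack.cloud.network:
        cloud: supercloud
        name: external
        external: true
        state: present
```

And in the same spirit, a hypothetical sketch of the appliances pattern: a Heat template defines the cluster, and Ansible drives its creation before post-configuration turns it into something useful. The template path and parameters here are assumptions for illustration only.

```yaml
# Hypothetical sketch of the "<project>-appliances" pattern: create a cluster
# from a Heat template, ready for Ansible post-configuration.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Create (or update) the Slurm cluster stack
      openstack.cloud.stack:
        cloud: supercloud
        name: slurm-cluster
        template: templates/slurm-cluster.yaml   # hypothetical Heat template
        parameters:
          node_count: 8
          image: CentOS-7
          flavor: compute.large
        state: present
```

In the real appliances, further plays would then configure the created instances: installing packages such as Slurm, setting up user accounts, and mounting storage shares.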
I realise there's a lot of information here, so if anyone does want to know more about any of the tools mentioned in this talk, I've put the slides on SlideShare and provided various links in the slides after this point. There are many on Kayobe, including a bit about its architecture, on Bifrost and Kolla, and links to real instances of these config repositories. The first one here is for the Square Kilometre Array; the second is for a cluster at Cambridge University that is the evolution of Darwin. The same applies for the appliances project. And there are links to the Galaxy roles, which you may find useful, that we have shared on Ansible Galaxy. Thank you very much.