Okay. Welcome, everybody. My name is Adam Collard. I work for Canonical on the Landscape team. Today I'd like to talk to you a little bit about a reference architecture for deploying OpenStack, and about the Autopilot, which is a feature of Landscape for deploying that reference architecture on your hardware.

So what do we mean by architecture? Well, if we look back over recent years, it used to be that all you needed to do to deploy software was understand a single piece of software: it had some configuration files and it ran on one machine. But these days, with modern software like OpenStack, or machine learning, or big data systems, we have a full set of different services, and the key thing we need to know is how to put those services onto two machines, four machines, or 100 machines. That's what we mean by architecture: laying down those services on your set of machines, working out what goes where and what can be co-located with what.

The reference architecture that Canonical employs is built to scale from very small clouds to very large ones, from dev and QA clouds to production clouds running hundreds, if not thousands, of nodes. And it's designed to cope with failure. Hardware will fail; we know that. What we need to do is minimise the effect of that failure on the overall cloud.

So I want to talk you through a scenario where we have four servers. We want to put a cloud down on a minimum set of hardware, maybe for dev, proof of concept, playing around. First of all, we put Nova compute on each of those four machines, because we want the largest compute capacity available to our users and our customers. Then we also put storage on all of those machines as well, and whether that's Ceph or Swift, it doesn't matter: we want the largest amount of storage capacity too.
If we had dedicated compute nodes, we'd be wasting storage, and we'd be spending a lot of money on those compute nodes. If we had dedicated storage machines, then again they're very expensive machines, and we'd be wasting compute on that hardware.

In addition to the storage and the compute, we also need the control services. So let's take MySQL. We need to put MySQL down, so we put it on one of the machines. But actually we want HA, right? Because if we lose one of those machines, we don't want to stop operations and kill the whole cloud. So we put MySQL on three different nodes and minimise the risk of losing any one of them. We do the same with Keystone and nova-scheduler and all of the other services which comprise an OpenStack cloud.

But then we put them in containers. Why do we do that? Because we don't want all of these services contending for the same file system. Putting them into an LXC container, much like was discussed earlier if you were at James and Tycho's talk, gives us a secure, isolated little box to run our service in. And these are full machine containers: you can SSH into them, you get a full process tree, you can debug them and inspect their log files much like you would on any other real machine.

But we still have the problem that if we were to lose one of these machines, we'd lose a quarter of our cloud. And if we had some important services co-located, the MySQL leader, for example, running on the same machine as the leader for Gluster, say, then if we were to lose that machine, the failover that both of those services would need to do at the same time could get very messy.

So what happens if we add two new servers into the mix? We've grown the capacity of our cloud, so we want to put compute and storage down on this new hardware.
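The placement idea described above can be sketched in a few lines of Python. This is purely illustrative, not Landscape's actual code: every machine runs compute and storage on bare metal, while each control service gets three container units, each on a different machine, so that losing one node never takes out a whole service. The service names and the round-robin strategy are assumptions for the sketch.

```python
from itertools import cycle

# Illustrative subset of OpenStack control services; the real cloud has more.
CONTROL_SERVICES = ["mysql", "keystone", "nova-scheduler"]
HA_REPLICAS = 3  # three units per control service for HA

def place(machines):
    # Bare metal gets compute and storage everywhere, for maximum capacity.
    layout = {m: ["nova-compute", "storage"] for m in machines}
    targets = cycle(machines)
    for service in CONTROL_SERVICES:
        # Pick three *distinct* machines to host this service's containers.
        chosen = set()
        while len(chosen) < min(HA_REPLICAS, len(machines)):
            chosen.add(next(targets))
        for m in chosen:
            layout[m].append(f"lxc:{service}")
    return layout

layout = place(["node1", "node2", "node3", "node4"])
```

Because the cycle keeps advancing between services, the container units end up spread across all four machines rather than piling onto the first three.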
But then we can increase the overall resilience of our cloud by moving some of the services, some of the units, from the existing hardware to the new hardware. This spreads the load, and spreads the risk of failure, across the whole set of hardware.

This also comes into our performance story. You get the best overall performance from your cloud when every single machine that forms part of it is humming. It's not screaming, trying to keep up with demand it can't meet because it's blocked on CPU or I/O or network or whatever; and it isn't just sitting there idle, doing nothing, because then it's an expensive waste of the money you've laid out for that machine, for your hardware, for your cloud. By putting the control services across the full set of machines in your cloud, you make every component of that cloud contribute to the overall throughput and share the overall demands on the cloud.

And it should be simple. We want to make OpenStack available to everybody, and we want an easy way of exposing it to our end users. That's why we made the Autopilot.

So I'm now going to switch over, with a bit of luck, and show a demo of the Autopilot. As I said, the Autopilot is a feature of Landscape. It uses both our Metal as a Service (MAAS) technology and Juju, our modelling and orchestration technology, to deploy OpenStack. This is our landing page. We do some checks with MAAS to make sure that we've got enough hardware to deploy an OpenStack cloud and that we've met the requirements. Then, through a simple clicky wizard, we can select our hypervisor and our SDN. The network information here has been pre-filled based on information entered into MAAS, and the hardware information has been discovered by MAAS about the particular machines that will form part of the cloud.
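The rebalancing step just described, moving units onto newly added servers so load and failure risk spread evenly, might look something like this sketch. The greedy busiest-to-idlest strategy and the helper names are my own assumptions, not how Landscape actually implements it; it works on the kind of layout the previous example produces, where container units are prefixed with `lxc:`.

```python
def rebalance(layout, new_machines):
    """Spread container units across old and new machines (illustrative)."""
    for m in new_machines:
        # New hardware gets compute and storage first, like everything else.
        layout.setdefault(m, ["nova-compute", "storage"])
    while True:
        busiest = max(layout, key=lambda m: len(layout[m]))
        idlest = min(layout, key=lambda m: len(layout[m]))
        # Only move container units, and never co-locate two units of the
        # same service on one machine.
        movable = [u for u in layout[busiest]
                   if u.startswith("lxc:") and u not in layout[idlest]]
        if len(layout[busiest]) - len(layout[idlest]) < 2 or not movable:
            return layout
        layout[busiest].remove(movable[0])
        layout[idlest].append(movable[0])
```

Each move strictly narrows the gap between the busiest and idlest machine, so the loop terminates with every node carrying a similar share of the cloud.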
Then we can choose both our block and object storage; I'm going to pick Swift and Ceph. And then we get down to the machine view. Here we can see all of the hardware registered in MAAS that we can add to our cloud. We get the characteristics of that hardware: how much RAM is available, how much disk is available, how many cores, et cetera. I can also see that the machines are connected to the network.

One important part of our architecture, harking back to failure, is to record just once, in MAAS, where the common points of failure in our hardware are. So if we've got a couple of racks which share a power feed, or a network feed, or a cooling system, we can record that in MAAS and say all of these machines are in one physical zone. You define those zones once in MAAS, when you wire up the data centre, and they're reused throughout the whole stack. What that means is that the failure scenarios arising from the physical wiring and physical layout of your machines are exposed through availability zones in OpenStack to your users. So they can build an HA application on top of your cloud which reuses the information that you've already set up once in MAAS.

So I'm going to add another availability zone, which again is a separate physical zone in MAAS. Here I can see I've got six machines selected, which means I can do an HA cloud. One important thing: given enough hardware, the Autopilot, and the architecture in general, will deploy HA by default, so that you get a production-quality cloud.

So that's going to kick off powering on the machines, laying down Ubuntu on them, and then using the charms to pull in packages for the different services which comprise OpenStack. We're not going to wait for this to finish, because it takes a little while. But what I will say is that at the end we also want to give you a usable OpenStack.
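The zone idea above, record physical failure domains once in MAAS and surface them as OpenStack availability zones, can be sketched as follows. The zone names and the six-machine threshold for an HA deployment are taken from the demo; the function names are illustrative assumptions.

```python
def availability_zones(maas_machines):
    """Group machines by the physical zone recorded for them in MAAS.

    maas_machines maps machine name -> zone name (shared power feed,
    network feed, or cooling). Each zone becomes an OpenStack AZ.
    """
    zones = {}
    for machine, zone in maas_machines.items():
        zones.setdefault(zone, []).append(machine)
    return zones

def ha_capable(zones, minimum=6):
    # In the demo, six selected machines were enough for an HA cloud;
    # the exact threshold here is an assumption for the sketch.
    return sum(len(machines) for machines in zones.values()) >= minimum
```

A tenant who schedules one instance per AZ is then, by construction, spreading their application across real racks with independent power and cooling.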
So that means we automatically upload Glance images for 14.04 LTS and 12.04 LTS, set up security groups to allow SSH and ICMP access, and finally set up some information to easily allow Juju to be used to deploy workloads on top of that cloud. You can see more about how we facilitate Juju deploying services on OpenStack in a talk later by my colleague Rick Harding.

So now we'll jump to one which I prepared earlier. This is the dashboard that you see once a region, a cloud, has been deployed. We get some usage graphs, some trending data, to see how our cloud is being used, to anticipate outages, and to allow us to do capacity planning: to know when we are running out of block storage or object storage, or when our machines are screaming and really need some more compute capacity. We also get a summary of the existing hardware and some links to Horizon and to download the RC files.

But what I want to show you today in particular is Add Hardware. This initial deployment was with six nodes, but there are more machines registered in MAAS, which allows me to expand my cloud. Grouping by AZ, I can see here there's region1-1 and region1-2. These are two different AZs, again tied back to that zone information in MAAS. I can see I've got two machines already in the zone, already in the cloud, but I can add another one, and I can do the same for the second AZ, if I can hit it. Okay.

So that's doing exactly the same thing as the initial deployment: it's using Juju and MAAS to boot up the machines, install Ubuntu, and then install the software. The beauty of Juju is that it will automatically manage the relationships between the different components in the cloud, so these compute hosts will automatically show up as additional capacity in Horizon or in your nova command line. Okay, now I'm going to switch back to this one. Okay. What about the hardware?
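The post-deploy steps described at the start of this section can be approximated as a plan of CLI commands. To be clear, this is a hypothetical reconstruction of what the Autopilot automates, not its actual implementation; the image names are assumptions, and the elided flags (`...`) stand in for details I'm not asserting.

```python
def post_deploy_plan(images=("ubuntu-14.04-lts", "ubuntu-12.04-lts")):
    """Build an illustrative list of commands to make a fresh cloud usable."""
    # Upload cloud images into Glance so users can boot instances immediately.
    plan = [f"glance image-create --name {name} --disk-format qcow2 ..."
            for name in images]
    # Open the default security group for SSH and ping.
    plan += [
        "nova secgroup-add-rule default tcp 22 22 0.0.0.0/0",   # allow SSH
        "nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0",  # allow ping
    ]
    # Prepare Juju so workloads can be deployed on top of the new cloud.
    plan.append("juju bootstrap ...")
    return plan
```

The point of the sketch is the ordering: images and access rules first, so that by the time Juju is bootstrapped the cloud is already usable by people as well as by tooling.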
So we think it's important to use commodity hardware when building clouds. There's an economic sweet spot which changes over time, as new machines come out and new processor technology is released, but at the moment it's around a 2-socket 2U server. It's important to recognise that the economics of a private cloud matter: your users will have the choice to use either your private cloud or a public cloud, and if your OpenStack install isn't as economical as it could be, in terms of the hardware that you've bought, the services, and the head count that you've got allocated to it, then it's going to fail. So, commodity hardware: as I said, 2-socket 2U with approximately 16 gigabytes of RAM seems to be the current sweet spot.

Okay. I now want to talk a little bit about the upcoming features of Landscape. The install that I showed you earlier was installing OpenStack Kilo. With the GA release in two weeks' time we will update that to Liberty, and with 16.04 we will support Mitaka. From that 16.04 LTS Ubuntu with Mitaka onwards, we will support live upgrades within Landscape to keep your cloud up to date with new releases of OpenStack as they come out: so N to O to P, et cetera.

Okay. One of the new features in Landscape is an additional SDN. During the beta period we just had Neutron's ML2 reference plug-in using Open vSwitch, and with the GA release we are adding OpenDaylight as an additional SDN choice. And looking forward again to 16.04, we will of course support LXD as a hypervisor on your cloud.
So if you were at the talk earlier, just before the coffee break, you would have seen all of the great features of LXD and nova-lxd. That will come with the 16.04 release of Landscape and the Autopilot, to allow you to get very high density for your VM loads on your compute and not pay the virtualisation overhead that you get with KVM.

Okay. I've left lots of time for questions. So that's all I want to say today. Thank you very much.

Yes. Okay. So the question was: do you need an internet connection in order to install OpenStack using the Autopilot? Broadly, yes. There are certain sites that we download the images from, for example, and we download other parts of the charms and the packages. But with the GA release we have proxy support, so you can put it behind a proxy and get all of that information from there. So it doesn't need direct access, but it does require access to the internet.

Yes, exactly. So the broad steps of installing the Autopilot are: first install MAAS, which allows you to register your hardware on your LAN. You install MAAS onto one of those machines, be that a top-of-rack controller or some other machine that you've got available. That runs a proxy and caches apt packages. Then you install Landscape, and you would configure a proxy during that installation of Landscape, and it would use that going forward.

The Juju storage, did you say? Sorry. So the charms: during that installation process you'll bootstrap Juju, install it onto one of the machines in MAAS, and then install Landscape. All of that can go through a proxy, but yes, you will need one; there are features in Juju to set that proxy, and it will be reused throughout the stack.

Is your full set of services in containers? Right, so all of our services are in containers. The only things that aren't are Nova compute and the Neutron router gateway service.
Every other control service which forms part of the cloud is in an LXC container. So no, we don't use any KVMs as part of the control services, but obviously KVM is used for the workloads on the cloud.

Yeah, so the whole point of the Autopilot is to make it seem simple. Landscape gets the information from MAAS about the hardware and then makes an intelligent choice as to where to put things. So for example, if you were to install onto 100 nodes, it puts down three monitors, and then OSDs on all of the rest of the hardware, and we'll use all of the storage available beyond the OS disk to expose as block storage and object storage to the users, to the workloads.

So, if I've understood the question, you want to be able to customise the OpenStack that you've deployed using the Autopilot? Okay, so the question is: after you've installed a cloud and then customised it, could you confuse the Autopilot so that it doesn't understand what you've done? You could certainly, if you try hard enough, break it. It depends what sort of customisations you're doing. The technology that the Autopilot uses to add hardware and expand the cloud is Juju. Again, it very much depends on what you're doing, but if it's just plug-ins for Horizon, or changes to configuration files, it depends where on the spectrum that is: some of it would be overwritten, and you would be fighting against the Autopilot, and some of it would just sit alongside it.

So if you do want a customised version, then all of the charms and all of the ways that Landscape deploys this are open source and readily available. So if you don't want the strongly opinionated view that the Autopilot has, you can easily reuse the charms, deploy OpenStack, and then customise that.
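The 100-node Ceph example above can be made concrete with a small sketch. The choice of the first three machines for monitors and the assumption that the first disk on each machine is the OS disk are illustrative; the real placement decision is made by Landscape from the MAAS hardware data.

```python
def ceph_layout(machines, disks):
    """Illustrative Ceph placement: 3 monitors, OSDs everywhere else.

    disks maps machine name -> list of disks, first entry being the OS disk.
    """
    mons = machines[:3]  # three monitors keep quorum through one node failure
    # Every disk beyond the OS disk becomes an OSD, exposed to workloads
    # as block and object storage.
    osds = {m: disks[m][1:] for m in machines}
    return mons, osds
```

Three monitors is the classic odd-numbered quorum: one can die and the remaining two still form a majority, which is why the count stays at three even as the OSD fleet grows to 100 nodes.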
But yeah, you then don't get some of the benefits that we have with the system. Any other questions? Okay. I have some t-shirts to give out, if anybody would like an Autopilot t-shirt. And I also want to remind you that there are feedback forms on your chairs, or on a chair near you. Please fill them in; by doing so you get the opportunity to win some great Ubuntu swag. So please give some feedback, and thank you very much.

Oh, sorry. One more question? No. So with Landscape, with the Autopilot, you can deploy a cloud on up to 10 nodes for free, in perpetuity. If you want to expand your cloud or deploy on more than that, then you require some additional licences. Okay. Thank you very much.