Testing. Good afternoon. How are you doing? So it's after lunch, so the only rule is it can't be the quietest session of the day. Otherwise, you'll hear snoring, and that's not good. All right. Well, my name is Dean Henrichsmeyer. I work with the Landscape Engineering Group at Canonical, and we're responsible for Landscape, our systems management tool, and within that is the OpenStack Autopilot. The OpenStack Autopilot is our tool that manages the lifecycle of your OpenStack cloud, from deployment on through over time. So that's what I'm here to talk about today. I thought I would go through just the basics. Some of you've probably seen demos or screenshots, or downloaded it and tried it yourself. So I'll talk about what it is, why it exists, and what we'll be doing with it going forward. So first, let's talk about what it is. The easiest way to do that is to show you. So, right quick, I'll just show you briefly. We'll kick off a cloud deployment so you can see how it works. We'll go into the details here in a little bit, but you can see I've got some resources registered and a checklist that's ready to go. So we'll just click Configure here. If I can get my mouse, there we go. And I'll just pick some components. Just KVM, Open vSwitch. And you'll see there's some pre-populated network information in here. We'll explain why that's pre-populated in a little bit, but it makes it much easier to install and set up, and makes sure you don't typo things and end up with clouds that don't actually work in practice. We'll do Ceph and Ceph. And then we've selected components. Now what we do is choose the resources that are available. So you can see I've got a number of physical zones available here to choose from. These are pulled directly from MAAS, which I will get to in a little bit. So I'll just pick these nodes, and you'll see here on the right, it says that I need at least three machines to build it. 
I need at least four machines for HA. The number of machines you need obviously depends on the type of components that you pick. In this case, Ceph for both block and object storage means I don't need as many nodes as I otherwise would. So we'll just add some machines here. Let's add some more. That is not enough for a fun cloud. Okay, so there's nine nodes, and you'll see here, because I have enough machines and I have enough of them on both networks that I can have Neutron HA, I've got a highly available cloud. So I'll click install, we'll let that go, and then we'll get into the meat. That will take a while. So one of the key things about the Autopilot is that OpenStack has progressed over the last couple of years. It's gotten easier and easier to deploy. So you've got lots of people trying it, whether it's in labs or in enterprise situations or in various environments. One of the challenges that we've seen is people will deploy OpenStack and they think that once they've done that, they should have something like Amazon right in front of them. So they're ready to go into Horizon and launch an instance, but they're an admin user, and so they're seeing options in Horizon they don't understand. They're trying to launch things on a network that's not meant to have instances on it, and there's lots of confusion. So one thing we do is not just deploy it, but set it up to be consumable. So things like putting images in Glance automatically when you deploy. We prefer deploying workloads with Juju on top of the cloud, so we'll add the metadata and simplestreams data necessary to bootstrap right away and deploy workloads. We also set up the first project and user. Not so much because it's a critical thing that you have to do, but people appreciate it as a template to know: okay, if I'm going to create a user because I have real customers, what's a good template for doing that? What are reasonable quotas? What should it look like? 
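The minimum-machine math shown in the installer can be sketched roughly like this. This is an illustrative toy, not the Autopilot's actual sizing rules; the component names and unit counts in `MIN_UNITS` are assumptions for the example.

```python
# Toy model of the installer's "you need at least N machines" check.
# Counts here are illustrative assumptions, not the real Autopilot rules.
MIN_UNITS = {
    "kvm": 1,    # at least one compute host
    "ceph": 3,   # Ceph typically wants three storage hosts for replication
    "swift": 3,  # a separate Swift cluster would want its own three nodes
}

def min_machines(components, ha=False):
    # Services can be co-located, so the floor is the largest single
    # requirement among the chosen components, not the sum of them all.
    base = max(MIN_UNITS[c] for c in components)
    # "HA plus one": keep one extra node so a single loss leaves you in HA.
    return base + 1 if ha else base

# Ceph for both block and object storage: 3 machines minimum, 4 for HA,
# matching the numbers shown in the demo.
print(min_machines(["kvm", "ceph"]))           # minimum build
print(min_machines(["kvm", "ceph"], ha=True))  # minimum for HA
```

Picking Ceph for both roles lets the same storage hosts serve both, which is why the demo's floor stays at three nodes rather than growing with a separate object-storage cluster.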
So it helps them consume their cloud in a way that they can share with their customers. So I'll just throw in a quick password here. Foo is my favorite password, just so you know. There we go. Hit save. So when the deployment is done, it will take care of bootstrapping that. So let's talk about what this is. The Autopilot is basically five main components. You've got Ubuntu and OpenStack. Those are core, obviously. That's what we're deploying. And then you've got the components that we use to deploy: MAAS, Juju, and Landscape. MAAS is basically the resources, and we'll get into a little more detail. Juju is the service modeling that we use to drive the OpenStack services and put them where we need them to be. And then Landscape. Landscape you could think of as the brains on top of what we're driving, with the intelligence to know the proper way of doing things. So first is MAAS. I'm sure some of you are familiar with MAAS. MAAS stands for Metal as a Service. It's a utility that's designed to let you treat bare metal more like cloud virtual machines. So you can have machines, you can auto-discover them, you can enlist them, commission them. MAAS will do things like boot a non-destructive image, discover all of the hardware, and categorize it, so you can organize by machine type and CPU, memory, disk size, you name it. The other thing that it lets you do is organize your infrastructure into what are called physical zones. These are important because oftentimes you have infrastructure... let's say you've got racks that are on different power. Let's say you've got different data centers, different parts of a building, or data centers across the country. You want to physically organize those things in a tool so that you can understand where things are and what's going on. It makes deploying workloads onto those resources much easier. 
So you can organize your machines into physical zones, and once you've got it organized the way you want, the other thing that MAAS lets you do is define networks. If you're deploying anything that's cloud-related and fairly complex, generally speaking you're going to need to define your networks, because how it's plumbed, how you have it hooked up, and how things are interconnected depends a lot on what you can do with it. So MAAS lets you define networks. The common scenario is you have a normal managed network where you have MAAS do DNS and DHCP, but you also have a public DMZ network that you want floating IPs on, completely independent of everything else. And so MAAS lets you define those networks in there, so that when you're deploying things that need access to those networks, that's easily discoverable. And so when you saw me pick the Open vSwitch component in the Autopilot, those networks, that information was pulled from MAAS. So I didn't need to remember, okay, which was the right network, what was the IP space of that network. Now, I could adjust the number of floating IPs I give out if I want. By default it's going to give me the maximum range that I can use. But it just helps prevent user error and mistakes, trying to remember everything and type things in. So once you have MAAS and you've registered it with Landscape, Landscape is able to drive deployments. The way that it drives those deployments is with Juju. I'm sure you guys have all heard about Juju, so I won't go into a lot of detail. But basically, Juju lets us model the OpenStack services with what are called charms. Those charms let us articulate what kind of service it is, what relationships those services have, how they interact with other services, and things like the proper way of scaling them out. We can encode that intelligence into a charm, which makes driving it, deploying it, adjusting topologies, scaling out, things like that, clean and efficient. 
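The service-and-relation model that charms encode can be pictured with a small sketch. This is a toy data structure, not Juju's real bundle format; the service names follow the real OpenStack charms, but the dicts and the `scale_out` helper are illustrative assumptions.

```python
# Toy picture of what a Juju model encodes: which services exist,
# how many units each has, and how the services relate to each other.
# This is illustrative, not Juju's actual bundle or charm format.
services = {
    "mysql": {"units": 1},
    "keystone": {"units": 1},
    "nova-cloud-controller": {"units": 1},
    "nova-compute": {"units": 3},
    "glance": {"units": 1},
}

# Relations say which services talk to which (cf. `juju add-relation`).
relations = [
    ("keystone", "mysql"),
    ("nova-cloud-controller", "mysql"),
    ("nova-cloud-controller", "keystone"),
    ("nova-compute", "nova-cloud-controller"),
    ("glance", "keystone"),
    ("glance", "mysql"),
]

def scale_out(name, extra):
    """Scaling a modeled service is a change to the model; the tooling
    then makes the model real (cf. `juju add-unit`)."""
    services[name]["units"] += extra

scale_out("nova-compute", 2)  # grow compute from 3 units to 5
```

The point of the model is that topology changes are edits to a description, not ad-hoc commands against live machines; the orchestration layer reconciles the description with reality.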
That's what we use Juju for. So the heart of it, in terms of what's driving that and making the decisions on how that happens, is Landscape. You could think of it as the brain of the OpenStack Autopilot. You have the resources that you give it, you have the models that you can move around and manipulate, and then you need some intelligence to know exactly how to model that. So that's why we built the Autopilot. What we focus on is not one size fits all. Organizations are different. Component preferences are different. What you want to do with your cloud is different. The hardware you have available, all of those things are different. And so saying, okay, you're going to use this component and this component, and that's it, go, doesn't work for most people. So our idea is to let you choose what works for you, take those choices, take the resources that you have, and make the best deployment that we can. So choose your own adventure. Choose your hypervisor, choose your networking and storage. You know, maybe I don't really care about object storage. I just want lots of block storage and some compute, so I want Ceph only. I don't care about Swift. Maybe it's the exact opposite. So, you know, whatever you want, we will drive. So once we did that, you can deploy, and that's great. The most important thing, really, in terms of using it in production, is HA. And HA is one of those things, I'm sure some of you have worked with HA. The way it works is you generally make assumptions, you make a plan as to how HA works. You deploy it, it looks like HA, until you unplug something. And then it turns out it's not really HA. Or worse, you unplug it and the cloud doesn't die, but it also doesn't work, which is just as bad, right? So what we strive for in HA is not just to deploy it so that, okay, I have multiple services in case one dies. 
So by default, we're HA plus one, so if you lose a node of a service, it doesn't really matter. You're still going to be in HA. If you lose two, you should pay attention, because you could be in trouble eventually. But the way it works is, when you lose something, not only should my cloud not die, it should still be up, but I should also be able to guarantee a level of service. It should work correctly. And the only way to verify that is to just start unplugging stuff, and keep doing that until you've exhausted everything and made sure that HA is really HA. So that's something we've focused on recently. And then the next obvious thing is scale out. It's great to deploy OpenStack, but oftentimes what happens is people will download the Autopilot, they'll install it, they'll grab five machines, they'll deploy OpenStack, and they're like, ah, this works. Well, I didn't use all of my machines, just five, and now they want to add the rest. So we've added the ability to scale out. We'll get into the details of scale out in a little bit, but it's more than just adding a machine to compute. It's more than just adding a machine and having some additional storage. Adding capacity to your cloud can change what the optimal topology should be. It can change where services are placed. How can I minimize risk now that I have more machines to spread everything over? And that's where the intelligence of the Autopilot comes in. So here's the heart of the matter. The Autopilot strives to make sure that the very best topology and design that you can have for your cloud is what is reality in your cloud. And that means handling change. It's one thing to deploy it with a fixed set of hardware. You know exactly what you want. You deploy it, that's great. But the reality is things change. Hardware dies. Your needs change. Hardware needs maintenance. You need to add additional hardware. Or the needs of your business change. So what do you do then? Do you tear down and redeploy? 
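The "HA plus one" idea described above can be written down as a small classifier. This is a hedged sketch: the `quorum` default of three and the status names are assumptions for illustration, not the Autopilot's actual thresholds.

```python
def redundancy_status(units, quorum=3):
    """Classify a service's HA state under the 'HA plus one' idea.

    With quorum + 1 units, losing one node is a non-event: you are
    still in HA. At exactly quorum you are in HA but should pay
    attention, because one more loss puts you in trouble.
    The quorum=3 default is an illustrative assumption.
    """
    if units >= quorum + 1:
        return "ha-plus-one"  # one node loss doesn't really matter
    if units >= quorum:
        return "ha"           # still HA, but you should pay attention
    return "degraded"         # below quorum: you could be in trouble

# Losing one node of a 4-unit service drops "ha-plus-one" to "ha",
# which is exactly the "lose one, fine; lose two, pay attention" rule.
print(redundancy_status(4), redundancy_status(3), redundancy_status(2))
```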
Ideally not. So let's talk through that. You pick some components. You choose some hardware. The Autopilot deploys OpenStack. It takes the set of resources and says, okay, given the components that you chose and the hardware you have available, here's the best way to place that so that you have the most efficient cloud that you can have. So we do that. It also monitors cloud state. This means two things. One is, from a user-facing perspective, we have reporting and utilization. What that means is we'll track, for example, CPU and memory utilization of your cloud. So it's nice to know that 80% of my compute nodes are running at 90% capacity. Maybe that's good. Maybe that's bad. It depends on how you want to run your cloud. We also do reporting on storage. So if you're using Ceph, for example, we'll tell you: here's the addressable storage space that you have available. Here's your total amount. Here's what you're using. And here's what you can actually write. It takes into account things like replication, so you actually know the data that you have and the amount of data that you can write. In addition to that, we keep four weeks of historical data. So we can say, okay, over the last four weeks, you've been writing data at this rate, so in three weeks you're going to have an outage unless you either change your pattern or add additional storage. That kind of information is useful just to know what's going on. Now, we don't interpret that data for you. What I mean by that is, I can't say that because your cloud is running at 80% CPU capacity, that's bad. Maybe it isn't. Maybe you prefer to run your cloud hot. You want the best ROI you can. You don't want machines running that aren't being used. And so running it hot is what you want. Or maybe for you, you burst, and so running at 80% all the time is a problem, because then I don't have room to burst. 
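The two storage numbers described here, writable capacity after replication and the "outage in three weeks" projection, reduce to simple arithmetic. A minimal sketch, assuming 3x replication and a naive linear extrapolation; the function names and example figures are illustrative, not Landscape's real reporting code.

```python
def writable_capacity(raw_tb, replicas=3):
    """Space the user can actually write, once replication is accounted
    for. With 3x replication, 90 TB raw is only 30 TB writable."""
    return raw_tb / replicas

def weeks_until_full(writable_tb, used_tb, tb_per_week):
    """Naive linear forecast from historical write rate (e.g. the four
    weeks of data the Autopilot keeps). Returns infinity if usage is
    flat or shrinking."""
    if tb_per_week <= 0:
        return float("inf")
    return (writable_tb - used_tb) / tb_per_week

# Illustrative numbers: 90 TB raw at 3x replication is 30 TB writable;
# at 21 TB used and 3 TB written per week, the outage is ~3 weeks away
# unless the pattern changes or storage is added.
usable = writable_capacity(90)
runway = weeks_until_full(usable, used_tb=21, tb_per_week=3)
```

The value of the report is exactly this: the raw total overstates what you can write by the replication factor, and the trend line turns "what you're using" into "when you'll hit the wall".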
And so we'll just present the data to you, so you know how things are running and can decide what you want to do about it. The second type of state that we monitor is the internal state of the cloud. As I mentioned earlier, the Autopilot determines the best way to lay out these services and orchestrate them across this hardware to have an efficient, resilient cloud. And in order to do that, we model that out and then keep track of what happens. So we have the deployment, but maybe a machine dies. Maybe a service has an issue. Any number of things can happen, as you know. And so we keep track of what that state is. We know the difference between what I wanted and what I actually have in the real world. So let's take a scenario. Let's say an admin decides, I want to add some additional capacity. That could mean, I found this machine I forgot to rack, and I want to add it to the availability zone that I have. Or maybe I got a whole new rack of hardware, and I want to add another availability zone to this region. And what happens is, when you take all of those resources and put them together, maybe there's a much better topology available given that amount of resources than the original one you have. Maybe that means I don't need quite so many admin services on one machine. That way, if I lose a particular node, my cloud is not thrashing around. Maybe I can ensure that I'm going to have more resiliency and more performance given any node failure. So what the Autopilot does is calculate that delta: given the resources I have, what is my desired topology? What's the best way of doing this? And then it calculates the difference between what I have and what I want. And then it orchestrates those models until the picture you want matches the cloud that you have. And that could be as simple as two steps. It could be as many as 25 steps. 
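The "calculate the delta, then converge" idea can be sketched in a few lines. This is an illustrative toy, not the Autopilot's planner: real convergence has ordering constraints and far richer placement logic, and the `plan` function and its step tuples are assumptions for the example.

```python
def plan(desired, actual):
    """Compute the steps needed to make `actual` match `desired`.

    Both are maps of service name -> set of machine ids the service's
    units should run on. The output is a flat list of (action, service,
    machine) steps; a real planner would also order them safely.
    """
    steps = []
    for svc in desired:
        have = actual.get(svc, set())
        for m in sorted(desired[svc] - have):   # missing units: add
            steps.append(("add-unit", svc, m))
        for m in sorted(have - desired[svc]):   # misplaced units: remove
            steps.append(("remove-unit", svc, m))
    return steps

# A new machine (4) arrived and keystone should move off machine 2:
steps = plan(
    desired={"ceph-osd": {1, 2, 3, 4}, "keystone": {1}},
    actual={"ceph-osd": {1, 2, 3}, "keystone": {2}},
)
```

Whether the delta works out to two steps or 25, the operator only states the desired end picture; the orchestration walks the cloud there.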
You know, deploying the nine-node HA cloud that I started off earlier, there are probably 120 individual steps that happen in order to deploy that cloud. So if I change that, you can imagine even a fraction of that: calculating what that means, how many steps do I have to take, and in what order? How do I do that without being destructive to my cloud? That's where the intelligence of the Autopilot comes in. So that's what we have today. HA and scale out are features that were not in the original Autopilot beta, but they will be coming out in the GA release next month. So what are we focusing on going forward? We're focusing on things beyond deployment. As I pointed out, deploying is one thing, but managing it over time is quite another. That's where the real need is. We'll start by having more choices. You know, choose your own adventure only works if you can actually choose from choices, right? I mean, choosing an adventure with one option is not really an adventure. So we'll add hypervisors, storage, SDNs, for all kinds of reasons. Depending on your needs, various components may be more important to you than others. Say people have a lot of Juniper kit in their infrastructure. If they build their cloud, they would love to have Contrail in there, because maybe it works best for the needs that they have. And the list is endless. There are all kinds of things that would make more sense for your institution depending on what your needs are. And sometimes it's not even so much a technology choice. It may be a relationship or an organizational choice. Just having those options and that flexibility is great. So beyond that, we're focusing on providing the flexibility while keeping the Autopilot intelligence, if you will, intact. One of those things is hardware roles. We have scenarios where the hardware for an OpenStack cloud comes from different budgets with different people. 
So the storage guys, they have their budget. They're going to buy some storage nodes. And compute has a different budget. They're going to buy some stuff. And they're fine if that's in a cloud, but they're not cool with there being anything on the storage node except storage. So they want to have the ideal topology within certain constraints. And so we'll support those constraints. The other thing is resource quotas. Having an Autopilot that can respond to change is only good if it has the means to do that. So for example, if a node dies and I'm not in HA plus one anymore, it'd be nice if the Autopilot could grab another node, put me back in HA plus one, and then say, hey, this node died. You should go take a look. But you're not at risk. You're okay. That only works if we have some spare resources to work with. So we're introducing something called resource quotas. Basically, you'll be able to say, okay, I've got 25 machines. I want you to build a cloud with them, with these components, but only use 20 of them. Keep the other five spare, so if something happens, you can heal my HA. Or let's say I reach a certain threshold, and I just want you to add some capacity. You can do that without having to go back and figure out, okay, how am I going to do this? Let's get this in there. Given the topology I have, where can I add this? What can I actually have this node do? That kind of thinking, you don't have to do. The Autopilot will take care of that. Node maintenance. This has a number of different meanings, depending on use cases. But it's the fundamental building block for doing the complex things involved in running clouds. So node maintenance can mean, for example... a hard maintenance case would be almost involuntary. Let's say you've got a Ceph node, and it catastrophically died. For whatever reason, motherboard, power, something, it's gone, it's not coming back. It would be great to say, okay, I acknowledge this is not coming back. 
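The resource-quota idea, build with part of the hardware and hold the rest back for self-healing, can be sketched as a tiny spare pool. The `ResourceQuota` class and its `heal` method are hypothetical names for illustration, not a real Landscape API.

```python
class ResourceQuota:
    """Illustrative spare pool: build the cloud with most of the
    machines, keep some back so the Autopilot can self-heal."""

    def __init__(self, machines, spares):
        # e.g. 25 machines with 5 spares: build with 20, hold 5 back
        self.spares = list(machines[-spares:])
        self.in_use = list(machines[:-spares])

    def heal(self):
        """On node failure, pull a spare into service. Returns the
        replacement node, or None if the pool is empty (in which case
        the operator has to be flagged instead)."""
        if not self.spares:
            return None
        node = self.spares.pop()
        self.in_use.append(node)
        return node

quota = ResourceQuota(machines=list(range(25)), spares=5)
replacement = quota.heal()  # a dead node's role moves onto a spare
```

With a pool like this, "this node died, but you're not at risk" is possible: the Autopilot restores HA plus one first and tells the operator afterwards.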
Let's have Ceph rebalance, and let's make sure that we're not at risk as much as possible. Or take that same node, but let's say it's fine and you just want to add more memory. So I want to take this node out, add some RAM to it, and put it in basically a soft maintenance mode. I want to say, I want to do something to it, but it's coming back. Ceph doesn't need to rebalance; the data will come back. There's no need to add load to the cloud that's not necessary. That's very important. It involves two things. It involves service evacuation. So you need to know what services are running on any node at any time and be able to evacuate those services. That doesn't mean kill them. That means make sure that what they're doing happens elsewhere, so that when I remove the node, it does not impact the functionality of my cloud. And the second thing is, obviously, if it's compute, you need instance migration as well. I need to be able to pull a compute node without worrying that I'm going to lose everything that's on there. So I need to be able to say, okay, I need to do something to this node; make it so that it doesn't hurt my cloud when I remove it. Having that node maintenance feature allows for more complex operations like controlled cloud reboots. The story, I mean, I've probably heard it 50 times: I deployed OpenStack. The next day, I had a kernel update. Now what? Right? Like, am I going to take it down? I'm going to do this reboot and hope it all comes back up. Sometimes I can do it and it works. Sometimes I do it and it doesn't. Am I going to get lucky this time? That's the kind of conversation people have. And so what ends up happening is you have that scenario: you deployed OpenStack, a new kernel came out, and five months later you still have that kernel that you should really reboot for. I mean, that's the nature of the beast. 
And so having things like node maintenance, and being able to guarantee services are not on particular nodes, allows you to do things like controlled reboots. You can reboot your whole cloud without going into the risk mode of, okay, I'm rebooting something that's live and supposed to be serving something. In addition to controlled cloud reboots, there's also just the acknowledgement that things change: architecture optimizations. That can be anything. New versions of OpenStack come out with features that you want to take advantage of. Let's say, for example, you've got compute nodes and you've got networks set up in such a way that they can do north-south traffic. Maybe you want to do distributed virtual routers on your compute nodes. Maybe you want to share that load instead of having everything go through a gateway. We acknowledge that kind of thing is going to come up, and we want to build support for it so that as things change, we can evolve and allow those options to be used. Also, the ideal topology of three months ago may be very different from the ideal topology three months from now. It depends on how OpenStack matures, what features are available. It depends on your hardware, your resources. It could be that as your resources change, that topology needs to change as well. You want something that's going to grow with you. OpenStack is a moving target in a good way, in that as it's growing and evolving, it's getting better and better. And the choices are getting greater, which is both good and bad. I mean, it's good if you have a bunch of software architects to figure out whether I actually want to use that or not, or whether it works as advertised. Is it actually production ready or is it not? I understand it's maybe experimental in Juno, beta in Kilo, kind of okay in Liberty, but how do you know that? 
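The controlled-reboot procedure built on node maintenance, evacuate, reboot, restore, one node at a time, can be sketched as a loop. The function names (`evacuate`, `reboot`, `restore`) are hypothetical placeholders for the operations described above, not a real API.

```python
def rolling_reboot(nodes, evacuate, reboot, restore):
    """Reboot nodes one at a time so the cloud stays serviceable.

    For each node: move its services (and instances, if it's a compute
    node) elsewhere first, then reboot, then let it take load again.
    The three callables are hypothetical hooks for those operations.
    """
    rebooted = []
    for node in nodes:
        evacuate(node)  # relocate workloads; never just kill them
        reboot(node)    # e.g. to pick up a long-pending kernel update
        restore(node)   # node rejoins and can take load again
        rebooted.append(node)
    return rebooted

# Trace the order of operations with simple logging callbacks:
log = []
done = rolling_reboot(
    ["n1", "n2"],
    evacuate=lambda n: log.append(("evac", n)),
    reboot=lambda n: log.append(("reboot", n)),
    restore=lambda n: log.append(("restore", n)),
)
```

The essential property is the ordering: no node reboots while it still holds live services, which is exactly what removes the "am I going to get lucky this time" gamble.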
So we guarantee that clouds deployed with the Autopilot are going to use things that we have tested and know work in practice, not just in theory. And then, last but not least, upgrades. OpenStack seems to be an unusual corner of enterprise software. Most enterprises deploy software, and if it works, they prefer not to touch it for as long as possible. OpenStack seems to be the exception to the rule. People don't really want to do that with OpenStack. For whatever reason, maybe it's the nature of how quickly OpenStack is moving and evolving, and maybe their business needs are evolving as well. They want to try new things, or they're afraid of being left behind. They don't want to be stuck on any one version. Some companies have plenty of resources. They can stand up parallel clouds, try new versions, test things out, migrate stuff over; life is good. Others don't have that luxury. They have a small cloud, or a medium-sized cloud. They don't have spare hardware, and they would like to update that cloud. So when the Autopilot goes GA in July, we will be releasing with Kilo. That would be Kilo with HA and the ability to scale out and choose your options. And then by Liberty, we will allow you to upgrade that installation from Kilo to Liberty. You won't have to figure out, okay, if I update this, in what order, what will break? How do I minimize the chance of something going wrong, of reaching a point of no return, where now I'm stuck, I'm not on an old version, I'm not on a new version, and I don't have a working cloud? We'll manage that upgrade process for you. That is the direction that we're going. 
Not towards just throwing services out there and having an OpenStack that you can fire up an instance on, but having something that is optimally deployed and that you can manage throughout the lifecycle of your cloud, whether it's because of actions you don't have control over, like things dying, or actions you do have control over: adding capacity, changing things around, doing node maintenance. All the things that happen when you have to use something over the long term, that's what our focus is. That is it. Let's take a look back at the installer and see where we're at. So we are at 97%. You can see that some of the activities we're doing are things that are not strictly necessary for deploying OpenStack (the Neutron stuff obviously is), things like adding images to Glance. As I mentioned, the physical zones in MAAS are automatically mapped to availability zones. So as I picked those machines from physical zones, the Autopilot takes care of making sure that they're logically organized into availability zones, so I can target them. And then we'll bootstrap the cloud with an initial user and project, so that it's ready to consume, and the administrator doesn't have to be a seasoned OpenStack admin. They have templates for what they can do for additional users, so they can give customers what they've just deployed. So that should... we'll see how long this takes. This operation, as I mentioned, takes about 130 activities when you spread them all out into what you would do for an HA cloud with this many nodes, if you broke it down into the steps you would need to take. 
And so just knowing that you don't have to know what those steps are, or the right order in which to do them, or that those steps may change depending on the kind of hardware you have and what your network is... all of that is knowledge that you either have to have, have to hire, have to learn, or you can use the Autopilot. Questions? Yes. Commission them in MAAS. So you PXE boot machines, and MAAS will catalog them and organize them. You can organize them into physical zones, or not, it's totally up to you. And then you register MAAS with Landscape. It automatically talks to MAAS, pulls all the hardware information, so you can see the machines in Landscape, pick them however you want, and it just goes. Yes. That is correct. We prefer you juju deploy Landscape, but yes, you can apt-get install it as well. Yes, you get what you see. There are instructions, which I will just... there's an easy-to-find URL right there where you can get it. It runs on physical nodes, and it's free. Try it out, bang on it, use it however you want. If you run into issues, there's automatic bug reporting; let us know. Our goal is to make it better. One of the challenges in making it better is the varied hardware and the varied ways people deploy things. So it's great to have feedback on that, because we can make it more and more resilient. Yes. Sure, absolutely. So there are two schools of thought there. One is, under the hood it's all open-source tools. It's all readily available stuff. So you can absolutely do that yourself. You can deploy a service out there. The environment is available to you. You can manually manipulate it. But here's the cost. The cost is, obviously, if you do whatever you want, it's not an autopilot. Do you know what I mean? We can't guarantee our ability to do complex operations on environments that we don't have control over. I mean, there's a cost. 
You have the flexibility to do that, but you will lose some of the assurance you get with an autopilot by doing that. So... Okay. So when the new version comes out with scale out, the ability to add machines, you can add a machine to a cloud that you have today. What we will not guarantee is the ability to upgrade what's out right now, which is Juno, to Liberty, for example. We're going to start with a GA release on Kilo. Does that make sense? So you can add capacity to a cloud you have deployed today when you get the new version. It comes out in July. Yes, correct. No, you can upgrade from Kilo to Liberty. The one that ships GA will be Kilo. From then on, we'll support OpenStack upgrades. The one right now, the one that's been out for about a year, is Juno. Correct. That's correct. So we do not support putting all the services on a single machine, like a DevStack type thing, and I'll qualify this in a minute. Our intent is that you have something that's production ready, with the ability to scale out. That being said, if you have a beefy machine and you want to have a ton of VMs on there, KVMs for example, and you want to register them in MAAS and deploy OpenStack on them, by all means you can do that. Basically create KVMs and register them in MAAS as virtual machines. You need enough virtual machines to match the minimum requirements. Correct. So we have two main topologies that we use at the moment. We have one which seems to be popular with very private clouds. They take a big private network, a class B or something, and they'll carve it up for OpenStack. They'll have management and infrastructure on part of that, and they'll reserve part of that IP space for floating IPs, and they'll run their floating IPs there. We support that, and that's fine. 
The most popular one, definitely, is where you have a separate network that you want your floating IPs on, and that's, you know, firewalled off and controlled. Those are the two topologies that we support today. Correct. Yes. No. If the Autopilot is not connected to your OpenStack, your cloud does not die. However, it does mean that the Autopilot cannot drive it. It cannot respond to things. It cannot give you up-to-date monitoring information, that sort of thing. But it is not the case that the cloud depends on it: you can deploy with the Autopilot, turn the Autopilot off, and your cloud is fine. MAAS, yes, and that's it. You need to deploy MAAS, commission your machines, and then the Autopilot can do the rest. Yes. Another one, actually. It depends. So that's not the way it works. We will change the topology based on the resources you give us. Our idea is to minimize the risk of you losing a node. So what you give us will determine, for example, how many of the services are co-located, if they are at all, on a single machine. We certainly want to make good use of the hardware. So we've got services in containers, and we do things appropriately. But to what level they are co-located will depend on the resources you give us and the best topology we can determine based on those resources. Yes. So you're talking about host aggregates or scheduling or... Right now we do not support that. Now, you could do that inherently by organizing those machines into a particular zone and deploying that way; they would be grouped, and then you could target that availability zone, which would essentially give you that. But in terms of tuning Nova to put things exactly where you want and how you want for specific nodes, we do not support that. So you could get into trouble, depending on how you made those tunings. So we deploy with the OpenStack charms. 
If you made those tunings through the configuration options of the charms — which we would consider the correct way of doing it, because you've got these services modeled, so if you make changes, you make them in the model — then you're okay. If you do things outside of that model, we can't guarantee anything, because we don't know about them. Yes. So he asked: if you've got one cloud in Atlanta, one cloud in Montreal, and one in another city, can one Autopilot control them all? Today, no; technically, yes — and I'll explain what I mean. What needs to happen is that Landscape needs to be able to talk to the MAAS servers that hold the resources for those clouds, and today we limit that to one MAAS server. We will support multiple MAAS servers in the upcoming release, so then you will be able to do that. Now, you would need to ensure connectivity between all those points. That may or may not be trivial, depending on your setup, but you can also have a separate Autopilot for each of those clouds if you want. Okay. So we have our cloud here, which is a baby cloud, so it has no utilization whatsoever. But you can see the components that we deployed and the basic hardware summary, and over time you would see those graphs fill in. If you picked Ceph, for example, you would have the storage allocation broken down by the different storage types, so you know which storage is running into trouble and which isn't. And then we have some helpful next actions: things like the credentials of the first user that we created — not the admin user, although the admin credentials are also available — for that first bootstrap user. Some helpful things like an OpenStack RC file so you can run Nova API commands — just things that people want to go look for and don't know exactly how to generate, to help them get started in consuming the cloud. So, everything in Landscape is drivable via the API. We do — how do I answer this one? So, yes and no. We do have APIs that we use to drive the Autopilot.
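For context, the OpenStack RC file mentioned here is just a small shell fragment that exports credentials so command-line clients can reach the cloud. A typical Kilo-era RC file looks roughly like the following — the endpoint, tenant, user, and password are placeholders for illustration, not what Landscape actually generates:

```shell
# Hypothetical example of an OpenStack RC file; all values below are
# placeholders. Source this file, then run Nova/OpenStack CLI commands.
export OS_AUTH_URL=http://10.14.0.10:5000/v2.0
export OS_TENANT_NAME=first-project
export OS_USERNAME=first-user
export OS_PASSWORD=changeme
export OS_REGION_NAME=RegionOne
```

After sourcing it (for example, `source openstackrc`), commands like `nova list` pick the credentials up from the environment.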
We have not published those yet, but there's no reason why not. Our principle is that anything you can do in the Landscape UI, you should be able to do with the API, and the Autopilot will not be an exception. Yes, sir. What kind of flexibility do you have on the HA configuration? What type of flexibility do we have in the HA configuration — is that what you said? What do you mean by flexibility? So, basically, the way HA works is: if you have enough nodes, we're going to give you HA. You don't have a choice; that's the way it works. We think that if you're running something in production, especially with OpenStack, you really want it to be HA, and with the way virtualization and containers work today, you'd be very hard pressed to have a really good reason not to do that. Since it's an autopilot, we're very opinionated about what we think is best, so if you give us enough hardware, we're going to give you an HA cloud. You can't say "HA plus one"; you can't say "I want to maintain a certain level of HA, HA plus X" — we don't let you define X today. Also, due to the nature of the requirements, today we are not availability-zone HA, meaning you can't unplug an entire availability zone and have everything be okay. There's additional complexity in doing that once you talk about storage — you need more than two AZs, and the complexity raises the barrier to entry — so today we guarantee machine-level HA. You lose any machine, you're okay; it doesn't matter what's running on it, we'll take care of that. But that's the way HA works today.
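The "enough hardware means HA, no choice" policy can be sketched as a tiny decision function. The thresholds below (three machines minimum, four for HA, machines on both networks for Neutron HA) come from the demo earlier in the talk; the actual Autopilot placement logic is more involved and this is only an illustration:

```python
# Illustrative sketch of the opinionated HA policy described above.
# Thresholds are the example figures from the demo, not a spec.

def plan_cloud(machines: int, machines_on_both_networks: int) -> str:
    """Return the deployment mode an autopilot-style tool might pick."""
    if machines < 3:
        # Below the demo's stated minimum: nothing to deploy.
        return "insufficient hardware"
    if machines >= 4 and machines_on_both_networks >= 2:
        # Enough nodes and network reach: HA is applied automatically;
        # there is no "HA plus X" knob for the operator to turn.
        return "HA"
    return "non-HA"

print(plan_cloud(9, 9))  # HA
print(plan_cloud(3, 3))  # non-HA
print(plan_cloud(2, 2))  # insufficient hardware
```

The point is that the operator supplies resources, not a topology: the tool derives the most resilient layout those resources allow.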
What about underlying mechanisms, such as Pacemaker versus...? We use Pacemaker and Corosync, yeah. I mean, the technologies — you know, we use Percona, yeah. So it's not — yes. HA plus one with four nodes would imply that you have a Neutron node with some admin services, plus storage, and three compute nodes with admin services on the compute nodes — not all of them, but enough to make effective use of the hardware. This is not going to end, is it? Yes, go ahead. Yes, so the Autopilot is a Canonical product. Up to 10 physical servers, it's free; anything beyond that, you need to contact Canonical. All versions, even the new one that comes out: 10 physical seats free, and beyond that you need to contact Canonical. We do, absolutely, yes. Not at the moment. Not at the moment — we are certainly amenable to adding as many things as we can. I mean, we want to make this a choose-your-own-adventure, so obviously we've got partners that we're dealing with who want to get into the Autopilot with their SDNs. We don't have other ML2 implementations right now, other than Open vSwitch. Correct. I do not. Yep. Okay — I'm sorry, one last question and then I've got to cut it. Go ahead. Okay, define "partially" — I'm not entirely sure what you mean by the second question; the first question I can answer. Landscape is the systems management tool for Ubuntu. One of the things it does is role-based access control: you can define roles and permissions for who can do what, and in what scope. Our plan is to extend that to the Autopilot as well, so it would be reasonable to have different admins for different functions, or one who has the ability to monitor and keep track of things but not make changes. We plan on integrating that into the role-based access control in Landscape. I think we're out of time. Thank you.