So, we have a very nice group here today to help us understand what it is to deploy, maintain, and upgrade OpenStack. A lot of people have had this habit of saying: oh, upgrading OpenStack, that's impossible; deploying OpenStack, that's a nightmare. And I'm sure these nice fellows here are going to help us understand how this is not so true. They may have similar or different approaches, and this is what we would like to find out today. So, I'm Nick Barcet. I work for eNovance. I won't be speaking much; that's the rule when you propose a roundtable: generally you introduce, and then you let the others talk. So the first thing I'm going to ask everyone, one by one: if possible, introduce yourself and explain to us in less than two minutes what deployment methodology and technology you're using, and why you've chosen that one. Do you want to start, Juan?

Sure. My name is Juan Negron. I work for Canonical; I'm a cloud architect. We developed MAAS and Juju, and that's how we normally deploy OpenStack. We started with probably some of the same tools that some of you may use, but we decided to build an abstraction layer where we orchestrate the services, and we turned OpenStack into a set of charms, which are the modules that we use for Juju. We use those for deployment; they allow us to deploy the services individually and then connect them in a way that is intuitive, and that allows us to deploy OpenStack with a level of ease that I think is suitable for most of us.

Hi, I'm Tim Bell. My team is responsible for deploying the CERN private cloud. We're at about 25,000 cores at the moment, aiming for about 150,000 in 18 months. We deploy using a combination of Puppet and, since we're using a distribution that's derived from Red Hat, a lot derived from the RDO distribution.

My name is Boris. I work for Mirantis; I'm Chief Marketing Officer at Mirantis. As far as deployment is concerned, I think sharing a little bit of history is relevant, because we started out in the industry as a services company, and as such we had to deal with clients that wanted one-off builds of OpenStack with a lot of configurability built in. So we started doing deployments using Chef and Puppet; eventually we figured that Puppet was it. We focused on Puppet and built a set of Puppet modules for deploying a variety of OpenStack configurations, with a lot of configurability built in. And eventually we introduced something called Fuel. Fuel very much builds on the experience we've had building the Puppet modules. It still uses Puppet underneath, but it overlays a very simple, intuitive UI that enables you to go step by step and configure. You ultimately choose from fewer options, but you don't have to know Puppet nearly as well, or OpenStack nearly as well, to be able to do a deployment. You just go through a series of wizard-like steps to define the OpenStack flavor that you want to deploy, push the button, and boom, it goes. So that's Fuel.

I am Keith Basil. I'm the principal product manager for OpenStack at Red Hat. We have three tools in general. One is called Packstack; it's basically a proof-of-concept installer for RHEL derivatives: CentOS, Scientific Linux, et cetera. We have a tool called Foreman, which we support; our guys in Israel are leading the community around Foreman.
It's very similar to Fuel in the sense that there's a GUI, bare-metal provisioning, and the whole nine yards. Long-term, though, our strategy is to use TripleO. We recently introduced Tuskar as a code contribution to the TripleO project to do infrastructure awareness at scale for deployment. So using TripleO and Tuskar, you basically get a command and control plane for your entire cloud, you know, multi-hundred racks, whatever. That's kind of what we're targeting. The commonality between all three of those tools is that we use Puppet. So when we create a Puppet manifest or module (thank you, sir), it will work in Packstack, it works in Foreman, and we intend to carry that work over to TripleO as well.

My name is Rob Hirschfeld. I'm a senior distinguished engineer at Dell and founder of the Crowbar project, which was initiated in the very early days of OpenStack, because my team had been doing hyperscale deployments, working with some very large cloud and service providers and watching their DevOps and operations processes. We knew, as OpenStack was emerging, that what we needed to be able to do was create an operational environment for our customers that could cope with high-rate-of-change environments in a coherent way, based on DevOps principles, and that's what we built with Crowbar. It was very important for us to be able to do it from soup to nuts. We wanted to be able to take a server that had just been unboxed, plug it in, deploy it, set up RAID and BIOS configurations and the OS, build OpenStack up, and add in all of the operations infrastructure necessary to run a cloud, from DNS, NTP, and DHCP up, including Nagios and Ganglia, so that you could actually monitor the darn thing once you put it into production. We use Chef very heavily, so I guess I'm one of the few Chef people; we think that's really a sound operations platform for this, and then we built an integrated stack. It's all open source. We actually developed it live in the open, and we've been really excited about the community participation and about being a multi-vendor tool. We have a lot of community participation, including SUSE, which is very active with us in development; it's the basis of their Cloud 2.0 platform.

So I'm going to break the rules and ask a question instead. Can I get a show of hands: how many of you have installed OpenStack using DevStack? Probably me as well. How many of you are using one of the other tools mentioned? Cool, interesting. So my name's Joshua McKenty. I'm the co-founder and CTO at Piston Cloud. I led a team at NASA that did things that turned into OpenStack, so I've been deploying it for a long time, and I've used most of the tools that everyone else mentioned. We used Puppet at one point and then Chef at another point. We did Cobbler. We did some weird early MAAS-type things before MAAS was a thing. And at Piston, we wrote our own tools, called CloudBoot and Moxie HA, which are based on the Paxos algorithm. So rather than traditional CM, we deal with a distributed state machine. It's super, super opinionated and super prescriptive, so it's kind of the opposite of a deployment tool. It is a product; you get almost no choices at all. It only works in hyper-converged environments, really, although we're making it a little more flexible. And it uses an embedded OS that boots into RAM, so you actually don't even install an OS. Yeah, so it works really well for our customers, but it is not really a deployment tool. It's a deployment approach. Thank you, guys.
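[Editor's note: a minimal sketch of the distributed-state-machine idea Joshua contrasts with traditional configuration management. Every replica deterministically applies the same agreed-upon log of operations, so all replicas converge by construction; the consensus round that orders the log (Paxos, in Piston's case) is elided, and all names are illustrative rather than Piston's actual code.]

```python
# Illustrative only: replicas converge by replaying an agreed-upon log,
# instead of being reconciled toward a desired state by a CM tool.

class Replica:
    def __init__(self):
        self.state = {}    # e.g. {"nova-compute": "running"}
        self.applied = 0   # index of the next log entry to apply

    def apply_log(self, log):
        # Apply any entries not yet applied, in order. Same entries,
        # same order, same deterministic result on every replica.
        for service, action in log[self.applied:]:
            self.state[service] = action
            self.applied += 1

# Stand-in for the output of a consensus round (Paxos would order this).
agreed_log = [("keystone", "running"), ("nova-compute", "running")]

replicas = [Replica(), Replica(), Replica()]
for r in replicas:
    r.apply_log(agreed_log)

assert all(r.state == replicas[0].state for r in replicas)  # no drift
```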
So we'll start the other way this time. From your perspective, are there any barriers today to deploying OpenStack? Are there any complexities left? Is there stuff that we need to improve in the project so that it becomes easier to deploy OpenStack?

Yes and no. I think I'm probably more of a fan of TripleO than anything else as far as what's coming forward, which is the opposite of what I would have said a year ago. So I think TripleO is evolving in the right direction. There are two places where we see our customers get tangled up every time they do a deployment. No matter what, I mean, our product takes about 10 minutes. The network configuration of the top-of-rack switch, the VLAN tagging, and whatever they think they're doing for upstream gateways usually takes somewhere between 10 minutes and a month. And that ends up being that line between who's doing the cloud project and who runs traditional IT. It's always where you hit the network. One of the things where I think folks like Dell and other vendors have an advantage is when you control the hardware. Crowbar is a great example: it can be used for other hardware, but it works really well if you're using it with Dell servers. You can do things like configure the BIOS and configure the HBA. From a software-only standpoint, it's a little more tricky for us. We end up having to build partnerships with all of these individual vendors. And I think customers suffer, because looking at the ecosystem of choices, they have to understand that some of those have relationships with hardware vendors and some of them don't.

I totally agree. The reason we built the tool was because those things really are hard to do. And it kicks us at every single revision of hardware; we spend a lot of time certifying and recertifying that. And networking truly is one of those things, physical infrastructure networking. The common pattern for us is to show up on site, deploy our full OpenStack cloud in a couple of hours, tear it down, do it right, and leave the site while we're flying home. The customer usually tears the thing down and builds it a third time. And that's actually a really productive, powerful statement about how you build an operations environment. You automate it, you do treat it as cattle, even down to the metal and the studs and the racks. If you can treat those things like cattle, so that they're just systems that you push a button and deploy, that's how you make things work. As far as making OpenStack better, I think we still have to do a very good job treating version-to-version APIs and integration points like that. We've got to get into a position where we can step-wise migrate things over, and I think we're just at the very beginning of that.

Yeah, I would agree with Joshua in the sense that we see the long-term vision being with TripleO, because out of everybody up here on the stage, I think TripleO is the only community-sourced deployment and management tool that we see today. And as you'll see in the keynote tomorrow with Mark McLoughlin, there is a... sorry, Rob. Yes, yes, an OpenStack community. So it's forming. Correct, yes. But as far as being OpenStack-blessed, TripleO is where it's at. And we can argue that, but it is what it is. So, yes sir, yes sir. Yeah, I'm probably the only guy here that's not on a board, so I'm amongst sharks. Okay, we're the shark food. All right, so tomorrow Mark McLoughlin is going to do a keynote, and his title is Truth Happens.
So, by having things upstream and coalescing around things upstream, there's power and value in that from the open-source community. Long-term for Red Hat, we see TripleO and Tuskar as an extremely viable solution that we all can rally around, in terms of a framework that we can support. And the last thing that I would agree with is that the network is something that has to be solved. We at Red Hat are working on reference architectures where we have a traditional spine-and-leaf topology. People don't realize that network capabilities have come a long way. Most core fabrics that we see today are layer-three routed, fully switched, at wire speed. And they're agnostic in the sense that, by using traditional protocols, you can then come back and do L2-over-L3 configurations. And again, going back to TripleO as an OpenStack community project, we see it as an agnostic framework that can layer on top of that core fabric, give us a lot of flexibility in terms of deployments, and give the customer a lot of choice.

Now, I've got the microphone. So let me first, I guess, answer Nick's question, which is whether or not it's actually hard to deploy OpenStack. I get asked this question by everybody, ranging from techie guys to Gartner analysts to media, and I always give the same answer: it can be really easy to deploy, or it can be really hard to deploy. The thing about OpenStack is that there are so many configuration options, and ultimately, whether it's easy or hard to deploy depends very heavily on the flavor of OpenStack that you're deploying. If you're building a one-off, chances are you'll have to build a custom Puppet- or Chef-based framework to be able to actually deploy, manage, and scale your environment. But there are a number of OpenStack flavors that have evolved as common flavors used by customers. These are the flavors that we have actually baked into our solution, Fuel, and they make it extremely easy to deploy, to the point where you don't have to know anything about Puppet, you don't have to know anything about Chef. It's a complete kind of wizard-like next, next, next, okay experience, and bam, everything works. I agree that networking is the big sticking point; it is a problem.

So now, talking about TripleO a little bit. TripleO is an interesting thing. It is the officially OpenStack-blessed deployment program, but Fuel is not very much different, in the sense that we do all the Fuel development in the open also. We completely follow the OpenStack process: all the commits go through Gerrit gating, all the documentation is available on the wiki. Other than the blessed label, there is really no difference between Fuel and TripleO from the standpoint of how open it is. And it also is all written in Python. The final comment about TripleO: I don't know exactly where TripleO is going to evolve, but I think the first problem TripleO has set out to solve is really not the problem of deployment, but the problem of scaling. When you're scaling an environment, they have a very, very elegant solution where, instead of using something like Puppet and Chef and bootstrapping a bare-minimum machine from scratch, you actually create an image, a snapshot of the machine, and then you scale the images. Now that's great.
It's a beautiful, elegant architecture, because you remove a lot of the complexities pertaining to scaling, but in our opinion the biggest challenge is actually properly creating and testing the images. This is the part that TripleO has touched upon very lightly at this point, and I'd be curious to see how that will go.

I can tell you that it creates a lot of challenges if you're going to maintain images across multiple hardware variants. It unravels very quickly. I agree.

I absolutely disagree. Because we have diskimage-builder inside TripleO to help build images, and you can also provide your own toolset to build images. So when you deploy the image to bare metal, there's nothing stopping you from hooking that into Chef or Puppet, into an existing configuration management system. You can have the best of both worlds: you deploy your base image for very large changes, and use configuration management for the incremental changes. So it's the most flexible tool available today. And to your point about hardware, being Red Hat, we have a tremendous ecosystem of hardware partners. When we deploy RHEL with KVM, it's one image that's certified to run across many different vendors. So it's not an issue for us, because we are not a hardware company; we are a software company, agnostic, supporting many hardware platforms. I don't know how many servers you've installed, but we can talk about this. Let's take it offline. I think we've made you proud.

So, from the CERN perspective, we follow a public procurement model, which means that we send out specifications and the cheapest offer compliant with the specification wins. That means we can expect to get three different deliveries every year, each one of which is completely different. This presents a certain set of challenges in terms of a large-scale deployment: we don't know until a month before the hardware arrives exactly what's going to arrive. Therefore, the primary goal that we have for the deployment of our cloud is that we leave it to other people to explore the technologies, and we ensure that we follow the community in the implementations. What's great about this is that there can be good debates going on about different approaches, and there can be agreed approaches of one or two or three good ways of doing it. Then we try out each one, select one for a period, and after a certain length of time, switch to an alternative. So the aim for us is to ensure we don't end up embedded in one particular technology, but can switch relatively easily between them as the community evolves.

To answer Nick's question, is it hard or easy to install, and what can we do about it: we took a little bit of a different approach. I don't necessarily think that installing OpenStack is difficult. Configuring it, that's a different story, because the amount of switches and things that you need to configure for it to work the way you need it to in your environment can be overwhelming. So what are we doing to alleviate that problem? We created a set of charms. Everything that we're deploying nowadays revolves around Juju, and we create charms that encompass each service, each component of OpenStack. The expectation is that you juju deploy nova-compute, juju deploy swift, et cetera, and it will just work.
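[Editor's note: a sketch of the charm-based workflow Juan describes, driving the juju CLI from Python. It assumes a bootstrapped Juju environment with MAAS underneath; the charm and relation names are illustrative of the OpenStack charms of this era, not an exact recipe.]

```python
# Drive the juju CLI: deploy one charm per OpenStack service, then
# relate them so the charms exchange endpoints and credentials.
import subprocess

def juju(*args):
    # Thin wrapper so each step reads like the commands quoted above.
    subprocess.check_call(["juju", *args])

for charm in ["mysql", "rabbitmq-server", "keystone",
              "nova-cloud-controller", "nova-compute", "glance"]:
    juju("deploy", charm)

# Relations are what make the services "just work" together.
for a, b in [("keystone", "mysql"),
             ("glance", "mysql"), ("glance", "keystone"),
             ("nova-cloud-controller", "mysql"),
             ("nova-cloud-controller", "keystone"),
             ("nova-cloud-controller", "rabbitmq-server"),
             ("nova-compute", "nova-cloud-controller")]:
    juju("add-relation", a, b)
```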
And we've taken steps to document the necessary options that you need to change, to alleviate the overwhelming factor of configuring OpenStack. With that in mind, we've worked on MAAS to alleviate some of the issues that you have with provisioning hardware. At the end of the day, the stack of machines that you're deploying OpenStack on is utilized just like a cloud; we end up treating all the machines like cattle. Give you a machine, deploy nova-compute on it; give you another machine, deploy Swift on it; give you another machine, deploy Keystone, or bundle them. The steps that we have taken try to encapsulate all the necessary knowledge to deploy a particular service in a charm, and to make those configuration options easy enough to change, so you don't have to redeploy and redeploy and redeploy every time you change something. Thanks.

So I guess we've got a pretty good overview of the installation challenges; there don't seem to be many left, apart from the ones that sit outside the OpenStack scope. However, I heard one or two of you mention the problem of upgrading, and this is, from my point of view, kind of a key problem, because OpenStack evolves quite fast, right? We release every six months. So what is it that you recommend to your customers, and in your case, Tim, what is it that you use, in order to follow up, or not, with the different releases of OpenStack? Do you, like some have announced, continuously update so that you never have to upgrade, or do you do something different?

So we do two things that are very different from, I think, most of the other vendors, and then I'll give them a chance to argue about why they think I'm wrong. First off, we don't follow trunk. Our current commercial version, Piston OpenStack, is Grizzly, and it will be Grizzly probably for another four months. The reason is, right now Havana has a CVE, a security update, about once a day, and we publish a dot release for every security update. Most of my customers don't want to do an upgrade every day. It's a one-click deploy and it doesn't affect anything; it's a rolling upgrade behind the scenes, but you're still taking a node or two offline at a time and live-migrating VMs around, so you have reduced capacity while the upgrade's running. The larger the cloud, the longer that runs, because we really don't want to turn anything off. So we don't like putting a release out every day. Grizzly right now has a new CVE every three or four weeks, I think, roughly. That, to me, is about the right pace for security patches. It would be nice if it was never, but realistically, every three or four weeks a dot release with a little security patch, okay, fine. So we don't really think about the major releases as being the bigger problem. The bigger problem is: how do we build something that allows people to upgrade every three or four weeks without any of the users noticing? And that was a year's worth of engineering. And I think a lot of the core technologies... you have to be able to live-migrate your workloads. You have to have an orchestration system that crosses services. So this would be my question: how do you do this with Juju charms? I think it's possible, but I don't understand how. Yeah, at least, we've done it before. You've got to make sure that MySQL is stopped to apply a schema upgrade, which means all of the services that talk to MySQL have to be halted while that happens, but then they have to be restarted in the right order, because if you don't have Keystone running before you restart Quantum, Quantum freaks out.
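[Editor's note: a minimal sketch of the ordering constraint Joshua describes: halt everything that talks to MySQL, run the schema migrations while the database is quiescent, then restart in dependency order so Keystone is up before Quantum. The service names and shell commands are illustrative; each OpenStack project of this era had its own migration entry point.]

```python
# Illustrative control-plane upgrade ordering; not any vendor's tool.
import subprocess

# Start order matters: Keystone must be serving tokens before the
# services that authenticate against it come back up.
SERVICES_IN_START_ORDER = ["keystone", "glance", "nova", "quantum"]

def run(cmd):
    subprocess.check_call(cmd, shell=True)

def upgrade_control_plane():
    # 1. Stop consumers (reverse order) so nothing writes mid-migration.
    for svc in reversed(SERVICES_IN_START_ORDER):
        run(f"service {svc} stop")

    # 2. Apply each project's schema migration exactly once, while the
    #    database is quiet. (Command form is illustrative.)
    for svc in SERVICES_IN_START_ORDER:
        run(f"{svc}-manage db_sync")

    # 3. Restart in dependency order, e.g. Keystone before Quantum.
    for svc in SERVICES_IN_START_ORDER:
        run(f"service {svc} start")
```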
So there's some fairly clever orchestration there, and this is trivial at small scale, or seems trivial at small scale; the larger you get, and the more edge cases you have, the more insane it gets. And I mean, we had lived through this personally in other environments, doing it with traditional tools, and we decided to write a different tool, because I couldn't figure out how to do multi-server idempotency with Puppet or Chef or Ansible or Salt or Juju. And I still mathematically don't believe it's possible to do.

This is one of the benefits of us having a deployment infrastructure that we've been getting experience from for two-plus years now. One of the things in the next version of Crowbar, which is entering alpha right now, is that we actually took a computer-science approach to this: we based our orchestration model on a loosely coupled, late-bound infrastructure model, built on simulated annealing. So we actually used real computer science applied to ops, and we actually think we have a coherent way to handle it. It's still an incredibly hard problem; you have to be able to handle basically an iteratively goal-seeking algorithm that handles disruptive change within it. It's a really hard problem, and it's actually a pretty fun problem to try and solve, but it's hard.
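[Editor's note: a generic, textbook simulated-annealing loop of the kind Rob alludes to, applied to a toy service-placement problem. Nothing here is Crowbar's actual code; the cost function and names are illustrative.]

```python
# Textbook simulated annealing on a toy placement problem: assign
# services to nodes while minimizing violated constraints.
import math
import random

SERVICES = ["keystone", "glance", "nova-compute", "quantum"]
NODES = ["node1", "node2"]

def cost(assign):
    # Toy constraint: don't co-locate nova-compute with keystone.
    return 1 if assign["nova-compute"] == assign["keystone"] else 0

def neighbor(assign):
    # Perturb one service's placement.
    new = dict(assign)
    new[random.choice(SERVICES)] = random.choice(NODES)
    return new

def anneal(start, temp=1.0, cooling=0.95, steps=200):
    current = best = start
    for _ in range(steps):
        candidate = neighbor(current)
        delta = cost(candidate) - cost(current)
        # Always accept improvements; accept regressions with a
        # probability that shrinks as the temperature cools. This is
        # the iteratively goal-seeking behavior that tolerates
        # disruptive change instead of failing on it.
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            current = candidate
        if cost(current) < cost(best):
            best = current
        temp *= cooling
    return best

placement = anneal({s: random.choice(NODES) for s in SERVICES})
```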
Yeah, that's the same approach. So upgrades are very difficult, as I think we've come to agree. Some folks at Red Hat have done a tremendous amount of work; it's an evolving thing. There's been work with Nova Conductor; we try not to break the RPC API; we try not to break the database schemas. So we're very close to a community-based solution for making upgrades smoother between versions. We're not fully there yet, but collectively we are extremely close. It's a known problem, it's very complex, and we have a lot of folks working on it. Red Hat has quite a few years of experience in packaging up community work into something that we would call a product: as Josh mentioned, security errata, bug fixes, pulling down feature requests for customers from upstream, basically backporting. As a product, in terms of our upgrades, we tend to trail trunk in the interest of stability, because we have a 10-year track record of QE for our products. We tend to follow four to six weeks behind a stable branch release, and that becomes our product, fully supported throughout our entire ecosystem, with all of our vendors who support Red Hat. So we're probably a little bit behind everybody on this panel, but it's the thing that enterprise customers want in terms of stability and supportability throughout the entire lifecycle, and we hold hands with our customers through the upgrade process.

Oh, okay, so you want to say everything you said just now, only closer to the microphone? Okay. So, the upgrades problem: complex, unsolved. Here I would probably like to agree with Keith, and maybe take a little bit of a stab at the gentlemen there. I personally believe that the upgrade problem needs to be solved upstream; I don't think that anybody working off on the side, in a silo, trying to solve it, is a sustainable solution. And it's to some extent unfortunate for me to hear that you guys, founders of OpenStack with, you know, the first deployment tool for OpenStack, are kind of doing a pretty amazing thing off on your own. It sounds like you've used a lot of fancy terms there, computer science and stuff like that, but the upstream community would very much benefit from some of those insights. And one of the things that I do applaud about TripleO is that I think, and I sincerely hope, that the TripleO initiative is not just gonna, you know, solve the already largely solved problem of deployments, but will spearhead and push forward this long-ongoing, aching problem of actually conducting upgrades in a predictable way upstream.

I'm gonna push back, if we're taking roundabout digs. In order to do what we do, you have to be able to do live migration. If we could land the set of API changes that Red Hat rejected for live instance cloning, we would be able to push some of this stuff upstream. Sorry.

I have to say something about this upstream comment. There are significant StackForge Chef and Puppet cookbooks that are upstream, OpenStack-community supported and maintained things, and one of the things that Crowbar is doing is moving to those. I know there's significant motion in the Chef and Puppet worlds around what those are doing. I think it's completely bogus to say that there's only one upstream way to do OpenStack deployments when there's a flourishing community of multiple ways that are all upstream, all part of OpenStack's community. There's more than one way to skin this cat.

So we have taken the approach, initially when we deployed Diablo, of doing a teardown and a green-field rebuild. To be fair, with Diablo we never got it working enough to consider it a teardown; it was more of a just-give-up-and-move-on. Since then, Essex and Folsom, we tore down and rebuilt. We have the benefit of a user community that's very happy just to be re-instantiating environments. However, with Grizzly, we now intend to be doing an upgrade path. Basil mentioned that he was probably the furthest behind on the panel, and I disagree, since we're relying on... So, since we use RDO, we'll follow the upgrade procedures there, and we're using the Puppet community edition releases in order to be able to perform that execution. We're already going through the test cycles of the Grizzly to Havana upgrade, and it looks promising so far. But ask me in six months to see how it turned out.

We've taken a different approach to the whole installation, upgrade, and configuration question. When you're using Ubuntu, you have a package installed and you expect it to just get upgraded, and off it goes. So when we build our charms, we follow the same logic. We have our command in Juju, which is upgrade, and the same logic follows: you will have OpenStack installed, and we can do the installation and the upgrade, from the kernel all the way to the OpenStack packages, without disturbing the instances that are running. I really don't know the approaches that have been taken by you guys, but I can tell you that that's not an issue that we've had so far. With Juju we're able to, you know, juju deploy all the different components, and when it's time to upgrade, juju upgrade, and off it goes.

I want to piggyback on something that Boris said. Deployment with bare-metal provisioning, that's pretty much solved, right? The thing that I'm excited about with TripleO is that when we talk about OpenStack deployment, it's more than just deployment; it's really about management services. So there are three areas of focus that Red Hat tends to look at.
There's helping a customer understand how to plan for a deployment. A customer says: I want to have X petabytes of object storage under management, X number of cores under management. What does that look like from an architectural point of view and from a kind of generic framework point of view? So that's number one: planning. Then the actual deployment takes place; that's what everybody's kind of zoomed in on here. The third piece is what a lot of folks in general ignore, which is what we internally call Mr. Coffee Cup: the guy who comes to work every day, sits down in front of his computer with his coffee cup, and says, boom, how's my cloud doing? And today that's largely a black hole; there's nothing there. So when you think about TripleO and Tuskar, when you start researching them, understand that it's more than just deployment. The code that we contributed to TripleO is Tuskar, which gives that Mr. Coffee Cup awareness of the entire infrastructure, I mean the OpenStack infrastructure, okay? So this is bigger than just standing up OpenStack. Once you've stood it up, what happens? What do you do? Rob has solved a lot of that with, you know, the Nagios integration and the dashboarding. So we're trying to solve that from a community point of view, and I think that's where we see a lot of value.

I would agree. I think what you guys are doing with Tuskar, those additions are really significant. We have to find ways to have instrumentation in OpenStack and in the infrastructure components that allows it to be more manageable, and I think Tuskar is a really good addition for that, to the unsolved problem of deploying physical equipment.

So we've got five minutes left, and I'd like to open the floor for questions. I think you need to wait for the microphone. Okay.

I have a question, specifically for the Canonical and Red Hat folks, on the differences in your security approach, and on how you handle it when what is in OpenStack today becomes, tomorrow, part of the Linux kernel itself. What do you do with that kind of a change in upgrade paths?

Okay, so that's a very good question. In terms of security, Red Hat has pretty good experience there, so let's break it out into pieces. We have really good traction in the federal, public-sector space, with folks deploying our products and services. So we intend to roll in best practices around OpenSCAP, FedRAMP compliance, and the STIG work for KVM and RHEL, which we already have compliance for and which has already been accepted by various entities in the federal government. As a product, we would have those rolled into our images for our image-based deployment, so that you kind of get that out of the box by default. The second piece I want to mention is that the community did a book sprint on OpenStack security, and some of the best practices from that effort are being rolled into our product: network separation, the layer-three network that I described earlier in terms of the fabric, how we deploy something like TripleO within that framework and make sure that it's secure and you're not bridging security domains, things like that. I can't speak for anybody else, but I think we have a really good story to tell in terms of security.

We have a security team that constantly releases security updates, and, as I keep mentioning, we do all the deployment via Juju, and the charms install from the Cloud Archive. The Cloud Archive is maintained by Canonical and supported just like Ubuntu main. Whenever we get security updates, we update the package, and in the same manner that you would update your machine with apt-get upgrade or dist-upgrade, a Juju upgrade rolls out all the security updates as they come along, without affecting any of the instances on top. So, from my point of view, it's really not an issue. Security updates come in, we stay as close to trunk as possible; security updates come in, we release those, they get applied, and the instances running on top of OpenStack don't get affected.
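[Editor's note: a sketch of the package-driven update flow Juan describes, assuming an Ubuntu host using the Cloud Archive plus the juju CLI; the service names are illustrative.]

```python
# Roll out security updates the Ubuntu way, then upgrade each charm.
import subprocess

def run(*cmd):
    subprocess.check_call(list(cmd))

# Pull in package security updates exactly as on any Ubuntu machine.
run("apt-get", "update")
run("apt-get", "dist-upgrade", "-y")

# Then roll the charms forward, service by service; the guest
# instances running on top of the cloud are untouched.
for service in ["keystone", "glance", "nova-cloud-controller", "nova-compute"]:
    run("juju", "upgrade-charm", service)
```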
Any other divergence? No?

I was just gonna say, we're crazy. I like package-based security, but I hate people. So, I mean, I worked at NASA; we did the first FISMA-certified cloud environment ever, the first FISMA Low and the first FISMA Moderate, and I went through figuring out what best practices look like. The problem with a runbook, even if it's semi-automated using Puppet and Chef, is that if your admin has a password, at some point they're going to log in and screw up the box. They're gonna apt-get install something else, or they're gonna tweak a little config file that's not in configuration management, and then you're hosed, because the state of the world doesn't match the state of your CM. So I mentioned before that we do this weird thing with a custom OS that boots into RAM. That's because, when you're not clear about the state of the machine, you reboot the server and then it's in a known state, and the known state is the patched and updated state. So it's 100% based on disk images, which is a bit like TripleO. And, you know, we have a somewhat abandoned project called Shoelaces, which is trying to standardize this whole "play configuration management, then snapshot, then play configuration management, then snapshot again" workflow that people have around image building. I like it, except for the part where users define the configuration management. Again, I hate people.

I think Josh's approach is excellent. If security is your top priority, then a lot of what Josh is doing, where it's more appliance-like and you're locked out... he's right, that's a great way to handle it. Our approach has been to do collaborative best practice, and to do security by being as open and collaborative as possible, having as many eyeballs on our deployments as we can, and testing them across the broadest range possible. A lot of times, the issue with security is the fact that one person finds something and fixes it, and the fix doesn't get fed back. So we've really been focused for a long time on open ops, and on trying to make sure that we have consistent, open practices, so that people can review them and give us feedback on how to make them more secure and durable.

We have time for one last question. You raised your hand, right? Yeah, so I have one last question, which is about API compatibility. We had the problem with SUSE Cloud 1, which is Essex-based, that people couldn't upload images, because the newer glance client packages would use chunked HTTP transfer, and the Essex glance server wouldn't understand it and would just return a useless error message. And it could happen the other way: you upgrade your cloud, and people can't access it anymore because all the client packages don't work.
Unfortunately, we had this fixed at NASA in the original version of the dashboard, and I haven't ever seen it done again: the download for the client tools was in the dashboard. So when you do an upgrade, if there's ever one of those issues of, hey, client tool versioning is kind of broken here, the advice is: log back into the dashboard and get the fresh version of the tools; they're always upgraded in place. Nobody does it that way anymore. So there is this challenge, and backwards compatibility of client to environment is a challenge. I don't want to talk about RefStack, but we're working on it. There is some hope that it's getting better. We make incremental progress, and then we do things like the Keystone v3 API and roll ourselves back a ways, but we're still making progress.

More comments on this? I think everybody agrees. Okay, thank you very much. I think we are done for now. All these fine people should be available throughout the week if you have additional questions. Thank you very much.