Hello, everyone. I'm Mike Kedera. Thanks for coming. What we wanted to do today is talk a little bit about a competition that we ran called Rule the Stack. I'm with Intel, and my team has been supporting that event along with the Foundation and other groups within the community. What we wanted to do is show a little bit about what we've done with the event as well as the lessons learned, and what we want you to walk away with is what you can do in your environment today. I'm here today with Pete and Dirk and Adam as well. They have worked with us, actually competing in the competition, and they've got some great results they're going to share with everyone today. So a little bit about what we'll talk about today: I'm going to give you an overview of the competition, the reason why we had it, and some of the things that we did in the event to help make it fun for the group. Then we'll talk about some of the past competitions, the challenges that we had, and the lessons learned. Those lessons learned are not just about the winning and all the pieces that go into that; it's what we've taken back to our own teams and worked with in our own groups, as well as what we've done to help evolve the competition. Now, we've hosted this competition at the last three events: Atlanta, Paris, and Vancouver. This time we're taking a little bit of a break, and what we wanted to do was bring in more people from the community and get more input on ways we can bring in new challenges and really start to challenge even more people, some of the people that are new to OpenStack as well as the veterans, and give out different prizes for different things, focusing on things that the community is really starting to work on as well. So, a little bit about why we set up Rule the Stack initially.
And it really was to demonstrate readiness of that whole environment by showcasing results, results that people are actually implementing in the competition, things that the community is implementing in OpenStack. Think back to when we started in Atlanta: the Icehouse days, and the challenges that went into that. I was in our IT group and we deployed OpenStack back in Diablo, and OpenStack's come a long way since then. A lot of work went into that, and even between those releases so much had happened. If you think back to the history of OpenStack and what people were thinking then, they think of POCs, they think challenging, hard to do, it'll take weeks to get set up. Challenging that was the real purpose of our initial competition. So we wanted to really bring in the leadership within the community, and what people are doing in that community, to showcase new features, talent, and new approaches, and of course have fun. We gave out lots of great prizes, one of which you'll see right here, and it really created a lot of fun at the event as well. So, sorry if you came to get a couple of tips and tricks for competing this time, but we will have it at Austin, so you can definitely get ready to compete there. This is a little bit of that buzz. One of the things I really liked about the event is looking at all the competitors and the amount of energy in the area, people really watching all the teams getting into it, and the new techniques and challenges that people would take on. The idea was to get people to step outside of their comfort zone, to get people that wouldn't normally do these types of things to step up and take on those challenges. So it was really a lot of fun.
A lot of people would like to see us continue this event, and that's one of the things we'll be looking for input from you on as well. So, to summarize why we're holding the competition: really, the initial premise is that OpenStack's hard. We need to challenge that, and we need to drive the community and bring in new ideas to make this better. That's what we're looking to showcase through the results of the competition. And that it takes a long time to deploy; I think you'll be really surprised at some of the results that we've seen with this competition. And of course, to show enterprise readiness. You need to make sure your services are responding, that they're always up. If you're going to have a cloud, you're going to have customers there, and you want to keep them happy. How do you do that? Enterprise readiness: making sure that those services are available. We've talked about the buzz, and of course the community point of view is always important, making sure that we're looking at the community and considering all approaches. So let's talk a little bit about past competitions. I mentioned Atlanta, Paris, and Vancouver. Initially it was pure speed, as fast as you can get it up, though we included some challenges to make it a little bit more fun. In Paris we started to evolve it a little more, really looking at how you can make it highly available. All of those services: should one fail, how does another one come back online? Whether it's Nova or Keystone, you need to have those services up. The way we tested was by focusing on the VMs, making sure that they're available. And then in Vancouver we evolved the competition even more, to focus on getting to Kilo first, on the new Kilo features, as well as some performance tuning.
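Availability testing of that kind can be sketched as a simple polling loop. This is a generic sketch, not the judges' actual harness; the probe command is whatever you use to decide the service is healthy (a `curl` against a Keystone endpoint, a ping to a VM, and so on):

```shell
# Poll a health probe until it succeeds or we give up. The probe command is
# passed in, so the same loop works for Keystone, Nova, or a VM ping.
wait_for_service() {
  probe=$1      # command that exits 0 when the service is healthy
  tries=$2      # how many attempts before declaring the service down
  n=0
  while [ "$n" -lt "$tries" ]; do
    if $probe; then
      return 0  # service answered: it survived the failover
    fi
    n=$((n + 1))
    sleep 1     # give the cluster a moment to fail over
  done
  return 1      # never came back: the HA test failed
}

# e.g.: kill a controller node, then
#   wait_for_service "curl -fsS http://controller:5000/v3" 60
```

The point of injecting the probe as a parameter is that one loop covers every service the judges might kill.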
And it was still competitive from a speed point of view, speed was still the most important component, but we included more features that you'll see in a moment. So, the way we set up the competition is not unlike the way you'd have a small rack in your environment, but we designed it for head-to-head competition. Each team would have their own client systems connected to the rack. We had VMs, we had VLANs set up, and then some common setup within the environment. Most of these systems are relatively the same, with a little bit of difference in RAM, and the SSDs were in there. So it's not unlike the environment that you'd have, maybe a couple of different generations. Now, one of the things that we did as we transitioned to Vancouver is we reduced the number of nodes, and the reason is that we were starting to really learn from the competition. Of course, we always wanted to have hardware on site so people could run this, but we started to really look at what we were doing, shipping hardware all over the place. Sometimes you're concerned about these servers surviving a drop or something like that in shipping, so we actually reserved a couple of extra systems that would be ready if we needed them. Luckily we didn't, but there are a couple of lessons learned from setting up and running the environment that I think we'll talk about. For the Vancouver environment, we also had a couple of other hardware features. We had our brand new, at that point, Xeon E5-2600 v3 release, and we wanted to bring that to the event, as well as showcasing some security components that the platform had; that's why you're seeing the TPM module on there. Okay, so I mentioned Atlanta: pure speed, that was the whole idea. Now, we also added additional time deductions. So first, live migration.
So really looking at some of these things: live migration is the second bullet, but it is one of those things that was new to the event. And of course live upgrade; this was the one where we wanted to see if you could roll an upgrade within your environment, and this was a very challenging attempt at this stage, which is why you're seeing the 30-minute deduction. Then next, running OpenStack with high availability; that takes a lot of work, and I think we'll talk a little more about how that impacted the success some of the teams had with the event. And then using Heat; Heat was pretty new at this time, so making sure that people were using it, promoting the new things in OpenStack, that was the idea. Okay, so this is what happened. Can you believe that? Three minutes, 14 seconds to set up eight nodes in OpenStack, fully functional and running. Pretty remarkable. So I'm going to bring up the team to talk about that. And Mirantis did a great job as well, a little bit over seven minutes, that's just remarkable. Okay, we have a few lessons learned on how you did it. So just to explain right away, we didn't actually set up eight nodes, because the spirit of the whole event was to be fast. I walked by the booth; I didn't actually know about the challenge until I walked by the booth in Atlanta. At some point I saw that people were reading DevStack documentation, which I immediately recognized, and they were doing Git checkouts, and I wondered what this competition was about. And I learned, okay, it's about deploying OpenStack and being the fastest. At the point in time when I walked by, the leading entry was maybe one or two hours, and I thought, okay, we can be quicker than one hour. And one way I thought to reduce the effort was to install only one node.
So the Atlanta challenge, since it was about pure speed, was also about reducing the number of things that could go wrong. I thought installing multiple nodes, when you don't know exactly the network environment, the hardware environment, everything, and you have unreliable network connectivity to the outside, is one thing to get rid of. So I did a one-node install. In the end, what I did is I prepared a USB stick with an image, and the image contained SUSE Linux Enterprise. The reason I picked that operating system is because I knew it's certified for the hardware, so I didn't have to deal with any potential compatibility issues. I know it's certified, it just works, so I don't have to deal with that problem. What I prepared was a USB stick that was plugged into one of the servers, and as it booted up it was actually also installing the system onto the machine, because the requirement for the competition was to have an installed system. We have something called the OEM installer, which basically copies a compressed image onto the drive and makes it bootable automatically, and that takes just about 26 seconds. The image was around 250 megabytes in size; it's a fairly small, fairly slim thing. Another neat trick I did was immediately kexec-ing into the installed system, because I think the POST process for booting those servers was around 3 minutes or so. You wouldn't end up with a competition time of 3 minutes, 14 seconds if you wait for a reboot which takes maybe 2 or 3 minutes. And then we had roughly 30 seconds for just booting the general operating system, booting initial services and so on. And then I started to scratch my head, because I realized that I had forgotten to disable waiting for DHCP. By default the image was configured to power on all the Ethernet devices it could find and wait 30 seconds to get an IP address.
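The OEM-install-plus-kexec approach can be sketched roughly like this. The paths are stand-ins (temp files play the role of the image and `/dev/sda`) so the sketch runs anywhere; on a real stick this logic runs in the boot environment as root:

```shell
# OEM-style install: the USB stick carries a compressed disk image; the
# installer just decompresses it onto the target drive and marks it bootable.
# Temp files stand in here for the real image and the real disk.
workdir=$(mktemp -d)
echo "pretend-rootfs" > "$workdir/rootfs"
gzip -c "$workdir/rootfs" > "$workdir/os-image.raw.gz"   # the ~250 MB image
gzip -dc "$workdir/os-image.raw.gz" \
  | dd of="$workdir/target-disk.raw" bs=4M 2>/dev/null   # ~26 s on real HW

# The reboot-skipping trick: instead of a firmware reboot (2-3 minutes of
# POST), load the freshly installed kernel and jump straight into it:
#   kexec -l /boot/vmlinuz --initrd=/boot/initrd --reuse-cmdline
#   kexec -e
```

The `kexec` lines are commented out because they need root and real hardware; the decompress-and-copy step is the whole install.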
And by the time I entered the competition there was something wrong with the network, so it was just sitting there waiting 30 seconds, and I was like, hmm, this didn't go the way I wanted it to. But anyway, after 30 seconds it recovered and just continued. And then I had injected a script that installs OpenStack. The image had the OpenStack packages from our cloud product included, so it was just the regular off-the-shelf product, basically, with our packages, and it included a script that we call OpenStack Quickstart. It's a script that sets up a single-tenant, single-VM, very bare minimal cloud, and we usually use it for testing; it's part of our Jenkins integration testing workflow. Whenever we make a change to our development environment we run this test. So everything was prepared; the only thing I had to do was bundle it all up into one solution. The lesson learned for me in that scenario was that you really don't want to rely on factors you can't control. If you rely on DHCP being available and it's not available, that's going to throw you off in the competition. The external network was kind of okay, but sometimes it dropped, and that threw off many other people who were doing Git checkouts of Nova; at 20% the network connection would drop and they had to start over again. In a normal deployment scenario you can just deal with that, you go for a coffee break and do something else, but in a competition that's about pure speed, you're thrown off. Also, one story that I like, and I have to quickly check if the person is here; no, he's not. There was one other competitor who prepared a live CD with OpenStack pre-installed, so he pre-configured everything.
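That flaky-network lesson generalizes: anything that pulls from the outside should be wrapped in retries rather than restarted by hand. A minimal sketch (the clone URL is just an illustration, not what competitors actually ran):

```shell
# Retry a command up to N times with a short, growing delay between tries,
# so a dropped connection doesn't mean restarting the whole run by hand.
retry() {
  attempts=$1; shift
  n=1
  until "$@"; do
    if [ "$n" -ge "$attempts" ]; then
      return 1            # exhausted our attempts: give up for real
    fi
    n=$((n + 1))
    sleep "$n"            # back off a little longer each time
  done
}

# e.g.: retry 5 git clone https://opendev.org/openstack/nova
```

Even better, of course, is Dirk's approach: carry everything on the stick so there is nothing to download at all.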
He configured a VM image so it had everything included, and it was a live USB stick image, which was allowed under the rules of the competition. So what I did was have an empty operating system and install OpenStack from scratch, while he had everything prepared and was walking up to the competition with the USB stick. But one thing he forgot was to change the USB stick into an installer image, so he had downloaded some instructions from the internet on how to convert the live USB stick into an installed system. And what he did was scribble those notes on how to do that on his hand. So he had a pen, he had it written on his hand, and he was booting the USB stick and everything went fine, it was maybe 20 seconds, so I was really getting nervous. And then he started reading the instructions he had previously scribbled on his hand, and after some lines of scribbling and reading and typing, he apparently misread what he had written there: instead of copying the USB stick to the hard drive, he copied the hard drive, which was empty, onto his USB stick. So one thing...
Sweaty hands? Maybe there was some sweat involved, I don't know why that would happen. But one thing really to learn is that you want to avoid manual stuff, because the thing most likely to go wrong is the person in front of the computer. So you want to test everything, you want to prepare everything, you want to automate everything that you have to do in order to do a successful deployment; that way you can iron out all the human mistakes. I mentioned it already: we use KIWI, we really like KIWI, we work with it in the product as well. It's an image builder; it can create images in various formats for various purposes. The lesson learned here is that the minimal solution, the one with the least impact or the least risk of failing, is the one that wins; minimizing the risk of your deployment going wrong is another lesson learned. Reflecting on that a bit, the lessons for those who organized the competition, I think, were that some of the time bonuses were slightly unbalanced. When you do such a challenge it's more or less about gamification, so you think about whether it's worth doing this extra feature to win the bonus, because in the end you do the bonus in order to be further ahead in the competition. There was about 5 minutes for HA and 5 minutes for deploying Heat. If you automate deploying Heat, it's maybe 10 seconds of work, so the 5 minutes is a big win; if you deploy HA, it probably takes you more than 5 minutes to do it, so it's a negative benefit, more or less. Also, the goal was for this competition to be open for everyone, so it attracted a lot of people who had never deployed OpenStack before, which is a very good thing in my opinion, and that's something we shouldn't lose in future competitions. It should really be open for everyone; it shouldn't be only a silo of people who do professional OpenStack deployments all the time. But that also caused some trouble: the rules were not announced in advance, so people didn't know about them, they couldn't think about them, they couldn't prepare for them. Many tried to just follow the DevStack installation guidelines, which is interesting, but then you're not really competing; with everyone doing the same steps on the same hardware, it's just seconds that you could beat each other by. And many of the contest rules really require you to prepare. If you've never done it before, you run into a lot of first-time mistakes, and you don't want to do that while you're running against the clock. Another thing is that the contest rules probably need to require a more realistic deployment. I know I gamed the system, more or less, because the rules didn't say I had to deploy all 8 nodes; it was open. The others who entered also only deployed one node, but that's not a realistic deployment, to be honest; in reality you deploy on multiple nodes. But lesson learned, we did change the rules for the next event. A couple of things we really did learn from hosting that event were about which rules we exposed and when, so we started looking at what we could expose a little early, but we were still working on that. As we moved to Paris we did change things: still focusing on speed, of course, as the number one thing, but then bringing in high availability. You can see the tests that are important if a controller node goes down; we could also bring in a test of our own, so we got to pick what we wanted to do, all the clusters of course, and then the Nova one, a Nova node going down as well. That was the big focus for the Paris Summit; we did a lot to alter the event this time and give it a little bit more of a twist. Adam did a great job with this one, and again we had really rethought those rules and changed things, so it was a bit of a different challenge, but still, look at it: OpenStack on all those nodes, 53 minutes. So Adam, do you want to talk about that a little bit?
Here you go, I'll give you that too if you want it. Hi. So when I saw the rules for this challenge, which I think we only saw when we arrived, or maybe a day or two before, I can't remember, I was really happy, because of all the work I had done for a workshop, actually not just at the Paris Summit, as it says there, but also in Atlanta six months before. I had prepared a hands-on workshop for people who wanted to deploy a complete highly available cloud from scratch on their own laptops. If you can imagine the number of different types of laptop that people bring, it's a pretty ambitious goal to have everybody deploying an HA cloud from scratch, especially if they'd never deployed OpenStack before, which was true in some cases. And this was a 90-minute hands-on workshop, maybe two hours, I think it was 90 minutes, so realistically we had to aim for everybody to be able to do it, guided through by us, within one hour, to leave time for things going wrong, and questions, and so on. So we'd put in a huge amount of work on automating the deployment over really the year leading up to this competition. When I saw the rules I knew, well, we can basically just reuse all that capability that's already in our product, and some stuff around the product as well. We had Vagrant boxes with the product in prepared images, and a Vagrantfile that automates the whole deployment, and I'd also implemented a feature in our Crowbar deployment tool that lets you configure and set up a whole cloud, all the OpenStack components, from a single YAML file, more or less. So we had all the ingredients, and it was just a question of doing some small customizations to apply it to the competition; there were a few networking tweaks and so on. And yeah, that's what happened: we just applied all the work that I'd already done for another reason, and it just worked, so we were really happy. There were a few other competitors, of course, but no one else, I don't think, managed to set up an HA cloud at all. No, you were the only team that did. Right, yeah. And then we got the testing by one of the judges, who started killing controller nodes and so on and seeing what happened, and we succeeded. It was, again, really a team effort: Dirk helped build the images, and other people on our engineering team helped in many different ways, so it was definitely not just me. And one thing for the future is that maybe, if there's an HA challenge, the power supply should be highly available, redundant, as well. So yeah, we saw with this competition that it was a much harder challenge, because HA was a requirement, not just a bonus, so fewer people attempted it and no one else succeeded in even deploying. Maybe that was partially because people didn't know in advance, so they couldn't prepare, and if it hadn't been for us doing that work in advance, then I probably wouldn't have been able to do it either; it was kind of luck that we had it already done. So the question for the future is, when there's a particular part of the challenge, should it be a bonus or a hard requirement? That should say "requirement" there, not "requires", on the third point. The current thinking in the discussions we've had about future competitions is to maybe make it a bit more flexible and offer bonuses for different features, to encourage as many people to compete as possible. So, lessons learned just from us hosting it: with the HA and the power, you really live in the environment that's given to you, and that's one of the things about bringing hardware. We had teams that had to work off the power supplied by the event staff, as well as the networking, and, you know, who's getting a great network connection right now in this room? You're all fighting for what's available. So preparing your images and everything you need, walking up, and then executing the competition was really a big challenge, and we definitely had network connectivity, but you're fighting for that bandwidth with everyone else. I'd just like to add that I even breathe HA, as witnessed by my HA lanyards. Okay, so on to Vancouver. We changed it up a little bit, but still focused on speed and accuracy of the deployment and provisioning time. What we wanted to do was really promote Kilo, so we gave basically a penalty if you went with earlier releases; you were still allowed to deploy older releases, but there were bonuses, or a penalty if you want to look at it that way. Rolling upgrades again. The next thing we did was performance tuning, and the idea here was that there's a very simple tweak you can do to expose underlying chipset capabilities that are common on your platform: by just editing your flavors, your image flavors, and tweaking them a little bit, you can actually expose various things that we have, and that's enhanced platform awareness. There are multiple features available on multiple different platforms, and that was just to showcase the simple things people can do to really tune their environment. Then VM deployment with Heat continued, and we also proposed a secure compute environment with trusted compute pools, which leverages that TPM module, so you can actually have attestation measuring your boot environment all the way up to the hypervisor; that was just showing a little bit more of the security features within the platform. And then live migration again. So, with the Vancouver results, again Dirk did a great job with it, and Adam and the team. Look at this: negative. When you get into the bonuses, they finished before they started, a little bit of time travel there. But the thing to really note with this, too, is that we had a lot of different teams competing, and that was what we wanted. We had the Mirantis team there that actually
came in on day one, jumped in, and did it, and their record held for three days. For someone just jumping in cold and doing it, those are still great results. And then, with that whole HA story and the environment you're living in, we had networking issues, and Walter from Rackspace was our determined competitor: he was going to fight through those issues to get his components downloaded, and he battled through it and got it all going. And then we had our first female competitor, from Red Hat, who did a great job jumping in, and she won the award for competing. So, a little bit about everything you did there. The entry for me in Vancouver was more or less a refresher of what I did in Atlanta, the same principle: a prepared operating system image that installs itself automatically and starts things on boot-up. But instead of one image I did two images, in order to actually deploy multi-node instead of one node. So I split it out: one controller node, and five or so compute nodes available. This time I actually didn't expect to win, so we also had a backup, highly available: Adam and Vassau were doing a second entry with a different approach, because I personally believed I would not succeed. Luckily I was wrong. The reason I believed that is that it was fairly challenging for us to bring up SLES 12, which was barely available in our cloud environment, on top of the freshest Kilo, which at that time had been released maybe two weeks before. There were many variables, many things that could go wrong, so we had a lot to prepare, a lot to test, and there were many, many last-minute bugs and long nights spent on this. But basically it was more or less the same thing, and it worked out just fine. It was fully automated: we plugged in six USB sticks, everything installed, everything registered against each other, and we had a full OpenStack cloud running just fine, and we demonstrated additional features, I think it was live migration, Heat, Horizon, so we gained a couple of bonuses, and that's how we ended up with negative time. Also, there were some things that made things quicker, like the switch to systemd, which parallelized the boot; that caused a bit of trouble, but it also sped up the challenge quite a bit, so it was a lot faster than what we had before. But as I said, I didn't actually believe we'd win, so we had a second entry; maybe, Adam, you want to comment on that one? This wasn't hugely different from what I did in the previous competition, really, except for not going with an HA install, because there was no requirement for it; in fact, there wasn't even a time bonus for it, and it was just extra risk. Like Dirk said earlier, there's no point taking on extra risk if there's no reward for it, so we just kept it simple. And I think on this occasion I also had the administration server, which is responsible for deploying the software, on a laptop externally connected to the cloud rather than installed as part of the challenge, so that saved a bit of time. We went with the Cloud 5 product, which is Juno-based, so we got a 10-minute time penalty for not being on the latest release, but we reduced that penalty by doing the same things as Dirk did: deploying Heat, live migration, and I think there was a CPU architecture feature, yeah, exactly. But no real special tricks there; it's just what our product does, it can deploy things very quickly in a completely automated fashion. Again, that was just a safe backup. Second place. I can talk about this, I guess. So, the tools that we used here, Dirk has mentioned some of them already: KIWI is the image-building tool that we used to build all of these appliances, effectively; Crowbar is our deployment and orchestration piece within the product, and it does all of our deployment automation. Dirk already mentioned the OEM install, and yeah, we benefited from having all those tools available, for sure. Having a new release of OpenStack come out makes the challenge difficult if it involves that release, because it doesn't leave much time for testing in advance of the competition, and similarly with the network: the more you know in advance and can plan for, the easier it becomes, and we learned from the previous competition in terms of automating our network configuration. Okay, so summarizing the lessons for the future. We received a lot of feedback, positive and negative, and we agree with the feedback saying that the competition should have realistic deployment scenarios as a result. There's no win for anyone if it's just gamification of the challenge; it has to be realistic, it has to be something that customers would actually want to do, otherwise the competition isn't the thing we would like to see either. So in the future, our belief is we should maybe also put a bit more emphasis on the manageability and the quality of what is being deployed. In reality, to be completely honest, it doesn't really matter whether you deploy in 51 minutes or 52 minutes, but if the 52-minute deployment actually works in all the scenarios you want, you would rather prefer that one. So in some way we have to change the rules in favor of that one, instead of favoring the one that is maybe 5 seconds quicker. Also, one thing I noticed is that we always had a live upgrade in the challenge, and as far as I know, maybe correct me if I'm wrong, no one ever attempted it: not in Atlanta, not in Paris, not in Vancouver. That kind of tells you something. We're going to keep it there, is what it tells you. So one of the things we wanted to do was talk about not only, okay, we did this, we had these contests, we won, we proved some things, but also what we're trying to do in the real world. So first of all, I think the lesson from Dirk and from Adam is automate
Automate as much as you can, and not only for how it goes in the competition: the more that's automated, the fewer places you have to make an error. Going back to Dirk's comment about the other competitor in the first competition having to do a lot of manual steps: especially when you're under pressure, you fat-finger something, you just type something wrong, and it's a problem. The more you automate, the more successful you'll be, not only at winning competitions but at deploying. Every customer I know that has gone through installing SUSE OpenStack Cloud has probably done the installation five to ten times before finally getting into production, just because they made a mistake, they learned something, they wanted to tweak something. It just takes a while. The second point is that man is a tool-using animal, so use tools. It's similar to "automate everything": if you can use a tool to build an image, if you can use a tool to orchestrate your deployment, that's the right thing to do, because it simplifies the process and makes it more repeatable, and repeatable processes are less prone to errors and can be optimized. That leads into something from the keynote today, when Erica Brescia was talking about what Bitnami is doing: at the end of the presentation she said, use an OpenStack distribution, because it makes things easier. It doesn't solve all the problems, and you still need to add some additional capabilities, but why use an OpenStack distribution? I think this is an old Icehouse or Juno statistic: there were 1,400 parameters across 11 components that you had to install and coordinate. It can sometimes feel like taking a big box of Legos, dumping them out on the floor, and starting to work. And this is just a picture, but it's not only OpenStack you have to deal with: the orange boxes are OpenStack, but you need to pick a hypervisor, you need to pick a message queue, you need to pick a database, and you're going to have third-party adapters if you're using Cinder or Neutron or Manila and other projects coming in the future. You've got to pick an operating system, and as Dirk said, it's certified on all tier-one hardware, so it was easy to make sure it was going to work in the environment. But most importantly, wrap an installation framework around it, because that's the tool that helps you automate the deployment process. It looks like this: instead of having a big box of Legos dumped out on the floor, you buy a kit that gives you something fast and repeatable. There are many options; you can go to the OpenStack website to see the various distributions that are available, and it really does make sense to try to work with one of those. So, wrapping up the real-world lessons: clearly you want to plan around making something that's actually going to be useful once you've done the deployment. I think that was Dirk's comment at the end: you want an installation that makes sense. Networking, not only in the competitions but also in what we've seen working with customers, is the biggest challenge in getting OpenStack working. It's just a complex problem in general, but when you start adding multiple networks between the VMs, from the VMs to the outside world, and back-end storage networks, it gets to be very, very complicated. Going through a competition, you want to minimize reboots, because POST is slow. There are also differences in how fast operating systems install, so from a competition point of view, using something that can be configured as a lightweight image that boots easily off a USB stick is a huge advantage. And then, do you want to build from packages, or pre-build packages, or even just take images that you can deploy directly? That can speed things up tremendously.
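The "automate everything" advice can be made concrete: a deployment driver is often just an ordered list of steps run with retries, logging, and fail-fast behavior, which is exactly what removes the fat-finger errors described above. Here is a minimal sketch in Python; the step commands are placeholders, not real Crowbar or SUSE OpenStack Cloud invocations.

```python
import subprocess
import time

def run_step(cmd, retries=3, delay=0.5):
    """Run one deployment step, retrying transient failures.

    Returns True on success, False once all retries are exhausted.
    """
    for attempt in range(1, retries + 1):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return True
        print(f"step {cmd!r} failed (attempt {attempt}/{retries}): "
              f"{result.stderr.strip()}")
        time.sleep(delay)
    return False

# Placeholder step list -- these are NOT real installer commands,
# just stand-ins for whatever your tooling actually invokes.
STEPS = [
    ["echo", "discover nodes"],
    ["echo", "deploy database and message queue"],
    ["echo", "deploy keystone, nova, neutron"],
]

def deploy(steps=STEPS):
    """Fail fast: stop at the first step that cannot be completed."""
    for cmd in steps:
        if not run_step(cmd):
            return False
    return True
```

Because every run executes the same ordered steps, the fifth installation behaves like the first, which is the repeatability the competition rewarded.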
And one more thing: you need to make sure that once you get everything up and running, you can pull the installation media you used without your cloud going down. I think that was one of the problems some of the competitors ran into: they got everything up and running, pulled the media out, and things just stopped. And finally, use tools and some modularity, because it's going to be different in Austin than it was in Paris or Vancouver or Atlanta, so you need to be prepared to deal with a different sort of setup and different requirements as we go forward. Having a tool that gives you the ability to configure the deployment is going to be important for winning in the future. Now, this is all good as a competition, but so what: was there any real benefit upstream for the community? We think there were a couple. First, when we did HA in Paris, there was still a perception that HA was a hard thing to do and that you needed to bring in a consultant to help configure your cloud if you wanted it to be highly available, even though the instructions were available in the documentation on openstack.org. What we did was build the tooling to automate that process. As Adam said, he built it for a workshop session that he did with Florian Haas from hastexo; they put together a workshop to show that you can automate this process. In doing that, we also identified a bunch of bugs in Pacemaker and Corosync for the HA capability, because no one had gone through the rigorous process of really standing this thing up and deploying it in a real-world environment. Once we did that, we identified problems. So in the process of going through not only the competition but also the workshop, we found problems in the underlying Linux environment that we could then push upstream to repair. It proved HA was simple to deploy and that it could be automated, right down to the underlying infrastructure.
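Automating an HA bring-up like the one described means the tooling has to wait for services and cluster resources to converge rather than assume they are ready. A generic polling helper is the usual building block; this is a sketch, with the actual readiness checks left as hypothetical placeholders.

```python
import time

def wait_until(check, timeout=300.0, interval=5.0):
    """Poll `check` until it returns True; give up after `timeout` seconds.

    Deployment tooling calls this between steps so that a service which
    is still starting up is waited for instead of raced against.
    """
    deadline = time.monotonic() + timeout
    while True:
        if check():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)

# Hypothetical readiness checks you might plug in, e.g.:
#   wait_until(lambda: keystone_answers_on_port_5000())
#   wait_until(lambda: pacemaker_reports_all_resources_started())
```

Any check that can be expressed as a boolean function fits, which keeps the orchestration layer independent of the individual services being deployed.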
One thing that I think doing things quickly helped identify was race conditions in the install process. If you're going through a more relaxed deployment, as opposed to trying to get set up in 3 minutes and 14 seconds or whatever, you may not care about that; but the reality is that when there's a latent race condition in the environment, there's always a chance it will cause a problem, and sometimes it will fail silently and you're not sure what's going on. So by doing it quickly, we actually identified and fixed a number of race conditions that existed in OpenStack components themselves. And the last one is that we provoked healthy discussion. I think some people were frustrated at what they thought were vague rules or shifting requirements, whatever you want to call it, and that led to a lot of discussion around how we do this in the future and how we make sure we provide good, solid competitions. But it also goes back to the point that deployment is still a challenge for a lot of people in the OpenStack environment, so we want to continue doing these contests, because we think they highlight the fact that you can do deployments in a repeatable fashion, make them quick, and reduce errors. We think it's important to continue that awareness. So, as we look forward to the next event: we have a birds-of-a-feather discussion coming up tomorrow, just before noon, and I'd love it if more people could come and join us. That's going to be in the Design Summit lounge. I wish I could direct you there, I haven't found it yet myself, but I'm sure many of the people in those nice yellow jackets will help you get there, and there will be this sign set up there as well. It would be great if you could join us; we have some great ideas already.
We're thinking about some of the ideas here. De-emphasizing speed: I think that will still be some component, because we still have to run a competition, but we can't have people come to the event for this and not attend sessions, so of course we need to balance that. Live upgrade is something I'd still like to see there, and bringing in containerized services might be a way we could challenge that. And of course HA, which I think is very important for us to continue with. Lots of ideas; I'm also looking at ways we can look at applications as well, and deploying enterprise services with them. So that kind of wraps things up. One thing I wanted to bring to your attention: in the Intel booth we are running a passport program, and you get a stamp for attending this session. You can see myself or Derek, who's right over here, and we can give you one of these, or you can stop by the booth. Feel free to join us; you can win a Compute Stick, they're a really cool little tool. All right, that brings us to the end. Yes, I'll leave a mic up here for you guys as well; here you go. Quick question: for the last three competitions, the configs and the video, are they posted, are they live? So, we do have the configs; both of you have blogged about them, and in our session notes we have the links to both of those blogs and the results. Intel has an open source website called 01.org, and you can just search on "rule the stack openstack" and it'll bring it right up. I also have a blog post that basically guides you through the whole process of installing the SUSE OpenStack Cloud appliance, which is pretty much the same process that I used, minus a few tweaks for the competition environment. How do I find them?
Maybe links; I do have a link in the presentation, and if you give me your email I can send it to you. A bit off topic: you mentioned that you were using Crowbar, which I believe is based on Chef; have you considered using other tools? We have considered it, for sure, and we're still considering management systems as the back end. Crowbar is essentially the orchestration layer on top of Chef that is more multi-node aware. We've looked at other things, but in the end we've stuck with it because it has a lot of capabilities that are actually very hard to find elsewhere, and it works well for us. Changing to something else like Puppet or Ansible would be a lot of work; that's not to say we won't do it in the future, but it's not a change to be taken lightly, because we've invested a lot of work into the Chef cookbooks. The question was, do we have an example of a feature that would be a lot of work to translate? Well, it's really the combination of several things: there's the bare-metal discovery, inventory, and allocation; we've got multi-hypervisor stuff in there; the high availability; and the automated deployment of all of that in a very flexible manner, which requires a lot of orchestration and synchronization between different things in the workflow. If you want to talk to us afterwards, I can go into more detail. Just a confirmation question: with the live migration you're talking about, do you have a guest instance that contestants can migrate? So, in the past we've run a ping test, and that's how we've tested that. And just a couple more questions. Number one, for potential contestants, have you thought about maybe, for the future, providing a regular image that they can practice on? We didn't want to tie it to a specific image necessarily, but one of the things we've really considered, and we were thinking of Easter eggs as we lead up to the competition, is that we would of course expose the hardware requirements early, but as we get closer and closer we could give out a little more information on where to go; for example, maybe push people to a specific session at the event that would give away what you could do to implement one of those features. Those are the things we're thinking of: we want to give people enough information to prepare, but we don't want to give away everything. So, we've done something similar: we have a little testing environment that we use to do some scenario tests, and also in our test lab; maybe we can talk offline. Yeah, that'd be great. Okay, all right, thank you everyone; if you have any other questions, you can talk to us offline.
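The ping test mentioned in the Q&A, checking that a guest stays reachable while it is being live-migrated, can be sketched as a small probe loop. This is a hypothetical TCP-connect variant rather than real ICMP ping, since TCP is easier to script without root privileges; the guest address and port are placeholders.

```python
import socket
import time

def tcp_probe(host, port, timeout=1.0):
    """One reachability probe: True if host:port accepts a TCP connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def count_probe_failures(probe, attempts=20, interval=0.5):
    """Run `probe` repeatedly (e.g. while a live migration is in flight)
    and return how many attempts failed; 0 means no visible outage."""
    failures = 0
    for _ in range(attempts):
        if not probe():
            failures += 1
        time.sleep(interval)
    return failures
```

In use, you would trigger the migration and run something like `count_probe_failures(lambda: tcp_probe("192.0.2.10", 22))` against the guest; the address and SSH port here are just example targets, and a nonzero failure count means the guest dropped off the network during the move.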