Hello, good afternoon. Welcome to the open source cloud provisioning tools panel. My name is Hain Wang; I'm CTO of cloud computing at Huawei, and I'll be moderating. As OpenStack gets more mature and wins the battle for mindshare as the future cloud management platform, we see more and more companies standing up OpenStack clouds. Standing up OpenStack in a production environment is very hard, not to mention keeping it updated, as the software itself changes very fast. So today I have the great honor of hosting this panel with the creators of four popular open source provisioning tools, namely Fuel, TripleO and Foreman, OpenCrowbar, and Compass. These are the people who created these tools, and they will talk about why they formed these projects, what problems they tried to address, what their experience with customers has been, what the feedback was, and where they are going forward. So without any more delay, let me get started. I'm going to ask each of the panelists to introduce themselves and give a short description of their tool, then ask a few prepared questions along the lines I just mentioned, and then I will invite the audience to ask questions. So with that, let's get started. Boris?

Yeah, my name is Boris Renski. I am co-founder and chief marketing officer at Mirantis. Was I supposed to say more about Fuel? Okay, so for OpenStack deployment we use something called Fuel. It's an open source project; we develop it in the open, completely following the OpenStack development principles. It's hosted on StackForge, and anybody can come contribute. So Fuel is our approach to deployments.

Hi, I'm Keith Basil. I work for Red Hat. I'm a product manager for our management tools.
Today we have a tool called the OpenStack Foreman installer, but our longer-term vision is the TripleO project that's upstream in the community today.

Hello, my name is Rob Hirschfeld. I am the CEO of a startup that is focused on commercializing the OpenCrowbar project. That project actually started out as an OpenStack installer, and with the new version of the code we have lately repositioned it as a hardware abstraction layer that works with multiple installers, like Chef, Puppet, or Packstack. It doesn't really matter to us how you install on top of it, and we'll talk about why.

Hi everyone, my name is Shuo Yang. I'm a principal cloud architect working for Huawei. We started a project called Compass, which is helping us stand up OpenStack, and based on customer feedback we are expanding the tool's scope to hardware health management and related areas. I'm happy to be here to discuss with these distinguished panelists.

So, the first question: can you introduce how your project got formed, what set of problems you tried to address, how you solved them, and why you chose to make the technology open source?

Great question. We started Compass as a dogfooding project. When we started this OpenStack journey, we tried to stand up our own OpenStack cloud. Back then we looked around, and we finally made the decision to build our own scripts. Over time we abstracted those scripts, essentially trying to build a layer that works like a function: people can input parameters to do a variety of OpenStack deployments. That's how the Compass project got started.

So, when we started this project, I was a Dell employee leading the OpenStack initiative at Dell. This was actually back in the Austin and Bexar release days.
So we started on the Bexar release, and what we found was that it was very difficult to repeat an OpenStack deployment. We had to write operational tooling to make it possible to lay down the infrastructure necessary to run OpenStack, and then run OpenStack, and at the time we did it in a very Chef-focused way. We did it in the open because we were collaborating very deeply with Red Hat, and then we started collaborating with SUSE, who also uses Crowbar in their cloud products — we still do, or used to do, a lot of collaboration with them. The only way we could do that collaboration was in the open, because neither company could agree to allow the other company inside its product. What we really found was that in order to create a repeatable OpenStack deployment, which was our goal — because we really wanted to create repeatable OpenStack upgrades — we had to be able to create a baseline infrastructure over and over again in our labs, and then over and over again at our customers' sites. Every time we showed up at a customer and couldn't repeat a deployment with automation, it created something that was impossible for us to maintain over time. That was a big motivation behind what we ultimately built in the OpenCrowbar project.

So, the project I'm going to talk about is TripleO. That's Red Hat's forward-looking tool. It came together — it actually started with HP. The good thing about TripleO is that the three O's stand for OpenStack On OpenStack: we are reusing OpenStack to deploy OpenStack. It's kind of a mind twist, but it actually works out very well. The philosophy of golden images and using OpenStack is really powerful, because we have an existing community,
we have well-known APIs to integrate with, and there's a lot of involvement. If you look at the landscape of TripleO, you've got HP as a pillar in the community, you've got Red Hat, you've got us, and you've got Rackspace, who contributed a lot of work to Ironic. So it's a true upstream OpenStack thing, right? We like that at Red Hat, because open source is our thing; it's in our DNA. If you look at other tools — and I'm not calling anybody out here, and I'm not trying to start a fight — they were largely led by one company, so you were kind of baked into the philosophy of that one company. This is why we think TripleO is a longer-term play, where "nobody is smarter than all of us" is the approach we're taking.

So, in our case, we started our OpenStack journey primarily as a services company, and we've been engaging with many different organizations, each with its own opinions about how to do deployment. We started with Chef and Puppet and whatever, basically, a company wanted, and then over time, as we started to uncover various repeatability patterns in the deployment configurations, we started to codify those so that we could deliver OpenStack environments to customers faster. We understood that in order to do that, we had to actually make some choices. We can't say that we work with Puppet and with Chef and with Salt and whatever; we have to make choices and follow through. So ultimately we decided that Puppet is the configuration management tool that we're going to use. We use Cobbler for bare metal bootstrapping, and then we had to write somewhat of our own orchestrator, called Astute. All of the bits are open, and, similar to TripleO, we are an open project — anybody can go ahead and contribute. But our vision is that deployment of OpenStack is a very complicated area, and
OpenStack at the same time is somewhat at odds with that, because when it comes to deployment you have to limit the number of configurations in which you allow the customer to deploy OpenStack; you can't have a tool that deploys it in any shape or format — there are tools like that; they're called Puppet. So we consciously limited the choices with respect to which topologies you can deploy with Fuel. And while Fuel is still open and we welcome everybody to participate in the community, I think what's been playing a little bit in our favor is the fact that the Fuel open source community at this point has been more limited than that of TripleO. That's an interesting notion. When TripleO first came about, we were like, oh, holy shit, there is an official OpenStack deployment tool — Fuel is not relevant anymore. But what I came to realize is that, because there are so many opinions in the community, and because deployment is such a contentious concept, it's very hard to create a generic deployment tool that addresses all the different configurations and takes into consideration everybody's opinions about how deployment should be done. So, long term, the fact that on one hand Fuel is open and anybody can contribute, and on the other hand we weren't very proactively pushing it into the community and were able to maintain leadership and, to some extent, control of the project, I think gave us an important advantage in terms of engineering lead. And the fact is that Fuel works well and you can download it. It's maybe more limited in some ways than TripleO, but you can just click, click, click and the thing works. So that's my answer.

Good — and now I'll get some heat here; that's interesting.
So I'm going to try to make it a little controversial. Many of you also use lots of open source technology: you talked about Cobbler, you're talking about Puppet and Chef. So for the tools and for the users, do we have to tell them there is one better choice, or should all these choices be made by the customers, by the installation situations? Following that, I also want to ask: we already know there's a difference between image-based installation and script-based installation. Can you talk about the pros and cons of each in the context of automated installation?

So I'll split the questions up. TripleO represents a set of Legos, which is great: you can use Ironic to do bare metal provisioning, you can use Heat to do your orchestration, you can use Nova to find the host nodes on which to actually do your image deployment. So that's great from a community standpoint. Red Hat's differentiation here is that we see deployment as step one. So you stand up OpenStack — great. The question that we're trying to answer on top of that is: how is my cloud doing? There's no real initiative to push a management interface that shows the cloud operator the state of OpenStack. How much storage do I have under management? What are the storage ratios? What are the deduplication ratios? How are my compute nodes doing? What is the API call count for Nova coming through? All these performance metrics for your infrastructure don't exist in most tools. With TripleO we have the Legos on the table to actually solve those problems. We're reusing Horizon to build an operator dashboard, for example; we're reusing Ceilometer to pull stats and information off of compute nodes at the hardware level, and so on, and we're rolling that up visually for the cloud operator, so that he or she can see the state of OpenStack going forward, after the deployment is done. So for us it's about planning: what does the bill of materials look like?
What should the hardware look like in terms of configuration? Then the actual deployment, as step two, and then the ongoing management of OpenStack. That's the framework on top of which we're building TripleO.

I represent a completely different viewpoint. We started doing OpenStack deployments four years ago, and what we found was that it wasn't that OpenStack deployments were really hard; it was that the physical infrastructure was very hard to get consistently correct. Once the physical infrastructure was consistently correct, deploying OpenStack was not that hard. But you have to be able to say: all right, for this site they wanted teamed 10-gig NICs, and they enumerated them in this way; and this site needed the same network, but it was configured a completely different way, or they needed these drive arrays in this way. That type of hardware issue — the fact that every server is a little bit different, every data center is a little bit different — you can't fight that; you have to embrace it.
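The per-site variation Rob describes — the same logical network, but teamed NICs enumerated differently at every site — is exactly what a hardware abstraction layer has to normalize before any installer runs. A minimal sketch of that idea in Python; all names and data structures here are illustrative, not OpenCrowbar's actual API:

```python
# Sketch: validate one node's discovered NICs against a per-site network
# spec before any OpenStack installer runs. Illustrative names throughout;
# this is not OpenCrowbar's actual API.

def plan_bond(site_spec, discovered_nics):
    """Return a bonding plan for a node, or raise if the site spec cannot
    be satisfied by the hardware actually found on that node."""
    wanted = site_spec["bond_members"]          # e.g. two teamed 10GbE ports
    fast = [n for n in discovered_nics
            if n["speed_gbps"] >= site_spec["min_speed_gbps"]]
    if len(fast) < wanted:
        raise ValueError(f"need {wanted} NICs at "
                         f">={site_spec['min_speed_gbps']}Gb, found {len(fast)}")
    members = sorted(fast, key=lambda n: n["name"])[:wanted]   # deterministic order
    return {"bond": site_spec["bond_name"],
            "members": [n["name"] for n in members],
            "vlans": site_spec["vlans"]}        # same logical nets, per-site tags

# Site A teams two 10GbE NICs; another site reuses the same logic with a
# different spec, which is the point of the abstraction.
site_a = {"bond_name": "bond0", "bond_members": 2,
          "min_speed_gbps": 10, "vlans": {"storage": 200, "tenant": 300}}
nics = [{"name": "eth0", "speed_gbps": 10},
        {"name": "eth1", "speed_gbps": 10},
        {"name": "eth2", "speed_gbps": 1}]      # management NIC, too slow to team

plan = plan_bond(site_a, nics)
print(plan["members"])   # the two 10GbE ports, in stable order
```

The site-specific facts live in a small declarative spec, so the same validation logic runs unchanged at every site — a failed check stops the deployment before OpenStack is ever touched.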
This is why, when we refactored Crowbar, we refactored it as a hardware abstraction layer, so that you can then do whatever deployments you want. I'm not here representing Fuel, but we could actually deploy Fuel on top of what we do. What we're trying to do is create a repeatable baseline so that you can compare OpenStack deployments against each other. If you want to do it with Salt, you can do it with Salt; if you want to do it with Chef, do it with Chef, or Puppet, or Packstack — it really doesn't matter. The premise I come from is that you have to have a repeatable physical baseline so that you can compare, contrast, and debug these operational scripts, and then reuse them. Say SUSE did an amazing job doing HA OpenStack in their last product: you want to be able to use what they did and submit bugs against it because you're trying it, but if your hardware infrastructure looks different, even by a degree, you might not be able to do that. So what I believe is really important is that deploying hardware infrastructure is its own challenge; it has its own problems. There are architectural components that you have to expose in doing that which we don't want in OpenStack. We don't want to know the network topology at the switch level inside of OpenStack — that's not something you'd have in a cloud — but it totally matters if you're doing a physical deployment of OpenStack. Our premise is that those are separate problems: deploying OpenStack and operating it is one problem, and using tools that make cloud deployments easier and more effective is a different problem.

So, talking about this openness: I think openness means one thing to some people and an
entirely different thing to others. Openness means a startup can use your technology, plug its own technology in, and help the end customer build a final solution. If you constrain your solution with certain prerequisites, then these startups, these innovators, will have a hard time integrating with you. For example, if they choose to use Chef to do certain things in their technology stack, how can they get into your so-called open system? So from our perspective, we want to serve those people. We want to say that the final user — not the developer, the final user — is the judge of whether a technology is open or not. For Compass, we chose to open source our technology because we believe we have a plug-and-play architecture. We even worked with a certain SDN vendor — think of this: if you want to deploy an SDN fabric, how can you do it when certain prerequisites are baked in? That kind of thing I would call not open: you cannot deploy your SDN fabric. That's my opinion. Also, talking about openness from the community perspective: right now there is a huge umbrella called the Big Tent movement, and a lot of innovative ideas are getting into this Big Tent under StackForge. If you follow that project management, as Chris mentioned, you are part of the OpenStack family, and I think all of us are part of that family; with healthy collaboration and competition, every project will be open. That's my opinion.

So, one response. I said earlier that nobody is smarter than all of us, right? I'm going to refer to something that happened in the keynote today. The executive director of the Linux Foundation said that open source follows a kind of Pareto rule, where 80% of the work is open source, right?
And the value-add is that last 20%. So, the guys to my left and right — I'm in the center, squarely in the TripleO camp with Red Hat: open source everything, upstream first. We talked to Rob. We loved Crowbar's ready-state concept — the philosophy of getting the infrastructure right at the low level, repeatably. Beautiful; the guy's brilliant. So we're trying to do that. We love Boris's idea of a very opinionated and repeatable process, so with TripleO we're actually taking those commodity Legos and doing a very stable, focused install of OpenStack that's repeatable and, for our enterprise customers, very supportable. So again, I'm upstream first, community based, and if these guys have great ideas and they're upstream, we're going to adopt them, use them, and make our product better. Last thing: in our product, RHEL OSP 6, TripleO will be a tech preview. You'll be able to download it, install it, deploy against VMs, deploy against hardware. You'll see the operator dashboard — all the vision that we've had and been working on for the past two cycles will be realized in our next release.

Okay, so I didn't get to talk, and it's very important that I talk, right?
I think there were two questions, and I'm not sure either of them has been answered so far. The first was whether there should be just one deployment tool or there can be many, and the second was the scripted deployment approach versus the image-based approach, which is what TripleO uses. On the first question, my opinion is clearly that there shouldn't be just one, and the reason is that having just one deployment tool goes against the pluggability and configurability principles of the larger OpenStack community. When you're talking about deployment, as I mentioned — and as Keith has mentioned — you have to inject some opinion about how the deployment is done, and that opinion is where the value ultimately comes from. Because OpenStack is a pluggable set of Legos, you can't have one concrete opinion that is correct about what kind of house you build out of this pluggable set of Legos. OpenStack is great, but as far as deployment is concerned, I think there will always be many different tools that deploy OpenStack in different ways and represent the deployment opinion of the particular organization, or set of organizations, behind each tool. On the second question, image versus scripting, I can probably say less. We actually like the image-based approach.
We think it's a very elegant solution, and we ourselves have been very actively contributing to TripleO — I think we're the number three contributor, behind Red Hat and HP. But our opinion is that the image-based approach is useful and great primarily when it comes to very large scale deployments, when you need repeatability across a very large scale: if you're deploying a thousand nodes, fifteen hundred nodes, image-based is great. But the reality we've seen so far is that the vast, predominant set of OpenStack use cases has been deployments under 100 nodes, and at that size the script-based approach is better. It's better understood, it gives you much better control at the package level, and it goes much less against the grain of what organizations are used to. Although, going forward, we are very seriously looking at TripleO and would potentially use some of its building blocks inside the deployment approaches that we use in Fuel.

Good, I think you've all had enough chance on this one.
I'll pass you the mic.

So I agree with Boris. With TripleO, for the next generation, we're going to take a hybrid approach. As many of you know, we acquired eNovance, and eNovance had some technology that was very attractive and very interesting. So the next iteration for us is to take some of those best practices, which is: lay down an image first, for speed, and then come back with the upstream Puppet modules to actually finish off the configuration. It's the best of both worlds. In our case, again, there are multiple Legos on the table, and we're going to pick the best ones to get the job done.

Actually, I think we are agreeing more than it seems. It's funny, because I've been doing virtualization since '99, and what we used to call golden images ended up being an anti-pattern — just like the waterfall method is practically the definition of an anti-pattern. For us, golden images became an anti-pattern because as soon as you have a golden image, you have a maintenance issue: you have to maintain and propagate that image, and in any deployment at scale, variations between hardware models, upgrades, BIOSes, and things like that make that golden image harder to maintain, because you now have to propagate and maintain one golden thing across a larger and larger set. What we saw, and what we've modeled as best-practice operations at scale, is that you can definitely use an image to bootstrap a whole bunch of stuff at once, but you still have to maintain operational control of those systems in some type of layered approach, and the layering allows you to have functional separation between the layers. What we want is a network module that sets up the networking consistently across whichever operating system you deploy. That capability is really significantly important on hardware, because you have to set up your NICs and your networks and your bonds and your VLANs, and that logic you don't want to embed in your golden image,
because it's not consistent machine to machine in a data center, and it's not consistent customer to customer. So we believe very strongly that there are a lot of things in the operational stack that should be scripted, because at the hardware level they have a high degree of variation per site.

Yeah, I agree, and I want to add one point. With the Docker innovation in this DevOps world, I think the line between image-based and script-based is becoming a little blurry, and I agree with the other panelists that how to combine the goodness of both sides is something we as a community should spend a lot of time thinking about and innovating on.

Okay. I think that was round two of the discussion; we'll probably need a round three to figure out whether it's one tool or multiple tools and who's going to be the winner — the jury is still out; it's too early. So let me shift gears a little bit. Standing up OpenStack is hard, but keeping it running is even harder. If the software-defined infrastructure you stood up is not updated correctly and quickly, it may quickly fall behind, because the underlying software is changing like crazy — OpenStack is one example, and other software is similar. So I want to hear what you have to say about supporting people who operate such infrastructure. We already know standing it up is hard, but if you cannot run it, it's useless. In this context — on the monitoring side, and on cross-release update and upgrade, which lots of people are asking about — I want to get some insights from the panelists.

So, yeah — upgrades to me are the holy grail of a lot of our product requirements, right?
I remember I had an OpenStack deployment running back in 2011, and the product managers turned around and said: hey, great, you got it deployed — now upgrade it, because I get beaten up every six months on the upgrade cycle. So we had to give it a lot of thought, and I remember — I think it was the San Diego summit — I was actually on stage talking about upgrades and hoping to get to them. The challenge we have with upgrades is really several challenges. One is that OpenStack itself needs to be able to handle N and N-1 API compatibility, and that has to be a priority as a component capability. Then we actually have to deploy it with infrastructure that understands how to do coordinated orchestration. So upgrading requires orchestration; it requires an awareness to take small steps. And one of the things that's very important to me is being scriptable site to site. So we have to have a way — and I can't go into everything we did to reflect this in the Crowbar architecture; it's not the only way, but we think it's a significant way —
we call it annealing. We actually anneal a deployment. You have to have a way to script small changes in a repeatable way, so that developers can test it, QA people can test it, and customers can test it. If it isn't repeatable and deployable site to site, you haven't really helped with upgrades — you can't talk about how you did an upgrade, be a guinea pig for the next person in line, and fix the scripts they use so that they work better and don't repeat your error. I know most people operating at any scale have test labs where they rehearse the deployments. But you can't rehearse the deployments and end up with a notebook; you have to rehearse the deployments and end up with a script, because a thousand-node deployment doesn't get done — except maybe by Tim — by doing a whole bunch of hand-tweaking. So we have to have automation interfaces that handle iterative, directed, and validated steps to make this type of sequential change within an operational environment.

So, talking about this update problem: as Rob said, in our opinion there are two separate parts to the update issue. One is the code of OpenStack itself: can it preserve data consistency across versions? Can it preserve wire protocol consistency across versions? I think the community is moving toward that. The other problem concerns the deployment tools: how can you orchestrate the sequence of your upgrade process, and how can you roll out rolling updates and your new updates? That's the other issue we want to talk about. Hopefully the OpenStack core community will deal with the first issue, and here, as the more operations-related community, we can deal with the other one.
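The "iterative, directed, and validated steps" Rob describes reduce to a very small pattern: every change is an (apply, validate) pair, and the run halts on the first failed check, so the same script behaves identically in the lab and at the customer site. A minimal sketch, with entirely hypothetical step names — not Crowbar's actual engine:

```python
# Sketch: "annealing" as small (apply, validate) pairs run in order, halting
# on the first failed check. Hypothetical step names; not Crowbar's engine.

def anneal(state, steps):
    """Apply each step, validating before moving on; return names applied."""
    applied = []
    for name, apply_fn, validate_fn in steps:
        apply_fn(state)
        if not validate_fn(state):
            raise RuntimeError(f"step '{name}' failed validation; halting")
        applied.append(name)
    return applied

steps = [
    ("install", lambda s: s["pkgs"].add("nova"),
                lambda s: "nova" in s["pkgs"]),
    ("start",   lambda s: s["services"].add("nova-api"),
                lambda s: "nova-api" in s["services"]),
]
state = {"pkgs": set(), "services": set()}
print(anneal(state, steps))   # ['install', 'start']
```

Because the step list is data, the same sequence can be rehearsed in a test lab and then replayed in production unchanged — the "end up with a script, not a notebook" point.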
So, talking about upgrades — I guess that's the question: operations and upgrades. There is a notion of OpenStack upgrade nirvana, where you have zero-downtime, in-place upgrades from version to version. That's the ideal everybody wants to get to. I am 100% sure that nobody is there right now, and moreover, I'm pretty sure it's unlikely to happen in the foreseeable future. There are ways to mitigate the upgrade problem at this point, and I can tell you what we do. First, you have to be able to upgrade the tool itself that is doing the deployment and upgrading of OpenStack. We've been tackling that problem for some time now, and it is solved with Fuel: you can upgrade Fuel without wiping out your environment or reinstalling. Second, you can do dot-patch-type updates to specific services — not moving from one version of OpenStack to another, but patching specific services; as of the recent release, we were able to solve this. The third and most valuable part of upgrade is actually moving from one version to the other. At this point we don't have a magical solution, but what we end up doing is deploying the new version of the OpenStack environment next to the old one, and we have a tool called Pumphouse — also an open tool — for migrating your existing workloads from one environment to the other. That's effectively the only way we've seen it be possible to
So one other notion that I wanted to bring up that You know the zero downtime thing a lot of people are really kind of you know obsessed with a zero downtime and Zero downtime upgrades in our opinion the zero downtime When it comes to specifically the OpenStack the orchestration layer is not really an important problem to tackle because Scheduled downtime is a completely normal thing in any organization be it sophisticated web 2.0 guys or traditional enterprise guys Scheduled downtime off the orchestration service. No problem as long as your VMs stay up and the workloads themselves are still accessible It's okay to have the downtime of the OpenStack API. So in trying to solve the upgrade as far as the work that we're doing Solving for the zero downtime is not something that's on our priority list in the near term So I agree with everybody up here. It's an upgrades are very complex this is kind of why we Converged on this hybrid model where you do for major upgrades You would do a new image and orchestrate that very carefully as Boris alluded to For minor upgrades for things like you know heart bleed You don't want to send you know gigabytes to a new hose. You just want to make that few bytes change, right? so the hybrid model works for us, but the key here is orchestration and By using something like triple-O, which is really OpenStack. We have the benefit of the 17,000 member community Looking at this problem. So, you know back to the earlier comment. 
I think we can collectively solve this. It's very tricky, but I feel better being in the community, with everybody looking at the problem, than standing beside the community, even if what we built were upstream.

OpenStack is great — I think we're accomplishing amazing things — but I don't think we're inventing the operation of applications at scale. The operations world of people doing scale operations, upgrades, zero-downtime deployments, and changes is actually much bigger than OpenStack, and operators are out there doing all sorts of things on all sorts of systems. Saying "OpenStack is going to solve this problem, thank you very much" overlooks the fact that there are a lot of people with, frankly, more experience than some of the people on this team. And I'm not denigrating that experience; it's just the fact of having a big pool of talent — the bigger pool of talent on upgrades and migrations is in the Chef, Puppet, Salt, and Ansible communities, because they do everything. So I think it's important for us as a community not to assume that OpenStack is going to solve this problem by itself by creating another tool, when we actually have the mechanisms, the experience, and the capabilities to bring in a broader set of experiences and solve these problems by listening to how people have solved them before.

I disagree with Rob a bit, because he keeps saying "this new tool." There's no new tool — it's OpenStack. It's out there. We use upstream Puppet modules where feasible. It's just orchestrating and putting things together to solve a problem.

Okay, a few minutes left. I want to open the floor to the audience. If anyone has a question, I can pass you the mic. Otherwise, I will continue with my list of questions. Anyone have a question?
No? Okay, I will ask one more. It's a more forward-looking question: can you talk about one or two things that you would like to see changed or improved in your project in the future?

Yeah, I'll start. One thing we tried to introduce about a year ago was the notion of topology awareness. When you deploy OpenStack, you need to know what your failure domains are, be it at the rack level, the data center level, or even the node level. Certain things need to know that — for example, Hadoop HDFS is rack-aware, right? So we want to pass that information up where possible, and we don't want to provision services all within one failure domain, because that's kind of anti-cloud. So we would love to see topology awareness baked into some of the tooling that we're using upstream.

Yeah, sure, I can say just one thing. For us, going forward with Fuel, the one thing that we're focusing on the most is the pluggable framework. We refer to it as the pluggable framework, but basically it boils down to simplifying third-party integration with Fuel. For example, if you're a storage vendor, or a networking vendor, or a firewall-as-a-service vendor, and you want your product deployed alongside OpenStack deployed with Fuel, pre-configured out of the box, it should be fairly straightforward for you to create a plug-in and make it work. On the surface it sounds like a very simple problem, but as we started digging into it, it's not that simple.
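The topology awareness Keith asks for — never provisioning a service entirely within one failure domain — comes down to an anti-affinity placement rule. A minimal sketch of such a rule, with purely illustrative node and rack names:

```python
# Sketch: anti-affinity placement that spreads a service's instances across
# racks (failure domains). Purely illustrative scheduling logic.

from collections import defaultdict

def place(instances, nodes):
    """Greedy placement: always pick a node in the least-loaded rack."""
    per_rack = defaultdict(int)     # instances already placed per rack
    placement = {}
    for inst in instances:
        # ties broken by node name so the result is deterministic
        node = min(nodes, key=lambda n: (per_rack[n["rack"]], n["name"]))
        placement[inst] = node["name"]
        per_rack[node["rack"]] += 1
    return placement

nodes = [{"name": "n1", "rack": "r1"}, {"name": "n2", "rack": "r1"},
         {"name": "n3", "rack": "r2"}]
placement = place(["svc-a", "svc-b"], nodes)
print(placement)   # the two instances land in different racks
```

The sketch assumes the provisioning tool already knows each node's rack; that is precisely the topology information Keith wants the upstream tooling to carry.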
Next year — I mean, we've finally come across what we believe is close to a solution, and subsequent releases of Fuel will have the first implementation of the pluggable architecture, where third-party vendors that want their infrastructure to plug in with OpenStack can codify the configuration settings into Fuel and expose them to the end users.

Good, I'll go first. We're a hardware abstraction layer. Our goal is to have multiple people in the community using community tools to do OpenStack deployments, so that they can share learnings between them. What I would love to see is somebody who's passionate about the Chef community and the Chef cookbooks do the little bit of integration that would allow Crowbar to deploy the StackForge Chef cookbooks. We're already doing things with Packstack — you could actually use any of the modules — and there's Salt. So I would encourage, and I'd love to see, people taking OpenStack community deployments of OpenStack and using them against a standard hardware reference architecture, so that we can start having shared learnings around those platforms.

The thing we want to work on next is to truly embrace this openness model. Openness meaning, as I said earlier, enabling startups to integrate with us. There are actually several such initiatives going on right now, and as long as you can commit that your operational code will be open source, we're willing to work with you. That's a win-win situation for us and for our customer, because the customer doesn't buy on whether the source is open; they buy on whether the solution is open. And, as with Rob's hardware abstraction layer, if startups can get plugged into our system, we'd love to work with them. Other startups, if you have that same mentality, we'd love to work with you.
That's what we want to do next.

Okay — I'm just saying that I have the next panel starting now, so I have to run, for which I apologize, but you can carry on.

So that's a wrap. Let's give a round of applause to the panelists. Thank you, guys.