All right, well, thank you very much. Full crowd here, standing room in the back, so hopefully we won't put anybody to sleep. There is coffee right outside. My name is Greg Knieriemen. I'm a technology evangelist for Hitachi Data Systems, and I also co-host a podcast called Speaking in Tech, which is distributed on The Register every Wednesday. This panel is being recorded; it'll be available on YouTube, and we'll also have it available on the podcast this Wednesday. I also want to introduce Ken, who's co-moderating with me. Ken, why don't you introduce yourself?

Sure. My name is Kenneth Hui. I am a technology evangelist at Rackspace, focused on educating users on OpenStack and helping the community. One thing I want to encourage you all to do, if you're on Twitter, is to tweet things you're hearing in the session using the OpenStack hashtag.

Please do. We'll start off with Brian. If each of our panelists can introduce themselves, so everybody has context for your backgrounds and some of the discussion we're going to be having. Brian?

Sure, thanks. My name is Brian Gracely. I'm Director of Cloud Solutions at EMC, and I also run The Cloudcast podcast.

My name is Manju Ramanathpura. I work for Hitachi Data Systems; I'm the CTO for Intelligent Platforms. Primarily I focus on cloud platforms, OpenStack being one of the primary initiatives that I'm driving at Hitachi.

My name is John Griffith. I work at SolidFire as a software engineer. My focus is OpenStack; I'm the PTL for the Cinder project right now.

I'm Neil Levine. I was VP of Products at Inktank, the sponsor of Ceph, and I now work for Red Hat. And I'm not going to announce anything here.

I'm Val Bercovici. I'm the evil twin of the guy with hair up on the screen there. I've been at NetApp a long, long time — more than a decade and a half now. I direct research for the company, and prior to that I was a reformed developer.

That's outstanding. I want to start off this panel discussion.
Neil, obviously, Inktank made some news. Something happened?

Yes, a little acquisition happened.

So I don't even know which company or title to put you under.

I need to speak to my lawyer. I can't comment on that.

Give us some context for Red Hat's acquisition of Inktank. I don't want to make this about vendors necessarily, but there is a significant impact here for Ceph and the technology that surrounds it. Can you give us some background, specifically around Ceph? Then, largely, I want to open this up to the rest of the panel about the state of storage technologies in OpenStack in general. But just to kick us off, let's talk about Ceph and Red Hat for a minute.

So for those who don't know, Red Hat announced an acquisition of Inktank the week before last. Ceph is a massively scalable, open source, distributed storage product which has seen a lot of traction in the OpenStack community, especially on the block side, but also on the object side. You'd have to ask Red Hat for all their reasons, of course, but certainly Ceph's popularity within OpenStack was a huge part of the acquisition. I think they also recognized that to become a serious software-defined storage player, to make a dent in the storage community, they needed to bolster their portfolio and bring in some of the strengths that Ceph has, which complement Gluster, the existing storage technology they have. So OpenStack was definitely a part of it, but from the very early discussions we've had already, it was about more than just OpenStack. Certainly there's the big data space that Ceph is looking to play in as well. So OpenStack plus more.

Outstanding. John, why don't you kick us off? You've got a lot more exposure, I think, than a lot of us here on the panel to the state of storage and OpenStack right now. It'd be great to get your feedback and a summary of where you think things are.

Sure.
So I've been working on the Cinder project for a couple of years now — actually from when it started. I think it continues to grow, continues to mature; I think it gets better every release. One of the things that drives that is the fact that there are more and more vendors and more and more choices being introduced. But the important thing is not only are we introducing more choices and more vendors; at the same time, we're also enforcing compatibility between those vendors and those choices. I think that makes a really huge difference. So in my opinion — of course, I'm biased — the state of storage in OpenStack, at least block storage in particular, is fantastic.

Very good. Val, it's kind of funny — we've got three big traditional vendors up here, I guess is a good way to describe it. When folks think about storage in the OpenStack environment, a lot of the time the thoughts go right to commodity hardware, with the features being built into the software. From your perspective, and your view of where OpenStack is progressing, is that true? Are we boiling this down to commodity storage?

I can't generalize anymore with 4,500 people here. I used to be able to generalize a couple of years ago, when these sessions were smaller, but today there's no easy answer to that. If I were to tread on that overused and abused term, software-defined storage: if nothing else, I think OpenStack is the embodiment of software-defined storage. To administrators, it might mean options of white box as well as traditional storage vendors. But to developers, there's no better community, no stronger community in the world, of actual practitioners that implement storage through software, as well as developers that consume it through software. Even Amazon, which is obviously the 800-pound gorilla in this space, is a closed system. You don't implement it, you just consume it.
Here you get the choice and the luxury of actually interacting with the people who implement. That, to me, is the true definition of software-defined storage. Not to brown-nose here, but I think this community really is leading the way to the promise and the value of software-defined storage.

Let me bring this up. We're talking about software-defined storage in OpenStack. Red Hat bought Inktank, which is a unified storage platform and does software-defined storage. In everyone's viewpoint, is that where we're going with software-defined storage — that there's no longer value in the hardware, and it's going to be purely in software only?

Who heard Chris from Disney this morning? What were the three top words he emphasized? Fast, fast, fast. You've got to look for where the value is. The value of fast, in my mind, is at least two-fold. The predictable one: when I'm in production, I want the application to be as fast as possible. Hardware acceleration — heavy metal — helps there. But really, it's that 80, 90% of the application lifecycle spent in development and test where you can be truly agile. Like probably other members of the panel here, you've got solutions that can create hundreds, thousands, tens of thousands of instances — Nova instances, Cinder volumes. I'll put a plug in for Manila later in terms of why you need that. You can create and delete all those instances very, very rapidly, and make them permanent even though they start off as ephemeral. Those are things that we add value to. So it's really hard, again, to generalize and say only Ceph, or only Swift, or only Gluster, whatever, is the way of the future, because you've got to have the option of being able to go fast when you need to go fast and deliver value with a range of options — including software only, including hardware, and the right balance in between.

And again, it doesn't have to be Val — other folks on the panel, too.
When you say value, what do you mean by value? Again, in a world of AWS — and a lot of OpenStack clouds, even — the storage is dumb and cheap. We don't really care what's running underneath as long as it presents the capacity.

Well, I would go back to the keynote presentation. Glenn Ferguson from Wells Fargo talked about some of the challenges that banks have in terms of meeting compliance, doing backup, and having certain expectations built in by the rules and regulations that you follow. To meet those challenges — and now I'm speaking as part of the OpenStack community — I think there is room for OpenStack as a whole to improve. When you look at it from a software-defined point of view, that's where the gaps are. Software-defined is really about using your software to programmatically manage your infrastructure. It's less about commodity versus non-commodity hardware; the abstraction is what's software-defined. OpenStack has really driven the wave of using commodity hardware, especially in the shared-nothing storage space. But when you go into the shared storage space, there is still a lot of work that we all have to do as a community. That's where, when you talk about value-added functionality from storage vendors, I think there is more value. And that helps customers like the Wells Fargo we were talking about meet certain compliance requirements.

One thing I wanted to add on the topic of "it's just dumb storage, you don't care, it's just block, who cares": I don't think that's true. I don't think that's true at all. There are a lot of differences, whether they come via the software, as in some products, or via the hardware, or whatever. There are significant differences, and depending on your environment and your use cases, they matter.
It may be performance; it may be things like battling the noisy-neighbor problem, quality of service, things like that; or maybe it's more along the lines of availability, HA, redundancy. Those things are really, really important. The thing is, everybody offers a different sweet spot. Everybody on this stage — I think each of us could take our product and put one thing up and say, this is the one thing we do extremely well compared to everybody else. Depending on your use case and your environment, that's going to dictate which one of those you're going to want to look at, or maybe you want to look at all of them.

Please don't ask us to define what software-defined storage is, because I think you'll get as many definitions as there are people — you'll get three opinions just from me. But I don't want to make a forecast about the future, whether it's just dumb storage, just commodity. I think the great thing about OpenStack is that it's allowing businesses to discover what it is to them. Yes, there is obviously legacy stuff there, and now they've got a chance to explore and say, well actually, do I want to try commodity-based hardware for my storage, or do I want a mix-and-match approach? It's one of the great strengths of Cinder, of course, that it allows you to do that in a relatively seamless way from the control plane. Customers are discovering now what value they get from the different solutions. As John says, some of them really value the QoS or the HA; of course, some of them really, really value lower-cost storage. Where cost is the primary consideration, obviously that's where commodity comes in. Yes, I think there'll be heterogeneous environments for a while, but OpenStack is surfacing the value very, very obviously to the different users of OpenStack.

Ken, we've got a question in the audience — come on up.
If anybody else has questions, just line up behind the microphone. Go ahead. Just talk really loud and we'll re-articulate the question.

You used to have it years ago with the mainframe, but we don't know about that.

So the question, if I can summarize it, is: when will we get to the point where, because of the APIs, you can actually do the management of the storage layer up at the OpenStack layer, as opposed to having to go to each individual storage solution and do configuration there?

I just want to make one point, and then I'd like you to pick it up. Instead of "commodity hardware," it's probably good for our conversation to say "general-purpose hardware." The difference is that people associate commodity with cheap and no value in the hardware. The real discussion here is having general-purpose hardware so that you don't have vendor lock-in from a hardware point of view, but you still get the benefits of differentiated hardware. So if somebody wants some acceleration that has to happen in hardware — for faster data replication, faster snapshots — they should be able to get it. But the differentiation is: use general-purpose hardware so that the customer can use Hitachi's API, NetApp's API, EMC's API, SolidFire's, anyone's API, and pick and choose the underlying hardware platform. John?

There's a term I think a lot of us use here, which is a storage catalog. You expose your interfaces for Cinder and others through a storage catalog, and implemented inside that catalog is your specific provisioning for the kind of service level you expect, the kind of pricing, the kind of replication, backup, and so forth that you need. So my simple answer is: if you implement most of the storage implementation — the provisioning and orchestration services — behind the catalog, we can satisfy what you're asking for. Brian?
I think the other thing to keep in mind with this — and we've seen it over the last three, four, five years — is that we see more and more converged systems. What used to be siloed isn't anymore: you've now got a virtualization team that has to know enough about networking and enough about storage. Some of that exists today already, but we're getting to the point where the storage team, like the network team provisioning whatever there is on the network side, provisions pools of storage, and like Val says, they'll expose that through either service catalogs or through an API.

The thing that's a bit of a stretch, though — and you'll find it in little pockets — is to believe that you're going to have application teams that know enough about storage. Think about storage not just as "I want a blob of whatever, I need a LUN of whatever size," but "I need to know what that thing does," because somewhere down the road in your application you probably have to back it up. It may be important enough that you need a copy of it somewhere else, and God forbid it becomes a compliance thing that you've got to govern. That's not stuff that any application team is typically thinking about, other than "I want it to run all the time." So the premise that you're going to build a system where storage is just an API and you don't have to think about anything else — I think that's a stretch. But if you get to the point where that storage team or data team gives you pools of data — however you get to it: object, file, block — and they worry about some of those things on the back end in terms of giving you what looks like an SLA, then from an application team's point of view you start going, okay, it tells me how available it is, and I can dynamically grow it or shrink it how I need to.
So it becomes much more tenable in terms of that split, whether it's dev and ops, or apps and storage and network.

John, I see you furrowing your brow over there.

I always look like that. So I'm kind of confused, because basically what you described, and a good part of what people have talked about here, is actually what Cinder does, right? When you talk about provisioning storage — maybe I'm confused about which piece of the provisioning you mean. If you're talking about installing and configuring the storage device, that's one thing, and I don't see that going into a common API. But if you're talking about actually provisioning off pieces for users, that's exactly the point. That's OpenStack, that's Cinder, right? It's self-service and...

Okay, perfect. Sure. Yep.

And that's exactly what we have. In Cinder, we provide an API. The end user doesn't necessarily know what's on the back end serving it up, and they may have choices. Depending on the OpenStack admin and what they set up, users may have choices like: I want something that's HA, I want something that's backed up, I want something that has this performance level, whatever it might be. And we use a filter scheduler to do the automatic placement based on the parameters they provided. So I think we're closer to what you want than maybe the impression you have.

I think the question that comes up is: maybe we're mixing OpenStack up with software-defined storage. But the question is, how much of the value of each of your individual solutions should you surface up to OpenStack? Because remember, the reference implementation for Cinder, for example, was on commodity storage that had no QoS, none of those types of functions. Now we're talking about each of your unique functionalities. Should we be exposing that up to OpenStack?

So here's the thing.
First of all, the software-defined storage thing: this was a really hot topic over the past year, right? I was on a panel in Hong Kong on this and got beat up really, really badly. But the way I look at it — and everybody has a different view, so maybe I'm completely wrong — the way I view it is that Cinder is the software-defined storage. That's the whole point. So that's the premise I'm going on; if it doesn't match up, maybe that's why. In terms of the features, absolutely — the whole point is there's no reason to have a lowest common denominator and a race to the bottom in terms of storage in OpenStack and Cinder. But at the same time, that has to be done in such a way that it doesn't impact compatibility or usability for anybody else. The way we do that in Cinder right now, which I think works really well, is we allow you to custom-define volume types. Those volume types may point to different things, or expose specific functionality and features that different products have. SolidFire, for example — we expose quality of service through that, and actually now multiple vendors do. But that's just one example. OpenStack also provides extensions, so you can always add extra capability and customize your deployment that way without impacting base compatibility.

Neil, you had some comments.

Yeah, it follows on directly from what John was saying. I think we've got the commonality done. There are some technical gaps still to fix, but ultimately the common API is there, and as John said, we'll extend it to expose our individual capabilities. It's how we move on to innovate collectively beyond that — that, I think, is the next challenge.
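The volume-type and filter-scheduler mechanism John describes above can be sketched roughly like this. This is a minimal, self-contained illustration, not actual Cinder code: the backend names, the capability keys, and the weigh-by-free-capacity policy are all assumptions for the example.

```python
# Toy filter-scheduler sketch: filter backends whose advertised
# capabilities satisfy a volume type's extra specs, then weigh the
# survivors (here: most free capacity wins). Illustrative only.

BACKENDS = [
    {"name": "ssd-1", "free_gb": 500,  "qos": True,  "replication": False},
    {"name": "hdd-1", "free_gb": 9000, "qos": False, "replication": True},
    {"name": "ssd-2", "free_gb": 120,  "qos": True,  "replication": True},
]

# Extra specs the admin attached to each user-visible volume type.
VOLUME_TYPES = {
    "gold": {"qos": True},
    "dr":   {"replication": True},
}

def schedule(volume_type, size_gb, backends=BACKENDS):
    """Return the name of the best backend for this request, or None."""
    specs = VOLUME_TYPES[volume_type]
    # Filter: the backend must have room and match every extra spec.
    candidates = [
        b for b in backends
        if b["free_gb"] >= size_gb
        and all(b.get(k) == v for k, v in specs.items())
    ]
    if not candidates:
        return None
    # Weigh: prefer the candidate with the most free capacity.
    return max(candidates, key=lambda b: b["free_gb"])["name"]

print(schedule("gold", 100))   # ssd-1: QoS-capable, most free space
print(schedule("dr", 5000))    # hdd-1: only replication backend big enough
```

In a real deployment the admin defines volume types and their extra specs (QoS specs, backend capabilities) against actual backends, the end user simply picks a type, and Cinder's filter scheduler performs this filter-then-weigh placement on their behalf.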
And that's going to be a very unusual thing in the storage industry: to try and say, well, actually, we think we can push the state of the art of storage through a common API with extensions, and we'll work on that and develop it individually so we can expose it at different cost or performance points and what have you. In terms of the API flexibility that I think the questioner wanted, that's the next challenge for us here.

Just to double down on what Neil said, I think exposing the commensurate cost of a particular new feature is really, really critical, because you can still dumb things down to a naive enterprise setting where the budget was allocated a year or two ago and I can consume this resource at any rate I want. To be really cloud, you have to make sure the cost is visible up front in the interface. It won't always be intuitive that the most valuable, fastest, coolest feature is the most expensive, but as you start to consume it, particularly at scale, you need to know the commensurate cost of the service level you're requesting.

I'd like to ask each of the panelists, real quick as we're getting toward the end of this discussion: what are some of the challenges you see around the corner for OpenStack and storage specifically that folks exploring this should be conscious of? John, I'll start with you again, because I know you're up to your eyeballs in this, and I'd love to get your reflections on those challenges; then we'll move around the panel.

There are a lot of different perspectives on this, so I'll give it as the PTL working on the project, and what I see on a daily basis. One of the biggest challenges I see is how you continue to have some sort of compatibility and some sort of structure when OpenStack is now the hot newness, right?
Every group, every engineering group, everybody that has a storage product — they have marketing people and sales people saying, hey, you have to have an OpenStack driver right now; you have to be in there. The problem is continuing to scale that and making sure that you're still delivering a quality product. I think we're taking the right steps to do that, and I think it's going to continue to get better, but that's one of the biggest challenges. The new-features thing that just came up — that's a tough one too. Part of the problem there is how you define where those features should be. One of the big debates we have is on replication: is replication an end-user feature or an admin feature? Does it belong in the API at all? There are all sorts of things like that. Those are some of the big challenges I see.

Manju, do you want to share some reflections on the challenges you see?

Yeah, I would probably dwell a little bit more on the feature side. I'm looking more from a customer's perspective: how they are using it today and how they want to use it in the future. Some of the topics we've been discussing here keep coming up from an enterprise point of view. How do I do replication? How do I do disaster recovery? How do I do fast snapshots and fast cloning when I'm deploying hundreds of virtual machines? And backup and restore? Those things keep coming up in the enterprise world, where people are used to certain other tools that fit in with the rest of the data center — and it could be from a compliance point of view, for instance. If I configure my data center to meet certain high-availability requirements, and that configuration is automatically propagated into some other tool that our compliance police are using — how do all these pieces automatically plug together in an OpenStack environment? That's a challenge that I see.
Again, going back to John's point, it's not necessarily Cinder's problem. It's really more of a data-center management problem. Some of these things are features specific to storage; some are about how the data center as an entity is managed and how Cinder gives outbound messaging to those tools. That's another challenge we need to look into.

Right. Neil, you want to take a pass at it?

Actually, I was going to echo some of the things here. Again, this is personal, from the Ceph perspective. There are some challenges just in surfacing the kinds of enterprise features that customers want. I'm not too worried about that — the problem is really well defined, and we've just got to write the code: handling multi-site and disaster recovery a bit more elegantly, those kinds of things. I do think there is still some technical debt that needs to be caught up on, which is natural in a project like this. But to speak to Manju's comment, the real challenge is going to be the interaction between things like Cinder and Neutron, in the sense that the network becomes really important. If it's all software-defined and there's an asymmetric network failure, how do you pick that up? How do you correct for it? How do you modify both? Does the storage react to the network, or does the network react to the storage? There's a coordination issue there between Cinder and other projects to handle that in a common way that makes it easy for the administrator or the data center manager. It's a challenge across all of OpenStack to ensure these things are well conceived and well thought out before we start implementing, and it's going to be a challenge for people who are just Cinder devs, and that's all they are, to start really collaborating.
I just want to encourage anybody in the audience: if you have a question, please step up to the microphone, and we'll get to those questions in just a minute. Go ahead, Val.

I'll start by agreeing with Neil with regard to integration of Cinder and Swift — and hopefully Manila as well — into the greater OpenStack ecosystem, whether it's Neutron, Heat, Glance; deeper integration is really important. And there are two really, really big challenges which in my mind are also major opportunities. The near-term one is the fact that two-thirds of enterprise storage today is unstructured, accessed through file interfaces. So I think it's a huge opportunity to actually leapfrog Amazon and have something like project Manila promoted to a first-class project — not just a blueprint, as it is right now — and be able to satisfy all those file-interface requests and POSIX requirements in the enterprise, which also enables some pretty cool shared-storage use cases between multiple Nova instances, actually. That's a huge near-term opportunity. The long-term one is really where I spend most of my time in research, which is super cool: the next evolution of fast storage isn't faster storage, it's slower memory. It's persistent memory. The ability for the OpenStack community to define persistent-memory interfaces — perhaps building off some of the work that I know Alex is involved in with the NVMe community and the NVM programming extensions that Intel, NetApp, and others are working on — the ability to define how you offer up persistent memory for Nova instances, perhaps a new kind of instance type for Cinder and so forth, is a really, really cool opportunity. That's something Amazon inevitably will bury behind some pretty cool instance types as well.
But the opportunity for the OpenStack community to be more agile and get there first at value is upon us. In the next couple of design summits and conferences, that will be a big topic of discussion, because the economies of scale are going there.

Outstanding. Brian, any thoughts on some of the challenges?

I think the thing we hear from our customers the most is that the applications they're rolling out don't tend to be incredibly siloed anymore. There's an element that's going to be HDFS and object, and they want to do analytics on that; at the same time they're doing relational database work, so there's a block element to it; and there may very well be a file element. What they're trying to do is say: I don't want to have to think about five or six different APIs. How do I get to a point where I'm really just thinking about this as a set of workflows, whether it's for provisioning or for backup? Us in particular, they're pushing very, very hard: I want you guys, as storage experts and people that have products across these things — how are you going to make that simpler for me?
I understand these core technology elements, whether it's Cinder or Swift or Manila or whatever's coming down the road — that's all great, that's all part of the engine. But how do you make it simpler for me? Because I don't want to burden my application guys with: it's this hard to build an application that has multiple pieces, and now the underlying infrastructure is all these different components, too. That's what we see quite a bit: people saying the applications are getting more interesting and more complex; please don't keep making your infrastructure so incredibly complicated. Simplify the number of APIs I'm talking to, simplify the points of management, and so forth.

Right. We've got quite a character here ready to ask a question. Alex, go ahead — what's your question? So the question is about standards. Go ahead, John.

Well, I do want to touch on the vendor lock-in thing. You said that vendors are usually kicked out, and I would argue the complete opposite, especially if you look at the stats on any project in OpenStack right now, but especially Cinder, and you look at where the contributions come from and what's going in. It is vendor-driven, and vendors are valuable and important — there's no question about that. I think the vendor lock-in argument is kind of hype, to be honest, because if OpenStack is doing its job, doing what it's supposed to do and giving you that compatibility across whatever those pools are on the back end, vendor lock-in shouldn't exist. It shouldn't matter. So, standards — when you say that, are you talking about something like SMI-S, or are you talking about keeping standards in the sense of keeping things from breaking? Okay — totally agree, totally agree with you, 100%. That's something on the Cinder side we've definitely tried to focus on, because we've seen what's happened in some of the other projects when people do upgrades, and it is bad. It is really ugly. I think everybody has come away pretty bloody from that, and they've learned,
and I think every project in OpenStack now is fully aware of and focused on trying to make that better. I don't know how we standardize that and make it part of a process. Right now we have Grenade, which runs and actually takes your code and makes sure it runs against the previous version of OpenStack as well as the current version. All of those things are good, and it's a good start. We definitely have a long way to go, though; we have to make that better.

I'll just make a quick comment. Where we are from an OpenStack point of view, it's really following the model of fast, fast, fast: we really want to get OpenStack ready, deployed, managed. It's still sort of in a startup phase, I think. But some of the components, like Cinder and Nova, are maturing at a faster rate than others, and I think it's more of a cycle: once you have customers deploying Cinder, once you have customers deploying Nova, you will start seeing resistance to making major changes to the API structure. I think that will evolve into its own standard — this is just my perception — as opposed to following, say, a SNIA standard or CDMI or something like that. My take on why that's happening is probably the developer mentality of "let's get it done; I don't want to get bogged down in the bureaucracy of standards."

I'll actually push back on the second one, to a certain extent. Yeah, there are some general "don't do evil" types of things, but the marketplace knows how to deal with vendor lock-in; they know how to deal with risk management. Storage is one of those few industries where you don't truly have an 800-pound gorilla — 30% is market leadership here; it's not networking, it's not databases. So the market knows how to deal with this stuff. Customers have choice in which distro they want to use; they have choice in which platform they want to use. Today, if they want to go Wild West
and go, "Look, I'm just going to kind of pick and choose whatever I want." Find me a customer who goes, "I really don't care which vendor I use; I just plug them all in freely," for Ethernet switching or storage. That's kind of a solved problem. I think the vendors tend to focus on it because we have to sort of defend it a little bit to our customers who go, "Well, I'm going to beat you up over vendor lock-in." Vendor lock-in is risk management; that's what it is. And whether you look at it as cost management or technology management, the market knows how to deal with this stuff. We as vendors worry about it because we have to fill out RFPs and so on, but I don't see it as a massive, massive problem, because again, at least in storage, you don't have that one gorilla that owns 50, 60, 70, 80% of the market and can make one change that locks everybody else out. I mean, there are how many, twenty-something storage vendors contributing code now?

I think maybe when you used the word "standards" you threw everybody, because we thought you meant capital-S standards, CDMI, SNIA kinds of things. I think quality, sort of not breaking things, is probably the word you meant. You know, don't change the API so that suddenly everything breaks. I mean, this is a standard tension in open source communities: the developers want to rush forward developing cool new stuff, and the product guys are going, "Wait, wait, you're breaking things." It's an emergent thing that will happen in this community; it's happened in other open source projects. And I think there'll be a natural shift, probably in the downstream OpenStack products, where they take a lot of the driver code and everything else and maintain the quality, because they're the ones who are on the line. When Mirantis or Red Hat or Canonical or Cloudscaling, whoever it is, when they're shipping a product, they're going to be the ones on the line when the customer does
the upgrade and everything breaks, and I think they'll police things to a certain extent. But in a way it's fairly mature from an API point of view already. That clean abstraction is fairly consistent now and doesn't vary a huge amount, especially because these different extensions give the vendors the opportunity to play with things. And when the customers really demand, "Do not break anything at all; I want 100% verification," that will force the change, and I think that's happening right now. So it's not a concern we've seen from our customers, certainly within a couple of years, and I think that's a good question.

Okay, let me ask one last question. Hang on, I've got to jump in here. I'll be brief. So the adage about cloud is, you know, that the cost of failure is really low, which is why people like to move fast and break things, but the price of success is high. So there's a major milestone I can't let pass with that question; it happened two weeks ago. Who knows what Facebook's motto used to be? "Move fast and break things." They changed that motto at the F8 conference they just held, where they realized that their developers now own their own infrastructure, and with all of the APIs they expose and the billions of dollars of revenue they do with their developers, the interface is so important now that keeping it stable, consistent, and so forth is the new mantra and the new reality. So I think it's a really fascinating milestone. It shows you that once you succeed, you grow up, and I expect that problem to go away with success.
So, last question. This is related to storage vendors and open source. I talk in the ecosystem, and to be frank, there's a huge suspicion about storage vendors: that basically you guys are in it just to sell products, and that you're not really interested in helping the community move OpenStack any further. So, A, how do you address that accusation, and B, what do you do about it? And specifically I'm talking about people who say that all the storage vendors are doing is adding drivers, and not doing anything to make OpenStack storage a better product.

We're going to need to move very quickly. Look at the data. Look at the contributions that people make, not just lines of code but the number of contributors to the OpenStack project. And look at open source in general: contributions to Linux, to BSD, to the new initiative, in the wake of Heartbleed, to secure funding for critical open source infrastructure. Look at the data; it's an ignorant comment if you look at the data.

I have a slightly different view. Maybe I'll come back to my earlier point: we've got a very good API abstraction now; we need to innovate on it, so it's going to be a matter of stepping up and showing it. And yeah, look, we had to do the drivers, everyone had to do the drivers; that's the table stakes to start playing with the API. But at that point, I think they are committing the resources. Yes, they have been focused on just making their own stuff work, but I think it will be especially interesting at this summit and the next one: what new features do they surface first in the Cinder API, and do they put resources into that, which then benefits everybody in the community? I mean, the proof will be in the features and the code.

Well, unfortunately we're getting the hook; I got it from the back of the room. John, I'll let you go real quick, and we'll wrap up. All right, so this is one of my favorite topics. I would say that, yes, absolutely, there are definitely vendors in the community that are
just doing a driver, and that's it. There are other vendors that are doing significantly more. Again, you can look at the stats, you can look at Gerrit, you can look at GitHub, and you can find out who those are. You know, I'm a perfect example: I work for a vendor, but we're not in it just to sell more product or just to get a driver in, and obviously I've been leading the project, so you can go look up that information. At the same time, though, I want to say that a vendor that just wants to put a driver in and be in OpenStack to try to get some sales, that's not a bad thing in my opinion. Because the bottom line is, I don't care: if anything is driving OpenStack and getting more customers and more potential users interested in OpenStack, and that's a roadway, a gateway, for them to get into OpenStack, that's great; I'm fantastic with that.

I think we're going to have to wrap it up on that note. Thank you very much. Ken, thanks. Val, Neil, John, Manju, and of course Brian, thank you. Full house; we appreciate you all making it out for this. Thank you.