Live from Vancouver, Canada, it's theCUBE at OpenStack Summit Vancouver 2015, brought to you by headline sponsors EMC and jointly by Red Hat and Cisco, with additional sponsorship by Brocade and HP. And now your hosts, John Furrier and Stu Miniman. Okay, welcome back everyone. We are live in Vancouver for OpenStack Summit. This is theCUBE, SiliconANGLE's flagship program. We go out to the events and extract the signal from the noise. I'm John Furrier, with my co-host, Stu Miniman. We have two great guests here from Red Hat and Cisco: Ranga Rangachari, VP and GM of Red Hat Storage, and Duane DeCapite, Director of OpenStack Product Management at Cisco. Welcome to theCUBE, guys. Thank you. Thank you, thanks for having us. Okay, so it's Red Hat and Cisco. So what we're talking about here is scale, efficiency, simplicity. What's the story? I mean, can it be that easy? I mean, UCS is popular. What's going on with you guys? What's going on in the relationship? So we have Cisco Validated Designs, CVDs, together, where we put all the components together: compute, storage, and networking. You have Cisco UCS, as you mentioned, the Unified Computing System. Good compute and storage. Red Hat on top of that: RHEL, OSP, as well as Ceph technology. We have Cisco Validated Designs. We also have UCSO, the UCS Integrated Infrastructure for OpenStack. With these we de-risk the solution and we provide all the best practices for a complete compute, storage, and networking solution. Let's talk about the de-risking, because certainly you guys know, and certainly I know from the Cisco days, that policy-based is really important, especially at the network level. Now in storage, software-defined is the big rage, but everyone wants scale-out storage. So that is the real value that you're seeing with open source. Are you guys tying that together, preserving the scale-out nature? What is the Ceph design for? And that's really a real design thing. Can you explain how that works?
How do you make it enterprise-grade? Let me start, and Duane can obviously also fill in on this. I think that's a great point, because we've seen this trend shift over the last two to three years, where it's no longer about scale-up, it's scale-out, right? Whether it's file or object or block, it's about how horizontal scaling happens without any degradation in performance or capacity. So an intrinsic part of the Ceph architecture is something called the CRUSH algorithm. And I don't want to make this a PhD thesis around the CRUSH algorithm, because Sage Weil, who is the creator of the Ceph project, is a better person for that. What does CRUSH stand for, real quick? So CRUSH stands for Controlled Replication Under Scalable Hashing. It's an offshoot of the original RUSH algorithm. And the fundamental piece behind the CRUSH algorithm is how the data gets placed, because when you're talking about millions and billions of objects, you need a placement algorithm that's not directory-bound. So the algorithm is smart enough, and policy-based, that it knows where storage is placed, and there's no single point of failure. So those two elements really help you scale. So you're tracking the data. Tracking the data, you know, with the hardware in mind. So what your power supply looks like, what the hardware disks look like, what the storage subsystem looks like. Essentially what customers care about is: can they scale without compromising on performance and without compromising on availability. So that's the- So we can joke that Cisco's crushing it in storage. Yeah, or Red Hat is too. All right, so what does this mean for enterprises? Obviously, you know, software is key, software-defined everything is going on. You said not bound by the directory. Do you mean from the scope of where the data resides, or across platforms? What does that mean?
Well, it is highly redundant in that it's across multiple nodes. So you can have multiple UCS servers spread geographically. So the ability for you to track the data regardless of where it resides is one of the key attributes of the CRUSH algorithm. Now, if you pop up a level, we are absolutely seeing the trend where customers are moving to software-defined everything. But a fundamental part of a software-defined architecture is that you need real, I guess, enterprise-ready hardware, where innovation happens on a daily, weekly basis, to take advantage of it. So that's where the Cisco relationship really comes in, and I'll let Duane speak to that. Yeah, Duane, maybe I could tee it up for you there. So, you know, we actually talked to Sage last year at the Atlanta Summit, and, you know, Red Hat is a software company. They've got the Ceph pieces, they've got Gluster, you know, they've done a lot in OpenStack for a while. Talk us through, you know, I obviously know the CVDs, but how does Cisco tie into that? Is it the underlying hardware? Are there some software pieces on top of that? Absolutely. So Cisco is the market leader in the cloud infrastructure market, per Synergy Research Group. Cisco UCS is a great compute and storage platform. Take the UCS C240, for example, which is part of the UCSO offer between Cisco and Red Hat. It supports up to 12 large form factor drives, up to 60 terabytes, with two SSD drives. You also have the UCS C3160 with 60 large form factor drives, for up to roughly 360 terabytes. What that means is, with that storage architecture and the amount of storage that you have, you overlay something like Red Hat Ceph on top of it and you have a complete scale-out storage solution. And as Ranga mentioned, because the CRUSH algorithm allows the location of the storage to be computed rather than stored, there's no single point of failure.
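The "computed rather than stored" point the guests keep returning to is the crux of the no-single-point-of-failure claim. As a toy illustration only (rendezvous-style hashing as a stand-in, not Ceph's actual CRUSH implementation; all names below are made up), here is a minimal Python sketch of computed placement:

```python
import hashlib

def place(obj_name, osds, replicas=3):
    """Toy stand-in for CRUSH-style placement: rank every OSD by a hash of
    (object, osd) and take the top `replicas`. Placement is computed from
    the names alone -- there is no directory or metadata server to ask."""
    scored = sorted(
        osds,
        key=lambda osd: hashlib.sha256(f"{obj_name}:{osd}".encode()).hexdigest(),
        reverse=True,
    )
    return scored[:replicas]

# Hypothetical cluster of eight OSDs (object storage daemons).
osds = [f"osd.{i}" for i in range(8)]
primary_and_replicas = place("vm-disk-0042", osds)
# Every client independently computes the same answer, so there is no
# central lookup service to fail or to bottleneck.
```

Because any client can derive the same placement from the object name and the cluster membership alone, there is no central lookup path to break, which is the property both guests are describing.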
You know, there's no controller, there's no metadata server, and you combine that with UCS, with the active-active fabric interconnects and the high scalability, and it's a really nice scale-out storage solution. So what's the main benefit, throughput or IO? I mean, is it IOPS, is it throughput? What's the main value of that? So, the way customers think about this, they look at it along three different dimensions. You know, one is latency, another is performance, and the third is capacity. So it depends on the workload. There are certain workloads, what we call cheap and deep, with just a huge archival element, where capacity is more important than performance or latency. Or you have classic scale-out, where performance and latency come in. So what we are working on with Cisco is classic, you know, CVDs that help organizations take a lot of the guesswork out of the system, which is: I'm running this type of workload, what's the best configuration that I can expect from the Cisco hardware and the Ceph software working together? So that's the overarching theme behind what we're doing. So Duane, put this in Cisco's language now, because Cisco has been doing this stuff in the infrastructure from day one, from load balancing packets, LocalDirector, all that stuff, and routers. Software's changed, obviously. Is the centralized piece a big part of it, and how does UCS make it intelligent? Because that's going to be the key thing. And has that worked across all the other Cisco UCS opportunities? Sure. So UCS is a very innovative server solution. It was designed for virtualization and cloud from the ground up. It has its own integrated management controller, an IMC chip, built into it. It uses UCS Manager, so you can actually configure it with a really easy-to-use GUI. You can configure it securely with SSH and a CLI. But you create service profiles, essentially, which cover things like RAID levels and, you know, BIOS levels.
It makes it very easy to get a new server configured. It makes it very easy to copy configurations from one server to another. It makes it very easy to replace a server. But it's really that UCS Manager dramatically lowers the opex and enables the scale-out architecture. All right, so, you know, when Red Hat made the acquisition of Inktank, one of the things that flagged for me was that when I looked at the survey of people using OpenStack, you know, Ceph was right near the top. Gluster was another one. So Red Hat has, you know, bought a lot of those open source, you know, groups that are behind it. Talk a little bit about the customers. Where are customers with, not just Ceph, but with OpenStack in general? Do you have any kind of joint stories that you can share with us? Sure. So for example, the Broad Institute. We'll be doing a session tomorrow where we'll talk a little bit more about it, but it's just kind of the ease of use of UCS and the scale-out combined with the power of Red Hat. And we're seeing lots of interest and adoption with Cisco and Red Hat. And you know, the other part is, even though it's been a year since the acquisition, right, one of the things that I'm personally very thrilled with is just what I would call the unabated pace of innovation that's going on in the Ceph community. A couple of weeks ago, Yahoo, I think, published a blog on how they were managing almost 14 petabytes of storage using the Ceph technology for their Flickr property. That shows you, you know, the true nature of what we mean by scale-out. Today the conversation isn't about terabytes, it's about petabytes. So scale-out becomes very, very key. So while we continue to innovate on the product side, what I continue to be amazed by is just the innovation that's going on in the community. And there's support for both block and object storage with Ceph, which is very nice. All right, so, you know, we've got kind of the momentum with Ceph.
The partnership between Red Hat and Cisco has been on for many years. You know, what are your thoughts on OpenStack in general? You know, it's early, day one here at the conference, but how many of the conversations that you're having with your customers at this point involve OpenStack, and how many of them are asking for it? Yeah, I mean, I can tell you from our perspective, the upswing in the conversations is definitely way beyond where we were a year ago. And you see that in the crowd today, right? I don't know, somebody told me it's 5,000, 6,000 people at the summit. Yeah, 6 to 7, yeah. Which is what, 40, 50% more than where we were in Atlanta last year? Yeah, over 4,000 last year. So definitely it's not in the lunatic fringe, if you will; it's becoming more and more mainstream adoption. And we have hundreds of proofs of concept on the OpenStack side of things. And a vast majority of them have Ceph as part and parcel of the implementation. So what is the big uptake in Ceph with customers? Can you guys talk about the use cases in particular? What specifically are they adopting Ceph for? I'd say the number one use case for Ceph is block with OpenStack. I mean, I saw the survey from yesterday for this conference too; Ceph is still north of 60% of the storage substrate for OpenStack. So that is by far, if I can use the word, the killer app for the Ceph block solution today. And obviously, even for object, scale-out object is another use case, especially around what I would call the digital enterprise, where people have audio, video, all those things that need to be managed. Those would be the two key use cases that we're seeing in the marketplace today. And how about from a cost control perspective, I mean, mostly? No, I think it's more than cost.
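For readers wondering what the "block with OpenStack" use case looks like in practice, Ceph typically backs the Cinder block storage service through its RBD driver. A minimal cinder.conf backend section might look roughly like this (a hedged sketch; the pool, user, and UUID values are placeholders, not details from this interview):

```ini
[DEFAULT]
enabled_backends = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <libvirt-secret-uuid>
```

With a backend like this, every Cinder volume becomes a thin-provisioned RBD image striped across the Ceph cluster, which is what puts Ceph "north of 60% of the storage substrate" in the OpenStack user surveys the guests cite.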
I think it's the flexibility that's really getting people to look at it and say, okay, cost does factor into the equation, but fundamentally it's about the scale-out. They want the ability to start at, I don't know, 500, 800 terabytes, and can you really scale up to 20 petabytes of storage? So the number one thing, when we ask them why they go with the solution, it's about scale, the scale-out nature. So I've got to ask both of you guys about the evolution of open source. You've got APIs; this is now table stakes in the cloud. How does it all work? I mean, Red Hat, you guys are purists in the open source formula. Pure-play open source, a great business model that's been working for many years. Now you have this kind of ODP model in Hadoop, where you're seeing people work together. EMC's adopting open source, other people are adopting open source. So open source is now becoming an opportunity, but also in some cases a marketing program. So I've got to ask the open source question to you guys: how does the open source community win with this? How do you guys get the advantage of the rising tide that floats all boats in open source? And how does that render itself in the customer environment? Is it through APIs and whatnot? How does that work? I mean, it's all about solving customer problems, right? Tremendous interest in open source, tremendous adoption, the power of the community. We compete on implementation, provide software and services, provide value-add layers on top of it. But it's very much a major asset to be able to have something that can be changed and is very flexible. And when we look at the OpenStack surveys, you know, it's not even the cost savings that's the most important thing, it's the flexibility and the pace of adoption. And so we completely support that. Well, on what you're saying about flexibility: does that mean tuning it, does that mean writing code? Is that what you're talking about?
Yeah, it's about the flexibility to fine-tune to certain hardware implementations, but also, I think it goes beyond open APIs. I think there's more value in open source than just open APIs, right? We subscribe to the theory that open APIs are, in a way, a subset of open source. Right, so for somebody to fully take advantage of it. The other interesting thing that I saw in the OpenStack survey yesterday was, if you look at the amount of storage, there was one class which was one petabyte and above. Last year it was 2%, this year it's 6%. So that to me is a true indicator of people starting to grow their storage infrastructure with OpenStack. A 3x between last year and this year is pretty good. I mean, people want freedom and choice. Obviously, open source equals freedom, and that's always been our mantra on theCUBE. But when you have production environments, like, say, Cisco environments, there's a lot of proprietary gear involved with Cisco. It's certainly a huge de facto install base, a de facto standard; I always call it the Cisco model. But the idea is that they want control, with open source flexibility to do tuning. And they want security. They want to have a comfort blanket. They want a warm, fuzzy blanket so they can go to bed at night and not worry about failure. So I've got to ask about redundancy and throughput. Talk about this idea of failure. Is there a single-point-of-failure problem? What do customers talk to you guys about in this? Because that comes down to: okay, I love open source, but will it work and will it fail? Who do I call? Is it a single throat to choke, as they say? You know what I'm saying? Customers worry about that stuff. What's the failure point? If I can just take a stab at the beginning. So I think you hit the nail on the head, right? Which is, there is a huge difference between a project and a product, right? And we pay very close attention to that, right?
I mean, it's great from an innovation standpoint, what's going on in the community. But for us to give our customers a bunch of bits, you know, we absolutely make sure that it's enterprise-ready, right? That includes security, that includes, you know, things like testing, everything else that customers take for granted. So, if you can think of it, that's the secret sauce we add to the overall offering, at least from the software side of it. About 10 years of support you guys offer on your RHEL? Yep. That's huge. That's huge. You can call us from, you know, release to release. So that's one thing, the comfort that they get in going with Red Hat Ceph Storage is the fact that, look, this has got the Red Hat badge behind it. I know what it stands for. Yeah, I get that, I get all that. But I mean, that's kind of the overarching, like, warmness that you guys provide at the beginning. But in an agile cloud, you know, stuff's breaking, right? You've got to be ready for failover. So talk about that piece of it with the new stuff with Ceph. I mean, is there a single point of failure? Is it more redundant? Is it distributed? Well, I can talk about how Ceph inherently is a very distributed architecture, right? And there is no single point of failure. It's just built from the ground up, as you were talking about earlier, built from the ground up for an environment that scales out. So you cannot have a single point of failure. Things like having no metadata server are just attributes of that. But the fundamental piece that Ceph is built upon, and so is Gluster, is that there is no single point of failure. And that's the scale of it. That's the whole scale of it. What's your take on it? So, absolutely. And then when you add the active-active fabric interconnects on UCS, for example, one node can go down and the other can pick up.
There are also a lot of technologies just to scale out the server technology: plug in a new UCS blade and it's automatically discovered as part of the chassis. These are the things that the customer needs, because you made an excellent point: they need quality components behind this cloud, and they need something that's going to lower their opex. All right, so where did CRUSH come from, this controlled replication under scalable hashing, whatever it's called? Is that a joint thing? Is that a Red Hat product? Is that Cisco? No, so CRUSH is actually Sage Weil, who was the founder of Inktank. That was his PhD project, which he authored with a couple of his peers when he was at the University of California, Santa Cruz. So that is kind of the genesis of the CRUSH algorithm. And that's part of Red Hat with the acquisition. That's part of the Ceph project? Yes, yes, absolutely. It's part of the Ceph project, part and parcel of it, right? And then, yes, we inherited that as part of the Red Hat Ceph project. So you guys are staying behind this in a pretty big way. Absolutely, absolutely. Duane, I've got to ask you about Cisco. UCS has been really hot lately. Certainly a lot of debate on market share on the server side, what they include, what they don't. We always debate it on theCUBE. Server share is up, and why is UCS popular? Why is it so successful? Is it because it's integrated? I mean, what's the magic behind UCS's success? Well, I mean, it's designed from the ground up to scale and for virtualization. It's designed around a single fabric for networking, compute, as well as management. The integrated management controller is directly part of the infrastructure. The service profiles, setting RAID and BIOS settings. It's designed from the ground up to be very scalable, you know, which is why it's been a component of, you know, Vblocks and FlexPods, and now OpenStack.
Yeah, so Duane, you bring up a great point, because the wave of virtualization, you know, kind of was the rising tide that rose all boats. I think I cataloged over two dozen storage partnerships that Cisco has. What makes the Red Hat partnership special, or what customers are driving toward that joint solution set, between the two of you? Sure, sure. So, you know, Red Hat is the leader in open source, Cisco is the leader in cloud infrastructure market share; it's a great partnership, quality components for a complete compute, storage, and networking solution. So we're very excited about working with Red Hat, including our CVDs and our joint offers like UCSO. All right, so what's the outlook for the show? What are you guys working on here? What are some of the conversations you're hearing in the hallways around your relationship, around OpenStack? Share with the audience, in the last minute we have, some of the top conversations that you guys are involved in. Well, it's just three hours into the show, so I haven't had too many opportunities, but overall, at least in the informal conversations I've had with customers, the CVDs, or the reference architectures, really ring home, you know, because that's one of the things they look at and go, that's not a missing piece; having that really takes a lot of the friction out of the system. So, you know, we are bullish. I'm sure Cisco is; the customers are eagerly looking forward to it. Yeah, absolutely. I've noticed the conversation is changing. It's not so much about pilots anymore. It's about production. A lot more activity. People are excited about where OpenStack is going. Lots of questions about Magnum for containers, something that Cisco and Red Hat are working on, and Ironic for bare metal in the release as well, but people are very excited about the feature velocity. We have some CrowdChat activity from Bert Latimore and some folks on chat here.
Regarding Red Hat Ceph Storage, this one's for you. I'm wondering about all the potential use cases for delivering just-in-time scale-out for those high-growth applications that are somewhat unpredictable. Yeah. What does he mean by that? This is from David Deans. So, I think the key word is unpredictable, right? When I have these customer conversations, I think scale-out and unpredictability go hand in hand. Because there's no way to predict. If I'm, you know, a line of business at a large insurance company and I open up a web property, you have no idea whether 2,000 people are going to show up or 20,000 people are going to show up. So, how do you have a storage infrastructure that really accommodates that and brings some control into that unpredictability? So, this again is the hyperscale model. You've got to be ready for auto-scaling, all that stuff. That's all software. It's all software, it's self-healing. Yeah. You know, which is: how do you go about adding 10 new servers? How do you make sure that the load gets distributed evenly? So, you know, those are all the conversations that we have day in and day out. And I think, in this specific instance, the Ceph architecture lends itself really well to the unpredictable nature of the storage requirement. And self-balancing as well. Self-healing and self-balancing. Great point. Yeah, I mean, this is the grid computing vision. I mean, all that stuff's being recycled back from the old days. Yeah. SOA, service-oriented architecture, web services. I mean, I heard SAML today in the keynote. SAML? What year am I in, 2001? I mean, a lot of that stuff was being worked on by both you guys, and I know for a fact a lot of that web services work is now here. But it's got a little cloud twist to it. Plus the trajectory it's on. I mean, that's the more important part, right?
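The "how do you add 10 new servers and rebalance evenly" question has a concrete flavor: with hash-computed placement, growing the cluster should move only a small fraction of the data rather than reshuffling everything. A toy Python sketch (rendezvous hashing as a stand-in for CRUSH; the node and object names are invented for illustration):

```python
import hashlib

def owner(obj, nodes):
    # Rendezvous (highest-random-weight) hashing: the owner of each object
    # is computed from hashes alone, so every client agrees on placement
    # without any coordination service.
    return max(nodes, key=lambda n: hashlib.sha256(f"{obj}:{n}".encode()).hexdigest())

objs = [f"photo-{i}" for i in range(10_000)]
before = {o: owner(o, [f"node{i}" for i in range(10)]) for o in objs}
after = {o: owner(o, [f"node{i}" for i in range(11)]) for o in objs}
moved = sum(before[o] != after[o] for o in objs)
print(f"{moved / len(objs):.1%} of objects moved")  # roughly 1/11 of the data
```

Growing from 10 to 11 nodes relocates only about a 1/11 slice of the objects, onto the new node, which is the property that makes scale-out under unpredictable growth tractable.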
Which is just, from 12 months ago to where we are, I'm sure you guys are seeing the same thing. I mean, it's where the rubber hits the road. So, I've got to ask you a final question. For the folks watching, we always say, you know, OpenStack's got to go faster, got to go faster. And certainly the big vendors are coming in, bringing some real muscle to the table in terms of code and knowledge and IP in an open way, you know, out in the open. What's the show this year? What is the meat on the bone? What is the real deal? What proof points are here at this show that you've seen or are hearing that can give confidence that OpenStack has legs? I mean, I look at the agenda, and there are a lot more customer presentations, right? Compared to previous years, that to me is a true testament of adoption, right? The vendors can talk all they want, but customers are... The solutions. The solutions, exactly. I completely agree with that. And it's even the applications the customers are talking about. It's no longer, you know, just kind of DevOps getting an application up and running quickly; it's mission-critical applications, you know, large-scale web servers, large-scale e-commerce. Yeah, and they're doing it in a way that's customized. That to me is what the whole promise of OpenStack was: can it be stable? Can it be hardened at the infrastructure level? And then people can use the building blocks in whatever flavor they feel fits their business model. That lends itself to their business, absolutely. Guys, thanks so much for coming on theCUBE. Thank you. Red Hat and Cisco here on theCUBE together, sharing their relationship, their partnership, what they're working on. And of course, we're sharing that with you. This is theCUBE. We'll be right back after this short break.