Live from San Francisco, extracting the signal from the noise, it's theCUBE, covering VMworld 2015. Brought to you by VMware and its ecosystem sponsors. And now your host, Dave Vellante. Welcome back to Moscone, everybody. This is theCUBE. theCUBE has been here for, well, this is day three of VMworld 2015. We go out to the events, we extract the signal from the noise. A lot of signal, a lot of noise too, a lot of signal around storage. Ed Walsh is here, he's back, he's the CEO of Catalogic, and Bina Hallman, she's the Vice President of IBM Storage and Software Defined. Welcome, great to see you again. Bina, we just saw you last week in Marlborough at Wikibon headquarters. We got the deep dive under NDA, and now you've announced everything. And Ed, good to see you again. Well, thanks for coming on. So, big show for you guys. You're coming out, we had Eric Herzog on, he had the Hawaiian shirt. No Hawaiian shirt for you. That's right, yeah, yeah. Hawaiian shirt was a theme, right, Hawaiian. That's good, you guys are making some noise, that's good marketing. So what's going on across the street at the IBM booth? Yeah, you know, it's been a really good show for us. We've got quite a bit of traffic and, you know, client focus. We've got a whole family of storage offerings out on the floor, from our Storwize offerings through Spectrum and our FlashSystem arrays, of course. We also have our VersaStack offering, you know, the joint offering with Cisco, and our Power Systems, so quite a bit of new stuff out there that we're demoing. And how about what's going on with Catalogic? We've seen you a couple of times, but it was so early on, we didn't really talk about what's going on across the street. I hear it's crazy; I haven't been able to get over there. No, great traffic, I think it's better than any year we've seen.
So in the booth over the first two days, we had over 2,200 unique people coming through, and we actually had a little theater where over 750 people listened for 10 minutes, which is a long time to get people to do. So very engaged individuals, but also a lot of foot traffic, so it's great. So what kind of discussions are going on, Bina? People must be talking about, you know, some of the announcements, the future of VMware, how to solve various problems, hybrid cloud and so forth, but what's the practitioner discussion like? What are they asking? Yeah, yeah, a lot of discussions. As we talk to a lot of the clients, it's really around some of the business transformations, all of the dynamics going on in the industry and the space, new applications, new workloads, businesses looking to transform their environments. But really, the discussions are also around, you know, investment protection. They want to be able to take their existing environments, transform them, and be able to do rapid innovation and deploy new services, right? So the conversations have been focused on those topics, and really that's where our focus is: to help our clients take those existing environments and transform them to really adapt to some of the new applications. And when you think about the existing environments, right, there are typically the traditional environments, and those are often running some of the traditional workloads. Oftentimes they can be very optimized.
So things around virtualization, around data tiering, lifecycle management, those types of things are very much the topic of discussion, and then helping bridge to some of the new environments that are needed for some of the applications that are born on the cloud, those types of, you know, analytics that really require elastic, agile infrastructures so that developers can rapidly deploy new environments and deliver new applications very quickly. Of course, bridging those two environments is very important as well, managing that data holistically and protecting the environment. We've had a lot of discussion and conversations about hybrid environments as well. So that's kind of the genesis of the conversation. That's a good overview, and I love the investment protection comment, Ed, because that kind of brings me to what you guys are doing. I mean, you're all about protecting that investment, getting a return on the existing assets. So you're leveraging the existing infrastructure, might be snapshots and so on, to solve other problems. Maybe talk a little bit about how you're doing that in an IBM context. Sure, so we announced this week that we're integrating our copy data management onto a whole host of IBM's platforms, which is fantastic. Everything from their high-end flash arrays, the V9000, the whole Storwize line from the 3000 to the 7000, SVC, FlashCopy Manager. But what we do, just like in all environments, is we don't make you move your storage; we use your current investments to get a return on investment. You already have your storage and, you know, hypervisor. We simply add our software into the environment, we agentlessly discover the environment, and then we give you some key use cases. So in the IBM example, for their environments, we're able to come in there, again agentlessly, very simply, and give them three major things.
Lifecycle management, just an easier way to drop OPEX, driving the way you create copies and manage them through the lifecycle. The second thing is automation and orchestration. Everyone's looking for ways to drop their OPEX and get more agility, with things like doing daily DR. Again, don't change your storage, don't change your hypervisor, but we can give you daily DR, or refresh test dev as often as you do snapshots, or dev ops automation, giving the power to the app team to actually do these snapshots, or analytics. You know, how do you get near real-time data access for analytics? So those are the things we're able to provide, without changing out what you have. In fact, we seamlessly work on the environment and provide those types of capabilities. And then the third, which I was about to get to, is self-service. Organizations are trying to reduce their OPEX by doing things like templates for provisioning: allow the users to provision their own storage. We're able to set up, let's say, gold, silver, bronze types of storage templates for particular application groups, and based on role-based access, they can do that. So provisioning, but the more advanced users are usually saying, hey, I'll use the same role-based access and API to do things like DR, but let the app team do DR. Who's better to check DR than the app team themselves? Or with dev ops, the app team needs to drive the dev ops, and we provide that via API, again across the whole IBM portfolio of storage. So I wonder if we can unpack some of those. Sure. And Bina, you mentioned lifecycle management as well. So add some color to that discussion. What does that mean, lifecycle management, and how does that drop to the bottom line? Well, so the biggest thing about copies, right? Everyone talks about copies, we do copy management, and everyone has copy data. You create copies for good reasons, right? For test dev, DR, analytics. The thing is, too much of a good thing is a bad thing.
With all the tools you have, and all this heterogeneous complexity, it's easy to create copies but you never manage them. So what happens is, per an IDC study, if you have one copy of data, it ends up as more than 50. So what happens if you have lifecycle management, instead of a script that just creates copies you then can't manage? You bring us into the environment and we manage through the public APIs. It's the same process, but the platform is able to do lifecycle: plan, provision, leverage the copies for the automated use cases, but also end-of-life them and report on SLAs. So I set them up to do certain things. Local recovery points, how many of them? RPO, RTO, remote or in a hybrid cloud. But it's hard to know: are you actually meeting your SLAs? So that's the other thing you get from the platform. And again, all we're leveraging is the power the Storwize family already has. They do the snapshots, they know your applications. We're just using the public APIs. That's why we're completely agentless and don't change things. But the lifecycle is really where the problem is, and we're able to address it because we're actually in place, running on your production storage. Yeah, so let's talk a little bit about the portfolio, and I want to come back to automation and self-service, but the IBM portfolio has evolved and you guys now have a pretty solid lineup, from cradle to grave really. So maybe give us an overview of what we're looking at here in the portfolio, and maybe recent announcements too. Yeah, so you know, our portfolio ranges, as you said, from end to end, right? From a primary storage perspective, our traditional, optimized appliances around Storwize, XIV, DS8000, of course our tape and SAN offerings. But also, as we look to software-defined storage, earlier this year we introduced a complete set of software-defined storage offerings, our Spectrum portfolio, right?
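Stepping back to the lifecycle and SLA reporting Ed described a moment ago, here's a minimal sketch of the idea in Python. The names and policy fields are hypothetical illustrations, not Catalogic's actual API: a policy expires aged copies, caps the number of local recovery points, and reports whether the newest copy still satisfies the RPO.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Snapshot:
    name: str
    created: datetime

@dataclass
class CopyPolicy:
    rpo: timedelta        # max allowed age of the newest recovery point
    retention: timedelta  # copies older than this are end-of-lifed
    max_copies: int       # cap on local recovery points

def enforce(policy: CopyPolicy, snaps: list, now: datetime):
    """Return (keep, expire, sla_met) for one protected volume."""
    snaps = sorted(snaps, key=lambda s: s.created, reverse=True)
    # SLA check: is the newest recovery point inside the RPO window?
    sla_met = bool(snaps) and (now - snaps[0].created) <= policy.rpo
    keep, expire = [], []
    for i, s in enumerate(snaps):
        if i >= policy.max_copies or (now - s.created) > policy.retention:
            expire.append(s)  # end-of-life: reclaim the space
        else:
            keep.append(s)
    return keep, expire, sla_met
```

The point is the contrast Ed draws: a script only creates copies, while a lifecycle policy also retires them and can answer the "are we meeting our SLAs?" question.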
That includes not just the storage components, but also data protection and management, as well as archiving and some of the lifecycle management. And then we've got our strategy and focus around FlashSystem. As you know, we have the IBM FlashSystem that came through our TMS acquisition; we've built that out and now we have a complete family of FlashSystem offerings, right, the V9000 being the latest addition to the family. That's based, of course, on our SAN Volume Controller from a software perspective, as well as our tier-zero FlashSystem. So that's kind of the full spectrum of the portfolio. And from an announcements perspective, as you know, as we talked about when we were in Boston, we've had a number of key announcements this quarter: around our data protection, being able to leverage cloud as another storage tier, SoftLayer being one of the key cloud providers. We also introduced, on our Spectrum Virtualize offering, one of the things we discussed here at VMware, right, the SRM high availability stretch cluster support, and then of course the VVols offering. We also have our Spectrum Control Storage Insights, which is really a cloud-based service that allows you to get insights into and manage your on-premises storage infrastructure. You guys had a couple of sessions here. We did, we did. And on FlashSystem, right? Mike Kuhn talked a little bit yesterday about Project Capstone with VMware, HP, and IBM, and some of the performance benchmarks that were presented there. So I wonder if we could come back to your three: lifecycle management, automation, and self-service, Ed. Bina was talking a lot about flash just recently. That's a big part of the portfolio, a very high growth area. And there's a relationship between flash and copy data management, catalogs, et cetera, particularly as it relates to one of the use cases that you talked about, test dev.
Being able to give developers a more current, you know, production-currency level of data to work on. Can you talk about that a little bit? How you enable that, and what the business outcome is? Yeah, so without copy data management, what you do today is, if someone needs test dev, you create a copy, you move it into another environment, and you use a separate environment. But it's very hard to keep that environment up to date. What you have instead is: in city A you'll have a copy of production data and you're replicating it to city B, or building B, or the cluster over. You'll have an application-consistent copy of what production looked like a minute ago or an hour ago, and it's just sitting there waiting to be used. So we use the APIs of VMware, but also of the storage itself, and actually create copies. And you basically refresh test dev environments from that copy without impacting production. What you're able to do is keep that up to date all the time. You're able to refresh those mount points as often as you do snapshots, and you keep it up to date, so you're not working on old test dev data. And then dev ops is just the next step, right? So in these environments, the automation we provide is: we talk to VMware, we talk to the storage, we provide a fenced network environment, bring up so many servers in a particular order from the last good snapshot, bring it up, and they can use it. Dev ops is now: hit a button, you promote to production, you do another snap, and you do a cycle of development. So we either enhance test dev, getting quicker time using more recent data, but doing it in an automated fashion that doesn't kill the storage team, because we're using automation to do it. And just to clarify, when you said we create copies, you mean we as in IBM creates the copies through snapshots, and you get whatever granularity, you can be sure you've got very granular levels of timing, right? Yes, right.
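As a rough illustration of the automated refresh flow Ed just walked through, here's a hedged Python sketch. The step names are hypothetical stand-ins for the VMware and storage-array API calls, not Catalogic's actual interfaces: pick the newest application-consistent recovery point on the replica side, clone it without touching production, and bring the VMs up on a fenced network in dependency order.

```python
def refresh_plan(snapshots, vm_boot_order):
    """Build the ordered steps to refresh a fenced test/dev environment
    from the newest application-consistent replica-side snapshot.
    snapshots: list of (snapshot_id, unix_timestamp) pairs."""
    if not snapshots:
        raise ValueError("no recovery points available")
    # Newest copy wins: this is what keeps test dev off stale data
    snap_id, _ = max(snapshots, key=lambda s: s[1])
    steps = [
        ("clone_snapshot", snap_id),            # space-efficient, no production I/O
        ("create_fenced_network", "dev-fence"), # isolate duplicate IPs from prod
    ]
    for vm in vm_boot_order:                    # dependency order, e.g. DB before app tier
        steps.append(("power_on", vm))
    return steps
```

Because the plan is derived from whatever snapshot is newest at run time, re-running it daily (or as often as snapshots are taken) is what makes the "refresh test dev as often as you do snapshots" claim mechanical rather than manual.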
Okay, so I could take snaps every, I don't know, what's the minimum level of granularity, five minutes or 15 minutes? Yeah, it's as frequent as the clients need, and then we have the space-efficient snapshots, so we're only taking the changes that are needed from a snapshot perspective. Right, okay, and so then when my test dev team is done and they're deployed, what often happens is, well yes, they're space-efficient, but they add up and they stay there. Yeah. So you go back and say, okay, now it's back to the lifecycle piece: we clean them up. So either you're giving them the next mount point the next day, or whenever you want to do it, weekly. Because you're already up to date, because you have this mirror copy going, it's very easy to do the automation, but also very easy to tear it down without consuming resources. And that piece of it, and it gets me to the third one, that's self-service for the dev team? So a lot of this is available for the IT team, really the storage team, to put all these workflows together. You can do it ad hoc, you can do it scheduled: daily DR, test dev. But what you can do is, through role-based access and our REST API, you can literally give certain developers, a development team, a set of templates, you know, gold, silver, bronze, and allow them to actually do self-provisioning. But they can also call upon daily DR or test dev. The key thing is that with the REST API, they can only see what the storage administrator allowed them to see, but they can be very powerful. They can do their own dev ops, and that self-service helps the app team move faster, less frustrated with the storage team, while the storage team gets out of the mundane keeping-up-with-things so they can actually do their day job. It moves faster, but you also get control of cost, which is what the CIO wants.
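To sketch how that role-scoped self-service might look, here's a small Python illustration. The gold/silver/bronze template names come from the conversation; everything else (the grant structure, field names, and operations) is hypothetical, not Catalogic's actual REST API. The admin defines templates and grants; the service refuses any request outside a team's grant.

```python
# Admin-defined storage templates (illustrative attributes)
TEMPLATES = {
    "gold":   {"media": "flash",  "snapshots_per_day": 24, "replicated": True},
    "silver": {"media": "hybrid", "snapshots_per_day": 8,  "replicated": True},
    "bronze": {"media": "disk",   "snapshots_per_day": 1,  "replicated": False},
}

# Which templates and operations each role may invoke via the REST API
GRANTS = {
    "dev-team-a": {"templates": {"silver", "bronze"},
                   "ops": {"provision", "refresh_testdev"}},
    "app-dr":     {"templates": {"gold"},
                   "ops": {"provision", "run_dr"}},
}

def self_service(role: str, op: str, template: str):
    """Validate a self-service request against the admin's grants and
    expand it with the template's provisioning parameters."""
    grant = GRANTS.get(role)
    if not grant or op not in grant["ops"] or template not in grant["templates"]:
        raise PermissionError(f"{role} may not {op} with {template}")
    return {"op": op, "template": template, **TEMPLATES[template]}
```

This captures the key property Ed describes: developers can be "very powerful" (provisioning, test dev refresh, even DR) while seeing only what the storage administrator has exposed to them.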
Great, so there's a lot of talk at this event about hybrid cloud. VMware on day one gave some, you know, examples of hybrid cloud and laid out their vision of the future. Some of it's ready today, some of it's sort of futures. I'm sure we'll see announcements next year that relate to the kinds of things we saw today, but nonetheless they're putting forward that vision. I could see, Ed, your capability being important in a cloud environment; if I'm putting stuff up into the public cloud, I want to keep track of it there too. So, Bina, I want to ask you, when you talk to customers, you mentioned in the booth discussions that people are talking about hybrid cloud. What are they talking about when they talk about hybrid cloud? What are they actually doing? Yeah, yeah, you know, a lot of conversations around DR, a lot of conversations around clients really wanting flexible consumption models. So, you know, whether it's consuming storage as optimized appliances, or software-only to leverage existing infrastructure they might have available, all the way up to as a service in the cloud, right? And being able to offer solutions that provide each of those options. So for example, our XIV offering: today you can purchase it as an optimized appliance. That software is also available for clients to purchase software-only, to leverage on any x86 kind of environment, and it's also available as a service in SoftLayer, right? So having that flexibility in licensing and consumption models, to be able to leverage that consistent set of capabilities in different consumption models, is pretty powerful, right? It gives the clients the flexibility they need for different use cases. Now, just to clarify, so you support, I know, vast swaths of the portfolio, but not the entire thing. Do you support XIV now? Is that in there yet? Not XIV yet, but that's, okay, so that's roadmap. Roadmap.
But there's no technological reason. Not at all. In fact, we talked about it: we started in February with one storage platform supported, and now we have over nine. So our provider model allows us to quickly add these different platforms. Well, I'm kind of reminded, Ed, of when you were running Storwize. You know, they had that compression technology, and we had this discussion, we said, well, is there any reason that can't be applied to virtually anything? And, you know, remember, it was an appliance at the time? It really didn't have to be, and now it's sort of embedded. I would think that the technology you guys have developed has the same sort of application here, right? It won't be in an array, right? Because, right, compression's now in the V9000, it's in the whole SVC line, the Storwize line, now it's in XIV, so you're seeing it go across the portfolio. In this case, it's software-defined. IBM has a very strong software-defined portfolio that's there today. We're just another software-defined layer, focusing specifically on the different challenges around copy management. It's a perfect complement. We leverage their layer and their APIs to do things. So it's in place: you don't have to change anything, we're not changing the way you replicate, snapshot, or leverage any of the IBM portfolio; we're adding that automation and orchestration piece. And there's no, you said it several times, but just to corroborate, no agents. And that's a key thing. We don't make you change anything, we don't make you deploy agents, we're literally just talking to the public APIs. You download the software, you register your storage controllers and your vCenters, which you do once. After that, that's all we're doing. There's no data flowing through us, so there's no latency in a flash environment. Also, we don't make you move your storage onto a different platform to get these values; we do it in place, on your storage.
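A tiny sketch of that agentless, out-of-band model, with purely illustrative names (this is not Catalogic's software): endpoints are registered once, and every later operation is a control-plane call to the endpoint's own public API, so no agent is installed and no data ever flows through the manager.

```python
class CopyDataManager:
    """Hypothetical agentless controller: it holds endpoint registrations
    and builds control-plane REST calls; it never sits in the data path."""

    def __init__(self):
        self.endpoints = {}  # name -> (kind, base_url)

    def register(self, name, kind, base_url):
        # One-time step: register a vCenter or storage controller
        self.endpoints[name] = (kind, base_url)

    def snapshot_call(self, endpoint, volume):
        # The array itself performs the snapshot; we only issue the API
        # request, so a flash array's data path sees no added latency.
        kind, base_url = self.endpoints[endpoint]
        return f"POST {base_url}/api/{kind}/volumes/{volume}/snapshots"
```

The design choice being illustrated is the one Ed emphasizes: because the manager only speaks to public APIs out of band, it cannot slow down the storage it orchestrates.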
And if you have a copy data management problem, it's actually on your primary storage, so we solve that, but we also give you all the automation use cases that people associate with copy management. And you're not messing with my performance, right? In fact, for the flash team, you don't want anything in between. So we can't slow down a flash machine; we're literally talking to the APIs, which is critical. Right, it's programmatic, yep. And you know, to add to Ed's comments, as you know, with SVC's leadership in virtualization environments, the Storwize family and our V9000 are built off that software stack. And the architecture of it, you talked about Real-time Compression, right? When you virtualize heterogeneous storage, and we virtualize 300-plus IBM and non-IBM storage arrays, so with SAN Volume Controller virtualization, as you apply Real-time Compression, all of the storage pools underneath get the benefit. Similarly with copy data management: as you layer the copy data management on top of that software stack, all of the storage underneath gets that benefit as well. Awesome, all right, we have to leave it there. Thank you, Bina and Ed, for coming on theCUBE. Love the collaboration, solving some problems together. You know, you can't do it alone. As Carl Eschenbach said, you know, you're ready for any; probably alone, probably not, but together we can make it happen. So really appreciate you guys coming on, sharing your perspectives, thanks. Thank you. All right, keep right there, everybody, we'll be back after this brief word. We're live from Moscone Center. This is theCUBE.