Live from Las Vegas, it's theCUBE. Covering Edge 2016, brought to you by IBM. Here's your host, Dave Vellante. Back, this is IBM Edge. Check out ibmgo.com for all the major IBM events. You'll see tons of content flowing through there. This is theCUBE, the worldwide leader in live tech coverage. Bina Hallman is here, she's a vice president with the IBM Storage Division. Joined by Ken Barth, who's the CEO of Catalogic Software. Folks, welcome to theCUBE, it's great to see you. Thank you, likewise. It's great to be here. Oh, Dave, Dave, it's fabulous to be here. Thank you very much. My pleasure. So fresh off of VMworld, which really should be renamed Storageworld, according to Stu Miniman. So, Bina, let me start with you. You guys have been busy. This is our fifth Edge. Yes. The inaugural year was 2012. You've really come a long way. 5,500 people, expanded into systems. So give us the update, what's going on? Absolutely, yeah, this has been a wonderful Edge 2016. We've had a tremendous turnout and lots of clients. The sessions have been wonderful for storage. This is a fantastic event for us. We have lots of new announcements around our Flash and our software-defined storage offerings, and we're just enjoying the time talking with the clients and the analyst community and partners. And copy data management has been a hot topic this week. That's your wheelhouse, that's what you guys do. So this has been a great show for you guys. Give us the update on Catalogic. Oh my word, it's been a fantastic show. I mean, as I mentioned to Bina earlier, we couldn't ask for a more well-schooled crowd that really understands the issues of copy data management. And it's just been an exciting place to be. We're getting a lot of interest, a lot of traction. We're now covering application support with Oracle and SQL. We've extended into that. And as you know, Dave, and Bina certainly knows this, the world is awash in snapshots right now, right?
As a matter of fact, some of the stats from some of the analyst firms will say that up to 50% of your primary storage is now snapshots, right? So all we simply do is, we've been talking about it a long time in terms of copy data management, but all we simply do is give you a catalog of all those snapshots and then allow you to use it to solve business problems. And it's really starting to catch wind. So talk some more about the problem of copy creep, if you will. I mean, don't you sell a lot of storage when there are more copies? I mean... We do, but we're also in the business of helping clients solve their problems, right? And storage creep is a problem that clients have these days, especially around copy data. There are copies for dev test. There are copies for backup. There are copies floating around the environment. Really, clients are looking for a better way to manage and simplify their environment. That's what our whole approach is around storage and software-defined storage: helping clients optimize their existing environment, whether it's the copy data problem, or driving more efficiency and utilization through virtualization, or preparing for new-generation applications, right? And then being able to seamlessly manage, whether it's the copies or all of their data, across an on-prem and a cloud environment. Not only does it solve the problem of reducing the number of copies, but it's a management challenge as well. And that's what this is really about: copy data management, managing of that environment, both on-prem and your cloud environment. So it's visibility. I was sort of tongue in cheek about you selling a lot of copies, because as we all know, the storage market's elastic. If you solve a problem, you free up more space, you're going to sell more storage. It's always, always, always been the value, right? I mean, the price keeps going down, but you keep buying more, right? It keeps growing, yeah.
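The copy-sprawl stat Ken cites can be made concrete with a toy catalog: given snapshot records gathered from several arrays, sum how much primary capacity the copies consume. This is only an illustrative sketch; the record fields, array names, and numbers are invented, not Catalogic's actual data model.

```python
from dataclasses import dataclass

@dataclass
class Snapshot:
    array: str          # which storage array holds the copy
    source_volume: str  # the primary volume it was taken from
    size_gb: float      # space the snapshot actually consumes

def copy_overhead(snapshots, primary_gb):
    """Fraction of primary capacity consumed by snapshot copies."""
    return sum(s.size_gb for s in snapshots) / primary_gb

# Illustrative catalog gathered from two (hypothetical) arrays
catalog = [
    Snapshot("A9000-1", "oracle_prod", 400.0),
    Snapshot("A9000-1", "oracle_prod", 350.0),
    Snapshot("XIV-2", "sql_dev", 250.0),
]
print(f"{copy_overhead(catalog, 2000.0):.0%} of primary capacity is snapshots")
```

In this made-up example, 1,000 GB of snapshots against 2,000 GB of primary puts the environment right at the 50% figure the analyst firms describe.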
So Ken, talk a little bit about the relationship that you guys have with IBM and the platform that you guys develop, or the technology that you're developing. So first of all, let me say this. What Bina said about how they approach the situation, they look for value. And it was just absolutely wonderful to hear in the keynote the other day how they talked about, you know, by using a product like ours, our copy data management product, it really brings more value to the customer. They weren't really focused on selling more storage per se. They were interested in solving a customer's problem, right? So let me set that aside for a second and tell you the way we approach the copy data management problem, Dave. And actually, copy data management, I call it snapshot management on steroids, and my guys get really angry at me for saying that. But in essence, you know, taking copies of data on different platforms has been around for a long, long time. And what we saw a couple of years ago at Catalogic is we saw this evolving, that at some point in time, this is going to be like the third wave in data protection, right? That the more you can begin to manage your snapshots, or your copies, if you will, it can really be that next wave and become a more efficient way to restore applications, a more efficient way to run business processes like test dev and business analytics, and really a more efficient way that can drive CapEx and OpEx savings. So why is it more efficient? Because correct me if I'm wrong, but catalogs exist today, they're just locked in different stovepipes of infrastructure. Is that right? And so what you do is you surface that. We kind of mine, we mine it. So as an example, we work closely with the Flash team, with Bina; we support their A9000, the XIV, you know. Just say Spectrum. Spectrum, yeah, Spectrum, yeah, Spectrum Accelerate, Spectrum Virtualize, yeah, Spectrum this, Spectrum that.
We support all of those, and so we're able to pull all that information together in one consolidated view, if you will. And then once you have that, you can now serve it out to the business teams. Right? And then from a FlashSystem perspective, from a DevOps perspective, Flash is an ideal platform to be able to do the snapshots very quickly, in near real time, and to allow self-service provisioning for developers to get the near-real-time view of the information they need. That's pretty powerful versus, you know, weeks of putting in the request and getting the information they need. Well, Flash is kind of like, when virtualization came out, it was so easy to spin up VMs. Right, everybody started spinning up VMs. Flash makes it even easier to create space-efficient copies that, even though they're space efficient, still take up space. But more importantly, they're just out there. So you've got work in process and you're not able to keep track of them. So what's the relationship between Flash and what you guys are doing with catalogs and copy data management? Well, the way I look at it, if you take Flash, Flash storage can save people a ton of time in terms of efficiencies and make things faster. But if you add a copy data management platform to it, particularly our ECX, which I call in-place copy data management because all we do is leverage the fantastic abilities of the existing Flash vendor, like Bina's boxes, if you take that, then really you have a geometric savings in both CapEx and OpEx, because you can start creating efficiencies, not only identifying unused storage or snaps that have been made that you don't need anymore. You can start cleaning up the storage. As Bina was saying, you start offering those views into your test dev team, as an example. And if you have a self-service portal like we do, they can start pulling those copies themselves.
So you start having some OpEx efficiencies here. So I feel like from a savings perspective, if you add Flash plus a copy data management platform, you really complete the equation and you get to a huge amount of savings across the board. It's almost geometric. Well, the other value piece that we've talked about, David Floyer spent a lot of time talking about this, is you can serve so many more copies out of Flash, and share them, than you can with spinning disk, just because of the bandwidth capabilities, and not suffer latency. And as a result, that means that the devs are going to be much more productive, because they're going to be working on much more current data, as opposed to n minus one, n minus two, n minus n. Okay, have you seen that start to take effect? Absolutely, I'm going to say devs as well, but also you can look at guys that own applications, or ladies that own applications, that have to come back and do a restore. If they have a self-service interface, then you're right there. I mean, it's really kind of this whole modernization of your IT that's happening here, right? And it's a way to kind of stair-step into that. And I was just going to say, by partnering and working closely with IBM, if you look at what our value brings to their Flash offering, and look at some of the competitors, you've got to have four tools in the case of, say, an EMC; I mean, NetApp has probably got 10 to 12 tools out there, but all of this is contained in one, so it's really a very powerful solution out there. So, Bina, you bring the stack, the storage services, the copy services, and Ken, you bring the visibility, is that right? And you marry those together, it's software. So we'll talk about the integration. How difficult was that? I mean, it doesn't just happen overnight, you guys have been working together for a while. But listen, I think they were fantastic to work with, an unbelievable partner.
They gave us access to the boxes. We've been working hard for the last two years on our development framework, if you will, and when you do something like that, you have to make sure you have a series of APIs that are easy to work with. So we've been working hard on that, so that now we can support many, many storage and platform vendors, and you'll start seeing that come as we go forward in the year, right? But these guys have been a tremendous partner. So you surface APIs, and that's how you integrate? The storage software surfaces APIs, and the Catalogic software leverages those APIs to get visibility into the snapshots and help us manage those. Yeah, I didn't mean to interrupt, but we also have a series of APIs that allow you to further tie that in to other applications like Oracle, or you could tie it in to Chef, Puppet, these kinds of things, to kind of complete the workflow. So if you think about it, when you marry the storage to our copy data management platform, the storage now becomes an integral part of the application delivery, the transaction delivery, whatever they're trying to do. What's the relationship between sort of this topic, the copy data management, visibility, and hybrid cloud? Is there one? Yeah, of course there is. Clients have copies, they have copies on-prem. We have capabilities that we developed to enable movement of data from on-prem into the cloud environment. Well, when there's data in the cloud and data on-prem, clients have the need to get visibility. They need to have the view of their entire environment, instead of managing this in silos, to really be efficient with that management and have the seamless flow of data to and from the cloud. So there's a very key tie to hybrid environments here. So this was announced at the show, or was it previously announced? Yeah, we did announce IBM Spectrum Copy Data Management, which will be available in the fourth quarter.
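The integration pattern described here (the storage software surfaces APIs, the copy-data-management layer consumes them and builds one consolidated view) could be sketched roughly as below. Everything is hypothetical: the field names, array names, and the `list_snapshots` callable stand in for whatever vendor API a real integration would call; this is not IBM's or Catalogic's actual interface.

```python
def build_catalog(arrays, list_snapshots):
    """Consolidate snapshot metadata from many arrays into one view.

    `list_snapshots(array)` stands in for a storage-side API call
    (e.g., a REST GET against the array's management endpoint).
    """
    catalog = {}
    for array in arrays:
        for snap in list_snapshots(array):
            # Group every copy by its source volume, across all arrays
            catalog.setdefault(snap["source"], []).append(
                {"array": array, "id": snap["id"], "created": snap["created"]}
            )
    return catalog

# Stand-in responses; a real integration would call each vendor's API
fake_api = {
    "A9000-1": [{"id": "s1", "source": "oracle_prod", "created": "2016-09-01"}],
    "XIV-2":   [{"id": "s2", "source": "oracle_prod", "created": "2016-09-02"}],
}
view = build_catalog(fake_api, lambda a: fake_api[a])
print(len(view["oracle_prod"]))  # both copies of oracle_prod, across two arrays
```

The point of the shape is the one made in the conversation: once the per-array API responses are merged into a single catalog keyed by source volume, the "stovepipes" disappear and downstream tooling (self-service portals, Chef/Puppet hooks) can work against one view.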
Okay, so available fourth quarter. And what's the licensing model? How does that work? It's like our Spectrum Storage family. It's per-terabyte pricing. And the Spectrum Copy Data Management is applicable to many of our platforms. As you know, we deliver that software software-only, but also as part of our integrated solutions. So in many aspects, whether it's a FlashSystem A9000 or Storwize, in those cases, this IBM Spectrum Copy Data Management will be part of those systems. What are the choices that customers have? So I've got a set of, as I said before, in-appliance, stovepipe catalogs. Okay, that hasn't worked. I mean, it's okay, but it doesn't scale. I've got this solution. What other choices do I have? I can buy an entire platform, rip and replace my copy services. You can add another layer, right? So there are other vendors out there that sell another layer, Dave. And what they're really saying to you is, you've already kind of picked your primary storage, so why go outside? They'll sell you another storage box, which is another utility, and then you'll have to lay another fabric, training for your staff. It just doesn't make a lot of sense, right? And now you're moving a copy. You're moving a master; if you think of it, on the IBM Flash you've got a master copy over here. Why move that to another place? Where's your master now? Does that make sense? It just causes confusion in the organization. Yes, I don't have to, I mean, as an example, we had Greg on from a hospital in LA, and he was talking about dropping in a FlashSystem. That was a VersaStack with Flash. Used the SVC, didn't have to change anything. Didn't have to change any processes. Exactly right. I mean, when you think about some of the recent successes, look at Data Domain. I mean, they're so successful because you could just drop it in and use those processes, and you didn't have to, and so many companies tried to do a rip and replace.
And weren't successful. So one of the value props, or I guess lack of friction, is I can use my existing processes, right? Yeah, absolutely. So when I install the Spectrum Copy Data Management, what else, what do I see as a customer? What new dashboards do I get? What does it all look like? What's the experience like? It's a downloadable install, right? Very easy. You should have it up and running, in its basic form, in less than an hour. No agents? No agents. It's agentless. Once you have it installed, I mean, immediately it'll go out and start looking at the snapshots that are out there for you. So you'd be presented with a catalog, and then there's a workflow engine where you can start setting up your workflows, right? Whether it's a test dev issue you're trying to solve, or a replication issue, or you're trying to do a global mirror situation to keep things, you know, synced up, I mean, it's just all right there. So it's pretty easy to set up. Will customers actually start deleting copies, do you think? Or will they just sort of leave them there and worry about managing them in the future? What are your thoughts on that? I think there'll be a little bit of both. I think there'll be environments where they actually do want to drive those efficiencies and do some level of cleanup. But there will be much more focus on the future, of course. If you can keep an environment more efficient as you go forward, then that's always beneficial. Okay, so, and how much of that can I automate going forward? With our product, you can automate it end to end. And I want to offer one other thing. They can now identify, they have the visibility, as you said earlier, Dave, to now be able to figure out which snapshots to clean up; up until now, they really haven't had that kind of visibility across the board. So you can define policies? Absolutely. And how does that work? Is that through? It's through our product. It's through ECX, Catalogic ECX. Again, it's very much a GUI-driven ability to define policies.
Very easy to use tool. Very easy to use. So take me through an example. Take me through the anatomy of a policy: creating data, creating a copy. What do I do next? So let's use test dev as an example, right? So you're the infrastructure, right? I'm your test dev customer internally. Right now, if I need a copy, I've got to email you and ask you for it, and you're going to send me a copy. And immediately once you send it to me, that thing's going to be out of date, because people are constantly updating that database over there, right? And so the policy might be, hey, once this database updates, let's update this. You know, let's give Ken over here and his test dev team the latest copy of that data, right? And oh, by the way, I know that project's going to be done a week from today. Another policy might be to delete it once it's over, right? Another policy might be, let's move it up to the cloud, right? Another policy might be that once he completes his testing, let's record that in Puppet and Chef, or wherever. So instead of having to send that email, they can go do more of a self-serve. They do it as self-serve. It's all set up by them. So self-serve, self-serve. I've got a policy engine. Absolutely. It'll get rid of my work in process if I don't want it to run anymore. It's all programmatic. Right. Again, the storage becomes central. Your primary storage becomes central to solving the business problem you're trying to solve, and it works with it. And you're getting total visibility into the pieces that you need. How big is this business? Wow. I mean, is it, obviously, a subset of this $50 billion storage business? Is it the kind of thing where you're selling aspirin for storage and you really don't know how big it's going to be, or is it just sort of subsumed into the whole storage flow? I think it's a necessary part of any reasonably good-sized business.
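The policies Ken walks through (refresh a test dev copy when the source updates, delete it when the project ends, tier it to cloud when testing completes) could be modeled as trigger/action pairs in a tiny rules engine. The names, event types, and action strings below are purely illustrative, not how ECX actually represents policies.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Policy:
    name: str
    trigger: Callable[[dict], bool]  # predicate over a catalog event
    action: Callable[[dict], str]    # what to do with the copy

policies = [
    Policy("refresh-testdev",
           lambda e: e["type"] == "source_updated",
           lambda e: f"refresh test dev copy of {e['volume']}"),
    Policy("expire-after-project",
           lambda e: e["type"] == "project_done",
           lambda e: f"delete copies of {e['volume']}"),
    Policy("tier-to-cloud",
           lambda e: e["type"] == "testing_complete",
           lambda e: f"move copy of {e['volume']} to cloud"),
]

def dispatch(event):
    """Run every policy whose trigger matches the incoming event."""
    return [p.action(event) for p in policies if p.trigger(event)]

print(dispatch({"type": "source_updated", "volume": "oracle_prod"}))
```

Each database update, project milestone, or test completion becomes an event, and the engine fires the matching actions automatically, which is the "self-serve, programmatic" flow the conversation describes replacing the email request.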
I would say to you that certainly any Fortune 5000 company, I would think, needs this kind of thing. On the test dev requirement that we were just talking about, I would tell you any company that has 90 developers needs this. And how do you guys look at it? It just makes your products more competitive, right? Well, it does, it provides the value, right? We're always looking to provide more value to the client. And we do think, while the problem has been out there, solving that problem is something that not many have done. And for us, this solution enables us to help clients solve that problem. And I think you'll see the adoption of it increasing more and more, because the problem will continue to get worse if left on its own. So EMC made some announcements at EMC World, right? How would you compare what you're doing with what they announced at EMC World? So I think the product you're talking about is their eCDM, right? And eCDM is very basic. I think it does some of this, but it doesn't come near giving you the ability to tie in. It doesn't have near the workflow engine, right? It's more of a point solution, in my opinion, right? And the value we see here, especially when you combine our capabilities with copy data management, if you think about it, EMC's approach may be more of a box-related kind of approach, whereas when you combine it with our Spectrum Virtualize, which has heterogeneous virtualization of over 300 storage arrays, now you take that in a client environment and give the client visibility across all of that environment. That's pretty powerful versus a "let me give you visibility on this particular solution" kind of approach. You're the silo buster in that equation. That's great. It's a stovepipe buster, as you were saying earlier, right? Great. All right, I'll give you guys the final word. Ken, you first, then Bina. Just a bumper sticker on Edge 2016, what's the tagline? Oh my goodness. Great learning experience, wonderful customer base.
Yeah. It's been fabulous. Fabulous turnout and lots of interest, lots of exciting announcements here. Wonderful. Excellent. Well, thank you guys for coming to theCUBE. Thank you. Appreciate it. Trying to give our audience visibility on what's happening at Edge, extracting the signal from the noise. Keep right there, everybody. We'll be back with our next guest right after this word.