This is Dave Vellante. We're here at Edge with my co-host, Stu Miniman. We're both at Wikibon.org. This is theCUBE, SiliconANGLE's continuous production, live at Edge. We had to call a little audible. Sometimes customers can't do live. They need, it's kind of an on-demand review by their PR departments before they can actually release the video, and that's just not how theCUBE rolls. So Erik Eyberg is here, and I'm very excited that he's here. He's formerly with TMS, now with IBM FlashSystem; he came to IBM as part of the Texas Memory Systems acquisition. Erik is technical by background, but he's in a business development and worldwide strategy role at IBM. Erik, I'm thrilled that you're able to be in theCUBE. Welcome. Yeah, thanks very much, Dave. Good to be here. So we've talked in the past. You guys briefed us when you were with TMS, and then we spent some time together at the announcement on April 11th in New York City. Big day for IBM and TMS. You saw the baby grow up, a teenager, and now it's coming into adulthood. Tell us, what's happened since you made the announcement that you're investing a billion dollars in flash and had that big coming-out party in April? Right, so we had the real big coming-out party, of course, and that was really the formal launch of the FlashSystem products, as well as the commitment to invest in the ecosystem. Since then, we've been steadily growing the team. We have our dedicated flash sales and pre-sales organization, and we've been steadily adding engineers. In terms of any real big announcements, obviously there's nothing to report at this stage, but steady growth, lots of interesting things happening on the engineering side, and of course we're growing our collateral, growing all the things we need to effectively bring the product to market.
Now, one of the things that I learned, because I hosted a panel for Steve Mills that week, in talking to the customers both on the panel and at dinner the night before, is that many of them, not all of them, but generally many of them, are putting the IBM FlashSystem product behind SVC. And you and I had a discussion about this. I was on Twitter the other day asking kind of Columbo questions about it, and somebody from IBM was kind enough to send me some technical documentation, it was actually a Redpaper, not a Redbook, so I've been squinting through that. But let me just ask you directly. It's common that customers will leverage the SVC stack and put TMS behind that, and some of the customers that we talked to, like Kareem Abdullah, I believe, is one of them, put tier one storage behind the SVC. But conceptually, you would think that adds a layer that injects overhead into the system. Does it, why or why not, and does it matter? So, of course, anything in the data path adds some degree of overhead, but the question is, what benefits would you get out of that feature and functionality piece in the stack? What we see is that the traditional enterprise storage customers want the snaps, want the dedupe, or, I'm sorry, the compression, want the replication, all of that, and they can't get it from the application layer, whereas we think lots of the tier one apps have it already in software somewhere else. But if they need that in the stack, then we'll deliver that with SVC, and we'll do that without compromising the performance of the FlashSystem in a significant way. Does it inject overhead? Yes. It's about 100 microseconds of additional latency in the base use case with SVC.
But for that price of 100 microseconds, you get basically all the enterprise features of a tier one array. So you have the performance, or at least most of it, and then you have the feature functionality. That's a winning combination for many of our clients. Well, even better performance than the tier one array. Absolutely. You get the performance of the FlashSystem, essentially, without compromising on the features. It's a question we often get asked, and that's why we've been very aggressive in developing bundled solutions that incorporate not only the SAN Volume Controller technology but also the FlashSystem technology as an optimized tier one storage solution. Erik, I'm wondering if we can also talk, does SVC also help with the scalability of the architecture? Because when we've been looking at the flash solutions out there, obviously performance is the first thing that people talk about. Second is that feature set, where most of the solutions have been trying to catch up to the traditional tier ones. And then it's really how the scalability works. So how does SVC impact that? Can you walk us through a little bit of how TMS scales? Sure. So the FlashSystem box is designed to be basically a modular device that you would just deploy in increments of one unit. Then if you need capacity aggregation or if you need pooling, that's where things like SAN Volume Controller come into play. SVC scales up to an eight-node cluster, so you've got very large sets of compute resources available. That's where you would get that single shared namespace if necessary. Now, I say if necessary because it turns out that many of the clients who are deploying this technology for the tactical type of deployment, where we're fixing a specific problem, don't need more than 20 terabytes of capacity.
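To put that 100-microsecond figure in context, here's a quick back-of-the-envelope sketch. Only the SVC overhead comes from the interview; the base flash and disk latencies are illustrative assumptions, not IBM specifications.

```python
# Back-of-the-envelope latency math for putting flash behind a
# virtualization layer. Only SVC_OVERHEAD_US is from the interview;
# the other figures are assumed for illustration.
BASE_FLASH_LATENCY_US = 200   # assumed all-flash read latency
SVC_OVERHEAD_US = 100         # additional latency quoted for SVC
DISK_LATENCY_US = 5_000       # assumed spinning-disk array latency

def effective_latency_us(base_us, overhead_us):
    """Total latency once an extra layer sits in the data path."""
    return base_us + overhead_us

with_svc = effective_latency_us(BASE_FLASH_LATENCY_US, SVC_OVERHEAD_US)
print(with_svc)                               # 300
print(round(DISK_LATENCY_US / with_svc, 1))   # 16.7
```

Under these assumed numbers, the SVC layer adds 50% to the flash latency but the combination is still more than an order of magnitude faster than the disk array it replaces, which is the trade-off being described.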
It's only for the large-scale, more strategic deployments, where we're talking about this as a replacement for the tier one disk, that we bring in things like SVC, typically to get the enterprise features and functionality that the storage team is used to dealing with, as opposed to the application-owner type of sale, where we're solving the tactical problem. So within the FlashSystem, we typically deliver all the scalability the client is interested in, but if we do need to go further than a single FlashSystem or a stack of independent FlashSystems, that's where SVC comes into play. And we can deliver an SVC solution that can scale to extremely large amounts of capacity, extremely large amounts of IO, all while maintaining FlashSystem as the performance piece of that solution. And you get the non-disruptive capabilities that come with SVC as well. Stu, you have a follow-up, because I want to come back. Yeah, just talking about how FlashSystem fits in with the rest of the stack, has FlashSystem been integrated at all, or have you seen customers deploying it with any of the PureSystems family, whether that be the Flex System or one of the other parts of the family? I mean, with the Flex System architecture, you can get a Storwize V7000 node there, and that has the external virtualization capabilities. So that is a deployment scenario that we see in increasing numbers with FlashSystem. If we look at it, though, most of the FlashSystem devices are getting into environments where there may not be much of an IBM footprint at all. And so if there happens to be SVC already there, it's obviously an easy fit. If the client needs the enterprise features and functionality, again, we'll bring in FlashSystem plus SVC.
And then if we can get that functionality from elsewhere, in the app, or that functionality isn't required, then we'll just bring in that little 1U FlashSystem device, and that turbocharges their existing infrastructure as well. So lots of different ways to deploy this technology. So I love these discussions with IBM because you guys do a great job with very transparent benchmarks. And you're honest, okay? You tell us 100 microseconds of overhead. Okay, so as a customer, you can weigh the trade-offs, just as you did, of that overhead. Great. I want to unpack that a little bit more. First of all, I want to talk about another use case you mentioned, which is when the application is providing the storage services. So I'm going to take an example, let's say, of Oracle. You might have recovery services or other snapshot capabilities within Oracle. Or certainly Microsoft could do that, although sometimes IP-based replication scares some people. DB2 similarly can offer some of those capabilities. So in that instance, I basically drop in a TMS system, I'm relying on the application to provide those storage management capabilities, and then I get back that 100 microseconds. Yeah. Yeah, so that's one very valid way of doing it. It's a very common way of doing it, especially for the clients who were with TMS prior to the acquisition, although we did have some SVC certification even before we got acquired. What we see there is that, especially for, let's say, Oracle, like you mentioned, a one-line change to an Oracle configuration file can get you to the point where you have the FlashSystem as a copy of a device on your traditional storage array. We call that a preferred-read mirror arrangement, where you're reading from and writing to the FlashSystem, but only writing to the traditional array.
What that gives us is an instant speed-up on the reads; you're writing to cache, so typically the writes aren't really the bottleneck anyway, and you've only added redundancy to the architecture. So by changing something at the application level very easily, you take advantage of many of the features of the FlashSystem. So when you say you're writing to cache, you're writing to the persistent cache? To the disk array cache. So you'd be writing to both the FlashSystem, which has its own internal persistence model, and your traditional array. You still get all your BCVs, all your other features and functionality for DR, in your traditional array, but you've got this extra copy on the FlashSystem that we happen to be directing all the reads from. So the write is complete upon the write to the cache. We never want to get into a scenario where you lose part of your data because it's partially written. David Floyer is floating around here somewhere, but I know if he were here he would ask you, and I know you don't want to talk about specific competitors, and that's fair, but let's talk conceptually. This notion of using flash as a read cache while writing to a traditional storage cache, Floyer would say, is not as advantageous as writing to an all-flash array, because of the overhead associated with destaging that cache and then getting a complete write, since it's got to asynchronously trickle down to the spinning disk. Can you talk about, just competitively, how you stack up against that type of scenario from a performance standpoint? So in many cases we would essentially be doing a deterministic version of what you just described in that preferred-read mirror scenario, where we're waiting on the consistency coming from the array. We're synchronously writing to both the FlashSystem and the array.
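For Oracle specifically, the "one-line change" described here maps naturally onto ASM's preferred-read failure groups. A hypothetical sketch, assuming a disk group mirrored across a flash failure group and a disk failure group; the disk group and failure group names are invented for illustration, not taken from the interview:

```sql
-- Hypothetical sketch of a preferred-read mirror in Oracle ASM.
-- Assume disk group DATA is mirrored across two failure groups:
-- FLASH (FlashSystem LUNs) and ARRAY (the traditional disk array).
-- Writes go synchronously to both copies; this one parameter steers
-- reads to the flash copy.
ALTER SYSTEM SET ASM_PREFERRED_READ_FAILURE_GROUPS = 'DATA.FLASH'
  SCOPE=BOTH SID='+ASM1';
```

This matches the behavior described: reads land on flash, writes are mirrored synchronously to both the flash and the traditional array, so the array keeps its DR features intact.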
And so that gets us the benefits of caching, except now we know for a fact that all of that data is going to be on the FlashSystem, so it's all going to be fast. So that's one; it's not a technology that's specifically linked to the FlashSystem. From a competitive position, we can use certain software to accomplish that caching with FlashSystem just like we could with any other storage device. Okay, so now how do you compete with other all-flash arrays that might have varying degrees of maturity in their stack? I mean, many of these are scale-out, so shared-nothing, immature stacks. Some of them, like Violin for instance, OEM the stack, or many parts of the stack, from Symantec. Others, look at XtremIO, are building out their own. And there are dozens of others; I'm just naming a few for the audience. But thinking about an all-flash array, how do you guys stack up there, why do people choose you, and what do you emphasize with customers in terms of your differentiation? Sure, so it all starts with the hardware, and I know we don't like to talk so much about the hardware architecture. As part of IBM, there's a lot of emphasis on the software as well, but what TMS did for over 30 years was design this purpose-built piece of hardware totally around performance, reliability, and efficiency. Those are the three key advantages that we bring to the table, and the reason we have them is because of that original hardware design. So that's something where there's only one competitor that comes close in terms of having the purpose-built hardware, on which you then layer the software. And the software brings IBM yet another advantage, because we have this time-tested, proven, feature-complete SVC stack that's been in mainstream enterprise deployments for years.
And we have this great integration that happens between the two of them, so you get the hardware part, you get the software part, and it's all integrated together in nice packages, which we'll be discussing even more as time continues to pass. Okay, now you guys are using SLC, correct? Well, we have a choice between SLC and eMLC, and the focus is really on eMLC now; it tends to be the most cost-effective approach for many enterprises. It turns out that if you're a big enterprise and you're doing your traditional enterprise apps, typically you actually aren't writing as much as you think you are, and so eMLC has better value in those kinds of... Today you're not doing... A lot of competitors will focus on things like native deduplication and compression. You're not doing that today, so sometimes you get beat up on price, on cost, by the competitors. How do you address that concern? So there are several different dimensions to that. First of all, with regard to dedupe and compression, we do do compression. That's the IBM Real-time Compression engine, which is available with the SAN Volume Controller. And V7000, exactly. So we get that checkbox. Dedupe is something we're addressing, and we've found that it's really of kind of limited use, except in some very specific instances like VDI. But we've found that we're able to create architectures using things like linked clones, which give you basically the same benefits, and you still take advantage of the raw performance of the FlashSystem. So at any rate, that's one part of it. And with Real-time Compression, of course, you can get 5x compression ratios on the right data with that existing technology, so we do get that cost savings for many workloads. Very little performance compromise, arguably no performance compromise.
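As a rough sketch of why that ratio matters economically: assuming an illustrative $10/GB flash price (an invented figure, not an IBM list price) and the 5x ratio cited above, the arithmetic works out as follows.

```python
# Effective cost per usable GB when compression applies.
# The $/GB figure is an assumed illustration; the 5x ratio is the
# figure quoted in the interview for compressible data.
raw_price_per_gb = 10.0   # assumed all-flash street price, $/GB
compression_ratio = 5.0   # "5x compression ratios on the right data"

effective_price_per_gb = raw_price_per_gb / compression_ratio
print(effective_price_per_gb)   # 2.0
```

Under those assumptions, compressible workloads see flash at an effective $2/GB, which is the cost argument being made against competitors with native dedupe.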
But let's come back to the hardware, though, because we have advantages in that we don't use SSDs; we don't rely on an OEM component, which has its own margin, to build our system around. We're buying flash chips directly from the vendors and integrating them onto our own modules. And so that gives us actual cost advantages, which can translate to price advantages for the same type of technology. Excellent. All right, so negotiate hard. Erik, that really... Do we have time for one more question? So Erik, I was just wondering if you could share with us, what's the experience been coming into IBM, and how are the various flash solutions that IBM is putting together working together? Give us a little insight there. Sure, so it's obviously been a great journey as we scale up from a 100-person company to a 438,000-person company. Got a few more faces to learn. The integration, it's been wonderful seeing IBM kind of come together as a team and rally around this technology. You probably saw that in the opening keynote. There's a lot of emphasis on flash in the entire stack as a transformational technology, and it's great watching these pieces fall into place. So you're going to see a lot more in the areas of flash development, and it's not just going to be about the FlashSystem box; it's going to be about flash in various solutions. So it's been very exciting. All right, Erik Eyberg, we're out of time. I really appreciate you coming by. You're a tech athlete, and I'm really thrilled that we could have you on theCUBE. So really appreciate you making time. Kareem Abdullah is up next. He's an IT practitioner with Sprint, and we're looking forward to hearing how he's applying technology to create a business capability. So keep it right there. This is theCUBE. We'll be right back in this room.