from the SiliconANGLE Media Office in Boston, Massachusetts. It's theCUBE. Now, here's your host, Dave Vellante. Welcome to this special presentation of theCUBE. This is CUBE Conversations. We're going to have a conversation with executives from Infinidat about the 3.0 announcement. Randy Arsenault is here. He's the chief marketing officer and he's joined by Brian Carmody, who's the CTO of Infinidat. Gents, good to see you again. Thanks, Dave, good to see you. So, unfortunately, I missed VMworld, but give us the update on the business. What happened at VMworld? I heard it was a great show for you guys. VMworld missed you too, Dave. I have to say, it wasn't the same without you. Yeah, so VMworld was a phenomenal show for us. I mean, it's the biggest show we do every year. This year was no exception. We've continued to have a larger and larger impact, I think, each year. And again, this year, we continued that trend. So, we had a lot of great activity in the booth. We did some segments on theCUBE, which were phenomenal. We had customer presentations, which were well attended. So, the message really resonated. We had a ton of inbound interest from alliance partners, both ones we're currently working with and others who want to partner with us. So, overall, it was phenomenal. I was really, really happy with the results of the show. So, we went into the show with a provocative message: faster than flash, easy on cash. Our intention was to kick the hornet's nest a little bit, shake off some of the cobwebs around this whole idea of the all-flash data center and all-flash arrays dominating the market, and tell a little bit more circumspect and, we believe, more intellectually honest story. And I think we achieved that. We'll talk more about that at the end. There were some interesting things that came out of it. But, by and large, the show was phenomenally successful. We'll be in Barcelona in a couple of weeks.
That's shaping up really well, too. So, great show. And how about a general business update? I mean, you're a privately held company, but what can you share with us? So, business is rolling. We just closed our third quarter. I can't, obviously, give numbers, but I will say we have had a sustained year-over-year doubling each quarter for the last three years running. So, we are continuing to grow at a rapid but manageable and sustainable pace, right? In our line of work, at the size our company is, we actually have to control the growth a little bit. We have to resist the temptation to just completely explode the growth and overinvest in the wrong areas. So, we're pretty disciplined, but we're still able to achieve those results. We just had a record quarter. So, the business is going fantastic. We're really finding that we're selling into our largest customers, so we're getting a lot of repeat business, and that continues to be a very strong and growing portion of our revenue portfolio. The service provider community is becoming a huge segment of our customer base as well. We'll talk about that in the context of version three. So, business is very healthy. Any metrics you can give us? Headcount, growth rate? Yeah, so, we are now pushing up against about 400 employees. We expect to be around 400 by the end of the year. I'm pleased to say we now have over an exabyte of production capacity in the field worldwide. So, I think when we spoke at VMworld last year, when we announced our 2.0 release, I think we were around 225, 230. So, we're now more than double that. So, we're over an exabyte of production storage. Sorry guys, it's half an exabyte. Sorry, half an exabyte, I'm sorry. Yes, thank you for correcting me. Oh, okay. Yeah. All right. Brian keeps me honest. Yeah, we'll get to the exabytes soon enough. All right, it was just about a year ago, almost exactly a year ago, you guys announced 2.0, Brian.
So, first of all, how'd that go, and take us into 3.0? Yeah, 2.0 was a monster release. It was just exactly a year ago that we made that generally available, and that included support for NFS in production. So, it was our first non-block protocol support in production, which was awesome. Support for asynchronous replication between systems. And so, exactly a year later, we are getting ready to announce version three of our InfiniBox system software. And again, it's a monster release and there's a ton of features built in, but there are three anchor features which are the core of the value prop of this release for our customers. The first is support for inline compression. The second is a major update to our iSCSI implementation, which optimizes it for managed service providers and large private clouds. And the third is a really, really cool feature, the nerdiest of them all, called performance analytics, which is a very sophisticated performance analysis platform for storage operational data. Okay, well, let's break those down. So, compression, data reduction obviously took the industry by storm a while ago, but it really never hit the storage business until flash, because of the performance implications. How are you dealing with that? Tell us more about your algorithms and your secret sauce. So, first off, there are two reasons for implementing data reduction technologies: because you have to, because you wouldn't have a viable, sellable product otherwise, or because you want to and because you can. The former is why purpose-built backup appliances finally took hold; they had to, otherwise it would have been too expensive. That's number one. And number two would be systems that store the primary copy of data on solid state. The only way you can get things like NAND flash and 3D XPoint down to a price point where it's viable is to implement data reduction technologies. Okay, so did you have to or did you want to?
We want to. So, about a year ago, Moshe challenged the team to develop a compression algorithm that allowed us to store more data on the box than the physical capacity, but to do so without impacting latency and performance in any way whatsoever. And, you know, we had the luxury of taking our time and getting this right the first time. We already had an extreme price advantage over incumbent storage technologies. So, you know, this was an opportunity for us to get out in front, innovate, disrupt our own business, and extend that pricing advantage even further. So, it's a pretty cool implementation. The way it works is, when a piece of data comes into our system, it's going to hit one of our three controller nodes. We're going to take that piece of data and put it into DRAM. We're going to send a copy of that write, do a DMA over our InfiniBand network, and put it on one of the other nodes, and then we're going to acknowledge that write back to the host. That entire process takes 180 microseconds. When we implement compression, and when we turn that on in 4Q, nothing changes in that critical data path. We implemented our compression engine on the back end. So, long after that data has been written in and we have it replicated in DRAM, we have an asynchronous process which pulls data sections out of DRAM, assembles them into destage stripes, and writes them to persistent media along with parity and data protection. And it's here that we implemented the compression algorithm. So the way the process works is, when we have a section of data in memory that's a candidate to be destaged and written to persistent media, our compression engine chooses from a library of candidate compression algorithms, making a data-aware decision for each 64-kilobyte section of data about which algorithm to use, compresses the data, and then writes it down to the back-end disk. So it's an inline process.
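The destage flow Brian describes, picking a codec per 64 KB section and then writing compressed stripes to disk, can be sketched in a few lines of Python. This is a toy illustration, not Infinidat's actual engine: the codec set (zlib, bz2, lzma), the try-everything selection rule, and the function names are stand-ins for whatever data-aware heuristics the real implementation uses.

```python
import bz2
import lzma
import zlib

CHUNK = 64 * 1024  # 64 KiB sections, as in the description above

# Candidate algorithms; a real engine would use cheap, data-aware
# heuristics rather than trying every codec on every chunk.
CODECS = {
    "zlib": (lambda d: zlib.compress(d, 6), zlib.decompress),
    "bz2": (lambda d: bz2.compress(d, 9), bz2.decompress),
    "lzma": (lambda d: lzma.compress(d), lzma.decompress),
}

def destage(section: bytes):
    """Compress each 64 KiB chunk with whichever codec shrinks it most,
    falling back to storing it raw if nothing helps."""
    stripes = []
    for off in range(0, len(section), CHUNK):
        chunk = section[off:off + CHUNK]
        best_name, best_blob = "raw", chunk
        for name, (enc, _) in CODECS.items():
            blob = enc(chunk)
            if len(blob) < len(best_blob):
                best_name, best_blob = name, blob
        stripes.append((best_name, best_blob))
    return stripes

def restore(stripes) -> bytes:
    """Reassemble the original section from (codec, blob) pairs."""
    parts = []
    for name, blob in stripes:
        parts.append(blob if name == "raw" else CODECS[name][1](blob))
    return b"".join(parts)
```

The key point the sketch captures is that `destage` runs after the write has already been acknowledged out of DRAM, so whichever codec wins, the host never sees the compression cost in the write path.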
There's no way that data can ever get to persistent media without being compressed. But by doing it asynchronously, outside of that critical path, after we acknowledge the IO back to the host, it allows us to get all the benefits of compression without any of the penalties on performance. So it's inline to the storage path. It's asynchronous to the activities that are going on in memory. Yeah, it's asynchronous with respect to DRAM, but inline on the way to persistent media. And the modular architecture and framework is an interesting approach as well, because it gives us a lot of flexibility moving forward to adapt to new data types and new usage patterns. So whether it's a particular vendor or a particular data type (genomic data, as an example, comes up a lot), we have the ability to adapt and really fine-tune the compression for specific workloads and data types on the same system. And just to be clear, this applies to spinning disk as well as your flash storage. Yeah, so the protocols and the algorithms are universal, and they're not specifically tied to any one type of media. And that's a recurring theme in our architecture: we're trying to move the industry away from this dependency on very hardware-centric, media-defined systems, and move to software architectures that can accept any type of media and still deliver excellent performance, using software algorithms for data placement, putting the hottest data on the fastest media and the coldest data on the slowest media, doing that in real time, and doing so in a way that's completely independent of the particular media types. Yeah, because typically, even though vendors were able to ship compression technology on spinning disk, customers would flick a switch and turn it off. Yeah, the tax was just too high. Yeah, once they see the performance implications, they're horrified and they turn it off.
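The placement idea Brian describes, software ranking data by temperature and assigning it to media tiers independently of the hardware, can be reduced to a minimal Python sketch. The tier names, capacities, and simple hit-count ranking here are invented for the example; the real algorithms are far more sophisticated and run continuously, multiple times per second.

```python
from collections import Counter

# Tier capacities in "extents"; illustrative numbers, not actual
# InfiniBox geometry. Fastest media first.
TIERS = [("dram", 2), ("nand", 4), ("hdd", 100)]

def place(extent_heat: Counter) -> dict:
    """Assign each extent to a tier: the hottest extents go to DRAM,
    the next-hottest to NAND flash, and everything else to spinning disk."""
    placement = {}
    ranked = [extent for extent, _ in extent_heat.most_common()]
    start = 0
    for tier, capacity in TIERS:
        for extent in ranked[start:start + capacity]:
            placement[extent] = tier
        start += capacity
    return placement
```

In a running system, a loop like this would be re-evaluated as access counts change, so extents migrate between tiers transparently, with no knobs exposed to the operator.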
So when we ship version 3.0 of our InfiniBox software, compression is on by default for every volume and file system that's created. But we give customers a flag, right in the GUI and right in the REST API, to turn it off on a per-volume and per-file-system basis. And we want customers to test it. Put a heavy workload on, turn it on, turn it off, turn it back on, and see that the latency and performance is razor flat. And the library of algorithms you're using is just to accommodate different data types? Yeah, so as Randy mentioned, for example, one of the very popular use cases in academia and the life sciences for Infinidat technology is genomics, and there are specific algorithms that are optimized for genomics workloads. Similarly, there are application partners in our ecosystem that have database and application server compression algorithms which, if we implement them at the storage layer, create really cool synergies from the application server all the way down to the storage. So we designed this to be extensible and to allow us to very easily drop in new algorithms. But the first step is you have to create a framework that lets you select from that catalog, and do it with an awesome level of granularity, which is exactly what we did. Okay, let's talk about iSCSI. I love iSCSI because Ethernet coming into the storage business was a game changer. But there were holdouts, right? I mean, the FC bigots we talked to said, I'll never do iSCSI. Give us your take on iSCSI, and what's the innovation here? Yeah, so 15 years ago, they were absolutely right. iSCSI was a second-level protocol, and the advice from all the vendors, which was correct at the time, was: if you want the highest performance, the lowest latency and the most deterministic performance, invest in a Fibre Channel SAN. Now there's a lot of really cool innovation going on, Gen 6 Fibre Channel and everything.
But for pure cost optimization, the trend we're seeing, especially in the managed service provider and cloud space, is that these data centers are all Ethernet-only. And there have been advances in Ethernet technologies, non-blocking designs and spine-leaf topologies and whatnot, so the fabrics have caught up on the Ethernet side. So we saw a really cool opportunity to update our iSCSI implementation and take it from business class to carrier-grade services. What that means for us in this 3.0 release, number one, is we're deprecating the InfiniBox iSCSI nodes, which were dedicated hardware nodes for delivering iSCSI services. We've brought the iSCSI service directly onto our cluster of controller nodes, which is where we deliver FICON, NFS and Fibre Channel. So we're making iSCSI a first-level peer. The latency guarantee is within 1% of the same volume coming off of Fibre Channel, a 1% latency differential, and we deliver the same seven-nines availability SLA for iSCSI that we do for Fibre Channel. So the net-net is we're making it a first-level peer. This is squarely targeted at managed service providers and large enterprises that are deploying private clouds. They want to go with Ethernet-based data centers, but they need carrier-grade storage services to support their businesses. So, technology aside, this is a TAM expansion opportunity. Yeah, we believe so. I mean, again, as Brian said, the industry is definitely moving that way, and especially as you go out West, you see these predominantly IP-fabric-based data centers. So it makes sense for us to have a big-boy implementation of iSCSI, to make sure that these systems are designed to operate in very high-volume, high-transaction, mission-critical environments. So we obviously have to make sure we have the best, most reliable, most bulletproof iSCSI implementation with the best performance. And you'll obviously continue to support your Fibre Channel clients, because there still will be holdouts.
Fibre Channel isn't going anywhere. And again, especially with the Gen 6 technologies and stuff like that. Especially for East Coast IT, Fibre Channel is still king. All right, let's talk about the geeky part of the announcement, which is the performance analytics. Why is that geeky? It's a hot topic. Because engineers love metrics. They love data. So I would say that up until version three of the InfiniBox storage software, our performance tools were par for the course. You can look at different types of objects, file systems and volumes and whatnot, and then, with respect to time, you can look at latency and performance and IOPS and whatnot. You can save, you can go back, you can chart it. But there have been so many advances in thinking and tools in data science, even within our own company, that we raised the question: why not apply this and make these tools available directly to customers for analysis? So performance analytics is a complete rewrite of our performance and monitoring subsystem on InfiniBox. And what's unique about it is it allows customers to do multi-dimensional analysis of performance and operational data coming out of their storage fabrics. So, for example, you can start with a data-center-level view and look at all of the storage flows going to a particular InfiniBox system, going to the whole estate, going to a particular tenant, or going to a particular volume or file system. But the tool then lets you drill down to a particular level of granularity and look at an individual flow from one endpoint to another in the fabric. It then lets you do a multi-dimensional slice and dice and construct an OLAP cube, where you can look at, for example, individual SCSI commands or individual NFS operations. You can do it with respect to IO size and create different histograms, and you can do it with respect to time.
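The multi-dimensional slice-and-dice Brian describes can be mimicked in a few lines of Python: take per-IO flow records and roll them up along chosen dimensions (operation, IO-size bucket, volume) into counts and average latencies. The record layout and field names here are hypothetical, purely to show the OLAP-cube idea; a real system would stream telemetry from the array rather than hold it in a list.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-IO flow records: (volume, operation, io_size_bytes, latency_us)
FLOWS = [
    ("vol1", "read", 4096, 150),
    ("vol1", "read", 4096, 170),
    ("vol1", "write", 65536, 210),
    ("vol2", "write", 65536, 190),
    ("vol2", "read", 131072, 400),
]

def size_bucket(n: int) -> str:
    """Histogram bucket label for an IO size, e.g. 65536 -> '64K'."""
    return f"{n // 1024}K"

def cube(records, dims):
    """Group records along the requested dimensions and report
    IO count and mean latency per cell: a tiny OLAP-style rollup."""
    getters = {
        "volume": lambda r: r[0],
        "op": lambda r: r[1],
        "size": lambda r: size_bucket(r[2]),
    }
    cells = defaultdict(list)
    for r in records:
        key = tuple(getters[d](r) for d in dims)
        cells[key].append(r[3])  # collect latencies per cell
    return {k: {"ios": len(v), "avg_latency_us": mean(v)} for k, v in cells.items()}
```

Calling `cube(FLOWS, ["op", "size"])` yields one cell per (operation, size-bucket) pair; swapping the dimension list re-slices the same data, which is the essence of the drill-down the GUI exposes.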
So the idea is to create a really awesome tool that allows customers to do very sophisticated analysis and get very sophisticated insights into what's happening in their storage estate. We built an awesome web-based GUI for doing these visualizations, but we also make the entire dataset available: every piece of data, every function is available via our REST API. So if you want to pull that into existing big data infrastructure, Splunk, Netezza, Graphite, whatever you use, we give you an awesome API to do it as well, if you don't want to use the GUI. But I don't need Cognos to build the cube, is what you're telling me. No, no, no, it's very, very simple. The concept is multi-dimensional analysis, but we have paid an enormous amount of attention to the user experience and to doing consumer-grade UI and UX on the system. You absolutely do not have to be a data scientist to understand this. Every storage operator will get this. Although the good thing is, the system is instrumented to such a degree of granularity and detail, and so robustly, that if you are one of these super-nerdy storage architects or capacity folks who's modeling future performance and load characteristics, you have every imaginable data element you would need to do that. So you can be as high-level and simplified, or as granular and detailed, as you wish. This tool is going to allow you to visualize and tell stories about your workloads that were not possible with current tools. Make heroes out of the guys who have access to that data. Data is king. How do you price this stuff? Is this a separate license that I have to acquire, or how does this all work? We hate license fees. We hate license fees. Dave, license fees are tricks that vendors use to obfuscate the true TCO of their systems.
When a customer purchases or consumes InfiniBox storage, whether it's an OPEX or CAPEX model, the price of that particular system, whatever it is, pennies per gigabyte-month, includes the hardware, the software, and entitlement for all the features and all future features coming to it. Everything that I've mentioned in the 3.0 release is available as a software update. It's non-disruptive. It's available at no charge for all of our customers that have an active support contract or an active OPEX consumption model. And we hate license fees. We think all they do is unnecessarily complicate purchase orders. And any system in the field can take advantage of this? Every InfiniBox system in the field is compatible with this release. There's no special hardware or anything like that required. The advantages of decoupling the architecture and the software completely from the hardware. I mean, it's cloud-like in your philosophy, even though it's not public cloud. But the question you must get a lot, we get it all the time, is: what's your cloud strategy? Here comes cloud, here comes Amazon or Azure. What are customers doing? Because we know they're not all moving to the public cloud, but they want to replicate the public cloud and imitate it as much as possible. Absolutely. So the future is hybrid. The past, the present and the future is hybrid. You know, what customers are looking to do is, number one, take cloud-native applications, get them off-premise, get them into a co-location space, get them onto a public or semi-public cloud provider. Then the incumbent old-school applications, the big three-tier Java apps and ERP systems and mainframe apps, the stuff that you can't run on Cassandra clusters and whatnot: keep that stuff on-premise, optimize it for cost and for availability, and then make it really, really easy to connect those two and move data, move information, and shift workloads between on-premise and off.
And that's exactly the way that our technology works. Customers today can replicate their data sets to an off-premise, excuse me, an on-premise InfiniBox system located in another data center. And we're going to be making some announcements later this year about availability of technology that allows customers to replicate to the Infinidat cloud service that exists in the public space, which allows super-easy access from all the major public cloud providers for those remote copies, and does all of it with a four-second RPO, at a cost per gigabyte-month that is a fraction of what you pay for incumbent storage from Amazon, Google and Microsoft. Yeah, so the compulsion for enterprises and companies of all sizes to move workloads to the cloud is exactly what Brian said. It's the agility, it's the elasticity, it's the flexibility in the consumption model. We're replicating those core functional and foundational abilities and bringing them on-prem, right? So you now have the ability to implement a private cloud that behaves, and is costed and allocated and monitored and used and delivered, in precisely the same way, but at a significantly lower cost and a higher level of availability. So, all right. I want to come back to this: faster than flash, easy on cash. How are you spelling cash? The proper way, S-H, not the memory. Okay. Yeah, so we actually spent a lot of time on this prior to VMworld. We've been seeing an interesting shift in our customer base, and we think in the industry at large, over the last year at least, maybe a little more than that, where obviously the industry is driving towards all-flash. I mean, the messaging and the background noise is all-flash data center, all-flash arrays, disk is dead, and that's been prognosticated for a while.
However, what was happening is we were going into these large opportunities and inevitably competing against one or more all-flash arrays. Anytime we're going into a greenfield opportunity, we're competing against all-flash arrays, because that's become almost a de facto standard if you're doing a benchmark of any kind. And interesting things began to happen, not least of which is the fact that we were routinely beating them on performance. So we were going in head-to-head and getting the same or better performance. And there's a bunch of reasons for that. The primary reason, and it's a fairly simple, pragmatic reason, is that data has to be written. It can't just be read. Customers run workloads that are not just 100% sequential or random reads. They're read-write; they're genuine real-world workloads. So customers will say, we've got our flash appliance over there, which serves this one very specific niche requirement that we have, and it's fine. But we've got a lot of other things that really don't need that, and in fact we can't use it for those workloads, because it just doesn't behave as a general-purpose storage system. It's too specialized. So for that fit-for-purpose use, great. For everything else, not so much. So we go in and take a majority of the remaining workload portfolio, but then eventually, because they have the headroom, the performance and the bandwidth, they try loading those niche workloads onto our system, and they find, lo and behold, it performs as well or in some cases better. So we decided it was time to stop keeping that to ourselves and start talking about the fact that if you need highly reliable, petabyte-scale storage services at flash performance levels, this is how you get it. And just to be clear, you use flash. Absolutely. It's not like you're not using flash. Right. Yeah, so Dave, there's no magic in the system.
What makes our architecture unique is, if you take a look at the hardware that our InfiniBox software runs on, in each rack is up to three terabytes of DRAM, up to 200 terabytes of NAND flash and up to 3.7 petabytes of spinning HDD. And our algorithms place the hot data in DRAM, warm data on NAND flash, and cold data on spinning disk. These algorithms operate in real time. They run multiple times per second. It's completely transparent. There are no knobs; it just works. And as Randy was saying, the difference in latency that clients see for real-world workloads, comparing an InfiniBox system to an all-flash array, is that we put the hot data in DRAM, and we have three terabytes of DRAM, whereas most of the incumbent traditional all-flash arrays put hot data on NAND flash, which is 100 times higher latency, and which falls apart when you try to do high sustained levels of writes to it. And, fortunately for us, it makes for very easy proofs of concept, where you just run a workload against the two, you look at the numbers, and we're a lot faster. There isn't a lot of magic. I'm glad you're saying that, because when I first heard about Infinidat and got briefed, and knew Moshe was involved, I put you in the FM category. FM is effing magic. So, thanks for clearing that up. The magic of Infinidat is Moshe's ability to get outlier software developers, many of them, to come out of retirement. They're all already independently, ridiculously wealthy, very, very successful, and they come out of retirement to mentor younger folks like me and my friends. And then we all come together and somehow manage to not kill each other, and instead create really, really great products. He preys on their overwhelming desire to solve difficult problems. They just live to solve difficult problems, and they're exceptionally good at it. That's fantastic. All right, Randy, we'll give you the last word. What are we looking for in the next several months?
Well, so we're obviously into our fiscal fourth quarter now, so it's going to be another busy quarter. We expect it to be another good one. We recently moved into our new North American headquarters and briefing center in Waltham. We're going to be doing a grand opening event here in a few weeks, a ribbon-cutting ceremony with various local dignitaries and political luminaries. We're continuing to grow the business. We're adding people. We are growing out our space, both here and in Israel and in the Valley, all of our primary centers. And the 3.0 release is a significant release, as Brian said. There's a huge payload in this release, and it's a really strategic release for us, given the timing. We think a lot of the shifts that are happening in the industry this year and into next year bode very well for us. We think there are a lot of prevailing currents in the industry, a lot of updrafts, that we're well positioned to ride. So we're going to close the year out strong, and we think next year is going to be a pretty significant year for us as well. We love it. We'll be watching. I mean, I love the non-conventional thinking. You've got Israel that brings that. You've got the bi-coastal thing going on. And you've got execution. So congratulations on what you've done so far. I know you've got bigger plans, but we really appreciate you guys coming on theCUBE. Thanks for having us, Dave. Appreciate it. All right, you're welcome. Thanks for watching, everybody. This is a special presentation of SiliconANGLE Media's theCUBE. We'll see you next time.