Okay, we're back live here at NAB. I'm John Furrier, the founder of SiliconANGLE.com. This is SiliconANGLE.tv's theCUBE, our flagship telecast. We go out to the events, set up our anchor desk, and talk to the smart people who can extract the signal from the noise, cover the news, provide our opinion, as always, and our analysis. My co-host is...

I'm Dave Vellante, wikibon.org, and John, we're here with Wim De Wispelaere, who is the CEO and founder of Amplidata. John, we've been talking for years now about cloud and storage and how the whole paradigm is changing.

Yeah, Dave, when we first started working together in 2010, we saw that storage was going to be hot in a bigger way than most people saw, and we started working on our innovative business model with SiliconANGLE and Wikibon, and clearly that was not on everyone's radar. We called storage sexy. Now, with cloud and storage, it's absolutely, totally mainstream. All the top players out there, like Google, are talking about cloud, mobile, and social, and storage is the fundamental bottleneck for real-time business, real-time information, real-time media. Here at NAB, you can't walk around without bumping into some sort of new camera, editing station, some new technology that creates bigger, fatter content that needs storage and processing. So I think, Wim, that's a lead-in to you. You guys are under the covers solving these problems. Talk about the tsunami of big data. And when we say big, we mean big, file-sized data.

Absolutely. That's what we're focusing on: how do you store petabytes of data reliably, so you never lose that data, on a disk-based storage system? Disks are still mechanical devices, so you know one day a disk is going to fail, and you need to protect yourself against that. The second thing is, how do you scale such a system to petabytes?
Initially, we've been focusing on online applications, such as social media applications storing massive amounts of photos and small videos that people are uploading. That generates petabytes of data, online archives of data. One of our customers is the Montreux Jazz Festival; they've been digitizing all of their video content of the last 40 years, and they've put it on an Amplidata storage system, which makes it readily available now to stream to jazz cafes and so forth. So this is really driving the requirement for storing petabytes of data online.

Talk about the market dynamics, because right now most people think of storage as the hard disk on their computer, or as part of some big company's back-office operation in the data center, monster drives, whether they're buying from EMC or NetApp, big systems that sit back there. And yeah, they serve up information, but no one really thought of it as critical infrastructure. Talk about how, in the market today, storage has evolved to become so critical to business operations.

Yeah, well, really it's the data volume that makes customers run into quite a lot of issues now. Traditional storage systems, whether block-based or file-based, were never built to store petabytes of data. In a block-based storage system, you typically set up a group of disks in a RAID set to protect yourself against the failure of a disk, and the technology has evolved so that RAID 5 and RAID 6 give you protection against one or two disk failures. Now, when you're storing petabytes of data, you want to use the high-capacity drives that are available in the market now, the three- to four-terabyte drives. Typically you put them in a RAID set, say 10 drives in a set, which gives you roughly 40 terabytes of capacity. That doesn't bring you to petabytes.
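The scaling gap being described comes down to simple arithmetic. A rough sketch, using the figures from the conversation (4 TB drives, 10-drive sets, and assuming RAID 6 parity):

```python
# Rough capacity arithmetic for scaling RAID sets to a petabyte.
# Figures follow the examples in the conversation: 4 TB drives in
# 10-drive RAID-6 sets (two drives' worth of parity is an assumption).

DRIVE_TB = 4
DRIVES_PER_SET = 10
PARITY_DRIVES = 2  # RAID 6 tolerates two disk failures per set

usable_per_set_tb = (DRIVES_PER_SET - PARITY_DRIVES) * DRIVE_TB

petabyte_tb = 1000
sets_needed = -(-petabyte_tb // usable_per_set_tb)  # ceiling division

print(f"Usable capacity per RAID set: {usable_per_set_tb} TB")
print(f"RAID-6 sets needed for 1 PB:  {sets_needed}")
print(f"Total drives to manage:       {sets_needed * DRIVES_PER_SET}")
```

At these sizes, a single petabyte already means dozens of independent RAID sets and hundreds of drives, which is the management burden the next exchange gets into.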
So really, you need quite a lot of storage management software and capability around that to scale out such a system, and these traditional systems don't offer you that capability. File-based storage systems, on the other hand, also have their limitations. Typical NAS-type systems aren't built for petabytes. File-based storage requires you to organize your files in a certain directory structure, which you need to set up beforehand, and that has several limitations: limits on the number of directories you can create, the number of files per directory, and so on. Those are the limitations you run into, and with this unstructured big data, it's very difficult to predict what kind of data you're going to be storing and how you're going to organize it. So really, what we bring to the market is an object storage system that gives you a global namespace that is scalable without limits, where you can keep on storing your data without running into the limits of typical block- or file-based storage systems.

Dave, you've been covering storage in this area really closely. Obviously block, object, obviously some differences. We've had Vishal Sikka talk about object storage in a not-so-positive way. What's your angle on this?

So John, RAID was initially introduced because of the problems with mechanical disk drives. They were error-prone, and RAID essentially solved that problem when drives were a couple of gigabytes. The problem now is that disk drives are so large. As Wim was saying, you're talking about three- or four-terabyte drives. How long does it take to rebuild a three- or four-terabyte drive?

Typically, in a traditional RAID system, it can take you up to two days.

Okay, two days. Days, not hours. And if you have RAID 5 and a 30-terabyte drive, it's going to take a month. So if you have single parity, during that rebuild process you're exposed.
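The rebuild-window problem above is easy to estimate. A back-of-the-envelope sketch; the 25 MB/s effective rebuild rate is an assumed figure for an array serving production load, not a measured one:

```python
# Back-of-the-envelope RAID rebuild time, illustrating why large
# drives leave long exposure windows. The 25 MB/s effective rebuild
# rate is an assumption for a busy array, not a vendor figure.

def rebuild_hours(capacity_tb: float, rate_mb_per_s: float) -> float:
    """Hours to reconstruct an entire drive at a given effective rate."""
    capacity_mb = capacity_tb * 1_000_000
    return capacity_mb / rate_mb_per_s / 3600

for tb in (4, 30):
    print(f"{tb} TB drive at 25 MB/s: ~{rebuild_hours(tb, 25):.0f} hours")
```

Even at this optimistic rate, a 4 TB drive keeps the array in degraded, exposed mode for nearly two days; slower effective rates on loaded systems stretch that much further.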
So with today's bit error rates and soft error rates, the probability of losing data during that window is almost a certainty, right? So you've attacked that problem, and others, with this notion of bringing in Reed-Solomon coding and erasure coding, which you're well known for: basically bringing math to solve the problem. John mentioned Vishal Sikka; the challenge is that it's very math-heavy, and so there are performance concerns. Talk about that a little bit.

Correct. So, basically, to overcome these issues with traditional RAID, the first object storage systems stored multiple copies of data. If you look at what Amazon, Google, and the others are doing: whenever you upload a picture, they immediately create three or four copies of that picture, spread across multiple disks, and that protects you against the failure of a single disk.

What's the problem with that?

Well, for every terabyte of data stored, you actually need four terabytes of disk capacity. That's very expensive. It also requires a lot of power, floor space, and all of that. So this is why we started using erasure coding: to reduce that overhead. With erasure coding, and you mentioned Reed-Solomon as a typical example, you can reduce that overhead. You can tune it to a certain availability policy, say two-, three-, or four-disk failure protection, with little overhead. Now, the problem with traditional erasure coding schemes like Reed-Solomon is that they're very difficult to make performant. It's very difficult to get to the multi-gigabyte-per-second throughput rates that are required, like here at the NAB show, for media and entertainment. So that is why Amplidata, within its AmpliStor system, is using a new erasure coding technology called rateless erasure coding, and this really allows us to open up the throttle on throughput.
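The overhead trade-off being described can be put in rough numbers. A minimal sketch; the 16+4 spread policy is an illustrative erasure-coding configuration, not necessarily Amplidata's default:

```python
# Storage overhead: N-way replication vs. k+m erasure coding, i.e.
# raw capacity needed per usable terabyte. The 16+4 policy is an
# illustrative spread, not a specific product default.

def replication_overhead(copies: int) -> float:
    """Raw TB needed per usable TB with N full copies."""
    return float(copies)

def erasure_overhead(data_fragments: int, parity_fragments: int) -> float:
    """Raw TB per usable TB when data is split into k data fragments
    plus m redundant fragments; the object survives m disk failures."""
    return (data_fragments + parity_fragments) / data_fragments

print(f"4-way replication: {replication_overhead(4):.2f}x raw capacity")
print(f"16+4 erasure code: {erasure_overhead(16, 4):.2f}x raw capacity, "
      f"still tolerating 4 disk failures")
```

The point of the comparison: a 16+4 policy tolerates four simultaneous disk failures at 1.25x raw capacity, where four full copies would cost 4x for the same failure tolerance.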
We are now at the point where our controllers are filling 10-gigabit Ethernet interfaces, so we're getting beyond 700 megabytes per second of throughput per controller, and across a whole system, by adding multiple controllers, we can scale that out to tens of gigabytes per second of data being put into and retrieved from the system.

So, yeah, I'd love to dig into that a little bit more, but I'm not going to have a ton of time here. You know, my friend, he may not be your friend but he's my friend, Chris Gladwin, says RAID is dead. Do you agree?

RAID is dead, absolutely, yeah, I agree with that.

So you guys have some common ground, a lot of common ground actually.

Absolutely, and we're big promoters of object storage systems, because they solve quite a lot of the issues in terms of scalability. The one place where we've gone beyond is throughput: we fixed the durability problem, we fixed the scalability problem, and in addition to that, with the newest release of our software, the AmpliStor XT controllers, where XT stands for extreme throughput, we've also solved the throughput problem, which is really key in the markets where we're operating here. This allows us to deliver the performance that is required of a storage system, in addition to all of these nice benefits of object storage.

So actually you're betting on Intel, right?

Yeah, we're very close to Intel, because to get to that performance, we've optimized our technology to make maximum use of the capabilities of the Intel Xeon processors.

Well, it's a Moore's Law bet, I should really say. Intel is present here, obviously, but you're basically betting that the math-heaviness of your system is going to be overcome by processing power.

Absolutely.

It's a good bet.
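The scale-out throughput claim above is simple aggregation, assuming the near-linear scaling being described holds; the controller counts below are arbitrary examples:

```python
# Aggregate system throughput from scale-out controllers, assuming
# the near-linear scaling described in the conversation.

PER_CONTROLLER_MB_S = 700  # "beyond 700 megabytes per second" per controller

def aggregate_gb_per_s(controllers: int) -> float:
    """Total system throughput in GB/s if controllers scale linearly."""
    return controllers * PER_CONTROLLER_MB_S / 1000

for n in (1, 16, 32):
    print(f"{n:2d} controllers: ~{aggregate_gb_per_s(n):.1f} GB/s")
```

Under that linear-scaling assumption, a few dozen controllers is what gets a system into the "tens of gigabytes per second" range mentioned here.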
And so really, by leveraging the Intel CPUs, we've gotten to the point where we now deliver the performance necessary for media and entertainment. The newest movies now are recorded in 4K frames, stereoscopic, so that doubles the amount of data, and now we're going to frame rates that go up to 120 frames per second, which is five times the traditional 24 frames.

So forecast that out. Because, as John mentioned, the CTO of SAP had some negative comments about object storage, said it didn't scale, didn't perform well. Do you see the use cases for object storage going beyond archiving, or will it be confined to that for some time?

They definitely go beyond archiving. With the performance numbers I've given you, we have implementations at customers where AmpliStor is used as the primary storage target for the data streams. So we definitely go beyond archiving, and I would say object storage is the next big thing. This is going to be the technology of choice for the big data of the future, because it dramatically simplifies the whole storage environment.

We should mention that, obviously, right?

Absolutely. It overcomes the limits of RAID and of file-based storage systems. It gives you petabyte scale and high durability at a low total cost of ownership. No more LUNs, no more management.

Wim De Wispelaere, the CEO and founder of Amplidata. Storage is hot, storage is sexy, it's relevant, and it's only getting bigger, with bigger and bigger frames in every piece of video being recorded, et cetera. Thanks for joining us on theCUBE. This is theCUBE at the Intel Studio Experience. Thanks to Intel for supporting us in bringing our show out here to NAB. We'll be here for three days. I'm John Furrier with Dave Vellante, from SiliconANGLE.com, Wikibon, theCUBE.