Welcome to this special CUBE Conversation here in Palo Alto. I'm John Furrier, host of theCUBE. I'm here with David Richards, the CEO of WANdisco, a CUBE alumni who's been on many times. WANdisco continues to make the right bets. They made a bet on cloud many years ago, and we've covered it certainly on theCUBE, but live data is the new hot thing, and multiple clouds are turning out to be the trend. That trend's your friend, David. Great to see you.

Great to be back.

Thanks for coming on. So we talk all the time about how you guys have always evolved the business and continue to stay out front on all the major waves. Now, again, another good call. You certainly bet on cloud; we've talked about that. Open source, big data, cloud. You saw that coming and positioned for it. But now you've got some great momentum and resonance with customers around live data, which is not a stretch given what you guys have done with replication in the past, the core intellectual property. Give us the update. You guys have been in the news lately.

So thanks for that. I think you enumerated the history over the past two or three years, during which, as we like to say, we've been living in dog years: everything's happening seven times faster than it normally would. We started out life by making a prediction that storage arrays would change, that companies beginning to store structured and unstructured data of sizes we'd never seen previously were going to have to resort to open source software running on commoditized hardware, which we'd already seen the social media companies move to. But then we began to see a problem emerge even in that marketplace, where spiky compute, the applications that were going to be compute-heavy, would need to run in cloud environments where you have completely elastic compute at remarkably low cost. And that leads to a problem.
So there's this iceberg underneath the ocean that we like to talk about. Moving static, archival data is a really simple problem. That's not live data, that's archival data; you just FTP it from point A to point B. But if we're talking about transactional systems where 10, 20, 30, 40, 50% of the data set changes all of the time, that creates a humongous problem in moving data from on-premises to cloud, either for hybrid cloud or between clouds for multi-cloud. And that's the precise problem that WANdisco solves. We've seen customer traction recently. We just announced a deal, jointly with Microsoft Azure, with a big healthcare company who 12 months ago were not talking about cloud. Suddenly they got over that hump, where security keys could be managed by themselves within the cloud, and we were able to move petabyte-scale data from their on-premises systems into the cloud without any interruption to service, without any blocking. That's a trend we're seeing: our pipeline is now full of companies all trying to do that.

It's like you hit the oil gusher with data, because the data tsunami's been there. We've documented it certainly on theCUBE, and our research team at Wikibon has been talking about it for years, and now you're starting to see it, and you guys are getting the benefits of it. People figured out that moving data around is expensive and hard to do, so you push compute to the edge, but you still have to move the data around. This is a key part of the latency piece of the cloud. So how do you do that at scale? This is the thing that you guys have, and I want you to explain what it is. You guys have live data for multi-cloud. What does that mean? What is all the hubbub about? Why is live data for multi-cloud such a hot topic?

Okay, so let's just take a step back and talk about what multi-cloud actually is in today's definition, which is the vendors' definition, which is very convenient.
What they mean is putting applications into a container, Kubernetes or whatever, picking it up and shifting it somewhere else, and hey presto, I've got the same applications running in two different clouds. That is not multi-cloud, because you're forgetting about the data, the iceberg underneath the ocean, this colossal amount of data. If I've got petabyte-scale, multi-terabyte-scale data sets and I need to run the same applications, or different applications, against the same data set, I need guaranteed consistent data, and that is by definition a data consistency problem. It is not a data replication problem. All of the stuff that we used to use in the past for gigabyte-scale data, for traditional relational database problems, none of that works in a live data world. And by live data we're talking about multi-terabyte, petabyte-scale data sets, data sets so large that we've never seen them before, running in n cloud locations, with the same or different applications but guaranteed consistent data in every location.

So you guys have had this core competency around the integrity of the data in replication, and it sounds like the same thing is true around moving data, because you're managing the life cycle of end-to-end data movement, point A to point B. The other approach is to move compute to the data, which is what we're seeing with Amazon doing a deal with VMware on-premises, so there are two schools of thought. When should customers think about each approach? Can you debunk or just clarify those two positions?

So it's not really a chicken-and-egg problem, because we know which comes first: it's definitely the data. If I'm going to rebuild my application infrastructure in the cloud, I'm going to do it piece by piece.
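The distinction Richards draws between data replication and data consistency can be sketched in a few lines: if every location applies the same operations in the same agreed order, all copies converge to identical state, while copies that apply operations in whatever order they happen to arrive can diverge. This is an illustrative sketch only, not WANdisco's implementation; the class and variable names are hypothetical.

```python
# Illustrative sketch: why agreed ordering matters for multi-site
# consistency. Hypothetical names, not WANdisco's actual API.

class Replica:
    """A key-value copy of the data set at one cloud location."""
    def __init__(self):
        self.state = {}

    def apply(self, op):
        key, value = op
        self.state[key] = value

# Two concurrent writes to the same key, issued from different sites.
op_a = ("row42", "written-in-azure")
op_b = ("row42", "written-in-aws")

# Consistent approach: a coordination layer fixes ONE global order,
# and every replica applies operations in that agreed order.
agreed_order = [op_a, op_b]
azure, aws = Replica(), Replica()
for op in agreed_order:
    azure.apply(op)
    aws.apply(op)
assert azure.state == aws.state        # guaranteed consistent

# Naive replication: each site applies ops in local arrival order.
azure2, aws2 = Replica(), Replica()
for op in [op_a, op_b]:                # one delivery order here...
    azure2.apply(op)
for op in [op_b, op_a]:                # ...network reordered it there
    aws2.apply(op)
assert azure2.state != aws2.state      # the replicas have diverged
```

The point of the sketch is that the hard part is agreeing on the order, not copying the bytes, which is why gigabyte-era replication tools don't translate to this problem.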
I can't do lift and shift for a thousand applications that are running against this data set, block them for six months because I've got petabyte-scale data, and wait for it all to arrive in the cloud, or put it onto the back of a Snowmobile or some other physical device to move the data. I need to build the aircraft while it's taking off and flying; that's probably a good analogy. So what we see is that the first step for companies is to get consistent data from on-premises to cloud, or between different clouds. What that enables me to do, of course, is then rebuild my application infrastructure piece by piece, at the pace that I want to. There's a great ad that I keep seeing on TV where it's "migration day," as though I can press a button and suddenly, in this Alice in Wonderland magical world, everything just appears. Realistically, I saw the CEO of VMware a couple of years ago talk about being in a hybrid cloud scenario for 20 years, and I think that's probably accurate. We've got billions of applications, a mix of homegrown stuff, a mix of actuarial applications in the insurance industry, that are impossible to rebuild overnight. This is going to take a long, elongated period of time.

I was talking on Twitter with a bunch of thought leaders about hybrid cloud and multi-cloud, and the kindergarten class is hybrid, right? You've got some public cloud and you've got some on-premises data center, and getting those operational things nailed down is great. But as you progress up the grades and get smarter, as you increase your IT IQ, you're dealing with potentially multiple data centers, or a bigger onsite footprint, or an IoT edge, and multiple clouds. That sounds easy on paper, but when you have to move data around between the different workloads, that's the core problem that people are talking about today. How do you guys address this problem?
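"Building the aircraft while it's flying" has a standard shape: take a bulk copy of the data set while journaling every change that lands during the copy, then replay the journal so the target catches up, all without ever blocking writers. A minimal sketch under those assumptions; the names are hypothetical and this is not WANdisco's actual product interface.

```python
# Sketch of non-blocking migration: bulk copy + change-journal replay.
# Hypothetical names; not WANdisco's actual product interfaces.

source = {f"block{i}": f"v{i}" for i in range(5)}   # live on-prem data
target = {}                                          # empty cloud store
journal = []                                         # changes during copy

def write(key, value):
    """An application write: never blocked by the migration."""
    source[key] = value
    journal.append((key, value))

# Phase 1: bulk-copy a point-in-time view while writes keep landing.
snapshot = dict(source)
write("block2", "v2-updated")        # mutation arrives mid-copy
write("block99", "brand-new")        # insert arrives mid-copy
target.update(snapshot)              # bulk transfer completes

# Phase 2: replay the journal so the target catches up.
for key, value in journal:
    target[key] = value

assert target == source              # consistent, with zero blocking
```

In a real system phase 2 loops until the journal drains, since writes keep arriving while you replay; the sketch compresses that to a single pass for clarity.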
Because I buy multi-cloud. I can see that certain tools fit certain clouds, the right workload in the right cloud; I get that, it makes a lot of sense to me. The data is the problem. So how do you guys address that? This is the number one concern.

So people ask me all the time about competition, and the closest is Google. Google have a product called Google Spanner. Spanner is a time-based, active-active, WAN-scope data replication solution that on paper looks very close to what WANdisco does. It enables them to keep active data in all of their different geo-locations, and they built it for their ad services years and years ago. The trouble with it is that it only works on their own proprietary network, against their own proprietary applications, because they put dark fiber into the ocean and GPS receivers and atomic clocks on every single one of their servers; it uses time, and time accuracy, to synchronize all of their data. We can do all of that over the public internet. We're not a hardware solution; this is a pure software solution that works over the public internet, so we can do it for any cloud vendor and any provider of applications. That's what we do, and we're licensing our IP all over the place at the moment.

So which clouds are you working with now? I imagine there's great uptake from the clouds. Can you talk about the deals you've done?

We announced a partnership with Microsoft and their Azure product, and we've been very impressed with the traction we're seeing with them, particularly in enterprise cloud. The early stage of cloud was obviously dominated by Amazon, Amazon Web Services, and they did a fantastic job of really bringing cloud to the market, almost inventing cloud by accident and then bringing it to market very, very quickly.
They were the fastest company ever, if you treat them as an independent company, to $15 billion. Most of those applications, projects, and companies were born in the cloud; a lot of the modern companies today, Airbnb et cetera, were born in the cloud. So the second inning of cloud is certainly the enterprise. We've also been impressed with the traction that we've seen from Google; GCP has been extremely impressive, and of course Amazon continues to thrive in cloud. We also have an OEM deal with Alibaba, with their cloud as well. So those are really the only four clouds.

If Google has Spanner, how do you differentiate from Google Spanner?

Google Spanner only works on their proprietary network, which is great for Google and between their data centers. But what about 99.9% of the rest of the problem, which is the rest of us, right, who operate on the public internet? We can do what Google Spanner does, active-active, geo-scale, WAN-scope replication of data, but over the public internet.

So you guys have been talking active-active for a long time; we've had many conversations here on theCUBE, so I get that. How has your business changed with cloud? You mentioned prior to coming on camera that you made a bet on cloud that's paying off, obviously. People who made the right bets on cloud at the right time are certainly being paid off, and you're one of them. How do live data and multi-cloud change your business? Do they increase your trajectory? Is there a pivot? What does it mean for WANdisco?

So my thesis, or the company's thesis, I won't take the credit for it, was really simplistic: in a small data world of gigabyte-scale data, doing data replication meant small data equals small outage.
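The "small data equals small outage" point is just arithmetic: the length of a blocking transfer scales linearly with data volume over a fixed pipe. A back-of-envelope calculation, assuming a dedicated 10 Gb/s link, which is generous for many enterprises:

```python
# Back-of-envelope: how long does a blocking bulk transfer take?
# Assumes a dedicated 10 Gb/s link with no protocol overhead.
PETABYTE_BITS = 1e15 * 8            # bits in one petabyte
LINK_BPS = 10e9                     # assumed link speed, bits/second

def transfer_days(petabytes):
    """Days needed to push `petabytes` through the assumed link."""
    seconds = petabytes * PETABYTE_BITS / LINK_BPS
    return seconds / 86400          # 86,400 seconds per day

# A gigabyte-scale data set (1e-6 PB) blocks for under a second...
assert transfer_days(1e-6) * 86400 < 1
# ...but 20 PB blocks for roughly half a year (~185 days).
assert 150 < transfer_days(20) < 200
```

That factor-of-a-million gap between gigabyte-era and petabyte-era outages is the scaling argument behind the thesis.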
When you get data sets that are growing exponentially, data sets a thousand or a million times greater than what we'd seen previously, what was a small outage, a small blocking of client applications, becomes an elongated blocking of client applications. We're talking about six months to move 20 petabytes of data, and you can't block business-critical applications for six months. That was the bet we made. We expected initially to see that happen on-premises, in the data lake world, in the Hadoop world if you will. That has not happened to date, and we don't think it's probably going to happen. What we are certainly seeing is a huge desire among companies to move those data lakes into the cloud, and we've innovated there. We've got some new inventions coming out that enable you to move massive quantities of data in a single pass, exponentially faster than anything else, for a unidirectional data move into clouds. Our bet was that companies, in order to achieve the kind of scale they need, are going to have to do this in cloud; in order to get to cloud, they're going to have to move that data there; and they're not going to be able to block even for a day to do it. That was the bet we made, and it was the right bet.

Talk about where you guys go from here. Give us a company update. What's the status of the company? You've got some new personnel. Any changes, notable updates?

So, really interestingly, my co-founder and chief scientist, Dr. Yeturu Aahlad, is a genius: a PhD from UT and an undergrad from IIT. Our new VP of engineering, Shakti, did his undergrad at IIT and his PhD at UT under Draxler, through this fantastic PhD program they had there. My new head of research was chairman of computer science at the University of Denver; he was an IIT undergrad with a PhD alongside Aahlad at UT.
And I said jokingly to Aahlad, there must be a fourth guy we can bring on board who went through the same program, and he said, there is, but we can't hire him, because he's the CTO of Microsoft. So he was the fourth guy. Joel, who I know is going to be coming on theCUBE shortly, has also joined us, from IBM, to run marketing for us. So we've made some fantastic new hires, and the company's doing really well. Cloud certainly played a big part in the second half of last year, and it's definitely going to play a big part in 2019. We've seen a pivot in the pipeline away from disaster recovery and data lake, which dominated the first half of last year, toward more reliable subscription revenue in the second half. We announced some pretty big deals with big healthcare companies, we've got a really good public reference with AMD, and we announced a motor vehicle company where one of the new use cases is four petabytes of data per day being generated, all of which has to be moved from on-premises to cloud. So we've got some ginormous deals in the pipeline, and we'll see how they play out in the coming weeks and months.

It's great to see the change. You know, we've known each other for almost ten years now, since we first met, and it's fun to see how you guys entered the market at Hadoop, stayed on the data wave, and focused on enterprise integrity of the data, the active-active replication, the key IP, and how in cloud the data is just assumed. And it's not just data, it's large scale. If you look at the new people you've hired, they've got chops in large-scale systems. We're talking about large-scale systems now; the data is a given.
So you're really nailing the move from a nice enterprise feature, certainly table stakes for fault tolerance, active-active, disaster recovery, to a mission-critical ingredient in large-scale cloud.

Well, it's ironic, isn't it? Our value actually increases with the volume of data. We're an unusual company in that context: the larger the data set, the greater the problem, and the greater the problem that we solve. So we made a pretty good bet that active-active replication, live data, would be a critical component of both hybrid cloud and multi-cloud. That's playing out really well for us, and there are certainly a lot more changes to come.

Great insight on cloud and multi-cloud. Certainly cloud has proven the economics, proven the large-scale value of moving at cloud speed, but now you have multiple clouds, and that's going to change the game on applications and workloads. It's not going to change the data equation: the tsunami of data is not stopping. I think you've got a good wave you're riding, the data cloud wave. David Richards, CEO of WANdisco, here for a CUBE Conversation in Palo Alto. I'm John Furrier. Thanks for watching.