Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2018. Brought to you by Amazon Web Services, Intel, and their ecosystem partners.

Well, good afternoon, or good evening if you're watching us back on the East Coast right now. We are live here at AWS re:Invent in Las Vegas, along with Justin Warren. I'm John Walls. We're now joined by Anthony Brooks-Williams, who's the CEO of HVR, and by Paul Specketer, who's the Enterprise Data Architect at Suez. Anthony, thanks for being here with us today. Paul, good afternoon to you.

Good afternoon.

All right, so first off, tell us a little bit about your respective companies, why you're here together, and why you're here at the show. Anthony, if you would.

Sure, absolutely. HVR provides the most efficient way for companies to move their data, in particular to the cloud, and at scale, with the peace of mind that when they move that data, it's accurate, and we give them insights on the data that we move. We do that for companies such as Suez, enabling them to get their data into S3 and into Redshift, so they can make decisions on the freshest data.

All right, Paul?

So yeah, we were formerly GE, and Suez acquired our company, so now we're standing up an entire data platform. All the applications are coming over to AWS, and in the past year we've had to stand up a Redshift cluster and the full ETL backbone behind it, including the replication from our ERP system into that environment. We're going live with that in the coming months, and that's why we're here. We use HVR to move our data around before the ETL process.

Anthony, you mentioned that your customers want to make decisions on the latest, the freshest data. So what are the kinds of analysis, and what are the kinds of decisions, that customers are trying to make here?
Sure, so obviously it depends on the customer. If it's a big e-commerce vendor, it's something like where a certain product is selling in a certain region based on a certain weather pattern. Our ability to capture that at a store level and move it back, so they know how to fulfill the warehouses or what stock is out there, enables them to run a more profitable business. Whether it's someone like that, or Paul's previous company, someone like GE, from aviation to transportation, it's what's happening in their environment, in their systems. So we give them the ability to move that data, move it at volume, and just make good business decisions. I mean, a main use case for us is consolidated reporting, and consolidated reporting of some of those financials as well. So the exec level, the board level, are making decisions on their business with the freshest numbers sitting in front of them at that time.

Paul, what are some of the key ways that HVR's been able to help you in designing a system that can support the needs of those customers? What are some of the key things where you said, actually, we really need the help of someone like HVR to do that?

Long ago, we had database triggers and programs we had to write to capture changes. That all goes away when you do log-based data replication. So for us, we changed our whole strategy and said, you know what, just take everything from the ERP, move it up into the cloud, and then from there move it where you need to, process the ETL, and ship it around. The first goal is to take everything as is and get it up into the cloud as a replicated data set. From there, we do our ETL processing, and we view that in Tableau. What I'm building allows us to close our books in one to two days, and when we were in GE, we were driving towards a one-day close.
Now that we're in Suez, we're doing a hard close every month, and we're trying to drive that time down as low as possible. You've got people sitting around, waiting for the report to look right, so the more we can do to drive that time down, the more people get their weekends back.

People like weekends, absolutely. So you've talked about accuracy, and you've talked about volume; obviously you've got a lot more data coming through all the time. What about speed, latency? How much of a concern is that for you, with this bigger funnel you're dealing with?

Absolutely, especially in today's world of the cloud and moving data across wide-area networks. That's where the technique we use comes in: CDC, change data capture, where you're reading the transaction logs. You're only capturing the changes and moving those across the network. Then in our technology we have some proprietary compression techniques that further magnify that bandwidth. By magnifying the bandwidth, you're able to move a large volume of data more efficiently, and latency certainly comes into play there as well. Built into the product we have a feature for data accuracy, so no matter what the source or target system is, they know the data is absolutely accurate. Tied to that is a product we released recently around insights: it reports statistics on the data that we move. We'd gathered those statistics, and now we're publishing them to customers, largely because customers like Paul were doing this themselves; we provided the statistics on the data and they were building a front end on top of that. We've now taken that to the broader market. It shows them things like latency, so they'll be able to drill in and go: that graph or that line is red, or it's thicker, and it's telling me the latency; we should probably do something about that.
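The log-based change data capture Anthony describes, reading committed changes from a transaction log, compressing them, and shipping only the deltas across the network, can be sketched roughly like this. This is a minimal illustration under assumed names, not HVR's actual implementation; the log format and the `capture_changes`, `compress_batch`, and `apply_batch` functions are all hypothetical.

```python
import gzip
import json

# Hypothetical, simplified log-based CDC loop: read committed changes
# from a transaction log, compress the batch, and apply only the
# deltas on the target -- no triggers or full-table scans involved.

def capture_changes(transaction_log, last_position):
    """Yield (position, change) pairs appended after last_position."""
    for position, change in enumerate(transaction_log):
        if position > last_position:
            yield position, change

def compress_batch(changes):
    """Compress a batch of changes before sending over the WAN."""
    payload = json.dumps(changes).encode("utf-8")
    return gzip.compress(payload)

def apply_batch(target_table, compressed):
    """Decompress and apply inserts/updates/deletes on the target."""
    for change in json.loads(gzip.decompress(compressed)):
        op, key, row = change["op"], change["key"], change.get("row")
        if op == "delete":
            target_table.pop(key, None)
        else:  # insert or update
            target_table[key] = row

# Toy example: replay three committed changes onto an empty target.
log = [
    {"op": "insert", "key": 1, "row": {"amount": 100}},
    {"op": "insert", "key": 2, "row": {"amount": 250}},
    {"op": "update", "key": 1, "row": {"amount": 110}},
]
target = {}
batch = [c for _, c in capture_changes(log, last_position=-1)]
apply_batch(target, compress_batch(batch))
print(target)  # {1: {'amount': 110}, 2: {'amount': 250}}
```

The point of the technique is visible even in the toy: only the three row-level changes cross the wire, compressed, rather than the full table.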
What's the bottleneck there? It's all coming together now, particularly in this cloud world of moving data. So Paul, can you give us a real-life example of what Anthony just talked about? With that kind of reporting, it identified whatever hiccup there was in the system and solved that problem for you?

As far as the short-cycle close? I had a hard time hearing you.

Yeah, the statistics I was talking about, collecting stats on the data as it moves. That's enabled you, particularly from a latency perspective, given the volume you can move; if there's an issue with it, what do you do with that?

So one of the challenges we always had was, when you go through a long-cycle replication and you've been doing it for months, and I ask you the question: are you sure you got every change? Do you know? We never knew. But now, with the increases in Redshift cluster performance with the DC2 clusters, and increases in the performance of HVR moving that data in, our strategy now is to never doubt the data. We just refresh it every month, right before close. We refresh the data; it takes us about four hours to move two terabytes into Redshift. So why not? And that changes your approach. When you don't have to stress out about the data being accurate week in, week out, and you know that every quarter, right before close, you're getting a fresh copy, that really changed my life: being able to know, going into close, before the finance guys look at it, that the data is perfect.

So now that you've had that concern taken away and you don't have to worry about it anymore, has that opened up new possibilities? Things you would have loved to attempt but thought: I don't have time, we have these other constraints. With those constraints gone, what are you now able to do?
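The monthly refresh Paul describes, roughly two terabytes into Redshift in about four hours, implies a sustained rate in the low hundreds of megabytes per second. A quick back-of-the-envelope check, assuming decimal terabytes:

```python
# Back-of-the-envelope throughput for the monthly full refresh
# described above: ~2 TB moved into Redshift in ~4 hours.
data_bytes = 2 * 10**12        # 2 TB (decimal)
seconds = 4 * 3600             # 4 hours
throughput_mb_s = data_bytes / seconds / 10**6
print(f"{throughput_mb_s:.0f} MB/s")  # ~139 MB/s sustained
```

At that rate, a full refresh right before close is cheap enough that never doubting the replicated data becomes a practical strategy.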
What we're going to look at now is, instead of doing ETL inside the Redshift cluster, taking that out, because about three-quarters of the space in our cluster is used for ETL. So we're going to carve that out, maybe do it in S3, we're not sure. As soon as we do that, we'll be down to something like a four-node Redshift cluster, and that'll save a lot of money. For us, now that we're in the cloud, the next push is: how do we optimize it? How do we take advantage of cloud-native services that we never had access to before? That's what's on my horizon, looking at that and saying, what can I do in the next year?

We're seeing massive growth in data; we've had many conversations today about data being generated from IoT devices at the edge, and we're having to process it in more places, because just physically moving this data around is such a huge problem. It's why you exist. So what do you see customers dealing with when they try to handle this? This data is not going to get smaller; there's going to be more and more of it. How are you helping customers grapple with where they should move the data? Should they move all of it into the cloud? Is that the only direction it should go, or are you able to help them say: we want to move some of it here, we'll place some other data over there, and we can help you move it around no matter where it needs to go?

Certainly. We're obviously agnostic as to where they want to move the data, but given the years of experience we have and the people we have in the company, we're certainly able to lend seasoned advice on where we think an efficient place to move that data will be. And within the technology of HVR, it's very efficient at capturing data once and then sending it to many.
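The capture-once, deliver-to-many pattern just mentioned can be sketched as a simple fan-out router: one captured change stream, routed to several targets, with an optional filter per target. The `FanOutRouter` class, the target feeds, and the table names are purely illustrative, not HVR's API.

```python
# Illustrative capture-once / deliver-to-many fan-out: a change is
# captured from the source exactly once, then routed to several
# targets (e.g. the bulk feed to S3, a filtered subset to Redshift).

class FanOutRouter:
    def __init__(self):
        self.routes = []  # list of (predicate, sink) pairs

    def add_route(self, sink, predicate=lambda change: True):
        """Register a target; the predicate selects which changes it gets."""
        self.routes.append((predicate, sink))

    def deliver(self, change):
        """Fan a single captured change out to every matching target."""
        for predicate, sink in self.routes:
            if predicate(change):
                sink.append(change)

s3_feed, redshift_feed = [], []
router = FanOutRouter()
router.add_route(s3_feed)  # the bulk of the feed goes to S3
router.add_route(redshift_feed,
                 predicate=lambda c: c["table"] == "orders")  # a subset to Redshift

for change in [{"table": "orders", "id": 1},
               {"table": "inventory", "id": 7}]:
    router.deliver(change)

print(len(s3_feed), len(redshift_feed))  # 2 1
```

The design choice being illustrated: the source is read once, so adding a new target adds no load on the source system, only a new route.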
That's how we really set ourselves apart from a complexity standpoint: being very modular and flexible in capturing that data and feeding it across to where they need it. We can capture once and send to multiple target systems, so they can say: I'm going to put the bulk of this feed into S3, and I'm going to take a bit of it and put it into Redshift. It gives them that flexibility, and with some of the skilled architects we have in the field, we're able to not just go sell a product but actually help them with the solution. We're out there selling software, but we make sure we're delivering customers a total solution, because if we look back on yesteryear and some of the data lakes, the stats from Gartner or somewhere say 70 percent of those projects failed, and it was just: I'm going to take it all and put it in there. Well, why? How? I think it's blending those worlds together, and for the de facto data lake we see today, seven out of ten times it's something like S3. So take the architecture, take the technology, take the people, and help them go execute on that plan, and just lend some of that advice along the way.

That sounds like something that would add a lot of value.

Yeah. You put it there because you could.

Absolutely. It was a storage room, and it was a good place to put it. I might not look at it for a long time. It was cheap, but I know it's there.

Absolutely. Gentlemen, thank you for being with us. We appreciate the time.

Thank you for the time.

And Paul, we're really happy you have your weekends back.

Yeah, absolutely. Thank you.

Back with more here from AWS re:Invent. From Las Vegas, we're live at the Sands, and we'll wrap up in just a moment.