Live from New York, it's theCUBE. Covering theCUBE, New York City, 2018. Brought to you by SiliconANGLE Media and its ecosystem partners.

Okay, welcome back everyone. We're here live in New York City for CUBE NYC. This is our ninth year covering the big data ecosystem; it used to be Hadoop, now it's AI and machine learning, and it keeps growing. I'm John Furrier with Dave Vellante. Our next guest is Josh Rogers, CEO of Syncsort, who has a long history with theCUBE. You guys have been on every year, and we really appreciate chatting with you. It's been fun to watch the evolution of Syncsort and to get your insight. Thanks for coming on.

Thanks for having me, it's great to see you.

So you guys have constantly been on this wave, and it's been fun to watch. You had a lot of IP in your company, and we've watched you surf this big data wave while making some good decisions, some good calls. You're always out front, on the right parts of the wave. Now it's cloud, and you're doing some things there. So give us a quick update. You've got a brand refresh, the new logo, and you've got some news. Just a quick update on Syncsort.

So I'll start with the brand refresh. We refreshed the brand, and you see that in the web properties and in the messaging we use in all of our communications. We did that because the value proposition of the portfolio had expanded so much, and we had gained so much more insight into some of the key use cases we're helping customers solve, that we really felt we had to do a better job of telling our story and, probably most importantly, of engaging at a more senior level within these organizations.
What we've seen is that, when you think about the largest enterprises in the world, we offer a series of solutions around two fundamental value propositions that tend to be top of mind for these executives. The first is: how do I take the 20, 30, 40 years of investment in infrastructure and run it as efficiently as possible? I can't make any compromises on the availability of that environment, and I certainly have to improve its governance and security. But fundamentally I need to make sure I can run those mission critical workloads while saving some money along the way, because what I really want to do is be a data-driven enterprise. What I really want to do is take advantage of the data that gets produced in these transactional applications that run in my AS/400 or IBM i environment, my mainframe environment, even my traditional data warehouse, and make sure I'm getting the most out of that data by analyzing it with a next generation set of technologies.

One of the trends I want to get your thoughts on, Josh, because you're kind of talking through the big mega trend, which is being infrastructure agnostic from an application standpoint. That's the trend with DevOps, and you guys have certainly had diverse solutions across your portfolio, but at the end of the day this is the abstraction layer customers want. They want to run workloads on environments that they know are in production and work well with applications. So they almost want to view the infrastructure, or the cloud if you will, as agnostic, and let the programmability take care of itself under the hood.

Right, and what we see is that people are absolutely extending and modernizing existing applications. This is in the large enterprise, and those applications and core components will still run on mainframe environments. So what we see in terms of use cases is: how do we help customers monitor the performance of those applications?
If I have a tier that's sitting in the cloud but transacting with the mainframe behind the firewall, how do I get an end-to-end view of application performance? How do I take the data that ultimately gets logged in a DB2 database on the mainframe and make it available in a next generation repository like Hadoop, so that I can do advanced analytics? When you think about solving both the optimization and the integration challenge there, you need a lot of expertise on both sides, the old and the new. And I think that's what makes us unique.

You guys have done a good job with integration. I want to ask a quick question on the integration piece, because it's becoming more and more table stakes, but also challenging at the same time. Integration and connecting systems together: if they're stateless, there's no problem, you use APIs. But as you start to get data that needs state information, you start to think about some of the challenges around disparate systems being distributed but networked, and in some cases decentralized. So distributed networking is being radically changed by the data decisions in the architecture, but also by integration, call it API 2.0, this new way to connect and integrate.

Yeah, so what we've tried to focus on is solving that piece between these older applications on these legacy platforms and making them available to whatever the consumer is. Today we see Kafka, and in Amazon we see Kinesis, as the key buses delivering data as a service. And so the role we see ourselves playing, and what we announced this week, is an ability to track change data in these older systems and deliver it in real time to these new targets: Kafka, Kinesis, and whatever comes next.
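The pattern described here, capturing a change from a legacy database and publishing it to a modern event bus, can be sketched roughly as follows. This is a minimal illustration, not Syncsort's actual product: the table name, topic name, and envelope fields are all hypothetical, and a stub stands in for a real Kafka or Kinesis producer so the sketch runs without a broker.

```python
import json
import time

def to_cdc_event(table, op, before, after):
    """Wrap a database change in a CDC-style envelope suitable for a
    message bus like Kafka or Kinesis. Field names are illustrative."""
    return {
        "table": table,    # source table, e.g. a DB2 table on the mainframe
        "op": op,          # "insert", "update", or "delete"
        "before": before,  # row image before the change (None for inserts)
        "after": after,    # row image after the change (None for deletes)
        "ts_ms": int(time.time() * 1000),
    }

class StubProducer:
    """Stand-in for a real producer client; a real implementation would
    call something like producer.send(topic, value) over the network."""
    def __init__(self):
        self.sent = []

    def send(self, topic, value):
        self.sent.append((topic, json.dumps(value).encode("utf-8")))

producer = StubProducer()
event = to_cdc_event(
    table="BANKDB.ACCOUNTS",
    op="update",
    before={"acct": 42, "balance": 100.0},
    after={"acct": 42, "balance": 250.0},
)
producer.send("cdc.accounts", event)
```

The point of the envelope is that downstream consumers (fraud detection, a data lake loader, and so on) can subscribe to the topic without touching the mainframe itself.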
Because really that's the fundamental promise we're trying to make to our customers: we'll help you solve the integration challenge between this infrastructure you've been building for 30 years and this next generation technology that lets you get the next leg of value out of your data.

So when you think about the evolution of this whole big data space, the early narrative in the trade press was that NoSQL is going to replace Oracle and DB2, the data lake is going to replace the EDW, unstructured data is all that matters, and so forth. Now you look at what's really happened: the EDW is a fundamental component of making decisions and insights, and SQL was the killer app for Hadoop. Take an example like fraud detection, and this is where you guys sit in the middle, from a standpoint of data quality and data integration. Think about what we've done in the past 10 years: fraud detection has gone from me looking at my statement a month or two later and then calling the credit card company, to a text that's instantaneous. Still some false positives, and I'm sure you're working on that. So maybe you could describe that use case, or any other favorite use case, and your role there in terms of taking those different data sources, integrating them, and improving the data quality.

Right, so think about a use case where I'm trying to improve the SLA or the responsiveness of how I detect fraud: rather than trying to detect it on a daily basis, I'm trying to detect it at transaction time. The reality is you want to leverage the existing infrastructure you have. So if you have a data warehouse that has detailed information about transaction history, maybe that's a good source.
If you have an application running on the mainframe that's doing those transactions in real time, the ultimate answer is: how do I knit together the existing infrastructure I have and embed the additional intelligence I need from these new capabilities, for example using Kafka, to deliver a complete solution? What we do is help customers tie that together. Specifically, we announced the integration I mentioned earlier, where we can take a change data element in a DB2 database and publish it into Kafka. That is a key requirement in delivering this real time fraud detection, if in fact I'm running transactions on a mainframe, which most banks are.

Without ripping and replacing.

Why would you rip out an application that serves your core customer file when you can just extend it?

And you mentioned the Cloudera 6 certification. You guys have been early on there. Maybe talk a little bit about that relationship, and the engineering work that has to get done for you to be in the press release on day one.

So, we just mentioned that my first time on theCUBE was in 2013, and that was on the back of our initial product release in the big data world. When we brought the initial DMX-h release to market, we knew that we needed deep partnerships with Cloudera and the key platform providers. I went and saw Mike Olson, introduced myself, and he was gracious enough to give me an hour so I could explain what we thought we could do to help them develop more value proposition around their platform. It's been a terrific relationship. Our architecture, and our engineering and product management relationship, are such that we can very rapidly certify against and work with their new releases, usually within a couple of days.
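The transaction-time detection described above, joining warehouse history with a live event streamed off the mainframe, might look roughly like this as a toy sketch. The threshold rule and field values are invented for illustration; real fraud scoring is far more sophisticated than a simple multiple-of-the-mean check.

```python
# Toy transaction-time fraud check: compare a live transaction (e.g. a CDC
# event streamed from the mainframe via Kafka) against per-account history
# that would normally come from the data warehouse.
from statistics import mean

def is_suspicious(history, amount, factor=5.0):
    """Flag a transaction far larger than the account's typical spend.
    `history` is a list of past transaction amounts; `factor` is an
    invented threshold, not a real scoring model."""
    if not history:
        return False  # no baseline yet, let it through
    return amount > factor * mean(history)

# Historical amounts for one account, as the warehouse might supply them.
past = [12.50, 40.00, 23.75, 31.00]

print(is_suspicious(past, 25.00))   # a typical purchase
print(is_suspicious(past, 900.00))  # well outside the baseline
```

The design point the interview makes is that neither piece is replaced: the warehouse keeps supplying the baseline, the mainframe keeps running the transactions, and the new streaming layer connects them.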
And so not only can customers take advantage of that, which is pretty unique in the industry, but we get some visibility from Cloudera, as evidenced by Tendu's quote in the press release that went out this week, which is terrific.

Talk about your business a little bit. You guys are like a 50 year old startup. You've had this really interesting history. I remember when I first started in the industry following you guys, and since then you've restructured the company, done some spin outs, done some M&A. But it seems to be working. Talk about growth and progress.

So we're the leader in the big iron to big data market. We define that as allowing customers to optimize their traditional legacy investments for cost and performance, and then helping them maximize the value of the data that gets generated in those environments by integrating it with next generation analytic environments. To do that we need a broad set of capabilities. There are a lot of different ways to optimize existing infrastructure. One is capacity management, so we made an acquisition about a year ago in the capacity management space, allowing customers to figure out: how do I make sure I've got not too much and not too little capacity? That's an example of optimization. Another area of capability is data quality. If I'm going to maximize the value of the data that gets produced in these older environments, it'd be great if, when it lands in these next generation repositories, it's as high quality as possible. So we acquired Trillium, and that's coming up on two years ago now. We think that's a great capability for our customers, and it's going terrifically. We took their core data quality engine, and now it runs natively on distributed Hadoop infrastructure. We have customers leveraging it to deliver an unprecedented volume of matching. So not only breakthrough performance, but this whole notion of write once, run anywhere: I can run it in an SMP environment.
I can run it on Hadoop. I can run it on Hadoop in the cloud. And so we've seen terrific growth in that business, based on our continued innovation, particularly in pointing it at the big data space.

Yeah, one of the things I'm impressed with is that you guys have transformed yourselves, so having a transformation message for your customers carries a lot of credibility. But what's interesting is that in the world of containers and Kubernetes, now in multicloud, you're seeing that you don't have to kill the legacy to bring in the new stuff. You can connect systems, which is what you've done with legacy systems, and connect the data. You don't have to kill the old to bring in the new. You can do cloud native, you can do some really cool things.

Right, I think this rip and replace concept is going away.

You can put containers around it too. That helps.

Right, and it's expensive and it's risky, so why do that? I think that's the realization. The reality is that when people build these mission critical systems, they stay in place not for five years but for 25 years. So the question is: how do you allow customers to leverage what they have and the investment they've made, while taking advantage of the next wave? That's what we're singularly focused on, and I think we're doing a great job of that, not just for customers but also for these next generation partners, which has been a lot of fun for us.

And we've also heard from people doing analytics that they want their own multi-tenant isolated environments, which kind of goes to: if a system is doing a great job on a mission critical thing, don't screw it up, don't bundle it. Just connect it to the network and you're good.

And on the cloud side, we're continuing to look at our portfolio and ask: what capabilities will customers want to consume in a cloud delivery model? We've been doing that in the data quality space for quite a while.
About three months ago we launched and announced capacity management as a service. So you'll continue to see us, on both the optimization side and the integration side, deliver new ways for customers to consume the capabilities they need.

That's a key thing for you guys: integration. That's pretty much how you put the stake in the ground and engineer your activities, around integration.

Yeah, we start with the premise that you're going to need to continue to run these older investments you've made, and you're going to integrate the new stuff with them.

What's next? What's going on for the rest of the year for you guys?

So we'll continue to invest heavily in the real time and change data capture space. We think that's really interesting, and we're seeing a tremendous amount of demand there. We've made a series of acquisitions in the security space. We believe that the ability to secure data in the core systems, and on its journey to the next generation systems, is absolutely critical, so we'll continue to invest there. And then I'd say governance. That's an area we think is incredibly important: as people start to really take advantage of these data lakes they're building, they have to establish real governance capabilities around them, and we believe we have an important role to play there. There are other adjacencies, but those are probably the big areas we're investing in right now.

Continuing to move the ball down the field in the Syncsort cadence of acquisitions and organic development. Congratulations, Josh, and thanks for coming on. Josh Rogers, CEO of Syncsort, here inside theCUBE. I'm John Furrier with Dave Vellante. Stay with us for more big data coverage, AI coverage, and cloud coverage, as part of CUBE NYC. We're live in New York City, and we'll be right back after this short break. Stay with us.