Live from New York, it's theCUBE. Covering theCUBE New York City 2018. Brought to you by SiliconANGLE Media and its ecosystem partners. Okay, welcome back everyone to theCUBE NYC. This is theCUBE's live coverage of CUBE NYC and the Strata Data Conference, formerly Strata Hadoop, all things data happening here in New York this week. I'm John Furrier with Peter Burris. Our next guest is Basil Faruqui, lead solutions marketing manager, digital business automation, within BMC. He's a returning guest; he was here with us last year, and also at Big Data SV, which has now been renamed CUBE NYC and CUBE SV, because it's not just big data anymore. We're hearing words like multi-cloud, Istio, Kubernetes. Data is now so important it's up and down the stack, impacting everyone. We talked about this last year with Control-M, how you guys are automating, in a hurry, the four pillars of pipelining data. The setup days are over. Welcome to theCUBE. Well, thank you, and it's great to be back on theCUBE. And yeah, what you said is exactly right. You know, big data has really, I think, now been distilled down to data. Everybody understands data is big and it's important, and it's quite a cliche, but to a large degree, data is the new oil, as people say. And I think what you said earlier is important: we've been very fortunate to be able to not only follow the journey of our customers, but be a part of it. So about six years ago, some of the early adopters of Hadoop came to us and said, look, we use your products for traditional data warehousing on the ERP side for orchestrating workloads. We're about to take some of these Hadoop projects into production, and we really feel that the Hadoop ecosystem is lacking enterprise-grade workflow orchestration tools.
So we partnered with them, and some of the earliest goals they wanted to achieve were to build a data lake and provide richer and wider datasets to end users to do dashboarding, customer 360 and things of that nature. Very quickly, in about five years' time, we have seen a lot of these projects mature from "how do I build a data lake" to now applying cutting-edge ML and AI, and cloud is a major enabler of that. As we were talking about earlier, it's really taking away excuses for not being able to scale quickly from an infrastructure perspective. Now the question is: is it Hadoop, or is it S3? Is it Azure Blob Storage? Is it Snowflake? And from a Control-M perspective, we're very platform- and technology-agnostic. So some of our customers who started with Hadoop as a platform are now looking at other technologies like Snowflake. One of our customers describes it as the spine, or a power strip, of orchestration: regardless of what technology you have, you can just plug it in and not worry about how to rewire the orchestration workflows, because Control-M is taking care of it. Well, you'll probably always have to worry about that to some degree, but I think where you're going, and this is what I'm going to test with you, is that as data is increasingly recognized as a strategic asset, as analytics is increasingly recognized as the way you create value out of those data assets, and as the business becomes increasingly dependent on the output of analytics to make decisions and ultimately, through AI, to act differently in markets, you are embedding these capabilities, these technologies, deeper into the business. They have to become capabilities. They have to become dependable, reliable, predictable: cost, performance, all these other things.
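The "power strip" idea described above can be sketched in a few lines. This is a hypothetical illustration, not Control-M's actual API: workflow steps talk to an abstract adapter, so swapping Hadoop for Snowflake means swapping the adapter, not rewiring the workflow. All class and dataset names here are invented for the example.

```python
# Hypothetical sketch of platform-agnostic orchestration: the workflow
# logic depends only on an abstract adapter interface, so the backing
# platform can change without touching the workflow itself.

class StorageAdapter:
    """Common interface every platform adapter implements."""
    def read(self, dataset: str) -> str:
        raise NotImplementedError

class HadoopAdapter(StorageAdapter):
    def read(self, dataset: str) -> str:
        return f"hdfs://lake/{dataset}"

class SnowflakeAdapter(StorageAdapter):
    def read(self, dataset: str) -> str:
        return f"snowflake://warehouse/{dataset}"

def build_customer_360(adapter: StorageAdapter) -> list[str]:
    # The workflow is identical regardless of platform.
    return [adapter.read(d) for d in ("crm", "erp", "clickstream")]

# Migrating from Hadoop to Snowflake is a one-argument change:
on_hadoop = build_customer_360(HadoopAdapter())
on_snowflake = build_customer_360(SnowflakeAdapter())
```

The design choice being illustrated is simply dependency inversion: the orchestration layer owns the sequencing, and each platform is a pluggable implementation detail.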
That suggests that ultimately the historical approach of focusing on the technology and trying to apply it to a periodic series of data science problems has to become a bit more mature, so it actually becomes a strategic capability the business can operate on; it means taking that underlying data science technology and turning it into business operations. That's where a lot of the work has to happen. Is that what you guys are focused on? Yeah, absolutely. And I think one of the big differences we're seeing in the industry is that this time around, the pull for how you enable technology to drive the business is really coming from the line of business, versus starting on the technology side of the house and then coming to the business and saying, hey, we've got some cool technologies that can probably help you. It's really the line of business now saying, no, I need better analytics so I can drive new business models for my company. So the need for speed is greater than ever, because the pull is from the line-of-business side. And this is another area where we are unique: Control-M has been designed in a way where it's not just a set of solutions or tools for the technical guys. The line of business is getting closer and closer, blending into the technical side as well. They have a very keen interest in understanding: are the dashboards going to be refreshed on time? Are we going to be able to get all the right promotional offers out at the right time? I mean, we're here at Strata NYC; there's a lot of real-time promotion happening here. The line of business has a direct interest in the delivery and timing of all of this. So we have always had multiple interfaces to Control-M. A business user who wants to know whether the promotional offers are going to go out at the right time and on schedule has a mobile app to do that.
A developer who's building a complex multi-application platform has an API and a programmatic interface to do that. Operations, which has to monitor all of this, has rich dashboards to do that. So that's one of the areas that has been key to our success over the last couple of decades, and we're seeing that translate very well into the big data space. So I want to just go under the hood for a minute, because I love that answer and I'd like to pivot off what Peter said, tying it back to the business. Okay, that's awesome. And I want to learn a little bit more about this, because we talked about it last year and I'm kind of seeing it now. Kubernetes and all this orchestration is about workloads. You guys nailed the workflow issue, complex workflows. And if you look at it, adding the line of business into the equation is complexity in and of itself. As more workflows exist within each line of business, whether it's recommendations and offers or other workflow issues, more lines of business make it complex for IT to even deal with. So you guys have nailed that. How does that work? I mean, you plug it in, and the lines of business have their own developers. So how do the people who care about the workflows engage? So that's a good question, and with orchestration and automation now becoming very generic terms, it's important to classify where we play. There are a lot of tools that do release and build automation. There are a lot of tools that do infrastructure automation and orchestration. All of this infrastructure and release management is ultimately done to run applications on top of it. And the workflows of the application need orchestration; that's the layer we play in. And if you think about how the end user, the business, and the consumer interact with all this technology, it's through applications.
So the orchestration of the workflows inside the applications, whether you start all the way from an ERP or CRM, then land in a data lake, then run an ML model, and out come the recommendations and analytics, that's the layer we are automating today. Obviously all of this- Automating away the technical complexity for the users. Correct. And the line of business obviously has a lot more control. You're seeing roles like chief digital officer emerge; you're seeing CTOs with mandates like, okay, you're going to be responsible for all applications that are customer-facing, while the CIO takes care of everything that's inward-facing. It's not settled science; the structure is evolving. It's evolving fast. It's evolving fast. But what's clear is the line of business has a lot more interest and influence in driving these technology projects. And it's important that technologies evolve in a way where the line of business can not only understand them but take advantage of them. So I think it's a great question, John, and I want to build on that and then ask you something. The way we look at the world is we say the first 50 years of computing were known process, unknown technology. The next 50 years are going to be unknown process, known technology. That may sound strange, but think about what it means. Known process, unknown technology: Control-M and related types of technologies tended to focus on how you put in place predictable workflows in the technology layer. And now, unknown process, known technology, driven by the line of business: now we're talking about controlling process flows that are being created bespoke, strategic, differentiated. Well, dynamic too, I mean. Highly dynamic, and those workflows, in many respects those technologies, piecing applications and services together, become the process that differentiates the business. But you're still focused on the infrastructure a bit.
You've got to nail the technical complexity. Is that right? Yeah, that's exactly right. We see our goal as abstracting the complexity of the underlying application, data, and infrastructure. It's quite amazing. So it's easy to reconfigure to the business needs. Exactly, exactly. So whether you are on Hadoop and now you're thinking about moving to Snowflake, or tomorrow something else comes up, the orchestration of the workflow, as a business, as a product, our goal is to continue to evolve quickly and in a manner where we continue to abstract the complexity. So I've got to ask you, we're having a lot of conversations around Hadoop versus Kubernetes and multi-cloud. Cloud has certainly come in and changed the game; there's no debate on that. How it changes things is debatable, but we know that multiple clouds are going to be the modus operandi for customers. Correct. So I've got a lot of data now, I've got pipeline complexities, and workflows are potentially going to get even more complex. How do you see the impact of the cloud? How are you guys looking at that? And what are some customer use cases you see for you guys? So what I mentioned earlier, being platform- and technology-agnostic, is actually one of our unique differentiating factors. So whether you are in AWS, or Azure, or Google, or on-prem, or still on a mainframe, and we're in New York, where a lot of the banks and insurance companies still do some of their most critical processing on the mainframe, the ability to abstract all of that, whether it's cloud or legacy solutions, is one of our key enablers for our customers. And I'll give you an example. Malwarebytes is one of our customers, and they've been using Control-M for several years.
Their entire infrastructure is built primarily on AWS, but they are now utilizing Google Cloud for some of their recommendation engines and sentiment analysis, because their goal is to pick the best-of-breed technology for the problem they need to solve. The best-of-breed services in the cloud. To solve the business problem. And from a Control-M perspective, that transition from AWS to Google Cloud is completely abstracted for them. So if tomorrow it's Google, or Azure, or a private cloud they decide to build, they will be able to extend the same workflows. But you can build these workflows across whatever set of services are available. Correct, and you bring up one important point: it's not only being able to build the workflows across platforms, but being able to define and track dependencies across all of this, because none of this is happening in silos. If you want to use Google's API to do the recommendations, well, you've got to feed it the data, and the data pipeline, like we talked about last time, data ingestion, data storage, data processing, and analytics, has very intricate dependencies. The solution should be able to manage not only the building of the workflow, but the dependencies. But you're defining those elements as fundamental building blocks through a Control-M model. Correct. That allows you to then treat the higher-level services as reliable, consistent, correct capabilities.
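The cross-cloud dependency idea above can be sketched as a plain dependency graph. This is a hypothetical illustration, not Control-M's actual workflow format; the step names and platform assignments are invented. The point is that the orchestrator reasons about ordering, while each step may run on a different cloud.

```python
# Hypothetical sketch: a cross-cloud pipeline as a dependency graph.
# Each key is a step; each value is the set of upstream steps whose
# outputs it needs. Where a step runs (AWS, Google Cloud) is noted in
# comments because the orchestrator only cares about the ordering.
from graphlib import TopologicalSorter

steps = {
    "ingest_telemetry":   set(),                   # runs on AWS
    "store_raw":          {"ingest_telemetry"},    # S3
    "sentiment_analysis": {"store_raw"},           # Google Cloud API
    "recommendations":    {"sentiment_analysis"},  # Google Cloud
}

# A valid execution order that respects every cross-cloud dependency.
order = list(TopologicalSorter(steps).static_order())
```

With the graph declared once, moving `sentiment_analysis` to a different provider changes where the step executes, not the dependency structure the orchestrator enforces.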
And the other thing I would like to add here is, not only can you build complex, multi-platform, multi-application workflows, but you never lose focus of the business service or business process they support. You can tie all of this to a business service, and then when these things get complex and there are problems, let's say an ETL job fails somewhere upstream, Control-M will immediately be able to predict the impact and tell you: this means the recommendation engine will not be able to make the recommendations. Now the staff working on remediation understands the business impact, versus looking at a screen where there are 500 jobs and one of them has failed, and what does that really mean? Set priorities and focal points and everything else. Right. So I want to just wrap up by asking how your talk went at the Strata Data Conference. What were you talking about? What was the core message? Was it Control-M? Was it customer presentations? What was the focus? So the focus of yesterday's talk was, well, academic talks are great, but it's important to show how things work in real life. So the session was focused on a real use case from a customer, Navistar. They have IoT data-driven pipelines where they are predicting failures of parts inside the trucks and buses they manufacture, and reducing vehicle downtime. So we wanted to simulate a demo like that, and that's exactly what we did, and it was very well received. In real time, we spun up an EMR environment in AWS, automatically provisioned Control-M infrastructure there, applied Spark and machine learning algorithms to the data, and out came the recommendations. At the end it was, you know, here are the vehicles that- Fix their brakes. Exactly. So it was very, very well- I mean, that's a real-world example. There's real money to be saved: maintenance, scheduling, potential liability from accidents. Liability is a huge issue for a lot of manufacturers.
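The failure-to-business-impact mapping described above amounts to walking the dependency graph downstream from the failed job and reporting any business services that can no longer be delivered. This is a minimal hypothetical sketch, not Control-M's implementation; the job names and service mapping are invented for the example.

```python
# Hypothetical sketch of impact prediction: given a failed job, walk
# the consumer graph downstream and collect every business service
# that depends (directly or transitively) on its output.

# job -> jobs that consume its output
consumers = {
    "etl_load":              ["feature_build"],
    "feature_build":         ["recommendation_engine"],
    "recommendation_engine": [],
}

# jobs that deliver a named business service
services = {"recommendation_engine": "Promotional offers"}

def impacted_services(failed_job: str) -> list[str]:
    seen, stack, hit = set(), [failed_job], []
    while stack:
        job = stack.pop()
        if job in seen:
            continue
        seen.add(job)
        if job in services:
            hit.append(services[job])
        stack.extend(consumers.get(job, []))
    return hit

print(impacted_services("etl_load"))  # ['Promotional offers']
```

This is why the remediation view can say "promotional offers are at risk" instead of "job 347 of 500 failed": the graph carries the link from technical jobs to business services.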
Navistar has been at the leading edge of applying technology to that business. They've really been a poster child for digital transformation. Here's a company that's been around for 100-plus years, and when we talk to them, they tell us they have every technology under the sun that has come along since the mainframe, and for them to be transforming and leading in this way, we're very fortunate to be a part of their journey. Well, we'd love to talk more about some of these customer use cases; that's what people love about theCUBE. We want to do more of them. Share those examples. People love to see proof in real-world examples, not just talks. I appreciate it. Absolutely. Thanks for sharing. Appreciate it. Thanks for the insights. We're here at theCUBE, live in New York City, part of CUBE NYC. We're getting all the data and sharing it with you. I'm John Furrier with Peter Burris. Stay with us for more day two coverage after this short break.