Live from Dublin, Ireland, it's theCUBE. Covering Hadoop Summit Europe 2016, brought to you by Hortonworks. Now your hosts, John Furrier and Dave Vellante.

Okay, welcome back, and we are here live inside theCUBE for a special edition, the European edition of SiliconANGLE's flagship program theCUBE. I'm John Furrier, with my co-host Dave Vellante. Our next guest is Joe Goldberg, Solutions Marketing Head for BMC. Welcome back to theCUBE, good to see you.

Thank you very much, great to be here.

BMC, big data, the B in big, the B in BMC. We're here in Europe, what's the update? Give us the update on what's happening at BMC.

Okay, so from a BMC perspective, the specific set of capabilities that I'm responsible for is what we call, in the traditional space, workload automation; in the big data space, workflow scheduling, or batch. It's a discipline that doesn't get a lot of press. Even in the traditional space, nobody talks about batch, they think about it as kind of real old-school stuff, but it happens to be among the biggest portions of why a lot of companies are going to Hadoop and big data in the first place. So there's a lot of discussion about things like enterprise data warehouse modernization. And I think that that's probably just another word for trying to save a whole lot of money while being able to actually do a lot more with their traditional data assets, and enriching them with all of the stuff that's being gathered and generated in society as a whole.

And also the investments that have been made in the data warehouse, you can't just throw that away.

Absolutely.

A lot of pressure to save it, retool it, revamp it, or replatform it.
And one of the big major components in any of this data management, whether you call it a data warehouse or a data lake, is the batch processing that has to be done both to feed it and maintain it, as well as to ensure that when you do trigger any of the stuff that gets all the hype, the analytics and machine learning and all of this fancy stuff... when it comes to business users, a lot of times they don't need or want live updates to the second. They need to be able to get the data when they want it, and they want it to be up to date at that point. So to make sure you've got the right data at the right time, and then also give users the ability to request that data when they need it, most of the time that's done in batch. So people want to be able to run ad hoc reports, update their dashboards on an ad hoc basis, the care and feeding of the data lake, interleaving big data and Hadoop processing with traditional technologies that are going on in the enterprise. That's what we do with Control-M.

So I think a lot of what you say is true. I mean, a lot of the EDW modernization ROI has been about reduction of investment, lowering the denominator. So the question is, how does BMC play in that lowering of the denominator?

So Control-M is an enterprise solution. We are the leading workflow, workload automation, job scheduling, however you want to describe it, the component that makes sure stuff runs. So whether you're talking about traditional applications like inventory or customer billing and stuff like that, or in the modern environment of Hadoop and big data, the processing that occurs, we make sure that that stuff runs. And since we've come at it from the enterprise, sometimes I resist saying that we're a 35-year-old product, because people immediately say, oh, that's old, it's old school, no longer fashionable, I want modern.

I want modern batch.

Yeah, well, and that's exactly what you get with Control-M.
So the reason we're the leader in our market is because we have kept it modern. We have added capabilities like auditing and forecasting and security and self-service, and all of these kinds of capabilities we now bring to Hadoop by adding support for it. And that's important: we add native support for Hadoop. So we are interacting with Hadoop in exactly the same way that the most modern tools do, using the facilities of Hadoop. So we have this intimate knowledge. We know how to start work in YARN, we know how to track it, we know how to kill it, we know how to control or collect its output. So the simple story from our perspective is that we make Hadoop as easy to operate as any of the other applications in your environment. And we also let you leverage all of your skills, your best practices, all of the resources that you have, so that when you build your next application, whether it's a big data application or any other, you don't have to stop and think, how am I going to integrate this into my environment? How am I going to manage it? It becomes very simple.

So where does YARN end and you pick up? What's the overlap? Are you essentially extending that functionality?

So we don't extend YARN functionality, but in the Hadoop ecosystem, we always use this as an analog to put us into context. When people say, well, what do you do? We say, we're the enterprise-grade Oozie. Okay, so there's a component in Hadoop called Oozie, which is their workflow scheduler. But no matter how much you like it... by the way, the only people I have ever met that said they like Oozie are the Oozie committers. Everybody else, the first thing they say is, I hate Oozie. It's complex, it's very limited. And you look at any application, and again, this I think underlines just how important batch is. No matter what application you look at, there's a way to run batch in that application. But of course, it's just for that application, so it's limited.
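The start/track/kill/collect lifecycle Joe describes maps onto YARN's ResourceManager REST API. Here is a minimal sketch of the requests a scheduler would issue; the hostname is a made-up example, and the submission payload is trimmed to a few illustrative fields rather than a complete application submission context:

```python
# Sketch of driving the YARN ResourceManager REST API -- the same
# interface any YARN-native workflow scheduler would use.
# resourcemanager.example.com is a hypothetical host.
RM = "http://resourcemanager.example.com:8088/ws/v1/cluster"

def new_application_url():
    # Step 1: POST here to ask the ResourceManager for a fresh application ID.
    return f"{RM}/apps/new-application"

def submit_payload(app_id, name, command):
    # Step 2: POST this (simplified) body to {RM}/apps to start the work.
    return {
        "application-id": app_id,
        "application-name": name,
        "am-container-spec": {"commands": {"command": command}},
    }

def status_url(app_id):
    # Step 3: GET here to track the app (ACCEPTED, RUNNING, FINISHED, ...).
    return f"{RM}/apps/{app_id}"

def kill_payload():
    # Step 4: PUT this body to {RM}/apps/{app_id}/state to kill the app.
    return {"state": "KILLED"}
```

Output (logs and results) is then collected out of band, for example via the application's tracking URL, which the status response includes.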
And if you look at an enterprise, you really can't run a mixed workload by using 50 different tools. You need to be able to have that higher-level visualization. And so we operate technically exactly the way Oozie does.

So YARN has a bunch of APIs. You're yet another resource to YARN, is that right?

I guess so, yeah, exactly. So YARN has a set of APIs for how you start work and how you track work, and that's what we use. So technically, we operate exactly the way you would expect a YARN-based or Hadoop workflow scheduler to operate. But what we do, of course, is let you then incorporate all of that work into the context of everything else that you're doing.

So at the end of the day, the enterprises need their stuff up and running. Five nines, SLAs for their customers, their customers' customers. At the end of the day, that's all they care about, right? So that's kind of one, that's now table stakes, we're hearing that conversation on theCUBE.

Absolutely.

The other thing that we're hearing on theCUBE, and certainly validated 100% at Big Data Week, where we had our Big Data SV event in conjunction with that other event, Strata Hadoop, is that it's very clear now, now that the hype's gone and reality's there, that batch and data warehousing are not mutually exclusive with all of this; batch is a big part of the value of Hadoop. And so this modernizing thing is not about one tool over the other, it's the coexistence of a variety of different things. But batch doesn't go away at all.

Exactly.

But you've got to maintain this modernization for new SLAs, the mobile environment, which is bolting on to the stuff that you're talking about.

Exactly.

So take us through that dynamic, because that's real. It's not just a bolt-on, it's really integrated. Talk about that.

So again, this is exactly what we have seen.
From our perspective, we're kind of very smug, sitting and saying, ha, ha, we told you so, that this has happened time and again. We've been around through, take one example, the ERP sort of revolution, right? Everything was going to go away except for ERPs. Of course, it turned out to be considerably different.

But everyone went away except for Oracle.

And SAP.

And SAP. Yeah, well, yeah. And a lot of others.

And a lot of others.

So that's what we have seen in traditional technology, and generally that technology adoption curve has pretty much applied to every single new technology. What we have been doing over that time... we started out on the mainframe. Sometimes I'm a little reluctant to say that, because again, people say, oh, mainframe, like how could you possibly know Hadoop? But here we are at a point where we have seen ERPs, relational databases, backups, file transfers, a ton of core banking applications, you name it, and they've all had the same requirement, which is you want to be able to manage it together with everything else that you have in your environment. Okay, and so you want to be able to leverage the GUI, the processes, the best practices, but there has to be some kind of balance between the sort of lowest common denominator and deep capability in that environment. So you talk to an SAP person, you can't say to them, oh, we have a scheduler for you that doesn't really understand SAP, but it has a really nice interface, right?

It's kick-ass, yeah.

They're going to turn you off immediately, and so there has always been the need to find exactly this balance: how deep of a technical integration do you have to have? And what we've been able to achieve is a design in our solution that allows us to provide a very low-level integration, a really deep, intimate understanding. So when I say that we're using the APIs that YARN provides, this is like the industry standard.
Anybody who wants to run work in a YARN environment, that's the way that they would run it. So we integrate with Hadoop the same way that the most native Hadoop tool would: starting work, being able to understand its status, and getting the output. So we do it exactly the same way, and that's why we use Oozie as an analog, because we do it exactly the same way.

Back in 2011, 2012, when the enterprise companies woke up to Hadoop, the theme was, we're going to make Hadoop enterprise-ready. And today everybody said... that's essentially what you're talking about having done.

That's exactly right. And so we were one of those. I would argue that it was later than 2011 or 2012.

It was a couple of years after that when people started talking about it.

So that's when we actually started, so we were pretty early in the game. And certainly you can even see here on the show floor that very few of the kind of companies that we traditionally compete against are really here. And if they are, it's with some niche solution that they may have acquired. We still feel we're one of the few sort of traditional IT management companies that in a big way have really committed, with our existing tool set that organizations have been using for a long time, and are expanding it to include Hadoop as yet another, just another application, which we think is a good thing.

Because that's what we think enterprise integration is.

Exactly. And that's what people want, they want integration.

Yes. So I've been sort of writing down the business impact of all this stuff, Control-M in particular. So it's automation, which means simplification and lower cost, better quality, fewer errors.

Absolutely.

Availability I guess, speed. Am I missing anything?

Did you read our data sheet?

Is that the business impact?

That's exactly the business impact.
When people bring you in, they say, how much am I going to get out of each of these factors? And that's what's satisfying and gratifying to us: it's exactly the same situation as what happens in the traditional space. So if we look at our traditional market, there are still companies that don't use our solution, or don't even use any of our competitors' solutions. And another analog that we use to kind of compare ourselves is we say, Oozie is the cron of Hadoop, right? And so you say that to anybody who's familiar with managing an enterprise, and they understand exactly what you're talking about. They're struggling with a lot of their users and developers, for a variety of reasons, running mission-critical jobs using cron. And of course, you know, typical Murphy's law scenario: nobody knows or cares until something breaks, and then they can't figure out what the heck is going on, who owned it, how did it get created, all of these kinds of problems. So this is the situation that we are solving for customers that are moving to Hadoop, where they have the same kind of issues. In fact...

You bring up a good point, the cron job example is a good example. And that's why we see SQL never going away. People are familiar with their command line and the tools that they use. And that's just, you know, why change that if they don't want to, but if they extend it. Yeah.

Yeah, and what's really interesting is, that's what's so fun about being in the tech industry. There's always something new coming along.

That's right. That's why we love doing theCUBE.

So possibly in parallel, or, you know, there are certain dynamics where maybe one affects the other. But one of the other things that's happening for almost every enterprise is DevOps, and what I would call... you know, the Hadoop world talks about their solution as the modern data architecture. Well, organizations are looking at a modern application release process, right?
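The unmanaged-cron scenario described above can be pictured with a single crontab entry. The script path and schedule here are made up for illustration; the point is everything the entry does not capture:

```
# A typical unmanaged cron entry: runs the nightly warehouse load at 2:30 AM,
# but records nothing about who owns it, what it depends on, what must run
# after it, or whom to page when it fails. (Paths are hypothetical.)
30 2 * * * /home/etl/scripts/load_warehouse.sh >> /tmp/etl.log 2>&1
```

Ownership, dependencies, and failure handling live only in the author's head, which is exactly the Murphy's law problem Joe describes.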
And so that's another aspect that really is kind of dovetailing with what we're doing in the Hadoop space. So, you know, why do people use tools like Oozie or cron? Frequently it's because that's what the programmers have access to, in the, you know, traditional sort of enterprise structure where IT owns an enterprise solution and has sort of, you know, a gate around it, and they don't let anybody outside of IT sort of mess with it. That dynamic is significantly changing as a result of the need for speed and to deliver new services, and this whole DevOps thing. And so what's happening is that we are having a lot of conversations about Control-M, coming from the enterprise and being an IT-centric solution in the past. We have now added a lot of capability and exposed our product and all of our solution capabilities with RESTful APIs and so forth, so that you can embed Control-M into this modern application release lifecycle. So you can start using Control-M right from inception. The reason that we heard time and again for why a programmer uses cron: because they can. They have access to it, they have full control, and they don't have to worry about what happens down the road when that has to transition to production.

And it's been a mess.

Now, and you know, this has benefit to our entire market, not just to the big data market. With the capabilities that we've added, you know, we are enabling developers from inception to create workflows in Control-M using a notation that they're familiar with. It happens to be JSON specifically, but, you know, that's just a detail. So now they're comfortable that they can build those flows, test those flows in their environment, and let them move together with the rest of their application throughout the entire delivery pipeline. And, you know, that's it, these two trends are really... they're colliding, coming together beautifully.

Absolutely. Joe, thanks so much for coming on theCUBE.
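The JSON "jobs-as-code" notation Joe mentions can be sketched as follows. This is an illustrative guess at the shape of such a definition, not the exact Control-M schema; the folder, job names, commands, and field names are all hypothetical:

```python
import json

# Hedged sketch of a "jobs-as-code" workflow: a developer defines the flow
# as JSON that can be versioned, tested, and promoted through the delivery
# pipeline alongside the application. Field names are illustrative only.
workflow = {
    "NightlyIngest": {
        "Type": "Folder",
        "ExtractOrders": {
            "Type": "Job:Command",
            "Command": "spark-submit extract_orders.py",  # hypothetical script
            "RunAs": "etl",
        },
        "LoadWarehouse": {
            "Type": "Job:Command",
            "Command": "load_warehouse.sh",  # hypothetical script
            "RunAs": "etl",
        },
        # Run the two jobs in order: extract first, then load.
        "Flow": {"Type": "Flow", "Sequence": ["ExtractOrders", "LoadWarehouse"]},
    }
}

# The serialized definition is what would be checked into source control
# and pushed through the scheduler's REST API at each pipeline stage.
definition = json.dumps(workflow, indent=2)
```

Unlike the crontab entry earlier, ownership (`RunAs`) and ordering (`Flow`) are explicit in the artifact itself, which is what makes the transition to production manageable.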
Real great to see you.

Thank you very much for having us.

So, good on BMC, bringing it all together, modernizing existing tools. This is the theme: building layers of intelligence and innovation on top of existing great stuff. Thanks for joining and sharing your insight. BMC on theCUBE, live here in Dublin, Ireland. We'll be back with more live coverage with theCUBE here at Hadoop Summit Europe 2016. We'll be right back after this short break.