Live from Orlando, Florida, it's theCUBE, covering .conf18, brought to you by Splunk. We're back in Orlando, Dave Vellante with Stu Miniman. John Rooney is here, he's the Vice President of Product Marketing at Splunk. Lots to talk about, John, welcome back. Thank you, thanks so much for having me back. Yeah, we've had a busy couple of days, we've announced quite a few things, and we're excited about what we're bringing to market. Okay, well let's start with yesterday's announcements, Splunk 7.2. What are the critical aspects of 7.2, what do we need to know? Yeah, I think first, with Splunk Enterprise 7.2, a lot of what we wanted to work on was manageability and scale. If you think about the core key features, there's smart storage, which is the ability to separate compute and storage and move some of that cool and cold storage off to blob, sort of API-level blob storage. A lot of our large customers were asking for it, we think it's going to enable a ton of growth and a ton of use cases for customers, and it's just smart design on our side, so we've been really excited about that. So that's simplicity and it's less costly, right? Exactly, and it's cheaper storage. You free up the resources to focus on what you're asking of Splunk, running the searches and the saved searches; move the storage off somewhere else and pull it back when you need it. And when I add an indexer, I don't have to add both compute and storage, I add whatever I need in granular increments, right? Absolutely, it just enables more graceful and elastic expansion. Okay, that's huge. What else should we know about 7.2? So workload management, which again is another manageability and scale feature. The great thing about Splunk is you put all your data in there and multiple people can ask questions of that data.
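To make the smart storage idea above concrete: this kind of compute/storage separation is configured in Splunk through indexes.conf, roughly as sketched below. The volume name, S3 bucket, and index name here are invented examples, not details from the interview.

```ini
# Hypothetical SmartStore-style setup; names and paths are examples only.
[volume:remote_store]
storageType = remote
path = s3://example-bucket/smartstore    # cooler buckets move off to object storage

[web_logs]
homePath   = $SPLUNK_DB/web_logs/db
remotePath = volume:remote_store/$_index_name    # compute stays on the indexer
```

The point of the feature, as John describes it, is that adding an indexer no longer means adding storage in lockstep; the object store grows independently.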
For most people, it's just like an apartment building: if you only have one hot water heater and a bunch of people are taking a shower at the same time, maybe you want to grant some privilege and say, the penthouse is going to get the hot water first, other people not so much. That's really the underlying principle behind workload management. There are certain groups and certain people that are running business-critical or mission-critical searches. We want to make sure they get the resources first, and then maybe people that are experimenting or kicking the tires get a little bit of a gradation of resources. So that's essentially programmatic SLAs. I can set those policies, I can change them. Absolutely, it's the same level of granular control as you would have with, let's say, access control. It's the same underlying principle. Okay. Other things? Go ahead, Stu. Yeah, John, you guys always have some cool, pithy statements. One of the things that jumped out at me in the keynotes, because it made me laugh, was the end of metrics. Yes. We've been talking about data, and the line I heard today was, Splunk users are at the crossroads of data. Yes. So maybe give us a little insight into what you're doing differently, different ways of managing data, because every company can interact with the same data. Why is the Splunk user different, what do they do differently, and how is your product different? Yeah, I mean, absolutely. I think the core of what we've always done, and Doug talked about it in the keynote yesterday, is this idea of expansive, investigative search.
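The hot-water analogy reduces to weighted resource pools. A minimal sketch of the principle, not Splunk's implementation; pool names and weights are made up:

```python
# Toy resource-pool scheduler illustrating workload management:
# higher-priority pools get a larger share of a fixed resource.

def allocate(pools, total_cpu):
    """Split total_cpu across pools in proportion to their weight."""
    total_weight = sum(weight for _, weight in pools)
    return {name: total_cpu * weight / total_weight for name, weight in pools}

pools = [
    ("mission_critical", 6),  # business-critical searches get the hot water first
    ("ad_hoc", 3),            # everyday investigative users
    ("tire_kickers", 1),      # experimentation gets what's left
]

shares = allocate(pools, total_cpu=100)
```

The same shape of policy, applied per group or per user, is what makes the SLAs "programmatic": changing a weight changes the gradation without touching the searches themselves.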
The idea is that you're not exactly sure what the right question is, so you go in, ask a question of the data, which leads you to another question, which leads you to another question, and that's the finding-the-needle-in-a-pile-of-needles that Splunk's always been great at. We think of that as the investigative, expansive search. Yeah, I'm sorry, just to, you know, when I think back, I remember talking with companies five years ago, and they'd say, okay, I've got my data scientist, but finding the right question to ask once I'm swimming in the data can be really tough. It sounds like you're getting answers much faster, and it's not necessarily a data scientist, maybe it is, we saw BMW on stage, but help us understand why this is so much simpler and faster. Yeah, again, it's the idea that IT and security professionals don't necessarily have to know what the right question is, or even anticipate the answer, but can find that in an evolving, iterative process. And there's flexibility, you're in no way penalized, you don't have to go back and re-ingest the data when you change your query; you just ask the question, which leads to another question. That's how we think about the investigative side. From a metrics standpoint, and this is the third big feature we have in Splunk Enterprise 7.2, an improved metrics visualization experience: our investigative search, which again we think is the best in the industry, is great when you're not exactly sure what you're looking for and you're doing a deep dive. But if you know what you're looking for from a monitoring standpoint, you're asking the same question again and again, and you want an efficient and easy way to track that if you're just saying, I'm looking for CPU
utilization or some other metric. Yeah, just one last follow-up on that. The name of the show is .conf because it comes from the config file, and everywhere you look, there are people who are in the code versus a GUI, graphical and visualization. What are you hearing from your user base? How do you balance between the people that want to get in there versus being able to point and click or ask a question? You know, this company was built off of the strength of our practitioners and our community, so we always want to make sure that we create a great and powerful experience for those technical users and the people that are in the code and in the configuration files. But that's one of the underlying principles behind Splunk Next, which was a big announcement on day one: to bring that power of Splunk to more people. So, create the right interface for the right persona and the right people. The traditional Linux sysadmin who's working in IT or security has a certain skill set, so SPL and those things are native to them. But if you're a business user and you're used to working in Excel or doing pivot tables, you need a visual experience that is more native to the way you work. And the information that's sitting in Splunk is valuable to you. We just want to get it to you in the right way. Similarly, we talked today in the keynote about application developers. The idea of saying, everything you need will be delivered in a payload as a JSON object, makes a lot of sense if you're a modern application developer. If you're a business analyst somewhere, that may not make a lot of sense. So we want to be able to serve all of those personas equally. So you've made metrics kind of a first-class citizen. Absolutely. Opening it up to more people. I also wanted to ask you about the performance gains. I was talking to somebody.
I want to make sure I got these numbers right. It was literally like three orders of magnitude faster. I think the number was 2,000 percent, 2,000 times faster. I don't know if I got that number right; it just sounds implausible. Yeah, that's specifically what we're doing around Data Fabric Search, which we announced in beta on day one, and it's simply because of the approach to the architecture and the approach to the data. Splunk is already amazingly fast, best in class in terms of scale and speed. But you realize that what's fast today, because of the pace and growth of data, isn't quite so fast two, three, four years down the road. So we're really focused on looking well into the future and enabling those orders-of-magnitude gains by completely re-imagining and re-thinking what the architecture looks like. So talk about that a little bit more. Is that the source of the performance gain? Is it the architecture? Is it tighter code? Was it a platform do-over? No, it wasn't a platform do-over. It's the idea of, in some cases, federating a search between one index here and one index there, having sort of a virtualization layer that also taps into compute living, let's say, in Apache Spark, taking advantage of those types of open source projects and technologies to further enable and power the experiences that our customers ultimately want. So we're always looking at what problems our customers are trying to solve and how we deliver for them through the product, and that constant iteration, that constant self-evaluation, is what drives what we're doing. Okay, now today was all about the line of business. I've used the term land and expand about 100 times today. It's not your term, others have used it in the industry, and it's really the template that you're following.
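The federation idea John sketches, one search fanned out across several indexes with the results merged back, can be illustrated in miniature. The index names and event contents below are invented:

```python
# Toy illustration of a federated ("data fabric") search: run one query
# against multiple indexes, then merge the partial results by timestamp.

import heapq

INDEXES = {
    "index_east": [(1, "login ok"), (4, "checkout")],
    "index_west": [(2, "login fail"), (3, "retry")],
}

def federated_search(query, indexes):
    """Apply the same predicate to every index; merge hits in time order."""
    partials = (
        [(ts, event) for ts, event in rows if query in event]
        for rows in indexes.values()
    )
    # each partial list is already time-ordered, so a k-way merge suffices
    return list(heapq.merge(*partials))

hits = federated_search("login", INDEXES)
```

The real system's gains come from pushing work down to each index and to external compute rather than hauling raw data to one place; this sketch only shows the fan-out-and-merge shape.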
You're in deep in SecOps. You're in deep in IT operations management. And now we're seeing big data permeate throughout the organization. Splunk is a tool for business users and you're making it easier for them. Talk about Splunk Business Flow. Absolutely. Business Flow came out of, again, learning from our customers: we had a couple of customers that were essentially the tip of the spear, doing some really interesting things where, as you described, let's say the IT department said, we need to pull in this data to check application performance and those types of things. The same data that's flowing through is going to give you insight into customer behavior. It's going to give you insight into coupons and promotions and all the things the business cares about. If you're a product manager, if you're sitting in marketing, if you're sitting in promotions, that's what you want to access, and you want to access it in real time. So the challenge we're stepping up to with things like Business Flow is: how do you create an interface, an experience, that matches those folks and how they think about the world? The magic, the value, is sitting in the data; we just have to surface it the right way for the right people. Now the demo, and Stu knows I hate demos, but the demo today was awesome. Yeah. And I really do hate demos, because most of them are just so boring, but this demo was amazing. You took a bunch of log data, and a business user ingested it and looked at it, and it was just a bunch of data. You'd look at it and go, eh, what am I supposed to do with this? And then he pushed a button and all of a sudden there was a flow chart, and it showed the flow of the customer through the buying pattern. Now maybe that's a simpler use case, but it was still very powerful.
And then he isolated on where the customer actually made a phone call to the call center, which you want to avoid if possible. And then he looked at the percentage of dropouts, which was like 90% in that case, versus the percentage of dropouts in a normal flow, which was 10%. Oh, something's wrong, drilled in, fixed the problem. He showed how you fix it, all graphically, beautiful. Is it really that easy? Yeah, I mean, if you think about what we've done in computing over the last 40 years, even the most basic word processing, the most basic spreadsheet work, was done by trained technicians 30, 40 years ago. But the democratization of data created this notion of the information worker. And we're a decade or more into big data, and there's this idea that it's only for highly trained professionals and scientists and people with PhDs. There's always going to be an aspect of the market, an aspect of the use cases, that is that level of sophistication. But ultimately, this is all work for an information worker. If you're an information worker, if you're responsible for driving business results, it should be the same level of ease as your traditional office suite. So I want to push on that a little bit if I can, just to test this, because it looked so amazingly simple. Doug Merritt made the point yesterday that codifying business processes was a waste of time, because business processes are changing so fast. The business process that you used in the example was a very linear process, admittedly. I'm going to search for a product, maybe read a review, put it in my cart, buy it, very straightforward. But business processes, as we know, are unpredictable now.
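The demo's comparison, dropout rate for journeys that hit the call center versus the normal flow, boils down to a grouping and a ratio. A toy sketch with invented session data:

```python
# Sketch of the flow analysis from the demo: group events by session,
# then compare dropout rates for journeys that touched the call center
# against those that didn't. All data here is made up.

from collections import defaultdict

EVENTS = [  # (session_id, step)
    ("s1", "search"), ("s1", "cart"), ("s1", "call_center"),
    ("s2", "search"), ("s2", "cart"), ("s2", "purchase"),
    ("s3", "search"), ("s3", "call_center"),
    ("s4", "search"), ("s4", "cart"), ("s4", "purchase"),
]

def dropout_rate(events, via_call_center):
    """Share of sessions in the cohort that never reached 'purchase'."""
    sessions = defaultdict(set)
    for sid, step in events:
        sessions[sid].add(step)
    cohort = [steps for steps in sessions.values()
              if ("call_center" in steps) == via_call_center]
    dropped = sum(1 for steps in cohort if "purchase" not in steps)
    return dropped / len(cohort)
```

In this toy data every call-center journey drops out and every normal journey converts; the product's job is surfacing exactly that kind of gap graphically, without the user writing the grouping logic.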
Can that level of simplicity work when the data feeds in from some kind of unpredictable business process? Yeah, and that's our fundamental difference, how we've done it differently than everyone else in the market. It's the same thing we did with IT Service Intelligence when we launched that back in 2015, because it's not a top-down approach. We're not dictating; we're not taking a central-planning approach and saying, this is what it needs to look like, the data needs to adhere to this structure. The structure comes out of the data. That's a bit of a simplification, but I'm a marketing guy, so I can get away with it. That's where we think we do it differently, in a way that allows us to reach all these different users and all these different personas. So it doesn't matter, again, the business process emerges from the data. And Stu, that's going to be important when we talk about IoT, but jump in here. Yeah, so I wanted to have you give us a little insight into the natural language processing. You've been playing with things like Alexa. I've got a Google Home at home. I've got Alexa at home. My family plays with it. Certain things it's okay for. But I think about the business environment. The requirements and what you might ask Alexa to ask Splunk seem like they would be challenging. You've got a global audience; languages are tough, accents are tough, syntax is really, really challenging. So give us the why, and where are we? Is this nascent, or do you expect customers to really be strongly using this in the near future? Yeah, absolutely. The notion of natural language search or natural language computing has made huge strides over the last five or six years. And again, we're leveraging work that's done elsewhere. To Dave's point about demos, Alexa looks good on stage. It looks good on stage.
What we think, and we'll see, we always learn from the customers. I'm happy to be wrong; these are my hypotheses. But my hypothesis is that the most relevant actual use of that technology is not going to be speech. It's going to be text. It's going to be in Slack or HipChat, where you have a team collaborating on an issue or a project, and they say, I'm looking for this information, and they pass that search via text into Splunk and get it back via Slack in a way that's very transparent. That's where I think the business case is going to come through. And if you were to ask me, again, we're starting the betas, we're going to learn from our customers, but my assumption is that's going to be much more prevalent within our customer base. That's interesting, because the quality of that text presumably is going to be much, much better, at least today, than what you get with speech. We know that well from the transcriptions we do of theCUBE interviews. Okay, so that's ML and NLP. I thought I heard 4.0, right? Yeah, so we've been pushing really hard on the machine learning toolkit for multiple versions. That team is heavily invested in working with customers to figure out exactly what they want to do. And as we think about the highly skilled users, our customers that do have data scientists, do have people that understand the math, who go in and say, no, we need to customize or tweak the algorithm to better fit our business: how do we give them essentially bare-metal access to the technology? We're going to skip Dev Cloud, if that's okay. I want to talk about industrial IoT. You said something just now that was really important, and I want to take a moment to explain it to the audience. What we've seen from IoT, particularly from IT suppliers, is a top-down approach: we're going to take our IT framework and put it at the edge. And that's not going to work.
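Before moving on, the chat-to-search flow John hypothesizes can be sketched crudely: free text in a channel is mapped to a search, and the results come back in the same channel. The phrase table, the function names, and the SPL-like strings below are all invented for illustration, not a real integration:

```python
# Hypothetical sketch of text-based search from chat (Slack/HipChat):
# recognize a request phrase and translate it into a canned search string.

PHRASES = {
    # made-up mappings from a chat phrase to a search string
    "failed logins": "search index=security action=failure",
    "cpu by host":   "search index=metrics metric=cpu | stats avg by host",
}

def text_to_search(message):
    """Map a free-text request to a known search, if we recognize one."""
    for phrase, search in PHRASES.items():
        if phrase in message.lower():
            return search
    return None  # unrecognized requests would need real NLP, not a table

query = text_to_search("Hey, can someone pull up the failed logins?")
```

A real natural-language layer replaces the lookup table, but the transport argument stands either way: text arrives already clean, with no accents or transcription errors to fight.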
IoT, industrial IoT, these process engineers, it's going to be a bottoms-up approach, and it's going to be standards set by OT, not IT. Splunk's advantage is you've got the data. You're sort of agnostic to everything else. Wherever the data is, we're going to have that data. So to me, your advantage with industrial IoT is you're coming at it from a bottoms-up approach, as you just described, and you should be able to plug into the IoT standards. Now, having said that, a lot of data is still analog, but that's okay. You're pulling machine data. You don't really have tight relationships with the IoT guys, but that's okay. You've got a growing ecosystem. We're working on it. We're working on it. Talk about industrial IoT and we'll get into some of the challenges. Yeah, so interestingly, we first announced the Industrial Asset Intelligence product at the Hannover Messe show in Germany, which is this massive, I mean, like 300,000, it's a city of things. It's amazing. It's amazing. It's a huge show, 400,000 people. A lot of schnitzel, I was just there. And the interesting thing was, it was the first time I'd been at a show for Splunk in years where, you know, if you go to an IT or security show, it's, oh, we know Splunk, we love Splunk, what's in the next version? This was the first time we had a lot of people come up to us and say, yeah, I'm a process engineer in an industrial plant, what's Splunk? Which is a great opportunity. And as you explain the technology to them, their mindset is very different, in the sense that they think of very custom connectors for each piece. They have an almost bespoke, matched-up notion of a sensor to a piece of equipment. So a little example: they'll say, do you have a connector for, and again, I don't have the machine numbers, but the Siemens one, two, three machine.
And I'll say, well, as long as it's textual, structured or semi-structured data, ideally with a timestamp, we can ingest and correlate it. And they'll ask, okay, what about the Siemens ABC machine? And we're like, well, we don't care what the source is, as long as there's a sensor sending the data in a format we can consume. If you think back to the beginning of the data stream processor demo that Devani and Eric gave yesterday, which showed the history over time, the purple boxes building up, we can now ingest data via multiple inputs and multiple paths into Splunk. And so that hopefully enables the IoT ecosystems and the machine manufacturers, but more importantly the sensor manufacturers, because my understanding of the market is that we're still at a point where a lot of folks are getting those sensors instrumented. But once they're there, and essentially the faucet's turned on, we can pull it all in, and we can treat it and ingest it just as easily as data from AWS Kinesis or Apache access logs or MySQL logs. Yeah, so instrumenting the windmill, to use the metaphor, is not your job. Connectivity to the windmill is not your job. But once those steps have been taken, and the business takes those steps because there's a business case, once that's done, the data starts flowing. That's where you come in. And there's a tremendous amount of incentive in the industry right now to do that level of instrumentation and connectivity. So it feels like that notion of instrument, connect, then do the analytics. We're sitting there, I think, well positioned, once all those things are in place, to be a top provider for those analytics. John, I want to ask you something. Stu and I were talking about this at our kickoff and just want to clarify it. Doug Merritt said that he didn't like the term unstructured data. I think that was yesterday; it's just data. My question is, how do you guys deal with structured data?
Because there is structured data, transaction processing data, coming together with analytics data for whatever reason, whether it's fraud detection, making the buyer an offer before you lose them, better customer service. How do you handle that kind of structured data that lives in IBM mainframes or whatever? Mainframes in the case of Carnival? Again, we want to be able to access data that lives everywhere, and so we've been working with partners for years to pull data off mainframes. The traditional ins and outs aren't necessarily there, but there's incentive in the market, and we work with our ecosystem to pull that data and give it to us in a format that makes sense. We've long been able to connect to traditional relational databases. I think when people think about structured data, they think, oh, it's sitting in a relational database somewhere, in Oracle or MySQL or SQL Server. Again, we can connect to that data, and that data is important for enhancing things, particularly for the business user, because the log says product ID one, two, three, four, five, but the business user needs to know what product one, two, three, four, five actually is. That's a lookup table: you pull it in, and now all of a sudden you're creating information that's meaningful to you. But structure, again, there's fluidity there, because coming from my background, a JSON object is structured. The same way that Teresa Vu, in the demo today in the dev cloud, unfurled what a JSON object looks like: there's structure there, you have key-value pairs. So all of those things, that's why, to Doug's point, there's fluidity there. It is definitely a continuum, and we want to be able to add value and play at all ends of that continuum. And the key is, your philosophy is to curate that data in the moment when you need it, and then put whatever schema you want on it at that time.
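The product-ID example above is just a read-time join: the raw event stays as ingested, and the lookup gives it business meaning only when someone asks. A small sketch with made-up IDs and names:

```python
# Sketch of lookup-table enrichment at read time (all data invented):
# raw events carry only a product ID; a lookup makes them meaningful
# to a business user without rewriting the stored events.

PRODUCT_LOOKUP = {"12345": "Trail Running Shoe", "67890": "Rain Jacket"}

events = [
    {"action": "add_to_cart", "product_id": "12345"},
    {"action": "purchase",    "product_id": "67890"},
]

def enrich(event, lookup):
    """Join the event with the lookup at query time; the raw event is untouched."""
    return {**event, "product_name": lookup.get(event["product_id"], "unknown")}

enriched = [enrich(e, PRODUCT_LOOKUP) for e in events]
```

Because the join happens when the question is asked, a changed or extended lookup never requires re-ingesting the underlying data, which is the schema-at-read point made next.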
Absolutely. Again, it goes back to this bottoms-up approach and how we approach it differently from basically everyone else in the industry. You pull it in, we take the data as is; we're not transforming, changing, or breaking the data, or trying to force it into a structure anywhere. But when you ask it a question, we apply a structure to give you the answer. If that data has changed when you ask that question again, it's okay, it doesn't break the question. That's the magic. Sounds like magic? Yep, 16,000 customers would tell you that it actually works. So John, thanks so much for coming on theCUBE. It was great to see you again. Thank you so much for having me. You're welcome. All right, keep it right there, everybody. Stu and I will be back. You're watching theCUBE from SplunkConf18, hashtag SplunkConf18, we'll be right back.