Live from the MGM Grand Convention Center in Las Vegas, Nevada, it's theCUBE at Splunk.conf 2014. Brought to you by headline sponsor, Splunk. Here are your hosts, Jeff Kelly and Jeff Frick. Hi, welcome back everyone. You're watching theCUBE. We're at Splunk.conf 2014, the fifth annual Splunk user conference. I'm Jeff Frick with theCUBE. We've been here for three years. We go out to the events, we extract the signal from the noise, we try to find the smartest people we can and get them on theCUBE, and ask them the questions that you'd like to hear from them. And we love the Splunk shows because they get a whole lot of practitioners, people that are actually executing with the technology, implementing the technology, transforming their business, transforming their companies. So we love coming here, three years in a row, and I'm sure we'll be back next year. I'm joined in this next segment by my co-host, Jeff Kelly. Thanks, Jeff. And we've got Jim Nichols with us. He's a cloud architect for a company called, and I'm hoping I'm getting this right, EnerNOC. That's correct, fantastic. Jim, welcome to theCUBE. Thank you, thank you for having me. So tell us a little bit about EnerNOC, what business you're in, and then we'll get into how you're using Splunk. So EnerNOC is the industry-leading provider of energy intelligence software. We have 30,000 energy sensors deployed around the world, and we're measuring the energy of the top industrial, institutional, and commercial customers and helping them maximize how they're using their energy. So that's clearly a very data-intensive business. Absolutely, it just keeps getting worse. Yep, worse or better, I don't know. Well, better. It depends on your perspective. It depends on your, yeah, the perspective. So your role is cloud architect, so do you deliver your services from the cloud? Yep, so Splunk is all in the cloud.
So we have that footprint in all of our new initiatives. Our big data initiatives are 100% cloud-based. We do still have a hybrid model, though, with some on-premises solutions, but my role is to lead the way in architecting these cloud-based solutions, and going forward, that's where 100% of our development is going to be. Interesting, so I definitely want to talk about that, but maybe first you could add a little color in terms of one of your customers and a use case — specifically, what you're doing for them. Yep, so we help our customers with three phases of energy management: how they buy their energy, when they use it, and how they use it. And basically, the data that we're collecting from their energy meters is driving everything. We have analytics, we have reports, we have alerts, we have emails — all those types of tools that allow them to get the most out of their energy. So they can understand trends in how they're using energy and when they're having peak times. Exactly, yeah. And how that relates to the price of energy. Right, so the peak-time example is our core bread and butter, where we initially got our start. On the hottest summer days, the grid operators can either start up a power plant if they need capacity, or they can call on us. And our technology will help reduce our customers' energy usage in real time so they don't have to turn on the power plant. Now are these customers largely industrial customers, with huge energy uses? Oh yeah, yep, so we're going after the top users of energy: heavy industry, manufacturing, hospitals, campuses and universities — 100% commercial, industrial, and institutional customers. So do you sit in between the energy providers and the customer? Because we're hearing about some of the energy providers trying to get into this business and provide some analysis, with smart meters, of their clients' energy usage.
So depending on which program we're in, we're actually doing a little bit of both — a little bit of everything, depending on the geographic region. In some areas we contract with the utility directly, and we're providing either demand response for them or helping run their energy efficiency program. And in some cases we're working with a grid operator in a regulated market, and we bid in just like we're a power plant — except instead of burning things and producing energy, we're turning things off to provide the capacity. So we kind of sit in those dual roles. And then as we've moved more into enterprise software solutions, it's more working directly with the customer. We're bringing the data from their energy meters back into our platform and providing the analytics just for them — there's no utility or grid operator or anybody else involved, it's just us and them — trying to give them real enterprise-grade solutions to manage their energy, and make it as easy to do as accounting. Maybe not as easy, but similar to accounting, which has ERP systems and so forth; we're trying to do that for energy. So we're always asking where the ROI is, whether things are more efficient in this and that, but in this business, in your use case, there's big ROI, right? Small deltas in consumption relate to easily measurable dollars. Oh, absolutely. On those peak summer days when the grid is the most constrained for resources, the price of energy can be exponentially higher than it is even 10 or 12 hours before or after. So if the customer reduces their energy during those peak periods, it can be extremely lucrative: for just an hour or two, just stop, turn the air conditioning down, or stop the manufacturing line for a little bit, and they can make serious money doing that. And then it's the same thing on the energy efficiency side.
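The peak-pricing economics Jim describes can be sketched with a toy calculation. All of the prices and loads below are illustrative numbers, not EnerNOC figures:

```python
# Toy illustration of demand-response savings: cutting load for a couple of
# hours during a price spike. Every number here is made up for illustration.

def curtailment_value(baseline_kw, curtailed_kw, peak_price, offpeak_price, hours):
    """Value of dropping from baseline_kw to curtailed_kw during a peak
    window, relative to what the same energy would cost off-peak."""
    reduced_kwh = (baseline_kw - curtailed_kw) * hours
    return reduced_kwh * (peak_price - offpeak_price)

# A factory sheds 2 MW of load for 2 hours while prices spike tenfold.
savings = curtailment_value(
    baseline_kw=5000, curtailed_kw=3000,
    peak_price=1.00,      # $/kWh during the event (illustrative)
    offpeak_price=0.10,   # $/kWh normally (illustrative)
    hours=2,
)
print(f"${savings:,.0f}")  # → $3,600
```

The point of the sketch is the one Jim makes: because the peak price can be an order of magnitude above normal, even a short curtailment window is worth real money.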
All of the tools that we're building are trying to make it really easy for them to identify these energy efficiency opportunities and associate a dollar amount directly with the mitigation measure, whatever it is, so that they can implement it and see right away what the value is going to be. Conversely, if they shut a line down for a little bit, there's a whole different kind of financial impact. So do you integrate with other systems on their side so they can make an educated decision as to how to throttle that up and down? Oh, absolutely. The main thing we're pulling in, for every single customer, is their energy usage. We have a hardware device that we place next to their electrical meter that's getting their data every five minutes, all the way down to every two seconds. We're doing that for everybody, and then depending on the customer's needs we're also pulling in manufacturing data, weather information, occupancy information about their building — data that's specific to them. For example, if it's a federal holiday on a Monday and you're running the air conditioning with nobody in the building, that's a savings opportunity right there. So that's the type of thing that we're doing for them. So talk a little bit about the analytics environment. You're describing some big data analytics challenges there. How are you going about that? I'm sure Splunk plays a role, but maybe take a step back beyond just Splunk. What does your technology footprint look like? What are you doing? What tools are you using? How are you actually going about doing that sophisticated analytics?
So when we first started, there wasn't really such a thing as big data — there was no cloud, there was nothing like that — and we had a lot of great success with the Oracle database; we run Exadata in our environment. We got an amazing return on that investment and were able to move into more new programs than any of our competitors worldwide, and not having to build the infrastructure at the time, in addition to building the new functionality, really helped us. But now that a lot of these tools are becoming more mature, we're using HBase, MongoDB — you name it, we've tried it. We're using a few really heavily now for a new demand management product that we're offering, which helps predict how much energy a customer is going to use in the future using some really sophisticated data analysis. Our lead data scientist is an MIT PhD, I believe in physics, and she's applying those types of tools and technologies to the energy data, trying to find the insights in the energy usage that aren't things people already know. We do have experts at EnerNOC who know that if you're running an HVAC system and it's cooler outside than it is inside, you don't need to run it at all — you can just open the window and pull in the fresh air. A lot of those things we know; we have people with, like, 10,000 years of combined experience or something doing that. But there are other insights that only the data will present — the non-obvious things — and we're trying to use those types of analytical tools to find those types of opportunities. Can you give an example of one of those non-obvious things that the data provides insights around? Unfortunately, I can't go there for you. Okay, no worries. There are some that are non-obvious, and then when you hear it, it's like, okay, well, that is obvious. But that's our secret sauce right there. We've gotta ask — we love to hear those stories when we can get them — but I totally understand.
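The holiday example from earlier — air conditioning running in an empty building — hints at the rule-based side of this analysis. Here is a minimal sketch of that kind of rule, with hypothetical field names and thresholds; this is not EnerNOC's actual schema:

```python
# Hypothetical sketch of the rule Jim describes: flag HVAC load on a day
# when the building is unoccupied (e.g., a federal holiday on a Monday).
# Field names and the idle threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class MeterReading:
    timestamp: str      # ISO-8601; readings arrive every 5 min down to every 2 s
    hvac_kw: float      # metered HVAC load
    occupancy: int      # people in the building (from an occupancy feed)
    is_holiday: bool    # from a calendar feed

def wasted_hvac(readings, idle_kw=5.0):
    """Readings where the HVAC is drawing real load but nobody is there."""
    return [r for r in readings
            if r.is_holiday and r.occupancy == 0 and r.hvac_kw > idle_kw]

readings = [
    MeterReading("2014-09-01T10:00:00", hvac_kw=120.0, occupancy=0, is_holiday=True),
    MeterReading("2014-09-02T10:00:00", hvac_kw=130.0, occupancy=240, is_holiday=False),
]
print(len(wasted_hvac(readings)))  # → 1
```

Rules like this cover the insights the in-house experts already know; the non-obvious opportunities Jim alludes to come out of the statistical side that he keeps as secret sauce.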
So the tool set, though — we're using MongoDB, we're using HBase, Hadoop, we're using all those things — those are the tool sets that we're using along with Splunk to get those insights. So, great. Before we get to Splunk, you're in a great position to provide a little bit of color around how the old world and the new world are colliding — how the new world of Hadoop and these open source distributed frameworks is impacting the old world, Oracle being one of the poster children of that old world, and you're using Exadata. How do you see that relationship? Do you see these new approaches pushing against the older, more rigid approaches? Or how's that playing out, I should say, in your organization? For us, you know, 100% of the new development is going to be out in the cloud, but we still have the Exadata — a lot of the core business is still there, and it's not going away anytime soon. We have a true hybrid approach: we're getting the data just as it's going into Oracle so that we can then put it out into the cloud and have the exact same data set in both places. But then we can run the mission-critical operations on-premises. When we're doing demand response for a grid operator, we're doing that to help avoid blackouts and brownouts, and there are life-safety issues involved — we're doing generators at hospitals. And I think the AWS forced reboots that went on last weekend are a good example: for truly mission-critical things like that, where life safety is involved, I think the cloud's not quite there yet. But what it is there for is elastic computing — spin stuff up, try it out, throw it away, try something else. And we've done a whole lot of that. If we had tried to do that internally on Oracle, it would have been a much larger effort. It wouldn't have been impossible — I think it's possible either way — but it would have been a lot harder and a lot slower and a lot more expensive.
Oh, absolutely, absolutely. So let's talk about Splunk specifically. What role is Splunk playing in this larger picture that you've painted for us? So I bought Splunk initially to do weblog analysis. I had built a custom homegrown thing, and EnerNOC — it's been an amazing story, it's grown incredibly quickly since we went public — and my homegrown thing just couldn't keep up anymore. So I bought Splunk to do the weblog analysis, and I did that within a couple of days of getting it set up. But it quickly went beyond that, because we had delivered a new demand response program with technology requirements that were — it was analogous to: we need to put somebody on the moon, and we have barely working rockets and have never put anybody into low orbit. To achieve the implementation and make the thing work — where we're taking the energy data from our customer sites, getting it into our infrastructure, doing some aggregation, and then sending it back to the grid operator within five seconds — being able to achieve that was an amazing accomplishment for us. But what we didn't do was any of the operational visibility. As we were building it, we were trying to figure out what we were trying to build, trying to do it, and kind of assuming that the data would always come in within five seconds — which it doesn't always. So right before we shipped it — it was about a month after I had gotten Splunk — I threw together a quick dashboard that looks at the demand response program itself and how the system operations are impacting our performance in meeting our compliance requirements with the grid operator. And that really paid for Splunk tenfold. If we had tried to build all that with our development team, we could have done it, but it would have been 10 times as expensive.
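That five-second requirement is exactly the kind of thing the compliance dashboard would track. In practice this would be a Splunk search over the event logs; purely as a hypothetical sketch of the underlying check, in Python:

```python
# Hypothetical sketch of the compliance metric behind the dashboard Jim
# describes: readings must travel from customer site, through aggregation,
# to the grid operator within five seconds. The 5 s limit comes from the
# interview; the sample latencies are invented for illustration.

def compliance_rate(latencies_s, limit_s=5.0):
    """Fraction of readings that arrived within the latency limit."""
    if not latencies_s:
        return 1.0  # no traffic, nothing out of compliance
    on_time = sum(1 for t in latencies_s if t <= limit_s)
    return on_time / len(latencies_s)

latencies = [1.2, 3.8, 4.9, 6.5, 2.1, 5.4, 0.9, 4.4]  # seconds, end to end
print(f"{compliance_rate(latencies):.0%}")  # → 75%
```

The value of putting a number like this on a shared dashboard is the point of the story: the system was built assuming data always arrives within five seconds, and the dashboard made visible when it didn't.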
So we're using it mainly for that systems operational visibility, but in a few key places we're using it at this intersection between running the software, running the system, and also running, like, a demand response program. So we have dashboards that'll show system performance — what's the latency of the data — and then how many megawatts we're sending them. It's a really interesting intersection of those two worlds. Yeah, it seems like you've certainly expanded your use cases, which is kind of a common theme, Jeff, that we're hearing here at the show with all the customers we've had on theCUBE. Oh, absolutely. And does Splunk make it easy to do that? Certainly it does from a technology perspective, but from their kind of approach to licensing and the try-before-you-buy — do they make it easy to do that? Oh, absolutely. I could do a lot with the five gig trial license, or the free 500 meg license. And then I was able to get a larger trial license and start to show some of the value. I was really immediately able to show that we were going to get value out of this. And in terms of dollars, it was right in the range of a lot of the other types of performance monitoring tools that we had gotten in. So it was a real easy sell, and we kind of proved out the investment pretty quickly. So we've been talking on theCUBE — another theme that seems to keep coming up at the show this year is DevOps. And kind of what you described, the scenario you described, I think fits into that paradigm. In fact, on the previous segment, we were talking about what comes first from a DevOps perspective: the culture that needs to go along with it to make it happen, or the tooling that you need to actually implement it.
What's your approach in your organization, or what are your thoughts on DevOps — just the term, the concept? Is that accurate in describing what you're doing, and how is Splunk helping you do that? Yeah, I mean, personally, I feel like I've been doing DevOps at EnerNOC since day one, in 2007. To me it's always been about running that production system and making sure it's performing. And I think it's great that we have DevOps as an umbrella term that encompasses a lot of these things and helps make them a little bit more concrete. In terms of DevOps, I actually gave a presentation here at the conference on how Splunk is helping enable our DevOps — really the knowledge sharing and the collaboration piece. So we'll have a developer — we actually have a development manager that built a dashboard. He used to be a coder, but I guess you could consider him a non-technical person now. He put this dashboard together because we were delivering a new feature and wanted to make sure it was going to perform well when we released it into production. So he built this dashboard, and when he meets with the operations folks to do the technical handoff, it's based around a discussion of this dashboard. These are the metrics; he's the expert, he knows what they are; and that kind of forces the sharing of the information. So you get the so-called CAMS of DevOps: the collaboration piece, we're getting that; we're getting the measurement piece; and then the sharing piece. It's really around those shared dashboards, and everybody is kind of drinking the Kool-Aid at this point — the developers are doing the logging better.
One of the really great success stories for Splunk in this DevOps space is that we had an application support person who was totally end-user facing, taking phone calls about can't-log-into-the-software or whatever types of problems there were — a non-technical person with a liberal arts degree from somewhere — and they basically became the number one user of Splunk pretty quickly, started doing some really deep technical troubleshooting, and over time basically carved out a role for themselves in engineering. We stole them out of support, put them in engineering, and now this person is the production operations lead; the whole kind of world revolves around them, and they're digging into the technology. Without Splunk, I can't imagine how else that would have happened. And did that person start to build their own dashboards? I mean, did they really start to explore with the tool along their own kind of journey? Oh, absolutely. Yep, they not only started building their own dashboards but modifying dashboards that I had made — and then not letting me make any changes to them. They owned them, which I wish I could say I had planned, but they certainly took ownership of it. They're doing all the alerts, they're doing dashboards, they're working with the developers on better logging and getting the forwarders set up, and they've really taken ownership of it, because it was a key tool for them to be able to do their job. It's kind of weird — the job kind of created itself around being in Splunk and doing these types of things. It didn't really exist before, but it's really driven all the activity that they're doing in Splunk pretty much every day now. But that's the law of unintended consequences: it's a combination of someone's motivation, an opportunity to exploit that motivation using a tool, and the data, to define a new job. Like you said, it wasn't even defined before.
So letting employees be self-advocates and execute their own little vision using available data and available tools. Yeah, and if you can use Excel, you can go in there and create some pretty slick-looking dashboards, run a query — I have five years' worth of log data in there at this point — run a query over that whole data set, and point and click and configure things. And then if you need to do something a little bit more complicated, you might have to get one of the Splunk experts involved — there are really only two of us, but we can support everybody that's using it right now. And people have been in there: we have vice presidents in there looking at how many of their employees have logged into our platform — so drinking your own Coke, I think, is the example we heard yesterday, or dogfooding is another example of that — and they're in there modifying the search and seeing who's logging in. And I had a debate with another vice president over the difference between, like, a gauge and a dial — we're kind of data visualization nerds at EnerNOC, and those are some of the debates we end up having. But it's pretty amazing to have a software tool that we bought to look at weblogs, and now we're arguing about what's a gauge versus a dial on a dashboard that we have in our network operations center. It's a pretty amazing thing. Yeah, so expand on that a little bit, in terms of the empowerment and really the change in the conversation, when you go from data that was hidden, unexposed, buried — certainly not accessible to everyone — to now it's out there, and the conversation is more about what's the right presentation. That's a pretty big transformation. Right. I mean, the data before was completely opaque.
You know, I started off as a performance engineer at EnerNOC, digging into the logs. I used to be able to tail the log and watch it go by, reading it like the Matrix, and see what was happening in the system. As we grew, I couldn't do that anymore, and it was really me and a couple of the other people who had been there since the beginning that kind of knew: oh, there's a log message for that, we can grep for it, we can dig in there for it. And now it's just as easy — if you can use Google, you can just take an error message that somebody got in the UI, throw it into Splunk, and see what comes back. It makes it accessible for non-technical people, really. So you've got Splunk's ear right now — they're all watching, if not now, then later. What are some things they can do for you to help you execute better as you move forward? Well, we are a mission-critical operation for demand response, and because Splunk is monitoring the platform, it really needs to be as mission-critical and as reliable as the platform, if not more so. On the platform side, we've embraced that failures are going to happen and built in redundancy and fault tolerance and all that sort of thing, and we really need Splunk to be there to be able to watch that, to know what's going on. It's our flight data recorder. It was really one of the first key things that we put out on Amazon, so that if we completely lost one of our data centers, we'd be able to log into Splunk and see what's happening. So that high-availability, mission-critical piece is really our biggest sort of want — and they actually kind of answered the question for us. It was kind of amazing yesterday to hear that they're doing more with search head clustering to enable high availability; that was really the key thing that we were after.
So we'll have to think a little bit about what else we would like to see in there, but I think it's mainly around that theme of making it highly available and self-healing, and able to tolerate Amazon failures more robustly — that type of thing is the main want that we have. All right, good. So thanks for coming on. Great story. Thank you. You know, it's great learning about new businesses where you guys are impacting significant amounts of dollars — not to mention it's environmentally the right thing to do, to help conserve energy and efficiently put the energy where it needs to be, and take it away from the office on the Monday after Memorial Day when there's nobody in there. Absolutely. Jim Nichols from EnerNOC, thanks for stopping by theCUBE. I'm Jeff Frick with Jeff Kelly. We're at Splunk.conf 2014, the fifth annual Splunk user conference, our third year bringing you theCUBE. We go out to the events, we extract the signal from the noise, and we get great guests like Jim to tell you how they're using these technologies and trying to change the world, change their business, and really change the culture within their companies — empowering people within the organization to get out in front of the curve more aggressively, make changes, and really be innovative. So we'll be right back with our next segment after this short break.