Live from the Julia Morgan Ballroom in San Francisco, extracting the signal from the noise, it's theCUBE, covering Structure 2015.

Hey, Jeff Frick here with theCUBE. We are live at the Julia Morgan Ballroom in downtown San Francisco at kind of Structure reborn, Structure 2015, come back to life after the original event was canceled this summer. So we're really happy to be here. We're kind of wrapping up day one here with George Gilbert from Wikibon. So George, impressions of the day?

You know, I'm reminded of that scene in the original Indiana Jones movie where they're digging from the surface of the desert all the way down into this hole, and they're passing all these levels of strata. And I know strata is a dirty word because it comes from the competition, but basically we talked to vendors representing many layers of technology. The last one, Intel, was talking about how what used to be hardwired infrastructure, x86 servers, storage, networking, is now much more fluid: just the way we had virtualization for servers, we're now seeing that same capability in networking and in storage. So everything in the data center becomes programmable. That's the lowest layer of the strata. At the highest layer, we talked with Joseph Sirosh of Microsoft, where you don't even know where his stuff is running. It's out in the cloud somewhere, but he's got this deeply integrated data lake that doesn't bear any real resemblance, other than the programming interface, to the Hadoop data lake we all hear about. This one is industrial strength; it runs as a service. You don't need 47 different administrators taking care of 17 different corners of it. And then they've got a whole lot of analytics that layers on top of it seamlessly. So top to bottom, we saw a lot.

So one of the core themes is always best of breed versus an integrated solution.
And for best of breed, you might get better capabilities in a particular silo of functionality, but now you've got an additional layer of overhead and an additional layer of management. Go with an integrated stack and, in theory, everything works really well, though maybe you don't have the best of breed on a particular thing. How's that really changing now in the cloud world? Because we have kind of two things going on. Cloud's all about elasticity, being able to expand and contract, but you can't do that at scale without a tremendous amount of automation. And the other piece we talked about is APIs, and now all this stuff is interconnected; it's not just a single app that you control. So how are those trends being utilized? How are we using more horsepower to enable elasticity, more best of breed, API integration, as well as automating so much of the configuration and the expansion?

Okay, so maybe an analogy works here. Picture the cockpit of an airplane. You've got 47,000 knobs you can turn. That's sort of how big data is dealt with today. Whether on-premises or even in the cloud, you've got to have storage guys, you've got to have ZooKeeper guys, and I don't mean the ones keeping animals. I'm talking about all these different servers that have to interact, and they all have different behaviors and administration and programming interfaces. But they're tunable, so you can get just what you want. The opposite of that, which you mentioned, is "integrate it all for me."

Just turn on the autopilot, right?

Yes, turn on the autopilot. I don't want to bother with these knobs; I just want to get where I'm going. And I think there's not one right or wrong answer, but the early adopters have the skills and the inclination to tune. So like an Uber, they might not want the all-you-can-eat of one blanket interface, but when you get to a mid-sized enterprise, they don't have 67 administrators to handle a Hadoop cluster.
Yeah, and it was interesting what Jonathan talked about, because there are no single-vendor customers, right? They don't exist. But George talked about kind of how we've got so much more compute horsepower now, and how that can really be brought to bear to start to automate, at scale, the tuning of those knobs, where maybe before you just couldn't do it. Moore's Law continues to chug along with all the gusto it's had before, and we have such massive scale of compute, storage, and now software-defined networking that it's really enabling a level of automation you just couldn't do before. You don't need an army of pilots, to use your analogy, like you used to.

Well, I guess one way of thinking about it is, machine learning is sort of the new black, you know, as in the new "it" color, and you can use machine learning to get a sense for how your network, your data center, and your software should behave. This is still a bit of a science project, but rather than having people track down what happened, like in a hospital when the monitors go off after someone's heart stops, these systems get a sense for the rhythms, and when things aren't operating quite right, they'll alert someone before the heart stops.

Right, right. That's the idea. Well, what's interesting is the whole idea that you throw everything into a data lake and the answers magically appear, right? We know it just doesn't work that way. But the other thing is, we were talking to Google at the Women in Data Science Summit at Stanford a couple weeks back, and even with the resources of Google, if you don't have some type of hypothesis, some type of guidance, some direction in which to target your resources, even they can spend an inordinate amount of time doing things where, if you had at least set a direction, you would already be there.
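The "learn the rhythms, alert before the heart stops" idea described above can be sketched as a simple online anomaly detector: track a running mean and variance of a metric and flag readings that drift far outside the learned baseline. This is a hypothetical illustration of the general technique, not anything built or discussed on the show; real data-center telemetry systems are far more involved.

```python
def make_detector(threshold_sigmas=3.0, warmup=10):
    """Return a closure that ingests readings and reports anomalies."""
    state = {"n": 0, "mean": 0.0, "m2": 0.0}

    def observe(x):
        n, mean, m2 = state["n"], state["mean"], state["m2"]
        # After a warm-up period, compare the reading to the learned baseline.
        anomalous = False
        if n >= warmup:
            std = (m2 / (n - 1)) ** 0.5
            anomalous = std > 0 and abs(x - mean) > threshold_sigmas * std
        # Welford's online update of the running mean and variance.
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
        state.update(n=n, mean=mean, m2=m2)
        return anomalous

    return observe

detect = make_detector()
normal = [100 + (i % 5) for i in range(50)]   # a steady "rhythm"
alerts = [detect(x) for x in normal]
print(any(alerts))   # steady signal stays within the baseline
print(detect(500))   # a wild spike trips the detector
```

The design choice here is the one George alludes to: no one writes down what "healthy" looks like; the detector infers it from the stream itself and only pages a human when behavior deviates from its own history.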
And even with the massive compute and dollar resources that Google has, they don't go in on these throw-it-in-there-and-see-what-comes-out exercises. There's always a thought, an agenda, a process of following it down a path, because there's also the value piece of the equation, right? In theory, with infinite compute and infinite money, yeah, sure, but that's not the world in which we live. It's got to tie back to value.

You're making maybe the most important point, which is you have to start with a question, because you have to reason, you have to test: is this condition related to the question you're asking? Otherwise it's open-ended, and you wouldn't know where to start and you wouldn't know where to end.

Right. So, biggest surprise today?

Biggest surprise, I guess, is understanding more and more how much of a disconnect there is between the public perception of what big data means, which is still very much Hadoop, and what the cloud vendors are doing, which is very much proprietary services that integrate deeply with one another so that they're simple to operate and simple to build applications for. That is a huge disconnect.

But hasn't there always been a historical knock on Hadoop, just in terms of resources, just the number of people who know how to operate the system?

You've got to remember, yes, except that it started at a company where they had more rocket scientists than NASA.

And the guys had admitted it too, right?

Yeah, at Google, so more rocket scientists than NASA, and then to Yahoo, which sort of implemented it in a shareable way. For the most part, it's been adopted by organizations that do have a surplus of these very sophisticated skills. It has not made it into the mainstream. And what we're hearing from these cloud vendors is they know it. They're not saying it, but they're preparing offerings for people who need something simpler.
Which may or may not be powered by Hadoop on the back end, but it always goes back to solutions, right? Solutions and applications, solving real problems. What are you looking forward to tomorrow? We've got day two coming up.

Oh, besides getting a good night's sleep tonight? Let's see, I'm looking forward to more application stories. Now that we have so much more data, how do you rethink applications in a world with a huge amount of data? And also when the data is coming from the edge of the network and you're capturing it and analyzing it closer to the edge. That would be another big thing to think about. That last interview we did with Intel about programmable infrastructure, I don't think we're gonna hear anyone who can articulate better what's going on in the data center than them. So I think the next layers up will be where we get some more insight.

x86, the gift that keeps on giving, right?

Yes.

All right, well, George Gilbert from Wikibon, I'm Jeff Frick, you're watching theCUBE. We are live at the Julia Morgan Ballroom, day one of Structure. We'll be back tomorrow, all day, wall to wall, so tune in and catch all the interviews. This is Jeff Frick signing off from the Julia Morgan Ballroom. Thanks for watching.