Okay, we're back. This is Dave Vellante. I'm with Wikibon.org, one of the founders, and this is SiliconANGLE TV's continuous coverage of HP Discover. We're live on theCUBE with the Tinker Twins. Now, many of you might remember, we had Greg and Chris Tinker on from an event that we did in LA. We Skyped them in, and these guys are basically, you call yourselves smoke jumpers, problem solvers, right? I mean, you know a lot about a lot of different things. You're two key members of HP's Tiger team in the field. So first of all, welcome to theCUBE. It's great to see you guys here. Thank you for having us, Dave. We appreciate that. Yeah, so our pleasure. So we're going to talk about cloud infrastructure, big data, maybe talk a little bit about Hadoop, how to make all this stuff work. Yeah, it's complicated out there for clients. So, Greg, why don't we start with you? Greg's in blue, folks. What are you seeing customers, you know, doing these days? Well, the biggest thing I'm seeing today, Dave, is the fact that as customers are growing their business, the big story of today is cloud convergence, whether it be public or private, and customers are really having a big problem determining which course of action to take: whether they stay with traditional IT departments, or they start moving into the cloud environment, whether it be moving test/dev/QA, or they move their entire production line into the new cloud, private, public, name your favorite. And then there's the decision on leveraging that legacy hardware and software. In doing so, what they're taking advantage of, and what we at HP have been striving for, is making sure that we provide customers with data protection. I think that's a good segue into talking a little bit about data protection and how we can deliver that with cloud services, or without, in traditional IT deployments, whether the storage be 3PAR, P9500s, EVAs; our storage line goes on, you know. All right. 
So cloud changes your notion of data protection, right? It enables new things, but it brings new challenges, right? Talk about that. A new challenge, especially with data availability. With data availability, now you have thousands of VDIs, virtual desktops, standing up on top of terabytes or zettabytes of storage. So any kind of an interruption of business could easily happen if anybody, essentially a user or administrator, were to make a mistake. And how do we recover from those mistakes? Yeah, so you're saying if one person's laptop has a problem, well, that doesn't affect the entire organization, but there's a lot more at risk, you're saying, in this environment. Yes. Specifically, whether you're in cloud or not, with converged infrastructure now, with VMware, with name-your-favorite virtualization desktop infrastructure, what we're seeing is that as customers merge all this data down onto one hardware platform, the importance of a technology support partner, whether it be HP or other, is becoming a very critical component to the business. From the C-level executives' perspective, where five-nines availability was the big thing years back, now, at seven-nines, one minute or one second of downtime is practically unheard of, especially in these new converged infrastructures. If we have a mistake, whether it be human or hardware, the outage becomes a huge business impact. And I think the key in those business outages is identifying the outage, identifying the actual root cause. That's one of the things we work on: we need to take all this virtualization and ask, where's the actual problem? Peeling back the abstraction layers and identifying whether it be a CPU, a level-two cache, a context switch, whether you're digging deep into the BIOS, the IOCTLs, all the way up into the application layer. So, being able to identify at the virtualization layer: is it a virtualization problem? 
And of course, identifying whether or not the actual administrator made a mistake or... So how does a customer deal with that? Take a security incident, for example, in a virtualized environment, where you don't necessarily know physically what's connected to what. Or do you? Well, that's a good thing. Most customers, though they say they do, actually don't know their infrastructure from soup to nuts. They think they know all the interconnects, but when it boils down to it, the ACLs that get involved with all these security layers in the abstraction layers that are engaged become a very complex nightmare to debug, especially when the business impact is so severe that tension is high and customers want the business to be back online immediately. That's when the pressure becomes very intense. Simplistic concepts like, for example, in the SCSI stack, the PGR locks, persistent group reservation locks; these are locks that are placed on LUNs. Simplistic to you two. Well, yeah. But when we look at these LUN locks, this is actual data, the corporation's entire business data access. And all of a sudden, the host is no longer able to access the data. So now the application layer, the user environment, all stalls. Wow. It's being able to identify that. So, I mean, a simple problem if you know what you have. Well, and that's usually when the phone rings: oh, I can't access my data. But the next question is, where in the stack is that issue? And I think that's the key point to be made in the HP Technology Services branch. When a customer calls in, they tend not to know where that problem is. That's why they're calling in the first place. Most customers have their own IT support staff, and they're calling because they need that assistance to figure out where that is, to ask the right questions to define the decision tree to tackle the problem at hand. So, what do you guys see in cloud? 
We did a survey a year ago, and basically nobody was doing hybrid cloud. And now today, everybody's doing hybrid cloud. And I tweeted that out, and somebody who was watching said to me, well, I just did an informal survey at my Birds of a Feather session, and private cloud outranked public and hybrid by a wide margin. So, what do you guys see? Is it still private cloud? Are people moving to hybrids? Is it hybrid? Yeah, hybrid clouds. I mean, a hybrid cloud leveraging third-party hardware, leveraging legacy hardware, legacy solutions that they've had on their production floor. But when it comes to private managed clouds and public clouds, it all depends on the business model. There are certain types of engineering, certain types of lab work, that you just probably don't want to put into public. Not yet, because of the concerns regarding security. Yes, the security's there, but there are some concerns, valid concerns, that perhaps are preventing public adoption but are great for managed clouds and, of course, private cloud. I see hybrid cloud, and my brother does as well. We see these technologies every day from a customer-client perspective. When they call in, what we see in real life is the hybrids. Very, very infrequently does one solution fit the entire portfolio of a given client. I would say that would be extraordinarily rare. Unless it's a small mom-and-pop shop that is starting up; but beyond your mom-and-pop shops, with those customers there's always going to be some type of hybrid, probably for years to come. And the technology landscape is so varied. It's evolving extremely fast, these customers are adopting it, and of course the technology experts they have on staff are having to learn those technologies. And it makes a lot of sense, because of the CapEx, the money they've already invested in the hardware; they can't roll all of it immediately to meet the new technology at hand. 
So we're going to see the hybrid models for the next, I'd say, two to three years, easy. Yeah, well, that's good confirmation, because the numbers were startling me. It was like single digits last year, and now almost 40% are saying this is our primary strategy. And then the other notable thing, and you guys know this: a couple of years ago, the typical IT person was very skeptical about cloud. They wouldn't even use the term; they'd call it IT as a service. And now everybody's using cloud. The number of people who say it's a buzzword of unclear meaning is really, really low now. Right. It goes back to the concerns years ago about shared services, shared models. It's the same concept. Now you have large corporations that have silos of data. Well, they've got silos of servers, different IT groups from different divisions of the corporation. Now they're able to leverage all that hardware. So they're able to put up a private cloud or a public cloud; if they don't want to do public, they'll probably do a private cloud inside the corporation. And in some sense of the word, cloud is the new buzzword. A lot of customers, I would say for several years now, have already been using this methodology. They didn't call it the cloud, they didn't know that was what it was, but that's basically what they were doing already from the start. What we are now doing, as an industry, is putting basically a nice wrapper around it, putting a standard model with it. And standard servers are getting incorporated into the model so it can easily be replicated and growth can easily be achieved. Well, HP's creating cloud orchestration software. It allows you to deploy servers and image systems already. I mean, whether you want, say, Sybase or Informix, or Linux, Ubuntu, Red Hat, or Windows, you're able to deploy these systems at a click. Click of a button. 
Where it used to take weeks to provision the storage, do the zoning and network administration, create the ACLs, create the network, the route tables and all this infrastructure, now it's point and click. But there's still a lot of complexity underneath the covers, right? And that's what we do. That's basically where our job comes into play, and why we exist, actually: because of this complexity that you just spoke of. Our engineering teams and our labs have spent a great number of years trying to reduce this complexity, trying to make it a simple screen that you can click to get what you're trying to achieve. A nice dashboard and reports. Right, and so when those go wrong, and let's be honest, they will. Everything will go awry; it's all human-done. And so what we have to do is be able to mitigate those risks, isolate those problems, and fix them in a very timely fashion. Very rationally, right. It requires partnerships with support. Sure. And it's not like HP's doing everything here; we have partnership agreements with a lot of different vendors. Emulex, QLogic, the list goes on. And we are working with them, and doing due diligence with them, to make sure that we can support these new converged network adapters that people are hearing about. And some customers, a little nervous, stick with the old methodology: a single adapter for a certain task. But a lot of customers today, in the last few months, I've seen a lot of customers buying the new adapters, moving their entire infrastructure onto the single-adapter 10-gigabit interface, and they're really liking it. Really? You know, I hadn't expected to talk about that. But the CNAs, when I first saw them come out, I said, all right, it may take a while, but it's inevitable. It is. It's got to happen. You're going to cut your connection cost in half. Connection cost, infrastructure cost, power requirements. I'm a big fan of it. It's a no-brainer, really. We're big fans, yes. 
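As an aside, the provisioning sequence described above, storage allocation, zoning, ACLs, route tables, and then imaging, is exactly the kind of workflow an orchestration layer scripts into a single click. Here is a minimal, purely hypothetical Python sketch; the step names are made-up stand-ins, not HP's actual orchestration API:

```python
# Purely illustrative sketch of what "point and click" provisioning automates.
# The step names below are hypothetical; a real orchestration product would
# drive storage-array, fabric, and network APIs instead of appending to a log.

PROVISION_STEPS = ("allocate_storage", "create_zone",
                   "apply_acls", "add_route_tables")

def provision_server(name, image, log):
    """Run the once-manual steps as one scripted sequence, then image the box."""
    for step in PROVISION_STEPS:
        log.append(f"{name}: {step}")   # real code would call out to hardware here
    log.append(f"{name}: deploy_image:{image}")
    return f"{name} online"

audit = []
result = provision_server("web01", "ubuntu", audit)
```

The point of the audit log is the same one made above about support: when a one-click workflow goes wrong, the first question is which layer failed, and a recorded step sequence is what lets you answer it.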
But there's always that inertia. Yeah. Getting people off the log and moving forward. And then the whole Fibre Channel over Ethernet thing. It goes back to the administration. There are a lot of advantages to it. Like, Chris and I are huge fans of boot from SAN. Personally, I don't think a server should have a disk inside it. There are pros and cons to both of those strategies, but Chris and I are big advocates. There's a whole lot of... Why? Talk about why that is. Well, backups? Well, backups, for one, but another great advantage is you just turn the server into a black box. Yes. If the server crashes or has hardware problems, you just replace the black box and boot it back up off the network, or back up off the SAN. It's a simple zone change at this point: you zone the storage to the new black box. Same architecture; you can't jump from a PA-RISC over to an Itanium, of course. Yeah, but it simplifies your environment, it minimizes your risk, and it allows you to respond more quickly to all the things you were talking about. When you have a business risk, when a CPU fails, when inevitably a hardware component does go awry. Sure. It makes it far faster for customers to have that DR strategy and come right back online, in the blink of an eye for the most part. So I want to shift topics and talk about big data. It's hot right now. Yes, it is. It's kind of the new buzzword; it's taking over for cloud. But it seems real, in that companies are trying to figure out how they can get value out of data. Value out of information totally makes sense, right? Now, I'm intrigued. You guys are experts in many, many different disciplines. And then this thing called Hadoop comes along. Sure. So don't tell me you were, like, born experts in Hadoop, right? Right, right. Pig and Hive and Sqoop and all this crazy stuff. So what's happening out there with big data? Where does it fit in what you guys and HP are doing? Okay. So Chris and I are spending a great deal of time with all types of analytics right now. 
You have Vertica, Hadoop, Autonomy. And then, talking about Hadoop, you have those abstraction layers that sit on top: Hive, Pig, the others that you mentioned. So we are coming up to speed on that across all of our engineering groups. We spend a great deal of time debugging those things, and more importantly than anything, I think the biggest problem we're seeing with customers is performance. At the end of the day, we're always partnering with those programmers, the guys who actually write the code for Hadoop and of course Vertica and Autonomy. The issues that we've seen most often are performance. Okay, so you're talking about going out there, and the business would like to analyze all this data and essentially extract meaningful data from it. Well, how do they do that? They have to go out there and scan terabytes of data. Sure. How do you speed that up? How do you do that in an efficient manner? You've got to look at parallelization. You've got to look at the CPU context. You've got to look at network throughput saturation. That's where we get engaged, a lot of the time strictly on a performance basis, whether it be Hadoop or name your favorite out there: pulling the data in, having the plugins, ODBC calls, et cetera, the plugins that allow us to pull the data out of Oracle, Sybase, Informix, name your favorite. Structured data using Vertica, or unstructured data as well. Just recently I took a case where, you would always think having more RAM in a computer is a better thing. More RAM is always better, right? Well, we had a terabyte of RAM, and the situation was regarding how it was actually doing memory interleaving and CPU context switches. I could not get the performance up to the customer requirement. 
I started analyzing the computer, analyzing the application, Hadoop, looking at how it was actually making the system calls, looking at where it was spending its time, and sure enough, I was able to ascertain it was in the memory, and we were able to essentially make some BIOS changes, re-architect the actual solution, and get over double the performance. So the beauty of Hadoop, of course, is you can bring five megabytes of code to terabytes of data, and you don't have to bring all the data through the little teeny pipe, right? So that's great. But now I've got all this data out there, distributed, and I want to get to it. So how are people actually bringing the nuggets back in and analyzing them? That's a hard problem. It is a hard problem. And some customers are bringing it into a new database, putting it on a Hadoop file system or Vertica. And some customers leave the data where it is and actually pull it in in pieces and do the analytics there. The biggest question of the day is when you have to pull all the data in; that's where we've seen most of our customer complaints. When you have these databases, and I can't give customer names of course, but when you're talking about 100 terabytes, you're not going to do anything fast with 100 terabytes. And that's the current problem we're running into: we have this massive amount of data, we have to analyze it and give real-time results back to the executives within a few minutes, and you're using a batch system. Right. Real time, that's a catchphrase. That's insane. It's most often near time. Near time, right? Because you're usually a few minutes back. Like my colleague David Floyer, I say my definition of real time is before you lose the customer. There you go. That's not good enough. It is, yeah. Most customers that we're dealing with now are used to the traditional IT shops, the HPCCs, the high-performance computing clusters. Yeah. 
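The parallelization point Greg keeps coming back to, splitting one big scan across workers instead of making a single serial pass, can be sketched in a few lines of Python. This is illustrative only; a real Hadoop job shards the work across a cluster of nodes, not across processes on one box:

```python
# Minimal sketch of parallelizing a data scan: chunk the data, fan the
# chunks out to worker processes, and combine the partial results.
# Illustrative only; the "analytics" here is a stand-in computation.
from concurrent.futures import ProcessPoolExecutor

def scan_chunk(chunk):
    # Stand-in for real per-chunk analytics (filter, aggregate, etc.)
    return sum(x * x for x in chunk)

def parallel_scan(data, workers=4):
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # Each chunk is scanned in its own process; results are combined.
        return sum(pool.map(scan_chunk, chunks))

if __name__ == "__main__":
    data = list(range(1_000_000))
    assert parallel_scan(data) == sum(x * x for x in data)
```

The same shape, scatter the scan and gather the partials, is what the "pull it in in pieces and do the analytics there" approach above amounts to at cluster scale.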
Yeah, when you're talking about latency in the nanoseconds to multiple milliseconds, and now we're having to go up into multiple seconds to analyze these big data pools, customers start to lose patience, get aggravated, because they're not used to this new engine, if you will. Especially when you're talking about the amount of data that's out there. And especially when they start pulling in all the tweet feeds and start doing analytics on that. Well, you mentioned latency. I think it was Goldman Sachs who said, a couple of years ago, that for every millisecond we can shave off of our application performance, it's $100 million to the bottom line a year. That's a correct statement. And I think that... That's in the... Yeah, in the banks, the monetary space, the exchanges and trading. Look at what I was working on two weeks ago. I was having to shave, I was in the microsecond range, and I needed to shave about 20 microseconds off of a process that was running for several hours. Yeah, that's a good analogy there, Chris, the fact that when we say we're debugging the latest problems, this is new... I can't get into a lot of particulars on it, but I'll say this is bleeding-edge technology that's out there, being used by customers in ways that honestly were never conceived. And we were running into a particular problem where we were spending time on a CPU thread at 68 to 86 microseconds and we needed to shave it down to 28. So those are the things we're working on. Wow. Even with our technology backgrounds, we still partner with the actual... The guys who actually wrote the chip, spun the ASIC, all the way up to the guys who actually write the kernel code that does the actual scheduling of the CPU context. What are your backgrounds? Are you guys CS guys, or are you mechanical and physics? So, mathematics is our background, and computer programming happened to be a hobby, and I think that's true for a lot of the Hadoop guys out there. 
Yeah, well, the data scientists are this interesting mashup, right? Statisticians and programmers and data hackers, really. Basically, for most of us I think it became a passion; we got into companies, whether it be HP, Google, name your favorite, because we actually just enjoy what we do, and we're lucky to get to work for these companies. Yeah, well, it's interesting to me, you seem technology agnostic. Really, the technology doesn't matter. It's going to come, it's going to go. It's your perspectives, and then how to deal with it, certainly from a process standpoint, and then how to make it work. Absolutely, the technology is extremely agile, so it moves with the business. At the end of the day, IT is there to serve a function; it is there to serve the business. At the end of the day, our objective at HP, and this is true for any engineering branch, is we want to better the business strategy. We know that today's technology will probably be obsolete in 10 years; of course it will. Technology from 10 years ago is already gone; nobody talks about it anymore. I'm trying to think of the old bag phone, you remember those? You know, thank God we're not there anymore. But yeah, Chris and I have always admired the technology growth and how we're trying to keep up with it; it's a lot of fun. All right, Chris and Greg, thanks very much for coming on. I really appreciate you guys; you're a unique pair and a real asset for HP. Congratulations on all your success, and really appreciate you taking time out to come on theCUBE. Thanks for having us. All right, thank you everybody for watching. Keep it right there, we'll be right back. We've got a great CIO segment: Ernie Parks, the VP and CIO of 3M Corporation, is coming on next. Keep it right there. First time on theCUBE, baby, rock and roll. Probably five or six times I've been on theCUBE now. Right, first, the guys are just fun to work with. Pat, welcome back. 
Hey, always a pleasure to be on theCUBE. Hey, I'm about to go on theCUBE, you never know what's going to happen. A three-time veteran of being on theCUBE. I hope many, many more. Chad Sakac. Chad, welcome to theCUBE. Dave, John, it's great to be here, man. Keep coming back, because great, insightful questions from John and from Dave. What face-melting action have you seen here at the event? And I know there's a lot of it. It's a great vehicle to communicate with a broad audience; a lot of folks watch. Great to have you back, good job. All right, Craig Nunes, VP of Marketing at HP Storage. Thanks very much for coming on theCUBE. When people mention theCUBE, they're like, oh my God, I saw you on theCUBE, and they're all excited about it. It's an experience, it's not just information. They experience kind of what's going on there. It's like real time, it's like they were there. That was like going to the gym. My pleasure. Legendary IBMer, CEO of Symantec and now CEO of Virtual Instruments. Great to have you on theCUBE. So for theCUBE to be here at a conference like this, it's got 15,000, 20,000 people, and sharing that live around the world, that's consistent with the way the world is evolving. So it's a wonderful meeting, a really wonderful thing. John and Dave are amazing. I don't know how they keep everything in their heads the way they do. It's a great format, and we're obviously seeing that this notion of real-time coverage and a real conversation is what's driving us as a company. And I say very seriously, the questions and the comments that we hear from them and from all the different guests here are directly turned into the products that we build. Yeah, that was my first Cube and I really enjoyed it. It was the rapid fire of questions. It made me think on my feet, but they were very thought-provoking and really got me going on analyzing the greatness of Arista and the greatness of theCUBE as well. 
John and Dave, the reason their approach works, they're not just guys reading down the question.