This is Dave Vellante and Stu Miniman. We're here at HP Discover. This is theCUBE, SiliconANGLE.tv's continuous wall-to-wall coverage of the event. This is where we bring you the smartest minds that we can find. We extract the signal from the noise. We bring it out to you, our audience. If you've got questions, tweet me @DVellante. He's @stu, and we're here with Paul Miller. Paul is the vice president of converged application systems within HP. It's part of the converged infrastructure play in Donatelli's enterprise group, and we're going to talk about converged systems and, importantly, the affinity, the link, the alignment with applications and workloads. So Paul, first of all, welcome to theCUBE. Thank you. Good to be here. Appreciate you coming on, and this is a hot area. I've said a number of times over the last couple of years, Stu, it's a two-horse race in converged infrastructure between VCE and HP. And of course, the market's changed a lot. It's an enormous market. We size the TAM at $400 billion out to 2017, and it comprises servers, storage, and networking. So everybody wants a piece of that pie. HP was early on in that. You've got a little different philosophy than the single-block, any-color-you-want-as-long-as-it's-black approach. So we're going to talk about that, but tell us about your role and your organization and we'll get into it. So my role is essentially taking the converged infrastructures, mapping and optimizing the applications on top of them, and doing some engineering work to really bring out the best performance and best user experience in the marketplace. Let me just give you one example. Yesterday we announced, and we brand these under a couple different names, the HP AppSystems, the VirtualSystems, and then my team also does the Cloud Maps, which handle application layering and optimization on our CloudSystem.
Yesterday we launched our AppSystem for Hadoop, the AppSystem for Apache Hadoop. And what we did there is different from the competition, who just kind of ship Hadoop with their hardware. We actually did unique engineering integration. We have a unique piece of software that manages clusters. As you know, Hadoop is a big scale-out cluster, right? And it's very difficult to manage because there are no tools to manage very large clusters of Hadoop. So it can be quite daunting for any IT professional to say, yeah, I'm going to spray all my data into Hadoop and then manage it without any tools to understand and manage the cluster. We integrated Hadoop with those unique tools, put it together with optimized hardware, our networking and our ProLiant DL380s, and have a unique, highly optimized Hadoop solution for the marketplace that is 3.8 times faster than anything from the competition. But as someone pointed out, even more importantly, I can manage it with enterprise-class tools, deploy over 800 Hadoop nodes in less than 30 minutes, and provide complete enterprise fault tolerance and disaster tolerance. So we take converged infrastructure and application and then make it real for the end customers. Paul, I'm wondering if I could poke at that for a second, because we've spent a lot of time looking at the converged infrastructure and Hadoop big data marketplace, and one of the things is, Hadoop was not built to run on your, it's not your SAN; it's scale-out, super low latency, very different environments. A lot of the big practitioners say their data science team doesn't know how to talk to their infrastructure team because it's just oil and water, they don't work. So it's interesting to hear you say converged, which most people think of as virtualization and automation, versus Hadoop. So can you peel the onion a little bit? Yeah, so when we think about convergence, it's not only for virtualized environments, but on bare metal, which Hadoop runs on. You're absolutely right.
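The claim of deploying over 800 Hadoop nodes in under 30 minutes implies heavily parallel provisioning rather than node-by-node setup. A minimal sketch of that idea; the `provision_node` function and hostnames here are hypothetical illustrations, not HP's actual tooling:

```python
from concurrent.futures import ThreadPoolExecutor

def provision_node(hostname):
    # Hypothetical stand-in for the real work: imaging the OS, laying
    # down Hadoop, and registering the node with the cluster manager.
    return f"{hostname}: provisioned"

# Standing up 800 nodes serially would dominate deployment time; fanning
# the work out across a worker pool makes wall-clock time roughly
# (nodes / workers) * per-node time instead of nodes * per-node time.
hostnames = [f"hadoop-node-{i:03d}" for i in range(800)]
with ThreadPoolExecutor(max_workers=64) as pool:
    results = list(pool.map(provision_node, hostnames))

print(len(results))
```

The same fan-out pattern applies whether the per-node step is PXE imaging, configuration push, or cluster registration.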
Hadoop was designed to almost be a black box, right? Just throw a cluster at it, scale across it. The problem is, as I just said, that it's very, very hard because there are no tools, no tools to help you set up and actually deploy. You know, it was designed in the open source community for what was thought to be a very, very small audience, right? The big web houses, right? That's who it was really designed by and for, and they have dedicated gurus on the infrastructure and Hadoop gurus who know how to write the schemas and the queries. What we've done is really take our cluster management utility, which marries the hardware, understands hardware bottlenecks and instrumentation, and knows how to actually lay out Hadoop onto a scale-out architecture. So it not only lays that out across one node, but thinks about managing it across multiple nodes. So we think we have a really great solution, we're really proud of it, and we think it's going to make it really easy for enterprises to actually adopt and deploy Hadoop. What's the storage under that? So this is all, Hadoop is all internal storage. So this is, yeah, DL380s racked with storage, and then we hook it together with our HP networking products, which actually have very deep buffers, and that actually increases the performance and reduces the amount of dropped packets in a Hadoop cluster. Yeah, I mean, I talk to a lot of Hadoop practitioners and they're constantly frustrated that they're having to re-architect their infrastructure several times during the life of a project. So what you're putting forth is an infrastructure, if I understand it correctly, that can be much more adaptable and flexible. Much more adaptable, and it actually gives the customer insight. You talk about the life cycle; what's really interesting about the nature of Hadoop is that Hadoop was designed to marry structured and unstructured data together, to take voice, video, business data, web, click streams, et cetera.
Unlike a traditional database where the data's fairly consistent, when you set up a CRM system with a database, it's always the sales calls, the customers, the orders, right, very consistent data and structure. Hadoop by its nature deals with all types of data. And so how the cluster's going to perform on day one, when you may be bringing in a lot of voice and video, versus the next day, when you're bringing in click streams, is a completely different characterization. We have a tool, a 3D visualization tool, that helps customers understand how to optimize over the life cycle of the changing data within the cluster. Are there things you can do, I'm sure there are things you can do, but are you actually doing them at this point in time, or will you in the future, with Vertica? Yeah, so we actually integrated Vertica with Hadoop. So Vertica and Hadoop exist on the same nodes, and that enables us to do deep real-time analytics. We also integrated it with Autonomy to do meaning-based analysis on top of this, so it's an end-to-end solution. So one of the other big problems with Hadoop is, once you put all the data into Hadoop, how do you extract value out? That's a huge problem. Huge problem. By integrating Vertica and Autonomy, we have very simple, easy-to-use tools for customers to actually extract value and data out on a real-time basis. So how's that work? This is definitely a big problem. Again, you talk to a lot of Hadoop practitioners, we actually are a Hadoop practitioner, we have a big data tool, and so the problem is, like you said, the data's out there, it lives, it's not in a God box, but there's tons of data, and there's needles in the haystack, and you want to bring them in and be able to analyze them. So you're saying, if I understand it correctly, you've integrated with Vertica, so that's the sort of enterprise data warehouse metaphor for this whole thing. And then on top of that, Autonomy provides analytical tools. So it's not only search, but other...
Yeah, so the ability to do advanced search, the ability to do real-time clickstream analysis and optimization based on meaning, not just based on the hardcore structured analytics that Vertica can bring as well. Okay, and of course, the Cloud Maps are something that we've seen for a while now. I used to always ask, what do you guys do in terms of reference architectures? And then of course, heard about Cloud Maps, and now people sometimes think of that term reference architecture as a pejorative, right? You're going beyond reference architecture. Yeah, yeah, even Dave Donatelli used to say, some of these things are just, you know, wrapping paper around things. Yeah, yeah. Well, so yeah, when people think, now they say, no, a reference architecture is a white paper; we're going beyond that. Can you add some color to that whole discussion and that narrative? Right, so what a Cloud Map is, is taking sometimes up to 10-plus years of understanding how an application and infrastructure operate, right? So a reference architecture actually defines how much disk I should have, how much I/O I should have, et cetera. What the Cloud Map does is take that core and then automate it on a CloudSystem. So let me give you an example: SharePoint. SharePoint is one of the fastest growing applications in enterprises today. SharePoint sites are spawning like crazy within corporations. So customers are looking for a fast way to deploy new SharePoint implementations, but they want to optimize it. And the other thing about SharePoint is it's not just one application, right? It's SharePoint, it's SQL, it's Exchange, it's about eight different layers of code that you need to lay down.
What a Cloud Map does is take all the intelligence of HP and Microsoft in how you need to lay that software down, how it's best optimized on the server, how it builds in complete redundancy and failover, and put that in a simple tool, a simple script: a customer can go to a portal, click, download it, it checks that the resources are available, and it automatically deploys onto the customer's environment. So it's really powerful. We've got about 500 of these Cloud Maps, spanning infrastructure applications all the way up through complex applications like a multi-tiered CRM application for SAP. When I talk to customers, we believe it takes about 200 hours of application-to-infrastructure design, test, and certification: have I layered every piece of the software on correctly, from laying on the OS to laying on the applications? A Cloud Map puts all of that together. And we have a website, hp.com/go/cloudmaps, where customers can come and download these for free and start getting running today. It really simplifies that whole how-do-I-design-an-application-to-run-in-the-cloud process. Yeah, so the converged infrastructure, we talked about this a lot, Stu, was really designed to aim at that problem of what I call the IT labor problem. We look at how much we spend on labor, it's huge, it's about 60, 65% of IT spending goes into labor, either outsourced labor or internal staff, and we're supposed to be automating all this stuff. So that's a huge challenge that has frankly constricted innovation. That's why people always talk about, well, we spend 70% on running the business, 30% on growing the business. All these manual processes built around IT are the reason why. Paul, do you think we can move that needle with converged infrastructure, and how long is it going to take? Well, I think we're moving it today. I mentioned with Cloud Maps, at least 200 hours of time taken out of application design.
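The Cloud Map workflow described, check that resources are available and then lay the software down in a known-good order, can be pictured as a small dependency-ordered script. A hedged sketch only: the layer names and the `deploy` function are illustrative, not the actual Cloud Maps format:

```python
# A "map" of the stack in the order it must be laid down, mirroring the
# SharePoint example from the interview: OS first, then SQL, then the
# application tiers. Layer names here are invented for illustration.
LAYERS = ["os", "sql_server", "exchange", "sharepoint"]

def resources_available(required_gb, free_gb):
    # Stand-in for the pre-flight check: verify capacity before touching
    # the environment, so a failed deploy never starts.
    return free_gb >= required_gb

def deploy(layers, required_gb=64, free_gb=128):
    if not resources_available(required_gb, free_gb):
        raise RuntimeError("insufficient resources; aborting before deploy")
    # Real tooling would install and configure each tier; here we just
    # record the order so the known-good layering is replayed every time.
    return [f"deployed {layer}" for layer in layers]

print(deploy(LAYERS))
```

The point of encoding the order once is exactly the 200-hour claim above: the design, test, and certification work is captured and replayed instead of redone per deployment.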
Let me give you another example. One of our most popular AppSystems is the AppSystem for SAP HANA. Customers are looking to deploy HANA quickly. There's a lot of push to get into new databases and move off of legacy databases. HANA is a real-time in-memory database. But again, it's quite complicated to set up and configure. So what we've done is build an AppSystem for HANA that can scale to multiple terabytes across the cluster, interlinked with our IBRIX technology, and it ships from the HP factory so that literally in two days, a customer can have it integrated with their applications and up and running in their environment. Normally that would easily take two, three months to set up the hardware, integrate the technologies to achieve high availability and seamless scalability, test it all, and then deploy it. It arrives at their data center fully tested, fully deployed. All they have to do is integrate it into their existing IP addresses and add their users. So SAP is interesting. We were at Sapphire three weeks ago with theCUBE. And the interesting part is, you think of SAP: big, complicated, expensive, inflexible. The messaging from SAP at Sapphire was much different. It was mobile, agile. But the reality is the SAP customers need to simplify the infrastructure, and they're looking at converged infrastructure as a way to do that. So are you actively working with SAP, and are you getting a lot of traction in SAP accounts? Are you seeing that? Can you confirm that trend? Yeah, so SAP HANA, we're getting a lot of traction, a deep pipeline, customers are... HANA specifically. HANA specifically. Okay, so I was just talking about general SAP, but... HANA specifically, a deep, deep pipeline, and that's where we engage very closely.
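For context on why an in-memory database like HANA suits the OLAP workloads discussed later: analytic queries typically aggregate one column across many rows, and column-wise storage keeps that scan contiguous in memory. A toy illustration using plain Python lists, not HANA's engine; the table contents are invented:

```python
# The same toy sales table stored two ways: as rows, and as columns.
rows = [("2012-06-05", "widgets", 100 + i) for i in range(1000)]
columns = {
    "date":    [r[0] for r in rows],
    "product": [r[1] for r in rows],
    "amount":  [r[2] for r in rows],
}

# An OLAP-style aggregate touches only one column. With row storage the
# scan steps over every field of every row; with column storage it reads
# one contiguous list, which is why column stores shine for analytics.
total_from_rows = sum(r[2] for r in rows)
total_from_columns = sum(columns["amount"])
assert total_from_rows == total_from_columns
print(total_from_columns)
```

At real scale the contiguous column scan also compresses better and stays cache-friendly, which is where the in-memory speedup comes from.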
If we look at HANA, it's kind of the tip of the spear for transformation in an SAP environment: bringing in a real-time database, and then looking at what it can do, the applications around it, is really transformation for a lot of customers. Because now they can do things, get richer data, make decisions faster. So we're seeing HANA be the tip of the spear and then customers upgrading and transforming the rest of their ERP and CRM environments. Yeah, I mean, obviously, if I were in HP, I'd be pounding HANA and telling all my employees, every time you say HANA, you get a dollar. Yeah. At least a dollar. At least a dollar. I give them two personally. Right, right, right. So Paul, if I remember correctly, VirtualSystem and AppSystems launched a year ago at this show. Correct. I'm wondering if you can give us any data points as to not just how many different solutions you have, but how many customers? How has the adoption, the uptake, and the rollout in the marketplace been? Okay, great. So we launched VirtualSystem and AppSystem. The VirtualSystem portfolio spans across small, medium, and large solutions, and spans both VMware and Microsoft. So we have Hyper-V as well as VMware solutions. Really strong uptake, especially around the VS3 product, which is the top, the large system. That's got the 3PAR storage in it. It has 3PAR storage at the high end, at least. Yeah, and the BladeSystem. We are in just a ton of customer accounts, really, and customers actually see it as their private cloud, their entry to the private cloud. So very, very strong uptake. I'm trying to push as many through the factory as possible as we go. We also launched solutions. I'm sorry, let me just follow up. So, I mean, David Scott talked about over 100% growth of the 3PAR business. Can you give a percentage or rough number as to how much of the portfolio is going to converged systems? Or gross margins. Or gross margins, sure. I can't give those.
I would say that VirtualSystem is pitched in more than 50% of those accounts, right? As the end-to-end solution. People are coming in saying, I've got this great solution. The 3PAR sales force around VirtualSystem is the best sales force. They're going in, it helps them differentiate, helps them move an end-to-end solution. You mentioned our friends at Cisco earlier, right? It's the killer of the Vblock, because they can't scale, they don't have the capabilities, et cetera, that we have there. So that's a very powerful, powerful solution. Let me give you another stat: on the Cloud Maps, in the last 12 months, over 5,000 Cloud Maps downloaded. Customers understand them, they're using them, they're deploying them, very, very strong. HANA, I'm not sure if I'm at liberty to disclose the pipeline on there, but it's in the hundreds, and it's all your marquee accounts. Think of every retailer, every consumer goods company, everyone in the energy sector, all looking in. There was a lot of enthusiasm out of Sapphire for that. But I've been criticized, actually, because of course that's Sapphire, that's SAP's messaging, but you're confirming that there's actually legitimate pipeline. Legitimate pipeline, and we're seeing just significant improvements. We've had a lot of fun with that. Larry Ellison actually was quoted as saying that SAP coming after Oracle in databases is like me taking on Kobe one-on-one. So yeah, it's fun, but at the end of the day, people want something different to have a major impact on their business. HANA, maybe not for OLTP, but for OLAP, it makes a lot of sense. For OLAP today, it's an outstanding solution. Then we go to OLTP and the Microsoft solutions. The DL980 solution that we have for OLTP is just screaming off the shelf. We're seeing tremendous growth. Microsoft, when I talk to Ted Kummert, who runs that business, is seeing just tremendous growth in the SQL business. That's our flagship for OLTP as part of the AppSystem portfolio. It is rocking and rolling.
The ProLiant DL980 platform: eight sockets, outstanding memory footprint, tons of I/O. It just rocks for OLTP. And so we're seeing a lot of Oracle-to-Microsoft migration on that platform alone. It's a great architecture. Aligning infrastructure with applications is what it's all about. Paul Miller, that's the key glue to business value, right? We always talk about infrastructure, and the missing link is that application. So you're in a very strategic position, and it sounds like you guys are executing well. Thanks very much for coming inside theCUBE. Appreciate it. Appreciate it, thank you. All right, everybody, this is SiliconANGLE.tv's continuous coverage of HP Discover. Keep it right there, we'll be back.