From London, England, extracting the signal from the noise. It's theCUBE, covering Discover 2015, brought to you by Hewlett Packard Enterprise. Now your hosts, John Furrier and Dave Vellante. Okay, welcome back everyone. We are here live in London for HPE Discover, a special presentation of theCUBE, SiliconANGLE's flagship program. We go out to the events and extract the signal from the noise. I'm John Furrier, the founder of SiliconANGLE, joined by my co-host Dave Vellante. Our next guest is Brent Allen, Group Manager of Vertical Solutions for HPC and Big Data, part of Hewlett Packard Enterprise Servers. Welcome back to theCUBE. Good to see you. Cube alumni, new role. Good to see you guys. Yes, absolutely. So the last time I was talking to you guys, we were talking about converged systems. I had been doing that for a while around virtualization and cloud. This was a great opportunity for me personally. We've seen this merging of high performance computing and big data coming together in the marketplace, and so I was very excited about this opportunity to start up a brand new area inside of our Hewlett Packard Enterprise Apollo segment, part of our converged servers area. So talk about the new role. Yep, brand new role, brand new team, and it's spawning off of the high performance computing alliance initiative that we signed with Intel back in the June timeframe. You guys may remember that. We've had a couple of developments and updates along the way, some great stuff happening even back at Supercomputing last month, and we've set up these centers of excellence across the world, starting with Grenoble in Europe and then also Houston in the US. There are really a couple of parts to this alliance. 
We're working with very specific industry leaders around a couple of different industries, looking at oil and gas, financial services, and healthcare and life sciences, to go out and tackle some of the really big and daunting problems that many of these customers have by bringing together high performance computing and big data, looking at not only cost optimization but how we optimize performance. At HP we're bringing to bear a lot of our technical expertise in the area. In fact, we've even onboarded net new folks from these various industries who have a very extensive engineering background and those capabilities. And from the Intel side, Intel is bringing in a lot of their new technologies: their Omni-Path architecture, some of their new advancements around Phi, new advancements in non-volatile memory, a variety of different things to help these customers optimize their environments, taking advantage of things like next steps around Lustre, for example, as well. So vertical focus, that's a big part of big data, having vertical domain expertise, and high performance, a lot of compute involved in a lot of these areas. Is that part of the rationale? Absolutely. The thought is that rather than creating these generalized structures that we try and take to market, we work with very specific customers that are leading edge, if you will, and help them to evolve the next generation of infrastructure jointly to be able to solve their specific problems. We then work with them to take that and roll it into a more generic reference architecture that we can bring out to a larger population within the industry vertical. So far it's gaining a lot of traction. We've got a lot of interest from some really major players. Hopefully next time we're together I'll be able to have some of them here with me and we can call some of them out by name, but you can see some of them even around the floor out here at Discover. 
So organizing around solutions seems to be the theme. Absolutely. What's the objective of the group? Help customers migrate to the cloud? Is it more app development, or above that? So we take a very consultative approach. We believe that it's the right way, especially as we're making forays into these new areas of technology. There is a lot that's unknown, and so there's a lot of code optimization that likely needs to happen, understanding workflows and helping to optimize those things. So we're taking, again, a slightly different approach than just being an infrastructure provider, if you will. We actually bring the customers on site, or we go to their site with our engineering teams. The customer engineers as well as our own work side by side in evolving their stack and ensuring that the end result is an optimized solution. So Brent, I wonder if you could talk about the intersection between HPC and big data. Big data, the whole meme, kind of became a tailwind for HPC. Yeah, the hardcore HPC people will give you the line of "big data, big deal, we've been doing this forever." So what are the similarities? What are the differences, if there are any? Well, it really boils down to the actual workflow and the particular applications that the customer's trying to run, what they're trying to actually digest. In many of these environments, we're seeing what is being referred to as high performance data analytics. Take visualization, for example, and let's use oil and gas. If you're looking at seismic information, in order to get to the next level of rendering a 3D image in more detail, you need more data going into that model. So they're taking larger sets of data and digesting that into richer model configurations, moving from 2D to 3D, where we have been, and now from 3D into 4D, which brings in time lapse as well. This is also true in genomics, so a lot of the genomics research. 
Right now, for doing a sequence on a genome, it's approximately a terabyte of data that gets generated. If we look at cancer, for example, inside of the United States alone, there are 13 million cancer patients. So one of the objectives is taking in and cataloging the genomes for all of those people. That's 13 million terabytes, and you can kind of walk down the line and you end up with a significant amount of data being generated. So the solutions that you're creating, can you talk a little bit more about them? What are they, and how do they relate to the HPC side of Hewlett Packard Enterprise? Sure. From a hardware perspective, we're utilizing the advancements and advantages that we've been able to pull together within the Hewlett Packard Enterprise Apollo line, where we've taken compute platforms on the HPC side and focused them for specific development of high performance computing environments that are x86 based and that are both cost optimized and performance optimized. And we bring a number of different accelerator-type technologies into that environment as well, whether that's working with NVIDIA around bringing in GPUs for customers that may have optimized for a CUDA base, or bringing in things like Phi-type technology with Intel to help further accelerate the environment. That's then often paired with the advancements that we've made around storage-based server platforms that really are geared more toward the big data element. If you look at our Apollo 4000 line of servers, it really is storage focused. There, in addition to being able to provide a storage platform for the HPC side, we're working with a number of companies as partners, like Scality and iTernity, to be able to create archives where a lot of that resident information lives, with a big focus on having object archives available to be used as part of the overall environment. 
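To put the genomics figures above in perspective, here is a quick back-of-the-envelope sketch (assuming, as stated, roughly one terabyte per sequenced genome and 13 million patients; the variable names are illustrative, not from the interview):

```python
# Rough scale check for the genomics numbers cited above: assumes ~1 TB of
# data per sequenced genome and 13 million US cancer patients, as stated.
TB_PER_GENOME = 1
PATIENTS = 13_000_000
TB_PER_EB = 1_000_000  # 1 exabyte = 1,000,000 terabytes (decimal units)

total_tb = TB_PER_GENOME * PATIENTS
total_eb = total_tb / TB_PER_EB

print(f"{total_tb:,} TB = {total_eb:g} EB")  # 13,000,000 TB = 13 EB
```

In other words, cataloging every patient's genome at that rate lands in exabyte territory, which is why the storage-focused Apollo platforms and object archives come up in the same breath.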
So separately, these things can be digested as HPC and big data, but quite often what we're finding is that in actuality, as they're being run, they're starting to merge closer and closer together. So when I say HPDA, high performance data analytics, I'm not necessarily saying there's one system that we implement that does both. Quite often these are two elements inside of an overall architecture that need to be able to work together as a seamless whole. Okay, and so your job, like a chef, is to pull together different solutions across HP, but within a big data rubric? Across both big data and HPC. And so we have some solutions that are specifically HPC focused. For example, if we look at the healthcare and life sciences arena, we're working on one particular project that is specifically taking genomic sequencing to the next level, building a next-gen genomic sequencing platform, which is very much focused on HPC. It's very much focused on how you take the work stream and pull it together as much as possible, ensuring that performance stays at a high utilization level, so we can do more of these genomic sequences faster than ever before. On the other side of that, we're also working with a number of institutions that have access, for example, to these government locations where a lot of genomic data currently exists. In the US, for example, there's the health organization that holds all of those pieces. There are a number of universities that have access to that. They're taking that genomic data and pulling it into a cloud base to create a centralized genomic hub, where multiple organizations and institutions can then come in and grab the information they need in order to do the next level of sequencing inside of their own environments. A lot of this stuff is being done for research, and then they contribute that back to the overall cloud. So there's a genomic cloud element there as well. 
We're working on a specific solution with a couple of different customers for that. We also do some things that are more enablement outside of just traditional HPC and big data. We're working on a number of visualization solutions, remote visualization. There are a number of healthcare providers as well as researchers that have been asking for a standardized remote visualization solution for a long time, to be able to manipulate 3D models of genetic and genomic data. The same thing is true inside of oil and gas, where you've got engineers that may be going to a platform out in the North Sea or out in the Gulf and wanting to look at the seismic formation, the reservoir, in a 3D model, manipulate it, and potentially even work with other people in a sort of sharing model across the globe. How do technology partners fit into this? You mentioned visualization a couple of times. Is that something that HP provides? Are you working with partners like CLIC or others? We're working with a number of folks. In that particular case, we're working with Altair. We're working with NICE. We're working with our own, well, not our own anymore, it's now HP Inc, but RGS, right? So there are a number of different providers. Our feeling is that when we're looking at this architecture and working through how it needs to be built out, there are enough similarities between these software packages that we can provide an infrastructure architecture that will be pretty much the same across the different software providers. And then we offer options for these different things. We do the testing and certification of these systems with these different ISVs, so we have that level of confidence, and we're also working with a number of customers to deploy these things in real-world scenarios. 
And that's the difference as well here versus some of the other things that I've worked on and talked to you guys about in the past: we're starting with the collaborations, so that we have lighthouse customers and public references as the basis for taking these solutions, these reference architectures, to market. We believe that's incredibly important in areas such as HPC and big data, where a lot of tuning and expertise really is required. So are these public references that you can share with us on theCUBE, or are these sort of general examples? Public references that I'll be able to share with you guys very shortly. We're just now getting this going, but we are working with a number of companies that have already agreed to be public today. Is there a grand challenge in this space? High performance computing always attracts the smartest people and the hairiest, most gnarly problems. Is there a grand challenge that they're looking toward, maybe not 20 years out, not 10 years out, but within the next two to three years, that you can see on the horizon that they're going to be able to solve? In other words, we kind of went through big data 1.0, then you started to see the HPC thing come together. What do you see in the near term, the big challenge that your customers are asking for that you're going to be able to address in the next, say, 24 to 36 months? Well, there are a number of keys here. Years ago I did a lot in HPC, and at the time everybody was very focused on the top-end supercomputers, and there are still some companies out there that are. But the majority of the tier one companies that we work with, across a number of different verticals, are really looking not necessarily to build the most extreme supercomputer; they're looking to figure out how they can get a similar level of performance but cost optimized. So there is a budgetary element here. But as far as where they're looking to go, a number of things have been discovered. 
So this is really all about innovation within these companies. How do they get to the next level of innovation? We're no longer in a world where you can just trip over discoveries. It takes a lot of crunching of data to get there, and that's where HPDA is really coming to bear. There is a huge number of sensors out there in every one of these industries. That sensor data is being collected. A lot of times folks are not 100% sure exactly what to do with it, but they know there's value resident in it, and so what we're doing is working with them to take that data and move it into fairly complex high performance computing models to be digested, so that decision support and innovation can come out the other side. We believe this is the trajectory these companies will be on in order to hit that next rung of major innovation. So for example, again inside of the genomics space: initially, when the human genome was mapped, we were looking at millions of dollars to do that. From that point it's moved into the thousands, and within the next five years or less we're expecting that to be down in the hundreds for many of these institutions. That means that at some point, in very short order, every one of us will be able to go to the doctor and have our own genome mapped and have that on file, so more personalized medicine can take place. There are a lot of really brave and bold ideas in every one of these industries, and we have an opportunity to work with these customers to help them really harness that. We see the same thing in financial services. For example, there's tons of social media data being collected. A lot of that social media data, if properly harnessed, can actually predict the direction of markets. We're already starting to see some of that. And so it's this wild and new world that we're part of, right? Which is incredibly exciting. 
So we're helping some of these major financial institutions to start to make that turn. Which name should we buy? That's what I'm saying. Brent, final question: talk about the vibe here at the show. What are the conversations you're having? Obviously the vertical focus, prepackaging, the applications. Big focus for customers around big data; certainly having high performance power behind it makes a lot of sense. What are some of the conversations you're having here at the show? A lot of the conversations I'm having here at the show, likely because this whole alliance and this new way of doing things is fairly new for us, are very exploratory. Honestly, it's more on the research and development side, working with a number of different institutions. I had a theater session yesterday and had a number of folks come up to me from universities as well as from industry, and we were just talking through what some of their various problems are and what they're looking to do, trying to bring these elements together. A number of the research institutions really are sort of the harvesting places; it's really where a lot of these ideas come together before making their way into industry. And so we're also working with a number of these institutions. Bridging academia with industry. Exactly. And we believe that makes a lot of sense. And honestly, the industry leaders that we work with across the board also believe the same thing. And so there are a lot of alliances that have been formed between these top-tier companies and universities, and we're very fortunate to also be part of those environments, and evolving as a partner. Real quick, what's the bumper sticker for the show this year? The bumper sticker for the show. Well, Hewlett Packard Enterprise is not only new, and we've kind of revitalized everything that we're doing in this space, but we really are focused on driving to these end-to-end solutions. 
We've got them in a number of ways. The previous things that I talked to you about in the past were about the IT back end. We're doing more amazing things than ever before in that space; you guys probably heard about Synergy coming to bear as well in the announcement yesterday. And then we're taking that a major step forward, looking at the specific problems that customers have and working with them directly to solve some of the hairiest issues in the industry. Brent Allen, Group Manager of Vertical Solutions for HPC and Big Data at Hewlett Packard Enterprise, thanks for joining us on theCUBE. Thanks for sharing the insight here on theCUBE, vertically focused insight. This is theCUBE. I'm John Furrier, with Dave Vellante. We'll be back with more live coverage from London after this short break.