Live from Las Vegas, it's theCUBE covering HPE Discover 2017, brought to you by Hewlett Packard Enterprise. Welcome back everyone, we are here live in Las Vegas for day two of three days of exclusive coverage from theCUBE here at HPE Discover 2017. Our next two guests are Bill Mannel, VP and General Manager of HPC and AI for HPE. Great to see you. And Dr. Nick Nystrom, Senior Director of Research at the Pittsburgh Supercomputing Center. Welcome to theCUBE, thanks for coming on. Appreciate it, as we wrap up day two. First of all, before we get started, I love the AI, I love the high-performance computing, we're seeing great applications for compute. Everyone now sees that a lot of compute actually is good. That's awesome. What is the Pittsburgh Supercomputing Center? Give a quick update and describe what that is. Sure, the quick update is we're operating a system called Bridges. Bridges is operated for the National Science Foundation. It democratizes HPC; it brings people who have never used high-performance computing before to be able to use HPC seamlessly, almost as a cloud. It unifies HPC, big data, and artificial intelligence. So who are some of the users that are getting access that they didn't have before? Can you just talk about some of the use cases of the organizations or people that you guys are opening this up to? Sure, I think one of the new communities that is very significant is deep learning. So we have collaborations between the University of Pittsburgh life sciences and the medical center, with Carnegie Mellon machine learning researchers, where we're looking to apply AI and machine learning to problems in breast and lung cancer. Yeah, we're seeing the data. Talk about some of the innovations that HPE's bringing with you guys in the partnership, because people are seeing the results of using big data and deep learning in breakthroughs that weren't possible before.
So not only do you have the democratization element happening, you have a tsunami of awesome open source code coming in from big players. You see Google donating a bunch of machine learning libraries. Everyone's donating code. It's like open bar and open source, as I say, and the young kids that are coming in new are the innovators as well. So not just us systems guys, but a lot of young developers are coming in. What's the innovation? Why is this happening? What are the aha moments? Is it just cloud? Is it a combination of things? What do you think? A combination of all the big data coming in and then new techniques that allow us to analyze it and get value from it. So in the traditional HPC world, typically we built equations which then generated data. Now we're actually doing the reverse: we take the data and then build equations to understand the data. So it's a different paradigm. And so there's more and more energy in understanding those two different techniques of getting to the same answers, but in a different way. So Bill, you and I talked in London last year with Dr. Goh. Yes. And we talked a lot about SGI and what that acquisition meant to you guys. So I wonder if you could give us a quick update on the business. I mean, it's doing very well. Meg talked about it on the conference call this last quarter. Really a high point in growth. What's driving the growth, and give us an update on the business? Sure. The thing that's driving the growth is all this data and the fact that customers want to get value from it. So we're seeing a lot of growth in industries like financial services and manufacturing, where folks are moving to digitization, which means that in the past they might have done a lot of their work through experimentation. Now they're moving it to a digital format and they're simulating everything. So that's driven a lot more HPC over time.
As far as the SGI integration is concerned, we're at about the halfway point. We've got the engineering teams together and we're driving a roadmap, and a new set of products is coming out: our Gen10-based products are on target and they're going to be releasing over the next few months. So Nick, from your standpoint, there's been an ebb and flow in the supercomputer landscape for decades, going way back to the '70s and the '80s. So from a customer perspective, what do you see now? Obviously China is much more prominent in the game. It's sort of an arms race, if you will, in computing power. From a customer's perspective, what are you seeing? What are you looking for in a supplier? Well, I agree with you, there is this arms race for exaflops. Where we are really focused right now is enabling data-intensive applications. Looking at big data as a service, HPC as a service, really making things available to users to be able to draw on the large data sets you mentioned. To be able to put capability-class computing, which will go to exascale, together with AI and data analytics under one platform, under one integrated fabric: that's what we did with HPE for Bridges. And we're looking to build on that in the future to be able to do the exascale applications that you're referring to, but also to couple in data and to be able to use AI with classic simulation to make those simulations better. So it's always good to have a true practitioner on theCUBE. But when you talk about AI and machine learning and deep learning, John and I sometimes joke, is it the same wine in a new bottle? Or is there really some fundamental shift going on that just happened to emerge in the last six to nine months? I think there is a fundamental shift. And the shift is due to what Bill mentioned. It's the availability of data. So we have that.
We have more and more communities who are building on that. You mentioned the open source frameworks. So yes, they're building on the TensorFlows, on the Caffes. And we have people who have not been programmers. They're using these frameworks, though, and using them to drive insights from data they now have access to. These are flipped upside down. I mean, that's the point you were making: the models are upside down. This is the new world. I mean, it's crazy. So if that's the case, and I believe it, it feels like we're entering this new wave of innovation. For decades we talked about how we marched to the cadence of Moore's Law. That's been the innovation. Think back to your five-megabyte disk drive, and then it went to 10 and then 20 and 30, and now it's four terabytes. Okay, wow. Compared to what we're about to see, it pales in comparison. So help us envision what the world is going to look like in 10 or 20 years. And I know it's hard to do that, but can you help us get our minds around the potential that this industry is going to tap? So first of all, I think the potential of AI is very hard to predict. We see that. What we demonstrated in Pittsburgh with the victory of Libratus, the poker-playing bot, over the world's best humans is the ability of an AI to beat humans in a situation where they have incomplete information, where you have an antagonist, an adversary who is bluffing, who is reacting to you, and whom you have to deal with. And I think that's a real breakthrough. We're going to see that move into other aspects of life. It will be buried in apps. It will be transparent to a lot of us, but those sorts of AIs are going to influence a lot. That's going to take a lot of IT on the back end for the infrastructure, because these will continue to be compute hungry.
So I always use the example of Kasparov: he got beaten by the machine, and then he started a competition to team up with a supercomputer and beat the machine. Humans plus machines beat machines. Do you expect that's going to continue? Maybe both your opinions. I mean, this is sort of spitballing here, but will that augmentation continue for an indefinite period of time, or are we going to see the day that doesn't happen? I think over time you'll continue to see progress, and you'll continue to see more and more regular, symmetric types of workloads being done by machines. And that allows us to do the really complicated things that the human brain is able to process better than perhaps a machine brain, if you will. So I think it's exciting from the standpoint of being able to take some of those other roles and get those done in perhaps a more efficient manner than we were able to before. Bill, Nick, I want to get your reaction to the concept of data. As data evolves, you're talking about the models. I like where you're going with that, because things are being flipped around. In the old days, it was: I want to monetize my data. I have data sets, people look at their data, I'm going to make money from my data. So people would talk about how to monetize the data. The old days, like two years ago. People actually tried to monetize their data, and that could be a use case for one piece of it. Other people are saying, no, open it up, make people own their own data, make it shareable, make it more of an enabling opportunity, or create opportunities to monetize differently. A different shift. That really comes down to the insights question. What trends do you guys see emerging where data is much more of a fabric, less of a discrete, monetizable asset, but more of an enabling asset? What's your vision on the role of data as developers start weaving in some of these insights? I think that's right on.
What's your reaction to the role of data, the value of the data? Well, one thing that we're seeing, especially in some of our big industrial customers, is that they really want to be able to share that data, collect it in one place, and then have it regularly updated. If you look at a big aircraft manufacturer, for example, they're actually putting sensors all over the aircraft and, in real time, bringing data down and putting it into a place where, as they're doing new designs, they can access that data and use it as a way of making design trade-offs and design decisions. So a lot of customers that I talk to in the industrial area are really trying to capitalize on all the data possible to bring new insights in, to predict things like future failures, and to figure out how they need to maintain whatever they have in the field, those sorts of things. And that's just keeping it within the enterprise itself. I mean, that's a really big challenge, just to get data collected in one place and be able to efficiently use it within an enterprise. We're not even talking about pan-enterprise, just within the enterprise. That is a significant change that we're seeing: an actual effort to do that and see the value. And high-performance computing really highlights some of these nuggets that are coming out. If you just throw compute at something, if you set it up and wrangle it, you're going to get these insights. I mean, new opportunities. Yeah, absolutely. What's your vision, Nick? How do you see the data? And how do you talk to your peers and people who are genuinely curious about how to approach it, how to architect data modeling and how to think about it? Yeah, I think one of the clearest examples of managing that sort of data comes from the life sciences.
So we're working with researchers at the University of Pittsburgh Medical Center and the Institute for Precision Medicine at the Pitt Cancer Center. And there it's going to bring together very large data, as Bill alluded to, but there it's very disparate data. It is genomic data. It is individual tumor data from individual patients across their lifetimes. It is imaging data. It's electronic health records. And we're trying to do this sort of AI on that to deliver true precision medicine, to be able to say that for a given tumor type, we can look into that and give you the right therapy. Or, even more interestingly, how can we prevent some of these issues proactively? Dr. Nystrom, it's expensive doing what you do. Is there a commercial opportunity at the end of the rainbow here for you? Or is that taboo? I mean, is that a good thing? Thank you. It's both. So as a national supercomputing center, our resources are absolutely free for open research. That's a good use of our taxpayer dollars. They've funded these. We've worked with HPE. We've designed the systems. It's great for everybody. We also can make this available to industry at an extremely low rate because it is a federal resource. We do not make a profit on that. But looking forward, we are working with local industry to let them test things, to try out ideas, especially in AI. A lot of people want to do AI; they don't know what to do. And so we can help them. We can help them architect solutions, put things on hardware, and when they determine what works, they can scale that up, either locally on-prem or with us. This is a great digital resource. You think about, you talk about the federally funded. I mean, you can look at Yosemite, the national parks. These are the Yellowstones. These are natural resources. But now you start thinking about the goodness that's being funded. You can talk about democratization. Medicine is just the tip of the iceberg.
This is an interesting model as we move forward. We see what's going on in government and how things are instrumented, some not: delivery of drugs, medical care. All these things are coalescing. How do you see this digital age extending? Because if this continues, we should be doing more of these, right? We need to be. It makes sense. So is there, I mean, I'm just not up to speed on what's going on with the federally funded side. Yeah, I think one thing that Pittsburgh has done with the Bridges machine is really try to bring in data and compute and all the different types of disciplines, and provide a place where a lot of people can learn, where they can build applications and things like that. That's really unusual in HPC. A lot of times HPC is around big iron. People want to have the biggest iron, basically, on the Top500 list. Here the focus hasn't been on that. The focus has been on really creating value through the data and getting people to utilize it and then build more applications. You know, I'll make an observation. When we first started doing theCUBE, we talked about big data and we said that the practitioners of big data are the ones who are going to make all the money. And so far that's proven true. I mean, you look at the public big data companies and none of them are making any money. Maybe this was sort of true with ERP, but not like it is with big data. It feels like AI is going to be similar: the consumers of AI, those people that can find insights from that data, are really where the big money is going to be made here. I don't know, it just feels like it. You mean a long tail of value creation?
Yeah, in other words, in the computing industry it used to be Microsoft and Intel that became trillion-dollar-value companies, and maybe there's a couple of others, but it really seems to be the folks that are absorbing those technologies, applying them, solving problems, whether it's healthcare or logistics, transportation, et cetera, that's where the huge economic opportunities may be. I don't know if you guys have thought about that. Well, I think that's happened a little bit in big data. So if you look at what the financial services market has done, they've probably benefited far more than the companies that make the solutions, because now they understand what their consumers want and they can better predict risk on things like life insurance. And you can make that argument on Facebook for sure. Absolutely, from that perspective. I do expect the same thing, to your point, around AI as well, so the folks that really use it well will probably be the ones that benefit from it. Yeah, because the tooling is very important and you've got to make the application; that's the end state in all this, that's where the rubber meets the road. Exactly, absolutely. All right, so final question: what are you guys showing here at Discover? What's the big HPC story for you guys? So we're actually showing our Gen10 products, so this is with the latest microprocessors in all of our Apollo lines; these are platforms specifically optimized for HPC and now also artificial intelligence. We have a platform called the Apollo 6500, which is used by a lot of companies to do AI work. It's a very dense GPU platform and does a lot of processing in terms of video, audio, these types of things that are used a lot in some of the workflows around AI. Great. Nick, anything spectacular for you here that you're interested in?
So we did show here, we had a video in Meg's opening session, and that was showing the poker result. I think that was really significant because it took a great amount of computing, 19 million core hours, so it was an HPC AI application, and I think that was really interesting; it was a success. The imperfect information piece, really, we talked about this earlier in our last segment with your colleagues. It really amplifies the unstructured data world, people trying to solve the streaming problem with all this velocity. You can't get everything, so you need to use machines; otherwise you have a haystack of needles instead of trying to find the needle in the haystack, as it were. Okay, final question. Just curious on this: it's a federal resource, but it feels like a natural resource. Is there like a line to get in, like a waiting list to camp at the park, where I've got to get in there early? How do you guys handle the flow for access to the supercomputing center? Is it, my uncle works there, I know a friend of a friend? Is it a reservation system? I mean, who gets access to this awesomeness? So there's a peer-reviewed system; it's fair. People apply for large allocations four times a year. This goes to a national committee; they met this past Sunday and Monday for the most recent one. They evaluate the proposals based on merit and make awards accordingly. We make 90% of the system available through that means. We have 10% discretionary that we can make available to the corporate sector and to others who are doing proprietary research in data-intensive computing. Is there a duration when you go through the application process, minimums and commitments to get involved, for the folks that might be interested in hitting you up? For academic research, the normal award is one year. These are renewable; people can extend these, and they do. What we see now is, of course, for large data resources, people keep those going.
The AI knowledge base is 2.6 petabytes; that's a lot. For industrial engagements, those could be of any length. Any startup action coming in? Is it getting bigger? Absolutely. A co-worker of mine has been very active in life sciences startups in Pittsburgh and engaging many of these. We have meetings every week with them now, it seems. And with other sectors, because that is such a great opportunity. Well, congratulations, it's fantastic work and we're happy to promote it and get the word out. Good to see HPE involved as well. Thank you for sharing, and congratulations. Good work, guys. Okay, great way to end the day here: democratizing supercomputing, bringing high-performance computing to everyone. That's what the cloud's all about; that's what great software's out there with AI. I'm John Furrier, with Dave Vellante, bringing you all the data here from HPE Discover 2017. Stay tuned for more live action after the short break.