From London, England, it's theCUBE, covering Discover 2016 London. Brought to you by Hewlett Packard Enterprise. Now, here's your host, Dave Vellante, and Paul Gillin. Back at HPE Discover 2016, on the banks of the Thames at the London Docks, ExCeL London, where it is chilly; it's below freezing here in London, at least it was this morning. And Paul and I are really pleased to have a segment now on SGI, an acquisition that Hewlett Packard Enterprise made for, I think, around $250 million. Bill Mannel is here; he's the Vice President and General Manager of the High Performance Computing Group at Hewlett Packard Enterprise. And he's joined by Dr. Eng Lim Goh, who's the Senior Vice President and CTO of SGI. Gentlemen, welcome to theCUBE. It's great to see you guys. Thank you for having us. So we're excited about this. It was kind of a $250 million, okay, that's not super small, but it's sort of a tuck-in acquisition for HPE. What was the logic behind it? Well, the logic behind it was to be able to take a lot of the technology that SGI had in particular areas and scale it, given the scale that Hewlett Packard Enterprise has. We felt that a lot of the technology, a lot of the capabilities, were very complementary. So it made a really good fit in order for us to advance in both our high performance computing and our mission critical areas. So Dr. Goh, how would you describe the architecture of SGI, and give us the background there? Yeah, there are two architectures. The first one is the HPC side, which goes into Bill Mannel's organization. That one is where we've built highly scalable supercomputers that rank high in the Top500 list of the supercomputers of the world. This is ICE? This is the ICE X. Well done, yes. And you have HPE being number one in the 100-to-500 range, and what we bring is that high performance layer at the top 100, right? And this is where the ICE architecture plays well.
Now, we've deployed huge systems, and that will be leveraged well by Bill's organization. The other architecture is a scale-up architecture called UV, which will be incorporated into the mission critical side of HPE's business. And one of the biggest businesses there would be SAP HANA. In the two years we've been selling this architecture for SAP HANA, we've already done 130 systems and almost a petabyte of in-memory HANA capacity in total. Okay, so these are obviously Intel-based systems, right? Yes, they are. And did you sell Apollo into SAP HANA, or is there an overlap there? No, Apollo is more for HPC, and probably more hyperscale from that standpoint. So there was really not a scale-up version of Apollo, right? No, no, it's scale-out. Okay, so this fits nicely into that portfolio. That's correct, yep, absolutely. And then obviously SGI is a very mature code base and company, and so it positions you. Talk about that a little bit in terms of the Apollo overlap. Sure, so on the Apollo side, as Dr. Goh was saying, we've really focused on a lot of the commercial space, so financial services, manufacturing, oil and gas, whereas SGI has focused a lot on the public sector, research, and so forth. So that has driven a difference in the portfolios that we see. Now, there's a little bit of overlap from that standpoint, but in general, as we look at the portfolios, they're very complementary as well. And so we do expect that in some cases we're going to migrate the technologies into a single product line; in other cases, the product lines will continue to exist as they are. Will your underlying architecture remain the same, or will it change? Well, certainly on the HPC side, we've focused on a water-cooled architecture at the top end, because that's where customers are focused on the highest performance per rack, and you really need water cooling to do that. And then underneath, it's going to be our air-cooled portfolio as we go forward.
So very similar to what HP offers today. Now, what industries are you targeting right now? Has that changed? I'd say, as I mentioned previously, HP has been very focused on the commercial side, especially financial services, manufacturing, oil and gas. We've started to expand more, and part of the advantage of the SGI acquisition is the public sector, which they're very strong in. They're strong in life sciences. So in a lot of areas where we want to get better, they're very strong. So again, even from a go-to-market standpoint, we have good complementary resources. And the market for HPC, the traditional market for HPC, if I understand it, I'm not an expert in this market, but I read HPCwire. HPCwire, yes. Great publication, and I used to follow that market, but it seems to be expanding as a result of, we joked about big data before in your title, but it seems to be expanding into this space. Talk about the market dynamic. Yeah, so the HPC market is actually one of the fastest growing markets from that perspective. It's growing probably in the high single digits, whereas a lot of the other markets in servers are probably either growing just a little bit or even flat from that standpoint. And a lot of that growth is around the big data that you mentioned. So more and more customers, whether commercial or public sector, are needing HPC architectures to actually get value out of all that data. Dr. Goh, top supercomputer performance brings bragging rights, but it also has strategic implications. And of course now China is bragging about some of its capabilities, becoming self-sufficient and going to have its own chip by the end of the year, et cetera. So what's the importance and significance of that top supercomputer performance, both business-wise as well as strategically and technology-wise?
Business-wise, it is interesting that if you add up all the capability of the top 10 machines, that capability equals half of the remaining 490. That's a very surprising statistic. So that alone tells you that being in that top 10 gives you a position of strength. Now secondly, strategically, because you push the envelope to be in the top 10, getting that system to stay up and run one application across the entire machine requires capability. This is very different from a hyperscale data center of, say, a Google, where they will build a machine as big as a top 10 supercomputer, but that machine will support millions of users. The machine in the top 10, however, must be able to support one user running one application across the entire machine, reliably. That requires capability. But that's a use case, a very limited use case, and it seems like, bragging rights aside, the application of your technology, I mean, were there markets that you simply were not able to approach as an independent company because of a lack of resources? That's a very strong point. In fact, our biggest machine today is NASA's machine, at number 13 on that list. We've been given opportunities to bid for the top 10, and there are times when, one of the reasons was because of the financial resources of our smaller company, we decided not to bid. It wasn't the only reason, but it certainly was one of the reasons. And now, with this acquisition, being part of a bigger organization with stronger financial resources, we should be thinking of starting to go for that top 10. Could you describe what's unique about your architecture, in words that mere mortals can understand? First and foremost, you have to build the machine by assembling parts that are available from industry. That's the first thing we do. The reason for this strategic decision is so that we can embrace a big swath of the applications out there.
It is using open standards to build a machine. Number two, we make sure that when we integrate, we integrate best of breed, and integrate well. And finally, when we do so, we do it in a highly energy-efficient way. The reason for this is that those biggest machines are now starting to consume on the order of 20 to 30 megawatts of power. If you translate that to business, that's a $30 million electrical bill a year. So if you don't focus enough on energy efficiency and just let that run wild, you might go to 60 megawatts, and that will be a $60 million electrical bill a year. So there is a lot of effort put in to control the energy consumption and yet perform well, with open-standard components. That requires the capability to do that balance well. Twenty-five years ago, there was this sort of explosion of supercomputer companies that hit the market. There was obviously Cray, and then the sons of Cray, and whatever instantiations came after, like Convex and Kendall Square Research and Thinking Machines, right, Danny, what's his name? And there was such promise for that industry. It was getting a bunch of investment from venture capital. And then you sort of didn't hear much. And now it seems like HPC is coming back. First of all, is that true? Is that historical characterization accurate, and will we see more companies entering this space? Yeah, well, I think what happened is that there was a period of time during which there was a lot of innovation, and so you innovated, and then you could get performance, and you found advantage in doing that. Then, after a while, standard architectures such as the Intel architecture became powerful enough to handle a lot of problems. Now, conversely, you're seeing that that architecture is starting to hit limitations.
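As an aside, the electrical-bill figures quoted above imply a simple rule of thumb of roughly $1 million per megawatt-year. Here is a quick sketch of that arithmetic; the rate is an assumption inferred from the numbers in the conversation, not an official HPE or SGI figure:

```python
# Rule of thumb implied by the figures above: ~30 MW costs ~$30M/year,
# i.e. roughly $1 million per megawatt-year of sustained draw.
# (Assumed rate for illustration only, not an official number.)
COST_PER_MW_YEAR = 1_000_000  # dollars per megawatt, per year

def annual_power_bill(megawatts: float) -> float:
    """Estimated yearly electricity cost for a machine drawing `megawatts`."""
    return megawatts * COST_PER_MW_YEAR

for mw in (20, 30, 60):
    print(f"{mw} MW -> ${annual_power_bill(mw) / 1e6:.0f}M per year")
```

Under that assumed rate, letting a system grow from 30 to 60 megawatts roughly doubles the bill, to about $60 million a year.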
So when you talk to a lot of big customers, they're saying, we're really concerned, because even though the chip itself is becoming more powerful, getting data in and out of it is now the problem. So bandwidth becomes the problem. I left IBM out of it; in fairness, right, there's Power. And so now you're starting to see a proliferation of new technologies. FPGAs are coming back, for example. I worked with them 10 years ago in HPC, and now they're coming back. So again, it's a response to the fact that the standard architectures are starting to run out of gas. The problems continue to get bigger. I talk to customers all the time who are saying, I've got a problem that I can't solve with the current architectures. And these are common names, not just the folks you associate with the top 10. These are common names in big industries, aircraft and automobiles and so forth, that are saying, hey, the current architectures are running out of gas; we need a new paradigm from that standpoint. So you're seeing that reinvent itself. And that's not a new microprocessor, is it? I mean, you're Intel-based, right? It's new microprocessors, new buses, new technologies, both from a memory perspective as well as storage. Are you Intel-based across the board? Is that correct? That's where we are currently, yes. But I mean, for instance, you've seen IBM always talks about Power and its bandwidth, and how Intel's running out of gas. Andy Bechtolsheim said yesterday, don't worry, Moore's Law is not dead. It was an interesting debate, but clearly from a bandwidth standpoint, even SPARC, though I don't think Oracle plays in the supercomputer space at this point in time. But with those alternatives, there always seems to be room for two chipsets in the world. At least two; I see others emerging. So you would expect that. So it feels like there's another renaissance coming in HPC. I'm sure you've been looking closely at the Machine that HPE was demonstrating this week. Very exciting.
What is your impression of what is breakthrough about that technology? And will you be one of the first to apply it? Yes, absolutely. We got really excited. We knew about it, of course, before the acquisition, but now that we have insight into it, we are even more excited. The way I see it, there are three parts to the Machine that are exciting. There are many parts that are exciting, but those three are the ones we will be very keen to incorporate into our architecture going forward. Number one is the concept of memory being unified, where everything else comes to the memory. Number two, the enablement of that unified memory through Gen-Z, which is an open consortium, brilliant, because we've always believed in an open approach to building a supercomputer. And then finally, the delivery of that openness, with high bandwidth and low latency, through silicon photonics. So those are the three parts: unified memory, Gen-Z to deliver the unification, and silicon photonics to deliver Gen-Z. And we are looking at taking pieces of it and implementing them in the scale-up architecture early, even before the full Machine materializes. So when HPE talks about commercial availability of some components by 2018, that could show up in your systems. Precisely. It could show up in HPE machines too, right? And you didn't cite persistent memory. Is that because IO bandwidth is not as much of an issue, or is that part of number one? Yes, that's part of number one. The concept of a unified memory, and the fact that it's also persistent, gives it strong value, yes. So eliminating, essentially, IO. That's right, the need to make copies. Gene Amdahl, right, said the best IO is no IO. The biggest issue with IO, if you take a step back, is that on the supercomputing side, the US government is putting out proposals to build supercomputers at the exascale level, which is a thousand times faster than the previous petascale level.
But when they put that out, they don't just talk about exaflops of compute power; they're also talking about needing to handle the exabytes of data that result from it. So the issue with IO is, if you have to make a copy, that is IO. It has to move somewhere else, or out and back to the same location. That's IO, right? So the whole goal is to reduce copying, and persistent memory allows you to reduce copying. Just quickly, the potential of GPUs as an architectural component of supercomputers, what are you doing in that area now, if anything? It is essential, right? If you are highly compute-intensive focused, number one, or if you are highly machine learning, deep learning focused, accelerators in the form of GPUs are becoming the de facto way to achieve those numbers, right? In fact, you're now starting to see GPUs coming out in two versions: the highly floating-point-intensive ones, and the reduced-floating-point-precision ones for machine learning. And we need to build the ICE architecture to be able to accept blades with those GPUs in them. And we are getting inquiries now from customers who used to buy supercomputers of one kind and are now thinking, you know, I'm actually building a supercomputer for machine learning, which could be quite different from a supercomputer for highly compute-intensive applications. Interesting. How much of the business is government, specifically US government? I'm really interested in defense. Is it still sizable? In terms of the total market? Yeah, in terms of the total market. Yeah, I would roughly say at least 10% to 15% is public sector from that standpoint. Yeah, okay. And a big chunk of that is defense. Yeah, that's correct. And defense and intelligence. Yeah, okay. And so a US administration that is more likely to invest in defense is a good thing for the industry, is that right?
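The "best IO is no IO" point above, that reducing copies by updating data in place is the whole goal, can be illustrated with a small sketch. This uses Python's mmap against an ordinary file purely as an analogy for persistent memory; it is not SGI or HPE software:

```python
import mmap
import os
import tempfile

# Stand-in for a large dataset: a 4 KiB file of zero bytes.
path = os.path.join(tempfile.mkdtemp(), "data.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

# Copy-based update: read the whole file into a buffer, modify it,
# and write the whole thing back -- two full passes of IO for a
# one-byte change.
with open(path, "rb") as f:
    buf = bytearray(f.read())   # copy 1: file -> user buffer
buf[0] = 0xFF
with open(path, "wb") as f:
    f.write(buf)                # copy 2: user buffer -> file

# In-place update: map the file into the address space and touch one
# byte directly; no explicit read or write of the whole dataset.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as m:
        m[1] = 0xEE             # modified in place, no buffer copies
        m.flush()

with open(path, "rb") as f:
    data = f.read()
print(data[0], data[1])  # 255 238
```

Persistent memory takes this further: if the data already lives in byte-addressable, durable memory, even the mapping step and the underlying page IO disappear.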
Yeah, and we probably already know this: roughly a year, 18 months ago, the Obama administration issued an executive order around the National Strategic Computing Initiative. And that reflects the fact that they feel the need to drive supercomputing, not only for defense purposes, but also for economic well-being going forward. Yeah, and, you know, for competitiveness globally, and it's kind of, you know, the moonshot example, no pun intended. Okay, last question. Take us through the timeline of the integration and what we should expect in terms of milestones over the next 12 to 18 months. Oh, sure. So basically, we agreed to come together in August, and then we actually closed the agreement; we went through the regulatory process and so forth, with analyses and permissions, if you will. The shareholders of SGI voted to make it happen. That was on the 1st of November, so now we're one company, if you will. The employees of SGI become HPE employees in the US on January 1; we call that employee day one. At that point in time, you know, they show up in our systems, they get HPE benefits, et cetera, et cetera. Outside the US, you'll see that around the May timeframe is what we're thinking; this takes longer outside the US. So that's in terms of where the employees are. From a roadmap perspective, we can actually have an NDA conversation with a customer today to talk about the roadmap. So we're actually pretty aligned as we came out of the November 1st close of the business. We had a lot of engineering teams working together, flying back and forth to the various sites, getting that roadmap together. So we do have a consolidated roadmap that we can share with customers today. And then, over about a period of a year, we'll begin to integrate. So in the short term, Cassio Conceicao, who's the COO of SGI, is actually going to manage, if you will, the SGI business.
And there are some discontinuities in terms of fiscal years and things like that which present some challenges, some quota and sales rep issues and those sorts of things. So we'll be working through those, but the goal is that about this time next year, we'll be well on our way, if you will, to a solid integration. And you'll keep the SGI brand, or will that be folded into HPE? We're still discussing that from that standpoint. So in general, it will probably live on in some of the lines going forward. Good. All right, gentlemen, thanks very much for the update. Really appreciate you coming on. Thank you. Very well, thank you as well. All right, keep it right there; Paul and I will be back from London right after this short break. Be right back.