From Denver, Colorado, it's theCUBE. Covering Supercomputing 17, brought to you by Intel. Hey, welcome back everybody. Jeff Frick here with theCUBE. We're in Denver, Colorado at Supercomputing 17. I think it's the 20th year of the convention. 12,000 people. We've never been here before. It's pretty amazing. Amazing keynote, really talking about space and really big, big, big computing projects. So excited to be here, and we've got our first guest of the day. He's Bernhard Friebe. He is the Senior Director of FPGA, I'll get that right by the end of the day, Software Solutions for Intel's Programmable Solutions Group. First off, welcome, Bernhard. Thank you, I'm glad to be here. Absolutely. So have you been to this conference before? Yeah, a couple of times before. It's always a big event, always a big show for us, and so I'm excited. Yeah, and it's different too because it's got a lot of academic kind of influence as well. You walk around the outside, it's just pretty hardcore. Yeah, it's wonderful, and you see a lot of innovation going on, and we need to move faster. We need to move faster. That's what it is. And that's what you're all about: acceleration. So Intel's making a lot of announcements really about acceleration, and FPGAs for acceleration in data centers and big data and all these great applications. So explain to us a little bit how that space is evolving and what some of the recent announcements are all about. Yeah, so I mean, the world of computing must accelerate. I think we all agree on that. We all see how that's a key requirement, and FPGAs are really a truly versatile, multifunction accelerator. They accelerate so many workloads in the high-performance computing space, be it financial, genomics, oil and gas, data analytics. Machine learning is a very big one. The list goes on and on.
And so we're investing heavily in providing solutions which make it much easier for our users to develop and deploy FPGAs in a high-performance computing environment. Right, so you guys are taking a lot of steps to make the software programming of FPGAs a lot easier, so you don't have to be a hardcore hardware engineer, so you can open it up to a broader ecosystem and get a broader solution set, is that right? That's right. And it's not just the hardware. How do you unlock the benefits of FPGAs as a versatile accelerator, through their parallelism, their ability to do real-time, low-latency acceleration of many different workloads, and how do you enable that in an environment which is truly dynamic and multifunction like a data center, right? And so the product we've recently announced is the Acceleration Stack for Intel Xeon CPU with FPGAs, which enables that use model. So what are the components of that stack? So it starts with hardware. We are building hardware accelerator cards. These are PCI Express plug-in cards called Programmable Acceleration Cards. We have an integrated solution where you have a Xeon and an FPGA in package, but what's common is a software framework, a solution stack, which sits on top of these different hardware implementations, which really makes it easy for a developer to develop an accelerator and for a user to then deploy that accelerator and run it in their environment. And it also enables a data center operator to basically enable the FPGA like any other compute resource by integrating it into their orchestration framework. So multiple levels, taking care of all those needs. So it's interesting, because there are a lot of big trends that you guys are taking advantage of. Obviously we're at Supercomputing, but big data, streaming analytics is all the rage now. So more data, faster, reading it in real time, pumping it into the database in real time.
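The "consistent software interface over different hardware implementations" idea Bernhard describes can be sketched in software terms. This is purely a conceptual illustration, not Intel's actual acceleration stack API; every class and method name here is made up for the sketch:

```python
# Conceptual sketch only, not Intel's API: one common interface that
# developer code targets, with interchangeable hardware backends
# (a PCIe accelerator card vs. an in-package Xeon+FPGA).

class Accelerator:
    """Common interface the acceleration stack exposes to developers."""
    def run(self, workload: str) -> str:
        raise NotImplementedError

class PcieCardFpga(Accelerator):
    """Stand-in for a PCI Express Programmable Acceleration Card."""
    def run(self, workload: str) -> str:
        return f"ran {workload} on PCIe accelerator card"

class InPackageFpga(Accelerator):
    """Stand-in for the integrated Xeon-plus-FPGA-in-package solution."""
    def run(self, workload: str) -> str:
        return f"ran {workload} on in-package Xeon+FPGA"

def deploy(accel: Accelerator, workload: str) -> str:
    # User code is written once against the common interface and can
    # migrate between hardware platforms unchanged.
    return accel.run(workload)

assert deploy(PcieCardFpga(), "genomics") == "ran genomics on PCIe accelerator card"
assert deploy(InPackageFpga(), "genomics") == "ran genomics on in-package Xeon+FPGA"
```

The design point this models is the one made in the interview: because user and orchestration code depends only on the common layer, swapping the hardware underneath does not force a rewrite.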
And then of course right around the corner we have IoT, the Internet of Things, and all these connected devices. So the demand for increased speed to get that data in, get that data processed, and get the analytics back out is only growing exponentially. That's right, and FPGAs, due to their flexibility, have distinct advantages there. The traditional model is look-aside, or offload, where you have a processor and then you offload your task to your accelerator. The FPGA, with its flexible I/Os and flexible core, can actually run directly in the data path. That's what we call inline processing. And what that allows people to do is, whatever the source, be it cameras, be it storage, be it the network or Ethernet, you can stream directly into the FPGA and do your acceleration as the data comes in, in a streaming way. And FPGAs provide really unique advantages there versus other types of accelerators: low latency, very high bandwidth, and they're flexible in the sense that our customers can build different interfaces, different connectivity, around those FPGAs. So it's really amazing how versatile the usage of FPGAs has become. And it's pretty interesting, because you're using all the benefits that come from hardware-based solutions, where you just get a lot of benefits when things are hardwired, with kind of a software component, enabling a broader ecosystem to write ready-made solutions and integrations into the existing solutions they already have. So, great approach. Yeah, so the acceleration stack provides a consistent interface to the developer and the user of the FPGA. What that allows our ecosystem and our customers to do is to define these accelerators based on this framework, and then they can easily migrate those between different hardware platforms. So we're building a sort of future-proofing into the solution. And the consistent interface then allows our customers and partners to build their software stacks on top of it.
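The look-aside versus inline distinction Bernhard draws can be sketched in software terms. This is a conceptual model only, not real FPGA code; the function names are illustrative, and `accelerate` is just a stand-in for whatever work the accelerator performs:

```python
# Conceptual sketch: look-aside (offload) vs. inline (streaming) acceleration.

def accelerate(chunk):
    """Stand-in for the accelerator's work (here: just summing values)."""
    return sum(chunk)

def look_aside(source):
    # Look-aside / offload: the host gathers the full data set first,
    # then hands it to the accelerator in one shot and waits.
    batch = [value for chunk in source for value in chunk]
    return accelerate(batch)

def inline(source):
    # Inline: the accelerator sits in the data path and processes each
    # chunk as it streams in, so results accrue with low latency.
    total = 0
    for chunk in source:
        total += accelerate(chunk)
    return total

stream = [[1, 2], [3, 4], [5]]
assert look_aside(iter(stream)) == inline(iter(stream)) == 15
```

Both paths compute the same answer; the difference the interview highlights is that the inline path never buffers the whole stream on the host, which is where the low-latency, high-bandwidth advantage comes from.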
So their investment, once they make it, and we target our Arria 10 Programmable Acceleration Card, can easily be leveraged and moved forward onto the next-generation Stratix 10 and beyond. So we enable and really encourage a broad ecosystem to build solutions, and you'll see that here at the show: many partners now have demos, and they show their solutions built on Intel FPGA hardware and the acceleration stack. Okay, so I'm going to put you on the spot. These are announced. What's the current state of general availability? So we're sampling the cards now. The acceleration stack is available. We deliver it to customers. A lot of it is open source, by the way, so it can already be downloaded from GitHub, right? And the partners are developing the solutions they're demonstrating today. The product will go into volume production in the first half of next year. So we're very close. All right, very good. Well, Bernhard, thanks for taking a few minutes to stop by and give us a little update. All right. Thank you very much. He's Bernhard, I'm Jeff. You're watching theCUBE from Supercomputing 17. Thanks for watching.