Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2017, presented by AWS, Intel, and our ecosystem of partners.

Hey, welcome back, and we are here live at AWS re:Invent in Las Vegas. 45,000 people are here inside the Sands Convention Center at the Venetian, the Palazzo, and theCUBE is here for the fifth straight year, and we're excited to be here. And I want to say this: our fifth year, we've got two sets, and I want to thank Intel for their sponsorship. Of course, our next guest is from Intel, Scott Macepole, director of the CTO's office at Intel PSG. Welcome to theCUBE. Thanks for coming on. So, we've had a lot of Intel guests on, a lot of great guests from customers of Amazon, Amazon execs, Andy Jassy coming on tomorrow. The big story is all this acceleration of software development. You guys at Intel, with the FPGA, are doing acceleration at a whole other level, because these clouds have data centers. They have the power of the machines, even though it's going serverless. What's going on with FPGAs, and how does that relate to the cloud world?

Well, FPGAs, I think, have a unique place in the cloud. They're used in a number of different areas, and I think the great thing about them is they're inherently parallel. So, you know, they're programmable hardware. So, instead of something like a GPU or a purpose-built accelerator, you can make them do a whole bunch of different things. So, they can do compute acceleration, they can do network acceleration, and they can do those at the same time. They can also do things like machine learning, and there are structures built inside of them that really help them achieve all of those tasks.

Why have FPGAs picked up lately? What are people doing differently with them now than before? Because there's more talk of them now than ever.

You know, I mean, I think it's just finally come to a confluence where the programmability is finally really needed.
It's very difficult to actually create customized chips for specific markets, and it takes a long time to go do that, and so by the time you actually create this chip, you may not have had the right solution. FPGAs are unique in that they're programmable, and you can actually create the solution on the fly, and if the solution's not correct, you can go and change that. And they're actually pretty performant now; the performance continues to increase generation to generation, and I think that's really what sets them apart.

Let's talk about the relationship with Amazon, because now I'm connecting the dots in my head. Amazon's running full speed ahead, and they're moving fast, I mean, thousands of services. Do FPGAs give you guys faster time to market when they do some joint designs with Intel, and how does your relationship with Amazon connect on all of this?

Absolutely. We have a number of relationships with Amazon, clearly the Xeon processors being one of them. The FPGAs are something that we continue to try to work with them on, but we're also in a number of their other applications, such as Alexa. There are actually technologies within Alexa that we could take and implement either in Xeon CPUs or actually in FPGAs to further accelerate those, so a lot of the speech processing and a lot of the AI that's behind that. That's something that's not very prevalent now, but I think it will be in the future.

So all that speech stuff matters for you guys, right? That helps you guys: the speech, all the voice stuff that's happening, the Alexa news, machine learning. That's good for you, right?

It's very good, and it's actually really in the FPGA sweet spot. There are a lot of structures within the FPGAs that make them a lot better for AI than a GPU.
So for instance, they have a lot of memory on the inside of the device, and you can actually do the compute with the memory right next to where it needs to be, and that's very important because you want the latency to be very low so that you can process these things very quickly. And there's just a phenomenal amount of bandwidth inside of an FPGA today; there's over 60 terabytes a second of bandwidth in our mid-range Stratix 10 device. And when you couple that together with the unique math capabilities, you can really build exactly what you want. So when you look at GPUs, they're kind of limited to double-precision floating point or single precision or integer. The FPGAs can do all of those and more, and you can actually custom-build your mathematical path to what you need, save power, be more efficient, and lower the latency.

So, Andy Jassy talked about this being a builders' conference, giving developers the tools they need to create amazing things. One of the big announcements was the bare metal service from AWS. How do you see something like an FPGA playing in a service like that?

Well, the FPGAs could be used to help provide security for that. They could obviously be used to help do some of the network processing as well. In addition, they could be used in a lot of the classical modes, such as an attached solution for pure acceleration. So just because it's bare metal doesn't mean it can't be bare metal with an FPGA to do acceleration.

And FPGAs are pretty big in the networking space, so let's talk about some of the surrounding Intel technologies around FPGAs. How are you guys enabling your partners, network partners, to take advantage of x86, Xeon, FPGAs, and accelerating networking services inside of a solution like Amazon?
We have a number of solutions that we're developing, both with partners and ourselves, to attach to our NICs and other folks' NICs to help accelerate those. We've also released what's called the Acceleration Stack, and what that's about is really just lowering the barrier of entry for FPGAs. It has a driver solution that goes with it as well, called OPAE. What that driver solution does is create kind of a containerized environment with an open source software driver, so that it really helps remove the barrier of: you have this FPGA next to a CPU, how do I talk to it? How can we connect to it with our software? And so we're trying to make all of this a lot simpler, and then we're making it all open so that everybody can contribute and the market can grow faster.

Yeah, let's talk about the ecosystem around data, the telemetry data coming off of systems. A lot of developers want as much telemetry data as possible, even from AWS. Are you guys looking to expose any of that to developers?

It's always something under consideration, and one of the things that FPGAs are really good at is that you can put them towards the edge so that they can process the data, so that you don't have to dump the full stream of data that gets generated off to some other processing vehicle, right? So you can actually do a ton of the processing and then send limited packets off of that.

So we looked at the camera today, a super small device doing some really amazing things. How do FPGAs play a role in that, in IoT?

They do a lot. FPGAs are great for image processing; they can do that much quicker than most other things. When you start listening to or reading a little bit about AI, you'll see that a lot of times when you're processing images, you have to take a whole batch of them for GPUs to be efficient. FPGAs can operate down at a batch size of one, so they can respond very quickly.
They can work on individual images, and again, they can do it efficiently not just in terms of the amount of hardware that you implement but in the power that's required to go do that.

So when we look at advanced IoT use cases, what are some of the things that end user customers will potentially be able to do with FPGAs out at the edge? Of course, less data and less power needed to go back to the cloud, but practically, what are some of the business outcomes from using FPGAs at the edge?

You know, there are a number of different applications for the edge. If you go back to Alexa, there's a lot of processing smarts that actually goes on there. This is an example where the FPGAs could be used right next to the Xeons to further accelerate some of the speech, and that's stuff that we're looking at now.

What's the number one use case that you're seeing that people could relate to? Is it Alexa? Is it the video?

For edge, or?

For FPGAs, the value of the accelerator.

For FPGAs, while there's usage well beyond the data center, there's the classic, what we would call wireline, where it's used in everything today. If you're making a cell phone call, it likely goes through an FPGA at some point. In terms of the data center, where it's really being used today, there have been a couple of very public announcements, obviously in network processing at some of the top cloud providers, as well as in AI. And I think a lot of people were surprised by some of those announcements, but as people look into them a little further, I think they'll see that there's a lot of merit to that.

The devices get smaller and faster. Take the DeepLens device; it's got a graphics engine that would have been on a mainframe a few years ago. I mean, it's huge: software, power. You guys accelerate that, right? I'm looking in that direction. What is the future direction for you guys?
What does the future look like for FPGAs?

They're fully programmable, so it's really limited by what our customers and we want to go invest in. One of the other things that we're trying to do to make FPGAs more usable is remove the barrier where people traditionally write RTL, if you're familiar with that, to do the design, and really make it a lot more friendly for software developers, so that they can write things in C or OpenCL and that application will actually end up on the inside of the FPGA, using some of these other frameworks that I talked about, like the Acceleration Stack. So they don't have to go and build all the guts of the FPGA; they just focus on their application. You have the FPGA, whether it's attached to the network, coherently attached to a processor, or attached as an extra processor on PCI Express; all of those can be supported, and there's a nice software model to help you do all that development.

So you want to make it easy for developers?

We want to make it very easy.

What specifically do you have for them right now?

We have what we call the DLA framework, the deep learning framework that we released. As I said before, we have the Acceleration Stack. We have OPAE, which is the driver stack that goes along with that, as well as what we call our high-level synthesis tools, HLS, which support C and OpenCL. So it basically will take your classic software, convert it into gates, and help you get that onto the FPGA.

Will bots be programming this soon? Soon AI is going to be programming the FPGAs. Software programming software.

That might be a little bit of a stretch right now, but in the coming years, perhaps.

Scott, thanks for coming on theCUBE. Really appreciate it.

Thanks for having me.

Scott Macepole, who's with Intel. He's the director of the CTO's office at Intel PSG. They make FPGAs, a really instrumental device, and software to help accelerate the chips and make it ready for developers.
Powering your phone, Alexa, all the things, pretty much, in our life. Thanks for coming on theCUBE.

Thank you.

We're back with more live coverage. 45,000 people here in Las Vegas. It's crazy. It's Amazon Web Services re:Invent. We'll be right back.