Live from San Francisco, it's theCUBE, covering Spark Summit 2017, brought to you by Databricks.

Welcome back to theCUBE, continuing our coverage here at Spark Summit 2017. What a great lineup of guests, and I can't wait to introduce this gentleman. We have Intel's VP of the Software and Services Group, Mr. Michael Greene. Michael, welcome.

Thank you for having me.

All right, we have George with us over here, and George and I will both be peppering you with questions. Are you ready for that?

I've got the salt to go with the pepper.

Well, you just got off the stage; you did a keynote this morning. What do you think was the most important message you delivered in your keynote?

Well, it was interesting. One of the things we're looking at with BigDL, the big deep learning framework, is that we're hearing a lot about the challenges of making sure these AI-type workloads scale easily. When we open sourced BigDL, we really designed it to leverage Spark's ability for massive scale from the beginning. So I thought that connected with several of the keynotes ahead of me: if this is your challenge, here is one of many solutions, but a very good one, that will let you take advantage of the scale that people already have in their infrastructure. There are lots of Xeons out there; we might as well make sure they're fully utilized running the workload of the future, AI.

Okay, so Intel: not just a hardware company, you do software, right?

Well, you know, Intel's a solutions company, right? And hardware is awesome, but hardware without software is a brick. Maybe a warm one, but still a brick.

That's right, not a Databrick, just a brick.

And not melted down either.

That's right, that's right. So sand without software doesn't go very far. I see software as what ignites the hardware so that you actually get useful productivity out of it.
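Greene's point about BigDL leveraging Spark's scale is, at its heart, a data-parallel map: the same trained model is applied independently to partitions of data across many Xeon cores. Here is a minimal Python sketch of that idea, using a thread pool as a stand-in for Spark executors; the linear "model" and its weights are hypothetical illustrations, not BigDL's actual API.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical "trained model": a linear scorer standing in for a neural net
# that BigDL would broadcast to every Spark executor in the cluster.
WEIGHTS = [0.4, 0.6]

def infer(sample):
    # Score one sample; samples are independent, so each one can run
    # on whichever worker (core, or cluster node) is free.
    return sum(w * x for w, x in zip(WEIGHTS, sample))

samples = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]

# Spark would ship `infer` out to executors across a cluster; a thread
# pool is the same map-over-partitions idea on a single machine.
with ThreadPoolExecutor(max_workers=2) as pool:
    scores = list(pool.map(infer, samples))
```

Because each sample is scored independently, throughput grows with the number of workers, which is exactly the "massive scale for free" property of running on Spark that the keynote emphasized.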
And on the software-solution side: customers have problems to solve, and it's rare that they come in and say, nope, I just need a nail. They usually say, I need a home. Well, you can't just provide the nail; you have to provide all the pieces. One of the things that's exciting for me, being part of Intel, is that we provide the silicon, of course, the processors, Xeon, accelerators, and now software tools and frameworks, to make sure that the customer can actually get the value of the entire solution.

Okay, go ahead, George.

So, Michael, help those of us who've been watching from afar but aren't up to date on the day-to-day tactics and strategy of what Intel's doing with deep learning. Where does BigDL fit? And then the acquisitions of FPGA and accelerator technology, so that there's general-purpose or special-purpose acceleration on the chip: how do those two work together, along with the rest of the ecosystem?

Sure, great question. If you think of Intel, we're always looking at how we can leverage Moore's Law to get more and more integrated into the solution. To quickly step through a brief history: at one point we had the 386, which was a great integer processor, partnered with the 387 for floating point acceleration. The 486 combined the two, because we were able to leverage Moore's Law to bring them together, and we got a lot of reuse of the instruction set alongside the acceleration. Altera was recently integrated into Intel; they come with a suite of incredible FPGAs and accelerators. Nervana, another acquisition, also has accelerators. We're looking at those special-case opportunities to accelerate the user experience. So we're going to continue to follow that trend and make sure that you have the general-purpose capabilities as new workloads come in. And we really see a lot of growth in AI.
If we're headed, as I said in the keynote, toward about 12X growth by 2020, we need to make sure that we have the silicon as well as the software, and that's where BigDL pulls the two together so we get the full benefit of the solution.

So a couple of years ago, we were told that Intel actually thought there were going to be more Hadoop servers, with Hadoop as an umbrella term for the ecosystem, than database servers in three to five years' time. When you look at deep learning, because we know it's so much more compute-intensive than traditional statistical machine learning, if you look out three to five years, how much of the compute cycles, the share of workloads, do you see deep learning comprising?

Yeah, I think that in the last year, deep learning, or AI as a workload, was about 7%, but if you grow by 12X, it's definitely growing quickly. What we're expecting is that AI will become inherent in pretty much every application. An example: at one point, facial detection was the new thing. Now you can't buy a camera that doesn't do it, right? You pull up your camera, the little square shows up, and it's just commonplace. We're expecting that AI will become an integral part of solutions, not a solution in and of itself. It's there to make software solutions smarter, to make them go further; it's not there to be independent. It's like, wow, we've identified a cat. That's cool, but if we're identifying problems, or making sure that autonomous delivery systems don't kill the cat, there's a little bit more that needs to go on. So it's going to be part of the solution.

What about the trade-off between processing at the edge and learning in the cloud? I mean, you can learn on the edge, you can learn in the cloud, and you can do the analysis, the runtime, on either end.
How do you guys see that being split up in the future?

Absolutely. Think about deep learning training: there are always opportunities to go through vast amounts of data to figure out how to identify what's interesting, to identify new insights. Once you have those models trained, you want to use them everywhere, and where that makes sense, we switch from training to inference. Inference at the edge allows you to be more real-time. Imagine a smart camera: even from the camera's point of view, do I send the whole data stream to the data center? Well, maybe not. Let's assume it's being used for highway patrol. If you identify a car speeding, then send the information. Unless it's me; leave me out of that. But it's that kind of split, where you allow both sides to be smart: more information for continual training in the cloud, but also more compute at the edge, so we can do some really cool activities right there, in real time, without having to send all the information.

And if you had to describe, for people working on architectures for the new distributed computing and IoT, what would an edge device look like in its hardware footprint, in terms of compute, memory, connectivity?

Yeah, so in terms of connectivity, we're expecting an explosion of 5G: a lot of high bandwidth, lots of things connected with some type of 5G communication capability. It won't just be about, let's say, cars feeding back where they are from their GPS; it's going to be cars talking to other cars. Maybe one needs to move over a lane; can the others adjust? We're talking about an autonomous world. There's going to be so much interconnection through 5G, so I expect to see 5G show up in most edge devices. And to your point, I think it's very important to add that we expect edge devices to all have some kind of compute capability: not just sensors, but the ability to sense and make some decisions based on what they're sensing.
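The smart-camera example, classify locally and transmit only the events that matter, can be sketched in a few lines of Python. The speed threshold, plate strings, and data shape below are hypothetical, chosen only to show the edge-inference filtering pattern Greene describes.

```python
SPEED_LIMIT_MPH = 65  # hypothetical threshold for the highway-patrol example

def edge_filter(readings):
    # `readings` are (plate, speed) pairs the camera has already inferred
    # locally; only violations are transmitted upstream, and the rest of
    # the video stream never leaves the edge device.
    return [(plate, speed) for plate, speed in readings
            if speed > SPEED_LIMIT_MPH]

# Simulated local inference results from one camera.
readings = [("ABC123", 62), ("XYZ789", 81), ("JKL456", 64), ("QRS321", 90)]
events = edge_filter(readings)
# Only the two speeding events would be sent to the data center; those
# events can then feed the continual-training loop in the cloud.
```

The design point is the bandwidth asymmetry: the edge does cheap inference on every frame, while the cloud receives only the small fraction of events worth retraining on.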
We're going to continue to see more and more compute go to the edge devices. Again, when we look at leveraging the power of Moore's Law, we're going to be able to move the compute that today lives in the cloud, which is just incredible in its collective compute power. That will slowly move out and down; we've seen that progression from mainframes to workstations to PCs to phones, and now to edge devices. I think that trend will continue, and we'll also continue to see bigger data centers for the use cases that require deeper analysis. So from a developer's point of view, if you're working on an edge device, make sure it has great connectivity and compute.

So one last follow-up for me. Google is making a special effort to build their own framework, open sourced as TensorFlow, and then marry it to specialized hardware, the Tensor Processing Units. So, specialization versus generalization: for someone running TPUs in the cloud, training TensorFlow models or TensorFlow-based models, would there be an advantage for that narrow set running on Tensor Processing Units, or would it be supported just as well on Intel hardware?

You know, anything that's purpose-built is, as you say, just that: not general purpose. But as I mentioned, over time the specialized capabilities slide into general-purpose opportunities. Recently we added AES-NI, instructions for the AES encryption algorithm, into our processors: very specialized for encryption and decryption. But because it was so generally used, it's now just part of our processor offering, part of our instruction set. I expect to continue to see that trend. Many things may start off specialized, which is great; it's a great way to innovate.
And then over time, if it becomes general purpose, if it's so specialized that everyone's using it, it's effectively general purpose, and it slides into the general-purpose opportunity. I think that will continue. We've seen it since the dawn of the computer: specialized memory, specialized compute, specialized floating point capabilities are now just generally available. And so when we deploy things like BigDL, a lot of the benefit is that we know the Xeon processor has so much capability, because it has pulled in, over time, the best of the specialized use cases that are now generally used.

All right, great deep-dive questions, George. We just have a couple of minutes left. I know you brought a lot to this conference, and they put you up on stage. What were you hoping to gain from the conference? Did you come here to learn, or have you had any interesting conversations so far?

What always excites me about these conferences is that the open source community is so incredibly adaptive and innovative. We're always out there looking to see where the world is going, because where the software goes, we want to make sure the hardware that supports it is there to meet its needs. So today we're learning about new frameworks coming out, what's next for Spark on the roadmap, what they're looking at doing. I expect we'll hear a little bit more about scripting languages as well. All of that is just fantastic. I've come to expect a lot of innovation, but I'm still impressed by the amount of it. So it's good to be in the right place, and as we approach things from an Intel point of view, we approach it with a portfolio solution set. It's not just silicon, it's not just an accelerator; it's everything from the hardware through the software solution.
So we know that we can really help to accelerate and usher in the next compute paradigm.

This has been fun. Wow, that would be a great ending, but I've got to ask you this: when you're sitting in this chair next year at Spark Summit 2018, what do you hope to be talking about?

Well, one of the things we're looking at and talking about is this massive amount of data. I would love to be here next year talking more about the new memory technologies that are coming out, which allow for tremendously more storage at incredible speeds, better SSDs, and how they will impact the performance of the overall solution. And of course, we're going to continue to accelerate our processing cores and accelerators for unique capabilities. I want to come back and say, wow, what did we 10X this year? That's always fun. It's a great challenge to the engineering team, who just heard that and said, he's starting off with 10X again.

Great, Michael. That's a great wrap-up too. We appreciate you coming on and sharing with theCUBE audience the exciting things happening at Intel with Spark.

Well, thank you for the time. I really appreciate it.

All right, and thank you all for joining us for this segment. We'll be back with more guests in just a few. You're watching theCUBE.