Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2018. Brought to you by Amazon Web Services, Intel, and their ecosystem partners.

Welcome back to theCUBE, continuing our coverage here from AWS re:Invent as we start to wind down our second day. We'll be here tomorrow as well, live with theCUBE, bringing you interviews from Hall D at the Sands Expo. Along with Justin Warren, I'm John Walls, and we're joined by Jonathan Ballon, who's the Vice President of the Internet of Things at Intel. Jonathan, thank you for being with us today. Good to see you.

Thanks for having me, guys.

Interesting announcement today. Last year it was all about DeepLens. This year it's all about DeepRacer. Tell us about that.

What we're really trying to do is make AI accessible to developers and democratize various AI tools. Last year it was about computer vision. The DeepLens camera was a way for developers to very inexpensively get hold of a camera, the first deep-learning-enabled, cloud-connected camera, so they could start experimenting and see what they could do with that type of device. This year we took the camera and we put it in a car. We thought, what could they do if we add mobility to the equation? And specifically, we wanted to introduce a relatively obscure form of AI called reinforcement learning. Historically this has been an area of AI that hasn't really been accessible to most developers, because they haven't had the compute resources at their disposal, or the scale, to do it. So now what we've done is build a car and a set of tools that help the car run.

It's a little miniature car, right? I mean, it's a scale model.

It's 1/18th scale. It's an RC car: four-wheel drive, four-wheel steering. It's got GPS, and it's got two batteries, one that runs the car itself and one that runs the compute platform and the camera. It's got expansion capabilities, and we've got plans for next year for how we can turbocharge the car.
Right now it's baby steps, so to speak. Basically, we're giving the developer the chance to write a reinforcement learning model and algorithm that helps determine the optimum way this car can move around a track. But you're not telling the car what the optimum way is; you're letting the car figure it out on its own. That's really the key to reinforcement learning: you don't need to begin with a large data set or a pre-trained model. You're actually letting, in this case, a device figure it out for itself. This becomes very powerful as a tool when you think about it being applied to various industries or use cases where we don't know the answer today, but we can allow vast amounts of computing resources to run a reinforcement model over and over, perhaps millions of times, until it finds the optimum solution.

That's a lot of input, right? That's a crazy number of variables. So how do you do that? How do you, in this case, provide a car with all the variables that come into play, how fast it goes, which direction it goes, on different axes, so it can make its own determinations? And how does that then translate to a specific case in the workplace?

Well, the obvious parallel is, of course, autonomous driving. AWS had Formula 1 on stage today during Andy Jassy's keynote. That's also an Intel customer, and what Formula 1 has is the fastest cars in the world, with over 120 sensors on each car bringing in over a million pieces of data per second.
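The developer's part of that "reinforcement learning model" is largely a reward function the training loop calls on every step: the car is never told the racing line, only scored on its behavior. A minimal sketch in Python, assuming a DeepRacer-style `params` dict carrying simulator telemetry; the keys and values here are illustrative, not the official console interface:

```python
def reward_function(params):
    """Toy reward: favor staying near the track centerline.

    `params` is assumed to be a dict of simulator telemetry; the
    `track_width` and `distance_from_center` keys are illustrative.
    """
    track_width = params["track_width"]
    distance_from_center = params["distance_from_center"]

    half_width = 0.5 * track_width
    if distance_from_center >= half_width:
        # Nearly off the track: tiny but non-zero reward so the
        # gradient of "getting back on track" still exists.
        return 1e-3
    # Reward shrinks linearly as the car drifts from the centerline.
    return 1.0 - (distance_from_center / half_width)

# Dead center on a 1.0 m wide track earns the maximum reward of 1.0.
print(reward_function({"track_width": 1.0, "distance_from_center": 0.0}))
```

Over millions of simulated episodes, the training service adjusts the driving policy toward whatever this function rewards, which is why a small change in reward shaping can produce a very different driving style.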
Being able to process that vast amount of data that quickly, and it's a variety of data, not just one kind, it's also audio data, it's visual data, and being able to use it to inform decisions in close to real time requires very powerful compute resources. Those resources exist both in the cloud and close to the source of the data itself, at the edge, in the physical environment.

Now, tell us a bit about the software that's involved here, because when people think of Intel, some don't know about the software heritage that Intel has. "Intel Inside" isn't just the hardware chips; there's a lot of software that goes into this. So what's the Intel angle here on the software that powers this kind of distributed learning?

Absolutely. Software is a very important part of any AI architecture, and for us there's a tremendous amount of investment, perhaps an equal investment in software as in hardware. In the case of what we announced today with DeepRacer and AWS, there are some toolkits that allow developers to better harness the compute resources on the car itself. Two things specifically. One is a tool called RL Coach, or Reinforcement Learning Coach, that is integrated into SageMaker, AWS's machine learning toolkit, and gives them better performance in the cloud on the data that's coming off their model. And then we also have a toolkit called OpenVINO.

It's not about drinking wine? Darn.

"Open" means it's an open-source contribution that we made to the industry. "VINO," V-I-N-O, is visual inference and neural network optimization. This is a powerful tool, because so much of AI is about harnessing compute resources efficiently, and as more and more of the data we bring into our compute environments originates in the physical world, it's really important to be able to process it in a cost-effective and power-efficient way.
OpenVINO allows developers to isolate individual cores, or an integrated GPU on a CPU, without knowing anything about the hardware architecture. It then lets them apply different applications, algorithms, or inference workloads very efficiently onto that compute architecture, while all of it is abstracted away from any knowledge of the hardware. It's really designed for an application developer, who may be working with a data scientist who has built a neural network in a framework like TensorFlow, ONNX, or PyTorch, any tool they're already comfortable with, to abstract away from the silicon and optimize their model for this hardware platform. So it delivers orders-of-magnitude better performance than what you would get from a more traditional GPU approach.

Yeah, and that kind of decision-making, understanding chip architectures well enough to optimize how a workload runs, that's some deep magic, really. The amount of understanding you would need as a human to do that is enormous, but as a developer, I don't need to know anything about chip architectures. It sounds like a theme we've been hearing over the last couple of days: these tools allow developers to have essentially superpowers. You become an augmented intelligence yourself, rather than just handing everything over to an artificial intelligence. These tools augment human intelligence and allow you to do things you wouldn't otherwise be able to do.

And that's, I think, the key to getting mass-market adoption of some of these AI implementations. For the last four or five years, since ImageNet solved the image recognition problem, and now we have greater accuracy from computer models than we do from our own human eyes, AI was really limited to academia, large tech companies, or proofs of concept.
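The abstraction being described, where application code names a target device and a runtime maps the inference workload onto it without the developer touching architecture details, can be sketched with a toy dispatcher. This is purely illustrative of the idea, not the actual OpenVINO API; the device names echo the CPU/GPU/VPU targets mentioned in the conversation, and the "inference" is a stand-in computation:

```python
# Toy sketch of device-abstracted dispatch. Application code asks for a
# device by name; a runtime routes the workload. All names and the fake
# "inference" kernel (a simple mean over pixels) are illustrative only.

RUNTIMES = {
    "CPU": lambda pixels: sum(pixels) / len(pixels),
    "GPU": lambda pixels: sum(pixels) / len(pixels),     # integrated GPU
    "MYRIAD": lambda pixels: sum(pixels) / len(pixels),  # e.g. a VPU stick
}

def run_inference(pixels, device="CPU"):
    """Route a (fake) inference workload to the named device."""
    if device not in RUNTIMES:
        raise ValueError(f"unsupported device: {device}")
    return RUNTIMES[device](pixels)

# The caller switches hardware targets by changing one string,
# with no knowledge of the underlying silicon.
print(run_inference([0.0, 0.5, 1.0], device="MYRIAD"))  # 0.5
```

In a real toolkit the per-device entries would be optimized kernels rather than identical lambdas; the point is that the application-facing call stays the same across targets.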
It didn't really scale into production environments. But what we've seen over the last couple of years is really a democratization of AI by companies like AWS and Intel, making tools available to developers so they don't need to know how to code in Python to optimize a compute module, and in many cases don't need to understand the fundamental underlying architectures. They can focus on whatever business problem they're trying to solve, or whatever AI use case they're working on.

Yeah. Now, you talked about DeepLens last year, and now we've got DeepRacer this year. You've got the contest going on throughout the coming year with DeepRacer, and we're going to have a big race at AWS re:Invent 2019. So what's next? What are you thinking about conceptually to build on what you've already started here?

Well, I can't reveal what next year's project will be, but what I can tell you is that what's available today in these DeepRacer cars is a level playing field. Everyone's getting the same car, and they have essentially the same tool sets. But I've got a couple of pro tips for your viewers if they want to win at some of the AWS Summits that are going to be held around the world in 2019. Two pro tips. One: they can leverage the OpenVINO toolkit to get much higher inference performance from what's already on that car, so I encourage them to work with OpenVINO. It's integrated into SageMaker, so they have easy access to it if they're an AWS developer. Two: we're going to allow an expansion, almost an accelerator, of the car itself, by letting them plug in an Intel Neural Compute Stick. We just released the second version of this stick. It's a USB form factor, and it's got a Movidius Myriad X vision processing unit inside.
This year's version is eight times more powerful than last year's, and when they plug it into the car, all of that inference workload, all of the images and information coming off those sensors, will be put onto the VPU, freeing up all the CPU and GPU resources for other activities. It's going to allow that car to go at turbo speed.

It'll really cook. Yeah. All right, so now you know. You have no excuse, right? I mean, Jonathan has shared the secret sauce. Although I still think when you said OpenVINO, you got Justin really excited.

I mean, it's almost OpenVINO time.

It is five o'clock, actually. All right, Jonathan Ballon, thank you for being with us.

Thanks for having me, guys.

We appreciate it, and good luck with DeepRacer in the coming year. It looks like a really, really fun project. We're back with more here at AWS re:Invent on theCUBE, live in Las Vegas.