Hey, welcome back, everybody. Jeff Frick here with theCUBE. We're at the North American International Auto Show in Detroit, Michigan. It's all about these beautiful new vehicles, but the big trend that we're hearing over and over again is autonomy. Self-driving cars, autonomous vehicles, this is really where it's happening. You would think it would scare the bejesus out of these car companies, but no, they're embracing it wholeheartedly, and we're seeing the adoption across all the different manufacturers. And we're really excited for our next guest, because the brains behind a lot of this autonomous vehicle technology is the NVIDIA chip, treating the world like a video game. So we've got with us Daniel Shapiro. He's the Senior Director of Automotive at NVIDIA. Daniel, welcome.

Thanks so much, Jeff.

So it's been a busy week. You guys were at CES last week. You had the keynote. Now you've raced up here. Exciting times for NVIDIA.

It's been great. We've been working with the auto industry for 20 years, essentially. All the cars are designed using NVIDIA graphics, and that's what our company's known for. But more recently, we started bringing our graphics inside the car, so all these Audis you see here are powered by NVIDIA. The graphics on the screens, whether it's a touchscreen like in a Tesla, the MMI here, the virtual cockpit, it's all NVIDIA. But now we're also taking it a big step further.

So before we get to the big step, what is it about a GPU that makes it the right solution for this application?

So the GPU has many cores, and let's contrast that with the CPU. Maybe you've heard of a dual-core CPU or a quad-core CPU. Think of those as lanes on the highway: maybe there's two or four lanes for the data, the traffic, to go through. Our core count goes into the thousands. So imagine a highway with 3,000 lanes. How much data could you push through that? And that's why we're able to handle this massive amount of computation on a very small chip.

And then if you've seen any video that comes out of these things, it does kind of make the whole world into a big video game, right? It creates shapes around objects, identifies those objects, tracks those objects. So right up your guys' alley.

Essentially what we've done now is created a custom supercomputer for the car, and it uses artificial intelligence. We call it DRIVE PX 2; we're on the second generation now. And that's what's now going into all these self-driving prototypes as well as production vehicles. So the first real production self-driving car on the road today is the newest Tesla Model S. That has our DRIVE PX 2 technology in it, the hardware, plus the sensor array. Exactly like you said, we can detect everything going on around the car, and it does start to look a little bit like a video game inside the brain of the car. We're trying to track different objects: people, cars, buses, motorcycles, whatever. We have a map, so we know where the roads go. We know where we are on those roads, and then we can figure out and predict where the other vehicles are going to be, to avoid them.

And this was big. I mean, you guys keynoted CES with this announcement.

That's right.

You know, 300,000 of your closest friends were there for this big event.

That's great. We announced a continuation of our partnership with Audi, but moving beyond graphics in the car to now using artificial intelligence to put a level four automated driving vehicle on the road starting in 2020.
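To make that many-lanes analogy concrete, here's a minimal sketch in Python with PyTorch. It's an illustrative assumption on our part, not code from NVIDIA's DRIVE stack: the same per-element arithmetic is written once as a sequential, one-lane loop, and once as a single data-parallel expression that a GPU can spread across thousands of cores.

```python
# A hypothetical illustration of the "lanes on the highway" analogy,
# not NVIDIA's automotive code.
import torch

n = 100_000
pixels = torch.rand(n)  # stand-in for a frame of sensor data

# CPU-style thinking: one "lane," handling one element after another.
out_loop = torch.empty(n)
for i in range(n):
    out_loop[i] = pixels[i] * 0.5 + 0.1

# GPU-style thinking: one expression; with a CUDA device available,
# thousands of cores each process their own slice at the same time.
device = "cuda" if torch.cuda.is_available() else "cpu"
out_parallel = (pixels.to(device) * 0.5 + 0.1).cpu()

assert torch.allclose(out_loop, out_parallel)  # same answer, very different road
```

The point of the sketch is only the shape of the computation: the loop serializes the work, while the vectorized form hands the whole array to however many cores the device has.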
And just for those who don't know, level four is just one step below the point where there's no human control in the car at all, right? Which is level five.

Yeah, basically it means wherever the car has been authorized to be used, whether it's a highway, an urban area, or certain geofenced areas, the car can take over and fully drive. Now, in California, current law means there has to be somebody behind the wheel. But again, like the Drive Me project we're doing with Volvo, people will be able to do other things behind the wheel. So when they turn control over to the car's autopilot, they're then free to read, watch TV, do a video conference, whatever it is. And then the car will eventually turn control back to them, maybe when they leave the highway.

This is really interesting, because, you know, there was a lot of hubbub when the poor guy crashed his Tesla a while ago. But it really wasn't an autonomous vehicle. It was driver assist. He wasn't supposed to be reading, but...

I think that's the key thing: people need to understand that there's a huge difference between a driver-assistance system with a front-facing camera and a fully autonomous vehicle. And if you just look at what's happened in the six months or so since, the new Tesla system, now with our processor inside, has 40 times more processing power than it previously had. So we're bringing a supercomputer into the car, and it's gone from one camera to eight cameras. A big difference.

Now, you sit in a nice kind of catbird seat, because you can see all the various automakers coming at this thing at different speeds and feeds and with different strategies. I wonder if you can kind of share, you know, who's going aggressive? It sounds like Volvo. You said they're putting a hundred cars on the road with a level four autonomous vehicle, versus some of the other people that are taking it a little bit slower.

Yeah, I mean, we work with over 80 different automakers, tier one suppliers, and other startup companies. They're definitely moving at different rates. We can't talk about them all, obviously, but you look at, again, what's going on with Audi, with Tesla, Mercedes, even Ford; their new prototypes are leveraging the power of the GPU, because everyone is realizing that there's no way you can possibly program or write code for the randomness that happens out on the road.

Right, right.

But instead, with artificial intelligence we can train the system, and we can iterate very quickly if we want to teach it how to recognize street signs. Instead of spending two years writing software to recognize street signs, like some smart camera companies do, in just a couple of hours we can actually feed a massive database of images of signs into the system, and it then knows how to recognize stop signs, yield signs, street signs, everything. So it's this accelerated learning process that is enabled by artificial intelligence.

Right, there are so many interesting things in this story. Another one that I think most people just miss: I like to drive; well, you don't like to drive in traffic. But really where the early adoption is going to come is in commercial vehicles, commercial fleets, where people can make a single purchase decision and turn over a boatload of vehicles all with one purchase order.

I think there'll be that. There'll be ride-sharing services that will go this way, those fleets, but also trucking, right? We just announced with ZF, one of the top suppliers to the trucking industry.
They're taking our DRIVE PX technology and bringing it to market, putting it into production for other types of vehicles: passenger cars plus commercial vehicles, trucks, even tractors, mining equipment, factory automation, remote and autonomous forklifts for material handling, for example. So this is not just about cars; it really affects the entire $10 trillion transportation industry.

Right, there are, I guess, so many angles. Another thing I think is interesting, that people don't really pay attention to, is that even when you're just driving to the store and back in an autonomous vehicle, it's collecting data about the environment all around it that may or may not have anything to do with that trip, but it feeds all kinds of data on the behavior of bicycles and the behavior of pedestrians and the behavior of other vehicles. So there's a whole bunch of positive learning that comes out of this that has to do with much more than just what you went to the store to buy.

I mean, you're right. There are several different aspects to that. One is just the data collection in general. Even before we get to that autonomous car collecting the data, once we put the sensors on the car, even if you're the one driving, it's still collecting data. And that's part of the process of teaching it. Now, the car isn't necessarily going to learn from you or me, because if we're not good drivers, we don't want that data to go into the whole training system. But we can use a lot of that data, you're absolutely right, for the training of how you recognize bicyclists or other vehicles. To create a system that can discern the difference between an ambulance and a FedEx truck, we need to have a lot of images of FedEx trucks and ambulances. And so all of that data collection from the fleet will help with that process.

Yeah, it's just really interesting how this collection of data analysis and big data and massive amounts of compute power and cloud-enabled systems, you know, are coming together to create these cool technologies that you guys are spearheading.

That's right. And I think, again, if you look at automotive as an industry, it's still a very small part of NVIDIA, but it's a very important and growing part. And our underlying technology, the GPU and artificial intelligence, applies in medical, too, with healthcare systems being developed on AI to discern, okay, you look at an X-ray and you can tell, oh, that's a cancer cell. Well, that same technology applies to looking at an image from a front-facing camera and deciding that's a pedestrian. And again, we can apply this both in the cloud, with companies like Amazon and Google and Netflix and Facebook and Baidu, and this goes on and on. They're developing AI solutions on the GPU, that same core technology that we can apply to self-driving cars.

Yeah, we just think cars are cool, they're fun. But they are kind of the bright, shiny, pointy end of this whole internet of things, and some would argue your biggest and most expensive wearable. But they do kind of represent all those technologies that are happening right now, because I can't see the oncologist looking at the cancer cells.

All right, well, Daniel, I'll give you the last word. What's getting you up in the morning? Hopefully you get a little break after CES and this show, but in the short term, what are you excited about?
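A quick aside before the wrap-up, to make the training idea that runs through this conversation concrete: feeding a labeled database of images to a network so it learns to recognize street signs, or to tell an ambulance from a FedEx truck. Here's a minimal, hedged sketch in Python with PyTorch; the dataset path, folder layout, and tiny network are illustrative assumptions, not NVIDIA's actual pipeline.

```python
# A hypothetical sketch of "feed it a database of images," not NVIDIA's code.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Assumed layout: labeled folders like signs/stop, signs/yield, signs/speed_limit.
data = datasets.ImageFolder(
    "signs/",  # hypothetical path to the sign image database
    transform=transforms.Compose(
        [transforms.Resize((32, 32)), transforms.ToTensor()]
    ),
)
loader = DataLoader(data, batch_size=64, shuffle=True)

# A tiny convolutional classifier; real automotive systems use far deeper networks.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, len(data.classes)),  # one output per sign class
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# The "couple of hours" is this loop running over the image database on a GPU.
for epoch in range(5):
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```

The design point is the one Shapiro makes: teaching the system a new sign type or vehicle class means adding labeled images to the database, not writing new recognition code.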
We were just really excited about what happened at CES: the announcements there, the announcements now with our customers here, rolling out more vehicles with our technology inside. And when we were in Vegas, we had one of our test cars, and we were putting people like yourself in the back seat and sending the car out with nobody in the front seat, driving on different roads. It was a closed course, of course, but with different road surfaces: dirt roads, grass, different landscaping on the sides, obstructions. We had cones set up and a big flashing construction sign, and we'd move them in and out; the car would then navigate and change course based on what it sensed. That was such a hit, everyone wants to do it. So I think we're rebuilding that course out in California, and we'll have people coming through it on a regular basis.

All right, excellent. Well, I live in Palo Alto, so I see the Google cars. Sometimes they'll, like, practice left turns on my street. I've got kind of a tricky left turn onto a busy street, so we see them all over the place.

We'll film segment two down in Santa Clara at our headquarters.

Well, Daniel, thanks for taking a little bit of time, and congratulations on a great couple of weeks for NVIDIA.

Thanks so much.

All right, he's Daniel Shapiro. I'm Jeff Frick. You're watching theCUBE. See you next time.