Hey, welcome back everybody. Jeff Frick here with theCUBE. We're in Mountain View, California at Phantom Auto, a really interesting company taking a different spin on the autonomous vehicle space, and I'm really excited to have Elliot Katz here. He's a co-founder and head of business, legal and policy. You probably have the hardest job in the building. Great to see you, Elliot. Good to see you. So, interesting times. We could talk a lot about the tech, but really, what is going on in policy? Obviously the government has a whole lot to do with it. We hear about trials here and there, so how do you see this kind of regulatory environment changing in the face of oncoming autonomous vehicles? Sure. Well, I think what we've seen in the last few years is that the underlying technology that powers these cars has become incredible. The AI, I would say, is about 97, 98% of the way there, but these are life-saving vehicles. We want these vehicles out on the road as quickly as possible, and I think the regulators are coming to the realization at this point that if we want to get these vehicles deployed right now, we need some sort of bridge across that technological gap to get us from 98% to 100%. And a lot of people are looking at teleoperation to fill that gap. A specific example is in California. Right now we have draft regs, and we expect those regs to go live in the next couple of months. If you want to do driverless testing, it's a regulatory mandate that you have a teleoperator and a remote communication link so that there can be communication between the vehicle and a remote operator. So do you need the remote operator if you also have an operator in the vehicle, or is that kind of another step closer to full autonomy, I guess? So it's only if you want to do driverless testing. 
So if you want to remove the human safety driver from the vehicle, which obviously most if not all of these companies would like to do, since it gets you one step closer to deployment, then you will need a remote operator in California. Interesting. And so you talk about the 97, 98%. So are those really the edge cases? Is that where this kind of final frontier is? It sounds like the basic operation is pretty much nailed at this point in time. Yeah, so the problem with edge cases is that they don't happen frequently, right? Or at least frequently enough. So it's very hard to gather data on edge cases. Say you're driving and you come across a construction site, and in the construction site you have construction workers giving hand signals, you have mud or gravel that's piled up, you have moving objects, you have different things happening at different times. These vehicles may not be able to understand exactly what they need to do in that situation, and in that case the vehicle could ping a remote operator at Phantom, and a human would be able to operate the car in real time, see everything that a human in the vehicle would be able to see, and drive the vehicle as if they were sitting in the driver's seat. So when you're doing POCs, or talking about POCs with some of the autonomous vehicle companies, what's the expected latency of a use case like the one you just described, the construction one? Because clearly the remote driver is kind of on call, but not specifically on call for that one particular vehicle in that one particular instance. So what's kind of the use case in the scenarios that you guys are working on? Sure. In terms of latency, that's obviously our secret sauce, but we've been able to get it very, very low. That's how we're able to do this in the first place. But I don't mean the latency of the communication back to the car for the driver. 
I mean the "hey, I need help" kind of turnaround. So that's a very important point. One thing that we're not doing here is the type of thing where you're driving, say, on the freeway at 65 miles per hour and all of a sudden there's a fire in the middle of the freeway, you ping the remote operator, and he or she instantly has to take over the car. That's not what we do. On the regulatory side, we do have autonomous vehicle legislation at the federal level, and legislation has also already passed in many of the states. At the federal level, there's a bill in the House and a bill in the Senate. Neither has been passed yet, but we expect that one will go the distance this year. So you might actually have the rare scenario where the regulation outpaces the technology, which is a good problem to have. In fact, I would say it's not a problem at all. So I think that the biggest barrier to deployment right now is the technology. It is that last 3%, which has proven to be a very, very difficult piece of the puzzle and could take many, many years to solve, if ever. Because for these cars to have the AI be able to do 100% of what a human can do, that's very, very difficult, and it can take a very, very long time to happen. And it's interesting too, because machine learning is so different. The models need a large quantity of data to run through the learning algorithm; that's how they learn. It's very different from people. They're not quite so good at stitching things together. Okay, what about in California? I'm just curious about the DMV. Obviously there are more cars in California than in any state in the union by far, so I'm sure the impact here will be greater than in most places in a significant way. Not to mention the fact that Waymo's right up the street. So when you look at kind of the California regs, is there anything special there, or are they just kind of following the line of the feds? 
So I think that California is unique, obviously, in that it's a state that a lot of other states typically look to. As I mentioned before, we do have those draft regs that would mandate teleoperation if you're doing driverless testing. And I think that idea is born out of everything that we've been talking about at a high level, which is that these are life-saving vehicles. Everyone wants these deployed as rapidly as possible, but we also want that deployment itself to be as safe as possible. So I think that California is forward-thinking in that they understand that there is this fallback mechanism that isn't necessarily the machine. We think we have the ultimate fallback mechanism at this point, which is actually still a human. The machine is very, very good, and I would say much better than a human driver at 98% of the driving task. But for these edge-case scenarios, you still need to bring a human back into the loop. And so I think that regulators like those in California and Florida that have been thinking about how to deploy this in the safest possible manner are quite forward-thinking, to the extent that in California, basically what they're saying in the draft regs is: you need a remote operator, you need this remote communication link, but we're kind of leaving it in the industry's hands to figure out exactly what that means. We think it's good to have from a safety standpoint, but we want to see what you actually roll out, because it's something no one has done to this point. And I'm wondering, was there a tipping point? It surprises me when you say that the regs are actually ahead of the innovation in this category. How did they get to that conclusion? This seems so antithetical to everything that typical regulators want to keep in terms of the status quo. This is so radical. There are employment implications and labor implications and green implications and smart city implications. 
There are so many things here, you would think that they would be lagging. Was there a particular event or a particular test case or a proof point, something that happened that kind of flipped a switch from the legislative and regulatory point of view? Honestly, I think it's how staggering the numbers are. So if you take a step back and look at the forest and not the trees, you have 1.2 million people dying every year worldwide in traffic accidents, 40,000 in the US in 2016, and 94% of those are due to human error, right? To use an example, that would be the equivalent of eight 747 airplanes crashing every single day in a given year and killing everyone inside. Eight a day, every day. Yes, that's for the worldwide numbers, the 1.2 million figure. So if we had that happen even just for two weeks in aviation in the US, aviation wouldn't exist as we know it. But for whatever reason, well, not for whatever reason, because as long as we've had vehicles on the road, all we've known is that humans are supposed to drive them, we've just accepted the fact that there's going to be a certain amount of loss of life on our roads. Now we have a technology that, again, going back to that fact that 94% is due to human error, lets you eliminate the human for the most part from that equation, and you can save a lot of lives. And so if the goal of NHTSA and the DOT and our government in general is to make our roads as safe as they can possibly be, this technology is an absolute no-brainer. And I've been doing this for quite some time, but I think around 2015, 2016, I saw somewhat of an awakening in the government: it went from "we're really scared of this being deployed" to, in reality, we should be scared of this not being deployed, right? Because of the life-saving and other benefits as well, but that's just the most jarring stat. All right, well, exciting times. Well, Elliot, thanks for taking a few minutes, and we're excited to kind of watch the ride. 
Yeah, absolutely, thank you. All right, he's Elliot, I'm Jeff. You're watching theCUBE from Phantom Auto in Mountain View, California. Thanks for watching.
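[Editor's note] The edge-case handoff Elliot describes, where a vehicle that loses confidence in a scene (say, a construction zone with hand signals) pings a shared pool of remote operators rather than having one operator dedicated per car, can be sketched as a small state machine. This is a hypothetical illustration only, not Phantom Auto's actual system; the class names, the 0.98 confidence threshold, and the dispatch queue are all invented for the sketch.

```python
from enum import Enum, auto


class DriveMode(Enum):
    AUTONOMOUS = auto()         # AI handles the ~98% of normal driving
    AWAITING_OPERATOR = auto()  # edge case detected, request sent to dispatch
    TELEOPERATED = auto()       # remote human is driving in real time


class Dispatch:
    """Shared pool of remote operators; vehicles are not 1:1 with operators."""

    def __init__(self):
        self.pending = []

    def request_operator(self, vehicle):
        self.pending.append(vehicle)

    def assign_next(self):
        # An available operator picks up the oldest pending request.
        vehicle = self.pending.pop(0)
        vehicle.operator_connected()
        return vehicle


class Vehicle:
    def __init__(self, dispatch):
        self.mode = DriveMode.AUTONOMOUS
        self.dispatch = dispatch

    def on_scene_confidence(self, confidence, threshold=0.98):
        # Below the threshold (an edge case the AI can't resolve),
        # the vehicle pings a remote operator instead of guessing.
        if self.mode is DriveMode.AUTONOMOUS and confidence < threshold:
            self.mode = DriveMode.AWAITING_OPERATOR
            self.dispatch.request_operator(self)

    def operator_connected(self):
        self.mode = DriveMode.TELEOPERATED

    def operator_released(self):
        # Operator hands control back once the edge case is cleared.
        self.mode = DriveMode.AUTONOMOUS
```

Note this matches the non-use-case from the interview too: the model assumes the vehicle can wait briefly for an operator, not an instant freeway takeover.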
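[Editor's note] The fatality arithmetic quoted in the interview can be sanity-checked in a few lines. The 1.2 million deaths per year and the 94% human-error share come from the interview itself; treating a full 747 as roughly 400 passengers is an assumption added here to check the "eight a day" comparison.

```python
WORLDWIDE_FATALITIES_PER_YEAR = 1_200_000  # figure quoted in the interview
HUMAN_ERROR_SHARE = 0.94                   # also quoted in the interview

per_day = WORLDWIDE_FATALITIES_PER_YEAR / 365
per_plane = per_day / 8  # "eight 747s crashing every single day"

print(round(per_day))    # ~3288 deaths per day worldwide
print(round(per_plane))  # ~411 per plane, roughly a fully loaded 747
# Deaths attributable to human error per year:
print(round(WORLDWIDE_FATALITIES_PER_YEAR * HUMAN_ERROR_SHARE))  # 1128000
```

So the analogy holds: 1.2 million deaths a year divided across eight daily crashes comes to about 411 people per plane, in the range of a full 747.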