Hey, welcome back everybody. Jeff Frick here with theCUBE. We're at the Innovation in Motion mapping and navigation event here at Western Digital in Milpitas, California. And we're excited to have our next guest, Sravan Puttagunta. He is the CEO of Civil Maps. Sravan, welcome.

Hi, thank you.

So what do you think of the event today?

It's pretty exciting. It's good to see that mapping has a lot of players in the ecosystem. It's a great learning experience, and I'm happy to be here.

So, like every new industry I go to that I haven't seen before, it's just fascinating to me: the levels of complexity, the number of contributors, the number of companies involved. And again, once you get under the covers on really any topic, there's a whole lot more going on than people probably think. You think, ah, Google Maps is better than Apple Maps, and that's probably the depth of most people's information. But driving an autonomous vehicle, that's a whole other layer of detail you guys have to provide.

Yeah, definitely. The sensor data coming out of the cars is really huge, and the cost of the sensors is going down. So there's a data problem emerging, where you're going to have exponential growth in data at scales we've never handled before. That's a big problem for people to solve.

So tell us a little bit about Civil Maps for people who aren't familiar with the company.

Yeah. What we do at Civil Maps is provide cognition for cars. If you think of the self-driving space as three categories, the eyes, the cognition, and the decisions, then eyes plus cognition plus decisions equals driving. The eyes are the sensors. The decisions are the electronic control systems doing path planning. Cognition is the layer that describes the world. The sensors are the layer that sees the world, and the decision engine is the layer that interacts with the world. We provide that middle layer.
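The eyes-plus-cognition-plus-decisions split Sravan describes can be sketched as three narrow interfaces. This is an illustrative sketch only; the class and method names are hypothetical, not Civil Maps' actual API:

```python
from dataclasses import dataclass

# Hypothetical three-layer split: sensors (eyes) produce raw
# observations, cognition fuses them into a world model, and the
# decision engine acts on that model. All names are illustrative.

@dataclass
class Observation:
    source: str   # e.g. "camera", "lidar", "gps"
    data: dict

@dataclass
class WorldModel:
    obstacles: list   # what the car believes is around it
    pose: tuple       # where the car believes it is: (x, y, heading)

class Eyes:
    """Sensors: see the world."""
    def sense(self) -> list:
        return [Observation("gps", {"x": 10.0, "y": 5.0})]

class Cognition:
    """Middle layer: describe the world (sensor fusion plus map)."""
    def fuse(self, observations: list) -> WorldModel:
        gps = next(o for o in observations if o.source == "gps")
        return WorldModel(obstacles=[],
                          pose=(gps.data["x"], gps.data["y"], 0.0))

class Decisions:
    """Decision engine: interact with the world (path planning)."""
    def plan(self, world: WorldModel) -> str:
        return "continue" if not world.obstacles else "brake"

# Driving = eyes + cognition + decisions
world = Cognition().fuse(Eyes().sense())
action = Decisions().plan(world)
print(action)
```

The point of the sketch is the boundary, not the bodies: the decision engine never touches raw sensor data, only the world model the cognition layer hands it.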
So then how does a car see the world? That's a really interesting thought. Obviously it's not human cognition, and does it need human cognition? But how does it have to, quote unquote, see the world and have cognition to drive the decision making?

I'll give you an example. We as humans have multiple senses. We have our eyes, our ears, our sense of touch, even our sense of balance and orientation. Cars are very similar. They have different kinds of information: from the IMU and GPS, and from radars, cameras, and lidars. So they have to do a similar sensor fusion to create a more enhanced perspective, so that when any single sensor doesn't work, they can still recover. In the same way, if my eyes get blinded for half a second, I'm still holding onto the steering wheel and I still know I'm going straight. Those different senses have to pick up the slack when one of them is degraded. Cognition takes care of all of that sensor fusion: it processes all the sensor data and makes sense of the world so the car can make decisions in a very confident way. And with self-driving, you need it to be better than the most expert human driver at all times. That's the only way I see people giving up control.

Well, that's not hard to do, right? Because the car didn't get in a fight with its spouse, it's not worried about work, and it had a good night's sleep the night before. Unfortunately, the things that cause most of the problems with human drivers have nothing to do with their driving skill; it's all the distracting things that get in the way. But I'm curious about the state of the art. Is the cognition conditioned by the mapping into a bunch of pre-programmed states? Because that's kind of how we look at the world, based on what we expect.
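The recovery behavior Sravan describes, other senses picking up the slack when one is degraded, is the core idea behind redundant sensor fusion. A minimal illustration (not Civil Maps' actual algorithm) is a confidence-weighted average of position estimates that simply ignores a sensor that has dropped out:

```python
def fuse_positions(readings):
    """Confidence-weighted average of (x, y) position estimates.

    readings: dict mapping sensor name -> ((x, y), weight), or None
    when that sensor has dropped out. As long as one sensor still
    reports, the fused estimate degrades gracefully instead of failing.
    """
    total_w, fx, fy = 0.0, 0.0, 0.0
    for name, reading in readings.items():
        if reading is None:        # sensor blinded/degraded: skip it
            continue
        (x, y), w = reading
        total_w += w
        fx += w * x
        fy += w * y
    if total_w == 0.0:
        raise RuntimeError("all sensors degraded")
    return fx / total_w, fy / total_w

# All sensors healthy: lidar dominates because it is weighted higher.
print(fuse_positions({
    "gps":   ((10.0, 5.0), 1.0),
    "lidar": ((10.2, 5.2), 3.0),
}))

# "Blinded for half a second": fusion still returns an estimate.
print(fuse_positions({
    "gps":   ((10.0, 5.0), 1.0),
    "lidar": None,
}))
```

Real systems use probabilistic filters (Kalman or particle filters) rather than a plain weighted mean, but the failure-tolerance property is the same: losing one input degrades confidence, not correctness.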
Let's say I go to Los Angeles for the first time. I'm spending a lot of brain cycles looking at my phone, which is giving me instructions, trying to map out where I'm supposed to go and where the exit is, and that's not what you want to be doing while you're driving. But the second or third time I go through, it's a lot easier, because it's almost like muscle memory.

And that is a human's ability to anticipate and remember things. You could think of mapping and localization as giving the car the ability to anticipate and remember where things are. It is technically possible to have a self-driving car without that layer, but you have to spend a lot more of your cognitive cycles on the real-time compute. The mapping makes that easier, makes the car more confident, and gives it better reaction times. And all of this translates into a better user experience for the passenger.

For the passenger, which is interesting, because you talked about something before we turned on the cameras: supporting an augmented-reality type of experience for the driver. I think you're the first person we've had on who really talks about the driver's experience and comfort level. So how are you using augmented reality plus your technology to make it a more enjoyable experience for the person formerly known as the driver?

There's a concept of marrying the map and the sensors together. The sensors are seeing data in real time, and the map remembers what is supposed to be there in the real world. Augmented reality is essentially the application that does that and provides an interface to the users. As a passenger, or even a driver, if I'm giving up the steering wheel, if I'm giving up my ability to communicate through the car mechanically, I need to have some other interface where the car communicates to me visually. And augmented reality is that replacement.
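The "anticipate and remember" role of mapping and localization described above can be shown with a toy landmark matcher: because a prior map already stores where landmarks should be, the car only has to confirm and align against them rather than detect everything from scratch. This is a hypothetical sketch, not Civil Maps' method, and the landmark names are made up:

```python
import math

# Toy prior map: landmark id -> known (x, y) position in the world.
PRIOR_MAP = {
    "stop_sign_17":  (50.0, 2.0),
    "lane_marker_3": (52.0, -1.0),
}

def localize(detections, prior_map, max_dist=1.0):
    """Estimate position drift by matching live detections to the map.

    detections: landmark id -> (x, y) as currently observed.
    Returns the average (dx, dy) correction, computed only from
    landmarks the map anticipated; matches farther than max_dist
    from their mapped position are treated as unreliable and skipped.
    """
    dxs, dys = [], []
    for lid, (ox, oy) in detections.items():
        if lid not in prior_map:   # not anticipated: ignore in this sketch
            continue
        mx, my = prior_map[lid]
        if math.hypot(mx - ox, my - oy) > max_dist:
            continue
        dxs.append(mx - ox)
        dys.append(my - oy)
    if not dxs:
        raise RuntimeError("no mapped landmarks in view")
    return sum(dxs) / len(dxs), sum(dys) / len(dys)

# One remembered stop sign is enough to correct the car's drift.
offset = localize({"stop_sign_17": (49.5, 1.8)}, PRIOR_MAP)
print(offset)
```

The cognitive-cycles argument from the transcript shows up here as search space: with a prior map the matcher checks a handful of expected landmarks, whereas a map-free system must detect and classify everything in view in real time.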
It communicates to the passengers the car's understanding of the world and its intentions, and when the car goes through that road and is consistent with what it's communicating, it's developing my trust in the system over time. And that's really important for people to not feel anxious.

So a lot of us have seen the "what does a Google car see?" videos, with the graphics around the bicycles and the dogs and the other cars. Is that what you're talking about? Or is it a softer communication, you know, "I'm going to slow down because I see a stop sign ahead"? How will that be displayed? Is it a heads-up display? Is it something you wear? How do you get that communication back so I feel comfortable that my car, and I'm sure people name their cars, right, is doing the right things in advance instead of suddenly slamming on the brakes?

So it doesn't need to be you wearing a headset. The car has its own eyes. If you can augment its eyes and display the fused perception on an infotainment system, I can start interacting with it. I'll give you a use case. Let's say I'm going to Tahoe and you're going with me, and I'm in the autonomous car. The car stops in the middle of the street where your address is, but your ski racks are in your driveway. How do I tell the car to pull into the driveway so you can mount the ski racks? That level of interactivity needs to come about in cars. Just the functionality of going from point A to point B is not enough. People need that sense of control to do all the things that they use a car for.

Interesting. And where are you guys in terms of the lifetime of the company, product delivery, et cetera?

Yeah, so we're about two and a half years old. We have around seven customers. We raised our seed round and now we're raising our Series A. So we're rapidly expanding.
We were one of the first to have a product in the market that demonstrates some of the functionality here, and we're looking to expand into other markets now and scale up.

And what's your go-to-market? Will you be part of a bigger system, a component within a system that goes into a Ford car, just to pick a name out of a hat? Or are you an independent technology that sits alongside but integrates with the other autonomous stuff?

We're sort of like Switzerland. We provide a platform that enables a really good user experience, and we have well-defined interfaces, so other systems in the car can talk to us very easily. And people can pick and choose which modules they want to use from Civil Maps. We don't force a packaged solution.

All right. Well, Sravan, it sounds like a really interesting story. I look forward to watching, because the comfort thing is a big thing, right? If people aren't comfortable, they're not going to use it, or they're going to be nervous.

That's how brands will differentiate themselves in the future. And I think the companies that focus not just on functionality but on product experience are the ones that will win the market.

Love it. Great way to close. Well, thanks for taking a few minutes.

Thank you.