From Milpitas, California, at the edge of Silicon Valley, it's theCUBE, covering autonomous vehicles. Brought to you by Western Digital.

Hey, welcome back everybody, Jeff Frick here with theCUBE. We're at Western Digital's office in Milpitas, California at the AutoTech Council Autonomous Vehicle Event. About 300 people talking about all the various problems that have to be overcome to make this thing reach the vision we all have in mind, to get beyond the cute Waymo cars driving around and actually get to the production fleet. So a lot of problems, a lot of opportunity, a lot of startups. And we're excited to have our next guest: Dave Tokic, the VP of Marketing and Strategic Partnerships at Algolux. Dave, great to see you.

Great, thank you very much, Jeff. Great to be here.

Absolutely. So you guys are really focused on a very specific area, and that's imaging: all the processing around imaging, getting the intelligence out of imaging, and getting so much more out of those cameras that we see around all these autonomous vehicles. So give us a little bit of the background.

Absolutely. So Algolux, we're totally focused on driving safety and autonomous vision. It's really about addressing the limitations in today's imaging and computer vision systems, perceiving the surrounding environment and its objects much more effectively and robustly, as well as enabling cameras to see more clearly.

Right. And we've all seen the demo in our Twitter feeds of the Chihuahua and the blueberry muffin, right? This is not a simple equation. Somebody like Google and those types of companies have the benefit of everybody uploading their images, and they can run massive amounts of modeling around that. How do you guys do it? In an autonomous vehicle, it's a dynamic situation; it's changing all the time. There are lots of different streets, different situations. So what are some of the unique challenges, and how are you guys addressing them?

Great. So today, for both ADAS systems and autonomous driving, the companies out there are focusing on the simpler problem of properly recognizing an object or an obstacle in good conditions: fair weather in Arizona or Mountain View or Tel Aviv, et cetera. But we live in the real world. There's bad weather, there's low light, there are lens issues, the lens is dirty, and so on. Being able to address those difficult cases is not really being done well today, and today's system architectures make it difficult to do. So we take a very different, novel approach to how we process, and learn through deep learning how to do that much more robustly and much more accurately than today's systems.

So how much of that is done in the car, and how much is done where you're building your algorithms offline and then feeding that back into the car? How does that loop work?

Great question. So the intent is to deploy on systems that are embedded in the car. We're not looking to do a cloud-based system where everything is processed in the cloud, with the latency issues that would be a problem. So right now it's focused on the embedded platform in the car, and we do training of the data sets offline, but we take a novel approach with training as well.
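To make the split Dave describes concrete, here is a minimal sketch of the train-offline, run-embedded loop. This is an illustration only, assuming a generic PyTorch/torchvision detector as a stand-in for the actual perception network; the file name and parameters are placeholders, not Algolux's pipeline or API.

```python
# Illustrative sketch only, not Algolux's actual pipeline. It shows the split
# Dave describes: models are trained offline, then a frozen model runs
# embedded in the car so the perception loop never waits on a cloud round-trip.
import torch
import torchvision

def build_model(num_classes=5):
    # Generic off-the-shelf detector standing in for the real perception net.
    return torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=num_classes)

def train_offline(dataloader, epochs=10, out_path="perception_model.pt"):
    """Offline training (data center / workstation), not in the vehicle."""
    model = build_model()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for images, targets in dataloader:       # images: list of CHW tensors
            loss_dict = model(images, targets)   # detection losses
            loss = sum(loss_dict.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    torch.save(model.state_dict(), out_path)     # frozen artifact shipped to the car

def run_embedded(frame, weights_path="perception_model.pt"):
    """In-car inference: everything runs locally on the embedded platform."""
    model = build_model()
    model.load_state_dict(torch.load(weights_path))
    model.eval()
    with torch.no_grad():
        return model([frame])    # boxes/labels/scores for downstream planning
```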
We don't need as much training data because we augment it with very specific synthetic data that understands the camera itself, as well as taking in the difficult critical cases like low light and so on.

And do you have your own dedicated camera, or is it more of a software solution that you can use with lots of different types of inbound sensors?

Great, yeah. So what we have today is what we call Cana. It is a full end-to-end stack that starts from the sensor output, say an imaging sensor, or on the path to fusion, LiDAR, radar, et cetera, all the way up to the perception output that would then be used by the car to make a decision like emergency braking or turning and so on. So we provide that full stack.

So perception is a really interesting word to use in the context of car vision and computer vision, because it really implies a much higher level of understanding of what's going on; it really implies context. So how do you help it get beyond just identifying, to starting to get perception, so that you can make some decisions about actions?

Got it. So yeah, it's all about intelligent decisions, and being able to do that robustly across all types of operating conditions is paramount; it's mission critical. We've seen recent cases, Uber and Tesla and others, where they did not recognize the problem. So that's where we start first: making sure that the information that goes up into the stack is as robust and accurate as possible. And from there, it's about learning and sharing that information upstream to the control stacks of the car.

Right. It's weird, because we all saw the video from the Uber accident, with the fatality of the gal, unfortunately. And what was weird to me on that video is that she came into the visible light, at least on the video we saw, very, very late. But you've got to think, right? Visible light is a human-eye thing. That's not a computer; there are so many other types of sensors. So when you think of vision, is it just visible light, or do you guys work within that whole spectrum?

Fantastic question. So really, the challenge with camera-based systems today, starting with cameras, is that the way the images are processed is meant to create a nice displayed image for you to view, and there are definitely limitations to that. The processing chain removes noise, does de-blurring, things of that nature, which removes data from that incoming image stream. So we actually do perception prior to that image processing: we learn how to process for the particular task, like seeing a pedestrian or a bicyclist, et cetera. And so that's from a camera perspective, and it gives us quite the advantage of being able to see more than could be perceived before. We are also doing the same for other sensing modalities, such as LiDAR, radar, and others. That allows us to take in disparate sensor streams and learn the proper way of processing and integrating that information for higher perception accuracy, using those multiple systems for sensor fusion.

Right. So I wanted to follow up on what sensor fusion actually is, because we see all these startups with their self-driving cars running around Menlo Park and Palo Alto all the time. And, you know, some people say we've got LiDAR, LiDAR is great, LiDAR is expensive; we're trying to do it with just cameras, cameras have limitations.
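One way to picture the camera-aware synthetic augmentation Dave mentions: degrade well-lit labeled images with a simple sensor model to emulate low-light captures, so the network sees difficult conditions without collecting new data. This is a minimal sketch under assumed, placeholder noise parameters, not the actual approach or camera model.

```python
# Hedged illustration of camera-aware synthetic low-light augmentation:
# scale exposure down, add Poisson shot noise and Gaussian read noise,
# then apply digital gain back up. Parameters are placeholders.
import numpy as np

def simulate_low_light(image, exposure_scale=0.1, read_noise_std=2.0, full_well=255.0):
    """image: HxWx3 uint8 array in [0, 255]; returns a degraded uint8 image."""
    photons = image.astype(np.float64) * exposure_scale        # fewer photons hit the sensor
    shot = np.random.poisson(photons)                          # photon (shot) noise
    read = np.random.normal(0.0, read_noise_std, image.shape)  # sensor read noise
    dark = shot + read
    # Digital gain back up (as an ISP would) amplifies the noise; geometry is
    # unchanged, so the existing annotations (e.g. pedestrian boxes) still apply.
    return np.clip(dark / exposure_scale, 0, full_well).astype(np.uint8)

# Usage: augment an existing labeled daytime dataset with synthetic night variants.
# dark_img = simulate_low_light(day_img)   # reuse day_img's annotations unchanged
```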
But at the end of the day, there's also all the data that comes off the cars; they're pretty complex data-gathering vehicles as well. So pulling it all together must give you tremendous advantages over relying on just one or two types of input.

Absolutely. I think cameras will be ubiquitous, right? We know that OEMs and Tier 1s are focused heavily on camera-based systems, with a tremendous amount of focus on other sensing modalities such as LiDAR as well. Being able to kit out a car for production effectively and economically is a challenge, but that cost will come down with volume. Doing that integration today, though, is a very manually intensive process. Each sensing mode has its own way of processing information, and then stitching that together, integrating and fusing it, is very difficult. So taking an approach where you learn through deep learning how to do that is a way of getting that capability into the car much more quickly, and also of providing higher accuracy as the merged data is combined for the particular task you're trying to do.

But will your system at some point check in, kind of like the Teslas check in at night to get a download, so that you can leverage some of the offline capabilities to do more learning, better learning, aggregating from multiple sources, those types of things?

Right. So for us, the type of data that would be most interesting is really the escapes: the cases where the car did not detect something, or told the driver to pay attention or take the wheel, and so on. Those are the corner cases where the system failed. Being able to accumulate those particular snippets of information, send them back, and integrate them into the overall training process will continue to improve robustness. So there's definitely a deployed model that goes out that's much more robust than what we've seen in the market today, and then there's the ongoing learning to continue to improve the accuracy and robustness of those systems.

I think people so underestimate the amount of data these cars are collecting in terms of just the way streets operate, the way pedestrians operate. Whether there's an incident or not, they're still gathering all that data, making judgments, identifying pedestrians and bicyclists, and capturing what they do. So hopefully the predictiveness will be significantly better down the road.

That's the expectation, but as numerous studies have noted, a lot of the data that's collected is just redundant. So it's really about those corner cases where the system struggled to understand what was going on.

So just give us where you are with Algolux: state of the company, number of people, where are you in your lifespan?

Great. So Algolux is a startup based in Montreal, with offices in Palo Alto and Munich. We have about 26 people worldwide, most of them in Montreal, very engineering-heavy these days, and we will continue to be so. We have some interesting forthcoming news, so please keep an eye out for that; it's about accelerating what we're doing, I'll just hint at it that way. And the intent really is to expand the team, to continue to productize what we've built, and to start to scale out and engage more of the automotive companies we're working with. So we are engaged today at the Tier 2, Tier 1, and OEM levels in automotive, and the technology is scalable across other markets as well.
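The "escapes" Dave describes above, the corner cases worth sending back for retraining, can be pictured with a small sketch like the one below. The confidence threshold and data structures are my own illustrative assumptions, not how Algolux actually triages fleet data.

```python
# A minimal sketch (illustration only, not Algolux's system) of logging just
# the "escapes": frames where the perception stack was uncertain or the driver
# had to take over, so only corner cases -- not redundant bulk driving data --
# are sent back to be folded into the next training round.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Escape:
    timestamp: float
    frame_id: str
    reason: str             # e.g. "low_confidence" or "driver_takeover"
    max_confidence: float

@dataclass
class EscapeLogger:
    confidence_floor: float = 0.4        # assumed threshold: below this, detections are suspect
    escapes: List[Escape] = field(default_factory=list)

    def observe(self, timestamp: float, frame_id: str,
                detections: List[Tuple[str, float]], driver_takeover: bool = False):
        """detections: (label, confidence) pairs from the perception stack."""
        top = max((conf for _, conf in detections), default=0.0)
        if driver_takeover or top < self.confidence_floor:
            reason = "driver_takeover" if driver_takeover else "low_confidence"
            self.escapes.append(Escape(timestamp, frame_id, reason, top))

# Periodically, logger.escapes (plus the matching raw sensor snippets) would be
# uploaded and integrated into the overall training process to improve robustness.
```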
Pretty exciting. We look forward to watching, and you're taking on the challenges of real weather, unlike the Mountain View guys who don't really deal with real weather here.

There you go, fair enough.

All right Dave, well thanks for taking a few minutes out of your day, and we again look forward to watching the story unfold.

Excellent, thank you, Jeff.

All right, he's Dave, I'm Jeff, you're watching theCUBE. We're at Western Digital in Milpitas at the AutoTech Council Autonomous Vehicle Event. Thanks for watching, we'll catch you next time.