Hello everyone. You all know ChatGPT, DALL·E and the like. I'm here to talk about the next revolution in automation: embodied AI. Foundation models have driven the rapid acceleration of AI over the last decade, and they wouldn't be foundation models if they weren't open source. The reason they became foundation models in the first place is that the open source community could discuss the data and the models. We saw it with computer vision: whenever a company claimed to beat a benchmark, within a very short time an open source computer vision model would appear that beat that benchmark. We've seen the same happen over and over in natural language processing, and we're starting to see it with generative AI right now, where foundation models and their data are being open sourced so everyone can discuss them.

We know generative AI has shortcomings. All foundation models have had shortcomings. The reason we solved them is that the problems are out in the open, so the community can discuss the issues with the model or the data. Now that foundation models have taken over computer vision, natural language processing and generative AI, the next natural frontier for them is embodied AI.

We're moving from safe AI in browsers and on phones, where everyone is talking about the AI alignment problem, to AI that has to coexist with us in public spaces and operate in safety-critical environments. We're going to see embodied AI in everything from household robotics to aviation, drones, mining equipment and construction equipment, and, most notably, in what most people know it for: self-driving cars, buses, trucks, forklifts and so on. Right now most embodied AI is built by big corporates. It's closed source, on proprietary data, which is okay.
They have invested a lot of money in this, but it's going to coexist with us in public spaces, and right now it's really difficult for anyone to understand what shortcomings these systems have. We believe the next natural step is to bring embodied AI the same level of transparency we've seen with all other foundation models.

Some examples: Mercedes got their Level 3 system certified for 10% of German highways, up to 60 kilometers an hour. Cruise just lost their robotaxi license in San Francisco. Why didn't they get certified for more? In what scenarios do they fail, and why do they fail? None of this discourse is happening in embodied AI right now, because it simply isn't possible: we don't know what scenarios they fail in or why.

So Yaak is taking some cues from the open source community. We see Hugging Face, Mistral and Meta using data cards and model cards to address the pitfalls of these models. It's a toolkit for transparency: who trained the model, with what data, when, and what issues it has. It allows the community to propel AI models forward. Yaak is taking the next logical step: we propose scenario cards for embodied AI. Scenario cards will allow the community to advance embodied AI and accelerate development, but in a safe way, where we can actually discuss the shortcomings of the data or the model being deployed.

The challenge is that not all data is equal. We already know from generative AI that you need expert data to train your model, and you need a feedback loop. So how do we put this in the hands of the many instead of the few? Yaak has built an expert platform that lets us source drive data across 30 cities in Germany. We collect roughly 50 terabytes of curated drive data every day through eight cameras mounted on driving instructor vehicles.
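To make the scenario-card idea concrete, here is a minimal sketch of what such a card could contain, by analogy with model cards and data cards. The schema and field names are hypothetical illustrations, not a published standard:

```python
from dataclasses import dataclass, field

@dataclass
class ScenarioCard:
    """Hypothetical scenario card for embodied AI, modeled on
    the model/data cards popularized by the open source community."""
    scenario_id: str
    description: str   # e.g. "pedestrian steps out between parked cars"
    odd: str           # operational design domain: "urban, dusk, dry road"
    source: str        # who recorded and curated the scenario, and when
    known_failures: list = field(default_factory=list)  # systems observed to fail here

# Example card a community member might publish and discuss:
card = ScenarioCard(
    scenario_id="de-berlin-0042",
    description="pedestrian steps out between parked cars",
    odd="urban, dusk, dry road",
    source="driving-school fleet capture, 2023",
)
```

The point of the structure is the same as with model cards: once a failure scenario has an identifier and a description, the community can reference it, reproduce it, and argue about it in the open.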
The driving instructors give feedback on the mistakes their students make, which allows us to extract those scenarios and make them available to all developers of safety-critical embodied AI. The instructors don't only flag driving mistakes, the counterfactuals; they also demonstrate how an expert would drive in the same situation. So we generate the best curated drive data in the world: how to drive in a city, and how not to drive in a city.

Having the best curated drive data lets us tackle the alignment issue for embodied AI. The other issue is how we then discuss how your AI is performing. To enable that, Yaak proposes that we start generating open metrics. The same way we have open data cards and model cards, we propose open metrics that let us assess performance in a unified way across scenarios. We think the way embodied AI reaches the finish line, the way we get safe autonomous vehicles, aircraft and forklifts to the masses, is by building a community around it: agreeing on scenario cards, agreeing on the metrics and standards we hold these systems to, and agreeing on the data quality we need to collect to enable this.

At this point, Yaak has been going for three and a half years, and we've built an expert platform that allows us to scale across the world. We have a backlog of driving schools who want to work with us, so the 55 cars driving around Germany with our sensor kit right now are just the beginning. It's 50 terabytes a day, and it's growing rapidly. What's important is that we want to make this available to the many. How do we make it possible for anyone to develop safe embodied AI? We do that through our expert platform.
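As a sketch of what an "open metric" over such scenarios could look like: with expert demonstrations and instructor-flagged mistakes (counterfactuals) attached to each scenario, one simple unified score is the fraction of scenarios where a drive model's behavior matches the expert rather than the mistake. Everything here is illustrative; none of these names are an actual Yaak API:

```python
# Hypothetical open metric: score a drive model across a set of labeled
# scenarios. Each result records whether the model's behavior matched the
# expert demonstration (as opposed to the instructor-flagged counterfactual).

def scenario_pass_rate(results):
    """results: list of dicts with a 'matched_expert' bool per scenario.
    Returns the fraction of scenarios where the model behaved like the expert."""
    if not results:
        return 0.0
    passed = sum(1 for r in results if r["matched_expert"])
    return passed / len(results)

results = [
    {"scenario_id": "s1", "matched_expert": True},
    {"scenario_id": "s2", "matched_expert": False},  # drove like the counterfactual
    {"scenario_id": "s3", "matched_expert": True},
]
rate = scenario_pass_rate(results)  # 2 of 3 scenarios passed
```

Because such a metric is defined over shared, published scenarios, two different systems can be compared on exactly the same terms, which is what makes the discourse possible.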
What we're working on right now is the toolchain that allows any third party to access the data and the scenarios, ingest them into their own environment, work with them, and assess their own drive models, for instance. We've also developed a safety GPT. It allows you as an embodied AI developer to rapidly assess the challenges with your AI. This is a model that's never been told what a car, a stop sign or a yield sign is. It's never been on this road segment before, but it can still look at what your autonomous car or your autonomous forklift is doing and tell you whether it's safe or not safe. By doing this, we help you understand the shortcomings of your embodied AI so you can accelerate your development. That was it.
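The interface a developer would see from such a safety assessor might look like the stub below: it takes only raw observations of what the vehicle did and returns a safe/unsafe verdict per clip. The rule inside is a deliberately simple toy stand-in (a 2-second-gap heuristic) for what would actually be a learned model; the function and its threshold are assumptions for illustration, not the product's real behavior:

```python
# Illustrative stub of a "safety GPT"-style judge. It never needs to know
# what a car or a stop sign is; it only judges observed behavior.

def assess_clip(trajectory):
    """trajectory: list of (speed_mps, min_gap_m) samples from one drive clip.
    Toy rule: flag the clip as unsafe if the vehicle ever keeps less than
    a 2-second time gap to the object ahead while moving."""
    for speed, gap in trajectory:
        if speed > 0 and gap / speed < 2.0:
            return "unsafe"
    return "safe"

verdict = assess_clip([(10.0, 30.0), (12.0, 20.0)])  # 20 m at 12 m/s is a 1.7 s gap
```

A real judge would consume camera frames rather than hand-picked features, but the contract is the same: behavior in, verdict out, with no map of the road segment required.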