All right, our next speakers are TK, Chairman of the Board and CEO of Midokura, and Dan Dumitriu, CTO of Midokura. TK has a wealth of experience in technology startups and the Japanese IT industry, where he served in top management positions. As CTO, Dan leads the R&D team for advanced development of edge computing and AI technologies. Today, they will be talking about the WEdge project: Edge AI enabled by Wasm. Please help me welcome TK and Dan.

Okay, hello everyone. My name is Tatsuya Kato. Just call me TK; the full name is hard to pronounce. And this is my business partner, Dan. Our time is pretty short, so let me begin the presentation.

Okay, so let me give you a brief company introduction first. We started the company in 2010; it was born in Tokyo. We were developing network virtualization, called MidoNet, fully open sourced and built on overlay technology at that time. After that, we pivoted the business focus to edge computing for industrial IoT. In 2019, Sony acquired us, and now we are a 100% subsidiary of Sony Semiconductor Solutions. Our pedigree is in distributed computing and large-scale systems. We now provide key technology to Sony for the coming era of edge AI.

Okay, next, AITRIOS. As you may know, Sony has close to 60% of the market share in vision sensors, and they have various related vision technology assets. They provide the AITRIOS solution platform worldwide, targeting solution providers and AI developers. I won't read every sentence on the slide.

Okay, the next is my last slide. What needs from AITRIOS gave rise to our WEdge project? A couple of reasons come from actual, real use cases. I won't mention them all, but let me give you some. One is that they would like to have TinyML run on the IMX500, which Sony invented as the world's first DSP stacked with a CMOS sensor. Two is that they would like to accomplish a sensor-fusion world, where non-RGB sensors can run on low-capability devices like MCUs with an RTOS. The last one is more critical to them: since edge AI industry verticals require security, they have to secure a trusted execution environment for edge-native applications. That's why our WEdge project was born. Okay, so I'll hand it over to Dan. Please, Dan.

Thank you, okay. So, some challenges of development for IoT devices, which some of you may know. One is that development in C is hard. Maybe not for everybody, but for some people it's hard, especially in the embedded world where there's very little memory and lots of other constraints. And code is not very portable per se. The architecture is one thing, but also the OS: there's no standard OS, and it's super fragmented in the embedded space. And memory isolation, which we take for granted with Linux on servers, does not exist on microcontrollers, so everything is running in real mode, essentially. These RTOSes, if you can even call them operating systems, aren't designed for multi-user at all, so the syscall interface is really not secured well enough to do capabilities and such. And dynamic loading is sometimes not supported at all; you need to recompile and rebuild the entire OS image to make a software change. And if it is supported, it's not safe, because you might load some code that is going to crash the system. And the last one, not that important, but the development language might be completely different between your cloud application and the embedded one: the embedded side is going to be C and the cloud is going to be something else, probably.

Another point here is some pain points of IoT devices and vision cameras, like security cameras.
One is that after deploying the product, after shipping the product, typically you don't change the functionality, like, ever. That's it. And on the security front, it's practically not possible to find all vulnerabilities ahead of time. That would be great, but it's not going to happen, so you need to patch things. So these are some of the motivating factors for why we did this.

So, three goals of our project. One is to solve these security and isolation problems; actually I put that in third place for some reason, but that was our initial motivation: let's solve this isolation and sandboxing issue. And then we started getting the idea of, hey, we could try to enable a developer ecosystem by creating a plugin architecture and SDKs and so on based on this, because we can potentially do that in different languages.

So what is WEdge? Obviously the W stands for WebAssembly, at the edge. So WEdge is a couple of things. It integrates the entire IoT device lifecycle, so it runs alongside an IoT platform. We didn't develop the IoT platform per se; we actually use a project called ThingsBoard for our development, maybe you've heard of that, but we could use AWS IoT or anything else, really. And then the system also has the part of creating the application, deploying the application, and then doing the whole lifecycle management. Now, that part is quite generic, right? But since we're dealing mostly with vision sensors, we developed this model called the vision sensing application, where, through the SDK, the developer has access to certain interfaces like controlling the sensor and accessing image manipulation functions, which we've actually implemented with OpenCV at the moment. And essentially you can break down the application into multiple modules that are decoupled, partially for code reuse and partially for better isolation.

Okay, so WEdge consists of two parts. One is the WEdge cloud, which runs on the server side alongside the back end of the IoT platform and has a REST interface and so on. The developer interacts with that REST API to register modules, create a deployment manifest describing the application, and do the lifecycle management of the application. And one important point is that we do AOT compilation, and that all happens in the cloud. The system knows about the different target device types and figures out which compiler toolchain it has to use; all of that is running in the cloud, in containers and whatnot.

And then there's the agent side. I think we've seen this a couple of times: the agent basically wraps the WebAssembly runtime, calls it in a particular way, and exposes certain native libraries, device drivers, and OS features (a rough sketch of what that looks like follows at the end of this section). We use WAMR because that's what fits on tiny devices. And again, I should mention that we're dealing with potentially a variety of operating systems. At the moment we run on Linux, just because we do development there, and primarily we use a real-time OS called NuttX, N-U-T-T-X, because that's what Sony is using for various hardware platforms, but we can support others.

And then, besides WASI, we have the WEdge services APIs, which include a couple of different things. One is the sensor interface; we will try to standardize this, and we're going to make a WASI-Sensor proposal for controlling sensors, hopefully all kinds of sensors, but particularly CMOS image sensors.
And communications: we have various types of communication, both the kind of module-to-module communication, which currently is a bit custom (hopefully we can use the component model for that when it's ready), and also off-device communication like device-to-device, pub/sub topic-style communication, as well as HTTP and access to blob storage and so on. So we have various primitives like that. And then we have WASI-NN as well: if there's an accelerator on the device, or even if it's in software, we use WASI-NN to do the inferencing. Although the IMX500 is a little different; if you want to learn about that, there's something on the website. It actually has an in-line accelerator where the image just passes through the DSP and comes out the other side, so it's not a thing you have to call; it just happens, triggered on every frame. And we also have some data storage primitives, like blob storage access in the cloud and a local database. And OpenCV, because we need to do image manipulation for certain things. If you went to the workshop you saw it, or you can stop by the booth and check it out.

Okay, so I mentioned that we wanted to provide these SDKs in different languages. So what are some of the challenges of polyglot development in Wasm? I think you probably all know very well. At the moment, scripting languages are not the ones that are well supported in Wasm, right? Not yet, especially Python. And Python happens to be the one that's really, really preferred by AI and machine learning developers, so we really want Python at some point. Also, we discovered something interesting, speaking of the programming language being different in the cloud and on the device: the training loop and the inferencing use different code. Sometimes that code should be exactly the same, but it's actually implemented differently, and sometimes that gets messed up. So that's something we want to correct. So we did implement something that we call Py-to-Wasm. It's not very original, and the name clashes with somebody else's Py-to-Wasm. But essentially we have limited support for Python in our module framework, and we want to move to something much more supported and standardized as soon as possible.

And what's coming up next? Well, we are definitely going to stay engaged with this ecosystem; this was a really great conference. We are contributing to WAMR upstream, and we have dedicated developers for that. We would like to open source WEdge, if the corporate masters let us, push our standardized WASI extensions like WASI-Sensor into the community, and add more support for various types of MCUs, especially RISC-V, and other RTOSes besides NuttX. Thank you.
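To make the agent side described earlier a bit more concrete, here is a minimal sketch of how a device agent could embed WAMR and expose a host function to a sandboxed module, using WAMR's C embedding API (wasm_export.h). The "wedge_sensor" module name and the read_frame host function are illustrative placeholders, not the actual WEdge SDK, and error handling is kept to a bare minimum.

    /* Sketch of a device agent embedding WAMR and exposing one host function.
     * "wedge_sensor" / read_frame are hypothetical names, not the real SDK. */
    #include <stdint.h>
    #include <stdio.h>
    #include "wasm_export.h"

    /* Host-side implementation the module sees as an import. A real agent
     * would copy the latest frame from the sensor driver into buf. */
    static int32_t read_frame(wasm_exec_env_t exec_env, uint8_t *buf, uint32_t len)
    {
        (void)exec_env; (void)buf; (void)len;
        return 0;
    }

    static NativeSymbol native_symbols[] = {
        /* "(*~)i": pointer + length in, i32 out, in WAMR signature notation. */
        { "read_frame", read_frame, "(*~)i", NULL },
    };

    int run_module(uint8_t *wasm_buf, uint32_t wasm_size)
    {
        char error_buf[128];

        if (!wasm_runtime_init())
            return -1;

        /* Register the device-side primitives before loading the module. */
        wasm_runtime_register_natives("wedge_sensor", native_symbols, 1);

        wasm_module_t module =
            wasm_runtime_load(wasm_buf, wasm_size, error_buf, sizeof(error_buf));
        if (!module) {
            printf("load failed: %s\n", error_buf);
            wasm_runtime_destroy();
            return -1;
        }

        wasm_module_inst_t inst =
            wasm_runtime_instantiate(module, 8 * 1024, 16 * 1024,
                                     error_buf, sizeof(error_buf));
        if (!inst) {
            printf("instantiate failed: %s\n", error_buf);
            wasm_runtime_unload(module);
            wasm_runtime_destroy();
            return -1;
        }

        /* The module runs sandboxed; it can only reach what was registered. */
        wasm_application_execute_main(inst, 0, NULL);

        wasm_runtime_deinstantiate(inst);
        wasm_runtime_unload(module);
        wasm_runtime_destroy();
        return 0;
    }

The point of this pattern is that a module only gets the capabilities the agent chooses to register, which is the isolation and capability story Dan describes above.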
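And as a companion sketch from the developer's side, this is roughly what a tiny vision sensing module could look like when compiled to Wasm with a WASI toolchain. The import names ("wedge_sensor", "wedge_comm") and the bright-pixel count standing in for real WASI-NN inference are hypothetical, chosen only to illustrate the sensor, inference, and pub/sub primitives discussed in the talk.

    /* Hypothetical vision sensing module; import names are illustrative. */
    #include <stdint.h>
    #include <string.h>

    /* Host imports the agent is expected to provide (hypothetical). */
    __attribute__((import_module("wedge_sensor"), import_name("read_frame")))
    int32_t read_frame(uint8_t *buf, uint32_t len);

    __attribute__((import_module("wedge_comm"), import_name("publish")))
    int32_t publish(const char *topic, const uint8_t *payload, uint32_t len);

    #define FRAME_SIZE (320 * 240)   /* one grayscale QVGA frame */
    static uint8_t frame[FRAME_SIZE];

    int main(void)
    {
        /* Pull one frame through the host-provided sensor interface. */
        if (read_frame(frame, FRAME_SIZE) != 0)
            return 1;

        /* Toy stand-in for inference: count bright pixels. A real module
         * would hand the tensor to WASI-NN or use the IMX500 output. */
        uint32_t bright = 0;
        for (uint32_t i = 0; i < FRAME_SIZE; i++)
            if (frame[i] > 200)
                bright++;

        /* Publish the result for the next module or the cloud. */
        uint8_t msg[sizeof(bright)];
        memcpy(msg, &bright, sizeof(bright));
        return publish("occupancy/raw", msg, sizeof(msg)) == 0 ? 0 : 1;
    }

Because the device specifics sit behind host imports, the same module source can be AOT-compiled in the cloud for Linux or NuttX targets without changes, which is the portability argument from earlier.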