Hi, my name is Dan Dimitriou and I'm the CTO of Midokura, a Sony Group company focused on IoT platforms and visual sensing. This is joint work with my colleagues Dong Shenyang and Bo Jiejiang from the Sony Semiconductor Solutions China Development Center.

Some of the pain points for IoT manufacturers: it is difficult or impossible to change functionality after deployment, and there's no standard component model, so it's quite costly to customize devices from a base platform. Cybersecurity, as we know, is a big problem, especially for IoT. There have been many, many instances of compromised IoT devices, and vulnerabilities simply can't all be found before shipping; that is just not going to happen. So the industry really needs to move to a SecOps model where vulnerabilities are fixed quickly as they're found, yet companies in IoT typically don't have these competencies.

One of our goals is to build customizable IoT devices. That means changing functionality without a full over-the-air (OTA) update that replaces the entire firmware. It means dynamic loading and linking of application code, which traditionally hasn't been available in embedded systems. And it means memory safety and control-flow safety on IoT hardware that has no MMU, which is most 32-bit MCUs. We would also like to enable application portability across hardware platforms and across operating systems: the real-time OS ecosystem in the IoT domain is quite fragmented, unlike the cloud world where everybody runs Linux or Windows. Finally, we'd like to reduce development effort and support high-level languages.

On the topic of platform heterogeneity: again, the cloud world is pretty much all x86 and Linux, while the IoT world has a very large diversity of architectures.
There is a lot of ARM, but ARMv7 and ARMv8 are different. There's potentially some x86. There's Xtensa, a Cadence ISA used, for example, in the ESP32. And then there is a variety of operating systems: Linux might be there, but also systems like NuttX, FreeRTOS, ThreadX, et cetera. So we don't have a stable application platform. We don't have MMUs on most MCUs, as mentioned, which means no virtual memory and no process isolation. And it is well known that the number one source of security vulnerabilities is memory bugs; if this is true on the server side, it's almost certainly true in the IoT domain too.

What we propose is an edge stack that cleanly separates the application layer from the OS, creating a sandbox and a set of services for edge applications that are portable across multiple platforms.

We looked at several options for safe execution runtimes. We looked at MicroEJ, a commercial option based on Java, which is a 32-bit JVM for MCU devices. We looked at MicroPython, WebAssembly, JerryScript, which is basically JavaScript, and also Lua. My colleague Henrik Sundstrom at the Sony Research Center in Lund, Sweden, did an evaluation based on a number of criteria: power and performance, general system requirements, the runtime model, ease of integration, security, the software update model, and also JIT support, which is not so much a concern for us in the embedded IoT world, but is for some types of applications. Long story short, we decided to go with WebAssembly, because it seems to be the most future-proof and it allows us to support multiple languages with a single runtime system. It's also compiled to native code and offers high performance, as opposed to, for example, MicroPython, which is interpreted.
So here are three of the WebAssembly runtimes available on MCU-class devices that we considered: Wasmtime, WebAssembly Micro Runtime, and Wasm3. We decided to go with the WebAssembly Micro Runtime. First of all, it supports all the instruction set architectures we care about: Xtensa, ARM, and, in the future, RISC-V. It has an interpreter mode, a JIT mode, and an AOT (ahead-of-time compiled) mode; typically we are going to use the AOT mode on our IoT devices. It runs on multiple OSes, including Linux and NuttX. It also supports load-time dynamic linking of multiple modules, which is interesting for our application platform.

Some of the pros and cons we found. The pros: it's easy to embed with the C API, so it can be embedded in another application, for example the bootstrap application agent on our IoT device. It has a relatively small footprint, supports various languages, and has built-in libc support. And it definitely ensures the memory safety and control-flow integrity that we care about. The cons we discovered: there is some overhead for the memory safety, because the bounds checks are implemented in software, and it has no built-in GC, since GC hasn't been standardized in WebAssembly yet, so we can't directly support languages like Python; that will come in the future.

Now some interesting numbers. We benchmarked this using the CoreMark benchmark on the ESP32, a dual-core Xtensa running at 240 MHz, and on an Allwinner V3s, a single-core 32-bit ARMv7-A running at 800 MHz. The numbers are pretty interesting. First of all, we can see that the AOT mode is far better than the interpreter, as expected. Also, on the ESP32 the AOT mode is basically one-third the speed of the native code.
On the Allwinner, it's about half the speed of the native code. Altogether this is pretty good, an acceptable level of overhead for our application, and we haven't really done much to optimize it so far.

One remaining issue, though, is program size. Especially when using C++, the code size gets quite big, and in fact the AOT-compiled code is pretty large. The ESP32 has a rather difficult memory layout because it's a Harvard architecture, meaning it has separate instruction and data buses. So not all main memory is executable, and the instruction memory is relatively small, less than 500 kilobytes. In fact, we simply ran out of space. Our solution is to execute from flash memory. Looking at the numbers, you can see that the AOT code size is actually even bigger than the size of the interpreted WebAssembly bytecode. Our scheme for execution from flash memory, which we implemented in the WebAssembly Micro Runtime, is: first, at load time, we relocate the text segment and write it into flash memory; then we use the ESP32's MMU to map the flash region into an executable region of the address space. This works, and the overhead is actually quite reasonable.

What we plan to build with this is a fully automated cloud-to-edge pipeline: the developer writes some code, compiles it to WebAssembly, and pushes the compiled WebAssembly module up to our cloud platform. Then the cloud platform, based on the target device type, takes care of compiling it for the final target, whether that's Xtensa on the ESP32, 32-bit ARM, x86, et cetera, and finally deploys it to the device. So the developer is really decoupled from the target.

In conclusion: it works. The performance is approximately 50% of native code, which we hope will get better.
The memory footprint is bigger than we would like, especially for C++, of which we have a lot. At some point we would benefit from hardware support for bounds checking, but even without it, it's acceptable for now. And end-to-end automation is very important. So I hope you enjoyed my talk, and thank you very much.