Okay, thank you all for coming. My name is Ryu Ishimoto. I'm the Director of Ecosystem Development at Midokura, Japan. Today I want to talk about the WebAssembly-based AI sensing pipeline orchestrator for resource-constrained edge devices. It's a long title.

So, to start, I want to share our vision. We're building an accessible platform that powers and enables intelligent solutions based on distributed visual sensing. It's a platform for solution developers who may not have deep expertise in AI, computer vision, or sensors to quickly build their own solutions and provide services, and also to develop visual sensing applications for the many kinds of sensors out there that speak various protocols.

To achieve this vision, I've listed six mission statements here. First, we want to make sure that applications developed on this platform can run on cheap, resource-constrained devices, because we think there is still huge demand for these devices in the IoT space. But these devices are very difficult to develop on, so we want to define high-level abstractions that make development easier. With those high-level abstractions and the components that implement them, we have a well-defined device stack. The next step is to open-source that device stack, so developers can easily get a device, put the stack on it, and develop on top of it; we believe this will accelerate adoption. We also want to lower the operational cost, so developers can get their services started quickly and operate them painlessly. And we want these applications to run on far-edge devices, the devices closest to where the data is generated by the sensors. That way the data is processed at the edge and never leaves the device, which preserves privacy. Eventually we envision a marketplace, a central place where developers can upload their applications and share code.

Okay, so the agenda for this presentation: I want to go through the challenges we have to solve. Then I'll introduce the EVP agent, which is the component we're going to open-source. Then I'll introduce the cloud component of the platform, which we call the Sensing Pipeline Service. Then we'll have a demo, and then Kenji will go over the ecosystem expansion work we're doing.

Okay. So here are the challenges we have to solve. Application development is difficult for these devices. When I say resource-constrained devices, I mean not just small Linux devices but also microcontroller units running real-time OSes. Embedded programming in general is hard: you typically code in low-level languages like C and C++, which may alienate a lot of developers. There's no component model to speak of, so you can't just download packages and build your logic on top of them; there isn't much code reuse. The features you develop aren't very customizable either: you develop them, put them on a device, and if you want to modify them you basically have to do a full OTA update or get a new device. And typically you code your application to be device-specific, and device-specific code is not portable; you can't take that code and run it on different devices.
And this is a problem in the IoT space, because IoT devices and real-time OSes are still fragmented. If you want to provide a service, especially in a multi-tenant environment, you can end up with multiple applications running on a single device, so you need full isolation between applications and proper memory protection. Unfortunately, MCUs usually don't come with memory management units, so they don't provide memory protection at all. As for isolation technology, containers are way too heavy for these devices. And finally, we want to run these applications at near-native performance, because many of the applications on these far-edge devices are critical: they may be detecting or handling critical events, which basically means you want them to run natively rather than as interpreted code.

How about the operations side? There are a lot of pain points there as well. If you want to provide a service, you need basic IoT functions like device provisioning and device management. If you want to provide applications, you also need some sort of application management, so people can download and consume these applications on their devices, and so developers can share reusable code. Your service will probably support multiple devices, or different types of devices with different capabilities, so you want to automate giving each application the proper access to hardware resources like sensors and accelerators. The applications running on the devices also need to be connected to the cloud, or to another application running somewhere else, and that has to be configured. And the last case is interesting: because these are small devices, it's likely that an application won't fit on a single device, so you may have to split it up and run it in different places. You might run some workload on the device and send its output somewhere else, like a server on premises or a computer on MEC. MEC is multi-access edge computing: edge computing resources provided by telcos. So you may want a single application split up and deployed in multiple places, and how do you do that?

So, the EVP agent. This is the component we want to open-source. EVP stands for Edge Virtualization Platform. We named it that a couple of years ago when we started the project, but we're hoping that once we open-source it, the community will come up with a better name. The agent is a component that runs on a device. First, it connects to the IoT platform in the cloud. It's designed to be IoT-platform agnostic, in that it can easily be configured to integrate with different IoT platforms. Right now we support ThingsBoard, which, if you know it, is an open-source IoT platform, and several others as well, and it can easily be configured to integrate with more. By connecting to an IoT platform, you get device management and basic IoT functions like telemetry for free. The agent is also responsible for application lifecycle management on the device: it has to spawn the applications, start, initialize, terminate, and clean them up.
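To make that lifecycle point concrete, here is a minimal sketch of what an application module built against the C SDK could look like. The header, type, and function names below are illustrative stand-ins, not the published SDK API (which will come with the open-source release); the shape is what matters: the module initializes, registers callbacks, and lets the agent drive events and shutdown.

    /* Illustrative sketch only: "evp_sdk.h" and all evp_client_* names are
     * hypothetical placeholders for the C SDK, not its real interface. */
    #include <stdio.h>
    #include "evp_sdk.h"                       /* hypothetical SDK header */

    static void on_config(const char *key, const char *value, void *userdata)
    {
        /* The agent delivers configuration pushed down from the cloud. */
        printf("config %s = %s\n", key, value);
    }

    int main(void)
    {
        evp_client_t *client = evp_client_init();              /* hypothetical */
        evp_client_set_config_callback(client, on_config, NULL);

        for (;;) {
            /* Block until the agent has an event (config update, message,
             * shutdown request) and dispatch it to our callbacks. */
            if (evp_client_process_event(client, 1000 /* ms */) == EVP_SHUTDOWN)
                break;                         /* agent asked us to terminate */
        }

        evp_client_free(client);               /* clean up before exiting */
        return 0;
    }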
As I mentioned before, we also need to provide proper access to the hardware resources on the device, and the EVP agent manages that. Not only does it give applications access to the hardware, it also reports device information to the cloud. That information is useful because the cloud can use it to decide whether an application can even run on a given device, and which executable format to produce for it. Lastly, network connectivity: these applications may be connected to each other or to the cloud, possibly in different locations or on the same device, and all of those cases are handled by the EVP agent.

As for isolation, we've chosen WebAssembly as the isolation technology, for various reasons I'll go over next. Just a side note: the EVP agent does support containers, but for this presentation I'm going to focus on WebAssembly. And just like with the IoT platform, we want the EVP agent to be Wasm-runtime agnostic as well. Right now we've chosen one specific runtime, and I'll explain how we chose it, but by open-sourcing we hope to end up with a design that is truly runtime agnostic. The EVP agent and the C SDK, the SDK for developing applications for EVP, will be open-sourced very soon.

So how did we choose WebAssembly? We looked at several options. MicroEJ was an interesting one because it had all the features and stability we needed, but it's a commercial product, and we wanted an open-source solution we could customize, which was very important for us. Among the open-source options, WebAssembly was the best because it was the most future-proof. We actually did a more extensive evaluation, so if you're interested I'd be happy to go over it after the presentation.

And how did we choose the runtime? We evaluated the first three: Wasmtime, wasm3, and WebAssembly Micro Runtime (WAMR). We chose WAMR because of its support for the hardware architectures and operating systems we cared about; out of familiarity we cared about NuttX support, and it had that. Another important reason is its support for ahead-of-time (AOT) compilation: as I mentioned, one of our goals is to make sure the Wasm application runs at native speed. WasmEdge is an interesting project we discovered more recently, and it's getting traction, so we're definitely watching it. But as I said, we want to design the EVP agent so that runtimes can be swapped, so it doesn't really matter which one you use; we're just using WAMR for now (a minimal embedding sketch follows below).

So that's the EVP agent on the device side. Next I want to explain the cloud side, which we call the Sensing Pipeline Service, at a very high level. The Sensing Pipeline Service is basically a collection of microservices on top of an IoT platform, which, as I mentioned, could be anything, ThingsBoard being one of them. It provides features for solution developers to design a solution and also to instantiate it in the physical world, onto devices.
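Coming back to the runtime for a moment, here is that embedding sketch. It is not the EVP agent's actual code, just the plain WAMR C API, and it assumes the module bytes (either a .wasm file or an .aot file produced beforehand by WAMR's wamrc ahead-of-time compiler) have already been read into a buffer.

    /* Minimal WAMR embedding sketch (not the EVP agent's code). */
    #include <stdint.h>
    #include <stdio.h>
    #include "wasm_export.h"

    static int run_module(uint8_t *buf, uint32_t size)
    {
        char error_buf[128];

        if (!wasm_runtime_init())               /* bring up the runtime */
            return -1;

        wasm_module_t module =
            wasm_runtime_load(buf, size, error_buf, sizeof(error_buf));
        if (!module) {
            printf("load failed: %s\n", error_buf);
            wasm_runtime_destroy();
            return -1;
        }

        wasm_module_inst_t inst =
            wasm_runtime_instantiate(module, 16 * 1024 /* stack */,
                                     64 * 1024 /* heap */,
                                     error_buf, sizeof(error_buf));
        if (!inst) {
            printf("instantiate failed: %s\n", error_buf);
            wasm_runtime_unload(module);
            wasm_runtime_destroy();
            return -1;
        }

        /* Run the module's main(); the Wasm instance is the sandbox boundary. */
        wasm_application_execute_main(inst, 0, NULL);

        wasm_runtime_deinstantiate(inst);
        wasm_runtime_unload(module);
        wasm_runtime_destroy();
        return 0;
    }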
The way it works is that the developer designs a solution and submits that design to the Sensing Pipeline Service. The Sensing Pipeline Service then transforms it into actual WebAssembly modules running on devices, and maybe beyond devices, and those modules will be in the correct executable format. It does that by using the system and device information reported by the EVP agent: the Sensing Pipeline Service knows the architecture and capabilities of the device, so it can do the ahead-of-time compilation to make sure the WebAssembly source is compiled into the right binary format for that device. Also, based on the device's capabilities, it knows exactly where to deploy and how to provide access to the hardware. If the application needs to access a sensor, it tells the EVP agent to grant that, and it grants access only to what is needed, nothing more. And finally, network connectivity: as I mentioned, there may be cases where an application is split up and running in multiple places, or where multiple applications are running on the same device. Either way, the connectivity is configured by the EVP agent and the service.

The important concept here is that there are two different stages: the solution design and the instantiation of that solution. Developers don't need to worry about instantiation, because that part is taken care of entirely by the Sensing Pipeline Service; they only focus on creating the solution.

So, this is the solution design in more detail. What is a sensing pipeline? It's the connecting of simple tasks to complete a more complex and meaningful task. A simple task may be a small AI component that detects an object and outputs a bounding box, which is useful, but by itself it's not enough; typically you take that output and do something else with it. So what we imagine is that the data coming from the sensor is processed by a predefined sequence of tasks, and that is basically what a sensing pipeline is: it defines the sequence of tasks and the input/output relationships between them. By breaking a big task into small components, we get composability and reusability of these Wasm modules, which didn't exist before. And these pipeline nodes can now be represented and constructed using visual tools and low-code/no-code tools, which makes it even easier for developers to design the solution. Again, I'm reiterating my previous point, but here developers are detached from the actual instantiation of the design. The green box on the left is the design phase: developers use some tool to construct a pipeline and submit that design to the service, and the rest is taken care of.

And how is it taken care of? Sorry about the misalignment of that box there. This is the instantiation process. You need a device; that's where the WebAssembly modules are going to be spawned.
So you need to provision the device. What else do you need? You need to compile the source code to run on that device. As I mentioned, this is done through collaboration between the EVP agent, which reports information about the device, and the Sensing Pipeline Service, which does the ahead-of-time compilation. Then you actually have to deploy onto the device, with proper access to the hardware resources. And the last item here is interesting as well: as you can see, you have a pipeline of three nodes, the first two instantiated on the device and the third on MEC. When the developer created the sensing pipeline, he or she was not aware of exactly where it would be deployed or in what format, but the Sensing Pipeline Service has determined that this is the optimal deployment strategy, and all the network connectivity is configured automatically by the Sensing Pipeline Service and the EVP agent.

Okay, so now I want to show a demo. Let me explain the setup. The demo is basically license plate detection and reading. As input we're going to use a Japanese license plate. The sensing pipeline application is going to detect the license plate, extract the characters on it, and send them out to the cloud as telemetry data. The device is a Raspberry Pi-based camera set up in our Barcelona office (we have a Barcelona office), and that device is running the EVP agent. We have the Sensing Pipeline Service running in the cloud, and ThingsBoard is used as the IoT platform: it handles device provisioning and lets us view the telemetry. And we're going to use Node-RED as the visual tool to construct the sensing pipeline and to associate the pipeline with the device for deployment. In this demo you'll see all the concepts I talked about before: after you construct the sensing pipeline and submit it to the Sensing Pipeline Service, it will do the automatic ahead-of-time compilation, deploy the WebAssembly modules with access to the sensors, and connect them properly to each other, and also to the cloud for the last node.

So, without further ado, I'm going to start the demo. Apologies, I'm not used to using PowerPoint, sorry about that. All right. This is the demo. This is ThingsBoard, I don't know if you've seen it; it's the IoT platform we use. We're showing the step of enrolling the device; the text is rather small, but here we're pasting the device certificate, and by selecting OK here we're effectively enrolling the device. Now you see the device is enrolled, which means the EVP agent running on the device has connected and reported its information. It's hard to see, but the system information is reported here; there is no telemetry yet and there are no application modules running on the device at this point. It's a new device. Okay, so here's Node-RED. The next step is to construct the sensing pipeline for license plate detection. Here we're dragging in the device we want to deploy the application onto. And the next step is to actually define the sensing pipeline itself.
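Before walking through the nodes, note that what gets built in Node-RED conceptually boils down to a small graph of named nodes and typed connections between their outputs and inputs. The snippet below is a purely illustrative sketch of that idea in C; the structure names and module file names are made up and are not the actual format the Sensing Pipeline Service uses.

    /* Purely illustrative: a sensing pipeline as a graph of tasks (each
     * backed by a Wasm module) plus edges between named output and input
     * ports. None of these names come from the real service. */
    typedef struct {
        const char *name;    /* node identifier shown in the visual tool */
        const char *module;  /* Wasm module implementing the task */
    } node_t;

    typedef struct {
        const char *from_node, *from_port;  /* producer and its output port */
        const char *to_node,   *to_port;    /* consumer and its input port */
    } edge_t;

    static const node_t nodes[] = {
        { "detect_plate", "plate_detector.wasm" },
        { "postprocess",  "postprocess.wasm"    },
        { "crop_resize",  "opencv_crop.wasm"    },
    };

    static const edge_t edges[] = {
        { "detect_plate", "output_tensor", "postprocess", "tensor"    },
        { "detect_plate", "raw_image",     "crop_resize", "raw_image" },
        { "postprocess",  "bbox",          "crop_resize", "bbox"      },
    };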
And this license plate detection and reading pipeline is a good example because it involves a lot of steps, as I explained. So, first, the extract node. This is responsible for detecting the license plate, and it has two outputs: the output tensor and the raw image. In our sensing pipeline design you can have multiple inputs and outputs. These labels are just annotations to indicate what the outputs are; they don't do anything functionally, but they make it easier to see, even though the text is small: they say output tensor and raw image. Next is the post-processing node. What we're doing here is converting the output tensor, which is specific to the network used in the first node, into a common format, so the node after it doesn't need to know which network was used; it just creates a common format. Here we're connecting the OpenCV node. Now that the license plate has been detected, we want to crop that section and resize it, and OpenCV is the tool we use for that. The OpenCV node takes two inputs: the tensor, and the raw image output from the first node, which it crops, resizes, and annotates with a bounding box. The OpenCV output is sent to the send-image node, which just streams the images out so we can actually see the cropped, resized, annotated images from Node-RED. And here we're connecting the nodes for the second round of detection: now that we've detected the license plate, we want to detect the characters on it. So we detect the license plate character areas, do the post-processing again to get a common format, and then finally we extract the characters, which we send to the cloud as telemetry data. There you go.

So now that we've defined the license plate detection and reading solution, we associate it with the device for deployment. Now it's deployed. Here you can see that the EVP agent has reported the modules instantiated on the device: you see all the modules there, extract, post-processing, and so forth. Now we're going to show the actual demo of the license plate detection, using Japanese license plates. These are the telemetry messages sent to ThingsBoard, so we can see them there; we're just enlarging it to make it easier to see. So this shows, in real time together, the image and the telemetry.

This next part is a conceptual demo, where we show that a single application can be deployed on two different types of devices. We have two devices of different types, one ARM and one x86, which you'll see later, and this is how you would deploy to those devices with one call. What happens behind the scenes, as I explained before, is an automatic, dynamic ahead-of-time compilation performed by the Sensing Pipeline Service, and the right executable is deployed on each device.

This is the last part of the demo, where I want to show the customizability of our platform. We were running license plate recognition before, and now we're going to replace it in Node-RED with a different application; when we deploy this to the device, it will switch. The new application is face detection.
After this, you'll see the license plate detection and reading switch over to face detection instantly. This is basically doing the same thing as the license plate detection, except that it detects a face instead, and here we're just sending the video stream; we're not sending anything to ThingsBoard. Deploy. Now you'll see that the new modules are running. Sorry, it's not zoomed in here, but the modules have been updated, and you can see it on the left side. In Node-RED you can see the face detection working. Okay, so that's the demo. We're also showing the demo at the booth, so if you have questions about it, please come by and we can answer them there. For the next couple of slides we want to talk about the ecosystem expansion work we're doing, and I'm going to hand it off to Kenji.

Hi. My name is Kenji Shimizu from Midokura Japan, and my role is alliance manager. I'm going to talk about another topic, fragmentation, which is another pain point for IoT devices. As mentioned in the previous slides, we are going to open-source the software described there, but in addition we need to solve another problem, which is fragmentation. This slide shows a well-known picture representing the fragmentation of Android. Because there was so much Android fragmentation in the early days, it was very difficult for developers to make an application that could run properly on every device, and we think the same thing will happen for IoT devices. To avoid that kind of fragmentation problem, we want to propose and design the device software stack together with partners, including device vendors.

This slide shows the device stack, which is still a work in progress, but it lists what we want from such a software stack. First, we need an abstraction layer over multiple operating systems and CPU instruction set architectures, provided by a WebAssembly runtime. We want application developers to use a standard application interface like WASI, the WebAssembly System Interface; by using such a standard interface with some extensions, application developers can easily access device features like sensors and storage, as sketched below. We want to include third-party user applications, such as simple AI software modules like object detection and pose estimation, which run safely in a sandboxed environment thanks to the WebAssembly runtime. We want to include system applications, which handle lifecycle management for the device, the WASI modules, and the machine learning models. And we want connectivity management, meaning the connections between the device, the agent software, and the cloud service. This is all still tentative, so we need to design these parts.

As for ecosystem expansion, we will take a collaboration-based approach to make these things happen. We need to support a variety of de facto technologies across cloud, network, IoT platform, and operating system, which is a lot of things. This slide shows relevant projects from the Linux Foundation, and we want to collaborate with people from these communities. At the same time, we plan to establish, in the near future, an alliance around the device stack I mentioned on the previous slide. Thank you. This is the end of my introduction. Thank you.
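To give a rough feel for the "WASI plus extensions" idea mentioned above, here is a purely illustrative sketch of a sandboxed C module importing a hypothetical host-provided sensor call. The real interface is, as noted, still being designed, so the import names here are invented; only the standard WASI parts (compiling with a wasi-sdk toolchain, printing via stdout) are real.

    /* Illustrative only: the "sensing"/"read_frame" import is a hypothetical
     * WASI-style extension for sensor access, not a published interface. */
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical host function exposed to the sandboxed module. */
    __attribute__((import_module("sensing"), import_name("read_frame")))
    int32_t sensing_read_frame(uint8_t *buf, uint32_t buf_len);

    int main(void)
    {
        static uint8_t frame[320 * 240];
        int32_t n = sensing_read_frame(frame, sizeof(frame));
        if (n < 0) {
            printf("no frame available\n");   /* stdout via standard WASI */
            return 1;
        }
        printf("got %d bytes from the sensor\n", (int)n);
        return 0;
    }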
So, the platform is still under development, and we are now preparing to open-source some key components of the system. We welcome your feedback and hope you'll take an interest in the project. Thank you very much. We can now take questions; I think we have time, right?

So, the question is about the performance of Wasm compared to native on these edge devices. We actually have a benchmark comparing interpreted mode, AOT-compiled mode, and native. It's near-native performance, not quite there. I can't recall the exact numbers, but I can get the benchmark numbers for you later; it's pretty close to native speed. Sorry I can't give you the exact figures, but it's close, and it's much better than interpreted mode, by the way. Any other questions? Okay, I think it's lunchtime, so we'll end it here. Thank you for attending this talk, and enjoy your lunch.