Okay, I think we can get started. Thank you very much for being here. Let me introduce myself again, because this is my second presentation. My name is Tiejun Chen, just call me Tiejun. I'm a Staff 2 Engineer and technical leader at VMware, Office of the CTO, so you can probably imagine I'm supposed to work on innovation and research. That's true. Our team tries to incubate new projects and new areas at VMware. On my side, I've been involved in things like library OS, Function as a Service, hardware-assisted virtualization, automotive systems, and edge computing, especially machine learning inference at the edge. Before I joined VMware, I worked at companies like Wind River Systems, where I worked on the Linux kernel, BSPs, and Wind River Linux distribution development, and I also joined Intel OTC, the Open Source Technology Center, where I enabled hardware features in open source projects.

Today I'm going to talk about edge computing systems: let's try to build some integration between ROS and EdgeX. You might know that both are popular open source projects. The agenda for this talk: first, some background on what ROS and EdgeX are; then I'll introduce our approach and solution, show a simple machine learning inference framework, share how to fit them to real-time requirements, and, based on my experiments, talk about intelligent edge computing. I'll wrap up by mentioning my hardware and development environment. Overall, I don't want to dig deep into these two projects or even edge computing itself; each would deserve its own presentation to cover properly. I just want to use these two projects and this integration to help you get into edge computing, to start or enjoy the edge computing journey on your side.

So what's ROS? ROS is the Robot Operating System. According to the definition on its wiki page, ROS is an open source meta operating system for your robot. But actually it's not a real operating system, okay? It just provides the services you would expect from an operating system, like low-level device control, hardware abstraction, message passing between processes, and package management. It also provides utilities, tools, and libraries you can use to set up your environment, write code, and run ROS. So it's more like a framework. ROS currently runs on top of Unix-based systems; it's tested and validated on Ubuntu and macOS. At this point, we can see that ROS is really middleware, just middleware.

ROS has three levels of concepts. The first is the filesystem level. It covers ROS resources like packages. A package is the basic unit for organizing software in ROS. It may contain runtime nodes, dependent libraries, and configuration files. A metapackage is a specialized package that only serves to provide information about other related packages.
A manifest is an XML file that provides metadata about a package, like its name, description, version, license, and dependencies. Services and messages I'd like to talk about on the next slide.

The next concept is the computation graph level. The graph is the peer-to-peer network of ROS processes, loosely coupled through the ROS communication infrastructure. There are several concepts here. A node is a process that performs computation, like a fine-grained task in ROS. The master provides services like name registration and lookup for the rest of the computation graph; without the master, nodes cannot find each other, exchange messages, or invoke services. Messages: nodes communicate with each other by passing messages, and a message is just a simple data structure. Topics: messages are routed by the transport system with publish/subscribe semantics, so one node sends out a message by publishing it to a given topic, and any subscriber to that topic receives the message. Services: ROS is essentially a distributed system, so it also needs a request/reply communication mechanism. In ROS this is done with services, which are defined by a pair of message structures, one for the request and one for the reply.

So, to summarize, ROS is very popular in the robotics community, and you can even see it in some industries, even in automotive systems. It has a good ecosystem. Another project I want to mention here is ROS-Industrial, an open source project targeted at extending the capabilities of ROS to manufacturing. You can see some of the top manufacturers behind it.
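To make the node, topic, and message concepts concrete, here is a minimal publish/subscribe sketch using rospy, the ROS 1 Python client library. The node and topic names (talker, listener, chatter) are just illustrative, not something from this talk.

```python
#!/usr/bin/env python
# Minimal ROS 1 publish/subscribe sketch (rospy). Node/topic names are illustrative.
import rospy
from std_msgs.msg import String

def talker():
    # A node that publishes a simple String message on the "chatter" topic.
    rospy.init_node('talker')
    pub = rospy.Publisher('chatter', String, queue_size=10)
    rate = rospy.Rate(1)  # 1 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data='hello from ROS'))
        rate.sleep()

def listener():
    # A node that subscribes to the same topic; the callback fires once per message.
    rospy.init_node('listener')
    rospy.Subscriber('chatter', String, lambda msg: rospy.loginfo(msg.data))
    rospy.spin()

if __name__ == '__main__':
    talker()  # run listener() in a second process to see the messages arrive
```

With the master running, the talker and listener find each other by topic name only; that is the loose coupling the computation graph level is about.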
Okay, that's ROS. The next is EdgeX. I'm not sure if you have heard of EdgeX. EdgeX is targeted at IoT edge computing. When we talk about IoT edge computing, we face some challenges. There are different operating systems: besides traditional Linux and Windows 10, there are particular IoT OSes specifically for IoT endpoint devices. There are also various types of hardware platforms, based on x86 or ARM, even RISC-V. Another challenge is that there are various layers in an IoT architecture. In IoT we have sensors, and we need to connect the sensors to the cloud. Some sensors are IP-capable and can be connected to the cloud directly; that's the two-tier structure. But most devices cannot be connected to the outside directly because they are not IP-capable. Or even if they are IP-capable, you may not want to connect them to the cloud directly, for reasons like security and cost: you want to encrypt the data, filter the data, aggregate the data, things like that. So besides the sensors and the cloud platform, there's the edge layer, an edge gateway, which gives a three-tier architecture. And we also have edge servers, because we have realized that at the edge side we need some computing power and storage right there. That's the four-tier architecture: sensors, edge gateway, edge server, and cloud. Another problem is protocols. Devices are connected over various industrial connectivity options, networks, and protocols, like Modbus, Bluetooth, ZigBee, things like that. And if you want to connect to the cloud, there are protocols specific to IoT, like MQTT, CoAP, and so on. And when we go to the cloud platform, the top public cloud providers each have their own IoT-specific cloud platform, and IoT is a big scope covering applications and even security management. So you have to decide which one is best, or at least good, for your case, and that's very hard in a production environment.

So we have EdgeX Foundry. Basically, EdgeX Foundry is a common, open edge computing framework. Essentially it's a containerized, open source, loosely coupled microservice architecture that communicates over REST APIs. This picture shows the overall EdgeX architecture. You can see there are four service layers from bottom to top: the device services layer, the core services layer, the supporting services layer, and the export/application services layer, plus two system-level service layers, security and management. I think the architecture is straightforward. The core services layer is the heart of EdgeX Foundry: talking about IoT, it mostly comes down to collecting data and dispatching commands, so think data and commands. But we need a layer, device services, to build the communication between the physical devices and the core services: that's the device services layer. On top of the core services layer, now that we have the data, the supporting services let you define services like alerts, notifications, and rules, and you can even define additional services there. Besides this, sometimes you want to export your data to the cloud or share the data with another EdgeX instance, so the export services layer helps you export data outside.

So again, just remember that EdgeX Foundry has a microservice architecture. If the network is accessible, you can deploy and distribute these services anywhere, so there are flexible deployment options. For example, you can deploy just the device services layer on the endpoint device and place the other services in the cloud: think of the two-tier architecture. For three tiers, you can still deploy the device services layer on the endpoint device and the other service layers on the edge gateway, or you may deploy all services on the edge gateway. For four tiers, there are similar options. Just deploy EdgeX Foundry according to your own use case and requirements. You can also see there's a good partner ecosystem, and there's an overlap between the ROS and EdgeX partners.

OK, so now back to my side. In our team, part of my regular work is about EdgeX Foundry, and some efforts are about ROS. So I wondered, what if we bring them together? We really need that, because even for systems featuring ROS, you still need to consider edge computing; you still need to consider how to connect the system to the cloud. Back to EdgeX Foundry: one challenge for EdgeX Foundry is that device services layer. It means you have to write a new device service whenever you want to support a new kind of device. That's a big challenge for EdgeX Foundry. And EdgeX Foundry is still essentially in its infancy, so you need to figure out new use cases to build a good ecosystem. So I built some experiments in my spare time. What did we do? The approach and solution are not difficult.
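Since the EdgeX services talk to each other over REST, you can poke at the data flow directly. Below is a minimal sketch that polls readings from the core-data service; it assumes an EdgeX v1 deployment with core-data on its default port 48080 and the /api/v1/reading route, so adjust the host, port, and path to your deployment.

```python
# Minimal sketch: poll readings from EdgeX core-data over REST.
# Assumes an EdgeX v1 deployment with core-data on localhost:48080 and the
# /api/v1/reading route; these defaults may differ in your deployment.
import time
import requests

CORE_DATA_URL = "http://localhost:48080/api/v1/reading"  # assumed v1 default

def poll_readings(interval_s=5.0):
    while True:
        resp = requests.get(CORE_DATA_URL, timeout=5)
        resp.raise_for_status()
        for reading in resp.json():
            # Each reading carries at least a resource name and a value.
            print(reading.get("name"), reading.get("value"))
        time.sleep(interval_s)

if __name__ == "__main__":
    poll_readings()
```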
Sometimes we can do it in a bare-metal environment: ROS can play the role of a device service layer for EdgeX Foundry, or the other way around, EdgeX Foundry can play the role of a ROS node. And I also tried putting this into the context of a virtualized environment, like I mentioned in my first presentation about the AGL, Automotive Grade Linux, virtualization solution.

So what can we do? The first way is ROS as a device service layer for EdgeX Foundry. The background of this diagram is the EdgeX architecture layers. In the device services layer, I deploy ROS. Think about a system where there are devices that EdgeX Foundry has no device service to support, but ROS can support them. So I pull this ROS node into the EdgeX Foundry device services layer. Here, I leverage one existing EdgeX Foundry device service, device-mqtt, and create some mirror devices on top of that MQTT device service. The core of this is an EdgeX-ROS adapter driver. I register a callback function with the ROS node, so once data arrives, the callback is triggered, and the driver converts the data for the MQTT device and sends it on to the core services layer. I'll give you one example. ROS typically can take over an IP camera, or even a USB-based camera, and has packages that expose the video stream, with a topic like image_raw. Back in EdgeX, I leverage device-mqtt, use a ROS subscriber to subscribe to that topic, register the image callback, and convert the image to an OpenCV matrix in that function. Then EdgeX Foundry has the chance to take over the image, take over the video stream, and we can even do machine learning inference after that.

The other way is very simple: we put EdgeX underneath, because sometimes EdgeX Foundry can already take over some devices. The EdgeX Foundry export services layer supports MQTT, so I export the data to a local MQTT broker, and then I register a new EdgeX node with ROS. That gives us a communication mechanism between ROS and EdgeX Foundry. In my experiment, I used EdgeX Foundry with the EdgeX Modbus device service to collect sensor data, like temperature and humidity, over an RS-485 Modbus link, and then used the MQTT bridge so that ROS can receive this information.

Another thing is the virtualized environment. Here I built one minimal service VM, a bit like dom0 in the Xen architecture, but I hope the solution is hypervisor agnostic. Anyway, I deployed the EdgeX device services layer in this service VM and passed through all the industrial IoT controllers or bus interfaces to this VM, so the device services layer, plus ROS, can take over those devices. Then we deploy the other services to different VMs. After that, I can build communication across these VMs, and we have the opportunity to make that communication secure and trusted. More importantly, in a virtualized environment we have a chance to monitor and detect whether a physical bus or a physical device misbehaves. I still have an example here; just one thing is different. Remember, we used a local MQTT broker to broker the messages between ROS and EdgeX Foundry.
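As a rough illustration of the ROS-to-EdgeX adapter described above, here is a minimal sketch of a ROS node that subscribes to a camera's image_raw topic, converts each frame to an OpenCV image, and republishes it as JPEG bytes to a local MQTT broker, where an EdgeX device-mqtt device could pick it up. The topic names, broker address, and MQTT topic are assumptions for illustration; this is not the exact driver from the talk.

```python
#!/usr/bin/env python
# Sketch: bridge a ROS camera topic to a local MQTT broker for EdgeX device-mqtt.
# Topic names, broker address, and MQTT topic are illustrative assumptions.
import rospy
import cv2
import paho.mqtt.client as mqtt
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

BROKER_HOST = "localhost"          # assumed local MQTT broker
MQTT_TOPIC = "edgex/camera/frame"  # assumed topic watched by device-mqtt

bridge = CvBridge()
client = mqtt.Client()

def on_image(msg):
    # Convert the ROS Image message to an OpenCV BGR matrix.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    # Encode as JPEG and publish the raw bytes to the MQTT broker.
    ok, buf = cv2.imencode(".jpg", frame)
    if ok:
        client.publish(MQTT_TOPIC, buf.tobytes())

if __name__ == "__main__":
    rospy.init_node("edgex_ros_camera_bridge")
    client.connect(BROKER_HOST)
    client.loop_start()
    rospy.Subscriber("image_raw", Image, on_image, queue_size=1)
    rospy.spin()
```

The same broker works for the reverse direction: EdgeX exports readings over MQTT and a small ROS node subscribes and republishes them as ROS messages.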
So here, in this environment, you have to consider the security issue. Instead, I put the local MQTT broker into a unikernel. I love unikernels. A unikernel is a specialized, single-address-space machine image built with a library OS. Essentially, it keeps only the necessary OS components to run just one given application. It's very small, boots quickly, is secure, and performs well. Here, I used an OSv unikernel image for this. So these are some approaches for the integration between ROS and EdgeX Foundry.

The next topic is about machine learning, edge AI. On the right side, I grabbed this picture from Gartner. Speaking of edge AI, you can see it's in the Innovation Trigger phase, about two to five years out. So edge AI is very promising, and we need to do something in this area. On my side, I'd like to distinguish edge AI from common edge computing. Common edge computing is probably just filtering data, aggregating data, or pre-processing data, things like that. But in some use cases, like retail or industrial IoT, we want to do machine learning inference. That means you need machine learning technology with some model, a predictive or prescriptive model, and it also needs to be sped up by hardware accelerators specific to the edge; I'll get to that.

OK, what do we have now? We have ROS, and ROS has OpenCV built in. That's good. And on edge devices, as I mentioned, many hardware vendors produce various types of hardware accelerators specific to the edge, where you have to consider power consumption and things like that. And we have flexible EdgeX deployment. So the framework looks like this. One node is the camera; it captures the image and passes the message with the video stream as OpenCV data. Then we build a common machine learning inference service node right here. It's composed of several components. Pre-processing: sometimes you need to resize the image to make sure it works well with the machine learning model. Then there's the model itself, and another component, model adaptation. As I showed, there are various types of edge AI accelerators, different from cloud hardware accelerators like GPUs or FPGAs, and you have to enable a different SDK or toolkit for each one. Take this one, the Intel VPU: you have to convert the common machine learning model into another format, an XML plus binary file. And for the Google Edge TPU, it supports TensorFlow Lite, but you still need to convert the TensorFlow Lite model into another format the Edge TPU can work with. Sometimes we can even figure out ways to optimize the model. We put these tasks into the model adaptation component. Then we can do the machine learning inference. After that, we use the export service to export the inference result. Another node, an EdgeX node, can just subscribe to that topic, get the inference result data, and then take some action according to a predefined policy.
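As a very small sketch of such an inference service node, here is the pre-processing plus TensorFlow Lite inference step; the model path, input handling, and the MQTT result topic are illustrative assumptions (on an Edge TPU you would additionally compile the model and load the Edge TPU delegate).

```python
# Sketch of a minimal inference step: resize, run a TFLite model, publish the result.
# Model path and the MQTT result topic are illustrative assumptions.
import cv2
import numpy as np
import paho.mqtt.client as mqtt
from tflite_runtime.interpreter import Interpreter

MODEL_PATH = "detect.tflite"          # assumed model file
RESULT_TOPIC = "edgex/inference/out"  # assumed topic for inference results

interpreter = Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

client = mqtt.Client()
client.connect("localhost")

def infer(frame_bgr):
    # Pre-processing: resize to the model's expected input shape.
    h, w = inp["shape"][1], inp["shape"][2]
    resized = cv2.resize(frame_bgr, (w, h))
    tensor = np.expand_dims(resized, axis=0).astype(inp["dtype"])
    # Inference.
    interpreter.set_tensor(inp["index"], tensor)
    interpreter.invoke()
    result = interpreter.get_tensor(out["index"])
    # Export: publish the raw result so another (EdgeX) node can act on it.
    client.publish(RESULT_TOPIC, result.tobytes())
    return result
```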
The next topic is about real time. Basically, EdgeX and ROS are both service frameworks, so it's very hard to meet hard real-time requirements with software alone, but we can do something to make it better. One way: they run on a Linux distribution, so the first thing we can do is apply the PREEMPT_RT patch to that Linux distribution. It's easy to do. Furthermore, we can tune the system, for example using CPU and IRQ affinity, to get better performance.

Another way is LinuxKit. LinuxKit comes from Docker. Everyone wants to enable Docker on different platforms, but they found that not all platforms can support Docker, so instead they released LinuxKit, a minimal toolkit. The core of this toolkit is Linux: it has a very minimal Linux kernel and a very minimal init, just enough so the kernel can support containers, and after that you sandbox everything into containers. Everything runs in a container. Now, I wanted to use LinuxKit, but it did not support the PREEMPT_RT patch, so I enabled the PREEMPT_RT patch for LinuxKit upstream. That means you can now build a real-time LinuxKit kernel officially from that branch. How do you define a LinuxKit image? You need a YAML file with a description: the kernel information (RT or not, the kernel version, the command line), then the init, the onboot containers, and the other user services, which all run in containers. Everything is in a container. When it boots, it goes like this: Linux, then runc for the onboot containers, then containerd brings up every user service, which could be ROS, EdgeX, and other EdgeX-defined services.

The last topic is about intelligent edge computing. I think the IoT edge challenge comes down to two problems: one is fragmentation, the other is heterogeneous architectures. EdgeX Foundry can actually be built as native applications, so in a given use case, part of the services can run as native applications and some services can run in containers. The containers can be built to support ARM-based architectures or x86-based architectures, and even RISC-V supports containers. In my experiment, I have a Raspberry Pi, an x86-based industrial IoT gateway, and an ARM-based edge server. I distribute EdgeX Foundry plus ROS across them this way, so we can leverage these devices all the way from the IoT endpoint device to the gateway and server. But the problem is: when I did this, how can we migrate a workload from one node to another node when they have different CPU architectures? That would be the problem. What I have done is build different application images, specific to the different architectures. Before I migrate a service or an application, I detect what type the destination target is, and then just pull the right image or application to launch there. But the performance is not good.
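A minimal sketch of that per-architecture selection just described, assuming images are published under hypothetical per-architecture tags and that plain docker pull/run is used on the destination node; the image names and service name are made up for illustration.

```python
# Sketch: pick an architecture-specific image before launching a migrated workload.
# The image tags and service name are hypothetical; only the arch check is generic.
import platform
import subprocess

# Hypothetical mapping from machine architecture to a pre-built image tag.
ARCH_IMAGES = {
    "x86_64": "example/edge-service:amd64",
    "aarch64": "example/edge-service:arm64",
    "armv7l": "example/edge-service:armhf",
}

def launch_for_this_host():
    arch = platform.machine()  # e.g. "x86_64" or "aarch64" on the destination node
    image = ARCH_IMAGES.get(arch)
    if image is None:
        raise RuntimeError("no image built for architecture: " + arch)
    subprocess.run(["docker", "pull", image], check=True)
    subprocess.run(["docker", "run", "-d", "--name", "edge-service", image], check=True)

if __name__ == "__main__":
    launch_for_this_host()
```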
This is my experiment environment. On the left side, I have a robot; it's not a DIY robot, it's a product robot, and you can see the radar right there. On the right side is an industrial gateway from Dell. I connected one camera and also a Google Edge TPU, and I even attached an external RS-485 controller, and did the experiments like the ones I shared today in this presentation. Maybe I can give you one more example: you can point the camera at the screen. One camera is connected so that ROS passes the image to EdgeX Foundry, and the other is connected to EdgeX Foundry and then passed through to ROS. Then I load YOLOv3, the object detection model, into this machine learning inference framework in my experiments, using the Google Edge TPU in one case and an Intel VPU device in the other. I guess that's what I wanted to talk about today. So, any questions?

[Audience question about performance] The problem is that it's not good. One reason is that I'm using a service architecture, a microservice framework. I don't have exact data to show the latency, but my feeling is it's around one or two seconds, and it depends on your network environment. If I set up a private network environment, separate from our office environment, I get good performance; but when we put it on our office network, the performance is not very good. I guess there's some traffic congestion there. OK, no more questions. Thank you, everyone.