Hello everyone, I am Guichi Xia from the College of Computer Science and Electronic Engineering, Hunan University, China. I am very glad to participate in this conference. I will introduce to you our virtualization management platform, ZVM, an embedded real-time virtual machine manager based on Zephyr RTOS and designed by our team. By the way, the name was inspired by KVM. Our report consists of three parts.

First, we introduce the integration of Zephyr and virtualization technology. I believe most of you know Zephyr; our platform is built on it. We chose Zephyr because it offers multi-hardware support, product-oriented design and maintenance, lightweight deployment, flexible configuration, and support for multiple IoT protocols. These excellent features have made Zephyr popular. Because the computing power of embedded systems has been increasing in recent years, virtualization technology previously used in cloud computing has begun to be deployed on embedded systems. Embedded virtualization requires strong isolation, high performance, low resource usage, and support for multiple architectures, and developing a virtualization system with Zephyr can meet all of these requirements.

In traditional cloud-computing, desktop, and embedded environments, many virtualization solutions have emerged, such as VMware and VirtualBox, commonly used on Windows desktops, or Linux KVM and Docker, commonly used in cloud-computing scenarios. However, these systems have problems that make them inconvenient for embedded scenarios, and closed-source solutions such as QNX are not easy for everyone to use. So we wanted to develop an open-source embedded virtualization platform. From technology-sharing sessions in past years, we learned that Zephyr has been deployed as a guest OS on various hypervisors, such as ACRN, Xen, and KVM; in such systems Zephyr acts as a control center and can take on some single tasks. So we started thinking: why can't we use Zephyr as the host OS, just as Linux does with KVM? In our opinion, using Zephyr as the host operating system has the following advantages: its lightweight features save resources, it offers better scalability, and it is easier to test and debug. So we just did it.

Now we begin the second part, the design of a virtual machine manager based on Zephyr. For the entire Zephyr system, we added a virtualization management module to support virtual machines; apart from this module, everything else is consistent with the original Zephyr. Next we introduce the Zephyr-based virtual machine manager, ZVM, its system architecture, and the design of its main modules. The overall architecture includes the Zephyr host OS, the virtualization management module, and the guest virtual machines on top. Within the virtualization management module, the virtualization layer is designed as a Zephyr module that interacts with the kernel to provide a virtualized environment; it offers a set of interfaces that allow guests to access resources in a controlled manner. As shown in the figure on the right, our current work mainly targets ARM hardware. ARM provides extensive support for hardware virtualization, including processor virtualization, memory virtualization, interrupt virtualization, and device virtualization; a short probe sketch for one of these features follows below.
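As a concrete illustration of this hardware support, the minimal sketch below probes whether the CPU implements the Virtualization Host Extensions (VHE) discussed next. It is our own illustration, not ZVM code, and it assumes an AArch64 target running at EL1 or above.

```c
#include <stdbool.h>
#include <stdint.h>

/* Read the AArch64 memory-model feature register. */
static inline uint64_t read_id_aa64mmfr1(void)
{
	uint64_t val;

	__asm__ volatile("mrs %0, ID_AA64MMFR1_EL1" : "=r"(val));
	return val;
}

/* ID_AA64MMFR1_EL1.VH (bits [11:8]) is nonzero when the Virtualization
 * Host Extensions are implemented, so the host OS can run at EL2. */
static inline bool cpu_has_vhe(void)
{
	return ((read_id_aa64mmfr1() >> 8) & 0xF) != 0;
}
```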
Through this hardware virtualization support, the performance of a virtualization system can be greatly improved. To support type-2 virtualization and reduce the overhead of context switching in the host operating system, we also use ARM's Virtualization Host Extensions (VHE). In this mode, the host operating system runs at EL2, the hypervisor privilege level, and there is no extra trap handling to return to the host when a virtual machine exits, which significantly reduces the context-switching overhead. This support includes register redirection and a new memory model, as shown in the figure below.

Based on these features, we designed five virtualization sub-modules: VCPU, VMEM, VIRQ, VDEV, and VTIMER. At the same time, Zephyr RTOS provides our system with the thread scheduler, memory management unit, interrupt management module, timers, and device management module.

Let me first introduce the design of the VCPU module in ZVM. In our system, each VCPU is simulated by a thread, and we use a VCPU structure to build a context for each one. Compared with an original Zephyr thread, we add an identifier and the series of context information a VCPU thread needs, so that it can be recognized during scheduling (a hypothetical sketch of this design appears after this section). In terms of CPU allocation, we adopt a master-slave model: host threads tend to be deployed on core 0, while virtual machine threads are scheduled on the other cores, and we use inter-processor interrupts (IPIs) to realize inter-thread communication.

To implement reasonable thread scheduling in the ZVM system, we presuppose that host threads have the highest priority, followed by the real-time application threads in the virtual machines, with the non-real-time applications in the virtual machines having the lowest priority. As the scheduling strategy, we currently use Zephyr's simple linked-list scheduler with preemptive priorities. In addition, under normal circumstances a thread is bound to a fixed physical CPU, which reduces thread switching and its associated cost.

For some real-time applications, we need to provide timer services. The ARM hardware provides a set of VM-oriented timer registers that a virtual machine can access directly without emulation. However, this kind of access requires that the VM currently occupies the physical CPU; if the CPU is preempted by another virtual machine, we use Zephyr's timeout mechanism to add the VM's time-triggered event to the host's events, and the VCPU thread is woken when the event fires (see the second sketch below).

In terms of memory virtualization, we use the two-stage address translation mechanism to support virtual machine access to memory. Across the entire system, memory is divided into guest virtual addresses, guest physical addresses, and host physical addresses. The first stage of address translation is done by the virtual machine itself, without hypervisor monitoring. The second stage is completed by the hypervisor, which converts the guest physical address produced in the first stage into the real host physical address. This strategy ensures the memory security of the system: a virtual machine cannot access address spaces that are not its own. In the concrete implementation, we use a three-layer structure to describe the memory of a virtual machine. The first is vm_domain, which describes the memory address space of an entire VM; the second is vm_partition, which describes one memory region of the VM's address space; the last is vm_block, which records the mapping from physical addresses to virtual addresses (the third sketch below illustrates this layering). In our design, this MMU-based address translation can achieve performance close to bare metal. In addition, to achieve smaller memory overhead, we will consider a dynamic memory allocation mechanism later.
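As referenced above, here is a rough sketch of how a VCPU can be built on top of a Zephyr thread and pinned to a fixed core. All names (struct vcpu, vcpu_run, and so on) are our assumptions rather than the actual ZVM definitions, and the pinning calls assume CONFIG_SCHED_CPU_MASK is enabled.

```c
#include <zephyr/kernel.h>

/* Each VCPU is a Zephyr thread plus the extra guest context that the
 * original k_thread does not carry. Field names are illustrative. */
struct vcpu {
	struct k_thread thread;  /* each VCPU runs as one Zephyr thread */
	uint32_t vcpu_id;        /* identifier checked during scheduling */
	uint64_t regs[31];       /* saved guest x0-x30 */
	uint64_t elr_el2;        /* guest resume address on VM entry */
	uint64_t spsr_el2;       /* saved guest program status */
};

#define VCPU_STACK_SIZE 4096
K_THREAD_STACK_DEFINE(vcpu_stack, VCPU_STACK_SIZE);

static void vcpu_run(void *p1, void *p2, void *p3)
{
	ARG_UNUSED(p2);
	ARG_UNUSED(p3);
	struct vcpu *vcpu = p1;

	/* enter the guest, handle VM exits, repeat */
	(void)vcpu;
}

/* Create a VCPU thread and pin it to core 1; host threads stay on
 * core 0 (the master-slave model). */
static void vcpu_start(struct vcpu *vcpu, int prio)
{
	k_tid_t tid = k_thread_create(&vcpu->thread, vcpu_stack,
				      K_THREAD_STACK_SIZEOF(vcpu_stack),
				      vcpu_run, vcpu, NULL, NULL,
				      K_PRIO_PREEMPT(prio), 0, K_FOREVER);

	k_thread_cpu_mask_clear(tid);
	k_thread_cpu_mask_enable(tid, 1);
	k_thread_start(tid);
}
```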
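For the preempted-VM timer path, the following sketch mirrors a guest timer deadline with a host-side Zephyr k_timer whose expiry marks a virtual interrupt as pending for the next VM entry. The struct and names are illustrative assumptions, not the real ZVM implementation.

```c
#include <zephyr/kernel.h>

/* While a VM is descheduled it cannot observe its virtual timer
 * firing, so the pending deadline is mirrored on the host side. */
struct vtimer {
	struct k_timer host_timer;  /* host mirror of the guest deadline */
	bool virq_pending;          /* inject a timer interrupt at next entry */
};

static void vtimer_expired(struct k_timer *timer)
{
	struct vtimer *vt = CONTAINER_OF(timer, struct vtimer, host_timer);

	vt->virq_pending = true;    /* picked up when the VCPU runs again */
}

/* Arm the host-side mirror when the VCPU is preempted. */
static void vtimer_arm(struct vtimer *vt, k_timeout_t remaining)
{
	k_timer_init(&vt->host_timer, vtimer_expired, NULL);
	k_timer_start(&vt->host_timer, remaining, K_NO_WAIT);
}
```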
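The three-layer memory description might look roughly like the following; the field names are our illustration of the idea, not the actual ZVM structures.

```c
#include <stddef.h>
#include <stdint.h>
#include <zephyr/sys/dlist.h>

struct vm_block {                /* one mapping unit */
	sys_dnode_t node;
	uintptr_t phys;          /* physical address... */
	uintptr_t virt;          /* ...and the virtual address it maps */
	size_t size;
};

struct vm_partition {            /* one region of the VM address space */
	sys_dnode_t node;
	uintptr_t base;
	size_t size;
	sys_dlist_t blocks;      /* list of vm_block mappings */
};

struct vm_domain {               /* the whole address space of one VM */
	sys_dlist_t partitions;
};
```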
In device virtualization, we mainly use the following two types of device at this stage: fully virtualized devices and device pass-through. For fully virtualized devices, we use memory and exception-handling functions to simulate device access. Devices of this type have good compatibility but high overhead and poor performance, and the approach is used for devices that cannot be allocated exclusively, such as the interrupt controller (GIC). Device pass-through refers to assigning a device directly to a VM: the VM uses the existing hardware driver, no new driver is added, and the performance is very close to using the physical hardware on the host directly. It is used for devices that can be allocated exclusively, such as a serial port. In both cases, we use MMIO to access the device and construct a virtual device control address range in memory. When a faulting access to such a device address occurs, the system traps into a callback function for processing (a sketch of this path follows below). At the same time, we provide virtual interrupt support for each virtual device so that it can deliver interrupt services.

In the last part of the module design, we designed the interrupt-processing logic to route all shared peripheral interrupts and internal interrupts to the hypervisor layer and process them there. Specifically, we handle interrupt delivery for each VM by building a virtual GIC device for it. The vGIC can inject physical or virtual interrupts into the virtual machine through the VCPU interface for processing. The specific steps that provide interrupt services for virtual machines are shown in the figure; a minimal injection sketch follows.
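Here is a minimal sketch of the trap-and-emulate path for fully virtualized devices, assuming a hypothetical vdev descriptor with a per-device callback; the real ZVM types may differ.

```c
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct vdev;

/* Per-device emulation callback invoked on a trapped MMIO access. */
typedef int (*vdev_mmio_cb)(struct vdev *dev, uintptr_t offset,
			    uint64_t *value, bool is_write);

struct vdev {
	uintptr_t base;          /* virtual device control address */
	size_t size;
	vdev_mmio_cb callback;   /* emulates the access */
	uint32_t virq;           /* virtual interrupt line of the device */
};

/* Called from the stage-2 data-abort (access fault) handler. */
static int handle_mmio_fault(struct vdev *dev, uintptr_t fault_addr,
			     uint64_t *value, bool is_write)
{
	if (fault_addr < dev->base || fault_addr >= dev->base + dev->size) {
		return -EINVAL;  /* not this device's address range */
	}
	return dev->callback(dev, fault_addr - dev->base, value, is_write);
}
```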
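And a minimal sketch of virtual interrupt injection through a GIC list register, simplified to a single hard-coded list register; the helper name and the GICv3 assumption are ours, not necessarily what ZVM does.

```c
#include <stdint.h>

/* GICv3 list-register layout: state field in bits [63:62]
 * (01 = pending), group bit 60, virtual INTID in bits [31:0]. */
#define ICH_LR_STATE_PENDING (1ULL << 62)
#define ICH_LR_GROUP1        (1ULL << 60)

/* Write one pending virtual interrupt into list register 0
 * (S3_4_C12_C12_0 encodes ICH_LR0_EL2). Must run at EL2. */
static inline void vgic_inject_virq(uint32_t virq_id)
{
	uint64_t lr = ICH_LR_STATE_PENDING | ICH_LR_GROUP1 | virq_id;

	__asm__ volatile("msr S3_4_C12_C12_0, %0" :: "r"(lr));
	__asm__ volatile("isb");
}
```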
So far, all the virtualization modules have been constructed. We tested our system on the QEMU and ARM FVP platforms and successfully started Zephyr and Linux virtual machines in ZVM. As shown in the figure on the right, taking QEMU as an example, we start a Zephyr 3.2.0 virtual machine and a Linux 5.16.12 virtual machine in ZVM, and some basic peripherals are supported, such as the interrupt controller and the serial port. Then, to evaluate the performance of ZVM, we used Zephyr's own performance-testing tool to measure various latencies of the system; the specific platform is QEMU 6.2.0. As shown in the figure, the left side gives the latency measurements without ZVM, and the right side gives the results after adding ZVM. We mainly compare the timing of behaviors such as thread switching, interrupt response, and thread creation; the results are shown in the figure below across seven items. Compared with running directly on the QEMU platform, creating and starting threads on ZVM shows a larger delay, while the increase in the interrupt-handling path is smaller. Although all costs increase, the overall added latency stays within the microsecond range.

Since the above experiments were carried out on an emulation platform, they lack some stability and reliability, so we are adapting our system to the RK3568 SoC. At this stage, native Zephyr 2.7 is already supported on this platform, and follow-up work will carry out functional and performance tests of ZVM on the RK3568.

In the third part, we summarize the above content and illustrate future directions. As can be seen from the previous introduction, the Zephyr-based virtual machine manager aims to provide a lightweight and efficient platform for embedded virtualization systems. Our goal is therefore to target scenarios that require lightweight virtualization, such as the Internet of Things, industrial control, and autonomous driving. To serve these scenarios better, our future optimization work starts from the following four points: first, the real-time capability of ZVM; second, the virtual device framework; third, ZVM security; and finally, considering the current popularity of intelligent computing, porting AI support. Together these features will build a comprehensive embedded intelligent-computing system.

First, for real-time computing, we will use a variable-priority scheduling strategy based on preemption to improve the real-time performance of the system. In addition, we will develop a real-time communication mechanism based on shared memory, so that in the future the various VMs can cooperate while the overall system remains real-time. For the virtual device framework, we are porting the virtio framework, as shown in the figure on the right, to our system. Our work mainly includes developing the virtqueue and virtio-device parts; due to the characteristics of Zephyr itself, we chose to integrate this part directly into the Zephyr kernel. Considering system security, beyond the basic isolation between VMs, we plan to provide additional protection for VMs and the hypervisor. For general Linux virtual machines, we plan to use CFI and LLVM to build a secure system. Regarding the security of the hypervisor itself, we will monitor hypervisor-call accesses to avoid security issues caused by illegal hypervisor calls. In terms of AI support, Zephyr currently supports lightweight TensorFlow Lite applications, and we are also adding support for lightweight Paddle Lite applications; ZVM will incorporate support for these frameworks into our tasks. At the same time, we will also analyze the feasibility of embedded GPU virtualization to support AI in hardware.

That is all of our content and long-term future plans. As an open-source project, we look forward to you actively joining and developing this project. The following is our project address. We are calling for participants. Thank you for watching.