Okay, let's start the presentation. Thanks for joining the session. My name is Bing Yang; I am a software engineer at Intel Corporation, based in Shanghai. Today I will introduce our proposal, named "Hosting multiple interactive Android clients on a single computing platform."

This is the agenda. First we will explain why we raised this proposal and the background and use cases behind it. Then we will go through the architecture and the detailed implementation of the solution. Finally we will present a real use case and give a summary.

Okay. As we know, Android is the most popular operating system in the world. In most cases Android is used in mobile phones, tablets, and TVs, but thanks to its rich ecosystem and its many application developers, Android has spread to some emerging use cases such as cloud gaming, automotive, and retail.

The first picture shows automotive. In modern cars, a lot of displays are installed to give the passengers entertainment services. For example, in one concept car there were 17 displays. Every display needs to provide independent interaction to a different passenger at the same time. So this is the first use case.

The second is cloud gaming. Cloud gaming runs the games in the cloud, renders and encodes the frames there, and sends the encoded stream to the user's mobile device. A cloud gaming solution gives the user a "play anywhere, any time" experience, and it also reduces the effort of distributing and maintaining new games. Android games occupy an important share of the game market; for example, in China more than 50% of games are Android games, and most game companies follow a mobile-first strategy.
So Android cloud gaming is also an important use case for Android.

The third use case is retail. We know Alibaba and Amazon have raised a new concept of "new retail" and staff-free retail. In these retail stores there are a lot of displays which provide checkout services and advertisements to the customers. Users can interact with the different displays at the same time, which reduces the service duration. So this is the third use case.

The fourth use case is robots. Service robots are also very hot in China, especially for education at home and guide services in hospitals or banks. Traditionally a service robot includes two OSes: one is ROS, which is used to collect the sensor data and control the robot; the other is Android, which interacts with the customers, accepts their commands, passes them to ROS to control the robot, and returns the feedback to the customers.

All of these use cases try to leverage the big ecosystem of Android, and they also raise new requirements. Behind the use cases we can find that, first, the system is required to serve different users at the same time without them interfering with each other; and second, the system is required to support a mixture of different Linux-based OSes. This is our starting point.

Okay. To satisfy the requirements behind these use cases, the easiest solution is just to use multiple devices: one device serves one user. It is the quickest solution and it brings no software development effort, but it also brings high cost and complexity. Let's take the concept car I mentioned on the first page as an example. If the car has 17 displays and we use one Android-based device to power every display,
it means we need to install 17 devices in the car, which brings high hardware cost and a big management effort, because you need to do the security updates and the OTA updates for every device. So this is solution one.

Solution two is to enhance the Android framework. Of course, we can change the Android framework to support multiple displays and let Android serve different users' interactions at the same time. This can achieve the highest performance, because multi-user is supported directly in the framework layer, but it also brings a big implementation effort, because you would need to modify a lot of source code in AOSP, for example the AMS (Activity Manager Service), the UMS (User Manager Service), the PMS (Package Manager Service), and even the Window Manager Service. So a lot of source code would have to be touched and modified. It also brings a big maintenance effort, because it is hard to persuade Google to accept these patches, so you need to maintain them yourself; Android is upgraded every year, so you need a big effort to carry these patches from one release to the next. The last drawback is low isolation: with multi-display and multi-user supported in the framework layer, it is hard to provide strong isolation between the different customers, and hard to provide different QoS to different customers. If a high-priority user wants a better user experience, that is hard to implement in the framework-enhancement solution.

Based on these conclusions, we reached our solution: we consolidate multiple Android workloads on a single computing platform using a container solution.
With this solution we run a single kernel, and on top of the single kernel there are different Android instances whose user-space stacks are isolated by containers, that is, by namespaces and cgroups. This solution reduces the hardware cost, because one computing platform can serve a lot of users at the same time. It also reduces the development effort, because we avoid big modifications to the AOSP framework. The third benefit is reduced maintenance and management effort: most of our modifications are outside AOSP, so we can upgrade our solution from Android N to P, and from P to Q, quickly. Compared with virtualization solutions, we also get good performance, because we just isolate the processes using cgroups and namespaces, so our performance is near native. The last benefit is scalability: we can support more Android instances on a single computing platform.

Okay, this is the architecture of our solution. The figure on the right shows the hardware and the software. From the figure we can see there is a single computing platform which includes some built-in hardware such as the USB controller, display controller, GPU, and CPU. We also have some external I/O devices, such as HDMI displays, HDMI audio, and USB devices, which are connected to the computing platform directly through the HDMI ports or USB ports. On top of the computing platform we use a single kernel to host the different Android instances. There are four Android instances, isolated from each other. Every Android instance is assigned a dedicated set of external hardware I/O devices; for example, Android instance one is assigned the left display and the left USB touch panel. With this design, different users can interact with different Android instances through different I/O devices.
The customers are not aware that they are interacting with a single computing platform. Besides the Android instances, we also have an instance manager, which is responsible for the life cycle of the Android instances. For example, it is used to start, stop, pause, and resume a dedicated Android instance, and it can also check the status of an Android instance. Based on this architecture, Android is only one of the Linux-based OSes we can support; we can host different Linux-based OSes, for example ROS, Yocto, and Ubuntu, on the single kernel. We integrate our instance manager and Android instances with Docker and Kubernetes, so they can be easily deployed and managed through the cloud and Docker commands.

One thing worth noting is that we keep our modifications out of AOSP: we implement most of them in the HAL, the hardware abstraction layer, which is outside both AOSP and the kernel. As we know, the HAL is maintained by the SoC and hardware vendors, AOSP is maintained by Google, and the kernel is maintained by the Linux Foundation. AOSP is upgraded once a year and the kernel is updated even faster. With this non-intrusive design, we decouple our solution from AOSP and the kernel so that we can upgrade our solution quickly.

So we have introduced the architecture of our solution. Of course, for different use cases the architecture has different feature sets. For example, cloud gaming is different from the in-car use case: for cloud gaming there are few external I/O devices, and the most important factors are instance density and GPU rendering, so the architecture has some small changes. Now we will go through our implementation. We will introduce the container, the kernel modification, the I/O virtualization, and the AOSP modification.
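Before diving into the implementation, the instance manager's life-cycle role described earlier can be pictured as a small state machine. The following Python sketch is purely illustrative: the class name, states, and actions are assumptions for this talk, not the actual Intel implementation (which wraps Docker/Kubernetes).

```python
# Minimal sketch of an instance manager that tracks the life cycle of
# Android container instances. Names and states are hypothetical.

class InstanceManager:
    # Allowed life-cycle transitions, mirroring start/stop/pause/resume.
    TRANSITIONS = {
        "stopped": {"start": "running"},
        "running": {"stop": "stopped", "pause": "paused"},
        "paused":  {"resume": "running", "stop": "stopped"},
    }

    def __init__(self):
        self._instances = {}  # instance name -> current state

    def create(self, name):
        # A newly created Android instance begins in the stopped state.
        self._instances[name] = "stopped"

    def control(self, name, action):
        # Apply a life-cycle action if it is legal in the current state.
        state = self._instances[name]
        allowed = self.TRANSITIONS[state]
        if action not in allowed:
            raise ValueError(f"cannot {action} instance in state {state}")
        self._instances[name] = allowed[action]
        return self._instances[name]

    def status(self, name):
        # Check the status of a dedicated Android instance.
        return self._instances[name]


mgr = InstanceManager()
mgr.create("android-1")
mgr.control("android-1", "start")
mgr.control("android-1", "pause")
print(mgr.status("android-1"))  # paused
```

In a real deployment each transition would map to a container-runtime call (e.g. starting or pausing a Docker container) rather than a dictionary update.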
As we said at the beginning, the most important technology our solution is based on is the container. A container is a combination of namespaces and cgroups. What is a namespace? A namespace is a feature provided by the kernel which can be used to isolate a set of processes. It includes the UTS namespace, network namespace, mount namespace, IPC namespace, PID namespace, and so on. With namespaces we can isolate the user-space stacks of the different Android instances from each other, so a process in container one cannot communicate with a process in container two.

The cgroup is another technology provided by the kernel. It provides a way to assign different amounts of resources, such as CPU and memory, to different groups of processes. For example, you can assign 400 MB of memory to container one and 2 GB of memory to container two. With cgroup support we can give different priorities to the different Android containers, so we can give high priority to some instances and low priority to others.

Next is the modification in the kernel. As we know, the binder is the most important module for Android in the kernel; it provides the inter-process communication service to Android processes. For example, if process one wants to communicate with process two in the Android world, it sends a message to the binder, and the binder transfers the message to process two. In the original design, the binder does not support multiple Android instances. The straightforward solution is just to modify the source code of the binder driver, for example to add a data structure such as a list into the binder source code and map every node in the list to an Android instance. We made a PoC of this, but it brings a lot of modification in the kernel, about 1,000 lines of code, which is hard to maintain and upgrade.
So we chose a different solution: we create multiple binder device nodes in the kernel and assign different binder device nodes to different Android instances. With this solution, no modification in AOSP is required; it is a very clean solution. On the other hand, because different binder nodes are assigned to different Android instances, the isolation between the instances is better. With this solution we only modified about 10 lines of code to enable multiple binders.

As we know, for most of these use cases the key point is the I/O virtualization solution. We group all I/O devices into two types: dedicated I/O devices and shared I/O devices. The dedicated I/O devices, such as the HDMI display, HDMI audio, USB camera, and USB touch panel or keyboard, are assigned exclusively to different Android instances; we extended ueventd to assign the dedicated I/O device nodes to the different Android instances. For the shared I/O devices, such as graphics, video, and sensors, we add new modules into the host-side daemon and leave some shim modules in the Android instance. Why do we use shim modules in the Android instance? Because we try to avoid modifying the source code in AOSP directly; we try to keep our AOSP clean.

Let's take USB input as an example of a dedicated I/O device. When we plug in multiple USB input devices, the kernel creates multiple device nodes. We chose a solution similar to the binder one, which assigns different input device nodes to different Android instances. As with the binder solution, there is nothing to change in AOSP, and because all the devices are passed through, there is no performance overhead.
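The dedicated-device scheme above boils down to a static table: each Android instance gets its own binder node and its own set of input nodes. The following Python sketch shows the idea; all device paths and the function name are hypothetical examples, not the actual ueventd extension.

```python
# Sketch: statically assign dedicated device nodes (binder, input) to
# Android instances. Device paths are hypothetical examples; in the
# real solution this mapping is applied by an extended ueventd.

def assign_devices(instances, binders, inputs):
    """Instance i gets binder node i and the i-th group of input nodes."""
    assignment = {}
    for i, name in enumerate(instances):
        assignment[name] = {
            "binder": binders[i],  # per-instance binder device node
            "input": inputs[i],    # per-instance input device nodes
        }
    return assignment

plan = assign_devices(
    instances=["android-1", "android-2"],
    binders=["/dev/binder0", "/dev/binder1"],
    inputs=[["/dev/input/event0"], ["/dev/input/event1"]],
)
print(plan["android-1"]["binder"])  # /dev/binder0
```

Because each instance only ever opens its own nodes, no AOSP change is needed and the devices remain fully passed through.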
Sharing I/O devices is more complicated. Let's take graphics and display as an example. For most hardware there is only one display controller and only one display controller driver in the kernel, and in most cases the driver requires that only one master process can access it. So we need a display daemon: a display composer running outside all the instances, which interacts with the display driver directly. We have two modifications: one is the display composer daemon; the other is the gralloc HAL and hardware composer HAL inside the Android instance.

In the Android instance, when SurfaceFlinger tries to allocate a frame buffer, it calls our gralloc HAL, and our gralloc HAL calls the real gralloc to allocate the buffer. At the same time, our gralloc HAL gets the fd of the frame buffer and shares it with the display composer, so the buffer is shared between the display composer and the Android instance. The Android instance can do any operation on the frame buffer just as in the original design, so there is no overhead on frame buffer operations. When the Android instance tries to send this buffer to the display hardware, it issues a display request. Our hardware composer HAL captures the display request and redirects it to the display composer. The display composer, running as a daemon, accepts the display requests from all the different Android instances, merges them into a single display request, and then interacts with the display controller driver. With the display composer's help, we avoid conflicts between the different Android instances, we avoid screen flicker, and we bring very little overhead to graphics.
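The display composer's merging step can be sketched as follows: each instance submits a buffer fd, each instance owns a fixed region of the screen, and the daemon folds everything into one composed request for the single display driver. The request format, regions, and function names below are invented for illustration only.

```python
# Sketch: a display composer that merges per-instance display requests
# into a single request for the one display controller. The request
# layout and screen regions are hypothetical.

def merge_requests(requests, regions):
    """requests: instance -> shared frame-buffer fd.
    regions: instance -> (x, y, w, h) destination rectangle.
    Returns one composed request (a list of layers) for the driver."""
    composed = []
    for instance, fd in sorted(requests.items()):
        x, y, w, h = regions[instance]
        composed.append({"fd": fd, "dst": (x, y, w, h), "src": instance})
    return composed

# Two instances, each driving its own half of a dual-display setup.
layers = merge_requests(
    requests={"android-1": 10, "android-2": 11},
    regions={"android-1": (0, 0, 1920, 1080),
             "android-2": (1920, 0, 1920, 1080)},
)
print(len(layers))  # 2
```

Because only this daemon ever touches the display driver, the single-master restriction is satisfied and instances cannot clobber each other's output.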
To support different host OSes, we also provide different display backends: for example Android, X11, Wayland, and DRM backends.

We have very few patches in AOSP to enable our solution, but we do have some enhancement patches. The first one is for the low memory killer. As we know, the low memory killer is a mechanism which helps to select the best candidate process to kill when the system memory pressure is very heavy. The AMS in Android is responsible for updating the adj values into the kernel, the kernel selects the best candidate, and user space kills it. But this mechanism is not aware of multiple Android instances, so we enhanced it: we assign different weights according to the priorities of the different instances, so that even a background process in a high-priority Android instance will be killed later. In this way we can provide a higher quality of service to the high-priority customers. This is just a small change.

Okay, I have just five minutes left. We have introduced the architecture and the implementation of our solution; now let me give you the use case. As mentioned, we have implemented a PoC of cloud gaming with our solution. This is the general architecture of cloud gaming. On the left is the mobile device, which displays the video stream, captures the user input, and sends it to the server. In the server we have a GPU pool and a CPU pool: the CPU pool is responsible for running the logic of the application, and the GPU pool is responsible for rendering and encoding. On the right side is the game server, which runs the game-server logic.

This is the detailed architecture of our solution, with the CPU pool on one side and the GPU pool on the other. As we can see, the user action or touch is captured, sent to the server, and injected into our Android instance, which drives the scene change in the game.
Then the GPU commands from the games are captured by our graphics rendering module and sent to the GPU pool through the TCP/IP stack. In the GPU pool, the rendering receiver unpacks the TCP/IP data, sends the commands to the GPU, and asks the GPU to render and encode. Finally, the video stream is sent back to the CPU server, which does the audio/video sync and sends it back to the device. So this is a real use case of cloud gaming which we implemented on the Android container solution.

Okay, we have only one minute left, so let me give you the summary. We proposed and provided a solution to consolidate many Android workloads on a single computing platform using containers, which is suitable for a lot of emerging use cases such as cloud gaming, automotive, and IoT. We designed simple I/O sharing patterns to reduce the performance overhead, and a non-intrusive design to reduce the maintenance effort, so we can upgrade our solution from one release to the next quickly. Next, we plan to improve the isolation and security of our solution, because currently we only use containers to isolate the different Android instances. We also plan to make a PoC that mixes different Linux-based OSes, for example running ROS and Android on a single kernel. Okay, that's all. Any questions?