Okay. Hello everyone, and good afternoon. Thank you for coming to my session; I'm glad to see you face to face at the conference. During COVID we continued to develop AGL, but there was no chance to present our activity. Today I want to introduce the activity using Linux containers in the AGL community. This is today's topic.

First of all, I will introduce myself and the background of AGL and the instrument cluster activity. Second, the main topic of today's session: I will explain Linux containers. Third, we ran a feasibility study using containers with AGL, so I will introduce and explain what happened. Fourth, I will explain the issues we found and their results. Finally, I conclude.

My name is Harunobu Kurokawa. I work at Renesas Electronics Corporation, and I live in Tokyo. I have worked at Renesas for over 10 years as an embedded software engineer, first on the mobile platform for Japanese domestic feature phones. From 2013, I started working with Linux and the open source community for automotive development, and from 2016 I started AGL development. Now I am involved as a BSP developer and provider for AGL, and I am a reviewer and a member of the system architecture team. Most recently, I joined the Instrument Cluster Expert Group in 2019.

This is the background of AGL. This is the AGL development policy: we collaborate and aim for rapid development of automotive software. The policy is code first, upstream first, and a unified code base. Unified code base means we bring together all of the software from open source; it means reusing it to create the product rather than writing everything new ourselves. The final target is the product, so we aim for product readiness in development.

This block diagram shows the reference software component layers. There are several layers.
The upper side is the applications, using the reference AGL component services; below that are the service and framework layers, and the bottom layer is the OS with the kernel and drivers. This stack is basically all open source. The AGL target is to integrate it and create a product for the automotive area.

However, the current AGL is mainly focused on IVI; the previous block diagram is basically built from infotainment components. So our next target, after IVI, is to create a new profile: the cluster profile. We will develop the cluster profile platform, of course based on Linux, and we continue to discuss how to create the cluster software architecture. The policy has two points. We focus not only on the high-end model but also on the low-cost model, and we fork and reuse from the original UCB (Unified Code Base). So we create a variable platform from the UCB and combine pieces to create new products. For example, a low-cost model uses only the low-cost cluster; a high-end model combines the high-end IVI and the high-end cluster on its SoC. So the several profiles can provide many kinds of platform.

The Instrument Cluster Expert Group (IC-EG) was launched in 2019. But how do we create the instrument cluster profile? There are many issues and many puzzles in the automotive system, because the required features differ between IVI and cluster. For example, IVI requires rapid innovation and various functions. Rapid innovation means most applications need new features installed quickly; from the development viewpoint, the development phases are short, and if we find a bug, a rapid bug fix is required. And of course IVI is required to provide entertainment features, so many features and many pre-installed applications are required, and after release some applications are installed from the store. The cluster, by the way, is very, very different from IVI.
On the cluster side, very strict quality management is required. For example, full path coverage testing is needed for high quality, and formal verification is required. Even if we find a bug, the bug fix needs to be handled carefully, not rapidly. Also, the cluster requires only selected features; for example, fast boot-up, and the combinations to verify are limited.

In order to resolve these many issues, the IC-EG approach is to use an isolation method, and we selected Linux container technology to separate these items. IVI and cluster are different; however, if we run them on one SoC, we need isolation. That is the background of this concept.

Next I introduce Linux containers. Linux containers are famously provided by LXC. LXC is system-level (OS-level) virtualization software. LXC provides containers; a container includes the shared libraries and the software inside it, and multiple applications run inside the container. So it is most similar to OS-level isolation. LXC is a userspace interface supporting Linux container maintenance. Multiple containers run on a single kernel; that is the main point in this session. For example, a container can be copied from another operating system and still run on the same kernel. A container does not have its own kernel; multiple containers share one single kernel.

LXC builds on several kernel features. Using chroot, it can isolate separate root filesystems. cgroups is a main topic: with cgroups, LXC can control resources such as CPU, memory, and so on. And namespaces: resources in one container are hidden from another container, so the systems are isolated and separated from each other.

In addition, LXC is easy to use in an embedded environment. The user only needs the Linux BSP; if a user wants to use it, there is no need for an additional virtualization driver in the kernel.
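As a rough illustration of how the chroot, cgroup, and namespace features above come together, an LXC container configuration typically looks like the following; the paths, names, and limits here are hypothetical, not taken from the AGL demo:

```
# Hypothetical /var/lib/lxc/cluster/config -- values are illustrative only
lxc.rootfs.path = dir:/var/lib/lxc/cluster/rootfs   # isolated root filesystem (chroot-style)
lxc.uts.name = cluster                              # own hostname via the UTS namespace
lxc.cgroup2.cpu.max = 200000 100000                 # cgroup v2: cap at 2 CPUs (quota/period)
lxc.cgroup2.memory.max = 512M                       # cgroup v2: memory limit
```

With a config like this, `lxc-start -n cluster` launches the container confined to its own root filesystem and resource limits, while still sharing the host's single kernel.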
So the user only needs to enable the necessary kernel configuration; most BSPs based on recent Linux versions already provide what LXC needs. From the userspace viewpoint, a Yocto recipe is already available in the meta-virtualization layer, so with very few lines added to the build configuration on the host side, you can easily use LXC in your system. The license is LGPL v2.1.

Using LXC, we can separate the IVI and the cluster, and we can select a root filesystem for each. The IVI side reuses the standard software built from the existing IVI, and it can be changed after product release to upgrade functions in the market. The cluster side is built as high-quality, qualified software; basically it keeps one system for the product, with no change after product release except verified bug fixes. So Linux containers can provide these isolation mechanisms in automotive use cases.

This is just a rough example of a multiple-container use case in automotive. As I explained, IVI and cluster are the two main topics, but additionally we can create further separate containers. For example, on the left side I described a web browser. A browser is a very, very heavy, high-end component in the automotive use case. To protect the cluster container's resources, the web browser container uses only limited resources, CPU or memory; LXC can protect the cluster container's resources by blocking it. And because we can separate the containers, we can upgrade each container one by one. So in the automotive use case, LXC is quite acceptable.

From this point, we performed a feasibility study using the AGL use case. Several years ago, we started to investigate using the existing AGL profiles. AGL profiles were already created for IVI and cluster as references, and we selected a setup based on the CES 2020 demonstration.
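The "very few lines" on the build side are typically additions like the following to a Yocto-based build; the layer path here is illustrative, and exact variable usage can differ between BSPs:

```
# conf/bblayers.conf -- add the meta-virtualization layer (path is illustrative)
BBLAYERS += "/path/to/meta-virtualization"

# conf/local.conf -- enable virtualization support and install LXC
DISTRO_FEATURES:append = " virtualization"
IMAGE_INSTALL:append = " lxc"
```

The meta-virtualization layer carries the LXC recipe and the kernel configuration fragments it needs, which is why no hand-written virtualization driver work is required in the BSP.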
Three years ago, we demonstrated the two existing AGL profiles. However, each profile used its own SoC: the IVI side used an R-Car H3 and the cluster side used a Raspberry Pi 4, so separate hardware was needed. In our investigation, we tried to unify them into one system using containers, running on a single SoC, so we tried a single R-Car H3 board.

So what were the good and bad points? As a result, we could create the setup easily. The good point is that, as I said, we just install LXC into the host system. Of course, several related settings are needed, but then the LXC containers run and launch each system, IVI side and cluster side, on one single board. And of course, we can mount each root filesystem and use the LXC configuration.

From the developer's viewpoint, on the application side, we did not need to modify the applications; the resources and settings are separated on each side. Both containers reuse the existing AGL profiles. For example, AGL applications use specific, special APIs; however, these are already inside the LXC container, so we can reuse them easily in the new system. An application requires an app ID, the application framework APIs, or layer management, and the user needs to select and control each layer inside the container. There are several differences between the containers in each system, but they do not conflict on the single SoC. So we can run both the IC and IVI systems integrated on one SoC.

The next finding is that reboot is very, very fast. For example, we can compare a system reboot and a container reboot. On the left side is a system reboot; the right side is a container reboot, rebooting only the IVI. The system reboot takes over 30 seconds; however, the container reboot takes only about five seconds. And the container side has no optimization at all; it is the original AGL. So let's start the movie. Go. Three, two, one.
The right side has already rebooted. The system keeps running; the kernel side is kept, so only the container side rebooted. On the left side, everything rebooted: the kernel restarts, then the system restarts, and then the container side restarts. There are many steps, so it is heavy. That is the big difference between the container case and the normal case.

However, there are not only good points; several issues occurred. The main issue is the DRM issue. From the device viewpoint, peripherals and some resources need to be shared, and DRM device access was the big issue. In the container use case, each container needs its own display: IVI has one display, and the cluster also has one display, so each system requires a compositor and display access. However, a DRM device is controlled by only one process. Inside each container is a Wayland compositor, and the other compositor is running in another process. From the kernel viewpoint, two different processes try to control and access the kernel DRM device; however, there can be only one DRM master, one compositor. So the secondary compositor cannot access the device and cannot show its display.

For the demo use case, we decided on a hack: using a nested compositor with the Wayland backend. It means there were three compositors: one in each container and one on the host side. What happened? The host needed a Wayland compositor, and in addition several libraries were required, so the host filesystem became huge. And from the cluster viewpoint, the cluster needs to keep rendering without being affected by the other systems; however, with the nested compositor, the cluster is strongly affected by the others, the host rendering or the IVI-side rendering. Also, in the boot-up sequence, the cluster compositor has to wait for the host-side compositor, which adds waiting time. So it is not a good approach.

How to resolve these issues? We decided to create a DRM lease manager.
So in IC-EG, we selected and developed the DRM lease manager. The DRM lease manager supports multiple containers; it is based on the DRM lease feature of the kernel. The basic design: the master is created on the host side, and the resources are leased out to each container side. Then each container's compositor can control the resources of its own display, and both containers can access their displays independently.

This is the data flow: the manager opens the device as DRM master, and on a request from the container side, the manager leases out the resources. Then each container's Wayland compositor can access the DRM backend almost directly.

Now, the current status of the DRM lease manager: it is already supported in AGL. And not only that, multiple DRM clients can access it at the same time. An additional feature, dynamic transition, means one DRM client can hand over the same display to another process; that is already supported. The configuration was hard-coded in the first release; however, in the latest activity, we can write a configuration file to easily select which display is required and map it to which container. The container-to-display mapping is done by configuration files. This feature is described in the AGL documentation, and the latest Needlefish release already supports it.

The second issue occurred in the AGL application framework. AGL provides a supported API, called the service library: AGL services provide AGL features to applications. However, this feature is tied closely to the LSM. Previous AGL was strongly tied to SMACK. SMACK was used to protect against untrusted applications; from the security viewpoint, it is a very important point. However, because of how this feature was enforced, it was necessary to install and share a database. So previously, from a hack viewpoint, the host had to install the SMACK labels and database in advance.
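The display-to-container mapping described above can be pictured with a fragment like the one below. To be clear, this is only an illustration in the spirit of the talk, not the exact drm-lease-manager file syntax; the lease names and connector names are hypothetical:

```
# Illustrative lease mapping -- not the exact drm-lease-manager syntax.
# Each lease hands a connector (display output) to one container's compositor.
[lease.cluster]
connectors = ["HDMI-A-1"]   # cluster container gets this display

[lease.ivi]
connectors = ["HDMI-A-2"]   # IVI container gets the other display
```

The key point is that the manager stays DRM master on the host, while each container's compositor receives a lease covering only its own connector, so neither side can touch the other's display.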
It means we had to mount the SMACK labels and the database and share them with both containers, which means the isolation is broken. And we had to install the container information into the host in advance, which makes container upgrades after product release very difficult. For the demonstration case it was acceptable, but the issue needed to be resolved.

However, current AGL has switched the application framework's LSM usage. From the product viewpoint, AGL selected SELinux, so this year SMACK has already been replaced with SELinux. So we can use this new feature in the LXC container activity, trying the latest AGL 14, Needlefish. It already supports LXC, so we tried to integrate it and exchange each container of the system.

First of all, I tried to integrate the LXC container with the AGL SELinux feature enabled only on the host side, and to create a guest from each system. Only the needed feature had to be added; no other feature had to be created or modified to run on LXC. It was just two lines: one line to add the feature, and, since SELinux needs its information shared, one mount entry, which is one more line. As a result, I could run it on the latest AGL profiles.

Currently, AGL has four separate profiles for IVI. The most simple IVI is provided by IC-EG: just a simple application inside, a very, very light demonstration container. Qt IVI is very famous in AGL; it is the legacy IVI for AGL. Recently we created two different new HMIs. One is the HTML5 profile, running on a Chromium- and HTML-based framework. The last one is the new feature, Flutter; Flutter is the latest activity in AGL.
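The "two lines" described above are not quoted in the talk, so the following guest-side sketch is my reading of them; the feature name and mount path are illustrative assumptions, not the exact AGL lines:

```
# Hypothetical additions -- exact names are not given in the talk.
# (1) Build side: add the SELinux feature to the guest image,
#     e.g. a hypothetical feature flag in the build configuration:
#         AGL_FEATURES:append = " agl-selinux"
# (2) Container side: share the host's SELinux information with the
#     guest via a read-only bind mount in the LXC container config:
lxc.mount.entry = /sys/fs/selinux sys/fs/selinux none bind,ro,create=dir 0 0
```

Because SELinux policy is enforced by the single shared kernel, only this small amount of sharing is needed, unlike the SMACK setup where whole label databases had to be mounted into every container.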
So we are contributing the Flutter profile to AGL, with a new HMI framework, and this fourth profile is almost done inside AGL. We succeeded in the combination of IVI and cluster, and we can exchange the IVI container and switch between profiles. In the AGL booth, we demonstrate exchanging the profile: the demonstration switches from Qt to the Flutter-based simple navigation profile. So please go to the AGL booth, where we can explain this activity. A similar, almost identical demonstration is also exhibited, where our collaborators explain the HTML5 profile.

Now, the status of the cluster profile. Historically, step by step, we developed the cluster LXC feature and released it to AGL. The initial release was in Lamprey; that version supported only LXC and some software, so we could only display on a single display. The second was Marlin, which merged the DRM lease manager and related features, so we can drive two displays and separate the containers. The third release supported new hardware: AGL reference hardware is provided by Panasonic and others, and support for it was merged this year. Finally, work in progress: we will contribute the latest activity and demo code into AGL in the future. Please wait a little, and please contribute to AGL. If you want to create and test these features, please contact me.

Next, access the AGL documentation. The AGL documentation describes these features. As I mentioned, most of the features are based on IVI, so you can check them easily in the AGL documentation and build the IVI demos: Qt, HTML5, Flutter, and so on. From the cluster IC-EG viewpoint, our members wrote an example in the AGL documentation covering not only the build but also the setup method: for example, what kind of equipment is needed for the demo displays, and how to connect the equipment and the board.
So this documentation already describes how you can use and create these LXC demonstrations; please visit this URL.

Finally, the conclusion. We selected LXC for the automotive system as a system container because we want to isolate both the IVI and cluster root filesystems. LXC can separate the IVI, cluster, and other profiles and control their resource usage. We ran the feasibility study, and we understand that it is easy to upgrade and exchange containers; it is the right way to upgrade a root filesystem instead of the whole system. And it is easy to integrate an additional IVI profile for the demonstration.

For future work, we will add other peripheral devices for embedded systems: sound, video, and so on; these are specific issues in the embedded and automotive use cases. We will develop other feature containers, like the browser, because they are high-load containers. And finally, we wish to develop more systematic isolation mechanisms, using open source RTOSes alongside AGL or collaborating with other Linux projects.

And this is just the last slide, one more message from us. We created a new account: the Automotive Grade Linux JP account opened last month. We will help with anyone's questions about AGL in Japanese, not English. It is a localized activity, but we want to help the many newcomers coming to the AGL activities. For example: what is AGL? How to join development or events? How to run AGL, and which way is better? What are the most interesting features in the latest AGL? Many such questions come up, but there was no place to ask them. So we created a new account on Twitter; please follow the account right now, and let's join and develop AGL together. That's all. Thank you. Any questions or advice?

[Audience] There are three questions. In terms of development of a new IVI container, any idea on how...

Sorry. For the Linux system, we use LXC.
LXC is a standard feature, not an AGL-specific component, so there is no need to modify LXC itself; we can just use it inside AGL without modification. Sorry, time is up, so I will answer these questions later. Thank you.