Thank you for coming to my session and staying on from the previous one. I would like to explain the status of the AGL Instrument Cluster Expert Group (IC EG) update. The presentation is given by myself, but this activity is a collaboration within the IC EG. My name is Harunobu Kurokawa. I have been working at Renesas Electronics Corporation for over 10 years: from 2007 I worked as an embedded software engineer, and since 2013 I have been using Linux and open source for automotive development. As for my open source community experience: in 2016 I joined AGL development, mainly supporting BSP development as a reviewer and SAT member. In 2019 I joined the Instrument Cluster Expert Group, and since 2020 I have served as an AGL Steering Committee member. Here are today's topics. First, I will introduce the background of AGL and the Instrument Cluster Expert Group, and the current AGL IC EG code implementation. Next, the status of upstreaming and related discussions. Finally, future work and Q&A. What is AGL? This year I received many questions from outside the AGL membership, especially from visitors and other community members, so I think several of these terms are not yet well known. AGL is the leading open source automotive software project, hosted by the Linux Foundation. It is a complete Linux-based open source operating system with middleware and an application framework. It creates software and operates as a non-profit organization. Its initial target was IVI; right now we also focus on SDV and IC, so instrument cluster is also covered. That is the basic information about the AGL world. Here is the overview of AGL. This message is from Dan Cauchy. The development policy is code first, upstream first, and a unified code base. It is created as a single software platform, so we should collaborate and aim for rapid innovation in the automotive industry.
And we collaborate to build an ecosystem and decrease time to market. The goal of AGL is to build a single software platform for the entire industry, for automotive and other industries. The focused target is to develop 70% or 80% of the starting point for a production project. Another special point is to minimize code fragmentation by combining the best of open source. Finally, we develop an ecosystem of developers and suppliers using the single platform. These are the AGL goals. Here is a short history of AGL. AGL was founded in 2013, aiming to create IVI systems with Linux. Our first target was to create a specification; specification version 1.0 was launched in 2015, with the main goal of moving from RTOS to Linux. Around 2017, when I joined AGL, we focused on creating the unified code base for IVI, and several expert groups were launched: navigation, UI and graphics, audio, application framework, security framework, speech and voice recognition, and so on. In that period we focused on code first, to reduce fragmentation. Recently the vehicle software system has grown bigger, so we now focus on creating whole vehicle systems with OSS. Today the IVI EG and Cluster EG are central, and other EGs are active around them. AGL is expanding beyond IVI. So I will explain the Instrument Cluster Expert Group background. Our motivation is to create a new profile following IVI: we will develop a cluster platform based on Linux. Over recent years we have continued to discuss the cluster software architecture. We target both high-end and low-end models; low-spec support in particular is needed in this expert group. Of course, we create the profiles from the UCB by combining the EG profiles. So what are the product development issues? Quality and robustness are required by automotive requirements, and functional safety is an especially important point.
The instrument cluster has the telltale function, which shows critical failure information to the driver. On the other hand, quality management is also needed, and the quality requirements differ between the instrument cluster and the existing IVI system. Our approach to functional safety: there are several isolation methods for it; mainly, isolation using hardware or a hypervisor is needed for the safety function layer. The isolation approach depends on each company and ecosystem. The IC EG's main target is therefore the various functions of the larger system: the rapid innovation requirement, advanced quality management, cybersecurity, and so on. Right now there are many gaps in automotive systems between IVI and cluster. IVI needs rapid innovation and a wide variety of functions; the instrument cluster needs advanced, high-level quality management and a selected set of functions. These are different requirements. As a result of our discussions, we decided on QM (quality management) isolation using containers. It is one more isolation method, alongside hardware or hypervisor isolation. Each piece of software is developed at its own quality management level, separated into IVI, cluster, and others. Since the quality levels are not the same, we define each software type. If all the software were integrated and mixed into one big image, these requirements would cross over inside that one big image. Our approach is to isolate each software stack using Linux container technology. That is the background of our activities. Next, the current IC EG code implementation. As a review of my presentation last year: I introduced a solution for container and guest integration, integrating the LXC feature in the host, supporting device sharing between guests, and how to mount devices shared between guest and host. At that time, the IVI and cluster root file systems were launched on the existing AGL host,
and the AGL-specific key issues were resolved. That was one year ago, when I explained these features and results. At the beginning of this year, our collaborative output was exhibited at CES 2023, where three types of collaboration were demonstrated. The first was the IVI demonstration with the new Flutter HMI. The second was our output: container isolation, a demo of IVI and cluster integration produced by the IC EG work. The third was similar, an integration on a single SoC using virtio. Here is the container isolation demo overview. IVI and cluster are consolidated on a single SoC, with multiple root file systems separated using containers, namely LXC. This demonstration can ensure quality assurance for the cluster side by isolating the file systems of the IVI side and the cluster side. An IVI update, by OTA or for other reasons, does not affect the cluster side. This demonstration needed to appeal to product use cases through a reference implementation. For the isolation use case, we decided that a crashed IVI guest must not affect the cluster side; for the upgrade use case, we realize IVI system exchange between several IVI containers. In the reference implementation for container isolation, we implemented a way for the user to switch the IVI guest image by manual operation, simulating a system crash or an image upgrade. This video shows the implementation switching the IVI home screen. Right now the Flutter IVI home screen is showing; after that, it changes. The user pushes the button, the system restarts the container and switches to the new IVI, and then the other one, a simple IVI system, is shown. This demo visualizes the benefit of Linux container features and is a good example of a use case with code available as open source; it can help any user understand container features. Now I will explain the details of the switching mechanism architecture for the 2022 demo. Here is the key part of the switching mechanism.
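In plain LXC terms, the guest switch walked through next amounts to stopping one container and starting another. Here is a minimal sketch, assuming illustrative container names and the standard lxc-stop/lxc-start tools; this is not the actual container manager code:

```shell
#!/bin/sh
# Minimal sketch of the IVI guest switch. LXC_STOP/LXC_START default
# to the real LXC tools but can be overridden for a dry run.
# Container names used with this function are illustrative, not the
# shipped AGL names.
LXC_STOP="${LXC_STOP:-lxc-stop}"
LXC_START="${LXC_START:-lxc-start}"

switch_ivi_guest() {
    current="$1"    # currently running IVI container
    next="$2"       # IVI container to switch to
    # Stop the running IVI guest first; the cluster container is
    # left untouched for the whole switch.
    "$LXC_STOP" -n "$current" || return 1
    # Then start the other IVI guest in the background (-d).
    "$LXC_START" -n "$next" -d
}
```

The real container manager additionally listens for the user's button event and drives this stop/start sequence when it fires.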
First, at system boot, systemd starts the DRM lease manager and the container manager. Second, the container manager starts LXC, and LXC brings up two containers by default: the IVI container and the cluster container. In parallel, LXC mounts the other container in the background; it exists but is not started. When the user pushes the button, the container manager catches the event dynamically and sends it to LXC, and LXC switches the IVI container: first it stops the currently running IVI container, and after that it starts the other IVI container. This is the mechanism of the architecture. Next, the status of upstreaming and discussions with other activities. Right now there are two types of demonstration features. The first is the cluster demo LXC host demonstration image. The second is the instrument cluster container demo. What is the difference? The first one simply uses two containers and two guest images, one container image for each, and it does not use the container manager. The guests are installed into one partition, and the LXC config file is used as the default file; of course, the user can change the configuration, such as the mount points. This demo aims to let developers run, study, and develop the AGL IC container integration. This image has been supported since AGL 13, called Marlin. The bottom one has two containers and supports the IVI guest images. This image supports the container manager for switching guests, as I mentioned. Each guest is installed into a separate partition, because from a security point of view, customers will split the partitions in product use cases, for example. A container manager configuration file also exists. This demo aims to exhibit the AGL demonstration at events. This image is supported from the latest AGL, version 16, called Pike. It is busy, but here is a basic configuration file example. This example is for the IVI demo features.
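As a rough illustration of what such a guest configuration looks like, here is an LXC-style fragment; the names, paths, and mount entries are placeholders, not the actual files shipped in the AGL demo, so please compare against the real configuration in the image:

```
# Illustrative LXC guest configuration (placeholder names and paths,
# not the shipped AGL demo files).
lxc.uts.name = agl-ivi-guest
lxc.rootfs.path = dir:/var/lib/machines/agl-ivi-guest

# Device and runtime sharing between host and guest is done with bind
# mounts; the exact entries are hardware specific and must be adjusted
# per board (e.g. R-Car Gen3 vs. Raspberry Pi).
lxc.mount.entry = /dev/dri dev/dri none bind,optional,create=dir 0 0
lxc.mount.entry = /run/drm-lease-manager run/drm-lease-manager none bind,optional,create=dir 0 0
```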
This container configuration file is prepared for the IVI and cluster demos by default. The user must use a different file for each hardware, so please modify it for your board; currently only R-Car Gen3 and Raspberry Pi are prepared. The information is spread across several files located here, and at bitbake time these configuration fragments are merged into a single configuration file. On the other hand, the container manager configuration file is specific to the AGL implementation. Here is an example for the Qt IVI demo; the AGL container manager's configuration file is shown here. Users should prepare one for each guest container; of course, configuration files exist for IVI and cluster. This configuration file must also be different for each hardware. The description is based on the LXC configuration, so the user can compare it with the original LXC configuration and apply this file. For example, this entry is specific to this container manager demo image: "split" defines the block device used to split the partition. Here is the setup procedure for the AGL container demo. The procedure is described on Confluence. The upper side is the cluster demo using the LXC version; the bottom side is the instrument cluster demo, including the container manager version. Please note the difference: the instrument cluster demo requires the existing AGL IVI demo platform. The user needs to bitbake that image first as preparation, then deploy the image into the defined directory, and after that bitbake the instrument cluster container demo image. Next, my notice: do not leave demo implementations as they are. Typically it is not considered a good approach to submit demo code into a community. However, from the AGL activity point of view, we welcome even demo code; we should refactor and share it inside AGL, so I suggest submitting your code into the AGL repositories.
First, common code becomes an asset, and common code reduces fragmentation. For example, if code is not upstreamed, each user creates their own demo, PoC, or product development, which creates fragmentation. Of course, some private code is necessary, but where possible we should share and combine it in the community. In the upstream case, your upstreamed code becomes common code, and other users can then use it for their next product, demo, or anything else. That is collaborative activity. Second, ease of maintenance through generalized platform code. Suppose a user or developer creates original code; at that moment it works well. But from the OSS point of view, versions get upgraded, and after a version upgrade you need to update your source code. Sometimes there are conflicts, changed or dropped APIs, or unexpected behavior, and you may not even be sure what happened after merging the new version. We should focus on early upstream activity; otherwise there is a big gap when merging your code later. In the upstream case, if you contribute to the community, your code can be used as-is even after the version upgrade. Third, it is easier to find mistakes and bugs with other users. If there are bugs or mistakes in private code, only you or your team can find and fix them; other users cannot find any bugs or mistakes in your code. But if you contribute to the community, your code is shared, other users can use and examine the source code, and most likely someone will find mistakes. Then you become aware of the mistakes from the reports, and sometimes community users will even fix the bugs in the community. It becomes an ecosystem activity. So, don't leave even demo code behind: please contribute, get reviews, and get feedback from the community. Now, the supported hardware and upstream status.
Currently two boards are supported. The first is the AGL reference hardware board, mounting the R-Car H3; the second is the Raspberry Pi 4. Only these two are supported because this demonstration requires display output. The initial PoC and demonstrations used the reference hardware, but the upstreamed code supports both the Raspberry Pi and the reference hardware. On the documentation side, the reference hardware is partially described; the Raspberry Pi is not yet. I have also tried to launch it on QEMU, both Arm and x86 versions; that is out of scope of the initial activity and demonstrations, but I am trying to upstream it as a work in progress. There are some limitations because two display outputs are needed. The container manager version is not yet upstreamed, and there is no concrete plan yet. Documentation is not available either, but the demonstration and setup information is described on Confluence, and anyone can access it. In addition, the container demo documentation is partially described on the AGL main documentation site and outside it as well: the AGL documentation site describes it here, and the eLinux wiki carries the same information, maintained by the community. Finally, here are the discussion topics with other expert groups. The first is the RTOS-world implementation. This activity is requested by other expert groups as well as the IC EG. Initially, the IC EG planned to create and support communication between the MCU side and the SoC side; that means a hardware isolation implementation using both chips, where communication between SoC and MCU is needed. We are continuing to discuss this RTOS activity in our calls. The second is a future audio activity using virtio-snd. This activity is led by the SDV expert group and focuses on using the virtio work without a hypervisor; the IC EG will try to implement the virtio-loopback task in container isolation. The last one: we are trying to implement the vehicle signal feature with Kuksa.
Kuksa is an output of the COVESA-related activity and a reference implementation by the Eclipse Foundation, and it is integrated inside the AGL IVI profile. Right now we are trying to integrate it into the IC container demo together with the related expert group. Next, the future work of the IC EG. First, we will develop resource isolation. Containers can support resource management for CPU and memory, so we will try isolation for CPU and memory; this implementation and demonstration will come in turn, and we need to improve the discussion, visibility, and upstreaming. Second, we will develop QEMU support; it is work in progress. The cluster demo already works in both QEMU x86 and Arm64 versions, but we need to investigate two-display output on QEMU, because right now the DRM lease works for a single output only in each container. Lastly, we should update the documentation: how the IC EG work and the instrument cluster container demo are built, and how to customize the configuration, is not yet available in the documentation, so we should do it. Here is the last part of the presentation. Thank you for this activity and the collaboration of the instrument cluster group developers. It is not only me; these developers are leading the activity: Yamaguchi-san from Aisin, Ishii-san from Panasonic Automotive Systems, Tokita-san and Nobuta-san from Fujitsu, and Harakisa-san from Suzuki. We are almost all here; sorry, Yamaguchi-san is not here. Thank you for the collaboration in this activity and your support. In addition, let me introduce other related developer talks and the AGL exhibit booth. Related developers have technical sessions: Motai-san from Cybertrust on the collaboration activity using the CIP kernel on AGL, and Tokita-san from Fujitsu introducing the Zephyr activity; we are trying to integrate Zephyr with AGL. AGL also has a demo booth in the hall, where you can experience the latest demonstrations, and we are explaining the switching demo and other demonstrations. One more thing to explain:
we also work as the Automotive Grade Linux Japan community. We have recently created a Japan X (formerly Twitter) account. We will answer questions about AGL activities and AGL demos, not only the IC EG but all of the activities: what AGL is, how to join development, how to run AGL, and so on. If you have questions or interest in the AGL activities, please access and follow us, and don't hesitate to ask us questions. Thank you for attending. Let's collaborate on automotive Linux containers with us. Thank you. Do you have any questions or comments? OK, if you have any questions, I will be standing at the AGL booth near the Renesas booth, so please come by, in English or Japanese (Japanese is better). Thank you so much.