Okay, let's start my presentation. Hello everyone, my name is Kinari Deguchi. Thank you for coming to my presentation. I'm a member of Panasonic Automotive Systems Company. I'd like to talk about achieving a software-defined multi-display system with Unified HMI. This is my first presentation at this meeting, so I'd like to introduce myself briefly. I have been working at Panasonic Automotive Systems Company for three years, focusing on graphics-related development, especially display virtualization across multiple displays. I like watching baseball games, and this year I went to watch a game at Koshien Stadium. I live in Kyoto, so I sometimes go out to Kyoto city for exercise when I have free time. Okay, before I start my presentation, I will mention previous presentations about Unified HMI, which is a software-defined multi-display technology. Our company, Panasonic, has introduced the overview of Unified HMI in sessions such as yesterday's SDV EG session by Jerry Sun. Panasonic has contributed to Automotive Grade Linux, AGL. AGL is a collaborative open source project that brings together automakers, suppliers, and technology companies with Linux at its core. In September this year, we integrated some of the features of Unified HMI into the AGL Unified Code Base. Today I will talk about the main technologies of Unified HMI in detail and how to use the Unified HMI features that have been integrated into AGL UCB. By listening to this presentation, you can easily try using some of the features of Unified HMI. Okay, let's start. The agenda is as follows. First, I will talk about why the automotive industry needs Unified HMI, a software-defined multi-display technology. Second, I will give a technical overview of Unified HMI. Next, I will explain that we have integrated one of the Unified HMI components into AGL UCB. Next, I will talk about how to use RVGPU, which is integrated into AGL UCB, by illustrating a sample use case.
Finally, I will talk about the future vision of Unified HMI and conclude my presentation. Okay, in the first section, I will explain why the automotive industry needs Unified HMI. Before my explanation, please watch the Unified HMI concept demo video. Unified HMI is an open-source display virtualization technology developed by Panasonic. In this demo, we show how Unified HMI enables designing and developing an entire cockpit UI and UX efficiently across multiple displays, without dependency on hardware. Here is a cockpit UI and UX development environment that virtualizes the physical automotive environment. For example, this environment virtualizes a cockpit consisting of AGL and two Yocto-based Linux systems to evaluate the cockpit UI and UX across multiple displays. This is the layout design tool associated with Unified HMI. It enables developing the cockpit UI and UX across multiple displays in the virtual environment. Take this sample graphics application of the Linux penguin as an example. If you want to change its layout, you can flexibly move and rescale the application across multiple displays. You can even design cross-display animations. Once the layout design is complete, you can easily verify the entire cockpit UI and UX in the virtual cabin environment. Now you can see the designed animation across multiple displays and operating systems. Last, after verification in the cloud virtual environment, the developed cockpit UI and UX can be seamlessly deployed to the physical ECUs with the same binaries. In this way, Unified HMI enables efficient development of the entire cockpit UI and UX by anyone, from anywhere, without dependency on physical automotive hardware. Therefore, it greatly contributes to reducing the lead time and costs of entire cockpit UI and UX development. Okay, thank you for watching the Unified HMI demo video. Next, I will introduce trends in the automotive industry. In recent years, the number of in-vehicle displays has been increasing, especially in high-end vehicles.
For example, the appearance of head-up displays, camera monitoring systems, and the digitization of information displays such as meters. This has led to a focus on flexible application display technologies across multiple displays, and these technologies are expected to provide new UI and UX possibilities. However, if we want to achieve this flexibility using existing graphics frameworks, ad hoc interoperability development is required for the displays of each hardware platform, which is very costly and time-consuming. Therefore, in the automotive industry, there is a need for a software-defined display framework that separates software from hardware. Okay, we developed a software-defined display virtualization platform based on virtio-gpu, called Unified HMI. Unified HMI allows for flexible development of the entire cockpit and cabin UI and UX across multiple displays, independent of the hardware and OS configuration. The entire cockpit UI and UX is developed in a virtual environment and can be seamlessly deployed to physical ECUs. Okay, let me introduce some values that Unified HMI can provide for both automotive developers and automotive users. First, automotive developers will be able to perform agile, software-defined cockpit UI and UX development. Specifically, Unified HMI enables efficient, integrated cockpit UI and UX development and evaluation across multiple displays in a virtual environment. It is independent of the hardware and OS configuration, and it is scalable to deploy seamlessly to various vehicle grades and models. Second, automotive users can experience fast and personalized evolution of the cockpit UI and UX. Specifically, users can receive an upgraded customer experience from frequent over-the-air updates with UI and UX improvements. In addition, the flexible cockpit UI and UX can be customized according to user preference, regardless of grade or model.
Okay, so Unified HMI provides value mostly for automotive developers, but efficient development using Unified HMI leads to a UI and UX experience that meets user preferences, so it also provides value for automotive users. Okay, in the next section, I will provide a technical overview of Unified HMI. Unified HMI consists of two main components. The first component is the remote virtio-gpu device, which we call RVGPU. It is shown in the green box in the figure. We recently integrated RVGPU into AGL UCB as a part of the Unified HMI features. RVGPU can render applications remotely on different SoCs or virtual machines over a network. The second component is the distributed display framework. It is shown in the yellow box in the figure. It allows flexible layout control of applications across multiple displays. In summary, the distributed display framework determines the layout of the applications across multiple displays, and RVGPU renders applications remotely on different SoCs or virtual machines as needed. Thanks to this system, Unified HMI allows for flexible development of the entire cockpit UI and UX across multiple displays, independent of the hardware and OS configuration. The following describes these two components in detail. First is RVGPU. RVGPU is a network extension of virtio-gpu, which is commonly used for GPU virtualization in virtual machines. RVGPU can be further divided into two components: rvgpu-proxy, shown on the left side of the figure, and rvgpu-renderer, shown on the right side. rvgpu-proxy transfers the GPU commands generated by OpenGL ES to other SoCs or virtual machines, and rvgpu-renderer receives the GPU commands transferred by rvgpu-proxy and renders the application graphics using those commands. RVGPU also creates virtual input devices, such as mouse, touch, and keyboard, using uinput devices. If you touch the remote display, input events can be sent to the application via the uinput device. Okay, next is the distributed display framework.
As shown in the figure below, the distributed display framework maps multiple physical cockpit displays into a single large virtual screen. By placing applications on the virtual screen, you can control the layout, such as the placement, size, and display order of multiple applications. If applications are placed across multiple displays on the virtual screen, RVGPU renders those applications remotely on the corresponding SoCs or virtual machines. In the next section, I will describe the past and future contributions of Unified HMI to AGL UCB. In September, we integrated RVGPU, shown in the green box. It is already available in the latest version of AGL UCB, Pike. The specific usage of the integrated RVGPU on AGL will be introduced in the next section. As for the future plan, the distributed display framework, shown in the yellow box, will be contributed by the first half of next year. The applications shown in blue are currently only available as Qt applications, but AGL Flutter applications will be supported by the second half of next year. And we will continue to make contributions to bring more advanced features of Unified HMI to AGL UCB. Okay, in the next section, I will explain how to use the integrated RVGPU features on AGL. Running RVGPU on AGL can be done in seven steps in total, from preparing your environment to rendering applications remotely. In this section, I will introduce the details of those seven steps, focusing on the differences from the AGL official documentation. Okay, step one is about how to prepare the environment needed to run RVGPU on AGL. In addition, I will talk about the flow of RVGPU commands. Currently, RVGPU supports three platforms: x86, Raspberry Pi 4, and AGL reference hardware. To use RVGPU, you have to prepare at least two of the above three platforms and use one as the sender and the other as the receiver. Applications run on the sender, and the application graphics are rendered on the receiver.
The sender and receiver can be chosen as you like. Even if you don't have any ECU devices and only have an Ubuntu PC, you can easily use RVGPU by using the Ubuntu PC and x86 emulation. All platforms used must be connected to the same network and be accessible by IP address and connection port. The following figure shows the flow of RVGPU commands. First, run the rvgpu-proxy command on the sender side and the rvgpu-renderer command on the receiver side. Then a virtual DRM device backed by a virtual GPU, called cardX here, is created, and RVGPU is connected over the network. On the sender side, if Weston and Wayland applications use this device, cardX, those applications will be rendered on the receiver display. After the transfer, the applications displayed on the receiver can be operated by, for example, touch and keyboard, using the uinput devices. Okay, once the environment is ready, in step two you need to download the software necessary to run RVGPU on AGL. First, you need to download the AGL software, referring to the Downloading AGL Software section in the AGL official documentation shown at this URL. After that, you have to get the RVGPU recipes, which were recently integrated. RVGPU is in the meta-agl-devel directory, and it is available in the master branch or from version 16.0.2, the latest version of AGL UCB, Pike. The software will then be downloaded in the following configuration. In the figure, meta-rvgpu contains the recipes for RVGPU, and agl-rvgpu contains the feature to use RVGPU on AGL. Now the software is downloaded. Steps three to six are about building and booting the AGL demo image. In step three, initialize the environment variables and path settings in the build environment. If you add the agl-rvgpu feature when initializing, RVGPU is installed into your build. In step four, customize your build, referring to the AGL documentation. Here, no specific operations are required. In step five, build your AGL image.
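The download and build steps above can be sketched roughly as follows. This is only an illustrative outline under the assumption of a qemux86-64 target; the exact repo URL, branch, machine name, and feature flags should be taken from the AGL official documentation and the meta-rvgpu README.

```shell
# Sketch of steps 2-5 described above (illustrative; check the AGL
# documentation for the exact commands for your platform and release).

# Step 2: download the AGL sources, which include the RVGPU recipes
mkdir agl && cd agl
repo init -u https://gerrit.automotivelinux.org/gerrit/AGL/AGL-repo
repo sync

# Step 3: initialize the build environment; adding the agl-rvgpu
# feature installs RVGPU into the build (machine name is an example)
source meta-agl/scripts/aglsetup.sh -m qemux86-64 -f agl-demo agl-rvgpu

# Step 5: build the AGL demo image
bitbake agl-demo-platform
```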
Currently, RVGPU supports agl-demo-platform, agl-image-weston, and so on as AGL demo images. Here is an example command for building agl-demo-platform. In step six, once the build of the AGL image is complete, deploy the images on each platform you prepared, referring to the AGL documentation. Here is the URL for deploying an AGL image on x86. Okay, finally, in step seven, I will introduce how to use the RVGPU commands. In the use case here, Unified HMI renders a sample application running on the sender to the remote display of the receiver. Here is an example of an experiment with Ubuntu as the sender and AGL on each platform as the receiver. Please check our GitHub documentation on how to use RVGPU on Ubuntu. Only the main commands are shown here; several additional setup commands are required to run RVGPU, so please check our README documentation in the meta-rvgpu directory for details on how to use RVGPU on AGL. The first and second commands run rvgpu-renderer on the receiver side and rvgpu-proxy on the sender side. For rvgpu-proxy, you specify the window size of the application and the receiver's IP address and connection port. For rvgpu-renderer, you specify the application window size and the connection port. When these commands are complete, the DRM device cardX is created and RVGPU is connected over the network. Okay, the third command runs Weston, specifying the DRM device cardX that was created by RVGPU, and the Weston screen is rendered on the receiver display. The fourth command runs a Wayland application; while Weston is being transferred, the Wayland application can also be rendered on the receiver side. Here you can see that the glmark2 application, which is not installed in AGL by default, is rendered on the receiver display. While the application graphics are rendered remotely, the application itself is running on the sender side. This concludes the introduction of how to use RVGPU on AGL. But what is described here is just an example.
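The four commands of step seven can be sketched as below. The window size, port number, IP address, DRM device name, seat name, and the glmark2 binary name are all illustrative assumptions; the exact option names and values are documented in the meta-rvgpu README and the Unified HMI GitHub documentation.

```shell
# Sketch of step 7 described above (illustrative values; see the
# meta-rvgpu README for the exact options on your platform).

# 1) On the receiver: start rvgpu-renderer with the window size and
#    the connection port to listen on
rvgpu-renderer -b 1024x768@0,0 -p 55667 &

# 2) On the sender: start rvgpu-proxy with the window size and the
#    receiver's IP address and port; this creates the virtual DRM
#    device cardX (e.g. /dev/dri/card1)
rvgpu-proxy -s 1024x768@0,0 -n 192.168.10.2:55667 &

# 3) On the sender: run Weston on the virtual DRM device; the Weston
#    screen appears on the receiver's display
weston --backend=drm-backend.so --drm-device=card1 --seat=seat_virtual &

# 4) On the sender: run a Wayland application inside the transferred
#    Weston session; its graphics are rendered on the receiver
glmark2-es2-wayland
```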
For example, AGL can be configured as the sender to transfer an AGL application to other Linux hardware. Therefore, we think that the value of AGL can be enhanced as Unified HMI expands. If you are interested in Unified HMI, please try running and playing with it by referring to our GitHub documentation. Okay, in the final section, I will provide the future vision of Unified HMI. We are currently targeting the following three activities to achieve a more flexible virtual display framework with Unified HMI. The first activity is expansion to support Flutter applications, as mentioned in our activities on AGL. The second is to enable application graphics to be rendered between Linux and other OSes. Currently, remote transfer of applications is possible only between Linux systems, but in the future we'd like to support transferring applications from Linux to Android and from Android to Linux. The third is to extend Unified HMI to more media, for example audio and video. Currently only graphics are supported, but in the future we'd like to expand the media that can be controlled by Unified HMI. Let's work together to create an ecosystem that enables new UI and UX value in the multi-display environment with Unified HMI. This QR code is the GitHub page for Unified HMI. If you are interested in Unified HMI, please access it and give it a try. Okay, finally, let me give you a preview of the future development of Unified HMI. We are going to show a demonstration of cloud-native Unified HMI at the AGL booth at CES 2024. The cloud-native Unified HMI technology enables designing, developing, and validating cockpit UI and UX without depending on the underlying hardware architecture. It empowers developers to rapidly and efficiently create cockpit UI and UX, which leads to software-defined digital cockpit solutions. We are planning to exhibit a demonstration at CES 2024 to show such values, and we'd be delighted to meet you there as well. Okay, I'd like to conclude my presentation here.
Thank you for listening. Are there any questions or comments? Is there any limitation or minimum requirement for the remote processor or the local processor which is rendering the 3D graphics, for having glitchless communication with no delay and no lag? Okay, thank you for your question. There is a minimum limitation. For example, when using ECUs with virtio, graphics drawing performance, such as frame rate, can decrease depending on the applications used and the network environment, such as network speed. But in our experience, the performance, for example the frame rate, is maintained at 60 FPS. So it is true that there is a limitation on performance, but I think it is still a useful technology. Let me give you some follow-up explanation. Regarding your question, basically two points need to be considered. The first one is the connection between the two ECUs: you probably need to enable high-speed Ethernet so that the communication itself is smooth. The second one is that, if you check the architecture, it actually transfers the GPU commands, or say the OpenGL commands. That means when your original rendering content has a lot of textures, there will be some performance overhead. But for most cases, it should be guaranteed at 60 FPS. Hi, thank you for your presentation. Like the previous gentleman, I just want to clarify the minimum required specs of the CPU, because the whole rendering of Unified HMI is done on the server-side CPU and the rest just receives the image. Is that correct? Unified HMI depends on GPU performance, so it is GPU performance rather than CPU performance that matters. So only the server-side SoC is related to rendering the whole frame buffers of the image, is that correct? It is the remote ECU.
So it will be the GPU on the remote side where we render the contents, but the application itself is running on the local ECU. That means you still need computing resources on the sender side. Yes, and on the receiver side, we can still utilize the DRM device as a rendering device. Thank you. Are there any more questions? Okay, there are no more questions. I conclude my presentation. Thank you.