Good. So, I believe we can start now. My name is Michele Paolino. I have been active in the discussions about the software defined vehicle in AGL since 2016, as Dan and Walt will recall. And now, more and more, it is the right time to talk about these topics. Today, I'm going to show you what we have done this year in the context of the virtio-loopback activity of the SDV expert group. Virtio-loopback is a technology that we designed and implemented last year in the context of the SDV expert group. I will start by reminding you more or less what it is and why we have done it in that way. After that, I will go through a short description of the updates done this year. I also have a small demo related to the vhost-user sound device that I'm going to show you. And then, of course, what's next and what the proposed features are for 2024, or in any case for the future.

So, let's start from what virtio-loopback is. I believe Jerry Sun was very clear yesterday when he said: we want to build an application and run it anywhere, meaning the cloud, real hardware or virtualized systems. This is actually the real objective of this technology called virtio-loopback. What we have done is to build a bridge between the driver, which you see there in kernel space, in the pinkish side of the square. So, a bridge between the virtio driver (you can imagine a virtio driver like virtio-sound, virtio-video, virtio-net, all the virtio drivers that you might be using today) and an application in user space. To do this, of course, we implemented some mechanisms in the middle based on shared memory. What is important to say is that the application that is now in user space is able to implement all the machinery needed to respond to the driver requests. The very common case is that this user space application is what they call a vhost-user device, that is, a device built to be run in user space. This is very common in NFV, in the network field, with vhost-user-net: the network drivers are usually run in user space today, at least in NFV. The reason why they do this is mainly performance; they are able, for instance, to pin cores to these drivers in user space. But, of course, there are also other reasons, related to the fact that user space is easier to maintain and develop than kernel space in general.

More details about the benefits of this technology. Number one, you can directly reuse existing drivers: we don't want to touch virtio-sound, virtio-net, virtio-can and all the other implementations that are already there. This means that for you, as an application developer, using this technology is almost transparent, so you don't need to know what is inside. The second benefit is that, by using virtio, we are hypervisor agnostic: we don't focus on a given hypervisor. This is a technology that runs mostly on the host side, because when you are using a hypervisor you are already in a virtualized system; you use loopback when you want to use a device built for a virtual machine directly inside your host. Another benefit is about shared memory. We designed everything in a way that performance should be impacted as little as possible. Of course, we are adding some intermediate steps, so we expect some performance drop.
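To make the "almost transparent for the application developer" point concrete, here is a minimal sketch, not code from the project, of an application reading random bytes through the standard hwrng character device. Whether those bytes come from real hardware or from a vhost-user-rng device sitting behind virtio-loopback is invisible to the application; the device node is the usual one exposed by the kernel and may of course differ on a given image.

```c
/*
 * Illustrative sketch only, not part of virtio-loopback: the application just
 * uses the ordinary kernel interface (/dev/hwrng); whether the entropy comes
 * from real hardware or from a vhost-user-rng device behind the loopback
 * transport makes no difference to this code.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    unsigned char buf[16];

    int fd = open("/dev/hwrng", O_RDONLY);  /* standard hwrng char device */
    if (fd < 0) {
        perror("open /dev/hwrng");
        return 1;
    }

    ssize_t n = read(fd, buf, sizeof(buf)); /* backed by virtio-rng, loopback or not */
    if (n < 0) {
        perror("read");
        close(fd);
        return 1;
    }

    for (ssize_t i = 0; i < n; i++)
        printf("%02x", buf[i]);
    printf("\n");

    close(fd);
    return 0;
}
```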
The performance impact is something that we measured last year, and it was negligible, around 5-6%. So: shared memory, better performance. Then there is the fact that, being connected with virtio, we benefit from a community that is getting bigger and bigger. It started as something only data-center related, but now, finally, everybody is using it. For this reason, using virtio means using something that will stay open in the future and remain stable for a long time. The two pictures here below show applications that run both on the native side and in a VM.

Now, let's talk a bit about the design, so how the solution works. There are mainly two new components being added. What you see in yellow already exists: your application, with some kind of library that maybe sits in the middle; your virtio driver is already there; and many of the vhost-user devices on the right are already available. As I said, we build a bridge, and the components that we added are the ones you can see there: one in user space, the second in kernel space. Let's go a bit deeper into the details of each of the two.

Kernel space has been enriched with a new transport. Every virtio driver uses a transport to really connect to the real hardware. The transport can be MMIO, PCI Express, anything that is, let's say, a bit more real than the virtio interface. In this case, we built a new transport, called loopback, which simply doesn't forward the calls directly to the hardware (for instance in memory, like MMIO does), but instead forwards everything to user space.

In user space, we have a new component that we call the adapter. The adapter is the component that sets up the communication between the device on the right and the driver on the left. So, the adapter is the one who says: here are your vrings, this is a notification of a control message that you should consider, and things like that. If you look at the picture, you will see that there are system calls, usually ioctls, and sockets used by the user space applications. The virtio-loopback adapter has a lot in common with QEMU, in the sense that today, if you have any experience with vhost-user devices, you typically already use QEMU plus KVM. The adapter implements part of the QEMU code, which is actually all the library needed to talk with vhost-user devices, because today vhost-user devices expect to talk with QEMU. So we are, let's say, emulating QEMU, respecting exactly the same APIs and providing exactly the same responses to the device. The device interacts with our adapter in the same way as it interacts with QEMU. For this reason, it's also important to say that this is another component that you don't need to modify: today, what we do is simply take existing devices and make them work in this architecture.

A few more details, although maybe I already anticipated some of them. In kernel space, we have a char device and the transport. The char device is the one that exposes the ioctls. You can see how there is then an mmap syscall from the user space side, which actually enables shared memory between the driver and the application itself. The adapter, on its side, is a C application: it uses eventfd and Unix sockets to talk with the user space devices, and towards the kernel it uses ioctls together with eventfd.
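To give a flavor of those primitives, here is a minimal, self-contained C sketch. It is not code from the virtio-loopback repositories: the device node name and the commented-out ioctl request are hypothetical placeholders; only the pattern it shows (open a char device, mmap shared memory from it, use an eventfd as a notification doorbell) reflects the mechanisms described above.

```c
/*
 * Minimal sketch only: NOT code from the virtio-loopback repositories.
 * "/dev/loopback" and the commented-out ioctl are hypothetical placeholders;
 * the point is just to show the user-space primitives the adapter relies on:
 * a char device, mmap() for shared memory, and an eventfd used as a doorbell.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Open the (hypothetical) char device exposed by the loopback transport. */
    int fd = open("/dev/loopback", O_RDWR);
    if (fd < 0) {
        perror("open /dev/loopback");
        return 1;
    }

    /* Map one page of shared memory; driver-visible buffers would live here. */
    size_t len = 4096;
    void *shm = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (shm == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    /* Create an eventfd to be used as a notification "doorbell". */
    int efd = eventfd(0, 0);
    if (efd < 0) {
        perror("eventfd");
        return 1;
    }

    /* A real adapter would now hand the eventfd to the kernel side, e.g.:
     * ioctl(fd, LOOPBACK_SET_EVENTFD, &efd);   // placeholder, not a real request
     */

    /* Ring the doorbell: increment the eventfd counter by one. */
    uint64_t one = 1;
    if (write(efd, &one, sizeof(one)) != sizeof(one))
        perror("write eventfd");

    munmap(shm, len);
    close(efd);
    close(fd);
    return 0;
}
```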
So, these are the components that are in place in this architecture. I went through a kind of wrap-up of what it is; now we can talk about what's next. Well, not what's next, but what the activity of 2023 was.

We started the year with a few devices available in this architecture: blk, RNG and input were the devices that we presented last year in Yokohama. These were Rust-based in the RNG case and C-based in the case of blk and input. This means that we were able to use, with the architecture that I showed you, vhost-user-blk, vhost-user-rng and vhost-user-input. This year, things went a bit further. We added new devices: GPIO, I2C, CAN and sound are already functionally complete. We are now working to integrate them in AGL, in a way that you can build your image with everything you need directly from Yocto. We have some activity ongoing, namely console and GPU: these are under development today and we will probably present them next time. Together with adding new devices, we also worked at the infrastructure level, meaning we improved the code of the architecture as it was before. In last year's version, we needed some changes at the device level. As I said, this is not the case anymore: you can directly use things that are commonly available and, of course, supported.

A few more details about two special devices that we encountered this year. There is no upstream version available of vhost-device-can and vhost-device-console, so we had to build the devices first and then all the rest. The devices are now almost ready, in a clean version, at least for CAN. Console, as I said, is under development, but it is something we will finish very soon. We target to share this code with vhost-device, which is the common Rust crate for this type of thing: all the vhost devices that are written in Rust are available in this crate. The plan for the next weeks (it is actually something very imminent) is to make sure that the CAN and console devices that we implemented ourselves get integrated there.

The ongoing work is related to GPU and sound. For what concerns the GPU, the implementation that we are targeting is the C one, so what you actually find available in QEMU. There are other possibilities, for instance solutions based on Rust; however, they are not so stable. For this reason, we went for the QEMU implementation. I will come back to you on this probably in the next months, because today it is still under development. For what concerns sound, we are working on a very recent implementation by Red Hat and Linaro in the crate that I mentioned before. They are really implementing the device in these days; as a matter of fact, the device is under the staging folder of the crate, meaning it is not yet very, very stable. This is what we are using in the demo that I'm going to show you.

Jerry, do you have a power adapter? USB-C? Because maybe it's this one that is failing. No? If there are any questions in the meantime, I can take them. LibreOffice always wants to recover something from the previous session...

Objectives of the demonstration. Sorry for the interruption. The demonstration is about the vhost-user sound device, so the very recent implementation from Linaro and Red Hat; this is a Rust crate.
So, what we actually want to show you is, number one, the latest version of the loopback infrastructure; if you are using any of these components, please be aware that the kernel driver was updated. Then we are running a special scenario; I will tell you more about it later. So, we are demonstrating the feasibility of handling complex scenarios. The objective is also to identify bottlenecks and additional requirements for the next developments, and to highlight the benefit of dealing with this type of devices in user space. The hardware that we are using is an AGL reference hardware platform, and the OS is AGL Pike.

In one of the latest AGL calls, we were asked by the IC work group, the Instrument Cluster work group, if it was possible to use vhost-user sound and to connect it to multiple containers. What they really want is to have two containers with two different devices, with different levels of priority, because what they have is an instrument cluster container and something that is QM. The solution that we agreed on during our expert group discussions was: okay, we didn't think about such a scenario when we designed this at the beginning. So we duplicated all the components, in a way that there is complete separation between the vhost-user sound infrastructure of one application and the vhost-user sound infrastructure of the second application, like it would be if you have one container which is an instrument cluster and something else that might be more IVI-ish, let's say. What is good in this new architecture is that at the end of the day you can use user space applications to prioritize between the two, in the sense that it is PipeWire, at this point, that is able to say that one container has priority and the other one doesn't.

So, let's go and see the video. We have several windows here because, as you have seen, there are several components. On this side we are going to insert the two virtio-sound modules, one in this window and the other for the other application, plus the new virtio-loopback logic; and on this side we have the sound devices and the adapters: sound device and adapter for the first application, sound device and adapter for the second. So, the first virtio-sound module was inserted; we can see the message saying something there.
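While the modules are being loaded, a quick aside on what the adapter and a vhost-user device say to each other once both are started. Below is a hedged C sketch, not code from the project, of the very first vhost-user exchange, a GET_FEATURES request sent over the device's Unix socket; this is the same first step QEMU, or the loopback adapter, would take. The socket path is a made-up placeholder, and a real adapter exchanges many more messages afterwards (memory regions, vring setup, eventfds).

```c
/*
 * Hedged sketch, not code from the virtio-loopback adapter. It shows the very
 * first vhost-user exchange (GET_FEATURES) that an adapter performs over the
 * device's Unix socket, the same way QEMU would. The socket path below is a
 * hypothetical placeholder; message layout follows the public vhost-user spec.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

#define VHOST_USER_GET_FEATURES 1u   /* request type from the vhost-user spec */
#define VHOST_USER_VERSION      0x1u /* protocol version bits in the flags field */

/* Wire format: 12-byte header (request, flags, size) followed by the payload. */
struct vhost_user_msg {
    uint32_t request;
    uint32_t flags;
    uint32_t size;
    uint64_t payload;
} __attribute__((packed));

int main(void)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    /* Placeholder path: in the demo, each application has its own device socket. */
    strncpy(addr.sun_path, "/tmp/vhost-user-sound-1.sock", sizeof(addr.sun_path) - 1);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");        /* no device running behind this path */
        close(fd);
        return 1;
    }

    /* Ask the device for its virtio feature bits (header only, empty payload). */
    struct vhost_user_msg req = {
        .request = VHOST_USER_GET_FEATURES,
        .flags   = VHOST_USER_VERSION,
        .size    = 0,
    };
    write(fd, &req, 12);

    /* The reply carries a 64-bit feature bitmap as payload. */
    struct vhost_user_msg reply;
    ssize_t n = read(fd, &reply, sizeof(reply));
    if (n >= (ssize_t)(12 + sizeof(uint64_t)))
        printf("device features: 0x%llx\n", (unsigned long long)reply.payload);

    close(fd);
    return 0;
}
```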
Now we are going to insert the second one, well, actually the virtio-loopback module; this brings the registration of the new transport. Now, when we start the first of the two vhost-user devices and then the adapter, you can see that something changed at the virtio-sound level, something we were not seeing before. Same story for the second adapter and the second vhost-user sound device: this brings another virtio card there. So at this point we have two cards registered in the kernel; the kernel knows that there are additional sound cards. We have the devices there on top that are actually simulating, reproducing the device; in this case the device is simply asking PipeWire to play something, because this is, let's say, the host-based execution. And now we are ready to play something. Hold on a second. This was the first sound, then we go with the second one, and then the first takes over. The very last point is important and maybe should be described again: what we have done is, we reproduced the first beep-beep, then we asked the second device to reproduce a beep, and then, when the beep of the second device is over, we go back to the very first one. So the second application has priority with respect to the first one. This is something that we had for free, in the sense that it is something PipeWire gave us, and it is one of the reasons why the Instrument Cluster EG is interested in this technology: they wanted an easy way to handle this type of priorities, and they don't want to mess with the kernel, because when you mess with the kernel, of course it is doable, however it tends to be slightly more complicated and less portable.

So, let's go back to the final part of the presentation after the demo. This is what we have seen: the two devices that were playing, first one and then the second, and that at a certain point were conflicting, with PipeWire giving priority to one of them.

Now, this was about the sound device, the sound support. For what concerns the rest, the general development status is that we added the new devices; two of them, CAN and console, were built from scratch. The devices CAN, GPIO, I2C and sound are completed; console is in the finalization phase and probably at the end of this week it will be ready to be shared; GPU is ongoing. We are working together with Jan-Simon to integrate everything in AGL. We created new repositories for the driver and the adapter; these repositories are available under the AGL Gerrit system, so if you want to have the latest version, or to see what is in the latest version, you can reach out there. Additionally, we added a new feature, well, we changed the behavior of the agl-egvirt feature: today, when you build an AGL image and you add the agl-egvirt feature, you are automatically including virtio-loopback. This is a way to ease testing on your side. We are testing it mainly on the reference hardware, meaning the R-Car H3; however, we know it works also on virtualized systems. Said in another way, during development we work with virtual machines, so we know it works with VMs, and then we port everything to the H3. This means that if you want to test, you can use a VM or, alternatively, you can go for the real hardware, meaning an H3.

Next steps: we have additional work to do, of course, finalizing what still needs to be done on the GPU and console side. Then there is a new task to open, which is the AWS one: the idea is to enable also Amazon images with virtio-loopback inside.
This is a way for us, for instance, to enable usage in the cloud, and also a way to enable seamless portability of an application inside AWS, on your PC, or on your embedded devices. We are also, as I mentioned, going to go upstream with the rust-vmm crates for CAN and console. Benchmarks are not yet available, in the sense that we are working on features and integration; benchmarks will very likely be available at the next event, very early next year.

Last point on my side: what can be done next? Well, the activity is now steadily ongoing. We could add new devices, this is a possibility, but there is also a lot of work that could be done on the AWS side, meaning that the fact that we are now breaking the barrier between the embedded and the cloud puts us in a good position to start doing things also in AWS. For what concerns new features, one possibility would be to go deeper into multi-device support, meaning supporting the same architecture that I demonstrated before, but with a kind of more native support, in a way that the adapter is aware of multiple incoming streams and can also do the job that PipeWire is doing. In the sound case we have PipeWire, but in the case of CAN or any other device we might not have such an abstraction that helps; so adding multi-device support goes in this direction. The second part is something we discussed already, and we already have a spec for it: it is about the cross-processor support, meaning that today we have this bridge between driver and application running on the same CPU, one in kernel space, the other in user space. This cross feature wants to go further, meaning the driver is on one CPU and the application is on another processor, or vice versa. This means that we can span, let's say, the components of the architecture between, for instance, an R core and an A core, Cortex-A and Cortex-R, in a way that, for instance, also the Cortex-R could access part of the Cortex-A features.

So, that's all on my side. Conclusions: work is ongoing, with something still to do about AWS, console and GPU. The integration is something that is happening really in these days, so you might encounter some troubles if you try to do these things tomorrow, but in a few weeks from now these should be fixed. Project finalization is expected by the end of 2023, so the results of everything I mentioned will be shared, at least internally in the expert group, as early as possible, very likely by the end of the year.

So, questions?

Question: Do you plan to upstream virtio-loopback in the mainline kernel, not the automotive kernel?

Answer: I think that can be done. It is actually simply a transport driver, so it is a driver; it's not part of the kernel core, it's something you can plug and unplug. The driver is already in the repositories where we have the sources, so virtio-loopback-driver is where you can find the sources. For now, we did not share it yet with the kernel community; this is something we will do as soon as the maturity is a bit stronger. We are in contact with part of the virtio community for this, so people know that we are working on it. Last year we also had the occasion to discuss with the vDPA team, who are doing something similar but for the data center case. So this is more or less the situation; it will be done as soon as there is more testing.

Question: So you will keep maintaining these drivers out of tree, kind of? And
you mentioned vDPA, so how are you interacting with the vDPA or virtio people?

Answer: Yes, we do interact. We already had discussions last year at ALS 2022. We have also benchmarked vDPA against this solution, and there are slides available, so if you want you can have a look. In brief, the comparison is not that easy, because the only device that we have in common is blk, the BLK storage device. We did it, and the performance was actually similar between the two.

Thank you. Any other question? Well, if not, in any case you can just ask me in these days if you find me, or join the bi-weekly SDV call. So feel free to ask anytime. Thank you then.