So hello, I'm Zhiyi Guo from Red Hat. I'm a senior quality engineer focused on testing graphics virtualization; we do a lot of work with Intel and NVIDIA, and my responsibility is testing these solutions inside Red Hat. So today's main topic is virtual desktops running on the KVM infrastructure.

I think my colleagues have already mentioned the open source communities and the open source software environment, but since we haven't covered the individual components yet: the KVM-based virtualization infrastructure basically includes four components. The first one is KVM itself, the kernel module that provides the full virtualization solution for Linux; I think this is the most popular solution for a lot of cloud providers. The second one is QEMU. QEMU consists of a lot of emulated devices that provide functions like the memory, the virtual CPUs and others. But KVM and QEMU by themselves are not user friendly; they mainly target developers. So there are two more layers, libvirt and virt-manager, that improve the user experience: with libvirt you can create a virtual machine with a simple command, and with the virt-manager GUI you can build your virtualization setup graphically. All of these are open source.

So, talking about the virtual desktop: we needed to know the enterprise use cases during our testing and investigation, and we learned them through investigations with our customers. The first thing to discuss is that instead of a desktop on a local machine, enterprises always put the desktops inside their data centers. The IT departments won't give you a very powerful machine; they just give you a virtual machine that provides the desktop experience, plus a very simple, low-cost local device, a thin client.
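The four layers mentioned above can be seen directly on a Linux host. This is only a hedged sketch for orientation; command availability and package names vary by distribution.

```shell
# Inspect the four components of the KVM virtualization stack on a host.
lsmod | grep kvm                 # KVM kernel modules (kvm plus kvm_intel or kvm_amd)
qemu-system-x86_64 --version     # QEMU, the device emulator
virsh version                    # libvirt, the management layer and its CLI
virt-manager &                   # the GUI front end, if installed
```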
That's how they achieve good cost savings.

And for the desktop itself, if you want to use it, you need audio and video. I think most users want the video and audio functions when they run software inside the VM. A simple example: if you use your office suite inside your virtual machine and something goes wrong, it gives you an alert sound. But if your desktop doesn't have audio output, you cannot hear it, you cannot notice these changes, and that gives you a bad experience. So we want all the functions of the desktop in the virtual machine, just like we have on local machines. And for enterprise users, you may have a lot of video conferences. The virtual machine won't provide you a conference camera, so you need a USB camera that can be passed through to the cloud; then you can use the conference video and the voice from the virtual machine. And the last requirement is graphics capability. A lot of people run heavy computing software, and this software uses graphics APIs like DirectX or OpenGL.

So let's see the open source solution already built on KVM. This is called the SPICE project. From this picture you can see that there are typically three computers on this screenshot. The first one is your local machine, which may be a Linux system or a Windows system. And there are two computers actually connected to your workspace: one could be a Linux machine and another could be a Windows machine. The SPICE project was started by Red Hat; that is, it came from the business demand of enterprise users.
And these users mainly want the desktop for their daily needs.

So, talking about the SPICE project: it provides a lot of functions that enterprise users really want. The first is the client-server model. When you travel to other countries you may only bring a small laptop or a tablet with you, but you can still work with your workspace in the cloud. The second is that a lot of people want their remote desktop to feel like a local machine, and the SPICE project provides compression algorithms that help make this possible. Desktop content is heavy graphics data, and if we transferred the raw data to the clients, it would consume a lot of bandwidth and compute resources. With video streaming, and with video compression applied to these streams and images, we can reduce the network usage while providing almost the same quality as transferring the raw data to your clients.

Audio playback and capture are also possible with the SPICE project. You don't have to worry that the sound stays in the data center: SPICE transfers the sounds of your applications to your clients, and it can also transfer the voice from your microphone and other audio input devices back to the data center. And for enterprise users there is also USB redirection, which lets you connect your USB devices, like a camera, a smart card, or a USB disk, to the virtual machine; this helps you keep your data in a permanent workspace that you can access from everywhere.

And for performance, the SPICE project takes advantage of the emulated devices built into the QEMU layer, with the highly optimized QXL GPU device and highly optimized guest drivers. These three pieces, the SPICE project, QEMU, and the guest drivers, together can provide you a very smooth experience.
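To make this concrete, here is a hedged sketch of what creating and connecting to such a SPICE desktop can look like from the command line. The VM name, sizes, hostname, and port are illustrative placeholders, not values from the talk.

```shell
# Server side: define a VM that uses SPICE remoting and the QXL display
# device (all values here are illustrative).
virt-install \
  --name spice-desktop \
  --memory 8192 \
  --vcpus 4 \
  --disk size=40 \
  --graphics spice,listen=0.0.0.0 \
  --video qxl \
  --cdrom /path/to/installer.iso

# Client side: connect with a SPICE client; USB redirection can then be
# enabled per device from the client's menu.
remote-viewer spice://vdi.example.com:5900
```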
So with libvirt and the SPICE project, it is very easy to create a simple virtual desktop with a CLI command. From this picture you can see that, besides the usual compute resources like the CPU, the memory, and the disk, you just use very simple graphics and video options to let your virtual machine use the SPICE project and the QXL device for the best 2D experience. I captured this screenshot and verified this command on a RHEL system; but Fedora, which is also a free and open source OS maintained by our company, supports the same operations and gives you a similar experience.

So what is the current limitation? Because Windows VMs and Linux VMs have become more modern, they generally use 3D functions to provide the best experience, but SPICE alone is not really qualified for 3D workloads. From my testing, 3D workloads that only use the software devices created by the open source communities do not get good performance: the QXL device can only provide software-based 3D capabilities, which means your CPU resources do the rendering. If you run large applications that heavily use the 3D functions, all your resources will be used for computing the 3D work, and it will make your system unresponsive. Even if these applications use all the resources of your virtual machine, they can only deliver a very low frame rate, which makes it feel like you are watching static pictures inside the VM. In my testing, with a virtual machine that has 4 vCPUs and 8 GiB of RAM, we ran a very popular benchmark called Unigine Heaven. Even with very low settings, very low resolution, and very low view quality, it only delivered around 7 FPS in the VM.
You can also check the load in your VM: the CPU load was over 214%, which means your VM cannot respond to any interaction with you, because all the resources are taken by the 3D application.

So the open source community and the current upstream want a hardware-accelerated desktop. With hardware acceleration we can offload the graphics-intensive workloads to the GPU, and that is a very popular idea for a lot of open source users as well as enterprise users. The main open source technology available on KVM and QEMU today is GVT-g from Intel. If you want hardware-based acceleration for the graphics workloads in your desktop, you will need a reasonably recent Intel GPU that provides the hardware support for this technology. Because you are using virtual machines, some of the workloads have to be transferred to the host, which controls the GPU device, so you need some kernel modules that help transfer the workloads from the VMs to the host, and then transfer the resulting data back to the VMs. And you also need to enable these functions on the host, because when you use GPU virtualization, some of the host's graphics workloads may be interrupted by the virtualized GPU workloads.

Intel's GPU virtualization, GVT-g, is full GPU virtualization: it also presents an emulated GPU device, but this emulated GPU device is backed by the host's Intel GPU, so the host GPU computes the graphics workloads on the host and moves the results back to the VMs. If you use an emulation-only solution instead, all your workloads go to the vCPUs, and the CPU is not good enough to calculate graphics workloads, so you won't get good performance for 3D workloads. Configuring the host to use the GPU for virtualization is very easy.
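As a hedged sketch of that host-side setup: the kernel parameter, the PCI address, and the mdev type below are version- and hardware-dependent placeholders, so check what your own GPU and kernel expose.

```shell
# 1. Enable Intel GVT-g in the i915 driver and make sure the mediating
#    modules (kvmgt, vfio-mdev) get loaded; reboot after changing this.
echo 'options i915 enable_gvt=1' | sudo tee /etc/modprobe.d/gvt.conf

# 2. Create a mediated (vGPU) device under the integrated GPU.
#    0000:00:02.0 is the usual address of the Intel iGPU; the available
#    types (e.g. i915-GVTg_V5_4) depend on the GPU and kernel.
GPU=/sys/bus/pci/devices/0000:00:02.0
ls "$GPU/mdev_supported_types"
UUID=$(uuidgen)
echo "$UUID" | sudo tee "$GPU/mdev_supported_types/i915-GVTg_V5_4/create"

# 3. Reference the new mdev from the libvirt XML VM descriptor:
# <hostdev mode='subsystem' type='mdev' model='vfio-pci' display='on'>
#   <source><address uuid='...same UUID as above...'/></source>
# </hostdev>
```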
So you just use a very simple command to create the mediated Intel GPU device on the host, and then you put this mediated GPU device into the libvirt XML VM descriptor and start the virtual machine. Then you get a VM with a hardware-accelerated GPU device.

I also measured the 3D performance inside virtual machines with the Intel GPU device. When you pass this GPU device to the VM, from inside the virtual machine you can see the latest graphics stack developed by Intel, which provides the latest OpenGL capabilities, like OpenGL 4.5 and 4.6; and once the graphics stack gains new features, they are also reflected in your VM. I tried the same benchmark with the same configuration on these VMs. As I mentioned, with the emulated video device the benchmark only got 7 FPS, but when you switch to the Intel GPU device, the performance increases dramatically: I measured up to 58 FPS running this benchmark. And beyond these test results, when you try to run business applications, for example office workloads like the open source office suites, you also get a very good experience: the rendering done by the software is very interactive, and the UI interaction is also very smooth. The CPU load is also reduced to around 90% on a VM that has 4 vCPUs, so you can still get enough CPU power to process other tasks that are more sensitive to CPU load.

So this is the last page of my slides: the future technologies that I think will arrive very soon in the open source communities. The first one is the virtio-GPU device.
This is also one of the emulated GPU devices in QEMU, but the difference from the QXL device is that virtio-GPU can offload the 3D workloads from the VM onto the host GPU device. And GPU devices from different vendors can all work with this open source project: you are not tied to a solution from Intel only, or NVIDIA only, or AMD only; you can mix all of them. Another very exciting technology is that, as maybe some people know, Intel has already released discrete GPU devices for the data centers, with very good performance. The open source Intel GPU acceleration will also take advantage of these discrete GPU devices, which can provide the full graphics performance to your machine, or let you share these GPU devices among more users in your infrastructure for their daily 3D usage.

[Audience question] Was that transferred over the network, or was it from the host itself?

Yes, it was transferred over the network. The 3D workloads and the desktop streaming are currently two separate modules. When you run your 3D workloads inside the VM with the Intel GPU, the GPU delivers the performance; I think it is just like running the workloads natively with a GPU device. But you definitely need another application, SPICE, to transfer your desktop image to your clients, and these two workloads are different. So currently, if you have low network bandwidth, when you see this content from your laptop or your client, you may see some stuttering or some latency; that is because SPICE cannot handle such network conditions very well. But you can still see that the FPS within the VM is very high. So the next solution is software that can help to capture the desktop image,
and then push this image to the GPU device and let the GPU encode it into an H.264 or H.265 stream, and then SPICE sends these video streams to your clients. Then you will get a similar experience: the behavior you observe from your client is the same as the content rendered inside the VM. I'm also testing the solutions from these vendors, so if you have questions regarding the graphics components or testing with these vendors, we can also talk offline. Thank you.

[Moderator] Thank you.
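Going back to the virtio-GPU device mentioned near the end of the talk, its setup can also be sketched from the command line. This is a hedged example: the exact option spellings (for instance `gl=on` versus `gl.enable=yes`, or the virtio video model name) vary across virt-install and QEMU versions, and all other values are illustrative.

```shell
# Hedged sketch: a VM using virtio-GPU with Virgil 3D, so guest OpenGL
# calls are offloaded to the host GPU. SPICE with GL works over a local
# connection, hence listen=none.
virt-install \
  --name virgl-desktop \
  --memory 8192 \
  --vcpus 4 \
  --disk size=40 \
  --video virtio \
  --graphics spice,gl=on,listen=none \
  --cdrom /path/to/installer.iso
```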