Good morning, everyone. Thank you for attending our session about what's behind 8K video streaming in the cloud age. I hope you enjoy our session as well as the splendid scenery in Berlin. I'm Jinghua, and my English name is Coco. I will give this presentation together with Changzhi and Shaohe. Changzhi and I are both staff researchers from Lenovo Research, and Shaohe is a cloud engineer from Intel. Today we are going to talk about 8K video at the edge. First, we will introduce the background of 8K video. Then we will elaborate on our edge-optimized architecture, which is based on the StarlingX project. Next, we will focus on the work we have done to meet the 8K video requirements, followed by the demo and the summary.

You know, Berlin is a magical city. Two months ago, here in Berlin at IFA, the consumer technology show, it was 8K TV that captured the headlines. Big TV manufacturers like LG, Samsung, and the Chinese manufacturer TCL all launched their 8K TVs. The concept of 8K traces back to 2012, or much earlier, and in these years news about 8K can be seen everywhere: in August 2016, NHK kicked off its first 8K satellite test broadcast, and in March this year, Alibaba released its 8K video cloud solution. I think the most well-known example must be that the Tokyo Olympics will be shot and broadcast in 8K.

So we have heard a lot of news about 8K, which makes me think: what is 8K, and what kind of features does 8K have? The answer may be much simpler if we compare 8K with other resolutions. The table on the right compares them along five dimensions. The first dimension is resolution: 8K refers to any screen or display whose width is around 8,000 pixels, so it is four times 4K and sixteen times Full HD, or what we call 1080p. You can tell from the left picture that 8K resolution delivers higher picture quality. As for the audio channels, 8K Ultra HD supports 22.2-channel audio rather than the 5.1 channels of 4K. And the 100-degree viewing angle lets the viewer catch every frame and enjoy the video with an immersive feeling. As for the coding format, 8K uses H.265, which we also call HEVC, or VP9. As for the network bandwidth, to transmit 8K video at high frame rates, the network bandwidth needs to reach 120 Mbps, while for HD and 4K the number is much smaller.

With such great features, 8K no doubt delivers an even stronger sense of presence and realism to viewers. Hence the usage of 8K can be quite broad. One common usage is consumer and commercial TVs. Others include immersive video applications such as panoramic video and virtual reality, as well as real-life applications such as healthcare and high-precision monitoring. In remote healthcare, with 8K resolution doctors can see the internal structure of blood vessels, the boundaries between cancerous tissue and normal tissue, and the tiny sutures that are very difficult to see even with the naked eye. High-precision monitoring is needed in crowded areas like streets and airports.

To use these 8K features, we must first cope with the challenges that 8K video brings, and the challenges may come from every processing step from the camera side to the end user. The first challenge is the difficulty of managing the cameras in common smart city scenarios, because most of the cameras will be placed separately and far away from the data center.
A statistic from the media estimates that there will be 1 billion cameras worldwide by 2020. The second challenge comes from the heavy parallel computing required by the main coding formats of 8K video, like HEVC and VP9. The third one is that when using deep learning or machine learning models to analyze 8K video, the high resolution poses a heavy load on data processing. And the fourth challenge comes from the transmission of 8K video, which needs 120 to 150 Mbps.

To cope with these challenges, first we need edge computing technologies, which provide high bandwidth and low latency. Based on that, we need different accelerators to empower the deep learning or machine learning models and the video codec algorithms. We also need to manage the devices and the topology of the different, separated edge clusters. That is why we came up with our edge-optimized architecture. In the video scenario, it is suggested that we process the video at the edge and then send the results to the data center. So we use OpenStack in the data center, and at the edge site we utilize the StarlingX project, because it is a very scalable and deployment-ready edge solution: it provides fault management, host management, service management, and so on, which is exactly what we need. But how do we cope with the remaining challenges, such as accelerator management, or managing the topology of different kinds of devices? I will hand over to Changzhi and Shaohe to elaborate on that.

Hello, everyone. My name is Changzhi. I am a staff researcher at Lenovo Research. In this slide, let me talk about our edge computing solution architecture. It contains two parts: one part is cloud computing in the data center, and the other is edge computing. In our OpenStack platform, we manage the physical networking infrastructure with our internal project. In this project, we can see the network topology and operate switches and servers; I will explain how it works in the next part. At the edge, we use StarlingX as the edge infrastructure software platform, and beyond StarlingX we do something more on top of it. The first thing is physical topology management. The second thing is edge device management: we need to manage the lifecycle of devices from different vendors and monitor them, and I will give you a detailed introduction in the next part. The third thing is that we use Cyborg to manage our accelerators, such as GPU and FPGA; Shaohe will talk about that later. Both cloud computing and edge computing bring together technologies like Ceph, DPDK, SR-IOV, and some accelerators. In both the OpenStack platform and StarlingX, we use Ceph as our storage backend, and we have done a lot of optimization of Ceph performance, such as erasure coding, online upgrade, and data-consistency enhancement.

In this part, I will give you a detailed description of physical topology management, edge device management, and accelerators. First, let me show you our physical topology management project. Both in the data center and at the edge side, we can manage our physical topology. At the beginning of physical topology management, the edge will be registered to the data center as an edge cluster. The data center stores the edge clusters' information, such as clusters, switches, servers, and links. One data center can own several edge clusters, and every edge cluster reports its status to the data center. This project consists of an API server and a topology process. The API server is the entry point of the APIs.
It receives HTTP requests and command-line requests, and the API server returns the topology data from the DB. The topology process gets the topology information by using the SNMP protocol. We need to set the IP address range of the switches before starting the topology process. At the beginning of the topology process, it scans all the IP addresses that we set before. After scanning, we get the active IP addresses; next, we can connect to them and get the LLDP information via SNMP. Additionally, if the network topology is changed, this project can detect the change within a few minutes. How does this project work? Briefly, we enable the LLDP feature on the switches and install the lldpd daemon on the servers, so we can get the whole network topology by running the LLDP protocol between switch and switch, or between switch and server. Finally, we use SNMP to retrieve the LLDP information, so we can get detailed information about the switches, such as software version, system name, interfaces, and status. (A rough sketch of this discovery flow is shown after this part.)

As for our edge device management, just like Neutron LBaaS and Neutron FWaaS, edge device management is a pluggable project. We design some abstract interfaces, and any vendor who wants to use this project needs to implement their own driver. Next, we will design a camera common plug-in and a sensor common plug-in to fit cameras and sensors. Additionally, new edge devices will be registered into this service, and we can monitor these devices.

Good morning. This is Shaohe from the Intel cloud team. Now let me introduce some hardware and its management in the Visual Cloud. Intel is unleashing innovation in the Visual Cloud. It has a total solution for different scenarios, such as media processing and delivery, media analytics, immersive media, cloud graphics, and cloud gaming, covering encode, decode, inference, and render. This is a media analytics pipeline: it goes through the decode, inference, and encode processes. So the Visual Cloud is a full stack, including hardware and software. The hardware covers different domains in computing, storage, and networking, such as the Xeon processor, the VCA card, the FPGA card, Optane persistent memory, the graphics card, and Movidius. The software includes different tools for encode, decode, inference, and render, such as the Media Server Studio, WebRTC, OpenVINO, the Media SDK, and other render tools. Besides these, the tools leverage AVX-512; WebRTC can also leverage the VCA card, and OpenVINO can also leverage the FPGA.

Now let's see how this hardware can power the Visual Cloud. This is the Intel Xeon Scalable processor. All Xeon Scalable processors support AVX-512. With the mesh architecture, it delivers low latency and high bandwidth among the cores, memory, and I/O controllers, so the performance of the video stages, including transcoding, is greatly improved. This is the Optane family, including DC persistent memory and SSDs. They deliver high performance in bandwidth, IOPS, and latency, and they are faster, higher-endurance, and denser than conventional storage media. This is the Intel FPGA; the new product is Stratix 10. It supports high floating-point performance, high-speed transceivers, and a high-bandwidth parallel memory interface, which is useful for inference and HPC. This is QAT, the QuickAssist Technology. It provides hardware solutions for compute-intensive workloads, such as data storage and transmission, and it supports 100 Gbps cryptography and 100 Gbps data compression. This one is a SmartNIC, used for network acceleration. Currently it supports 2x 25G Ethernet, it supports full Open vSwitch acceleration, and it is programmable and easy to deploy.
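To make the topology discovery flow Changzhi described a bit more concrete, here is a minimal illustrative sketch, not the project's actual code, that scans a configured IP range and reads remote system names from the standard LLDP-MIB over SNMP with the pysnmp library. The IP range, community string, and the restriction to a single OID are assumptions made for the example.

```python
# Illustrative sketch only: scan an assumed switch IP range and read
# LLDP neighbour names via SNMP (LLDP-MIB::lldpRemSysName).
from ipaddress import ip_network

from pysnmp.hlapi import (CommunityData, ContextData, ObjectIdentity,
                          ObjectType, SnmpEngine, UdpTransportTarget, nextCmd)

SWITCH_RANGE = "192.0.2.0/28"          # assumed IP range set before the scan
COMMUNITY = "public"                   # assumed SNMP community string
LLDP_REM_SYS_NAME = "1.0.8802.1.1.2.1.4.1.1.9"  # lldpRemSysName in LLDP-MIB


def lldp_neighbors(ip):
    """Walk lldpRemSysName on one device and return its neighbour names."""
    names = []
    for err_ind, err_stat, _, var_binds in nextCmd(
            SnmpEngine(),
            CommunityData(COMMUNITY, mpModel=1),
            UdpTransportTarget((ip, 161), timeout=1, retries=0),
            ContextData(),
            ObjectType(ObjectIdentity(LLDP_REM_SYS_NAME)),
            lexicographicMode=False):
        if err_ind or err_stat:
            break                      # unreachable host or SNMP error: skip it
        names.extend(str(val) for _, val in var_binds)
    return names


if __name__ == "__main__":
    # Build a simple adjacency map that a topology process could store in the DB.
    topology = {}
    for host in ip_network(SWITCH_RANGE).hosts():
        neighbors = lldp_neighbors(str(host))
        if neighbors:                  # only active, LLDP-enabled devices respond
            topology[str(host)] = neighbors
    print(topology)
```

A real topology process would also poll periodically and diff the results against the stored topology, which is how a change can be detected within a few minutes.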
Here is a slide on the status of some accelerators upstream. We have now supported FPGA in Cyborg: with Cyborg, we can discover FPGAs and program them, and I think we have seen the Cyborg FPGA demo in the keynote. The QAT crypto and compression are already supported in Ceph. The persistent memory spec is under review in Nova, and we are supporting persistent memory for read and write caching in Ceph. Libvirt supports the new instruction sets for inference. Kubernetes already supports QAT, GPU, and FPGA: the FPGA, GPU, and QAT device plug-ins can all run as a DaemonSet, and all these accelerators can be shared by multiple pods. For FPGA, it supports two modes, af mode and region mode, but it does not support image management. For QAT, you need to pre-load and configure the DPDK driver, and pre-install and configure the QuickAssist Technology software. So let's look at Cyborg. Cyborg will support more management for these accelerators, such as dependent-software management and FPGA image management. The community is trying to integrate it with Kubernetes: there will be a Cyborg master plug-in running on the Kubernetes master and a Cyborg node plug-in running on the Kubernetes nodes, all of them containerized, leveraging Custom Resource Definitions in containers to support all these accelerator functions. OK, that's all from me. It's time for the demo.

OK, before the video demo, let's take a look at the environment. We have a data center site and one edge cluster. The data center is managed using ThinkCloud OpenStack, while each edge node has one GPU, with cameras and other devices connected. OK, let's start our demo one. It's a short movie. First, we log in to our ThinkCloud OpenStack product. After that, we go to our physical topology page. A few seconds later, we can see not only the data center's physical topology but also the edge cluster's topology. At the data center, we can see the switches and servers and their information, such as the system and running status. At the edge, we can also see the switch and the server. Additionally, we can see five devices from different vendors, shown in different colors, which are attached to the server. So this is what we have done in physical topology management, and we still have a lot of work to do. Let's show our demo two.

OK, this video contains two groups of comparisons. In the first, we compare the video processing, that is, how we accelerate the 8K video: not only the encode and decode processes, but also the detection process. This video is going to demonstrate that. We first check the resolution of our video using FFmpeg, and then we change to Darknet. Darknet is a neural network framework, and we also use YOLO, which is used for object detection, so we check the YOLO config and look at the different parameters. In the left video, where we use the GPU for decoding as well as detection, you can see the people moving from left to right much faster than in the rest. In the middle video, we decode using the CPU and use the GPU for detection, which is the way the Darknet code works right now. On the right is the process using only the CPU. We find that the FPS of the all-GPU case is 25.4; the FPS of the middle video is 10.4; and in the all-CPU scenario the FPS is very small and the people look almost static. And this is the other comparison: we use an 8K video as input as well as a 1080p video as input, get the outputs after running Darknet, and compare these two videos. First, we also check the resolutions.
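The demo itself drives the Darknet command-line tools directly, but as a rough, hedged illustration of the same detection step, the sketch below runs a YOLOv3 Darknet model on video frames through OpenCV's DNN module. The file paths are placeholders, not the demo's real files, and the CUDA backend lines only take effect with a CUDA-enabled OpenCV build; otherwise OpenCV falls back to the CPU.

```python
# Illustrative sketch only: YOLOv3 (Darknet cfg/weights) inference on a video
# with OpenCV's DNN module. Paths are placeholders, not the demo's real files.
import cv2

VIDEO = "input_8k.mp4"        # assumed input file
CFG = "yolov3.cfg"            # assumed Darknet config
WEIGHTS = "yolov3.weights"    # assumed pre-trained weights

net = cv2.dnn.readNetFromDarknet(CFG, WEIGHTS)
# Prefer the GPU when OpenCV is built with CUDA; otherwise it runs on the CPU.
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)
out_layers = net.getUnconnectedOutLayersNames()

cap = cv2.VideoCapture(VIDEO)
print("resolution:", int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
      "x", int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))

frames, detections = 0, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # YOLOv3 expects a square, normalized RGB blob (608x608 here).
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (608, 608),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(out_layers)
    # Count boxes whose best class score passes a simple threshold.
    for output in outputs:
        for det in output:
            if det[5:].max() > 0.5:
                detections += 1
    frames += 1

cap.release()
print(f"processed {frames} frames, {detections} raw detections")
```

In this sketch only the inference runs on the GPU; the video decode still happens on the CPU, which is exactly the bottleneck the demo's all-GPU pipeline removes by decoding on the GPU as well.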
Using Darknet, we get the output video for 8K as well as the output video for 1080p. The video at the top is the object detection result for the 8K video, and the video on the left is the output of using the 1080p video for object detection. You can see that there are more details, and more objects have been tagged, in the top video. So let's go back to the slides. Here is some information about the object detection: we use Darknet and YOLOv3, and we did the two groups of comparisons.

So in this session, we talked about our edge-optimized architecture based on StarlingX. Our solution can manage devices and network topology at the edge, and it also manages different accelerators via Cyborg. The video demo shows that 8K video can provide much more detail for analysis, and that for real-time analysis, 8K video should be accelerated in every processing step, not only the detection but also the decode process. As for future work, we want to containerize Cyborg and use a user-space network stack to accelerate our edge networking. So now it's the Q&A part. Do you have any questions? If any of you have questions, you can also reach us through the contact information. Thank you. Thank you so much.