I'm from China Mobile. My name is Xuan; it's hard to pronounce. I'm an edge cloud computing architect. In this presentation I will tell the story of edge computing with OpenShift.

First, some background: what is edge computing? Edge computing is a deep fusion of computing and the network. I drew four pictures here so we can see where edge computing sits; check the red star. At the edge, the environment is very small. It's not like a data center, which can contain 1,000 bare-metal machines; an edge computing center may only have 10 or 20 machines. So the resources are limited, and I think this is why we want to use container technologies in this field. In the telecom field we almost always use OpenStack, but now I think this is a Kubernetes opportunity. At the edge we separate two parts. One is network-side edge computing, which sits below the wireless base station. The other is edge computing in something like a gateway, which we can put in the home or in a factory. With edge computing we can provide PaaS and SaaS.

In the telecom field we always follow ETSI and other international standards, and here I'm pointing to the architecture from ETSI. It's very complicated: in the picture on the left-hand side you can see a lot of interfaces, and they are not clearly defined yet. But in China, 5G, cloud, and edge computing are very popular. A lot of companies, like Baidu, Alibaba, Huawei, ZTE, and other vendors, want to use these technologies. In this slide I list some pros and cons. On the pro side, it is based on the ETSI NFV architecture, and it defines the radio network information API, the location API, the UE identity API, and bandwidth management. Maybe not everyone here knows what that means, so I can give an example.
When we open our phones, you see a 4G icon at the top of the screen. When we tap the Chrome application and search the internet, all the data flow goes through the telecom cloud; you can think of it as something like a private cloud. But in edge computing, who are our customers? In China, they are Baidu, Tencent, Alibaba, and some industrial companies. Because all these vendors are not in the telecom field, they are our customers, so we have to provide APIs like a public cloud does. But the ETSI MEC standard defines something more like a private cloud, so it does not satisfy our requirements. My conclusion here is that the MEC-in-NFV architecture is too complicated and far away from a real product, but we can use the APIs that ETSI MEC has defined.

In the NFV field, which is the telecom side, we all use OpenStack, but in edge computing we want to use Kubernetes. So I tried to refactor the architecture from ETSI MEC. We split it into two fields. One is the NFV field; that is the telecom side, and we don't need to care about it here. The other is the EC field, the edge computing field. There is an edge computing operation management center, something like a portal, which can manage a lot of edge computing clusters and provide the VIM, container, and some third-party APIs. The NFV side only steers traffic into the edge computing data center. So the two parts are built separately.

Why do we want OpenShift? I have listed four reasons. First, OpenShift is the most mature open-source project built on Kubernetes. Before I joined China Mobile I worked at Red Hat for four years, so I'm an ex-Red Hatter and I know it is very stable. Now we are starting to develop a project based on Kubernetes, so we chose OpenShift as our base Kubernetes.
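As a toy illustration of that split (all names here are my own, not from the standard or from China Mobile's code), the operation management center can be modeled as a portal that registers edge clusters and looks up their capabilities:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the refactored architecture: an edge computing
# operation management center that acts as a portal over many edge
# clusters, each exposing capabilities such as VIM, container, and
# third-party APIs. All names are illustrative.

@dataclass
class EdgeCluster:
    name: str
    region: str
    capabilities: set = field(default_factory=set)

class OperationManagementCenter:
    """Central portal (e.g. in Beijing) that manages edge clusters."""

    def __init__(self):
        self.clusters = {}

    def register(self, cluster: EdgeCluster):
        self.clusters[cluster.name] = cluster

    def clusters_with(self, capability: str):
        # Find every edge cluster that offers a given capability.
        return [c.name for c in self.clusters.values()
                if capability in c.capabilities]

ecom = OperationManagementCenter()
ecom.register(EdgeCluster("zhejiang-edge-1", "Zhejiang", {"container", "vim"}))
ecom.register(EdgeCluster("guangdong-edge-1", "Guangdong", {"container"}))
print(ecom.clusters_with("vim"))  # ['zhejiang-edge-1']
```

The point of the portal layer is that the NFV side and the EC side stay decoupled: the center only needs the capability catalog, not the internals of each cluster.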
Second, OpenShift provides at least 10 layers of container security: container host and multi-tenancy, container platform, network isolation, and so on. Third, OpenShift's ecosystem is flourishing. A lot of telecom, banking, and industrial companies are using it, so they can move their applications onto our platform with little effort, and we can build up our own edge computing data center. Fourth, OpenShift always brings amazing features from the open-source community. Since a lot of companies are already using OpenShift 4.0 and we are still on OpenShift 3.x, we have to catch up with the other guys.

Sigma is our edge computing platform in China. We have a lot of experimental environments there, for example in Zhejiang, Guangdong, and Beijing. This is our architecture. The red part is our edge computing operation management; you can think of it as sitting in the center of an area, for example in Beijing, and it can manage a lot of edge computing clusters across China. The green part is our edge computing data center, which can run on 10 or 20 bare-metal machines. We developed some additional modules, for example a cluster management module and a service routing module. The service routing is not like the OpenShift router, because that did not satisfy our requirements, so we developed it ourselves. Here are the third-party capabilities: industry SDKs, wireless capabilities, and network capabilities. These come from the industry side, and it's very complex, so we do it case by case. We are also trying to use 3scale to manage our APIs, because in some industries we have to provide APIs; then we can sell the APIs and earn money, which brings the operator new revenue. The yellow part is the edge computing gateway. It's a very small box, so we didn't use OpenShift there; we will use a lightweight Kubernetes, because the box is very small.
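A minimal sketch of what such a service routing module might do, assuming a simple region-based policy (the endpoints and the policy are hypothetical, not China Mobile's actual implementation):

```python
# Toy illustration of a service-routing module: send a request to the
# edge cluster serving the user's region, falling back to the central
# data center when no edge cluster covers that region. All endpoint
# names and the routing policy itself are hypothetical.

ROUTES = {
    # region   -> edge cluster endpoint
    "beijing":   "edge-bj.example.internal",
    "zhejiang":  "edge-zj.example.internal",
    "guangdong": "edge-gd.example.internal",
}
CENTRAL = "central-dc.example.internal"

def route(region: str) -> str:
    """Return the endpoint that should serve a request from `region`."""
    return ROUTES.get(region.lower(), CENTRAL)

print(route("Zhejiang"))   # edge-zj.example.internal
print(route("shanghai"))   # central-dc.example.internal (fallback)
```

A real module would also consider cluster health and load, but the core idea is the same: routing decisions live outside the default OpenShift router.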
It's like a little box; maybe it could even fit in a cell phone. In OpenShift 3.x we use a lot of the GPU capabilities.

Now I want to introduce a use case: autonomous vehicles with edge computing, which we built in collaboration with a bunch of companies. We talked with Baidu, who run their own autonomous vehicles in Beijing. They provided the requirements, and we built the architecture. At the top is the Baidu cloud; it's like AWS, but Baidu has their own public cloud. Then there is Sigma, the edge computing platform I just talked about. At the bottom of the diagram you can see the 5G network and the wireless network. The workflow is: the video stream is sent to the edge node running Sigma; the AI application running in Sigma processes the video stream in the edge node and sends the result to the ICU, which you can think of as a machine in the car, and sends the key data back to the Baidu cloud, the public cloud. The ICU sends the result to the car through the PC5 interface. This is the architecture we tested in Beijing.

We chose two connectivity options. One is the 5G wireless network; the other is a wired network, something like a VPN we provide to Baidu. They want to balance the two: if you use 5G the price is too high, so they can shift traffic to the wired network. But when we tested it, we found the wired network is too slow and costs a lot of time.

In the experiment environment, Baidu provides the application. Baidu's applications are always containerized and already run on Kubernetes, so they can easily run on OpenShift. The requirement is a bandwidth of 6 Mbps upload stream per camera, plus latency requirements. Why do we want to use edge computing? Because it can provide low latency. The latency of the data flow from the camera to the edge computing server is less than 100 milliseconds.
The latency from the edge computing server producing the result to sending back the response is less than 50 milliseconds, and the pure network latency is less than 2 milliseconds. There are two data centers here: one is the edge computing data center, and the other is the public cloud data center. So the application will be quite complex: it sends the result to the car and sends the key data to the public cloud. In the future there will be a lot of edge computing data centers, so all of these applications will be distributed.

In this experiment environment we chose two street intersections and deployed 26 cameras in total. Here in Barcelona I didn't see any cameras in the street, but in China there are a lot of cameras. So, 26 cameras at two street intersections, and they collect a lot of data. The cameras also need electric power, because they are machines. But when we set up the cameras on the signal lights, who provides the electric power? I talked with the government, and the government provides it, but it costs a lot of money.

Here is the schedule. In February we did the design; now we are testing and debugging, it's still under testing, and next month we will finish the testing. Here are some pictures; you can see the cameras and a signal light.

Now I want to introduce some challenges we met while working on this. There are two: one is storage, the other is the GPU. As I just mentioned, each camera uploads 6 Mbps, which is about 64 GB of disk per day for only one camera. With 26 cameras it will use more than 1.5 TB per day, and Baidu told us we have to keep three months of data, which will be around 150 TB. But as I mentioned, in an edge computing data center there are only a few machines.
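The storage figures above can be checked with quick arithmetic; this sketch uses only the numbers from the talk (6 Mbps per camera, 26 cameras, three months of retention taken as 90 days):

```python
# Back-of-the-envelope check of the storage requirement described above.
# Each camera uploads a 6 Mbps video stream, there are 26 cameras, and
# Baidu asked to retain three months (~90 days) of data.

MBPS_PER_CAMERA = 6          # megabits per second per camera
CAMERAS = 26
RETENTION_DAYS = 90
SECONDS_PER_DAY = 24 * 60 * 60

# 6 Mbps = 0.75 MB/s, accumulated over a day, converted to GB
gb_per_camera_day = MBPS_PER_CAMERA / 8 * SECONDS_PER_DAY / 1000
tb_per_day_total = gb_per_camera_day * CAMERAS / 1000
tb_retention = tb_per_day_total * RETENTION_DAYS

print(round(gb_per_camera_day, 1))  # 64.8  GB/day for one camera
print(round(tb_per_day_total, 2))   # 1.68  TB/day for 26 cameras
print(round(tb_retention))          # 152   TB for three months
```

This is why video compression (or switching to images) matters so much: the retention requirement alone is two orders of magnitude beyond what a small edge site comfortably holds.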
This is huge storage, so we have to use some other technologies to reduce the data at the edge. Why did we choose to send the video stream instead of images? We talked with Baidu, and they said that with video they can use some new technologies to reduce the data flow. But one second of video is almost one MB, which is still a lot over a long time, so we will think about it later and discuss with Baidu whether they can use images instead.

The other challenge is the GPU. In the edge computing data center, the machines are not like the machines in a central data center; they are small. We call the spec OTII, which is defined by China Mobile together with other companies like Baidu and Alibaba. For this experiment we estimated that each server needs at least three GPUs. Three GPUs is OK, because the machine has three PCIe slots, so you can install three GPUs. But the GPUs consume a lot of electric power, so we would need to rebuild our edge computing center. An edge computing center is not like a central data center: it may be near an office building, a home, or a factory, and the electric power is limited. For example, in China an old building may only support 3 kW. If we put a lot of GPUs in the edge data center, it cannot support them, so we have to reduce the resources we use in edge computing. In a central data center we don't meet these problems: we have a lot of space for bare-metal machines, and we don't worry about electric power. But in edge computing we have to think about how to reduce resource usage, and container technology is a better choice. That is another reason why we want to use OpenShift here.

To continue, we are doing a lot of things for the future. Now we are collaborating with several companies who will deploy their application at China State Construction.
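The power constraint can be sketched the same way. Only the 3 kW building limit and the three GPUs per server come from the talk; the per-GPU and base server wattages below are my own illustrative assumptions:

```python
# Rough power-budget check for an edge site in an old building that can
# only supply 3 kW. The 3 kW limit and "3 GPUs per server" are from the
# talk; the wattages below are illustrative assumptions, not measured
# figures.

BUILDING_LIMIT_W = 3000
GPUS_PER_SERVER = 3
GPU_POWER_W = 250          # assumed draw per GPU
BASE_SERVER_POWER_W = 350  # assumed CPU/disk/fan draw per server

server_power = BASE_SERVER_POWER_W + GPUS_PER_SERVER * GPU_POWER_W
max_servers = BUILDING_LIMIT_W // server_power

print(server_power)  # 1100 W per GPU server (under these assumptions)
print(max_servers)   # 2 such servers fit under the 3 kW limit
```

Under even these modest assumptions, a 3 kW building supports only a couple of GPU servers, which is why squeezing more workloads per machine with containers matters at the edge.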
They provide the app, but the application runs on machines, and we only have three bare-metal machines there. So we built OpenShift there and enabled it to provide virtual machines, giving them a virtual machine environment so they can run their application on it. There are also other companies, and a blockchain use case. Next month we will deploy all of this in Zhejiang, in a 4G MEC environment, and later we will move to OpenShift 4.0; it seems we are a little late. In August we will integrate with Akraino, which is a project in LF Edge, the Linux Foundation Edge. Edge computing is very popular in China, but it seems in Europe and the United States it is not yet, while 5G is very popular. So we think 5G and edge computing will bring more valuable things to our industry. Thanks for watching.