Yes, I think most of you have already seen the huge blue exhibition board downstairs, because we are from openEuler. So let me first introduce openEuler. Maybe you haven't seen these slides before, so I will just read the words for you. openEuler is an open source Linux distribution platform. The platform provides an open community for global developers to build an open, diversified and architecturally inclusive software ecosystem. openEuler is also an innovative platform that encourages everyone to propose new ideas, explore new approaches, and practice new solutions. Our community started in 2019. So that's the introduction.

Now we will talk about the infrastructure. You can take a look at the pictures. When we talk about operating systems, many people think of CentOS or Fedora. So we tried to build the infrastructure for our own operating system, and we found there are so many excellent and mature projects that can be used to set up the whole infrastructure for our community. Take a look at these. We use the Open Build Service (OBS) to build and distribute our packages. We also use GNU Mailman, which maybe you have heard of before, as our mailing list service. You may also notice a bunch of projects that come from the CNCF. We have NGINX Ingress, which is used as our reverse proxy server and as a simple API gateway. We also use Argo CD. Yes, Argo CD is a nice tool; we use it to synchronize our applications, the changes and the code, into our production environment. We also use Vault from HashiCorp as our sensitive data backend, together with another small tool that synchronizes all the sensitive data from the Vault backend into our Kubernetes clusters, where it can be consumed in configuration files. And we also use Copr. Copr comes from Fedora; Fedora uses it as its third-party package build service, and we have enhanced Copr in several ways to make it run better in Kubernetes clusters. So, a lot of projects. Yes, and we also have Prow from the CNCF to handle the pull requests.

For the next part we will give you an overview of our infrastructure. Actually, there are two tiers. For tier one, we use Terraform files to create and manage all of the base IaaS resources, including the VPCs, the VMs and the CCE clusters. The second tier is all about the Kubernetes clusters. We have two different kinds of clusters in our environment. The first one is the master cluster; we use the master cluster for the DevOps process of all the applications. As you may see, there are several key applications: the Vault backend, the Elastic Stack and Grafana. Actually, all of our applications are maintained by Argo CD: once we have built a new version of an application image, Argo CD synchronizes the change into the different clusters. We have about 155 applications running, backed by about 300 VMs, and currently we have about ten thousand developers worldwide. So it's a huge infrastructure. All of the applications are deployed into our business clusters directly.
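To make that synchronization idea concrete, here is a minimal sketch in Go of the kind of sync loop that Argo CD automates for us: it polls a Git repository for the desired state and applies the rendered manifests to a cluster. This is only a toy under stated assumptions, not how Argo CD is implemented; the repository path and manifest directory are hypothetical, and it assumes git and kubectl are on the PATH. Argo CD itself does far more (diffing, health checks, rollback, multi-cluster).

    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    // run executes a command and logs its combined output.
    func run(name string, args ...string) error {
        out, err := exec.Command(name, args...).CombinedOutput()
        log.Printf("%s %v: %s", name, args, out)
        return err
    }

    func main() {
        repoDir := "/srv/gitops/infra"      // hypothetical clone of the config repo
        manifests := repoDir + "/manifests" // hypothetical rendered YAML directory
        for {
            // Pull the latest desired state from Git.
            if err := run("git", "-C", repoDir, "pull", "--ff-only"); err != nil {
                log.Printf("git pull failed: %v", err)
            } else if err := run("kubectl", "apply", "-f", manifests); err != nil {
                // Apply the desired state; kubectl here is a stand-in for a real reconciler.
                log.Printf("kubectl apply failed: %v", err)
            }
            time.Sleep(3 * time.Minute) // similar to Argo CD's default polling interval
        }
    }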
So we will go to the next part: cloud native DevOps. We will share some experience and best practices about how we deploy applications in a cloud native way. There are four parts. The first part is about deployment standardization.

Yes, of course, if you want to deploy applications into Kubernetes clusters, you have to containerize your applications. So there are several rules. The first one is about containerized applications: one process per container. We needed to upgrade existing projects to make them run in Kubernetes clusters. The second one is about Kubernetes deployment semantics: we suggest our developers use Helm charts and kustomize to render and deploy the YAML files. The next several rules cover standard log output and the handling of configuration and sensitive data: we use the Vault backend and synchronize all of the data with our own tool, which is called Secret Manager. We also ask our developers to push images to our trusted image repo. Actually, we don't use Docker Hub, because we have tools around our private repo that scan for CVEs and perform some checks before deployment. We also ask our developers to use trusted base images, for example the openEuler base images, because the packages in them can be easily upgraded once a CVE has been fixed. And the last rule is to expose a health check endpoint. Yes, Kubernetes needs to restart an application by probing its health check endpoint; see the sketch after this part. So this is the first part.
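As an illustration of that last rule, here is a minimal sketch (not our actual code) of a Go service exposing an endpoint that Kubernetes liveness and readiness probes can hit; the path and port are just conventional examples.

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // /healthz is the conventional path probed by Kubernetes
        // livenessProbe/readinessProbe httpGet checks.
        http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
            // A real service would verify its dependencies (DB, queue) here
            // and return 503 when they are unavailable.
            w.WriteHeader(http.StatusOK)
            w.Write([]byte("ok"))
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }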
The second part is about configuration separation. There are three different roles in the whole process of deploying an application to our production environment. The first role is the application developers, who are responsible for writing the raw Kubernetes YAML files, which do not contain any deployment details, for example the storage class names and the domain names. The second role is our DevOps engineers, who add more details to the YAML files through kustomize overlays or Helm values: the storage class names, as well as the configuration for our production environment, for example the real database connection URL, the password and the username. The third role is our infrastructure maintainers, who are responsible for reviewing all the changes to all the applications in our community. They merge the pull requests, and then everything is handed over to Argo CD, which synchronizes the changes. So that's part two.

Parts three and four are about GitOps and automation. I think GitOps is a great workflow; we rely heavily on Argo CD and it gives us several benefits, including increased productivity, improved stability, higher reliability and strong security guarantees. I think some of you have already used the GitOps best practices in your environment. The last part is about automation. Actually, there are three different cases for automation, and the first one is the end-to-end pipeline for every service. You can take a look at this picture: this is our website pipeline, the pipeline for our static websites. Once a developer from the community submits a new PR, we create a temporary pod which reflects all the changes and the actual look of the website. Our maintainers review the changes and get the PR merged. After that, everything is automatically triggered through Jenkins and Argo CD: it builds the new version of our images and pushes them to our private image repository, and we also have tools and tasks to scan the images. Then Argo CD uses the new images to synchronize the changes into our production environment. We also have post jobs that purge all of the website caches and notify the website maintainers. So those are the best practices we have learned while setting up the whole infrastructure for the openEuler community.

Now we are heading to the next part. Here I will share some applications that we have been working on to make things run better in Kubernetes clusters. Some of the projects are based on existing open source projects, while others are based on ideas we learned from other open source projects. The first application is the bot. If you have ever been in an open source community, you must have noticed that there is a bot application used to connect all the different systems to the pull requests. For example, when a new developer submits a PR, the maintainers use the bot application to check the code, to perform the CLA checks, to assign reviewers, and to post comments on the pull request. Our application is Yabot; Yabot is short for "yet another bot", and it's based on the ideas of the Prow project from the CNCF community. When we first investigated the existing bot applications, we found the Prow project from the CNCF, and it's a nice project: it has about 30 plugins which could be directly used in our environment. But the Prow project still has some issues and performance bottlenecks: once one of the plugins crashes, the whole Prow application gets restarted, and when a huge number of messages comes from the code repository, it sometimes loses events. So we evolved the Prow design into our Yabot and improved it in several aspects. The first is robustness: now every single plugin runs in its own container and is easy to replicate, and if one of the plugins crashes, it will not affect the whole application. The second improvement is about high throughput: we use Kafka to receive the messages from the code platforms and to deliver the messages to the backend plugins based on the capabilities each plugin declares. We also have developer SDKs for Python, Java and Go, so our developers can build their own plugins as quickly as possible. With this change, a newcomer in our infrastructure SIG can have a new plugin running in half a day.
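To sketch that Kafka-based dispatch idea (the real Yabot internals may differ), here is a hedged Go example using the github.com/segmentio/kafka-go client: events from the code platform land on a topic, and each plugin's consumer group picks up only the event types it declares it can handle. The broker address, topic and group names are made up.

    package main

    import (
        "context"
        "encoding/json"
        "log"

        kafka "github.com/segmentio/kafka-go"
    )

    // Event is a simplified code-platform webhook event.
    type Event struct {
        Type string          `json:"type"` // e.g. "pull_request", "comment"
        Body json.RawMessage `json:"body"`
    }

    func main() {
        // Each plugin runs in its own container with its own consumer group,
        // so a crash restarts only that plugin while Kafka retains the events.
        r := kafka.NewReader(kafka.ReaderConfig{
            Brokers: []string{"kafka:9092"}, // hypothetical broker address
            GroupID: "yabot-cla-plugin",     // one group per plugin
            Topic:   "code-platform-events", // hypothetical topic name
        })
        defer r.Close()

        capabilities := map[string]bool{"pull_request": true} // declared event types
        for {
            m, err := r.ReadMessage(context.Background())
            if err != nil {
                log.Fatalf("read: %v", err)
            }
            var ev Event
            if err := json.Unmarshal(m.Value, &ev); err != nil || !capabilities[ev.Type] {
                continue // skip events this plugin does not handle
            }
            log.Printf("handling %s event at offset %d", ev.Type, m.Offset)
            // ... plugin-specific logic (CLA check, labeling, etc.) goes here
        }
    }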
Okay, I will speed up. The second application is the meeting bot. Having meetings is quite an essential requirement for the community, because we have tens of meetings per week, so we created our own version of the meeting bot. Actually, the architecture of the meeting bot is quite simple, and most of the work is done by Kubernetes jobs. We have our operator, which subscribes to the events that come from the meeting platforms, including Zoom, WeLink and Tencent Meeting. We also watch the meetings booked from our own apps and the website. Once an event comes, we use the Kubernetes jobs to create the meeting, to download the video, to send an email to our SIG owners, as well as to publish the videos to our social platforms, including Bilibili and YouTube. You can take a look at this picture: this is the archive of all the videos we have on Bilibili. And the next one is the UI in WeChat: our SIG maintainers use WeChat to book meetings and to notify the developers.

Okay, the next one is about the signing service. Because we are a community for operating systems, signing packages and signing binary files is one of the requirements of our community. At first, we used obs-sign from openSUSE to do the signing, but we found that there are several aspects of obs-sign that can be improved. The first one is security, because all the private and public keys are stored directly on the local machines, so it's possible that the PGP keys or the X.509 keys get leaked by the bad guys. The second one is performance: when we used obs-sign in our production environment, we found that it could take 5 or 6 minutes to sign a single package, which is quite slow. So based on these observations, we created our own sign service, which is named Signatrust. Signatrust is a new word, a combination of signature, trust and Rust. Yes, it's based on the Rust language. We improved the sign service in several aspects. The first one is the end-to-end security design: in our design, the sensitive data is encrypted and decrypted before being stored in the databases, using external KMS systems such as Huawei Cloud KMS, and the signing process runs in a TEE, actually Intel SGX or Arm TEE. The next point is that the whole project is written in pure Rust, and we use mutual TLS for the communication between the client and the server. The second improvement is about high throughput: with our design, the data servers can be simply replicated, and we use gRPC streams because the communication between the client and the server is an asynchronous task, so it performs much better than the obs-sign project. You can take a look at these pictures: no matter whether obs-sign is configured with the GPG agent or with PGP libraries, we got much better performance compared to obs-sign. The last point is about the supported artifacts: we support different kinds of binaries, including RPM, ISO, kernel module files and EFI files, and we are also going to support container images and WSL images.
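To illustrate the mutual TLS point (Signatrust itself is written in Rust; this is just a sketch of the same idea in Go's standard library), here is how a client can present its own certificate and verify the server against a private CA. The file paths and server address are hypothetical.

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "log"
        "os"
    )

    func main() {
        // Client certificate and key, issued by the internal CA (paths are examples).
        cert, err := tls.LoadX509KeyPair("client.crt", "client.key")
        if err != nil {
            log.Fatal(err)
        }
        // CA bundle used to verify the sign server's certificate.
        caPEM, err := os.ReadFile("ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)

        cfg := &tls.Config{
            Certificates: []tls.Certificate{cert}, // presented to the server (mTLS)
            RootCAs:      pool,                    // trust only the internal CA
        }
        conn, err := tls.Dial("tcp", "sign-server.internal:8443", cfg) // hypothetical address
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        log.Printf("mTLS established, peer: %s", conn.ConnectionState().PeerCertificates[0].Subject)
    }

On the server side, the same idea applies in reverse: the server's tls.Config would set ClientAuth to tls.RequireAndVerifyClientCert with a ClientCAs pool, so unauthenticated clients cannot reach the signing API at all.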
Okay, next part. I have only one minute. MUX Studio is for our developers. Most of our developers come from China, and we have developers from universities and high schools, so it's a common case that they have trouble getting a real environment, especially an Arm environment. To fix that, we developed MUX Studio. MUX Studio is kind of an interactive terminal playground in the browser. Once a developer opens MUX Studio, he gets his own instance of openEuler, and he can directly use the terminal to write code and build packages. The service is quite simple and fast: once the developer opens the browser, it takes about 20 seconds to get his own instance of openEuler. We also support customizing different images, and because we use both Docker and LXD at the backend, we support different kinds of instances, including application containers and system containers, as well as virtual machines.

I have two more pages. Okay. The next part is about EUR. EUR is short for openEuler User Repository. Actually, it is based on the Copr instance from the Fedora project, but we have upgraded the project in several aspects. First, we upgraded the components to make them cloud native, so now all of the components run in the Kubernetes environment. Also, we have written our own version of the task scheduler, which can call the API of our CCI clusters to dynamically create the pods for our tasks. And EUR is highly integrated into our package development process. Okay, that's all for EUR.

The next one is about EulerMaker. Yes, we first tried to use the OBS project from openSUSE to build our packages and our images, but currently we are designing our own new package build and distribution system, and EulerMaker is in several ways much better than OBS. The first one is about dynamic workers: we support VMs and containers, which gives much faster builds compared with OBS. The second one is the job queue per user, with priorities. It's a common case when you use OBS that individual developers' builds get blocked, so we designed a new queue system where every user has their own queue. We also use a global build cache, which speeds up the whole building process: in our tests it gives about 30% higher build performance for a single package on average. And the last one is about how we build images. I think Fedora and OBS use their own tools, named osbuild and Kiwi, to build the images. We also have our own tools, where the image generation process can be highly customized, including the image format, the setup process and additional files or packages.

And the last part is about our future plans. These are our top three tasks for this year. The first is about SSO: because we have so many applications in our community, we are trying to connect all the applications to our SSO identity platform. The second one is about cloud IDEs: in order to support the development and test process of our developers, we are trying to set up a cloud IDE for the community. And the last one is about the message center: because we have different applications and different messages, we are trying to connect all the messages with CloudEvents. With that change, our developers can get the messages coming from OBS, Jenkins, the code repositories and many other applications in one place. So that's all. Sorry, is there time for questions? You can send me an email or find me outside of this meeting if you have any questions. So thank you, that's all.