Hello everyone, and welcome to this session. We are really honored to be able to speak at the Open Source Summit North America, and we are very sorry that we had to change this to a virtual talk at the last minute because of visa problems. So let's start. Our topic today is openEuler: bringing new opportunities to the diversified computing era. It is presented by me and my colleague Zhong Jun, and we are from the openEuler community. Today's session is divided into three parts. First we will look at what the openEuler community is. Then we will go through some technical details about the community and some of the ideas behind them. Finally, Zhong Jun will take you behind the scenes to see what kind of infrastructure we use to run the community at this large scale. Before we start, let us spend a few minutes introducing ourselves. My name is Zhenyu Zheng. I'm from the openEuler community, and I serve both as a SIG maintainer and as the community manager of the openEuler community. Here is my email; please feel free to contact me if you have any questions or are interested in the openEuler community. Hi everyone, I'm Zhong Jun; you can just call me Jun. I have ten years of experience in open source development and management, and I have contributed to OpenStack, Kubernetes, the CHAOSS community, and so on. Currently I'm an openEuler Infrastructure SIG maintainer and also a maintainer in the CHAOSS metrics models working group. Let's clarify the problem we are trying to solve. There are some tough challenges for the OS industry. The first is that the rapid development of chipsets brings big challenges for OS development. Chipsets have been booming over the last five years, and we can forecast that more and more new types of chipsets will appear in the future. So the OS needs more aggressive innovation to keep up and to run faster.
At the same time, OS images built for new systems tend to become bigger and bigger, so another question is how to reduce the size and make the OS lighter and faster. There is also a gap between the server world and the embedded world: Linux is divided into server OSes and embedded OSes, two separate worlds that don't connect very well, and that is another challenge. So we are doing some aggressive innovation and modification to resolve these challenges. openEuler aims to unleash diversified computing power for enterprise innovation. As you can see in the picture, one openEuler-based system supports all devices, and one OS supports all applications. How does openEuler handle the new scenarios? We have adopted the following release model. We publish an innovation release twice a year, in March and in September. You can put new features into those innovation releases; we try to provide an environment to do the verification work there. Every two years we publish an LTS (long-term support) version; we make sure of its quality and maintain the LTS for six years, so that companies can adopt it in their major products. So if you want to try something new, for example to verify a new chipset, you can join an innovation release cycle very early, do the verification there, and then bring the result back to your LTS version. In this way, we adopt a more aggressive release cycle to match the aggressive pace of chip development, because landing a big new feature in the upstream kernel is not easy and can take a very long time. Then we have another important point: we share the same code base across architectures. We reached this milestone with our second LTS release, openEuler 22.03.
This is a real milestone version. In this version, no matter which server architecture you use, x86 or otherwise, the different builds are created from the same code base. So if you are a developer, you use the same API everywhere, and if you are a device producer, your driver faces the same version everywhere. In this way we reduce the gap when adopting a new architecture. Okay, that is the general picture of how we handle the chipset issue, but it may not look aggressive yet. The openEuler community started in 2019. In 2020, the openEuler community committee was established. In 2021, the openEuler community was donated to the OpenAtom Foundation. After three years of development, the openEuler community has grown rapidly: the number of openEuler contributors has increased four times, and the number of openEuler version downloads has increased thirty times. Growth will be even greater in 2023. Currently, more than 800 community partners have joined us for community development, coming from various industries such as processors, servers, software, and so on. Next, let me hand over for the second part of the content. Thank you, Zhong Jun. Let's get to the second part of this session. In this part, we would like to share some technical details about openEuler, which might give you some idea of why openEuler is growing so fast in China. The first reason openEuler has grown so fast is that it is not just a Linux distro; the community is actually much more than that. It is an innovation platform. Developers in this community work on the full software stack to fulfill user requirements. Let's take our latest LTS version, 22.03, as an example. In this version we use the 5.10 kernel, and the major contributor to openEuler, Huawei, has been a top-two contributor to the Linux kernel for quite a few kernel versions. openEuler has also done a lot to optimize the OS for multi-architecture computing.
There are a lot of different types of chips in China. For example, we have done some kernel optimizations for large-scale Arm clusters, a user-mode protocol stack, and some HCK decoupled control, which can provide 5% to 20% performance improvement. Everything mentioned so far is still what a normal OS distro does; let's see what else we have done. openEuler developers have also created projects to overcome the challenges they faced with existing projects. For example, traditional Docker has too much overhead and container spawning is slow in a large cluster, so openEuler developers and users created a project called iSulad to provide much lower overhead and a much lighter weight runtime. The iSulad project has already joined the CNCF landscape. They also created the StratoVirt project in the virtualization field for similar reasons. As more and more users started using openEuler, how to manage and optimize whole clusters became a problem users care about, so openEuler also incubated O&M projects using the latest AI technologies: A-Tune is a project that provides AI-assisted tuning for user services, and A-Ops provides AI-assisted cluster operations such as security scanning, monitoring, live patching, and much more. openEuler also has many innovations in security, programming languages, development toolchains, and more. I'm not going to go through all the details here; you are welcome to find them on the openEuler community website if you are interested. But I guess you already got my point: the openEuler community is not just a Linux distro. Now let's look at some of the other ideas behind the openEuler community. As mentioned, openEuler has been aimed at enterprise usage since its founding. So let's imagine this: suppose you are a cloud admin in charge of 100,000 machines. In today's world, you will probably face a huge number of different kinds of hardware.
Each of them has a different architecture or different features, and there can be over 14 new CVEs every week; believe me, that number will only get bigger. And under many OS strategies, the OS releases updates every month or even every week. I have to say we are not judging those releases; we are just taking the facts as an example. If you are managing a cluster like this, your face will probably look like this cat's every day, because all of the actions mentioned above require kernel updates. And as more and more different chip versions come onto the market, the kernel faces greater challenges, because the Linux kernel is highly coupled and hard to change. We can say that changing anything means changing everything: you have to recompile it all. That gets really complicated if you are managing a large cluster with a business running on it. In openEuler, we came up with an idea to make everything in the kernel a service. We call it kernel as a service, or KaaS. With this idea, we are redesigning the driver framework in the kernel to make drivers more isolated, and we plan to use eBPF technology to make modules such as the network services, the storage services, and the scheduling services flexible and reconfigurable. By doing this, we will provide a fundamental capability for our next step. From the content presented so far, you might already think openEuler is aggressive, in releases, in the kernel, and in innovation projects, and we have introduced the idea of kernel as a service. With these fundamental abilities, we thought: why not become even more aggressive? The idea is: why not create a universal OS platform that is usable for data centers, edge, and embedded alike?
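To make the maintenance burden just described concrete, here is a back-of-the-envelope calculation based on the numbers above (14 new CVEs a week, 100,000 machines). The worst-case assumption that every CVE touches every node is ours, purely for illustration:

```shell
# Hypothetical worst-case patch burden for the fleet described above.
CVES_PER_WEEK=14
MACHINES=100000
CVES_PER_YEAR=$((CVES_PER_WEEK * 52))        # 728 new CVEs per year
PATCH_EVENTS=$((CVES_PER_YEAR * MACHINES))   # if every CVE touched every node
echo "${CVES_PER_YEAR} CVEs/year -> up to ${PATCH_EVENTS} patch events/year"
```

Even if only a fraction of CVEs require a kernel update, at this scale the cost of "changing anything means changing everything" adds up quickly, which is the motivation for isolating kernel services.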
If we want to do that, we have to provide capabilities not only in the OS itself, in the kernel and other components, but we also need a powerful and suitable infrastructure behind the OS and the community in order to build the OS we want to create. Okay, let's first look at the current status of server and embedded OSes and the gaps between them. In the current world, OSes for the embedded side and the server side are quite different; they care about different things. Embedded and IoT care more about things like real-time behavior and image size. A typical OS there is Wind River's, and a typical build tool is Yocto. On the other hand, on the server side, the Linux kernel is the dominant basis. On this side, we currently care about things like virtualization and how to release compute power on different architectures, as we just mentioned. The main tools we use to build the OS are OBS, from the openSUSE community, and tools like Koji. With this status quo, the industry is divided into two parts, and because of this, everyone in the OS industry has to pay double effort on adaptation work, especially the chipset vendors: if they want to compete in both scenarios, they have to do drivers and optimizations for both OS types, since the two have different packaging techniques and different dependency trees. Also, applications cannot be deployed freely across different machines and OS platforms, again due to the different build toolchains and dependency trees. But when we step back, the main idea of an operating system is to hide the hardware details and provide a uniform platform for applications and users. So at that level, we do think it would be much better if we were able to create a unified OS. At a technical level, every OS is just a collection of packages.
With a toolchain to build all those packages, and the OS itself, different OSes can be thought of as different compositions of packages. So we came up with an idea: openEuler can be treated both as a Linux distribution and as a compose system that makes OSes for different scenarios. We call it the tenon-and-mortise structure, after the joint used in traditional Chinese buildings. As in the graph on the left, the base structure is solid, reliable, and also small; think of it as the key components of an OS, like the kernel, the drivers, and the executor. With this base structure and a well-designed toolchain, we can compose different types of buildings. Just as traditional Chinese builders could raise the Forbidden City, and also a small pavilion, from the same basic structure, we can use the same basic components to build OSes for different scenarios. We have already built some OS releases with these ideas and tools, and they already have very large-scale deployments. Zhong Jun will also give some details about the build tool, called EulerMaker, in the next part. So with all the information I have just provided, we may now have a new understanding of what openEuler is. It is actually not only an OS distro; it is an OS platform that builds different OSes. The openEuler community has optimized OS components into lighter, modular designs that can run on multiple hardware platforms, and then uses these components to create tailored OS solutions for different scenarios. Okay, that is all for the technical part. I hope you have a better understanding of what the openEuler community is, and of our ideas and thoughts when starting this community. Now Zhong Jun will give you more details about the infrastructure behind this community. Behind the scenes, there is a lot of work to do.
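The "OS as a composition of packages" idea can be sketched with a toy example. The package names and the two "variants" below are invented for illustration; they are not actual openEuler or EulerMaker package lists:

```shell
# Toy model: two OS variants composed from one shared base set of packages.
printf '%s\n' kernel glibc busybox | sort > base.txt
printf '%s\n' kernel glibc busybox systemd openssh qemu | sort > server.txt
printf '%s\n' kernel glibc busybox rt-patches | sort > embedded.txt

# Packages the two variants share -- the reusable "tenon-and-mortise" base:
comm -12 server.txt embedded.txt
```

The server and embedded variants differ only in the packages layered on top; a compose tool's job is to resolve and assemble such lists (plus dependencies) into a bootable image.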
For example, how to manage issues from community developers, how to manage code merging, how to manage package imports. So we need our own infrastructure for the openEuler community. Let's take a look at the openEuler infrastructure landscape. There are many excellent and mature projects that can be used to set up the whole infrastructure for our community. For example, we use openSUSE's OBS, the Open Build Service, to build and distribute our packages. We also use GNU Mailman, which you may have heard of, as our mailing-list system. You may also notice a bunch of projects that come from the CNCF. Yes, we have the NGINX ingress controller, which is used as our reverse proxy service and a simple API gateway. We also use Argo CD; this is a nice tool, and we use it to centralize our application deployment. We use Vault from HashiCorp as our sensitive-data backend, and we use another small tool to sync all the sensitive data from the Vault backend into our Kubernetes clusters as secrets or configuration files, along with several other components. So we reuse many projects that already exist in other good open source communities, but we still contribute fixes, reviews, and fix PRs back upstream, and we also build our own projects for the openEuler community. As you can see, there are many applications we have to run, so how do we develop them in a cloud-native way? There are four parts. The first part is about development standards. We need to upgrade existing projects to make them run in Kubernetes clusters, following several rules. The first rule is one process per container. The second is to follow Kubernetes deployment semantics.
We suggest that our developers use Helm charts and Kustomize to manage the deployment YAML files, output standard logs, and keep configuration external, because we centralize all of that data with our secret-manager tool. We also ask our developers to follow strict rules when pushing images. We actually don't use Docker Hub, because we have our own private registry, which scans for CVEs and performs checks before deployment. We also have trusted base images: developers are asked to use openEuler-based images, because then a package with a fixed CVE can be easily upgraded. The last rule is to expose health-check endpoints. We need to be able to restart applications based on their dedicated health-check endpoints; it makes our applications healthier. The second part is about configuration separation. There are three different roles in the whole process of shipping an application to our production environment. The first role is the application developer, who is responsible for writing the Dockerfiles and basic YAML files, which do not contain any deployment details; for example, they don't need to care about namespaces or other Docker or Kubernetes resources. The second role is the deployment engineer, who adds more details to the YAML files, such as namespaces or Helm settings. The third role is the infra maintainer, who is responsible for reviewing all the changes to all the applications in our community and for merging the pull requests. By separating these three steps and their owners, serving applications becomes much easier. Some of our projects are based on existing open source projects, while others are based on ideas we learned from other open source projects. There are two such cases: the first is the one described above, and the second is the robot application we developed.
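The health-check rule above can be illustrated with a small sketch. The `wait_healthy` helper and the `/healthz` path are our own invention, not part of the openEuler infrastructure; in Kubernetes the same job is normally done declaratively with liveness and readiness probes rather than a script like this:

```shell
# Minimal sketch: poll an HTTP health endpoint before treating a service as ready.
wait_healthy() {
  url=$1; tries=${2:-5}
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS --max-time 2 "$url" >/dev/null 2>&1; then
      echo healthy; return 0
    fi
    i=$((i + 1)); sleep 1
  done
  echo unhealthy; return 1
}

# Example: an unreachable endpoint reports unhealthy once the retries run out.
wait_healthy "http://127.0.0.1:9/healthz" 1 || true
```

A platform that knows each application's health endpoint can restart only the unhealthy instance, which is exactly why exposing such endpoints is made a hard rule.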
First of all, we investigated the existing robot applications. We found Prow, a CNCF project; it is quite a lively project, with about 30 plugins that could be directly used in our environment. But Prow still had some issues and performance problems for us. The first problem is that once one of the plugins crashes, the whole Prow application gets restarted. The second is that when a huge burst of messages comes from the code repository, Prow sometimes loses events. Because of those two problems, we evolved the Prow design into our own robot; its name is Yabert. We improved it in several respects: in Yabert, every single plugin runs in a separate container, so it is easy to replace, and if one of the plugins crashes, it does not affect the whole application. Thanks to Zhong Jun for the excellent content. Now let me bring you to another new project in the community that we are currently working on. It is called signatrust, and it aims to provide a secure and efficient solution for signing Linux packages. It is based on the idea of obs-sign from openSUSE, with many more optimizations added according to our infrastructure team's daily usage and requirements, so it provides a more comprehensive, efficient, and quantifiable solution for package signing that is better suited to our scenarios. We have done some performance tests, and according to the results, the new design is much more efficient; we are already using it in the openEuler infrastructure. I'm not going to go through all the details, but you can contact our infra team if you would like to know more. So if you want to know more about the openEuler community, feel free to engage with us in the following ways: on the left side are our social media accounts, and on the right side are our official website and ways to download and try out openEuler.
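signatrust's actual interface isn't shown here, but the cryptographic primitive behind any package-signing service can be sketched with plain OpenSSL. The file names are made up for the example:

```shell
# Sketch of the detached-signature workflow behind package signing services.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out signing.key 2>/dev/null
openssl pkey -in signing.key -pubout -out signing.pub

echo "fake package payload" > pkg.rpm                     # stand-in for a real package
openssl dgst -sha256 -sign signing.key -out pkg.sig pkg.rpm      # signing side
openssl dgst -sha256 -verify signing.pub -signature pkg.sig pkg.rpm   # client side, prints "Verified OK"
```

A signing service's real work is around this primitive: keeping the private key out of build workers, batching sign requests efficiently, and distributing the public key to every client, which is where a purpose-built tool earns its place over raw obs-sign.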
So that's all for this session, and thank you all for your time and patience. I hope this was useful for you, and thanks again to the Open Source Summit for giving us this opportunity. Thank you.