So, I am very happy to be here to give a speech about openEuler. The topic is: openEuler brings new opportunities to diversified computing. I'm Rishu, a member of the technical committee of the openEuler community. OK, so the first question: when we talk about openEuler, somebody may ask, is it just another Linux distribution? The answer is no. So let's clarify what problems we are trying to solve. Personally, I think there are three tough challenges for the OS industry. The first one is that the fast development of chips brings tough challenges for OS development. Chips have been booming in the last three or four years, and we can forecast that more and more new types of chips will appear in the future. The second one is that the OS needs more aggressive innovation to become lighter and faster. Personally, I think that nowadays the OS, especially Linux, becomes bigger and bigger and heavier and heavier. Our idea is to reduce its size and make it lighter and faster. The last problem, I think, is the gap between the server world and the embedded world, because Linux is divided into server systems and embedded systems, two worlds that are not connected well. That is another kind of challenge. So what is the obvious way to resolve this? I think the one word is: aggressively. We will do some aggressive innovation and modification to resolve these challenges. OK, so I think Linux has been the most successful OS of the last several decades, spreading from the server and cloud to edge computing and embedded systems. It has been very successful in the industry. Everything looks fine, but is it really so perfect? I think that is still a question for us.
So the first one is the chip challenge: how can the OS keep up with the chips? If you look back at chip development maybe five or ten years ago, chips evolved slowly, and the OS just picked them up, because that was enough. But more and more chips have appeared in recent years. Suppose you are a chip vendor; what kind of difficulty will you face? It is easy to understand. You bring out the chip and push its features to the upstream kernel, and the upstream kernel, maybe, is not that easy; it may take half a year to a year. If the upstream kernel accepts the feature, then the OS vendor takes the feature downstream and builds the OS, which may take another half a year, and proving the OS to the customer may take another year or so before it is mature. So it takes around one and a half to two years in total. But generally speaking, a chip has a three-year life cycle. That means half of the chip's life cycle is spent on this kind of enablement work before the customer can even access it. And consider that the kernel evolves very quickly, 4.x, 5.x, 6.x, and the OS vendors also have a lot of distributions; there are many OS vendors, right? So this problem multiplies between the chip vendors and the OS vendors. In addition, consider the growing number of chip architectures. If everything were x86, it would be fine, not so complicated, but now more and more ARM chips have come into the market, especially for the server and the cloud, and more and more companies are adopting ARM servers. Additionally, a very shining star, RISC-V, has appeared; especially in China, I guess there are maybe fifty companies, mostly small or medium ones, producing this type of chip.
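The timeline arithmetic above can be sketched as a quick back-of-the-envelope calculation. The numbers below are the rough figures from the talk, not measured data:

```python
# Back-of-the-envelope sketch of the chip-enablement timeline described in the talk.
# All figures are the talk's rough estimates, not measured data.
upstream_years = 1.0         # getting the chip's features accepted into the upstream kernel
distro_years = 0.5           # OS vendor picks the feature up and ships a release
chip_lifecycle_years = 3.0   # typical chip life cycle cited in the talk

enablement_years = upstream_years + distro_years
fraction = enablement_years / chip_lifecycle_years
print(f"~{enablement_years} years of a {chip_lifecycle_years}-year life cycle "
      f"({fraction:.0%}) pass before customers can use the chip")
```

With the talk's numbers, roughly half of the chip's life cycle is consumed before customers can use it, which is the motivation for the more aggressive release model described next.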
So when we consider all of these things together, the OS and chip landscape becomes a big mess; it gets very complicated. So how can we handle this, at least to some degree? In openEuler we have adopted the following approach. We release a version twice per year, one in March and another in September. We call these the innovation releases. You can put immature, bleeding-edge features into those releases and try them out, and we provide an environment to do the verification work. Then, every two years, we release an LTS, a long-term support version, which helps companies: we guarantee its quality and we maintain the LTS for six years, to ensure companies adopt a mature product. That is the model. So if you want to try something new or verify a chip, you can go very early into the innovation release cycle, do the verification, and then come back to the LTS version. So first, we adopt a more aggressive release cycle to match the aggressive pace of chip development. Secondly, for new architectures, especially ARM and RISC-V, we adopt a very aggressive strategy for new features. A new feature, especially a big one, is not easy to land upstream; it can take a very long time for, say, the kernel or glibc to accept it. But openEuler opens the door to these features, to promote the adoption of the new architectures. And then we have another very important point: we share the same code base. We have released two LTS versions so far, and in the second LTS, 22.03, which is a real milestone version, no matter which chip you are using, x86, ARM64, or RISC-V, they share the same code base. That means one code pool creates the three kinds of versions. So if you are a developer, you just keep using the same API across all of them.
If you are a device producer, your driver faces the same kernel version everywhere. So this should reduce the friction of adopting the new architectures. OK, so that is the general picture of how we handle the chip issue. But it does not look so aggressive, right? It is a fairly regular approach. OK, let's move on. We have only considered general CPUs so far, and that is already complicated, but more complication is coming, because we have more and more GPUs, DPUs, TPUs, XPUs, all kinds of PUs. How do we handle those? And they evolve very, very quickly. For example, for DPUs in China, I guess there are over ten startup companies doing DPUs, each very different and with a very different SDK. Also, an OS is very complicated: every week we receive over forty CVE reports, vulnerability issues. How do we handle them? Each time, you have to update the kernel and update the OS in the cloud, where you may have millions of machines. The only way out is to make the kernel more flexible; it cannot stay fixed. If you consider the kernel, it is the interface between the hardware and the user space and the developer. That is the traditional way we think about it. But why can we not handle both the fast chip evolution and so many new chip types? The root cause is that the kernel is highly coupled and essentially unchangeable. Unchangeable does not mean you cannot adjust it at all; you can pass some parameters into the kernel to tune it, but that is very slight, it cannot change much. Generally speaking, when you change something in the kernel, you have to compile a new kernel and then deploy it again across the cloud. So changing anything means changing everything. That is the problem we face. So here we have a new idea: is it possible to make the key components of the kernel more modular and pluggable? For example, we know that nowadays the kernel has adopted the eBPF mechanism.
eBPF is very good, but in the beginning it was only used for networking. Now it has expanded to many more areas, though most users still limit it to networking. But it is a very good mechanism, because you can write a policy in user space, compile it, and inject it into the kernel, and the kernel will run that policy. It is a perfect design: the policy is separated from the framework. The framework is the framework, the policy is the policy; the framework only loads and runs the policy. So can we borrow this idea? Is it possible to modify or redesign the kernel to follow this pattern? That would mean the kernel only runs the framework, and we can write different kinds of policies in user space, compile them, and inject them into the kernel, keeping the kernel flexible, so you can actually change things. Another idea, shown on the left of the slide, concerns drivers. In Linux, especially in the kernel, we have many kinds of drivers. Drivers are very complicated and not easy to migrate. But is it possible for us to build a new driver framework, so that you can write a driver once and run it on different kernel versions? That is also one of the ideas. If we can finish these two steps, we can handle some of the difficulties we are facing. That is the framework we are building; we call it kernel as a service, KaaS. For the first step, I expect we can release the first prototype by the end of this year, so you are welcome to come to openEuler and see it. OK, so you can see the idea: to make the kernel more lightweight and streamlined. But an OS is not only the kernel; we have a lot of other things, for example the virtual machine level and the container level, right? The kernel, virtualization, and containers: three levels. Following the same idea, we think Docker is too big, and we also think QEMU is too big.
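The framework/policy split described above can be illustrated with a toy sketch. This is plain Python, not real eBPF: the `Framework` class and `drop_large` policy are invented purely to show the pattern, where a stable framework only loads and runs policies that are authored and swapped in separately:

```python
# Toy illustration (plain Python, NOT real eBPF) of the framework/policy split:
# the "kernel" side only runs a fixed framework, while policies are written
# separately and injected at run time, without rebuilding the framework.
from typing import Callable

Packet = dict  # stand-in for whatever the framework processes

class Framework:
    """The stable part: it only knows how to load and run a policy."""
    def __init__(self) -> None:
        self._policy: Callable[[Packet], str] = lambda pkt: "pass"  # default policy

    def inject(self, policy: Callable[[Packet], str]) -> None:
        """Swap in a new policy without 'recompiling' the framework."""
        self._policy = policy

    def handle(self, pkt: Packet) -> str:
        return self._policy(pkt)

# A policy authored "in user space" and injected later.
def drop_large(pkt: Packet) -> str:
    return "drop" if pkt.get("size", 0) > 1500 else "pass"

fw = Framework()
print(fw.handle({"size": 9000}))  # default policy -> "pass"
fw.inject(drop_large)
print(fw.handle({"size": 9000}))  # injected policy -> "drop"
```

The point of the design is the same as in the talk: changing the policy no longer means rebuilding and redeploying the whole kernel.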
They bundle everything together in the same image. So we developed two projects. One is StratoVirt, which tries to replace QEMU, and the other is iSula, which is meant to replace Docker. They are very light, only about ten percent of the size of QEMU or Docker; StratoVirt, for example, is I guess only around four megabytes, very small and very fast. So they can be deployed from the cloud to the server to the edge, even to embedded devices, so embedded systems can also enjoy virtualization and containers. That is the idea. Beyond these structural components of the OS, we also want to make some groundbreaking changes: we want to replace process number one. Process one, as we all know, is systemd, right? It is very big. So we developed a new project called sysMaster, which uses Rust to rewrite process one, to make it lighter and safer, because if you look at systemd, you will find it always has CVE problems and is always affected by bugs. For this piece we also use the Rust language to write the program. So we have many projects, and in all of them we want to make things lighter and perhaps faster. But is that all? No, we have a bigger vision: a universal OS platform. Somebody will challenge that: "universal" is a very big word, right? OK, let me explain what universal means. Actually, if you look at our industry, you find a very clear line between the server side and the embedded side. On the server side we often use CentOS, SUSE, or Ubuntu; in embedded systems you usually use Wind River Linux or a Yocto-based system, right? Unfortunately, the two worlds are totally unlinked: they have different architectures, they share nothing, there is no common code base.
The tooling is totally different too: when you build a system and want CI/CD, the server side uses Koji or OBS, but the embedded side usually uses Yocto; totally different. Yet everything is becoming connected, not just in theory but in practice: devices connect to edge computing, and the edge connects with the cloud, so the whole system links together. So in theory, the ideal world is that they share the same base, and an application can run smoothly from the cloud to the edge to the embedded device. And if I am a chip vendor, I only need to work with one group, and they can enable the embedded, edge, and cloud or server, because they share the same base. I do not need to work separately with this vendor and that vendor, this OS and that OS; that is very tedious and consumes a lot of effort and money. So is it possible to do that? OK, let's consider: what is an OS? Whether it is embedded, edge, server, or cloud, an OS is actually a collection of packages or components. No matter whether you use it as embedded or server, you share the kernel; the kernel is just one of the packages, right? So we came up with an idea: can we make everything a component? Then, if we want to build an OS, we can use some language or some configuration file, for example a YAML file, which is very easy to understand and very easy to process: just give the system a YAML file describing what kind of OS I want, and the composition system will produce the OS that I want. In China we call this the mortise-and-tenon structure. It is the kind of structure used in traditional Chinese buildings: with very similar components you can form a very large building or a very small one, but internally the pieces are alike.
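A YAML description of "what kind of OS I want" might look like the sketch below. This schema is invented purely for illustration; it is not the real format of any openEuler composition tool:

```yaml
# Hypothetical sketch only: the talk describes handing a composer a YAML file
# describing the desired OS. This schema is invented for illustration and is
# not an actual openEuler/EulerMaker format.
name: edge-gateway-os
arch: aarch64
kernel:
  version: "5.10"
  features: [ebpf]
packages:
  - isulad          # lightweight container engine
  - stratovirt      # lightweight VMM
image:
  type: embedded
  size_limit: 512MiB
```

The point is that the same component pool can yield a tiny embedded image or a full server image, depending only on the description you hand in.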
So we built a system called EulerMaker. Of course it is still under development; around the middle of this year we can release it, maybe an alpha or beta version, or call it a 1.0 version. It meets the requirement that we have a very big component pool sharing the same code base, no matter whether it is x86, ARM64, or RISC-V. Then we provide the EulerMaker composer system: you give EulerMaker a description of your requirements for the OS, and EulerMaker produces that OS for you. So openEuler is not an OS; it is an OS platform that builds different OSes. That is, I think, the more accurate definition of openEuler. OK, so let's do a quick summary. We optimize the OS to be componentized and modular, because we want those components to be deployable in the cloud and on devices, so we need to make every component smaller. And we can reorganize those components into different kinds of OS, to meet different scenarios. That is the idea. OK, when I talk about replacing a lot of things, it does not mean we abandon the old ones. Docker and QEMU are still in our package pool, so you can still use them to compose a system; we just provide a second choice. OK, I am coming to the end of the speech. Beyond all that, we have developed a lot of other new things; you can find them on the website. For example, A-Tune, an AI-based performance tuning tool, so you can do performance tuning with AI. We have kernel hot replacement, which can enhance cloud operations technology. Then we have Bisheng JDK, which is optimized for ARM64 and RISC-V. And we also have NFS+: we reorganized the NFS protocol. NFS is very popular, right? But it is not fast, not robust, and not stable. So we redesigned this protocol to make it up to six times faster and more robust.
And then we have QMS, Gazelle, and A-Ops, where we use AI technology to do the DevOps. So until now, in openEuler, we have over 300 new projects. You can go to our website to find those components, and of course you can use the components on other platforms too, for example Red Hat or Ubuntu. The community has grown very, very big, at least in China, and we have accumulated a lot of developers and a lot of organizations and companies. OK, finally, this is the end of the speech, a very quick introduction to openEuler. Here is the website. Of course, some of the material is still written in Chinese, but if you find problems, you can file feedback to help us improve the project. Besides the website, we also have the project on a Gitee repo. You can take a picture of the QR codes here and we can communicate through those channels. Everything is new, and we are trying to bring some new things to the old OS industry. So welcome to join openEuler and enjoy the new world. OK, thank you.