Good morning, everyone. Hola. It's my pleasure to be here. My name is Xinhui Hu, and I'm the chair of the Technical Committee of the openEuler community. It's really nice to be here and to see all these good friends. Today I'm going to introduce openEuler and what we are working on for a future of intelligent and diversified computing.

A lot has happened in the industry over the past few years. We have seen many new hardware technologies emerge, and the rise of domain-specific architectures in particular is very attractive to all of us. We have also seen many new application demands, because AI has been hugely successful and is impacting almost every area, from the internet of everything to industrial transformation. Diversified computing has arrived. What we need is a platform that supports all of these versatile scenarios, and that can serve as the innovation engine to implement all of these new requirements from both hardware and applications. That is why we started to build openEuler.

openEuler is an open source operating system that supports both versatile applications and diversified devices. On the application side, we cover almost all of the mainstream scenarios: CRM and ERP applications from information technology; OSS and NFV, if you are familiar with those, from communication technology; and even DCS, SCADA, and PLC from operational technology, which is mostly about industrial manufacturing. We support all of the mainstream workloads, from AI to cloud native to big data and so on. On the hardware side, we cover diversified devices: the mainstream CPU architectures, including x86 and Arm, as well as emerging hardware such as RISC-V and LoongArch from China, and quite a lot of types of servers and boards.

What really makes openEuler different from other operating systems, we think, is our goal: an OS for diversified and intelligent computing in all scenarios. Why does that matter? Let's look at it in a bit more detail.

First, why does diversified computing really matter? We think it is because the world of everything smart needs it. With the rapid development of AI technology and the internet of everything, we already see the demand for computing power growing very fast. It is estimated that by 2030, global general-purpose computing power will reach 3.3 zettaFLOPS, about a 10-fold increase over 2020, and global AI computing power will grow even faster, to an estimated 105 zettaFLOPS in 2030, a 500-fold increase over 2020. Besides this rapid growth in demand, different computing tasks also require different kinds of computing power: we use CPUs for general-purpose computing, we usually use NPUs or GPUs for AI computing, and sometimes we use DPUs for data processing.

Everything sounds fine so far, but we actually see some shadows cast over this picture, because the utilization of computing power in today's data centers is already very low. In a typical data center, roughly half of the CPU computing power is already wasted, and diversified computing power will make things even worse, for various reasons.
That creates a dilemma. On the one hand, we want much more computing power to reach that bright future of the world of everything smart. On the other hand, the low utilization of computing power makes sustainable development look dim and that bright future hard to achieve. We think this is a challenge the industry faces, and it is also a challenge for reaching the carbon peak in 2030 and carbon neutrality beyond that. We also think the operating system can play an important role in solving this challenge, because we see both challenges and opportunities in memory, in the scheduler, and in management.

Let's take memory as an example. For today's GPUs and NPUs, if you look into the software stack, you will notice that the device driver takes responsibility for managing the memory inside the device, while the application developer is responsible for managing data movement between devices. We don't think they do this well, and memory management is actually too important to be left to each individual developer. So our idea is that the operating system should step forward and manage all the heterogeneous memory of all these heterogeneous devices in a unified way. The proposal is called GMEM, and there will be a separate technical talk about it later this afternoon. We see a lot of improvement from this technology: device drivers and application developers no longer have to reinvent the wheel again and again for the slightly different customized scenarios of diversified computing. You are welcome to join us and see if we can improve this proposal together.

That's memory. If you look into the scheduler, you will find something very similar. It is not easy to share diversified computing power across different tasks, especially when the tasks have different priorities. Here we are still at the mock-up stage and don't have a solid solution yet, but the idea is to build a collaborative scheduler on top of all these separate schedulers, so that you have scheduling primitives that span the diversified computing power and can even migrate tasks among the different compute devices. That's the scheduler, and we want to make it more efficient.

Then, regarding management, we already see a lot of work in the community there too. In openEuler, we currently have a special interest group working on DPUs. We want to provide a unified interface for the various DPU devices in the China market, and by doing that we can create pools out of these devices. With pooling, we can achieve much higher utilization of all of them.

All in all, our idea is that for diversified computing in the future, what we really need is to converge the OS. The OS will no longer manage these diversified computing devices as separate devices; instead, we want to unify them, to converge them into one, so that the operating system can manage them, allocate resources, and schedule across them from a global point of view. By doing that, we can improve computing power utilization, reduce computing power waste, and achieve a better future for us all. In a moment I'll show a small sketch of what this unified model could look like from an application's point of view.
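To make the unified-memory idea a little more concrete, here is a minimal C sketch. To be clear, this is not the actual GMEM interface: the names hmem_alloc, hmem_free, the placement hint, and the fake NPU kernel are hypothetical, and the stub implementations only exist so the sketch compiles and runs on an ordinary machine. What it illustrates is the intended programming model: the application allocates once, hands the same pointer to both CPU code and the accelerator, and leaves placement and migration to the operating system instead of copying buffers between device memories by hand.

```c
/*
 * Illustrative sketch only: hmem_alloc/hmem_free, the hint enum, and the
 * "NPU kernel" below are hypothetical stand-ins, NOT the real GMEM API.
 * The point is the programming model: one allocation, one pointer, valid
 * for both CPU and accelerator, with the OS deciding where pages live.
 */
#include <stdio.h>
#include <stdlib.h>

enum hmem_hint { HMEM_DEFAULT, HMEM_PREFER_CPU, HMEM_PREFER_NPU };

/* Stand-in implementations so the sketch runs on a plain CPU.
 * In the unified model these would be provided by the OS, not the app. */
static void *hmem_alloc(size_t size, enum hmem_hint hint)
{
    (void)hint;                 /* placement hint; the OS may ignore it */
    return malloc(size);
}

static void hmem_free(void *ptr) { free(ptr); }

/* Pretend "accelerator kernel": doubles every element in place. */
static void npu_scale_by_two(float *buf, size_t n)
{
    for (size_t i = 0; i < n; i++)
        buf[i] *= 2.0f;
}

int main(void)
{
    const size_t n = 1u << 20;

    /* One buffer, one address, usable by CPU and accelerator alike. */
    float *data = hmem_alloc(n * sizeof *data, HMEM_PREFER_NPU);
    if (!data)
        return 1;

    for (size_t i = 0; i < n; i++)      /* CPU produces the input ...     */
        data[i] = (float)i;

    npu_scale_by_two(data, n);          /* ... accelerator consumes it ... */

    printf("data[42] = %.1f\n", data[42]); /* ... CPU reads the result     */

    hmem_free(data);
    return 0;
}
```

Contrast this with today's stacks, where the same program would need a device-specific allocation plus explicit copies in each direction for every accelerator it touches, and would have to repeat that work for each new device type.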
The second question is why we really need an OS for all scenarios; that has been asked many times since we announced it. We want to make clear that it does not mean openEuler provides a single operating system instance that runs everywhere. Instead, openEuler acts as an OS platform that can create an OS for each of these different scenarios, from cloud to edge to embedded. That brings some obvious benefits. First, by improving the openEuler toolbox, we can combine and tailor the software components on demand, and with on-demand combination we can share technologies across cloud, edge, and embedded. That means if you contribute a feature for the cloud, it can automatically benefit the edge and even embedded as well. Technology sharing is a good thing for us.

And we see something even better, because the boundary between cloud and edge is already very blurred in today's industry, and the same is happening between edge and embedded. When we root all of these operating systems in one single platform, we can interwork the ecosystems of embedded, edge, and cloud. That also means if you develop an application for the openEuler embedded instance, that application can migrate to run on the edge with no problem, and even migrate to the cloud. That's ecosystem interworking. And because they all share the same operating system kernel, it vastly simplifies the interconnection between edge and embedded, and we see a lot of good things happening there too. So our idea is that an OS for all scenarios really makes cross-domain innovation easier, and that cross-domain innovation helps us get to the future where all things are smart.

Last but not least, what kind of OS do we need for the future of intelligence? We think an operating system for the future of intelligence really needs to work with AI, for AI, and by AI. What do we mean by that?

Nowadays, openEuler is starting to support operation and management with AI. We have just developed a new shell called openEuler Copilot that connects a large language model to the system administrator's daily work. Using openEuler Copilot, the sysadmin can use natural, shell-style language to ask the copilot to collect information, analyze it, and fine-tune the system. The copilot can generate scripts and, with the administrator's permission, run them; it can then analyze system bottlenecks and fine-tune kernel settings or system parameters. I'll show a rough sketch of this flow in a moment. In the end, the systems running on openEuler get better support, lower latency, use less hardware resource, and, best of all, always stay up to date. That's "with AI".

openEuler is also ready and optimized for AI. We are going to integrate LLaMA and ChatGLM, for example, directly into the community distribution releases. We will also apply the features mentioned earlier, for example integrating GMEM into openEuler, to make AI applications run better.

And openEuler is assisted in build and test by AI. For example, on the compiler side, the BiSheng compiler, which is LLVM-based, is trying to replace heuristic algorithms with AI models to produce smaller and faster binaries for openEuler. We also leverage AI for faster testing: we use large language models to generate test cases, which helps us reach better software quality, and these testing results are fed back to the upstream communities to benefit the wider community.

So, as mentioned, in openEuler we are working on diversified and intelligent computing in all scenarios.
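Here is that rough sketch of the Copilot flow, before we move on. It is purely illustrative and assumes nothing about the real openEuler Copilot: llm_generate_script() is a stub that returns a canned shell script instead of querying a real model, and the prompts and file path are invented. The point is the shape of the interaction the talk describes: a natural-language request, a proposed script the administrator can read, and an explicit yes before anything executes.

```c
/*
 * Hypothetical sketch of the "generate a script, run it only with the
 * administrator's permission" pattern. Not the real openEuler Copilot:
 * the LLM call is stubbed and the interface is invented for illustration.
 */
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for the LLM call; a real implementation would send the request
 * to a language model and return the script it proposes. */
static const char *llm_generate_script(const char *request)
{
    (void)request;
    return "#!/bin/sh\n"
           "# Collect basic facts about memory and the busiest processes.\n"
           "free -h\n"
           "ps aux --sort=-%mem | head -n 10\n";
}

int main(void)
{
    const char *request = "show me what is using the most memory on this host";
    const char *script  = llm_generate_script(request);

    printf("Request: %s\n\nProposed script:\n%s\n", request, script);

    /* Nothing runs until the administrator explicitly says yes. */
    printf("Run this script? [y/N] ");
    char answer[8] = {0};
    if (!fgets(answer, sizeof answer, stdin) ||
        (answer[0] != 'y' && answer[0] != 'Y')) {
        puts("Aborted; nothing was executed.");
        return 0;
    }

    /* Write the approved script to a file and execute it. */
    FILE *f = fopen("/tmp/copilot_step.sh", "w");
    if (!f)
        return 1;
    fputs(script, f);
    fclose(f);

    return system("sh /tmp/copilot_step.sh");
}
```

The design point is simply that the model proposes and the human disposes: script generation, a readable preview, and explicit confirmation are separate steps, which is what keeps the administrator in control.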
But back to the foundation: we are still an open source project. openEuler was not built from nothing; we build on the success of many open source projects, and openEuler itself is an open, community project. When we started openEuler about three years ago, we counted about 311 daily active developers. Today that number has grown to almost 4,000, about ten times in three years, which is an awesome result. With the contributions of all these developers, and of even more users beyond counting, the community generates more than 100 pull requests per day, the number of packages integrated directly into openEuler grows by about 30 every day, about 10 newly chartered projects are created in the community every month, and we release more than 80 bug-fix updates to the community every month. We all understand that community development and innovation is a long, long process, and none of these numbers could be achieved without the help and the continuing hard work of everyone.

Because of these contributions, openEuler has already become a dynamic and innovative community, with many incubated projects. We don't have time to go through all of them, but to name a few: MICA, our project to co-deploy a real-time OS together with a non-real-time OS on a single multi-core SoC, which is a bit of a mouthful but is where the industry is heading today; and the software bus, a way for devices on the edge to auto-discover and communicate with each other. We learned that idea from today's smartphone and wearable IoT scenarios, but when we implemented it in openEuler we found it really helpful for industry. There are many other incubated projects we can't cover here, but we have put together a white paper in English that introduces all of them, and you can find it on our website today.

We also want to emphasize that openEuler is not just a technology preview; we are very focused on landing these technologies in real scenarios. We already have more than 1,100 community members, and these members use openEuler as a base for their daily work. The installed base of openEuler has already reached more than 4.5 million, which means openEuler has been evaluated and verified in many scenarios, including carrier, finance, public facilities, and even electric power. Given this wide adoption, we are confident that openEuler is gaining momentum and can be adopted even more widely.

That concludes my talk. Thank you very much. You are very welcome to join our community and work together on an OS for a future of diversified and intelligent computing in all scenarios. You can learn more about us at Booth D1. Thank you very much.