So we're here at Linaro Connect Hong Kong, and there was just a keynote about Open AI Lab. So what is that?

All right. Open AI Lab is an organization founded by Arm and a couple of other partners, Arm licensees as well. And here's Minfei, VP of Engineering at Open AI Lab. They create an open-source AI software stack that uses multiple 96Boards open hardware platforms to build a real end-to-end AI development platform.

Open source AI?

Open source AI, yes.

Is it the same idea that Elon Musk has, where you don't want closed AI, you want open AI, or is it something else?

Well, the "open" part is the commonality, but it's not the same goal, so it's not the same focus. OpenAI from Elon Musk is more about the algorithms themselves, a way to evaluate algorithms in an open way. Open AI Lab is about creating an open-source stack that gives people a very easy platform to work at the application level, using the expertise and skill sets built up by Open AI Lab on open hardware platforms such as 96Boards to accelerate their development.

So why is 96Boards a great platform for this? Is it about high performance? About heterogeneous multi-core processing, optimizing everything? Will it be big Arm chipsets, small ones, or both?

In fact, we focus on edge computing, so the devices are embedded devices. The SoC will be small, but it will have a deep learning accelerator inside. Our work is focused on making the software platform open end to end: one end is the SoC, or the 96Boards board, and the other end is the application user. You don't need to care about a lot of things in the middle; we cover that and give the user the best performance. 96Boards is a very good open platform, with many kinds of application boards available.
You can use those, plus this open AI software, to build your application product very quickly.

So the machine learning part of the SoC is open in what you're doing? That part, the ML, the neural network, all this kind of processing, you are making an open platform for it?

Yeah, we're trying to make machine learning and deep learning algorithms run on platforms like 96Boards. There are many kinds of these boards, and their SoCs are different. Our platform tries to cover that, so the user doesn't need to know which SoC they are using. For example, if they want to move from one part to another, they don't need to change their low-level software; they just link to a different driver, and that's enough.

So if you use a HiSilicon neural network, or maybe a future Arm ML IP, or another IP, the software will be compatible?

Yeah, right now we support Arm CPUs and GPUs and third-party deep learning accelerators. In the future, when Arm's own machine learning accelerator comes, we will cover that as well.

Why does, for example, HiSilicon say it's 50% more power efficient to run on their neural network IP than on the CPU? Is it optimized? How does this machine learning work? I don't want to ask too much, but...

I should probably answer that question. Right, so obviously it's quite difficult to comment on performance comparisons today across different hardware platforms with different SoC capabilities, each of which carries out neural acceleration in a different way. On HiSilicon's Kirin 970, it could actually run through the standard instructions of the Cortex-A cores, as well as on the GPU, but the Kirin 970's focus is the neural processing unit, a dedicated IP for neural computing. So we can't comment on the performance comparison.
Well, I think what Minfei was trying to say is that 96Boards is a platform that is SoC-independent. The point is that application developers don't have to care about low-level hardware and SoC-level complexity. By using what Open AI Lab has done, they will be able to get maximum performance no matter what SoC or hardware platform they choose to develop an application on. This is a great benefit, and Open AI Lab has addressed a gap in the market by actually supporting AI application developers, keeping them away from the details of which framework to use, which middleware to use, and how to optimize performance on different Arm cores. And as Minfei said in the keynote, it's not only addressing Cortex-A; there is also a plan to address Cortex-M cores. So we're very excited about that, and the fact of the matter is, it's an open-source project: everybody can go to OAID on GitHub to find out what Open AI Lab has been up to. We are very excited to support such a project, and very excited to be part of it.

And today, or yesterday, you launched 96Boards.ai. So are all the boards optimized for this platform?

Well, as you saw in Minfei's keynote today, OAID is already running on the Rock960, HiKey 960, and DragonBoard 820c. That shows how easy it is to port Open AI Lab's work to different platforms, each of them featuring a different SoC. So I'd say we're very excited to be able to provide such platforms to an open-source project like this. The 96Boards.ai initiative is about enabling all those SoCs featuring different acceleration mechanisms, such as FPGAs, DSPs, CPUs, GPUs, and dedicated NPUs, so that projects like this can utilize the latest technology.

And so Arm is part of it, and this is heterogeneous, but I'm wondering, is there any chance it might be multi-architecture as well? Or for now is it only Arm participating?
What if others want to join, can they?

Well, we are focused on Arm technology today, and if there are any participation requests, we would certainly like to talk about them. But today we focus on Arm, both from the 96Boards point of view, which is an open specification, and for OAID, which is certainly focused on enabling Arm SoC technology.

And it's quite new, right? It's recent, but you have already achieved a lot in this short time?

Yeah, we've just had one year, and we've made a lot of progress. We have an optimized framework, our own framework; it took us just half a year to build, and it's available on GitHub.

So, very fast work?

Yeah, that's thanks to our extensive prior experience.

And what is next? What will happen in the future?

I can't tell you, but it's open.

You have to say it's open!

Yeah, we will make our framework stronger and stronger, support more and more SoCs and more and more 96Boards boards, further optimize the framework, and add more application APIs. Everything to make using our platform easier and easier.

So anybody who works on machine learning software should know about this? They should all work with what you're doing?

We would certainly like to welcome people working on machine learning, whether as a machine learning practitioner, an algorithm designer, or an AI application developer. We welcome you to take a look at OAID on GitHub, to try out what Open AI Lab has done, and to see if you can build on the work these guys have put in.

And it stands for Open AI... what is the last word?

Open AI Distro.

Distro, like a Linux distro?

Yes. All right, cool. And people all over the world are working on this, right? Where are you based?

At least in Beijing.

Beijing, but also Shenzhen.

Yeah, and Shanghai.

How about the U.S. and Europe?

The U.K. Yeah, the U.K. Outside China, maybe the U.K. will be the first region in the future.