I have been working at Intel for ten years, and have worked on many open source projects and products over that time. In recent years I have been working on cloud native projects such as the Intel Kubernetes device plugins, resource management, Kata Containers, and the Cloud Hypervisor open source project. Before Intel I worked for several companies, mainly on Linux and Android system development.

Today I'm going to talk about NRI, the Node Resource Interface, a common plugin mechanism for extending container runtimes such as containerd and CRI-O. Let me spend a little time on what NRI is. Not everyone knows NRI well, but I believe you know resource management, Kubernetes QoS, the container lifecycle, and CRI. NRI is not a new concept: it is a sub-project of containerd, and the current NRI work is an evolution of it. The original design and implementation of NRI v1 were contributed by Apple, but its functionality was very limited and its scalability was not very good, because it could only be applied inside containerd. Its design and innovation, however, were quite good. So the NRI work I mention here refers to NRI v2, which is intended to create a universal runtime extension standard API that supports not only containerd but also CRI-O. Through this standard API, users have the opportunity to inject additional logic into the runtime, triggered by important container events such as container creation, update, and deletion, or to dynamically adjust container configurations while a container is running.

NRI hooks into the CRI plugin in the containerd architecture and provides a plugin framework for managing node resources at the container runtime level. NRI can be used to solve performance issues for batch and latency-sensitive workloads and to meet user needs such as service SLA/SLO and workload priority.
For example, a high performance workload needs its memory accesses to stay within one NUMA node, so allocating the container's CPUs on that same NUMA node is vital. Of course, in addition to NUMA node alignment, there are other resource topology affinities, such as CPU cores sharing an L3 cache. NRI targets resource management at the container runtime level. The NRI patches are already merged into the containerd 1.7 release. NRI is a plugin framework; NRI is the highway, and what runs on it really depends on what we want.

Now, let's look at regular CRI request processing in a Kubernetes environment. As we see, the core of the CRI mechanism is that each container project can implement a CRI shim and process CRI requests. In this way Kubernetes has a unified container abstraction layer, which allows lower-level container runtimes to integrate and connect easily. However, the OCI-level runtime needs an OCI spec as input, and the OCI spec carries information such as which devices a container uses and lots of other parameters that are only visible at the OCI runtime level. So the key part here is that the container runtime translates from the Kubernetes view down to the details of the OCI runtime level. You should know that an OCI container spec has more than 40 parameters or fields, but the CRI protocol has only 15 or 20 fields. This means that if you want to enable something like Intel RDT cache/memory bandwidth control, which is part of the OCI spec, it can be configured at the OCI level, but the CRI protocol knows nothing about it. This is why we designed and contributed NRI to the community.

Now, let's see what it looks like after NRI is integrated. NRI includes three parts: the NRI adaptation, the NRI plugins, and the NRI protocol. The NRI adaptation needs to be built into each runtime, but the plugins are developed once and run everywhere; there are some internal differences between CRI-O and containerd. The NRI protocol lives in a single repository that is used by both CRI-O and containerd.
As I mentioned already, NRI allows users to inject third-party logic into an OCI-compatible container runtime such as containerd or CRI-O, taking over containers or performing operations outside of OCI at certain points in the container lifecycle. For example, it can be used to optimize resource allocation, device management, and other container resources. NRI defines the node resource interfaces and implements a common base library to support these pluggable runtime extensions, namely the NRI plugins.

Looking at the CRI request flow, information is passed down to a plugin, and the plugin replies to the container runtime after applying its resource policies. The key part here is the secure connection created inside the interface. NRI allows plugins to be deployed as a Kubernetes DaemonSet. The plugin then connects to the NRI socket; it has access to the network, the API server, and so on, and it can use a normal Kubernetes client. One goal of the plugins is to simplify the Kubernetes stack. Another advantage of working at the OCI level is that integrating new features does not require touching everything. For example, Intel RDT was implemented in the OCI spec six years ago, but it is still not exposed to the upper layers today.

Okay, now you know what NRI is and how it works, but how do you write an NRI plugin? It's simple: just clone the NRI plugin template, fill in the missing details, then implement data collection. If you want to know more and create your own NRI plugins, please take a look at these links: the NRI design document and the PRs for containerd and CRI-O. There are also two plugins implemented by Intel that you can refer to and learn from. Finally, I want to issue a call for action: if you are interested in NRI, please join the community to discuss and contribute, and I encourage you to practice and develop NRI plugins for your own services. That is all I wanted to share today. Thank you.

Back to our sharing; ten minutes. I'd like to share three points with you. First, who am I?
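To make "clone the template and fill in the details" concrete, here is a rough sketch of what a minimal plugin built on the containerd NRI stub library looks like. The package paths and method signatures follow the nri repository's plugin template as I understand it, so treat the exact names as approximate rather than authoritative; the plugin name and the cpuset value are made-up examples:

```go
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/containerd/nri/pkg/api"
	"github.com/containerd/nri/pkg/stub"
)

// plugin implements the NRI lifecycle hooks we care about; events we
// don't implement are simply not subscribed to.
type plugin struct {
	stub stub.Stub
}

// CreateContainer is invoked before a container is created. We return
// an adjustment pinning the container to a NUMA-local cpuset
// ("4-7" is a made-up example value).
func (p *plugin) CreateContainer(ctx context.Context, pod *api.PodSandbox, ctr *api.Container) (*api.ContainerAdjustment, []*api.ContainerUpdate, error) {
	adjust := &api.ContainerAdjustment{}
	adjust.SetLinuxCPUSetCPUs("4-7")
	return adjust, nil, nil
}

func main() {
	p := &plugin{}
	s, err := stub.New(p,
		stub.WithPluginName("cpuset-align"), // hypothetical plugin name
		stub.WithPluginIdx("10"),            // ordering index among plugins
	)
	if err != nil {
		fmt.Fprintf(os.Stderr, "failed to create plugin stub: %v\n", err)
		os.Exit(1)
	}
	// Run connects to the runtime's NRI socket and serves lifecycle
	// events until the connection is closed.
	if err := s.Run(context.Background()); err != nil {
		fmt.Fprintf(os.Stderr, "plugin exited: %v\n", err)
		os.Exit(1)
	}
}
```

Deployed as a DaemonSet, such a plugin connects to the runtime's NRI socket on every node and receives exactly the container lifecycle events it subscribed to.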
Let's get to know each other and make friends. Why am I presenting this talk this way, with a recorded voice? Because there is a lot of content, and pre-recording it probably saves you 20 minutes. So I would like to take five minutes to tell you what I want to share with you. The most important thing is to get to know each other and make friends.

My name is Yang Ailing, and I'm at Intel. I've been working in IT for 20 years, developing various systems and products for the past 15 years: for example Linux, Android, mobile phones, and TV. For the past five years I've been working on cloud, which has become a big system, on orchestration as well as all kinds of device and resource management.

Why this method? Because my voice is not in good shape. Some of you have been listening to my technical sessions this afternoon, and it's easy to get tired, so I don't want to strain my voice and will give it a break for another 15 minutes. So I used artificial intelligence: a model trained on about five minutes of my speech that synthesizes my voice. So let's get to know each other; in the future, in our circle, we might meet at another event or a customer meeting. That's the first point I'd like to share: now you have a friend who knows me, Yang Ailing.

The second thing is what I've shared with you in the last five minutes, called NRI. Not many people have looked into it, because little research has been done on it here, even though a lot of work goes into cloud development. In the community I often look at who from our country submits PRs, and I haven't seen many people doing PRs in this area. In the global Kubernetes community, the main Chinese companies contributing are DaoCloud and Huawei, with a number of others, but I haven't seen many of you doing PRs. I don't want to pitch a product here; this is a community event.
It's where we discuss technology and the community, so I didn't include any product recommendations.

Let's go back to what NRI is. Just now, several speakers said a lot about cloud scheduling, resource management, GPU scheduling, GPU resources, and so on. But I want to tell you why NRI exists. Kubernetes has been around for many years now, and it is relatively mature and stable, so in production there are basically no big problems with scheduling. But when I developed new things for Kubernetes, it was hard to contribute them, hard to get them accepted by the Kubernetes community. The reason is that the project is mature and stable: your patch is not easy to get accepted, the maintainers review very strictly, and the process takes a very long time. So when you want to innovate and develop, it is relatively difficult. That's the first point.

Second, in actual production, once your application runs, stability is no longer a problem and your operations are in place, there is still a big challenge: your boss will give you the task of cutting cost, that is, raising utilization. We often find that node utilization is not high, and we try everything to raise it, but we can't achieve it, because workloads keep failing when utilization goes up. So back in Kubernetes, you may not see this framed as cost management, but when you face your applications every day and deploy your workloads, you run into two things: requests and limits. This is what most people encounter, because these two settings are in effect an invisible QoS. That is, your workload has a QoS requirement during deployment.
But this QoS is an invisible thing. Kubernetes does define QoS classes, so your application can declare which class it is in, high priority or low priority; that much you can do. But in practice, across the entire deployment, there are many details you cannot express. For example, we may run batch workloads with large resource consumption, such as big AI jobs, alongside online, latency-sensitive services, high-priority and low-priority together. Then my deployments interfere with each other seriously, and isolation is not guaranteed. In this area every company has its own ways, but if you want to push these things into the community and get them accepted, or else carry them yourself, you face two challenges. One is that the community may not accept them. The other is that during maintenance, every generation of your product has to be rebased onto new Kubernetes versions, which is very costly and very painful.

So the community now has a new mechanism, NRI, the Node Resource Interface. It lives in containerd, a new API at the runtime level. With this API you can implement your own policies without breaking Kubernetes. For example, if you have your own way of controlling resources, you can implement it as a plugin; if you want to enforce good resource policies, you can do that with a plugin at this level. NRI lives mainly in containerd and CRI-O; with this support at the runtime level, you can finish your own plugin and deploy it as a DaemonSet. Not having to patch the ecosystem itself is very beneficial for version upgrades, so it solves that problem. NRI has now become a basic Kubernetes-level API that supports efficient resource management.
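The "invisible QoS" of requests and limits works like this: Kubernetes derives a pod's QoS class from the two values. When every container sets requests equal to limits for every resource, the pod is classed Guaranteed; requests below limits give Burstable; setting neither gives BestEffort. A minimal sketch (the pod name and image are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: latency-critical             # hypothetical name
spec:
  containers:
  - name: app
    image: registry.example/app:1.0  # hypothetical image
    resources:
      requests:
        cpu: "2"
        memory: 4Gi
      limits:
        cpu: "2"       # requests == limits for every resource,
        memory: 4Gi    # so this pod gets the Guaranteed QoS class
```

Beyond these three coarse classes, though, the spec says nothing about NUMA placement, cache allocation, or interference between neighbors, which is exactly the gap described here.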
That is the second thing, NRI, that I shared with you today. After listening, you don't have to worry about the details; you can look them up online or in the Kubernetes community. But at least you know that today I shared something Intel did, called NRI. If you are interested in it, go and take a look.

The third thing I want to share is that you now already know something about it; if you are interested, come find our community. Finally, join our community to communicate, and develop your own NRI plugins. That's all I want to share today. Thank you.