We are coming to you with a new episode today and I hope you're having a great day, whether it is bright, or maybe still dark and early in the morning, or your lunchtime, or maybe it is late at night. Thank you so much for being here with us today. OpenInfra Live is an interactive show that brings you production case studies, open source demos, interactive conversations with experts, as well as updates from the global open source community. This show would not be possible without our valued members, and I would like to take a moment to thank them. If you don't see the logo of your organization on this slide, don't be disappointed, that can be fixed. Go to openinfra.dev/join and check out how you or your organization can join the foundation as a member. My name is Ildiko Vancsa and I will be your host for today's show. Before we dive into the content, I mentioned that OpenInfra Live is an interactive show, which means that we need you to participate. We are streaming the show live to YouTube and LinkedIn, so if you have any questions or comments, you can add them to the comments section and we will answer and respond to as many of them as we can during the show. And now let's dive into the content. Today's episode is full of information, news and updates, and you will be able to learn all there is to know about the Computing Force Network Working Group, as well as its mission, goals and activities. On today's show, I have a few of the founders and active contributors to the working group, and they will tell you all that you need to know about CFN. Before they start their presentation, I would like to ask the speakers to please introduce themselves briefly. If we can bring everyone on screen. Hello, I'm Jie Niu from China Mobile. I'm working on automation and integration tools in China Mobile, and I'm also working on promoting the Computing Force Network Working Group.
We are very glad to have this opportunity to introduce the CFN Working Group, and thank you for making the arrangements for this. That's me. Xiaoqiao, maybe go next. Hello everyone, my name is Geng Xiaoqiao and I'm from Inspur. I mainly take responsibility for the ubiquitous computing scheduling subgroup, including the design, the development and the relevant open source work, and I'm glad to have this opportunity to introduce the work of our group to everyone. Thank you. Thank you. Do we have Qihui? Jingtao, are you online? Hi everyone, I'm Jingtao Wang from China Mobile. I'm responsible for the DPU research, and I'm very glad to see you and introduce the computing offload subgroup. Okay, that's me. Thank you. Sorry, I just want to let you know that because of a network issue I closed my camera, and Qihui said she just dropped off because of her network. Okay. Huang Lei, are you online? Oh, yes. Hello everyone, I'm Lei Huang from China Mobile. I am a member of CFN and a contributor to computing native. It is my honor to be invited to the live show and I will introduce my part to you. Thank you. Thank you all. As it is a live show, sometimes we do have technical difficulties, but that will not stop us from bringing you the content. So let's start with the introduction of the working group, and when Qihui is back, she can also introduce herself during that intro segment. Qihui, are you online? Yes, I'm back. Can you hear me? Yes. Yes, loud and clear. Ah, cool. I'm sorry for my internet issue. Are we going to start with the introduction of the computing force network now? Yes, go ahead. Can you guys still hear me? Sorry. Yes. Okay, cool. I'll just start. Thank you guys. So firstly, a delayed introduction about myself. My name is Qihui Zhao.
I'm from China Mobile, and currently I'm working in the CFN group of China Mobile. I have also done work related to cloud native as well as computing native, which is a technology related to cross-architecture and heterogeneous hardware. So I'll start my introduction. For the next 10 to 15 minutes, I'll introduce the computing force network and the CFN background. Okay, so Christian, can we go to the next slide please? Okay, cool. So firstly, let's look at the background. Computing capability, which we call computability in our system, is the hottest topic for the whole industry right now. Everybody is talking about buying GPU cards, everybody is using large models, and China Mobile, together with some other telco operators, has already seen this trend. But how can we join this? We are super good at the network, some of us have a large amount of infrastructure, and we may have a public cloud and some PaaS services. So under these conditions, we are considering how to systematically construct a new information infrastructure. This new infrastructure is focused on 5G, the computing network, and an ability-as-a-service platform. And based on this infrastructure, we are trying to build a new information service system to provide our customers with connectivity and computing ability through a one-stop service. Among all of this, computing ability is one of the key contents. And then next slide please. Okay, so let's look at what a computability network is. Looking at these two words, it is obvious that it has two parts: computing ability and network. And we're using these two, together with some other technologies like artificial intelligence, blockchain, big data and security, to form this new information infrastructure, which can provide our customers with ubiquitous network, computing and intelligence.
Our goal with the computing network is to promote computability to become a common utility, just like water and electricity. So as long as our users can get access to the network, they have the ability to use the computability existing in the whole system. And with this goal, for our operators as well as our vendors and the whole industry, the aim is to construct a well-covered network and good computing resource centers, and to achieve intelligent collaboration among all these capabilities. So this is the overview of the computability network. Next slide please. Okay, so this is the high-level logical architecture of the computability network. Firstly, we look at the bottom. It is the infrastructure layer. It contains all kinds of computing resources, which may come from the public cloud, private cloud, the operator's network cloud, central sites, edge sites, and even user terminals. So anything, as long as it has computing capabilities, is in the system. It also contains the network links, such as the optical fiber network, wireless network, DC network and IDC network. And we are trying to use all these network and computing resources to build a net, or a grid, and this net or grid is the base for the whole computability network. Then in the middle is the orchestration and management layer. We call it the orchestration center. It is in charge of processing the tasks received by the whole system, and this center has a global view of the whole system: where the computing resources are, how many FLOPS of computability they have, and what the best route is to direct our users' computing tasks to the resources. So it is in charge of the orchestration and scheduling. And at the top is the service and operation layer. It is the interface for the computing network to actually interact with the users. So that's the high-level architecture of the system. Before we move forward, I have a quick question on this slide.
As I can see, and you talked about it and I can also see it on the diagram, the computing force network concept and architecture has these two main components, the computing resources and the network. These components are fundamental to the cloud provider as well as the telecom and network provider industries, and CFN seems to be creating a strong convergence between the two. Can you tell us a bit more about how CFN relates to these two main components and the convergence of them? What is the relationship between them? Okay, okay. That's a really good question. So firstly, I have to say that CFN and the existing cloud systems and network systems come from the same origin, but CFN is a more diverse system. Firstly, we can see that CFN contains more types of computing resources. We define every piece of hardware and software, as long as it can provide computing capabilities, as a computing resource. Also, one thing that hasn't been covered in this architecture is that CFN is not only trying to achieve the unified orchestration of computing resources and network, which can be easily achieved by a strong management platform on top of the cloud and network. CFN is also trying to explore the possibility of converging the computing resources and the network in terms of morphology or protocol. For example, we are trying to use network transmission devices to do some simple computation work, or we are trying to carry the computing capability of certain devices in the network protocol. So I have to say that these two come from the same origin, but CFN is definitely targeting a more intelligent and more complex system. So that's the answer to this. If there are no more questions, then we can go to the next slide, please. And I think this page can help us better understand what CFN is and what its advantages are.
So we have summarized three new experiences that you may get with the computing network. The first one is end-to-end consistent service quality. A typical use case for this is the Internet of Vehicles. Usually, how to ensure end-to-end consistent service quality for a moving car is the key problem here. Traditionally, we have to take care of the signal switching among all the base stations. We have to deploy the processing software at different cloud sites to provide the fastest response to the moving cars. We also need to think about how to synchronize the data among all those processing software instances. But with the computing network, all the collaboration and base station configuration can be controlled by the orchestration center, which is supported by artificial intelligence. For example, the system can help predict the driving trace of the car. It can help pre-deploy the applications and the software. It can help pre-configure the networks, and it can even help predict the fastest signal switch time. So CFN is trying to provide our customers with a more automatic system. The second experience that our customers can get is task as a service. Currently, when we apply for services from a cloud provider, we need to tell the system what kind of services or resources we need and where we want them. So it's kind of like we have to give some instructions. But with the computing network, we can just describe the task to the system. For example: I want to process a video of the whole building from 10 am to 11 am, I want to mark all the people appearing in the video, I want the outcome in 30 minutes, and here is the data. We give all these descriptions to the system, and the system can help us to pick, for example, the best PyTorch prediction model and allocate a proper amount of resources based on the data set that our customer provides.
And then it can create a service instance to process this workload and do the work. The only thing that we require from our customers is to describe the task. And I think the last experience is that we're trying to establish a new computing ability trading model. We're trying to use blockchain to allow all the computing providers to register their computing resources and to mark the price for their computing resources. And then the whole system will provide all the resources directly to the users based on the information on the chain. So this can definitely help our users to get more diversified resources, and it can also help the computing providers to gain better resource usage. So next slide please. Yeah, so this is the CFN technology map. We are trying to allocate all the related technologies into the three layers of the architecture, and some of them we have already given new names. Currently, we have already covered the technologies circled by the yellow dashed lines in the open source communities. For example, edge computing: we are trying to track the edge computing working group in OIF, which is led by Ildiko, and we are also trying to track the LF Edge Akraino project. For computing native, which is focused on cross-architecture and heterogeneous computing, we have already launched a subgroup named computing native in the OIF CFN working group, which will be introduced later, and we are also checking some heterogeneous computing related projects like oneAPI. Also, for computing offload, which targets the acceleration of the applications and the infrastructure layers, we are following the OCP. And for cloud native, we are checking the CNCF. So these are the things that we have already done with the open source communities.
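Returning to the task-as-a-service experience described a moment ago, here is a minimal sketch of the idea: the user describes only the task, and the system selects the resources. All field names and the selection logic below are illustrative assumptions, not part of any real CFN API.

```python
# Minimal sketch of "task as a service": the user describes WHAT they want,
# and the system decides WHICH resources to use. Field names and the
# selection logic are invented for illustration.

def select_resources(task, pools):
    """Pick the cheapest resource pool that can finish the task in time."""
    candidates = [
        p for p in pools
        if p["gpu_hours_free"] >= task["estimated_gpu_hours"]
        and p["max_turnaround_min"] <= task["deadline_minutes"]
    ]
    if not candidates:
        raise RuntimeError("no pool can satisfy the task description")
    return min(candidates, key=lambda p: p["price_per_gpu_hour"])

# The user only describes the task -- no instance types, no regions.
task = {
    "kind": "video-analysis",
    "goal": "mark all people in the footage",
    "deadline_minutes": 30,
    "estimated_gpu_hours": 2,
}

pools = [
    {"name": "edge-a", "gpu_hours_free": 1, "max_turnaround_min": 10, "price_per_gpu_hour": 5.0},
    {"name": "central-b", "gpu_hours_free": 8, "max_turnaround_min": 25, "price_per_gpu_hour": 2.0},
    {"name": "central-c", "gpu_hours_free": 8, "max_turnaround_min": 25, "price_per_gpu_hour": 3.0},
]

chosen = select_resources(task, pools)
print(chosen["name"])  # central-b: enough capacity, meets the deadline, cheapest
```

The point of the sketch is the inversion of responsibility: the task description carries intent (goal, deadline), while placement decisions stay entirely inside the system.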
And also we are trying to plan some new directions, for example intent sensing, which is trying to figure out how to understand the user's description of their tasks and the user's intentions about their required service within the system. And, for example, unified orchestration, the technology area where we are trying to figure out a better, more intelligent way to achieve converged computing and networks across the whole system, and things like that. So I think CFN has a very big map, and for our team, we're trying to figure out how to cover this in the open source communities one by one. So those are the key technologies here. And next slide, please. So this is the CFN Working Group introduction. In order to promote the development of the system in the open source ecosystem and to accelerate the maturity of the key technologies, China Mobile led the launch of this CFN Working Group under the OIF last year in July. Currently, the working group has 19 members, including global telco operators like China Mobile, China Unicom and China Telecom, device vendors like Huawei, Inspur and ZTE, and cloud providers like Red Hat and 99Cloud. The CFN Working Group is trying to explore the typical use cases, and based on those use cases, we will work out the requirements on the end-to-end workflows, the overall architectures and the key technology features. After that, we will implement those feature codes and cover the system integration and testing. So the CFN Working Group is trying to organize an open source CFN landscape that can help the communities to know what is in the system. And then next slide, please. And this is the current CFN Working Group organization structure. The CFN Working Group has established a technical governance mechanism through the TSC, and we have also established some release procedures. And we launched four subgroups, led by different partners.
They are: the use case and architecture subgroup, led by China Mobile, which focuses on use case exploration, architecture design, end-to-end process design and key technology tracking, and based on all that work, we will try to give some open source recommendations. The second subgroup is ubiquitous computing. It is led by Inspur and it focuses on the scheduling of distributed and diverse computability. The third subgroup is computing offload. It is led by Huawei and it focuses on acceleration and offload with the DPU. And the fourth subgroup is computing native. It is led by China Mobile and it focuses on cross-architecture compiling and execution. And maybe next year we will try to establish a new subgroup related to integration and testing. This new subgroup will focus on the collaboration among all the different subgroups and end-to-end integration. So I think that's all from my side about the background of CFN and the CFN working group. Thank you. Thank you. I'm blown away. I learned so much about CFN, both the concept and architecture, as well as how the working group and its work are structured. So on this slide, I can see that there are three technical subgroups: ubiquitous computing and scheduling, computing offload, and computing native. Can you all tell us more about each of these subgroups, what they do and how they are structured? Sure, I think maybe we can start with the ubiquitous scheduling subgroup, and let's invite Xiaoqiao from Inspur to start the introduction. Is that okay? Okay, cool. Can you hear me? Yes. Okay, hello everyone. I'm responsible for the ubiquitous computing scheduling subgroup, and I will introduce the platform and the work that we have done.
So this platform mainly realizes the unified management, perception and cross-domain scheduling of computing resources such as the central cloud, the edge cloud and social computing power, to realize the integrated scheduling, dynamic adjustment and continuous optimization of the computing resources. So the next slide. Okay, we can see, sorry, sorry, the last slide. Ubiquitous computing, yes, thank you. So we can see from the structure of the platform that, firstly, the perception module mainly acquires the operation status of the computing and network resources as well as the user business indicators. Secondly, the strategy optimization module combines business rules and optimization algorithms to generate the optimal scheduling strategies for the resources. And thirdly, the scheduling module carries out the specific scheduling management and execution. So this is the main structure of the ubiquitous computing scheduling platform, and we have already completed some design and development work. So that's all. Thank you. Before we move forward, just a quick question. Can you tell us more about the application scenarios for ubiquitous computing and scheduling, and also maybe what components might already be available in open source? Thank you for your question. We have already completed the design and development of the perception module, and the scheduling module and the strategy optimization module are in the design process. For the gateway module, we are sorting through the interfaces of the various cloud vendors. As for typical application scenarios, for example, in the scenario of 7-way video stream scheduling, on the basis of the platform capabilities, reasonable scheduling of different tasks such as video saving, video backup and AI processing can be realized, to satisfy users' personalized requirements on delay, cost and so on. Thank you. Thank you. That answered my question perfectly. We can jump to the computing offload subgroup.
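Before the next subgroup, the three-module pipeline just described (perception, strategy optimization, scheduling) can be sketched in a few lines. The metric names and the weighted-cost strategy below are illustrative assumptions, not the platform's actual algorithm.

```python
# Sketch of the ubiquitous-scheduling pipeline described above:
# perception -> strategy optimization -> scheduling.
# Metric names and the weighted score are invented for illustration.

def perceive(sites):
    """Perception: collect the current status of compute/network resources."""
    return [s for s in sites if s["healthy"]]

def optimize(candidates, weights):
    """Strategy optimization: rank sites by a weighted delay + cost score."""
    def score(site):
        return weights["delay"] * site["delay_ms"] + weights["cost"] * site["cost"]
    return sorted(candidates, key=score)

def schedule(task, ranked):
    """Scheduling: carry out placement on the best-ranked site."""
    return {"task": task, "placed_on": ranked[0]["name"]}

sites = [
    {"name": "central-cloud", "healthy": True, "delay_ms": 40, "cost": 1.0},
    {"name": "edge-cloud", "healthy": True, "delay_ms": 5, "cost": 3.0},
    {"name": "partner-cloud", "healthy": False, "delay_ms": 10, "cost": 0.5},
]

# A delay-sensitive task weights latency heavily, so the edge site wins;
# the unhealthy partner cloud is filtered out by perception.
ranked = optimize(perceive(sites), weights={"delay": 1.0, "cost": 0.1})
print(schedule("video-ai", ranked)["placed_on"])  # edge-cloud
```

Changing the weights models the "personalized requirements on delay and cost" mentioned above: a cost-sensitive task would simply rank the cheaper central cloud first.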
Hello everyone. I'm Jingtao Wang. This is the computing offload subgroup. In this group, we are focused on the DPU. As we know, the DPU is called the third main data center chip, or processor, after the CPU and GPU. The core technical concept of the DPU is offload. We can offload many virtualization components from the CPU to the DPU to get a more flexible, lower-overhead and higher-performance cloud platform. So in this group, we proposed a DPU-based computing infrastructure in which the DPU software and hardware fusion layer is realized by cloud platform software and DPU hardware. This infrastructure contains five systems: management, network, storage, compute and security. The management system is focused on unified lifecycle management, including virtual machines, containers and bare metal. The network system is focused on VPC network acceleration, such as Open vSwitch, and some high-performance network protocols, such as RoCEv2, which is based on RDMA. The storage system is focused on disk device backend simulation and storage network protocol processing, such as iSCSI or NVMe. The compute system is focused on virtio performance optimization, such as virtio datapath acceleration, which we also call vDPA. And the security system is focused on data encryption and security. In this group, we will open source some technical solutions to help implement the DPU infrastructure, so everyone is welcome to contribute. Thank you. This is very exciting. I always get super excited about hardware acceleration. I'm curious, though, if you can tell us a little bit about how China Mobile is handling the challenges of the software and hardware coupling between the DPU and the cloud platforms. Okay, great question. So we know that cloud vendors and DPU vendors need to jointly develop and adapt their virtualization software on the DPU.
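The decoupling problem just raised can be illustrated with a minimal sketch: the cloud platform codes against one vendor-neutral offload interface, and each DPU vendor supplies a driver behind it. The class and method names here are hypothetical, not part of any real DPU OS or SDK.

```python
# Sketch of the "decoupled API" idea: cloud-platform code is written once
# against an abstract offload interface; vendor drivers plug in underneath.
# All names below are invented for illustration.

from abc import ABC, abstractmethod

class DpuOffload(ABC):
    """Hypothetical vendor-neutral interface for DPU offload operations."""

    @abstractmethod
    def attach_vnic(self, vm_id: str) -> str: ...

    @abstractmethod
    def attach_vdisk(self, vm_id: str, backend: str) -> str: ...

class VendorADpu(DpuOffload):
    # A real driver would call the vendor SDK (e.g. on top of DPDK/SPDK);
    # here we just record what was offloaded.
    def __init__(self):
        self.offloaded = []

    def attach_vnic(self, vm_id):
        self.offloaded.append(("vnic", vm_id))
        return f"vnic-{vm_id}"

    def attach_vdisk(self, vm_id, backend):
        self.offloaded.append(("vdisk", vm_id, backend))
        return f"vdisk-{vm_id}"

def provision_vm(dpu: DpuOffload, vm_id: str):
    """Cloud-platform code: identical regardless of which DPU sits below."""
    return dpu.attach_vnic(vm_id), dpu.attach_vdisk(vm_id, backend="nvme")

nic, disk = provision_vm(VendorADpu(), "vm-42")
print(nic, disk)  # vnic-vm-42 vdisk-vm-42
```

Swapping in a `VendorBDpu` implementation would leave `provision_vm` untouched, which is the whole point of defining the APIs once and letting vendors adapt beneath them.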
Therefore, we have defined many decoupled APIs and interfaces for network and storage acceleration, based on DPDK, SPDK and Open vSwitch. Based on these APIs, cloud vendors can easily use unified, standard DPU hardware drivers or SDKs to deploy their software. In addition, we also built an out-of-the-box DPU OS which can help cloud developers achieve less code refactoring. We will release it soon. Okay, thank you. Great, thank you. We can move forward to the computing native subgroup. Hello, everyone. I'm Lei Huang from China Mobile, and now I will introduce the subgroup named computing native to you. Indeed, we would more precisely name it heterogeneous accelerator migration technology. About the background: the intelligent computing ecosystem is mainly composed of middleware, frameworks, toolchains and hardware. Each vendor builds a toolchain around its own hardware and generates branch versions matching different AI frameworks. So the ecosystem has become diverse, and cross-architecture, cross-stack migration of upper-layer applications is extremely complex, which brings development challenges to application developers, computing force service providers and chip vendors. In order to face these ecosystem challenges, we are preparing the technology named computing native, or, as we can also call it, heterogeneous accelerator migration technology. The goal is to break the existing tightly coupled compile-and-execute toolchain ecology, to establish a new collaboration mechanism, to shield the underlying hardware differences and realize cross-architecture, non-sensing migration and execution of applications, to build a traction model for the intelligent computing industry chain with software as the core, and to prosper the ecology of the intelligent computing industry. As shown on this page, the technology architecture mainly consists of two layers.
The heterogeneous accelerator migration abstraction layer and the computing force pooling layer. Among them, the heterogeneous accelerator migration abstraction layer mainly includes native interfaces based on a unified programming model and converters, as well as a hardware-native stack formed by a cross-architecture compiler that compiles once and generates a unified, executable program format. The computing force pooling layer mainly consists of components for heterogeneous computing power resource management, scheduling and pooling, achieving unified management and pooled execution of heterogeneous computing power resources. Thank you. This is very interesting. I have a quick question for you as well. If I have code and a model relevant to the compute unified device architecture, or CUDA, can I directly translate and transfer it to the language that your platform supports? Okay, good. That is a good question. And yes, of course: through the computing native language converter tool, we will provide tools to convert existing CUDA code into the language that we support. You don't need to modify the original code to achieve the language conversion. We will upload this tool as a container image to the OIF community for everyone to use. We also welcome everyone to participate in the project and develop it together. Amazing. Thank you. This is all very interesting. I learned so much about the CFN concept and architecture as well as the working group so far. And I'm curious if you all can talk a little bit more about what you're working on, your progress and some of your achievements so far. Yes, sure. We have some progress we would like to share. Next slide, please. Okay. After launching the CFN working group in 2022, we participated in OpenInfra Days China and held the CFN forum. We had six topics shared by our partners, and we got over 18 participants from 18 companies to join the CFN forum.
And then we had a long table session in which each of us talked about their ideas and suggestions on how to run the CFN working group in open source and their initial plans for contributing to the CFN working group. And in April this year, at the openEuler developer event held by Huawei, in the conference hall display area, we had a large screen showing the collaboration between openEuler and the OpenInfra Foundation. Huawei is leading the computing offload sub-working group, and together with Huawei, we delivered a demo which offloads libvirt onto the DPU for virtual machine management. The operating system we used is openEuler, for both the DPU and the host. The DPU hardware is from Da Yu, which is a Chinese company. In July, at the OpenInfra Summit, we got three topics picked to explain in depth the CFN DPU solution and our thoughts about heterogeneous computing. But sorry, we weren't able to make it to the summit. Thanks to the OpenInfra Foundation for giving us this opportunity to attend OpenInfra Live. So here we are, glad we got some exposure for the CFN working group and technologies. Together with the OpenInfra community in China, we are preparing OpenInfra Days China 2023. We have a CFN track and we have received 13 submissions. We are currently making arrangements for OpenInfra Days China as well as the CFN forum, and in the CFN forum, we will officially release our CFN achievements. So that's all for the events. As for the technical group progress: as Qihui mentioned before, we launched four subgroups. For the use case and architecture subgroup, we have delivered a white paper on CFN overview and use case exploration, which explained the CFN definition and analyzed the capabilities, service types, typical technologies and common use cases. We are also working on a reference algorithm with PyTorch to help with intelligent operation and management of CFN resources and applications.
For ubiquitous scheduling, we delivered a reference functional architecture for the scheduling system. Currently we have the prototype, and we are working on the documents and the code; we will deliver the first version. For computing offload, as in the demo, we are offloading libvirt from the host to the DPU, and we have already got the guide and the code on the OpenDev repository and are working on updates. And for computing native, we have an initial solution for the cross-architecture compiler and runtime. We are also working on the documents, and the code will be uploaded soon to the repository. All of the code we mentioned in the subgroups is open source and managed on the OpenDev platform. Next slide, please. The background for the initial use case we chose is that we are facing diverse AI applications, and they may be running on different AI chips. So we are aiming to provide CFN infrastructure for AI applications. It can better support AI inference and training applications across vendors, across regions and across architectures for deployment and migration. The key technologies for this use case include the cross-architecture compiler and execution part, which is the pink blocks. This is the toolchain we use in the initial CFN infrastructure. With this toolchain, AI applications should be able to migrate between different AI infrastructures with no additional cost. The next technology is the distributed resource orchestration and scheduling system, which is the yellow block, as ubiquitous scheduling. It does the overall scheduling of tasks and can assign the most appropriate resource to each computing task. And performance acceleration with the DPU also plays a great part in this use case. We use the DPU card along with DPU software to help accelerate the computing tasks. So all the tasks covered in this initial use case will be included in the subgroups. Next slide, please. We have scheduled a release in which everything we mentioned about the subgroups will be delivered.
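The "compile once, migrate anywhere" idea behind this use case can be illustrated with a toy sketch: one artifact in a unified program format runs on whichever AI hardware the scheduler picks. The format tag, architecture names and "compilation" below are invented for illustration; a real cross-architecture toolchain is vastly more complex.

```python
# Toy sketch of the cross-architecture toolchain idea: compile once into a
# unified program format, then run the same artifact on different AI chips.
# All names here are hypothetical stand-ins.

UNIFIED_FORMAT = "cfn-unified-v0"  # hypothetical portable format tag

def compile_once(source_code: str) -> dict:
    """Stand-in for the cross-architecture compiler: emit one artifact."""
    return {"format": UNIFIED_FORMAT, "payload": source_code}

# Stand-ins for per-vendor runtimes that all accept the unified format.
RUNTIMES = {
    "vendor-a-gpu": lambda a: f"executed on vendor-A GPU: {a['payload']}",
    "vendor-b-npu": lambda a: f"executed on vendor-B NPU: {a['payload']}",
}

def run(artifact: dict, target: str) -> str:
    """Dispatch the same artifact to whatever architecture was scheduled."""
    if artifact["format"] != UNIFIED_FORMAT:
        raise ValueError("artifact must be compiled to the unified format")
    return RUNTIMES[target](artifact)

# The same compiled artifact migrates across vendors with no code change.
artifact = compile_once("resnet50-inference")
print(run(artifact, "vendor-a-gpu"))
print(run(artifact, "vendor-b-npu"))
```

This is the property the pink blocks promise: the application is compiled once, and migration between AI infrastructures becomes a scheduling decision rather than a porting project.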
We started this release in July, and, as we mentioned before, at OpenInfra Days China we will release everything, and we hope more people will join us and join the effort of the CFN working group. Since the release plan on the slide is all words, I will not go through the details of the plan. I think that's it for the share. Thank you. It is amazing progress. I'm so excited to see all this. And since this working group is an open source working group, in my experience, every group is always looking for people to get involved and participate. Can you maybe tell us a little bit more about who is the target audience for your working group and how people can get involved? Thank you for asking that. So currently we welcome every company and every single person, as long as you're interested in CFN or in the next generation of infrastructure. You can join as a developer, use case contributor, tester, speaker, or even a challenger: "I don't agree with that, I have different opinions." That's also welcome. The main technical areas could be cloud computing, heterogeneous computing, chips, artificial intelligence, network, protocols, SDN, orchestration and so on. Amazing. Thank you. And I would also like to remind people that if you are looking for pointers, you can find links and information on the openinfra.dev page. You will find a section there for the CFN working group with some links and further materials to learn about the group as well as to come and get involved in the working group. Since we have a little bit of time left, I wanted to ask one more question. I think it was slide 8 where you were talking about the scope of the working group, which is kind of wide. And I remember you were saying that a couple of components are covered in open source, and I'm wondering if you already have plans for the remaining pieces that were displayed on the diagram. Okay. Thank you, Ildiko.
I think this is a super good question. Firstly, I have to say that these uncovered technologies, the pink ones, are definitely worth exploring. But as the CFN working group is driven by use cases, we will first explore the valuable use cases, analyze the technical areas and the requirements within those use cases, and also try to figure out whether there is an existing software solution in the industry or in the open source communities. And then, after that, we will decide whether we will cover these new technical areas within the CFN working group, for example by adding a new subgroup or adding some new subgroup tasks. So for this question, we will keep exploring, but the use case is always the first thing that we have to do. Thank you. That sounds great, and I would like to encourage everyone in the audience who's watching live right now, or will be watching this video later offline, to look into the already existing use cases, or maybe bring your own and share them with the working group, and help them with the work of covering more and more components within the working group's scope. And with that, that was our show for today. I would like to thank all our amazing speakers, everyone from the Computing Force Network working group who joined the show today, as well as the others in the working group who are doing this amazing work. I would also like to thank our audience for joining us live, and if you're watching offline, I hope you really enjoyed the show. Our next OpenInfra Live episode will be on November 30th, and it will be another very exciting one. It will be about container security and Kata Containers. And if you don't know Kata Containers yet, you still have time to go and check it out before the episode, so you can focus on the new and exciting information in that episode.
We will have Zvonko Kaiser from NVIDIA with us on November 30th, and he will talk about Kata Containers as well as NVIDIA's use case and involvement in the project. I hope you will be able to join us for that show as well. And again, don't forget, this is an interactive show, so if you have a chance to join live, you can ask questions and share comments that will be included in the show. And if you have ideas about what else we should talk about here, you can also submit those on the ideas.openinfra.live webpage. So please go ahead and share your ideas with us, and our team will be in contact with you to get more details and talk about next steps. And with that, thanks again to today's guests and everyone in the audience. I hope you will have an amazing rest of your day.