Hello everyone, I'm Sankur Anganath, a Global Solutions Architect at Intel. I'm happy to be here virtually to talk about lessons from enabling a cloud-native O-RAN RIC for the edge, with my colleague Hasna Mustafa, who is leading the effort, along with Michael Rekia from Red Hat. To start, we have approached O-RAN RIC enablement using a set of essential building blocks from our Smart Edge Open project. Smart Edge Open is a cloud-native software framework we provide so you can build customized edge solutions on Kubernetes while abstracting the underlying hardware and network complexity for 5G. Some highlights of the framework: it uses a Kubernetes engine certified by the CNCF, offers additional optimizations for AI and media workloads, helps you utilize hardware security capabilities, and so on. Smart Edge Open provides the building blocks that let you optimize your solutions for the edge, such as high-performance data-plane constructs, accelerators for AI and wireless networks, zero-trust security, multi-access and green-edge constructs, and telemetry and monitoring, which form the crucial components for enabling the O-RAN RIC. As an easy-to-consume model, for example, someone who wants to deploy private 5G can take the Private Wireless Experience Kit from the catalog of experience kits, use it as-is, and scale from there. In terms of building blocks, the left side here lists the different capabilities available through Smart Edge Open. Through a cloud-native microservices model, and by aligning with the various 5G standards, many of these capabilities are being explored for the RIC. In collaboration with Red Hat, we are looking at utilizing the building blocks on the right to enhance the RIC so it can operate under specific SLAs.
Some of the aspects we are looking at are hardware-aware resource management with Kubernetes constructs, such as Node Feature Discovery, the Topology Manager, and core pinning, which are really impactful for RAN workloads. Utilizing SR-IOV and various CNIs, we are looking at fine-tuning a high-performance data plane for the RIC. OpenVINO-based inferencing for RAN intelligence and xApps, and custom hardware telemetry to ensure real-time SLAs within the RIC, are some of the other items we are enabling. An example reference solution, demonstrating the open-source SD-RAN near-real-time RIC with an AI-based connection-management xApp at an inferencing latency of less than 10 milliseconds, is available for download via the Intel Developer Catalog. With that, I'll pass it on to Mike to share further details on RIC enablement.

Thank you, Sankur, and thank you, Hasna, from Intel. Hi, everyone. My name is Michael Rekia from Red Hat. I am a global telco solutions architect, and today I would like to talk about the application of the Intel building blocks in three areas. First, the building blocks are applied to OCP proper, that is, the OpenShift Container Platform itself. Second, the building blocks are applied to a RAN component, or RAN platform, namely the RAN Intelligent Controller, and in particular the software modules that are major components of the RIC. Third, we'd like to talk about the RIC use cases and how the building blocks can be used to optimize the use cases themselves, or the end-to-end cellular use case. The output of all this is an optimized OCP, an optimized RIC, and optimized use cases in general. Looking at the first category for the study, OCP or OpenShift Container Platform proper, we'll be looking at how the building blocks can be applied to the native, internal capabilities of OCP.
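The sub-10-millisecond inferencing-latency figure quoted for the AI-based connection-management xApp implies a latency SLA check around each inference call. The sketch below is a minimal, stdlib-only illustration of such a check; `run_inference` is a hypothetical stand-in for a real model invocation (e.g., an OpenVINO request), and the 10 ms budget is the figure from the talk.

```python
import time

LATENCY_BUDGET_MS = 10.0  # SLA from the talk: inference must finish in < 10 ms


def run_inference(features):
    # Hypothetical stand-in for a real model call (e.g., an OpenVINO request).
    return sum(features) / len(features)


def timed_inference(features):
    """Run one inference and return (result, latency in milliseconds)."""
    start = time.perf_counter()
    result = run_inference(features)
    latency_ms = (time.perf_counter() - start) * 1000.0
    return result, latency_ms


def check_sla(latencies_ms, budget_ms=LATENCY_BUDGET_MS):
    """Return (p99 latency, True if the 99th percentile is within budget)."""
    ordered = sorted(latencies_ms)
    p99 = ordered[int(0.99 * (len(ordered) - 1))]
    return p99, p99 < budget_ms


latencies = [timed_inference([1.0, 2.0, 3.0])[1] for _ in range(100)]
p99_ms, within_sla = check_sla(latencies)
print(f"p99 latency: {p99_ms:.3f} ms, within SLA: {within_sla}")
```

A real xApp would feed live E2 indication data into the model and export the measured latencies to the telemetry pipeline mentioned above, rather than printing them.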
What are the things that are important from a performance perspective, a resource-consumption perspective, and a networking perspective? Those native components of OCP will be looked at, and the building blocks applied where they can be applied. The near-real-time RIC itself is composed of many software modules. We'll look at the architecture of the RIC in a moment, but there are some very important software modules, in particular in support of xApps; the E2 interface itself is a very critical interface. So can we apply the building blocks to the E2 client, for example, on the O-CU/O-DU? Can we apply the building blocks to the RIC processes associated with E2, for example the E2 termination, the E2 manager, and the xApp E2 subscription manager? And of course there are other RIC modules, like the RMR (the RIC Message Router), the data bus, and so on. Of particular interest is putting it all together in an end-to-end use case, the use case proper, as we call it; mainly, what is the cellular use case of interest? We'll probably be looking at proactive-maintenance-type use cases, where anomaly detection in the RAN is followed by traffic steering to move the UEs away from the anomaly. So we're going to look at that use case initially, and we're going to study a couple of areas. One is the network-layer flow, for example the packet flow, packet loss, throughput, and latency; the other is the impact on objects like the SDN controller, the RIC modules, the OCP modules, the network interfaces, and so on. Now, I mentioned the RIC a few times, and I mentioned the software modules involved. This is an architecture view of the near-real-time RIC, and you can see several different modules. So the idea here is: can we apply the building blocks to these different modules to gain some sort of value, or to optimize them in some way?
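The RMR, the RIC Message Router mentioned among the RIC modules, routes messages between RIC components based on numeric message types. The stdlib-only sketch below illustrates that routing idea only; the message-type constants, handlers, and payloads are illustrative, not the actual RMR API or its real message-type values.

```python
# Minimal sketch of message-type-based routing in the spirit of RMR
# (the RIC Message Router). The numeric values below are illustrative,
# not the real RMR message-type constants.
RIC_SUB_REQ = 1      # illustrative: xApp subscription request
RIC_INDICATION = 2   # illustrative: E2 indication carrying RAN data


class Router:
    def __init__(self):
        self._handlers = {}

    def register(self, msg_type, handler):
        """Associate a message type with the component that consumes it."""
        self._handlers[msg_type] = handler

    def route(self, msg_type, payload):
        """Dispatch a payload to the handler registered for its type."""
        handler = self._handlers.get(msg_type)
        if handler is None:
            raise KeyError(f"no handler for message type {msg_type}")
        return handler(payload)


router = Router()
router.register(RIC_SUB_REQ, lambda p: f"subscribed: {p}")
router.register(RIC_INDICATION, lambda p: f"indication received: {p}")

print(router.route(RIC_SUB_REQ, "cell-7"))
```

The real RMR adds wire-format framing, endpoint routing tables, and retry semantics on top of this type-to-destination dispatch.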
So, for example, can we use the building blocks on the message bus? Another thing we're looking at is potentially using Red Hat middleware as replacement parts for some of these RIC components, for example using AMQ Interconnect as a component of the message bus. Again, the use cases we're starting out with tend to be the proactive-maintenance telco use cases. That involves anomaly detection up front, with traffic steering to move away from the anomaly, for example interference or congestion. We will be starting with an O-RAN SC-provided near-real-time RIC and some xApps that are provided with the O-RAN SC near-real-time RIC, and then over time we'll be looking at potentially using some of our tech partners' RICs as part of the study. Some of the prerequisites for the testing include a far-edge footprint, for example a Nokia Open Edge form factor, a five-server form factor that can be deployed just about anywhere, externally or internally; that's great for mobile applications. If a far-edge footprint is not available, we'll start with a general-purpose machine, like an HP or a Dell. We intend to use the O-RAN SC E release initially, with OCP 4.10 and, of course, RHEL. And we're starting out in the lab with E2-SIM, eventually migrating to an actual O-CU/O-DU with actual E2 clients on it. So that's how we're starting out. The lab initially looks like this: on the left, we have the gNB emulation with E2 simulation. In the middle, we have the representation of the edge data center, the O-RAN SC block: OCP with the building blocks associated with OCP, the near-real-time RIC running on OCP, and then the xApps that are part of the use case on top of all of that infrastructure. And to the right, we're just blowing it up a little bit.
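The proactive-maintenance flow described above, anomaly detection followed by traffic steering to move UEs away from the anomaly, can be sketched as two small steps. This is a toy illustration, not O-RAN SC xApp code: the SINR threshold, cell names, and UE-to-cell structures are all assumptions for the example.

```python
SINR_ANOMALY_DB = 0.0  # assumed threshold: cells at or below this are anomalous


def detect_anomalies(cell_sinr_db):
    """Return the set of cells whose reported SINR indicates an anomaly
    (e.g., interference or congestion)."""
    return {cell for cell, sinr in cell_sinr_db.items() if sinr <= SINR_ANOMALY_DB}


def steer(ue_to_cell, anomalous, healthy_cells):
    """Hand every UE on an anomalous cell over to the first healthy cell."""
    target = healthy_cells[0]
    return {ue: (target if cell in anomalous else cell)
            for ue, cell in ue_to_cell.items()}


cell_sinr = {"cell-1": 12.5, "cell-2": -3.0, "cell-3": 9.8}
ues = {"ue-a": "cell-1", "ue-b": "cell-2"}

bad = detect_anomalies(cell_sinr)
healthy = sorted(set(cell_sinr) - bad)
print(steer(ues, bad, healthy))  # ue-b is moved off the anomalous cell-2
```

In the actual use case, the detection step would run as one xApp consuming E2 indications, and the steering step as another xApp issuing E2 control messages, with the building blocks applied underneath both.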
Then, as we migrate over time, we're going to have an actual gNB in the lab, again an O-CU/O-DU with E2 clients, that will support the use cases of interest for telcos. That's really the only change between the previous picture and this one. Again, the near-real-time RIC is in the middle, running on OCP; we can see some of the more important modules there, like the E2 termination and E2 manager, and we can see the building blocks distributed throughout the use case, including on the E2 clients, on the near-real-time RIC, on OCP, and so on. One of the things we're doing with Intel is working on some standards for certain functionality. For example, one has to do with communicating service assurance from the network layer to the cloud layer, with application of the building blocks to each of the layers. Of particular interest here is network slicing, where we have multiple subnets, and we want to apply the building blocks to each subnet and test this particular idea, which is to be able to meet the end-to-end SLA requirements on each of the slice subnets. So: application of the building blocks to each of the network-slice subnets and to each of the components within the subnet, the components we talked about earlier, like OCP and the RIC and so on. Another function we're working to standardize is the concept of mapping microservices on the user plane to particular xApps running on the O-RAN RIC. Basically, the idea is that you want to optimize the RAN for the particular services running on the user plane. Say I've got edge services on the MEC, for example gaming or virtual-reality rendering; I want to make sure the RAN is optimized for that particular capability. So I communicate that from the MEC to the RIC, and then the particular xApps associated with optimizing for that particular service are invoked and executed.
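The service-to-xApp mapping just described, where the user plane tells the RIC which edge service is active and the RIC invokes the xApps tuned for it, reduces to a lookup from service profiles to xApp lists. The service names and xApp names below are hypothetical, chosen only to make the shape of the mapping concrete.

```python
# Hypothetical mapping from user-plane edge services (running on the MEC)
# to the xApps that optimize the RAN for each service.
SERVICE_TO_XAPPS = {
    "vr-rendering": ["low-latency-scheduler-xapp", "traffic-steering-xapp"],
    "cloud-gaming": ["low-latency-scheduler-xapp"],
    "video-streaming": ["throughput-optimizer-xapp"],
}


def xapps_for(service):
    """Return the xApps the RIC should invoke for an active user-plane service."""
    try:
        return SERVICE_TO_XAPPS[service]
    except KeyError:
        raise ValueError(f"no xApp profile registered for service {service!r}")


print(xapps_for("vr-rendering"))
```

Standardizing this function would mean agreeing on the service-profile identifiers the MEC sends and on how the RIC's xApp manager resolves and launches the corresponding xApps.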
And again, you can see the intent here is to execute the same capability, but also to apply the building blocks as we do it. And that concludes my portion of the talk. Thank you very much.