All right, hello. Today I will give a presentation about the Near Real-Time RIC platform. RIC stands for RAN Intelligent Controller. In this presentation we actually have two speakers: for the last 10 minutes, Matti, the PTL of the RIC xApp project, will highlight some aspects of the xApp project. Before that, I, Thoralf, the PTL of the Near Real-Time RIC platform, will give an introduction to the significant changes and updates that we made in the platform over the last year. As a reminder, the Near Real-Time RIC is a platform that hosts xApps. These xApps receive data streams via the E2 protocol from the RAN, for instance from a gNodeB. By analyzing the data, possibly with machine learning algorithms, they can control and optimize RAN behavior. They change this RAN behavior either via control messages or policy messages that are sent over E2 to the RAN. So let's jump to the first slide: a quick summary of what the Near Real-Time RIC project is. It's a sub-project of the O-RAN Software Community. We work closely with a separate organization, the specification organization, the O-RAN Alliance. Within the O-RAN Software Community there are multiple other projects, including Matti's xApp project, but also, for instance, a project that implements the CU, the centralized unit. All the source code is stored in Gerrit, and the exact list of components that make up the Near Real-Time RIC platform you can see from this link. You can obviously click the links in the presentation, which is downloadable from the Linux Foundation schedule page. All the source code is distributed under the Apache 2 license. And as a reminder, I mentioned this already: the specifications are worked on by the O-RAN Alliance, which is a separate legal organization; most specifically, working group 3, which works on the E2 specification, and working group 2, which works on the A1. 
All the specifications are downloadable from the O-RAN Alliance site or from the member web page, depending on whether you are a member or not. On the next slide I have a bit more detail on how we try to align the schedule of the O-RAN Software Community, in the lower part of the slide, with the O-RAN Alliance in the upper part, the blue boxes and the red lines. As you see, on the O-RAN SC side, the implementation side, we try to do two releases per year, every half year. One we have just accomplished in June; that's the second release, which we call Bronze. The next one will be Cherry, which is scheduled for release in December, and then we start the next one for next year. As a reminder, the first release, Amber, we actually released last year in December, and the project started in June 2019. What this slide also shows is that two releases per year doesn't align very well with the specification side, where we have alignment with the Mobile World Congress, the green bullets here; they actually publish release specifications in February, July, and November every year. With Bronze, we were happy that the first E2 specification actually came out in February, so that we could implement it in the Bronze release. That was a big theme of the Bronze release: to adapt from a pre-specification E2 to the released, published specification. For the next release, Cherry, we are not so lucky: the next specification update will come in November. So in Cherry we'll stick with E2AP version 1.0, and then move, most likely only in Dawn, to the next release of the specification. If I move on, and we'll stay for some time on this slide, this is a bit about the architecture of the Near Real-Time RIC. One thing I wanted to highlight here is that if you look at the upper right corner, this is the Near Real-Time RIC with all its components, including xApps. 
And then in the lower right corner, we have the E2 nodes that are controlled by the Near Real-Time RIC ("managed" would be the wrong word, it is "controlled"). These are, for example, CUs, or in a 5G architecture a gNodeB or an eNodeB; all of these can be E2 nodes. A single E2 node always has exactly one E2 connection to the Near Real-Time RIC, and of course, in the other direction, the Near Real-Time RIC can have many E2 connections to the various nodes: it can be 10, 100s or 1000s. These connections are handled by E2 termination instances. One of the significant changes that we made was that pre-specification, we were assuming that E2 connections, including the SCTP in them, were established from the Near Real-Time RIC towards the E2 node. That was reversed: now the E2 node actually connects to the Near Real-Time RIC. That caused a lot of changes in the Near Real-Time RIC code base. As part of this change, the configuration update procedure was removed from the specification, so from our perspective, removed. Some other messages and procedures, load indication and status reporting, were removed as well. Configuration update, interestingly, is coming back in E2AP version 1.1 in November, so we will have to re-implement it. If we move on to the second bullet here, what I wanted to highlight also is that the RIC platform deals exclusively with the E2AP protocol, the E2 Application Protocol. This is important because E2 is a layered protocol, and the actual logic on top of E2AP is implemented in E2 service models, indicated here by those colorful boxes: red, green, blue, and so on. These E2 service models are opaque to the Near Real-Time RIC platform. 
So even if the E2AP protocol as such defines, for instance, a concept of trigger, it is the E2 service model for a specific functionality on the E2 node, for instance for a network interface like X2AP or E1, that defines how this trigger is actually expressed in terms of message types or information elements for this specific function. This becomes an agreement between the xApp implementation and the implementation on the E2 node, on the CU or on the gNodeB. From the platform point of view, we are actually fully unaware of this agreement. One thing, and that's the E2 manager part of the Near Real-Time RIC platform in the upper right corner: it collects the information on which E2 service model functions are implemented by each of those E2 nodes as part of the E2 setup, the establishment of the connection, and stores this in a RAN function database. This information is available to xApps, to query which E2 nodes actually support the E2 service model functions that they require. This is maybe more important in multi-vendor situations than in any other situation. Yep, so if we move on, this is the implementation status of E2AP, the basic protocol, in the Near Real-Time RIC. You see we have not implemented RIC service update, RIC service query and reset yet. All the other messages are already implemented and can be used. The three procedures not implemented yet are being worked on, and my hope is actually that still during this release we might get some late commits related to those. So if we move on, this is a reminder of how the procedures are actually used in the Near Real-Time RIC. On the left-hand side, typically the combination of report indications and policy subscriptions makes up an xApp. A typical xApp using this path would subscribe for report indications and, after the subscription, get indications from the RAN node. 
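The RAN function database lookup described above can be sketched as a simple filter over per-node capability lists. This is a hedged illustration only: the node IDs, service model names, and the `nodes_supporting` helper are invented for the example and are not the actual E2 manager API.

```python
# Sketch of the RAN function database idea: the E2 manager records, per E2
# node, which E2 service model (E2SM) functions the node announced during
# E2 setup, and xApps query for nodes that support what they need.
# All names here are illustrative, not the real platform API.

ran_function_db = {
    "gnb-001": ["E2SM-KPM", "E2SM-NI"],
    "gnb-002": ["E2SM-KPM"],
    "enb-101": ["E2SM-NI"],
}

def nodes_supporting(db, function_name):
    """Return the E2 node IDs whose setup advertised the given function."""
    return sorted(node for node, funcs in db.items() if function_name in funcs)

print(nodes_supporting(ran_function_db, "E2SM-KPM"))  # ['gnb-001', 'gnb-002']
```

In a multi-vendor deployment this kind of query is what lets an xApp subscribe only on nodes that actually implement the service model it speaks.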
It will get from the RAN node indications on, for instance, receiving a message on a network interface, and that message will be copied in full or in part to the RIC. The xApp receives this from the Near Real-Time RIC platform, analyzes it, and based on this changes the policy or the behavior of the RAN by making a policy subscription. As you see here, these policies include the trigger and the policy, and the RAN will autonomously apply those policies. Another way to use the E2 protocol in an xApp, which we see on the right-hand side, starting at the lower right, is to make a subscription for an insert indication. The key difference to a report indication is that the RAN is expected to halt or suspend call processing, the processing of this procedure. Typically it will send the event or message that it has received to the RIC as an indication, and the RIC will analyze the message and respond. That's what we see in the upper right corner with the control message, for instance rejecting a message, modifying it, or telling the RAN how to continue processing this message. So only after the control message is received will the RAN resume processing. I have two slides which explain this in more detail. First, again the policy mode, what we saw on the previous slide on the left-hand side. Here you now see a gNodeB communicating with the RIC; it continuously sends a stream. If you start with message number one at the lower right, a continuous stream of indication reports goes to the E2 termination, a RIC platform component, which sends this information to the xApp. The xApp applies its logic, machine learning algorithms for example, and will then, via a new interface that we introduced in the Bronze release and that we are reworking in Cherry, send the subscription to the subscription manager. 
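The contrast between the report path and the insert/control path can be sketched as a tiny simulation. The message names and decision logic below are invented for illustration, not E2AP encoding; the point is only that report mode never suspends the RAN, while insert mode leaves it suspended until a control decision arrives.

```python
# Toy model of the two E2 interaction patterns described above.
# Names and decision logic are invented for illustration only.

def handle_report(indication):
    # Report mode: the RAN continues processing; the xApp may later
    # adjust behavior by issuing a policy subscription.
    return {"ran_suspended": False, "action": "analyze"}

def handle_insert(indication):
    # Insert mode: the RAN has suspended the procedure and waits for a
    # control message telling it how to continue (accept/modify/reject).
    decision = "reject" if indication.get("load") == "high" else "continue"
    return {"ran_suspended": True, "control": decision}

print(handle_insert({"load": "high"}))  # RAN resumes only after this control reply
```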
In this case it's a policy subscription, and the subscription manager might apply security checks or merge subscriptions from multiple xApps and send those to the gNodeB. The new thing that we are implementing now is that initially this interface between xApp and subscription manager was ASN.1-based, using the E2AP messages, and we are changing it to a REST-based interface in Cherry. For completeness, the next slide shows the simpler message flow; it's perfectly possible already now to implement xApps using insert indication messages and control, a very simple low-latency case. The gNodeB sends an E2 indication to the E2 termination, the E2 termination forwards it to the xApp, which analyzes it and responds with a control message back via the E2 termination to the gNodeB. So those are the two main ways to implement xApps using E2. In this slide, which is probably quite difficult to read, I wanted to highlight again the reversal of the connection establishment and how we generally changed the E2 setup. You see on the left-hand side how it used to be implemented pre-specification in the Amber release, and on the right-hand side how we now implement it against the published specification. What I want to highlight here is that the first message on the left-hand side is an E2 setup that comes from the RIC platform and goes to the RAN. This has now changed: on the right-hand side, the E2 setup request actually comes from the RAN to the RIC platform. What the RIC platform does next is extract the RAN capabilities, those E2 SM functions that I talked about earlier, into the RAN capability database of the RIC platform, so that xApps can query it, and then it responds back to the RAN that the E2 setup is completed. This very same extraction was previously done, on the left-hand side, with the last four messages. 
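With the move from ASN.1 to REST, a subscription becomes an HTTP request carrying a JSON body. The sketch below is a guess at the general shape of such a request; the field names and the subscription manager URL are invented, not the actual Cherry API.

```python
import json

# Hypothetical JSON body for a policy subscription sent by an xApp to the
# subscription manager over REST. Field names are illustrative only; the
# real Cherry interface may differ.
def build_subscription(node_id, xapp_endpoint, trigger, policy):
    return {
        "e2NodeId": node_id,             # which E2 node to subscribe on
        "clientEndpoint": xapp_endpoint, # where indications should be routed
        "subscriptionType": "policy",
        "trigger": trigger,              # when the RAN should apply the policy
        "policy": policy,                # what the RAN should do autonomously
    }

body = build_subscription("gnb-001", "ts-xapp:4560",
                          {"event": "handover"}, {"action": "steer"})
print(json.dumps(body, indent=2))
# An actual xApp would POST this to the subscription manager, e.g.
# requests.post("http://subscription-mgr:8088/subscriptions", json=body)
# (URL and path are assumptions for the sketch).
```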
So this is now simplified into one message exchange instead of two, simplifying the E2 setup. The RAN configuration update, which is in the middle of the left-hand side, was actually removed, as I mentioned earlier, and will most likely come back later. If we move on to the next slide, quickly going through a couple of other changes that we made in the Near Real-Time RIC platform over the last year: we introduced a new component, the O1 mediator. The O1 mediator mediates between the xApp and the management platform, so it's the interface towards the network management or service management and orchestration platform. The A1, which you also see here, is currently almost like a configuration interface as well, but essentially an interface defined by O-RAN for conveying high-level declarative policies from the management platform, the Non-Real-Time RIC, into the Near Real-Time RIC. On the O1 side it's NETCONF-based. It exposes, for instance, a configuration interface for xApps, it manages alarms that xApps might want to send, and it forwards and makes available metrics provided by xApps. On the A1 mediator side, we changed the implementation, and that's a theme we had in many other components of the Near Real-Time RIC platform as well: first of all, it stores its policies for persistence in the shared data layer and Redis, so that over a restart in a container platform this information is maintained. The other thing is that it provides statistics via a Prometheus interface, which we can then send over O1 to the outside world. Two slides left to go. Some changes we made on the RMR side; RMR is the RIC Message Router library. What we did there is, first of all, more reliable route distribution using the routing manager. 
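The persistence point, storing A1 policies in the shared data layer so they survive a pod restart, can be sketched with an in-memory stand-in for Redis. The class, the namespace/key convention, and the policy-type number shown are all invented for illustration; the real SDL library has its own API and key scheme.

```python
# Minimal stand-in for SDL-style namespaced storage. A dict plays the role
# of Redis here; in the platform, the shared data layer writes to a Redis
# deployment so that policies survive container restarts.
class FakeSDL:
    def __init__(self):
        self._store = {}  # in the real platform: Redis

    def set(self, namespace, key, value):
        self._store[f"{namespace},{key}"] = value  # illustrative key scheme

    def get(self, namespace, key):
        return self._store.get(f"{namespace},{key}")

sdl = FakeSDL()
sdl.set("a1-policies", "policy-type-20008/instance-1", '{"threshold": 5}')
# After a simulated restart of the A1 mediator, the policy is still there:
print(sdl.get("a1-policies", "policy-type-20008/instance-1"))
```

The namespace prefix is what lets multiple components (A1 mediator, xApps) share one Redis backend without key collisions.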
Then we also did a major rewrite, replacing a library that we previously used, Nanomsg Next Generation (NNG), where we had some problems with well-defined behavior in high-load situations and where we also had problems with achieving low latency. So we did a rewrite that we call SI95, and that's now part of the RIC platform. On the Redis and SDL side, we moved from a single Redis instance to an HA deployment, right now using Sentinel. We are also investigating how we can move to Redis cluster here. On the E2 side, where we have the E2 termination as one component and the E2 manager as the other, the E2 termination now supports Prometheus for interface statistics and metrics. The E2 manager has an interesting support for a big red button, where we can withdraw the RIC from the RAN, either by closing all E2 connections and waiting for re-establishments, or by closing them and not accepting any new E2 connections. The E2 manager also manages multiple E2 termination instances. Right now this is a static mapping of E2 termination instances to E2 nodes, so no dynamic scaling. This is probably something we want to work on in the next year, in combination with some capabilities that come in the new E2AP protocol. This information is managed by the E2 manager and provided to the other entities that need it in the Near Real-Time RIC platform. As a reminder, what you see here: all the yellow parts are the Near Real-Time RIC platform. The blue parts in the middle are concrete xApps that are developed on top of the Near Real-Time RIC platform and that Matti will talk about further. He will also talk a bit about the xApp framework APIs used by the xApps. Coming to the last slide, upcoming changes: we're planning to adapt to the new E2AP protocol. There are changes related to the configuration update; I talked about this quickly. There are changes to the transport network layer on the E2 side. 
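The "big red button" behavior just described has two modes, and a minimal sketch of the state logic might look like the following. The class and method names are invented; the real E2 manager exposes this via its own APIs.

```python
# Toy model of the E2 manager's "big red button": withdraw the RIC from
# the RAN either temporarily (drop connections, allow re-establishment)
# or fully (drop connections and refuse new ones). Illustrative only.
class E2Manager:
    def __init__(self):
        self.connections = {"gnb-001", "gnb-002"}
        self.accept_new = True

    def big_red_button(self, allow_reconnect=True):
        self.connections.clear()           # close all E2 connections
        self.accept_new = allow_reconnect  # optionally refuse new setups

    def on_e2_setup_request(self, node_id):
        # Since the connection reversal, the E2 node initiates setup.
        if not self.accept_new:
            return False  # setup rejected while the RIC is withdrawn
        self.connections.add(node_id)
        return True

mgr = E2Manager()
mgr.big_red_button(allow_reconnect=False)
print(mgr.on_e2_setup_request("gnb-001"))  # False: new connections refused
```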
This will give us scalability and better failover behavior. And then a small but quite important change: it introduces object identifiers for E2 service models, which makes it actually easier to find the correct E2 service model that you want to use in an xApp, again especially important in multi-vendor environments. That being said, this ends my part of the presentation, and I will hand over to Matti. Thank you, Thoralf. As Thoralf mentioned, I'm the PTL of the RICAPP project in the O-RAN Software Community. The focus of that project is to develop open-source xApps for this RIC platform. Partially this is just to demonstrate how to build xApps using the xApp SDK and the frameworks, but also to support the O-RAN SC end-to-end use cases. There is a set of use cases that want to exercise the different layers of the O-RAN architecture. The main use case is traffic steering; I'll talk a little bit about the xApps related to that. We have code contributions currently from three companies and we are, of course, hoping for more, and we already have some negotiations underway. So maybe even for the Cherry release we'll have a couple of additional contributors. The contributors can choose whether they want to contribute code under Apache 2 or the O-RAN Software License. Currently we support C++, Python and Go as the programming languages. So let me first talk a little bit about the xApps. We have about half a dozen xApps currently in the RICAPP project. Here I'm highlighting the xApps related to the traffic steering use case. The idea of the traffic steering use case is to use the xApps and the Near Real-Time RIC to basically control in which cell a UE should be residing, or which should be the serving cell. So if a UE is having poor performance in its current serving cell, the traffic steering use case would basically have this UE handed over to a different cell where it hopefully has better performance. 
In the Bronze release, we managed to complete the basic components of the traffic steering logic. We have the main traffic steering xApp that makes the decisions, and then a predictor component that predicts the performance of a UE if it were handed over to a neighboring cell. In the Cherry release, we are planning to include the KPIMON xApp from Samsung, which actually collects the metrics from the RAN using the KPM E2 service model; the metrics will be stored in an SDL namespace and used by the traffic steering xApps. We're also working with HCL on an anomaly detection xApp that uses this information to raise anomalies on either UEs or cells. And we will also implement the actual control, probably using a pre-spec E2 SM, since the final one won't be ready in time. We have a couple of other xApps that are not tied to the traffic steering use case: the Hello World xApp, which demonstrates how to implement all the xApp functionality in C++, and a measurement campaign xApp that, given a stream of X2 messages, calculates various RAN metrics. So from the xApp point of view, the big question is really: how do all the interfaces get exposed to it? The xApp, of course, exists in the Near Real-Time RIC platform; the different working groups in O-RAN are defining different interfaces like A1, E2 and O1, and of course they use different encoding formats like ASN.1, et cetera. So we need to figure out how the xApp interfaces with these protocols and external entities. In addition to interacting with the outside world, the xApps also interact with other xApps, either directly via communication or indirectly via the SDL layer, where one xApp produces some data and another consumes that data through the SDL. So that's where we get to the xApp SDK: how to build xApps that take advantage of these different features. 
I like to present the xApp SDK as this layered cake diagram, which shows that an xApp writer can pick and choose what capabilities of the SDK they want to utilize. The simplest xApp would basically just utilize Kubernetes and the xApp descriptor and lifecycle management. The xApp descriptor specifies some parameters for the xApp that are then used to construct the Helm chart to deploy the xApp. So the simplest xApp would just deploy on the RIC platform and not do anything. But if the xApp wants to, for example, do messaging or use the shared data layer, it would include the libraries related to that. If it wants to use the R-NIB, where the information is stored about which E2 nodes the RIC is connected to and what the cells are, it would likewise include the appropriate library; there could potentially be other NIBs with information about UEs, measurements, et cetera, and if the xApps want to utilize that information, they would include the appropriate libraries. Early on, in the Amber release, we implemented rather primitive APIs for xApps to interact with A1, O1 and E2. In the Cherry release, we're building the higher-level interfaces in Go, C++ and Python, the goal of which is to make it very easy for xApp writers to access these interfaces and utilize all these facilities. And of course, xApps will build on services provided by other xApps, so it's not only the platform services that are used by the xApps. So what are these high-level interfaces? We are implementing them in the xApp frameworks. You can think of an xApp framework as a library or a set of packages that you include in your xApp, and the goal is really to make it easy to write xApps. The initial xApps were pretty complicated, partially because people were not familiar with ASN.1 and with all these other new interfaces. Now, based on the experience of building a few xApps, we are identifying what is common to different xApps and what is the unique part of a particular xApp. 
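The descriptor-driven deployment mentioned above can be illustrated with a small parser that turns descriptor fields into deployment values. The descriptor fields below are invented for the example and simplified; the real xApp descriptor schema is defined by the platform, and actual values would feed Helm chart generation.

```python
import json

# Hypothetical, simplified xApp descriptor: parameters the xApp writer
# declares, from which deployment artifacts (e.g. a Helm chart) would be
# generated. Field names are illustrative, not the official schema.
descriptor_json = """
{
  "name": "example-xapp",
  "version": "1.0.0",
  "containers": [{"image": "example-xapp:1.0.0"}],
  "messaging": {"rmr_data_port": 4560}
}
"""

desc = json.loads(descriptor_json)
# Extract the values a chart generator would need:
helm_values = {
    "image": desc["containers"][0]["image"],
    "rmrPort": desc["messaging"]["rmr_data_port"],
}
print(helm_values)  # {'image': 'example-xapp:1.0.0', 'rmrPort': 4560}
```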
We are taking the common code and putting it into the xApp framework as library code, specifically looking at the interfaces E2, O1, A1 and messaging. For configuration management, an xApp will receive its initial configuration when it starts, and that configuration may be updated while the xApp is running. In the initial versions, the xApp writer would need to know where to look for this configuration file and how to detect that the configuration file had changed. With the new framework's high-level abstraction, the xApp writer only needs to write a function that handles the configuration update. They don't need to know how that configuration file gets to the xApp or when it's updated; they only need to provide a function. Similarly, for the O1 metrics and fault reporting, we are providing interfaces for reporting a key-value metric or raising and canceling alarms. The xApp doesn't need to know how these things are communicated within the RIC or to the OAM layer. Similar to the O1 CM, for A1 policies (A1-P) the xApp only needs to declare what policy types it supports and provide a function for processing policy instance messages, JSON payloads. It doesn't need to know how to communicate with the A1 mediator, and it doesn't need to know how the A1 mediator finds the right xApp to send the messages to. And Thoralf already mentioned some of the initial steps towards abstracting E2, like the subscription interface providing a REST interface rather than requiring the xApp to construct the E2 subscription message. This is still somewhat of a study item, but we are hoping that in the Cherry release we'll have some concrete steps even beyond this REST interface. And for messaging, the RMR messaging layer provides a very simple abstraction for xApps to send and receive messages. It eliminates the need to do service discovery: xApps just construct a message and send it, and the RIC platform will take care of delivering it to the right destinations. So that concludes my portion of the talk. 
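The callback pattern described here, where the xApp writer supplies only a handler function and the framework takes care of watching the configuration, can be sketched as follows. The class and method names are invented for illustration; the real Go/C++/Python xApp frameworks expose their own equivalents.

```python
# Sketch of the framework-side pattern: the xApp registers a handler and
# never deals with file paths or change detection itself. Names invented.
class FrameworkConfigWatcher:
    def __init__(self):
        self._handler = None
        self.config = {}

    def on_config_change(self, handler):
        self._handler = handler  # the only thing the xApp writer provides

    def _config_file_updated(self, new_config):
        # In a real framework this would be triggered by a file watch or
        # platform notification; here we call it directly to simulate it.
        self.config = new_config
        if self._handler:
            self._handler(new_config)

seen = []
watcher = FrameworkConfigWatcher()
# The xApp writer's entire configuration-management code is this one line:
watcher.on_config_change(lambda cfg: seen.append(cfg["threshold"]))
watcher._config_file_updated({"threshold": 7})
print(seen)  # [7]
```

The same inversion of control applies to the alarm, metric, and A1 policy handlers Matti mentions: the xApp declares what it handles, and the framework routes the events.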
At this point, I guess we welcome questions from the audience. Thank you.