Yeah. OK, welcome everybody. This is our talk at the Open Networking and Edge Summit about building a cloud native reference architecture for the telco cloud, which is an update on Anuket RA2, and we will unpack all of these cryptic things from the title during the talk. My name is Gergely Csatári, and I am working in the open source program office of Nokia as a senior open source specialist. And my partner in crime: Hello, I am Riccardo Gasparetto Stori. I am the work stream lead for Anuket RA2, and my current role is principal cloud architect for Vodafone Group, where I oversee the strategy and architecture of cloud platforms for networks.

Very good, so let's start by explaining our agenda. We will discuss what Anuket is: what its aim is and how it tries to achieve it. Then we will talk a bit about the specification projects inside Anuket, and then focus specifically on Reference Architecture 2. We will cover the history of RA2, so what happened in the past, then look a bit into the future, and of course we will explain how to get involved in the work we are doing there.

OK, so let's talk about Anuket in general a bit. The aim of Anuket is to define, build and conformance-test reference cloud infrastructures for telecommunications, to enable faster integration of the infrastructures and of the applications running on these infrastructures. Anuket does this by running several subprojects. There are the specification projects, which we will cover in this talk in more detail: we have the Reference Model, which is an abstract model of cloud infrastructure; we have the reference architectures, several reference implementations and reference conformance. I will talk about these in a bit more detail on the next slide, but let's talk
about the rest of the projects. Anuket also provides, or hosts, feature projects, which are actual implementations of specific features exploring different technologies. It hosts projects which test either cloud infrastructures or specific parts of cloud infrastructures, it hosts test tooling and deployment projects, and it also hosts a lab as a service for the participants of the project.

From all of these different projects, let's focus on the specification projects. On the next slide I will describe how these projects interact with each other. The first one is the Reference Model. This is an abstract model of a cloud infrastructure, describing all of its properties in a non-technology-specific way and assigning specific values to these properties. From this Reference Model, two reference architectures are created, which are technology specific: there is one for OpenStack and there is one for Kubernetes. These are specifications of cloud infrastructures from the application's point of view, so here we specify what kind of properties a Kubernetes or OpenStack based cloud infrastructure should have to run the workloads without any problem. Our focus today is the Kubernetes based reference architecture, which is also referred to as RA2.

Based on these reference architectures, there are reference implementations, which are actual integrated solutions that can be installed and tried by anyone. These reference implementations are compliant with the specifications of the reference architectures, and because of this they are also compliant with the Reference Model. Now the question is how to test whether something is compliant. The solution for that is the sets of conformance tests defined in the reference conformance projects: we have one conformance test set for OpenStack and one conformance test set for Kubernetes.
These are based on the tooling of the testing projects of Anuket. On the next slide I will discuss a bit the focus of Reference Architecture 2. What we wanted to avoid is specifying everything, so we had to draw the lines, and we drew the line in a way that we do not specify the CNFs themselves, that is, the applications which are running on these infrastructures. We specify everything which is in the infrastructure and affects how the CNFs are executed. So we are specifying the orchestration layer itself, Kubernetes; we are specifying all the extensions of Kubernetes which are critical to run the workloads, so we have specifications around, for example, what kind of CNIs we expect, and we have expectations on the different other extensions of Kubernetes. We are also specifying, on some level, the lifecycle management of Kubernetes, or at least we are specifying requirements for it. For the details of what is specified and how we specify it, I hand over to Riccardo.

Thank you, Gergely. Yes, so this is the progress so far. This is the structure of the RA2 documentation, divided into chapters. Here I will explain what exactly each chapter does and at which point we are fleshing out the content. We start with the architecture requirements and an overview. These are the requirements that come from the Reference Model, and we use them to trace what the requirements mean in terms of specifications and building blocks later in the document. The third chapter introduces the high-level architecture: what the building blocks of the CaaS, the Kubernetes platforms, are, how they interface with each other, and of course what each building block does. That will be similar to the diagram that Gergely was showing earlier. The component-level architecture is the most substantial part of the document: here we introduce all the specs and the rules for the clusters to be compliant with this architecture.
So here we have rules about nodes, about sizing, about the processes of the Kubernetes control plane. A lot of space is given to networking and to the plugins and extensions of Kubernetes clusters, which is a rather important topic. And of course we have policies, specs on storage, and requirements for workloads, for example packaging; that is for consumers of the platform, to ensure that the workloads can be onboarded on top of Kubernetes clusters compliant with this architecture. Chapter five is about security, so policies and rules for securing the clusters. Chapter six is the link between the architecture and the conformance testing: here we map the specifications and the rules from the earlier chapters to special interest group features and to functional tests from the Functest project, which are run to certify that an implementation is compliant with this architecture. Finally, in chapter seven we talk about what is missing: what the gaps are, what innovation projects we are tracking and what developments are happening in the ecosystem. Here we get a lot of feedback and new items coming from various actors, so if anybody wants to contribute, you are absolutely welcome to add your input. We finish with an appendix on multi-tenancy; this is about separating workloads on the same platform in a way that they can be isolated and not interfere with each other on a common platform.

So this is the document structure and what we have done so far in the past releases. The release schedule is as follows: we target three releases per year on average. We are working on the seventh release; we have just released the sixth, Kali, in June, and content creation is in progress for the next one, Lakelse.
At the beginning of each release we decide which milestones, and which dates, we are going to hit. Of course we decide what to do, what the scope is for each project in Anuket for that release; so in RA2, for example, we decide in the first few weeks what issues, what capabilities of Kubernetes and what sections of the document we have to update for this release. Then the bulk of the work is concentrated around the third milestone: the content gets produced in the central months. And then of course we finish with proofreading and making sure that the release is ready for publication; this will happen, for example, at the beginning of December for the current release.

So the last release was Kali, and what we did in it is summarized on this slide. We have mapped a specific release of Anuket to a specific release of Kubernetes: for example, 1.21 for Kali, and we will target 1.22 for Lakelse, the version 7. Then we added API and feature gate specifications, so that it is clear what APIs and features are mandatory for implementations to be compliant with the architecture; the policy here, for example, is to allow only GA and beta APIs and features. We had a concept of node profiles, labeling nodes depending on hardware specifications and on the ability to host general-purpose or network-intensive workloads, but we extended that concept by adding extension labels that assist in a more granular fashion, labeling things like hardware acceleration configurations, latency configurations, or even the geographical distribution of nodes, such as edge versus core. And then we added functional blocks that are relevant for telco workloads: we added the definition of custom resources and operators, and things that are very important for hardware acceleration, so the device plugin framework and node feature discovery have also been added. We have added the Memory Manager as well.
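To make the node-profile idea more concrete, it can be pictured with ordinary Kubernetes node labels. This is only an illustrative sketch: the label keys and values below are hypothetical, not the exact keys defined in RA2.

```yaml
# Hypothetical node labels illustrating the profile + extension-label concept.
# RA2 defines its own label keys; these are placeholders for illustration.
apiVersion: v1
kind: Node
metadata:
  name: worker-01
  labels:
    # base profile: general-purpose vs. network-intensive workloads
    profile.example.org/node-profile: network-intensive
    # extension labels adding more granular detail
    profile.example.org/hw-acceleration: sriov
    profile.example.org/location: edge
```

A workload could then be steered to matching nodes with an ordinary `nodeSelector` (or node affinity) referencing these labels, which is what makes the profile concept consumable by CNFs without any non-standard machinery.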
We have also added new specs for the high availability and network resiliency of the Kubernetes control plane; in telco environments this is a very important topic, and we are committed to ensuring the highest standards for platform availability. Finally, we have also started introducing the concepts of cluster lifecycle management: we introduced the definition of a CaaS manager, which is the entity that manages the lifecycle of Kubernetes clusters.

For the next release, Lakelse, we have frozen the scope, and work is in progress on the content. This slide shows what we are targeting for this release. We are targeting the upgrade to Kubernetes 1.22; that includes deprecating all the APIs and features that were removed and are no longer supported in this release, but also adding all the new features and APIs that are available in this new release of Kubernetes. We have also added specs on service types: for Kubernetes ingress it is important to clarify what types of Kubernetes Services are allowed, for example LoadBalancer or NodePort and so on, which allows the workloads to consume and expose services in a standardized fashion.

Then, of course, CaaS clusters can be implemented in many ways; the two main flavors are nodes based on virtual machines or on bare metal. For the former, where the worker nodes are implemented as virtual machines, it is important to have assurance about the deterministic performance and latency that must be guaranteed, so we are adding a number of specifications about the hypervisor-level configuration that must be implemented in order to guarantee such performance when the clusters run on top of a virtualization platform. Another hot topic is CNI multiplexers, which allow us to attach multiple network interfaces to the pods.
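For illustration, one widely used multiplexer is Multus, where a secondary network is declared as a NetworkAttachmentDefinition and requested through a pod annotation. This is a minimal sketch; the network name, interface and IPAM details are illustrative, not taken from the RA2 specification.

```yaml
# A secondary network declared through Multus (k8s.cni.cncf.io API).
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: data-plane-net
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "eth1",
    "ipam": { "type": "static" }
  }'
---
apiVersion: v1
kind: Pod
metadata:
  name: cnf-example
  annotations:
    # requests an extra interface attached to the network defined above
    k8s.v1.cni.cncf.io/networks: data-plane-net
spec:
  containers:
    - name: app
      image: registry.example.org/cnf:latest
```

Note that the annotation key here is specific to this particular multiplexer: a platform using a different multiplexer would expect different annotations, which is exactly the portability problem discussed next.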
The state of the art in the ecosystem today is that implementations of multiplexers have different APIs, and that limits the portability of a workload implemented on one platform with one multiplexer to a different platform with a different multiplexer. So we are looking to specify what APIs or what solutions can be implementation agnostic, to assist this sort of portability between platforms. Finally, we are also adding specifications on edge cloud, so for radio network functions and things like distributed core, which would mean platforms and clusters with a lower latency to the final users. We are also adding service function chaining, which is the stitching of multiple network functions together in a way that allows the user to manage them together and configure them so that the traffic can traverse them in a chain. We are also aligning the security specifications in chapter five with the Reference Model, and we are increasing the support by explicitly listing the Kubernetes APIs and features that must be implemented in order to be compliant with the architecture; that includes the upstream APIs and features that we think are necessary for any implementation to host the telco workloads. We are also adding specs on workload isolation, so how to have a multi-tenant platform, including namespaces. And finally CNF packaging, which means how to package the binaries and artifacts of a containerized network function in such a way that makes it portable between environments. So I will hand back over to Gergely for information on how to join us.

Thank you. Thank you very much. As you could hear from Riccardo, we need more helping hands to do all of the specification work listed in our plan for the current and future releases. If you would like more information about the project plan, you can check the project on GitHub; that is the last link on the slide.
There, all the issues planned for the current release and other related requests are listed. Feel free to join the discussion, or if an issue does not have an owner yet, you can just pick it up and start working on it. The Reference Architecture 2 team has a regular meeting every week on Thursdays from three o'clock UTC; you can get the Zoom link and the meeting invite from the Anuket meetings page, which is listed on the slide here. You are also welcome to join any other Anuket meetings; everything that happens in Anuket is public. You can also join our mailing list, where we discuss technical topics about the reference architecture, and you can browse, or even contribute to, our wiki page, where we handle basically all the management-related things around the project. You can read the specification documents at the Read the Docs link on the slide; on Read the Docs we publish both the master branch, under the "latest" URL, and all the releases. So if you check the link on the slide, you get the actual latest state of the specifications; if you are interested in a frozen version, you can of course select that one as well using the Read the Docs user interface.

With this, I would like to thank you, and I would like to encourage you to post your questions into the meeting platform; we are happy to address them. Thank you. Any questions or comments? Yes, let us know.