Welcome everybody. This is our presentation on containers within the CNTT reference model, and let us introduce ourselves.

Hi and welcome everybody. My name is Georg Kunz. I am an open-source developer with Ericsson, very active in various communities, and specifically in that context I am a contributor to CNTT and OPNFV.

My name is Gergely Csatári and I am working for Nokia as a senior open-source specialist. In this context I am a CNTT RA2 co-lead, and similarly to Georg, my work is also to be active in several communities, mostly in the domain of cloud infrastructures.

Okay, let's discuss CNTT and containers in the scope of the reference architectures and reference models and all of these reference things which are in CNTT. CNTT is the Cloud iNfrastructure Telecom Taskforce, an open-source project with the mission to standardize and verify cloud infrastructure implementations and thereby decrease the integration cost of cloud infrastructures and the applications running on top of them. This happens in a way that different actors in the community collaborate in an open-source project and build up specifications, implementations and verification suites. Based on the last 90 days, the most active participants of this activity are listed in this slide, so you can see that there are participants from the operator side of the telecom industry, from the vendor side, and from other companies as well. Our aim is to build CNTT as a level playing field for all actors, so everybody has an equal opportunity to influence CNTT and everything is contribution-driven. The project is open source. It has two sponsors, GSMA and the Linux Foundation. These sponsors are basically helping the operation of the community. GSMA provides support in terms of how to create good specifications, and GSMA also releases, inside the GSMA scope, the specification of what we call the reference model; I will discuss a bit later what the reference model is. The Linux Foundation, on the other hand, helps the community to build the specifications, reference implementations and reference conformance tests in a real open-source manner.

So let's discuss a bit what the main artifacts of CNTT are and what the logic of the different CNTT work streams is. CNTT works in a way that there is a reference model, which is basically a specification of the properties of a cloud infrastructure, with standardized values set for these different properties, and based on these values the reference model defines two profiles which basically define how a cloud infrastructure should look. Based on this reference model, which is technology-agnostic, different reference architectures are created, which are cloud-infrastructure-technology-specific definitions of how to build infrastructures that are compliant with the reference model. In practical terms this means that we have an OpenStack-based reference architecture and a Kubernetes-based reference architecture, and both of them take input from the reference model and describe how to build a cloud infrastructure which is compliant with the requirements of the reference model. Based on these reference architectures and the reference model there are reference implementations, for both the Kubernetes-based infrastructure and the OpenStack-based infrastructure, and these reference implementations basically act as proof that the reference architectures can be implemented. So these are really integrated software stacks, based on the requirements of the reference architectures and the reference model plus of course lots of components of their own; they implement the stack, and anybody can download, install and test them.
And also based on the reference model and the reference architectures we have conformance tests, which basically verify that a cloud infrastructure and the workloads on top of it are compatible with each other. These reference conformance tests have different domains: there is a reference conformance test suite to test OpenStack, there is, or there will be, a reference conformance test suite to test Kubernetes, and there are conformance tests to test the workloads running on top of these. So this is to ensure that these layers are really compatible with each other and do not expect something from the other which is not there.

This is done in a way that we write the specifications in GitHub and publish them to Read the Docs; the reference model is also published by GSMA. So here in this slide we provide pointers to the different documents respectively: RTD means the Read the Docs version of a document, GH means the GitHub version. They are in sync, so it is basically anybody's preference which way you would like to read them. Also, on the reference implementation and reference conformance side we have projects in OPNFV to cover these activities, because CNTT works in a way that all specification work, or let's say all documentation work, is done in the CNTT space in GitHub, but the implementation of the reference implementations and the conformance tests is done in OPNFV as OPNFV projects.

As we are talking about containers, let me focus on the reference architecture for Kubernetes, or RA2 for short, and here I go back and discuss these different streams a bit more. Usually, if a work stream has a one in its name it is about OpenStack, so RA1, RI1 and RC1 are about OpenStack, and two is for Kubernetes, so RA2, RI2 and RC2 are for Kubernetes. So the reference architecture for Kubernetes is RA2, and the scope of this work stream is marked with the dashed box in the figure, because in the case of Kubernetes it is usually very difficult to pin down the exact scope of a Kubernetes cluster. In RA2 we do not define anything which provides the resources for the cluster, so that can be a virtual machine, a set of virtual machines, or physical hosts, and we do not define anything which does the lifecycle management of the Kubernetes cluster itself. RA2 is strictly focused on the Kubernetes cluster and, let's say, its extensions and configuration.

We have different sources of requirements, and the first set is defined in the reference model. These are basically generic requirements, regardless of the technology of the cloud infrastructure, so they are the same for an OpenStack-based reference architecture and for the Kubernetes-based reference architecture. These requirements are, for example, CPU pinning support, NUMA-aware resource allocation, SR-IOV support, support for network QoS, huge pages support, and support for multiple network interfaces. These requirements come from the reference model; I just highlighted some of them, the full list of requirements is available under the link provided in this slide. And there are also requirements which are defined by the reference architecture itself, because the reference architecture defines technology-specific requirements, like a scalable and immutable infrastructure, a CNCF API conformant infrastructure, declarative configuration of the infrastructure, and network resiliency. All of these requirements are defined by the RA2 work stream.

Based on these requirements we then define detailed requirements for the components. For example, we define Kubernetes requirements saying that Kubernetes must be one of the three latest minor versions, that the Topology Manager and CPU Manager feature gates have to be enabled, that the device plugin feature has to be enabled, and that the IPv6 dual-stack feature has to be enabled. All of these requirements are defined in RA2.
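To make these detailed requirements a bit more tangible, here is a minimal sketch of how such checks could be automated against a running cluster using the official Kubernetes Python client. This is not part of the RA2 or RC2 tooling, just an illustration; the assumed latest minor version, the SR-IOV resource name and the kubeconfig location are assumptions made for the example.

```python
# Illustrative only: checks a few RA2-style requirements against a live cluster.
# Assumes a reachable cluster and a standard kubeconfig; the SR-IOV resource
# name below (intel.com/sriov_netdevice) is an example and deployment-specific.
from kubernetes import client, config

LATEST_MINOR = 19          # assumed latest Kubernetes minor version at the time of writing
SRIOV_RESOURCE = "intel.com/sriov_netdevice"  # hypothetical resource name

config.load_kube_config()

# Rule: the cluster should run one of the three latest minor versions.
version = client.VersionApi().get_code()
minor = int("".join(ch for ch in version.minor if ch.isdigit()))
print(f"Server version {version.major}.{version.minor}: "
      f"{'OK' if LATEST_MINOR - minor < 3 else 'too old'}")

# Rule: worker nodes should expose huge pages and, where required, SR-IOV devices.
for node in client.CoreV1Api().list_node().items:
    alloc = node.status.allocatable or {}
    hugepages = alloc.get("hugepages-1Gi", alloc.get("hugepages-2Mi", "0"))
    sriov = alloc.get(SRIOV_RESOURCE, "0")
    print(f"{node.metadata.name}: hugepages={hugepages}, sriov={sriov}")
```

The real conformance checks live in the RC2 suite described later in this presentation; this snippet only shows the general idea of reading such properties from the Kubernetes API.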
Also, we define here the different components of the infrastructure, because both Kubernetes and OpenStack rely on other components, and in the case of Kubernetes this goes into the details of, for example, the network implementation. Here we have, for example, a very interesting debate inside CNTT on the CNI multiplexer. It is a currently ongoing discussion in CNTT whether we should strictly define one CNI multiplexer or define several options, because currently there is no API compatibility between these CNI multiplexer solutions, which are, for example, Multus and DANM. This is a very live and ongoing discussion, and it is one example of the kind of decisions CNTT makes and the kind of issues CNTT has.
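To illustrate what that incompatibility means in practice: with a Multus-style setup, secondary pod networks are described by NetworkAttachmentDefinition custom resources under the k8s.cni.cncf.io API group, whereas DANM uses its own custom resources, so a workload written against one cannot simply be moved to the other. The sketch below, again just an illustration and not CNTT tooling, lists Multus-style attachment definitions through the generic custom-objects API; the namespace is an assumption.

```python
# Illustrative only: list Multus-style NetworkAttachmentDefinition objects.
# A workload relying on these CRDs would not run unchanged on a cluster that
# uses a different CNI multiplexer (e.g. DANM), which is the compatibility
# concern discussed above.
from kubernetes import client, config

config.load_kube_config()
crd_api = client.CustomObjectsApi()

nads = crd_api.list_namespaced_custom_object(
    group="k8s.cni.cncf.io",
    version="v1",
    namespace="default",                      # assumed namespace
    plural="network-attachment-definitions",
)
for nad in nads.get("items", []):
    print(nad["metadata"]["name"])
```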
And with this I hand over to Georg.

Thanks, Gergely, for the introduction to the reference model and the reference architecture. Now I'd like to take a closer look at the reference implementation and the reference conformance test suite. Before we dive into the details here, I'd again like to take half a step back and put those a little bit in context. Gergely has mentioned this already, but there are quite a few different terms, projects and communities involved, so it can be very helpful to look at that again. We basically have three major pillars here: CNTT with the reference model and the reference architecture, as already described, and also documents describing the specifications of the reference implementation and the reference conformance. That, as Gergely said, is happening on GitHub, and it is basically a set of documents. Then on the OPNFV side of things, that is basically where the practical implementation part lives: based on the specs, we have the reference implementation, and we also have the tests and the test framework. The pillar on the right-hand side shows the receiving, let's say, organization. As Gergely said initially, the overall goal of all this is to ensure that platforms and workloads work well together, and in order to prove that in a concise framework there is the OVP, the so-called OPNFV Verified Program, which is a compliance program. It defines how the compliance test suite, or rather the results thereof, get reviewed, what the review process looks like and what its scope is, and it also provides resources such as a web portal where test results coming out of the reference conformance test suite are uploaded and reviewed by the community. The result out of OVP is basically a badge. We currently have two flavors: an infrastructure-focused badge, which is what you would try to achieve as a vendor of an infrastructure, so a Kubernetes distribution, for instance; and then, as also already mentioned by Gergely, there is corresponding compliance work planned and ongoing to look at the workloads themselves. Then you basically match up a platform and a workload, each having their respective badges, and hopefully things work well together.

As you can see, based on the arrows here, there is an obvious relationship. As I said, the reference implementation specs are input, in terms of requirements, to the OPNFV community building and integrating the reference implementation. The same is true for the reference conformance, which is input to the test frameworks. And the output of the testing toolchain, so to say, needs to be consumable by the OVP toolchain to facilitate the reviewing process. So this is how those three things work together. In the middle you can also see that OPNFV has community labs where most of the integration will be done, but that is also to be extended in the future, with maybe cloud infrastructure providers contributing resources to that effort.

Good, so as I said, I'm going to talk about the reference implementation, and on the next slide I'd like to zoom in a bit on that. We have a dedicated project in OPNFV working on building the reference implementation. The project is called Kuberef, and it is depicted here in the middle of the slide by this light green box. As you can also see, there are more boxes inside, because the purpose of Kuberef is not really just to build a platform, and not even to build a Kubernetes platform from scratch; rather, we apply the best practices that OPNFV has been applying for the last couple of years. It is mainly about integration, about continuous deployment, and about testing. So instead of building an installer for Kubernetes ourselves, we basically consume various, let's say, candidates from upstream communities. Right now we are focusing, because we need to start somewhere, on Intel's BMRA, the bare metal reference architecture. That is a Kubespray-based Kubernetes deployer that already integrates quite a few telco-specific extensions, as mentioned by Gergely. But other potential deployers are the CNF Testbed, for instance, or Airship. So we basically consume those, integrate them, and provide a configuration that corresponds to the requirements of the reference architecture. Then obviously we deploy that continuously, now starting in an OPNFV lab environment, but as I said, it could also be bare metal cloud hardware providers or other cloud infrastructure providers where we deploy this; that is still work to come. And once the platform has been deployed, obviously we need to run the testing against it. On the lower part of the slide you can see that we are making use of Functest and Xtesting, an OPNFV toolchain developed over the previous years. It is basically a generic framework that allows us to integrate arbitrary existing test tools and test frameworks coming from various upstream communities and projects, and it provides a unified way of running those. It also provides a somewhat unified way of obtaining test results, because, as I said on the previous slide, we need to review the test results as part of the OVP process. The testing results then obviously are fed back, both to the reference architecture, because we can say that this reference implementation is able to implement the requirements coming from the reference architecture, and also to the reference conformance work in CNTT, because we can say, okay, the test scope that we have today covers these and those requirements of the reference architecture, and there are still those gaps that need to be closed. And obviously we can also provide direct feedback to the test tools, both in OPNFV and upstream, as well as to the installers, again both the OPNFV ones and the upstream components.
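As a very rough illustration of what "deploy, then test" looks like at the lowest level, the following sketch waits until a freshly deployed cluster reports all nodes Ready before any test suite is started. This is not Kuberef code, just a hedged example of such a readiness gate; the timeout and poll interval are arbitrary choices.

```python
# Illustrative readiness gate: wait until every node of a newly deployed
# cluster reports Ready before kicking off any conformance testing.
# Not taken from Kuberef; timeout and poll interval are arbitrary choices.
import time

from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

deadline = time.time() + 900          # assumed 15-minute budget
while True:
    nodes = core.list_node().items
    not_ready = [
        n.metadata.name
        for n in nodes
        if not any(c.type == "Ready" and c.status == "True"
                   for c in (n.status.conditions or []))
    ]
    if nodes and not not_ready:
        print(f"All {len(nodes)} nodes Ready; test suites can start.")
        break
    if time.time() > deadline:
        raise SystemExit(f"Cluster not ready in time, still waiting on: {not_ready}")
    time.sleep(15)
```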
So that is basically the purpose of Kuberef, a little bit like the spider in the web here.

Good, then on the next slide we zoom in a little bit more still. The one important thing to mention here is that the reference implementation needs to be somewhat flexible to cover various use cases. Gergely already mentioned that the reference implementation is a proof that the requirements coming out of the RA are actually implementable, but it serves additional purposes: it can obviously be used to validate the test cases of RC2, and it is also a platform for validating workloads on. So this reference implementation potentially needs to be installable, for instance, in vendor labs, and then vendors can test their workloads on top of that reference implementation. So there is some flexibility needed in how to deploy it. Because of that, we are following an approach that splits the deployment and provisioning process into two phases. The first is host provisioning, in case we need to bring up the reference implementation on a bare-metal environment. We have some tooling in place for that, based on the Cloud Infra automation framework, which in turn builds on previous efforts in OPNFV. Under the hood it uses Bifrost to do the host OS provisioning, and it consumes a set of OPNFV descriptor files to describe the hardware and the software. The second step is the actual Kubernetes provisioning. Obviously, as I said, the first step is optional: for instance, if you'd like to install the implementation on an infrastructure-as-a-service environment, you often already get pre-provisioned host operating systems, or VMs with operating systems inside, so you can skip that step. The Kubernetes provisioning, as I said, is currently done using Intel's bare metal reference architecture, and it uses Kubespray under the hood to configure everything and to install various add-ons. That is currently done in OPNFV labs.

Good, so that is an overview of the reference implementation. Looking at the reference conformance on the next slide, we basically have a very first release available already. The approach here is, as I already mentioned, to leverage the various test cases and test suites available in the different upstream communities, and we basically select those tests that cover RA requirements. Those test cases then get integrated into the OPNFV Functest and Xtesting toolchain. I have to give credit here to Cedric, the Functest PTL, who is basically doing all of that work. You can see in the table in the lower right corner what the first release of the RC2 suite contains. It starts with the Kubernetes conformance test suites, which basically check that a Kubernetes installation provides the standardized API. There is more tooling included to exercise the API in kind of a benchmarking way: xrally is a tool to deploy certain scenarios, going through deployment and teardown of certain workloads, not just once but multiple times. We have security tests in place using kube-hunter and kube-bench. And the last category, so to say, is the vIMS, a virtual IMS, which is a sample workload that gets deployed, and then some application-level tests are also run, basically to validate that the system under test is capable of really hosting a sample workload. As said, that is the initial scope, and there is more that needs to be done. Because of that, this is also kind of a call for contributions to extend the tests, because only with the corresponding automated test cases in place can we deliver the promise of CNTT and OPNFV end to end and have the compliance test suite in place.
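For anyone considering contributing a test, the integration point is the Xtesting test case abstraction mentioned above. The following is a minimal, hedged sketch of what such a wrapper roughly looks like, based on my reading of the Xtesting TestCase base class; the exact attribute and constant names should be verified against the current Xtesting documentation, and the check performed here is a deliberately trivial placeholder.

```python
# Rough sketch of an Xtesting-style test case wrapper (not an official example).
# Xtesting runs such cases and collects their results in a unified format,
# which is what makes them consumable by the OVP review process.
import time

from kubernetes import client, config
from xtesting.core import testcase   # assumes the xtesting package is installed


class NodeCountCheck(testcase.TestCase):
    """Trivial placeholder check: the cluster must expose at least one node."""

    def run(self, **kwargs):
        self.start_time = time.time()
        try:
            config.load_kube_config()
            nodes = client.CoreV1Api().list_node().items
            # Xtesting expresses results as a percentage compared to a criteria.
            self.result = 100 if nodes else 0
            status = self.EX_OK
        except Exception:  # keep the sketch short; real code should narrow this
            self.result = 0
            status = self.EX_RUN_ERROR
        self.stop_time = time.time()
        return status
```

Once such a case is registered in the Functest/Xtesting configuration, it runs alongside the existing suites and its result shows up in the same unified report that feeds the OVP review.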
Good, going one slide further, there is one more thing that we'd like to mention. We have been talking about this being defined in CNTT and that being implemented in OPNFV, with requirements going back and forth, and obviously that is not the best way to do it. So the technical communities of CNTT and OPNFV basically came together, and we realized that we are in the same boat and should work together even more closely than we are doing today. Both communities, both projects, are going to merge to basically become a single organization where the telcos and the vendors can come together, first define what infrastructures should look like, and then implement those and provide the right testing for them. That process is happening right now; it started in mid-2020 and is supposed to finish by the end of 2020, so the targeted launch date for the new project, which will also get a new name, is January 2021. You are very welcome to join this effort of defining what the joint project will look like, so that we can make sure that everybody's needs and requirements are well represented in this community effort, and that both operators and vendors get the most out of it, because that is the purpose of it.

Good. And then the last slide is just a set of links. If you'd like to engage with the community, you can look at GitHub, you can join us on Slack, and you can join the CNTT meetings. You can take a look at Kuberef, and in October this year there is going to be a virtual developer event, so if you see this presentation still in time, and not just the recording, feel free to join that. Okay, and that basically concludes my part. Thank you. Thank you.