Hello, I'm Cédric Ollivier from Orange. I'm a Linux Foundation Networking board member, a CNTT officer, and the OPNFV Functest project technical leader. So thank you for joining this session, "OPNFV and CNTT in Orange RFPs". This ONES session will first describe the Orange CI strategy, and then I will highlight how Orange is leveraging OPNFV and CNTT in an ongoing RFP. So first, as an introduction, here are a couple of guidelines regarding our strategy. Of course, we want to automate most of the network operations. One simple example: we are currently programming former test plans, mostly previously executed by hand, to reduce the verification period after a new equipment or a new software release. It's worth mentioning that automation is de facto mandatory as soon as we are referring to the new network architectures, such as NFV, because they leverage cloud technologies based on over-provisioning, and also because of the new software release rate. We are switching from a classical equipment release rate of every year or every two years to an open source software release rate of around six months or less. So we must deploy and test as much as possible to bring determinism and to be able to follow the new software releases. About testing, a key rule is that we want to test all the software layers independently. Testing only the VNF is not enough. We must verify OpenStack right after its deployment, before onboarding and testing the VNF. It's of course more than a help to identify the root cause, whether the issue comes from the VNF or from the infrastructure, but it's also about responsibilities as soon as we are dealing with multiple actors, whether it's about OpenStack, Kubernetes, or the VNFs. One important aspect is that we want to run all deployment and verification jobs in our continuous integration chains. We don't want to set up a new CI chain for every new VNF. We want to assemble the deployment jobs and the testing jobs inside our CI chain.
Somehow we don't want to duplicate the open source model where all projects define their own gates and diverge in their tooling. Here, what we want is to assemble all the sparse test cases available. They could come from internal development at Orange, they could come from open source as integrated by Functest, or they could be proprietary test cases as proposed by VNF vendors. It means that we have defined an architecture good enough to assemble sparse test cases. Also, we want to be able to instantiate the CI chains everywhere in our group, whatever the affiliate and the deployment model, leveraging centralized services or local services. To do so, we don't want to reinvent the wheel, and we leverage the best open source tools and practices. The biggest open source projects have built amazing gates, and we can leverage their knowledge and practices to benefit from them. Here again, we don't want to reinvent the wheel, and if a feature is missing, we will directly contribute to the upstream project instead of forking or other bad practices. So, mainly, we want to integrate smoothly and we want to deploy everywhere quickly. So, how does open source help? We are already contributing to multiple open source projects, and one very good example is OPNFV, where we are the first contributors since day zero. OPNFV, the Open Platform for NFV, has deployed and tested OpenStack for years, day after day, via multiple installers. OPNFV has built a full continuous integration chain composed of Jenkins, a test database, and an artifact repository to ensure that we can build and test OpenStack or Kubernetes every day. There are a lot of OPNFV projects in which we are interested and involved. For instance, Functest. Functest offers a collection of virtual infrastructure test suites. Functest is about 3,000 functional tests, three hours of API and data plane benchmarking, and three VNFs automatically onboarded and tested.
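The test database mentioned above stores one result document per test run, so every job in the chain reports in the same shape. As an illustration only (the field names here are assumptions inspired by the talk, not the exact OPNFV test API schema), a runner could build such a result document like this:

```python
# Illustrative sketch: the kind of result document a test runner could push
# to a centralized test database. Field names are assumptions, not the real
# OPNFV test API schema.
import time

def build_result(case_name, project, success_rate, build_tag):
    """Build one result document for a finished test case run."""
    criteria = "PASS" if success_rate >= 100 else "FAIL"
    return {
        "project_name": project,
        "case_name": case_name,
        "build_tag": build_tag,   # ties the result to one CI pipeline run
        "criteria": criteria,
        "details": {"success_rate": success_rate},
        "stop_date": time.time(),
    }

doc = build_result("healthcheck", "functest", 100, "gitlab-ci-42")
```

Because every suite reports through the same document shape, dashboards and gating logic can stay identical whatever the test case's origin.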
It's more than enough to gain confidence in the deployment and to ensure that all the OpenStack operations are working well. Of course, Functest also offers integrated upstream test cases for verifying Kubernetes as well. There is a second OPNFV project which we are leveraging: Xtesting. Xtesting is part of our network automation journey simply because it lets developers work only on their test suites without diving into CI/CD. Xtesting helps assemble sparse test cases by providing a common test case execution and by managing all the interactions with the CI/CD components. It provides a unique interface for GitLab and Jenkins: simply a Docker container to execute and a test case name. It stores all the results in a single test database, MongoDB, and it automatically pushes all the artifacts to an S3 repository. So Xtesting offers a common test case execution. Then there is Xtesting CI, the continuous integration part, which simply leverages the common test case execution to allow building a full CI/CD toolchain thanks to a simple test case list: simply Docker containers and test case names. Xtesting CI allows us, for instance, in OPNFV, to write all the Jenkins jobs needed to verify the OPNFV test tools in the existing CI toolchain, but it also helps us to deploy the full CI/CD toolchain on your laptop in a couple of commands and minutes. Xtesting CI allows different models, such as centralized deployment, distributed deployment, or a mix of them. And it fits our needs for affiliates, for instance, where we could deploy a local continuous integration chain if it helps or if there are constraints which force us to do so. CNTT is a much more recent open source initiative created by a lot of operators, such as Orange. CNTT, the Common NFVI Telco Taskforce, defines a reference model and reference architectures for OpenStack and Kubernetes, which is about onboarding and testing VNFs at the end.
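The "common test case execution" idea can be sketched in a few lines. This is an illustrative mock, not the real Xtesting API: every suite implements the same run()/is_successful() contract, so the CI chain can drive any of them uniformly without knowing what they test.

```python
# Illustrative mock of a common test case execution contract (NOT the real
# Xtesting API): every test suite, whatever its origin, exposes the same
# run() entry point and success criterion.
import time

class TestCase:
    """Common contract shared by all assembled test suites."""
    EX_OK, EX_RUN_ERROR = 0, 1

    def __init__(self, case_name=""):
        self.case_name = case_name
        self.result = 0                     # percentage of success
        self.start_time = self.stop_time = None

    def run(self, **kwargs):
        raise NotImplementedError           # each suite provides its own logic

    def is_successful(self, criteria=100):
        return self.EX_OK if self.result >= criteria else self.EX_RUN_ERROR

class DummyHealthcheck(TestCase):
    """A trivial suite: pretend every check passed."""
    def run(self, **kwargs):
        self.start_time = time.time()
        self.result = 100
        self.stop_time = time.time()
        return self.EX_OK
```

With this contract, the CI chain only needs a container image and a test case name per entry in its test case list; everything else is uniform.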
CNTT also defines conformance suites, essentially based on Functest, and playbooks to verify the conformance of your deployment against the CNTT documentation. All the CNTT playbooks leverage Xtesting CI, deploying locally a full CI chain, running all the mandatory test cases in CNTT, and performing a full check of your deployment. Orange is the first contributor in OPNFV, as I mentioned before, and a key actor in CNTT. We will keep contributing to these open source projects, but we are of course expecting more contributions to help, given the amazing scope. Feel free to click on the different links; any help is welcome regarding the different calls for contributions. About OPNFV and CNTT in Orange: here are a couple of RFP requirements extracted from an ongoing one. There are three key requirements regarding our discussion today. The first one is that we ask for the full CNTT Reference Conformance for OpenStack (RC1) results and outputs. We are simply asking to run the CNTT RC1 playbook and to send us the zip file produced at the end, dumping all the test data and the artifacts. We don't ask for full compliance, simply because CNTT asks for a lot of mandatory features and it may be possible that the ecosystem is not able to deliver all of them. But it's up to Orange to check the results, to identify what is about a missing feature, what is about mismatches in versions or missing APIs, and what is simply a big bug in the system under test. I will describe in the next slide the Orange CNTT field trial about RC1, which is in very good shape. We also ask for the success of the Functest Kubernetes test suite. As Functest Kubernetes is mostly about interoperability testing right now, we considered that we can ask for full success. The same test case list is now mandatory in CNTT RC2 Baraque, just released. And finally we ask for the first VNF test cases running in our continuous integration chain.
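Checking the results dump to separate known missing features from real bugs could be post-processed along these lines. This is a minimal sketch: the JSON shape and the case names are assumptions for illustration, not the actual CNTT artifact format.

```python
# Hypothetical post-processing of a conformance results dump (the JSON shape
# and case names are illustrative assumptions, not the real CNTT artifacts):
# classify each failed case as a known missing feature or a bug to investigate.
import json

KNOWN_MISSING = {"cinder_backup"}   # features absent from the product under test

def classify(results):
    """Split failed cases into 'missing feature' vs 'bug to investigate'."""
    missing, bugs = [], []
    for case in results:
        if case["criteria"] == "PASS":
            continue
        bucket = missing if case["case_name"] in KNOWN_MISSING else bugs
        bucket.append(case["case_name"])
    return missing, bugs

dump = json.loads(
    '[{"case_name": "tempest_full", "criteria": "PASS"},'
    ' {"case_name": "cinder_backup", "criteria": "FAIL"},'
    ' {"case_name": "rally_sanity", "criteria": "FAIL"}]')
missing, bugs = classify(dump)
```

This kind of triage is what lets an operator accept a non-fully-compliant submission while still knowing exactly which gaps are features and which are defects.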
It means that the vendors must deliver a Docker container compatible with Xtesting, which can be executed in our CI chain. And to verify that it works, we are running the test cases through Xtesting CI. So, as you can see, it implements the Orange principles just highlighted, and the CNTT targets. I remember a couple of meetings in Prague at the beginning of the year, where multiple operators and authors asked to see CNTT deliverables in RFPs. Orange is already doing that. So, just a quick highlight regarding the latest Orange CNTT RC1 field trial. First, it helped detect a couple of issues in CNTT RC1 just before the Baldy release. Two features were missing in the Orange product at the beginning of the field trial: Cinder backup and the Nova instance password. They are now part of the product. And there remain 10 single test failures out of the 2,000 included in RC1. They are mostly about Cinder backup, which was integrated late. And beyond, from my Functest role, maybe juju_epc must be enhanced to support proxies, as we saw during the Orange CNTT field trial. So Orange is very close, in good shape to reach full compliance in a couple of weeks. We are leveraging all the current CNTT conformance suites, but we see a couple of features which could help CNTT. First, we should integrate much more benchmarking in the CNTT conformance suites. In RC1, especially, we could have more benchmarking, which could be a very good help regarding infrastructure performance. We could leverage tools from OPNFV, FD.io, and a lot of other technical solutions. It would be great also to have KPIs regarding benchmarking in CNTT RC1, which are currently missing. Regarding RC2, the Kubernetes side, it would be great to switch from the current interoperability testing to a true CNTT conformance suite. We are running the Kubernetes conformance through Functest, which is about 300 single tests. It's far from covering everything: a very small ratio compared to the end-to-end capabilities.
So we should now list the Kubernetes mandatory features and the single tests which can be part of CNTT RC2. It would be great also to bootstrap the first VNF and CNF conformance suites; for the time being we are focusing on the infrastructure, OpenStack and Kubernetes, but there is a key need regarding the VNFs and CNFs, at least their onboarding. So of course, every contribution helps. Feel free to contribute. So, a few takeaways. Orange leverages OPNFV and CNTT in RFPs. We are also a big contributor in these two communities, which are merging in the coming days and months. We keep contributing to both the specification and implementation streams. We think both streams are mandatory for the success of NFV, and we are hoping to see all the CNTT actors also involved on the implementation side. My last message, on behalf of my open source roles: I would say we expect more OPNFV and CNTT contributions, especially for the VNF and CNF conformance suites and the initial CNTT targets. So thank you. I encourage you to try the CNTT reference conformance suites. It's very simple; it's about a couple of commands. I'm pretty sure you will love them. If you're on the line, we have two questions in the Q&A box. You're welcome to go ahead and answer live on the phone line. Okay. Hello. I see the first question: how do you intend to validate VNFs with Kubernetes on top of OpenStack or bare metal? So there are two parts. At least about onboarding, both work in the same way. It's true that, depending on whether you deploy Kubernetes on OpenStack or on bare metal, we will have different results. I would say it's up to our next activities, first to characterize the VNF and its testing. But at least regarding the VNF onboarding, technically speaking, there is no difference. Regarding benchmarking, I would say it depends. If we look at Neutron and virtual machines, whether it's about SDN or not, we can benchmark in the same way.
I mean, if we look at Shaker and VMTP, which are included in RC1, it's about booting clients and servers in virtual machines. So whatever is underneath that layer is managed; there is no technical issue. There is no difference between a Neutron agent solution and an SDN solution. From an RC1 point of view, RC1 is written in a way that we define the Neutron mandatory features and RC1 tests them. They could be implemented via an SDN controller, via Neutron agents, or via OVN, for instance. There is no specific logic about the networking implementation. So, the last question: do you think using SDN is the preferred way for such a case? From an RA1 point of view, at least regarding OpenStack, we are selecting Neutron features. They could be implemented by an SDN or via the classical Neutron agents. So from an RC1 point of view, we are calling the Neutron API, asking for Neutron features. Both solutions work. And we highlighted in previous LFN events that OVN and the classical Neutron agents both passed the conformance suite successfully. So from an API point of view, SDN is just the implementation. Hoping I'm answering the question. So, any new question? I'm writing in the chat a link to a previous session, an LFN event session about CNTT RC1, highlighting the results with OVN and Neutron agents. So feel free to have a look. Then, no other questions. So thank you all for joining this session. Have a nice ONES. Thank you.