Hello, I'm Tom Kivlin, Principal Cloud Orchestration Architect at Vodafone Group. I'm here with Sridhar and Emma, and we're going to talk about the new Anuket launch: merging the CNTT and OPNFV communities to create a new project within Linux Foundation Networking called Anuket. I'm really excited about it. I've been involved in CNTT since its inception, helping to develop the reference model, and I've been the workstream lead on the Kubernetes-based reference architecture and reference implementation. I see real benefits in merging with OPNFV. The end result of the CNTT workstreams was a reference implementation that could be tested against the specifications within a reference architecture, and there's a big overlap with many of the OPNFV projects with regards to testing, conformance, and providing that feedback loop into the architecture specification delivery. I'm going to hand over to Sridhar now to talk about the testing projects and the benefits the merger brings to them.

Hey, thanks a lot, Tom. I'm equally excited to be part of this launch of Anuket. Within OPNFV we have multiple projects, of which the testing projects form one of the biggest groups, and they have made excellent contributions to the community. Broadly, these testing projects cover functional and performance testing. I work for Spirent Communications, and I'm the PTL of one of the testing projects, VSPERF. I also work with other projects like CIRV and Airship. Among the functional and performance testing projects, the performance testing projects focus on automating the test cases that are defined by the specifications.
The specifications play an important role in performance testing because they define the intent, the steps, and the configuration variables, all of which are essential to explain the numbers that a performance test produces. The testing frameworks implement test cases defined by specifications, of which Anuket's is one. With the merger of OPNFV and CNTT, the whole process has been streamlined: we implement the requirements of Anuket, and we also take the learnings from our projects, especially from the experimentation we do, back into Anuket. This streamlining has helped us a lot, and I'm very excited to be part of the Anuket project. When we talk about performance testing and want to answer the "why" kind of questions, service assurance and monitoring come into the picture. I would like to hand over to Emma to talk more about service assurance.

Thanks, Sridhar. My name is Emma Foley. I work as a senior software engineer for Red Hat. My focus there is on day-two cloud operations, which includes metrics and monitoring for service assurance. I'm the PTL for the Barometer project in OPNFV. For the last few years, we've spent time helping to improve the metrics collection tools that are available, so they're more suitable for NFV. This includes exposing more of the metrics we need to monitor: not only the hardware platforms, but also the network and the software applications that are so important for NFV. In terms of working with the CNTT reference architectures and reference model, we know we're not going to take a large leap in capabilities overnight; progress will only be made by working together through continuous feedback and improvement cycles.
And I think the performance and functional testing tools, as well as the monitoring tools we have available, are going to be key to this, because they will let us continually make sure that the reference implementations meet or exceed the requirements we expect, not just in functionality but also, of course, in performance. These tools will also let end users take a reference implementation and evolve it to meet their own needs, substituting their own components, while still knowing that the performance requirements are being met. So I'm looking forward to collaborating with the community on improving monitoring and metrics collection, be it for closed-loop automation for service assurance or for performance testing. And I'm looking forward to getting a lot more feedback from end users, closing that loop on requirements, and developing standards and best practices across the industry.

I totally agree with your point about the end users, Emma. The purpose of all this is to make telco platforms and software more cost effective, so we can deliver better customer experience at lower cost. We've all made the point about collaboration, and I think one of the key benefits of Anuket will be that, whilst the projects may not necessarily have friction between them, having the TSC come together once a week to make sure there are no overlaps, no duplication, and that people are aware of what's happening within related projects is going to be a great benefit.

Very, very true, Tom. I fully agree, especially about the TSC coming together. It has really helped every project, particularly when it comes to testing.
We now really know who the end users and consumers of these testing implementations are, and we are very motivated to meet the requirements specified by Anuket. And, as Emma was mentioning, the testing projects are one of the biggest consumers of the service assurance solutions, so we are very happy to collaborate with the other projects and also the CNTT community. Looking forward to it.