Good afternoon. How many of you here do testing as your main job, whether it's testing the OpenStack services or tenant applications running on OpenStack? How many of you are testers? A handful. OK. For those of you who don't have any experience testing an OpenStack cloud, the question might be: OpenStack provides Tempest for all the APIs, and the Horizon team provides Selenium tests for the GUI, so what's left, and what's the big deal? So I want to talk to you a little bit about the complexity we face in this large, complex AT&T integrated cloud, why it's complex, and how we are handling it.

Those of you who were here for the keynotes last year and this year know that AT&T is making rapid progress in virtualizing almost all of the services it provides, whether it's business Ethernet, cellular data, or, as we heard this morning, entertainment. All of those services are being virtualized, and they require multiple VNFs. These VNFs have their own orchestration needs and performance needs; some of them need DPDK, SR-IOV, what have you. So testing those VNFs and ensuring they work well in your cloud environment is one requirement. Then we have ECOMP, which is the orchestration and management part, and which has now become an open source project called ONAP. And then we push the envelope when it comes to infrastructure, because you have all probably heard the phrase "carrier grade," which is essentially held synonymous with "you can't fail" and "you need to provide the highest levels of performance," and you need to do that on a rapidly evolving technology, OpenStack. That is the core of it: the real testing requirement is being able to handle so many different technologies that are changing so rapidly, while at the same time meeting the carrier-grade expectations that clients have of your environment. With that, I'm going to hand over to Venkat to talk about some of these things in detail, and then we'll go into the next level of depth about how much testing we do at each of those layers, how we automated some of it, and other details.

Hello, everyone. Our primary focus is not manual testing; we automate up front whatever we cover on a day-to-day basis, because we have to cover a wide range of sites. We already have more than 100 sites in production. Across those sites we cover OpenStack and non-OpenStack APIs, and at the same time the CLI, the OpenStack GUI, and non-OpenStack GUIs. If you really look at our test scenarios, pretty much most of them are automated. There are quite a few cases we still run manually because we cannot automate them, such as disruptive test cases, and there are also AT&T-specific post-processing and ATO-managed test cases that are not fully automated. But most of them are automated, and we can hit any site, get the metrics immediately as we run, and see what is working and what is not.

So, for those of you who cannot do the speed math, the total number of tests for a release of the cloud is about 4,000, of which about 40 percent, roughly 1,600, are OpenStack and the rest are other components. This is what I was talking about: it's not just OpenStack, it's all the other stuff you need to validate too before you can say your cloud is fully functional, right?
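For readers who have not written one, here is a minimal sketch of what a Tempest-style OpenStack API test of the kind described above looks like. The class layout and decorators follow upstream Tempest conventions; the specific check, test name, and UUID are illustrative, not AT&T's actual code.

```python
# Minimal sketch of a Tempest-style API test (illustrative, not AT&T code).
# Tempest's base classes handle credentials and service-client setup.
from tempest.api.compute import base
from tempest.lib import decorators


class FlavorsSmokeTest(base.BaseV2ComputeTest):
    """Check that the Nova flavors API responds on a deployed site."""

    @decorators.attr(type='smoke')  # selected by smoke runs
    @decorators.idempotent_id('4b0e9d44-1f53-4f0e-9d3a-1a2b3c4d5e6f')
    def test_list_flavors(self):
        # flavors_client is provided by the Tempest compute base class.
        flavors = self.flavors_client.list_flavors()['flavors']
        self.assertNotEmpty(flavors, 'no flavors visible to the tenant')
```

Running suites of such tests against each site is what makes it possible to "hit any site and get the metrics immediately."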
And the key challenges we are seeing are primarily with the performance enablers, whether it's DPDK or SR-IOV, and also with cold and live migrations. How do we automate those test cases, and do we really need to? Because they also involve disruption. So whenever disruption is involved, we keep it out of the automation; wherever we can automate through an API, we do cover those. And as you saw on the first slide, we have a wide range of VNFs, and each VNF has its own requirements, whether compute requirements, network requirements, or underlying fabric requirements. So from an orchestration perspective, we cover VNF automation up to a certain extent: how we can automate all the core necessities of a VNF. Apart from that, how do we measure the performance of a VNF, and what are its performance requirements? Each VNF has its own bandwidth, latency, or I/O requirements, and we use quite a few open source tools for measuring performance. We have quite a few challenges when it comes to the performance enablers, especially performance tuning, whether it's MTU sizes or DPDK: what are the ideal parameters at which a VNF can perform so that its requirements are met? Yeah, so I'll hand over to Bhavin.

Hello, everyone. Thanks for joining us today. This slide reiterates what Venkat and Srini just talked about: how testing is so important across all the different layers. What you see here is more on the framework side and how we accomplish this. We've come a long way at AT&T: we started with SoapUI, then we moved to Tempest, and then CLI tests were deprecated, so we had to come up with a new solution, a new framework for that. So what you're seeing here: the top three are basically the APIs that we test, using the Tempest framework; we use the tox -e smoke and tox -e full targets, and for anything non-OpenStack, like Contrail or any other application, we create Tempest plugins. That's number three, which is also within the Tempest framework. The fourth one is the need to do a lot of audit- and configuration-type testing; to accomplish that we built a framework around Testinfra, which is built on pytest, and all our CLI tests are done using Testinfra (a sketch of this style of check follows at the end of this section). And the last one you see there is all the GUI testing, which covers the OpenStack Horizon dashboard and also non-OpenStack components.

And this slide here talks a little bit about how we accomplish the need to test sooner, with more automation. We basically shift left and develop our scripts in parallel with the dev teams. So this is more on the operational side of how we accomplish this: you can see that as the dev teams are developing, once the high-level designs are done, the automation team starts building the building blocks, and we come together toward the end, where we start deploying and testing using continuous integration. The future path for us is to containerize all our tests, deploy them to local control planes, and execute the testing from there. With that, I'm going to pass it on to Srini to talk a little bit about the LCOO. How many of you here have heard of the LCOO?
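Here is the promised sketch of the Testinfra style of audit and configuration check. Testinfra is a pytest plugin that injects a host fixture, so no imports are needed in the test module; the service name, file path, owner, and mode below are illustrative assumptions, not AT&T's actual checks.

```python
# Minimal Testinfra sketch for audit/configuration checks (illustrative only).
# Testinfra is a pytest plugin: it supplies the `host` fixture automatically.

def test_nova_compute_service(host):
    # Audit-style check: the compute service must be up and enabled at boot.
    svc = host.service("nova-compute")
    assert svc.is_running
    assert svc.is_enabled


def test_nova_conf_permissions(host):
    # Configuration-style check: ownership and mode of a config file.
    cfg = host.file("/etc/nova/nova.conf")
    assert cfg.exists
    assert cfg.user == "nova"
    assert cfg.mode == 0o640
```

Checks like these are pointed at a node with Testinfra's --hosts option, for example py.test --hosts='ssh://compute-01' (the host URI here is hypothetical).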
So the LCOO (Large Contributing OpenStack Operators working group), within the OpenStack community, tends to be the voice of the users, particularly those users who need to deploy OpenStack at large scale, conveying the issues they face back to the individual development teams. There is some information here about how you can access it; in fact, there is a code you can scan to get in touch with the LCOO and find out what it is working on. For example, in testing, one of the areas they are concerned about is destructive testing and how it needs to be carried out. So if there are issues you would like them to take up, or you want to find out whether those issues are faced by other large companies deploying OpenStack and what their solutions are, it's a good organization to get engaged with, to discuss your requirements or share your solutions with similar organizations.

We probably have a few minutes for questions. Two minutes. [Audience question: Is all of this done within AT&T?] Yes, none of this is done anywhere outside of AT&T; this is for AT&T, done within AT&T's dev, test, and production environments. Any other questions? Thanks, everybody.