Hello, everyone. Good morning, good afternoon. My name is Abhijeet Singh, and I'm a director of the AT&T Cloud Platform. I have my colleagues from Ericsson here who work with me, and they're going to introduce themselves. Sharath, please go ahead.

Hi, I'm Sharath Rao. I'm a solution architect in the Ericsson Cloud Platform team, currently working as a network cloud architect at AT&T, designing the AT&T network cloud.

Hi, my name is Avinash. I'm a solutions architect at Ericsson, working with AT&T as a cloud solutions architect designing the network cloud.

Okay, so today we are going to talk about continuous integration and continuous deployment as a service. For the agenda: we'll explain why there is a need for CI/CD as a service, and what typical requirements we, as a telco, have for CI/CD. Then we will walk through the various continuous integration process flows we evaluated, and do the same for the continuous deployment process flows. Of course, there are many open-source tools available in the market, and some commercial ones as well. So how do you make good choices for CI/CD as a service? We'll explain some implementation approaches, and at the end we will conclude with the topics we talked about, along with some best practices and the choices you should make when picking a good set of tools for your CI/CD as a service.

Okay, so like I said, I work with AT&T, and as a telco we have a promise to meet: the 5G software upgrade, where we are going to upgrade all 5G network functions through truly software-based upgrades. Now, if you look into a typical telco network-functions ecosystem, we have suppliers, the OEMs, providing many of the network functions.
In 5G, with the 3GPP standard, we have a set of network functions, each coming from different suppliers. Now, all of these network functions are built on cloud-native principles, or so we hope. But even when they are built cloud-native, the integration and management of the software is a huge task: version management, which is a typical day-zero activity; integration with the continuous integration pipelines; then the deployment of those applications; bug fixes, troubleshooting, and change management. And then, on day two, our operators have to operate that software: manage it, deploy it, upgrade it. That's where a good CI/CD, a resilient and secure CI/CD built on sound cloud-native principles, has to play a very important role, because it's the foundation. If you don't build a good CI/CD as a service, this whole big ecosystem will be very difficult to manage.

Now we're going to move to the next slide. I'm going to dive into the typical requirements of cloud-native CI/CD. Most of us, I believe, are running on Kubernetes-based infrastructure, and that's where the CI/CD system itself has to be built on cloud-native principles. Security above all is very important, and we must protect our customers. Working with Kubernetes and with the various CNCF communities, we already have frameworks for this: things like role-based access control (RBAC), container security policies, and network policies. We must integrate the CI/CD system with that same set of policies to provide a secure infrastructure. And again, like I mentioned, across day zero, day one, and day two, everything should follow a truly DevOps approach to the lifecycle of the network functions.
That's where native support for a GitOps model, one that can be tied to each pipeline and workflow, with further support for itemized reviews and rollback mechanisms, is a must. In addition, there must be capabilities to perform canary deployments to a given set of hosts and a given set of regions. And last but not least: we build networks, and for networks the solution must be resilient and highly available. The CI/CD infrastructure itself has to be HA, deployed in multiple locations, with proper integration between those systems, so that the infrastructure is protected in disaster scenarios. With that, Avinash, why don't you take us to the next section and explain the CI processes, the CD processes, and a couple of different approaches. Thank you. Avinash, take it away. You're on mute, Avinash.

Thank you, Abhijeet. As Abhijeet was explaining the need and the technical requirements, let's dive into some of the typical processes that a CI/CD environment would need for such a solution. Here we are looking at it from the left side. You have a service provider admin who creates a manifest of what artifacts are required per vendor. This could as well be a vendor admin: in a typical AT&T or service-provider environment there would always be a service provider admin who creates the manifest on behalf of the vendors, but as I said, it could also be the vendor admin automatically pushing the manifest from their existing CI/CD systems or software delivery units into the CI-as-a-service DevOps pipeline. What this does is prepare the repositories: you have the content repository and the manifest repository, configured for each vendor with the appropriate paths, appropriate user access, everything.
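The per-vendor manifest and repository preparation just described can be sketched roughly as follows. This is only an illustration: the `ArtifactManifest` fields, the repository paths, and the vendor names are invented, not taken from the actual AT&T/Ericsson system.

```python
from dataclasses import dataclass

@dataclass
class ArtifactManifest:
    """Hypothetical per-vendor manifest: which artifacts to onboard and who may push them."""
    vendor: str
    images: list         # container image references to pull from the vendor registry
    helm_charts: list    # chart packages for the vendor's network function
    allowed_users: list  # identities granted write access to this vendor's paths

def prepare_repositories(manifest: ArtifactManifest) -> dict:
    """Derive the per-vendor layout of the content and manifest repositories."""
    base = f"vendors/{manifest.vendor}"
    return {
        "manifest_repo_path": f"{base}/manifests",
        "content_repo_path": f"{base}/content",
        "write_access": list(manifest.allowed_users),
    }

layout = prepare_repositories(ArtifactManifest(
    vendor="vendor-a",
    images=["registry.vendor-a.example/smf:1.4.2"],
    helm_charts=["smf-1.4.2.tgz"],
    allowed_users=["vendor-a-admin"],
))
print(layout["content_repo_path"])  # vendors/vendor-a/content
```

In the real flow, either the service provider admin or the vendor admin would push such a manifest, and the pipeline would configure repository paths and access control from it.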
And then, once the paths and the repositories are set up, these repositories are further configured to auto-fetch the content that was just approved for pulling. This auto-pull is very important: you don't want a human intervention here that would slow down the process. What this mechanism technically achieves is that once the manifest is ready and specifies what to pull, this central system automatically pulls in the content specified for each vendor and combines it into a particular release catalog, which we call v1alpha.

Now, when we go to the next slide, this is part two of the vendor CI and onboarding, and we carry on from where we left off on the previous slide. Here, on the left side, you again have the repositories with the content tagged as v1alpha. The CI as a service then has built-in, service-provider-preset jobs like security scans. As Abhijeet was mentioning, security is a very important detail here: everything that comes from a vendor, or is produced in-house, has to go through industry-specified and auditable security scans. This includes image scans, code scans, and manifest scans. Once the scan is complete, there may be manifest changes. If you look at the detail inside the Git path, you'll see the Helm charts have various tiers of changes: a Helm chart for a particular service might have some global values; some may have overridable site values; and even within a site, every app may behave differently in each region. All these changes can be applied at this point in time, and the vendor admin who is responsible for that vendor's software can peer-review them. Once the peer review is complete, a trigger goes into the CI-as-a-service DevOps pipeline, which packages the incoming software as v1beta.
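The value tiers just described, global values overridden by site values and overridden again by per-region values, behave like a layered deep merge, similar in spirit to how Helm merges multiple values files with later files winning. A minimal sketch with invented values:

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge override onto base; later tiers win on conflicts."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Illustrative value tiers; a real pipeline would load these from the Git paths.
global_values = {"replicas": 3, "image": {"tag": "v1beta"}, "logLevel": "info"}
site_values   = {"replicas": 5}        # overridable per site
region_values = {"logLevel": "debug"}  # per region within a site

effective = deep_merge(deep_merge(global_values, site_values), region_values)
print(effective)  # {'replicas': 5, 'image': {'tag': 'v1beta'}, 'logLevel': 'debug'}
```

The effective values for a given region are what the peer-reviewed change would ultimately deploy.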
So you see that this repository now contains both v1alpha and v1beta, the latter being the latest of the software. Going further, once the initial scans and the peer review are complete, you move the software into test, QA, and beyond. Picking up again from the left side: you now have v1alpha and v1beta. In all technicality, the CI as a service is complete at this point, and the CD as a service picks up. This is what Abhijeet said about the canary deployments: it is aware of various regions that are preset for it, or they can be defined dynamically. Here I am showing two different regions. There is a region where dev testing is done: a certain set of preset test cases that need to be run per piece of software can be run here. Once that is done, then QA testing, which usually involves end-to-end testing. And once the end-to-end testing is done, it creates a GA tag on the same release. Now, given that there are multiple feedback loops, I am showing a best-case scenario; that is why there is always a v1alpha, a v1beta, and a v1 GA. In reality it might not be like this: it could be v1alpha, v10beta, and v52 GA, because there would have been several revisions along the way. Eventually, once the GA tag is created, the GA-tagged release is moved to production. If you have noticed, at the bottom of all of our previous slides we were working in the pre-production environment. Once you complete the software test and QA, you eventually want to move the GA-tagged release into production: the same content that has now been approved, dev-tested, and QA-tested moves into the content repositories in production. Once it is in production, it is ready to actually be deployed in the production regions.
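The promotion flow above, alpha to beta after scans and peer review, beta to GA after dev and end-to-end QA testing, can be sketched as a simple gated state machine. The gate names are paraphrased from the talk; a real pipeline would attach many more checks:

```python
# Stage -> (next stage, gates that must pass before promotion).
PROMOTIONS = {
    "alpha": ("beta", {"security_scan", "peer_review"}),
    "beta":  ("GA",   {"dev_test", "qa_e2e_test"}),
}

def promote(stage: str, passed_gates: set) -> str:
    """Return the next stage, refusing promotion while any gate is unmet."""
    next_stage, required = PROMOTIONS[stage]
    if not required <= passed_gates:
        missing = sorted(required - passed_gates)
        raise ValueError(f"cannot promote from {stage}: missing {missing}")
    return next_stage

tag = promote("alpha", {"security_scan", "peer_review"})  # -> "beta"
tag = promote("beta", {"dev_test", "qa_e2e_test"})        # -> "GA"
print(tag)  # GA
```

A failed gate sends the release around another feedback loop, which is why a v1alpha can end up shipping as, say, a v52 GA.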
Now I'll hand it over to Sharath, who can speak on how the production deployment works and eventually take us further into the implementation approaches as well.

Thanks, Avinash. So, as Avinash mentioned, we now have a GA tag on a particular piece of software. This implies that it has been appropriately QA-tested and we are confident that it can be deployed into production. Remember, we don't want to push anything into production unless and until it has been completely tested. Once we get this GA tag, we can start the continuous-deployment-as-a-service phase of the DevOps pipeline. Here, we can apply the different configs that are required for different production sites; not all production sites are equal. For example, on the East Coast we may want the application to reach out to certain services on the East Coast itself in order to minimize latency, and if the application is deployed on the West Coast, it should reach out to other applications deployed on the West Coast. So we get some geo-redundancy, while also ensuring that different deployments can have different configurations in the production zones. A cloud-native deployment can be deployed not only in local, internal production zones but also in public cloud, so the deployment pipeline should be capable of deploying into various different clouds and not be tied to a single cloud. Let us now look at two different implementation approaches that we are considering in order to satisfy these requirements and the requirements that Abhijeet initially set out. In the first approach, the vendor still has control, because the vendor has the best knowledge of how their application has to be deployed. So you will have a vendor admin, as you can see on the bottom left of the screen.
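The per-site configuration step described a moment ago can be sketched as follows; the site names, peer endpoints, and cloud labels are all invented for illustration:

```python
# One GA-tagged release, rendered differently per production site: each site
# points the application at nearby peers to minimize latency, and the target
# may be a private or a public cloud.
SITE_CONFIG = {
    "us-east": {"peers": ["db.us-east.example"], "cloud": "private"},
    "us-west": {"peers": ["db.us-west.example"], "cloud": "public"},
}

def render_deployment(release_tag: str, site: str) -> dict:
    """Combine a GA-tagged release with the site-specific overrides."""
    cfg = SITE_CONFIG[site]
    return {
        "release": release_tag,
        "site": site,
        "cloud": cfg["cloud"],
        "peers": cfg["peers"],  # nearby endpoints chosen for low latency
    }

east = render_deployment("v1-GA", "us-east")
print(east["peers"])  # ['db.us-east.example']
```

The same `render_deployment` step would run once per target site, so one tested artifact fans out into many site-shaped deployments.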
The vendor admin creates the appropriate pipelines using Spinnaker, which is one of the tools we have selected for this approach. They can fetch all the required data based on the tag, be it a v1 GA tag or a v52 GA tag as Avinash was mentioning, pull it in, and create the appropriate pipeline along with the service provider. Airship is AT&T's and OpenStack's cloud-building platform, which builds the infrastructure as a service, and in this approach we are using it to deploy the software workload in conjunction with Argo CD. Argo CD helps us make those small config changes that I talked about before for the different regions, be it a canary, a region in the east, or a region in the west. So that is one approach to doing the continuous deployment.

We'll now move to the second approach, where the concept is similar but we are using a different tool. Here, too, you have a vendor admin who builds the pipeline to deploy their application. The one main difference here is the concept of the deploy pod: the deploy pod is a self-contained pod that has all the details required for deploying that particular application. So first the vendor helps build the pipeline, and then, utilizing the deploy pod, deploys their workload to the appropriate regions, whether private-cloud or public-cloud regions. Obviously, we are still using Airship as one of the driving tools to build the infrastructure and to stand up any Kubernetes clusters that are required. Those are the two different implementation approaches we are looking at as part of continuous deployment as a service. Avinash, you are on mute again.

Sorry. Thank you, Sharath, Abhijeet. So, in conclusion: CI/CD as a service in a multi-vendor environment is absolutely a must, as Abhijeet laid out at the start. You have multiple vendors.
Each vendor has their own software delivery process, and we don't want to impede that; they are best at what they do in their own shops. The goal is to better and more smoothly integrate their software to build a bigger service. Sharath, can you take the rest of the conclusion?

Sure. Sorry, these are the perils of a virtual summit and of working from home during the pandemic, but I will continue. So, once the different software vendors know what is best in their deployment strategy as well as their integration strategy, it's very important that we push open-source tooling, so that all the vendors have access to the same set of open-source tools, can adopt them, and can contribute back, not only to the community, but also by sharing best practices with us, so that we can push those best practices downstream to other vendors as well. Thirdly, there is a plethora of other CI/CD tools available, but the concepts we are talking about still remain the same. We are setting up the framework: you can plug in different tools, but the concepts of our framework remain the same. Avinash?

Thank you, Sharath. Abhijeet, any... yeah, go ahead.

Just to conclude: as I mentioned in my opening remarks, CI/CD is foundational, and as with any cloud-native application deployment, we have the responsibility to invest the time in careful selection of those tools. The important thing is that, number one, any tool we choose has to be built cloud-native; we can't compromise on security, and then on resilience. At the same time, when you work in a complex vendor and OEM ecosystem, there is a lot of cognitive load that has to be managed in operations. So pick tools that are cloud-native and that provide security at the source without compromise.
They should also be very resilient, built on a well-architected, service-oriented architecture. And last, but not least, they should provide flexibility. We call it structured flexibility: yes, there are rules, but at the same time there is flexibility that allows the different functions to evolve and to carry out their software revisions and so on. So thank you, everyone, for your time today, and we are open for Q&A.

Thank you. Thank you, Abhijeet. Thank you, Sharath. Thank you. Appreciate it.