Good afternoon, everyone. Let's do a big show of hands. How many of you are using DevOps to accelerate your software development lifecycle? Awesome. One more question. How many of you are using OpenStack in your production environments? Wow, this is decent. One last tricky question. How many of you are using DevOps to upgrade your routers, to upgrade your packet gateways? Anyone? This is the topic for today. DevOps has totally transformed our software development lifecycle. Previously, developers used to write the code, and ops teams used to deploy the code. It used to take three to four months to release a major product feature, and weeks to months to provision a single server. DevOps totally transformed that. It's a game changer. IT development went from crawling to where you can spin up environments in minutes, run thousands of test cases with a single click, and make releases every week or two. But there are two problems teams are still struggling with. The first: say I set up a new environment and I want to open a firewall to ten of my back-end systems. It still takes days to weeks; it's still a manual process with no end-to-end automation. The second: if I want to add a new feature to my routers or packet gateways, we design, develop, and test it in the production network environment, and it takes, can you imagine, five to six months. Can't we accelerate this process? Can't we make network upgrades and network changes more agile? Can't we apply DevOps principles? Today we are going to show you how to apply DevOps principles to network upgrades and network changes. Once we show you this, I'm sure you will be confident.
Or at least you will try to apply your standard agile practices to network upgrades and network changes. My name is Sharath Nalutla. I'm an associate director of DevOps Platform Engineering at Verizon. I am responsible for the DevOps toolchain and its engineering platform. I help teams automate their software development lifecycle and foster innovation. Today I'll be presenting this with my great colleagues, my friends, my partners from Ericsson. I'm Harshath Tanna, chief architect on the Cloud Manager product, which was used for this proof of concept with Verizon. My name is Mehul Shah. I'm part of the CTO team at Ericsson, assigned to the Verizon account. Thanks, Harshath. So let's quickly go over the agenda. We'll talk about our DevOps journey at Verizon; we won't take much time there. Then we'll talk about NFV: what are the continuous integration and continuous delivery building blocks for NFV? And we'll go over a couple of use cases we worked on with Ericsson: how we can automate the complete lifecycle so that, with one click, you can make changes and upgrade these virtual network functions. Let me give a quick introduction to who Verizon is. I know Verizon needs no introduction in the Americas, but we have friends from all over the world. Verizon is America's most prominent and most reliable broadband and wireless network. We have over 110 million wireless customers, 6-plus million internet customers, and around 6 million video customers. We have 178k-plus employees working around the world to support our business and serve our customers. We have around 1,700-plus physical retail locations, along with a large online presence on the web. Let's quickly go over Ericsson. OK, so at Ericsson, we don't make cell phones; a lot of people still think we make cell phones. We are a large provider of radio networks. We also provide IT products and solutions, more focused on OSS and BSS systems for the telecom industry.
And we also have media offerings. We are a global company, serving 150-plus customers, with operations in 180 countries. That's a quick introduction to our global presence: we are headquartered in Sweden, and our US headquarters is in Plano, Texas. Yeah, so Sharath talked about some of the challenges when it comes to network functions. This is especially true when we are talking about telco providers, because a lot of the gear there is made up of network functions. There are IT systems for sure, and Verizon has a large number of IT systems; they have applied these DevOps processes to such IT systems. But then we are looking at the challenge that Verizon posed to us about the network functions. A lot of network functions, like firewalls, routers, load balancers, packet gateways, and IMS, are very complex systems. Today they are provisioned or deployed as physical boxes with their own hardware and software, everything packaged in racks and shipped to the data centers where they run all the network load that provides the wonderful services for the wireless customers. So the challenge is that when you have the functions or the applications in such physical form, you get the very long release cycles that Sharath talked about. Even after the equipment arrives at the location, by the time it is stacked and racked and wired and configured for the first services to be running on it, it takes five to six months many times. That poses this challenge where any new upgrades or new releases that you want to take for such network functions are going to take a very long time compared to what we are seeing in the IT industry now. So with the advent of network function virtualization, which you have all heard about, we started looking at this topic: can we make it even faster? With NFV, we can now deploy these network functions in virtual machines, or at some point in the future, in containers.
But it still leaves a couple of things unaddressed in that environment. You can package the network functions and deploy them, but these network functions are not developed within the IT shops of the network provider. In many cases they come from third-party vendors like Ericsson, and they follow a somewhat different lifecycle; they don't get ingested into the normal DevOps lifecycle. So we said, let's look at ways to integrate this network function virtualization with the DevOps lifecycle of the IT applications in the provider's IT shops. That's the journey Sharath is going to take us through. We carried out this POC together, and we can discuss a little more detail there. Thanks, Harshath. So let's quickly talk about the different phases of Verizon's DevOps journey and rebuilding our engineering culture. We started this journey almost four to five years back, but we made significant progress in the last two to three years. We have over 1,000-plus applications that we run to support our business and serve our customers. We started moving from large legacy delivery models to more agile, modern delivery models. Even though we are a big technology company, there were specific challenges that we were facing across the portfolios within Verizon. Development teams write the code. From the time they write the code to pushing it to production, it has to go through several teams: someone has to build, someone has to package, someone has to deploy, someone has to test. All these teams were working in silos; there was no collaboration. That is one of the specific challenges we took on, and we enabled a DevOps toolchain. The second one: let's say you want to provision a few servers for application development. It has to go through different teams and several handoffs, and it used to take weeks to months. Each of these gaps, each of these delays, piled up.
And at the end, a major release used to take three to four months. So then we thought, OK, let's automate. The line of code written by the developer should be the only manual thing; all the functions you build around that line of code need to be automated. That is our philosophy, nothing else. The developer writes the code; from there until it goes to production, including setting up the environments, everything is one click, everything automated. So we set up a DevOps toolchain. Let me quickly go over what a DevOps toolchain is. It consists of several tools, as you can see over there: an agile project management tool, a version control system, test case management and automation tools, provisioning tools, and build and deployment tools like Jenkins. What we used here: Jira for project management and our agile project models, Git for version control, Selenium for test automation, Ansible for orchestration, and Jenkins for builds and deployments. This whole automation process took almost two to three years. Around 1,500 applications are now following these agile automation models. Developers used to make maybe 20 or 30 builds in a week; now almost 800 to 1,000 automated builds are happening. You can see how this accelerates innovation and how fast we can deliver software and new features to customers. So the story was good, everything was fine, and there was tremendous progress. Then, as Harshath said, NFV was born, and now we are all using it. So how can we apply these DevOps principles to network functions? Can we make them more agile? We took this use case and did a proof of concept with Ericsson. We worked on two use cases.
The first one: take Ericsson's virtual router, put it in our OpenStack network, and instantiate it, doing the whole process with one click. The second use case we worked on: let's say today the virtual router is version 1.2. Tomorrow there are new features and Ericsson releases version 1.3. How can we upgrade our network environments, whether test or production, seamlessly, without any disturbance? So we took those two use cases and did a proof of concept. The goal and benefit of these use cases is to enable the DevOps principles, and the important one is to shorten the time to market. Right now it's taking four to six months; we are bringing it down to almost hours. That's a big leap. Let's talk about how we did this. If you look at network function virtualization, at the end of the day these are all software packages. Ericsson has their own lifecycle, their own development model: they design, develop, test, and release at Ericsson. We take that package, an OVA package, and pull it into our binary repositories. Here we use JFrog Artifactory, so that Ericsson pushes the virtual routers automatically into our binary repository. Then we use Jenkins, the same tool we use for application builds and deployments, along with Ansible, and orchestrate it using Ericsson Cloud Manager. Ericsson Cloud Manager is used for instantiating, activating, and deploying these VNFs into the network. So here, the specific package is called an OVA package. Harshath, do you want to tell us more about what this VNF package model is? Yeah, it's basically the VNF package, traditionally known as a VNF descriptor. In this particular case, we used the Open Virtualization Format.
An OVA is a whole package that consists of the descriptor plus all the VM images for the virtual router, delivered as the output of that build cycle from Ericsson to a provider like Verizon. There are other formats possible, like Heat orchestration templates, and in the future, basically a very short time from now, we will also be supporting TOSCA descriptors. So basically, it is the application packaged as a tarball-like deliverable that can be consumed into an artifact repository. If you really look here, one pipeline is running at the vendor: they are developing, and they are pushing that package like software. No more appliances here; they are not shipping appliances, they are pushing a software package to our pipeline. So the two different pipelines are integrated through a repository, and everything is deployed and tested in our networks. Let's talk about the first use case. Here, Ericsson provides the OVA file. Once they develop and test it, even for a small feature release, a small function, they don't need to wait; they push it to our Artifactory. Once we take it from Artifactory, everything is like our standard IT pipeline. We just build a Jenkins pipeline that has four steps. The first: pull the OVA file from Artifactory. Then upload the EVR (Ericsson Virtual Router) package to ECM, Ericsson Cloud Manager, using REST APIs. Then deploy it in an OpenStack environment; basically, we are taking the router from Ericsson and deploying it in our OpenStack environment, in this case a Mirantis OpenStack environment. As a last step, we even test whether it is working or not. The whole process is one click: a small change is made on their side, it comes over here, and it gets deployed. We made a video of this whole process; it's a couple of minutes long. Let's watch that.
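To make the four pipeline steps concrete, here is a minimal, hedged sketch of what such a pipeline does. This is not the actual Verizon Jenkins or Ansible code from the demo; the Artifactory, ECM, and ping interactions are stubbed out as injectable callables, since the real REST endpoints and script names are not shown in the talk.

```python
import hashlib
import pathlib
import tempfile

def pull(fetch_from_artifactory, dest_dir):
    """Step 1: pull the OVA from the artifact repository onto the build server."""
    path = pathlib.Path(dest_dir) / "evr.ova"
    path.write_bytes(fetch_from_artifactory())
    return path

def register(path, expected_sha256, upload_to_ecm):
    """Step 2: verify the OVA's integrity with a checksum, then upload it to ECM."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(f"checksum mismatch for {path}, refusing to register")
    return upload_to_ecm(path)

def deploy(package_id, deploy_via_ecm):
    """Step 3: ask ECM to instantiate the virtual router on the OpenStack network."""
    return deploy_via_ecm(package_id)

def connectivity_test(ping_between_test_vms):
    """Step 4: route a ping between the two test VMs through the new router."""
    if not ping_between_test_vms():
        raise RuntimeError("connectivity test failed")

# Dry run with stand-in callables (no real Artifactory or ECM involved):
ova_bytes = b"fake-ova-contents"
expected = hashlib.sha256(ova_bytes).hexdigest()
with tempfile.TemporaryDirectory() as workdir:
    ova_path = pull(lambda: ova_bytes, workdir)
    package_id = register(ova_path, expected, upload_to_ecm=lambda p: "evr-pkg-1")
    instance = deploy(package_id, deploy_via_ecm=lambda pkg: "evr-instance-1")
    connectivity_test(ping_between_test_vms=lambda: True)
print("pipeline ok:", instance)
```

In the real pipeline described in the talk, each of these callables would be an Ansible task or Jenkins job step hitting Artifactory and the ECM REST APIs instead of a lambda.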
In this video, we will see in real time how virtual network function changes can be deployed in a next-gen OpenStack environment using the DevOps platform. For this demonstration, we used a virtual network function, a vRouter, provided by Ericsson. Enabling DevOps for VNF deployment shortens the time to market for network feature updates and increases quality by streamlining deployments. To enable this functionality, we installed Ericsson Cloud Manager in our OpenStack environment. Ericsson Cloud Manager, or ECM, is a cloud management system that enables the creation, orchestration, activation, and monitoring of services running on programmable network resources. In this use case, we accept the Ericsson virtual router OVA file into our internal artifact repository, One Artifactory. Using Jenkins, our orchestration engine, we can pull that file from Artifactory and deploy it in OpenStack using Ansible and the ECM APIs. Let's take a look at the Ericsson Cloud Manager UI. At this point in time, you will see there are no virtual applications deployed onto the network yet, because there are no virtual machines provisioned and no virtual networks created. Additionally, there are no vRouter packages available. Viewing the Verizon internal artifact repository, OneArtifactory.Verizon.com, we can see Ericsson's vRouter OVA files available for download and deployment. Here, we can see the virtual network function continuous delivery pipeline we created using Jenkins. The Jenkins pipeline consists of four steps: pull, register, deploy, and test. In the first step, we will pull the vRouter OVA file from Artifactory and place it onto the build server. In the second step, the Jenkins deploy job will verify the integrity of the OVA file using a checksum and then upload it to ECM. In the third step, Jenkins will deploy the virtual router onto the OpenStack network.
Lastly, in the fourth step of the pipeline, we will run the test job, which sends a ping between the two test VMs and ensures the traffic is routed through the Ericsson virtual router. And voila, that's it. With one click, we can pull, configure, and deploy virtual routers into the OpenStack environment and test their functionality. Now, if we go back to Ericsson Cloud Manager, you can see the received EVR packages, EVR1 and EVR2, one active EVR in virtual applications, the virtual router itself as a set of VMs, and finally the virtual network that we created during the process. Now we can see all of the jobs in our continuous delivery pipeline are green, indicating that they executed successfully. This highlights the repeatable process of downloading, deploying, configuring, and testing virtual routers. Even though this test is done automatically in our Jenkins pipeline, we can see the virtual router is working properly by testing connectivity between our two test machines. If we connect to one of our test machines, with an IP address ending in 149, we can prove connectivity by pinging the other test machine, ending in 151. And there we have it: the virtual router is up and running. Let's take a step back and review what we've accomplished. We stored a virtual router from our vendor in our internal artifact repository and deployed that virtual router onto our networks using a DevOps continuous delivery pipeline. As this is an easy and repeatable process, any changes made to that virtual router's functionality can be tested and deployed onto our networks within minutes using the DevOps platform. We also worked on a second use case, which is a bit more complex: upgrading the version from 1.2 to, let's say, 1.3.
Even without disturbing any of the existing network resources: keep one instance active and one passive, upgrade the passive one, then make it active, so that you have rolling deployments of virtual routers in your networking platform. So these are the POC use cases, the proof of concept we worked on with Ericsson to showcase how we can apply the DevOps principles. Now Mehul will talk about what's next for this NFV journey. Well, just to summarize: the network transformation from physical to virtual is well underway. Verizon is doing a lot of work, Ericsson is helping, and so are many others in the industry, including some of the faces I see in the room who are very actively engaged in SDN and NFV programs. So that transformation, that transition, has already started. To keep things simple, what's really happening is that the key enablers you see at the bottom, the network and the network components, like the routers and other mobile network functions that were briefly mentioned earlier, are becoming more programmable, more API-enabled. That makes it easier to bring them into the DevOps toolchain we discussed. It also helps bring them into an OpenStack environment, or maybe in the future some kind of container environment; there have been a lot of talks around that. So the view is that we are moving from the mostly physical world of today to the NFV-virtualized, SDN-enabled world of tomorrow. And the future is a very programmable network that will truly provide network functions as a service: a very modular architecture, full support for OpenStack, containerized. It will also enable new business models and new services, for example what are called network slices: taking a slice of the network, from the radio network to the mobile core network, close to the customer.
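The non-disruptive upgrade described at the start of this section can be illustrated with a small sketch. This is only the active/passive bookkeeping as described in the talk, not Ericsson's or Verizon's actual cutover logic; in the real POC, the traffic switch would go through the ECM APIs.

```python
# Minimal sketch of the active/passive router upgrade: upgrade the passive
# instance while the active one keeps serving, then cut over. State only.
class RouterPair:
    def __init__(self, active_version, passive_version):
        self.active = {"version": active_version, "serving": True}
        self.passive = {"version": passive_version, "serving": False}

    def upgrade(self, new_version):
        # 1. Upgrade the passive instance while the active one keeps serving.
        self.passive["version"] = new_version
        # 2. Cut traffic over to the freshly upgraded instance.
        self.active["serving"], self.passive["serving"] = False, True
        # 3. Swap roles: the old active becomes the new passive, still holding
        #    the previous version for an immediate rollback.
        self.active, self.passive = self.passive, self.active

    def rollback(self):
        # The previous version is intact on the now-passive instance.
        self.active["serving"], self.passive["serving"] = False, True
        self.active, self.passive = self.passive, self.active

pair = RouterPair("1.2", "1.2")
pair.upgrade("1.3")
print(pair.active["version"])   # traffic now on 1.3
print(pair.passive["version"])  # 1.2 kept around for rollback
```

The same swap, run in reverse, is what the rollback use case mentioned later in the Q&A relies on: the previous version is never destroyed until the new one is proven.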
And essentially you have just a slice of the network, let's say just for connected cars or your household appliances, right? So that will make things more programmable. There have been a lot of estimates by Ericsson and many others in the industry: by 2020, connected devices are expected to reach 20 billion; I've heard the range from 20 billion to 50 billion devices. We strongly believe that virtualizing the network elements, as well as orchestrating and automating them and bringing them into the kind of DevOps that Sharath talked about in this POC, is going to be quite critical for the future of the network. So that was it. Sharath, anything else? Yeah, that's it. Thank you for attending, and thank you for the partnership. And any questions we can answer? We have, I think, still about 10 minutes. Any questions? Well, this DevOps is, how can I say, utilizing the new build in the Verizon lab, where Verizon can easily test it. Do you think you can extend this model into the actual live production network? Absolutely. Even though this is a proof of concept, it's like the DevOps journey itself, right? When someone started automating the processes, everyone was thinking, OK, we need to build the confidence. Once you get the confidence, everything gets automated. I'm sure in the coming years the whole process will be automated. This is not just for the labs. So if that is the case, what would be the biggest challenge, do you think? Do you want to take it? Yeah, so as Sharath showed, this DevOps toolchain is already working for 500-plus IT applications, as he mentioned, and we've done a proof of concept. The number one challenge is prioritizing and doing it, and obviously vendors like us at Ericsson making sure that we provide clean VNF descriptor interfaces, whether it's OVF or HOT templates, and make it as easy as possible for Verizon and the likes to integrate with the DevOps toolchain.
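On the descriptor packaging just mentioned: as described earlier in the talk, an OVA is a single tarball-style file holding the .ovf descriptor plus the VM disk images. Here is a minimal sketch of inspecting such a package; the member names are made up, and we build a toy archive first since no real package accompanies this text.

```python
import io
import pathlib
import tarfile
import tempfile

def make_toy_ova(path):
    # Build a stand-in OVA: a tar archive with a descriptor and one disk image.
    with tarfile.open(path, "w") as tar:
        for name, data in [("evr.ovf", b"<Envelope>...</Envelope>"),
                           ("evr-disk1.vmdk", b"\x00" * 16)]:
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))

def find_descriptors(path):
    # The descriptor is, by convention, the member whose name ends in .ovf.
    with tarfile.open(path) as tar:
        return [m.name for m in tar.getmembers() if m.name.endswith(".ovf")]

ova = pathlib.Path(tempfile.mkdtemp()) / "toy.ova"
make_toy_ova(ova)
print(find_descriptors(ova))  # ['evr.ovf']
```

This tar structure is what lets the vendor's build output be treated as an ordinary binary artifact in a repository like Artifactory, which is the integration point between the two pipelines in the POC.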
Okay, thank you very much. Thank you. So, just a question on the naming you are using, DevOps. What I see is that you have done an automation of a long manual process by using software like Jenkins and Cloud Manager, so you could achieve one-touch, end-to-end provisioning. But when we say DevOps, it's actually something else, right? I mean, it's a development process where a small group of people with different skill sets come together, build something quickly and test it, and if it works, use it; if not, throw it away. So it's more like a development process rather than an implementation like this. I just wanted to know why you use the term DevOps for this. So, see, DevOps has a hundred definitions; this is the way we view it. At the end, are you doing it manually, taking two or three months, or are you automating and doing it in minutes? How fast are customers getting the results? It doesn't matter what you call the process or what technology you use. At the end, you need to serve the customers and release the features on time, in real time. That's what we achieved with this process and with the automation. Yeah, and there is a development component here. There are two parts we talked about: there are IT applications, and then there are these virtual network function applications. For VNFs, the development takes place outside the internal development shop, but there is a development cycle, and that's what we showed as the top line: the VNF's own development lifecycle. The critical idea is how you combine both of them so that you can build a common platform for IT as well as network function workloads. And that is the holy grail of this virtualization for a lot of providers here. Hi, thanks, great presentation. What I wanted to know is that you actually showed us an admin workflow for DevOps, where you onboard a full router or upgrade a whole router.
Typically, when you deploy an application, let's say an internal application, you need a subset of that functionality: a couple of router ports to be opened, a couple of switch ports to be opened, a firewall rule or a load balancer VIP to be created. Have you had any experience taking an internal application through that whole process in more of a CI/CD manner, without onboarding a complete VNF? A complete VNF is a rare event in the DevOps environment. Thank you. So that is the first challenge I was talking about, right? As an application manager, I got the new environment with the 10 servers. Great. But I need to talk to 10 back-end systems, so I need to go to my network teams or someone and request the process, and the whole process is currently manual, right? We can solve that problem in a similar fashion using continuous integration and continuous delivery pipelines, so that once we set up the environment and the infrastructure, we kick off another Ansible playbook and open the firewalls so that your systems can talk to the several back-ends. So this is not limited to bringing virtual functions into the network or to upgrades; it also applies to changes. We showed a couple of use cases here, but this can be extended to anything. All right, so, very impressive speech, and I'm wondering about the interface with the orchestrator. Here you have Cloud Manager; is that only for VNF onboarding? Secondly, what about the VNF manager installation? Are you doing both of them simultaneously? Those are my two questions. So, Cloud Manager is acting both as a generic VNF manager and as an NFVO. The onboarding part is of course handled through Cloud Manager, but even the deployment was handled through Cloud Manager, because it exposes the VNF manager interfaces, and both of them are REST APIs.
So those APIs were used in the Ansible scripts, and they were invoked to achieve this end-to-end solution. So, a follow-up: we have service design GUIs for the VNFs, right? That sits on top of the orchestrator, to design the network service end to end. How do you combine that with such a GUI application? I'm not sure I understood the question, but let me repeat it. Is the question that there is an external service designer to design this end-to-end service? In this case, we didn't exercise that, but you can think of that service design as embedded in the Jenkins and Ansible scripts, basically. Again, this was a proof of concept. Of course, when this goes into an actual MANO network service deployment, you will have a network service descriptor and the VNF descriptors onboarded onto Cloud Manager. Right now, we only used the VNF descriptors; we didn't use any network service descriptors. Okay, so I can discuss it with you offline. It's very much GUI-designed. Yeah, yeah, okay. So, you'd mentioned that you're onboarding a bunch of applications, specifically their builds. What was that process like, and what challenges did you encounter in actually onboarding the applications, getting scripts written, getting test cases implemented? I'd be interested in hearing more about that. Yeah, it's a very good question, and this is the journey that we go through every day. Onboarding an application to the DevOps platform is really challenging. You need to change the culture; you need to rebuild the engineering culture. Some of the teams might be using a manual process; say they want to run 100 test cases and they're still using Excel. You move them to an open-source platform like TestLink, automate everything, and run hundreds of test cases, and if something fails, a Jira ticket is opened so that the developers can fix it.
Once you automate the whole lifecycle and showcase it with one particular application to your teams, onboarding the rest of the applications is not a big problem. Basically, you need to build confidence in the teams when you are making a change, a cultural change, and taking them to the next level. The tricky part: sometimes we take a few applications first. See, in any environment, in any company, there are a few teams that want to go more agile. They are your advocates. You automate for them, and others will follow. Not everyone wants to be first, right? Hi. It's a great presentation, and it's truly DevOps automation in my point of view. Thank you. Are you planning to use OpenStack Tacker for the NFVO and VNF manager, or Ericsson Cloud Manager, which, you know, does the same? What difference do you see between Ericsson Cloud Manager and OpenStack Tacker? That's a good question. We can talk about Ericsson Cloud Manager, but for OpenStack Tacker, I don't know; we need to get more details. Thanks for the presentation. My question is, when you onboard an application, especially when you install the new version of the vRouter, if the upgrade fails, or if the configuration that you want to make on it fails, how did you handle that condition in this automated lifecycle? I think you're talking about our third use case. We drafted four use cases; a couple of them we talked about today. The third use case was: if a router upgrade fails, how can we roll it back and make sure the network stays in the same state? The other one is scaling this environment: when more VMs are added, how can we bring in more routers and scale? So there are other use cases; we can probably go into more detail, but we defined those third and fourth ones. Yeah, so we didn't execute that use case.
In particular, there is the error condition: how you roll back to the previous version. But the principles are the same, and I think this applies to your IT applications as well: whenever a new version of an application that you're trying to roll in fails, you always, always have the previous version that is handling some of the traffic or some of the application functions, and you roll back to that and clean up the resources behind it. So we discussed the scripts and the flows for that, but we didn't execute them; this was just a proof of concept, and we want to basically share with the community what we have achieved so far. As a simple logical extension, that might be another step in the Jenkins pipeline: something goes wrong, you roll back, fix it, and run it again. Maybe one more question, I think, we can take. So what you explained here is the proof of concept that you already achieved. My question is more about the next step, based on this proof of concept result, for the future. What's the plan from Verizon's perspective? What's your strategy? That's a very good question. This is a proof of concept, and this happens with any lifecycle: when something new comes, we do a proof of concept, we build the confidence, and when the time comes, we realize the same things in production environments. This is quite natural for any of the automation or any of the changes that we do. Okay. Thank you. Thank you very much.