Sorry, are we ready back there? Yes, we're ready. You're on. Hello, my name is Dan Newman. I'm a cloud architect for Verizon internal IT, responsible for designing and implementing the internal IT infrastructure cloud. And I'm Fred Oliver — we can flip to the next slide — I'm also an architect working in cloud architecture, particularly on the network side of the house. We're jointly going to talk about what we've done, our different use cases, and the environment. That's just a short agenda; we can go to the next slide.

We're building a relatively simple OpenStack environment. Part of our goal is to leverage COTS hardware, separate the hardware from the software functions, and enable the addition of different OpenStack services over time. One of our big challenges is how we move into this framework given our legacy environment, and we'll talk more about those challenges as we go along. We have a relatively simple spine-and-leaf architecture; this is another small transition from our existing environment. Part of our goal is to deploy SDN controllers to manage our data center fabrics, and we want to extend the SDN controller beyond just the internal data center fabric, gradually moving farther and farther out into the network so that we can control the whole network. Verizon has several networks that are not interconnected, and we want to bring all of those networks into one place in this environment. From the network perspective, we want to run services and applications from all the business units, so we intend to run wireline, telecom, and wireless business services applications within a single cloud environment. That brings with it a mixed set of challenges, and we'll gain more experience over time. We expect this to be a fairly long journey. We're deployed right now in a small number of data centers, and over the next few months and years we'll deploy in several more and gradually bring applications into this environment.

From an IT perspective, we'd been doing IT automation in siloed compartments for quite a while. Everybody had their scripts and their tooling to do their jobs faster, but collectively there wasn't much interaction between the siloed groups. So from the standpoint of IT, we wanted to see how we could partner with the business and deliver business results. From a business perspective, all the work we were doing in the siloed environments to speed things up wasn't necessarily visible to the end users. I likened it to working for days and days on the engine of a car: it could go as fast as a Lamborghini, but if the outside looked like a Kia, everybody wondered what you were doing. It really was that end result everybody was looking for. We realized that all the work we were doing on IT automation was great, but it wasn't carrying through to the end; the business users didn't actually see that value directly. So when we looked at OpenStack, one of the reasons we went toward it was to start with a common set of programmatic APIs — a programmatic infrastructure that we could expose to our development partners.
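To make the "programmatic infrastructure" idea concrete, here is a minimal sketch of what consuming the environment through OpenStack's standard APIs might look like to a development partner. It uses the openstacksdk library; the cloud profile, image, flavor, and network names are hypothetical placeholders, not Verizon's actual configuration.

```python
import openstack

# Connect using a named cloud profile from clouds.yaml.
# "internal-cloud" is a hypothetical profile name.
conn = openstack.connect(cloud="internal-cloud")

# Look up resources by name through the standard APIs.
image = conn.image.find_image("rhel-7-base")         # hypothetical image name
flavor = conn.compute.find_flavor("m1.medium")
network = conn.network.find_network("app-tier-net")  # hypothetical network

# Provision a server programmatically -- no ticket, no manual steps.
server = conn.compute.create_server(
    name="demo-app-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status, server.id)
```

The point of the sketch is that the same small set of standard calls works for any team, which is what lets the infrastructure be exposed to development partners rather than mediated by a request queue.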
We really went down the road of DevOps, and we wanted to partner with the developers tightly so that all the automation we were doing laid the foundation for the end result we were trying to achieve. We wanted tight integration with the CI/CD tool chain. To get that moving and really kickstart it, we recognized that an organizational change was needed, so we took a group of developers and defined a DevOps platform engineering team to focus on the CI/CD tool chain aspects of the environment — to start putting together the automated, agile development framework that would take advantage of the programmatic infrastructure we were building. We wanted to make sure this would be a truly transformational effort. We didn't want to recycle the things we'd done in the past just to get a little movement on the infrastructure side; we wanted to reach the business objective.

For this next-gen platform, we had some principles we wanted to drive home with the developers and the business units. You've all heard the cattle-versus-pets analogy, and we really wanted this platform to be cattle: an automated environment where we weren't building assets that would be cared for and managed in the traditional legacy model, but where we pushed for a cloud-native methodology. We didn't want any software agents running in the environment — been there, done that. With dynamic capabilities and dynamic scaling, agents slow things down: they have to be reset, they have to reconnect. It was a total change in mindset. From an asset management perspective, something could be there now and gone a minute later, and an agent might not even have connected by then (a minimal sketch of this agentless pattern follows below). So we really wanted to transform everything, not just the infrastructure; we recognized the whole environment had to change. We tried to get away from the concept of making manual changes: you test and manage your code so that when it goes into production, you don't log into a box and change things by hand. We tried to really cement the idea that the dev/test process and that tool chain is where everything happens. Moving into that framework and changing the methodology and the model was pretty difficult from an infrastructure perspective, because we'd always been held to harden the infrastructure and the hardware — to make sure nothing could go wrong, so that a developer could assume the infrastructure wasn't going to disappear and could rely on it to provide high availability to a certain degree. Instead, we wanted to push for horizontally scaling, twelve-factor application methodology, and it was very important that developers understood that's what we were trying to accomplish — not shoving legacy applications in here and failing to get the results we were after.

During this process, there were a couple of things we learned from an IT perspective that were very transformational, both for the folks involved in the project and for the enterprise at large. Infrastructure teams had to learn how to become service providers.
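As an illustration of the agentless, cloud-native bootstrapping described above, here is a minimal sketch of injecting configuration once at boot through cloud-init user data, rather than installing a persistent agent that has to register and catch up. The script contents and names are hypothetical, and it assumes the same openstacksdk connection style as the earlier example.

```python
import base64
import openstack

conn = openstack.connect(cloud="internal-cloud")  # hypothetical profile

# Everything the instance needs is applied at boot by cloud-init;
# no long-lived agent has to register, reconnect, or catch up later.
cloud_init = """#cloud-config
packages:
  - nginx
runcmd:
  - systemctl enable --now nginx
"""

server = conn.compute.create_server(
    name="web-cattle-01",  # cattle, not a pet: disposable by design
    image_id=conn.image.find_image("rhel-7-base").id,       # hypothetical
    flavor_id=conn.compute.find_flavor("m1.small").id,
    networks=[{"uuid": conn.network.find_network("app-tier-net").id}],
    # The Nova API expects user_data base64-encoded.
    user_data=base64.b64encode(cloud_init.encode()).decode(),
)
```

Because the instance is fully described at creation time, it can appear and disappear in seconds without any out-of-band registration step — which is exactly why an agent-based model fights dynamic scaling.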
They were no longer in a siloed model of "I just support this box; I don't know what it's doing or how it's being used, but this is what I do and this is why I do it." They had to understand that they were providing a service to an end user — our development and business units — and they had to understand how those users wanted to be able to use the product they were providing. There had to be a greater partnership on how that service was developed, how it was maintained, and how it was delivered. One of the first things we did, before we started to automate this end-to-end process, was to understand what the processes actually were — not just for acquiring and deploying the hardware, but also the processes around an end user coming in to request hardware. When we started to map all the processes out, it was surprising to find out how many things were done just because that's how they'd always been done. By streamlining the process and questioning everything about it, you were able to automate smaller tasks, execute tasks faster, and get people what they needed on demand. And that was huge.

Then there was the fail-fast culture shift. There had always been a long stream of testing and evaluation before anything could hit the floor, and it would take months to get anything new out. We had to get everybody to understand that we needed to be agile not only at the business level but at the infrastructure level — to test things out, to take risks. That's really what we wanted this environment to be. We understood that OpenStack is maturing and the tooling is maturing, and we had to go in knowing it wasn't going to be perfect, because if we waited for it to be perfect, we'd never deploy. From an automation perspective, we made sure we followed the standards and leveraged standard APIs, so that we weren't developing a lot of custom API code that we'd have to maintain and regression-test. It was really important to leverage and consume standard APIs so that you could write once, use many (see the sketch below).

Communication was huge. There was a lot of concern over: what does this mean? How do I get involved? Especially when you map out the process — operations, finance, governance, compliance, security — there were a lot of new areas involved. As we began to automate, whenever we hit a bottleneck because a manual process had no tooling in place to automate that particular task, the spotlight moved down the chain. People two or three steps down the chain started to ask: where's the next bottleneck going to be, and do I want to be it? That really helped get the process moving, because as you automated things away, you would always evaluate: where's the longest piece in this chain? Why does this step take four weeks when the automation here takes two hours? Then you would shine the spotlight there and begin automating those processes. Communicating the whole effort out meant you weren't boiling the ocean; you were assigning service providers even to the business processes, so they could immediately start looking at how to make their process better and how to automate it. When the bottleneck came to them, they already had a plan and were moving on it. Making sure everyone understood the end result, and that the business objectives we were trying to achieve were the end goal, kept everybody on the same page.
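Here is a minimal sketch of the "write once, use many" point above: because every region and environment speaks the same standard OpenStack API, one small function can be reused everywhere without custom per-environment code. The cloud profile and region names are hypothetical.

```python
import openstack

def inventory(cloud: str, region: str) -> None:
    """List servers in one region. The same code works against any
    OpenStack endpoint because the API is standard."""
    conn = openstack.connect(cloud=cloud, region_name=region)
    for server in conn.compute.servers():
        print(f"{region}: {server.name} ({server.status})")

# Hypothetical region names; the function runs unchanged in each.
for region in ("dc-east-1", "dc-west-1"):
    inventory("internal-cloud", region)
```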
The training and transition were huge, because when you move into this methodology, you have to understand: how do I take the knowledge I have and codify it? How do I fit into this new model? How can I be an evangelist for this, and not fear that if this gets automated, my job goes away? If you automate this, you can go do new things: build new services, manage the roadmap for a service. It's understanding what you do today and how it fits into the world of tomorrow. And again, know your user stories. I don't know how many times we've delivered services and the end user said, "You've got 20% of what I need — not good enough, so I'm going over here." You have to understand what you're building and why, and have a roadmap where you can collaborate with your end users on where you are and where you're going. Then a 20% delivery is not seen as a failure but as a move in the right direction, because it's easy to say, "If you don't have everything I need, you didn't deliver me anything." We have to partner and understand that it's a growth process; together we can mature it. That's very important when defining user stories and deciding where to start with your roadmaps.

Monitoring — that was huge. This was a new environment with a lot of open source components integrated, and a lot of knowledge needed to be gathered: where to look, what problems are happening, and what to do when you see them. As we matured in building the monitoring platform, defining the items to monitor, and working out the recovery process for a lot of these things, we were able to start building self-healing, and the stability of the platform grew as our monitoring got better (there's a small sketch of the pattern below). It was really important to get real-world experience with failures, troubleshooting, and identifying the issues we saw in our particular environment with the tool sets we had — and it just takes time for that to happen. I wish there were a quick template that said, "Here are all the things to look for, here's where to look, and here's how you make it stable," but we found that it grows over time. So we placed a lot of importance on it: any time anybody sees anything or has to do anything, we track it, we monitor it, and if there's a way to create a self-healing action, we do that. It really helps with the stability of the platform.

And again, adoption is about end-to-end automation. Within IT and the legacy application space, we've had a lot of automation and workflow scripts, but if you have a button that says "push this and you get a VM," yet you have to go through five weeks of business process to request it, validate it, fund it, and approve it before you can hit the button, that two hours is just a speck in the seven to eight weeks it takes to get anything. End-to-end automation is really key to adoption, especially when you talk about the CI/CD tool chain: the base infrastructure integration with the CI/CD tool chain is only one piece — there's also change management automation and security validation.
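Here is a minimal sketch of the self-healing pattern described above: a watcher that polls instance state and an application health endpoint, and takes a simple recovery action on failure. The health URL, port, and recovery choice (a hard reboot) are hypothetical simplifications of what a real monitoring stack would do, and the cloud profile name is assumed.

```python
import urllib.request
import openstack

conn = openstack.connect(cloud="internal-cloud")  # hypothetical profile

def healthy(ip: str) -> bool:
    """Probe a hypothetical application health endpoint."""
    try:
        with urllib.request.urlopen(f"http://{ip}:8080/health", timeout=5) as r:
            return r.status == 200
    except OSError:
        return False

for server in conn.compute.servers():
    if server.status == "ERROR":
        # The platform itself reports the instance as broken.
        conn.compute.reboot_server(server, reboot_type="HARD")
        continue
    ips = [a["addr"] for nets in (server.addresses or {}).values() for a in nets]
    if ips and not healthy(ips[0]):
        # The app stopped answering: take the recovery action and log it,
        # so the event feeds back into improving the monitoring.
        print(f"self-heal: rebooting {server.name}")
        conn.compute.reboot_server(server, reboot_type="HARD")
```

In practice each recovery action would also be tracked, which is the "any time anybody sees anything, we track it" discipline that lets the catalog of self-healing actions grow over time.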
So OpenStack is the foundation for what we're trying to deliver, but the end result, from an IT perspective, is that we want the businesses to see value in it and in the automation, so that they can be more agile in delivering services to their end users. We see OpenStack as a very good partner in that foundational piece of delivering a higher-level business objective. All right, I'll turn it over to Fred.

In the network space, we don't actually develop most of our applications; we get them from our vendors. So part of the issue is how to get these applications into a cloud environment. A lot of them were not written for this environment, so for each application we get, we have to work with the vendor and with our platform engineering team to incrementally tune the environment and adapt each VNF to it. This led us to provide a lab environment in various Verizon locations, and we're working with some of our vendors directly so that they deploy a mini version of what we're deploying in the field in their own labs and can validate against this environment. Because these things were not necessarily meant for the cloud, some of the issues we're seeing are that dynamic scaling doesn't work very well — they're tuned for a fixed set of resources when they're providing a specific service — and things like load balancing don't work very well either, because they weren't meant to scale in and out and don't have load balancing built in. One of the key issues we found as we started to deploy some of these applications is that the licensing models we're getting from our vendors are targeted at the legacy environment, where you buy a fixed set of resources, deploy them up front, and then pay that same cost forever. One of the things we were looking for in this cloud environment is the ability to scale dynamically — not keep the full capacity of every service available all the time, but scale capacity as customers demand it. This is something we're working through with our vendors: how do you move from the legacy model to deploying applications on this infrastructure?

And as Dan mentioned, one of the key points is really day two through day N. How do you keep this working? How do you upgrade it? How do you integrate with the existing management tools and service assurance tools? There are pieces missing, both at the platform level in OpenStack and in the VNFs. When you introduce an extra level of abstraction through virtualization, you lose control of where things actually sit. Some of our inventory systems expect a particular application to sit on the same server forever, and that isn't necessarily true in a cloud environment (there's a small sketch of reconciling this below). Service orchestration and automation are probably key to achieving any long-term operational advantage, and we really need to improve automation from both a deployment and a resiliency perspective. One of the issues we found in this space is that there isn't any common way to do orchestration of these environments.
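A minimal sketch of the placement problem mentioned above: because instances can move, an inventory system has to ask the cloud where things actually are rather than assuming a fixed server. This assumes an admin-scoped openstacksdk connection (hypervisor placement is admin-only data) and a hypothetical profile name.

```python
import openstack

# Admin-scoped connection; placement data is not visible to normal tenants.
conn = openstack.connect(cloud="internal-cloud-admin")  # hypothetical profile

# Reconcile the inventory view with reality: list every instance and the
# compute node it is on *right now* -- tomorrow it may be somewhere else.
for server in conn.compute.servers(all_projects=True):
    print(f"{server.name}: {server.hypervisor_hostname}")
```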
The APIs of the applications themselves, as well as of the platform and infrastructure, don't really have a visible, well-defined information model that we could consume and automate against. And because we're deploying applications from several vendors, and historically these vendors have delivered a full stack of functionality — hardware, all the software, all the orchestration pieces — as a single unit, we end up with several orchestrators built into this environment. They don't necessarily cooperate: the way they're designed, they expect to own all the infrastructure. This introduces issues for us, because we're running multiple applications on the same infrastructure, and the orchestrators really need to be talking to each other and coordinating — or at least be tolerant of resource exhaustion when multiple services are asking for resources in the same environment (a small pre-flight check sketch follows below). Well, that worked, yes.

And again, as Dan was describing, a lot of our challenges are about how we actually bring applications into the environment. Our application teams are not used to this; they're used to getting a full application suite from a vendor and deploying it in a separate space without any shared resources. Now that releases are software-only, the release cycles are getting shorter, and upgrading these things relatively quickly — without spending six months qualifying them in a virtual environment (oops, I don't do that) — is a problem we're having to deal with. Getting releases out incrementally and moving to more of a CloudOps/DevOps model is one of our goals, and managing that becomes an issue.

From a network perspective, what Verizon generally provides is best-in-class service, and to accomplish that we work with our vendors on SLAs. For a vendor to offer an SLA for an application running on commodity shared infrastructure, there's a gap in how they achieve it: there's no way to quantify exactly how their application is going to perform in a shared environment. They need a way to get assurance that they will receive the capabilities and resources they've asked for, and a way to measure, from the platform side, that the platform hasn't broken the SLA it promised for that service. And from the platform perspective itself, we're assembling this from multiple vendors — we're buying the hardware separately from the OpenStack environment and from the various components in it — so the platform team becomes somewhat of an integrator, and that's a change for the Verizon team. With different release cycles for OpenStack versus the SDN controller versus the orchestrator, how do you manage those cycles? Currently, OpenStack doesn't do a clean, non-disruptive upgrade without some hiccups in the environment. That's one of the things we're asking OpenStack to improve: upgradeability. But this is our standing challenge: how do we run this through day N? Let's go on.
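As a minimal sketch of the "be tolerant of resource exhaustion" point: an orchestrator sharing infrastructure can at least check remaining quota headroom before it scales out, instead of assuming it owns all capacity. The attribute names follow openstacksdk's compute limits resource and the sizes are hypothetical; treat this as an illustration of the idea, not any vendor orchestrator's actual logic.

```python
import openstack

conn = openstack.connect(cloud="internal-cloud")  # hypothetical profile

def headroom() -> dict:
    """Return remaining capacity under this project's compute quota."""
    absolute = conn.compute.get_limits().absolute
    return {
        "cores": absolute.total_cores - absolute.total_cores_used,
        "ram_mb": absolute.total_ram - absolute.total_ram_used,
        "instances": absolute.instances - absolute.instances_used,
    }

NEEDED = {"cores": 4, "ram_mb": 8192, "instances": 1}  # hypothetical VNF size

avail = headroom()
if all(avail[k] >= NEEDED[k] for k in NEEDED):
    print("scale-out can proceed:", avail)
else:
    # Back off gracefully instead of failing mid-deployment and starving
    # the other services sharing this infrastructure.
    print("insufficient shared capacity, deferring scale-out:", avail)
```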
Just some of the issues we've found in the OpenStack environment itself, from what we're experiencing: things like security. Having an OpenStack environment that sits underneath all these applications does, by its nature, expose a slightly larger attack surface. If you can get into the OpenStack environment and control some of the controller aspects, you have access to all of the VNFs in the environment. So this is an area where we need to harden all aspects of OpenStack authentication and access, and all those services as well. Again, by our nature we're looking to deliver mostly network services from my side of the house, and one of the big gaps we see in OpenStack today is that there isn't a well-defined metrics API to manage the quality of service we're getting across the network and across all the I/O paths. We're looking for a way to get that; there's work in progress in the latest versions that's improving it, but the whole area needs enhancement. And then there's the orchestrator problem I talked about before. I believe that's it — I think we're done. We're happy to take questions, if anybody has any. Thank you all for coming.

I was wondering if you could spend a little more time talking about the business requirements you were trying to address. I assume a big part of that is financial — where does the analysis sit, and when do you expect to see more gains than investment?

Yeah, it certainly is financial. I think it's a fallacy to expect immediate cost savings; there's actually an increase in cost in the near term, because you're deploying a new environment and introducing another set of operational tools and training. One of our bigger gains is the pace of deploying functions and applications. The ability to enable capabilities, plus operational efficiency, will be our long-term gain, along with faster application deployment. Dan, anything else? Yeah, a lot of it is business agility. If you look at the public cloud space and its cost model, a lot of the attraction is business agility. Shadow IT is one of the main reasons I think OpenStack has really taken off: we're trying to get that same level of functionality internally, to deliver that business agility, cut costs, and bring as much of that cost down as possible while delivering the end goal of business results. There's someone down here.

Hey, thank you very much. Could you talk a little about the challenges you faced around security, and how you had to transform or evolve from where you were to close the gaps and meet the challenges of this journey?

Well, I've seen what Dan has done, so — we certainly had challenges in many directions. Because we were deploying dynamic services into various network topologies, our normal path was to file tickets for enabling firewall rules and enabling access to different networks. That's a challenge for us currently: how do you trust that the automation tools will actually perform the correct operations? So firewall rules and access to networks is a big challenge in itself from the outside world.
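One way that ticket-driven firewall process can become programmable — as a minimal, hypothetical sketch, not Verizon's actual security tooling — is to define the rules as code through Neutron security groups, so the same change is reviewable, testable, and repeatable. Names and the CIDR are placeholders.

```python
import openstack

conn = openstack.connect(cloud="internal-cloud")  # hypothetical profile

# The "firewall rule" lives in version control and is applied through the
# API -- an auditable alternative to filing a ticket for a manual change.
sg = conn.network.create_security_group(
    name="vnf-mgmt-access",
    description="Management-plane access for a hypothetical VNF",
)
conn.network.create_security_group_rule(
    security_group_id=sg.id,
    direction="ingress",
    ethertype="IPv4",
    protocol="tcp",
    port_range_min=443,
    port_range_max=443,
    remote_ip_prefix="10.0.0.0/8",  # hypothetical management CIDR
)
```

Trusting the automation then becomes a matter of reviewing and testing this definition in the pipeline, rather than trusting a manual change on a device.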
Inside, some of the challenges we faced involve things like Keystone's authentication and role-based model: in the version we're using, it doesn't really have hierarchical roles. So you end up with a single-administrator perspective, which causes some of our issues around who has access to, and who has control of, the infrastructure. Those were the major security challenges we ran across. Anything else? Yeah — when we talk about the spotlight moving, once you have that automation and somebody can hit a button in self-service, you have to have a way of automating your compliance and your auditing: making sure your access and authorization rules are in place, audit trails exist, isolation exists. A lot of that is a constant work in progress. It's an educational exercise: you start with smaller pockets and you grow. But as Fred said, there are a lot of legacy security requirements that have to fit into this — for example, putting OpenLDAP in front of multiple open source products to get a federated access methodology for products that don't support multiple Active Directory domains, things of that nature. Not everything is going to happen overnight, but everybody understanding where we're trying to get to, and feeding that back into the community, is something that will hopefully grow and mature as we move forward. Thank you.

Yeah, thank you for sharing your insights. I have a question about the problem you mentioned with dynamic scaling and load balancers. Can you explain more about it?

Well, I'll explain my side. The challenge we have comes more from the way the applications were initially designed: they were designed for a specific service level and weren't designed to scale out automatically. I think the tools are there, at least with things like LBaaS 2.0, to do that, but the VNFs we have and the orchestration that was available didn't leverage that capability, so we couldn't use it. Do you have anything? From a dynamic scaling perspective? Yeah — again, the application is key. The application needs to support it, along with the KPIs you're leveraging, where those exist, and how those events get triggered. If you're talking legacy, it's a different game to enable any kind of dynamic scaling; sometimes they want to go up and down by modifying existing VMs. But in the next-gen space, what we want is for asset management tracking, security rules, and the replication of the application to all ride on that CI/CD tool chain. If you don't get all of the components — governance, operational, and the application itself — automated together, then if any one of those doesn't work, the dynamic capability goes away. That's why I mentioned being agentless earlier: the whole concept of having an agent go register and catch up doesn't fit when dynamic scaling can happen in seconds, up and down, if you tune it right. Those are the things to look at: you need end-to-end automation for dynamic scaling to truly work. All right, thanks.
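To make that dynamic-scaling answer concrete, here is a minimal sketch of a KPI-driven scale-out/scale-in loop. The KPI source, thresholds, and instance template are hypothetical; in practice this logic would live in the orchestration and CI/CD tool chain (with the governance and tracking steps automated alongside it), not in a standalone script.

```python
import openstack

conn = openstack.connect(cloud="internal-cloud")  # hypothetical profile
GROUP = "web-cattle"  # hypothetical name prefix for the scaling group

def current_kpi() -> float:
    """Stand-in for a real KPI feed (e.g., utilization from monitoring)."""
    return 0.87  # hypothetical reading

members = [s for s in conn.compute.servers() if s.name.startswith(GROUP)]
kpi = current_kpi()

if kpi > 0.80:
    # Scale out: boot another identical, disposable instance.
    conn.compute.create_server(
        name=f"{GROUP}-{len(members) + 1:02d}",
        image_id=conn.image.find_image("rhel-7-base").id,  # hypothetical
        flavor_id=conn.compute.find_flavor("m1.small").id,
        networks=[{"uuid": conn.network.find_network("app-tier-net").id}],
    )
elif kpi < 0.30 and len(members) > 1:
    # Scale in: cattle, not pets -- delete one; nothing to log into and fix.
    conn.compute.delete_server(members[-1])
```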
Yeah, I guess we have one more over there. Okay, sure.

I had a question about some of the challenges you described around networking, orchestration, and security — you talked sort of around the things that OpenStack needed to resolve. Does your team make any improvements and pass them back to the community? What does that process look like?

Sure, and there are probably two sides to it. In our current path, where we don't do our own development on the network side of the house, we depend on our vendors to provide that. Some of the challenges we found, particularly in OpenStack, were integration issues between our hardware and our SDN controller, mostly from a deployment and upgrade perspective; those were basically not OpenStack issues per se — they were distribution-related. For OpenStack in general, there's high packet-rate processing. I think that has been addressed — things like DPDK-enabled OVS improve it, and that's some of the functionality we're about to deploy. But our process is to work with our vendors on an issue and find out what it is; our vendors then propose a solution, provide us with a short-term fix, and push the solution upstream, and it eventually comes back downstream.

When you mention vendors, are you referring to the company that maintains the distribution? Yes — you may have heard we made a press release: we're using Red Hat for the distribution and Big Switch for the SDN controller. Fantastic, thank you.

Quick question. You talked a little about the financial implications as one of the drivers, and you've focused a lot on automation and scheduling of these systems — on OpenStack and your virtualization environment as well as delivery tools. But do you think you could get a lot of those gains today just by automating your processes, treating virtualization as another tool rather than the end-all, be-all?

So, in the near term, yes — there are significant gains possible just by automating some of our manual processes. It's a little more difficult in the current environment because we tend to buy a vertical silo for a particular application, so we can't share resources between those silos. There may be a number of racks sitting idle for one application that we can't leverage for another, just because they're in silos. But certainly there's opportunity just in automating and orchestrating those environments and leveraging those capabilities, and we are doing that simultaneously. And I agree with you: OpenStack is just a tool, and virtualization in general is just another tool in the toolkit now. Just flip it upside down, right? When we started, virtualization was the fast part; the process was the bottleneck in the virtual space. If you flip it upside down and look at everything outside of virtualization — where we haven't done end-to-end bare metal provisioning, or, as Fred mentioned, some of the siloed components — if those can take advantage of the automated business and financial processes up front, then that becomes the small part and this becomes the larger part. It will probably help drive the whole effort, because this will get the spotlight and people will ask: why can't you be like the virtual side? How can we speed up whatever you're doing? I think the value is going to be tremendous once we crack that and it starts to work for everything.
IT is a service, not just virtualization, correct?

Just one follow-up question on that. We've talked a lot about hypervisors and optimizing hypervisors — DPDK and SR-IOV — but are you looking to use more of a bare metal approach or a containerized approach to get more efficiency out of the systems, versus doing the heavy lifting of integrating through, or making Swiss cheese out of, hypervisors to get performance?

Yes — again, from the network side of the house, we depend on our vendors to actually do that. We are encouraging our vendors toward it, and containers are one solution. I'll say cloudification is probably the more direct path, and however it's actually implemented is up to them. We're asking our vendors to make their applications much more dynamic: separated out into much smaller services that can be individually scaled, individually deployed, and individually upgraded, and operated as classic microservices. From a container perspective, containers are another virtualization method, and in certain circumstances they do remove a level of indirection through another guest OS. So it all depends on what's suitable for the various applications. Yeah, I agree, it's a service-level discussion, right? As the services mature, and the reference architectures to deliver those services mature, those decisions can start to be made. From a container perspective, depending on what the service requirements are — if you've got enough demand for a service to justify dedicating resource pools to deliver it at lower latency, without making Swiss cheese — it all depends on the service and its requirements, but yes. Thanks very much. That's it — again, thank you very much for attending, and we'll talk again. Thank you.