So welcome. You all saw the presentation this morning, I hope. So now you get to see how we actually did it, all right? So here are my co-conspirators. And that would be, come on. Now you're not working. Come on. Let's try it. How come it's not working? This is a cable. OK. Yeah, we'll just do that. OK. Sorry, Verizon Legal makes me do that. So I want to introduce you to Glenn McGowan and Jason Kett. Why don't you say a few words?

So I'm Glenn McGowan. I've worked on a lot of the integration testing, some of the aspects of determining how we want to leverage OpenStack and the type of hardware platform environment, setting up the POCs and doing some of the analysis in terms of how we onboard the white box solution, and some of the other key aspects of deploying OpenStack in a non-data-center environment.

Yeah, and my name's Jason Kett. I'm based out of the UK, just outside of London. I'm part of the product development team for Verizon. So a lot of what I had to do was in developing the product itself, the scale, the architecture, supporting Glenn around the architecture of the box, and the processes built around delivering the service as well. It's all well and good innovating with the device, but the process to deliver it for our customers is just as important.

Great. Glenn's responsible for the logo. Yeah. OK. So we're going to talk a little bit about how we did it. Again, we're going to go into a little bit more about the project goals and objectives and the challenges. This is a really hard project. Distributing OpenStack, or forget OpenStack, just distributing these types of services over a global network is really darn hard. What I like to tell people is: think of it as like AWS, except not just in a data center, but in 1,000 data centers. And they all talk to each other. So that's really, really complex. So that's what we did: massively distributed OpenStack.
And of course, to make it all happen, we had to use centralized management and orchestration. We'll talk a little bit more about that, and then how we put it all together. And then finally, we'd like to talk a little bit about where we're going to go with this. We also believe there are some ways the community can really work with us to take some of the things we're doing and add value in both directions.

So with that, first I'm going to talk a little bit about the vision: network as a service in the real world. As I said, there's infrastructure as a service, applications as a service, now network as a service, and network services as a service specifically. The idea is any network service, anywhere on the globe. Verizon's in over 150 countries, and I think Jason personally probably knows the legal details behind all of that. And our customers have all sorts of environments, big and small. We have customers that are using this very small, maybe 10 deployments, and then customers looking to roll this out in 10,000 locations. So very different types of architectures, and we wanted to make it as flexible as possible to support those different architectures. We're also moving, just as the company and the world are moving, toward a consumption-based use model. The idea is that customers will be able to stand up kiosks for a couple of months during the Christmas season, then take them down, and use these services on that sort of ad hoc basis.

So with that, there are obviously a lot of challenges to building this product, and I want to turn to Glenn to talk a little bit about why OpenStack.

Yeah, so one of the things that we constantly struggle with in any sort of carrier environment is the ubiquitous infiltration of vendors. And I'm trying not to look at the back of the room, because there are a lot in here that are just staring at me. So I'm going to try to stay on point.
But one of the things vendors are good at is selling their ecosystem. They have their own strategies for roping you into their ecosystem. And what OpenStack really provides for us is a pure intent, right? It's only there to fulfill the needs of the community. And so, some of our needs: OpenStack is developed for a data center environment, but there are some opportunities that exist in a data center environment that we can move out of that data center and onto the customer prem. So one of the things that we liked was that with a little bit of innovation, we can take what OpenStack does in sort of a walled garden and export that, in a connected way, out to the customer prem.

What we envision is a single pane, an orchestration plane, where we can run VNFs and be able to literally drag and drop them wherever a customer sees fit. For example, we may be running in a cloud type of scenario where a customer has a router function or some sort of WAN optimization function running in the cloud, and all they have at their customer prem is some basic connectivity equipment providing layer one, layer two connectivity, and we do the processing there. But maybe some of our larger companies have data centers of their own, and maybe it's not practical to run some of these services in a data center environment, so we would want to bring that to the premises. But by doing that, we don't want to create a situation where we create a little island, right? We would expect to be able to extend our cloud to that premises, not break it and create another mini cloud, but just extend our cloud. One of the things that OpenStack provided for us was that capability.
And there are still some challenges there that we'll cover when we talk about the future and what we think we need to address. But for the time being, there was a lot of work done in the OpenStack community that enabled us to realize this capability. And with that, we were able to say, okay, no, Mr. Vendor, or no, XYZ, this is the trajectory we want. And what ended up happening was we saw something that was really incredible. For the first time, we were able to define the framework for where we wanted to go, versus a vendor pushing us in a particular direction. We were able to take OpenStack and say, okay, this is the framework; now what can you do to help us enable that? And so we worked with a number of vendors to onboard VNFs. There were some vendors that really weren't familiar with the whole OpenStack concept, but through some very close relationships, we were able to cultivate that. And as a community, which includes the carrier and its vendors, we were able to grow. And it felt like we were in a little bit more control of that process.

So let's talk about massively distributed OpenStack, which is, again, Glenn. I feel like I'm getting all the work here. Don't worry.

So the product architecture itself follows the OpenStack model, where, as I just talked about, we want a ubiquitous cloud experience within the Verizon footprint. Like I said, we don't want to create little islands. We don't want to sell a product to a customer and say, okay, here's your cloud product, and then when, for some reason, they want to move a call center or a data center to another location, say, okay, we're going to have to disconnect that service in this cloud and spin it up in that cloud. That doesn't make a whole lot of sense to us.
So from a product standpoint, we had to really integrate OpenStack with our real vision for how we wanted to implement OpenStack across the board. In that process, the ecosystem that we established is not based on islands that we've separated out and then call the cloud, right? The cloud, at least the way we would want to define it, is a true cloud. Everything is connected to everything else. It's massively distributed: multiple layers of control, autonomy, the ability to just move functions where we see fit, when we see fit, and how we see fit.

So let's talk about the technical solution. We have all these little components all over the place. We have the orchestrator, which we'll talk a little bit more about. We have the network services where they're needed, as Glenn said. And this is what we started with. Back of the napkin, or actually in this case, that's college ruled. College ruled. We're sophisticated at Verizon. We use notebook paper. Yeah, we use notebook paper. And then we turned that into a system that logically has the hosted services, the edge connector, the edge boxes, all in different combinations. And of course we had to add the classic: the customer's existing conditions. Because this is not a greenfield. These are customers that are already connected. They already have firewall services; they have all those services. So this is a migration and next-generation type model, and we needed to incorporate the ability to do that migration right into the product. Which I know is something OpenStack thought of a little later in the process. So do you want to talk about that, or should I? Yeah, go ahead. Yeah, I'll jump in.

Yeah, so we started off with our VCP, our global hosted network service solution. This is our data center infrastructure, which we've deployed across the globe today. It's been built from the ground up specifically to support network functions and network services.
And we took that concept into the white box functionality. I'd draw your attention to the gray box at the bottom, the x86 hardware components. What we've got in the table here is just one of the series in the family that we've been working with. There's going to be a range of platforms available to our customers, our enterprise customers, because their needs are based around the number of VNFs they want to support on a single device and the size of those VNFs. And those needs and those sizings are also driven by the vendors we work with, too. Our partners and vendors have different sets of requirements for us based on the x86 resources, be it the number of cores each one needs to spin up, the amount of RAM, the amount of disk. So we've got a family of devices which will be coming out as part of the white box architecture.

And one thing I would add there is our vendor relationships are critical to this piece as well. I come from the IT organizations within Verizon, and we are certainly capable of developing our own in-house open source solutions. We even had our own prototypes running. But one thing that you have to think about is version management: who's going to do all the patching, who's going to do all of these things that are required for managing a complete in-house solution. And so through the process of working with our vendors, we came up with the mechanisms that you see here, which we think are best of breed. And what's interesting as well is that we can open the door to as many vendors as we want, and we can even eventually support a bring-your-own model, where customers can consume this service on the devices that they want. So that's why we try to draw this differentiation between the top line.
Okay, so another interesting thing, I guess, Glenn, is that we've managed to get a lot of this resource, the OpenStack elements, the fast packet processing, the Linux base, the hypervisor base, all on this small unit here; we've managed to get all of that onto a single core with one of the partners we're working with. So that's an interesting point to raise as well.

Yeah, I want to talk about that for a little bit. I did mention it in the keynote. This box goes out to a customer site, and the value to the customer is not particularly the infrastructure, the OpenStack or the hypervisor or any of that stuff. The value to the customer is the network functions, like the firewalls and the WAN optimizers and other functions sitting on this box. So we want to optimize the amount of space, the resources on this box, devoted to services that our customers want. And we really wanted to make sure, not that we were stinting OpenStack, it obviously needs to work, but we wanted it to be as optimal as possible.

And I guess another key point as well: this is a $500, $600 box. So what we're doing is enabling our customers to get involved in this innovation and this new technology. If you think about the old model, they'd have to spend $2,000, $3,000, $4,000 on a piece of CPE, which they'd hang on to for a long period of time. This is a device that gets them entry to this type of technology immediately, at low cost as well. That's right.

So something else is that it gives not just the customer entry, but it gives us entry as well. As VNF and even SDN start to mature, there may be use cases that we have not thought up yet. So we have the capability sitting out there on an x86 platform, a white box. This one just happens to be literally white. I think, Jason, you wanted to make it white, right? We made it white on purpose. That's that.
Because most of our boxes are black, and they're like, where's the white box? I'm like, that's it. No, it's black. So here's our white box, and it's black. Yeah, yeah, so the whole, it's like, never mind. Marketing, think marketing. Yeah, they don't know anything.

So we really think that with this sitting out there, we have the ability to orchestrate, on the fly, any type of solution going forward. It's just getting that adoption pushed all the way out to the edge and building out the infrastructure. To me, maybe not from a product standpoint, but at least from a systems perspective or an engineering perspective, this represents plumbing. We're able to plumb out our different customer locations. And I think with this device here, we're starting with that. We're able to plumb them out. Yeah.

Just a little bit about the data center design itself. I'm not going to spend a huge amount of time, other than we did some things unique related to the fact that it was a network. It's designed for high-performance networking, but the reality is, inside we did something fairly standard, a normal leaf-and-spine type environment. Again, it was really focused more outward, looking at the WAN environment, which is what really counted for us. We wanted that low-latency and high-performance networking experience that our customers want from us. And remember, we're using this for our internal applications as well, so it's not just for customer-facing applications.

So you want to talk about the WAN topologies? Yeah. So Verizon has many different networks. We certainly have one network, but when you start looking at the different functions associated with each network, you really come up with these three different types of environments. We have a public internet network. Then we have various private networks. And then, of course, we have to manage all of the devices that we have in a separate management network. Yeah.
And these are millions of devices. That's right. Yeah. So we've got an entire army of devices out there, and they're all serving different functions. But with what we're able to do with OpenStack and the network and infrastructure that we've built, we can basically create an overlay on top of those networks, which are, you know, just a means to an end. They're the plumbing. But once we're able to overlay those three networks, we really get to a true cloud type of environment.

So I think what we're saying is, once you've got your cloud layer, you make that a ubiquitous layer, you stop doing these little island-type configurations. And that cloud layer, if you look at it, is superimposed on top of the different networks that you have. What you end up with, at the end of the day, is a very vanilla cloud environment that can pretty much accept any type of VNF, or we would want it to accept any type of VNF, and any sort of network configuration. Yeah. At any location. At any location globally. Yeah. That's the real key. I mean, I think most cloud vendors traditionally, it's very data-center-centric, and you don't necessarily hook the data centers together. They're sort of islands unto themselves. But we haven't done that. We've really extended out and have a single cloud. Yeah.

We're very big on SD-WAN technology, and you can kind of see where we're bringing that SD-WAN type of environment into our internal network, where we can build a common network cloud as opposed to just a compute cloud. So that's the duality there that we're looking to accomplish. Yep.

And then, of course, we can talk a little bit more about the cloud management. That's really the key to it. Because any kind of telecom is going to be very, very concerned with efficient delivery of services, right? So we've automated the hell out of things. I mean, this is no secret, because that's the way you have to do it.
And so this cloud management network that we've created is consolidated, but it's all running through our ticketing systems, all our back-end operations systems, which are all highly automated. So, do you want to talk a little bit about this? This is just a little bit about some of the back-end systems we did for the unified fabric. And then we have the same with the storage. I don't want to spend a huge amount of time on it, because I don't personally think it's all that interesting.

But what you get down to is the orchestration for the virtual network services product, which is, of course, just part of the Verizon strategy. The orchestration is really the key to making it work. And so, for orchestration, what we did is we really forced the vendors. We literally got the vendors to talk, and some of them are vendors that aren't necessarily, I mean, some of them are competitors. But we had to have them talk to each other, because they had to talk to the orchestrator and they had to talk to each other just for the integration. And that's something that is sorely needed, and it's the only way it can work in the telecom world. So, do you want to talk a little bit more about this?

Yeah, so orchestration is really the challenge. You have a cloud, and you have the underlying, I guess, underlay network that supports that cloud. How do you stitch all of that together? How do you link that together in a way that's cohesive and efficient? That's where orchestration comes in: we have the capabilities to realize and identify what network configuration points we have, what to do with them, how to configure them, but also how do we hook that up into the virtual space within the cloud compute areas. And so the orchestration environments are very critical to that, because they provide that stitching across those different domains, whether it be a network domain or a compute domain.
There are also the closed-loop assurance aspects, where we can monitor the health of the VNFs that are running in the compute space, recover them, do service assurance things like reporting failure conditions and that sort of stuff, and we're able to stitch those together and do an end-to-end sort of correlation, not only within the compute stack, but also within the networking stack, so that we have a full end-to-end view. So for us, it's not just compute for OpenStack; we also have that underlying underlay network that we have to bring up into focus and manage it all as a unit, and that's what the purpose of orchestration is for us. Yeah, and we have our orchestration expert in the front row here if you want to ask him some questions. Yeah, drag him up here. Great, just to put you on the spot, Russ.

So the other thing, of course, is that orchestration really drives another aspect of it: a lot of our customers want high availability. They're used to having networks with, I think officially our SLA is five nines, but in reality they expect six or eight or nine, or basically a hundred percent. And the only way to do it is using the orchestration platform and the analytics to detect a fault and then bring up the new service, whether it be out at the edge or within the cloud environment. The analytics are very key, because they give us insight into how the customer is using the service, how the network itself is performing, and how we are utilizing our network in terms of capacity planning. And that ties into what we're trying to achieve with unifying the network layer and the compute layer: how is that compute using the network, how is the network using the compute, and establishing those relationships through those analytics, putting that data together, doing some pretty deep analysis, and coming up with the correlations that we need to make the right decisions for the path forward.
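The closed-loop cycle described here — monitor VNF health, detect a fault, recover the service — can be sketched in a few lines. This is a minimal illustration with invented names (`Vnf`, `check_health`, `recover`), not Verizon's actual assurance platform:

```python
# Toy closed-loop assurance pass: detect unhealthy VNFs, recover them.
# All names here are hypothetical, for illustration only.

class Vnf:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy
        self.restarts = 0

def check_health(vnf):
    # A real system would correlate telemetry from both the compute
    # stack and the underlay network before declaring a fault.
    return vnf.healthy

def recover(vnf):
    # Recovery could mean respawning the VNF at the edge or
    # relocating it into the hosted cloud environment.
    vnf.restarts += 1
    vnf.healthy = True

def assurance_pass(vnfs):
    """One correlation pass: recover failed VNFs, report what was fixed."""
    recovered = []
    for vnf in vnfs:
        if not check_health(vnf):
            recover(vnf)
            recovered.append(vnf.name)
    return recovered

fleet = [Vnf("firewall"), Vnf("wan-opt", healthy=False)]
print(assurance_pass(fleet))  # -> ['wan-opt']
```

In a production loop this pass would run continuously against the analytics feed rather than once over an in-memory list.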
And I think what we're trying to show here is how important OpenStack is in our overall architecture. You can see three layers in this diagram: we've got orchestration at the top, which Glenn's just been talking about, we've got the cloud layer in the middle, and then we've got the white box, the universal CPEs, at the edge. In each of these areas, orchestration is a key component. It's what stitches the entire network and the entire ecosystem together, so it's an important part of the fabric. And we're using the OpenStack APIs to do it. So OpenStack is key to making this all work. Anything else on this?

No, I think one thing that I would like to point out is that you see OpenStack control running there. One of the challenges that we ran into with OpenStack, which is not a problem in the data center environment, is that in a physical data center you've got control and compute all sitting in the same relative area, so you don't have to worry about separation issues between your compute and your control. So one of the things that we looked at initially was, okay, can we put control inside the device itself, so we have control and compute in the same box? And that was one of the harder things that we had to do, and some of the harder decisions that we had to make, because when you put control in the box on a little machine like this, resources become an issue. Now you have to factor: okay, this is a four-core machine. How many cores am I using for the data plane? How many cores am I using for management? My controller is going to take up some of that for the things it does for the control stuff. Compute's going to take up some of that. The data plane's going to take up a little bit more of that.
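The core-budgeting question raised here is simple arithmetic, and a back-of-the-envelope sketch makes the tension concrete. The specific allocations below are made up for illustration; the talk only says it's a four-core machine:

```python
# Back-of-the-envelope core budget for running the OpenStack control
# plane on the uCPE itself. The split is illustrative, not Verizon's
# actual allocation.

TOTAL_CORES = 4

budget = {
    "control": 1,       # local OpenStack controller
    "data_plane": 1,    # fast packet processing / vSwitch
    "management": 0.5,  # host OS, monitoring agents
}

def cores_left_for_vnfs(total, reserved):
    used = sum(reserved.values())
    if used >= total:
        raise ValueError("nothing left for customer VNFs")
    return total - used

print(cores_left_for_vnfs(TOTAL_CORES, budget))  # -> 1.5
```

Which is exactly the trade-off described: every core the platform keeps for itself is a core that cannot host the firewalls and optimizers the customer is actually paying for.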
So the model that we're looking at right now is to keep control in the device, and what that does is prevent the issues that you see other carriers describe, the challenges that they've identified in terms of startup storming and those sorts of things. So by putting control there, what we plan to do is a sort of multi-layered controller approach, where we've got Tricircle running at ever-increasing levels, to where we can efficiently link our OpenStack environments into a single cloud. At the highest layer you've got control, a single controller, we'll call it an uber controller or whatever you want to call it, but it's a single controller that has underneath it all of these different sub-layered federated controllers. We've looked at technologies like Tricircle; there are a couple of other projects doing that, but the most promising one that we've seen so far is Tricircle, and we'd like to try to figure out ways of making that happen. In reality, that's the direction that we're going today.

So I want to talk a little bit, do you want to talk about this or should I? No, you can talk about this one. This is the diagram I drew. So I want to talk a little bit about the automation workflow. This is actually the workflow from the customer perspective. The customer puts in an order, or the customer puts in an order through the account manager, either one, and then it goes into the portal. Then, in the case of a universal CPE, we ship it out, it powers up, and it reports its active status back. And then, in the middle, is the zero-touch provisioning, which pulls down whatever the customer ordered and sets it up. Then there's some, again, automated testing, and then the service is turned up.
And that's what happens with the universal CPE on the edge. Of course, it's much simpler if it's within our hosted environment, because you don't have to ship anything to anybody. And then the bottom process is what happens if the customer already has the universal CPE and they want to make some changes, add a VM or take one away or whatever; that bottom process is what happens when they want to do that as well.

But I guess the key thing here is that part of it is innovating the process, and there are two numbers I can throw at you. Three days: that's the target we're imposing, from when the order comes in to our white box manufacturer to getting it to the customer's site. Think about the old-world model: that was a week, that was a month-long process. Three days, that's a challenge, that's a target. And then the ZTP: we want these things spun up in literally 15 to 20 minutes, no more than that. So that's another key target where we're trying to innovate with the process as well as just the technology, so they're key things to be aware of.

And another thing is that we don't want to have the images preloaded. You could preload the images on the box, but you don't want to do that, and there are a number of reasons for that. One, the customers aren't necessarily going to buy all the images, and two, there are some legal and regulatory things with boundaries, cross-country boundaries. Export compliance. Yeah, export compliance. And again, Verizon has to take into consideration all sorts of legal and regulatory things that a lot of people don't have to think about, and poor Jason has spent the last six months working on all those export compliance things. Not putting the image on here and just downloading it gets around a whole lot of those issues. So I just want to put it all together, and then we're going to allow some time for some questions as well.
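The zero-touch provisioning flow just described — box reports active, pulls down only the images the customer ordered (nothing preloaded, which sidesteps the export-compliance problem), runs automated tests, turns the service up — can be sketched as a sequence. Everything below (`ztp`, the device ID, the image names) is hypothetical:

```python
# Hedged sketch of the ZTP flow described above, not Verizon's
# actual provisioning code.

def ztp(device_id, order, image_repo):
    """Run the provisioning steps for one device and return a log."""
    steps = [f"{device_id}: reported active"]
    for service in order:
        # Images are downloaded on demand, never preloaded on the box.
        if service not in image_repo:
            raise KeyError(f"no image published for {service}")
        steps.append(f"{device_id}: downloaded {image_repo[service]}")
    steps.append(f"{device_id}: automated tests passed")
    steps.append(f"{device_id}: service turned up")
    return steps

repo = {"firewall": "fw-image-v2", "sdwan": "sdwan-image-v5"}
for line in ztp("ucpe-001", ["firewall"], repo):
    print(line)
```

The 15-to-20-minute target quoted in the talk would bound the wall-clock time of the download-test-turn-up portion of this sequence.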
So this is the slide that we had this morning, but again, we put all these services together and really built an architecture that allows customers to put together and consume what they want. And we have built some automated tools to help them architect appropriate services. I know we have a little calculator that says what fits on what box. We have some tools that we've built into our sales system; I think you saw a little bit of that this morning, that gives optimum suggested and recommended solutions. That was for the specific box, but we also have some suggested architectures based on what customers want. And this is just the overall framework. What you can see is, right now we have four services, but the framework allows us to add additional services as the needs arise and as customers ask for them. We implemented the ones that customers ask for the most, of which security is probably number one by a factor of 10, but we have others, and customers are constantly asking, well, can you support this, can you support that? So we have the framework to really add that, and it's really just a matter of going to the vendor and saying, well, if you have a virtual image and you can support OpenStack and you can tie into these APIs, let's have a conversation.

And here are just some slides from the customer side. We've created this portal, which you saw this morning; these are just some screenshots from it. One of the things that we really want to do is bring the customer in to understanding how they utilize their service and what control they're going to have over it. So we're putting a heavy emphasis on our portal capabilities, working with our customers to educate them: look, you have control over your network, you have control over your compute. And as you saw in the keynote this morning, Beth went through a demo of the portal that we're working on now.
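The "calculator that says what fits on what box" mentioned above is, at its heart, a resource-fitting check. Here is a toy version; the VNF profiles and box specs are invented for illustration and do not reflect Verizon's actual sizing data:

```python
# Toy "what fits on which box" calculator. Profiles and box specs
# are made up; a real tool would use vendor-supplied VNF requirements.

VNF_PROFILES = {
    "firewall": {"cores": 1, "ram_gb": 2},
    "wan_opt":  {"cores": 2, "ram_gb": 4},
    "router":   {"cores": 1, "ram_gb": 1},
}

BOXES = [  # ordered smallest to largest
    {"name": "small",  "cores": 2, "ram_gb": 4},
    {"name": "medium", "cores": 4, "ram_gb": 8},
    {"name": "large",  "cores": 8, "ram_gb": 16},
]

def smallest_box(vnfs):
    """Return the cheapest box that can host the requested VNF mix."""
    need_cores = sum(VNF_PROFILES[v]["cores"] for v in vnfs)
    need_ram = sum(VNF_PROFILES[v]["ram_gb"] for v in vnfs)
    for box in BOXES:
        if box["cores"] >= need_cores and box["ram_gb"] >= need_ram:
            return box["name"]
    return None  # mix doesn't fit on any platform in the family

print(smallest_box(["firewall", "router"]))   # -> small
print(smallest_box(["firewall", "wan_opt"]))  # -> medium
```

This is also why a family of white boxes matters: the recommended platform falls directly out of the customer's chosen service mix.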
And as you saw, the customer has complete control. They can go into their network, they can increase their throughputs, they can decrease their throughputs, they can add or remove service-level features; they can do anything they want, or, we would hope, our goal is to allow them to do anything they want. Anything that they can. Yeah, that's true. So the direction that we're going is enabling and empowering the customer to have that control. Me being a technical guy, there have got to be customers sitting out there that are technical as well, that are so frustrated by the fact that they've got to work through various levels of sales and product support teams and things like that, when I just want to increase speed, or I just want to add this particular feature. Why should that take me two or three days? Why do I have to go through a quote process and things like that? I just navigate to the site, I click a button, and within a couple of seconds I've got what I needed. So we think that's very powerful.

And if you link all of that together with the framework that we've created with OpenStack, the stitching we're doing with our network, and this capability here, we're in control. We're able to allow our customers to take that control whenever they need to take it. In terms of vendors, because of the framework that we've built, we can snap vendors in and out of our framework in any way that we want. If there's a particular technology that we don't like within the framework that we've built, we can remove it, or we can go with a different technology. We're very flexible. And I'd like to talk a little bit about the future and also open it up to questions. You actually mentioned some of this stuff already.
Yeah, so the current challenge that at least I see with OpenStack in general today is that out in the customer space we're dealing with Metro Ethernet Forum types of environments, where we're doing VLAN switching, p-bit priority, all of those things that are associated with customer OAM functions and that sort of detail. Where OpenStack sort of lacks in those environments is that capability. We've seen the ML2 extensions utilized, and there have been some efforts to create plugins that can augment OpenStack, but in reality, OpenStack at the end of the day is still geared more towards the data center environment. And as you heard in the keynotes this morning, there's a large amount of complexity involved with OpenStack, and we don't really want to add to that complexity. But there should be instances where we can have our cake and eat it too, where we can enhance OpenStack in a way that allows us to seamlessly deploy it in the network or in a customer-premises-based environment, without requiring us to go in and build plugins or these complicated ML2 extensions that are just sort of abstracted or hollow capabilities to support what we actually need supported. Yeah. Do you want to add anything? Should we open to questions? Let's open to questions. Okay, I'd like to open it up to questions. Check.

So one of the benefits we all enjoy about OpenStack is that normalization of API, right? You get normalization for Neutron, for Nova, for Heat templates, your VNF lifecycle management, et cetera. And down at a lot of the lower-level stuff, you can find YANG models for just about anything, right? NETCONF-managed, and some of the vendors in that space provide that too. Not as many as you'd think, as we are finding. Agreed. So going up a level in the stack, I'm sure you're contending with this already: there's no standardization, little even normalization, like NETCONF/YANG or anything like that. How are you tackling the VNF vendors themselves?
Yeah. Because again, if you want that build-your-own, bring-your-own model, you would eventually want to evolve to something like OpenDaylight that could solve that. Yeah, so that's a good question. And you're right. I mean, that is the top of our stack as well. And you mentioned NETCONF/YANG. We're looking at TOSCA models as well. And one of the things that's critical to the evolution this year is pushing our vendors in that direction. So we're coming up with a portal to do just that, where we would say, OK, Mr. Vendor, if you want to sell or bring your technology into our network, you've got to go through this process. They would be able to upload their images into our portal. They would have to provide some sort of standard template, in terms of a TOSCA model or a YANG model, so that we can understand all the different elements and details that are associated with it. And then the ideal situation, and this is what we're working on internally, is the ability for our automation and orchestration platform to evaluate all of those different inputs and spit out an initial decision on whether this vendor needs more work, or whether, hey, they're ready to go, let's move them automatically to the next cycle and start that onboarding process. That's why I mentioned NETCONF and YANG. That's also the answer, by the way, to the OpenDaylight stuff; NETCONF and YANG is the answer to that. Well, on NETCONF I would disagree, in that we would stick more with the YANG side and maybe the TOSCA model side. NETCONF/YANG is something that's being pushed on maybe some of the other equipment vendor sides, where you've got them trying to standardize on NETCONF. But it's not happening, right? We still have some vendors that are standardizing on NETCONF. Others are not. Some of them are just saying, no, I've got my own REST API and I'm going to stick with that.
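The automated onboarding evaluation described above might be sketched roughly as follows. This is a hypothetical illustration only; the field names, accepted models, and decision rule are invented, not Verizon's actual portal logic:

```python
# Hypothetical sketch of an automated vendor-onboarding check: the portal
# collects a VNF descriptor (e.g. derived from a TOSCA or YANG template)
# and decides whether the vendor is ready for the next onboarding cycle.
# All field names here are illustrative, not an actual carrier schema.

REQUIRED_FIELDS = {"image_ref", "descriptor_model", "vcpu", "memory_mb", "mgmt_interface"}
ACCEPTED_MODELS = {"tosca", "yang"}

def evaluate_submission(descriptor: dict) -> tuple:
    """Return ('ready' | 'needs_work', list of problems found)."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - descriptor.keys())]
    model = descriptor.get("descriptor_model")
    if model is not None and model not in ACCEPTED_MODELS:
        problems.append(f"unsupported descriptor model: {model}")
    return ("ready" if not problems else "needs_work"), problems

submission = {
    "image_ref": "vendor-fw-1.2.qcow2",
    "descriptor_model": "tosca",
    "vcpu": 4,
    "memory_mb": 8192,
    "mgmt_interface": "eth0",
}
status, issues = evaluate_submission(submission)
print(status, issues)  # ready []
```

A real pipeline would of course also validate the uploaded image and parse the TOSCA or YANG template itself; this sketch only shows the "ready versus needs more work" gating step described in the answer.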
Or, I've got a controller that has a REST API, you guys don't worry about that. So we're still contending with that. Where I think, as I understood your question, the issue is: how do we create a normalized layer of VNFs that we're going to spin up in a compute environment? Normalized APIs for actually pushing payload configs into firewalls and things like that. And that's one of the reasons we've got the capabilities that customers are interested in. Right. There's not going to be one ring to rule all the vendors. Every vendor is going to think that they have their solution. And so what we have internally to combat that is this: we've taken the approach of normalizing everything north of our orchestration platform, into our IT and OSS/BSS systems. Is there a single standard API there? And then we have a translation layer that plugs in all the different vendor differences. So we try to push that further down the stack as much as possible. That's what our current objectives are. Don't forget, you have the customers and the money. Yes, that's true. That helps. So, next question. Check. Yeah, so there's been a lot of vendor interoperability discussion from the Verizon back-end perspective. What is being done, or how does this open up vendor interoperability at the front end? That is, at the final customer's level, like mine. Whether it's a virtual operator, or Comcast, or competition like Sprint, how do they become interoperable? So we have to be careful when you say that, right? Because when we develop an API, we wouldn't necessarily develop an API that would allow them to interact with the equipment directly, right? Because that could create situations where we run into out-of-sync conditions, where there may be some policy management and security things that we're not keeping track of.
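The normalization approach described here, a single API north of the orchestration platform with a translation layer plugging in vendor differences underneath, might look roughly like this. The driver classes and command formats below are invented for illustration and do not represent any real vendor's interface:

```python
# Illustrative sketch of a translation layer: one normalized northbound call
# ("set_bandwidth") is translated into per-vendor device commands below it.
# The vendor drivers and command syntax are invented for illustration.

from abc import ABC, abstractmethod

class VendorDriver(ABC):
    @abstractmethod
    def set_bandwidth(self, port: str, mbps: int) -> str: ...

class VendorADriver(VendorDriver):
    def set_bandwidth(self, port, mbps):
        # A CLI-style device expecting kbps
        return f"conf t; interface {port}; bandwidth {mbps * 1000}"

class VendorBDriver(VendorDriver):
    def set_bandwidth(self, port, mbps):
        # A REST-style device expecting Mbps
        return f"PUT /api/ports/{port} {{'rate_mbps': {mbps}}}"

class Orchestrator:
    """Single normalized API; vendor differences live only in the drivers."""
    def __init__(self):
        self.drivers = {"vendor_a": VendorADriver(), "vendor_b": VendorBDriver()}

    def set_bandwidth(self, vendor: str, port: str, mbps: int) -> str:
        return self.drivers[vendor].set_bandwidth(port, mbps)

orch = Orchestrator()
print(orch.set_bandwidth("vendor_a", "ge-0/0/1", 100))
print(orch.set_bandwidth("vendor_b", "ge-0/0/1", 100))
```

The design point is the one made in the answer: everything north of the orchestrator sees one call signature, and swapping a vendor means swapping a driver, not changing the northbound systems.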
So the best way to do that, at least in our opinion, is to set up a common API just for the purposes of customers, where they can come in and influence those environments through our orchestration stack. That way we maintain that level of synchronization, and we can control the customer experience. Because to your point, every vendor is potentially going to have a different API. And what we would want is not to say, OK, customer, you ordered a Cisco solution, or you ordered a Juniper solution, or a Fujitsu solution, so here are the three or four different APIs for each one of those solutions. As a customer, you'd say, I don't want this. I want one API that controls my three or four different vendors. And the only way for us to really offer that is to bring it through a common API that's filtered through our orchestration platforms. Good. Next question, over here. Hindsight being 20/20, with respect to OpenStack, if you did this process all over again, what is something you would do differently, and why? That's a good question. I think what we would probably want to do differently is... yeah, that's a tough question. Well, I can tell you. Is there something that would make the process more efficient? I'll tell you. So the one thing that I don't like about OpenStack is its complexity. And the one thing that brought us through was Heat. Heat was kind of our saving grace. So I guess one thing that I could say is that we were late in the adoption phase for Heat. And once we were able to bring in Heat, we started to realize that there are ways to rule all of those different OpenStack services with one Heat template. And that goes back to the gentleman's point, and the other gentleman's point, about having a single API. Heat represents that single API, at least from an OpenStack perspective, within the Verizon space. Yeah, and I'll add, we were a little late to the game.
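The out-of-sync concern raised above is the reason customer changes are funneled through the orchestration stack rather than hitting devices directly. A minimal sketch of that pattern, with hypothetical class and field names, might be:

```python
# Minimal sketch of a customer-facing API funneled through an orchestration
# layer: the customer never touches the device directly, so the recorded
# policy state and the device state cannot drift apart. All names here are
# hypothetical, chosen only to illustrate the pattern.

class Device:
    def __init__(self):
        self.config = {}

    def apply(self, key, value):
        self.config[key] = value

class CustomerGateway:
    """Single point of change: updates the policy record and device together."""
    def __init__(self, device):
        self.device = device
        self.policy = {}  # the "source of truth" the carrier keeps in sync

    def request_change(self, customer, key, value):
        # Policy and security checks would run here, before anything
        # reaches the device.
        self.policy[(customer, key)] = value
        self.device.apply(key, value)

dev = Device()
gw = CustomerGateway(dev)
gw.request_change("acme", "throughput_mbps", 500)

# Device and policy record agree, because both were set in one place.
assert dev.config["throughput_mbps"] == gw.policy[("acme", "throughput_mbps")] == 500
```

If the customer could call the device's own API directly, the gateway's policy record would silently go stale, which is exactly the out-of-sync condition the answer warns about.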
So Verizon. I've been involved in OpenStack for about six years, but Verizon as a company played around for a couple of years before it really took it to heart. And I think we should have built up the expertise sooner than we did. I'd say, if I had to do it all over again, definitely. I think she took a shot at me. I'm not sure what that means. It wasn't him. It wasn't you. Next question, over here. You mentioned your orchestration platform and the complexities of orchestrating at the edge. You also mentioned TOSCA compliance a couple of times. But what actual orchestrator are you guys using? So what we do is we break it up into service orchestration and resource orchestration. At the service layer, it's all homegrown. We understand our product. We understand how to create that service chain and how to stitch it together within what we call the service orchestrator. And then we have a resource orchestration layer. That resource orchestration layer is what I was talking about a few moments ago, where we plug in the different vendor differences. And that vendor today is Ericsson. So we bought an Ericsson solution. They're providing that layer of abstraction for us, and they offer us an API up to our northbound-facing service orchestrator, so that as technology changes, we're not locked in. And, knock on wood, Jason and I were just talking about this last night; I'm kind of a pessimist. I don't want to put all of my eggs in the OpenStack basket. I don't want to put all of my eggs in a particular vendor basket. What I want, at least from a Verizon standpoint, for future scalability, is for us to maintain some sort of generic component as much as possible. And going with the Ericsson orchestrator from a resource orchestration standpoint has helped us achieve that goal in a very short amount of time. Unfortunately, I think our time is over. Is that correct? Well, we can take it. We've got time for one more.
All right, one more question. Hi. So you have all these OpenStack instances that are distributed, and then you construct a global view. My question is, how do you construct a good, exact global view? Are you using something that's decentralized, such as Tricircle? Yeah, Tricircle is the one that I'm most interested in at this point. I think there are a couple of other projects out there as well that would do that, but yes, Tricircle is kind of the top of the pile right now. So I guess the next step is to decentralize Tricircle? That's right. Well, OK, so that was a trick question. Make sure it works. So at least the way I would want it to happen is that it's not just controller to controller. It would be controller to controller to controller. So maybe quad-circle or something. It's multiple layers of a Tricircle type of control environment. OK, thank you very much, everybody. Thank you.
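The "controller to controller to controller" idea in that last answer can be pictured as a small tree of control layers. This toy sketch, which is not Tricircle's actual API, just fans a change out from a global controller through regional controllers down to site controllers:

```python
# Toy sketch of multi-layer control, in the spirit of the "controller to
# controller to controller" idea: a change made at the top propagates
# through regional controllers down to the sites. This is an illustration
# of the layering concept, not Tricircle's actual interface.

class Controller:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)
        self.state = {}

    def apply(self, key, value):
        """Apply a change locally, then propagate it to every child layer."""
        self.state[key] = value
        for child in self.children:
            child.apply(key, value)

sites = [Controller(f"site-{i}") for i in range(4)]
regions = [Controller("region-east", sites[:2]), Controller("region-west", sites[2:])]
top = Controller("global", regions)

top.apply("firmware", "v2")
print(all(s.state["firmware"] == "v2" for s in sites))  # True
```

The same shape also reads upward: each layer can aggregate state from its children, which is one way to build the "global view" the questioner asked about.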