My name is Billy Felton. I'm a Director of Technology at Verizon Wireless, and I think the last time we gave this presentation was OpenStack East; this is really kind of a continuation of that, a little bit of our journey. We're really talking about this illusion of infinite capacity, along with the other illusions that a lot of the ETSI standards seem to be focusing on: infinite capacity, latency is no longer an issue, the speed of light goes away in the cloud, I guess. A few other areas that the industry really hasn't addressed from the standards perspective yet. So what we're going to cover: we're going to define what our VCP product is. We're going to go over a little demo that Andy's going to give. We're going to talk about some of the methodology that we use to deploy, and about how we're doing now from the perspective of getting things deployed, getting the technology and automation written to deploy at the scale that we've deployed at, and facilitating self-service, which is kind of a contradiction to the ETSI standards but is clearly a necessity in how we're building the cloud and how we're going to use the cloud. And then we're going to cover some of what we're working on in the near future, which is probably some of the cooler stuff that we're going to talk about. So with that, I'm going to go ahead and hand it off to Andy, who's going to cover basically what VCP is. Thanks, Billy. No problem, Andy. I thought the first thing that I would do, though, is clarify something. There's been some confusion based on the OpenStack application and the presenters here today. It does appear that we have the CTO of Verizon here, but he's actually not the CTO of Verizon. I think he feels kind of proud that maybe some people think that way. We also think that maybe there are some people in the audience that thought that that was our new CTO, and it's really not. So if you're here for that reason, you can just sneak out. It's all good. Thank you. It all happened so fast. So with that said, as Billy said, we've definitely been on a bit of a journey, our OpenStack journey. This is the third time we're doing this talk, so we thought we would update it and talk about some of the things that we've been able to do and some of the successes we've had using OpenStack as a core part of our implementation of VCP, the Verizon Cloud Platform. So when we talk about VCP, we like to define it as a large-scale cloud. It's built with OpenStack. There are some guiding principles that we go by. It's defined by the services, not the tenants. This is kind of a mind shift from some of the different types of NFV-style clouds we're seeing built. We have to serve multiple customers with this cloud, so it's defined, in fact, by the services. The services are the products that we have. We need to give our users the ability to quickly deploy and support their applications. Architecture: it's a common interface for infrastructure across our entire organization. I should say organizations; we're supporting multiple lines of business. On-demand and self-service: we want to be able to support the ability to quickly come and do things in an attended and unattended manner. So that's where the self-service virtue comes in as well. When we give direction to the team, when we're talking about how to do this and we're designing how we do our deployments, we have some very, very important guiding principles that we stick to.
So when we're deploying at the scale we're deploying at, we must version-control everything. Here I'm diving down a little bit, getting into the details of how we're actually doing our deployments, and I'll show you how many we're doing. Everything must be machine-readable. Our documentation, our deployment guides, these things are actually machine-readable; we don't do things with Word documents and MOPs. No physical configuration should be occurring; configuration should all be done unattended. Serialization in delivery as much as we possibly can. Declarative: this has been a mind shift for us as a deployment organization, declaring everything in advance. Building out our architectures, declaring those in advance in a place where we can go grab them and actually build them (there's a sketch of that idea below). Our sixth guiding principle is use bots wherever we can. So not only are we doing automation, we're leveraging the concept of bots to go out there and do things and answer questions for us. This is a big part of the way that we're supporting our platform as well: ChatOps. You can't really read this, I'm sorry. But instead of going through a bunch of slides about how we're doing those declarations, I thought I would share with you some pretty interesting stats. What this is showing is a report: we're capturing all of the different work that we're doing during a build. We call them automation steps. What I've got here shows 13 people who, since January 1st, have actually done 35,000 tasks, spanning 159 different work operations, across just over 2,000 things. When I say things, these are basically things that have IP addresses. So for us, that's significant. That's a significant amount of work occurring, by a fairly small group of people. Now, there is a larger group of people supporting the infrastructure to get this stuff stood up and built, the network done, the cabling done, to allow us to do this type of work. It's also allowing us to have documentation that's a little different than we've had in the past. We use a CMDB. Our as-builts are actually live; our as-built diagrams are drawn from our declarations. So we can actually look at our servers, our racks, in real time, and we can tie operational tasks to them, so that when something happens on a machine, it tells us. Let's dig in a little more. Do the scary part. So this here is a website. What I wanted to do is draw upon our status page and share with the room how we look, architecturally, at what we've built, and how we look at the status of our regions. We've adopted a region-based architecture. The region is basically our minimum mapping unit. Everything's built from a singular region, a.k.a. a VIM. We haven't adopted the region-zero concept with OpenStack, but we have adopted regions as our minimum mapping unit. Each box that you're looking at here is actually a region. This current view is just showing the live date, when things came online, and what provider networks we have as part of those. But I can quickly switch the persona and look at something like capacity within the same grid.
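To make the declarative principle above concrete, here is a minimal sketch of the idea: a region is described up front in a machine-readable declaration, and the build automation derives its work (and the live as-built view) from that declaration. The schema and field names here are assumptions for illustration, not VCP's actual format.

```python
# Hypothetical region declaration: machine-readable, version-controlled,
# declared in advance of the build. Not VCP's actual schema.
REGION = {
    "name": "bos-1",
    "provider_networks": ["prod", "oam"],
    "racks": [
        {"id": "r01", "role": "compute", "servers": 16},
        {"id": "r02", "role": "storage", "servers": 4},
    ],
}

def derive_automation_steps(region):
    """Derive the ordered build tasks from the declaration itself,
    so the 'deployment guide' is the data, not a Word document."""
    steps = []
    for rack in region["racks"]:
        for n in range(1, rack["servers"] + 1):
            steps.append(
                f"provision {region['name']}/{rack['id']}/node{n:02d} "
                f"as {rack['role']}"
            )
    for net in region["provider_networks"]:
        steps.append(f"attach provider network '{net}' to {region['name']}")
    return steps

for step in derive_automation_steps(REGION):
    print(step)
```

The same declaration can feed a bot or a CMDB, which is what makes the as-built diagrams "live": redraw them from the declaration and the documentation is always current.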
I'm going to mouse over, and what we're looking at is information being drawn from live systems, showing the consumption of our virtual CPUs, our memory, and available bandwidth. Since we're in Boston, let's look at Boston really quick. This information is coming from multiple places and being drawn into one place. Regions have become a big part of how we deal with things; we like to refer to it as regional diversity. Let's go back to our core locations. We also have multiple versions of OpenStack running at the same time. This is how we deal with our upgrade and patching strategy, and a lot of responsibility falls upon our tenants to be able to move workloads between the different versions of OpenStack that are running. So we currently have a sort of destructive mode for doing upgrades, where we move workloads from one region to another, destroy a region, and rebuild it. We are messing around with non-destructive ways of doing this as well, in terms of leveraging live migration underneath the hood, but that's something we're moving into slowly; we're not beginning there. I mentioned earlier that services are our products. So I just switched my view to look at the actual available services in a particular region. These are the services that are available in this region. Let me go ahead and mouse over this again; what I'm doing right now is drawing out actual benchmarks against our block storage. Scroll down a little bit here. This is real data. It's live. The interesting thing here is anybody can come and get this information. So for our illusion of infinite capacity, when we're talking about internal use, there is no illusion. You can see exactly what's there. You can see how much is left. You can see how performant it might be. And in an environment where we have to support multiple types of orchestration, this becomes extremely important. The health of our services, the availability of our services, must be made available at all times, so that, you know, third-party orchestration can come in, ask a question, and make a decision about what to do. That's an area where the ETSI specifications really are kind of lacking. They have this concept where you're going to have an orchestrator come in, and it's going to own admin privileges to a stack. That works great if you run everything through that same orchestrator, but it doesn't work very well in a multi-tenant environment. What this does, because it's all driven by an API, is allow an orchestrator to actually ingest information about the health of all of our regions: capacity, performance, everything it would need to know to determine where it's going to orchestrate workload. It's absolutely critical to the success of anything that's NFV-based, and it's one of those things that the standards really have kind of overlooked, primarily because of the aspect of an orchestrator having admin privileges and actually running the stack. Yeah, or the presumption that an orchestrator would come into the infrastructure at any given time, draw that information out, make a decision about it, and then move forward by operating on the infrastructure. In a world where we have a multi-tenant environment, and in fact multiple orchestrators, we can't rely on that being the case. This is our solution for that. So to solve for this illusion of infinite capacity, we've done things like this that are not necessarily core to, or part of, OpenStack itself.
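As an illustration of that pattern, an orchestrator reading published health and capacity instead of holding admin rights on any stack, here is a minimal sketch. The endpoint, field names, and health scale are assumptions for illustration, not the actual service status API.

```python
# Sketch: a third-party orchestrator picks a region from a read-only
# status API. Endpoint and field names are hypothetical.
import requests

STATUS_API = "https://status.example.net/api/v1"  # hypothetical endpoint

def pick_region(min_vcpus, min_memory_gb):
    regions = requests.get(f"{STATUS_API}/regions", timeout=5).json()
    candidates = [
        r for r in regions
        if r["health"] >= 8                        # enumerated health value
        and r["available"]["vcpus"] >= min_vcpus
        and r["available"]["memory_gb"] >= min_memory_gb
    ]
    # Prefer the region with the most headroom left.
    return max(candidates, key=lambda r: r["available"]["vcpus"], default=None)

target = pick_region(min_vcpus=64, min_memory_gb=256)
if target is not None:
    print("orchestrate workload to", target["name"])
```

The point is that the placement decision is made from published data; no admin privileges on any region are required.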
We call these add-ons, I don't know if it's appropriate, second-class citizens to the platform itself. We do this in a non-invasive manner so that when we deploy an upgrade, we're not interrupting anything and we're not creating a dependency there. This has become paramount for us to be able to support multiple regions and multiple types of orchestration on top. This website that you're looking at is live. It's actually being fed, hang on a second, it's actually being fed by another application that we've worked on, which we refer to as our service status API. I just drew back some information from that application about the regions. So all of the information populated in that website is actually coming from an API; the application is using its own data as its own source, and it's live. I like to call the websites that we build on the team the best proof-of-concept websites I can come up with. If anybody else wants to build a website differently, that's perfectly fine. All of this data is available, as JSON or whatever you want to pull it out as. But the data's live. It's a RESTful API that's just sitting there. Our documentation for this is actually just Swaggerized, and this is how we publish back out to the business what you can do with that data, how you query it, and how you pull that information back in. This is going to prove very valuable for our tenants: if they have an application running that's storage-intensive, it can go in and see how many IOPS are available in a particular region before they orchestrate a workload over there. It's also inclusive of network health, the networks that are available at a given region. We're incorporating some firewall functionality and some other functionality to expand it to cover pretty much anything you would need to know to determine where you would want to orchestrate workload. Yeah, and it feeds into some of our guiding principles as well, around transparency. All the information that we have about the platform we'd like to publish back out, including, as Billy just mentioned, the different types of testing that we're doing on the platform, and the enumeration of service health as well. That's something else we've sort of had to invent as part of the service status API. We enumerate a value so that you can make a judgment based on the actual performance of a particular service, like compute. Because some applications might view the performance differently than others; if I enumerate a value of eight, that eight might mean something different to one consumer than it does to another. But we're amortizing those values across all of the different regions. So it's sort of bringing together all the data about the platform and presenting it back out in multiple forms. Another component here, or aspect, is the concept of metering, and being able to meter and look at different services as products. By overlaying the concept of a customer with VCP, a product (a.k.a. a service, or a collection of services), and then placing some sort of value on top of that (a.k.a. a rate plan or something like that), I can now enumerate an actual value for a consumed service. So when I moused over that one region and showed you the available capacity, that available capacity is actually coming from another platform that we also refer to as a second-class citizen to VCP: our metering platform.
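The arithmetic behind that metering idea is simple: overlay a customer, a product, and a rate plan, and a consumed service gets a number. A minimal sketch, with rates, units, and quantities invented for illustration:

```python
# Sketch of metering: usage records priced against a rate plan.
# All rates and quantities here are made up for illustration.
RATE_PLAN = {
    "vcpu_hours": 0.02,        # $ per vCPU-hour
    "memory_gb_hours": 0.005,  # $ per GB-hour of RAM
    "block_gb_months": 0.10,   # $ per GB-month of block storage
}

usage = {"vcpu_hours": 2880, "memory_gb_hours": 11520, "block_gb_months": 500}

def meter(usage, rate_plan):
    """Price each usage record and total the bill."""
    line_items = {k: qty * rate_plan[k] for k, qty in usage.items()}
    return line_items, sum(line_items.values())

items, total = meter(usage, RATE_PLAN)
for name, cost in items.items():
    print(f"{name:>18}: ${cost:,.2f}")
print(f"{'total':>18}: ${total:,.2f}")
```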
So our metering platform facilitates self-service as well, quite honestly, and it's been built in a very similar manner to our service status API: there's an underlying, single place to go get things, and then we leverage a web presence on top of that. Depending on your place in the business, you might have a different viewpoint than another person. Finance looks at this stuff very differently than folks who are deploying core pieces of our network or our signaling core, versus somebody who's just deploying a tool, an application that's running. So maybe we'll talk about this self-service stuff for a couple of minutes. Introduce Sanjay. Good. Yeah, thanks, Andy. And for those of you who came in late, I am not the CTO of Verizon; I'm the CTO of Taligent. Taligent is a small software company. We've been working with Andy and Billy and team for the last year or so, and I want to continue in the vein of providing an update from the last time we presented, at OpenStack East, for those of you who might have seen that. So our focus: if you take the fundamental tenets of cloud as being self-service, infinite capacity, and pay-as-you-go, we came into the relationship with the ability to supply the data that serves that illusion of infinite capacity. Or not: you surface it up, but you choose whether to just consume it internally and hide from the tenant that there is a physical, finite set of resources, or to expose it directly to the tenant and let the tenant make decisions on where to deploy and why. So there's the capacity and what you choose to do with it; there's the pay-as-you-go model, so there's a billing component that all of this feeds into; but the self-service piece is something that we didn't talk about much last time, only because we were working on it, and we've made good progress. So I'd like to really focus on that in giving you this update. What we did was build an application targeted toward the customer user, differentiating the idea of customer from tenant or project user: a customer is the one who receives the bill for services, and a customer may have multiple projects under them, so it sort of rolls up multiple views. So we built a customer interface that provides the detail on usage, lets you launch additional resources, and lets you manage resources. For the most part, it's VM-centric. The application serves an optimized use case; it doesn't aim to replace Horizon, but it aims to let you do VM management a little bit better and a little bit smoother. If you take the tenant lifecycle, or the customer lifecycle (I want to onboard myself, I want to create a project for myself), there's a self-registration interface. We think of the registration page as the best proof of concept we could build, but it's backed by an API. So if you want to build a better page, if you want to collect additional data, build your own page and invoke the API. At the end of that process, the project is created, and it's backed by Heat. One of the guiding principles in doing this implementation was don't unnecessarily reinvent what doesn't need to be reinvented, so we use the native orchestration frameworks in OpenStack to do the heavy lifting.
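As a sketch of what "backed by Heat" could look like behind a registration API, here is one way to hand a project's baseline resources to Heat using python-heatclient. The endpoints, credentials, names, and template body are illustrative assumptions, not the actual onboarding code.

```python
# Sketch: after registration creates the project and user, a Heat stack
# lays down the project's baseline resources. All names and credentials
# are hypothetical placeholders.
import yaml
from keystoneauth1 import session
from keystoneauth1.identity import v3
from heatclient import client as heat_client

auth = v3.Password(
    auth_url="https://keystone.example.net/v3",   # hypothetical endpoint
    username="onboarding-svc", password="REDACTED",
    project_name="admin",
    user_domain_name="Default", project_domain_name="Default",
)
heat = heat_client.Client("1", session=session.Session(auth=auth))

TEMPLATE = """
heat_template_version: 2016-04-08
description: Baseline resources for a newly onboarded project
resources:
  net:
    type: OS::Neutron::Net
  subnet:
    type: OS::Neutron::Subnet
    properties:
      network: {get_resource: net}
      cidr: 10.0.0.0/24
"""

# Heat does the heavy lifting; the registration API just names the stack.
heat.stacks.create(stack_name="onboard-customer-42",
                   template=yaml.safe_load(TEMPLATE))
```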
So: create a project, create the user, create the required elements that you need to reference when you launch a VM, and then you have the shortest path to actually launching a VM when you're ready to consume resources. This is the dashboard view that the interface presents. A couple of things: I'm not as brave as Andy; I'm not going to do a live demo. I did it yesterday and I paid for it. So I'm just going to click through some screenshots that we captured before we came in. There's a multi-project, multi-region view that's just built into this. When I look at this dashboard, I get an aggregated view, but as I move through the application, I can scope and filter appropriately. This is the page to launch a VM. It's one page; anything that you might be referencing here when you get ready to launch a VM is pre-configured and pre-built for you, or you can break out from here and create additional networks and security groups and key pairs and so on. But the idea is make it simple, as straightforward and quick as it can be. The other key elements here: we're also pulling together the additional detail that might be relevant for you to make decisions about what to launch where. So cost information, which is being driven in real time by the billing engine on the back end, and quota information, obviously just essential components. But anyway, within five minutes you could have onboarded yourself to VCP, come in here, launched your first VM, and be up and running. We want to keep extending this. On top of this, we want to build support for containers. We want to build support for applications that have a more complex Heat template behind them, and so on. This is really just version one, and we're just getting started. There's also another unique problem that we needed to solve. Having moved toward that region architecture that I described, there was a maintenance problem for our users: all of the Horizon and services dashboard links and things like that. There are multiple ways to handle that, but this actually serves as a single front end, fronting all of those regions that I showed earlier, which by the end of the year will be well over 100. So this kind of correlates it, collates it together in one place. Great point. Thank you. Exactly. Given that VCP is built as multiple independent OpenStack clouds, the alternative would be logging into a dozen different Horizon consoles to consolidate that view. This integrates all of that: a VM in region X and cloud Y sits alongside a VM in a different cloud in a different region, and I have my consolidated customer view. Other things are essential components to making a decision: here's a collection of services that I anticipate this application, or this set of infrastructure, is going to require. What's it going to cost me? Just simple cost-estimator kinds of things. One other important aspect of self-service, and I've become educated in the process, is the planning function: the idea of what my projected capacity is going to be over time. Ultimately, the way to prevent this infinite capacity from turning into a tragedy of the commons is to have some mechanism that fences how much you really can consume realistically. And quota, obviously, is that mechanism in OpenStack.
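A minimal sketch of quota as that fence: check a launch request against what a project has left before it is accepted. The numbers and structure are invented for illustration.

```python
# Sketch: quota fences consumption so "infinite" capacity stays honest.
def fits_in_quota(request, quota, usage):
    """Return (allowed, remaining) for a resource request."""
    remaining = {k: quota[k] - usage.get(k, 0) for k in quota}
    allowed = all(request.get(k, 0) <= remaining[k] for k in quota)
    return allowed, remaining

quota = {"instances": 20, "vcpus": 80, "memory_gb": 320}
usage = {"instances": 18, "vcpus": 72, "memory_gb": 288}

ok, remaining = fits_in_quota(
    {"instances": 1, "vcpus": 16, "memory_gb": 64}, quota, usage)
print("allowed" if ok else f"denied; remaining: {remaining}")
```

Here the request is denied: only 8 vCPUs remain against a request for 16, which is exactly the kind of feedback a forecast-driven quota workflow can act on.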
Being able to predict your anticipated usage means providing quota forecasts and having automation around that: as step one, you can request increases and they get processed through workflow, but then extending beyond that and saying, hey, if you've told me what your usage is going to be for the next six months, I'm going to auto-approve, auto-increase, and auto-decrease. And then having this be a billable component as well. So I request a quota (just to make sure I'm requesting the right quota): the ability to charge for quota, the ability to charge for actual usage. And then some additional reporting, just complete visibility: here's what my spend has been over time, usage over time, across OpenStack services. And that's it. So thank you. Yeah. Thanks, Sanjay. So we're moving right along. So anyway, the steel thread: talking about the illusion of infinite capacity. I talked a bit about our journey for internal customers and how we view it. We've got some other use cases that we're thinking through where, literally, the illusion of infinite capacity is important: we aren't necessarily going to be exposing all the information about the underlying platform where we just have folks that are using our services. So there's a different viewpoint on that, right, Billy? I mean, there's something else we need to think about, and we need to think about capacity and latency, right? Same thing, different angle? Absolutely. Maybe take a couple minutes. All right. So what we're looking at now is what we're doing in the future. We've really done, I think, a pretty good job addressing the perception of infinite capacity. We've recognized that we do things differently in a cloud than we've done them in the legacy networks that we've built before; I think we've covered that fairly well. The latency aspect of this is a little bit more of a challenge. You've heard all this talk about mobile edge computing, right? It's kind of the big buzzword that's running around: we're going to push stuff out to the edge of our network. So one of the first issues I have with a lot of the slideware that I've seen on MEC has been defining what the edge is. Because technically, the edge is defined by where your consumption is, whether that's at the actual edge of your network or in the core of your network. So really, it comes down to where you need workload to run to best suit the consumer of that workload, wherever that might be. You have to look at the behavior of the user. Is the user somebody that's driving across town? Is the user another peer, a machine-to-machine connection that might be co-located in one of your core locations? You have to look at the application needs, what type of services; some services are more sensitive to latency than others. You have to look at workload needs across the board, from the perspective of storage and compute: what is needed to support that workload. And basically, what it all boils down to is that you have a spatial problem as well as a logical problem that you have to correlate. So what we look at doing to solve this is taking workload and orchestrating it, pushing it to where it's needed to run.
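To make that spatial-plus-logical correlation concrete, here is a minimal placement sketch: pick a region that satisfies the workload's latency budget and still has capacity. The region names, latencies, and capacities are invented for illustration.

```python
# Sketch: latency-aware placement. A tight budget forces the edge;
# a loose budget lets scarce edge capacity stay free.
REGIONS = [
    {"name": "edge-bos",  "latency_ms": 2,  "free_vcpus": 12},
    {"name": "metro-ne",  "latency_ms": 9,  "free_vcpus": 200},
    {"name": "core-east", "latency_ms": 28, "free_vcpus": 4000},
]

def place(workload):
    candidates = [
        r for r in REGIONS
        if r["latency_ms"] <= workload["latency_budget_ms"]
        and r["free_vcpus"] >= workload["vcpus"]
    ]
    # Among regions that meet the budget, prefer the most headroom, which
    # naturally reserves scarce edge capacity for the jobs that need it.
    return max(candidates, key=lambda r: r["free_vcpus"], default=None)

print(place({"latency_budget_ms": 10, "vcpus": 16}))  # -> metro-ne
print(place({"latency_budget_ms": 5,  "vcpus": 8}))   # -> edge-bos
```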
So where you have certain applications that require very low latency, if they have a very low latency budget, let's say five milliseconds or one millisecond, you're going to push that workload to the location where you can meet that budget. Because, you know, as much as we would enjoy it, we still haven't figured out how to accelerate packets beyond the speed of light, even though I keep pushing my network guys. They don't listen, right? They keep talking about infinite mass and it being a big mess and all that stuff. So because we can't do that, there's only one thing we can do, and that's not to change physics but to change the formula, which means moving your workload to where it needs to be. We're looking at doing this a number of ways. We're looking at some orchestration mechanisms. We're looking at different container mechanisms. We're looking at all kinds of technology to do this. I think you've seen some of it demoed earlier in the keynote. You'll see some other technology presented if you just go to the ETSI website; you'll see a bunch of stuff on it. So it's a challenge. It's a challenge for us to solve this, and it involves us deploying compute resources, storage resources, and SDN network resources at a whole bunch of locations. It's definitely not an easy problem to solve. So with that, we've got about, what, 13 minutes left? Let me summarize a little bit. We defined VCP. We talked about how we've been deploying at hyperscale. We shared our regional architecture and kind of let you know how we're doing. We're doing really well with OpenStack. Over the past year or so, the team has matured. We've moved into an operational status on the platform and supporting the platform. Our non-invasive approach to doing things with OpenStack is paying dividends. We're not heavily customized at the VIM level. Our operators are leveraging the same tools that I showed you, so being able to pull the information together about the platform and present it back out in a singular location is for our customers, but it's also for our operations folks who are supporting the platforms. Where we find problems that persist and repeat themselves, we auto-remediate them; we figure out how to fix it and move on to the next version. We were talking to the Foundation yesterday, actually, and a lot of their questions were about how are you guys doing, like, generally, how are you doing? It kind of switched my frame of mind for the talk today: to tell the room that we're doing well with it, that we're pretty happy with the community, and, for those of you that contribute, thank you for doing that. We really appreciate it. We're going to be doing a lot more of that ourselves moving forward as well. So this helps us facilitate self-service with our customers, but also with the folks that are operating the platform. The future is bright for us with OpenStack. It's a core part of our plan moving forward. So yeah, I think that's it for our summary. Yeah. We're doing well. Ten minutes. We are. So with that, we can open it up for questions. So if anyone has... Sometimes I talk too long and there's no time for questions. Right. Usually there isn't, so I'm actually quite impressed that we have time for questions. So if anyone has questions, feel free. If not, we can have Andy do an interpretive dance. Oh, yeah. All right. We appreciate that.
What's one of the most important discoveries you made when it was put in place that surprised you? That's a great question. Probably the number one thing that struck me, and you can certainly add on, is that what people thought they were going to be doing with the platform wasn't necessarily what they ended up doing with it, in terms of resource provisioning and reservation of resources. They thought they needed a certain amount of resources all the time; there was a lot of concern up front. It's very clear through our metering solution what's actually being used, and how often it's being used, and so the need for certain things jumps right out at you. We place a very high value on IPv4 addresses. Yeah, a very high value. We also learned a little bit about how to better size the system, how to balance memory and compute. And, you know, we were a little bit out of balance, I think, when we first launched, because it was a platform that was designed in spreadsheets. Right. Yeah. That's actually a selling point. So, in addition to looking at individual tenants and what they're doing, or projects, customers, there's a really nice aggregate view that presents itself back to an administrator of the platform. So you can clearly see whether you're balanced between the amount of memory you have versus the amount of actual physical cores versus what you've got with virtual cores, in terms of your oversubscription ratio as well. Very quickly, you will run out of memory and strand a lot of CPUs if those are not balanced. It really presents itself very well. I don't know who asked that, but that was good. They're already sitting. I'm curious about the transparency into the consumption and the availability, and your extending that visibility to your tenants. I don't know if that's a common business model for hosted environments or other people that are providing large clouds, or if this is somewhat of a revolutionary offering you're giving your tenants, to be able to see that, and whether that's a differentiator. And a follow-up question, complementary to that, is about collision: two tenants at the same time decide, oh, there's huge vacancy in Boston, and they both quickly jump, and they collide and saturate it. How could that be remedied? I don't know if you're going to control that. We're going to build a bidding platform. I'm kidding; that's a joke. So I'll answer your first question. In public clouds, you can always see what you're using. You can't always see that versus what's available. It's the illusion of infinite capacity: in a public cloud, you should never worry about resources running out. In a private cloud, when you're dealing with internal customers, you definitely have to plan and make sure there are enough resources. So our thought process was: put that information out and allow the business to help us plan the platform. And that doesn't happen on day one. That happens year two, year three, where that fine balance between what folks really need versus what they're using really strikes. We wanted them to be able to do it on their own as well, with immediate feedback to the platform itself. So to me it's revolutionary.
Having been in the business for many years, I would have loved to have had that level of exposure to the capacity, but also the performance, all the time. And maybe it could draw tenancy: people go, if you get on the VCP cloud, they'll give you great visibility, and that could be advantageous for people who are trying to make real-time business decisions. Correct. I would agree. And then if there is collision stuff, at some point you guys may need to massage things or move your tenants. I won't use the term force, but you may be moving your tenants around to balance. Absolutely. Because, listen, certain regions have different feelings than other regions; different applications have different requirements. The lines of business, the apps themselves, are going to ultimately start moving themselves around to where they prefer to be anyway. We don't want to dictate what that is. Now, collisions, that's a dangerous thing to state. I mean, when you're talking about noisy-neighbor stuff, we need to balance that to meet our SLAs for our service products. So they're not going to collide like that; if they do, we're going to hear about it, and we need to adjust and balance. But there's no doubt there could be some competition about who wants to get to the resources first, if that makes any sense at all. Consume away. That made more sense. Okay, thank you. I'm going to get a t-shirt with a Horizon check mark: consume it up. Anyway, Scott, closing, from The New Stack. The other 95 percent of the content of this conference has been about container orchestration, and I believe there are two of those: there's Kubernetes, and there's other. I'm wondering, with what you've demonstrated here: is there anything that container orchestration engines could have given you if they were available to you as an architectural option two or three years ago? Or instead, are they doing something for the people who are utilizing them in their infrastructure today to catch up to where Verizon already is with your project? Okay. So I don't even think I can think three years ago when it comes to this platform. Containerization has been around for a long time; Docker changed things. We use containerization for many different things in terms of the underlying infrastructure builds. We use it where appropriate. I do really like where it's maturing toward, and I do like the OpenStack community's acceptance of a containerized approach to deployment. I think that's a big part of our entire community's future: being able to converge a lot of the different, I'll call them classic, ways of deploying OpenStack. Can I say that about something that's, like, three years old? So, listen, I don't find any gaps in time here. I don't think containers are too late; I think the timing is perfect for the maturity level right now, and we are using them where appropriate. And you managed to bring containerization into the one talk that didn't have it. No, it's great; it's a good question. Yeah, we were going to dig a little deeper, quite honestly, but our conversation yesterday got us thinking about the journey we've been on, sharing the success of what we're doing, and giving a broader picture of how this is benefiting the business. So, to help foster that environment, where they understand what a cloud really is and how their applications need to be built:
how is that being processed today? Okay, I'll start with that one, I guess. Internally, that website that I showed, we really view that as our presence. It's VCP's presence. It's the place where we want people to come. We've got stories. We've got labs that we've built to allow folks to come in and learn about, you know, how to deploy, and these labs are getting a tremendous amount of traffic. That's the easy answer. The difficult answer is: now that folks have come and are learning about it, what about these giant, huge, monolithic applications that they've been building and spinning for years? What happens next? There's change management there, with providers and with development teams, toward building applications that are friendly for this environment rather than not. So that's a journey. Yeah. And we have to take it a step further. As these tenants come on, there's learning OpenStack, there's learning how to deploy, there's learning Heat, all the things that our team will help guide them through. There's another aspect of this, which is working with all of the big providers, your Ciscos or Ericssons, and taking all of these applications that are monolithic, breaking them apart into microservices or refactoring them, and making them ready for the cloud, to take advantage of multiple regions. Part of how we built this environment, the reason we built it the way we built it, was to allow our tenants to have an environment where they could scale horizontally, versus the typical telco way, which is to buy a new box that has more processors and more memory and more storage and everything's faster. That regional diversity we talked about earlier, having all of those different regions to deploy to, is part of our journey for bringing our applications and our development teams along, in a manner that doesn't create friction for them in the business, to be able to actually get, you know, a zero-downtime app running. We're used to five nines. I think we can actually do better. Yeah. Right. On a platform that actually has, you know, SLAs at three nines. That's deep. Right. Okay. We literally have one minute left and no questions. Oh. Thank you. Thank you. Thanks. We're here if you want to come up. Oh, you have a question. I'm sorry. Yeah. Got to get to the mic. The last thing you said, I just want you to expand on it, because it seems like the technology got ahead of, obviously, the adoption. And if people are now thinking about how they can change and refactor, and we knew about microservices, it seems like it's happening now. It is. It's my sense, and folks that have known me for a while know I have this tendency to say just do it and oversimplify things, but it actually is fairly meaningful to think this way. I don't believe in gating applications that are not cloud-friendly from the cloud. I believe in bringing them in and going through the experience: well, how long does it really take you to upload your image? How long does it take you to try to create some kind of redundant behavior across the application tier itself? And from that experience, you tend to see which areas make sense for refactoring in your application. All applications are different. I mean, we can totally just generalize and say, well, traditional enterprise three-tier app. Right. Okay. Go ahead, stand it up.
Bring that thing in and see what happens. It's not a lot of fun to try to run a giant honking SQL database, you know, and try to replicate it and do it in a traditional manner. But you'll know exactly which parts of the business logic built into your SQL layers you can pull apart; maybe they don't belong in a relational database anymore, because you've got options now. You've got cloud-native databases that don't care about the referential integrity of your schema. But there are parts of your business logic in your application that rely on that, so, okay, keep those there. Trim it down. To me, that's really been part of the journey in working with application teams over the years, and it's kind of fun, to be honest with you. It's painful at first, but I would say just do it. That's the Nike swoosh, by the way. Absolutely. Now it's transferring. And it's gotten better. Two years ago, every application that people wanted to onboard required SR-IOV; they all wanted, you know, one-to-one CPU pinning, NUMA topology filters, things that really complicate it. Basically, build a bare-metal box, but virtual: just pay the hypervisor tax for the fun of it. But now we're actually getting to a point where more of the applications are okay with oversubscription; they're okay with virtio. They still don't like vRouters, but yeah, we've got to get them there. But not much IPv6 support; that's another problem, right, IP space. I mean, he was joking about us putting a huge price tag on v4 address space. No, I wasn't joking. Yeah, okay, well, maybe it is highly priced. But, you know, getting v6 to take off a little bit more, and by v6 I mean native v6, not dual-stacked; that doesn't help us any. Thank you very much. Yeah, thank you. Thanks, everyone. Thank you very much.