I guess I'll start off by introducing the team. I'm Billy Felton, a director of technology at Verizon Wireless, part of an organization called National Network Operations, which is basically the heart and soul of Verizon. That's where all of our call processing and a lot of the application services that we provide our customers are managed. My specific role deals pretty much with disruptive technology, so I'm deploying a lot of the cloud technologies and a few other things that we're working on. The challenge we faced in building the team has been building a really flat organization that's cross-functional, and then applying that inside this National Network Operations team that takes a great deal of pride in the quality we provide our customers. I mean, we have the best network in the country. We're proud of that, and we want to continue to have that. So instituting change in an environment like that is a pretty challenging feat to accomplish, especially while you're trying to deploy this cloud technology as well. So with that, I'm going to introduce Andy and Sanjay, who are going to co-present. I'll turn it over to you, Andy. Yeah, sure. Thanks, Billy. No problem, Andy. So, some of the topics we'll cover today. The central theme here, clearly, is that we're talking about managing a deployment using OpenStack at hyperscale, right? So we're going to define what we mean by hyperscale. We're going to talk about why we're doing this, talk about how we've achieved some focus to actually do what we're doing. We're going to talk about orchestration for a little bit, some of the challenges we've hit, how we manage this, and ultimately land on this concept of self-service. So one of the guiding principles that we've used as we've built the team and built the practice is that we want to create a product, a self-service cloud environment, that's actually easy to use. A guiding principle is that people should want to use the platform. It shouldn't go against the grain of how their line of business actually works. So as we embarked upon this journey for change within the company, we needed to make sure our folks actually wanted to do it. We'll define what we call VCP quickly. VCP, or the Verizon Cloud Platform, is a multi-tenant environment. It's an environment intended for all of our lines of business: across our wireless organization, our enterprise services, our wireline business. And ultimately, there will be capacity available for folks that may even be outside of our organization. So there are a lot of folks that are going to be in there using it. We like to think of this as a large-scale environment with OpenStack at its core. Now, OpenStack is clearly our engine, or our VIM, that we're using. There are some other technologies that we're using around that as well, and we'll talk a bit about that, and also some custom applications that we've built. So the platform itself, and this is kind of interesting, we define by a group of products or services that we're offering back out to the business. We don't let the applications necessarily define the platform itself. This is an important guiding principle as well, because we need to be able to support multiple types of applications. So as we think about how we build, and how we need to repeatedly build our different locations, we need to keep the semblance of a uniform environment across them. So basically, our SLAs are at the service level.
So, services like compute, network, storage, metering, benchmarks, things like this. Some of those are core OpenStack services; some of them are not. For example, we've invented a service status API, which is a product that we have that actually gives out an enumerated value about the health of our services. So this is something that we've built on our own and added to the OpenStack deployment. We need to give users the ability to quickly deploy and support their applications. So yes, we want to leverage the tenets of what OpenStack brings us, but there are other things involved as well: the ability to quickly ask for new resources, the ability to quickly ask for increases in resources that you've already got. So as we move folks towards the ability to do things on their own, without necessarily having to ask for permission, it should be easy to do. We wanted a common interface for everybody to come to for provisioning their infrastructure, and we're achieving that through some custom UIs that we're building. And this should be on demand, and it should be self-service to the best of the capability of the platform. So that's how we define it at a high level. How do we think about hyperscale? Well, it's large, relatively speaking. I guess how you define large depends upon the size of the organization. For some of you, large might actually be small for other companies, and some small deployments might be large for other companies. But for us, it's fairly large, and I'll get into some of those details. There are a lot of instances running, and there are going to be more and more instances coming, with a lot of users. It's distributed domestically across the United States now, and we'll be growing internationally as well. It's got to be elastic, so as we need more, we need to be able to scale out quickly. And again, this repeated thread that I keep bringing up: it has to be easy to use. Not that I don't love Horizon. We all love Horizon. So why are we doing this? Well, our network is our most valuable asset. If we can create products and things that overlay on top of the network and actually create a repeatable SLA for folks, well, we get to leverage what we've invested so much money in. We want to be able to get new features to customers faster. Ultimately, we want to leverage the tenets of what DevOps and CI/CD bring to the table for our whole ecosystem of developers. And by doing this with VCP, we can allow folks to deploy things very quickly, which leads us into the ability to improve this notion of an agile environment for our developers, but also for our suppliers, for folks that are actually building applications and delivering them to us. Ultimately, we know that this will reduce costs both up and downstream, but we also realize that there's an initial investment that's important. Initially, you have to invest to save in the future. So again, it should be distributed. It has to be disposable, and then dependable. Some more guiding principles for us: the 3Ds. So in order to build the platform, which we've been doing over the past year or so, we had to find some focus. We had some brilliant planning that was done for this. We had a business that clearly was seeing the value of doing cloud computing. So finding focus meant figuring out, well, how do we actually begin to productize this thing internally so that it's repeatable, dependable, and something that folks can rely on?
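The service status API mentioned above is something Verizon built in-house, so the details aren't public. As a rough sketch of the idea, though — one REST endpoint per service that returns an enumerated health value — it might look like the following; the route, enum values, and check logic are assumptions for illustration, not the actual implementation:

```python
# Rough sketch of a "service status" endpoint that returns an enumerated
# health value per platform service. Names and values are illustrative only.
from enum import IntEnum
from flask import Flask, jsonify

class ServiceHealth(IntEnum):
    HEALTHY = 0
    DEGRADED = 1
    UNAVAILABLE = 2

app = Flask(__name__)

def check_service(name):
    """Stub: a real check would probe the service's API endpoint,
    its message queues, and its backing stores."""
    return ServiceHealth.HEALTHY

@app.route("/v1/status/<service>")
def service_status(service):
    health = check_service(service)
    return jsonify({"service": service,
                    "status": int(health),       # the enumerated value
                    "status_name": health.name})

if __name__ == "__main__":
    app.run(port=8080)
```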
Orchestration for us is a big deal. Orchestration actually drives a lot of the business cases for why folks want to build on or leverage VCP. There's a lot of noise around the business, and there still is. You hear a lot about, well, how are we going to onboard things? The ETSI MANO overlay for NFV and VNFs. So what are we going to use for orchestration? But different people define orchestration differently. I would argue, first, there's this concept of tertiary orchestration, which is above the stack, so to say. We're seeing a lot of movement in this area in the ETSI MANO world. And then there's this concept of secondary orchestration, which is basically at the stack level. You don't really do tertiary very well unless you have the secondary orchestration available. But more important to us initially was to figure out the first level of orchestration. So if I flip these around and think about it from the bottom up: the ability to quickly orchestrate your infrastructure provides a very solid platform for doing your stack-level orchestration, which then gives you a dependable environment to do tertiary orchestration on top of. That focus, planning backwards, was very important for us. So we embarked on a journey to figure this out. And here are some interesting facts for you. To get that underlying environment set straight, so that it's repeatable and idempotent, we had a lot of different pieces of hardware to deal with: 16,000 pieces of fiber and copper cable, a little over 270 leaf switches, 80 spines, 300 patch panels, over 1,700 servers, 445 terabytes of memory in those servers (and some folks on our staff with very sore fingers from actually putting some of that memory in), 38,000 cores, which we can oversubscribe, 90 racks, 76 different storage arrays, and over 4,500 hard drives, in seven locations with 21 regions and availability zones. Well, we're going to adjust the availability zones as we see fit. A lot of provider networks that we had to plumb up. And then, of course, there's the tenant side: thousands of networks, and thousands of instances going on top of those. And this is the first wave of capacity we're building in the core. So, some challenges. Well, the hyperscale in and of itself is a challenge: creating an environment that can scale and is easy to use, just the tenets or traits of what hyperscale means, the distribution across the country. We needed to do things in a repeatable manner. Unfortunately, every single one of the locations that we're working in is not exactly the same, so we had to account for that. Repeatability in the way we actually do the build itself. The community we're working in is, let's face it, used to doing things in a bit more of a stovepipe manner in terms of operations. OpenStack is kind of a horizontal technology, not a vertical technology. So we needed to effect change on the community we work in to get this done. Resilience in the environment itself: the maturity of the APIs, the maturity of the VIM. We have to account for the ability to keep this thing running, because we need to bring the value back to the users. And that user experience, to us, is extremely valuable. If you go to use something and it's not a pleasant experience, you're not going to really want to come back and do it again. And that goes for onboarding, right down to folks that are actually provisioning their VMs and building their networks.
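To make the stack-level ("secondary") orchestration Andy distinguishes above concrete: once the first-level build is dependable, a tenant can hand Heat a declarative template and let it own the order of operations within the stack. A minimal sketch with python-heatclient follows; the auth endpoint, credentials, image, flavor, and network names are placeholders, and this client setup is an assumed illustration rather than VCP's actual tooling:

```python
# Sketch: stack-level orchestration via Heat. The auth endpoint, credentials,
# image, flavor, and network names below are all placeholders.
from keystoneauth1.identity import v3
from keystoneauth1 import session
from heatclient import client as heat_client

auth = v3.Password(auth_url="https://keystone.example.com:5000/v3",
                   username="demo", password="secret", project_name="demo",
                   user_domain_id="default", project_domain_id="default")
heat = heat_client.Client("1", session=session.Session(auth=auth))

# A minimal HOT template: Heat resolves dependencies and ordering within
# the stack, which is exactly the "secondary" layer described above.
template = {
    "heat_template_version": "2016-04-08",
    "resources": {
        "web_server": {
            "type": "OS::Nova::Server",
            "properties": {
                "image": "ubuntu-16.04",
                "flavor": "m1.small",
                "networks": [{"network": "tenant-net-1"}],
            },
        },
    },
}

# A tertiary (ETSI MANO-style) orchestrator would drive many stacks like this.
heat.stacks.create(stack_name="demo-web", template=template)
```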
Ultimately, we know that this is going to effect massive change on the organization. Massive. From an environment where it once took months to get a server purchased, space on the floor, transport done, powered up, and available, to two minutes. So from an organizational perspective, this has had a tremendous impact on us already. Some guiding principles that we've used to do this. Version control everything. We version control everything that has a config in it. We look at Git as a core component of how we build our own applications when possible, and we're wrapping around it when we need to. So anything that basically has a configuration is stored and version controlled. Another guiding principle: make things machine readable, not necessarily human readable. So for a design document or some documentation that we have, we like to provide REST APIs as documentation. We like to provide things that are reusable. You force yourself to actually keep these things up to date if other systems are relying on them. No physical configuration. Talk about muscle memory in an organization. Our engineers are very used to knowing the machines and knowing where they are, where they sit in the data centers, which parts of the data centers are hot or cold. They get attached to them. They get attached. Well, it's a little cattle-versus-pets thing. This is the reality of it. So again, another guiding principle: no physical configuration. Serialization and delivery. Well, yeah, we want to be able to go at things in a repeatable manner, and serialize in the sense that we know exactly what's being done in terms of order of operations. Not to be confused with being able to do things in parallel, because actually, by doing everything I'm describing to you, it's allowed us to not do things in serial, per se, from a process perspective. But that order of operations is very, very important to us. This concept of declaring everything in advance: actually building out our environment as-built from a CMDB, and then having our installs actually go to that environment and ask about the servers or the nodes that they're configuring. We've achieved this. Some new territory for us is leveraging bots to do things as well. So we've embarked upon a bit of this ChatOps journey. We're using it where it makes sense. Our bots can respond to things, basically, in our loops from our monitoring platforms. So when we pick up events on the platform, whether it be a hardware event or something with an OpenStack service — we all love Rabbit a lot, and some of the other services that are there — we're looking at that and asking, okay, we're seeing this behavior; is this behavior real? If we can decide and figure out that yes, there's some weird behavior here and it is real, we don't drop it on the floor; we actually instruct a bot to go off and do something, or to remediate. So we've got some loops that we've built for looking after and maintaining the stack itself, but also the underlying servers. This has been interesting for us. It's been quite a journey, and we're actually achieving some success with the bots. People are starting to become friendly with the bots. It's a little weird.
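A minimal sketch of the kind of verify-then-remediate loop just described: the event types, hosts, and helper functions here are hypothetical stand-ins, since the actual monitoring feed and bot tooling aren't shown in the talk.

```python
# Sketch of a bot remediation loop: confirm a monitoring event is real,
# then remediate known conditions or escalate to a human. All names and
# thresholds are invented for illustration.
import time

def probe(event):
    """Stub: re-run the health check that raised the event."""
    return True

def restart_agent(host, service):
    """Stub: in practice this might shell out via SSH or Ansible."""
    print(f"restarting {service} on {host}")

def notify_chat(message):
    """Stub: post the outcome back into the ops chat channel."""
    print(f"[bot] {message}")

REMEDIATIONS = {
    "compute_agent_down": lambda ev: restart_agent(ev["host"], "nova-compute"),
}

def is_real(event, confirmations=3, interval=1):
    """Re-check a few times so a transient blip doesn't trigger action."""
    for _ in range(confirmations):
        if not probe(event):
            return False
        time.sleep(interval)
    return True

def handle(event):
    if not is_real(event):
        return                                   # transient noise; ignore
    action = REMEDIATIONS.get(event["type"])
    if action:
        action(event)                            # bot remediates on its own
        notify_chat(f"remediated {event['type']} on {event['host']}")
    else:
        notify_chat(f"needs a human: {event}")   # escalate, never drop it

handle({"type": "compute_agent_down", "host": "compute-017"})
```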
Anyway, managing this. This conversation is about managing at hyperscale, right? So in addition to sorting out how we attack the build, and how we make this thing predictable for applications to run on top of, we actually have to figure out how to manage and plan as well. And from an operational perspective, we start to bleed a little bit into the product side of this thing, because we needed to federate it. We needed to make sure that I, in one of my regions, am the same person in another region, and that person's the same person in another region. So we've created a solution around our IDMS, the identity management itself. Storage in particular is an issue as well: specific plans around how we're gonna provide storage, in terms of performance and the breadth of the types of storage. Security for us is a big deal. We've actually had some interesting work over the past few months, sort of the collision of the folks that are systems administrators, or SAs, versus the stack folks. They're two different viewpoints on the same infrastructure. When we think about hardening an OS, it's a little different than hardening the stack itself. And then the rate at which change occurs for those two is different as well. Metering for us has been an interesting journey too. The metering platform that we've built, which Sanjay will talk more about, has actually created an organizational overlay for VCP, aka OpenStack. Through our metering platform, we've implemented the ability to model an organization, above and beyond what a domain or a project can provide. So we can create an organization, and that organization can have a many-to-one relationship with people, or tenancies, or projects. But then it also implements the ability to build a service product: a collection of services that are rolled into a product that somebody can select and use. And then, lastly, it gives us the ability to implement a rate plan on top of that. So now I can start to look at the overall value of the consumed service coming off of my infrastructure. In a year or two, we can actually look at the numbers we spent versus what folks are consuming: what is the intrinsic value of that hardware that we're now abstracting through our platform? So the organizational overlay forced the product planning on us. Monitoring is a big deal as well. You need to have complete visibility into what's going on, so we've had to build a plan for doing monitoring. The build itself I've spoken about at length, but actually planning for how we're going to do the build was important. This has been a long journey for the organization; a lot of folks in our company have been working with OpenStack for quite some time, and we love every new version that comes out. Things get a little better each time. But having a forward-looking view into what's changing, from a deployment perspective and an operational perspective, has been very important for us as well. The prereqs. Any of you in the room who have been building a platform on top of OpenStack understand that there's a pile of prerequisites that need to be in existence before your stack build's gonna work. They need to be predictable. So we've actually got a specific plan around that.
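To make the metering platform's organizational overlay Andy described a moment ago concrete — many projects rolling up to one organization, a service product bundling services, and a rate plan on top — a rough model might look like this. Every name and rate here is invented for illustration, not the actual VCP schema:

```python
# Sketch of the metering overlay: many projects map to one organization,
# which selects a product (a bundle of services) priced by a rate plan.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class RatePlan:
    rates: Dict[str, float]          # e.g. {"vcpu_hour": 0.02}

@dataclass
class ServiceProduct:
    name: str
    services: List[str]              # the bundled services
    rate_plan: RatePlan

@dataclass
class Organization:
    name: str
    projects: List[str] = field(default_factory=list)  # many-to-one
    product: ServiceProduct = None

def consumed_value(org, usage):
    """usage: {project_id: {metric: quantity}} from the metering pipeline."""
    rates = org.product.rate_plan.rates
    return sum(qty * rates.get(metric, 0.0)
               for project in org.projects
               for metric, qty in usage.get(project, {}).items())

plan = RatePlan({"vcpu_hour": 0.02, "gb_month": 0.05})
product = ServiceProduct("standard-compute", ["compute", "block-storage"], plan)
org = Organization("wireless-ops", ["proj-a", "proj-b"], product)
print(consumed_value(org, {"proj-a": {"vcpu_hour": 1200},
                           "proj-b": {"gb_month": 500}}))   # 24.0 + 25.0
```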
Data retention for us is a big deal. Once things are out there and being leveraged, we need to allow folks to retain information about their build, but also the data that's flowing through their applications on top of the platform. We work with our partners at Trilio there. Upgrade and patching. One of the reasons why we're leveraging diversity in our sites across the country, but also within the sites themselves, is to give us the ability to have an independent stack or region that we can constantly upgrade, and to bleed traffic off of production stacks and move people onto the stack that's not necessarily in production but has been upgraded. Our self-service experience. Well, this is all the work we're doing in terms of building and planning and making this thing reliable; what's the one place where our customers are gonna come in and actually use it? It's the self-service experience itself. I think the dashboard that ships with OpenStack, the one that shall be called Horizon, is okay for some people. It's not okay for the entire customer base. So we have built some technology to help us there as well. So let's talk a little bit about the self-service experience, and metering a bit. That's great. Thanks, Andy. Sure. Let me scoot over. Yeah, I'll sit over there. So I think Andy's points covered what I like to think of as all the key issues around day one: what does it take to stand up my OpenStack cloud, and once I've stood it up, what does it take to manage it at a systems-management level? But OpenStack doesn't deploy in a vacuum. OpenStack deploys within an organization, as Andy repeatedly said. And organizations consist of people. People have needs. There are different types of people with different needs. And so that's the piece that we've been working with Andy's team on, and I'll just take a couple of slides to talk to you about those items. So who are the stakeholders? Obviously, they're the operators. They're the consumers. But then, organizationally, at a management level, there are folks who want to know what the health of their applications is. There are people who want to know, for the money and effort that's being spent on OpenStack, what's my return on it? What's my cost? How does it compare to what I might be spending in public cloud or in some other environment? And so having that kind of visibility, being able to provide it across the board at multiple levels of granularity, being able to roll it up into an organizational view as it makes sense to the specific use case for Verizon or anybody else — those are key considerations. Those are the day-two considerations: if you don't account for them, you may have built the most stable, most scalable, most repeatable environment, but if you don't address those other needs, you haven't fully planned for success. And so KPIs: having a set of KPIs around what your OpenStack cloud looks like, being able to define those, being able to collect data about those, and being able to share that data as appropriate, is critical. You have to think about and address the day-two requirements. So what are some of those guiding principles from a day-two perspective? Cloud, from the consumer's perspective, should look like an environment that has infinite capacity. The reality is, it doesn't. So how do you maintain that balance?
The key first step in doing that is to capture the right metrics and to watch them closely. So looking at usage and trends, and who are my top consumers, and so on: critical requirements. And then, once you have that data, additionally having mechanisms to encourage good behavior — to encourage stewardship of ultimately scarce resources. When you're talking about the scale that Verizon has deployed, or some of the other folks in the room might have, there is a lot of juice behind this, but ultimately it's limited, and you wanna make sure that things are being utilized the right way. So when I say provide the right data to the right people, what do I mean? Here are some common things that come up, that people wanna know the answers to. Customers wanna know — and by customers we mean the ultimate end user; typically, from our perspective, a customer consists of multiple projects, or it's a line of business or something — they wanna know what their usage is, and what the cost is, obviously. Cost is a good lever for good behavior. If unlimited resources are available to me just at the turn of the faucet, but it doesn't cost me anything, I'm less likely to be thoughtful and a good steward. Operators need to know what the capacity usage is and what the forecasted usage is, so they can plan ahead. Do I have 30 days of capacity? Do I have 60 days of capacity? Do I have more than 60 days? I'm good. Customers and operators both need to know what's overutilized and what's underutilized. Where can I reclaim resources? Where can I improve performance? The business unit owners need to know their current and projected spend. And then, ultimately, at a senior management level, you need to be able to measure that TCO and ROI.
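One simple way to answer the "do I have 30 or 60 days of capacity?" question above is a linear trend over daily usage samples. A minimal sketch — the sample numbers are invented for illustration:

```python
# Sketch: estimate days until capacity is exhausted from daily usage samples
# using a least-squares linear trend. Sample data is made up.
def days_of_capacity(daily_usage, capacity):
    n = len(daily_usage)
    mean_x, mean_y = (n - 1) / 2, sum(daily_usage) / n
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in enumerate(daily_usage)) \
            / sum((x - mean_x) ** 2 for x in range(n))
    if slope <= 0:
        return float("inf")                # flat or shrinking usage
    return (capacity - daily_usage[-1]) / slope

usage = [610, 622, 630, 641, 655, 660, 674]   # e.g. TB of block storage used
print(f"~{days_of_capacity(usage, 1000):.0f} days of capacity left")  # ~31
```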
And then finally, there are varieties of cloud experience. Like the book I read in college, The Varieties of Religious Experience: there are varieties of cloud experience. You may have a Cloud Foundry use case, where OpenStack is just the layer and what you're interacting with is the PaaS. You may have an API end-user experience. You may have a scenario where Horizon is sufficient for what you need. And then you may have a scenario where it's not, where there's a different use case that you wanna support from a self-service and front-end perspective. So being able to acknowledge all those different scenarios, being able to say maybe there isn't a one-size-fits-all and I need the ability to support multiple interactions, and optimizing for that end-user experience and that self-service in the way that makes the most sense for those particular environments — that is also critical. Ultimately, the success is gonna depend on how your end users interact with it, how pleasant the experience ultimately is. Being able to support that, and giving some real thought and effort to it, is from my perspective critical. So, I guess I'll just turn the keys over to you, right? That'd be the other thing. Yeah, that'd be pretty good. The lessons learned are more Andy's lessons than mine, so I'm gonna hand this back to him and put him back on the spot here. Okay, sure. I guess I'll remain sitting for this part since we're wrapping it up. But yeah, lessons learned: create a beautiful user experience. Create an environment that folks wanna use. A lot of times when I've worked with other teams, when I first begin with them, I always say, well, why do people buy things? And I get all kinds of answers. Oh, it brings value, it helps the business. But ultimately, people buy things because they want them. At the end of the day, they'll figure out how to pay for them one way or another, but simply, it's because they want them. And we want people to want to use this platform to make their business, or their line of business, function easier. Taking a real good look at the core projects within OpenStack and figuring out where the holes are, or at least where the evolution or the maturity isn't quite there yet — looking for the gaps, to figure out where to focus internal development efforts — is important. It was very important for us in the first part of this journey. Tracking DBaaS and the overall community behind NFV was also very important for us. We noticed that the inability to leverage a traditional database in the cloud environment was becoming a gating issue, so we embarked on a journey of looking at, well, how do we provide database-as-a-service via our APIs, or the Trove API, and we found some folks at Tesora that have some great ideas there. So, service level agreements. This is something we're still sorting out; I'm looking at some of my colleagues in the room. It's very different to put an SLA on a service than on a server. The SLAs are based on the services that are being provided. So this is a bit of muscle memory and evolution for us as an organization: what does that service level agreement need to look like, and actually, what does it need to read like? There are implications for how applications are gonna plan for their use of your platform. And we always have this notion of, well, how do we get five-nines sitting on top of a three-nines infrastructure, or initially a two-nines infrastructure? So that's an important part: we need to provide the diversity for your applications.
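A quick worked example of that five-nines-on-three-nines point: if each region independently delivers three nines (99.9%), running an application across N diverse regions gives a composite availability of 1 - (1 - 0.999)^N. This assumes independent failures, which real deployments only approximate:

```python
# Composite availability across N independent regions at 99.9% each.
# Independence is an idealization; correlated failures reduce the benefit.
for n in (1, 2, 3):
    availability = 1 - (1 - 0.999) ** n
    print(f"{n} region(s): {availability:.7%}")
# 1 region:  99.9%        (three nines)
# 2 regions: 99.9999%     (six nines)
# 3 regions: 99.9999999%  (nine nines)
```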
Organizational buy-in. Wow. This is definitely a challenge for us, but it's something that I can't stress enough. For some folks in the audience that are embarking upon this — some of you I've spoken with already — you kind of need to take folks along for the journey. I think when folks actually leverage and utilize a platform such as VCP, light bulbs kinda go off, more so than through conversations or lots of meetings. Actually getting in there, leveraging and using it, and seeing how it brings value to your project starts this sort of horizontal, grassroots-ish kind of behavior, where people become evangelical within their own organization. So that's been really interesting. Yeah, I mean, that's been one of the biggest challenges: deploying a technology that basically enables you to utilize all these new practices, which are strange for us in the telco space. You know, Agile, and this DevOps thing everyone hears about. The big challenge is that we as a company have a tendency to wanna wrap a technology around our organization. And once you do that, you've basically broken all the principles of cloud computing. You've turned it basically into a bunch of bare metal servers. Or a bunch of clouds. Or a bunch of clouds, a whole bunch of clouds operating independently — but yeah, you're paying a hypervisor tax for the fun of it. So the challenge is that balance, because you can't take a large company and change it overnight. Especially a successful company. So, as Andy pointed out, we build the technology, people will come use the technology, and they'll start to learn. They'll start figuring out better ways to manage their own projects, to manage their own products. And they'll change, and they'll evolve with the platform. And that's been our biggest challenge, I would have to say: the organizational one. So, this was a really quick talk. We talked briefly; we defined VCP, the Verizon Cloud Platform, as an environment that is defined by its services, not necessarily by the applications running on top of it. We defined hyperscale: distributed, lots of stuff going on, lots of moving parts. Why we're doing this: there's business value for us. Ultimately, we need to get products to our customers faster. Our focus began by having to slow down, look at the project, and get that base, first level of orchestration nailed, so that the secondary orchestration, or Heat — the stack-level orchestration — is dependable, and then we can have our NFV-style tertiary orchestrators working in an environment that they can rely on. Some challenges that we had; we went through those. The management of the environment. Bots are important to us, the monitoring, the benchmarking of the platform. Another tenet that we have — or I should say, guiding principle — as we manage this is complete and total transparency. So, putting our benchmarks out via our internal web presence, so that our customers can actually see what our block storage performance is, what our virtual CPU performance is, et cetera. Things like that, allowing them to make decisions on their own, move us towards this concept of self-service. So, we have like three minutes left, and we'd like to take some questions if you have questions, or we'll be here afterwards as well. So, yeah, we've got one here. You gotta drop it. Yeah, you gotta — I'm just kidding. Can we get this external mic turned on? Hello? Okay. All right. Thank you. Obviously, OpenStack is very technology-focused, but from your discussion, it sounds like you can manage the technology by applying engineering. The bigger challenge is the process side. So, what resources exist in the community to help organizations tackle the process piece, versus the hyper-focus we tend to have here on technology? Sure. You want me to go? You can start. So, for us, that's a really great question. We have a lot of process, and we have a lot of process for good reason: a very solid base company, a solid customer base. So, what we're doing is not to disrupt existing business per se, but to introduce things in a new manner. The old stuff can't just die in one day. Basically, we introduce a new way of doing something, and we evolve toward it. It's sort of a die-on-the-vine strategy. Process should be built into the interface with which people are using this, from a customer perspective. Now, from an operational perspective, process is built into our declarations — actually, into the underlying infrastructure-as-code that we've got running. So, anyway, when we bring up a new way of doing something, the old way eventually goes away, and folks will move over to doing it in a new manner, in a way that doesn't create friction within their organization. Sorry. So, resources in the community, right?
From my viewpoint, I think the more we talk about this, and the more we get public information out there, the more the community just slowly grows. Our space is very enterprise-focused, so we've got that overlay. The more we're able to provide information back out to the community about what we're doing in Verizon, I think, is gonna be a big factor in this for other folks as well. So, in short, you have to create those resources yourself. There are books that, if you wanna spread buzzwords throughout your company, are a great way to get started. But if you wanna actually take it a step further, it's a lot of meetings with your human resources department and your security groups, seriously challenging all of the process and procedure that you have in your company, and really being an instrument of change. That, along with the technology, people will tend to follow. So, good? We've got a question up here too. Thank you, Lisa, for doing this. Yes. Yeah, your talk was nice, but somewhat generic, I would say. Sorry to say. What do you plan to do organizationally? Where do you think the first impacts will be on your manpower, and how do you do reskilling? How do you prepare for the transition? Very generic question. So, I'll give you a generic answer. That's right. Turn it down. I'll give you a generic answer. So, there's really not gonna be any immediate change to our organization. I've built an organization that's already built around the principles of DevOps, so it's a cross-functional team. It's scaled to the size that it needs to scale to. All of our legacy technology is not going to go away anytime soon, so there's no need for any type of immediate change at this time. Is that good? Oh, okay. It's a good question, yeah. So, not generic, I was just kidding. No, listen, we've got this horizontal team. OpenStack in and of itself is a horizontal technology, and our team is sort of built in that manner, but our organization is definitely vertical. So, what we're doing is building sponsorship from other organizations, and we are having folks come in and work with our team that aren't necessarily members of the team. That's important. It's also important for us to provide the proper resources in terms of training, and the ability for folks to take it upon themselves to embark upon a new journey. We're seeing a lot of folks that have been part of the organization for a long time who see this as actually a breath of fresh air and an evolution for them. So I think providing really good information about how we're doing this allows them to model themselves on external organizations, like we are doing as well. I've also seen enterprise transformation done in a much more, how should I say, EA-style approach, an enterprise-architecture approach. That doesn't align very well with trying to do this at this level. You have to have proof. So, instead of building a lot of artifacts in advance, our artifacts are actually the applications and the teams supporting them. Again, it's manifesting itself in the platform itself. The platform trains you as you use it, as you go through it, sort of reducing that friction that folks run into. We're going all the way in the back, yeah. We actually pre-planned this, Lisa, so that you could get the most amount of exercise possible. So, you're up next, right? You know that, right? Yeah. So, 1,700 compute nodes.
At hyperscale, what's the total number of current customer workloads you guys can support, or are supporting actively? I don't have that number prepared, right? So, the total number — that's a difficult question for me to answer, to be honest. Let me answer it a different way. We've figured out that in one location, which is basically one-seventh of that number, the oversubscription ratio that we're comfortable with right now is three to one. That's what's going on there. We haven't maxed out an individual site yet from a testing perspective; we'll be embarking upon that over the next month. So, the next time we get up and have this conversation, we'll be able to provide numbers like that. Current customer workloads? Specific to what we have currently running, we have probably about thirty-something projects, with varying degrees of performance requirements on each. We've got probably another hundred or so projects that are all based around — I don't want to say lab work, but that's pretty much what it is — a lot of the software providers learning how to do that secondary orchestration we spoke of earlier. The applications that are running, that they're calling in production today, are fairly simple applications: web-style applications, Nginx stuff, some Apache stuff, some database stuff like MySQL, and then a lot of proof-of-concept work around core applications that are participating in call flows. So, lots of testing going on there as well. And I think that as we move forward over the next few months, we'll get a pretty good idea of how far we can stretch each of these environments. I think he's asking about current consumption. Yeah, and how we're expecting it to evolve. Oh yeah, we know it's going to evolve. So, our current consumption: very high, 80% consumption of our physical memory in the boxes; our storage is at like 2%, so we've got some adjusting to do there; CPU in one location, 60%. But that floats up and down depending upon what testing is occurring. And it's a really good question; it's one of the reasons why we've built a metering platform as well, so we can actually go in and look at that in a dashboard and tune. Clearly, I've got stranded CPU. Okay, that's it. Thanks guys. Thanks. Thank you. Thank you.