First of all, thank you again for the kind introduction. It's a pleasure to be here, and a privilege to tell the story of Verizon, my team, and my peers. Verizon typically does not talk much about its network infrastructure in public, but there is a tremendous amount of work going on in the background. So what I thought I'd share with this audience is an attempt to unpack the edge, and Verizon 5G Edge in particular. We've talked a lot about Verizon 5G Edge in other forums, but for this community, let me unpack what it takes to build that infrastructure and scale it to the numbers we'll be talking about in a second. I'm part of the technology planning organization at Verizon, responsible for network cloud; my team and I take care of the edge network cloud infrastructure and its security aspects. So let me dive in. We start with the promise of 5G. This gets talked about a lot, but it's important to frame why we talk about 5G in the context of how it enables services. Verizon is aspiring to go beyond connectivity: we want to enable unique services and experiences for our customers and unlock the value of 5G. For that, there are three fundamental things we've been building on: the eight currencies of 5G, network slicing, and edge computing. I look at these three as the fundamental pillars, but underneath them we've been investing in and building the technology for a long time; this did not come overnight. It's fiber, millimeter wave, the C-band spectrum rollout in our 5G networks, and now edge computing.
With all of this, we expect the promise of 5G to be delivered and we'll be able to enable these services. That's the problem statement: we're trying to enable network-as-a-service and build services for our customers. Today I'll be talking mostly about edge computing, but each of these is a fundamental pillar for enabling those experiences, whether it's Industry 4.0 or V2X; any of these services only emerge when all of these pieces come together.

Now, as we get into edge computing, one thing has to be clear: we started this virtualization journey a while back. We started by decoupling hardware and software; we did the virtualization and got the benefits from our partners, but it's not quite enough. What we realized is that it has to be portable. Not all the cloud principles had been met. We want portability; we want in-service upgrades; we want an N+K architecture. Cloud native is not just about microservices, which is where the conversation gets lost whenever you talk about cloud: everyone grabs onto Istio and microservices. No; we're talking about portable, telco-grade cloud-native functions that unlock those experiences. That's our aspiration. 5G is being built at web scale, and it's about agility and enabling the services I just touched on. And for that, let's go a little deeper. On this journey it's very important to understand how the Verizon network is distributed. It's massive, and some core concepts have to come across.
Obviously we have the core network, where we do control and management functions in large data centers. We have our edge network, which I personally think is our most strategic asset: data centers distributed across the country, placed closer to the workloads and closer to the customers. This is where we're deploying compute, along with our partners, to enable mobile edge compute (MEC) capabilities. But we've been pushing those capabilities much further into our network, into what we call the far edge. Think of thousands of locations: the cell sites, underneath the towers, where we're putting compute out there, running cloud-native functions and unlocking that capability as well. This is the most disaggregated network you will see, at least in the US, with thousands of sites. And this far-edge network does not stop at our own footprint; we are pushing that envelope all the way into the enterprises. It's all connected with a single fabric, so you can move a workload wherever you want to position it. Think of a truck leaving a warehouse: as it moves off the enterprise premises, it should be able to keep consuming the same services. That's what we're enabling, and we're rolling it out right now. Today I'm going to focus on the network portion: how are we virtualizing these networks, what pain points have we gone through, and how can the industry help us do more in this area? That gives us the baseline, so let me dive in. When we looked at that infrastructure, we didn't want to solve for the one simple use case of just running a RAN workload.
That infrastructure needs to solve for virtualized RAN; we're deploying it in enterprises, solving for in-building 5G, because we see this service moving all the way into the enterprises, with customers wanting private networks. It could be a smart factory, a utility company, the mining industry; whatever it is, we want our customers to have the network the way they want it and unlock those unique experiences. When you pair that network with multi-access edge capabilities, you'll see all kinds of other services enabled: V2X, industrial automation, and more. We're giving our customers ultimate flexibility in where they run their workloads, and even in where the network itself runs.

When we started this journey about a year and a half ago, we needed some tenets. These six tenets may sound high-level, but there's a lot of detail behind them. These are the principles we said must be true to enable network function virtualization all the way out at the far edge and into the enterprises. One: maximize the compute at the edge. It's about getting every efficiency out of that compute, and you'll see in a minute how tight those spaces are. When you're deploying compute and trying to extract every ounce of it to deliver latency-sensitive workloads like a RAN workload, what does that mean? Software and hardware performance acceleration: check. We got that done; we deployed it in thousands of locations. That was important for us. Two: a common platform that can run multiple applications, not just the RAN I just touched on. Three: highly distributed, with security at scale. These are locations that could potentially be unmanned.
These are literally small sites where you're deploying this, so security was paramount. More importantly, the network is highly distributed, and we want to operate it from a central location and build these sites at a rate of hundreds a day; you're essentially building a data center every single day. That's what we're going after. Then there's resiliency and rapid recovery. Our principle was that each site should be self-contained: we should be able to continue offering services to our customers even when the site's link goes down. You have to expect the unexpected in these kinds of environments. And then zero-touch provisioning. I know it's a grand phrase, but it was important that our field technicians and operations teams can plug these servers in and have them build themselves; literally, they come up on their own. We stumbled on a lot of issues and solved them as we went along, so this was very, very important for us. Finally, having observability of the network at any given time was essential. With that backdrop, let me jump into each of these areas: where the opportunity is, what we've done, and where the community can help us. First, maximizing the compute. We spent an enormous amount of time selecting the right hardware. Just to give you an appreciation for these locations: when you're driving on the highway and see a cell tower, take a pause and look at it. That's one of our sites, and you're deploying this compute, deploying the cloud, literally underneath it. And we have done this.
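The zero-touch idea above can be pictured as a small state machine that each newly plugged-in server is walked through: discover it, update firmware, install the OS, join the cluster, mark it ready. This is a minimal illustrative sketch, not Verizon's actual pipeline; the stage names and the `Server` class are hypothetical.

```python
from dataclasses import dataclass, field

# Ordered zero-touch provisioning stages; names are illustrative only.
STAGES = ["discovered", "firmware_updated", "os_installed",
          "cluster_joined", "ready"]

@dataclass
class Server:
    serial: str
    stage: str = "discovered"
    log: list = field(default_factory=list)

def advance(server: Server) -> Server:
    """Move a server one stage forward, recording the transition."""
    i = STAGES.index(server.stage)
    if i < len(STAGES) - 1:
        server.log.append(f"{server.stage} -> {STAGES[i + 1]}")
        server.stage = STAGES[i + 1]
    return server

def provision(server: Server) -> Server:
    """Run the full zero-touch pipeline until the server is ready."""
    while server.stage != "ready":
        advance(server)
    return server

s = provision(Server(serial="SN-001"))
print(s.stage, len(s.log))  # ready 4
```

The point of the transition log is the observability requirement: a central operator should be able to see exactly where any of thousands of servers sits in the pipeline.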
And yes, we do have nice buildings too, but we've also deployed this in places like the Verizon response truck: it's running an edge, running the cloud, running the same stack I'll touch on in a few seconds. The scope of the network is tremendous for us, so getting the right compute was paramount, and it had to support heterogeneous compute: CPUs, GPUs, and everything else. Yes, we selected some accelerators, because we wanted to offload certain functions and keep the host CPUs free for the applications; that was another requirement. PTP is another: a lot of folks say, "wow, I didn't realize you needed PTP," but you're looking at synchronization at nanosecond-level accuracy because you're running timing-sensitive workloads like the RAN. Form factors were important as well: I wanted to run the same servers all the way from Arizona to Alaska, operating across wild temperature ranges. Most importantly, we needed abstraction: whether through Redfish or OpenBMC, I should be able to provision and turn up these servers, because I could have multiple OEMs' servers deployed in this network. Getting that abstraction, and solving for all of this, was important. One challenge we continue to work on is that the platform overhead itself cannot be more than 10% of what you make available to the applications; this will become more apparent as I go through it. When the cloud control plane itself is running on the node, optimizing and minimizing that platform overhead was very, very important for us. And from the stack perspective, this will be no surprise.
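The nanosecond-level synchronization mentioned above comes from PTP's two-way timestamp exchange (IEEE 1588). Given the four standard timestamps, the clock offset and mean path delay fall out of two well-known formulas; the sketch below simply evaluates them, under PTP's usual assumption of a symmetric network path.

```python
def ptp_offset_delay(t1: float, t2: float, t3: float, t4: float):
    """Standard IEEE 1588 offset/delay calculation.
    t1: Sync sent by master       t2: Sync received by slave
    t3: Delay_Req sent by slave   t4: Delay_Req received by master
    Assumes a symmetric network path."""
    offset = ((t2 - t1) - (t4 - t3)) / 2   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2    # one-way mean path delay
    return offset, delay

# Illustrative numbers in nanoseconds: the slave clock runs 500 ns ahead
# of the master over a path with 1000 ns one-way delay.
offset, delay = ptp_offset_delay(0, 1500, 10000, 10500)
print(offset, delay)  # 500.0 1000.0
```

Once the offset is known, the slave disciplines its clock toward zero offset; for RAN workloads this servo loop has to hold the error down to tens of nanoseconds.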
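The 10% overhead budget can be expressed as a simple per-node check: everything reserved for the platform (host OS, node agents, the local control plane) must leave at least 90% of raw capacity allocatable to applications. The core counts and function names below are hypothetical, used only to make the arithmetic concrete.

```python
def allocatable(capacity_mcpu: int, reserved_mcpu: int) -> int:
    """CPU left for application workloads after platform reservations,
    in millicores (Kubernetes-style units)."""
    return capacity_mcpu - reserved_mcpu

def within_overhead_budget(capacity_mcpu: int, reserved_mcpu: int,
                           budget: float = 0.10) -> bool:
    """True if platform overhead stays within the budget (10% by default)."""
    return reserved_mcpu / capacity_mcpu <= budget

# A hypothetical 32-core (32000 millicore) far-edge server reserving
# 3 cores for the OS and platform agents: 9.4% overhead, within budget.
cap, res = 32000, 3000
print(allocatable(cap, res), within_overhead_budget(cap, res))  # 29000 True
```

On a 2U server in a 20-by-20 room there is no second node to absorb the slack, which is why squeezing the platform's own footprint matters so much here.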
We went with an open-source stack inspired by CNCF; that's why we are here. It was important that this stack satisfies multiple use cases, not just running a RAN workload: today I'm running a distributed anchor packet gateway literally at the far edge of the network. By the way, the reason I did not put a service proxy on the slide, whether Istio or similar, is because that's where we're headed next; what's shown is what's already checked out, deployed, and working in our network, taking real traffic as you drive around the country. Where we see opportunity is the central controller: the controller that operates all the sub-clouds at the cell sites. Yes, we have working software and it is scaling, but there's a lot of opportunity for the industry to make this open source and figure out how to control this many nodes. That's one thing I'll leave the audience to think about. Don't get me wrong: there are commercial products out there. What I'm challenging us on is an open-source central controller that can scale to thousands of sites; I'll touch on that in a second. And we needed to deploy this at scale. Today, in the core, I manage clusters in a well-defined environment in those mini data centers I talked about; the problem statement there is well defined. But with the far edge, you need a centralized management plane, positioned strategically at the edge of our network, capable of managing multiple cell sites and customer sites. That was very important for us. It's not hundreds; we want to go to thousands, to ten thousand.
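Whatever form an open-source central controller takes, its core job is a reconcile loop: compare each site's reported software state against the declared desired state and emit work only for the sites that drift. A minimal sketch of that idea; the site records and version fields are hypothetical, not a real controller API.

```python
def reconcile(desired: dict, reported: dict) -> list:
    """Return the sites whose reported version differs from desired state.

    desired:  {site_id: target_version} declared centrally
    reported: {site_id: version last reported by the site's local control plane}

    Sites missing from `reported` (e.g. backhaul link down) are skipped,
    not failed: loose coupling means an unreachable site keeps serving
    with whatever it has until it checks in again.
    """
    drifted = []
    for site, target in sorted(desired.items()):
        if site in reported and reported[site] != target:
            drifted.append(site)
    return drifted

desired = {"site-001": "v2.1", "site-002": "v2.1", "site-003": "v2.1"}
reported = {"site-001": "v2.1", "site-002": "v2.0"}  # site-003 unreachable
print(reconcile(desired, reported))  # ['site-002']
```

The hard part at ten thousand sites is not this loop but making the reporting and work dispatch scale, which is exactly the open challenge posed above.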
Zooming in, what does this look like? Look at these numbers: I have thousands of C-RAN sites, which are nothing but pooled basebands serving multiple cell sites; there's distributed RAN, virtualizing the baseband for a single site; and I have in-building deployments, enterprises, and everything else. What was important for us? I needed a local control plane within each individual cell site to manage the cluster itself, with loose coupling between the central controller and the local control plane functions. Why? Because that link can go down, and I want the site to keep working and providing service to our customers. I want deployment capabilities for patch management and pushing software; we've done that. I needed a real-time operating system for the high-demand workloads we just talked about. More importantly, I needed automation all the way from firmware to OS to applications, all accomplished through zero touch. And the most important thing: what we solved for is roughly 300 to 500 sites per central controller. I want to get to a place where I can manage thousands of these sites; that's the opportunity we're working on and building toward right now. If I have to leave you with one image, think of a massive, distributed estate of sites managed centrally. How do I do parallel operations? How, within one maintenance window, am I able to upgrade and patch thousands of sites in one night? That is the problem statement. Our teams have bruises, but we made it happen, and we continue to innovate in this area. Lastly, resiliency and recovery are paramount for us, because disaggregation, while it has benefits, comes at a price.
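Patching thousands of sites in one maintenance window comes down to bounded parallelism: slice the estate into waves, run each wave concurrently under a cap, and stop promoting to the next wave if the failure rate breaches a threshold. A toy sketch of that pattern; the site names, `upgrade_site` stub, and thresholds are illustrative, not Verizon's tooling.

```python
from concurrent.futures import ThreadPoolExecutor

def upgrade_site(site: str) -> bool:
    """Stand-in for pushing a patch to one site's local control plane."""
    return True  # in reality: drain, patch, health-check, report back

def rolling_upgrade(sites, wave_size=500, max_parallel=100,
                    max_failure_rate=0.05):
    """Upgrade sites in waves; halt before the next wave if too many fail."""
    upgraded = []
    for start in range(0, len(sites), wave_size):
        wave = sites[start:start + wave_size]
        with ThreadPoolExecutor(max_workers=max_parallel) as pool:
            results = list(pool.map(upgrade_site, wave))
        upgraded += [s for s, ok in zip(wave, results) if ok]
        if results.count(False) / len(wave) > max_failure_rate:
            break  # stop the rollout; untouched sites keep serving as-is
    return upgraded

sites = [f"site-{i:04d}" for i in range(2000)]
print(len(rolling_upgrade(sites)))  # 2000
```

The early-halt behavior is what makes one-night windows survivable: a bad image burns one wave, not the whole footprint.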
When a site goes down, I cannot have long recovery times to rebuild the entire site, and I don't have the luxury of putting N+1 or N+2 nodes out there and assuming I can fail over. This is a cell site: you've got the stack, you've got the hardware, and it has to be self-contained. When I'm patching, I don't want to boil the ocean and upgrade the entire stack; I should be able to do it at a component level, and it has to be fast. Getting back to service quickly was super important for us. And this is where I'll pose a challenge to the entire industry: I need to be able to recover within minutes. One approach is keeping a separate boot image within the box itself, because that's how we solved it, but there's still opportunity to innovate here and solve this kind of problem together. The most important thing I'd leave the entire audience with is this: building the edge requires a different mindset, different tools, and a different operating model. You do not have the luxuries of the mini data center or the data center, like DHCP and PXE boot to bring up an image and provision servers; you have to be highly secure in insecure environments where you're placing this infrastructure. Scale, resiliency, and recovery are paramount, and they are table stakes for us. We built this, and I know what it takes to get it done. My team and my peers came together, with a lot of learning along the way. We're building this massive infrastructure at scale with one and only one objective: Verizon is going beyond connectivity.
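The separate-boot-image approach described above is commonly called A/B (dual-slot) booting: write the new image to the inactive slot, boot into it, and automatically fall back to the known-good slot if the new image fails its health checks. A simplified model of that logic, assuming a two-slot node; the `Node` fields and function are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Node:
    slot_a: str            # image version in slot A
    slot_b: str            # image version in slot B
    active: str = "a"      # which slot the node boots from

def apply_update(node: Node, new_image: str, healthy: bool) -> Node:
    """Write new_image to the standby slot and switch to it; if the
    post-boot health check fails, fall back to the known-good slot."""
    standby = "b" if node.active == "a" else "a"
    setattr(node, f"slot_{standby}", new_image)
    previous = node.active
    node.active = standby
    if not healthy:
        # Reboot into the untouched image: recovery in minutes, not a rebuild.
        node.active = previous
    return node

good = apply_update(Node(slot_a="v1", slot_b="v1"), "v2", healthy=True)
print(good.active, good.slot_b)   # b v2
bad = apply_update(Node(slot_a="v1", slot_b="v1"), "v2-bad", healthy=False)
print(bad.active, bad.slot_a)     # a v1
```

Because the failed image sits in the standby slot, the site returns to service without any network boot or central intervention, which is exactly what an unmanned location needs.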
We are enabling services and experiences for our customers, whether consumers, businesses, or enterprises. That is the fundamental mindset driving this infrastructure at scale. And if you have bold ideas, we're game: please reach out and build with Verizon. That's my call to action. So with that, Arbit, I'll pause and hand it back to you. Thank you.

Thank you very much; that was very insightful. If you can stop sharing, then we can be on the screen. There you go. I know we're a bit over time, but there were two quick clarification questions for you. One: what is eASIC? And two: how big is your edge location in terms of compute resources?

I'll start with the second one. Think of a space probably as small as a conference room; you might not see it in my background. You're looking at a 20-by-20-square-foot location in some places, and you might put a 2U server in there; literally, that's what we're deploying in some cases. So that's one. eASIC is how we offload the RAN acceleration. We started with FPGAs as the programmable option, and now we're moving to eASIC to offload the RAN workloads.

Got it. Thank you. Excellent. I think we will wrap here, and thank you very much, Anil, for giving us these insights. Thank you.