Alrighty, hello everyone, and welcome to today's session on building planetary-scale mobile edge computing applications on Kubernetes. Super excited to have everyone here; we wish we could have been in person, but we're thrilled to talk a little bit more about our experiences with Verizon 5G Edge, which is our mobile edge computing platform, and to tell you why we believe the Kubernetes community should care about this infrastructure and what it means for you as you start to deploy network intelligence across your edge applications. To start off, let me introduce myself. My name is Robbie Wilson, and on behalf of the entire Verizon team, we're super happy to be here. I come from our corporate strategy team, where I lead our developer relations efforts for all things 5G Edge: everything from blogs, developer newsletters, immersion days, and hands-on labs. Working with developers day in and day out is what I do, and I love nothing more than having the opportunity to share with this community the integration opportunities and the ways that network intelligence can play a role in orchestrating applications. Joined alongside me today is Raghu from our MEC strategy and architecture team. Want to introduce yourself? Hi, hello everyone. My name is Raghu, and I'm part of the MEC strategy and architecture team here at Verizon, and my day-to-day responsibilities include 5G Edge architecture design. My core focus is application experience management and how we manage network exposure, so all of you can go ahead and develop some crazy amazing applications on top of the 5G Edge platform we've delivered. I'm very much into cloud-native distributed systems, I also have a lot of experience with software-defined networking, and I'm pretty excited to be here with you. Fantastic. Well, we're going to have a lot of fun today as we talk about, first and foremost: what is 5G Edge, and how does it relate to 5G?
Then, understanding what you need to know about deploying your first Kubernetes application on 5G Edge: from an infrastructure perspective, the key components, where the control plane lives, where the worker nodes live, and how they self-register to the control plane. Then we'll start to think about things like experience management, and what tools Verizon is developing to solve some key challenges that you may not even know are problems today. And then we'll think about building your edge enablement journey: how you get started and where the resources are to follow as you build your first EKS cluster, or Kubernetes cluster more broadly, on Verizon 5G Edge. So let's get started, and I think a really interesting place to start is: why the mobile edge? To me, as you think about consumers' ever-increasing demand for data and immersive experiences, the infrastructure and network of today may not be good enough for tomorrow. Said differently, the cloud today is often concentrated in a few geographies, and if you happen to be outside of those geographies, say in Miami, Florida, that cloud endpoint or cloud application is probably being delivered from 1,000 miles away. So wouldn't it be nice if you could take the best experiences of the network and the best experiences of the cloud, bring them together at the network edge, and physically have those resources topologically closer to users than ever before, perhaps within the radio network in Miami? And in doing so, start to solve challenges around conversion, particularly with transaction-based workloads. We see, for example, that as little as 100 milliseconds of delay can affect conversion rates by as much as 7%. So if you're trying to really decrease that end-to-end delay, it's not just about the optimizations we've made today: things like using a CDN, more intelligent back-end operations, asynchronous requests.
When all of those optimizations are no longer enough, you can physically move where that application lives closer to you, reduce the non-deterministic behavior of the internet and those incremental hops, and in turn deliver a more performant user experience. We at Verizon wanted to ask ourselves: well, where do we come in? And so the question we found ourselves asking was, how do you optimize application logic for the network, and can the network finally become an asset to Kubernetes applications, and applications at large? We fundamentally believe the answer is yes, because we found a way to do so without you having to learn the nuances of the 5G network. You get to use the cloud platforms you know and love, and, simply put, that is what Verizon 5G Edge is all about. It's worth noting, as you think about our portfolio, that 5G Edge is our portfolio of mobile edge computing solutions, and we have two different flavors. We have our private MEC solutions, which ride on top of a private network with compute co-located right there. But today we're going to focus almost exclusively on public MEC, otherwise known as Verizon 5G Edge with AWS Wavelength. The reason we wanted to focus on that is, first and foremost, this solution is available in 13 cities today across the us-east-1 and us-west-2 regions, meaning compute is topologically closer to you than ever before, in more cities; you no longer have to rely on those parent-region deployments, those very few cities that deliver cloud applications today. Also, you don't have to be an expert in 5G networks, as we mentioned: let Verizon deal with the complexities of managing the network, and you can focus on building your application. And lastly, you don't have to learn a new language, new syntax, or new infrastructure concepts.
We wanted that sort of uniform pane of glass: as you develop for the non-edge and for the edge, we want it to feel the same, because at the end of the day, a VM is a VM and a Kubernetes cluster is a Kubernetes cluster. It shouldn't really matter where it is. That's why we partnered with AWS to deliver the AWS Wavelength service, because it provides that same single pane of management across both Wavelength and what we call the parent region, with all of those services, such as EC2 for your virtual machines, EKS for Kubernetes, and EBS for persistent storage volumes; all of those services you know and love are also in AWS Wavelength. What we want to do today is actually deep dive into those services. We've talked a lot so far about what's the same, but we do want to highlight what's different, so that we can help you build your first application on the network edge using Verizon 5G Edge and AWS Wavelength. I think a great place to start is: well, why would you want to use Kubernetes at the network edge? To me, I always say that application modernization practices should still apply. For POCs, what we're seeing from customers is, admittedly, a lot of EC2 instances at the edge, perhaps in an auto scaling group. But as these workloads become ever more complex, particularly as you start to deploy only those microservices which require low latency, and low latency only, you don't have to move the whole application; you only have to move select components. And that's what you're seeing here: the consistency across the infrastructure, the scalability, and the flexibility that you get from Kubernetes. That's why we believe Kubernetes on AWS Wavelength is going to be the application deployment pattern of the future. And there are a couple of things that are particularly compelling that I want to call out.
First and foremost, the first question we get is: I'm an application developer, I already have an EKS cluster, do I need to create another cluster at the edge? Worse yet, if there are 13 Wavelength Zones, is that 13 EKS clusters, or Kubernetes clusters more broadly, to manage? The answer is no. And we're very proud to share that, as you look at the reference architecture on the left, at the highest level (we'll delve into the networking nuances in a second), what you want to know is that an existing control plane running in the region can also support node groups, or worker nodes, at the mobile edge without any major changes to the cluster itself. You're just adding another node group. We thought that would be the easiest way to allow clusters to support not only greenfield deployments but brownfield deployments as well, and we think it's incredibly important that you can take existing workloads and extend them to the edge. The best way to do that is to keep the control plane as is, incrementally deploy additional node groups, and have them self-register to that control plane. AWS has a few handy infrastructure templates for self-managed worker nodes, and we can happily refer you to github.com slash Verizon slash 5g-edge-tutorials, where we've already automated all of this for you, but the key point is incremental node groups at the edge that self-register to the control plane. It's that simple. And who cares? The reason this is so compelling is that now, really for the first time, you can have a single control plane orchestrating containers that are physically, geographically separated by potentially 1,000 miles. Said differently, that one control plane in northern Virginia, in us-east-1, can now have node groups in Boston, New York City, DC, Atlanta, Miami, and so on. You could never do that before, and if you did, it would require a series of undifferentiated heavy lifting. You don't have to do that anymore.
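To make the "one control plane, many zones" idea concrete, here is a minimal sketch. The node names, IPs, and helper functions are hypothetical mocks; what is real is the standard `topology.kubernetes.io/zone` node label and the Wavelength Zone naming scheme (for example, `us-east-1-wl1-bos-wlz-1` for Boston), which is how you would see region and edge workers side by side in a single cluster with `kubectl get nodes -L topology.kubernetes.io/zone`.

```python
# Sketch: a single control plane whose workers span parent-region AZs and
# Wavelength Zones. Node data is mocked here; in a live cluster you would
# read it from the Kubernetes API. All names/IPs below are hypothetical.
from collections import defaultdict

nodes = [
    {"name": "ip-10-0-1-10", "zone": "us-east-1a"},              # parent-region worker
    {"name": "ip-10-0-2-11", "zone": "us-east-1-wl1-bos-wlz-1"}, # Boston Wavelength Zone
    {"name": "ip-10-0-3-12", "zone": "us-east-1-wl1-mia-wlz-1"}, # Miami Wavelength Zone
]

def group_by_zone(nodes):
    """Group worker nodes by their topology.kubernetes.io/zone value."""
    zones = defaultdict(list)
    for node in nodes:
        zones[node["zone"]].append(node["name"])
    return dict(zones)

def is_wavelength_zone(zone):
    """Wavelength Zone IDs embed '-wl' after the region name."""
    return "-wl" in zone

grouped = group_by_zone(nodes)
edge_zones = [z for z in grouped if is_wavelength_zone(z)]
```

The point of the sketch is simply that, from the control plane's perspective, an edge node group is just more nodes with a different zone label; nothing about the cluster API changes.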
It's really that simple. We take care of the connectivity from each of these Wavelength Zones back to the parent region via the service link, and we take care of the complexity of the network itself so you can focus on building the application. There are a few things you do need to know about, though, and they have to do with the connectivity you see here in the bottom right: key components in mobile edge architectures that we think are very important and worth calling out. First and foremost, we know about our VPC, the virtual private cloud, that logical isolation of resources. The key thing here is that this Wavelength Zone, this mobile edge, is not some weird standalone entity outside of the VPC. It's just another subnet. In fact, a Wavelength Zone is treated as an availability zone: when you create subnets in a VPC, you can have a subnet in an availability zone, in a Wavelength Zone, multiples of each; it doesn't matter, it's just treated as an availability zone from a subnetting perspective. However, from a routing perspective, you need a carrier gateway. The reason you need a carrier gateway is that the traditional network address translation between the private IPs allocated to your VPC and public IPs doesn't actually make sense in the context of Wavelength, because those public IPs are Verizon-allocated addresses that are unique to our network. We call them carrier IPs. And because these carrier IPs are unique to our network, we needed a different appliance. So it's not called an internet gateway but a carrier gateway, and you need to attach it to your VPC, much like you would an internet gateway, and in turn have a route table unique to your carrier subnet traffic, which I've tried to highlight here in that little purple box in the middle. For traffic destined within your VPC, the local route stays within your VPC.
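As a sketch of that route-table layout, assuming a hypothetical VPC CIDR and carrier gateway ID: the Wavelength subnet's route table keeps intra-VPC traffic on the local route and sends everything else to the carrier gateway (the real AWS resource type behind `aws ec2 create-carrier-gateway`), rather than to an internet gateway.

```python
# Sketch of the routing described above for a Wavelength subnet's route table.
# IDs and CIDRs are hypothetical; in practice these routes are created via the
# AWS API/CLI, e.g.:
#   aws ec2 create-route --route-table-id rtb-... \
#       --destination-cidr-block 0.0.0.0/0 --carrier-gateway-id cagw-...

def wavelength_route_table(vpc_cidr, carrier_gateway_id):
    """Build the routes for a route table associated with a Wavelength subnet."""
    return [
        # Intra-VPC traffic: handled by the implicit local route.
        {"destination": vpc_cidr, "target": "local"},
        # Everything else egresses over the Verizon carrier network,
        # via the carrier gateway instead of an internet gateway.
        {"destination": "0.0.0.0/0", "target": carrier_gateway_id},
    ]

routes = wavelength_route_table("10.0.0.0/16", "cagw-0123456789abcdef0")
```

The design choice worth noticing is that only the gateway changes; the route-table mechanics are exactly the same ones you already use for public subnets.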
However, the route table attached to that Wavelength Zone is probably going to send egress traffic through the carrier gateway. You don't want to have to go all the way back to northern Virginia, to that parent region, and then proxy traffic out. You could, but you may as well use the carrier network that's right there; that's the whole point, to get that low-latency connection, and that's why we designed the carrier gateway. So with carrier gateways and carrier IPs, you understand the networking fundamentals, and the Kubernetes fundamentals we just described are incredibly simple: you just extend your cluster, add node groups, and make sure the connectivity is sound. One thing worth noting, for those who want to delve into the nuances: depending on the cluster endpoint type, those worker nodes need to be able to reach the control plane to self-register, which they might not be able to do if you haven't attached a carrier IP to them. What you can do is very simple: with EC2 interface endpoints, the nodes can talk to the EC2 control plane and in turn register to the cluster. All of this information can be found on the Verizon 5G Edge tutorials GitHub repo we mentioned. Now, we've talked a little bit about Kubernetes, but I want to set up a problem statement here that there is truly no better person than Raghu to talk about. It's the following. We talked about these carrier IPs that only Verizon natively understands. To you, they all look like 155.146.x.y; they all look the same, but they actually correspond to different endpoints in Miami, San Francisco, Las Vegas, Boston. This is a headache. And worst of all, DNS today, geolocation-based routing, has no idea where these addresses are. So you need a way to figure out, at any given point, what's closest to you; you don't want to hard-code a carrier IP address.
There's got to be a better way to figure out what's closest to me at any given point, and to let network intelligence take over. And so I'll turn it over to Raghu to talk a little bit more about the edge discovery service. And I believe you're on mute there, Raghu. Thanks, Robbie. Any application that we can imagine today is geographically distributed, and the problem statement Robbie mentioned earlier is aptly suited to this kind of geographically distributed application. In the public region, the concept is pretty simple. You have four regions to choose from, us-east-1, us-east-2, us-west-1, and us-west-2, taking Amazon as an example, and any application developer can pick one region and deploy the application. A device or an end user connecting to that application can pick one of those four regions based on geolocation and just connect to it, and DNS is well suited for that kind of resolution. But when it comes to 5G Edge, or this kind of multi-access edge compute, we have much denser deployments. Taking the example of Wavelength itself, we have 10 sites for 5G Edge, or 13 sites including the ones we launched this year. Then how does an application developer determine where to place the application? And how does an end user determine, out of all those 13 sites, which is the right one to connect to? This is a very big problem statement when it comes to edge computing. And here at Verizon, we are using our network intelligence, abstracting all the complexity, and giving you a very simple service you can use to solve two use cases. The first is the point of view of the application developer: how do I find an optimal MEC platform that meets the application requirements? You might have policy requirements stating the latency or application performance you need; how do we determine the optimal MEC that satisfies them? The second point of view is the end user's.
How does a device, or a user, connect to those application servers already running on the predetermined MEC sites? We at Verizon have developed a very simple-to-use edge discovery service, and that is what we're going to talk about in the next couple of slides. Let's take a very practical example. There's a cloud gaming application, and the developer who built it wants to deploy it to certain of the 13 sites, the ones that maximize their ROI for the application. They use the edge discovery service and determine that they need to place the application in three sites: Boston, Miami, and New York. Let's talk about the East Coast for a second. Now, when the application is deployed, how does the client or the mobile device know which Wavelength Zone, or which application endpoint, to connect to, between Boston, New York, and Miami? Geolocation is not going to solve the problem. For these three examples, between Boston and New York, you can use reverse geolocation or a reverse IP lookup and come up with a rough estimate, but the denser we go, the more complex the problem becomes, because you have many more sites to choose from and the resolution might not be accurate. And for latency-critical, performance-sensitive applications, even a small difference is going to matter. So once the application is deployed, the end user needs to connect to the right endpoint. The developer programs the edge application to call the Verizon edge discovery service, or EDS as we call it, and Verizon EDS identifies the optimal network path for the mobile device to connect to the MEC platform.
In the response, we also determine the optimal endpoint for the application server running on that MEC platform, so the device can connect to it and get the optimal experience. And again, the optimal experience is defined by the enterprise, the cloud gaming application developer, as part of the policy, as part of the profile of the application. As a result, the device will connect to the MEC instance, the application server running on the MEC platform, selected by Verizon by looking at the network conditions and the various latency metrics, abstracting all of that, and providing a simple interface. So let's do a bit of a technical deep dive, giving you an example of exactly how it works. The cloud gaming application has been deployed in three sites, Boston, New York, and DC, and for each you get a carrier IP address, as you can see, in the 155.146 range. These are, again, allocated by Verizon. The enterprise in this case consolidates all of this information and pushes it to the edge discovery service, making an API call with information like the application server ID, calling one the Boston test instance, another the New York test instance, and another the DC test instance, along with details about each, like the IP address where the application is running. All of this is provided to the edge discovery service as part of service registry. Next, the device, or the application running on the device, makes a query to the edge discovery service, passing in the UE identity, the identity of the end device; in this case we are taking the example of the end mobile device's IP address.
The edge discovery service then uses that IP address to determine which of the three application servers provided by the application developer is optimal, and returns it to the device to connect to. As you can see, all of the complexity of determining the right application server based on the user's IP address is abstracted away, which makes it very simple for the application developer, the enterprise, or even the end device to find the right application server. And with that, back to you. Thanks, Raghu. What I wanted to do in the final slides here is close with a thought experiment, an open challenge and opportunity that we see as one great application of the edge discovery service. You can see how incredibly important it is, because without it, you're either going to guess, and guess wrong; in this case, you're only going to guess right one out of every three times. You could just have top-level DNS records that round-robin you to victory, which in most cases, again, would be wrong. Or we can get a little smarter about this and use the edge discovery service to our advantage, particularly in a Kubernetes environment. One thing we've been seeing a lot of is that the EDS is invoked manually most of the time. Even if you've developed your own SDK for it, congratulations, but let's say you're exposing your services today; it's worth noting that in the absence of a load balancer, a lot of customers are using node ports to expose services. So today, for those three addresses we saw before in Boston, New York City, and DC respectively, you have to call the EDS API manually each time a node port is exposed, or when you port-forward to it. And that's a challenge: as the application grows and you use more Wavelength Zones, you can see a scenario where one manual error results in clients being blissfully unaware of application infrastructure that you're paying for.
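The register-then-query flow Raghu just walked through can be sketched as a toy model. To be clear, the real EDS is a Verizon REST API; the class name, method names, field names, and the latency table below are all mock assumptions made to illustrate the abstraction, not the actual API schema.

```python
# Toy model of the EDS flow: the developer registers one carrier-IP endpoint
# per MEC site (service registry), then a device queries with its identity
# (here, its IP) and gets back the endpoint with the best network path.
# Everything here is a mock; latencies stand in for Verizon's network
# intelligence, which the real service computes for you.

class MockEdgeDiscoveryService:
    def __init__(self, latency_ms):
        # latency_ms maps (device_ip, mec_site) -> network latency in ms.
        self.latency_ms = latency_ms
        self.registry = {}  # service id -> list of registered endpoints

    def register_service_endpoints(self, service_id, endpoints):
        """Service registry step: store the per-site application endpoints."""
        self.registry[service_id] = endpoints

    def query(self, service_id, device_ip):
        """UE query step: return the endpoint on the lowest-latency MEC site."""
        return min(
            self.registry[service_id],
            key=lambda ep: self.latency_ms[(device_ip, ep["site"])],
        )

eds = MockEdgeDiscoveryService(latency_ms={
    ("100.64.10.5", "bos"): 12,
    ("100.64.10.5", "nyc"): 25,
    ("100.64.10.5", "dc"): 40,
})
eds.register_service_endpoints("cloud-gaming", [
    {"site": "bos", "ip": "155.146.1.10"},  # carrier IPs; values hypothetical
    {"site": "nyc", "ip": "155.146.2.10"},
    {"site": "dc",  "ip": "155.146.3.10"},
])
best = eds.query("cloud-gaming", "100.64.10.5")  # -> the Boston endpoint
```

The developer never reasons about which 155.146.x.y address is where; the selection logic lives entirely behind the service interface, which is the abstraction the talk is describing.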
So we started thinking: maybe Kubernetes can help. Maybe we can tie together the network intelligence story we just talked about with the Kubernetes world, and it occurred to us that we can start thinking about admission webhooks, or, in the future, operators. I'll start with the operator story, which is perhaps longer-term thinking, and then talk about something we could do today. Conceivably, there's nothing stopping the edge discovery service from being a CRD. If that's the case, then we can have custom controllers taking care of all the application logic: the creation, the updating, all the CRUD operations associated with these endpoints being exposed, by node port, load balancer, or otherwise, always watching for changes and updating EDS. So now the API is only invoked in the back end; essentially, once you create that resource, you never have to touch it again, because Kubernetes takes care of it. And to me, that's incredibly powerful, particularly in a world where, as we're seeing from customers, the carrier IPs aren't always even stable; they're ephemeral by their very nature. You spin up an EC2 instance, it has a carrier IP, it goes down, it comes back up with a different address. Imagine the operational complexity of managing that yourself. Operators are one example, but something we can definitely use today is admission webhooks: a much simpler way to say, hey, Kubernetes, always watch for the creation of these node ports, and in doing so, populate the edge discovery service accordingly. And so, to me, this is a really exciting, admittedly new, problem space where the Kubernetes community and the network intelligence community, here at Verizon and beyond, can come together to build some really unique integrations. And it's worth noting that this is just one example of network intelligence; I'll give you one more to close out the session.
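As a sketch of the admission-webhook idea, here is the shape of the logic such a webhook might run, written as a plain function: inspect an AdmissionReview for a NodePort Service, capture its node ports so a controller could register them with EDS automatically, and allow the request. The AdmissionReview request/response structure follows the real `admission.k8s.io/v1` schema; the EDS registration itself is stubbed, and the carrier IPs are hypothetical.

```python
# Sketch: webhook-style handler that notices NodePort Services and records
# carrier-IP:nodePort endpoints for EDS registration. The EDS call is stubbed
# as a list append; a real webhook would serve this over HTTPS to the API server.

registered_endpoints = []  # stand-in for calls to the EDS service registry

def handle_admission_review(review, worker_carrier_ips):
    """On a NodePort Service, record one endpoint per edge worker's carrier IP,
    then return an allow response in admission.k8s.io/v1 form."""
    obj = review["request"]["object"]
    if obj.get("kind") == "Service" and obj["spec"].get("type") == "NodePort":
        for port in obj["spec"]["ports"]:
            for carrier_ip in worker_carrier_ips:
                registered_endpoints.append(f'{carrier_ip}:{port["nodePort"]}')
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {"uid": review["request"]["uid"], "allowed": True},
    }

review = {
    "apiVersion": "admission.k8s.io/v1",
    "kind": "AdmissionReview",
    "request": {
        "uid": "abc-123",
        "object": {
            "kind": "Service",
            "spec": {"type": "NodePort",
                     "ports": [{"port": 80, "nodePort": 30080}]},
        },
    },
}
resp = handle_admission_review(review, worker_carrier_ips=["155.146.1.10"])
```

The webhook never rejects anything here; it only observes, which is why the talk frames it as the simpler near-term option compared to a full operator reconciling an EDS CRD.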
And that is: today, by and large, application metrics make application decisions. The favorite example that's often discussed when you're first getting into DevOps or infrastructure monitoring is CPU utilization: once you hit a particular threshold, you scale out, maybe from one instance to two instances. In the future, maybe you don't have to rely on CPU utilization as a proxy for load. Maybe you already know where your devices are, and you can preemptively scale before you hit that utilization threshold. And by the way, for the example I just highlighted, we have working code to prove it, using Verizon ThingSpace and basic auto scaling groups on AWS Wavelength, where you can have your auto scaler tied to those ThingSpace devices. So whether you're thinking about this from the perspective of IoT devices and auto scaling groups, or Kubernetes, admission webhooks, and network intelligence, we think there's a huge opportunity to consider network intelligence like never before in how you create, update, and maintain Kubernetes clusters, and we invite you to participate in this community by getting started with your first Kubernetes cluster on AWS Wavelength. As a few examples of ways to get started: you can visit our developer resources page by going to Verizon.com slash 5G edge and clicking on developer resources. You can go to our GitHub page that we mentioned, 5G edge tutorials. You can visit the Verizon 5G Edge blog. You can check out our 5G Labs at Verizon 5G labs.com slash edge for our latest immersion day, hands-on training, and more. And we'd love to continue the discussion with you: you can find us on Twitter, or you can email the Verizon 5G Edge team to continue the discussion. We'd love to have that conversation, and I will leave the final word to you, Raghu.
Yeah, I'm pretty excited for all the amazing things the developers here can build with the technology we're providing, and I'm pretty sure network intelligence is going to change the game for edge computing and for how low-latency applications are built. As Robbie explained, we have all these services, tutorials, blogs, and more for you in our edge portal, so please do visit and start building amazing things. And with that, thanks everyone for joining. Stay safe, stay healthy, and I hope you enjoy the rest of the conference. Thank you.