Hey, good morning, good afternoon, good evening, and welcome to our session, Architecting Geo-distributed and Hybrid 5G Edge with Kubernetes. My name is Sachin Rathi, I'm responsible for edge technologies at Red Hat, and I have with me Robert Belson from Verizon. Robbie?

Thanks, Sachin, and hello everyone, thanks so much for joining this session. We're super excited to tell you a little more about our experiences working together to build the next generation of hybrid mobile edge compute applications using Kubernetes. And I think it all starts with defining what the edge actually is, because it's an inherently ambiguous term. I'd argue that the edge is the device you often have in your back pocket, the various 5G radios you might see as you walk along a city street or a highway, and anything in between. Any edge endpoint can be defined as part of an overall cloud continuum. And what gets us so excited about the edge isn't just that it's more performant, it's not just the speed, the latency, and the throughput; it's that you can start to unlock new revenue streams, because edge topologies often let you introduce new features. The ability to unlock innovation and drive new revenue streams, while also delivering a more performant end-user experience, is, we think, a really exciting opportunity. And that's why we teamed up with Red Hat to explore this in earnest, particularly on the architectural side.

I want to talk a little bit about the challenges here, because while edge applications unlock all of these fascinating capabilities, they're pretty difficult to build. As you think about edge use cases that demand higher performance, perhaps a disaster-response use case, today, in the absence of the edge, you could find yourself in a scenario where the actual workload is a thousand miles away from those end devices. That's not going to work in the future.
More broadly, can you introduce the edge in a way that doesn't disrupt the existing ways you deploy applications and orchestrate your containerized workloads? And lastly, as you start to introduce new paradigms, not just for the cloud itself but for your device fleet, will you know ahead of time whether a device is connected to a public network or a private network? Could you find yourself in a world where you have a mix of devices, and how does your edge environment care for that? Do you need to decide ahead of time? We worked with Red Hat to address each of those challenges by creating a Kubernetes-native implementation.

Now, when looking at Kubernetes solutions suitable for the edge, we're not just looking at edge deployment options; we're looking at an end-to-end solution. This is where OpenShift, which is a Kubernetes distribution, comes into play. It's a solution that takes care of application development, provides consistency in deployment, and maintains consistent resource consumption no matter where it's deployed. OpenShift can also interface with any infrastructure component, whether networking or storage, and supports various CPU architectures and GPUs, depending on the use case. From a security perspective, how data is handled at the edge is also of paramount importance, and for that, OpenShift provides a highly secure solution built on RHEL CoreOS. With OpenShift, you have a solution that can be deployed seamlessly on any infrastructure using its native capabilities, whether it's a full deployment or an installation on an existing environment. It can also be sized to the use case, from a single-node deployment in a very constrained environment to a remotely managed environment with a number of nodes deployed across different sites.
And as you think about the reference architecture here, at the highest of levels, there are ongoing discussions about how best to deploy Kubernetes at the edge. From our experience, what we think is going to be most successful from a mobile edge perspective is to take the control plane for a given region and centralize it in the region itself. Then, for each incremental set of edges, each of which could be separated by over a thousand miles, the nodes would essentially be the spokes in a hub-and-spoke architecture. That way, you're reusing the same control plane. It's cost-efficient. It's easier to manage. And in a future of 10, 20, 50, or 100 edges, you reduce the complexity of management. Nonetheless, you'll now have challenges around multi-cloud and multi-region: what do I do, and how do I keep the simplicity? That's where OCM comes in.

Now, the edge is not limited to a particular geography or technology. The edge could be private to an enterprise, or public, where it can be accessed by anyone. It could be on one cloud platform or on multiple disparate cloud platforms. However, the need to manage from a central location still remains. This is where the Open Cluster Management project, a CNCF project, comes into play. Open Cluster Management, which Red Hat supports as Advanced Cluster Management, solves the problem by ingesting cluster configuration from multiple Kubernetes platforms and using it to manage edge applications deployed anywhere. Once clusters are under Advanced Cluster Management, users get full observability and can manage all Kubernetes clusters from a single dashboard. Orchestration of applications can be done based on various policies, making sure that deployment follows your compliance requirements.
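To make the policy-driven placement idea above concrete, here is a minimal sketch of label-based cluster selection, loosely modeled on how Open Cluster Management matches workloads to managed clusters. The cluster names and labels are hypothetical; a real deployment would express this with OCM Placement resources rather than application code.

```python
# Toy label-based placement, in the spirit of OCM's cluster selectors.
# Cluster names and labels below are made up for illustration.

def select_clusters(clusters, required_labels):
    """Return names of managed clusters whose labels satisfy every
    key/value pair in required_labels."""
    return [
        c["name"]
        for c in clusters
        if all(c["labels"].get(k) == v for k, v in required_labels.items())
    ]

managed_clusters = [
    {"name": "hub-us-east", "labels": {"region": "us-east", "network": "private"}},
    {"name": "edge-boston", "labels": {"region": "us-east", "network": "public"}},
    {"name": "edge-dallas", "labels": {"region": "us-south", "network": "public"}},
]

# Place a workload on every public-network edge cluster in us-east.
print(select_clusters(managed_clusters, {"region": "us-east", "network": "public"}))
# → ['edge-boston']
```

The same selector could encode a compliance requirement, for example restricting a workload to private-network clusters only.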
Now, infrastructure automation requires software that creates repeatable instructions and processes, reducing human interaction and making management less error-prone. The right automation platform, installed and running in the right location, can improve activities like provisioning, configuration management, and security and compliance. The benefits of automation are more noticeable when the automation software runs as close as possible to the thing it's automating; when automation is rolled out to the edges of the network, it helps speed up those transactions. Ansible Automation Platform, which we used in this case, makes use of blueprints of automation tasks called playbooks. Ansible playbooks are frameworks that can program edge applications, services, server nodes, infrastructure, Kubernetes clusters, and so on. Ansible Automation Platform acts as the control node, the location from which automation tasks execute. This allows us to automate edge locations with the lowest latencies.

But now let's talk about one additional challenge. In a world where you've fully automated and abstracted away the complexity of the infrastructure itself, you've still introduced a problem around edge discovery. Even in the case of a single cluster, how does a given mobile application natively understand which edge is the closest edge? In fact, if we borrow from the airline industry, which often says the closest exit may be behind you, that is very much true in an edge discovery scenario. It could well be that you're in Boston, a highly immersive application needs to be delivered, and the closest edge is in Miami. It all has to do with the topology of the mobile network and where your device is anchored within the packet core of our mobile network. With that said, as a developer, you shouldn't have to manage that complexity; you're already managing the complexity of the infrastructure itself.
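As a sense of what the playbooks mentioned above look like, here is an illustrative fragment. The host group, package, and service names are assumptions for the sketch, not details from the session; the modules shown (`ansible.builtin.package`, `ansible.builtin.service`) are standard Ansible built-ins.

```yaml
# Illustrative Ansible playbook for configuring edge nodes.
# Host group, package, and service names are hypothetical.
- name: Configure edge nodes
  hosts: edge_nodes
  become: true
  tasks:
    - name: Ensure chrony is installed for time synchronization
      ansible.builtin.package:
        name: chrony
        state: present

    - name: Ensure chronyd is running and enabled at boot
      ansible.builtin.service:
        name: chronyd
        state: started
        enabled: true
```

Because playbooks are declarative and idempotent, the same file can be replayed against tens or hundreds of edge locations without accumulating drift.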
That's why we developed the Edge Discovery Service, an API that any developer can use to solve these challenges. It's a simple API that gives you CRUD operations on a service registry: you populate edge endpoints with an FQDN and the IP address for those carrier-facing workloads, and then you query it, passing your IP address as a mobile or IoT device, and out comes the closest edge. It's incredibly powerful. But then we asked: what's the right infrastructure platform to automate away the complexity of keeping that service registry up to date?

First and foremost, that's where OpenShift comes in. We wanted to create a future-proof architecture for a multi-cloud world, but also a hybrid cloud world, where those carrier-facing endpoints could in fact be both public- and private-network facing. You just want one single pane of glass, and that's exactly what you're seeing here on the left-hand side. These machine sets span a physical data center, or rather a physical outpost, in Dallas, and then equipment you've perhaps never even seen before, because it's a Wavelength Zone, in Boston and in New York City, on top of the region itself. So you have all of this infrastructure, managed for you, separated by 1,000 miles, all abstracted away via machine sets. And by inspecting each of these nodes or workloads, you can see all of the relevant metadata right at your fingertips.

But going back to the Edge Discovery Service: we created an admission controller so that any time a request is made to the Kubernetes API server to expose a given workload, one the Edge Discovery Service would need to know about, we intercept that request, figure out the node being exposed, take that metadata, populate it to the Edge Discovery Service, and keep that valuable state information in a ConfigMap.
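The register-then-query flow described above can be sketched as a toy in-memory registry. All endpoint names, zone identifiers, and the device-to-zone mapping here are hypothetical; the real Edge Discovery Service derives "closest" from the mobile network's topology and where the device is anchored in the packet core, not from a static lookup table.

```python
# Toy service registry mimicking the CRUD + query flow of an edge
# discovery API. Names, zones, and the device-to-zone map are made up;
# a real service resolves proximity from mobile-network topology.

class EdgeRegistry:
    def __init__(self, device_zone_map):
        self.endpoints = {}                      # zone -> {"fqdn", "ip"}
        self.device_zone_map = device_zone_map   # device IP -> anchored zone

    def register(self, zone, fqdn, ip):
        """Create/update: record a carrier-facing workload endpoint."""
        self.endpoints[zone] = {"fqdn": fqdn, "ip": ip}

    def deregister(self, zone):
        """Delete: drop a zone's endpoint from the registry."""
        self.endpoints.pop(zone, None)

    def closest_edge(self, device_ip):
        """Query: map the device to its anchored zone, return that edge."""
        zone = self.device_zone_map.get(device_ip)
        return self.endpoints.get(zone)

registry = EdgeRegistry({"203.0.113.7": "wl1-bos-wlz-1"})
registry.register("wl1-bos-wlz-1", "edge-bos.example.com", "198.51.100.10")
registry.register("wl1-mia-wlz-1", "edge-mia.example.com", "198.51.100.20")

print(registry.closest_edge("203.0.113.7")["fqdn"])
# → edge-bos.example.com
```

In the architecture described in the talk, an admission controller would call the equivalent of `register()` whenever a carrier-facing workload is exposed through the Kubernetes API server, keeping the registry current without developer involvement.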
So this is how we're using Kubernetes-native objects to bring network intelligence together with a hybrid, multi-cloud mobile edge computing world. We recognize that this is just the start. We invite you to join us on this journey by visiting github.com/Verizon/5GEdgeTutorials to learn more. And we hope that together, Verizon and Red Hat can be your trusted partner in solving your next generation of challenges and opportunities on the hybrid mobile edge. Thanks so much, and we hope you enjoy the rest of KubeCon. Thank you.