Good afternoon, everyone. I'm Praveen Mada, a staff AI/ML engineer at Dish Wireless, and for the next few minutes I'll be talking about how we are building self-perfecting networks. We have been building the Dish Wireless network for the last three years, and we want to reimagine connectivity. How are we doing that? We want to harness the power of the 5G infrastructure we are building on the networking side and combine it with artificial intelligence (AI) to build self-perfecting networks. What's a self-perfecting network? A network that can monitor, manage, and perfect itself. I'll be taking you through the journey we went through over the last few years, starting with how we built our underlying 5G network.

For the last three years, we built it on cloud-native principles. What do I mean by that? We built our 5G core as an elastic app that scales on top of Kubernetes: all the underlying network functions, geographically distributed across the 18,000 edge nodes that we run, run as individual apps on Kubernetes and also in the cloud.

The next guiding principles we worked on are building an open network and a secure network. When I say open, it's not a buzzword for us. We wanted to build the network together with our partners; we work with multiple vendors, and we pioneered the Open RAN setup. And when we say open APIs, we want to expose all the information the network generates, from the Kubernetes layer up through the network functions and the services running on top of them, as APIs, both for our vendors to manage the underlying infrastructure and for individual developers, who will be part of a marketplace, so they can build apps on top of our infrastructure. The second principle is building a secure network.
Security is the number one thing that we focus on. Any component we build as part of our 5G infrastructure has security built into it. You can also think of the infrastructure we are building as an observability tool for managing our retail customers and, eventually, our enterprise customers. All the network data can be exposed to our enterprise customers to optimize their applications and provide the best possible service for our end customers.

The last principle is flexibility. What do we mean by that? The network is distributed across the whole US, with 18,000 edge nodes, so we need a way to control it. And when we say control, we don't want to keep that control to ourselves; we want to give the power to our enterprise customers, because they know the use cases they want to run on the network better than we do. We are building a kind of hyperscaler that is distributed: we are not just in NDCs and RDCs, we have edge nodes and nodes at our towers as well. We want that power in the hands of enterprise customers so they can optimize the applications they want to run on our infrastructure.

All of these principles are the underlying foundation for our infrastructure. But as Dish started this journey of reimagining connectivity, where we want to build an open, secure, and flexible network, we encountered two main challenges because of the scale of the network. The first is: how do we scale this network for a multitude of use cases and for enterprise customers? And once we scale it, how do we maintain that infrastructure? We are not providing a one-size-fits-all infrastructure for our enterprise customers; we are providing a customized service that fits their use cases. So how did we do that? The data generated by our infrastructure amounts to petabytes per day.
It's not humanly possible to understand the implications of that much data, so we turned to AI: we train models to understand the data generated by our infrastructure, and we train reinforcement learning agents that take actions on the patterns observed in that data.

Let's start with the first part, machines identifying patterns, where deep learning techniques have proven instrumental. At a high level, deep learning is an AI technique where a machine processes data by emulating how the brain does it, using neural networks; a neural network is, at the highest level, a distributed collection of nodes that talk to each other. The model we train is an autoencoder: a neural network that learns patterns by reconstructing, or recreating, its input. How do we train this autoencoder? It has two parts. The first is an encoder, which takes in the complex input data and synthesizes and extracts essential features from it. It passes those on to the second part, a decoder, which takes the essential features identified by the encoder and tries to reconstruct the input data. The difference between the input data and the reconstructed data, the residuals, is how we train the machine to identify patterns.

This base model is trained on EKS metrics, our performance metrics data, and it also takes in application logs from the network functions running on the infrastructure. Now we have machines that identify patterns, so next we need machines that react to those patterns. We did try providing that output to a NOC, but at this scale it's going to be hard for humans to understand the implications.
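To make the encoder/decoder/residual idea concrete, here is a minimal linear autoencoder in NumPy, trained by plain gradient descent on reconstruction error. Everything here (dimensions, synthetic "metrics", weight shapes) is invented for illustration; the model described in the talk is a deep autoencoder trained on real EKS metrics and network-function logs, not this toy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "network metrics": 8 KPIs that actually follow a
# 2-dimensional latent pattern (a hypothetical stand-in for EKS metrics).
n_samples, n_features, n_latent = 200, 8, 2
latent = rng.normal(size=(n_samples, n_latent))
mixing = rng.normal(size=(n_latent, n_features)) / np.sqrt(n_latent)
X = latent @ mixing

# Linear autoencoder: encoder W_enc compresses to essential features,
# decoder W_dec rebuilds the input from them.
W_enc = rng.normal(size=(n_features, n_latent)) * 0.3
W_dec = rng.normal(size=(n_latent, n_features)) * 0.3

def reconstruct(data):
    return (data @ W_enc) @ W_dec

lr = 0.02
for _ in range(3000):
    Z = X @ W_enc                  # encoder: extract essential features
    X_hat = Z @ W_dec              # decoder: reconstruct the input
    E = X_hat - X                  # residuals drive the training signal
    grad_out = 2.0 * E / n_samples
    grad_dec = Z.T @ grad_out
    grad_enc = X.T @ (grad_out @ W_dec.T)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

def anomaly_score(x):
    """Mean squared residual: a large value means an unfamiliar pattern."""
    x = np.atleast_2d(x)
    return float(np.mean((reconstruct(x) - x) ** 2))

# A healthy sample follows the learned pattern; an anomalous one does not.
normal_sample = rng.normal(size=(1, n_latent)) @ mixing
anomalous_sample = rng.normal(size=(1, n_features)) * 2.0
print(anomaly_score(normal_sample), anomaly_score(anomalous_sample))
```

The key point is the last function: once trained, the autoencoder reconstructs familiar traffic patterns almost perfectly, so a large residual flags data the network has not seen behave that way before.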
So we thought, why don't we simplify that process and train machines to react to those patterns as well? We went with reinforcement learning, a decision science that trains an agent to learn optimal behavior in the environment it's trained in; the agent's main focus is to collect as many rewards as it can over a long time horizon. The agent we trained is called FONPR, the First Open Network Pattern Reactor, and I'll give you a snapshot of how we trained it.

At the highest level, reinforcement learning has five components. The first is the environment; in our case, that's the 5G network we are running. Then there is the agent, whose job is to understand the environment and react so that it earns optimal rewards. The agent interacts with the environment by taking actions, and it gets snapshots of information from the environment through states. The only action our agent takes is scaling the underlying infrastructure up or down, based on the use cases or the traffic on it.

Going deeper into how we designed the rewards: the reward is the guiding principle that keeps the FONPR agent in check, striking an optimal balance between the actions it performs and the reward it obtains. These are the key features we incorporated into our reward function. Since this is the V1 version, the first thing we wanted is flexibility, so that we can add more components later. On the business side, we started with two business-focused objectives: providing the utmost service continuity, and optimizing the underlying infrastructure so that we can be profitable.
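As a concrete sketch of those five components, here is a toy tabular Q-learning agent whose only action is scaling replicas up or down, with a reward that balances the two objectives above: service continuity versus infrastructure cost. The traffic buckets, reward weights, and state space are all invented for illustration; the talk does not disclose the actual state, action, or reward design of the production agent.

```python
import numpy as np

rng = np.random.default_rng(42)

DEMAND_LEVELS = (0, 1, 2)   # observed traffic buckets (illustrative)
MAX_REPLICAS = 3
ACTIONS = (-1, 0, +1)       # scale down / hold / scale up

def step(demand, replicas, action):
    """Apply the scaling action, then score it against current demand."""
    new_replicas = int(np.clip(replicas + action, 0, MAX_REPLICAS))
    continuity = 1.0 if new_replicas >= demand else -2.0  # service-continuity term
    cost = 0.1 * new_replicas                             # infrastructure-cost term
    reward = continuity - cost
    next_demand = int(rng.choice(DEMAND_LEVELS))          # traffic moves on
    return (next_demand, new_replicas), reward

# Tabular Q-values over (demand, replicas) states.
Q = {(d, r): [0.0, 0.0, 0.0]
     for d in DEMAND_LEVELS for r in range(MAX_REPLICAS + 1)}
alpha, gamma, eps = 0.1, 0.9, 0.3

state = (0, 0)
for _ in range(50_000):
    # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
    a_idx = int(rng.integers(3)) if rng.random() < eps else int(np.argmax(Q[state]))
    next_state, reward = step(state[0], state[1], ACTIONS[a_idx])
    # Standard Q-learning update toward reward + discounted future value.
    Q[state][a_idx] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a_idx])
    state = next_state

def best_action(state):
    return ACTIONS[int(np.argmax(Q[state]))]

# After training: under-provisioned states should scale up,
# over-provisioned states should scale down.
print(best_action((2, 0)), best_action((0, 3)))
```

Note how the reward function encodes the trade-off directly: breaching continuity is penalized far more heavily than running an extra replica, but every replica still carries a cost, so the learned policy tracks demand rather than over-provisioning.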
With all of this information, I'd like to take you a few years into the future and imagine a Dish 5G network run by robots, by FONPR agents, where a network of agents is communicating, understanding, and reacting to these patterns. That's how we at Dish envision self-perfecting networks. All the AI work our team does is open source: the code is available through our GitHub repos via the Dish DevEx developer community, and the research we are doing is also available through our Medium posts. We are looking for developers who would like to join our team and join hands with us, so that we can solve more enterprise-specific problems and change how the world communicates. Thank you.

Thank you, Praveen. If you have questions for him, you can find him at our break or later this evening at our reception. Next up, for our last lightning talk before our break, we have Kubernetes Gateway API for complex environments and service providers.