So this is a short talk. I'm Amar Kapadia, co-founder at a startup called Aarna Networks, and the title of my talk is Nephio at the Edge. As we will see, Nephio uses Kubernetes, and it's especially suitable for the edge, so it's almost a perfect talk for Kubernetes on Edge Day.

First, the problem statement: what are the new problems that come up when you try to run a workload at the edge, for example network service orchestration and management? In my view, three things come up. First is scale. As we all know, the edge can be massive in comparison to, say, the cloud: tens of thousands of locations, hundreds of thousands of devices. That's the first challenge. The second is the dependence a workload has on the infra. For example, a network service such as a radio access network often requires GPUs or DPUs, so you have to configure the infra to suit the workload's requirements. And the third is just heterogeneity. The edge is heterogeneous: there are hyperscalers providing edge solutions and software companies providing CaaS solutions for private edge implementations. There's just a lot of compute, storage, and networking; if you look at the diversity, it's massive.

So this was actually a new piece of information to me: Kubernetes is more than just container orchestration. I had just assumed that, and what I learned over time is that Kubernetes is a general-purpose control plane that just happens to do container orchestration first. It can do a lot more. So what if we could use Kubernetes for the problem that I just outlined? And that's exactly Nephio. By a raise of hands, how many people are familiar with Nephio? Okay, so a few people. So hopefully this will be new for a lot of people.
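To make "general-purpose control plane" concrete: Kubernetes can be taught to manage arbitrary resources through CustomResourceDefinitions. The hypothetical EdgeSite resource below is not part of Nephio; it's just a sketch, with an invented group and invented fields, of how a non-container concept becomes a first-class Kubernetes object that an operator can then reconcile:

```yaml
# Hypothetical CRD: teaches the Kubernetes API server about an
# "EdgeSite" object. The group and fields are invented for illustration.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: edgesites.demo.example.com
spec:
  group: demo.example.com
  scope: Namespaced
  names:
    plural: edgesites
    singular: edgesite
    kind: EdgeSite
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                location:
                  type: string
                gpu:
                  type: boolean   # workload/infra dependency, e.g. a RAN needing GPUs
```

Once a CRD like this is registered, `kubectl get edgesites` works like any built-in resource, and a custom controller can reconcile those objects, which is exactly the design pattern Nephio builds on.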
So it's a new Linux Foundation project, started this summer, so it's very new. It's a Kubernetes-based, intent-driven system (I'm not going to read the whole thing) that can be used for network functions, edge computing applications, and the underlying infrastructure. So you can set up the cloud and edge infrastructure with the initial configurations, and then deploy and manage network services and workloads on top.

So why Nephio? Nephio solves the three problems that I outlined quite well. First is scale: it's meant for multi-site, so you can have lots and lots of clusters on demand, public or private. Your network functions or workloads can be interconnected, straddling all of these sites; they can be edge or cloud, it doesn't matter. Second, it's intent driven. This is a huge piece of Nephio: it's based on configuration as data, not code, because infrastructure as code tends to have imperative code embedded in the descriptor, and that makes it very brittle and difficult to use. And day-zero configurations are automatically derived; I'll walk you through it. There's a concept of mutation, where your initial intent is mutated to generate something that the end cluster can consume, and day-zero configurations are part of that. Third, workload and infra dependency: it's useful for both workloads and infra, and it's meant to be heterogeneous.

So, like I said, Nephio uses Kubernetes. There are three key pillars in Nephio. All three pillars use custom resources, and underneath the custom resources they have operators, so it's a standard Kubernetes design pattern. The custom resources fall into three buckets. First is cloud infra resource automation, which deals with the infrastructure layer. Next is workload resource automation, for network functions or edge computing applications; that orchestrates and manages applications. And third is workload configuration: this pillar can configure your network functions or edge computing applications using whatever mechanism, whatever protocol, is required.
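The custom-resource-plus-operator pattern behind all three pillars can be pictured as a toy reconciliation function, stripped of all the Kubernetes machinery. This is a sketch for intuition only; the resource shapes are invented, and this is not real client-go or operator-framework code:

```python
# Toy model of the operator pattern: a custom resource declares
# desired state, and an operator computes the actions that move
# actual state toward it. Resource shapes are invented for illustration.

def reconcile(desired: dict, actual: dict) -> list:
    """Return (action, name, spec) tuples that converge actual on desired."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))
        elif actual[name] != spec:
            actions.append(("update", name, spec))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name, None))
    return actions

desired = {"dns-cache": {"replicas": 3}, "upf": {"replicas": 1}}
actual = {"dns-cache": {"replicas": 1}}
print(reconcile(desired, actual))
# [('update', 'dns-cache', {'replicas': 3}), ('create', 'upf', {'replicas': 1})]
```

A real operator runs this comparison continuously in a watch loop, which is also what gives an intent-driven system its constant reconciliation.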
For network functions that could be YANG/NETCONF; for edge computing applications it could be REST.

Declarative is a huge part of Nephio; it's an underlying principle. There are three pieces. First, you design your intent. I'm not going to go through the whole thing, but you design an intent that describes what you want. The intent then gets mutated into something that the end cluster can consume, and the mutation is based on a series of operators. So an operator can say, oh, you know, this cluster is small, so let me adjust the scaling factor accordingly. Then you deploy the intent. And the third piece is very important: you have a control loop that monitors the deployment and constantly reconciles it with your intent. This is again a key part of being intent driven: you have constant reconciliation. And below that you have the various clusters.

CI/CD at scale is also baked into the project; it's inherent. There's a project called kpt, which does packaging of Kubernetes resources, and Porch, which is a wrapper around Git. Through kpt and Porch, everything basically goes into Git, and you get CI automatically. For CD, there's a project called Config Sync, where the edge pulls the configuration from the central Nephio cluster. It need not be Config Sync; it can be Argo CD, it can be Flux, it's really whatever you want. And because of the two mechanisms I mentioned, CI/CD at scale and constant reconciliation, day zero, day one, and day two are handled in a uniform manner. So whether it's configuration drift or changing configuration, both are handled.

Use cases: we have multi-vendor edge service brokering, where you can broker between various edge providers; infra for 5G networks, for example Open RAN or 5G core; and edge computing applications.

I have a demo. I don't have much time, so I'm just going to make some quick points. On the left is the Nephio cluster.
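Before the demo, the mutation step can be thought of as a pure function over the intent: an operator looks at the target cluster's profile and rewrites fields such as the scaling factor before the package is deployed. A minimal sketch, with invented field names rather than Nephio's actual schema:

```python
# Minimal sketch of intent mutation: a generic, high-level intent is
# specialized per target cluster before deployment. All field names
# (clusterType, replicas, the annotation key) are illustrative only.
import copy

SCALE_BY_CLUSTER_TYPE = {
    "small-fixed-site": 1,   # small edge site: minimal footprint
    "regional": 4,
    "cloud": 16,
}

def mutate_intent(intent: dict, cluster_type: str) -> dict:
    """Derive a cluster-specific (day-zero) config from generic intent."""
    mutated = copy.deepcopy(intent)
    mutated["spec"]["replicas"] = SCALE_BY_CLUSTER_TYPE[cluster_type]
    mutated["metadata"]["annotations"] = {"cluster-type": cluster_type}
    return mutated

intent = {"metadata": {"name": "dns-cache"}, "spec": {"replicas": 3}}
print(mutate_intent(intent, "small-fixed-site")["spec"]["replicas"])  # 1
```

The same generic intent can then be fanned out to thousands of heterogeneous sites, each receiving a day-zero configuration specialized for that site.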
The Nephio cluster is centralized, and on the right we are showing the operations. We are going to do both infra orchestration and workload orchestration, and you'll see how easy it is. First, on the right there is a KRM (Kubernetes Resource Model) file that describes what we want to do on the infra. You see it's very easy, very simple: it just says which repository the resource should be fetched from, that we want AWS EKS, and that the package type is infra. And there are a couple of steps to apply the kpt package and then to approve it. So now it's applied; now we are going to approve it. I'm going to speed through this, it's only a 10-minute talk. And what you see is that the cluster is coming up. So all we did is describe the resource file, commit it, and approve it, and now the cluster is getting deployed.

Now we go to the application, or the network service, on top, and that also describes the repository. It's a DNS caching application, and the package type is workload. Here what you'll see is that the day-zero configuration mutation is also going to happen: based on the cluster type, it's actually going to mutate the day-zero configurations. I'll just show you real quick, and again, I apologize, I'm flying through this. What you see is that the cluster we deployed is a small fixed site, and based on the small fixed site, I know what kind of scaling I want. So the day-zero configuration is going to mutate accordingly, and that's what gets applied. We apply it, then approve it, same process. And what you'll see is that the cluster is finally up and the application is up as well.

You can see the full demo; that's the link. And please get involved with this project if it's interesting to you. Here are some resources, and you can join us at the One Summit in Seattle in November. And with that, thank you. Do we have time for questions? Any quick questions? Here. The project has the ability.
You would have to develop a controller. So far, nobody has stepped up to do that controller. But in theory, you could. Okay, thank you.
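Going back to the demo for a moment: the talk doesn't show the exact schema of the infra descriptor, only that it names a repository, an AWS EKS target, and a package type. A guessed sketch of what such a KRM file might look like (the apiVersion, kind, and every field name here are assumptions, not Nephio's real API):

```yaml
# Hypothetical package descriptor in the spirit of the demo.
# All names below are invented; the talk only tells us the file
# references a repository, an AWS EKS package, and a package type.
apiVersion: automation.nephio.example/v1alpha1
kind: PackageDeployment
metadata:
  name: edge-cluster-1
spec:
  repositoryRef: nephio-packages    # Git repo to fetch the package from
  packageName: aws-eks-cluster      # the AWS EKS infra package
  packageType: infra                # "workload" for the DNS cache app
```

Committing and then approving such a package mirrors the two-step apply-and-approve flow shown in the demo, with Git as the system of record.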