Hello everyone, and welcome to the Nephio overview webinar. My colleague and I will walk you through what Nephio is, along with a demo of Nephio. I am Amar Kapadia, co-founder at Aarna Networks. Sandeep? Hello everyone. I'm Sandeep. I'm the principal architect at Aarna Networks, and I am also part of the Nephio TSC. All right, so let's talk about challenges at the edge. Edge workload orchestration and management presents new challenges, and by edge workload we mean a network service or an edge native application. The first challenge we see is scale: you could have tens of thousands of edge sites with hundreds of workloads. The second is infrastructure dependency. Interestingly, with the cloud we try to decouple ourselves from the infrastructure, but as we go out to the edge we are finding that it's not so easy to decouple. Workloads have specific requirements, meaning that the infrastructure has to be orchestrated first to meet the requirements of the workload, and only then can you orchestrate the workload, whether it be a network service or an edge native application. The third is heterogeneity. We have very different types of edge clusters: different storage, different compute, different Kubernetes flavors, different networking providers. And when it comes to the workload, there is no shortage of variety in network services and edge computing applications. So these are some big challenges to making the edge successful. Now, what if we could use Kubernetes to solve this problem? The answer is you actually can. What was not known to me is that Kubernetes is a control plane that can do more than container orchestration. I had always assumed that's what Kubernetes was for, but actually it's a general-purpose control plane, and container orchestration was just the first use.
We'll talk about how Kubernetes can be used to solve the edge orchestration and management problems I just outlined. All right, so introducing Nephio. Nephio is a new Linux Foundation open source project, seeded by Google. It has been around for just a few months; the first developer summit happened in June of this year. It is a 100% Kubernetes-based, intent-driven orchestration and management framework for network services, edge computing applications, and the underlying infrastructure, and in that sense it addresses the challenges I mentioned completely. It is multi-vendor, cloud and edge, and infra-oriented. So you can orchestrate edge and cloud infrastructure across multiple vendors, you can orchestrate network functions and edge native applications, and finally, after you orchestrate these different things, you can handle configuration management and ongoing lifecycle management as well. So why Nephio? There are other solutions out there, but I would say the biggest reason for Nephio is scale. When you look at tens of thousands of sites, tens of infrastructure providers, and hundreds of workloads, this is a scale we have not seen before, and because of that you need a new solution. That's where Nephio comes in. It's suitable for multiple sites. It's intent-driven, and that again helps with scalability. And there is constant reconciliation of state: if you have just a few sites, you can manually reconcile, but at this scale you have to have constant, automated reconciliation. Day one and day two are also taken care of by the same mechanism, which lends itself to scalability. You have CI/CD baked in, as you will see. It goes all the way from infrastructure to workload, so on-demand distributed clouds, suitable for infrastructure and workloads. And finally, it's heterogeneous: it's meant for multi-vendor environments and can handle public and private clouds.
It can handle a variety of third-party network functions and edge native applications. What types of problems can Nephio solve? These are just three examples; there's really no limit to the types of edge applications you can use Nephio for. The first one is multi-vendor edge services brokering. Here, a telco may want to be the single point of contact for enterprises and provide them edge capacity from different vendors as per their needs. For example, if somebody says, "I want an edge that is five milliseconds from such and such a location," the telco automatically finds the right edge, provides it, and essentially provides a brokering service. The second is multi-access edge computing, for any general-purpose edge computing application. And the third is 5G network services, for example OpenRAN. The Nephio architecture, at a very high level, is as follows. You first state your intent. Everything is declarative and intent-driven, so the intent explains to Nephio what the infrastructure requirements are, what the network function or edge native application requirements are, and, on top of that, the network service or the composite application. Then, based on the intent, the orchestration happens: you validate the intent and then you deploy it. And that goes into a control loop where you're constantly monitoring the application and then reconciling the state. This makes sure there is no configuration drift, and it's also useful for day one, day two configuration, where you may change your intent and it's automatically reflected on the workload. I'm going to be very quick here. Nephio extends Kubernetes. It has three pillars, each one based on custom resource definitions (CRDs), which are the way to extend Kubernetes APIs, and under a CRD you have operators.
The three pillars for operators are: cloud infra, where you orchestrate the cloud infrastructure itself; cloud and edge workload resource automation, where you orchestrate network services and edge computing applications; and workload configuration. So these are the three pillars that Nephio is addressing. And the three key principles, which I won't go into in depth, are: intent-based automation, everything based on intent (we looked at one example, and you can see another example here); declarative configuration, because it's very important in Nephio to be declarative and not have any imperative information in the intent (in fact, it's called Configuration as Data); and non-complexity, meaning very simple Kubernetes cloud native principles are the ones being used. CI/CD is automatically baked in; Sandeep will hint at this. And with that, I'm going to hand it over to my colleague, Sandeep, to walk through the demo. Sure. Okay. So in this demonstration, what we are going to show is the orchestration of the infrastructure, and then the automation which prepares that infrastructure for workload deployment. Following that, we are going to orchestrate a workload, and the workload in our case is a caching DNS application. We will see how the placement intent that is specified during workload orchestration selects the clusters which are created by the infra automation that we are going to show. After that, there is an example of day zero configuration of the caching DNS application. In this specific use case, the configuration will be based on the cluster type. So the controller is basically going to look at the type of the cluster, and based on that, it will configure the scaling profile in the application dynamically, and then it will deploy the curated package to the target infra. So this is a very high-level diagram of the Nephio platform and the three edge clusters.
What we see in the Nephio platform are primarily two kinds of controllers: one is the workload controller, and the other is the infrastructure controller. The infrastructure controller underneath uses Crossplane, and we are going to orchestrate an EKS cluster. The Nephio controller node also has Config Sync as a component. Config Sync is the GitOps tool here; its job is to sync the packages from the Git repos it is pointed at. And Porch is the manager for the kpt packages. So basically, we have to register the Git repositories with Porch, and Porch automatically discovers all the packages present in these Git repositories. Porch then enables users to download these packages, perform mutations, and upload the curated packages to the target repos. In this example, there is a repo associated with every cluster, and the Config Sync in each of these clusters is configured to sync the packages that are uploaded to its corresponding repo. Along with that, we have an infra repo, which is meant to contain the infrastructure-related packages. In our example, the infra repo will have the kpt package which has the Crossplane KRM objects. Then there is a repo which has the Nephio packages themselves, which are basically the controllers that we see, Config Sync, and Porch. And the private catalog packages contain the raw workload packages. So this slide basically describes all the components that are part of this demo. The next slide shows the concept of intents at a very high level. In this demonstration, what we want to show is the user specifying two high-level intents. The first one is the infrastructure intent, followed by another intent, which is the workload intent.
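To make the repository registration step concrete, here is roughly what a Porch `Repository` object looks like; this follows the Porch v1alpha1 API, but the repo name and URL below are placeholders, not the ones used in the demo:

```yaml
# Registers a Git repo with Porch so it can discover, version, and
# publish the kpt packages stored there. Name and URL are illustrative.
apiVersion: config.porch.kpt.dev/v1alpha1
kind: Repository
metadata:
  name: edge-1-deployment
  namespace: default
spec:
  description: Deployment repo for one edge cluster
  type: git
  content: Package
  deployment: true            # curated packages land here for Config Sync to pull
  git:
    repo: https://github.com/example/edge-1-deployment.git  # placeholder
    branch: main
    directory: /
```

Once applied, Porch scans the repo and exposes each package it finds as a `PackageRevision` resource through the Kubernetes API.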
In the infra intent, all the user needs to specify is the properties of the infra they want to create. Examples could be the zone, region, or wavelength zone, and properties like whether a GPU is required in the target cluster. These high-level intents are then understood by the controllers in Nephio, the infrastructure controller specifically, which performs the job of creating this infra based on the intent that is specified. Similarly, for the workload intent, at a high level only two pieces of information are required. One is the placement intent, where the user can specify the region where they want to deploy the workload, and the other is the source repo. In the previous slide, we saw the catalog repo where the raw packages are present; the user has to specify the source repo so that the controllers can download the packages from there, perform mutations, and upload them to the target repo via Porch. Yeah, we can go to the next slide. So this slide shows the end-to-end demo. As I described, look at the pink boxes on the leftmost part of the screen. The user starts by specifying the infrastructure orchestration intent, and that is specified via a custom resource, which is called the deployment package custom resource. This custom resource is then reconciled by the Nephio controller, which will download the kpt package that has the EKS-related KRMs from the private catalog repo. It will perform the mutations, if any are specified in the infra intent, and then the Nephio controller will upload the curated package to the infra deployment repo. And as we know, there is a Config Sync present in the Nephio controller cluster itself, and this Config Sync is meant to sync the infrastructure-related packages from the infra deployment repo.
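A rough sketch of what the two intents described above might look like as custom resources. Note this is hypothetical: the demo's "deployment package" CRD is POC-level, and every group, kind, and field name below is illustrative rather than the actual schema:

```yaml
# Hypothetical infra intent: which repo holds the raw EKS package,
# and what properties the target infrastructure should have.
apiVersion: example.nephio.org/v1alpha1   # illustrative group/version
kind: DeploymentPackage
metadata:
  name: eks-infra-intent
spec:
  packageType: infra          # lets the controller distinguish infra vs workload
  sourceRepo: private-catalog # repo holding the raw EKS kpt package
  infra:
    region: us-east-1         # example property
    gpu: false                # whether the target cluster needs GPUs
---
# Hypothetical workload intent: source repo plus placement.
apiVersion: example.nephio.org/v1alpha1
kind: DeploymentPackage
metadata:
  name: cache-dns-workload-intent
spec:
  packageType: workload
  sourceRepo: private-catalog # repo holding the raw caching DNS package
  placement:
    region: us-central1       # matched against the clusters the infra flow created
```

The point is that both intents carry only high-level properties; the controllers translate them into concrete Crossplane resources and curated kpt packages.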
As the controller curates and uploads the curated package to the infra deployment repo, Config Sync automatically starts syncing that package in the background. As a result, the ACK controllers from AWS (AWS Controllers for Kubernetes) come into action and start creating the resources which comprise our EKS cluster. So this part covers how the EKS cluster orchestration is automated. But that is not enough for workload orchestration; we have to prepare these clusters for workload orchestration, and we have automated that process as well. That is the job of the Nephio infra controller. It's a POC-level controller. What it does is watch the EKS Kubernetes resource in the Nephio controller cluster and wait for this resource to come into the ready state. Once it comes into the ready state, it deploys Config Sync in the target cluster, and it also configures that Config Sync to point to its corresponding edge repo. That is the minimum preparation required to make these clusters ready to handle the deployment of the workload. So this was the complete end-to-end path of the infrastructure automation. Then comes the workload orchestration. For workload orchestration, the user specifies the workload intent, which is basically two pieces of information: one is the source repo where the caching DNS raw package is available, and the other is the placement intent. The controller will then resolve the placement intent and figure out that it has to push this package to the edge repo which is associated with our EKS cluster. And based on the kind of the edge repo, it will configure the scaling profile in the caching DNS kpt package, and the curated package will be put on the edge repo.
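The "point Config Sync at its edge repo" step the infra controller performs can be expressed with a Config Sync `RootSync` object, roughly as below. The schema is Config Sync's v1beta1 API, but the repo URL and secret name are placeholders for whatever the demo actually used:

```yaml
# Installed in each new edge cluster so it continuously pulls the
# curated packages published to that cluster's own deployment repo.
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: root-sync
  namespace: config-management-system
spec:
  sourceFormat: unstructured
  git:
    repo: https://github.com/example/edge-1-deployment.git  # placeholder
    branch: main
    dir: /
    auth: token
    secretRef:
      name: git-creds   # placeholder Secret holding Git credentials
```

With this in place, pushing an approved package to the edge repo is all that is needed for the workload to appear in the cluster; no one ever runs `kubectl apply` against the edge cluster directly.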
And as we know, our infra controller has already configured the Config Sync in the EKS cluster to pull these packages from the edge repo. So as soon as the Nephio controller pushes the package to the edge repo, the Config Sync in the target cluster will sync that package, and that will result in the deployment of the workload in the EKS cluster. Yeah, next slide please. All right, so I'm going to stop sharing, and maybe, Sandeep, in the next five minutes or so, if you can just quickly show the demo, that would be great. Sure. So let me share my screen. Is my screen visible? Yes. Okay. So this is a recorded demo. On the left part of the screen, I'm starting the Nephio controller. And on the right side of the screen, as I described, we will start with the specification of the infrastructure intent. This is how the infrastructure YAML looks. All it has is the source repo from where it has to pull the EKS kpt package, and it says that the package type is infra. Based on this, the controller differentiates between the infra and the workload package. As soon as this YAML is applied, the controller reconciles it, and as part of that, as I described, it pushes the curated package to the target repo. So we will list the packages now. As we see, the highlighted package here is the one that was pushed by the controller, and it has the EKS KRM objects. This package is in the proposed state, and we are going to approve it manually; the approved packages are synced by Config Sync. With this command, we are approving this package. As soon as we approved it, in the back end, Config Sync would have synced this package, and that will start the creation of the EKS cluster. So now we are logging into the AWS console, and we see that the EKS cluster has started to come up.
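The propose/approve step shown in the demo corresponds to the lifecycle field on Porch's `PackageRevision` resource, which moves through Draft, then Proposed, then Published. A rough sketch (the revision name and repo below are placeholders):

```yaml
# A Porch PackageRevision; approving a proposed package amounts to
# moving its lifecycle to Published, after which Config Sync picks it up.
apiVersion: porch.kpt.dev/v1alpha1
kind: PackageRevision
metadata:
  name: infra-deployment-eks-v1   # placeholder revision name
spec:
  packageName: eks
  revision: v1
  repository: infra-deployment    # placeholder repo name
  lifecycle: Published            # Draft -> Proposed -> Published
```

This explicit approval gate is what keeps a human (or a policy engine) in the loop between package curation and actual rollout.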
And then the infra controller that I described will start preparing this cluster; that keeps happening in the background, and in the meanwhile we can go ahead and specify our workload intent as well. So this is the workload intent, and at a high level, for this example, it has two pieces of information. One is the information related to the placement, which says the region is US Central 1, and the other is the source repo, where the caching DNS package is present. So now, what is the link between the placement intent that is specified and the EKS cluster that the infra controller has created? The infra controller creates cluster profiles, and cluster profiles basically map what was specified in the infra intent to the target cluster. This is how the cluster profile looks: it is a Kubernetes object again, with kind Cluster. And this cluster profile says the scaling profile is small, a fixed size. This is what the controller will read, and based on this, the controller will mutate the caching DNS package and configure the scaling profile in the package. This cluster object also has the pointer to the target repo of this cluster. Based on this information, the controller will mutate the packages and push them to the target repo. At this point, we have applied the object, and the controller would now have pushed the package to the target repo. And as was the case for the infra intent, this package is in the proposed state, and we are going to approve it manually. Now this will stay in the approved state, and in the background, the infra controller is still preparing the EKS cluster. Once the EKS cluster comes up and is prepared, Config Sync will automatically sync this package, which is in the approved state, from its corresponding repo.
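A reconstruction of the cluster profile described above. The exact CRD schema from the demo is not shown here, so the group, version, and field names are illustrative; only the concepts (region, scaling profile, target-repo pointer) come from the demo:

```yaml
# Hypothetical cluster profile created by the infra controller.
# The workload controller matches placement intents against it,
# reads the scaling profile, and pushes curated packages to repoRef.
apiVersion: example.nephio.org/v1alpha1   # illustrative group/version
kind: Cluster
metadata:
  name: eks-edge-1
spec:
  region: us-central1          # matched against the workload placement intent
  scalingProfile: small-fixed  # drives the mutation of the caching DNS package
  repoRef:
    name: edge-1-deployment    # target repo the curated package is pushed to
```

This indirection is what lets the user's intent stay high level: the profile, not the user, carries the cluster-specific details.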
And that will cause the workload deployment to happen in the target cluster, as specified in the placement intent. This process takes 10 to 15 minutes, bringing up the cluster and preparing it, so I'll just go to the end of this video. So, yeah, eventually the application did come up in the target cluster. And that concludes the infrastructure orchestration followed by the workload orchestration, done by specifying only two intents, with the help of the Nephio controllers. So that's about it with the demo. I'll stop sharing my screen. Back to you, Amar. Okay. Thanks, Sandeep. Hopefully our viewers got a good idea of what Nephio is and how powerful it is going to be when it comes to edge workload and infra orchestration, across a variety of segments starting with retail, healthcare, V2X, manufacturing, logistics, telco, et cetera. We'll end on our final slide of what's next. Please do get involved. If Nephio is interesting to you, please join us. You will find all the information at nephio.org and wiki.nephio.org. You can watch the developer day videos to get a much deeper idea of what Nephio is. We also have a resource for executives: if there are managers, directors, or VP-level people who want to understand what Nephio is and how it can create value for them, please use our white paper. And finally, at any point in time, if your organization is looking at Nephio, feel free to request a one-hour workshop with us, and we'd be happy to walk you through Nephio in more detail. With that, thank you and have a wonderful day. Thank you.