Thank you so much, Arpit. Sorry for the little snag; I just had to activate some features in Zoom. It's such a pleasure for me to be here. It's actually my first time keynoting at the Linux Foundation. And I'm here with Kandan, who is going to share the presentation with me. We'd like to talk about a topic that is really dear and very close to Google, and something that has been a topic of discussion with many of the different carriers and service providers we have been talking to in the industry. So let me start with a provocative statement. Network functions and cloud infrastructure: we feel that these two entities have been like ships passing in the night. In reality, there has been no real understanding of each other, even though this architecture has seen a lot of evolution in these last years. So how do we change the game? How do we make the cloud infrastructure understand the needs, requirements, and state of network functions, and how can we have network functions do the same with respect to understanding and operating on top of the cloud infrastructure? If you look at what has been happening in the market over the last five-plus years, there has been a major transformation through the network functions virtualization (NFV) architecture. We had what I call a pre-container era, with a management and orchestration (MANO) stack, where a lot of different automation capabilities transitioned into being more network function oriented, or virtualized network function oriented, as we moved into the NFV world. And now we start to see a picture that is even more complicated: the coexistence of virtualized network functions as well as containerized network functions, with what I call out-of-band automation. So Kubernetes, which is widely adopted in the world of enterprise applications, is actually starting to become a core building block of how to build telecom networks.
Obviously, this started from 5G, but it's not related to 5G only. It's now actually widely spread across fixed, wireless, cable, and many of the other network function environments. But we believe that this environment is still not truly cloud native. We believe that true cloud nativeness lies in the fact that Kubernetes can automate all the way down from describing a network function or a service to the infrastructure, including all the underlying networking components. And of course, the big change, as in the previous presentation, is not just about the core network, but also extending all the way to the edge, and also starting to see public cloud environments as the footprint for deploying end-to-end, truly cloudified networks. But where is the environment today? If you look at the automation aspect today, it's fairly complex. There are still a lot of manual configurations. There are a lot of separate repositories. There are complexities in managing security throughout, and there is complexity in managing a true intent for how to deploy some of these services. I'm capturing some of these points here. There is ever-growing network complexity; in fact, some aspects of cloudification are actually adding to the complexity by adding additional form factors, additional network functions, additional use cases, the mix of network and multi-access edge computing applications, and the evolution towards private 5G and private networking in general. There is an exploding number of sites and services, from the original premise of deploying 10, 20, 30 large data centers to now tens of thousands or hundreds of thousands of nodes. Obviously, everyone is looking at the evolution of radio access, or access in general, these days. We are still relying on siloed architectures with a lack of multivendor support. And of course, there is some lack of standardization.
Cloud versus telecom standards. This is creating, in our opinion, a set of unmanageable drifts and configurations, in addition to the need for new operating and support models. It's easy to talk about DevOps and CI/CD, but if you think of the typical lifecycle of supporting software in a telecom network, we are moving from very infrequent, fixed releases to a monthly or even faster cadence. So given this picture, we at Google and Google Cloud have been approached by the telecom community: this is a nice narrative, but how can you help solve this? You have stepped into the world of cloud native in a major way with Kubernetes; but also, can you look at how you have actually built your own network, Google? We thought that the only way to operate at the scale we operate at now, both as a hyperscaler and as one of the largest networks in the world, was to step into this, and we believe the right way is to adopt Kubernetes as the underlying methodology to solve this problem. So I'm really glad that today, representing the work done with the Linux Foundation, we are announcing a new project called Nephio. It's a new open source technical project, part of the Linux Foundation; we worked very closely with Arpit's team, and we are aligning an ecosystem to simplify the automation of telecom network functions; I call it the Kubernetes way. As we started this journey a few months ago, we began to work out the core concepts and to socialize them with different carriers, different network function providers, and different infrastructure and operations providers. And I'm really glad to see that today, as we unveil this new project, whose details Kandan will go into, we already have such a rich community behind it.
As you can see, we really focused on making sure that we had major carriers and service providers supporting this across many regions around the world. And we wanted to make sure that each one of these carriers is forward-looking, embracing cloud and embracing a truly cloud native way of deploying this. At the same time, such a project would not have legs if we didn't have great support from our network function providers; as you can see in this picture, from the largest ones to newly arriving vendors and everything in between. And of course, many others from the community. So I'm really glad; you'll see more details going forward, but I want to pass the ball to Kandan, who is going to describe what Nephio is and its core objectives, and give you more details. Kandan.

Next slide, please. So Gabriele talked about the challenges today, and I would like to add more in terms of the network automation and the cloud infrastructure configuration that is required to support that automation. If you look at today, containers are potentially being used, but containers alone are not complete cloud native automation. The full potential of Kubernetes has not been utilized by the industry today. Most of the automation is fire-and-forget automation: monitoring really has to pick things up once the container has been deployed, and in this case we are talking about a network function as a container. The biggest problem of all of this is the whole CI/CD pipeline with declarative configuration. Today, in infrastructure as code, the templates are usually complex, very difficult to read, of limited reuse, not composable, and primarily lacking vendor-supported, neutral templates that are open to the community. So Nephio's goal is primarily three important functions. The first is simplified automation and its logic.
And you can see that we are giving primary importance to simplifying automation as the number one goal of this community. The second is machine-manageable automation configuration. This is very, very important, and we will go through it in detail in a minute. And then, cloud native all the way across the architecture, not in bits and pieces, but across the whole architecture; that is what the Nephio architecture will focus on. So there are three important factors that Nephio is going to look at from an architecture perspective. The first is smart, simple, Kubernetes-based cloud native automation, which is primarily intent-based automation with active reconciliation of the configuration when there is a problem; we can call it smart and simple because it uses Kubernetes-based automation. So this is one important aspect: Kubernetes here is not only used for hosting the containers, but also for automating the cloud infrastructure, automating the whole container as well, along with the dependent configurations related to that network function. This community is going to use fully declarative configuration based on the Kubernetes Resource Model, which is well established in the industry; enterprises use it in a very important way to automate their applications. The other aspect is coming together with the cloud vendors as well as the network function vendors and telecoms; the intention is to maintain multi-cloud, multi-vendor, interoperable automation templates to completely automate these functions together. Next slide, please. So this picture explains in further detail what I explained on the previous slide. There are three important components that this is going to automate. And again, as I pointed out, Nephio is based on Kubernetes-based automation.
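The intent-based, actively reconciled model described above can be sketched in a few lines. This is a hypothetical illustration, not Nephio code: the declared intent (desired state) is continuously compared against observed state, and only the differences are acted on, so out-of-band drift gets repaired automatically instead of being fire-and-forget.

```python
# Hypothetical sketch of intent-based reconciliation (not actual Nephio code).
# Desired state is declared once; a reconciler repairs any drift it observes.

def reconcile(desired: dict, observed: dict) -> dict:
    """Return the actions needed to drive observed state toward desired state."""
    actions = {}
    for key, want in desired.items():
        have = observed.get(key)
        if have != want:
            actions[key] = {"from": have, "to": want}
    # Anything present but not declared gets removed (fully declarative model).
    for key in observed:
        if key not in desired:
            actions[key] = {"from": observed[key], "to": None}
    return actions

# Declared intent: a (hypothetical) UPF network function with 3 replicas.
desired = {"upf-replicas": 3, "mtu": 9000}
# Observed drift: someone scaled it down out of band and opened a debug port.
observed = {"upf-replicas": 1, "mtu": 9000, "debug-port": 8080}

print(reconcile(desired, observed))
# → {'upf-replicas': {'from': 1, 'to': 3}, 'debug-port': {'from': 8080, 'to': None}}
```

In a real operator this loop runs continuously against the cluster's API server rather than once against in-memory dictionaries, but the principle is the same: the configuration is the source of truth, not the sequence of commands that produced it.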
So we are using Kubernetes for the automation of the containers, not just for hosting the containers. When you look at this architecture, there is mention of CRDs and operators. CRDs and operators are not new; they have been in the industry, supported by the CNCF and the Kubernetes community, for a while now, but for enterprise applications. The Nephio project is going to enhance those CRDs and operators further, to support network functions, cloud infrastructure provisioning, and Kubernetes provisioning as well. And this is all going to be done in conformance with the telecom standards. That is very important, because there is a lot of great work that has been accomplished in the industry, in the standards bodies and in open source as well. So this project is complementary to the other open source projects as well as to the standards, but the implementation is done through Kubernetes to keep it very simple. It automates the cloud infrastructure, which is bucket number one; the workload resources, which is bucket number two; and bucket number three is the network function configuration itself. Again, it's all done in conformance with the standards. This community will not focus on the service orchestration layer, which is the top portion shown in this picture; the work done in the other communities will be used by this community for the service orchestration layer functionalities. Let's go to the next slide. So as I pointed out, this is Kubernetes based, and Kubernetes is well known to telecoms, well known to cloud providers, and well known to network function vendors.
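The CRD-and-operator pattern mentioned above extends the Kubernetes API with new resource kinds. As a hedged illustration (the API group, kind, and field names below are invented for this sketch, not Nephio's actual CRDs), a network function deployment intent in the Kubernetes Resource Model shape could look like the following, together with a minimal check that it carries the fields an operator's reconciler would need:

```python
# Hypothetical KRM-style custom resource; all names are invented for illustration.
nf_deployment = {
    "apiVersion": "example.nephio.org/v1alpha1",  # hypothetical API group/version
    "kind": "NFDeployment",                       # hypothetical custom kind
    "metadata": {"name": "upf-east-1", "namespace": "ran"},
    "spec": {
        "vendor": "acme",                 # which NF vendor package to deploy
        "sites": ["edge-01", "edge-02"],  # target edge clusters
        "capacity": {"throughputGbps": 10},
    },
}

def validate_krm(resource: dict) -> list:
    """Return a list of problems; an empty list means the resource is well-formed."""
    problems = []
    # Every KRM object carries these four top-level fields.
    for field in ("apiVersion", "kind", "metadata", "spec"):
        if field not in resource:
            problems.append(f"missing required field: {field}")
    if "metadata" in resource and "name" not in resource["metadata"]:
        problems.append("metadata.name is required")
    return problems

print(validate_krm(nf_deployment))  # → []
```

The point of the shared shape is exactly what the talk describes: because every vendor's network function is expressed in the same resource model, the same tooling can validate, diff, and reconcile all of them.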
And this really brings the very simple, open, and widely adopted Kubernetes as an automation engine for automating the cloud infrastructure as well as the edge resources, or the network function resources, whether edge or core. And the network function itself will be automated. So all three of these groups have to come together, and that's what you see in the launch partners: a mix of people from telecoms, cloud providers, and network function vendors. This will benefit the whole industry in terms of pushing cloud native automation across the industry. The community is officially launched today, and you are welcome to join it and take it to the next step of cloud native automation. You can find additional detail on the nephio.org website, where there is a lot of information about who is part of this community, the way Gabriele was showing on the slide. There is a FAQ with additional information and technical details about this. You are welcome to join this community; again, there is no membership fee. You can find additional details on that webpage; you can join us for free, contribute, and be part of this community. And again, we welcome everybody to join and push the industry towards cloud native automation. So with that, back to you, Arpit.

Okay, thank you. First of all, congratulations, and thank you for seeding this project. As we see, Kubernetes is doing quite well, and a lot of networking projects in the telecom space are doing quite well, and you see how they blend together to move the automation forward; very, very good. I think we have a few questions; I'm just going to pick the ones we have a couple of minutes to answer. The first question, in fact there were two questions, is in relation to other existing projects and how this fits in.
So if you can share something around how this fits with something like Airship or EMCO, and obviously you mentioned ONAP as the service orchestrator on top; in general, how do you see this complementing the other existing open source projects? I think probably one of the best people to answer this is Kandan, who was deeply involved; I know he was one of the originals. In general, I would say that we really want Kubernetes to be the center of this, right? So instead of rebuilding capabilities, how can we leverage and extend what Kubernetes does, given the massive community behind it? And then, Kandan, maybe you can cover specifically some of the topics on, you know, Akraino and EMCO as well.

Yeah, so this is definitely a complementary project, as Gabriele is pointing out. This uses Kubernetes not just to host the containers, but to automate the container network functions, which are our focus here. It is definitely complementary to other projects, and we welcome collaboration with other open source communities. We would also like to collaborate with the standards communities; for example, this is going to take a specification from the O-RAN community and incorporate it as an implementation on Kubernetes for automating the RAN-based workloads. And again, we welcome pretty much everybody to join this community and be part of it, and in parallel we would like to collaborate with the other open source communities as well. You can find additional detail about how we would like to collaborate, and we would like to hear further opinions from the community, but there is some additional information already available on the Nephio website. You are welcome to go and look at the website for additional detail. Cool.
Maybe we have time for one more question, which is: what is being done in Nephio to hit carrier-grade requirements? I think that is a broader question, not just about Nephio, but what are your thoughts on the carrier-grade requirements of what we are going to do here?

Nephio starts with Kubernetes, which is a carrier-grade implementation and is what is primarily used by everybody; irrespective of the vendor, Kubernetes is the common component used for the implementation. So the starting point itself is carrier grade, and using Kubernetes for the complete automation adds to that. In terms of further scale and implementation of this whole automation framework, there are components that Google has already open sourced, and we are bringing additional open source components to this community to enhance Kubernetes and the tools around Kubernetes, to make this a more comprehensive, large-scale automation engine. And again, simplicity is what we are primarily looking at: even though it can automate a large number of locations, we want to keep the automation itself very simple. As I pointed out, you are welcome to go to the website, nephio.org, which has additional detail. We are planning to kick off this community very soon, and a face-to-face meeting is going to be arranged. We welcome pretty much everybody interested in this topic, and we will share additional detail in a very, very deep way.

Okay, very good, thank you. I know there are a lot of questions coming, and I think we are going to be out of time. But again, as Kandan said, go to the website, ask questions.
And I think most of the questions are really around the resources and the CRDs and standardization, which is exactly what we are trying to do with the project. So, hey, thank you very much, and we are looking forward to growing the community with Google Cloud and the entire ecosystem. Thank you, Gabriele. Thank you, Kandan. Thanks, Arpit. Thank you.