day two of Open Networking and Edge Summit. Hopefully you had a great experience yesterday; I know I learned a lot, and I'm sure you did too, with all the keynotes as well as the sessions, mini summits and talks. I'm sure we'll all come back and review several of these again. We have a great lineup today, so without any further delay, let's get started. The first keynote covers one of the hottest topics in the industry right now, and that's security: security in open source. We have some amazing speakers today in a panel that will provide you all the insights you need. Let me introduce the panel very quickly, and I'm sure they will go through a formal introduction, but we have Amy Zwarico from AT&T Security; Priyanka Sharma, GM of the CNCF; and Malini Bhandaru, Cloud Native Architect at Intel. The panel will be moderated by Will Townsend of Moor Insights, a contributing editor to Forbes. Without any further ado, please join me in welcoming Will and the panel to the stage. Thank you. I want to thank my panelists today. We've got a great topic of conversation on tap. We're talking about open source, and there's no question that open source plays a pivotal role in enterprise cloud native networks, telco networks, as well as edge deployments. So in this panel over the next 30 minutes, we're going to discuss some of the security implications as well as best practices. Let's get started with our first topic, and Malini, I want to start with you. There's a debate: some feel that open source presents vulnerabilities; others, and I fall into this camp, feel that with more eyeballs upstream and downstream, it's actually more secure. So I'd love to get your perspective: what are the Linux Foundation networking and edge working groups doing to ensure the most robust security in open network environments? 
So Will, I'm in total agreement with you: the more eyeballs the better, and security through obscurity doesn't really work. Linux itself is 30 years old; it just had a birthday. The more that's in open source, the more eyeballs can look at it and ask: are we taking care of running a container as a non-root entity? Does it have super privileges over here? How are we doing authentication and authorization? Are we refreshing our certificates and keys, or have they been overused and therefore become vulnerable? All these things you can set and enforce as policies, and you have benchmarks like kube-bench to check for some of these things: have I opened up the file system? Can somebody escalate their UID or their group ID, and things like that. So the more that's open source, the more protection for us. And that's a change I really like, especially in the security space. Yeah, Amy? Yeah, I can give you a couple of examples from ONAP, which is a big LFN project. From the beginning, we actually specified mandatory security requirements in ONAP and adopted very secure software development practices. In terms of requirements and practices, we do mandatory software vulnerability testing. What that's helping us do is keep old packages with long-standing vulnerabilities out of ONAP, and also remove vulnerabilities from the ONAP code. As an example, in this upcoming release of ONAP, by upgrading packages, we've probably removed over 700 CVEs from the ONAP code base itself. Now that number sounds large because ONAP is huge, but we're doing a lot of work in that area. Yeah, if I may chime in there. I'm Priyanka, and I am the general manager of the Cloud Native Computing Foundation, which some of you may know as the home of Kubernetes. 
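To make the container-hardening checks Malini lists concrete, here is a minimal Python sketch of the kind of policy audit a tool like kube-bench or an admission controller performs. It is illustrative only, not kube-bench itself: the pod spec dict mirrors real Kubernetes `PodSpec`/`securityContext` fields, but the example pod is made up.

```python
# Illustrative sketch (not kube-bench): audit a pod spec dict for a few of
# the policy checks discussed above. Field names follow the Kubernetes
# PodSpec/securityContext schema; the sample data is hypothetical.

def audit_pod(spec: dict) -> list[str]:
    """Return a list of policy violations found in a pod spec."""
    findings = []
    for c in spec.get("containers", []):
        sc = c.get("securityContext", {})
        if not sc.get("runAsNonRoot", False):
            findings.append(f"{c['name']}: container may run as root")
        if sc.get("privileged", False):
            findings.append(f"{c['name']}: privileged mode grants host access")
        if sc.get("allowPrivilegeEscalation", True):  # Kubernetes default is True
            findings.append(f"{c['name']}: privilege escalation not disabled")
    for v in spec.get("volumes", []):
        if "hostPath" in v:
            findings.append(f"volume {v['name']}: hostPath opens up the node filesystem")
    return findings

pod = {
    "containers": [
        {"name": "app", "securityContext": {"runAsNonRoot": True,
                                            "allowPrivilegeEscalation": False}},
        {"name": "sidecar", "securityContext": {"privileged": True}},
    ],
    "volumes": [{"name": "host-logs", "hostPath": {"path": "/var/log"}}],
}
for finding in audit_pod(pod):
    print(finding)
```

The hardened `app` container passes cleanly; the `sidecar` and the `hostPath` volume trip exactly the escalation and filesystem-exposure checks mentioned above.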
We are, believe it or not, still the fastest growing community in open source, even at the current size of hundreds of thousands of contributors. And security has been a top-level priority since day zero, much like what Amy is saying about how ONAP is structured. It's very similar here: as a foundation, we require graduated projects to go through security audits. In Kubernetes itself, there are technical advisory groups focused on security. And in the specialized work we're doing for telcos and edge computing, which is cloud native network functions, security best practices are part of the requirements. All of this is happening in open source, by groups of volunteers coming together. So my personal opinion is that because of the collective, collaborative effort, we're able to do a lot more, and people can be more secure and rely more on well-managed open source. That's just my two cents. Thank you, Priyanka. Awesome. Well, let's jump into our second topic. I want to talk about Open RAN. Amy, I'd like to start with you on this one. Obviously Open RAN is gaining tons of attention for its ability to drive a domestic supply chain to support 5G and 6G and beyond. But it's highly disaggregated; there's a new ecosystem of partners coming into play. I'm wondering, from your perspective, does that present security challenges, or challenges in general beyond security? So I think this is where the O-RAN Alliance comes into play. In addition to ONAP, I'm very active in the O-RAN Alliance, because the O-RAN Alliance is actually defining those Open RAN functional and non-functional requirements, and they've adopted security as one of the base principles from the get-go. 
So we've developed the security requirements for an Open RAN based on its running in a zero-trust environment, which I think is really important, especially in 5G, where things are being pushed closer and closer to the edge. For example, we're adopting just the table stakes of zero trust: you've got to have strong authentication, and you've got to have encryption in all of your transactions. We are requiring a software bill of materials to be included, and that's really in response to the desire to help protect the supply chain. You asked about the diversity of the ecosystem. The interesting thing is that most of the players here are very comfortable working in an environment where they have to build to external standards, because they've been building to 3GPP and ETSI standards for years. So it's very disaggregated, but in many ways it's a community that understands the space it's in and is used to working with external standards. Thank you. Any other comments from the panel? I have something to share here. Security is something that's cross-cutting. Application vendors, suppose they want to process packets or do some encryption or deep packet inspection, whatever, it's not necessary for them to handle all these baseline common actions: identity, authentication, authorization, even logging and telemetry for observability to debug issues. This is where a project called Istio really plays a strong role. Whether it's O-RAN, whether it's ONAP, or in the enterprise, nowadays Istio provides the kind of solution that handles these common security-related actions so that the application developer can focus on the real tasks. 
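The mesh-handled mutual authentication Malini describes is switched on declaratively in Istio. As a sketch, the snippet below generates the real Istio `PeerAuthentication` resource that forces mutual TLS for every workload in a namespace, emitted as JSON (which `kubectl apply -f` also accepts); the namespace name is just an example.

```python
import json

# Sketch: build the Istio policy that enforces mutual TLS mesh-wide for a
# namespace. PeerAuthentication with mtls mode STRICT is Istio's actual API
# for this; the "onap" namespace here is only an illustrative choice.

def strict_mtls_policy(namespace: str) -> dict:
    return {
        "apiVersion": "security.istio.io/v1beta1",
        "kind": "PeerAuthentication",
        "metadata": {"name": "default", "namespace": namespace},
        "spec": {"mtls": {"mode": "STRICT"}},  # reject plaintext traffic
    }

print(json.dumps(strict_mtls_policy("onap"), indent=2))
```

With this one resource applied, every sidecar in the namespace refuses non-mTLS connections, which is exactly the "application developer doesn't handle it" point above.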
And Malini, I'd like to comment on that, because the Istio work is something that ONAP is trying to adopt for those very reasons: to move some of those common functions outside of the ONAP code and into the Kubernetes environment, where they can be handled by a dedicated service, say for logging, built by somebody who only cares about logging. That functionality doesn't have to be built by ONAP. So I'm a huge fan of the Istio work. I think it's one of the things that's really going to help. Yeah, I think service meshes and the whole philosophy behind them are so useful, right? As you said, you don't have to take care of all of that yourself, and that's exactly what the application developer needs. They need to focus on building features and getting stuff out quickly. Something like a service mesh that is effective, takes care of observability, and has baseline security baked in just improves the developer experience so much. Thank you. Priyanka, I'd love to start with you on this third topic. As mobile network operators embrace the cloud for delivering new telco services, and we're seeing examples of it left and right lately, what's really driving the adoption? And what's been enabled through the Cloud Native Computing Foundation and other OSS umbrellas to make this a reality? So first of all, 100% agree with you that the telcos are coming out in full force. AT&T is actually a Platinum member of CNCF, and it's been awesome to have their support, guidance and thought process as we continue to build our foundation. I've had so many conversations with telco providers who are now very seriously looking at cloud and cloud native to help them, and it ties very well to their efforts around 5G. I think this is the best thing that could happen, because ultimately, in my opinion, the telco providers are the companies that reach the masses of humanity. 
And by working with them and helping them as much as we can, we are impacting humanity on a totally different level. That also means these companies have the most complexity; some are 100-plus-year-old companies, right? So their needs are not going to be the same as the latest web-scale startup out there. On the CNCF front, we've been working on something called the Cloud Native Network Function Working Group, which has an associated test suite. The whole point is for us to work with telco providers and cloud vendors, to bring the community together to really align on what we mean when we talk about a cloud native network function, or a containerized network function. There are fast-moving early adopters in the space, AT&T is a great example, who've done a lot of work, and we want to work with them and the newer folks in the mix to have standardization around what it means to be a cloud native network function. In the standard cloud native way, we're doing it in the smallest kernel of truth we can find: best practices that can be tested against in a test suite. And next year, we'll be launching a CNF certification program. As the owners of the CNF trademark and the cloud native story, we thought it was imperative for us to be the support function here, be the neutral party that can help people move in the right direction and support the operators in having a choice: being able to be multi-cloud, being able to be multi-vendor, and not being locked in by anybody. And super quickly: you can't be in a situation anymore where security is something you figure out later on. As Malini and Amy have said, security as an afterthought doesn't work, and zero trust is critical. None of that is going to happen if you don't think about it from the get-go. So we have security best practices baked into the CNF working group. Yeah. 
And you just can't make that an afterthought when you see some of these, and we're not going to mention names, but there have been some pretty prominent hacks of various carriers. I'm sure Amy, you've got something to add here as well. Yeah. I think this is really interesting, and I alluded to this earlier: this move they're making, especially through CNCF, is driving standardization in security. To me, that makes it so much easier to secure my environment, because if there are gaps, I can quickly identify them and put controls in place. This standardization of security is really being accelerated by the shift from the IaaS model and the VNF model over to the platform-as-a-service or CNF model. For an operator, it's going to make my life easier. Yeah. Okay. I do want to make one call-out here. The CNCF has several white papers, and there's one on security. It's like the kitchen sink; it covers everything, not just identity and authentication. It touches on storage, it touches on networking, and it even offers additional links to go out and learn way more. For anybody who is just ramping up on security, just to get a lay of the land, that's a good starting point, and the rest, of course, is operationalizing all the points it makes. It's huge, about 40-plus pages, but it touches on everything, and I recommend it as a good starting point. That's great. We'll endeavor to include those links at the end of this broadcast. Thank you, Malini. Malini, I know you'd like to comment on confidential computing for a few minutes. Thanks, Will, for asking me that question. That's my current focus. So we've always talked about data-at-rest security: you encrypt your data whether it's on a volume, an S3 bucket, whatever. 
And there's also data-in-motion security, where you use HTTPS, basically TLS, to encrypt the data flying between your microservices or connecting to a data center. What confidential computing deals with is protecting the data while it's in use. Whatever data you have on your processor, on your CPU while it's running, you want to secure from other processes. You don't even want to trust your hypervisor in a cloud, or your host operating system. These are called trusted execution environments: you essentially set up an enclave, and this secure private thing is protected by your hardware, your CPU; there are solutions from AMD and Intel. What they do is essentially encrypt your data, and the only place where it's in the clear is in the processor cache. So if somebody were to do a cold boot attack, they can't access your data. This is very critical particularly for those edge use cases where you might have a lot of sensitive data: security camera footage, or private healthcare data. Say it's your diabetes monitoring device, which is an edge device at that point. Those don't get exposed. So this is something that I think is timely; it's coming up. And once again, the CNCF has two excellent white papers, a great starting point for anybody who wants to learn more and get involved. There is also another open source project, part of the LF too, called Parsec. It introduces an API, a standard interface, so that regardless of your underlying implementation or your hardware provider, you can take advantage of trusted platform modules, the AMD solution, or the Intel TDX and SGX solutions. So I think that's important progress in this field of security. Thank you. Our fourth and final topic: I want to talk about private 5G cellular. And boy, this is something that's very exciting. I think you're going to see operators like AT&T providing that as a service. 
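The enclave idea above can be illustrated with a deliberately simplified toy. This is NOT a real TEE: Intel SGX/TDX and AMD SEV enforce memory encryption in hardware, with the CPU holding the key. Here a SHA-256 counter-mode keystream stands in for the memory encryption engine, so plaintext exists only inside the "trusted" function; the sensitive reading is invented.

```python
import hashlib, secrets

# Toy model of data-in-use protection: outside the "enclave", memory only
# ever holds ciphertext; decryption happens inside process_sealed() and the
# plaintext is never returned. Real TEEs do this in hardware.

def _keystream(key: bytes, n: int) -> bytes:
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def seal(key: bytes, data: bytes) -> bytes:
    """XOR with a keystream; symmetric, so unsealing is the same call."""
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

def process_sealed(key: bytes, blob: bytes) -> int:
    # Inside the "enclave": decrypt, compute an aggregate, discard plaintext.
    plaintext = seal(key, blob)
    return sum(plaintext)

enclave_key = secrets.token_bytes(32)    # never leaves the "enclave"
reading = b"patient glucose reading"     # hypothetical sensitive edge data
blob = seal(enclave_key, reading)        # what untrusted memory sees
assert blob != reading                   # a memory dump yields ciphertext
print(process_sealed(enclave_key, blob))
```

The cold-boot point maps directly: dumping `blob` from memory reveals nothing without `enclave_key`, which in real hardware is fused into or derived by the CPU.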
And certainly facilitating workloads and applications that take advantage of extreme low latency and fast throughput will be important. But there are security implications as well. 5G has many superpowers, but one of them, just based on the 3GPP standard, is the ability to support a massive number of devices, especially IoT. And as we all know, IoT tends to expand the attack surface. So I'd love to pose this: what are some of the security implications, and what can open source do to close some of those gaps? Maybe Priyanka, we could start with you. Yes, absolutely. I think you're 100% spot on: the larger the attack surface, and with more IoT and edge devices that is definitely the case, the more challenging things are going to be. From my perspective, it's about following what I now think of as cloud native principles, but which security pros have probably been thinking about forever: that whole concept of designing for security from day one, and connecting that to compatibility through standardization. That applies not just to designing a CNF, but also to how we approach IoT and edge in general. If we look at the world that is much more hardware-oriented, all the industries in the physical, touch-things space, standardization plays such a big role, and I think that's the direction we all need to take, while bringing in the goodness you get from the software mindset. Something I'm seeing in CNCF a lot is very cool projects coming in that are focused on edge compute and security at the edge. K3s is an example: a lightweight Kubernetes meant for edge and IoT devices, and the fact that it can literally run on the device makes things wildly awesome. 
There are others too, like KubeEdge, and actually every month there's something new and exciting because of the way we accept projects into the sandbox, which is a category of CNCF projects. So I think the number one thing remains designing for security from day one and thinking about compatibility for the long term. Now, having to do that is challenging, right? Because our traditional mindset always tends to be: hey, I sell X type of thing, so let me make the most unique, special thing that I can sell. But the reality is we are entering a stage of innovation where it has to be a collaborative, cooperative, innovative cycle. We cannot build off in silos, and that is more true for IoT and edge than anything else. Any other comments from the panel? Amy? Yeah, I did want to bring up something, and you touched on it, Priyanka kind of triggered it: 5G is intensely software-based. What we really have to think about now is that it's not just about securing perimeters; we have to be seriously considering the software. I talked about how ONAP does that, but this is about vendor software as well as open source. We've really got to be able to do that vulnerability scanning and remove the vulnerabilities that come from having old code running. Think about the impact of an Equifax-style Apache Struts attack in a 5G communication service. So this is my call to action: let's make sure we keep up on current versions, because that's the best way of reducing the security vulnerabilities in my services. That's a great point. And Malini, you wanted to add something there too, right? Yeah, I want to add a few points, and this touches on Priyanka's comment about interfaces. So, more likely than not, your edge is really a cloud. 
You can't rely on one node sitting somewhere saying, hey, I'm going to monitor your nuclear power plant or your thermometer, whatever. And an autonomous vehicle is also a little mobile edge cloud. The thing about all these clouds is they're becoming very Kubernetes-like, so that's your interface, your API. All the standard best practices you're using today with your Kubernetes clouds in the enterprise and the data center will apply for you at the edge. And just as Priyanka and Amy mentioned, projects such as KubeEdge and EdgeX Foundry and OpenYurt, and even K3s, they're all Kubernetes edges, really, and that helps us. So there's Envoy you could use there; there's "I want my container to run as a non-root entity"; and you need mutual authentication between services, from the client to the server, for all your microservice-to-microservice communications so they cannot be attacked with a man-in-the-middle type of attack. Those are all best practices you can leverage; that's our strength. Another thing: I used to be the co-chair for EdgeX Foundry, an LF Edge project. About two years back, before COVID hit and when we still did conferences in person, we had already been working on threat modeling. How are we going to protect that edge cluster from those devices? Are those devices genuine? Or is it a fake thermostat saying the temperature is high or low, or the pressure is safe or not? We need to authenticate at the level of each device, then the edge cloud itself, and then its communication to the data center cloud. So we've been doing threat modeling; it's becoming more of a best practice. And on another point that Amy brought up, the secure supply chain: there are projects like Open Horizon, also an LF Edge project. There's also EMCO. 
All of these are now going to be that monitoring layer on top of your edge clusters, to say: hey, I did a security vulnerability scan, checked the bill of materials. There are projects such as Tern, which checks what's inside your container image. Then there's Snyk, which offers a free service for open source projects to say: hey, there's a CVE against this library you've used. Once you do these sorts of things, you can say: I have a deployment with some XYZ component that is now vulnerable; let me just update it. And you can push that out to all these edges in this monitoring-and-management mode. You can even do canary testing, A/B type testing, and then push your new solutions out to the edge. So I think as a community, we're marching in the right direction to make things more secure. These standard best practices, your supply chain, your bill of materials, your scanning, your monitoring and management through a Kubernetes-type interface, all help us. If I may, sorry, chime in one more time. Specific to IoT edge, the Kubernetes IoT Edge working group has created a security white paper focused on that, so I wanted to plug it for anyone who would like to double-click into this stuff. Some of the concerns they look at are things like physical component access, intrusion detection mechanisms, and how you verify trust for connected devices. So if anyone wants to see what's happening in the Kubernetes community, I highly recommend that paper. It really takes the position that an attack or compromise is going to happen at some point, somewhere in your supply chain, and that means you've got to secure all of the points. Absolutely. 
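The bill-of-materials scanning loop described here (what Tern and Snyk automate) reduces to matching SBOM components against an advisory feed. A minimal Python sketch follows; the SBOM uses the CycloneDX JSON shape (`components` with `name`/`version`), but the component names and the advisory entry are entirely hypothetical.

```python
import json

# Minimal supply-chain check: parse a CycloneDX-style SBOM and flag any
# component matching a known advisory. Package names and the advisory id
# are made up for illustration.

sbom_json = """
{"bomFormat": "CycloneDX",
 "components": [{"name": "libfoo", "version": "1.0.2"},
                {"name": "libbar", "version": "2.4.1"}]}
"""

ADVISORIES = {("libfoo", "1.0.2"): "EXAMPLE-0001 (hypothetical advisory)"}

def check_sbom(doc: dict) -> list[str]:
    """Flag SBOM components that match a known advisory."""
    hits = []
    for comp in doc.get("components", []):
        key = (comp["name"], comp["version"])
        if key in ADVISORIES:
            hits.append(f"{comp['name']} {comp['version']}: {ADVISORIES[key]}")
    return hits

print(check_sbom(json.loads(sbom_json)))
```

In a real pipeline the advisory dict would be a live CVE feed and the hit would trigger the upgrade-and-push-to-edge flow Malini describes.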
We've discussed probably a dozen examples of what the Linux Foundation networking and edge groups are doing; the three of you are participants in many of these and are really making an impact. This has been a great conversation. I want to thank each of you for participating, and I want to thank the audience for tuning in. Have a great rest of the event. Thank you so much. It was amazing being in such good company. Thank you. Thank you. Oh, wow, that was quite insightful. Thank you for sharing some of these insights with us. Awesome. The next keynote is from Constantine Polychronopoulos. He's the VP of 5G and Telco at Juniper and has been in the industry for quite some time. He's going to focus on a specific topic: automation and orchestration in O-RAN for 5G networks. So please welcome Constantine. Hello, everyone. Thank you for joining this session on the disruptive potential of end-to-end network slicing. I'm Constantine Polychronopoulos, Vice President of 5G and Telco Cloud at Juniper Networks, and in this short session I will share with you how Juniper pushes the frontiers of innovation to deliver the big promise of 5G, which is network slicing. Look at where we are today, where the network essentially treats every device the same, one size fits all, versus where we want to go in 5G. Actually, we have already started delivering that promise of 5G by customizing the network user experience on a per-device, per-user, per-use-case basis. This is perhaps the most complex part of the 5G promise, and many analysts, operators and technology companies have called network slicing the catalyst for delivering the so-called Industry 4.0 capability through 5G, which includes, of course, smart logistics, intelligent cities, intelligent homes, and so on. 
If we look at the big picture of what network slicing is all about, it's the ability to deliver complex use cases on the same shared physical infrastructure. At the bottom we see an abstraction of the highly distributed 5G network, which includes potentially thousands of edge clouds that drive the radio infrastructure, hundreds of regional cloudified data centers, as well as national data centers, and for that matter public clouds, the hyperscalers as well. So from the orchestration and delivery point of view, how can we bring all of that together to deliver this complex tenancy that we call a network slice, which involves not only the key components of the 5G network, but also applications that can be slice-specific, to deliver the end user experience as well as to implement the specific use cases? Perhaps the most complex use case is that of an MVNO, a mobile virtual network operator. We would like to believe that going forward, large operators will have the ability to onboard MVNOs fully programmatically, as a tenant, as a network slice, and give the tenant, through a separate dashboard, the ability to manage their own virtual resources. That means a tenant can now sub-slice their own slice, a very powerful model for delivering the promise of 5G. Key to that is the ability to stitch what I call the three semantic domains together, and we look at the three major ones: the radio access, the transport, and the core network functions. And let's recognize that as all of these become containerized, virtualized functions, we have the ability to distribute them. We don't have to have all the core network functions running in the same data center, in the same cloud, right? 
We may have, in some slices, the UPF sitting very close to the radio; in other cases it can be in a regional data center, while the SMF, the AMF, et cetera, can be more of a central function in the network. So being able to stitch together the three semantic domains, which have many incarnations (transport, for example, also exists in the radio), to deliver strong SLAs specific to each slice is a key requirement, a fundamental catalyst in delivering the promise of network slicing in an end-to-end fashion. At Juniper, we leverage the strengths and leadership of the company in transport, as well as our major focus on O-RAN architectures with the Juniper RIC and the Juniper xApps, working with partners to deliver the ability to orchestrate end-to-end use cases fully programmatically in the form of a slice, and to give the tenant the ability to manage certain aspects of their own infrastructure. We look forward to bringing this vision to reality, working with operator customers as well as several partners in the coming months and years. Thank you very much. Thank you very much. All right, let's move to an end user keynote. I'm really pleased to have Srini Kalapala, VP of Technology and Strategy at Verizon. Obviously, everybody knows Verizon; they are very advanced when it comes to sophisticated networks around 5G. Let's hear what Srini has to say. Please welcome Srini. Hello. It's very good to join and talk to all of you at the Open Networking and Edge Summit. We continue in this virtual format, and with that in mind, I've created a few slides to make it easy to follow the talk track. Today, I want to take you through the Verizon journey in terms of network cloud, and how open source played a big role in how we transformed our technologies. So let's dive into the details of our network. 
When we look at the foundation of our network virtualization and cloudification, it relies on a couple of things. One is building a platform that is uniformly distributed across our whole network. We use a nomenclature of core, edge, far edge and enterprise, which effectively tells you where the cloud is: core being the central locations, edge being close to the bigger cities and metros, far edge being at every cell site, and in some cases we deploy this cloud all the way into enterprise environments. And we do all of this while partnering with the likes of the webscale and CSP partners, deploying their cloud along with ours to enable new experiences and outcomes. Now, when you look at the core components of the cloud we deploy, infrastructure plays a key role; on top of it we deploy cloud native and virtualized applications, and above that we leverage orchestration and automation to manage all of these new experiences and new workloads across our footprint. We do this along with third-party developers as well as third-party application providers, so that they can leverage this new, modern, highly featured network along with the innovation they are doing to deliver new outcomes. Historically, prior to 2015 or so, every application we bought on the network side would be a physical network function: it comes with the software and hardware packaged together, and we would deploy them across our footprint to enable the different network outcomes. As we moved towards virtualization, our focus was generic infrastructure, deploying virtualized network functions across our footprint. With that, we got the leverage of deploying standardized common hardware and following standardized common operational models, no matter what application we were virtualizing and deploying. 
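The core / edge / far-edge tiering above is, in effect, a placement decision: run a workload as centrally as its latency requirement allows. A toy Python sketch of that idea follows; the tier names echo the talk, but the millisecond budgets and site counts are invented for illustration, not Verizon's figures.

```python
# Toy placement sketch: pick the most centralized tier whose assumed
# round-trip latency budget still satisfies the workload. Budgets and
# site counts are hypothetical.

TIERS = [                 # (name, assumed round-trip budget in ms)
    ("core", 50),         # tens of central sites
    ("edge", 20),         # hundreds of metro sites
    ("far-edge", 5),      # tens of thousands of cell sites
]

def place(workload: str, max_latency_ms: int) -> str:
    """Return the most centralized tier that meets the latency requirement."""
    for name, budget in TIERS:
        if budget <= max_latency_ms:
            return name
    raise ValueError(f"{workload}: no tier meets {max_latency_ms} ms")

print(place("IMS voice/charging", 60))   # centralizable -> core
print(place("packet core", 25))          # needs a metro edge site -> edge
print(place("vRAN DU", 8))               # must sit at the cell site -> far-edge
```

The examples mirror the talk's own mapping: centralized IMS and charging, packet core at the edge, and vRAN functions at the far edge.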
As we move towards the 5G world, the requirement is not only automating our environments and bringing new capabilities; we also had to realize that the capabilities we are developing are going to be consumed, leveraged and integrated by the webscalers, the CSPs, and all the new contemporary and emerging applications. In that world, we had to ensure that the same principles they operate with, whether it is automated, elastic scaling or the ability to deploy software at a rapid pace, are ones we are also able to evolve toward and meet, given the convergence of the network and cloud worlds. If you look at our environment, the Verizon environment: we have tens of sites with a significant amount of cloud fabric deployed to manage applications such as IMS, voice and charging; these are applications that can be centralized and delivered to all our users spread across the continental US. As we look at data processing and packet processing applications, we move towards the edge, and we have a significant number of edge sites where we deploy packet core applications. And as we virtualize the radio access network, we're talking about deploying cloud infrastructure across the C-RAN and D-RAN sites across the footprint, reaching tens of thousands of sites across the US. Not only do we deploy our core fabric, we also deploy 5G edge; in our case, we've deployed with our partner AWS. Now, this consistent cloud fabric we're deploying across the entire footprint gives us not only the consistency of a common fabric no matter where you're looking, but also, depending on the workload requirements, the ability to move workloads from one point of the network to another. 
The fact that we have a common fabric and a flexible orchestrator implies that we're able to actually move these workloads from one point to another. When we look at future use cases, whether it is local-breakout use cases where the traffic has to be taken off the network and processed at an edge site close to a C-RAN site, or processing that has to be done close to an edge site, it's the same application that can be deployed on demand anywhere in the cloud fabric across the network, because the application gets the same consistent experience no matter where it is executed. That is the sort of flexibility the evolving architecture provides us. Now, I've been taking you through, over the last three slides, the kind of network evolution and cloud evolution we are working on. Where does open source fit in? Where does open networking fit in? Let's get into the details of what these different cloud deployments actually contain. Whether it is core, edge or far edge: for VMs, it is OpenStack; for containers, it is Kubernetes. Then we look at things like Redfish to allow us to manage these highly distributed deployments across tens of thousands of sites. On top of that, if you go deeper, you notice that we actually leverage much of the software that the webscalers and the open source communities are making available. Take the example of Helm: in the world of VNFs, we had to deal with VNF managers, Heat, and more configuration, more complexity. With containers, Helm gives us full flexibility in lifecycle management of the application on top of container management. When you look at the CI/CD pipeline, Git and others allow us to automate deployment across our entire footprint. Again, the roots of all of this come through open source and the open source communities. 
So underneath all of this, as you can see, the cloud stacks that we use to deploy all of the latest 5GC and other applications, vRAN and others, are built and put together from components that are open source and components that others are using, whether in the webscale or other communities. But as you move to the vRAN side, the virtualized radio access network, a couple of other requirements start emerging. Not only are we now deploying cloud, we're deploying cloud across tens of thousands of locations. With that comes the realization that we won't be able to use the same common models of CI/CD to deploy across all of these locations; at the scale we're trying to operate, it becomes next to impossible. So we worked with the communities again to look at how you create architectures with centralized controllers that manage the lifecycle of the vDU, in this case the container or payload, as well as the sub-cloud environments, both for lifecycle management and for collecting monitoring and other data, managing across these large footprints. Now, the network does require certain special accommodations, certain additional capabilities. For example, we are natively IPv6. That means every piece of software that we use has to come with IPv6 support. And as we embarked on building a webscale infrastructure, we had to work with our community partners to figure out a way to accelerate IPv6 development across the whole stack. We also need things like SR-IOV for efficient data processing and speeding up packet processing. And there are certainly scaling requirements: we consistently talk about clusters that go beyond 500 nodes, given the volumes of data and processing that we go through in our environments. So, I've talked a lot about open source, and we've seen a number of logos on the previous slides.
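As a small aside on the IPv6-native requirement above: a quick way to check whether a host can even participate in an IPv6-only deployment is to probe the stack directly. This is a generic illustration of ours using only the Python standard library, not part of any operator's toolchain.

```python
# Illustrative IPv6-readiness probes (not operator tooling): can this
# host create IPv6 sockets, and does a name resolve to an IPv6 address?
import socket

def supports_ipv6() -> bool:
    """True if the Python build and the kernel can create IPv6 sockets."""
    if not socket.has_ipv6:
        return False
    try:
        s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
        s.close()
        return True
    except OSError:
        return False

def resolves_over_ipv6(host: str, port: int = 443) -> bool:
    """True if the name resolves to at least one IPv6 (AAAA) address."""
    try:
        return len(socket.getaddrinfo(host, port, socket.AF_INET6)) > 0
    except socket.gaierror:
        return False

print("IPv6 sockets available:", supports_ipv6())
print("localhost over IPv6:", resolves_over_ipv6("localhost"))
```

In an IPv6-native network, a check like the second one would be part of admitting any new software component: if it only resolves and listens over IPv4, it cannot be deployed.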
But in summary, if I were to say what open source does for telco, especially as we drive the cloudification and virtualization of our environments: it provides seamless interoperability between us, who are creating the infrastructure, and our application providers, who are our network function suppliers. We both have a common understanding of what to expect in each other's environment, and that reduces integration complexity. It is also allowing new players to enter this environment, whether cloud infrastructure providers or others, to innovate and deliver both applications and infrastructure models. Because of this broader community coming in, we're seeing innovation accelerating. In the private network world we see many more suppliers and creators innovating and building new products that benefit the telco. We're also seeing that this is allowing the convergence of telco and internet infrastructure, the webscale infrastructure, and because of that, whether it is MEC or the newer capabilities that we're working on, they become somewhat easier to deploy and integrate with. Certainly, open source and open network functions are playing a big role as we move towards fully cloudified and cloud native environments. So far, I've talked a lot about how we leverage open source and where these open source technologies are playing a bigger role in our environment as we shift towards this emerging technology domain. That said, all is not completely well. There are certain needs that we have which the communities and the broader open source forums will need to look into. One of the challenges we face is that in any domain we look at, there is a lot of forking. There are multiple groups working on similar kinds of programs, splitting the users, creating, perhaps in some ways, multiple flavors, and causing suppliers and others to divide their resources across many of these.
For example, if you look into SDN controllers, ONOS versus OpenContrail versus ODL, you have multiple streams. In a telco environment, it does become a challenge which one you pick, where the support is, who is actually continuing to innovate, and who will help us bring in some of the requirements that we the telcos have. The other point I'll make is that telco is driven a lot by standards, and open source is giving us a way to create what you could call a pseudo set of standards. We need to see these two worlds come closer. If you look into O-RAN, it is about creating an abstraction at the CPRI layer so that we can accelerate innovation and allow new players to come in. But at the same time, we need to see the likes of 3GPP and O-RAN come much closer, so that as the newer 5G Advanced and 6G standards emerge, they come with these interoperable O-RAN models out of the gate. I can give another example: today we use HTTP/2 and JSON, among others, as protocols for all the software interfaces. We would like to use gRPC, but again, standards haven't caught up to make that one of the options available for us to work with. So we do need to see these two come together. Scaling becomes a bigger ask in our environment, whether it is data throughput scaling or the amount of compute that we deploy at different locations. On one end, we're talking about 500-node clusters. On the other, we talk about deploying servers at a radio site where, at most, we would like to see one to two cores used for the platform layer and everything else left for the application itself. If you look at it overall, as I've taken you through the journey of Verizon and how we're deploying our network cloud, what you realize is that we're working very closely with the open source communities as well as the broader software community, and we're benefiting from a lot of the innovation coming out there.
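To make the HTTP/2-plus-JSON versus gRPC point concrete: gRPC carries Protocol Buffers, a binary encoding whose schema is agreed out of band, while JSON repeats every field name in every message. The toy comparison below is our illustration; the `struct` packing merely stands in for a protobuf-style binary wire format, and the counter fields are invented.

```python
# Rough illustration of why binary, schema-driven encodings (as used by
# gRPC) appeal to telcos over JSON: the same counter sample is far
# smaller packed as binary than serialized as a JSON document.
import json
import struct

sample = {"cell_id": 884211, "prb_util": 0.73, "drops": 12}

# JSON: field names and punctuation travel on the wire every time.
json_bytes = json.dumps(sample).encode("utf-8")

# Packed binary: unsigned 32-bit id, 64-bit float, unsigned 32-bit count.
# The layout ("!IdI") is the out-of-band schema both sides must share.
bin_bytes = struct.pack(
    "!IdI", sample["cell_id"], sample["prb_util"], sample["drops"]
)

print(len(json_bytes), "bytes as JSON")
print(len(bin_bytes), "bytes packed")  # 4 + 8 + 4 = 16 bytes
```

At the message rates a packet core or RAN interface sees, that per-message overhead, plus the cost of parsing text, is why the speaker wants the standards to admit gRPC as an option.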
With that, I hope you got a full view of how we're driving both the edge and the cloud evolution within our environments, and we hope to see some of these changes become reality faster in each of the communities that you're working in. Thank you. Well, thank you very much. It's always great to hear from end users and operators, because open source networking is all about end-user-driven innovation. And we have come a long way in the last five years thanks to operators like yourself and others. Okay, so let's move on to the next keynote. We're very pleased to have the largest service provider here, China Mobile, providing a keynote and an insight into their progress. We have Dr. Junlan Feng, chief scientist and a general manager at CMCC, China Mobile Research. Clearly lots of patents, lots of publications, lots of awards, so let's hear how China Mobile is taking advantage of this great new open world of open source. Please welcome Dr. Junlan Feng. I feel like I'm performing because I don't see the audience. Hopefully we can meet in person sometime in the future, hopefully not far from now. I'm Junlan Feng from China Mobile. The title of my talk today is AI for 5G and 5G for AI. It's about network intelligence, and the title is also a big strategy for China Mobile. Both 5G and AI are very hot topics. They're hyped. No matter whether you watch TV, read an article or chat with your friends, these are words which appear everywhere. But we believe they're not just hype. They are game changers. They will fundamentally change our life in the future, beyond being hype. So now let's talk about whether there is some synergy between 5G and AI. First we can look at the top features of 5G, and probably of any previous generation of the telecom network. It's large scale and it's reliable, reliable service no matter whether we provide it to individual customers or business customers. And it's very systematic.
We have thousands and thousands of standards to ensure the entire network, across different layers and different generations, can work hand in hand. And it's easy to deliver. As for AI, it's broad intelligence. Modern AI technology involves a broad set of topics: for example, perception intelligence, cognitive intelligence and decision intelligence. And it can take any kind of data: natural language processing on any piece of text, speech, video, image, and any other type of data. So it's quite broad, and it has pushed intelligence to a level which, five years back, we could probably never have imagined. And it's end to end. End to end is a main feature of current AI technology. For example, if we build a speech recognition system, you just need to give the utterances and the corresponding captions, and then you can train a speech recognition system end to end. This kind of setup is universal now. And it's quite flexible. You can deploy AI services and upgrade those services anytime, and the AI models will learn on their own; all the parameters that control an AI model can be adjusted in the way the business wants. But there are always pros and cons, and there are some features we probably don't like much. For the network, 5G is quite complex. And we not only have 5G; we have 2G, 3G, 4G and 5G all together in a very complex network. Complex means that it spans different generations, involves many layers, and involves millions of network devices, elements and units. It often leads to high costs to maintain and operate the network. And it's quite dynamic, which is a basic feature of wireless service, and that makes it harder for us to truly predict the features, the data usage, the possible problems in the network. This uncertainty makes operation much harder. And nowadays we have to make 5G work better for diverse business needs, which are quite different from those of 4G.
So that also challenges our way of operating the network and capturing the main features, the main standards, of the network. As for AI, it's hard; it often involves high costs to develop and deploy. The core of AI is probably a piece of algorithm, a big model, but once you deploy it into real scenarios it involves lots of connections with other systems. It's hard to train an AI model designed for a certain scenario, and it's an even further challenge to deploy it at a larger scale. So what can we do? We believe they can help each other, and that's why it's 5G for AI: the way we deliver 5G, we think we can borrow the same mechanisms to deliver AI. That's a big help for AI if we can make AI applications go to a larger scale without much cost to pay. The other way to look at this is AI for 5G: for all the network challenges we are facing today, we think many of them can be helped by AI, by intelligence technologies. So now we give a definition of the intelligent network. It's a network empowered by AI technology and the systematic integration of AI with network hardware, software and systems, to realize intelligent O&M and intelligent services, improve the quality and efficiency of the network, and empower industry digitalization transformation. On this slide, I want to share big news from China Mobile, which we made public back in July of 2021. China Mobile set a big goal for 2025: to realize a level-four autonomous network. On the right of the screen is what level four means, as defined by the standard. On the left of the screen is how the logical architecture looks for the intelligent network, the autonomous network: there are a few closed loops, for example a customer closed loop, a service closed loop and a resource closed loop, to make sure we can autonomously manage the network.
And there are four layers: the network element layer, network management layer, service layer and business layer. We hope we can work with you and this community together to reach this objective. As for how we're going to do it, there are a couple of steps: survey and analysis; standards and open source; developing the technology and doing the fundamental research; constructing the platform; developing and deploying applications; and scaling the promotion and ecosystem operation. This is the survey. It was conducted by the LFN EUAG and was initiated by China Mobile. We got 60 responses from 65 entities, involving 20 telecom operators, 10-plus digital service providers and 30-plus vendors. There are many numbers here; I don't want to go through them one by one. The overall takeaway is that although network intelligence has been a hot topic, we conclude from the survey that we're still at a preliminary stage. That's what shows on the top left of the graph. For R&D, we were slightly surprised that most operators are willing to develop, or to work with vendors to develop, the network intelligence software and hardware; this is how the distribution looks, and I assume with time these numbers may look different. On the bottom is the application and research stage, which also shows which areas operators are most interested in, and the bottom right summarizes the top challenges. For standards, you can see the distribution here: we have close to 200 standards related to network intelligence, and the number one area is function- and interface-related standards. There are many others. Here we can see where the standards bodies have invested their discussions and research. We're happy to be a big part of it: China Mobile has been leading, and has participated in, more than 40% of these.
And here we call for further collaboration in standards, involving the service layer, operation layer and infrastructure layer, and also the AI core algorithms and AI operation itself. We also hope we can think more about how to connect, how to build the bridge between open source, standards and the business needs, the business drivers. We need to do more work to close that loop. The third one: we think in order to develop faster, we have to share more, share computing resources, environments and part of our assets. For the future, we think there are two directions we don't emphasize as much as we probably should. One is the AI-native network, which is also a main feature not of 5G now, but of the 6G discussion. The second one is going beyond the typical scenarios of 5G, like eMBB, URLLC and mMTC. What we're proposing here is that we can make 5G work better for intelligent computing and services. This also goes back to the title of this talk: 5G for AI. And you see those blue boxes, which are the standards involving the computing network. If you're interested in learning more details about network intelligence at China Mobile, our strategies and our implementations, these are the talks you can go to; if you already missed them, you can go back and watch the videos online, on the requirements, service, intelligent operation and infrastructure layers. And with that, I conclude my talk and thank you for your time. I hope you learn a lot. Bye-bye. Thank you very much. That was very useful and helpful, I hope, to all the attendees who are listening and will listen later on as well. Our next keynote is from Bill Ren. He's the chief liaison and open source officer at Huawei, and he has been a very active governing board member for several of our open source initiatives here in the Linux Foundation.
And I'm very pleased, as I'm sure he has some very exciting news to share, as Huawei takes a leadership position in several of these projects and contributes back to the community. So please welcome Bill. Hello everyone. This is Bill Ren from Huawei. I'm very glad to attend this Open Networking Summit and very glad to share our thoughts about open source for 5G. My topic is EdgeGallery ecosystem building: 5G MEC and deployment practice. First of all, as you can see, 5G is a cornerstone of the fourth industrial revolution, as we all know. And in the 5G core network, the MEC/MEP platform plus the UPF will be the unified and open MEC environment for opening 5G capability to the industry. So just as 4G changed our life, how will 5G change society and industry? Collaboration of connection and computing becomes more important. This will enable industry digitalization. But how does 5G drive digitalization? From a technology and business point of view, we have to say there is a huge gap between the 5G MEC industry and real industry needs, such as heterogeneous hardware, application innovation, applications in different scenarios, as well as the business model. So we have to first have the right assumptions and targets; then we can find the right way to achieve the goal. How will the 5G MEC industry meet the industry's needs? How many users are supposed to deploy this application environment, 10 or 1,000? And finally, how many applications are we talking about for our ecosystem? Is it hundreds, or thousands, or millions? And how many developers should finally be educated to develop that software? From my understanding, and as some industry pioneers have recently been thinking, there should be millions or even half a billion applications developed in the next 10 to 20 years for our intelligent society. So digital transformation somehow depends on the number of developers in this industry.
That's why we think we should have a huge number of developers in the 5G industry. The open source book The Cathedral and the Bazaar gives us a good indication: open source is the most effective means to build the sustainability of software. We might think a closed-source solution would be more reliable, easier to build, and also very beneficial to the vendor, but it will be very slow to iterate, the maintenance period will be very long, and the developers who can be involved are very limited. Open source, by contrast, has a huge acquisition and spreading efficiency to reach different scenarios. This matches 5G's openness demands very well. That's why open source enables fast iteration and more participants being involved, and this can close the huge gap between 5G MEC and industry needs. That's why we think open source should also be useful for opening up 5G MEC capability. One year ago we launched the EdgeGallery open source project. Our initiative and target is to extend and open 5G capability to the edge. That's why we brought the developer, the trader, the tenant and the operator together end to end, from the developer environment to the application repository and on to the production environment for operators as well as enterprise users. There are many scenarios, like smart campus, industrial manufacturing, transportation, gaming competitions, as well as security; there are many use cases I will introduce later. Over the past year, the community and the developers have done very well, with many mature achievements, and EdgeGallery has become the most popular open source community in the 5G area. First, EdgeGallery has been downloaded more than 18,000 times, and in every release more than 10,000 commits and 6,000 PRs have been submitted. EdgeGallery developers have also reached more than 370, of whom more than 100 are core developers.
This is already a huge number in our tech industry. The EdgeGallery website has been visited more than 350,000 times, from across more than 30 countries. All this has been achieved through meetups, technical salons, hackathons and panels at the community level. That's why we believe that leading the EdgeGallery community to build a prosperous 5G MEC application ecosystem has become possible. Although it's still far from our target, we believe there should finally be more than 10,000 applications developed in the industry, so we need bolder targets and bolder actions to be taken. The key value of the EdgeGallery platform is to quickly introduce and integrate industry applications for users to deploy. End to end, we start from the base station and core network side, which is the edge of the network. We introduce an industry application to the platform, start the integration preparation, do the certification, and do the integration test. Then we release the application to our application repository, and finally it can be easily introduced and promoted to the application store for the operator's business environment. Here I'll give two use cases in China. China Unicom has successfully leveraged the EdgeGallery platform to build their 5G MEC ecosystem portals. They quickly built this system on the open source platform within one month, which was impossible before, and they do innovation incubation and close the business loop together, based on the developer ecosystem of open source. This enables their enterprise business unit and sales teams to more easily develop business customer and application engagement. As for China Mobile, they also plan to launch their 5G hackathon based on the EdgeGallery platform, and this open source platform can be easily integrated with their self-developed OpenSigma platform.
So this shows us how open source is already really in the production environment, in operators' production launches. Finally, I would like to use this page as my summary. For the industry, building a common understanding and a common target is already very difficult, but what's more difficult is to take action and make the target achievable. EdgeGallery has taken the first step: we plan to use it as an innovation base for hatching applications from zero to one, but we still need more partners, operators and enterprise customers, to build more incubation bases as well as more practice bases, to do the one-to-many promotion of applications as well as large-scale replication for our industry. I believe when we join hands together, we will finally achieve our target together: to build a more prosperous ecosystem for 5G. Thank you. That's all of my presentation, and I wish you all a successful summit. Thank you. Great announcements and news and progress. Thank you very much. All right, so now on to our final keynote. We have, you know, gathered everybody around our mission, which is global connectivity. Whether in a pandemic or outside of one, that's what networks do and that's what networking is for. So what better than to have the director of engineering at Facebook, Shah Raman, and one of the end users, Reg Orton, CTO of a company called BRCK. What they will be covering is how to enable global connectivity and bring the next billion people online, which is Facebook's mission, and, you know, earlier this year they contributed a project called Magma to the Linux Foundation, which has been showing great momentum. So let's hear it from two of the leading experts. Please welcome Shah and Reg. Thank you. Hello, my name is Shah Raman, and today I'm going to talk to you about Facebook Connectivity. I have Reg Orton with me from BRCK, where he oversees all things technology.
We hope to share with you how we are working towards connecting people and bringing them online around the globe, much of it through open source technologies. To start off, we would like to discuss our approach and how we're actually bringing people to a faster internet. At Facebook Connectivity, one of the key focus areas for us is figuring out how we can directly impact and reduce network TCO through widespread adoption of open source technologies. The telecom industry has one of the lowest adoption rates of open source technologies today, although there is almost no product in the industry that does not leverage open source components in commercial products. So to kick things off, let's start with some good news. Connectivity across the globe is improving, and improving pretty well: at this point in time almost 60 percent of individuals are connected to the internet, and when we look at the cellular network side of things, 4G and 5G connections are increasing. 4G connections, for example, are expected to go from 46 percent today to 59 percent by 2023, at which point they will probably taper off and remain at that level until 2025 before starting to decline a little. On the other hand, 5G, which has been the popular, latest and greatest of the cellular generations, has been picking up pretty nicely over the past couple of years and currently stands at about 11 percent globally. That number is projected to increase to about 21 percent by 2025. So as we can imagine, there are major network upgrades currently under way, with most operators around the globe busy building 5G networks, further accelerated by the pandemic, and the pandemic has also caused the emergence of newer models around work, education, entertainment and so on.
So we want to start off by stating how access to reliable connectivity has become essential to the livelihood of many people around the globe. The pandemic has compelled people to use the internet more, and for a wider range of activities than ever before. Most countries saw gains in internet inclusion, driven largely by improvements in availability. Internet inclusion research shows that the percentage of people pursuing education over the internet alone has jumped from 60 percent to 75 percent across the low to upper income brackets; that's almost a 15-point jump from where we were. At the same time, unfortunately, the pandemic may have widened the divide between the online and offline populations. This is creating a new kind of digital divide, which is not good for the industry or the ecosystem. It also has massive implications for developing and emerging countries, and as we look into this a little deeper, what we see is that data consumption is skyrocketing, ARPU is declining, and OPEX is remaining relatively flat, given that network operators continue to manage their networks in more traditional ways. So it's not hard to understand that with all these factors combined, mobile operators are going to struggle to invest in the network capacity needed to keep up with demand, which means that even with all the 5G investments in full flow, total data demands are likely to continue to outpace those investments. At Facebook Connectivity we are taking a holistic approach to help network operators and service providers bring network TCO and overall economics down. I'm sorry, I might have advanced to the next slide already. So from identifying the underserved population to planning and developing sustainable business opportunities, our teams work with the Linux Foundation and develop various open source tools.
We see a world where the majority of networks become software defined and software centric over time. Open source software will play an instrumental role there. On the other hand, we work closely with the Telecom Infra Project, or TIP, which is one of the Facebook-initiated industry-level initiatives, to drive those technologies downstream to the vendors as well as the operators. So let's take a look at a couple of examples. First and foremost, I'd like to introduce TerraGraph, which many of you may know. We developed the TerraGraph technology over the past six years, working closely with Qualcomm and several other OEMs. Despite much industry skepticism, several products were released last year and this year, and they are rolled out in tens of networks running thousands of nodes. TerraGraph is delivering on the promise of wireless fiber quite successfully so far, and the TerraGraph software stack is provided by Facebook as open source to OEMs, totally free of cost. TerraGraph is a complete platform for millimeter wave networks that includes planning tools, a network controller, routing and forwarding planes, as well as a modern network management system. It is a key example of how open source technologies are driving global connectivity forward, especially when combined with proprietary technologies; for example, transmission of data over unlicensed millimeter wave is a proprietary technology. Another example, closely related to TerraGraph, is the OpenR routing stack. It was originally designed and developed for TerraGraph mesh routing, built upon the concepts of link-state routing and closed network use cases. It was designed with a set of principles that includes modularity, abstractions, a small code footprint and async communications. Once again, OpenR challenged the norms of traditional routing protocols, which are greatly influenced by the routing functions alone.
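Since OpenR builds on link-state routing, it may help to recall what that computation looks like: every node floods its adjacencies so that all nodes share the same topology database, and each node then runs a shortest-path-first (Dijkstra) calculation locally. The sketch below is a generic textbook illustration in Python, not OpenR code; the mesh and link costs are invented.

```python
# Link-state routing in miniature: given a shared topology database
# {node: {neighbor: cost}}, each node runs Dijkstra locally to find
# its least-cost distance to every other node.
import heapq

def spf(topology: dict, source: str) -> dict:
    """Dijkstra shortest-path-first; returns cost from source to each node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, already found a cheaper path
        for neighbor, cost in topology.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# A small mesh with two paths from A to D; SPF picks the cheaper one.
mesh = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 1, "D": 5},
    "C": {"A": 4, "B": 1, "D": 1},
    "D": {"B": 5, "C": 1},
}
print(spf(mesh, "A"))  # D is reached via B -> C at total cost 3
```

In a mesh like TerraGraph's, the value of this design is that when a link fails, only the updated adjacency needs to flood; every node then recomputes routes independently, with no central routing decision on the data path.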
It powers mesh routing and enables TerraGraph networks to run smoothly even with hundreds of nodes in a single deployment. This is key for allowing fiber-like expansion over the air where fiber cannot be trenched or cannot reach certain geographies, such as urban canyons, dense settlements and concentrated venues. OpenR is not only a key element of the connectivity protocol stack powering TerraGraph; it's also leveraged by Facebook data centers and the backbone for routing. It is currently open sourced as a Facebook project, except for just a few modules, like the BGP speaker, for example. Another example that I'd like to highlight is Magma. We have developed Magma over the past five years as a redefined packet core platform. It can act as the network backend and brain for building cellular networks for non-traditional use cases. The packet core has been serving as the bridge between radio networks and the internet since the 2.5G time frame. Over the generations, different network functions were added to the core, along with protocols and interfaces, as the complexity of those networks, like 4G or later 5G, just grew and grew. Magma took a different approach, addressing those networks that do not need the growing complexity with a distributed architecture and local breakout. Magma is cloud native by design, horizontally scalable, and offers 100% programmable core network functions. It is suitable for a wide range of use cases: fixed wireless networks, which could be either LTE or 5G; private networks, once again either LTE or 5G, or CBRS in the US; or Wi-Fi distribution. The Magma Core Foundation was established with the Linux Foundation earlier this year to spark further growth and development of the community.
I invite and encourage you to check out the community, which is growing at a rapid pace; that's what the Magma community is about. And as we talked about with OpenR and TerraGraph, we have different open source projects that we're working on with the Linux Foundation, and I invite all of you to take a look at some of those and see how they are helping advance global connectivity. With that, I'd like to hand over to Reg, who can share some real-life connectivity challenges and stories right from Africa. Thank you. Yeah, thanks, Shah. As introduced, I'm Reg Orton from BRCK. We're a team of software engineers, hardware engineers, network engineers and deployment specialists based in Nairobi, Kenya, and we've been working to solve the connectivity gap, specifically across Africa but also globally. As Shah mentioned, there are many, many people in this world who are not connected. Shah talked about 60 percent; we think the number is almost a billion in Africa, you know, three billion globally, who really can't afford to get connected to the internet on a regular basis. Now that number is decreasing, thank goodness, as we see greater numbers of smartphone subscriptions; we're seeing better phones, smarter phones out in the market. Even in the rural areas, we're seeing a growing adoption of smartphones. So we see the world in two ways. We see it as a problem of access, which is some of the stuff that Shah was talking about with Magma and OpenR and TerraGraph, and which is still a huge issue. We also see it as a problem of affordability, where users, well, they can get access to some of these networks.
but they really can't afford to use them on a regular basis, or can't afford the speeds they need in order to access the content they need for their work, their education, or their health, or to really embed themselves in connectivity and the tools that are out there now. And what we see is these economies beginning to lag behind when they can't get connectivity. When they can, we see this amazing leapfrog effect, where technologies can emerge that didn't exist before. How are we solving the affordability gap? We've developed a platform called Moja. Moja is a free-to-consumer Wi-Fi platform. We talk about Wi-Fi, but it's not only Wi-Fi; we've actually done some work with the Magma team and some of the open source tools there to bring this platform to LTE as well. And using this technology, we've connected more than 2 million users to the internet. We've deployed in places like buses, the matatus you may have seen on TV, these painted-up, loud buses driving through Africa, so we can be on those buses where people need access and have the time to get access. We've deployed in fixed locations, and we're now working on some mesh technologies that allow us to deploy in some really interesting high-density environments. We've actually deployed across three different continents; most of our work has been in Africa, in Kenya and Rwanda. And we're now also looking at different areas across Southeast Asia and Latin America, where the need is similar: this need for affordability and greater levels of access. So how does Moja work? Moja works by shifting the cost burden away from the user and onto a commercial entity, an NGO, or another large entity. And this is done through what we describe as digital work.
Digital work is a way that users can engage, whether that's with connectivity, with content that they need or desire, or with informational content that people would like to give to them, in exchange for doing work. Work isn't limited to just content and advertising. We've done work previously with AI partners who want to do digital training, digital micro-work, on our platform. And through that, we can start to engage users in the digital economy. We can start to bring them to a place where they can earn credit, effectively money, that they can spend on connectivity. We also allow users to buy connectivity at decreased rates; we can actually deploy lower-cost connectivity than maybe other people can, because of some of the technologies I'll talk about. And that all brings us to a point where we can start to engage those users, who typically may never be able to afford connectivity, in this connected world. I mentioned this problem of access, and Shah mentioned it as well. We've actually been working in a number of different ways to improve not just access to the internet, but access to content where it's needed. One of the examples here is this device, the SupaBRCK. The SupaBRCK was a partnership between Intel and BRCK, leveraging a lot of open source, edge-connected technologies, specifically OpenWrt, which we were using as our core OS, as well as a lot of interesting caching technologies. And what we can do with this device is start to move applications and content from data centers that may be sitting in the US or Europe, or maybe some emerging data centers now coming up in Africa, right to the user or very close to the user.
So we can start to do things like digital education, where we're serving content to many users at once. Even when you do have connectivity, we often find that it's high latency, or maybe only a few megabits in capacity, and when you're trying to educate 30 or 40 students at once, or even hundreds of students, that pipe becomes very small very quickly. That's where we think about moving content to the edge. And these are some examples of places we've deployed. This is looking at those buses, where we've deployed applications into the bus so we can give a high level of user experience. We've deployed education content; this is a school in Rwanda that previously never had even 2G coverage. We actually worked with an operator there that was bringing 2G into the area. They had limited capacity, so we provided this caching technology to bring this education content directly to the school. The most recent one is a project we're doing right now in Kenya, leveraging some open source technology from Facebook called SOMA, or FB Mesh, where we've taken commodity, very low-cost hardware and worked with the Facebook engineers, and this is all open source now, to build a very high-reliability, high-density meshing tool. It brings in some of the Open/R technologies that Shah mentioned and some of the key technologies from TerraGraph, but here we're lowering the cost significantly so we can start to deploy interesting meshes. Where we're deploying is Mathare Valley in Nairobi. There are 600,000 people there living on a very, very low fixed income. There is mobile operator coverage in this area, but we see a very depressed level of usage of that connectivity, and where it is there, the capacity isn't high enough.
It's hard for operators to invest in that capacity when they can't get the returns, as Shah mentioned. These are some examples of where we're working. This is maybe a corner case for many network engineers or many people deploying in the West; however, it's not a corner case in the markets we work in. This is a very common sight for us: large, high-density areas with people living very close to each other, but very unreliable infrastructure. And this is where things like the SOMA mesh and some of these technologies, which we can integrate as a relatively small company, can be brought to this population and provide some real, tangible value. So where are we going with this? Obviously, now that we're talking about connecting people and giving them access and affordability, we can start to bring these together, and we're working on a new initiative called Cloud Shule. Cloud Shule is where we bring the content and the connectivity right to the edge. We're working with some really interesting backhaul providers to start to bring true education down to users for whom even getting access to books is hard in some of these environments. So you can see how these open source technologies, these tools that Facebook Connectivity and others inside the Linux Foundation have been developing, start to culminate and really form a very tangible use case in the emerging and frontier markets. I hope that's provided some wonderful context, and thanks, Shah, for the introduction. Thanks so much, Reg. It's been a great pleasure to share some of the things that we're doing at Facebook Connectivity, developing and evangelizing some of these open source technologies with the Linux Foundation.
And it's been a real pleasure working with partners like yourself as you're moving connectivity forward; it's really a big challenge, with a lot of components to it. And you're driving that in, I would say, the exciting place of Africa, so it's great to hear. Thank you so much for sharing. So for our audience, we'd like to invite them to check out some of these open source projects, and as we're working closely with the Linux Foundation, any thoughts on how to invite them? What do you think could be exciting for them, Reg? Yeah, certainly. Facebook Connectivity has been amazing in some of the initiatives you've built around TIP and some of the infrastructure groups, and the Linux Foundation engagement with Magma. We were actually one of the very early developers on Magma, and it was a very exciting time back then, and the GitHub repo is also just a very deep wealth of information. I think one of the things that is great about the way Facebook Connectivity has thought about this world is that, by providing us these open source tools, we can start to work on some of these use cases that may not be obvious to some of the more traditional operators or technology providers. And that's a very enabling thing for us. So engaging with this open source, building it out, and supporting the open source efforts is a really wonderful thing and helps us in the long run as well. Yeah, that's awesome. So I'd like to extend that invitation to all the attendees here today. We're looking at this Open Networking and Edge Summit and the Kubernetes on Edge Day, and you can see that most of the open source projects that we're developing today have some flavor of Kubernetes.
So check out the GitHub repos and what we're doing with the Linux Foundation, as well as Open/R, which is still a Facebook project but is open source. We look forward to seeing you there, and I'd like to wish you all an enjoyable rest of the Open Networking and Edge Summit. Bye-bye. All right. Once again, thank you for joining us at the keynotes. I'm sure you all learned a lot; I did. Please don't forget to visit the sponsor showcase: connect with the sponsors, chat with them, see the demos. And again, a big, big thank you to all our sponsors; we couldn't pull it off without you all. Really appreciate it. The sessions begin at 11 am Pacific. And hopefully we will be able to do real events in person next year; let's hope so. But I'm sure we've made the best use of the current scenario. Thank you all for listening to the keynotes. Bye.