and welcome to the Open Networking and Edge Summit and Kubernetes on Edge Day. I'm your host, Arpit Joshipura. And I'm really excited to say that this is the 10th anniversary of this amazing show, a community-driven event that brings technology and business executives together. In about 10 minutes, I'm going to cover one year of progress and three years of predictions, so I'm calling it the State of the Union of Open Collaboration. Before we get started, I do want to thank our sponsors: our Diamond sponsors, Intel and Juniper; our Platinum sponsors, Huawei, IBM, SUSE, and Sedeta; and our Gold, Silver, and Bronze sponsors. A really big thank you. And without any further ado, let's go ahead and get started. I have five things I'm going to talk about today, the first one being the impact of COVID. We're all here virtually because of it, but there is a silver lining: connectivity rules, open source networking skills are in demand, and, best of all, you, the developers, are more productive than ever. I'll show you the why, the what, and the how. Let's start with connectivity. 5G is going to be the dominant technology over the next five years. It's drastically different, very, very different. It brings major value to vertical industries, there are millions and millions of places to install it, the pricing has to be just right, and it's much more complex. And we know that open source is the only way to provide the economies of scale it needs. In a pandemic like this, where global connectivity rules, this is a very important initiative. Looking further out, you'll see amazing applications built on 6G: peak data rates of 1 terabit per second, 10 times the device scale, and a whole new breed of applications, which I will not even dare to predict; there are experts who can do that far better than I can. But the key here is that we are on an amazing trajectory of network connectivity, which has become even more important in recent years.
Now couple that with a pain point, or, looked at the other way, an opportunity: open source networking skills are in demand. They're among the top three skills IT management needs, cloud/containers and Linux being the other two. So if you're in this field, which I'm assuming you all are, you should be proud of yourselves, but you should also upgrade your skills by taking more courses and making sure your DevOps skills and your open source skills are up to date. We're really excited to bring you this report; you can download it on the Linux Foundation website. The other thing that has just happened: based on our LFX Insights platform, we've found that open source developers are 22% more productive in the 18 months since the March 2020 cutoff. Developer productivity has gone up almost 7,000-plus lines of code per developer. So we're really excited, and we hope the tools, the platforms, and the neutral governance help you accelerate innovation even further. As some proof of that, you can see that both LF Networking and LF Edge, which are umbrellas of the Linux Foundation, have seen huge growth in both contributions and commits over the last three to five years, and we are really excited to see the growth continue. So that's the macro picture. The second thing I want to highlight is that deployment, or consumption, of open source has become ubiquitous. Across the architecture diagram, from Open Horizon to Home Edge, Fledge, and EVE as the edge projects, moving into EdgeX and Akraino, all the way up the stack with Magma and ONAP at the top, these open source projects are being consumed and deployed as we speak. As an example, we have taken some highlights from the recent press: you see Deutsche Telekom deploying ONAP, or Orange taking advantage of it, or AT&T, China Mobile, Verizon, IBM. I mean, these are just a few.
You will hear from the executives at these worldwide institutions over the next couple of days, talking about how they have made progress and how they have accelerated innovation using open source. Now let's get to the news. In Linux Foundation Networking, we have a major announcement, and that is Walmart moving a production-grade networking project called L3AF into the Linux Foundation. L3AF will be announced by Kobi, who is going to speak right after me, and you should hear his discussion of the why and the what. It's a full kernel-function-as-a-service platform. It's open source now, supported by a variety of ecosystem partners. Kobi is on the board of LF Networking, and we are really excited to host this project. Along with that, we have another project to announce: EMCO. EMCO is also joining Linux Foundation Networking, and it's a control plane to securely connect workloads across cloud and edge. The seed code is donated by Intel and Aarna Networks. So really exciting news, great momentum; it rounds out the end-to-end solutions for Linux Foundation Networking. The third major message I want to give is that a market realignment is in progress, because use cases are crossing the boundaries of typically segregated verticals, whether cloud, enterprise, or telecom. And this picture says it all. You have a whole bunch of open source projects at the bottom; they all need to be tied together, and in the middle are the vertical markets that consume these open source projects: enterprise, networking, service providers, end users, government, et cetera. And then the edge verticals, whether manufacturing, energy, commerce, home, or automotive, are all utilizing this. We're really excited to have a mini summit from the US government, which is deploying and using 5G in a very secure manner built on LF Networking, LF Edge, CNCF, and a whole bunch of other open source projects. Please join Dan in the keynote and in the mini summit that follows right after this today.
So I'm really excited about this whole vertical-market adoption of open source. If you take it all the way into the campus and into the distributed edge, we have projects like DENT. That is a NOS for bringing data-center-scale networking into the small wiring closet. And once you do that, you see costs drop significantly, because retail and enterprise operations with stores and small distributed campuses look completely different post-COVID. Cameras, sensors, and automation are all part of the norm, and you need a very intelligent NOS to take advantage of that. The next major topic is edge. And I think we all know that edge is reshaping the vertical industries. The majority of data and workloads will originate at the edge, and lots of verticals are taking advantage of the edge compute frameworks that LF Edge is creating. If you're not familiar with LF Edge, it is one of the largest umbrellas under the Linux Foundation: about eight or nine projects, all the way from infrastructure to applications, from the constrained user edge to the service provider edge, with projects like EdgeX and Akraino as Stage 3 impact projects, very important initiatives that bring the various market verticals together. So we have some great news there as well. We would like to announce and welcome F5 and VMware as Premier members of the LF Edge board, and we're really excited to welcome them. So let's hear it from the executives at VMware and F5. Hi, my name is Ginny Smadi. I lead the distributed edge technology focus area in the Office of the CTO at VMware, a focus area that we recently established to help unravel the complexities associated with distributed applications that emerge from the fusion of internet, wireless, and cloud. We recently launched the OGA, the Open Grid Alliance, to help expedite this transformation by addressing operational and deployment challenges for the distributed edge.
This distribution of the edge is what LF Edge is striving for. As a company dedicated to disaggregation through open, flexible architectures for the service provider, VMware is excited to join LF Edge. With virtualization in our DNA and a deep-rooted footprint in the cloud, VMware finds itself in a sweet spot to help the Linux Foundation in this important ambition. Thank you. F5 is excited to join the LF Edge board. We look forward to collaboration in edge computing. We share the LF Edge vision of an open and interoperable framework that enables everyone to innovate at the edge. The pressure and demand of a distributed and digitally active society, combined with the explosive growth of devices and things, is redefining the edge. We believe that an application-centric platform based on an open framework will enable every business to unlock the full potential of edge computing. That was great. Thank you very much. Along with F5 and VMware joining the board, we're also excited to have three new General members, as well as the Eclipse Foundation joining us as an Associate member. And we have new projects coming to the Linux Foundation: Edge Gallery, a huge ecosystem of applications that will drive usage of these frameworks; eKuiper; and Project Alvarium, which provides a decentralized fabric for secure end-to-end solutions. Along with that, obviously, the existing LF Edge projects, the Akraino blueprints and EdgeX 2.0, are continuing to build momentum, and we're really excited to bring all this to fruition. And finally, what's the next big thing? It's multi-organization collaboration. That's the big thing, and it's fueled by what we call the 5G Super Blueprints. A blueprint, again, is a very conscious experiment and proof of concept that brings multiple open source projects together, all the way from IoT to edge to core and everything in between. And these are the components, and these are the projects.
And once you have it, you can have a different set of use cases that can be shown and demonstrated across various industries. So without further ado, let me hand it over to Heather and Amar to show you a demonstration of the 5G Super Blueprints, specifically one of the finest initiatives, called network slicing, which brings a whole bunch of new use cases to fruition. So with that, Heather. Thank you for that warm introduction, Arpit. As mentioned, this is an anniversary year for me at LF Networking, and I am super stoked to be here today to talk to you about our 5G Super Blueprints mission. Before I get into the initiative itself, the question is: why do we do this? It's a lot of work, and it's very challenging. I think we can really sum it up in a few statements. One, we believe that digital transformation is a right, not a privilege. There are a lot of challenges out there, both business and technical, but we believe that together we can not only address those challenges but really uncover new opportunities. That transformation is obviously built on top of technical innovation, and we believe that open networking, through open source projects as well as partnership with standards organizations, is the only true way to scale that innovation. And the reason it scales is that we do this together. We go past our own corporate boundaries, our own company boundaries, and even the boundaries of specific open source projects and umbrellas; LFN is a center of gravity for collaboration with all the other folks working in this open networking area, so that we can deliver value to the ecosystem. The 5G Super Blueprints have come out of a number of years of large-scale, multi-project collaboration and integration, and this go-around we chose to focus on network slicing and the possibilities related to it. Why network slicing, you may ask?
Well, not only is it expected to be a $200 billion opportunity by 2030: through the ability to take that 5G network and break it up, in a more intelligent and targeted fashion, into slices that are specifically designed for use cases, it enables operators to create more offerings for the vertical enterprise markets they address, and it then enables those markets to create exciting new services for consumers, things like remote emergency medicine, for example, that really start to impact folks in their daily lives. So that's the business transformation case. To talk about the actual technical innovation work that we've done, I'd like to invite up Amar Kapadia, CEO and founder of Aarna Networks. Good to see you. Good to see you. Let's talk about what the developers actually built. To create network slicing, we first had to build a fully disaggregated, cloud native 5G network. And the way we did it is by having the orchestration layer, a combination of ONAP and EMCO, at the top, running on a Kubernetes cloud at the UNH lab, connected to the Kaloom lab using Turnium SD-WAN. And in the Kaloom lab, we had the NFVI, or cloud layer, being Kubernetes from Red Hat OpenShift. And then we had several network functions on top: the 5G core from Capgemini Engineering and Kaloom, a firewall from ATEM, and an O-RAN implementation from Capgemini Engineering, Intel, and GenXcom. Alongside, we had the Rebaca gNodeB and UE emulator. Okay, great. So we built the network; no biggie. Now, in order to address network slicing, what did we actually do? ONAP supports comprehensive end-to-end 5G network slicing, and that's what we used to create two slices. One is a five-megabits-per-second slice; let's call that the good slice. The other is a 2.5-megabits-per-second slice; let's call that the bad slice. That was done using the ONAP user interface, as you can see in the screenshot. We did have to create some innovation.
We developed an adapter between ONAP and the commercial 5G core, which is called an NSSMF, and that was some custom work we did to create the demo. All right. So we have provisioned the slices; what really happens when you've got bits and bytes flowing through the network? Let's take a look. What you see here are two video feeds. The top one is going through the five-megabits-per-second, or good, slice, and the lower one is going through the 2.5-megabits-per-second, or bad, slice. You can clearly see the difference in quality: the high-bandwidth one is crisp and sharp, and the low-bandwidth one is blurry and just not as good. Now let's look at the quantitative aspects of the two slices. The high-bandwidth, or good, slice, as you see, goes all the way up to five megabits per second, and the low-bandwidth slice goes up only to 2.5 megabits per second. We can also see the amount of video available in the buffer: it's a lot less for the low-bandwidth slice and much more for the high-bandwidth slice. All right, awesome. So we built a network, we connected it to the provisioning system, we provisioned a slice, and it worked. And it worked. Great job. We really can only do this with collaboration. We had a number of open source projects come together to make this happen, and 10 companies putting in long hours and resources, physical resources as well as employees, working on this activity. You can see their names here: these are 42 people who spent sleepless nights and collaborated on Zoom calls at all hours, and I think we need to give them some recognition. And then, finally, we are looking ahead. We want to take that network slicing capability and add in security and policy to create an actual private mobile network, something that's really important for some of the more sensitive use cases where user data and sessions need to be protected.
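The two slices in the demo behave like simple rate caps: the good slice tops out at 5 Mbps and the bad slice at 2.5 Mbps, which is why one video buffer stays full while the other runs dry. As a rough illustration of that behavior (this is not the demo's actual code; only the rates and names come from the description above), a token-bucket model shows how the same offered video feed fares on each slice:

```python
class Slice:
    """A network slice modeled as a token bucket with a fixed rate cap."""

    def __init__(self, name, rate_mbps):
        self.name = name
        self.rate = rate_mbps * 1_000_000 / 8   # cap in bytes per second
        self.tokens = self.rate                 # burst allowance of one second
        self.delivered = 0

    def tick(self, offered_bytes):
        """Advance one second: refill the bucket, then deliver up to the cap."""
        self.tokens = min(self.tokens + self.rate, self.rate)
        sent = min(offered_bytes, self.tokens)
        self.tokens -= sent
        self.delivered += sent
        return sent

good = Slice("good", 5.0)   # the 5 Mbps slice
bad = Slice("bad", 2.5)     # the 2.5 Mbps slice

# Offer the same 4 Mbps video feed to both slices for 10 seconds.
offered = 4_000_000 / 8
for _ in range(10):
    good.tick(offered)
    bad.tick(offered)

# The good slice carries the full feed; the bad slice is capped,
# which is why its video buffer runs low and the picture degrades.
```

In a real deployment the caps would be enforced by the RAN and core per slice profile, not by the application; the point of the model is only that a 4 Mbps feed fits inside the good slice and overflows the bad one.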
From a technical point of view, we are looking to integrate ONAP with a new open source core project here at the foundation called Magma. We are looking to bring in Anuket network infrastructure as a cloud layer, as well as integrate with live physical radio in addition to our emulation capability. Going ahead from there, we really want to start addressing those retail and manufacturing use cases by integrating with IoT projects from LF Edge, and also bringing in open source RAN from the O-RAN Software Community, also here at the Linux Foundation. So this is really exciting work, and I really hope that I've inspired you to want to learn more and get involved. Here are some resources where you can do that. We have regular meetings, and I highly encourage all of you to go to our virtual booth, where you can talk to some of the folks who did this work, as well as see a behind-the-scenes video of those developers building this network and deploying it. This has been really exciting, and I just want to end where I began. The work we do here at LF Networking is really exciting. It's based on transformation, innovation, and collaboration. These are the things that define and drive us, and the greatest of these is collaboration. Hope you have a great day. Thanks again. Thanks, Heather and Amar, that was a lot of great content. Hope you have a great ONE Summit, and I'm looking forward to seeing at least some of you in person again. I'd like to welcome Kobi Avital, EVP of platform technology at Walmart. Walmart has made great strides as a technology-first company, and open source and network visibility are anchors for their journey forward. Kobi has some exciting news to share. And so, without further ado, I'd like to welcome Kobi to give us a quick talk on Walmart's journey and how open source is fundamentally an anchor of what they deliver. Kobi. Hello, everyone. My name is Kobi Avital, and I'm executive vice president of Walmart Global Technology Platform.
I'm responsible for pretty much all the technology that is business-agnostic and used by the entire enterprise. Today we are going to talk about Walmart: we'll talk about Walmart's challenges and an interesting introduction we would like to make. I would like to thank the Linux Foundation for inviting me to give this keynote, and I hope that you will enjoy it. I would like you to imagine for a second the size of Walmart. Close your eyes and imagine 2.2 to 2.3 million associates servicing customers in stores and online, and we are talking about 250 million customers a week, which is a significant number. We have over 10,000 stores worldwide, and in the U.S. the stores are distributed in such a way that 90% of America's customers live within 10 miles of a Walmart or Sam's Club, so they can definitely rely on the fact that we are there for them in close proximity. E-commerce is growing about 80% a year, which is significant growth, and we are building an environment to service it. So now open your eyes for a second and think: what does it mean to service all of it? The scale, the resiliency, the connectivity between stores and the cloud, stores and the distribution centers. How do we achieve deterministic connectivity and experiences for our customers, irrespective of where they come from? How do we handle low latency? How do we handle the petabytes of data we are collecting, and drive the business from them? And a very interesting topic is how we operationalize it, because at the end of the day you want to be able to monitor it, handle incidents, assess the health, and predict what is going on: do you need to scale out or scale down? So this whole complex environment is serviced by my teams and by the product teams that use the infrastructure. We spent a significant amount of time figuring out the right model to do that.
Our cloud and modernization transition has been in play for years, and we always learn more and understand more. The cloud providers are evolving and providing more and more services. We understand the footprint we need to handle this kind of environment, and we try to come up with a model that is sustainable for years to come. If you look at the slide, you can see that our multi-cloud model consists of region-based deployments of cloud instances. What we try to do is take two or more cloud providers, including the Walmart private cloud, and put them in close proximity. Think about it: when you put two, three, or four cloud providers in close proximity and they operate at low latency, they pretty much become one harmonized cloud instance, where each one can access the others with low latency and operate the way we need. The other interesting part is that when you bring all those cloud providers together, you can choose which services are best of breed in each one, and irrespective of where the workload is running, the program can access the best-of-breed services on the other cloud provider. Now, once you have those triplet region instances, you can deploy them anywhere in the geographic distribution you need, based on what you want to service, where the stores are and where the customers are coming in, and manage it that way. Now, what we learned is that the only way to create a multi-cloud environment is to build the proper layer of abstraction on top of it, because we don't want developers to deal with things differently when they write applications that need to deploy to Microsoft Azure or Google GCP or the Walmart private cloud; all of them should behave similarly in respect to how they handle the workload. And I will be introducing something that we are doing.
It's called the Walmart Cloud Native Platform (WCNP), and it provides the compute and workload-distribution abstraction layer. Another thing the abstraction layer gives us is a unified operational model. And the really cool thing we have done is that we deploy in the same manner, the same way, into our 10,000 edge stores and distribution centers, so workloads can traverse wherever we need them, as we need them. Just a little view of our edge: you can see here the interesting numbers, the distribution of stores across the globe. Now take that triplet model and ask: okay, how do I actually use it in real life? The way we are doing it is by deploying triplets based on geo-location, where the idea is that a triplet services a region, and that way it can service it with the highest efficiency possible. Customers from the region and stores from the region communicate with the triplet within that region. That way we achieve much more predictable latency, and the region itself is self-sufficient in respect to failover, resiliency, and keeping data close to the customers. Regions communicate among themselves in order to create a DR posture. The posture depends on the DR tier, and on how much data you want to move from one region to another in order to maintain it. Most of the back-office enterprise work is done in our data centers, which are central; that is a combination of private data centers and cloud. On the left side, you see the omnichannel structure. Customers can come from a store or from online and can order items that are in the store or outside of it. Fulfillment can be done in the store, via pickup, or via delivery to their houses. So the whole thing operates in harmony, so we can fulfill the best we can within the region; if we can't, it can go out of region and be driven from there. I mentioned before the abstraction layer that we created.
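The triplet-per-region routing just described can be sketched in a few lines. This is a hypothetical illustration of the idea, not Walmart's implementation; the region names, member clouds, and the `route` helper are all made up for the example:

```python
# Each region is served by a "triplet" of co-located cloud instances.
# Requests route to their home region's triplet; if the whole triplet is
# unhealthy, traffic fails over to the region's DR peer.
TRIPLETS = {
    "us-east": {
        "members": ["azure-east", "gcp-east", "wmt-private-east"],
        "dr_peer": "us-central",
        "healthy": True,
    },
    "us-central": {
        "members": ["azure-central", "gcp-central", "wmt-private-central"],
        "dr_peer": "us-east",
        "healthy": True,
    },
}

def route(region):
    """Pick the serving region: home triplet first, DR peer on failure."""
    if TRIPLETS[region]["healthy"]:
        return region
    return TRIPLETS[region]["dr_peer"]

# A store in the east is served locally, for predictable latency...
assert route("us-east") == "us-east"
# ...and fails over to its DR peer if the whole triplet goes down.
TRIPLETS["us-east"]["healthy"] = False
assert route("us-east") == "us-central"
```

How much state the DR peer already holds when failover happens is the "posture" question from the talk: a higher DR tier replicates more data across regions ahead of time.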
We spent a lot of time building these abstraction layers, and we all know that abstraction is fairly expensive. WCNP is a layer we will talk about a little more later on; it provides deployment for compute, so we can distribute workloads anywhere we need them: just point and distribute. But in the middle, you will see there is another layer of abstraction, called L3AF, that allows us to distribute programs at the kernel level into the Linux of our servers. This is a very interesting technology that helps us handle the resources running in the different clouds much more tightly, at the operating-system level. There is another layer of abstraction that I didn't mention on the slide, which is our data-layer abstraction. We have an API layer, which we call Forklift, that allows developers and products to access different data sources without really knowing the exact shape of the data or how we store and handle it. That way, we can move from one database technology to another with minimal impact on the applications themselves. Let's zoom in a little on WCNP. WCNP is really the biggest enabler we have from a technology perspective. What WCNP does: it's a Kubernetes-based platform that allows you to package and deploy Kubernetes-based workloads, and after deployment it helps you manage them. All the workloads deployed from WCNP interact smoothly with Walmart's ecosystem in respect to authentication, build, and monitoring metrics, everything you can associate with running the workload, as well as traffic management, if you need to rev a WCNP-based workload up or down, plus the ability to mobilize workloads as pressure on the system increases, bursting from one cloud to another, while the application developers don't really need to deal with it. We know how to do it, too.
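The bursting behavior described for WCNP can be sketched as a simple placement loop: replicas land on the first cluster until it fills, and the overflow bursts onto the next cloud, with the application team seeing only a single deploy call. This is an illustrative sketch under assumed names (`Cluster`, `Platform`, and the capacities are invented), not WCNP's actual API:

```python
class Cluster:
    """One Kubernetes cluster on a given cloud, with finite capacity."""

    def __init__(self, cloud, capacity):
        self.cloud = cloud
        self.capacity = capacity
        self.replicas = 0

class Platform:
    """A WCNP-like layer: one deploy call, placement across clouds."""

    def __init__(self, clusters):
        self.clusters = clusters

    def deploy(self, replicas):
        """Fill clusters in order, bursting overflow onto the next cloud."""
        placement = {}
        remaining = replicas
        for cluster in self.clusters:
            take = min(remaining, cluster.capacity - cluster.replicas)
            cluster.replicas += take
            placement[cluster.cloud] = take
            remaining -= take
        return placement

platform = Platform([Cluster("azure", 10), Cluster("gcp", 10)])

# Ask for 14 replicas: 10 fit on the first cloud, 4 burst to the second.
placement = platform.deploy(14)
```

The design point is that the placement decision lives in the platform, not in the application, which is what lets the backing clouds be swapped or rebalanced without the developers changing anything.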
So without it, we couldn't really implement a mature multi-cloud environment. Now I'll talk a little bit about L3AF. L3AF is really a cool thing that we've developed. All of us know what eBPF is: programs that run at the kernel level of Linux. What we have created is complete lifecycle management for eBPF programs, to handle thousands and thousands of programs distributed to the servers we have online, on the private cloud and on the public cloud. So it's pretty much cloud-agnostic deployment: you build once and you deploy anywhere. You have automatic runtime configuration, so you can really change configuration on the fly based on what is happening. It allows us to chain programs one to another, with on-the-fly orchestration, and, definitely one of the most important parts for us, you can monitor what is going on so you can assess the health. In an environment of this size, it's super difficult to assess the health and react to it. When we started on the L3AF base, most of our solutions, or programs, were around traffic control, load monitoring, and load balancing. But real quick we found out that being there at the kernel level allows us to collect kernel-level metrics for visibility, which can give us triggers for when we need to scale and where a server is under stress, so it can improve our time to detect and time to recover. It allows us to assess the security pressure on that environment: handling DDoS and DDoS mitigation, and doing traffic-flow exploration so we know where the traffic is going. An interesting thing we found out is that we can prioritize payments: especially if a store goes offline and relies more and more on the cloud, we can push payment requests faster than the rest. And we can do traffic monitoring and load balancing for testing and management, as well as improving site speed and the performance of the hardware we are working on.
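The lifecycle layer described above (build once, deploy anywhere, chain programs, watch health) can be sketched as a small registry. This is only an illustration of the idea; the class names and the two example programs are hypothetical, and real eBPF tooling such as L3AF manages actual kernel objects, not Python objects:

```python
class KernelFunction:
    """Stand-in for one packaged eBPF program (name plus version)."""

    def __init__(self, name, version):
        self.name = name
        self.version = version
        self.healthy = True

class Node:
    """One server: holds an ordered chain of deployed eBPF programs."""

    def __init__(self, host):
        self.host = host
        self.chain = []

    def deploy(self, fn):
        """Build once, deploy anywhere: append the program to this node's chain."""
        self.chain.append(fn)

    def health(self):
        """A node is healthy only if every program in its chain is."""
        return all(fn.healthy for fn in self.chain)

# Roll the same two-program chain out across a small fleet.
fleet = [Node(f"edge-{i}") for i in range(3)]
programs = [KernelFunction("traffic-monitor", "1.0"),
            KernelFunction("ddos-mitigation", "1.2")]
for node in fleet:
    for fn in programs:
        node.deploy(fn)
```

This is what makes the deployment cloud-agnostic in spirit: the chain definition is just data, and pushing it to a private-cloud or public-cloud server is the same operation either way.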
We all know that running a workload on cloud-based infrastructure is a little slower than running it on your own private cloud, where everything is dedicated to your workload and you don't need to share the environment with other tenants, which is the case in the multi-tenant public cloud. So now I'd like to make a big announcement. We are announcing L3AF as open source. And thank you, Linux Foundation, for accepting it as a supported open source initiative. We have really engaged with the community, and we found that the community is excited to have this kind of environment. We really want to open the door for everyone to contribute in the areas that L3AF provides and that interest you. We will share our learnings about which applications we found are super important to implement in the L3AF environment as eBPF, and later on allow contributions that build the platform to a larger scale. So please join L3AF and go take a look at l3af.io. Make sure you get yourself familiar with it if you want to participate in the L3AF environment. If you look at L3AF, it has a few components. In the middle, you see the L3AF platform; this is what we are opening as open source. This is the ecosystem to manage L3AF programs: to deploy them, configure them, push them onto the actual servers, change configurations, and identify health. This is a really interesting environment, and I'm sure there are a lot of areas to contribute in respect to visualization and better control; if you're interested in participating in this area, you are very welcome. On the left side is something I call essential to bootstrap any initiative: if you build a platform, the platform's success is only as good as the applications you have running on it.
So if you have a good set of applications that create a lot of opportunities, the platform will take off and reach its own escape velocity, so to speak. So I would really like you to go take a look at what you can use L3AF for, and start sharing with the community any eBPF programs that you feel the community would be interested in. So we have two different layers: one is the platform layer, and the other is application contributions, where we all have an opportunity to participate, and we have a call-out for you to join the L3AF community. Okay, that's the end of my presentation, and I would like to thank you all for taking the time to listen to this keynote. Again, I hope for massive contribution to the L3AF program. I hope you found it interesting to understand a little about Walmart's scale and complexity, and that you enjoyed the conversation. Hopefully we'll see you at the next keynote. Thank you. Thank you. Our next speaker is Rajesh Gadiyar, VP and CTO of the Network Platforms Group at Intel. Cloud-first and cloud-native have long been buzzwords; to make them real and give you some real examples, Intel has always been at the forefront. Let's see what Intel has to share, and I'm sure, like always, he'll be sharing some very exciting news. So please welcome Rajesh. Hello, everyone. My name is Rajesh Gadiyar, and I'm a vice president at Intel and CTO for Intel's networking business. It's my absolute pleasure to be participating at this year's Open Networking and Edge Summit. I hope all of you are staying safe and doing well. I have a long history in networking, and over the last several years I have worked on network virtualization.
The hardware-software disaggregation we've been able to accomplish as a result has helped us modernize the network infrastructure, bring new innovations in network applications, and create a large, vibrant, and open ecosystem, and it has laid a solid foundation for 5G and the era of distributed computing. Today, I will talk about the next phase of network transformation, one with a cloud-native and cloud-first approach, and how this is accelerating the deployment of 5G and edge solutions. If the last decade was about hardware-software disaggregation, in many ways the next decade is all about software disaggregation, from monolithic applications to microservices, for scale and for automation. Okay. So here's a picture of an end-to-end network. As you walk from the left side of this picture to the right, you see intelligent devices such as industrial robots, connected cars, and analytics applications in retail and healthcare, talking to an on-premise enterprise edge, which then connects to a network edge such as a wireless radio access network with 5G connectivity, on to the telco operator core network, and eventually to the public clouds. The technology transition to 5G comes at a great time. 5G, with its 10 to 100x more bandwidth, 10 times lower latency, new technologies like network slicing that allow us to deliver end-to-end quality of service, and the ability to use unlicensed spectrum, means 5G can penetrate deep into enterprises and enable new and innovative services. However, the latency and quality-of-service demands of these new applications will not allow all the processing to be done in a public cloud. This is where processing at the edge becomes significant, and the new focus of innovation. If you think about it, it is the perfect marriage of cloud and communications. So edge is not about a particular location. It's really about bringing cloud computing closer to the application or service.
It's about the flexibility to run applications or service components anywhere in this infrastructure and stitching them together to deliver an end-to-end service. While this distributed computing delivers significant flexibility, scalability and TCO benefits for 5G and edge, it also brings many challenges. Edge clouds need to support heterogeneous infrastructure with accelerators, SmartNICs and GPUs. They need to support multi-tenancy, which means you now have a larger attack surface, and hence security becomes paramount. As you disaggregate and deploy services across multiple edge and cloud locations, ease of deployment becomes a huge challenge, and you still have to deliver the desired quality of service for the applications. So the good news for all of us in the technical community is that we have plenty of work to do. Now, in the last couple of years, we have seen edge computing begin to gain a lot of momentum. 5G has served as a great catalyst for private wireless and enterprises. We are seeing enterprise security evolve to secure access service edge, or SASE. Whether it is an agricultural monitoring system for farming, enhanced public safety via real-time surveillance, or massive AI-driven industrial automation, the innovation in new edge and IoT applications is changing the way we live and the way we work. Now, if you look at the evolution of compute infrastructure over the years, it has been an interesting journey. We went from purpose-built applications with tight integration of hardware and software, to the era of server-based computing, to the virtualization era with the ability to run multiple applications on the same server, and now the cloud-native era, where components of an application can run anywhere as microservices, even in different clouds. This next phase of transformation is all about disaggregating software and building network and edge applications with a cloud-first mindset. The resultant benefits are huge.
You can rapidly create, deploy and manage applications across multiple edge and cloud locations with a continuous integration, continuous deployment approach. You can support unified connectivity with end-to-end security and quality of service. You can benefit from optimal use of compute resources across multiple locations, resulting in a much reduced total cost of ownership. And above all, you can make this all easy to deploy with massive at-scale automation. So I am super excited about the cloud-native transformation and how it can dramatically accelerate 5G and edge services. The possibilities are huge, and it will be interesting to see how we marry cloud and communications together to deliver new and innovative services. So now that I have you all excited about the possibilities, I want to discuss a few challenges in the cloud-native journey that we need to come together as a technical community to collaborate on and solve. But before I do that, first, a huge shout-out to all of you for your perseverance and collaboration through these difficult times while still driving significant innovation in the industry. Now, for today's discussion, I decided to zoom in on three challenges. First, as you know, Kubernetes has become the de facto cloud operating system. It has become the tool of choice for resource orchestration and automation. But one thing I would like us to think about is how we continue to improve networking in Kubernetes and make networking a first-class citizen in Kubernetes. Second, as the complexity of infrastructure grows and as we deploy services that span multiple edge and cloud locations, how do we build robust service assurance and observability solutions? And third, perhaps the most important: how do we make it easy to build, deploy and manage edge solutions for various use cases?
Most edge applications have a lot of common requirements: optimized networking, wireless stacks (particularly a virtual RAN stack optimized for private wireless deployments), video and AI SDKs, the ability to deliver network slicing and end-to-end quality of service, and so on. So is there a way we can implement these common services in a cloud-native fashion, such that developers can use this platform to deliver new and innovative services with faster time to market? So let's zoom in and discuss these three areas in some detail. So first, let's talk about Kubernetes networking. There are three main challenges in Kubernetes networking: fragmentation, performance overhead, and deploying services across multiple edge and cloud clusters. If you double-click on fragmentation, you will notice many container network controllers and many service mesh technologies that do not coexist in a cluster together. There are also other things: lack of multi-network support, lack of uniformity and interoperability. If you look at performance overheads, we see multiple traversals through the user-kernel boundary, pod overheads, memory overheads per pod, and network stack latencies. Ease of use also deserves some attention, in particular how network resources are managed and the lack of comprehensive visibility. Another emerging complexity is that, increasingly, we see a requirement to deploy composite applications in multiple geographic locations. Some of the catalysts for this are latency, bandwidth requirements and context proximity: running some parts of an application on edges near the user that require local context. Now, to solve these problems, we started a project called the Edge Multi-Cluster Orchestrator, or EMCO. EMCO is a geo-distributed application orchestrator for Kubernetes. It operates at a higher level than Kubernetes and interacts with multiple edge and cloud clusters running Kubernetes.
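To make the multi-cluster placement problem concrete, here is a small, purely illustrative Python sketch. This is not any project's actual API: the cluster names, latency figures, and the cloud-over-edge preference are all assumptions, chosen only to show the kind of decision a geo-distributed orchestrator has to automate across edge and cloud clusters.

```python
# Hypothetical placement sketch: pick a Kubernetes cluster for each
# microservice of a composite application, respecting latency budgets
# and hardware needs. All names and thresholds are made up.
from dataclasses import dataclass

@dataclass
class Cluster:
    name: str
    location: str           # "edge" or "cloud"
    latency_ms: float       # measured latency to the end user
    has_gpu: bool

@dataclass
class Service:
    name: str
    max_latency_ms: float   # latency budget for this component
    needs_gpu: bool

def place(services, clusters):
    """Assign each service to a cluster that satisfies its latency
    budget and hardware requirements. Among qualifying clusters we
    pick the highest-latency one (i.e. prefer cloud), keeping scarce
    edge capacity free for the components that truly need it."""
    plan = {}
    for svc in services:
        candidates = [c for c in clusters
                      if c.latency_ms <= svc.max_latency_ms
                      and (c.has_gpu or not svc.needs_gpu)]
        if not candidates:
            raise ValueError(f"no cluster satisfies {svc.name}")
        plan[svc.name] = max(candidates, key=lambda c: c.latency_ms).name
    return plan

clusters = [
    Cluster("edge-muc", "edge", latency_ms=5, has_gpu=True),
    Cluster("cloud-eu", "cloud", latency_ms=40, has_gpu=False),
]
services = [
    Service("video-analytics", max_latency_ms=10, needs_gpu=True),
    Service("billing", max_latency_ms=200, needs_gpu=False),
]
plan = place(services, clusters)
print(plan)  # video-analytics lands on the edge, billing in the cloud
```

The point of the sketch is the control flow, not the policy: a real orchestrator layers intents, profiles and day-2 automation on top of exactly this kind of per-service placement decision.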
So the main objective of EMCO is automation of applications and services across multiple Kubernetes clusters. It acts as a central orchestrator that can manage edge services and network functions across geographically distributed edge clusters. EMCO is already an open-source project, and I look forward to your collaboration in making it deployment-ready. I think it's fair to say that we have some work to do to make networking a first-class citizen in Kubernetes, like compute and storage. We must reduce overheads, make network capabilities uniform, address performance and latency issues, support offload and acceleration with the right infrastructure for SmartNICs, and make it multi-cloud ready. Next, let's talk about service assurance and observability. Service assurance is the application of policies and processes by a service provider to ensure that services offered over networks meet a predefined service quality level for an optimal user experience. It is to ensure that the service SLA offered to the user is met while minimizing the operational cost. In simple terms, service assurance is all about maintaining and meeting service quality, and there are two KPIs: availability of the service and performance of the service. These are the two that we want to ensure we meet. Now, today these KPIs are largely delivered, first, by using proactive monitoring and correlation to find misbehaving components, and second, when a failure occurs, by doing root cause analysis and incident management to respond to the failure scenario. You can see the problem with this, can't you? We are either being proactive or reactive to a failure. What we really need is a predictive approach. So what does a good service assurance platform look like? There are three key requirements: collecting data; distributing data to the appropriate data lakes and storing it there; and performing analytics and driving closed-loop actions.
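One way to picture the difference between reactive and predictive assurance is a smoothed early-warning threshold. The sketch below is a hypothetical illustration, not part of any product: it tracks an exponentially weighted moving average (EWMA) of a latency KPI and flags the trend while every raw sample is still under the hard SLA, so a closed loop could act (scale out, reroute) before the breach happens. The SLA, warning ratio and smoothing factor are invented for the example.

```python
# Predictive assurance sketch: warn on a latency trend before the
# SLA itself is violated. Parameters are illustrative assumptions.
def ewma_alerts(samples_ms, sla_ms=100.0, warn_ratio=0.8, alpha=0.3):
    """Return the indices at which the EWMA of the latency samples
    crosses warn_ratio * sla_ms, i.e. where a closed loop could act
    while the raw KPI is still within the SLA."""
    alerts, avg = [], samples_ms[0]
    for i, x in enumerate(samples_ms):
        avg = alpha * x + (1 - alpha) * avg   # exponential smoothing
        if avg >= warn_ratio * sla_ms:
            alerts.append(i)
    return alerts

# Latency creeping upward but never actually breaching the 100 ms SLA:
samples = [50, 55, 60, 70, 78, 85, 92, 97]
print(ewma_alerts(samples))
```

A reactive system would stay silent on this trace, since no sample exceeds 100 ms; the predictive check fires on the trend, which is the behavior the talk is arguing for.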
So the diagram on the right-hand side of this picture shows a framework built almost entirely with open-source software, many of which, as you see, are mature CNCF projects. So my team at Intel is working on this vision and driving innovations for better observability and service assurance for cloud-native network applications. Please talk to us if you would like to learn more and collaborate with us. Now, earlier I talked about the need for a platform that allows for easy creation, deployment and management of edge applications at cloud scale. Most edge services need optimized low-latency and high-performance network stacks, connectivity to mobile infrastructure for private wireless, and a state-of-the-art computing infrastructure that can support heterogeneous resources such as CPUs, GPUs, acceleration and AI capabilities. So the question really is: can you provide all these common services in a software platform? Because if you can do that, we can fuel innovation at the edge at a much faster rate. This is exactly what we have tried to do with the SmartEdge platform. So let me actually introduce you to the SmartEdge platform. In fact, until recently we used to call this platform OpenNESS, the Open Network Edge Services Software. We have since renamed it SmartEdge Open. So let me talk about the SmartEdge platform in some detail. SmartEdge is a software platform that enables highly optimized and performant edge solutions to onboard and manage both applications and network functions with cloud-like agility across any type of network. It has many pre-built microservices for multi-access edge and has been integrated into many commercial deployments. It also has built-in connectors for cloud deployments such as Azure Cloud. In many ways, this is the shortcut that developers have been looking for. At the highest level, the SmartEdge platform has two main components. First, the edge controller and edge conductor, as you see on the right-hand side of this picture.
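As a purely illustrative sketch of the collect / store / analyze / act loop described above: in a real deployment these stages would be telemetry collectors, data lakes, an analytics engine, and an orchestrator or Kubernetes API call, but writing each stage as a plain function (with made-up metric names and thresholds) makes the closed-loop control flow visible.

```python
# Hypothetical closed-loop skeleton: collect -> analyze -> act.
# Everything here (the CPU threshold, the scale-out action) is an
# assumption for illustration, not a real platform API.
def collect(source):
    """Stage 1: pull raw samples from an infrastructure source."""
    return list(source)

def analyze(samples, cpu_limit=0.85):
    """Stage 2: run an analytics rule over the stored data and
    produce a verdict worth acting on."""
    mean = sum(samples) / len(samples)
    return {"overloaded": mean > cpu_limit, "mean": mean}

def act(verdict, replicas):
    """Stage 3: the closed-loop action, e.g. scale out a deployment
    by one replica when the analytics rule says we're overloaded."""
    return replicas + 1 if verdict["overloaded"] else replicas

samples = collect([0.91, 0.88, 0.93])   # per-pod CPU utilization
verdict = analyze(samples)
print(act(verdict, replicas=3))          # overloaded, so 3 becomes 4
```

Swapping the trivial mean-threshold rule for the predictive analytics discussed earlier is exactly the kind of plug-in point a good assurance framework should expose.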
These allow you to create and manage services, provision the platform, and operate service clusters. Second, the edge node. If you look at the edge node, it comprises optimized building blocks for networking, security and telemetry, and all of this is exposed through cloud-native APIs. So what does SmartEdge give you? First, it abstracts network complexity. You can choose across many data planes, container network interfaces, and access technologies. Second, it provides a number of services for cloud-native deployments. In particular, it has support for cloud-native ingredients for resource orchestration, telemetry, and service mesh technologies. SmartEdge has built-in microservices for data plane processing, multi-access networking, enhanced platform awareness, telemetry, platform accelerators, applications and security. SmartEdge also takes advantage of hardware features for the best performance and ROI. So, in order to increase the usability and consumption of the SmartEdge platform, we have also created experience kits targeted at popular deployment use cases at the edge. SmartEdge offers several experience kits, which are use-case-specific SDKs that integrate with tools like OpenVINO, FlexRAN, Open Visual Cloud, and other open-source technologies. This lets our customers converge multiple workloads on their edge platforms. Some of the experience kits included in SmartEdge are the Private Wireless Experience Kit, the Developer Experience Kit, Enterprise UCPE, the SD-WAN and SASE Experience Kits, and the Access Edge Experience Kit. Now, by providing critical edge capabilities in the form of modular building blocks, SmartEdge creates a faster path to commercialization for the ecosystem. Customers can leverage exactly what they need from the toolkit so they can prioritize their investment dollars on delivering added differentiation. So if you're a developer looking to build edge services, I invite you to visit intel.com.
As you begin to use SmartEdge, perhaps starting with the Developer Experience Kit and its reference IoT applications, please reach out to us and provide your feedback. We would love to work with you and continue to enhance the capabilities in SmartEdge for your needs. That brings me to the end of my presentation today. We live in exciting times. 5G deployments are in full swing around the world, and Intel is working with every major operator around the globe, as you can see. Now, a quick call to action for my friends in our open-source community here. Let's collaborate in the three areas that I talked about today. First, let's make networking a first-class citizen in Kubernetes. It's going to help all the network and edge deployments. Second, let's accelerate service assurance and observability as a key tenet for 5G and edge applications. And third, learn about the SmartEdge platform, download it, and use it for building your edge applications. Let's collaborate and solve some of the common problems and drive faster innovation. Like I said earlier, 5G has much more potential. It is going to fuel an innovation cycle like never before, particularly at the edge. And to realize the full benefit of 5G and edge computing, embracing cloud-native architecture is fundamental. Cloud-native is the only way to deliver efficient, scalable, and automated end-to-end services. I would love to engage with you further in these areas. So please reach out to us at 01.org and intel.com/smartedge, so we can collaborate and deliver solutions for the future, a very bright future. Thank you. Thank you. That's excellent progress and great news for the world of open networking and edge. Our next speaker is Jennifer Kyriakakis. She's a founder and VP at MATRIXX Software. She's won a ton of awards for entrepreneurship and innovation.
And as a vendor, let's see how open source and deploying open source is fundamental to the business momentum and success of a company like MATRIXX. Please welcome Jennifer. Hi, I'm Jennifer Kyriakakis, co-founder of MATRIXX Software. While MATRIXX the company has nothing to do with The Matrix the movie, given the imminent release of The Matrix 4, I thought I would take you on a little journey about making 5G real by leveraging open source. So, drawing on this analogy for a second: bringing a movie like The Matrix 4 to life is an absolute technological feat. A film like The Matrix might actually have more than a dozen different visual effects studios working on it, so they have to pass files and scenes from studio to studio, and compatibility in that industry is critical. Open source has arisen there too, and programs such as Blender and OpenColorIO are key to making that possible, so that the movie studios can push the boundaries and create the most breathtaking experiences for their audiences. In the same way, open source can do the same thing for operators building 5G networks. Those networks are going to push the boundaries of how companies do business and how people consume content. So, a little bit about MATRIXX. What is MATRIXX the company, and what do we have to do with making 5G real? MATRIXX was founded back in 2009, ten years after the movie. We were founded to bring a new monetization solution to market, primarily for telcos. A key piece of our solution is a 5G converged charging system, which is a core network function known as the CCS. The CCS controls services and manages consumption and balances in real time, so it's really at the heart of how network operators make money from the services that run on their networks. So when we look at the tens of billions of dollars being invested in 5G infrastructure, making it real means driving to a return on that investment.
And as part of the 5G network, the CCS is what gives operators the power to turn network capabilities such as network slicing into new business models and revenue streams. So MATRIXX is highly involved and currently contributes to the Cloud Native Computing Foundation. We work on helping define best practices and developing a conformance test suite for cloud-native network functions. We also co-chair the Anuket Assured project, and we're contributing to the 5G Super Blueprints and to the ONAP CNF Task Force. We're also engaged in other standards bodies such as the TeleManagement Forum, and in 3GPP, where we lead the charging group and vice-chair the SA5 working group for management, orchestration and charging. I'm telling you all this because we are committed as a company to taking those leadership roles to champion open ecosystems and interoperability, for the benefit of both operators and the larger ecosystem that will thrive by embracing cloud-native technologies. So I'll talk a little bit about MATRIXX's own journey with open source before I talk about what our customers are doing with respect to bringing 5G to market. Back in 2009, we came into this space because we saw a ton of innovation pouring into the device market, right? 2008, 2009. There was also a lot of innovation still going into the RAN and the core market, moving from 3G to 4G. But we didn't see any innovation going into monetization applications such as charging, rating, and billing. Most vendors in the space at the time were very, very large. They were very established, and they had very broad portfolios. Frankly, that creates a huge barrier to market for any challenger trying to break into that space: it would basically take too long and be way too expensive to try to build a competing portfolio of products.
But even then, to take one area such as monetization and try to innovate there, taking a traditional path to market and a traditional product development path would have been too expensive. So from the beginning, we took a very different approach, and we adopted open source and an open-ecosystem mindset. We also had the vision to deliver an out-of-the-box product that did a certain set of functions but could also be extended in many different ways, without coding or scripting. So we were doing the no-code thing before it was really a thing. And I'll come back around to this at the end, because every industry needs innovators and disruptors in order to grow and thrive. The ability for small companies to come in, find a white space, solve problems, innovate, and then actually have a path to market is something that, in this market, has only been possible because of open source and because of the standards bodies that push open ecosystems and interoperability. So, our journey as we followed the white rabbit. In 2009, when we started, we were one of the very few software companies focused on the telco market but coming out of Silicon Valley. So we had a very different mindset. We had software-developer DNA and a heritage around software development, versus traditional telco. That drove us to take a different approach from the beginning. And we really set out to disrupt how other vendors were building and delivering software into this industry, but more importantly, to disrupt how telcos were evaluating and buying software. The norm in the industry had been to custom-develop to spec for every single operator, every single deployment. So everything was a closed black box running on-prem in the customer's data center. And we came in with a different architecture model from day one that embraced the concepts of open source, open ecosystems and interoperability.
You know, out of necessity, because we didn't have a broad portfolio. We had solved a very specific problem with a very specific application, so we were very conscious that we needed to interoperate with everything the operator already had. Trying to come into a market dominated by large, well-established players, we leveraged open source from day one, because that was the most beneficial entry point to get into the market and to speed up product development. So it was mainly for product development. Back then, basically every billing and charging system was built on a proprietary OS; we went with Red Hat Linux from day one, as an example. And when we first got a product into market and started working with operators, trialing it in their labs and things like that, our use of open source was basically looked at as interesting and kind of cool, but not really something that they were demanding by any means. But as we started working with our first set of customers, what very quickly differentiated us was how fast we could build functionality and how fast we could put out releases, given we were a very small company at the time. So it was core to that initial break into the market. And then, as our customer base grew, leveraging open source became equally important, again for that rapid integration into the ecosystem, not just product development. Integrating into their existing IT and network applications and functions is where we really leveraged it. As examples, from the beginning we leveraged Apache projects extensively, including ActiveMQ and Apache Camel. We also leveraged Kafka for event streaming, and we heavily used the Spring Framework for integration into things like LDAP servers. So over the past 10 years, the industry has been making that transition to cloud-native and open source. Ten years ago, what we were doing was seen as interesting and cool.
It's now very much table stakes, where our operator base, our customer base, proactively asks us for open-source technologies. But the interesting thing is that the operators that are really on the bleeding edge of deploying 5G core networks over the past 18 months have been doing some really interesting work in terms of running extensive trials. We've had a single operator ask us to run our application on multiple different Kubernetes platforms and integrate into core network functions from four or five different vendors, all within the space of a month. And we've been able to meet all those challenges and do those kinds of things because of the approach that we took from the beginning. So the question becomes: are operators really taking the red pill? Is it really getting adopted? It seems like 5G is a forcing function, because 5G is synonymous with cloud-native. But within our community, we see big challenges for telcos adopting cloud-native and open source as they move to 5G. As the network functions and the host infrastructure transition, our customers don't just need to understand which solutions work with their planned infrastructure. What they really need to understand is which solutions will excel and work best in the infrastructure they're planning to use, versus simply being compatible. And they need to build that understanding without having to incur massive costs and spend lots of time validating potential solutions in different trials and different labs. And so we think that the solution to that challenge really does lie in standards, best practices and conformance programs. The goal of conformance is to enable service providers and vendors to engage and validate against an ever-widening and complex set of standards and best practices.
And while pieces of those ecosystems are developing today, the biggest challenge is the time and cost necessary to navigate every single partner, vendor and company individually. And you never know, as you're going through that process: are you duplicating work? Are you doing overlapping effort? Are you wasting time? So unless this paradigm changes, vendors, service providers, everybody is going to continue to navigate a maze of organizations to ensure that the solutions they're trying to develop are fit for purpose and leverage best practices. This doesn't just add cost; it also slows down the advancement of new technologies and slows down the ability for telcos to get new services to market. And then, near and dear to our heart, it also means there are probably far fewer vendors who can offer innovative point solutions. So ultimately, the impact is that it stifles the efforts of service providers who are looking to differentiate their 5G network build-outs through a best-in-class type of approach. And that's why we're so deeply engaged in championing the work that we're doing within LF Networking, specifically with the Anuket Assured project, because that is where we're establishing a new conformance program for cloud infrastructure and for cloud-native workloads. The goal is really to give operators the confidence they need: how do we build the confidence that they can move into a fully cloud-native, fully open-ecosystem environment and understand which solutions work best across the whole ecosystem? The end goal for them is that they can quickly and cost-effectively design a modular and adaptable core network, unlike networks from the past, throwing away the constraints of having to procure the core network or RAN from one single vendor.
Operators can now adopt a best-in-class strategy, which puts them in control to prioritize specific network functions as a means of differentiating their capabilities and services. So it means that service providers can actually create a unique 5G strategy. This takes it out of the connectivity-only world and makes the 5G network a unique asset that gives them an entirely different level of differentiation, which plays back into how they will drive those new business models and ultimately monetize that 5G investment. So what comes next? How do we bend the spoon? Looking ahead, enterprises are embracing network services. Enterprises from all industries are embracing network services to build new business models based on industry-specific needs, whether that be government, manufacturing or healthcare. And that's where the next level of coordination needs to come together, with the 5G Super Blueprints. We see this as particularly compelling not only for operator customers but for the emerging service providers who are building out private network and edge offerings for those specific industries. We see a very healthy race going on in terms of who's going to dominate the private network space, and we think it's going to be won more at an industry-by-industry level, in terms of who can build solutions the fastest and most efficiently that are tuned to those specific industry needs. The whole Super Blueprint concept, too, is not just about open source. It is about interoperability between open source and vendor-developed software. And it's not just about 5G either; it's about how it can fit into existing infrastructure. For MATRIXX, being a monetization engine, it's paramount, because it's critical that our customers can drive differentiation and instantiate new business models that are tuned to a specific industry like healthcare. It kind of sounds like a paradox, right? A complete paradox: using standards and conformance to drive differentiation and innovation.
But it's only through these programs and initiatives that the key stakeholders will have that optionality and be given the creativity and the commercial agility to build out unique solutions that can be rapidly developed, rapidly integrated, and then cost-effectively repeated. So just like a movie studio can use 12 different visual effects firms to bring one movie together, operators want to use a multitude of 5G infrastructure providers to build the best solution possible for their target customers. And that circles back to the importance of smaller companies being able to find those white spaces, innovate and solve problems, and then actually have a path to market to sell both to traditional telcos and to the emerging private network sector. The adoption of open source and open standards is key to creating an environment where that is actually possible. So I'm going to close by offering everyone the red pill. We took it 13 years ago. It has been critical on our journey, from product development, to getting into market, to delivery success, and now it helps us continue to innovate on a key piece of the software ecosystem that is crucial to making 5G a real proposition for service providers and enterprises. Thank you. Thank you. What a great lineup of keynotes it has been so far. We now have a quick break: grab some coffee, stretch your legs, and come back here at 10:50 Pacific. I know it's a global audience, but we're trying to stick to Pacific time, and then we'll have some very, very cool keynotes from some of the leading CTOs of the industry. Thank you.