Hello, everyone. My name is Rajesh Gadiyar and I'm a vice president at Intel and CTO for Intel's networking business. It is my absolute pleasure to be participating in this virtual KubeCon event today. I hope all of you are staying safe and doing well.

I've had a long history in network applications. Over the last few years, I have worked on network virtualization with NFV and SDN. The hardware-software disaggregation we have been able to accomplish as a result of NFV has helped us modernize the network infrastructure, bring new innovations in network applications, create a large, vibrant, and open ecosystem, and it has laid a solid foundation for the 5G era. Today, I will talk about the next phase of network transformation with a cloud-native approach and how it is revolutionizing the deployment of 5G and edge solutions.

Here's a picture of an end-to-end network. As you walk from the left side of this picture to the right, you see intelligent devices such as industrial robots, connected cars, and analytics applications in retail and healthcare talking to an on-premises enterprise edge, which then connects to a network edge such as a wireless radio access network with 5G connectivity, on to the telco operator core network, and eventually the public clouds.

The technology transition to 5G comes at a great time. 5G brings 10 to 100x more bandwidth, 10 times lower latency, new technologies like network slicing that allow us to deliver end-to-end quality of service, and the ability to use unlicensed spectrum. All this means 5G can penetrate deep into enterprises and enable new and innovative services. However, the latency and quality-of-service demands of these new applications will not allow for all that processing to be done in a public cloud. This is where processing at the edge becomes significant, and the edge becomes a new epicenter of innovation. If you think about it, it is the perfect marriage of cloud and communications. Edge is not about a particular location.
It is really about bringing cloud computing closer to the application or service. It is about the flexibility to run application or service components anywhere in this infrastructure and stitching them together to deliver an end-to-end service.

While this distributed computing delivers significant flexibility, scalability, and TCO benefits for 5G and edge, it also brings some challenges. Edge clouds need to support heterogeneous infrastructure with accelerators, SmartNICs, and GPUs. They need to support multi-tenancy, which means you now have a larger attack surface, and hence security becomes paramount. As you disaggregate and deploy services across multiple edge and cloud locations, ease of deployment becomes a huge challenge, and you still have to deliver the desired quality of service for the applications. So the good news for all of us in the technical community is that we have plenty of work to do.

If you look at the evolution of compute infrastructure over the years, it has been an interesting journey. We went from purpose-built appliances with tight integration of hardware and software to the era of server-based computing, then to the virtualization era with the ability to run multiple applications on the same server, and now to the cloud-native era, where components of an application can run anywhere as microservices, even in multiple different clouds.

In the network and wireless space, network virtualization with NFV has been very successful in disaggregating hardware and software. This next phase of network transformation is all about disaggregating software and building network and edge applications by applying cloud-native principles. The resultant benefits are huge. You can rapidly create, deploy, and manage applications across multiple edge and cloud locations with the continuous integration/continuous deployment, or CI/CD, approach for business agility. You can support unified connectivity with end-to-end security and quality of service.
You can benefit from optimal use of compute resources across multiple locations, resulting in a much-reduced total cost of ownership. And above all, you can make this all easy to deploy with massive at-scale automation.

Now that I have the CNCF community all excited at the possibilities, I want to talk about the work we have done so far, a lot of which has been in close collaboration with the CNCF community. So first of all, a huge shout-out to all of you. I'll also touch upon a few challenges that this brilliant community can help us solve.

For today's discussion, I decided to zoom in on three challenges. First, as you know, Kubernetes has become the de facto cloud operating system. It has become the tool of choice for orchestration and automation. However, one thing I would like us to think about is how we make networking a first-class citizen within Kubernetes. Second, as the complexity of infrastructure grows and we deploy services that span multiple edge and cloud locations, how do we build a robust service assurance and observability solution? And third, service mesh, as you know, is a popular technology that delivers a common set of scalable functions such as data plane processing, east-west security, load balancing, service proxies, et cetera. How can we enhance the service mesh for 5G and edge requirements?

Let's drill down and discuss these three areas in some detail. First, let's talk about Kubernetes networking. There are three main challenges here: fragmentation, performance overhead, and ease of use, or the lack thereof. If you double-click on fragmentation, you will notice there are many container network controllers and many service mesh technologies, and these do not coexist well together in a cluster. There are also other issues, such as the lack of multi-network support and the lack of uniformity and interoperability. If you look at performance overheads, we see there are multiple traversals across the user-kernel boundary.
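To make the fragmentation point a bit more concrete: Kubernetes delegates pod networking to CNI plugins, which are executables that receive a network configuration as JSON on stdin (plus environment variables such as CNI_COMMAND) and return a JSON result describing interfaces and IPs. Here is a minimal, illustrative sketch of just those request and result shapes; it does no real network plumbing, and the config and address values are made up for the example.

```python
import json

# Simplified sketch of a CNI-style ADD handler. A real CNI plugin is an
# executable invoked by the container runtime with the network config on
# stdin and CNI_COMMAND/CNI_NETNS/etc. in its environment; here we only
# model the JSON request/result shapes. All names are illustrative.

def cni_add(config: dict, container_ip: str) -> dict:
    """Return a CNI-style result for an ADD request (no real plumbing)."""
    return {
        "cniVersion": config.get("cniVersion", "0.4.0"),
        "interfaces": [{"name": "eth0", "sandbox": "/var/run/netns/pod"}],
        "ips": [{"address": container_ip, "interface": 0}],
    }

# A toy network config, as the runtime would pass it on stdin.
config = json.loads('{"cniVersion": "0.4.0", "name": "podnet", "type": "bridge"}')
result = cni_add(config, "10.244.1.7/24")
print(json.dumps(result, indent=2))
```

Because each plugin owns this exchange end to end, two plugins do not naturally compose in one cluster, which is one root of the fragmentation and multi-network gaps mentioned above.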
On top of those traversals, there are pod overheads, per-pod memory overheads, network stack latency issues, and security and encryption overheads. And last but not least, ease of use also deserves some attention, in particular how network resources are managed and the lack of comprehensive visibility. I think it's fair to say we have some work to do to make networking a first-class citizen in Kubernetes, similar to compute and storage. We have to reduce overheads, make network capabilities uniform, address performance and latency challenges, support offloads and acceleration with the right infrastructure for SmartNICs, and make it all multi-cloud ready.

Now, let's talk about service assurance and observability. The first question you probably have on your mind is: what is service assurance? At the highest level, service assurance is the application of policies and processes by a service provider to ensure that services offered over networks meet a predefined service quality level for an optimal user experience. It is about ensuring that the service SLA offered to the user is met while minimizing operational cost. So in simple terms, service assurance is all about maintaining and meeting service quality. There are two KPIs we want to zoom in on: first, the availability of the service, and second, the performance of the service.

Today, these KPIs are largely delivered using proactive monitoring and correlation to identify misbehaving components, and, when a failure occurs, by doing root cause analysis and incident management to respond to it. You can already see the problem with this: we are either being proactive or being reactive to a failure, and once the failure has happened, it's already too late. We really need to evolve from proactive and reactive approaches to a predictive approach. The starting point for service assurance is to get full visibility into what's happening in the network infrastructure.
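To make the two KPIs and the predictive idea concrete, here is a toy sketch: availability computed as the fraction of healthy monitoring intervals, and a naive exponentially weighted moving average standing in for a predictive latency check. The smoothing factor, the sample values, and the 20 ms threshold are all assumptions for illustration, not part of any real SLA.

```python
# Toy sketch of the two service-assurance KPIs (availability, performance)
# and a naive predictive check. EWMA and all numbers are illustrative.

def availability(up_samples: list[bool]) -> float:
    """Fraction of monitoring intervals in which the service was up."""
    return sum(up_samples) / len(up_samples)

def predict_latency(samples_ms: list[float], alpha: float = 0.3) -> float:
    """Exponentially weighted moving average as a one-step forecast."""
    forecast = samples_ms[0]
    for s in samples_ms[1:]:
        forecast = alpha * s + (1 - alpha) * forecast
    return forecast

up = [True] * 98 + [False] * 2                 # 98 of 100 intervals healthy
latencies = [12.0, 13.5, 12.8, 18.0, 25.0, 31.0]  # trending upward

print(f"availability: {availability(up):.2%}")
if predict_latency(latencies) > 20.0:   # act before the SLA is actually breached
    print("predicted latency above threshold: trigger closed-loop action")
```

The point of the sketch is the last two lines: a predictive pipeline acts on the forecast, before the failure, rather than on the failure itself.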
However, the complexity of 5G networks has made thorough and reliable monitoring of the network a big challenge. The complexity primarily comes from the fact that 5G networks will be hybrid, in the sense that network components of different generations such as 5G and LTE will be operating side by side, and also that 5G will be composed of traditional physical networks, network function virtualization, and cloud infrastructure.

So what does a good service assurance framework look like? There are three key requirements. First, collect data (metrics, logs, and traces) from various entities at multiple edge locations. Second, the ability to distribute data to the appropriate data lakes. And third, store data efficiently in data lakes, perform analytics using machine learning techniques, and take closed-loop actions.

The diagram on the right shows a framework almost entirely built with open source software, much of it mature CNCF projects. At the bottom of this picture, you have the hardware platform elements, such as CPUs, NICs, and storage, that generate a lot of telemetry data, as well as virtual network functions, containerized network functions, and other applications. This telemetry data consists of metrics, logs, and traces.

For metrics, Prometheus is a widely adopted open source metrics-based monitoring and alerting system. Prometheus is growing quickly and is among the top three CNCF projects in terms of velocity. Another CNCF graduated project is Fluentd, a popular choice for log collection. For trace collection, Jaeger has gained in popularity, not just in user adoption but also in contributions from the community. Jaeger is a nice end-to-end distributed tracing platform. Combine these three elements in a Kubernetes environment and you have a really powerful platform with an excellent observability and data collection framework.

Next up is data distribution. For data distribution, Kafka is a widely adopted cloud-native streaming platform for pub/sub events.
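The pub/sub semantics that make Kafka a good fit here can be sketched in a few lines: producers append to an ordered log per topic, and each consumer group tracks its own offset, so multiple pipelines (say, analytics and a data-lake writer) can consume the same telemetry independently. This is an in-process toy, not a Kafka client; real Kafka adds partitions, brokers, and persistence, and the topic and message names are made up.

```python
from collections import defaultdict

# Toy in-process sketch of Kafka-style topic semantics. Nothing here
# talks to an actual cluster; it only illustrates the log-plus-offsets model.

class TopicLog:
    def __init__(self):
        self._logs = defaultdict(list)    # topic -> ordered message log
        self._offsets = defaultdict(int)  # (group, topic) -> next offset

    def publish(self, topic: str, message: str) -> None:
        self._logs[topic].append(message)

    def consume(self, group: str, topic: str) -> list[str]:
        """Return messages this group has not yet seen, advancing its offset."""
        start = self._offsets[(group, topic)]
        batch = self._logs[topic][start:]
        self._offsets[(group, topic)] = len(self._logs[topic])
        return batch

bus = TopicLog()
bus.publish("telemetry", "cpu_util=0.71")
bus.publish("telemetry", "cpu_util=0.92")
print(bus.consume("analytics", "telemetry"))   # both messages
print(bus.consume("analytics", "telemetry"))   # nothing new for this group
```

Because offsets are per group, a second group replays the full log from the start, which is what lets one telemetry stream feed many data lakes.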
ZeroMQ is another popular cloud-native messaging library, which is lightweight and can work without a broker. Next, for creating data lakes, MinIO and M3DB are great solutions. M3DB serves as a good cloud-native database for time-series data. MinIO is a good cloud-native object store, and it's also Prometheus- and Grafana-friendly. Apache Spark serves as a great analytics engine that works with many data lakes and also supports many popular AI frameworks such as TensorFlow and PyTorch.

Last but very important is observability, especially as the complexity grows and you're deploying across multiple Kubernetes clusters. Grafana, Kibana, and Kiali are some of the good Kubernetes tools for observability. As you can see, these CNCF projects and the other open source projects I've highlighted here combine well to provide an awesome foundation for service assurance and observability. The main gaps, if any, are in integrating these tools without impedance mismatch, and, when a service spans multiple Kubernetes clusters, in the ease of deploying the stack in a distributed fashion across those clusters and locations with some standardization.

The third area I would like to talk about is an optimized, edge-ready service mesh technology. As you're perhaps aware, a service mesh provides the communication fabric for microservices with common functionality such as data plane processing, east-west security, role-based access control, service proxies, traffic management, and load balancing. 5G is well suited to a microservices-based deployment. In fact, the 5G core has been defined as a service-based architecture in the 5G standards. As the industry grapples with standardization of interfaces that support interworking between various vendor solutions, a service mesh technology that abstracts and provides common infrastructure services can be very powerful.
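To illustrate what "common infrastructure services" means on the data path, here is a toy sketch of the sidecar pattern: client-side load balancing across service endpoints with failover on error, which a real mesh sidecar (for example, an Envoy proxy) provides transparently along with mTLS, telemetry, and policy. The endpoint names, the round-robin policy, and the injected transport function are all assumptions for the example.

```python
import itertools

# Toy sketch of a service-mesh sidecar's data-path role: pick an endpoint,
# retry on failure. Real sidecars also terminate mTLS and emit telemetry.

class Sidecar:
    def __init__(self, endpoints, send):
        self._cycle = itertools.cycle(endpoints)  # simple round-robin policy
        self._send = send                         # transport, injected for testing

    def call(self, request: str, retries: int = 3) -> str:
        last_error = None
        for _ in range(retries):
            endpoint = next(self._cycle)
            try:
                return self._send(endpoint, request)  # would be mTLS in a real mesh
            except ConnectionError as err:
                last_error = err                      # fail over to the next endpoint
        raise last_error

def fake_send(endpoint, request):
    if endpoint == "smf-1":                # simulate one unhealthy endpoint
        raise ConnectionError("smf-1 down")
    return f"{endpoint} handled {request}"

mesh = Sidecar(["smf-1", "smf-2"], fake_send)
print(mesh.call("create-session"))   # fails over from smf-1 to smf-2
```

The application never sees the retry or the endpoint choice, which is exactly why the per-sidecar overhead and key handling discussed next matter so much at the edge.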
We're already seeing some communication service providers, or telco operators, successfully deploy service mesh solutions in their wireless core with a cloud-native approach. There are three main challenges here for us to address.

First, performance. Most wireless radio access network and edge locations are severely power and thermally constrained. The performance overhead and resource consumption of a service mesh, along with high latency and jitter, can be prohibitive. Second, security. There are some security gaps in current service mesh solutions, particularly around securing the private keys associated with transactions between sidecars, that have to be addressed. And third, complexity. There is a good bit of fragmentation happening today in service mesh approaches, multi-cluster service mesh configuration is very complex, and there is work required to make service mesh solutions multi-tenant friendly. We want to collaborate with the community to optimize service mesh for high performance, security, multi-cloud and multi-tenant readiness, 5G capabilities, and automation and ease of use. At Intel, this is an important focus area for us. We believe a highly performant, scalable, and programmable service mesh can accelerate edge innovations.

That brings me to the end of my presentation today. We really live in exciting times. 5G deployments are in full swing around the world, and this is just the start. 5G has so much more potential. It is going to fuel an innovation cycle like never before, particularly at the edge, which I like to call the epicenter of innovation. And to realize the full benefits of 5G and edge computing, embracing a cloud-native architecture is fundamental. Cloud-native is the only way to deliver efficient, scalable, and automated end-to-end services.

Now, a quick call to action for the KubeCon community. Let's collaborate in the three areas I talked about today as challenges:
Making networking a first-class citizen in Kubernetes, accelerating service assurance and observability solutions for 5G and edge applications, and optimizing edge-ready service mesh technology. My team and I at Intel would love to engage with you more in these areas. So please reach out to us at 01.org/kubernetes so we can collaborate and deliver solutions for the future. A very bright future. Thank you.