Hello folks, welcome to Kubernetes on the Edge Day. My name is Keith Basil, and I'm the Vice President of Product for Cloud Native Infrastructure at SUSE. I also helped shape SUSE's global Edge strategy, and I'm here today to talk to you about the Edge. I'd like to thank the KubeCon and CNCF organizers for this opportunity to present, and with that, let's get started. I want to build on a quote from the Linux Foundation: the Edge computing space will be four times larger than Cloud, and will generate 75 percent of the data worldwide by 2025. That's a very bold statement, and it gives us quite a bit of runway to grow into that space as we move forward. So walk with me, if you will. Imagine a global deployment of 7,500 remote locations, with 1,000 industrial IoT devices at each of those locations. When you do the math, you're looking at 7.5 million things that need to be orchestrated and managed at the Edge. The big takeaway here is that the law of large numbers is absolutely at play, and we need to be ready to scale to meet that challenge. To add to that complexity, we're seeing deep and diverse Edge scenarios, from underwater deployments all the way up to satellites in space and everything in between. Kubernetes is being used to manage Cloud native applications everywhere across that spectrum. Within our ecosystem, we have the facilities to tackle this challenge. Before we dive into that, let's establish a framework for defining what the Edge is, because everybody has a different definition of Edge, right? Collectively, we found it very useful to establish this baseline definition of Edge, so that we can have meaningful and relevant discussions going forward. The first thing I want to walk through is what we call the Near Edge. So just walk with me for a second. 
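The back-of-the-envelope math above can be checked in a couple of lines of shell; the figures are the hypothetical deployment from the talk, not real customer numbers:

```shell
# Hypothetical fleet from the talk: 7,500 sites, 1,000 IoT devices per site.
locations=7500
devices_per_site=1000
total=$((locations * devices_per_site))
echo "$total devices to orchestrate"   # prints: 7500000 devices to orchestrate
```
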
Off to the left-hand side of the screen are the centralized services, large data centers, and the like. That's where all the centralized services live today. As you move from left to right, you get closer and closer to the Edge. The first thing you run into is what we call the Near Edge. This is absolutely the realm of the communications service providers: you've got the telcos here, you've got the multi-service operators, the cable companies who provide voice, video, and data, and you've got a movement called MEC, multi-access Edge computing. These MEC deployments are actually very interesting, because we're seeing demand for MEC solutions that support what's on the right-hand side of that line of demarcation. There are a few nuances in the diagram that I want to expand on. Number one, the border around the Near Edge is meant to represent two things: the logical network, the IP space, if you will, for that segment, and also the infrastructure within that segment. The biggest differentiator that supports the definition of the Near Edge is who owns and operates the IP space, and who owns and operates the infrastructure within it. Here, again, this is the realm of the telcos. The last thing I want to call out is that the line of demarcation is critical, because there are some Edge solutions or applications where the communications providers supply appliances that go up to and sit on that line of demarcation. Going back to the IP space definition, the IP space attached to that device is managed by the communications service provider. In fact, that gear is typically owned by them, and they offer services to the end customers on the other side of that demarc. So ownership is the really critical thing that helps us define the Near Edge scenario. 
Let me move to the next portion, which is the Far Edge. Here we move to the on-premises side. This is the remote location, and this is where things get really interesting. Again, the border is meant to represent, let's say, a layer-two domain from a networking perspective. This is customer-owned and managed IP space, and customer-owned and managed infrastructure in the form of the hardware that supports your Kubernetes clusters. The boxes of various sizes shown there represent multiple cluster sizes. In some manufacturing use cases, for example, customers have carved out a portion of the factory to act as a small data center; they've got classic 2U machines racked up, they treat it like a data center, and those are very large clusters. We also have locations where a single-node cluster serves their needs just as well. Also shown on the screen are three broad industries where we see the Far Edge playing: commercial, industrial, and public sector use cases. We think the majority of Far Edge use cases will fall under those three categories. The clusters running in that space typically support cloud native applications, and those applications represent transformational business value that's pushed out to the location where it can do the most good. Many of these use cases run local Kubernetes clusters on the premises to aggregate data from IoT devices and sensors, and that's actually a great segue into the third segment of the Edge. So far we've talked about the Near Edge, we just covered the Far Edge, and finally we've got what we call the Tiny Edge. I absolutely love this name. I heard this naming convention at our Edge conference in the fall from folks at Microsoft, and this is really where the law of large numbers kicks in. 
This space is early, and we are encouraged by the yeoman's work being done by Kate Goldenring and Edward Wong in this space. Both are Microsoft employees, and under their leadership, Microsoft has introduced an upstream community project called Akri. Akri is all about solving the problems we have in the Tiny Edge, the fixed-function device management space. It's really cool, and at SUSE we want to be involved in that community as well, and we would encourage you to take a look at it. I believe later in the session there's a talk by Edric that speaks about Akri. Now that we have a working definition of the Edge, let's talk about the three pillars that make up the solution for managing Kubernetes at scale. Again, I want to come back to this: you can have one Kubernetes cluster managing, let's say, a thousand downstream clusters, and those could be geographically dispersed around the globe. Given the law of large numbers and the diversity at play, we see three pillars that are required to address this management-at-scale challenge at the Edge. The first pillar is the distro. We're very fortunate on the Rancher side to have released K3s to the world, and it's a very popular distro: with literally one command line, you can have a CNCF-certified distribution running on very lightweight hardware. The distro matters because, importantly, it allows us to preserve, reallocate, and extend our existing investment in Kubernetes; we can carry that learning, that skill set, and those resources to the Edge where we need them to go. Second, we need a distro that thrives in resource-constrained environments, in remote locations with limited connectivity, and as the Kubernetes layer within some of the Edge applications we're seeing. Then the second pillar is a lightweight operating system. 
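The "one command line" claim is literal: K3s's documented quick-start is a single installer script. A minimal sketch, assuming a Linux host with curl and root access (the install URL is the one K3s documents):

```shell
# Install K3s via its documented one-line quick-start script.
curl -sfL https://get.k3s.io | sh -

# The install bundles kubectl; verify the single-node cluster came up.
sudo k3s kubectl get nodes
```

After a minute or so, the node should report Ready, giving you a CNCF-certified cluster on hardware as small as a Raspberry Pi.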
There are many options for a lightweight operating system that's container-native, or container-native friendly, which is probably a better way to say it. We believe this is required to provide a small attack surface from a security perspective, and more importantly, to let us manage the full lifecycle of the operating system itself, because when you're managing, let's say, 7,500 locations, the lifecycle of the operating system should be handled in a Kubernetes way. Lastly, the third pillar is management. For us on the Rancher side, that's our Rancher management platform, but more specifically, it's the ability to adopt a GitOps approach to managing downstream Kubernetes clusters at scale. People are a precious resource, and those who understand our space know we need to adopt technology that leverages our existing skill sets so we can scale our management capability out to those large numbers of downstream clusters. We think the GitOps approach naturally fits the Kubernetes declarative model for managing infrastructure. When you look at all three of these pillars together, we think this is the minimum set you're going to need for an effective Kubernetes-at-scale management solution: Kubernetes management at scale with GitOps, a lightweight distro such as K3s or a related variant, and a lightweight operating system focused on the Cloud Native space. Put it all together and you have the solution at the bottom of the slide, and it will allow you to attack and manage all three segments of the Edge. In the Near Edge, the telco space, it's more of a classic data center play with small or regional data centers. We can rack and stack at that point, and that's something we know very well. 
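To make the GitOps pillar concrete, here is a minimal sketch of how Rancher's Fleet expresses a Git source of truth for downstream clusters. The `GitRepo` kind and `fleet.cattle.io/v1alpha1` API group are Fleet's; the repository URL, path, and cluster label are hypothetical placeholders:

```yaml
# Hypothetical Fleet GitRepo: every downstream cluster labeled env=edge
# pulls its workloads from this Git repository and keeps them reconciled.
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: edge-workloads
  namespace: fleet-default
spec:
  repo: https://example.com/acme/edge-gitops   # hypothetical repository
  branch: main
  paths:
    - manifests/                 # hypothetical path of Kubernetes manifests
  targets:
    - clusterSelector:
        matchLabels:
          env: edge              # hypothetical label on downstream clusters
```

The point is that adding the 10,001st cluster is just another label match: the declarative source of truth stays in Git, and the management plane does the fan-out.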
Things get really interesting when you come over to the Far Edge, where you start deploying a lightweight version of Kubernetes and a lightweight operating system, and then manage them with a declarative GitOps source of truth behind, let's say, 10,000 deployments. Then there's the emerging Tiny Edge. We like the Akri project a lot inside Rancher, and again, we want to contribute to that community, but it's also very new. There's a handful of protocols heavily used in the industrial IoT space, and we want to help mature the capabilities and the adoption of those protocols. This is the complete solution, we think it's a win for the entire community, and we are going to be actively working on it. Rancher will be doing a lot of this work, and we ask that you join us in the areas we've outlined. We'd also love to see the definitional framework I discussed adopted, because we believe it will make our discussions more efficient in guiding us to solutions that work. Overall, I think we should strive to remove the complexity that's inherent in our systems. With that, I want to thank you for this opportunity, and we hope to see you face-to-face at the next KubeCon. Thank you again for your time, have a great conference, and we'll see you soon.