Some folks here may also know me from the Kubernetes release team. I thought a lot about what I wanted to get up here and talk about, and instead of talking about HPC on the edge, I wanted to talk about something that impacts all users: the notion of shared responsibility in Kubernetes. Kubernetes itself has been around for eight-ish years, and managed Kubernetes services like Oracle Container Engine, GKE, AKS, and EKS have been around for about half a decade now. These services are old enough that, if they were human children, they'd be able to use sentences with more than four words, use scissors with supervision, and, most excitingly, stand on one foot for a few seconds. But in spite of their maturity, there's still a lot of confusion over who owns what within an environment. More tangibly, this translates to who's responsible for the daily toil of activities like patching and other operations and maintenance tasks.

So the concept of shared responsibility is something we all face when evaluating cloud providers; it applies to every single provider and to our daily operations. Simply put, it's understanding what falls to you as the end user and what you can entrust to some other group or organization. In an on-premises data center, this is straightforward: you own the whole stack, the hardware and the software, and there's no question of ownership. But as you move to the cloud, some of these responsibilities transfer over to your provider, and they differ depending on the service or services you're using. So whether you're deploying a WordPress site or something more complex like an HPC workload, it's really important to understand what that shared responsibility model looks like.

Now, it goes without saying that Kubernetes is not straightforward, and I would say the complexity of Kubernetes makes this task both trickier and more important to figure out. There are a lot of different dimensions to consider. To begin with, you have to figure out what kind of Kubernetes environment you're deploying. Is it something you've deployed directly onto a provider's infrastructure, their networking resources, their compute, their storage? Or is it something you've deployed to a managed service? Then there's the action you're trying to perform. Are you configuring identity and access management, monitoring, scaling? Which of these are you trying to work out the shared responsibility for? And finally, you have to think about the actual components of the cluster: the data plane and the control plane.

Just as a refresher, here are some of the cluster components you have to think about. I realize this is a bit of an eye chart, but that's kind of the point. The left side is commonly referred to as the control plane; it consists of the kube-apiserver, the cloud-controller-manager, the kube-controller-manager, and so on. The right side is the data plane: the actual nodes where your workloads run. Different types of Kubernetes services will offer different levels of shared responsibility for each of these sections of your environment.

For folks deploying clusters manually, the level of responsibility is just above doing it on bare-metal infrastructure within a data center. The provider might simplify some activities, but at the end of the day, you own all aspects of your cluster. There are obvious benefits to this: you can do things like set feature gates on the API server to test out new alpha features.
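Just as a sketch of what that looks like on a self-managed cluster: with kubeadm, for example, you can pass extra flags to the kube-apiserver through the cluster configuration. The feature gate named here is a placeholder, not a real gate; substitute whichever alpha feature you want to try.

```yaml
# Sketch of a kubeadm ClusterConfiguration that enables an alpha
# feature gate on the kube-apiserver. "SomeAlphaFeature" is a
# placeholder, not a real gate name.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    feature-gates: "SomeAlphaFeature=true"
```

You'd pass a file like this to kubeadm init with the --config flag. The point is that on a self-managed control plane, those flags are yours to set.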
Whereas, if you have the control plane managed by a provider, you're trading that flexibility for reduced responsibility. For folks using managed services, the responsibility for owning and maintaining the control plane is largely removed from you. You don't have to think about the kube-apiserver, but the drawback is that you no longer have that fine-grained control over its flags.

Users can further reduce the operational burden by doing things like using virtual nodes. This might be Fargate, Oracle virtual nodes, or GKE Autopilot, which remove the need to go one step lower and manage OS-level patching, Kubernetes updates, or scaling. Depending on your provider, this also comes with limitations. Again, you're trading flexibility for reduced responsibility.

And the ultimate step, and I won't talk about this too much because we are, of course, at a Kubernetes conference, is to abstract away the cluster altogether and use something like container instances or the AWS Elastic Container Service. These remove the Kubernetes API from the picture entirely, so any orchestration you need happens outside of Kubernetes. But you can also be very agile and just deploy your business logic without having to think about the cluster at all.

As I mentioned before, there are also all these activities you have to consider. Here's a non-exhaustive list. Some of the ones up here: maintaining your workloads, including application code, build files, container images, and data. Configuring role-based access control, maybe hooking it into a provider's identity and access management system; I'll show a quick sketch of that in a moment. Monitoring the cluster, which some providers set up nicely right out of the box, whereas others force you to do yourself; there's a sketch of that below as well. And then, of course, node upgrades and control plane upgrades, both of the host image and of Kubernetes versions. The reason this list is up here is just to show that the number of items to keep track of can feel very overwhelming.
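To make the RBAC item concrete, here's a minimal sketch of the kind of manifest that stays on your side of the line no matter which provider you pick. The namespace, the names, and the "jane" subject are all made up for illustration.

```yaml
# Minimal RBAC sketch: a Role that can read Pods in one namespace,
# bound to a single user. All names here are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: demo
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: demo
  name: read-pods
subjects:
  - kind: User
    name: jane  # often mapped in from the provider's IAM system
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

The provider typically authenticates the user, often through its IAM system, but authorizing what that user can do inside the cluster is on you.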
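On the monitoring side, if your provider doesn't wire this up out of the box, you end up owning manifests like this one. This sketch assumes you've installed the Prometheus Operator yourself, and the app label and port name are placeholders for your own service.

```yaml
# Sketch of a prometheus-operator ServiceMonitor, assuming the
# operator is already installed in the cluster. The labels and
# port name are placeholders for your own service.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: demo-app
  namespace: demo
spec:
  selector:
    matchLabels:
      app: demo-app
  endpoints:
    - port: http-metrics
      interval: 30s
```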
Thankfully, most providers do a great job of documenting these shared responsibilities, both for the underlying infrastructure and for the respective managed Kubernetes service. Here are some examples from Oracle, Microsoft, and Google. These are the official, documented support boundaries of managed Kubernetes services, as seen through the eyes of those providers. So there's no real guesswork here, no fuzzy boundary around the division of labor: this is the provider saying exactly what it's going to do and what it expects you to do. This is the place to find out which parts of your system you're responsible for the care and feeding of, versus which ones you can leave to a trusted provider.

Now, what about things that are not covered in the documentation? For example, is this third-party tool supported, this very esoteric open-source project I just found on GitHub? Well, the answer is that it's probably not going to be in the documentation. These docs are, of course, non-exhaustive, because there's just so much that goes into a Kubernetes environment. If you find yourself in that situation, I'd recommend contacting customer support and asking directly, because they'll go back to product, they'll go back to engineering, and they'll be able to suss out the answer whenever something isn't within the scope of the docs.

So, to recap. The first step is to know your environment. Then determine the components of the cluster you're going to work on and the activities you want to perform. Next up is checking your provider's documentation. For folks who haven't thought about shared responsibility before, this might be the time to do it: really see what you can offload onto your provider versus what you should pull in yourself. These are also great tools for assessing which provider or which service you want to use. Maybe container instances are the level of support you need, and you don't care about the Kubernetes API; or maybe you have the opposite use case and want to deploy directly onto a provider's infrastructure yourself. These are great questions for introspecting your workloads and thinking about what's appropriate for your use case. Also, again, if there are any fuzzy boundaries, please feel free to ask customer support; it's one of their functions. And finally, just make sure you know your shared responsibilities, because your day-to-day toil gets a lot lighter if you can offload a few of them.

Cool, thank you.