So think back to your first experience with cloud. Maybe it was on the job, or maybe as a hobbyist at home. Do you remember creating your first resource? For me, it was a VM, and I distinctly remember having to pick a location, a region, for that VM, and thinking: well, this is odd. This isn't going to work. My customers aren't all in Ohio. They're not even all in the US; I'm hoping to have customers abroad. I'm not sure exactly what I was thinking. I guess I hadn't thought it through. I went ahead and pressed OK on that question, and thereby accepted a little bit of technical debt that, as an industry, it would take us years to address effectively.

Well, I've got great news. Collectively, as an industry, we're working to resolve that technical debt. We're revisiting it with a new software paradigm, one that respects the physics of software engineering and recognizes that compute and data need to be strategically placed in our physical world. No longer will you blithely run your front end on back-end resources; it'll just seem sloppy to do that.

I'm Tom McCullough, a product manager at Section, where we work on problems like these at the edge. Today I'm going to talk about the trade-offs of running containerized software at the edge, specifically the trade-off between performance for your end user and the cost of delivering that performance. There's a hint in the title to the solution I'm going to outline: the adaptive edge. I'm going to pick pieces from the Cloud Native Computing Foundation (CNCF) landscape from which you can build the solution I'll describe, so you get a little bit of a roadmap for balancing these trade-offs and reaching an adaptive, optimized solution.

When I talk about balancing performance against cost, performance means running hundreds of instances of your application at the edge so that you get, on average, lower distance, lower latency, fewer dropped shopping carts, and so on. But there's a cost associated with that, not just in compute cycles but in operating all of those clusters. You'd love to run in every cluster; you can't really afford that. How do we get the effect of running everywhere without actually running everywhere?

What if we could be adaptive? What if we could move the workload to where it needs to be? What if we could maintain a set of running workload locations that is optimal for your particular needs: optimal according to distance from your end users, optimal according to how many locations you're willing to pay for? And what if we could continuously adapt that set according to signals, health signals, utilization signals, maybe even use predictive analytics to know ahead of time that a workload can be shut down in one location and started up in another, continuously revisiting the set of locations as time goes by? That's the idea. That's the roadmap I want to give you for an adaptive, optimized edge.

I'm going to pick the pieces from the landscape; hopefully you're familiar with it. The first piece is Kubernetes, for running a containerized workload. You can pick Kubernetes wholesale from the landscape and use it out of the box. But there are other pieces we're going to need: we need to run our workload on multiple clusters; we need to move a workload from one cluster to another; we need a way to solve the optimization problem from the signals I mentioned; and finally, we need to direct traffic, because as a workload moves from one location to another based on those signals, the traffic has to follow it. To keep things concrete, picture an ordinary containerized workload as the running example; there's a sketch of one below.
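Just to fix ideas, here's a minimal sketch of the kind of workload I mean: a plain Deployment, with a hypothetical name, image, and port, and nothing edge-specific about it yet.

```yaml
# An ordinary containerized workload -- the thing we'll be placing and moving.
# The name, image, and port are hypothetical placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: storefront
  labels:
    app: storefront
spec:
  replicas: 2
  selector:
    matchLabels:
      app: storefront
  template:
    metadata:
      labels:
        app: storefront
    spec:
      containers:
        - name: storefront
          image: registry.example.com/storefront:1.0  # hypothetical image
          ports:
            - containerPort: 8080
```

Everything that follows is about deciding which clusters this runs on at any given moment, and steering users to it.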
So the multi-cluster problem is well addressed in the landscape. There are a number of solutions, some of them forks from the Federation Special Interest Group. Karmada is one example; I just picked one that would suit. It lets you run and manage workloads across multiple clusters without the cognitive load of managing each cluster individually. That solution is there and ready to go.

With regard to moving a workload, Open Cluster Management is another tool that manages workloads running on multiple clusters, and the clusters themselves. But it adds something different: a placement decision. You can customize that placement decision; it's code you write yourself and provide to Open Cluster Management, telling it where a workload should or shouldn't be prioritized. Using this, together with the signals we'll talk about in a second, you can drive where the workload gets placed. The thing about Open Cluster Management's placement decision, however, is that it operates on the initial deployment of the workload. It doesn't revisit it on a cadence, which is what we want. So you're going to have to add the capability to rerun the placement decision and then actually move the workload.

Now, with signals and optimization, this is where we're left empty-handed. The CNCF landscape does have an optimization section, but it's devoted to optimizing within a single Kubernetes cluster, and all the examples there are about optimizing for cost: how do I run this one cluster in an optimal way, minimizing compute resources, that kind of thing. We want to optimize across multiple clusters, and we want to take performance into account as well. There's nothing in the landscape that provides that, so you're going to need to dig in and come up with a solution here yourself. A sketch of how such an engine could feed into the placement decision follows.
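To give a flavor of what that glue could look like, here's one way a home-grown optimization engine could feed its signals into Open Cluster Management, using OCM's Placement and AddOnPlacementScore APIs. This is a minimal sketch under my own assumptions: the storefront workload, the user-proximity score name, and the edge-us-east cluster are all hypothetical, and the controller that computes and publishes the scores is the code you'd write yourself.

```yaml
# Ask OCM to keep the workload on the best N clusters, ranked by a
# custom score that our own optimization engine publishes.
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: storefront-placement
  namespace: default
spec:
  numberOfClusters: 3              # how many locations you're willing to pay for
  prioritizerPolicy:
    mode: Exact
    configurations:
      - scoreCoordinate:
          type: AddOn
          addOn:
            resourceName: user-proximity   # hypothetical score resource
            scoreName: proximity
        weight: 3
---
# The signal itself. Our engine writes one of these into each managed
# cluster's namespace on the hub, refreshed on a cadence from latency,
# health, and utilization measurements.
apiVersion: cluster.open-cluster-management.io/v1alpha1
kind: AddOnPlacementScore
metadata:
  name: user-proximity
  namespace: edge-us-east          # hypothetical managed-cluster namespace
status:
  scores:
    - name: proximity
      value: 82                    # -100..100; higher means closer to your users
```

The loop that keeps re-measuring, republishing those scores, and moving the workload to match the new decision is exactly the adaptive piece the landscape doesn't hand you.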
Finally, with directing traffic, we're back to having good representation in the CNCF landscape. CoreDNS is a programmable DNS solution that will route traffic from an end user to the nearest location where that traffic can be served. You do get the common DNS issues with this solution, however. An ISP, as a piece of the DNS stack, may decide to cache a single DNS answer for its entire user base, which denies you the granularity you really want: each end user routed to their own closest location, not to the closest location of some other end user on that same ISP's network. So caching is an issue, and of course the time to live is an issue too. We'd love to use the capability we're developing here to build highly available services, but if a time to live isn't being honored, you've asked for a minute and an ISP is giving you an hour, then that's the best you're going to be able to do: in the worst case, your switchover time for directing traffic would be an hour. So DNS has issues.

k8gb, the Kubernetes Global Balancer, however, brings anycast into play for us. Anycast is a capability built into IP that's purpose-built to route traffic from an end user to the closest location where that workload is being served, and k8gb uses BGP, the Border Gateway Protocol, to control that anycast capability. That's a perfect fit for what we want to do.

So I've sketched, in my short time here, a very high-level roadmap of the pieces you need, pulled from the CNCF landscape, to build an optimized, adaptive edge. You'll be able to balance performance against cost and get an optimal solution: the effect of running everywhere without actually running everywhere, without the cost of running everywhere.

Throughout the talk I've framed this as performance against cost, but businesses are unique; they want to put themselves forward in a distinctive way, and maybe you want to prioritize something else at your business. Maybe, for example, you want to minimize your carbon footprint: you could build into your optimization engine a preference for green data centers. Or say compliance is important: you could build in a preference for PCI-compliant locations, or other types of compliance. Your business can customize the optimization engine as needed to suit its purposes.

So if you're as excited about this area as I am, I encourage you to look for me at the show today, it's a nice, friendly crowd, or contact me by email. We'd love to chat with you further. Thank you.