We need to talk about energy. Energy is an incredible thing. Almost every activity in our society relies on some form of energy that we produce or harvest. And we're so good at it that we don't even think about the cost of turning on a light or pushing our latest commit to production. And our energy production has been exploding: today we produce about 45 times more energy than we did in 1946, at the start of the computer age. We've also become a lot more efficient at using energy. Historically, the computation we can get from a kilowatt-hour has doubled about every one and a half years. And so with the share of global energy that we dedicate to computing today, we can literally perform 200 trillion times more operations per second than we could have performed with that same share of global power output in 1946.

And so let's look at that chart again. The main driver of efficiency here was Dennard scaling: basically, that we could shrink transistors down to do more computations while using less power. And this graph ends in 2010. That was a long time ago, and some things have changed. First, Dennard scaling effectively ended in the mid-2000s. So performance and efficiency improvements have slowed significantly and are no longer being driven by shrinking transistor sizes and Moore's law. Instead, they're being driven by innovations higher up the stack. Innovations like Kubernetes, and that brings us to why we're all here.

The real opportunity with Kubernetes, and the real opportunity with cloud native, is to help us drive the next generation of performance improvements in computing, along with effective use of our resources. And as Kubernetes is built into more and more systems, the impact of our decisions is highly leveraged. Inefficient choices in how we run our systems can lead to significant monetary and resource costs. Getting this wrong, it's not like you left the bathroom light on. It's like forgetting to turn off all the lights in an entire town.
The decisions that you will make. Yes, you, everybody here. The decisions that you will make determine the need not just for a higher energy bill, but for new power plants in the future, new data centers, new forms of computing. It's really important that we get this right, and it's our responsibility to be as efficient as possible, both to preserve our natural resources and to make sure that we can innovate as much as possible with the energy that we have. So how do we do that?

First, we need to identify the biggest drivers of cost. It can be really hard in systems like Kubernetes and our new cloud native systems to understand what's actually driving resource consumption, what's driving cost. And this applies whether you're building an application, writing a new controller, or submitting a KEP. Consider what happens as systems scale. And there are emerging projects, projects like OpenCost, that can help you start to understand what's driving cost across the system, and therefore energy consumption, across clusters and applications.

Second, we need to prioritize the right optimizations. And keep in mind that the obvious optimizations may not actually be the most impactful ones. For example, you could say, hey, if we rewrote this code in C instead of Go, we could lower our energy usage by three times. That's amazing. It's not bad at all. But it could also take you years of dedicated effort to refactor all of that code. In significantly less time, you could halve your energy usage by being more flexible in the nodes that you choose and by optimizing application placement. Projects like Karpenter, which I'm really excited to say is going to join SIG Autoscaling, can help you select nodes and repack clusters to significantly reduce energy usage and cost.

And finally, we have to convert fixed, dedicated resources into shared, dynamic resources.
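To make the node-flexibility point above concrete, here is a minimal sketch of a Karpenter NodePool that allows the provisioner to choose among multiple capacity types and CPU architectures and to consolidate underutilized nodes. This is an illustrative config fragment, not a recommended production setup; exact field names and the required node-class reference vary by Karpenter version and cloud provider, and the pool name here is made up.

```yaml
# Sketch of a flexible Karpenter NodePool (karpenter.sh v1 API shape).
# Broad requirements let Karpenter pick cheaper, better-fitting nodes;
# consolidation repacks workloads and removes underutilized capacity.
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: flexible-pool   # hypothetical name
spec:
  template:
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64", "arm64"]
      # A provider-specific nodeClassRef is also required; omitted here.
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
```

The more instance flexibility you give the provisioner, the more room it has to bin-pack pods onto fewer, better-utilized nodes, which is exactly the energy-and-cost lever described above.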
And there are absolutely times when you can optimize an application across a set of dedicated resources, like in a fixed data center. But that's really rare for the types of modern, dynamic applications that we're writing today and the clusters that we're running. Shared resources let us optimize at scale. And independent studies have shown that cloud infrastructure, like at AWS, can be over three and a half times more energy efficient than the median of US enterprise data centers. And this is mostly because of a combination of more energy-efficient servers and much higher server utilization. I happen to work at AWS, but any hosting model for Kubernetes that allows you to have more resources, to share those resources, and to go from static to on-demand, optimized resource sharing will help improve your energy footprint.

Okay, so three things for us to think about and three things for us to do. First, identify what drives cost, and in turn, energy. Second, prioritize the right optimizations to improve energy efficiency. And three, adopt shared, dynamic resources. Thank you very much.
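One common way to move from static to on-demand resource sharing, as described above, is to let workloads scale with load instead of reserving peak capacity permanently. The sketch below uses the standard Kubernetes autoscaling/v2 HorizontalPodAutoscaler; the deployment name and the 70% utilization target are illustrative assumptions, not values from this talk.

```yaml
# Sketch: scale replicas with demand rather than provisioning for peak.
# Freed capacity can then be reclaimed by the scheduler (or by a node
# autoscaler) for other workloads, raising overall utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web              # hypothetical target deployment
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Paired with accurate pod resource requests, this kind of dynamic scaling is what turns fixed, dedicated capacity into the shared, well-utilized capacity the closing point calls for.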