So my name is Huamin Chen. I come from Red Hat's Office of the CTO, Emerging Technologies, and I'm working on sustainability at the moment. If you have met me before, I used to work on Kubernetes SIG Storage, which was so exciting that I want to bring some of that energy here as well, not to create new functions, but to work on the vision of sustainability in Red Hat's emerging technologies.

Let's start with metrics, which is specifically what this project, Kepler, is about. When you have metrics about how much energy is consumed by your workloads, that is very powerful information that you can use in your configurations, in your scheduling, and in your reporting. So once that metrics information becomes available, we want to use it in workload scheduling. One of the theories is that when you run your servers at a certain capacity and a certain utilization, the performance per watt is optimal. Meaning that if you ramp up your Kubernetes cluster to, for example, 70% CPU utilization, you probably hit the best point on the performance-per-watt curve. We can use that information in the scheduler to make smart scheduling decisions, and we are investigating that potential in a project called PEAKS. The project has gone back and forth, but we finally settled on using an existing Kubernetes scheduler plugin framework called Trimaran, so that we can configure the servers to a certain capacity and use Kepler metrics to advise the scheduler, such that the aggregate power utilization across the cluster stays at the most optimal level.

The next project is called CLEVER, which we presented at last year's KubeCon. It uses the Kubernetes Vertical Pod Autoscaler, the VPA, and incorporates the Kepler metrics, the power consumption metrics from workloads.
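To make the scheduling idea concrete, here is a hypothetical sketch, not the actual Trimaran or PEAKS implementation: a scoring function that packs load toward an assumed target utilization (70% here) where the performance-per-watt curve is presumed to peak, and penalizes nodes that would go past it. The function name and the linear shape are my own illustration.

```python
def score_node(predicted_util: float, target: float = 0.70, max_score: int = 100) -> int:
    """Score a node for scheduling (illustrative only).

    Nodes below the target utilization score higher as they approach it,
    packing load toward the efficient operating point; nodes above the
    target are penalized so we do not push past it.
    """
    if predicted_util <= target:
        # Rises linearly from 0 (idle node) to max_score (exactly at target).
        return round(max_score * predicted_util / target)
    # Falls linearly from max_score (at target) to 0 (fully utilized).
    return round(max_score * (1.0 - predicted_util) / (1.0 - target))

# A node sitting right at 70% predicted utilization scores highest.
print(score_node(0.70))  # 100
print(score_node(0.35))  # 50
print(score_node(0.85))  # 50
```

In a real scheduler plugin this score would be computed per node in the score phase, with `predicted_util` derived from load metrics plus the incoming pod's request.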
And then we dynamically tune the CPU frequency. This also has a background: the higher the CPU frequency, the more energy it consumes; the lower the frequency, the less power. But in order to maintain the same level of quality of service, we set up an objective. The objective used in CLEVER is called instructions per cycle, IPC. The number of CPU cycles used to run your workload deterministically determines the quality of service that you will receive. In the same manner, if we give you more resources, you can potentially use more power for your application; with fewer resources, you have more constraints on running your application. So at a higher CPU frequency, if you are maintaining the same level of IPC, we give you fewer resources. Just keep in mind how this works: the higher the CPU frequency, the fewer resources. That means the total amount of energy consumed is still reduced, because you are using fewer resources even though you are running at a high frequency. Vice versa, if you are running at a lower frequency, we give you more resources, more CPU cycles. So the IPC is maintained, but your energy consumption is optimized that way. All of this magic happens behind the scenes in VPA, and we have an experimental prototype in our GitHub.

The very last piece is carbon. At the end of the day, sustainability comes down to carbon, the amount of carbon footprint your workload is responsible for. To get there, we have a number of initiatives happening. One is that we want to visualize how much carbon your workload uses. That's very consistent with the way people think about their utilities and electronics: how much carbon your electronics consume in use, and how much carbon it took to manufacture them. In the same way, we are doing carbon accounting.
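The frequency-versus-resources trade-off above can be sketched with a small back-of-the-envelope model. This is my own illustration of the idea, not the actual CLEVER recommender: to sustain a fixed instruction throughput at a fixed IPC, the CPU allocation scales inversely with frequency.

```python
def recommend_cpu(instr_per_sec: float, ipc: float, freq_hz: float) -> float:
    """CPU cores needed to sustain a target instruction rate (illustrative).

    cycles/sec needed = instr_per_sec / ipc
    one core at freq_hz supplies freq_hz cycles/sec,
    so cores = (instr_per_sec / ipc) / freq_hz.
    """
    return instr_per_sec / ipc / freq_hz

# Same workload (2e9 instructions/sec at IPC = 1.0):
high_freq = recommend_cpu(2e9, 1.0, 2.5e9)   # higher frequency -> fewer cores
low_freq = recommend_cpu(2e9, 1.0, 1.25e9)   # lower frequency -> more cores
print(high_freq, low_freq)  # 0.8 1.6
```

A VPA-style recommender could feed a number like this back as the pod's CPU request, so the quality-of-service objective (IPC-derived throughput) is held constant while frequency is tuned for energy.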
The other way we're using carbon is to visualize how it varies: across different geographic locations there are certain differentials, deltas, and they also depend on the time of day. Your carbon footprint from the electricity grid is not constant. Surprisingly, you can be drawing from the same electricity grid, but the carbon intensity is different at different times. And similarly, if you are running in different locations, there are also different deltas.

So let's come to the project Kepler. What is Kepler about? Kepler stands for Kubernetes-based Efficient Power Level Exporter. It uses eBPF as the underlying technology to collect information from your hardware and your workloads, and then uses machine learning models in the background to calculate the energy for you. This is a very diverse and highly collaborative project started by Red Hat, IBM, Intel, and a number of other community contributors. We recently donated the Kepler project to the CNCF as a sandbox project, and hopefully we can get more contributions from people around the world. If you scan the QR code here, there's a website for Kepler, and there's a Slack channel. You're all more than welcome to join the project.

Currently, Kepler supports a number of granularities of energy consumption. We can tell you how much energy is used by your processes, your containers, and your pods, and potentially we can also aggregate to certain higher-level APIs. We currently support CPU-level, GPU-level, and DRAM-level energy consumption. Hopefully in the next release or two we are going to support networking, storage, and other special accelerators, and we are also going to use external power sources from your network switches and certain other hardware in the data center. We currently support bare metal and virtual machines, meaning that we use the same infrastructure and the same metrics, and you can get energy consumption from your environment regardless of where your workload runs.
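The time-of-day and location deltas described above amount to a simple calculation: the same energy consumption maps to different carbon footprints depending on the grid's carbon intensity when and where it runs. Here is a minimal sketch; the region names, hour buckets, and intensity values are made up for illustration.

```python
# gCO2 per kWh by (region, hour bucket); hypothetical values for illustration.
GRID_INTENSITY = {
    ("us-east", "night"): 450.0,
    ("us-east", "midday"): 300.0,   # e.g. more solar on the grid at midday
    ("eu-north", "night"): 50.0,    # e.g. a hydro-heavy grid
    ("eu-north", "midday"): 40.0,
}

def carbon_grams(energy_joules: float, region: str, bucket: str) -> float:
    """Convert workload energy to a carbon footprint for a given grid slot."""
    kwh = energy_joules / 3.6e6  # 1 kWh = 3.6e6 J
    return kwh * GRID_INTENSITY[(region, bucket)]

# The same 3.6 MJ (1 kWh) of workload energy:
print(carbon_grams(3.6e6, "us-east", "night"))    # 450.0
print(carbon_grams(3.6e6, "eu-north", "midday"))  # 40.0
```

This is why identical workloads can carry very different footprints: shifting when a job runs, or where it is placed, changes the intensity factor even though the joules stay the same.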
We are also potentially going to support trusted execution environments, TEEs, running your encrypted enclaves.

This is one of the dashboards we built. I'm not an artist, so I'll just briefly explain what it's about. On the top left side of the panel is the carbon. As I said, it fluctuates over the course of the day. We can break the carbon consumption down by the hardware components that consume it. It's a little bit small, but if you look at the green line, that's the CPU package, and the yellow one is the DRAM. The bottom one shows no activity because we are not using a GPU at the moment. This side is the power, as collected by Kepler. We can collect it at the node level, your server level; we can also collect it at the namespace level, and we are going down to the pod level as well. But the namespace is a good aggregation: if you create different tenants based on namespaces, you can visualize them over here. On the bottom, we translate that into the amount of carbon footprint related to your workloads. This is a very small setup, by the way, just so we can have better visuals. On certain namespaces you can see higher energy consumption, and also a higher carbon footprint. So this is one example of how you can visualize workload energy as well as carbon. You can also create other dashboards based on your own requirements, and that is supported in this environment as well. Thank you.
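The namespace view on the dashboard is essentially a roll-up plus a unit conversion. Here is a minimal sketch of that aggregation, assuming per-container energy samples shaped like what an exporter such as Kepler would provide; the sample numbers, namespace names, and grid intensity are invented for illustration.

```python
from collections import defaultdict

# (namespace, container) -> joules consumed over the window; illustrative data.
samples = [
    ("tenant-a", "web", 1.8e6),
    ("tenant-a", "db", 0.9e6),
    ("tenant-b", "batch", 5.4e6),
]
GRID_G_PER_KWH = 400.0  # assumed grid carbon intensity for this example

# Roll per-container energy up to the namespace level.
by_namespace = defaultdict(float)
for ns, _container, joules in samples:
    by_namespace[ns] += joules

# Translate each namespace's energy into a carbon footprint.
for ns, joules in sorted(by_namespace.items()):
    grams = joules / 3.6e6 * GRID_G_PER_KWH  # J -> kWh -> gCO2
    print(f"{ns}: {joules / 3.6e6:.2f} kWh, {grams:.0f} gCO2")
```

In a real deployment the same roll-up would typically be expressed as a Prometheus query over Kepler's exported metrics and rendered in Grafana, rather than computed in application code.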