Hello everyone. Today we're going to present the Kepler updates. Kepler is a CNCF sandbox project, and we are among the main developers of Kepler. I'm Marcelo Amaral, and this is Sonia, also known as Peng. We are from IBM Research in Tokyo, and we have been contributing to Kepler's development for a few years. We'll start with a quick introduction to what Kepler is, for those who don't know it, and then the main updates. Apart from bug fixes and performance improvements, we have some key updates this year that we're going to introduce in this short talk.

First of all, what is Kepler? Kepler is a project to measure the energy consumption of processes and then aggregate it to containers, pods, and other granularities. There is no way to measure the energy consumption of a process directly in the hardware; there are no counters for that. But we do have the energy consumption of components, for example CPU energy consumption, and energy consumption is directly related to resource utilization. So, given the resource utilization of each process and the energy consumption of the components, we calculate the proportional share of the energy that was used by a specific process.

To collect the resource utilization, we use eBPF, which is lightweight and introduces low overhead when collecting per-process resource utilization. For systems that have no access to the hardware sensors exposing the energy consumption of components, like virtual machines on public clouds, we use pre-trained power models: power models trained by collecting the energy consumption and resource utilization on specific bare-metal nodes. We export all this information to Prometheus, and we also have Grafana dashboards that visualize the energy consumption of the application components.

A new feature we have right now is support for GPU virtualization, more specifically for the MIG feature, which slices the GPU into different partitions. The GPU MIG feature does not expose the energy consumption per slice, only for the entire GPU, so we need power models for that as well. In this case, we collect the power consumption of the entire GPU, get the resource utilization of each slice, partition the energy consumption between the slices, and then attribute it to the processes running on each slice. The resource utilization can be defined by different components; the current implementation is a simple one that only takes into account the tensor cores, but we are planning to extend that.
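To make the proportional attribution just described concrete, here is a minimal sketch in Go. The function and names are illustrative, not Kepler's actual internal API, and the numbers are made up:

```go
package main

import "fmt"

// attribute splits a component's measured energy (joules over one sampling
// window) across consumers in proportion to their share of the resource
// utilization. Illustrative only, not Kepler's real internal API.
func attribute(componentJoules float64, usage map[string]float64) map[string]float64 {
	var total float64
	for _, u := range usage {
		total += u
	}
	share := make(map[string]float64, len(usage))
	if total == 0 {
		return share // nothing ran, nothing to attribute
	}
	for id, u := range usage {
		share[id] = componentJoules * (u / total)
	}
	return share
}

func main() {
	// CPU package energy split by per-process CPU time collected via eBPF
	// (hypothetical numbers).
	fmt.Println(attribute(120.0, map[string]float64{
		"pod-a": 300, // CPU time in ms
		"pod-b": 100,
	})) // map[pod-a:90 pod-b:30]

	// The same scheme applies twice for a MIG-partitioned GPU: the whole-GPU
	// energy is split across slices by per-slice utilization, and each
	// slice's share is then split across the processes running on it.
	fmt.Println(attribute(200.0, map[string]float64{
		"mig-slice-0": 60,
		"mig-slice-1": 40,
	})) // map[mig-slice-0:120 mig-slice-1:80]
}
```

The same division also underlies the pre-trained power models: the model first estimates the component's total power from the observed utilization, and that estimate is then split the same way.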
Now I'm going to pass it over. Okay. This year, we have also trained more than 300 CPU power models using the SPECpower database. The SPECpower database is a large power database that contains more than 900 power reports, and these reports cover more than 200 CPU models from more than 40 industry vendors. We can use these models for cloud instances that have no access to power meters, or even for servers that don't have a power meter themselves: we pick one of the power models that matches the server's profile or specifications, and use it to estimate the power consumption of the workload.

We also have a new pipeline for power model training. We now adopt Tekton pipelines for our power model training, and when we integrate this Tekton pipeline with other CI tools like GitHub Actions or Ansible, we can automate all the steps, starting from setting up the environment for training, through collecting data and training, to delivering the power models to the cloud servers that don't have a power meter. We have now enabled the CI to collect data and train power models on self-hosted EC2 spot instances. This all lives in the Kepler Model Server project; you can read the QR code to check out the project.

Finally, I would like to introduce a brand-new project in the Kepler ecosystem. It's called SusQL: Sustainability Queries for AI applications. It fills the gap between the fine-grained energy metrics from Kepler and the need for application-level energy consumption reports. We provide the results as Prometheus metrics, and you can also query them from the CLI as well. Again, you can check the QR code for the SusQL operator project.

I just want to mention here what the key part of SusQL is: in Kepler, we aggregate energy per container and pod, but users want to aggregate the energy across different executions of different AI workloads, where we have multiple pods and multiple jobs. To aggregate that over the lifetime of the application, across different training runs, we can use labels and aggregate the energy consumption based on labels; see the sketch at the end.

I also want to say that the Kepler ecosystem has other talks here at KubeCon this year. There is a tutorial on Wednesday at 2:30 and other related talks; please come and join us. We also have bi-weekly community calls and a Slack channel, and we'll be happy to discuss the project further with everyone. Thank you very much. Thank you.
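Here is a minimal sketch of the label-based aggregation mentioned above, again in Go. The types and numbers are hypothetical and are not SusQL's actual API; SusQL itself exposes this through its operator, labels on workloads, and Prometheus queries:

```go
package main

import "fmt"

// PodEnergy is a hypothetical record of energy attributed to one pod,
// carrying its Kubernetes labels.
type PodEnergy struct {
	Pod    string
	Labels map[string]string
	Joules float64
}

// sumByLabel rolls pod-level energy up under a shared label key, so that
// all pods of one AI workload (across jobs, restarts, and training runs)
// contribute to a single total.
func sumByLabel(samples []PodEnergy, key string) map[string]float64 {
	totals := make(map[string]float64)
	for _, s := range samples {
		if v, ok := s.Labels[key]; ok {
			totals[v] += s.Joules
		}
	}
	return totals
}

func main() {
	samples := []PodEnergy{
		{"trainer-0", map[string]string{"workload": "llm-finetune"}, 5200},
		{"trainer-1", map[string]string{"workload": "llm-finetune"}, 4800},
		{"eval-0", map[string]string{"workload": "llm-eval"}, 900},
	}
	fmt.Println(sumByLabel(samples, "workload"))
	// map[llm-eval:900 llm-finetune:10000]
}
```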