Well, hello everyone. Thank you for joining us for the next installment of Kube by Example Insider. The point of the show is that we interview people who are actually doing the work in Kubernetes, with the hope of finding out what's really going to land over the next, say, six months, rather than talking to product management. The challenge with open source, of course, is that people always have the option to work a night or a weekend to get the feature they really want in, even if a product manager doesn't want it. So we find it's more interesting to talk to the people who are, like I said, actually doing the work. I'm Langdon White. I'm a clinical professor at Boston University, where I teach data science and some computer science, as well as an experiential learning program called Spark, so if you see my water bottle, you'll see the logo; name collision with Apache Spark notwithstanding, it's been around since about 2017. So that's me, and I'm joined by my co-host, Josh Berkus, who's going to introduce himself.

Hi, I'm Josh Berkus. I've been involved with Kubernetes and the CNCF for as long as both have been around. I work for Red Hat, and I am delighted today to be interviewing, starting with Cara Delia, who is a tech lead of TAG Environmental Sustainability in the CNCF. I forgot to mention what we're doing here today: we're talking about environmental sustainability. One of the big problems with big data centers is that we have all this hardware running, so what can we do about that? Sorry to interrupt there, but Cara, please go ahead.

Yeah, no worries. Thanks for having me. I'm Cara Delia, part of Red Hat's OSPO. I work upstream in one of the CNCF communities focused on environmental sustainability. I'm a technical lead for the TAG, our technical advisory group, and I'm also co-chair of the comms working group. Huamin, over to you.

Thank you, Cara. Nice to meet you all. My name is Huamin Chen. I also work at Red Hat; I joined about 10 years ago, so this is coming up on my 10th anniversary. I started working on Kubernetes when it started, joined the CNCF then as well, and I've participated in a number of CNCF-hosted projects. Most recently we created Kepler, which, for the sake of environmental sustainability, provides tooling for people to understand what their energy consumption is. That project is developed by major contributors including Marcelo, who is sitting next to me this morning. My turn? Probably, yes.

So, hello, I'm Marcelo Amaral, from IBM Research in Tokyo. I've been working with Kubernetes almost since the beginning too, on different projects, doing different things. Most recently, for the last couple of years, I've been joining efforts with Huamin and with Red Hat to develop the Kepler project: improving the power models and the performance and scalability of Kepler itself, and also investigating and publishing papers and blog posts about Kepler.

Cool. So to get started: what is the point of a TAG around this subject? Any of the three of you can answer, but what is the charter or the goal, and why is it considered a problem?

I can hop in.
So the TAG is obviously leveraging the open source community to define environmental sustainability factors. Through the UN, there are 17 Sustainable Development Goals, or SDGs. The TAG is focused on the cloud native landscape, to incubate and advocate for open source projects that observe and measure cloud native infrastructure and combat environmental challenges. So it really is about raising awareness of environmental sustainability, and the key element of that is open source.

Okay. And what about open source? Why? What about open source makes that more achievable, or a better goal?

Well, these are goals that everyone needs to comply with, regardless of industry, and every industry has a data center. Instead of recreating the wheel, it's being able to use the principles of transparency and collaboration to create those standards, because really the SDGs came about through the Paris climate accords in 2015, and even farther back than that, when the net zero goals for 2030 and 2050 came about. By collaborating, you're not reinventing the wheel, you're all on the same page, and you're able to integrate environmental sustainability reviews into your release cycles. Ideally, those are going to impact your carbon footprint in a positive way.

So what does this have to do with Kubernetes?

Well, and I don't want to take it all over. You jump in here.

Yeah, I think Kubernetes is the status quo in a data center; a cloud native data center is becoming the infrastructure of choice. People are using modern technologies, and microservices in particular, so this becomes one of the biggest infrastructures in the world. Anything that can help Kubernetes and cloud native environments achieve sustainability goals will be very beneficial to the whole computing environment and the whole industry. So we believe this is a great open source environment to build on, and one of the most impactful places where we can help. That's why we started in the Kubernetes cloud native ecosystem.

Gotcha. I would ask too: Cara, I think you explained why open source feels like the right answer for you. Huamin, can you tell us what brought you to open source in the first place, or what brought you to Kubernetes, beyond this particular scenario?

Yeah. Open source actually provides different perspectives. When software engineers develop software, they have the choice of keeping it private or making it open. Being open gives you advantages: people can jump into your project without having to be pre-assigned, and anybody who has a particular interest can look at your project, give you advice, and even make a contribution. That opens up a whole host of opportunities. Second of all, especially in environmental sustainability, we need transparency, open standards, and open technology. Without those, closed source technologies and methodologies will be met with skepticism. So we believe open source, especially in sustainability, makes a lot of sense, and we are accelerating the innovation in this space. We believe this is the way to do the right thing.
And also, Marcelo, from a research perspective, you probably see open source bringing different values. Would you mind explaining from your perspective?

Yeah, well, maybe I'll start with why Kubernetes. I think Kubernetes won the battle for the cluster resource manager. In the past there was OpenStack, Mesos, Kubernetes, a lot of things, and Kubernetes became the most used cluster resource manager framework that has been deployed. Right now everyone is using it, all the public clouds and private clouds, unless they have something very, very specific. Cloud-based workloads are overwhelmingly deployed on Kubernetes, the majority of them, I would say. And given that, the Kubernetes community is also open source, so it also drives the direction for, and empowers, all the other open source projects. We also have the CNCF effort to consolidate all the open source communities together inside the Kubernetes ecosystem, which makes it easier for people to find projects and also contribute to them. And I would say that all of these projects, the whole Kubernetes community, can benefit from sustainability. Sustainability can bring benefits for reducing energy costs, for example, and also for minimizing climate impact and raising awareness of the environment, things like that. As for the benefits of open source, I also see the collaboration between communities. I know big companies can develop very good software, of course, but there is no way to compete when a lot of big companies put effort into developing the software together, is there? So we have collaborations, for example on the project we are working on right now, Kepler. We are going to talk more about that later, but just to say, we have a lot of companies involved in it: Red Hat, IBM, Intel, and other partners, which makes the discussion broader. We receive more feedback and have more people to work on specific features that we can improve in the project.

Right, so environmental sustainability goals led to the development of the Kepler project. Can you give us a little history of the Kepler project?

Yeah. When we conceived the project, we had a number of discussions, meetings, and surveys, so we identified opportunities and gaps in the current state and then developed the project on that basis. Around early, I think probably the end of 2021, we surveyed a number of existing projects. We were looking at the cloud native infrastructure space and we believed there was an opportunity and an urgency to build a new technology applicable to a lot of use cases. Just for your reference, about how we were looking at the space: the existing practices either estimate from a number of data sets or take measurements in bare metal environments. None of these methods can reconcile with each other, and none of them can be used universally across the cloud native space, meaning both bare metal and virtualized environments in public clouds.
And there are a number of other things architecturally, including how we are going to get the information we need to estimate the energy without using a lot of resources, and how to use the latest Linux technologies like eBPF to minimize our footprint, since that can go a long way. So we came up with the idea of creating the Kepler project based on these surveys, learnings, and new technologies, and that has become successful. Sorry, go ahead.

Just to add some more motivation here. Maybe someone could say there are some tools in the public cloud that report carbon footprint, like the carbon footprint tool in AWS; IBM also has its own tool. However, the challenge is the granularity. For AWS and IBM, the public clouds, and I'm not sure about the other ones, I need to check, what is similar in all of them is the granularity: it's per month. They report the carbon footprint monthly. That makes it hard to do optimizations. Although it brings some awareness to the user, it's hard to optimize and to understand exactly which part of the workload is consuming power, or to make scheduling decisions about where to run things, at a monthly granularity. I would say Kepler brings a different perspective here: we can show the energy consumption per second or per minute, depending on the granularity, which is configurable. That brings more opportunities for optimization.

So in other words, the existing tools are almost too broad, or they're too far away from the application itself to know entirely what's going on. What you're getting here is finer-grained information that can be more application aware, so that you can affect those workloads. So what is Kepler reporting, exactly?

At a high level, we can report the server level, so the Kubernetes node-level information about energy consumption, including CPU, DRAM, and GPU, and we are going to support other hardware types as well. We also support the container level, so if you are running a container in Kubernetes, we're going to help you understand how much energy is used by the container's CPU, GPU, and DRAM, things like that. If you are running in a virtualized environment, we can also report how much energy is used by the VMs. And if you run in a traditional OS environment, using a database or some other specialized software, we can also report energy at the process level. So this will help you understand workload energy consumption at different granularities. Tools based on Kepler can aggregate this information to different levels across different nodes, by namespace, by label, or something like that, which can help you understand the energy consumption in a distributed environment and open up even more opportunities. Because we are using models (Marcelo is also a scientist, adding lots of model support), the models themselves can be decoupled. We use the models to predict or estimate how much energy is used, and these models can be used for other purposes as well. For example, if you are using models in scheduling, you can bring energy efficiency into consideration when making scheduling decisions. Similar things can go a long way.
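To make that aggregation idea concrete, here is a minimal Go sketch of rolling hypothetical container-level energy readings up to namespace totals. The ContainerEnergy type, field names, and numbers are invented for illustration; they are not Kepler's actual data model or metric names.

```go
package main

import "fmt"

// ContainerEnergy is a hypothetical reading: joules consumed by one container
// over a sampling interval, labeled with its namespace.
type ContainerEnergy struct {
	Namespace string
	Container string
	Joules    float64
}

// sumByNamespace rolls container-level readings up to namespace totals,
// the kind of aggregation tools built on top of Kepler metrics can do.
func sumByNamespace(readings []ContainerEnergy) map[string]float64 {
	totals := make(map[string]float64)
	for _, r := range readings {
		totals[r.Namespace] += r.Joules
	}
	return totals
}

func main() {
	readings := []ContainerEnergy{
		{Namespace: "payments", Container: "api", Joules: 42.0},
		{Namespace: "payments", Container: "worker", Joules: 18.5},
		{Namespace: "analytics", Container: "spark-driver", Joules: 73.2},
	}
	for ns, j := range sumByNamespace(readings) {
		fmt.Printf("namespace=%s energy=%.1f J\n", ns, j)
	}
}
```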
So scaling and tuning can also borrow a page from the modeling: it will help them understand, if you turn the knob this way, what the energy consumption will look like. And that opens up opportunities for innovation in optimizing energy consumption.

You alluded just now to the question I was thinking about, which is that you said you're using models. With everything in the environment being virtualized, how do you know what the actual energy consumption is? Is it that you're making an estimate based on a guess about what hardware is underneath, or are you actually receiving it? What are you basing it on? Or is it the other end of the spectrum, where it's more "you're using more than you were before" rather than a hard number?

Marcelo, you go ahead first.

Yeah, so there is no simple answer for that. Maybe we need to do some background introduction on why you need a power model. We have two contexts: bare metal, where we have access to the hardware, and virtualization, where typically the hardware is not exposed inside the VM, especially in the public cloud, for security reasons. Let's focus the first example on bare metal. There are no hardware counters that count the energy consumption of an application, so we still need to use power models here. We have the energy consumption of the CPU, or of the entire node, and then we need to split that energy consumption across the processes running on the node. The basic assumption is, thinking about the CPU: if a process is using 10% of the CPU, then 10% of the energy consumption of the CPU is attributed to this process. Defining CPU utilization can be trickier, because the CPU also has instructions, cache, a lot of things, so we might need more metrics to define utilization if we want finer accuracy in attributing the energy consumption. But basically, in a broader view, we can use, for example, CPU time, take the ratio, and divide the power. This is when we have direct access to the power consumption from the hardware: we know a process has 10% of the CPU utilization, so we just take 10%.

However, on virtual machines, right now we don't have access to the energy consumption of the node. We envision that maybe in the future cloud providers will provide some API for that, which would be perfect, and Kepler could access the energy consumption of the VM and split it internally. But right now we need to do predictions. Prediction is basically a regression: on a bare metal node, we run a set of experiments with many different workloads, changing the CPU frequency, changing how the workload behaves on the bare metal, collect all these metrics, and fit a regression model on them. This is the power model, basically as simple as that. The power model, of course, has limitations; maybe we can discuss later what the limitations are. But this is how we get the power model.
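As a rough illustration of the ratio-based attribution Marcelo describes for bare metal (a process that accounts for 10% of the CPU time is charged 10% of the measured CPU energy), here is a minimal, self-contained Go sketch. The process names and numbers are made up, and Kepler's real attribution uses more metrics than CPU time alone.

```go
package main

import "fmt"

// attributeByCPUTime splits a measured node-level energy value (joules over an
// interval) across processes in proportion to the CPU time each used in that
// interval.
func attributeByCPUTime(nodeJoules float64, cpuSeconds map[string]float64) map[string]float64 {
	var total float64
	for _, s := range cpuSeconds {
		total += s
	}
	shares := make(map[string]float64)
	if total == 0 {
		return shares
	}
	for proc, s := range cpuSeconds {
		shares[proc] = nodeJoules * (s / total)
	}
	return shares
}

func main() {
	// 500 J measured for the CPU package over the interval (e.g. via RAPL).
	cpuSeconds := map[string]float64{"web": 1.0, "db": 3.0, "batch": 6.0}
	for proc, j := range attributeByCPUTime(500, cpuSeconds) {
		fmt.Printf("%s: %.1f J\n", proc, j)
	}
}
```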
And with the power model, based on the regression over some specific metrics, CPU time for example, we can use it in virtual environments to estimate the energy consumption for a given CPU utilization observed inside the virtual machine.

Gotcha. So basically you're making projections about what's going on based on the information you have. And it's interesting, I would imagine there's a real issue with where it's running, right? You probably have a pretty good advantage when you're running in a well-known data center. If it's my data center, I can probably say, hey, these are the types of CPUs we're running, that kind of thing, and give better data. That's what I was wondering: it's a difficult thing to monitor, because it's not a normal API like we're used to.

Yeah. I guess my big question is: what about public cloud? Do the public cloud providers actually provide accurate energy data?

I think we all have an answer for that. We'll go to each in turn. Marcelo, why don't you go first?

I think if end users start to ask more about that, the cloud providers will change their priorities. Of course, my opinion is that the cloud providers can provide this; they have the data. It's just a matter of people solving different problems and putting their workforce on different things. However, if there is a big requirement from many clients, the cloud providers will provide it, the same way they provide the carbon footprint monthly; they can just lower the reporting interval and provide this information per minute or per hour. Per hour would already be very good for us.

Cara, why don't you go next? You haven't had a chance to say a lot.

Yeah, well, I think the key word in your question is "accurate," Josh. AWS and Azure and GCP, Google Cloud, have CO2 dashboards, but they're at the cloud service level; they don't have granularity at the pod level, which Kepler provides. The other piece is that there really isn't an industry standard. That's another reason why open source is important: there's no industry standard around energy consumption and how to report it, or carbon footprint for containers. So the real opportunity is open source, open standards for hybrid cloud, and how you report that as far as the E in ESG.

So I also imagine the models aren't particularly standard either, right? You have both sides of the problem: not only how do I go get the information, but also what's a legitimate projection based on that information. Huamin, did you want to add something?

Yeah, as Cara just said, the standard is not there, so each of the clouds can do its own thing in its own way. If you operate between multiple clouds, you get confusion. That's thing number one.
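To show what "prediction is basically a regression" can look like in the simplest case, here is a hedged Go sketch that fits a linear power model (power as a function of CPU utilization) from hypothetical bare metal measurements and then applies it inside a VM where only utilization is observable. The training numbers are invented, and Kepler's real models use more features than a single linear fit.

```go
package main

import "fmt"

// fitLinear fits power ~= slope*util + intercept by ordinary least squares,
// from samples collected on a bare metal node (utilization vs. measured watts).
func fitLinear(util, watts []float64) (slope, intercept float64) {
	n := float64(len(util))
	var sumX, sumY, sumXY, sumXX float64
	for i := range util {
		sumX += util[i]
		sumY += watts[i]
		sumXY += util[i] * watts[i]
		sumXX += util[i] * util[i]
	}
	slope = (n*sumXY - sumX*sumY) / (n*sumXX - sumX*sumX)
	intercept = (sumY - slope*sumX) / n
	return slope, intercept
}

func main() {
	// Hypothetical training data from benchmark runs on bare metal:
	// CPU utilization (0..1) and the corresponding measured package power in watts.
	util := []float64{0.05, 0.25, 0.50, 0.75, 0.95}
	watts := []float64{22, 48, 80, 110, 135}

	slope, intercept := fitLinear(util, watts)
	fmt.Printf("model: power = %.1f*util + %.1f W\n", slope, intercept)

	// Inside a VM we cannot read the node's energy, so we apply the trained
	// model to the utilization we can observe.
	vmUtil := 0.40
	fmt.Printf("estimated power at %.0f%% utilization: %.1f W\n", vmUtil*100, slope*vmUtil+intercept)
}
```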
As we provide these open source architectures, as Marcelo just discussed, it is not significantly harder for the cloud providers to adopt them: train the models, set up the environments, and provide the models to end users. This can be done by everybody using a similar methodology, and then it will be transparent and adopted by all the end users, so they will understand what the metrics from the cloud providers mean. Providing fine-grained, high-resolution data will help us do a lot of good things. Not like monthly, where we receive a bill and then decide what to do next month; we can actually do real-time tuning and scaling, because that's the promise of microservices. You can be very agile, you can be adaptive, and you can use that adaptation to save energy. So the tooling, the tuning, the methodology, and the open source models can all be adopted by the public clouds, and they can use similar methodologies to help us achieve better goals. The public clouds can also help by opening up more APIs and insights into how they are measuring their sustainability. For example, this year Amazon will announce a new generation of their own Arm-based architectures, and they will claim generation-over-generation sustainability improvements, 30%, 60%, depending on which generation you compare. But we never have a chance, outside of Amazon, to measure the power consumption. So it's really hard for end users to get any sense of how to optimize their workloads, or use different architectures, to save energy. Not disclosing this information, not disclosing these APIs, makes it harder for users to consume the products as well. So I advocate the open source way, and that opens up opportunities not only for the end users, but is also beneficial for the cloud providers themselves.

Well, I was actually going to ask the wider question there. You brought up the Paris Accord earlier: are there organizations within the UN, or other NGOs, who can give some of your requests more weight? There are a lot of relatively prominent NGOs interested in increasing sustainability and environmental awareness. Are any of those kinds of groups involved? Do you see some of those organizations showing up? Is that something we can encourage?

Yeah. So the Sustainable Development Goals were created by the UN, and right now I would say they are looking at this, but they are not giving recommendations. Primarily the focus has been on sustainable finance: how financial portfolios handle transition risk from the present day to their net zero goals, and physical risk, say, if you're an insurance provider going to provide coverage to a large corporation in Miami, Florida. But as far as actually creating the bodies and the standards for data centers, that's really the other opportunity. There are other things happening in the LF: LF Energy, OS-Climate, the CNCF Environmental Sustainability TAG, not just within the Kubernetes space.

Just a clarification: LF is the Linux Foundation.
Josh, did you have another question?

Well, yeah. It sounds like right now the public clouds are not providing this information, or not in a way that is consumable by anything other than their dashboards, right? Because when Huamin was talking about moving workloads, I was of course thinking about public cloud, particularly because energy sources can differ a lot. For example, on AWS, US East 1 is mostly powered by fossil fuel power plants, whereas US West 2 is mostly powered by hydroelectric power, which actually matters if you're looking at it from an environmental perspective. But without that information coming from the public cloud provider, there's no way to make use of it.

Yeah, that makes perfect sense. We have thought a lot about these distributed environments, about how workloads can be relocated based on these differences in carbon intensity. That's something public clouds can help with: enabling such intelligent decisions by providing real-time, or close to real-time, metrics that we can use. That would be very beneficial. And in addition to carbon intensity, there is also water consumption. A data center needs cooling, and cooling needs water. Water is not a free resource; it has a lot of environmental impact, and it's also related to the energy they consume. So using energy as one of the major inputs when deciding how to schedule workloads and which data center to use, that's where there are a lot of savings for every one of us. And it's definitely possible.

Sorry, I was just going to say: we already are allowed to select data centers based on data residency, right? Because basically the EU requires certain data to stay within the EU, et cetera. So they have the capability to give us some selection over that. Sorry, Cara, did you want to add something?

Yeah, I was going to add that there are tools for measuring the carbon footprint associated with cloud usage, but really it goes back to the G part of ESG: everyone is using different methodologies for how they measure it, so the scope is going to look different. So again, going back to a couple of your questions, the accuracy is not there. There are tools, and there are ways to get disparate information, but not a holistic view.

Right, right. So to change the conversation a little bit: Kepler is mostly about surfacing information about what's going on, but Marcelo, I think you alluded to making choices based on what you see in that dashboard, for lack of a better term. What in the infrastructure layer is being done? Obviously there's my app, which I can control and try to choose better consumption mechanisms for. But I can't really control how my overall infrastructure runs. So for Kubernetes itself, or all of its little friends that I plug in, are there efforts to make those things consume less energy in and of themselves?

Yeah, I think Marcelo has participated in a number of projects using virtualization, like KubeVirt. With KubeVirt, think about how the infrastructure is organized: you have a bare metal machine, you stand up a number of virtual machines, and each of them is solely responsible for its own cluster. KubeVirt is just one example of how you can virtualize the environments.
So each of the virtual servers could be responsible for its own cluster and all of its infrastructure, and you can fine-tune the sizing of the virtual cluster to be more resource efficient and, as a result, more energy efficient. That's one way. Another way is to minimize the footprint of lots of other things in the infrastructure, for example the control planes. Previously, we worked on a project called MicroShift. MicroShift is basically a minimized control plane for OpenShift. It combines all the binaries, all the containers, into a single binary, so you have the Kubernetes controller manager, the API server, and things like that coalesced together, and that single binary requires a much smaller footprint than all of them running separately. There are lots of projects and technologies like this, so to speak. You can also virtualize the control plane itself by running it inside containers; that's the hosted control plane, and there are lots of companies doing this, including Red Hat. So you can scale your control planes in and out, and you can even hibernate them if you don't use the cluster at all. That's where we can significantly reduce the dedicated resources needed to run your Kubernetes infrastructure.

That makes a lot of sense. Are you seeing a lot of uptake? Are there a lot of people who find this to be a concern? Or is it still about awareness, where you're trying to spread the word about tools like Kepler and the fact that this matters? Or do you think everyone who hears about it is instantly like, oh yeah, that makes a lot of sense?

Yeah, definitely, and getting more visibility will help us accelerate adoption. We also want to raise people's awareness of both the usability of Kepler and the scientific methodologies we are using. That's why this is a combination of a development and research community working with end users to co-engineer everything in this technology space. We let people know how to use Kepler by providing the best usability possible, optimizing the installation process and the observability experience; that's one way we can improve how Kepler gets adopted. Also, Marcelo is one of the leading researchers in the community, and he has published research papers explaining how Kepler works and the scientific methods Kepler uses. And finally, we come to programs like Kube by Example to raise visibility, so hopefully people can understand what the project's purpose is and how they can use these projects to help.

Just to add something: I think increasing visibility and showing developers their energy consumption, to increase awareness of sustainability, is very important. But at the end of the day, the thing that makes the most impact is a government request. For example, I think it was last year that the European Union requested that companies report the energy consumption of AI workloads in Europe, and this changed the market a lot; even Kepler became much more important in the community. It's not a hard requirement, just a soft request, as far as I know; we should double-check that.
But with governments making those requests, it may push companies toward awareness of their energy consumption and carbon footprint. I envision that in the future those kinds of requests will be much more frequent, and it will become much more important to standardize everything, as Cara was saying. Standardizing between companies is something that's challenging right now. For the audience that doesn't know what that means: it means, for example, that maybe AWS reports the network energy consumption, but other cloud providers don't, and then it's hard to compare. What's being put in the equation? Companies are using different equations, and it's hard to compare. The same goes for data center carbon footprints. Some data centers have solar panels, and that's not easy to know from the outside; it's internal. There are some tools that show carbon emissions of data centers, but we don't know how much solar power they are using; only the cloud providers know that. So there are gaps that are still there. But again, when governments start to push more in this direction, I think things are going to move faster.

Yeah, and dovetailing onto that, that's a perfect segue. What I was going to share is that at the macro level, from an industry perspective, telecommunications, or telco, uses two to three percent of the world's energy in its data centers. As far as standards that are coming forth, primarily from the EU, as Marcelo mentioned, so Europe, some in North America, and in APAC, in Asia, there is more interest, but it does have to come from the top down. Certainly we want to make sure that developers feel empowered and that they are partaking and contributing to Kepler, the other projects within the CNCF TAG, LF Energy under the Linux Foundation, and other sustainability projects. So I don't think it has to all be top down, but where the real impact is going to be seen, like Marcelo was saying, is in government, through regulation, and then the big industries.

And from the Kepler community, we want to be prepared for that time, to have the perfect tool for it. Exactly.

So to veer off into the technical here a little bit, how does Kepler collect its data? Where is the data actually coming from, and how does it get analyzed?

Yeah, so we collect data at different layers. At the OS layer, we collect data such as CPU instruction cycles; these are performance counters, usually available on all CPUs. These generic hardware counters are associated with software counters from context switches, and most recently Marcelo added things like page cache statistics. This information collected at the OS layer is bubbled up to the user space layer, where we collect the energy data from different counters. Intel has its RAPL (Running Average Power Limit) counters that we can read from the CPU; Arm has its own methods of reporting, depending on the vendor. There are also things like power meters: ACPI reports power consumption for the whole system, and technologies like Redfish report the power consumed by the server at the board controller level.
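As a small illustration of one of those sources, here is a hedged Go sketch that reads the cumulative package energy counter exposed by the Linux intel_rapl powercap driver and turns two readings into average watts. It assumes an Intel machine with that driver loaded; it is not how Kepler itself is structured, which combines eBPF-collected utilization with several power sources.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"time"
)

// Path to the package-0 RAPL energy counter exposed by the Linux intel_rapl
// powercap driver; the counter is cumulative microjoules and eventually wraps.
const energyFile = "/sys/class/powercap/intel-rapl:0/energy_uj"

func readMicroJoules() (uint64, error) {
	data, err := os.ReadFile(energyFile)
	if err != nil {
		return 0, err
	}
	return strconv.ParseUint(strings.TrimSpace(string(data)), 10, 64)
}

func main() {
	start, err := readMicroJoules()
	if err != nil {
		fmt.Println("RAPL not available:", err)
		return
	}
	time.Sleep(1 * time.Second)
	end, err := readMicroJoules()
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}

	// Ignoring counter wraparound for brevity; a real collector must handle it
	// using max_energy_range_uj.
	watts := float64(end-start) / 1e6 // microjoules over 1 s -> watts
	fmt.Printf("package-0 average power: %.2f W\n", watts)
}
```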
This power information, together with the per-process utilization information, is fed into the power models we just discussed, which attribute the energy at different aggregation levels: process, container, VM, and pod. Then this information is exported to Prometheus, so people can view it on their dashboards once they collect the Prometheus metrics, and that helps them understand how power is being used at different levels.

Okay, of course I have to ask: how much overhead do the collectors themselves introduce?

We're trying very hard on that. We started, let's say a couple of years ago, measuring how much CPU utilization Kepler itself uses, at both the eBPF level and the user space level. At the time, we found it was about 2%, and we thought that was the state of things. But it turned out that in certain environments, like real-time kernels, Kepler was actually using much more. So we did a lot of optimization and reduced a lot of things. Marcelo also made big contributions, removing lots of redundant code and providing profiling mechanisms so we can pinpoint exactly where optimization can be done to reduce the footprint. We are very proud to say that Kepler now has very low overhead: in terms of CPU utilization, sometimes below 1%, sometimes below 2%, depending on the environment and the configuration. We believe further improvements can be made with even more optimization; this work never ends. We are trying very hard to make sure Kepler is friendly and has a very low footprint while giving you the numbers you need.

It's always a trade-off, right? I was going to ask, now going to the opposite end of the spectrum, back to the TAG itself: what is the goal? What are you hoping to accomplish? How do we invite more people to that conversation, and how can people get involved? I was going to share the link to the site in a second, but that's just a webpage. What is it you're hoping to get people to do?

Well, the biggest thing is: come to meetings. Within the TAG, we formed about a year and a half ago, and our charter was accepted into the CNCF. We have been a small grassroots team, and as time has gone on, especially with all the work being done within cloud native sustainability, we have a Cloud Native Sustainability Week event; we just started the first one last year, so I shouldn't say it happens every year yet. We just need all hands on deck in every region; we just opened up the TAG to Asia-Pacific. And we want to make sure that everyone feels they can contribute, and contributing can happen in non-coding as well as coding ways. We have a green reviews working group focused on building green best practices into your software review cycles. But there's also an extremely large arm, which is the comms working group: managing our website, getting the word out in each region, social media, just so much. The other piece is connecting to other sustainability-focused organizations, one of them being the Green Software Foundation, also within the Linux Foundation purview, and making sure there's cross-collaboration. Because again, we really do not have time on our side.
And that's why it's all hands on deck. But truly, just start by coming to a meeting and seeing what you're interested in. There is no shortage of work to be done.

So, and I should have asked this while we were still talking about the more technical questions, but one of the things we like to ask is: in the next three months, six months, some reasonable timeframe, what are each of you most excited about that's going to shift in Kepler or in the TAG itself? Looks like we just lost Cara. Huamin, we'll start with you, and I'll repeat the question for Cara when we get to her.

So we are very excited about the new offerings from Kepler: obviously better accuracy, better supportability, and better usability. Specifically, we came from a traditional data center view, measuring usage in data center and cloud environments and a generic microservice type of view, and we are expanding that into different environments, including OpenStack kinds of environments and traditional OS environments. We also plan to support more diversified GPU environments. If you are using GPUs, and most likely you are in this large language model, GPT era, you're going to have a lot of optimization happening on GPUs, and we're going to support the different GPU configurations.

Right. Yeah, I want to add something about the GPU part. We support GPU power consumption. However, there is a new trend of virtualized GPUs, something like the MIG feature. For example, NVIDIA GPUs, just as one example, can be partitioned with predefined sizes. It's not fully flexible, but it's possible to partition the GPU. I think this might become more common in the public cloud in the future, to have a slice of a GPU instead of the whole GPU, and that changes the power model. Right now we have the same problem there: trying to associate the power consumption to partitions and then to processes. It still needs some investigation, so for the next months that's something that will be part of the future of Kepler; we're investigating it. I would also say, as we were saying, we are always trying to minimize the overhead; it's relatively low, but we are always working on optimizations in Kepler. And the last thing, something I don't think we mentioned very well: the power models are specific to the hardware. Different hardware consumes energy in different ways, so we have power models for different CPU models. In the public cloud there are a lot of kinds of CPUs, so we need to train power models for them. We also envision, in the future, training more power models as a community: if someone has a different CPU, they can train a power model, contribute it, and make it public, helping people get access to power models for more hardware. We envision creating more power models and having a bigger community contributing to that.
So Cara, just to reiterate the question in case you missed it: what are you most excited to see in the next six months or so that's going to come out of the TAG or out of Kepler? We are almost out of time, so one really strong answer.

Refinement. I would say refinement around the work that we're already doing, and having a clearer direction from the TAG's perspective as to what we want to focus on as far as achieving those standards for reporting.

Gotcha. That was a very tight answer. I appreciate it.

Can I ask a technical question here? Because there's development going on elsewhere, and one of the things I've been wondering is: Kubernetes has been really expanding the capabilities of the pluggable scheduler. Is anybody actually working on tying scheduler modules in with the Kepler data, so that I can actually say, hey, I want to schedule pods based on, say, 60% minimizing power consumption and 40% performance?

Yeah, I think that is the use case. It's being developed and the technology is coming along. We have a separate project in the Kepler community called PEAKS, the Power Efficiency Aware Kubernetes Scheduler, that goes in that direction. We are using the power models for predictions: how much energy is going to be used if you schedule workloads this way on different nodes, given their characteristics. This is something we feel can generate a lot of optimization potential for the community, so hopefully we can get more contributors and evaluators to help us improve this project and make our environments more sustainable.

Well, on that note, that was a great tie-off, I thought. Thank you so much for coming. We really appreciate your time, and we really appreciate your work. I really hope this show, and all the other outreach efforts you might be making, are going to bring more people to the party and really help the community grow. I'm really excited to see it. So thank you, Josh. What's the best place for people to show up if they're showing up for the first time?

I sent a link to the sustainability TAG, which has information about how to join the meetings. As Cara was saying earlier, the best way to start is to come to a meeting, and the work will fall out of the meeting and be very obvious as you get involved in the community. You can see the CNCF calendar; we actually have a meeting tomorrow, the 31st of January, or so.

We also have community meetings for Kepler, so if someone is interested specifically in Kepler, please join those too. We also have Slack channels, so ping us and we can discuss things.

Gotcha. Cool. And on that note, let's conclude the episode. Again, thank you for coming; we really appreciate your time.

It's a pleasure. Thanks, everyone. Thank you very much. Bye-bye.
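To illustrate the kind of weighted scoring Josh describes (60% power, 40% performance), here is a minimal Go sketch of combining a power model's per-node prediction with a performance score. The nodeStats type, field names, and numbers are hypothetical; PEAKS's real inputs and scoring logic differ, and this only shows the weighting idea.

```go
package main

import "fmt"

// nodeStats is a hypothetical per-node summary used only for this example.
type nodeStats struct {
	name           string
	predictedWatts float64 // power model's estimate if the pod lands here
	perfScore      float64 // higher is better, normalized 0..1
}

// score combines the two signals with a 60/40 weighting. Power is inverted so
// that lower predicted consumption yields a higher score.
func score(n nodeStats, maxWatts float64) float64 {
	powerScore := 1 - n.predictedWatts/maxWatts // 0..1, higher = less energy
	return 0.6*powerScore + 0.4*n.perfScore
}

func main() {
	nodes := []nodeStats{
		{name: "node-a", predictedWatts: 180, perfScore: 0.9},
		{name: "node-b", predictedWatts: 120, perfScore: 0.7},
	}
	const maxWatts = 250 // normalization bound, assumed for the example
	for _, n := range nodes {
		fmt.Printf("%s score=%.2f\n", n.name, score(n, maxWatts))
	}
}
```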