Nevertheless, hi. I hope you've survived the day so far, didn't have too much beer yesterday, and can follow along at least a little bit. I know it's the last session, you're grieving and looking for the good after-party, no, the after-conference party, those were around. So let's have a look. My name is Max, Max Körbacher. By day, where I earn my bread and butter, I practically work as a cloud native advocate. I'm also founder of a consulting company, but our business, or my business, is all around open source. So when the sun goes down, I change my hat, and I have quite a little collection of them: I was three years on the Kubernetes release team, I'm co-chair of the CNCF Technical Advisory Group for Environmental Sustainability, I'm also a CNCF ambassador, organizing meetups and so on. And sometimes I'm also a Linux Foundation Europe advisory board member, depending on when we have meetings. So you see, all day long, somehow open source, Kubernetes, containers, all those beautiful things come together. But today I'm wearing my technical advisory group hat. Has anyone of you already heard about the TAGs? What they are? Okay, perfect, almost everyone. So there are multiple TAGs within the Cloud Native Computing Foundation. Each takes care of a specific aspect that carries a certain weight within the cloud native ecosystem. And there are also TAGs which are not that technically oriented, like the Technical Advisory Group for Contributor Strategy, because they think about how we can engage with more people and how we can bring them to the projects. And part of this family is the Technical Advisory Group for Environmental Sustainability, which we founded just a year ago. So it's a very young group, still in the forming and shaping process, but we're slowly getting through it.
And we help the Technical Oversight Committee to find projects within the cloud native ecosystem which help, for example, to optimize systems and applications, always a little bit with sustainability in mind. But we also help projects to understand how they can optimize their own systems with sustainability in mind. And this is what I will present a little bit today, what we're working on at the moment. Besides this topic, we have a lot of other activities. We have a landscape document where you can find a very nice overview of different projects in this field. We're slowly putting together some Kubernetes best practices: what is useful, what is not so useful, what maybe belongs in the anti-pattern corner and what is on the helpful side. We're thinking about extending the cloud native maturity model with an additional column about sustainability, to just bake it into an overall strategy, because these are topics which you need to start top down, but also bottom up; you need to place them on every kind of layer. And we also recently founded two working groups. One is the Green Reviews working group, which is practically implementing what I'm going to show you today. And we also have the working group for communications. Why? As I said, because our group is not that super technical, right? We are not running around thinking about how to extend the container runtime in the future with some kernel functions or whatsoever. We are thinking more about how to work together with the whole ecosystem, how to bring people together, how to exchange, because we have some boundaries. But before we go a little more in depth on this, we also need to take a look at why we are here today. And I've heard already that this analogy is not the very best one, but it's somehow true, because both things produce a lot of, well, CO2 or methane, depending on what you're talking about. Over the lifecycle of either a cow or a server, there's a lot of gas pushed out into the air.
So the sustainability advice on the food side is to eat fewer cows, because then you don't have to breed so many, and you would not have an overpopulation, because you simply don't need that many. It's a man-made problem. If you let cows run around freely, they would not be so overpopulated; the ecosystem would regulate itself so that they are not harmful to the environment. Simple as that. But who's regulating the servers, right? There is no server predator running around in the data center, killing and slicing and dicing some of the servers away. It's actually the other way around: we are continuously plugging in more and more servers. We want more servers. We need them in the big data centers, on the edge, in the factories, in the car, in our pocket, it doesn't matter, we have servers everywhere. And this is actually a problem, because it's a man-made problem of over-scaling and over-utilizing what we have and what we like. And this is what we need to reduce and act on. So that's why maybe a cow and a server do belong together after all. And I love this GIF, because it's fantastic. So our vision in the technical advisory group comes together from two points. On the one hand, free and open source software is widely used. When we look at the cloud native space, we have some system components which are downloaded and executed more than a billion times, and that's one out of 500 projects. A typical Kubernetes cluster has around 25 to 30 open source projects running. This is a lot. On the other hand, ICT is a global driver of greenhouse gas pollution. How big a driver? Well, construction, aviation, and ICT are on the same level of polluting the world with greenhouse gases. If you're sitting here today working on your computer, you're in the same category as the aviation industry, or as someone currently building a house.
And this is a very big number. If you shut down all computers on this planet, including the desktops, it would be similar to shutting down one of the largest economies on this planet, like Italy, for example; almost as big as Japan. You could shut down a whole country to save the same amount of greenhouse gas. So our thought was: okay, cool, we have a lot of tools running around on this planet, and at the same time we are part of the same problem. What if we can make it just a little bit less bad? Or, put the other way, be better, be a little bit greener, a little bit more efficient in what we are doing? Because on the software side, this is actually a very effective step to take, and sometimes even a very easy one. So our mission in the cloud native space is to advocate to the teams and the different projects ideas on how they can maybe be more efficient: what they have to do to understand what their software is currently doing, and how they maybe can optimize it. Just to create a little bit more transparency, just to put it into the minds of the people. Maybe not in the next release, maybe not in the release after that, but maybe in one year there will be an efficiency boost, because some of the contributors decide this is what they would love to work on. And to show that we're not just dreaming or building castles in the air: right after we founded the technical advisory group, the KEDA project, the Kubernetes event-driven autoscaler, came to us and said: hey, we really love your idea, we'll make a proposal for it, and we're going to fix that problem. Well, it took some time. They did a little step in between and implemented custom resources, which makes sense, because then you can also schedule on other resources than only carbon. And in the next step they extended it with the Carbon Aware SDK.
So now you can use KEDA, at least on Azure, for example, to schedule your applications based on the carbon footprint of your current region. Perfect. And this is why we're working on this topic and why we bring it to the community. What is also important is what we don't do. We're not building some kind of umbrella organization around the CNCF, or beyond the CNCF, for implementing compliance or standards. That's a little too broad. We're also not going to evaluate someone's infrastructure. Funny enough, we got asked a few times: hey, is it possible that some of your people come over and check it? I would love to, but as a group there is no formal structure for that. It's not our business. We are open source; we work for and with each other, not for someone to optimize their problems. And we will not focus outside the cloud-native technology. This is the most important one, because if you think about sustainability in computing, you very quickly get to topics like cooling. It's one of the biggest problems when you think about data centers, unless yours is standing somewhere in Antarctica or in the north of Finland or Norway. But even so, you need to think about how to efficiently cool your hardware, and then about constructing data centers, and again, construction is as big a problem as computing and aviation. So you'd be layering problems on problems. That's why we don't go there; it's a rabbit hole, it's just too much. We stick to the software. And if you want to find out more, we have a very lovely website, sustainability.cncf.io, and you can follow us on social media. We're currently planning a whole event series, a whole week, the Cloud Native Sustainability Week, where around the globe, in the US, in Europe, and in Asia, we're going to have little meetups and little conferences, sitting together, discussing, and maybe starting initiatives.
So this is a small but very big and ambitious project currently going on to bring the community more together on this side. So, long story short: how can we build a foundation to think a little bit greener? Our open source world, the way we build open source, is pretty much like this picture. If you take a look, you see that outside there are palms and nice trees and bushes, but inside there are also nice trees and plants and bushes. This is the ecosystem we are working in. We have our inside world where we build our own applications; we all contribute, we come together, it looks nice and fancy. But there will always be a glass wall where it hits the reality outside. And most of the time it will work; the open source projects will work, otherwise they wouldn't be there anymore. But you need to be aware of what is going on on the other side of the glass, and you also need to be aware of what you can change on your side of the glass. If your environment is changing, if you're getting hotter weather, you cannot put a cold-loving plant outside. It will not work; it will just die, it will go away. So we need to think about how we can synchronize the demands which we see in the market, which people are asking for, and bring them back to the community, but also what we as a community can do and bring to all the people outside. It's give and take. Why we see a huge demand for this: there are four or five open source foundations that focus on climate and sustainability, which is insane. Because when we started, we checked around, thinking maybe there would be one; no, five already existed, all looking at this. So there is a very, very strong drive behind it. But sometimes it's not understood within the projects how big this drive and this interest actually are. So when we were thinking about what we can do, we had a lot of questions, tons of questions.
You don't need to read them; they're just there to justify this nice GIF. But it really took us a couple of meetings and months to get to the point of what we would really love to focus on, and what we actually can focus on. What we want to achieve on the one hand is to optimize the open source software; we want to find a way to do that. But the thing is, we do not have an army of software developers running around telling you: oh, you need to use this or that algorithm to make it more efficient. That would not be helpful; we would spend decades optimizing the software that way. We want to show the carbon impact of your projects, which is more or less the easiest part, I would say, and reduce the required energy. For this, you have to understand where it comes from, and what the parts are that maybe cause your software to demand so much energy. And the most important one, maybe, is to bring the sustainability topic into every contributor's mind. Not that everyone starts running around with a green plant hat saying: oh, we're going to destroy the planet. But simply, when some project comes up and says, oh, we need to improve the performance, then someone says: hey, maybe that's something; I like the topic, I understand the project context, maybe I can go into it. Tiny changes for big impact; I think this is what always works in large-scale open source. So we boiled it down even more: raising the awareness and showing the impact, and gaining transparency. These are the two things we wanted to achieve. And we can do both at once by simply putting something like a power consumption measurement somewhere on a dashboard. Well, you need to educate, and you need to explain to people what it means. But if you already reach the point of showing that, then you have half of the way done. And then you can start thinking, analyzing, and going into it.
And the funny thing, which we're currently researching a little more on the commercial side with some customers, is how much we can actually steal from the application performance sector and just apply it to the sustainability topic. Because what we want to do is not that different. Improving the performance is one thing; reducing the amount of energy used for the same amount of performance is another. But the ideas, the thoughts behind it, are already there. So from that perspective, there may already be solutions, or thinking, which we can simply apply. The only problem is that showing the metrics behind it is a little more difficult, because I don't care about how fast a request is answered; I would like to know how much energy this application uses over the day, over the month, over the year. And while we were working on this, we thought: okay, we need some kind of framework, and we need to specify it. And then, accidentally, we stumbled over the Green Software Foundation, which said: hey people, we already have something called the Software Carbon Intensity specification. Some people love it, some people don't. The cool thing is, it is flexible enough to either look at one single piece of application, or to chain multiple applications together and look at the whole landscape, the whole pipelines. And this is what we want to have. The Software Carbon Intensity specification will not help us to measure something; it also doesn't tell us how to measure it. That's still our problem, but that's a technical problem, and technical problems can be solved. What it does tell us is how to put everything into the same kind of formula, always in the same way. And it doesn't seem to be so difficult, obviously. But there are two units in it which maybe are a little less obvious.
The one is the unit M, which is about the embodied carbon, which means everything you need in energy, and everything produced in gas, during the production of your server; it flows into your server. It's very philosophical; I mean, the carbon isn't stuck in your server, but by building it, you baked it into it. That's why it's embodied. And you have a functional unit R, which is a scaling factor. If I know R, and this leads us back to application performance, for example, then I can already act on it, because I understand for which reason my open source software jumps in its demand for energy; what is causing it. And while working together with a few open source projects, we found out that R is the most difficult thing to find. We have one project which gets preloaded with policies. The thing is, every policy has a different weight. If you use a combination of the 20 standard ones, it might be okay; but if you add just 10 self-written, very complex ones, the demand for energy can double, triple, multiply by a huge factor. So R is not that easy to find. It's not simply "the user"; it can be very complex. So what we can do with the SCI is to put it into the two metrics which we actually want to show: one value, or KPI, for the open source software, and one for the open source project. So what is the difference? Well, the software is just the packaged solution which comes out at the end of all your efforts, has a release version on it, and gets spread into the world. This single piece is very amenable to being measured. There are also some variations in it; we'll come to this in a second. But to know how much energy demand is on this one single piece is fantastic, and it's relatively easy to achieve. The second KPI, the one for the project, is incredibly hard, because you have to think about the whole software supply chain you have.
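To make those units concrete, here is a minimal sketch of the SCI formula as the Green Software Foundation specifies it, SCI = ((E × I) + M) per R. The function name and the example numbers are my own illustration, not part of the spec or any TAG tooling:

```python
def sci_score(energy_kwh: float,
              grid_intensity_g_per_kwh: float,
              embodied_g: float,
              functional_units: float) -> float:
    """Software Carbon Intensity: SCI = ((E * I) + M) per R.

    E = energy consumed by the software (kWh)
    I = carbon intensity of the grid where it runs (gCO2eq/kWh)
    M = embodied emissions amortized onto this workload (gCO2eq)
    R = functional unit: requests served, users, jobs, ...
    """
    operational_g = energy_kwh * grid_intensity_g_per_kwh  # O = E * I
    return (operational_g + embodied_g) / functional_units

# Made-up example: 1.2 kWh at 400 gCO2eq/kWh plus 50 g embodied,
# scaled per 1000 requests:
print(sci_score(1.2, 400.0, 50.0, 1000.0))  # 0.53 gCO2eq per request
```

The hard part, as the talk says, is not this arithmetic but choosing R: whether your functional unit is users, requests, or stored data changes the score completely.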
Every single contribution, every single test, every PR, every security scan, every build, every time you store the stupid container in some registry: all of this consumes energy and storage and network and whatsoever. So calculating how much energy my whole project needs plays on a totally different planet, right? Measuring the one single piece: okay, easy peasy, cool, I can do it, give me an Excel sheet and a calculator and I can calculate it somehow. Calculating the whole project: I'll see you on Mars, because that is a much longer story. And sometimes we're even stuck with suppliers on this. If you have GitHub Actions pipelines, which are often executed for some of the projects, you do not have deep insights into them. So we have dependencies on some of the providers. And also on cloud providers, which often provide the underlying infrastructure for building all this stuff: we get numbers, but they are not that good. They're usually based on information from the past, from last year. Not from today, not from the sunny weather we have today; they don't care about that. They just take the average number of last year, and that's the baseline we have for power consumption. What we also found out is that the SCI score may vary a little from release to release. Maybe it goes up because, whatever, they put a big feature into the new release; then they panic a little, oh no, we need to take a look at the efficiency, and then it can drop. However, it gets problematic when you consider that software is never just hanging out, right? The software is not doing Netflix and chill while on the side processing some requests. That's not how it happens. Usually the software is somewhere between idling and waiting for something to happen. So you also need to look at the idle scores, which are way lower.
And there you sometimes see totally different behavior from the software than when it is under heavy load and very busy. This is also what we found out: usually when software is under load, it behaves more or less the same, but when it idles, it randomly goes crazy, starts jumping around for a few seconds, then goes back, sits in its corner, and does nothing. One of our colleagues, Nikki, from the working group, did a presentation for one of the other open source conferences, I think it was even OSS North America, comparing GitOps operators while idling and how much energy they use. It was quite an interesting comparison, because on the one hand you saw one operator which was just flat; it was practically dead, it needed some energy while waiting, but it was dead. The other one was continuously spiking: waiting, then even dropping in energy consumption, and spiking up again. Sometimes it even went to a long plateau, waiting for some minutes, and then dropped again. Weird behavior. Why does it do that? Both should do the same thing. So we see a difference there, and that leads us to the thought: okay, we cannot just simply measure the application. That would maybe give us the idle picture, but it would not give us the other numbers. So the idea is that we will measure an application in two or three states: it needs to be under load, load tested, and it should idle. And then we find, maybe together with the projects, the Pareto optimum: what is the peak performance for your application where it feels comfortable, what is the number of requests you usually expect, the configuration you expect, and so on. With these three values, we can go back to the project, document them, and give them back to the end user to understand how good or bad the solution or software is.
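The multi-state measurement idea can be sketched like this. The state names, sample values, and hour split are illustrative assumptions of mine, not part of any Green Reviews tooling; the point is only that idle, typical, and peak draw are measured separately and then weighted:

```python
from statistics import mean

def state_profile(samples_w: dict[str, list[float]]) -> dict[str, float]:
    """Average power draw (watts) per measured load state."""
    return {state: mean(watts) for state, watts in samples_w.items()}

def daily_energy_kwh(profile_w: dict[str, float],
                     hours_per_state: dict[str, float]) -> float:
    """Weight each state's draw by the hours per day the app spends in it."""
    return sum(profile_w[s] * hours_per_state[s] for s in profile_w) / 1000.0

# Made-up example: an operator that idles 16 h, serves typical load 7 h,
# and peaks 1 h per day:
profile = state_profile({
    "idle":    [4.8, 5.2, 5.0],   # watts sampled while idling
    "typical": [19.0, 21.0],      # watts under expected load
    "peak":    [60.0],            # watts under load test
})
print(daily_energy_kwh(profile, {"idle": 16, "typical": 7, "peak": 1}))
# ≈ 0.28 kWh/day
```

This is also why measuring only the idle state is misleading: with numbers like these, idle draw is a tenth of the peak draw but still dominates the day.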
But as I said, if we look at the KPI for the open source projects, then it's not just this simple graph, because every single piece in it requires its own optimization. And we miss a few things, like which infrastructure it is running on. It also depends on things like: what if you go to infrastructure which is theoretically super green, but does everything through carbon offsetting? Then they're actually not green; they just declare themselves green. It's an ethical question whether you support that or not. For some people, it's really a question. We have one customer who says: I don't care about carbon offsets, that's like lying to yourself. It's almost like corrupting the system: I drive too fast, the officer comes to give me a ticket, and I say, here's 100 euros and you didn't see me, right? So it can even become an ethical question. So this piece is very, very complicated to get to. Some things you can't even get at. Like, how do you measure the energy used on this machine just to write the function you're going to submit to the open source software? I have no idea. There are a few tools which can measure, but I guess you maybe have some Slack open, maybe you listen to some music, maybe have a video call on the side. Some of this you can extract, but this question is very, very difficult. So, long story short, we focus first on the software, on the one single piece of software, because everything else just drives us crazy, right? For this, as I said, we use the SCI, the Software Carbon Intensity specification, a very lovely, nice piece of framework. We know that it may also become an ISO standard in the future; the Green Software Foundation is currently working on this. That means it's not just some framework somebody invented somewhere, sitting bored in the cellar and saying: oh, that's an awesome, fantastic idea.
It can even become quite relevant for some organizations in the future, because they could ISO-certify their software and say: hey, this is measured in that way, and now we can act on it, and everyone is happy. The ISO standard itself will not help, but it may influence other standards which build on it. Like in Germany, we have the Blue Angel certification, not well known everywhere else, but they actually do quite a good job of saying: okay, if you run IT, you need to measure a few things, and then you need to find out how to optimize them, and there are control mechanisms behind it. Without those control mechanisms, you don't know how much water your data center wastes; you should know how much money you waste on inefficient cooling because your server racks are all open and not closed, and so on. So they put some good ideas into it, and all of this development can fit together. And, as I said, on the other hand, when we wanted to implement just some simple measurements, we came to the point that there is a very large complexity around it. If you're working in the cloud native ecosystem, we almost always talk about containers on very flexible infrastructure, getting scheduled maybe by external factors, maybe getting downsized by other factors. We have tons of different tooling, and not every tool is able to tell you even halfway precisely how much energy is currently used by this piece of software. And then, from data center to data center, you have different impacts. All of this together can be a very, very large factor. So we came to the idea that we have to simplify it as much as possible on the one hand, and in the first step provide a very clean, very clear setup which we can always reproduce, and which will always be the same for all the open source projects in the CNCF ecosystem.
Maybe just for the beginning, maybe forever; we don't know yet, we're currently working on that. But if you have this same baseline, then we can really start comparing all the software to each other. If your software is tested and executed in different environments, which can be different data centers, different cloud providers, and so on and so forth, it's practically impossible to really compare the results, because the whole chain around it is very dynamic. So what is the plan? On the one side we have the project itself. In the project's own repository, you define how to deploy your application and how to load test it. There's even a nice little load testing tool available in the cloud native ecosystem where you can add this through one or two YAML files, and you're good to go. A perfect scenario for us. And you will get a file in your repository where you define how the SCI is specified. Because again, the functional unit, the scaling factor, is something which everyone needs to know for themselves. It's nothing we can really help with, because for that we would have to study the application, put it into a real field environment, and see: is it the number of users, or the number of requests, or the amount of data that is stored? The reality, which we saw with a couple of clients we have worked with on this commercially, is that it's very much a mix of everything. It's not one single thing you can pinpoint. Then, within our repository, we have a simple pipeline running which can be triggered to pull the deployment specification and the testing specification and throw them over to our own test infrastructure, where we need, for example, some bare metal servers. This has to do with the capabilities we require to measure the power consumption accurately.
Tools like Kepler, which are very good at measuring the power consumption of a single container or single service running in a Kubernetes cluster, need deep access to the server. If you want to run this in a cloud provider, it's kind of difficult; you have problems with it, and it's sometimes not that precise. You also have other solutions like Scaphandre, which runs more on VMs, so for non-containerized environments. The good message is: it doesn't matter, because from both we get very similar information, very similar metrics, which we can then push over to the DevStats server from the CNCF. DevStats is a completely different project, but it's an observability stack already available to show all kinds of other metrics. So with every release, for example, we can push the metrics of the tested environment over, and say: per release, this is your current SCI score. This configuration then needs to happen on the DevStats repository, but overall it's quite lean, quite simple. So this way, a project shouldn't have a huge problem getting things up and running at first. Fine-tuning comes later, but the first step is: hey, okay, this is how I deploy, this is how I test, this is how I get my SCI tested in the same clean environment, push the stats over, get my information back for my release. And then you have, for every release, a comparable KPI or value where you can see whether your software is going in the right direction or maybe the wrong one. Please just do me one favor: don't rewrite your whole software in C because some study eight years ago said C is the most efficient language. It will be the most efficient for sure, but it doesn't make sense to rewrite the whole software. Somehow it always pops up in these talks: oh, you could think about changing your whole software. Yeah, good luck with that. So how can this look?
This is now a little demo from Kepler. It recalculates and gives you some idea about the current energy mix and what its key source is, which you see at the top. This is a best-guess estimation. Kepler is a quite complicated project, because it not only measures; it also has a machine learning model behind it to understand how the energy mix is behaving, and so on. And then you can drill down per namespace or per application, whatever you would like to see. Here, for example, you see the total power consumption, and on the right side the total power consumption per day. So it's very simple information. What is missing for us here is the true CO2 footprint of the application. That's where we're currently headed. The big problem: there are very, very good resources for this, on the one hand the WattTime API, and on the other hand Electricity Maps. But you need a credit card for them. And often you don't have a credit card as an open source project, where a bunch of people sit together in the evening thinking about how to do some cool open source stuff. But we will get there, I believe. Because when we look at the Green Software Foundation, they have already collaborated with WattTime for quite a long time; they do hackathons together. So I'm pretty sure that hopefully, in the future, those APIs will open up to the world. Or at least something could come out of it that is very helpful for us. And this is a scream for help, or an idea in case someone from the Electricity Maps or WattTime people sees it: hey, maybe we can just mirror your servers' data. Then you don't have to deal with all the requests; we can host it on our open source infrastructure, which is supported and paid for by some other companies, and we handle all the open source requests. That way we don't do economic damage to these groups. Just copying some ideas.
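The missing step the demo points at, going from measured energy to a CO2 footprint, is just the measured energy multiplied by the grid carbon intensity you would fetch from an API like WattTime or Electricity Maps. A minimal sketch with made-up numbers (the function name and example values are mine, not Kepler's API):

```python
def joules_to_gco2(joules: float, grid_intensity_g_per_kwh: float) -> float:
    """Convert measured energy to operational emissions.

    1 kWh = 3,600,000 J; intensity is in gCO2eq per kWh, as carbon
    data APIs typically report it.
    """
    kwh = joules / 3_600_000.0
    return kwh * grid_intensity_g_per_kwh

# Made-up example: a container that consumed 7.2 MJ over a day,
# running in a region currently at 300 gCO2eq/kWh:
print(joules_to_gco2(7_200_000, 300.0))  # 600.0 gCO2eq
```

The arithmetic is trivial; the hard part the talk describes is getting a current, per-region intensity value without a paid API key.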
Nevertheless, this is roughly what we want to achieve, because then people can start drilling down and taking a look at the application. The challenges we primarily see here: we have a kind of paradox. The more we want to measure, the more we want to check out the applications, the more CO2 footprint we produce ourselves, which is a little stupid. We hope we can reduce this. So we somehow have to measure ourselves as well, but then it becomes a closed circle; we need to find a good way to handle that. Carbon and energy mix data, as I said already: this means the WattTime API or Electricity Maps API where we can get this information. We most likely cannot scale with the demand, because we have a few hundred open source projects within the CNCF landscape that are officially recognized CNCF projects, and there are another five, six, seven hundred open source projects which are cloud native related. Maybe they want to use it too, but it will be impossible to handle all of them. So we need to be able to reproduce our setup, give them a blueprint for it, and say: okay, this is how you can run it by yourself. It will probably end up like this anyhow, but we should keep in mind that we cannot always have our own infrastructure running here. And a very recent problem we found out about: some contributors to open source projects cannot contribute to anything other than the repositories they are allowed to work in. It's a legal restriction, simple as that. A full-time contractor for a large company may be restricted to, say, two repositories where they can contribute publicly; everything else is forbidden. That's why we also come with this structure saying: okay, testing and deployment must be written in your repository, because otherwise we will have a problem, because then we need to have meetings and we need to write the stuff ourselves, and that gets complicated. Yeah, further considerations.
We were thinking about how we can document this, how we can document the specification for each of the projects, so that if you want to look into it, you do not have to go to some documentation pages or read through them or try to understand how the tests are running. And there's also a very cool recent project going on at the GSF called the Impact Engine Framework. It's actually a very large one, but one minor piece of it is a specification format, an IEF YAML, where you write the specification for the SCI. And I'll just give you a short example of this. You can see on the left-hand side you have a simple definition of what it should be, where it is running, and so on and so forth. Then you come to the configuration part, where you have the pipeline and how all these metrics should be calculated. So you could add different calculators here if you want; you can write your own calculator if you want. And then in the middle part, you can see that you start putting things together, because as I told you, typically a piece of software is not just one simple application running. If you nowadays throw some Helm chart against a Kubernetes cluster, sometimes 20 containers pop up; whatever they are doing, they magically appear. In this specification you can tell a little bit more about the relations and put them all back together, so that you know: okay, I have to measure this piece for the back end, this piece for the front end, and there's maybe even an edge node. And in this example, you can see that we have some parts running in Azure, some in AWS, and some in GCP. What the IEF has in the background is a little interpreter, a little engine, which recalculates this input and makes the whole thing a little bit friendlier. With this I have an output and directly also my data format, which I can use. With this I can live, with this I can act, because I can very simply compare the numbers with each other.
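The idea of composing back end, front end, and edge into one comparable score can be sketched in plain code. This is not the IEF YAML format itself; it is an illustrative Python model of what such a manifest expresses, using the GSF SCI formula (operational carbon E*I plus embodied carbon M, divided by a functional unit R). All component names and numbers below are made-up assumptions.

```python
"""Sketch: composing per-component carbon into one SCI-style score.

SCI = (E * I + M) / R per the Green Software Foundation SCI spec.
Components and numbers are illustrative, not from the talk.
"""
from dataclasses import dataclass


@dataclass
class Component:
    name: str
    energy_kwh: float         # E: energy consumed over the window
    grid_gco2_per_kwh: float  # I: carbon intensity where it runs
    embodied_gco2: float      # M: embodied-carbon share for the window

    def total_gco2(self) -> float:
        """Operational carbon (E * I) plus the embodied share (M)."""
        return self.energy_kwh * self.grid_gco2_per_kwh + self.embodied_gco2


def sci(components: list[Component], functional_units: float) -> float:
    """SCI score: grams CO2e per functional unit (R), e.g. per request."""
    return sum(c.total_gco2() for c in components) / functional_units


# A multi-cloud application like the talk's example: back end, front end, edge.
app = [
    Component("backend (Azure)", energy_kwh=2.0, grid_gco2_per_kwh=350, embodied_gco2=120),
    Component("frontend (AWS)", energy_kwh=0.5, grid_gco2_per_kwh=400, embodied_gco2=40),
    Component("edge (GCP)", energy_kwh=0.1, grid_gco2_per_kwh=50, embodied_gco2=10),
]

print(f"SCI: {sci(app, functional_units=10_000):.4f} gCO2e per request")
```

The point of normalizing by a functional unit is exactly what the talk describes: it makes the numbers of different components, and of different applications, simply comparable with each other.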
I can put in some graphics; you can get it as a CSV if you want to pull it into a BI analytics tool or whatever, but you get a very simple documentation of your SCI score, and then you can work on it. So overall, all those activities which we're currently shaping and implementing are about raising transparency and making people in the open source world aware that, hey, there is an aspect beyond security, beyond the performance of an application, and beyond maybe just pure functionality: there's also a kind of, well, software responsibility behind it to consider in your architecture, to make things a little bit more sustainable. So for the next steps, we're finalizing this implementation. We have the first projects which really want to implement it in their pipeline, a few security tools, a few application delivery tools, which are actually already quite interesting to work together with. And then we need more contributors, because at the moment we have a very engaged little working group working on this, but we just need more people for it, because sooner or later it's going to scale a bit too drastically. And then we need to plan how we can support and onboard all the other projects, and ensure that they can also measure by themselves in the future and raise awareness by themselves. So thank you very much, and if you have questions, I'm happy to answer. Perfect. [Audience question:] Is there a risk that, by getting tied up too much in the project you described, you actually get distracted from the overall goal, which I think should include a sort of broader advocacy? You know, by getting caught up in detail, like, whoa, the detail is here, when really we'd like to be at the keynotes talking to everybody, saying: just think about it. So the good thing is that on the keynote side, the group is taking off.
So I think at all the major events, at least at KubeCon, the technical advisory groups, the TAGs, were always mentioned within the keynotes, even by some of the companies who are heavily funding the big events. So in this regard, we always have someone standing on the stage saying: hey, people, there's something you should take a look at, here you can find them, they are running around out there, and so on and so forth. At the moment, yes, this is taking a lot of effort. On the other hand, we have the same very engaged group trying to build a community. As I said, in October now we have a full week; I think we have planned 30 events around the whole globe, in nearly every major city. The only continents we skip, I think, are Africa and Australia, but every other continent has at least one meetup going on around this overall topic. So that's also why we have these two working groups: raising awareness, communication, working together with other open source foundations. We would not need to come up with our own specification for this stuff; it would be cool to have our own little whatever, but if it's already there, why do it differently? So M is the most difficult one, because it's the embodied carbon. You either need to take the information you get from your server provider, and you need to ask what they calculated, because sometimes they show you the embodied carbon for just the hardware, and sometimes they show you the embodied carbon plus the electricity it will consume over its whole life cycle, and tell you that's it. Then you have the foundation for the M part. Then you need to know how long your hardware is running: three years, four years, five years, six years.
And from there you need to break it down: you need to understand how much of the server's overall capacity your software is going to take, and that percentage you practically take from the embodied carbon for every day, minute, whatever. What is your baseline? So there are a lot of mathematical calculations behind it. The good thing is there are tons of tables and open data sets available where you can really look up: okay, this is my server, this is its embodied carbon. And there you also see the power consumption of the server. I recently had a discussion, I'm thinking of a German article actually, where they said the embodied carbon is not as harmful as everyone claims. And this is in theory true: if you just compare the numbers and assume that the server runs continuously for four years at 100% of its maximum power consumption, then you get a one-to-three ratio between the two, where one is the embodied carbon and three is the energy consumption. But in my opinion that's a very big lie, because first of all, your server is not running at 100% maximum energy for four years straight. That's not what's going to happen. Most servers, if they're in a data center and not at a cloud provider, are utilized maybe somewhere up to 90-something percent in the ideal case. But realistically, you have servers between 30 and maybe 80% utilized, right? Some bigger companies who have a good amount of hyperscale in between, or a large-scale Kubernetes infrastructure which can balance some workloads, will have a higher utilization. But if you look at the real, classical application, you will almost never come near a very high utilization of a server. So, and then there's another calculation from a sustainability group from France, Boavizta, if I pronounce it correctly, I never know.
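The breakdown just described, amortizing the hardware's embodied carbon over its lifetime and over your share of its capacity, follows the embodied-carbon term of the GSF SCI formula. Here is a minimal sketch of that arithmetic; the server figures in the example are made-up assumptions, the kind of numbers you would look up in the open data sets mentioned above.

```python
"""Sketch of the embodied-carbon (M) split: amortize the server's total
embodied carbon over its expected lifetime and over the slice of its
capacity your software reserves. Example numbers are illustrative."""


def embodied_share(
    total_embodied_kgco2: float,  # embodied carbon of the whole server
    time_reserved_h: float,       # how long your software ran on it
    expected_life_h: float,       # e.g. 4 years of server lifetime
    vcpus_reserved: int,          # the slice of the server you take
    vcpus_total: int,             # total capacity of the server
) -> float:
    """Your software's share of the hardware's embodied carbon, in kgCO2e."""
    time_share = time_reserved_h / expected_life_h
    capacity_share = vcpus_reserved / vcpus_total
    return total_embodied_kgco2 * time_share * capacity_share


# Example: a 1200 kgCO2e server with a 4-year life, where your workload
# reserves 4 of 64 vCPUs and runs for one day.
four_years_h = 4 * 365 * 24
m = embodied_share(1200, 24, four_years_h, 4, 64)
print(f"{m:.4f} kgCO2e embodied share for one day")
```

Note that if you reserve the whole machine for its whole lifetime, the share is the full embodied carbon; everything else scales linearly down from there, which is exactly why the lifetime and utilization assumptions the speaker mentions matter so much.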
And they did the same calculation and said: wait, the way you calculate this also depends on the location of the server. If you put the server in Sweden, then your overall footprint is very, very small. If you put the server in Germany, then the footprint of the electricity is large, because Germany currently runs practically on oil and gas, right? So it always makes a difference in which location you put your server. And this makes it overall very complicated to get to it and boil it down, because you always have these moving parts in it. You're welcome. Any further questions? Doesn't sound like it. Then thank you very much for being here, and I hope you enjoy the next days.