Well, thanks everyone for coming. I'm Cliff Grossner, and I'm the Vice President responsible for market intelligence for the Open Compute Project Foundation. Some of you may be wondering what the OCP is doing at KubeCon, because to the outside world we're pretty much known for hardware. That's not actually totally true, and we just announced a new strategy at the OCP around hardware-software co-design. You know, five years ago, many people said, we don't care about the infrastructure you've got, it's all the same, give me one server, give me ten, give me a million, doesn't matter. But something happened about five years ago: AI and ML took off, and all of a sudden we have all these diversified workloads that never existed before. Vendors like Intel and NVIDIA and many others started building special-purpose silicon. In fact, we have a project that's been running for three years now at the OCP looking at the idea that we can build very large integrated circuits by stitching together small pieces of silicon, called chiplets. And what that means for software developers is a rethink about the infrastructure they're going to run code on, because there's a certain amount of code now that needs to be hardware-aware. You can see in the green and lighter green, those layers need to be hardware-aware to do their job properly. And then, of course, if you look at something like Kubernetes, which would be in the orchestration layer, it's still pretty hardware-independent. But we've started to have discussions now about updating the scheduler for Kubernetes.
We now have algorithms in it that look at the current power footprint of the underlying infrastructure and include that in the decision making. So when we approached CNCF about six months ago and said we'd like to be here, the first thing Chris said to me was, okay, what do you want to talk about? It took us a little while, but we thought that since we have quite a lot of work going on in sustainability at the OCP, it would be a good match to try to build awareness around the interaction between software and hardware. Maybe at some point in the future we start talking about defining some standardized APIs where software could probe the physical infrastructure it's running on in the data center. Is that a 600-watt GPU card in that server? Do I want to optimize for that versus something else? And it doesn't stop there. So with that, I said, okay, let's assemble a team of people for here today that mixes hardware and software people. A couple of things about the Open Compute Project Foundation: we were launched 10 years ago by Facebook, and at that time there were five members of OCP. Today we have 300 members, so not quite the size of the CNCF, but these are vendors, these are product developers, these are the hyperscalers. Our mission is to take the innovations coming from the hyperscale data centers, because they have to compute at scale, the same problems you work on as software developers running at scale, and make those innovations available, filtered into everyday, general products. That is us in a nutshell. And today we have over 5,000 active engineers on our mailing lists working in our projects, so not the same size as the Kubernetes community, but for the hardware world that's quite big.
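The power-aware scheduling decision described here can be sketched in a few lines. To be clear, this is purely illustrative: the node fields, the 50/50 weighting, and the scoring rule are assumptions made up for the example, not an existing Kubernetes API.

```python
# Illustrative sketch only: fold node power telemetry into a placement
# decision. Field names and weighting are invented, not a real scheduler API.

def score_node(node, cpu_request, power_weight=0.5):
    """Return a placement score in [0, 1], or None if the node can't fit."""
    free_cpu = node["cpu_capacity"] - node["cpu_used"]
    if free_cpu < cpu_request:
        return None                                       # filtered out
    fit = free_cpu / node["cpu_capacity"]                 # more headroom is better
    draw = node["power_watts"] / node["power_cap_watts"]  # lower draw is better
    return (1 - power_weight) * fit + power_weight * (1 - draw)

def pick_node(nodes, cpu_request):
    """Choose the highest-scoring node that can fit the request."""
    scored = [(score_node(n, cpu_request), n["name"]) for n in nodes]
    scored = [s for s in scored if s[0] is not None]
    return max(scored)[1] if scored else None
```

With two otherwise identical nodes, the one sitting near its power cap loses the tie, which is the kind of behavior the scheduler discussions aim for.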
And so, I'm going to let the panel members introduce themselves, and I will tell you that each of these panel members works regularly with the topics we're going to speak to. So why don't we kick it off and just go this way. Hi, my name is Kate Mulholl. I work for Intel. I'm a senior cloud engineering manager, and I have a team with two focus areas. One is on the networking side, so that's really about efficient packet processing, and the second team, and a few of them are here actually in the audience, is on the resource management side. Our focus there is around optimal workload scheduling and, really, how we can provide energy savings. Okay, thanks. Do you want to introduce yourself? My name is Jaime Comella, and I work for Cloud&Heat Technologies. This is a German company offering cloud services, liquid-cooled solutions, and waste heat reuse solutions for data centers. So we are actually in both worlds, we have a foot in each of them, and this is important for actually knowing what's behind your code, guys. If we could maybe rename this panel, we could call it "What's behind your code". Besides this, I'm active at the OCP in the waste heat reuse work stream, and thank you very much for the invitation to be here. Hi, I'm Dinesh Majrekar, Director of Innovation at Civo. We're a cloud-native hosting provider, and we're running OCP kit in production in three data centers around the world. We've got presence in New York, London, and Frankfurt, and we've been running this for two years now, providing Kubernetes clusters for tenants to use. Yeah, I'm Marcel Fest. I'm from DT, and we use the ONIE framework and SONiC in our network stack, we use open hardware based on the Accton blueprint, and we run that currently in our production environment as the network fabric. So we've got a pretty well-designed panel. I'm pretty happy about that.
I do have one piece of housekeeping before we kick it off: we have a little contest over there. If you put your business card in, or fill out that piece of paper with name and email, you can get a little present for being here today. Again, I appreciate everyone being here at 4:30 in the afternoon at the end of the second day of the conference. So with that, let's kick off the discussion. The first question the panel told me they'd like to talk about is really: what is green software? What could make software green? I think Marcel said he would kick that one off. Yeah. So green software for us would be treating everything in our data centers, because we control the full stack: what our cooling system consumes, what our power mix is, and also how we build our racks and what we do with our network stack. We control the whole thing. So we can also do things like automatically have software turning off racks which aren't needed, or just use the servers we really need, and all the others can be shut off. That's where we see green software in the Kubernetes world: taking care of that as an operator, shutting down things we don't need, reducing cycles, and using purpose-built hardware for specific use cases like data processing and so on. And I think, Kate, you had something you wanted to add. Yeah. So when we think about green software, the history has been, and we talked a little bit about it at the keynote, this whole idea that when you're coding, you're not thinking about the amount of compute you have. It's just unlimited computing, and the focus has really been on performance, right?
So you're just trying to get the most performance out of the software, the speed, the throughput, and that has really been the mentality for a lot of people. Where I think we need to move, if we're thinking about green software, is instead of just asking what gives maximum performance, asking what is the right performance, the right performance from a sustainability point of view. And that is slightly different, right, when you start thinking that way. So for example, if you were running a workload that needed high computational power, say for developing a new vaccine, you might think about where you'd locate it, because it's going to need a lot of power. You might decide, okay, I'll put it in a data center next to a solar farm or something. So it's really thinking about when is the right time to run it, and what is the right duration, because some of these workloads just run forever, because we haven't been putting sustainability at the forefront of everything we've been doing. So I do think there is a paradigm change required around green software in terms of doing the right thing, the right performance. And at Intel, very interestingly, just last month, this is where AI really starts coming into play: we acquired a company called Granulate, which makes an intelligent optimization platform. It really allows you to get more out of the hardware; it can reduce your CPU utilization by observing the applications and how they're performing. So I do think that's interesting when you're thinking about the right performance and the right sustainability, and then there's this AI dimension to it too.
Yeah, I think the topic of scheduling is going to run through this entire panel because it's really important, and Kubernetes really helps with scheduling. The idea that we are designing for failure at a software level from the ground up really allows workloads to be moved around a cluster or a data center or a site, knowing at a software level that everything will recover. I think we've been afraid at an operational level, which is why we've kept workloads running and running and running, because we don't want to get paged overnight that something's gone down, gone offline, whereas Kubernetes will let it come back online. We've got things like chaos engineering, which gives us confidence to shut things down and move workloads around. If I can add something to that, yeah. Well, how our company was created, it was based on the decentralized cloud. We started by putting racks in the cellars of private houses. More decentralized than that is almost impossible. So we have to move workloads from A to B to C; at the base of our business is scheduling these workloads, moving them depending on different variables. One of them is, for example, power cost, because we do waste heat reuse. Another one is the need for heating. If we have the possibility of moving a workload to Helsinki, then we'll do that, because Helsinki needs heating a long time of the year, and so on. So there is software called Krake, which means octopus in German because it has a lot of legs, and it can move workloads from site to site, mostly inside a site, but also across sites. That topic of waste heat is also really interesting: we always put compute in data centers and then pay power costs to cool it. If we could find other places to keep hardware, could we use it to heat water that is then used in a community, or something like that?
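The multi-variable placement Cloud&Heat describes, weighing power price against local demand for the waste heat, might look roughly like this. This is a guess at the shape of the problem, not Krake's actual algorithm; all the site fields and prices below are invented for illustration.

```python
# NOT Krake's real algorithm: a sketch of choosing a site by net energy
# cost when waste heat can be sold. All field names and prices invented.

def site_cost(site, kwh_needed):
    """Net cost in EUR: electricity bought minus waste heat sold."""
    energy_cost = kwh_needed * site["power_eur_per_kwh"]
    # Only heat that someone nearby actually wants has value (Helsinki
    # needs heating most of the year; Madrid mostly doesn't).
    usable_heat = min(kwh_needed, site["heat_demand_kwh"])
    return energy_cost - usable_heat * site["heat_eur_per_kwh"]

def choose_site(sites, kwh_needed):
    """Pick the site with the lowest net cost for this workload."""
    return min(sites, key=lambda s: site_cost(s, kwh_needed))["name"]
```

With equal power prices, the site that can sell its waste heat into a district heating network wins, which matches the Helsinki example from the panel.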
So it's not just where the software is running and where it is in the data center, but understanding where the data center is located and what else we can do with the energy that we're using. Yeah, that's a really good point. There's a whole piece around immersion cooling that Intel has been working on very closely over the last year with a company called Submer, and a lot of that is for the next generation of data centers. It's precision immersion cooling they're working on, and it really is about making sure the energy is reused: the water is reused, you can generate electricity and give that back to communities, so the heat gets used in a second round rather than being wasted. So it's not just a waste of resources. There's one misconception that maybe I want to ask the panel to address. I still hear in the corridors, you know, "if I try to be sustainable, I'm going to have to give up on something, I'm going to lose performance somewhere." I'm wondering if you can talk about the direction you're going in. I can tell you that in the Future Technologies Initiative at OCP, we're actually funding some proofs of concept around heat reuse, and some of the companies on this panel are also quite active in our cooling and immersion cooling programs. So could you talk a little bit about it: do you have to lose? Not really. Actually, you can sometimes improve performance, because with liquid cooling you can push the hardware further against its thermal boundaries, something you cannot reach with classical air cooling, just because of the physical constraints of air. Water has something like 3,500 times the heat capacity of air per unit volume, just because of its density and the specific heat of the water itself.
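The 3,500x figure is easy to sanity-check with textbook values for volumetric heat capacity near room temperature (water: about 1000 kg/m3 at about 4186 J/kg.K; air: about 1.2 kg/m3 at about 1005 J/kg.K):

```python
# Sanity check on the ~3,500x claim, per unit volume, near room temperature.
water_j_per_m3_k = 1000 * 4186   # density (kg/m^3) * specific heat (J/kg.K)
air_j_per_m3_k = 1.2 * 1005
ratio = water_j_per_m3_k / air_j_per_m3_k   # roughly 3,500
```

So the figure quoted on stage is about right: the same volume of water absorbs roughly 3,500 times as much heat per degree as air.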
So actually, bringing liquid into the equation only pushes in the performance direction, in my opinion. I don't know, what do you believe, guys? So, on our platform, I think we have a lot of potential to save energy and also reduce the consumption our whole company has. I'd also like to see tools for developers that can show, hey, that code line here is burning a lot of CPU cycles because it's just polling, and then give you advice on how to fix that and make it more sustainable, directly at the code level, before compiling and running it in a production environment. That's another angle on it. Do we have a tool like that? For us, that would be a good tool to have, but no, we don't have a tool like that today. We can only monitor on our platform that we now consume, say, 20 kilowatts more than before. We currently don't have the insight to see directly that an app got updated and now consumes 50 kilowatts more than before. It would be nice to see that in the future. Not going too much into the hardware for a software conference, but one of the reasons we went with OCP is that it's not just about how much power you use, it's about where that power is used. Even things like having a single power shelf doing the conversion and distributing DC to all of the servers: it's about using the energy we're consuming efficiently, rather than doing that AC-DC conversion in every server in the rack. If we combine it, we get efficient use. And I guess that extends to other areas as well, like the network stack and more around the data center. That actually leads into the next question I wanted to talk about: why open hardware could make a difference.
Part of it is that the open hardware we design is designed to run at scale, and at scale you can't afford to lose even a little bit of energy anywhere. You've got to design it so that everything gets consumed properly. So when you look at open hardware that's designed to run at scale, there's already efficiency built in that doesn't get built into other infrastructure. The other issue is that in order to hook into control systems, the firmware in the devices and the lower layers of software that I showed earlier need to be open, so they can be modified to work with the kinds of more efficient tools you would be using on the software side. And that's going to have to happen by community. It's not going to happen by some vendor deciding to do it because they think it's the right thing and socially responsible. So part of the ask I have of everybody here is to take a little time to explore. Part of our role, at least mine here today, is to cross-pollinate between people like yourselves and the people coming to the OCP on a regular basis, to see how we can make a difference. Sorry, I wanted to interject that. I think, Jaime, you wanted to talk a little bit about the data center facility and the factors going into that. Yeah, okay, thanks for that. Actually, I don't know what your backgrounds are, guys; you're mostly developers, I'm not sure, but most probably. So you write your code. The first layer is the hardware itself, so open hardware and so on, liquid-cooled; I mean not only immersion but also cold plate, there are different technologies more or less established in the market. And the next layer is the facility itself, and all of your code most probably goes to a data center.
Data centers are mostly huge places; sometimes 10 megawatts of power are concentrated in a single place. 10 megawatts is equivalent to the power consumption of around 20,000 households, and it takes only about 10,000 square meters, while 20,000 households take quite a bit more than that. All this power coming in is creating value, it's moving bits at the end of the day, but it's also creating waste heat; essentially 100% of it ends up as waste heat. So this is garbage you have to pay for: you have to invest a lot of energy to remove this energy. Instead of doing that, if you could put that heat into a district heating network, or into some industrial facility that needs heat, it's a huge lever, and you can actually monetize that waste. Again, numbers: I'm from Spain, so I know the Spanish numbers. That 10 megawatts equals the electrical consumption of around 20,000 households, but the heating consumption of around 10,000 households. So this waste heat is equivalent to heating 10,000 households in Spain; in Helsinki, the heating need is much higher. That's why in Finland they're moving in the right direction, integrating huge data centers, which are effectively heating plants, into district heating. So if you guys have some possibility of choosing where to put your code, where to put your stuff, you're actually having an effect, putting a little stone on this sustainability mountain we're all trying to build together. Most probably you guys recycle and do all that stuff, but maybe you don't always think about what your code is producing behind the scenes. And I know the next point is also about the facility, so you probably have some perspectives on that.
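The speaker's numbers hang together on a back-of-the-envelope basis. The per-household figures below are assumptions chosen to match the claims made on stage, not official statistics:

```python
# Back-of-the-envelope check on the speaker's figures. Per-household
# values are illustrative assumptions, not statistics.
dc_power_w = 10e6                       # a 10 MW facility
avg_household_draw_w = 500              # assumed average Spanish electrical draw
households_powered = dc_power_w / avg_household_draw_w          # ~20,000

# Essentially all electrical input leaves the building as heat.
waste_heat_kwh_per_year = dc_power_w / 1000 * 8760              # ~87.6 GWh
household_heating_kwh_per_year = 8760   # assumed Spanish heating demand
households_heated = waste_heat_kwh_per_year / household_heating_kwh_per_year  # ~10,000
```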
I was just going to add that as developers, in maybe some small or slightly larger companies, you've probably got a back office somewhere running hardware for your development and staging environments. You think it's really handy to have that kit in a room next to you, but you've got to remember you're delivering power to that room and cooling to that room, whereas the efficiencies you get from moving it to a data center are really massive from an energy point of view. So not quite software, but it's an impact we can all have, even just by turning stuff off over the weekend. It's stuff we've been told so many times, but I know I've got kit that I just leave on over the weekend in case I want to use it, when I should turn it off. Yeah, so let me jump in there. We mentioned there are quite a few software initiatives we're driving. One is intelligent workload placement: the telemetry-aware scheduling, the GPU-aware scheduling; we have two engineers who work on it in the room. That really makes sure you're putting your workload in the right place; it's like an intelligence layer on top of the native Kubernetes scheduler. So instead of running a memory-hungry workload on a node that has only a little memory left, you run it on one with headroom. It's that kind of smart decision making. I mean, we're in the early stages, and we got lots of feedback from some of you guys the other day, which is going to really help evolve it. The other thing is the CPUs and how they work. The one I wanted to call out, just because we're talking about idle servers, is the Kubernetes Power Manager, and two engineers from that team are here too. Some of that is changing the frequencies of the cores, and you can get like a 15 to 30% improvement when you scale the cores up and down.
The other thing we're working on, and we're hoping to release it in the coming months, is really around P-states and C-states. As you say, if you have lots of idle servers, and we talked about this in the keynote, the majority of companies, over about 50% of them, are running their CPU utilization at maybe 20 to 40%. That means there's a lot of energy being wasted, because you've got these servers that are on. If you put a core into a deeper P-state, that's really where you scale your voltage and frequency, and you can save a lot of power there. We're also working with C-states, where you can put cores into sleep modes. It just means that, well, they are on, but they're not using the same power. I was chatting to Trisha about this, and it's probably around 30 to 40% we're expecting in terms of power improvements, which is going to be huge. So we're really excited about that. And we do have that at the Intel booth at the moment, if anyone's interested and wants to pop over. Actually, someone here really wants to ask a question. Yeah, why don't you go ahead? I am going to open up the floor for questions a bit later, but if you really need an answer now, go for it. All right, I didn't know that. So my question was: do you have any data that correlates, or would you suggest that putting your workload in a place like Helsinki, a rather remote place, adds to the total, because every switch on the way also consumes energy, right? And the small stuff also adds up to a big number at the end. So do you have any data on that? Oh, is that for Jaime? And you're the director, thank you.
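For the curious, Linux already exposes the P-state machinery being described here through the standard cpufreq sysfs interface. The paths below are the stock kernel ones; the helper functions themselves are just a sketch (reading usually works unprivileged, changing the governor needs root):

```python
# Peek at Linux's frequency-scaling (P-state) knobs via the standard
# cpufreq sysfs interface. Helper functions are a sketch, paths are real.
from pathlib import Path

CPUFREQ = "/sys/devices/system/cpu/cpu{n}/cpufreq/{attr}"

def parse_khz(text: str) -> float:
    """cpufreq reports frequencies in kHz; convert to GHz."""
    return int(text.strip()) / 1e6

def current_freq_ghz(cpu: int = 0) -> float:
    return parse_khz(Path(CPUFREQ.format(n=cpu, attr="scaling_cur_freq")).read_text())

def current_governor(cpu: int = 0) -> str:
    # Typical values: "performance", "powersave", "schedutil"
    return Path(CPUFREQ.format(n=cpu, attr="scaling_governor")).read_text().strip()
```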
Jaime. Yeah, that's what I thought, yeah. Okay. Actually, you asked a really good question, and there's another question you asked the other day which I just wanted to answer, because I don't know whether I answered it correctly at the booth. You asked about reuse of hardware: what are we doing with reuse of hardware? That's interesting as well, yeah, it's actually a really good question; you come up with really good questions. And if you power down your cores with the power management we're working on right now, you're actually going to increase the longevity of your hardware. So I think that's really a great question you asked me the other day. As well, we have several members looking at what we call the circular economy. Many of the hyperscalers will run a server for maybe two years and then retire it, even though it's still perfectly good for other workloads. So companies will actually take those servers, and maybe the firmware needs to be moved to something else, and then put them into a second life. The numbers are still small in terms of what's happening, and I don't have a single company to name for you, but it's something that's definitely provided. Yeah. And you asked another question I wanted to jump on as well, because we talked about moving into a data center farther away. Yes, you do hit a few extra hops in the network in terms of switches, and there is energy used by those switches. But I think if you look at the energy burnt by a switch versus what's used by a server to run a compute workload, the switch is definitely smaller. Yeah, that would have been my answer. Also, it depends. Correct. It depends on the distance and on the workload. Correct. And there should be some formula to calculate the optimum for my workload.
And you know what, we're in very early days here. One of the discussions going on in hyperscale now is what comes after PUE, which many people are familiar with; it measures the energy utilization of the building, the data center facility. There are other metrics we've started to work on which measure the actual efficiency of the IT load itself, which we don't have today. So you're asking the right questions about where we need to be, and there are some pretty smart people working on it. Yeah, and that's also what we are currently doing. We can monitor the complete consumption of the rack and also all the network hardware involved in our stack. Normally our switches consume around 300 to 400 watts inside the data center for something like 32 times 100 gig. If you have a long-haul connection, the SFPs in there consume a lot more power, because the lasers need more power because of the distance. So it can add up there. But you also need to remember you're not the only one using it, so you need to proportion it: how much bandwidth do I need? If I need just 20 kilobits or 50 megabits, it will not add up as much as the complete machine in a data center in Germany, because the machine is running just for you there, and the network is normally shared. Marcel, do you have any data on how much an SFP uses idle versus in use? That's complicated, because it's on and off, but normally a multi-mode one is like two to three and a half watts, and single mode is more. It depends also on what modulation you use. But they add up, so we are also looking at using the right SFPs so that we don't have so much power consumption on SFPs. That's also what we're currently looking at. I'm not saying that's not important, you just said it. Yeah, in comparison to compute it's much less. That's true, but the energy is coming from a different source.
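The shared-switch argument can be made concrete with the figures quoted above (a roughly 400 W switch with 32 x 100G ports). Attributing switch power in proportion to the bandwidth you actually use is itself a simplifying assumption:

```python
# Rough share-of-switch math from the figures quoted on stage: a ~400 W
# switch with 32 x 100G ports, shared among tenants. Proportional
# attribution by bandwidth is a simplification.
def my_switch_share_w(switch_w=400.0, total_gbps=32 * 100, my_gbps=0.05):
    """Your slice of the switch's power, proportional to bandwidth used."""
    return switch_w * my_gbps / total_gbps
```

A 50 Mbit/s flow through a 3.2 Tbit/s switch accounts for well under a hundredth of a watt, dwarfed by the 300-watt-class server actually running the workload, though as noted, long-haul optics change the picture somewhat.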
Yeah. That's the thing that matters in the end, because of the carbon footprint we're talking about. And it's going to be the same server in both places. I think someone had a question over there. Maybe we'll take that one, and then we'll come back to the other questions. Raising awareness in the community is a big factor in supporting this work, and in my case, I'm in a role where I can pass on information that I can get from you. What I want to share is, for example, which providers are reusing waste heat, feeding it into communities that need that energy for warming, so people know which providers to back. Do you have some sources we can use to spread the word? So, from our perspective, we are currently gathering that data; we haven't finished. We have a dedicated team inside our company looking at it, and they are adding it up and checking where we can reduce our power, where we have issues with our power consumption, and how we can get more power off-grid, because currently most of the power we consume is coming from the grid. That's something our company is looking into. I'm not sure it's information that cloud providers are generally sharing, which is why you're probably finding it hard to find.
I think as a community, if we start asking those questions and start trying to make choices based on the information we get back, then it will become information that's provided, and we'll start pushing some of the bigger cloud providers in the direction of making sure they're energy efficient and more responsible. So almost voting with your feet and voting with your dollars is the way to go. The data I'm looking for is the kind that lets me tell people: say you're looking to put your workload in Frankfurt, for example; maybe your latency will increase, so we need to work around that, but here's what you gain in sustainability, and you can decide whether you want that trade-off or not. I'm not looking for anything vendor-specific; it's pretty generic data. Wouldn't it be good for everybody to have guidance on how to do that? Okay, maybe we'll switch back to one of our pre-planned questions, and we'll come back to all of these questions in a bit. Why don't we come back to some of the discussions we're having and see if we can get a little more concrete around hardware-software co-design. If we were to imagine the world a year or two from now, given that we have both hardware and software people on the panel, what can we see happening? Are APIs something that could matter and become standardized? Is that the way to go, or are there better ways, in terms of bringing knowledge about what's going on in the infrastructure into the decision making in software? I'm throwing that out there. It's a wacky idea, and this is the space where we're supposed to have wacky ideas. Some of the things Kate was saying about the P-state and C-state stuff is interesting; it's the first time I've heard about it. But I know in Civo we're trying to balance performance and utilization at the moment.
So we've been looking at how we do that scheduling and how we're configuring the CPUs in the BIOS. What would really be a way forward is if we as developers were able to say: this section of code needs to be high performance. If we could send that to a CPU that's in a sleep state, or wake up a certain CPU at that point, and if we have the API hooks that allow us to do that, we can make the most out of the hardware. Then, once we're out of a particularly high-usage section of the code we're writing, we could shut the CPUs down again. It's about making those APIs available in whatever we're using, Go or Rust or C, and having a common API we could share across Intel and AMD, and then NVIDIA on the GPU side as well. I'm in agreement with that. I'd just caution, when we do talk about performance, and I said this right at the start, that I do think we really need to think about the right performance and sustainability. It is a change of mindset, but we also need to make it easy for engineers to do that. Right now we at Intel are certainly trying to drive as much as we can, but these kinds of initiatives would really help. I think also Kubernetes as a whole has this API concept which you can leverage to build things that are more sustainable. I saw some talks about other schedulers that were sustainability-focused, where you can say, I need this many CPUs in the next days for my batch processes, or something like that, and they run them at a good time. There are a lot of people looking at schedulers, and I think if we have more data a scheduler can decide on, it gets easier to make more sustainable workloads happen.
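What's being asked for here does not exist as a common API today, but a toy version of the idea, marking a section of code as performance-critical and restoring the previous power-saving state afterwards, could look like this. The interface is entirely invented; only the cpufreq path is real, writing it needs root, and the read/write hooks exist so the logic can be exercised without touching sysfs:

```python
# Entirely hypothetical "this section needs high performance" interface.
# Only the cpufreq path is real; the API shape is invented for illustration.
from contextlib import contextmanager
from pathlib import Path

GOVERNOR = Path("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor")

@contextmanager
def performance_hint(write=GOVERNOR.write_text, read=GOVERNOR.read_text):
    """Run the enclosed block under the 'performance' governor, then restore."""
    previous = read().strip()
    write("performance")        # ask for the highest P-state
    try:
        yield
    finally:
        write(previous)         # drop back to the power-saving governor
```

In real use this would sit behind a cross-vendor API rather than raw sysfs writes, which is exactly the standardization gap the panel is pointing at.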
There's also data like the PDUs you have in the data center, and all of that data could be made available as a new API in Kubernetes, so that as a developer you can hook in and say: this PDU is not as efficient as that one. That gets expressed as some kind of number, and you can say, okay, the whole stack I'm running on is 68% efficient, but there's one in, say, Helsinki that's 89% efficient, so I'd be better off running there if my workload doesn't depend on the country.

But from a sustainability perspective it's really important to think about co-location as well, because moving your workloads across the network is really expensive in terms of energy. Location counts in terms of energy usage, so that's something we definitely need to be thinking more about. I know there are a lot of people thinking about how we measure this stuff and what we need to put in place from a software perspective.

The other part is the air intake: if the country is colder, you don't need as much cooling. That's also a factor, especially in summer, and I don't currently know how to measure it or make it available for a scheduler to decide on. That's something I hope to see in the future.

It's also something we've got to make easy. We're at a conference of 7,000 people, and yet we've not got 7,000 people in this room, quite rightly, because it's a very niche topic at the moment. So either there's a lot of education we need to do to make everyone care and spend the time developing efficient code and making these scheduling decisions, or we as a small group need to make it very easy for everyone else to get it for free if they use Kubernetes.

In my opinion we're also talking about totally separated worlds.
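If PDU-level efficiency were exposed through an API like the one described, the 68%-versus-89% decision boils down to a small filter-and-score step. This is a hypothetical sketch: the record fields (`region`, `efficiency`) are invented for illustration, and the two clusters just mirror the panel's example.

```python
def pick_cluster(clusters, allowed_regions=None):
    """Return the most power-efficient cluster, optionally restricted to
    regions the workload may run in (data-residency constraints)."""
    candidates = [c for c in clusters
                  if allowed_regions is None or c["region"] in allowed_regions]
    if not candidates:
        raise ValueError("no cluster satisfies the region constraint")
    # Highest end-to-end efficiency (e.g. derived from PDU data) wins.
    return max(candidates, key=lambda c: c["efficiency"])

# The panel's example: 68% efficient locally vs 89% in Helsinki.
clusters = [
    {"name": "local",    "region": "DE", "efficiency": 0.68},
    {"name": "helsinki", "region": "FI", "efficiency": 0.89},
]
```

An unconstrained workload lands in Helsinki; one pinned to Germany stays on the local cluster. A real scorer would also weigh the network cost of moving the workload, per the co-location point above.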
Sometimes we treat facility infrastructure as a very strange world, separate from the whole software and cloud infrastructure. Our experience over all these years, being in both worlds, is very positive: in our monitoring system we gather information from the whole facility and also from the servers, and everything flows into the same system. So we actually measure temperature at the core and the temperature outside as well, and with this we can manage workload movement too. In the end this is usually difficult, because facility managers are on one side and the hardware managers, or even the programmers, are on the other side, and they don't speak to each other. This is a gap we need to fill, and hopefully the OCP can actually fill it.

What I'm hoping is that after today some of the people in this room might reach out to us and work out how to engage with some of my counterparts at the OCP who are looking at sustainability and measurement, looking at different ways to bring this hardware-software interaction to life, and benefit from software that can make the right decisions based on knowledge of the hardware, bringing, to some extent, hardware-aware software and scheduling to Kubernetes. So that's one thing we're hoping to see after today.
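The "everything flows into the same system" idea, facility readings and server readings joined per site so a workload-movement decision can see both worlds at once, can be sketched as a simple merge. The field names here (`outside_c`, `core_c`) are invented for illustration, not taken from any real monitoring stack.

```python
def merged_view(facility_metrics, server_metrics):
    """Join per-site facility readings (e.g. outside temperature) with
    per-site server readings (e.g. core temperature) into one record,
    so a single scheduler can reason over both."""
    view = {}
    for site, metrics in facility_metrics.items():
        view[site] = dict(metrics)  # copy so inputs stay untouched
    for site, metrics in server_metrics.items():
        view.setdefault(site, {}).update(metrics)
    return view
```

The point of the sketch is organizational as much as technical: once the facility side and the server side land in one record, the gap between facility managers and programmers stops being an API problem.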
I had one more speculative question, and it may not be an easy one; I think Jamie actually suggested it: what additional tools and conclusions do we need?

I already talked about liquid cooling solutions and the ways we use them, and about the huge amount of energy that data centers need, and waste. We've really touched on everything already, so I'll just go through it. On orchestration tools and moving loads from A to B to C: we actually proved that at the beginning, when we had distributed racks in the cellars of houses, because one neighbour would go on holiday and turn off the heating system. At that point we realized, hey, we have to move this to a different house, because otherwise our cloud clients won't get their services. So we had to be able to monitor those facts and be aware of them, and that started a process in our minds to develop the open-source software called Krake and, of course, release it to the community. The idea in the future is not only Helsinki and Valencia, but maybe Northern Hemisphere and Southern Hemisphere, summer and winter, day and night: all these climatic changes that make our data centers behave differently should drive this movement of workloads.

And if we go further into the cloud (I'm not a developer, so please be aware of that), we also use lifecycle management tools for cloud, and this is a more efficient way of doing things; everything gets automated, which is also a direct way of being more green. And it runs on Kubernetes; we are at a Kubernetes event, after all, and this lifecycle management tool runs on Kubernetes. It's much more efficient using containers for that. I don't know if I covered the question. Do you want to say something?
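The monitor-and-move decision described in that Krake anecdote can be reduced to a sketch like the one below. This is not Krake's actual code; the site and workload fields (`available`, `score`) are made up to illustrate evacuating workloads from a site that has become unavailable, for example a house whose heating, and with it the rack, was switched off.

```python
def plan_migrations(workloads, sites):
    """Propose a move for every workload whose current site has become
    unavailable, targeting the best-scoring remaining site."""
    available = [s for s in sites if s["available"]]
    if not available:
        raise RuntimeError("no site left to migrate to")
    site_by_name = {s["name"]: s for s in sites}
    # One best target is enough for this toy version; a real planner
    # would spread load and weigh climate, season, and time of day.
    best = max(available, key=lambda s: s["score"])
    return {w["name"]: best["name"]
            for w in workloads
            if not site_by_name[w["site"]]["available"]}
```

Swapping "available" for a climate-derived score is exactly the hemisphere/season/day-night generalization the speaker describes.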
Yeah, I'm just thinking about software. I know we've talked about scheduling; what we haven't talked about is observability and some of the metrics you can get, right? I think I mentioned some of the things we're driving; we also have a group working on metrics, working with collectd, Telegraf and some of those areas, just to provide you with that granular level of metrics. So if you're interested in your power consumption you can come to us, work with us, and we'll help you get those reports so you can see what your power consumption is. That's something else we're very interested in helping people with. I know I'm going back to the software, sorry.

Yeah, we as a telco don't have as many choices as you, as we need to provide the service in Germany, so somehow we need to run our stuff in Germany. The only way for us is to reduce our consumption, which is currently very high, and also to terminate traffic earlier in our network. Those are the only ways we can reduce our footprint, along with getting more green energy into our stack.

Yes, absolutely, what a wonderful question, thank you for asking it. Absolutely everything I've spoken about today is open-sourced, from the telemetry-aware scheduler to the GPU-aware scheduler to the power manager to the metrics we have available, so you can learn about your power consumption. Everything is open-sourced, and as I said, a lot of this stuff is in its early days; there's a huge amount we can do. As I mentioned yesterday, the possibilities here are endless, especially once people start looking at it. We also have a guide to help people install and play with some of this stuff. A lot of the engineers working on it are here, so if you're interested, please chat with them.

Not that I know of. I know we don't do it, and I don't think I've seen it from anyone else, but it's a really good point, something we should probably be doing. There's so
much we can do. The question was whether cloud providers are exposing APIs or metrics on the power usage and efficiency of your workload, and whether you as a developer will be able to see that when you're running in a cloud. We have an internal tool for quantifying the footprint of some workloads, but it's not exposed externally, as you can maybe imagine. Is there an answer from the audience here?

Thank you. You could try to shape the demand of that workload, because we've already run out of wind energy, for example.

Yeah, with the telemetry-aware scheduler we've worked by memory usage and CPUs, so depending on whether they're up or down and that kind of thing. Maybe not for your specific use case, but yeah, that's a great idea.

I mean, these are all things that feed into this whole idea of carbon-aware software; this is exactly the kind of thing we want people to start thinking about, because there are so many really good ideas here. How do we start thinking about this? How do we change from thinking about throughput to thinking about, okay, how do we make a workload run in the most sustainable way possible while still getting whatever threshold level of performance we need? These are all really good ideas.

Actually, it's also worth noting that at the moment we always seem to be assuming that all of the code and workloads being run on the hardware reflect the right coding decisions in the first place. Take blockchain: is that the most efficient direction we as a community should be moving our computing towards? We know how energy-inefficient a blockchain workload is, adding things to blocks, proof of work versus proof of stake. If we are starting to move as an industry towards making this a standard, is that really the most energy-efficient, the most responsible way we should be moving the web? So there's a lot that you can do when you go
back into your company and you're designing how you're going to take your product to the next technology level: considering this in your decisions is something you can do today that will have an effect for a long time to come.

On the other hand, there is actually a Spanish company using blockchain for tracking the origin of the power supply of data centers.

So how do they account for the power they're using to track the power that they're using? It's like an off-chain blockchain, absolutely, yeah.

Yes, so that's actually phase one of our power manager that was released, but it didn't have that C-state piece in there. We're planning in the coming months to release the next version of the power manager with that in it, so people will be able to use it. You're absolutely right, it has been around, but for some reason we haven't really been thinking about it from a sustainability perspective, so that functionality will come in. And yes, this is an operator that you'll be able to deploy.

And hopefully now people working on open-source projects will be putting sustainability on the agenda: what are we doing for sustainability? They may not have done this before, but when they go back, they can make it an agenda item and start brainstorming: how can we get a better design, a better architecture? And absolutely, great idea around the TAG.

I think it could also be something like a validation: you have some workload that always consumes the same power, you run it on different cloud providers, and in the end you have an audit of who's reporting false numbers. It's the same with cars: some report better mileage than others. So I think there needs to be scrutiny to prove it; we have the same for
mobile networks and everything. There's also the fact that the OCP is an open-source hardware project, so you would like to hope that people are checking things: if you've got one company providing a design and claiming certain specs, it gets verified by a competitor. As an open organization it should, to a certain extent, almost be self-validating, because if someone releases something false, everyone else is going to shout about it.

I'd probably say it's something we just have to push to our employers: look, this is not on, this is not something we want to continue supporting. It's almost technical debt in an organization: just like you'd push for a bug fix, we've got to push to say, no, this is no longer efficient. If you go to a CFO and say, "if I rewrite this I will save 20% of our energy costs and it will take me six months", they're going to bite your hand off, right? So you need to present it in a way that the business is going to value.

Or you can also say something like, "otherwise I need this many CO2 certificates for what we consume"; that could also be a way to go to management and say the same thing.
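The validation idea raised above, a reference workload that always draws the same power, run across providers and checked against independent measurements, could be sketched like this. The function, the 5% tolerance, and the numbers are all illustrative; no such audit tool exists in the projects discussed here.

```python
def audit_reports(reported_kwh, measured_kwh, tolerance=0.05):
    """Flag providers whose self-reported energy for the fixed reference
    workload deviates from an independent measurement by more than
    `tolerance` (expressed as a fraction of the measured value)."""
    flagged = []
    for provider in sorted(reported_kwh):
        measured = measured_kwh[provider]
        if abs(reported_kwh[provider] - measured) / measured > tolerance:
            flagged.append(provider)
    return flagged
```

As with car mileage figures, the point is not the exact threshold but that the same workload gives every provider the same yardstick, so under-reporting stands out.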