Welcome, everyone. We're going to be talking about how to reuse the energy of an OpenStack cloud, because that's what Leafcloud does. We'll talk about the why, and we'll talk about the how. Let me introduce myself: I'm Yugor von Uppdorf, and I'll be giving this talk together with Stig.

Hi, everyone. I'm Stig Telfer.

OK, let's start with the why. I think everyone knows the problem of greenhouse gases, and a big chunk of the greenhouse gases that get emitted come from power plants; the electricity industry is a big polluter. The industry tries to mitigate this by generating green energy, for example with wind turbines. But if all that green electricity goes straight to data centers, the net result isn't that great; the underlying problem doesn't really change.

So how much power do data centers use? In the Netherlands, about 3% of all electricity. Let's put that into context: data centers use three times more energy than the NS, the Dutch railway, and more than the entire railway system of the Netherlands and the city of Amsterdam combined, which is huge. It's also a large chunk of all the wind energy we produce, and we have a lot of wind in the Netherlands; it's a windy country.

So, a question for you: how much CO2 does an 8-vCPU Kubernetes node emit in one year? Is it 500 grams, 50 kilograms, or 500 kilograms? Think about those numbers for a moment. The answer is over 500 kilograms a year. And if you instead look at a GPU node, the number is about five times higher; you could drive a car halfway around the world for that. Where do these emissions come from? The figure is based on a normal electricity mix, where about 70% is not renewable and 30% is.
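As a rough sanity check on that 500-kilogram figure, the arithmetic works out with plausible assumed inputs. The numbers below (an average draw of about 150 W for an 8-vCPU node including its share of server and facility overhead, and a grid intensity of about 0.4 kg CO2 per kWh for a roughly 70/30 fossil/renewable mix) are illustrative assumptions, not figures from the talk:

```python
# Rough sanity check of the "over 500 kg CO2 per year" claim.
# All input figures below are assumptions for illustration, not measured values.

avg_power_kw = 0.150       # assumed average draw of an 8-vCPU node, incl. overheads
hours_per_year = 365 * 24  # 8760 hours of continuous operation
grid_intensity = 0.4       # assumed kg CO2 per kWh for a ~70% fossil mix

energy_kwh = avg_power_kw * hours_per_year   # ~1314 kWh per year
emissions_kg = energy_kwh * grid_intensity   # ~526 kg CO2 per year

print(f"{energy_kwh:.0f} kWh/year -> {emissions_kg:.0f} kg CO2/year")
```

With these assumptions the result lands just above 500 kg, consistent with the answer given in the talk.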
What you actually see in this picture is that most of the emissions come from the data center itself, not from the machines inside it. That's the old situation: we build data centers that contain a lot of servers, with power-hungry air conditioning or chillers, which are very inefficient and polluting. All the electricity is converted into heat, which is simply wasted.

But heat is valuable. So in the new situation, we place the servers inside residential technical rooms, where the heat is used for the tap water. Instead of burning fossil fuels to heat up tap water, we use green electricity to run our servers, and the servers heat up the tap water instead. The difference that makes is large: we are able to use more green electricity because there is less pressure on the electricity grid, which is really good for the grid itself; we don't have to build data centers, because boiler rooms and technical rooms already exist in people's residential blocks; and because of the heat reuse, the net result is actually a carbon reduction.

This is the key to what we do. The way we do it is to move the compute nodes out of the data center into what we call leaf sites, because that's where most of the electricity is used. So we barely need any data center footprint, and most of our energy consumption gets reused as hot tap water. This distributed data center works like any other data center: the leaf sites are about 60 microseconds away, and we use 100 or 200 gigabit connections between the data center and the leaves, so it works as a single data center. The reason we need good connections is, of course, that we cut the system in two: we put the compute at the leaf sites, while the storage sits inside a Tier III data center together with the control plane. So we deploy OpenStack in the data center; all the controllers are there.
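The net carbon reduction claimed here can be sketched as a simple balance: green electricity goes in, and the waste heat displaces a gas boiler. The figures below (10,000 kWh of annual compute electricity, near-zero intensity for the green supply, 0.2 kg CO2 per kWh of heat from natural gas, and 90% boiler efficiency) are illustrative assumptions, not Leafcloud's numbers:

```python
# Illustrative carbon balance for heat reuse; all figures are assumptions.

compute_kwh = 10_000    # assumed annual electricity use of a leaf-site rack
green_intensity = 0.0   # kg CO2/kWh, assuming a fully green supply
gas_intensity = 0.2     # assumed kg CO2 per kWh of heat from natural gas
boiler_efficiency = 0.9 # assumed efficiency of the displaced gas boiler

emitted = compute_kwh * green_intensity
# Nearly all server electricity ends up as heat, which displaces gas heating.
avoided = compute_kwh * gas_intensity / boiler_efficiency

net = emitted - avoided  # negative means a net carbon reduction
print(f"net: {net:.0f} kg CO2/year")
```

Under these assumptions the balance is negative (roughly minus 2.2 tonnes per year for this rack), which is the sense in which running the compute can be a net carbon reduction rather than merely carbon-neutral.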
And then we deploy all the compute hosts on the leaf sites using Kayobe and Kolla. This is me installing one of our racks at a leaf site; what you can see here is, I think, a heat exchanger. I'm not our thermodynamics expert. And this is another colleague installing another rack at a leaf site. We have a bunch of those. Now I'll hand the microphone over to Stig.

Thanks. Sorry about that, guys. So you might wonder what the open infrastructure story is here. Fundamentally, everything in this system is open infrastructure. It's built on Linux, and the OpenStack cloud is running largely Kubernetes workloads, distributed across the Amsterdam metropolitan area. The OpenStack environment we're building is based on the Kayobe, Kolla, and Kolla Ansible triplet as a substrate. The way we apply this is to bring about infrastructure-as-code principles: Kayobe gives us a single version-controlled source repository for all of the configuration of the entire Leafcloud infrastructure. That encompasses things like the bare metal configuration of the servers, the storage configuration, the OpenStack service configuration itself, and other things besides. This enables us, under a single repo, to manage version control, peer review, and CI verification of all the changes we make, so that we can provide a resilient and reliable production service on which Leafcloud's users can depend.

Kayobe and Kolla Ansible were chosen for several reasons, primarily because Kolla packages all of the OpenStack services into Docker containers. That enables us to think about the OpenStack control plane as something akin to a microservice architecture.
Kolla Ansible and Kolla also have very broad support for services that are useful in the Leafcloud environment, such as Masakari, so that we can enable failover and transfer of workloads from one site to another when a site is being taken out of service. Finally, Kayobe brings the whole infrastructure-as-code principle together, enabling us to manage the whole thing as one coherent entity, which is very important, and it also enables what we call the change pipeline. Under this model, we use DevOps approaches: there is an abstract configuration for the OpenStack environments, with a production variant of that configuration, a staging variant, and developer environments, all representative of the same abstract core setup. This lets us develop and make changes, move fast and break things, that kind of stuff, in isolation in our own environments; then bring those changes forward and propagate them into a staging environment, where everything comes together in one place and we run test suites against it; and only then apply the changes to the production infrastructure, minimizing the risk of disruption to the users of the system.

I think that's all we had to cover today. So, are there any questions? Yeah.

The question was: how do we do this in very, very hot summers? The answer is simple: people still use warm water to shower and wash their hands in hot summers, so it doesn't really change. I'll add that the websites for Leafcloud and StackHPC are both generating hot water right now.

The next question was whether we need special hardware for the cooling and heat reuse, I think. We thought we did, and we did a lot of research on this. We had iterations with immersion cooling.
We tried two different types of immersion cooling, and we had water cooling on the CPU die. Then we went back to the drawing board and asked: can we make this simpler? We did the calculations, and it turns out it's actually more efficient to just use a heat pump than any of those complicated cooling systems, because the overall efficiency is higher. It's also easier, because you can use normal air-cooled racks, and that way the peripherals of the machine contribute to the heat reuse as well. What happens in a heat pump is that when you feed warm air into it, the efficiency goes up a lot. We feed air at around 40 degrees into the heat pump, and at that temperature its efficiency is three times higher than if you fed in 10-degree air. The power consumption of a leaf site depends on the type of hardware there; think somewhere between 10 and 100 kilowatts.

Any other questions? Yeah. He was asking about the temperature in the rooms. The room itself is not allowed to be at 40 degrees; it has to be livable, because people have to do maintenance there. But we run the hot air directly into the inlet of the heat pump, so the air comes out a little cooler, and the extracted heat goes directly into water, which is exchanged with the tap water system of the building.

Yeah? The question was: what do we do if the heat pump is not operating? It depends. There are ventilators that can simply vent the air in case of a disaster like that; a small disaster, but a disaster nevertheless. And if a leaf site, for some reason, really cannot run, we fail over to a different leaf site.

I think we have to stop. Do we have time for one more question? All right, last question. So the question is about the network architecture, and whether it's one network or multiple networks.
No; low latency is very important for our infrastructure because of the disaggregated setup. And yes, we mount the hard drives in the data center directly from the leaf sites. All right, that's it. Thank you. Thank you very much.
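The heat pump figure from the Q&A (roughly three times better efficiency with 40-degree inlet air than with 10-degree air) can be sanity-checked against the ideal Carnot coefficient of performance, COP = T_hot / (T_hot - T_cold), with temperatures in kelvin. The 55 °C hot-water output temperature below is an assumption; the talk doesn't state it:

```python
# Ideal Carnot COP comparison for the two heat pump inlet temperatures
# mentioned in the Q&A. The 55 C output temperature is an assumption; real
# COPs are only a fraction of Carnot, but the ratio between the two cases
# is roughly preserved.

def carnot_cop(t_hot_c: float, t_cold_c: float) -> float:
    """Ideal heating COP for sink/source temperatures given in Celsius."""
    t_hot = t_hot_c + 273.15
    t_cold = t_cold_c + 273.15
    return t_hot / (t_hot - t_cold)

cop_warm = carnot_cop(55, 40)  # server exhaust air at ~40 C
cop_cold = carnot_cop(55, 10)  # ambient air at ~10 C

print(f"40C inlet: COP {cop_warm:.1f}, 10C inlet: COP {cop_cold:.1f}, "
      f"ratio {cop_warm / cop_cold:.1f}")
```

On this crude ideal-cycle estimate the ratio comes out at about 3, matching the factor quoted in the talk.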