Thanks for joining us and welcome to OpenInfra Live, the Open Infrastructure Foundation's interactive show on YouTube, LinkedIn and Facebook, sharing production use case studies, open source demos, industry conversations and the latest updates from the global Open Infrastructure community. My name is Thierry Carrez and I will be your host for today. Like I mentioned, we are streaming live and we'll be answering questions throughout the show, so we'll be saving some time at the end of the episode for Q&A. Feel free to drop questions into the comments section of your preferred viewing platform throughout the show and we will answer as many as we can. Before we get started, I want to thank the OpenInfra Foundation members for making this show possible. Today we are joined by a Gold member, Canonical. And we are very excited to see Canonical, as well as the rest of the community, at the OpenInfra Summit in a few weeks. If you haven't gotten your ticket yet, register before prices increase at the end of the day next Monday, May 16th. Like I mentioned earlier, Canonical is a headline sponsor for the event, so the whole team will be onsite in Berlin talking about all things OpenInfra, including LOKI, cloud cost analysis and DPU enablement with OpenStack. On this episode, we are going to talk about data center sustainability with a few folks from Canonical as well as one of their customers, Firmus. Sustainable data centers are a key topic for open infrastructure, and I'm very excited to learn more today about how the industry is pushing forward. To kick things off, I would like to introduce the CEO of Firmus, Tim Rosenfield, to talk about their use case. Go ahead, Tim.

Thanks, Thierry, and good day, everyone. My name's Tim Rosenfield. I am the CEO and co-founder of Firmus and SuperCloud. As you can tell from my accent, I'm Australian, and I'm coming to you at midnight, so excuse me if I seem a little bit tired under the eyes. As I said, we are an Australian business that has been operating for around five years. We are here as part of this open source community, and as part of the infrastructure community, because we set out to do something quite radical, as we saw it: to build a better and more sustainable cloud. The group of co-founders of which I'm a part looked at the trends in cloud computing, at the trends in power consumption, at the rising concern about sustainability and carbon emissions, and at the types of workloads that are going to be performed in tomorrow's economy: in the omniverse, for digital twinning, for AI, and the type of supercomputers and high-powered infrastructure that needs to be hosted. And we set ourselves a challenge: to build a cloud that would be more sustainable, lower cost to use, and able to power more performant compute. As I mentioned, powerful compute is a trend that we've observed, and many tuning into this session have surely experienced rising cluster sizes, increasing vCPU counts in CPUs, and denser and denser GPUs. The sorts of GPUs that we deploy, for example, are A100s from NVIDIA for deep learning and neural network work. Those run at 400 watts today, but looking ahead to the new architectures that NVIDIA have just announced, we're talking about cards of up to 700 watts. So when we looked at how this sort of technology would be deployed, we realized there was a bottleneck forming that would not help the planet as a greater IT load rolls out in the cloud.
There's a stat up there on the screen that's actually quite shocking: 2% of global CO2 emissions come from data centers and the clouds that live inside them. That's the same amount of emissions as the airline industry. People don't equate those two stats, but obviously the buildings that host these clouds, and the servers that we use, emit a lot of carbon. They consume a lot of water. They use a lot of land. And when you think about the growth rate of the sorts of workloads that we all use and bear witness to, the pressing need to rethink the footprint of a cloud became apparent to us. Next slide, thanks Alison.

So for us, building a more sustainable cloud meant rethinking, at the real core level, the data center itself. Because what you'll hear about today from Canonical and from me is the use case in supercomputing and cutting-edge compute, but you've got to start with the basics. And it's undeniable, for those of you on the call with experience in the data center space, that demand for higher power loads is increasing. Whereas data centers built three or four years ago typically have a rack density of around 8 to 10 kilowatts per 42RU, in some cases maybe up to 30 or 40 kilowatts, there is increasing demand for GPU-heavy servers for AI, machine learning, vision systems and game serving that are pushing the limits, with single servers drawing up to around two and a half to three kilowatts. So if you think about the sort of infrastructure needed to host that sustainably, you need to increase the power level you can host per rack.

Efficiency in the data center is another problem that we wanted to fix. Aside from not being able to host compute as dense as the market now demands, using air as the medium to cool servers in a data center is actually not very efficient. The graph on the right there is the efficiency metric, and that's based on PUE, power usage effectiveness. That is the ratio of power into a data center versus power going to the IT load. As an illustrative example, if you have a megawatt of IT load running in a data center, the global average PUE right now is 1.59. That says for every one megawatt of IT load, you need an additional 590 kilowatts to run what is ostensibly the cooling systems: 59% wastage on top of the base IT load, as the quick calculation below shows. For us, as an enterprise trying to build a sustainable cloud, the notion that we would waste 59% of our energy had to be challenged. When you think about all of these issues together, a growing demand for higher power loads, the inability to achieve great efficiency with air cooling, and the global fleet of data centers expanding, you see a problem arising where sustainability is going backwards in the industry. Thanks Alison.

So, to build a sustainable cloud, a high-performance AI cloud that would host a high power load, be lower cost to use and be sustainable to operate, we had to go back to basics and rebuild the data center. We break this into two parts. One is rebuilding the data center itself, and that's where we'll get, in a couple of slides' time, into the actual technology that we've developed. And two, we wanted to build a modern stack that would scale to modern work practices, and that's where our partnership with Ubuntu and Canonical has been very important. We, the core founders, are not from the data center industry or from the cloud industry.
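Coming back to the PUE arithmetic from a moment ago, here is a minimal sketch in Python of the numbers Tim quotes; the 1 MW load and the 1.59 and 1.026 PUE figures are from the talk, while the helper function name is purely illustrative.

    def cooling_overhead_kw(it_load_kw, pue):
        # PUE = total facility power / IT power, so the power drawn beyond the
        # IT load itself (mostly cooling) is IT * (PUE - 1).
        return it_load_kw * (pue - 1.0)

    # Figures quoted in the talk: 1 MW of IT load at the global-average PUE of 1.59.
    print(round(cooling_overhead_kw(1000, 1.59)))   # -> 590 kW of overhead (59% wastage)
    # The immersion system Tim describes later runs at a PUE of 1.026:
    print(round(cooling_overhead_kw(1000, 1.026)))  # -> 26 kW of overhead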
We now have an extensive team who are, people who have come on board along the way, but at the outset it was important for us to broaden the use case for people coming to SuperCloud, which is the computational cloud that we've built within our data center, and make it easy to lift and shift computational workloads from other public clouds, or even from experiences they have with their own on-prem servers. So we tasked Ubuntu and Canonical with building us a hypervisor that would be flexible, that had all of the familiar tools that machine learning scientists, data scientists, visual computing engineers and others would expect on the likes of EC2 on AWS, or Azure: similar networking environments, similar storage environments. We asked them to build that stack so that we could offer that same experience but, importantly, give us flexibility for what's coming next. You'll hear later in this talk from Jon Thor, who talks about supercomputing, because that's a really important development in clouds that we can offer natively with Canonical and the system that we've built: the ability to take disparate servers, offer them as a virtualized, containerized platform, but at the same time, with software changes, bring them together in a true supercomputing cluster. So it was: innovate in the infrastructure, innovate in the stack, and build a truly modern, sustainable cloud. Next slide, please.

So the technology that we developed as a data center to host this is based on the concept of immersion cooling. Immersion cooling itself is not new; it's been around in the IT space since the 60s. It was first written about by IBM in the late 60s as a test of an alternative method to keep a server cool. In more recent years, other players have developed the technology as power loads have started to increase. The most major proponent of this in the data center and cloud space is Alibaba; they have a 40 megawatt immersion data center that they built in China. And there are other private enterprises who have built tank-by-tank level installations. We looked at this technology as the future of efficiency in data centers, and there's a core reason why that is the case, and that is physics. The physics of using a fluid to remove heat from a server versus air is a factor of a thousand times different: air has a heat transfer coefficient of around two and a half watts per square meter, whereas immersion fluid has a heat transfer coefficient of around two and a half thousand watts per square meter. Considering that the main issue we were trying to solve is sustainability, being a thousand times better at removing heat, which is the core driver of energy consumption in a data center, lowers cost and decreases carbon by making the efficiency a lot greater. So we were able to achieve our goals and design a system around that; a quick worked comparison follows below.

Immersion is compatible with traditional air-cooled servers. In our case, we use servers from Supermicro; they have experience with immersion as it has started to be adopted around the world. We use AMD CPUs, NVIDIA GPUs and Mellanox networking, and all these components are compatible with immersion fluid. The technology itself, as I said, is based around a much more efficient way of keeping servers cool, and it leads to a couple of breakthroughs. One is that the cost to build a Tier III equivalent design per megawatt is 75% less than an air-cooled equivalent. And that's down again to that simple physics differential, which means the system is a lot simpler.
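As a rough illustration of the physics Tim describes, here is a hedged back-of-the-envelope sketch. The two coefficients are the round numbers from the talk (heat transfer coefficients are properly expressed in watts per square meter per kelvin), while the surface area and temperature delta are invented purely for the example.

    # Newton's law of cooling: Q = h * A * dT
    # h: heat transfer coefficient (W / m^2 / K), A: surface area, dT: temp delta.
    H_AIR = 2.5           # round figure from the talk for air
    H_IMMERSION = 2500.0  # round figure from the talk for immersion fluid

    def heat_removed_watts(h, area_m2, delta_t_k):
        return h * area_m2 * delta_t_k

    # Hypothetical numbers: 1 m^2 of effective surface, 20 K above coolant temperature.
    air = heat_removed_watts(H_AIR, 1.0, 20.0)          # 50 W
    fluid = heat_removed_watts(H_IMMERSION, 1.0, 20.0)  # 50,000 W
    print(fluid / air)  # -> 1000.0: the thousand-fold difference Tim quotes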
The system also has an extremely low PUE. That PUE I mentioned earlier: our system runs at a PUE of 1.026, which is about as close to one, to pure efficiency, as you can get. Obviously that reduces power consumption and reduces carbon emissions. The ability to host high compute loads is also greatly increased: with our system, we host 130 kilowatts per 42RU rack. Again, the servers that go into this system are compatible with traditional air-cooled data centers. We decided to build a large, hyperscale-size installation down here in Australia, specifically in a state of Australia called Tasmania. Tasmania is unique in Australia in that it is powered by 100% renewable energy, a lot of hydroelectricity plus wind and solar. So we ended up with a designed, and now built, 20 megawatt facility that is powered by renewable energy and operating at arguably the most efficient setting for a data center that we know of in the world. Next slide, please. So we call this the cloud infrastructure of tomorrow, today. Next slide, please.

And it is within this infrastructure that we've built what you're all probably a little bit more interested in, and that is SuperCloud. SuperCloud, as I said at the top, is a computational cloud. Immersion is in its early days now, although it is starting to gain a lot of industry adoption as the simple truth of the drive to be more sustainable and more cost-effective starts to win out over more costly and less sustainable air-cooled data centers. There are a few challenges to overcome when using immersion. The physical nature of the data center is slightly different: instead of having a rack vertically, you obviously have it horizontally, and you have to deal with the mess of immersion fluid. So it does take a team of specialist data center operators; in our case, we have those in-house to service the servers in this environment. So it's not easy to do co-location, and for us a very important use case was to build our own computational cloud. SuperCloud has all the attributes that we aspire to showcase: power, efficiency, and a lower cost to use, much lower because of that efficiency. Next slide, thanks.

We've tried to quantify that, because we've found the market broadly is not really awake to the impact, the carbon impact specifically, of compute. There are obviously very well-known players in the market who look to offset their power; that's where you do a PPA and buy green power from a generator. But the problem is that in an energy grid, electrons don't discriminate. So if you're connected to an energy grid that has, say, the equivalent of 500 grams of carbon per kilowatt-hour, and you offset your carbon, that's still very useful, but the end result is that you're still consuming power at 500 grams per kilowatt-hour, because you're connected to an interconnected grid. By being connected to a genuinely renewable-based grid, we can easily quantify the actual carbon output of our facility. And so we try to educate the market on this, and we boil it down to two very simple components: one is the carbon emitted per vCPU core per annum, and the other is per GPU; in our case, we use A100 GPUs. So, for illustrative purposes, and depending on how many Australians are listening on this call this may or may not make much sense, SuperCloud, which is hosted in Tasmania within our energy-efficient infrastructure, has the stats that you see up on the screen there.
So we emit three and a half kilograms of carbon per vCPU per year, or 572 kilograms of carbon per A100 per annum. Contrast that with the other brackets there, which are Australian states: Victoria, New South Wales, Queensland and Western Australia. You get up to a five to six times lower carbon coefficient than clouds that are hosted in availability zones in those big states. So part of our mission in educating the market about their carbon footprint, when they think about the really energy-intensive AI and machine learning workloads they use SuperCloud for, is to say: you're going to run a job; you might be building a new neural network project and it's going to take 300 hours of GPU time. Well, what is the exact carbon going into that, and how can you report on it? A worked version of that arithmetic follows below. So we've made a real effort to capture, report on and contrast the carbon savings of using SuperCloud and of promoting this type of energy-efficient technology. Next slide, please.

So, as I said, and this is particularly relevant to OpenInfra, OpenStack and Canonical, to showcase this infrastructure and build SuperCloud, it wasn't good enough to just have a simple VM or simple bare metal. We wanted a full-stack, well-liked, flexible product, and so OpenStack, and Canonical's support of OpenStack, was a very logical choice. It has enabled us to build a few distinct products that help articulate our positioning in the market. First, a Sustainable Compute Engine, or SCE; think of that as our version of EC2. It's VMs and Kubernetes, it's scalable, it's elastic, and importantly it comes with machine images that we deploy and maintain, and all the APIs that we support from NVIDIA and other platforms like Converge. Cloud-native supercomputing is something that Jon Thor can probably talk about a little more, but it is the concept I mentioned at the top: the ability to have cloud-native, typical cloud workloads alongside the option to orchestrate a lot of bare metal into a very high-performance cluster with a simple software change. We see a lot of demand for this type of service in the education space, in the e-research space, anywhere that groups are looking to cloudify, if you like, HPC and large AI workloads and need the power of a supercomputer; they can now get that in the cloud, with all the sustainability advantages that we bring. And finally, of course, there is storage: with SuperCloud, we offer block and object storage compatible with the S3 API. Finally, and importantly, sustainability unlocks the price advantage that we were seeking at the top, which was to be a lower-cost cloud. Obviously, by being able to build a system that costs 75% less to build, by running it at up to 50% less power, and by being powered by renewable energy, we unlock substantially lower-cost access to this sort of technology. And this fits into our mission of offering very simple price structures to customers, without, let's say, the gotcha pricing where you might be charged for IOPS or for egress. It's very simple, very straightforward, but importantly it's only possible because of the sustainability that we've built into our system. And that is the end of my presentation.

Well, thanks a lot, Tim, for this presentation. It was a really compelling use case.
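As a worked version of Tim's 300-hour example, here is a minimal, hedged sketch using the figures from the slide; the conversion from an annual per-GPU figure to an hourly rate, and the helper names, are illustrative assumptions rather than Firmus's reporting method.

    HOURS_PER_YEAR = 365 * 24

    # Figures quoted from the SuperCloud slide.
    KG_CO2_PER_VCPU_YEAR = 3.5
    KG_CO2_PER_A100_YEAR = 572.0

    def job_carbon_kg(gpu_hours, kg_per_gpu_year=KG_CO2_PER_A100_YEAR):
        # Estimate kg of CO2 for a GPU job, assuming emissions scale with run time.
        return gpu_hours * (kg_per_gpu_year / HOURS_PER_YEAR)

    # Tim's example: a neural network project taking 300 hours of A100 time.
    print(round(job_carbon_kg(300), 1))  # -> about 19.6 kg of CO2 on SuperCloud
    # At the 5-6x higher carbon coefficient of the mainland grids, the same
    # job would sit somewhere around 100 kg or more.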
I'm glad that open infrastructure software like OpenStack can help enable innovation in that space by making the software available for everyone to take and innovate with. And I guess the software developer in me had a quick question for you. You talked a lot about innovation in the way the data center is built, but are there any improvements that could be made to support your mission on the software side, in OpenStack or Kubernetes or Ubuntu? Are there things that could be developed there that would push the envelope on sustainability on the software side, the same way you do it on the hardware side?

It's a good question. I'd answer that by saying that I feel open source software more broadly is already doing a lot for us. If you think about our use case of trying to educate the market on the benefits of sustainable technology, but without the backing or co-location force of a customer like Microsoft, Amazon or Google using the hardware, it was very important for us to be able to demonstrate the breadth of use and flexibility for consumers of the product. And by using open source, we're able to bring a really wide range of tools that are very compatible with a lot of proprietary platforms already used in some of the big clouds. So I think that's answering it in a slightly different way.

Another way to think about it, and that's where my head went, is: are you thinking about power-saving algorithms or other software-based solutions to be more sustainable, maybe?

To be honest with you, I think that's our next chapter. I think that will be a very fun development: to say, well, when the server's not being used, how can it sleep, and how can we work with the open source community to optimize that? But I would say that open source has been instrumental in proving the commercial viability of this, and it's already doing a lot for us.

Okay, so we'll just continue to do what we're doing, since it seems to have enabled your use case.

Keep pulling the strings, keep peeling the onion. That's what we did with the infrastructure, and it lets there be a viable alternative to proprietary software.

Okay, well, thanks again for your very interesting presentation. We'll switch to the next presenter. I would like to introduce Tytus Kurek, Product Manager at Canonical, to talk about Canonical's reference architecture for OpenStack implementation and how it helps minimize power consumption. Go for it, Tytus.

Thank you, Thierry, and thank you, Tim, for the amazing introduction to the sustainability topic in general. So, talking about Canonical's reference architecture for OpenStack implementation: you've heard many times from us already that we focus a lot on price-performance, achieving a price-performance-optimized architecture for infrastructure implementation in general, not just OpenStack, but all the other infrastructure components that build on top of OpenStack, including Kubernetes, including Ceph, and many, many other technologies. And this really goes well together with the sustainable computing paradigm, because if you think about it, the goal of sustainable computing is to reduce emissions, and the way that can be achieved is by putting together a denser architecture for data center implementation, which at the end of the day translates into fewer servers running while providing the same amount of resources.
So, pushing the hardware to its limits, making sure that we really maximize the efficiency of the hardware that we use. What do we understand by price-performance? If you build a cloud, if you build a data center, there's obviously a variety of hardware choices that you can make, starting from silicon and moving through the typical resources that build up a data center: compute, memory, storage and network. You can end up having a super-resourceful data center but not be able to leverage all of those resources, for example because of software limitations. Or you can build a data center using all-flash storage devices but not really be able to utilize them, because it turns out that the network in your data center is the bottleneck. So if you look at how the ratio of cloud performance to cloud price behaves, it most likely resembles the curve demonstrated here on the graph. And our goal is obviously to provide maximum performance, but at the same time to make sure organizations don't over-invest: that they only use the set of technologies that allows them to climb to that maximum performance, without spending extra money on unnecessary extensions they would not be able to benefit from. And if we move to the next slide, please, this is how we achieve that.

There are a lot of ingredients that build up this price-performance-optimized architecture, but let's start with service placement. As you probably know, there are various architectures that you can use when building an OpenStack cloud. There is a fully disaggregated architecture, where all services run in isolation on separate nodes. There is a converged architecture, where control plane services run on isolated nodes while compute, network and storage resources run distributed across the remaining nodes in the cluster. But there is also the hyper-converged architecture that we promote in the first place when building data centers for our customers, where control plane, compute, network and storage resources are all equally distributed across all of the nodes in the cluster. This type of architecture ensures maximum resource utilization, because fewer nodes are required to run all of the services that OpenStack needs in order to provide its function, which is basically being a cloud platform; the sketch below illustrates the node-count arithmetic. It helps to standardize on a single hardware configuration across all of the nodes in a data center. And as a result, it all leads to fewer servers being required to run the cloud, and to TCO optimization, because at the end of the day, clouds can be sustainable, but if they're not economical, they have no way to survive, right?

And now, moving to the next slide: when it comes to the actual machines, the actual hardware, we obviously cooperate with the leading hardware manufacturers around the world to be able to build OpenStack-based clouds on top of various hardware, but we are pretty strict when it comes to the exact hardware specifications and recommendations.
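As a rough illustration of why hyper-converged placement saves machines, here is a hedged sketch; the node counts and resource figures are invented for the example and are not Canonical's sizing numbers.

    # Hypothetical sizing exercise: suppose a cloud needs 60 servers' worth of
    # usable compute, and the control plane plus storage daemons together
    # consume about 3 servers' worth of resources in aggregate.
    COMPUTE_SERVERS_WORTH = 60
    OVERHEAD_SERVERS_WORTH = 3

    # Disaggregated: control plane and storage run on dedicated machines, which
    # in practice means whole extra servers (say, 3 control + 6 storage nodes).
    disaggregated = COMPUTE_SERVERS_WORTH + 3 + 6  # 69 servers

    # Hyper-converged: the same overhead is spread as thin slices across every
    # node, so only its aggregate resource cost is paid, not whole machines.
    hyperconverged = COMPUTE_SERVERS_WORTH + OVERHEAD_SERVERS_WORTH  # 63 servers

    print(disaggregated - hyperconverged)  # 6 fewer servers to buy, power and cool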
So there's obviously a balance that we need to find in how many servers we can actually put inside a single rack, and there is a temptation to go with one-unit rack servers in the first place. But based on various analyses that we've made over the years, and on all the lessons learned from hundreds of OpenStack cloud deployments that we've done, the two-unit form factor is usually the most optimal from the price-capacity-ratio point of view, thanks to the possibility of putting a lot of resources inside, which at the end of the day leads to fewer servers being required to run a cloud of a planned capacity. And when it comes to PCIe, the fourth generation of PCIe is usually sufficient from the performance point of view, and it does not put any extra costs on the business.

If we go to the next slide: once we've chosen a server, once we've chosen the chassis for all of the resources, the next step is to plan how many CPUs, with how many cores, we are going to place inside that chassis. And the same goes for RAM, and the same goes for storage. Obviously, if you're planning to build a server that has, say, one terabyte of RAM in total across all of the chips it has inside, you can approach that in various ways: you can either use fewer high-density chips or more low-density chips. So even at that stage, we look at it from the price-performance point of view. The ratio of the overall cloud cost to the resource size mostly resembles a kind of U-curve, so we try to avoid the borderlines; the most efficient option usually lies somewhere in the middle. That means we try to leverage all the available slots inside the chassis to fill it with resources, while also making sure the individual resource size is optimal from the pricing point of view.

And moving to the next slide: when it comes to storage, this is a little more complex, because there are various types of storage devices available out there. You can build a cloud consisting of regular HDDs, but you can also use SSDs. There are various types of interfaces: there's SAS, but there's also NVMe. And there are even super-high-performance storage devices, like Intel Optane, for example. But their cost per gigabyte varies significantly depending on which device you are using. So you can obviously build the entire cloud out of Intel Optane; it will provide super-high performance, but it's going to cost a lot. On the other side, you can build the cloud out of regular HDD devices; it will be super cheap, but it's very quickly going to become a bottleneck. So the solution is to find a balance somewhere in the middle, and if you go to the next slide, this is exactly how we achieve that in Canonical's reference architecture for OpenStack implementation. We use a tiered storage approach, with Ceph as the software-defined storage solution out of the box; we support many other options as well, but this is what we use by default. So in our reference architecture, Ceph OSDs run directly on HDD devices, while the storage bcache layer that sits in front of the Ceph OSDs is implemented using NVMe devices.
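Here is a hedged sketch of the cost reasoning behind that tiered layout; the prices, capacities and cache sizing are invented for illustration and are not Canonical's benchmark figures.

    # Illustrative prices and sizes only; not Canonical's numbers.
    PRICE_PER_GB = {"hdd": 0.02, "nvme": 0.10}  # hypothetical USD per GB
    CLUSTER_GB = 500_000      # a 500 TB raw cluster
    CACHE_FRACTION = 0.05     # NVMe bcache sized at 5% of the HDD capacity

    all_flash_cost = CLUSTER_GB * PRICE_PER_GB["nvme"]
    tiered_cost = (CLUSTER_GB * PRICE_PER_GB["hdd"]
                   + CLUSTER_GB * CACHE_FRACTION * PRICE_PER_GB["nvme"])

    # With a decent cache hit rate, most I/O is served at NVMe speed even though
    # the bulk of the capacity sits on cheap HDDs behind the Ceph OSDs.
    print(f"all-flash: ${all_flash_cost:,.0f}  tiered: ${tiered_cost:,.0f}")
    # -> all-flash: $50,000  tiered: $12,500 in this toy model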
Those NVMe devices can be either SSDs or Intel Optane devices, depending on the exact customer needs. And interestingly, although we do not have enough time to present all of the results, we have an internal cloud running that uses exactly this kind of reference architecture, and we benchmarked that cloud. We are able to get performance results comparable to some of the other clouds, whose results were published on the Internet, that use an all-flash architecture. So the cost savings are significant, while there's no visible impact on the overall performance of the cloud. So that's mostly it in terms of Canonical's architecture; I think there's one more slide, if we go there.

On the network side, we try to get the most out of the underlying network as well. If you look at it from the cost-per-gigabit-per-second point of view, a 100 gigabit per second network fabric is usually the most cost-efficient, especially when it comes to implementing OpenStack at scale, scaling it out to tens, hundreds or even thousands of nodes; it's usually the only option at that scale, at the same time. And for this reason, we also try to leverage various types of accelerators, like SmartNICs or DPUs: starting from the OpenStack Yoga release, it is now possible to fully offload OVN components onto DPUs attached to hypervisors. We leverage GPUs significantly as well as a part of our reference architecture; as Tim pointed out, GPUs are pretty important when it comes to sustainability. We allow GPU virtualization using NVIDIA vGPU software, and, as a result of the offloading, we can provide super-fast network connections between instances running on top of OpenStack. And I think that's the end of the presentation, at least from my side.

Yeah, thanks, Tytus, that was really interesting. I really appreciated your approach to tiered storage in particular; it's always a challenge to balance cost versus speed on the storage side, and I feel like you have a good solution there. We had one question, probably for me: where can I see this presentation again? Well, we actually post the full recording of the show on YouTube after the show ends, so you should be able to find it on YouTube, or at the link that is probably being pushed into your local comments right now. And I had one question for you, Tytus, before we pass to Jon. Since the theme of the episode is sustainability, would you say that sustainability is a top concern in your customers' requirements when they come to Canonical to build their cloud infrastructure?

Sure. So I think most of our customers care about maximizing the efficiency of the hardware that they get, and maximizing the overall performance and cost efficiency of their data center. So even though they may not think it through from the sustainability point of view, at the end of the day this translates to sustainability, right? And I think open source technologies are really helping organizations achieve their sustainability goals, because with open source, everyone can build a cloud. I'm not saying that hyperscalers, all of those big public cloud providers, do not care about sustainability at all.
I'm not saying that, but with open source technologies we can invest much more in sustainability. And with the entire paradigm of cloud computing becoming a little more distributed than it is right now, with just a handful of big public cloud providers handling the majority of the workloads, we can help companies like Firmus, and many, many others, to build local public cloud infrastructure that answers the sustainability needs.

Well, thank you. And I'll switch to our last speaker, Jon Thor Kristinsson, another product manager at Canonical, to talk about HPC power usage effectiveness, including how Canonical enables this for the Firmus use case. Go for it, Jon.

Thank you, Thierry. So you might be wondering what HPC is, so let me give you a quick intro. High-performance computing, or supercomputing, is the practice of combining computational resources together as a single resource. The combined resources are often referred to as a supercomputer or a compute cluster. The reason this is done is to deliver computational intensity: the ability to process complex computational workloads and applications at high speeds and in parallel. These workloads require computing power and performance that is often beyond the capabilities of a typical desktop computer or workstation. Next slide, please.

HPC is used in various fields. For example, the automotive and aerospace sectors use HPC for simulation workloads, such as computational fluid dynamics, to simulate how a vehicle might perform or be affected by the real world. Automotive has also been a serious adopter of AI/ML for autonomous driving. Next slide, please. The energy sector is a large user of high-performance computing: it might try to optimize the placement or operation of wind turbines or solar panels, optimize the delivery of electricity through its network, or use HPC for the discovery of natural resources. Next slide, please. The health sector: HPC and AI/ML are used for anything from the detection of cancer cells to the simulation of blood flow through vital organs; it's used to understand our major organs, such as our brain. Pharmaceutical companies and genomic researchers use high-performance computing to discover the function of genes and to understand complex biological processes, such as protein folding. Next slide, please. Universities are using HPC to reach new discoveries to better understand or improve the world we live in today. This might be anything from simulating various aspects of our world, such as figuring out what the weather is going to be today or tomorrow or a week from now, to just about any sort of discovery. Next slide, please. Media and entertainment uses large rendering clusters for video rendering and generating visual effects, so this could be anything from the creation of animated movies to enhancing the movies or TV shows we might be watching. Next slide, please. The financial sector uses HPC for risk and anomaly detection, high-frequency trading and portfolio management. So overall, Firmus, with its SuperCloud built with a clear focus on AI/ML and high-performance computing, has great potential to deliver discoveries across multiple sectors in an energy-efficient way, with a true focus on sustainability, and it does this with its very focused, specialist resources, specially configured and architected to deliver performance for these sorts of workloads. Thank you.

Thanks, Jon, for this landscape of HPC use cases. I guess that leads to my question.
What is the relationship between those HPC use cases and what organizations like Firmus are doing around sustainability? Why would you use a sustainable data center for HPC use cases, basically?

So, HPC use cases are quite compute-intensive; they require a lot of energy. Firmus building the SuperCloud solution on a data center optimized for low PUE drives sustainability, as it lowers the overall energy usage required to run these energy-demanding workloads. This, along with the usage of renewable energy, really drives sustainability for Firmus and its customers.

Okay, that makes sense. So I guess my next question would be for Tytus or you, Jon, around the talks that Canonical will give during the summit. Are there any specific sessions that we should be looking forward to in Berlin, around sustainability but also other aspects of building clouds?

Sure, happy to cover that. So, first of all, just to express my excitement about the OpenInfra Summit being resurrected as an in-person event; I think it's a great thing and I'm really looking forward to seeing all of you face-to-face in Berlin, if you are coming. So, what do we expect from the event, and what can you expect from Canonical there? You are going to see during our keynote that the growth in the number of OpenStack users is kind of explosive. We've seen that first in the results of the OpenStack user survey from 2021, where it became evident that there are already 25 million cores run in production on OpenStack. Over the last couple of months we've seen an explosive increase in the number of new use cases for OpenStack at the same time: more and more companies are starting to use OpenStack to build local public cloud infrastructure, and Firmus is a great example here. But it also continues: OpenStack is starting to be used as a foundation for high-performance computing, so there are more and more use cases for it. We are going to have a session that we'll use to convince you why you should become a local service provider, because maybe this is something you haven't thought through before; if you have data centers in place, if you have the infrastructure, there's nothing blocking you from becoming a local service provider, and there is a business case behind it. We're going to talk a little more on sustainability. There's going to be a session about the hyper-converged architecture, prepared by one of our engineers. There's another session about SmartNICs and DPUs and the offloading capabilities of these kinds of accelerators. At the booth we'll be presenting GPU virtualization with NVIDIA, with the usage of NVIDIA vGPU software. And last but not least, LOKI, the new exciting acronym describing the open infrastructure building blocks that allow you to run both traditional and cloud-native types of workloads on the same infrastructure. We'll showcase how developers can set up LOKI on their workstations in just a couple of commands and benefit from the best of open source on a single platform.

That sounds very exciting; looking forward to all of those talks. We have a few other talks around sustainability at the summit, with the BBC, who will be showcasing environmental dashboards that show the carbon impact of your workloads. We'll also have a talk on how to build an OpenStack cloud together with the Open Compute Project platform, as well as a forum session on making OpenInfra sustainable. So, looking forward to openly discussing that with the rest of the community.
We should bring back all of our speakers now; maybe we have time for a final question, so if you can all go back on camera, thank you. So, we're nearly at the end of the show. I wanted to ask you a question in general, because we've discussed a lot how you are all building solutions to solve the sustainability problem in cloud workloads, which Tim introduced really well when he explained that it's not going in the right direction right now, so we need a complete paradigm change in how we approach those data centers. I guess my question is: what are the biggest challenges in building sustainable data centers? What is the main blocker? Is it finding the funds to do it? Is it the technology not necessarily being completely mature? What is your top concern, from a user side at Firmus, but also from a solution provider side at Canonical, when you have one of those use cases around data center sustainability?

Should I start with the data center side? The biggest challenge to adopting a sustainable data center is that adopting immersion requires adopting, again, a different way of physically managing compute servers. The industry is used to co-location in a data center, where it is easy to physically access a server and maintain it. The rise of immersion, which we believe is inevitable because of its inherent benefits, means that it is going to take a more specialized team to operate the hardware: someone who is familiar with the system, who understands how to deal with the added complexity of a fluid around the servers. Our great challenge, and one of the reasons that we have SuperCloud as a showcase product, is that the large-scale cloud users, the big hyperscale US and European clouds, are very regimented in an air-cooled environment: their servers turn up at the data centers, they deploy them, they move on. We see the adoption of this technology as inevitable, but it is going to take some time for the ingrained work practices of dealing with air-cooled servers to change and for that knowledge base to build up. That's one of the reasons that we have gone end to end, rather than just offering a data center with co-location and asking the client to figure out how to operate their hardware in this environment. We realized the need to actually support the hardware itself, support the servers, and offer a stack on top if it's needed, so that you can think of these as cloud modules that can be deployed anywhere, fuss-free. So for us it is the adoption of new technology, and we're trying to solve that by giving large clouds a building block to use this technology instantly.

So, Tytus, are Canonical employees all required to learn scuba diving now, for rescuing servers in immersion data centers, or what are the main challenges on your side?

So, I would say the biggest challenge is to establish a benchmark. I think Firmus has done a great job, and we've seen it on the slides, in being able to measure, for a single vCPU on a particular cloud, what the carbon emission coming out of that vCPU is at the end of the day, right? Since no two OpenStack clouds are usually the same, there's always going to be some difference when it comes to this exact number. So I would say the biggest challenge is to establish a benchmark, be able to review those results as we progress, and adjust the reference architecture accordingly, to always be able to provide the one that results in the most sustainable data center.
It feels like we need to make those metrics a lot more used across the industry, as a way to force people to ask the right questions about the impact of their workloads. Because clearly today we have cloud providers that are very big on sustainability, that will really measure and display those carbon emissions, but it's not done everywhere, and so it's difficult to use as a sales point or as a differentiator. So, like you said, I think this benchmark, or at least promoting those metrics, is really key to the next wave of adoption for sustainable data centers. Anything to add, Jon?

I mean, I suppose the only real way to measure the environmental impact of a software solution would be to measure the overall energy usage of the infrastructure and the data center where that software runs. But in terms of sustainability, HPC can drive a reduced impact in other ways; for example, HPC resources could be used to carry out research and enable discoveries that might help with environmental impact.

Okay, well, thank you again, everyone. I feel like we're out of time, so we can't really take any more questions. I want to thank you all for speaking today; I appreciate you joining us, and I feel like the audience got a good picture of how you can build a sustainable data center today using open infrastructure technology, using Canonical, and seeing Firmus as an example of how to do it well. So if you enjoyed today's discussion around sustainability, don't forget the talks that we just mentioned around sustainability at the upcoming Berlin Summit, to continue to learn how open infrastructure plays a critical role. And don't forget that prices increase at the end of day Monday next week, so register now unless you want to pay more. We're taking a break from OpenInfra Live now and we'll return to your screens a few weeks after the Berlin Summit. So if you have an idea for a future episode, we want to hear from you: submit your ideas at ideas.openinfra.live and maybe we'll see you on a future show. So thanks again to our guests from today, and I hope to see you all in a few weeks in Berlin. Have a great day!