Thank you so much, Dennis. We heard a lot about sustainability, efficiency, and optimization today. And that is also the topic of my talk, and it's something near and dear to my heart. My name is Aparna Subramanian, and I'm Director of Production Engineering at Shopify. When I look at the last 10 years of Kubernetes, it is quite evident that this technology has played a pivotal role in the innovation of our data centers. And when I wear my Shopify hat as an end user, I can also appreciate how much this has been a game changer for end users like us, because it has helped us keep up with the relentless speed at which we have to ship and scale software to production. This is a visual of the expanse of infrastructure that powers Shopify, our e-commerce platform. And over the years, we've massively scaled up our infrastructure. In recent years, getting the most out of the infrastructure we have has been top of mind for not only myself, but several platform operators like us. And what better place to talk about sustainability than in this beautiful country? It's where the Paris Climate Accord brought the topic of climate change to the forefront of our minds and sparked worldwide cooperation on this topic. We would all agree that focusing our efforts on sustainability is important. But as end users and platform operators, what does building and operating a sustainable platform actually look like? I believe this has to be a shared responsibility. Given many of us use public cloud providers in some shape or form, the sustainability of that platform is the first thing we'll want to understand. And then we can talk about how to run sustainable workloads on top of it.

So today, I would like to invite some special guests onto the stage who can help us understand how this shared responsibility works in practice. And for that, first, please join me in welcoming Adrienne Jain. Thank you so much, Adrienne. Thanks for being here. Thanks for inviting me.
Adrienne Jain is the Chief Product Officer of Scaleway, and she's here to help us understand how cloud providers are thinking about this problem. So Adrienne, first, tell us a little bit about Scaleway.

Yeah, so Scaleway is a French-based cloud provider. We're a full-stack cloud provider. We offer all the way from dedicated servers to serverless products. We have data centers in France, in Amsterdam, and in Warsaw. And so, yeah, we're very excited about building a whole ecosystem of cloud products in Europe, including based on Kubernetes, of course.

Awesome. Thank you so much. And what are some of the biggest sources of carbon emissions in a data center?

Well, I guess it's kind of intuitive, right? The main challenge for a data center is managing the temperature of the servers. So how do we cool the servers so they don't get to the point where it's too hot for them to operate?

I see. And what I understand about data center efficiency is there are two key metrics. One is power usage effectiveness, and the other is water usage effectiveness. So tell us a little bit about these, and how are cloud providers working on improving this effectiveness?

Right, so power usage effectiveness is the ratio between the total power you're using for your data center and the power that's actually used to power the servers. And so you can't get to a ratio of one, because you at least need to have the lights on so your data center technicians can be operating the servers. But basically, you're looking to get as close as possible to one; the average in the industry is 1.58. We have a data center that's at 1.18, and I'll talk about our technology there. So that's PUE, and then WUE is water usage effectiveness. So same thing: what's the amount of water that is brought into the data center that's not really used for operating the servers?
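As a rough illustration of the two metrics Adrienne describes, here is a minimal Python sketch; the function names are ours, and the inputs are illustrative except for the 1.58 industry-average and 1.18 figures quoted on stage:

```python
# Sketch of the two data-center metrics discussed above. PUE compares the
# total power a facility draws with the power that actually reaches the IT
# equipment; WUE relates the water brought in to the IT energy delivered.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: 1.0 is the (unreachable) ideal."""
    return total_facility_kw / it_equipment_kw

def wue(annual_water_liters: float, it_energy_kwh: float) -> float:
    """Water Usage Effectiveness: liters of water per kWh of IT energy."""
    return annual_water_liters / it_energy_kwh

# A facility drawing 1,580 kW in total to deliver 1,000 kW to servers sits
# at the industry-average PUE mentioned in the talk:
print(pue(1580, 1000))  # -> 1.58
print(pue(1180, 1000))  # -> 1.18, the best figure quoted on stage
```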
And so there is no standard for WUE, but it's important to look at both metrics, because you could say, OK, I'm going to cool my servers with a lot of water; then your PUE is efficient, but you're still using a lot of water, which is not overall eco-friendly.

Yeah. And we understand that cloud providers are doing their part to contribute towards sustainability. What are some things that consumers of cloud can be doing better in order to help with sustainability?

So one thing is you can do code sobriety audits. We've looked at this kind of thing; for example, there's a provider called Frogger.io that does this. So looking at: is your code too complex, and therefore using too much power? We've also, at Scaleway, launched ARM-based instances, and ARM is a more energy-efficient architecture than x86. We've integrated it with Kubernetes and launched it this week, and so that's one way to scale in a more sustainable way as well in your cloud usage.

Awesome. Thank you so much for sharing these insights, and I hope you enjoy KubeCon. Thank you, Aparna. Thanks, everyone. Thank you.

Now let's talk about what cloud consumers can do about sustainability. And for this, I would like to introduce you to a couple of our most active contributors to the CNCF end user SIG, Todd and David. Thank you, Todd. Thank you, David. Thank you for joining me here today. David Marualli is a system architect at 1&1 Mail & Media, and Todd Ekenstam is a principal engineer at Intuit. 1&1 Mail & Media is based in Germany, and they offer an email and communications platform to more than 40 million customers. And Intuit is headquartered in Mountain View, California, and they offer a financial technology platform serving millions of customers worldwide.

First, let's start with David. David, in our SIG meetings, we've talked extensively about the topic of efficiency, and we've always advocated for starting off with things that are really easy to measure and simple to fix.
So tell us a little bit about that, and what are some of the biggest sources of inefficiency that you see?

Yeah, thanks. So many of the potential savings are really easily accessible and easy to optimize, and I brought a couple of examples that we can look at. We start with the servers we run Kubernetes on. It's important to know that every server you run is consuming about 200 watts of idle power. And that leads to two things. First, the fewer servers you run, the less energy you waste in idle power. And second, even under full load, about a quarter of the energy is wasted on non-compute purposes. So that makes an easy case for large, well-utilized machines.

Another aspect we can look at is what I call the exaggerated "better safe than sorry" mentality, which can save us a lot of energy if treated right. It works like this. These boxes represent CPU cores. The green ones are your baseline consumption, and maybe a variable load; so that's your peak consumption. That's what you actually need. But then your developers will certainly add a little bit of margin to their reservations, just to be safe. Some administrator might remind you that all of this is running on hyper-threading cores, which behave non-linearly under very high load, so please add a margin for that. You're certainly in a growing business, so add a margin for that as well. And then you might be required to run a highly resilient, highly redundant service that's effectively required to run in two data centers. So that's what you end up with: all these CPU cores provisioned, while the green ones are what you actually need. As all these safety margins multiply, cutting down a little bit on each of them can make huge savings. And you see, we did all of that without digging deep into any kind of power optimization strategies for the CPUs or green power sourcing; that goes on top of that. So that's how you end up with large savings in the end.

Awesome. Thank you so much. Yeah.
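The safety-margin arithmetic David walks through can be sketched numerically; every margin below is an illustrative guess, not a measured value:

```python
# Each safety margin looks modest on its own, but because they multiply,
# the provisioned capacity ends up several times the actual need.

baseline_cores = 8  # the "green boxes": what the workload truly needs at peak

margins = {
    "developer safety margin": 1.25,
    "hyper-threading headroom": 1.30,
    "growth allowance": 1.20,
    "second data center (redundancy)": 2.00,
}

provisioned = baseline_cores
for reason, factor in margins.items():
    provisioned *= factor

print(f"needed: {baseline_cores} cores, provisioned: {provisioned:.1f} cores")
# 8 * 1.25 * 1.30 * 1.20 * 2.00 = 31.2 cores, roughly 4x the real requirement,
# which is why trimming each margin a little yields large overall savings.
```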
Maybe just one more remark on the way to do it: you measure what you already have. Then you write automation to provision and deprovision the resources that you want to optimize. You understand your requirements, define your optimization strategy and optimization goals, and then fit your automation with that. And off you go on your journey to sustainability.

Thank you so much, David. Compute is definitely one of the biggest areas of optimization; that's what we've also found at Shopify, and it's definitely a great starting point.

And over to you, Todd. In our SIG meetings, we've talked so much about auto scaling, and you've shared some innovative approaches that Intuit has used for auto scaling. So tell us more about that.

Yeah, thanks, Aparna. Yes, I think as a cloud consumer, the most important aspect of sustainability is to use your resources as efficiently as possible, and a big part of that is auto scaling. To do this, we're building an AI-native development platform based on Kubernetes and cloud native software that also uses AI and data analytics within the platform itself. As you can see from this slide, we have a large number of requests during our peak season, but this level of load is not constant. We depend on auto scaling to handle our heaviest load, as well as to bring down the resources when they're no longer needed.

But going back to David's earlier point about the sort of exaggerated "better safe than sorry" mentality, the same is also true for sizing and scaling of workloads. How do you know when you've done it properly for all your workloads and under all conditions? When faced with this doubt, there can be a tendency by development teams to just throw more resources at the problem. This may give peace of mind, but it can lead to higher costs and greater consumption of resources, leading to negative environmental impact. This is really not sustainable, especially when there are alternatives available.
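For reference, the core rule the Kubernetes Horizontal Pod Autoscaler applies when making the kind of scale-up and scale-down decisions Todd describes is desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric). A minimal sketch, with illustrative utilization numbers:

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """HPA scaling rule: ceil(currentReplicas * currentMetric / targetMetric)."""
    return math.ceil(current_replicas * (current_metric / target_metric))

# CPU utilization running hot at 90% against a 60% target: scale 4 -> 6 pods.
print(desired_replicas(4, 90, 60))  # -> 6
# Load drops to 30%: scale back to 2, releasing the surplus capacity.
print(desired_replicas(4, 30, 60))  # -> 2
```

The hard part, as discussed next, is choosing the metrics and targets well for every workload, which is where recommendation tooling comes in.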
So while Kubernetes and related projects provide the capacity to automatically and dynamically scale workloads, we've found that configuring these systems properly can really be a challenge for application developers. As a primarily data-driven problem, we think AI will have a big impact on both capacity planning and auto scaling going forward, allowing us to be more efficient with our computing resources. And at Intuit, that's exactly what we're doing: we're building an intelligent auto scaling recommendation system that reduces the burden on our developers and helps us ensure our workloads have the resources they need, while at the same time improving the efficiency of our platform. Doing this requires a large engineering effort and may not be easy, but I'd rather invest in innovation and optimization than simply buy more hydrocarbons.

Thank you so much, Todd. That was great. Even at Shopify, we have a need for our own custom auto scalers, because we have these things called flash sales, and what we find is that we are not yet able to rely on upstream auto scalers to do it for us. But we are on a journey to work with KEDA and other upstream projects to make sure that we don't have to build and maintain these solutions forever. We would love to use more of these upstream solutions.

Before we end, let's talk about some bonus efficiency ideas that we've also discussed in our SIG meetings. Like David said, compute is a great starting point, but there's so much opportunity outside of compute. Let's not forget about storage and databases. We talk about things like how many database replicas are too many, and we often find ourselves in a situation where we actually do have too many. Does each storage bucket have a really robust lifecycle management policy? What does the data retention policy look like, and is it strong enough?

Yeah, and perhaps: does bot traffic or other kinds of automated traffic need to have the same latency as end user traffic?
And could those be given a different quality of service? Also, how many of these auto scaling anti-patterns can you undo, so that you can really trust and develop confidence in your auto scaling solution? Finally, you might consider whether a few bigger clusters are more efficient than a larger number of smaller clusters.

So the optimization opportunities are really everywhere. Begin with the big and obvious things, and you'll quickly be off on your journey to sustainability. And just like the Paris Climate Accord brought countries together to work on climate change, if cloud providers, platform teams, and app teams can work closely together, we can really make cloud sustainability actionable.

Yeah, absolutely. Together, we definitely have the option of making the golden path the greenest path. Thank you for your time.