All right, hello everyone, let's get started. My name is Kristina Devochka and I'm a software architect at Admincontrol, based in Oslo, Norway. In addition to my full-time job, I dedicate a bunch of my free time to being what I call a tech community octopus, by which I mean I do a bunch of different activities related to all things cloud native, Kubernetes, green tech and cats. So if any of those topics light you up as well, I'd love to have a chat with you after this presentation. I hope that with this session today I can make you more aware of sustainability in tech and how it can be applied to Kubernetes, and that I can inspire you to keep the sustainability aspect in the back of your head in your own projects. With this session I would like to spread awareness about this topic across teams, organizations and the tech community as a whole. But before we take a look at what actions you can take to improve the sustainability of Kubernetes clusters and the workloads running on them, let's take a look at the overall state of climate change and how tech is contributing to it. Climate change: is it overrated? This sounds like a clickbait title, a few of which you may find throughout this presentation as my potentially not very funny attempt to capture your attention even more. But answering the question: we know one thing for sure, climate change is happening. There is a lot of information out there, in thousands of channels, about climate change and sustainability. Even if we take a look at this year's agenda, we can see a multitude of sessions related to this topic, and even me standing here with this presentation today is part of the movement. But there is also a lot of misinformation out there, which makes it challenging to separate facts from fiction. But we are all techies, we are technical people, we love facts, statistics, numbers, science.
So let's take a look at a few facts taken from different research projects that have been done around the globe. We have been measuring temperatures at weather stations and on ships since the mid-1800s, and we have been capturing surface temperatures with satellites and analyzing geologic data for evidence of climate change. Since that time, all the data we have collected tells us one story: the earth is getting warmer. Based on all the data collected throughout the years, there is almost 100% consensus among climate scientists that human activities are the main contributor to climate change and the increase in temperature. The main driver behind the increasing temperature is the greenhouse effect, which has been intensifying since the Industrial Revolution in the 1800s due to the expansion of human manufacturing. At this point, annual global CO2 emissions are at their highest ever, at more than 37 billion tons. During the Paris Climate Agreement in 2015, 196 countries agreed to do their best to avoid a temperature increase of more than 1.5 degrees Celsius, to avoid the worst fallout of climate change. But as we all have good experience with estimating feature development, the same applies in this case as well, because in reality delivering on that agreement has proven to be more challenging than hoped for. With currently implemented policies, we are looking at a potential temperature increase of more than three degrees Celsius, twice as much as what was hoped for. And to me this shows one important thing: every one of us, in every industry, by taking small or big actions towards sustainability, can contribute to the overall end result and towards our goals and agreements. But some of you, including my skeptical cat Penelope, may be wondering: what does this all have to do with tech? Isn't technology part of the solution? Is technology a climate friend or a climate foe?
And yes, technology will absolutely help mitigate climate change. Multiple research projects have shown that with the help of digital technologies like cloud computing, the Internet of Things, artificial intelligence and big data, we could help reduce emissions in multiple industries by as much as 20% by 2050. On one condition: we will only be able to do that as long as those digital technologies are sustainable themselves and are implemented, operated and used in a sustainable manner. But the trend is clear. The need for digital technologies, and for new data centers to host those services, is on the rise. And despite huge and important advancements in hardware energy efficiency and cooling technology, and a much higher focus on renewable energy, the higher demand for resource-intensive services has also resulted in increased energy use, which at times can be challenging to fully cover with natural and renewable energy sources. And to me this proves one thing, once again: each and every one of us should collaborate to reduce the amount of unused and underutilized resources, and with that the total amount of energy and resource waste overall. Now, this has been a lot of science, a lot of scientific facts, until now. So I would say let's cool down a bit and take a look at a few interesting facts about how tech can be contributing to climate change, facts we may not think that much about. A quick quiz time. We don't have much time overall, so I hope that you will not leave me hanging here and can just shout out the first thing you come up with. This is the logo of OpenAI, the company behind products like ChatGPT.
Recently an interesting research project came out where, based on the official data we have at this point, they estimated the emissions produced by training a language model like GPT-3, which has been powering ChatGPT. And my question to you is: the emissions estimated to be produced by one year of training were equal to the lifelong emissions of how many cars? Any suggestions? That was not that big a number, but we are talking about lifelong emissions, and this is just the training. It was estimated that one year of training produced 502 tons of CO2, which was equal to the lifelong emissions of 109 cars, and was also estimated to be enough energy to power an average house in the United States for 120 years. And this is just the training of the language model, where we have better control over how and when we train it. What I find interesting is to see how we, as users of AI services like ChatGPT, will contribute to the overall amount of emissions produced. For example, direct queries to ChatGPT itself are estimated at around 10 million every day. Now, there has been quite a lot of lack of transparency, and not enough data to make really accurate estimations without speculation, but I'm really looking forward to more data coming from the authors behind these services. The second and last question of today will of course be related to Kubernetes. There have been a bunch of research projects looking into the amount of underutilized compute resources in Kubernetes clusters. On average, in percent, how much of the compute resources in Kubernetes clusters has been estimated to be underutilized? Oh, not that bad. That was close. What was that, 34?
Yeah, that was really close, because it has been estimated that more than 30% of the compute resources running in Kubernetes clusters overall are underutilized. A similar number, between 29 and 30%, has also been the result of research into underutilized compute resources on physical and virtual servers. So it is clear that this is actually a quite small step we can take that helps contribute to the overall goal of reducing the total amount of wasted resources. A really interesting report has been done by Cast AI, where they measured the amount of underutilized resources among the customers using their tool. The reduction in underutilized resources was valued in the hundreds of thousands of dollars. At the same time, you also reduce the amount of unnecessary resources running out there, and the cost you save you could actually invest into more sustainability-related actions in your projects. So this I find to be a very interesting finding. What this means, of course, is that there is always room for improvement, and there are multiple actions we can take to run our Kubernetes clusters, and the workloads on them, more sustainably. Green Kubernetes: is it a myth, or is it a reality that you create? I bet on the latter. We can take a bunch of different actions here to improve the sustainability of our resources and systems, and a layered approach is needed. But it all starts with awareness. I still don't hear sustainability being brought up that much during the software development life cycle, and I have to say that for a long time I wasn't thinking about it that much myself. But once I started looking into ways to incorporate sustainability in my personal life, I also started thinking about how it can be applied at work and in tech in general.
In one of the projects where I've been working, we started to be challenged by our customers. We had governmental and public sector customers that became more proactive about looking into the total sustainability of their systems, including the supply chain. And that's when the tables started turning and the mindset started changing. First, just a few of us at work started bringing up sustainability as an equally important criterion at different stages of software and platform development. But then more came up to speed, and we started bringing this up in dialogues with our customers as well. And now, with time, sustainability has become an equally important part of the business and the overall tech roadmap strategy. The point is that it takes time and consistency to get there. But the correct mindset, talking about it, challenging other teams and our supply chain, is where it all starts. We hear a lot about the shared responsibility model when it comes to adopting public cloud, but I think the same can be applied really nicely to the topic of sustainability, and Red Hat has illustrated it really nicely on this diagram. Depending on where you're running your systems, be it public cloud, private cloud or bare metal, and depending on your role in the organization, you may have a different level of control when it comes to implementing sustainability-related actions on the infrastructure itself. But most of us have enough power and control to look into what actions we can take to make the resources that run on that infrastructure more sustainable, like our Kubernetes clusters and our applications. So let's break it down a bit and take a look at some of the areas where it's worth taking sustainability into consideration. During software development we need to make a lot of different decisions, and many of those decisions are related to the supply chain.
From choosing a public cloud provider, a data center service provider or bare metal servers, to just choosing a new third-party library or tool, incorporating sustainability as an additional evaluation criterion can help you make more conscious decisions and contribute to the overall impact. And by supporting vendors and projects that are proactive in offering more sustainable services, we can also help build more eco-friendly systems and reduce the overall negative impact on the planet. For example, when evaluating a new third-party tool or library, in addition to looking at the functionality, take a look at how heavy the application is overall, how many resources it requires to run, especially at scale, and how performant it is compared to its alternatives. When evaluating a new data center or data center service provider, take a look at what kind of data centers are being offered, because depending on the type, the resources may be utilized differently. Take a look at what energy sources are powering those data centers. Are they powered by 100% renewable energy, or only partially? Take a look at what kind of cooling technologies are being used, and whether the excess heat coming from the servers is processed or maybe even reused, which is an additional advantage. There are even climate-positive data centers becoming available to us, like EcoDataCenter in Sweden, which actually reuses its excess heat to help power one of the cities in Sweden, a really exciting project in my opinion. So do check some of the resources I'm going to share later to learn more about it. But of course it's important to stay critical: due to the high level of carbon offsetting and greenwashing, some vendor claims may not be 100% true in reality. Staying critical and doing some fact-checking will help you get a better picture.
Another aspect here is where, in which regions, you decide to provision your resources. Provisioning resources in closer proximity to the users will not only reduce network latency but also the travel length of the network packets. Looking at the heat map is even more interesting: regions in different parts of the globe may be powered with energy in different ways. Even if a data center is powered by renewable energy, if the weather conditions suddenly change or the demand on the data center grows, there may not be enough natural and renewable energy to cover that demand, and vendors may be forced to switch to less sustainable energy sources like coal and other fossil fuels. Therefore, a few tools are out there that can visualize the emission intensity of different regions. Some public cloud providers make it easier and provide some of that overview; some do not do it enough. But there is also an open source tool that can help you get started, called Cloud Carbon Footprint. It measures the carbon emissions of your workloads running in public cloud or on-prem. What it also provides is a visualization of the different regions that public cloud providers offer, and here you can see an overview of the Microsoft Azure regions and their emission intensity, ranging from the low-emission ones marked with light green to the more emission-intensive ones in dark red. From choosing where we are going to build our systems, another important aspect to look at is what kind of nodes we want our Kubernetes clusters to use. Adjusting the choice of node type and size to fit the needs of your application as well as possible is a tough journey where you may need to make multiple adjustments.
From my experience, we have adjusted the node type and size multiple times, both in test and production environments. Having tools that can measure how your application behaves over time and how it utilizes the available resources, and that can come up with recommendations based on this data, can really help you make a better choice and adjust. Some managed Kubernetes services now actually offer additional help here, where they can even make these choices for you, or provide this data without needing to install any additional tools, so this is definitely worth looking into. There are different VM types, like compute optimized, memory optimized and burstable VMs, the latter often recommended if you don't need all of the CPU all the time or have sudden spikes in your application. Another point is considering whether you could use VMs powered by more power-efficient processors based on the ARM architecture, which uses energy more efficiently per unit of operation. Apart from that, spot instances are often mentioned, mainly as a good choice for cost reduction, but at the same time, by choosing spot instances where applicable and valuable, you also contribute to utilizing the cloud provider's unused resources to their max potential. I have even seen a few stories recently from companies that have been running on 100% spot instances on EKS in production, and have been doing so for a while. Of course this may not be a good fit for every production environment, but even starting to look at such actions in dev and test environments would already take you a step further towards the goal. In some cases you may also have the possibility to implement, for instance, proximity placement groups.
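As a rough sketch of the ARM and spot points, both choices can be expressed in a single Deployment manifest. All names are illustrative, and the taint key shown is the AKS-style spot taint; other providers use different keys:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: batch-worker          # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: batch-worker
  template:
    metadata:
      labels:
        app: batch-worker
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64   # prefer energy-efficient ARM nodes
      tolerations:
      - key: kubernetes.azure.com/scalesetpriority  # AKS spot-node taint
        operator: Equal
        value: spot
        effect: NoSchedule
      containers:
      - name: worker
        image: example.registry.io/batch-worker:latest  # illustrative image
```

The toleration lets the pods land on tainted spot nodes; without it, the scheduler keeps them on regular on-demand capacity.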
It's called slightly different things depending on the public cloud provider, but in some cases, if you're in a region where the data centers are spread far from each other, you could implement a proximity placement group, which ensures that the nodes where you provision your resources inside a single availability zone are placed in close physical proximity to each other, again reducing both the latency and the travel length of the network packets. So the point here is to look for ways to use fewer compute resources, but to their max potential. From choosing which nodes to include in your Kubernetes clusters, looking into how we scale the nodes and the workloads running on them can also help utilize the resources we have available to their full potential. It's all about conscious scaling: scaling only what you need, and only when you need it. And this can be tightly coupled to your application architecture. I've seen projects where you might start with a monolithic application and run it on Kubernetes, but the challenge is that it's quite hard to scale only parts of such an application. Looking into ways to improve your application architecture can help you scale more granularly and in a better way. Thinking about manual versus automatic scaling can also be beneficial. I have been in a situation where we had to pre-provision a bunch of nodes because we were expecting a higher load, and that was connected to the way the application was built. But this is not the most sustainable action to take, because that load may never come, and you may have those nodes running for many hours without doing anything useful. So look into whether you could incorporate autoscaling, but at the same time remember to set the max number of instances you want the autoscaler to scale to.
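That limit is a one-line field on the autoscaler. A minimal sketch, assuming an illustrative Deployment named `web`: a HorizontalPodAutoscaler with an explicit `maxReplicas` caps how far automatic scaling can go:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa               # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                 # the workload being scaled
  minReplicas: 2
  maxReplicas: 10             # hard cap: contains runaway scaling from bugs
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out above ~70% average CPU
```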
Because I've seen issues happen, for instance bugs in the application like memory leaks, or even bugs on the vendor side, which caused unexpected, frequent scaling to many more resources than were actually needed. So putting in that limit, to avoid having many more resources provisioned than necessary, is also beneficial. And probably the coolest thing you can do in Kubernetes in terms of scaling is event-driven scaling, because you can adjust to how your application behaves and scale when it is most meaningful for that specific application, and tools like KEDA are super powerful for this. KEDA has tons of scalers available, where you can use different types of metrics and adjust them to the needs of your application in order to scale it when needed. Not all of our applications benefited from the default metrics, so we were using custom metrics as well. An example here can be scaling on the number of HTTP requests coming in to a specific application. So if you haven't used tools like KEDA, definitely check it out. And what I find super exciting is carbon-aware scaling, which is really an emerging thing right now. During the keynote we heard that KEDA is now coming with a carbon-aware scaler. I had been really looking forward to it and was expecting it to be announced today, so that was really cool to see. There you could use, for instance, tools like Kepler, which was also mentioned by Red Hat.
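The HTTP-request scaling mentioned above can be sketched with KEDA's Prometheus scaler. The Prometheus address, metric name and threshold here are assumptions for illustration, not from any specific setup:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: web-scaler            # illustrative name
spec:
  scaleTargetRef:
    name: web                 # illustrative Deployment to scale
  minReplicaCount: 1
  maxReplicaCount: 20         # caps scale-out, like an HPA's maxReplicas
  triggers:
  - type: prometheus
    metadata:
      serverAddress: http://prometheus.monitoring:9090   # assumed endpoint
      query: sum(rate(http_requests_total{app="web"}[2m]))  # assumed metric
      threshold: "100"        # roughly one replica per 100 req/s
```

Under the hood KEDA creates and drives an HPA from this object, so the same cap-your-maximum advice applies.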
With Kepler, for example, you could gather data on the energy intensity of your workloads, and then work with that data in correlation with, for instance, the emission intensity of data centers. You could even use machine learning models to make more predictive or more granular scaling decisions: scale and do more work when there are potentially fewer overall emissions produced, and scale in regions with lower emission intensity. And now we come to something I briefly mentioned just now, which is underutilized resources. Has anyone heard about scream tests? Do you know what that is? A scream test is when you remove a service and wait until someone screams. If someone screams, you need that service and you bring it back; if no one screams, then that was okay and the service was never really used. I'm joking, I do not recommend doing that in production, but it illustrates a point, which I could call by the Norwegian word "dugnad". In Norway, once a year, we all gather outside and start cleaning our gardens, our surroundings, our roads. We remove all the garbage, trying to make everything nice and clean. Not everyone shows up, and it's a challenge every year, but still: do the dugnad of your own systems in a continuous manner. It's not a one-time action; monitor continuously whether you have any resources that are not being used. I've seen services that were provisioned at some point when they were needed, but then they were forgotten; everyone assumed someone else would clean them up, but no one did. There are tools that can help you with cleaning that up. You can also use event-driven scalers like KEDA to scale down to zero when a workload is not needed, and you can look into turning off some of your clusters if you do not need them during the night.
That's what we found out we could do: just start by turning off the test clusters during the night, because there were seven hours a day when they were not used at all. With the help of automation and infrastructure as code, you have the opportunity to reduce the amount of manual work and turn off the resources that are not needed. Also think about when you schedule and run different types of workloads: if you have batch workloads that do not need to run at a specific point in time, you may consider running them later, when there is less demand on the servers and regions. So looking into ways of continuously reducing unused resources, and scaling down when not needed, is definitely a simple and important change you can make to reduce resource waste. And there are, once again, tools that can help you with that, alert you about it, and even perform the cleanup for you, if you dare. The last point I would like to mention is the applications themselves, because we can do all we can in our Kubernetes clusters and workloads to run them more sustainably, but if the application itself is not efficient, if it's resource intensive, it will still require a lot of resources and may not use them in an efficient manner. So look into how your applications are built, and challenge the teams on this if you have a different role. And there are best practices out there for writing applications, with recommendations on how to develop applications with energy efficiency and resource efficiency in mind.
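The turning-off-at-night idea can also be expressed inside the cluster with KEDA's cron scaler, scaling a test workload to zero outside working hours. The names, time zone and schedule below are illustrative assumptions:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: office-hours-scaler   # illustrative name
spec:
  scaleTargetRef:
    name: test-api            # illustrative Deployment in a test environment
  minReplicaCount: 0          # scale to zero outside the schedule below
  triggers:
  - type: cron
    metadata:
      timezone: Europe/Oslo
      start: 0 7 * * 1-5      # scale up at 07:00 on weekdays
      end: 0 19 * * 1-5       # scale back down to zero at 19:00
      desiredReplicas: "2"    # replicas to run during working hours
```

This handles the workloads; turning off whole clusters or node pools at night is done at the provider level instead, typically via automation such as scheduled pipelines.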
In the last part, I'm going to show you a few screenshots from one of our environments, where we take a look at some dashboards of how our resources are doing. Here we have some information about estimated emissions, and also an overview of the regions and their emission intensity. We can also see how the emissions of our resources develop over time, throughout the months, so that we get a better overall picture of how they are doing. These are also features from cost management tooling, which is built on top of the cost data as well. In addition to showing the cost, it shows how the resources are actually being used. Here you can also see the amount of resources that are requested versus what is actually used, so that you get a better impression of where right-sizing is possible. Many of the recommendations that come out of cost reduction can also be applied to sustainability: right-sizing your resources, and removing the resources that are not needed. The same tools can actually be used for resource-based analysis. Here are some examples where recommendations are produced based on analyzing utilization over time. You can also analyze the nodes you have right now, how utilized they are, and how many of them you actually need. So, rounding up: if you want to get started, there is a bunch of resources available.
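One concrete form of the right-sizing recommendations described above is simply aligning a workload's resource requests with its observed usage. A minimal sketch, with illustrative numbers standing in for measured values:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api                   # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: example.registry.io/api:latest  # illustrative image
        resources:
          requests:
            cpu: 200m         # set from observed usage, not a guess
            memory: 256Mi
          limits:
            memory: 512Mi     # headroom without over-reserving the node
```

Since the scheduler packs nodes based on requests, shrinking inflated requests directly reduces the number of nodes a cluster needs.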
But the way I see it, you first need to define what you would like to measure in your project, what KPIs and what kind of metrics you want to collect to get data, and then you need to measure continuously and compare against the history and the current state of the systems that are important to you. I'm rounding off now because of the time. So do check out a few of the resources I mentioned here, and I would also like to give a shout-out to the TAG Environmental Sustainability, which is open for anyone who wants to contribute to more sustainable choices even more. And it's all about balance. There is always a trade-off, so just incorporate sustainability as an additional factor; it should help you make a better choice from the sustainability perspective. And you can make a bigger difference than you think by taking those small choices every day: by challenging your suppliers, by challenging your leadership, by thinking about investing time in some of these sustainability actions that could help your project and our climate improve. I would like to challenge you, after this session, to perform a dugnad in your own clusters. And if you'd like to talk about this in more detail, I'd love to talk about it. Thank you.