Hello, everyone. My name is Katie Gamanji, and I am currently a senior field engineer at Apple. I joined Apple two years ago, and as part of this role I aim to bring Kubernetes and cloud native expertise to different teams and products within Apple. I am also one of the members of the TOC, or Technical Oversight Committee, for the CNCF. In that role I join ten other champions within the industry, and together we provide a technical vision for the CNCF landscape.

I have many other roles in the community, one of them being the creator of the Cloud Native Fundamentals course, which you can find on Udacity. I am also one of the co-leads for the KCNA exam, or Kubernetes and Cloud Native Associate certification. In addition, I am an industry-awarded professional: this year I was named Next Generation Leader at the Women in IT Awards, and I also won a TechWomen100 award in the UK.

Today, however, I would like to talk about sustainability chronicles, and more specifically how we can innovate for green technology by using Kepler and KEDA. To begin with, I would like to introduce the notion of cloud sustainability and, more importantly, why the tech sector should address its carbon emissions. I am also going to talk about a strategy for introducing sustainability into your day-to-day decision-making process. Next, I would like to apply sustainability to the cloud native context: how can you address emissions within your Kubernetes cluster?
First I am going to talk about Kepler, which will allow us to create an inventory of our emissions and visualize our carbon footprint. Then I am going to talk about the KEDA carbon-aware operator, which will allow us to scale our applications based on carbon intensity.

Now I would like to start by giving an introduction to the cloud native landscape. Currently it is composed of 175 projects, and growing, all of which extend Kubernetes functionality. This landscape gives us the flexibility to create a platform that enables our products with minimal compromises. This is the landscape that we, the TOC, work with, and we liaise with every single project to ensure that it reaches its maximum potential in maturity.

Currently, all of these projects are spread across three levels of maturity: sandbox, incubation, and graduation. Sandbox projects are green-field ideas that provide a solution for a niche problem space. Next we move towards incubation; these are projects that already have usage in production, and at the same time we see diversification in contributions from different organizations. Lastly we have graduated projects, which we refer to as the projects that have crossed the chasm, moving from the early adopter stage to the early majority. These projects provide an industry standard for the solution in that problem space. Currently, within the CNCF, we have 109 projects in sandbox, 36 in incubation, and 24 in graduation.

I would also like to draw your attention to the faint gray line at the top of the graph. This gray line represents the archived projects, and I don't think we talk sufficiently about archival. Sometimes we have projects that do not reach momentum in terms of adoption and contribution, and it is only natural for the maintainers to take the lessons learned and move towards a new initiative within the landscape, or perhaps create a new project that they would open source. Now, considering
the ongoing success of Kubernetes and the growing cloud native landscape, you might ask yourself why sustainability matters and, more importantly, how it can be applied to the cloud native context. The short answer is that we currently have multiple incentives to intertwine economic growth with cloud and digital sustainability, and this is because of two factors.

The first one is COP21, which took place in Paris in 2015 and led to an international agreement to keep global warming between 1.5 and 2 degrees Celsius. The second factor is the UN SDGs, or Sustainable Development Goals; SDG 13 focuses on climate action and the fact that, as a humanity, we actually need to think about sustainability in a proactive manner. This resulted in regulations at the national level, which in turn resulted in different organizations reporting their greenhouse gas emissions.

And we can see this in numbers as well. Currently, the tech sector is responsible for 1.4 percent of total greenhouse gas emissions. If we moved to renewable energy for running our infrastructure and products, these emissions would drop by 80 percent. However, if we don't take any action, we will be responsible for 10 percent of global emissions within a decade. As you can see, we are at a crucial point within the industry: we need to drive remediation strategies for our sustainability, build internal expertise, and ensure that we think about sustainability on a daily basis.

As part of this mission, we already have the big cloud providers setting net-zero targets for themselves. Looking at renewable energy, we have AWS and Azure aiming to run on it by 2025, while GCP has already run on renewable energy since 2022. Looking at offsetting the carbon footprint, the big cloud providers are mostly setting 2030 as their goal, with AWS aiming to achieve that by 2040. In addition, we have Azure, which is setting even more ambitious goals for
themselves: they would like to reduce deforestation, be water positive, and achieve zero-waste certification.

However, we should not rely only on the cloud providers to address sustainability. We need to integrate it into every single organization, and this led to the creation of a new school of thought known as GreenOps, which derived from the FinOps Foundation. GreenOps encapsulates all of the tooling, processes, culture, and behavioral changes required for, and related to, digital and cloud sustainability.

Now, why did it actually derive from the FinOps Foundation? This is because we can see a direct correlation between running your infrastructure efficiently and having a positive sustainability impact. If you are applying FinOps principles, or trying to, you would perhaps run on spot instances, or you might move from an always-on solution to serverless, pretty much running an instance of your application only when required. Perhaps you are changing the programming language of your application to something more optimal for your containers. All of these FinOps principles result in a positive sustainability impact, because the less compute you use, the fewer emissions you are going to have, and we can actually link sustainability to direct profit for your organization.

In addition to that, the Environmental Sustainability working group from the FinOps Foundation came up with a strategy for integrating sustainability into your day-to-day operations. The first phase is awareness: here is where you introduce sustainability to all of the stakeholders internally and talk about your carbon footprint and energy consumption. Next we have the discovery and value stage: here is where we do the POCs. If you are using a cloud provider, you will be able to use its sustainability or carbon footprint calculators, so at this stage you will already be able to
create that baseline. This is also where I would look into tooling such as Kepler and KEDA, which I am going to talk about in the next section. The next stage is the roadmap, and here is where we implement some of these operations in our day-to-day work. What we also want here is to identify anomalies and address them, so we want a qualitative assessment of our emissions. And finally we have repetition and execution: as part of this phase, it is very important for us to have sustainability goals set up internally and, more importantly, to iterate over all of these processes and revise all of our tooling and automation to ensure that we reach our sustainability goals in an ambitious manner.

Now, measuring our carbon and greenhouse gas emissions is just the first step towards mitigation and management. We need to build this muscle of carbon accounting and create an inventory of emissions for our entire work: different teams, products, and services. How can you do so in a cloud native context, if you are running Kubernetes? How can we measure our carbon footprint? Well, for that we have a new tool which emerged within our ecosystem, called Kepler. Kepler is the Kubernetes-based Efficient Power Level Exporter, which uses eBPF to probe power-related stats for your containers and nodes and exports those as Prometheus metrics. Kepler was created by Red Hat and IBM in 2022, and it was admitted to the CNCF as a sandbox project in May 2023.

Now, I am going to put my TOC hat on here. You can see that Kepler is a sandbox project; as such, we need more diversification in maintainership and contributions. If you have any usage of Kepler, or any POCs you are running internally, I definitely encourage you to reach out to the maintainers and become an active contributor. Now, let's look into how Kepler works.
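At its core, Kepler's footprint estimate multiplies the energy a workload consumes by a per-fuel emission factor. A minimal sketch of that arithmetic, where the coefficients are illustrative placeholders rather than the exact EIA values:

```python
# Sketch of the carbon-footprint arithmetic Kepler performs.
# Emission factors here are illustrative placeholders (kg CO2 per kWh);
# Kepler ships coefficients taken from the US Energy Information
# Administration, which can be overridden with cloud-provider data.
EMISSION_FACTORS_KG_PER_KWH = {
    "coal": 1.0,         # placeholder value
    "petroleum": 0.9,    # placeholder value
    "natural_gas": 0.4,  # placeholder value
}

def footprint_kg_co2(energy_kwh: float) -> dict:
    """CO2 emitted per energy source for a given consumption in kWh."""
    return {fuel: energy_kwh * factor
            for fuel, factor in EMISSION_FACTORS_KG_PER_KWH.items()}

# Example: a workload that consumed 10 kWh
print(footprint_kg_co2(10.0))
```

The same multiplication is performed per energy source, which is why the Grafana dashboard shows one gauge per fuel type.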
Currently, Kepler is deployed as a DaemonSet, which means we are going to have a replica of the Kepler exporter on every single node. This allows us to collect two levels of metrics, container-level metrics and node-level metrics, and we pretty much look into power consumption and resource utilization. This is done with eBPF, by tracing the CPU performance counters and Linux kernel tracepoints. All of these are exported as Prometheus metrics, and this is where we aggregate all of our data and are able to visualize it in a tool such as Grafana.

Now, how exactly does Kepler calculate the carbon footprint? This is done for three energy sources: we look at coal, petroleum, and natural gas. The first thing we need is our input value, which is the amount of energy that our application consumes, measured in kilowatt-hours. Next, we multiply this amount of energy by an emission factor, which is a constant; these particular coefficients I have taken from the US Energy Information Administration. The emission factor represents the amount of carbon dioxide emitted per kilowatt-hour consumed, and the end result is the amount of CO2 we have per energy source.

And how does this actually look if we use the Grafana dashboard for Kepler? The first thing we are faced with are three gauges, one for each energy source: coal, petroleum, and natural gas. If you look at natural gas, we are doing pretty well in terms of emissions, but if you look at coal and petroleum, this is where perhaps we would like to assess, address, and iterate. At the top of the screenshot you will be able to see these coefficients, or emission factors, which are hard-coded. Very importantly, if you are running on AWS or GCP in a particular region, you will be able to override these coefficients with
the data provided by these cloud providers, allowing you to have more granular and realistic data about your emissions. In addition, you will be able to see the data aggregated per container, per namespace, and per day. Again, these are very good dashboards for you to assess the spikiness of your emissions, and ideally you would like to address that over time.

Now, with Kepler we are able to create this inventory of our emissions, and we are able to divide it per team, product, and service. But what happens if we would like to be more proactive, with sustainability in mind, in how we use and schedule our workloads? Well, for that we are going to use the KEDA carbon-aware operator.

Before I talk about the operator, I would like to give a very quick introduction to KEDA, which is an event-driven autoscaler. Pretty much, the name says what it does: it scales our application based on an event that is triggered outside of the cluster. KEDA was developed by Red Hat and Microsoft in 2019; it moved to the CNCF sandbox in 2020 and to incubation a year later. And this year we were very happy to welcome KEDA as a fully graduated project. Again, with my TOC hat on here: KEDA is a great example to showcase the maturity growth of a project, and I definitely encourage you to go through the due diligence docs to understand how the project reached this level of maturity.

Now, how does KEDA work? To scale any application with KEDA, you will need a ScaledObject CRD, or custom resource definition. The ScaledObject has two levels of configuration. The first one is the application we would like to scale; this can be a Deployment, a StatefulSet, or any custom resource. The second level of configuration is the actual trigger: what event do we need to monitor and assess in order to scale our application?
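As a sketch, a ScaledObject wiring those two levels of configuration together might look like this; the deployment name, Prometheus address, and query here are hypothetical examples, not from the talk:

```yaml
# Hypothetical ScaledObject: scales a Deployment on a Prometheus metric.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: demo-scaledobject
spec:
  scaleTargetRef:
    name: demo-app             # level 1: the application to scale
  minReplicaCount: 0           # KEDA can scale all the way down to zero
  maxReplicaCount: 15
  triggers:                    # level 2: the event to monitor
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring:9090
        query: sum(rate(http_requests_total{app="demo-app"}[2m]))
        threshold: "100"
```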
With KEDA we are able to scale our application all the way down to zero and back up to one. More importantly, it works in conjunction with the HPA, or Horizontal Pod Autoscaler, so we are able to increase the number of replicas to a desired number defined by the user. It is worth mentioning as well that KEDA has a very rich portfolio of scalers: currently there are more than 64 scalers that we can actively introduce and use for scaling our applications.

Now, let's go back to sustainability. How can we scale our application based on carbon awareness? Well, for that we have a carbon-aware operator that aims to optimize our carbon emissions and environmental impact by scaling our workloads based on carbon intensity. Carbon intensity is pretty much the grams of CO2, or carbon dioxide equivalent, emitted per kilowatt-hour consumed. So, for example, if you plug your infrastructure completely into a wind farm or a solar farm, pretty much using 100 percent renewable energy, your carbon intensity is going to be zero or close to zero, because you do not emit any carbon while consuming this energy. In a more realistic example, however, you plug your infrastructure into a grid that has multiple sources of energy: some of the energy may be produced by burning coal or fuel, and some of it may be renewable. So you are going to have a different coefficient for your carbon intensity.

How exactly can we scale our application based on carbon emissions? Well, for that we are going to use a CarbonAwareKedaScaler CRD, or custom resource definition. If we look into the spec section, the first thing we do is reference our scaled object, which is managed by KEDA.
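A sketch of such a CarbonAwareKedaScaler resource, using the thresholds from this example; the field names follow the carbon-aware KEDA operator samples as I recall them and may differ between versions, and the target name and namespace are hypothetical:

```yaml
# Hypothetical CarbonAwareKedaScaler: caps replicas by carbon intensity.
apiVersion: carbonaware.kubernetes.azure.com/v1alpha1
kind: CarbonAwareKedaScaler
metadata:
  name: demo-carbon-aware
spec:
  kedaTarget: scaledobjects.keda.sh
  kedaTargetRef:
    name: demo-scaledobject          # the ScaledObject managed by KEDA
    namespace: default
  carbonIntensityForecastDataSource:
    mockCarbonForecast: true         # mock data instead of a live provider
  maxReplicasByCarbonIntensity:      # the scaling behavior
    - carbonIntensityThreshold: 543  # intensity at or below 543:
      maxReplicas: 15                #   allow up to 15 replicas
    - carbonIntensityThreshold: 579  # intensity up to 579 and beyond:
      maxReplicas: 1                 #   cap at a single replica
```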
Here is pretty much where we say that we want this particular application to be scaled up and down. Next, we have the carbon intensity forecast. With the KEDA carbon-aware operator, you are able to plug directly into third-party providers and extract this emissions data in real time; some of these providers are WattTime and Electricity Maps. For this particular example, however, I have used mock data, so my mock carbon forecast variable is set to true. And lastly, towards the end of the spec is where we describe the scaling behavior of our application.

The main idea is that if the carbon intensity is high, we want the lowest number of replicas: a high carbon intensity means more emissions, and we don't want that, so we reduce the number of replicas. And vice versa: when the carbon intensity is low, we want to increase the number of replicas. For this particular example, we want to have 15 replicas if our carbon intensity is 543 or less, and we want to reduce the number of replicas to one if our carbon intensity reaches 579 or higher.

After deploying this particular CRD, we are able to visualize the scaling behavior of our application in this graph. The top line represents the carbon intensity, and the yellow line at the bottom represents the number of replicas. We start with a carbon intensity of 540 and 15 replicas. Towards the middle of the graph, you can see that our carbon intensity goes higher, to 580, and the number of replicas drops to one; this kind of behavior is replicated further on, based on the carbon intensity data we are pulling from the grid.

And this brings me to the conclusion for today. Thus far, we have talked about cloud sustainability and why it is important for the tech sector to introduce sustainability as part of its day-to-day decision-making. Next, we looked into how we can build this inventory of our emissions
by using a tool such as Kepler. But more importantly, we can actually take direct action within our clusters on how we scale our applications based on carbon intensity, using the KEDA carbon-aware operator. All of this has been possible because the cloud native landscape is growing, and it is at a point where it embraces new tooling that allows us to innovate for green technology while being aware of our carbon emissions and environmental impact.

It goes without saying that we are hiring. If you would be interested in working for Apple, anywhere in the world, I definitely encourage you to go to jobs.apple.com, where you will be able to find the latest openings. I am more than happy to answer any questions about the recruitment process or the experience of working at Apple. If you have any questions about today's talk, I am more than happy to answer them afterwards, or you can find me on social media such as Twitter and LinkedIn.

This is Katie Gamanji, and I look forward to seeing how you can shape the cloud native ecosystem. Thank you, and enjoy the rest of the conference.