So good afternoon, everyone. Thanks for joining me today to talk about the energy consumption of continuous deployment methodologies. I'm Al Hussein, a DevOps engineer working for Tetra Pak. Prior to joining Tetra Pak, I completed three master's degrees in pervasive computing and communications for sustainable development, so you can see the connection between my studies and this topic.

To give some context for this talk: according to Intel, more than 50% of greenhouse gas emissions come from inefficiencies in infrastructure and software. That sounds alarming, and it's why we need to focus more on testing and measuring the power consumption of our infrastructure in order to make improvements.

Here is an overview of the deployment tools used in this experiment, or study. The first is a traditional continuous deployment tool, with GitHub Actions as the example. Basically, it's a CI/CD platform for building, testing, and also deploying software. To run those pipelines, we need something called runners: the machines where the tasks are executed. Running those tasks in a container is currently not supported natively; however, the Actions Runner Controller, which is open source, makes it possible to do so. The other methodology is GitOps, and the example GitOps tool is Argo CD. It's a Kubernetes controller that deploys applications to Kubernetes declaratively, and it's also part of the Argo family.

Before talking about the experimental conditions and the test bed setup: first, why Kepler? Well, there are some tools to measure the energy consumption of the underlying infrastructure, but they do not provide direct measurements for the workloads running on Kubernetes, except Kepler, which is the reason I chose it. It also logs metrics at the pod and container level.
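The declarative GitOps deployment described above can be sketched as a minimal Argo CD Application manifest. This is an illustrative sketch: the repository URL, path, and application/namespace names are placeholders, not the actual configuration used in the study.

```yaml
# Hypothetical Argo CD Application manifest; repo URL, path, and names
# are placeholders for illustration only.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sample-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/sample-app.git  # watched Git repository
    targetRevision: main
    path: manifests
  destination:
    server: https://kubernetes.default.svc  # deploy to the local cluster
    namespace: sample-app-argocd            # a dedicated namespace per tool
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift back to the Git state
```

With `syncPolicy.automated`, a commit to the watched repository is enough to trigger the deployment, which is the trigger used in the experiment.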
Kepler uses eBPF, the extended Berkeley Packet Filter, to probe system stats with low runtime overhead, and exports the energy-related data as Prometheus metrics.

Regarding the test bed setup: it consists of a machine running Ubuntu 22.04, on which a Minikube cluster is created. The components used for the experiments are kube-prometheus, to get both Prometheus and Grafana for monitoring and visualization, and of course Kepler, plus Argo CD and the Actions Runner Controller as the two continuous deployment tools. To ensure consistency, I used the same sample application, deployed by both tools, and ran the experiments multiple times using a script, to ensure that the pattern remained consistent throughout the experiments.

As for the experimental conditions: the developer commits a change to Git, and that triggers the deployment with both tools. They deploy the same application, but to two different namespaces, just to manage them separately while keeping the workload consistent.

Next, the results. For GitOps with Argo CD, we can see that it starts at around 0.1 W, which is the idle state for both tools before deploying any application, just to ensure the starting point is in a good state. Then there is a spike when the application gets deployed, with about 12 minutes between the actions: first deploying the application, then a rolling update followed by a rollback, and at the end of the hour, approximately, the application is deleted. After about 10 minutes, it averages around 0.35 W for the rest of the hour. This can be attributed to the application controller of Argo CD, which, as we can see, is the component that consumes most of the energy here.
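Pod-level power figures like the ones above can be derived from Kepler's exported counters with a PromQL query along these lines. `kepler_container_joules_total` is the aggregate energy counter Kepler exports, but metric and label names can differ between Kepler versions, so treat this as a sketch to adapt:

```promql
# Average power (watts) per namespace over the last 5 minutes.
# rate() over a joules counter yields joules/second, i.e. watts,
# so the two tools' namespaces can be compared directly.
sum by (container_namespace) (
  rate(kepler_container_joules_total[5m])
)
```

Grouping by `container_namespace` works here because each deployment tool deploys the sample application into its own namespace.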
As for the Argo CD repo server, there are spikes whenever there is a change in the Git repository, as it polls the repository periodically. For GitHub Actions, we can notice somewhat lower consumption in the idle state, where there is no activity, but the energy consumption is clearly higher when there is activity such as deploying, upgrading, or even deleting the application; it can be almost twice that of Argo CD.

Additionally, I explored two of the common architectures for Argo CD: standalone and hub-and-spoke. Standalone deploys the application to the same cluster where Argo CD is installed, which is Minikube, while hub-and-spoke deploys the same application to multiple clusters: two IKS clusters for this experiment, plus the Minikube cluster. Here I leveraged ApplicationSet, which allows deploying the same application to multiple clusters concurrently. The difference in energy consumption is almost a factor of two: hub-and-spoke consumes roughly twice as much as standalone. This is natural, because with more clusters to manage the energy increases; however, not linearly.

Just for closing remarks: it is important to measure the energy consumption of our applications, and also to choose a tool that is designed for the purpose we intend to use it for. And it would be great if more folks joined the Green Reviews Working Group of the CNCF Environmental Sustainability TAG. That's about it, folks. Your feedback is highly appreciated. Thank you.