Welcome everyone, and thank you for joining us today on our panel on moving towards environmentally sustainable operations with cloud-native tools. I'm Niki Manoledaki. I'm a software engineer at Weaveworks, where I work on open source cloud-native tools and advocate for environmental sustainability initiatives. Today I'm joined by William Caban, Global Telco Chief Architect at Red Hat, with over 20 years of experience architecting telco solutions. We have Marlow Weston, Cloud Software Architect at Intel, who is also chair of the CNCF Environmental Sustainability TAG and works on resource management for Kubernetes. And we have Chris Lavery, Customer Reliability Engineer at Weaveworks, who currently works with Thurcha Telecom and is a champion for GitOps operations and patterns.

We will focus mostly on carbon and energy optimizations, since that is what we need to do, but we will also touch on cost, regulation, and carbon and energy issues to state the problem. We will mention some communities in the CNCF ecosystem that you can join to continue the conversation, and throughout we'll point to cloud-native tools that can help with all of this.

To ground ourselves with some numbers: the IPCC has urged global policy makers and industry leaders to limit global warming to 1.5 degrees Celsius above pre-industrial levels; we are currently at an increase of 1.2 degrees Celsius. The IPCC has also urged them to reach net zero by 2050. The International Energy Agency reported in September of this year that data centers and data transmission networks each account for 1 to 1.5 percent of global electricity use, and the GSMA reported in 2019 that the telco industry consumed 2 to 3 percent of global electricity.

With that in mind, William, how does the makeup of energy sources impact CO2 emissions on a global scale?

Let's put this into perspective from the telco side. If we consider that as a telco industry we account for 2 to 3 percent of global energy consumption, about 70 percent of that comes from the RAN, most of it in the antennas and the DU. Being telco also means we are highly distributed by definition. So the energy sources have the following impact. If we were running on regular natural gas, that is about 0.9 pounds of CO2 per kilowatt-hour. The same DU running on petroleum is at 2.13 pounds per kilowatt-hour, roughly 2.3 times more CO2, and with coal it is closer to 2.5 times. That gives you an idea of the impact of just changing the energy source. There are areas where, unfortunately, due to the realities of the country, this is hard to change; in China, for example, where about 60 percent of the power comes from coal, achieving goals like net zero by 2050 is going to be far more challenging. So that's how the composition of the different power sources affects CO2 emissions in general.
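To make that fuel-mix arithmetic concrete, here is a minimal sketch using the commonly cited US EIA emission factors for electricity generation. The 500-watt DU draw is an invented figure for illustration, and the coal factor is filled in from the same EIA tables since only its ratio was quoted above:

```python
# Rough CO2 arithmetic for the fuel-mix comparison above.
# Factors are in pounds of CO2 per kWh of electricity generated.
EMISSION_FACTORS_LB_PER_KWH = {
    "natural_gas": 0.91,
    "petroleum": 2.13,
    "coal": 2.23,
}

DU_POWER_KW = 0.5        # hypothetical DU drawing a steady 500 W
HOURS_PER_DAY = 24

for fuel, factor in EMISSION_FACTORS_LB_PER_KWH.items():
    daily_lb = DU_POWER_KW * HOURS_PER_DAY * factor
    ratio = factor / EMISSION_FACTORS_LB_PER_KWH["natural_gas"]
    print(f"{fuel}: {daily_lb:.1f} lb CO2/day ({ratio:.1f}x natural gas)")
```

Running this gives roughly 10.9 lb/day on natural gas versus 25.6 lb/day (2.3x) on petroleum and 26.8 lb/day (2.5x) on coal for the same workload.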
Thank you. And Marlow, what are some examples of these existing challenges?

Some of the challenges right now involve finding where the power is available. In Texas we have a whole bunch of wind energy out in West Texas, but no transmission lines to carry it across the state. So right now people are looking at building data centers right next to those sources of energy. Look at Iceland: as of last week they had advertisements saying, build your data center here, because they have hydropower and you don't need much cooling, and cooling takes a huge amount of your data center power. So there are interesting challenges in getting green energy to where it needs to be, and building data centers closer to it may be the more sensible thing in the future.

And compliance with CO2 emissions regulations is another challenge, but also an opportunity for telcos to make a substantial difference. William, what do we mean by scope one, scope two, and scope three emissions?

Sure. Last year the EU adopted a new regulation under which companies need to be reporting scope one and scope two by 2025. In the US, anything that is publicly traded needs to report by 2024, covering what happened in 2023. But what are these scope one, two, and three emissions? Scope one is anything the organization controls or owns, from cars to data centers to cloud services it controls; not necessarily SaaS, but if you're doing IaaS, yes, so it's an interesting area there. Scope two is the CO2 emissions that result from generating the power the organization consumes. And scope three refers to everything else the organization consumes and the CO2 emitted by it. If it's a SaaS service, everything that service generates in order to provide it is part of scope three. Even the travel we take, or transmission lines that belong to a third party, or satellite connections: all of those CO2 emissions are scope three. And guess what, this started last year, and anything in the US needs to report by 2024. It caught the industry by surprise, and there are pretty much no tools we can use to report on this properly for next year. That's the situation we are in right now.

And what are some examples of regulations and policies in place that aim to tackle CO2 emissions compliance? The European ESG reporting rules cover the European Union, and in the US it's the SEC that is enforcing this. In both cases it applies to any publicly traded company or organization. It just happens that, contrary to global financial services, which have been preparing for this for years, telco was not all ready; some telcos are doing excellent work, but not all of them were prepared to start with this.
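To tie those definitions to numbers, here is a minimal sketch of the scope two arithmetic, with both input figures invented for illustration:

```python
# Minimal sketch of scope 2 accounting as described above:
# emissions from generating the electricity the organization buys.
# Both figures are illustrative, not real reporting data.

annual_consumption_kwh = 2_000_000    # hypothetical data center draw
grid_intensity_kg_per_kwh = 0.4       # hypothetical grid average

scope2_tonnes = annual_consumption_kwh * grid_intensity_kg_per_kwh / 1000
print(f"Scope 2: {scope2_tonnes:,.0f} tonnes CO2e")

# Scope 1 would instead cover fuel the organization burns directly
# (e.g., backup diesel generators), and scope 3 everything upstream
# and downstream it consumes (SaaS services, travel, third-party links).
```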
Now that we've defined the terms and the problem statement, let's discuss some optimizations related to energy and carbon use. But first, Marlow, do you think that environmental sustainability and performance are at odds?

Not today. We currently have so many inefficiencies within Kubernetes that we need to focus more on resource management and how we handle it internally. We're nowhere near utilizing cores at their full capacity, so we're running a lot of servers we don't need to have. If we could instead scale back, figure out which servers we need, and optimize the resources we have, we'd actually be saving energy. At some point you are going to have to make a choice between performance and sustainability, but we're nowhere near there, at least not today. Take cooling, for instance: if you have hotspots in your data center and you're not scheduling correctly, then precisely because we're so far from full utilization, you could schedule a little more intelligently and keep your average temperature down so you don't have those hotspots to cool. So eventually, yes, sustainability and performance will be at odds, but not today.

What about energy and carbon optimizations? Are they ever at odds?

They can be. If you're looking at hardware optimizations, you want the newest, greatest, most energy-efficient gear, but that means replacing old hardware. So what is the embodied carbon cost of the new hardware versus the carbon cost of the old hardware you're replacing, and how long is it going to take you to even that out? Your gas-guzzling car may still be the better choice compared to buying a new car, once you account for all the resources that go into building one.
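A rough way to check that trade-off is a break-even calculation: how many years of operational savings it takes to repay the embodied carbon of the new hardware. All numbers below are hypothetical:

```python
# Back-of-the-envelope version of the hardware-refresh question:
# how long until a more efficient server "pays back" the embodied
# carbon of manufacturing it? All numbers are made up for illustration.

embodied_new_kg = 1500.0     # manufacturing footprint of the new server
power_saved_kw = 0.15        # old draw minus new draw at typical load
grid_kg_per_kwh = 0.4        # carbon intensity of the local grid

saved_per_year_kg = power_saved_kw * 24 * 365 * grid_kg_per_kwh
print(f"Break-even: {embodied_new_kg / saved_per_year_kg:.1f} years")
# ~2.9 years with these inputs; on a cleaner grid the payback stretches
# much longer, which is exactly the gas-guzzler trade-off above.
```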
And now let's talk about optimizations for carbon and energy at the node level. I think it's Marlow, Chris, and then William.

As far as carbon and energy optimizations at the node level, the kubelet currently doesn't make it easy to do power optimizations, and it's still really inefficient in terms of core utilization. So we need to look at better tools; we basically need to make cores easier to utilize. I have a plug-in model that I can talk about offline with people later. Nokia has a CPU Pooler, Intel has the CRI Resource Manager, I have a CPU resource management tool; there's a whole bunch of tools, but they're all external to the kubelet. Those optimizations need to be moved into the kubelet.

I'm probably going to take a slightly different view on virtualization and runtimes, and talk a little about the need to address the VM construct as we traditionally use it in telco and VNFs. That construct hasn't really changed in a long time; it retains a lot of the legacy of emulating physical computers, like peripherals and BIOS routines. Those things all made sense when you needed to virtualize a broad multitude of systems, but if we're just talking about running CNFs and telco software inside Kubernetes, it's much more specific in purpose. What we've seen at Weaveworks is that this frames a problem in virtualization with interesting places to target, which we could call boot and bloat. Micro VMs are a response to that aspect of resource usage, through speed and through minimalism: they streamline the boot process by removing things like BIOS routines and unnecessary peripherals, and the footprint and overhead are dramatically reduced. The solution we've come to is called Liquid Metal, an open source project at Weaveworks which allows you to use the same tools to build and run VMs in conjunction with something like Cluster API. It's not opinionated about the runtime, so you can choose which micro VM runtime you prefer, and you can bring the kernel and distro you prefer for your software stack. You can create greater efficiencies by using something with a much smaller footprint, for example in how much resource your control plane takes up, or when you want to repurpose, reuse, or repave a whole physical segment of a data center with Kubernetes without using a traditional heavy hypervisor.

I will take a different approach to the node level, and call out: let's stop the madness for a second. I'm active in O-RAN, and I've been active in TIP, not so much these days, and when we talk about power optimization, everyone starts saying, yes, let's do this in the DU so we can shut down these cores, and let's shut down those parts of the antennas. Perfect, but the antennas are in the control of whoever provides the antenna, so as a community we have zero say there. So what can we control? Let's go to the node level. And what happens at the node level is: fine, shut down everything and leave the node idle. How many of you know about the concept of a node power profile, the minimum consumption a node will have even with no workload on it? The reality is that it's about 40 percent of the rated power supply. So if you have a power supply rated for, let's say, 1,400 watts, that means at least around 560 watts will be consumed even if that node is doing nothing. So let's go back: all this time we spend shutting everything down so we utilize only a fraction of that node, what are we really gaining? The tools always show something, but when we measure from the wall, for example by going to the BMC and measuring the power supply consumption, there is no reduction beyond those limits. So yes, we can do optimizations, but we also need to be conscious that after a certain threshold, further optimizations are just consuming effort with no gain anywhere. I'm inviting the industry: let's be conscious of this and find better ways to expose the metrics of what the minimum consumption of a node really is, so we can optimize for that. I have to call out, as a very good example, recent work I've seen from Dell: you can now query the iDRAC and it actually tells you, based on this node, you will have a minimum of this many watts in idle mode, and based on the hardware in it, this maximum. I can feed that to a machine learning model and optimize for it, and I don't have to worry about shutting down every single core. So that's a different way to look at this.
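For measuring from the wall as William suggests, most modern BMCs, including iDRAC, expose the DMTF Redfish API. Here is a minimal sketch; the BMC address, credentials, and chassis ID are placeholders, and while the property names follow the standard Redfish Power schema, vendors vary in the details:

```python
# Sketch of reading wall power from a BMC over Redfish.
import requests

BMC = "https://bmc.example.internal"          # hypothetical BMC address
AUTH = ("metrics-reader", "not-a-real-pass")  # placeholder credentials

resp = requests.get(
    f"{BMC}/redfish/v1/Chassis/System.Embedded.1/Power",
    auth=AUTH, verify=False, timeout=10,
)
resp.raise_for_status()
for ctl in resp.json().get("PowerControl", []):
    print("Consumed watts:", ctl.get("PowerConsumedWatts"))
    print("Capacity watts:", ctl.get("PowerCapacityWatts"))
    # Comparing consumed watts against ~40% of rated capacity shows
    # how close the node already is to its idle floor.
```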
Thank you all. And what optimizations are possible at the physical connectivity infrastructure or at the workload level that haven't been mentioned yet? I think it's first William, then Marlow, and Chris will talk about hypervisors.

Let me go back to the connectivity side. On connectivity we're doing a lot, and we recognize as an industry that, for example, moving to fiber will help quite a bit. But on that side as well, let's be conscious of the whole picture: remember scope one, two, and three, not just scope one. Take hard disks and storage. Everyone knows that a spinning disk consumes more power than an NVMe drive or SSD; sure, that's the scope one and two view. Now look at scope three: the spindle comes to about three kilograms of CO2 per terabyte of embodied emissions, while the SSD is about 20 kilograms per terabyte. So again, let's be more conscious when choosing those optimizations.

A lot of companies are using hypervisors to simplify the resource optimization problem, but that doesn't necessarily make things better, because the hypervisor itself still costs something like 20 percent of your resources. So you're still paying a resource cost when you use a hypervisor.

Do you have more comments? Yeah, I think in conjunction with that there are other extended costs, where we may be doing things in software that we could reduce or offload into more performant areas of hardware going forward, things like SmartNICs, being able to utilize PCI passthrough, and more modern software solutions that reduce the overall overhead, not just of hypervisors, but of the actual application running within the guest. So there's merit in exploring both paths, and what we really want to find is an optimum. We've both discussed that there's the opportunity to remove the hypervisor altogether and use another kind of construct that is as effective and representative, but from my perspective, certainly inside a large operator, that's very difficult to do with traditional means. We have to map everything to the new model, and we end up accommodating all of them; that will make sense to anyone who is running OpenStack and VMware and physical and maybe a fourth model besides. So we want to find the best blend, sustainably speaking, of where the problem is most elegantly solved: in specialized NICs, in a host model that's as performant or more performant than a hypervisor model, or in using a more performant hypervisor in the first place, although I think you know better than I what that means.

Yeah, so hypervisors have a lot of constructs that are left over from when we were using them as full virtual machines. We should start looking at a model where we can wrap up a set of resources, drop them into a container-like object, a capsule, and just run with that, instead of continuing with this model where we have to manage all the resources within each VM. But we don't have that yet.
So what are some of the best techniques and tools that exist currently for carbon- and energy-aware scheduling and smart workload placement? William first, then Marlow, and then Chris.

Best techniques? I don't know, but I know about tools that are trying to solve this problem, some more cloud-native than others. Part of what my team has been working on with partners is, for example, doing workload placement on the MEC based on where the 5G slices are actually being used, compared to the traditional model of pre-provisioning the services that 5G slices will consume across a whole area. Instead of pre-provisioning everywhere, we do just-in-time provisioning. If I have, let's say, a drone or a tractor that is moving from one area of coverage to another, and we can track that properly, we can provision the workload closer, in time, when we know it's going to be in the area, instead of the traditional model where most of the time today you pre-provision the workload or service in every possible location where you want that service enabled. So that's a technique that is used and works well.

To add on to that: Kubernetes natively doesn't have the ability to schedule by particular metrics on the system, so we need to look at scheduling models that let you schedule according to various metrics. That could be power or temperature, and I'm sure there are other cases, such as what workloads are present, or predicted temperature if your workload is going to increase it. So we need to look a little bit more at scheduling patterns.

And to extend on that: carbon intensity data for electrical grids around the world is available through API providers like WattTime. They provide a marginal operating emissions rate, a value representing the carbon emitted to produce a megawatt-hour of energy, so the lower the value, the cleaner the energy. Schedulers and autoscalers can leverage those APIs to provision and scale workloads wherever is most carbon-friendly, particularly for non-critical workloads like CI testing and QA staging. In the future we could potentially look at those same carbon-intensity APIs for deploying production workloads as well.

I'd like to add one thing to that: on AWS and other cloud providers, we don't have accurate data on carbon emissions per region right now, which would allow us to do carbon-aware scheduling for workloads running in, for example, EKS. That's missing, but APIs such as WattTime and Electricity Maps could help with it. One particular shout-out I would like to give is to the eu-north-1 region: comparing different regions for their carbon emissions, that one stands out because it runs in Sweden, I believe, which has quite a lot of its grid running on renewables. So there is information out there that you can gather right now, even without API endpoints you can consume to schedule your workloads on cloud providers.

So, what cloud-native tools would you recommend for these use cases and optimizations? There's a list of tools that we would like to mention. Chris, please.

Yeah, sure. I've looked at Karpenter, and I think it's a great example of something that takes an event-driven approach to right-sizing and allocating a dynamic amount of workloads to node pools. In sustainability terms, we could use the concept of pools to create a model that accommodates different qualities, depending on what the workload looks like and whether or not it can run on a more efficient or more sustainable instance. It's a unique perspective on that problem set. The way it works is by observing the aggregate resource requests of unscheduled pods and making decisions to launch and terminate nodes to minimize scheduling latency and infrastructure cost. So it lowers cluster compute costs by looking for opportunities to remove underutilized nodes and replace expensive nodes with cheaper alternatives. And in sustainability terms, we can ideally map some of that "cheaper" to "more environmentally friendly" as well as cost-effective. Whilst that won't necessarily be one-to-one, there's going to be opportunity; it's a flexible enough tool to make it sustainability-centric, not only cost-centric.
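As a concrete version of the carbon-aware gating Chris described, here is a minimal sketch that defers non-critical work when the grid is dirty. The endpoint and response fields are hypothetical stand-ins for a real provider such as WattTime or Electricity Maps, whose actual APIs require accounts and differ in shape:

```python
# Sketch: gate deferrable work (CI runs, batch jobs) on grid carbon
# intensity, choosing among candidate regions.
import requests

CARBON_API = "https://carbon-intensity.example.com/v1/current"  # hypothetical
THRESHOLD_G_PER_KWH = 300   # run only where the grid is cleaner than this

def grid_is_clean(region: str) -> bool:
    data = requests.get(CARBON_API, params={"region": region}, timeout=10).json()
    return data["carbon_intensity_g_per_kwh"] < THRESHOLD_G_PER_KWH

regions = ["eu-north-1", "us-east-1"]   # candidate regions
clean = [r for r in regions if grid_is_clean(r)]
print("Schedule deferrable jobs in:", clean or "nowhere right now; wait")
```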
There are other tools to discuss. There's kube-batch, which is a queuing system you can use for time-of-day scheduling: you choose when to schedule your workloads, which is especially useful for high-performance or AI/ML workloads. There's one that's not on this list that I'm going to give a shout-out to because we just released it, called Intent Driven Orchestration, which lets you schedule your workload according to intent. We're looking for feedback at Intel on that; it's open source and was released on Friday. That particular one lets you say, I want something with low latency, and it schedules you accordingly. The example we have running is specific to performance, but you could do something similar specific to power. There's Telemetry Aware Scheduling from Intel, which I know William is using in some of the prototypes he's architecting; it lets you take on-node metrics and make scheduling or descheduling decisions based on them. That could be power, temperature, or other things. And the other item we're currently working on internally, which we'll release in a couple of weeks (I was hoping we'd make it by now, but we didn't quite make it on time), is a CPU control plane that gives you finer-grained control of your cores than what you have today. It's semi-pluggable; you still have to turn off the default CPU-management machinery like you do with every other CPU manager, but it's hopefully a little easier to use.

Some other tools. KEDA allows you to do horizontal pod autoscaling based on metrics, and right now we have two different POCs doing CO2-emissions-based horizontal autoscaling on top of it. And Kepler, the tool you see there at the bottom, is a tool we have been working on at Red Hat together with others like IBM Research, Intel, and Weaveworks, which allows us to measure the actual energy consumed by a pod, whether the pod is running on bare metal, on virtual machines, or in the cloud. You can now see, down to the level of millijoules, how much power is being consumed, and with that information, plus the APIs for CO2 emissions based on the power source, we can produce much more accurate CO2 emissions metrics. We have been experimenting with this on CNFs; because of the nature of CNFs, some are hard to autoscale, but for the UPF, for example, it actually works, and for any type of CNF where we have more control over the scheduling, it works pretty well.
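As a sketch of how Kepler's counters become CO2 numbers, here is a Prometheus query over the per-container joules metric. The metric and label names match recent Kepler releases but should be verified against your deployment, and the grid intensity here is a fixed illustrative constant rather than a live emissions API:

```python
# Sketch: turn Kepler's per-container energy counters into a CO2
# estimate by querying Prometheus over HTTP.
import requests

PROM = "http://prometheus.monitoring:9090"   # hypothetical in-cluster URL
GRID_KG_PER_KWH = 0.4                        # illustrative grid intensity

# Joules consumed per pod over the last hour.
query = "sum by (pod_name) (increase(kepler_container_joules_total[1h]))"
results = requests.get(
    f"{PROM}/api/v1/query", params={"query": query}, timeout=10
).json()["data"]["result"]

for series in results:
    joules = float(series["value"][1])
    kwh = joules / 3.6e6                     # 1 kWh = 3.6 MJ
    grams = kwh * GRID_KG_PER_KWH * 1000
    print(f"{series['metric'].get('pod_name')}: {grams:.3f} g CO2")
```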
There's also the AWS customer carbon footprint tool, which is not a cloud-native tool per se, so maybe it doesn't belong on this list, but it does merit a shout-out because it's available globally and free to use. Every cloud provider now has a tool like this that lets you visualize the carbon emissions of your cloud resources, and it's great for a yearly roundup. So whichever provider you're using, look at what they provide, and if you're not satisfied with it, then please, as customers, be vocal about it.

And how can our audience today get involved, learn best practices, and continue the conversation through communities in the CNCF ecosystem? Marlow.

So, we just formed a new TAG, the Environmental Sustainability TAG at the CNCF, and we're looking for people to help. We're currently putting together a landscape document of all the current tools available. It is not complete, and we would like help, especially from people who have their own specialty tools, because there's a lot out there. There are also meetings every other Wednesday; the next one is this coming Wednesday at 8 a.m. Pacific, 10 a.m. Central, 11 a.m. Eastern (I'm usually translating across multiple time zones). You're welcome to come to that, contribute, and talk to us. There's a Slack channel, a mailing list, and a Twitter account; we don't have a website yet, but we're heading in that direction. And that body will also direct you to other projects you may be interested in, including Kepler, which I know is up for CNCF Sandbox.

I can mention the Green Software Foundation. They have something called the SCI, the Software Carbon Intensity, an index of the emissions caused not only by a workload, but by everything that workload consumes. They also have a CO2 pipeline project that lets you do static analysis of code, and based on that analysis it predicts the CO2 emissions the software will be responsible for. By the way, it would be great if CNFs could start providing that information, so we don't have to guess it. And there's the Carbon Aware SDK, which is also part of the foundation. Do you have information on that? I haven't played with it myself. It's in the standards space, so they're looking for more contributions, but it's fairly mature at this point. The person heading it is over at Intel, and he's been pushing it.
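For reference, the SCI defined in the Green Software Foundation's specification is ((E * I) + M) per R, where E is energy, I is grid carbon intensity, M is embodied emissions, and R is a functional unit you choose. A minimal worked example with invented inputs:

```python
# SCI per the Green Software Foundation spec: SCI = ((E * I) + M) per R.
# All inputs below are invented for illustration.

E = 120.0        # kWh consumed by the workload over the window
I = 400.0        # g CO2e per kWh for the grid powering it
M = 15_000.0     # g CO2e of embodied emissions amortized to the window
R = 1_000_000    # functional unit: requests served in the window

sci = (E * I + M) / R
print(f"SCI: {sci:.3f} g CO2e per request")   # 0.063 g per request here
```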
Yeah, so what I would say is sustainability-adjacent is the FinOps Foundation. If FinOps is a new term to you, I've taken this, I think, verbatim: FinOps is an evolving cloud financial management discipline and cultural practice that enables organizations by helping engineering, finance, technology, and business teams to collaborate on data-driven spending and budgeting decisions. It's a practice of cloud financial management focused on evolving standards by empowering the community with education and advocacy. A few statistics: it has 7,300-plus individual members and 2,500 companies enlisted, so there's certainly a sizable amount of weight behind it as a movement. And the high correlation between cost and environmental sustainability means there's heavy overlap between the two topics. So in many ways, FinOps is a good foundation for sustainability: FinOps creates organizational awareness and benefits that a sustainability perspective can overlay and build on top of those tenets.

And last but not least, there's also the GitOps working group, which is part of the CNCF. We're just creating a subgroup for environmental sustainability to look at how GitOps can be used to facilitate the deployment of green tools, and to schedule, configure, and make them available in all environments. We're also going to look at using Kepler with GitOps as a use case: gathering energy data, comparing different GitOps-based architectures with Kepler, and looking at how GitOps can help with different power models, like turning things off (GitOps helps with that) and idle and low-power modes.

Lastly, as closing remarks, we will each highlight one ask or next step that we would like to see for moving operations in a sustainable direction.

My ask is for more carbon and energy data to be available from cloud providers. The AWS customer carbon footprint tool, for example, is console-only; there's no API for it. It would be really great to have API endpoints we can consume so that we can improve scheduling based on that information. That tool also has a lag of three months: the data about my carbon emissions today will only be available three months from now, so I don't have the ability to make optimizations based on it. And the other tools we have, like the WattTime API or Electricity Maps, are great, but they give estimates, and we can't rely on estimates alone. We need real data about carbon emissions and energy usage to be able to act on it. Chris?

Sure. I've been back and forth on what my big ask would be, and I think the biggest ask is really that those providing co-location infrastructure take a holistic look at their power sources and ask themselves, organizationally, whether they can either participate in enabling things like renewables in conjunction with building new data centers, or use existing renewable options. I know that's a really tall ask, but I think it's part and parcel of where things are at, and even though those decisions aren't software-centric, they are organization-centric, and they are front and center of the topic.

Okay, for me it's going to be metrics about metrics. In telco we measure everything, but a lot of it goes to a data lake that no one looks after. Think about this: we now have a ton of microservices and services, and a highly distributed 5G core. We capture everything, but who can tell me how much power is consumed by the communication between two of those CNFs? We have the data somewhere. So we need to create something like a specification for augmenting or enhancing our metrics data with, for example, the actual power consumption of a path, so we can use that as part of the decision to schedule or reschedule workloads based on the penalties or the goals of the organization. If the organization is trying to reduce CO2 emissions, that's one path; if it's trying to be more power-efficient, that does not necessarily mean less CO2. All of that, we know today, is possible, but we have no way to read or obtain that data. And if anyone knows of a project, or is interested in starting something on environmental sustainability, that's the type of tool I would love for us as a community to bring up.
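No such specification exists yet, but a toy version of the enrichment William is asking for might join traffic we already measure with an energy-per-byte coefficient. Everything here, especially the coefficient and the service names, is hypothetical:

```python
# Rough sketch of "metrics about metrics": estimate per-path energy
# by joining existing inter-CNF traffic data with an energy cost per
# byte. The coefficient is entirely made up; the point is the shape
# of the enrichment, not the value.

JOULES_PER_GB = 7.2e3   # hypothetical network energy coefficient (J/GB)

# Pretend these came from service-mesh or flow metrics we already have.
path_traffic_gb = {("amf", "smf"): 12.4, ("smf", "upf"): 310.0}

for (src, dst), gb in path_traffic_gb.items():
    kwh = gb * JOULES_PER_GB / 3.6e6   # 1 kWh = 3.6 MJ
    print(f"{src} -> {dst}: {gb} GB, ~{kwh:.4f} kWh on the wire")
```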
And from my part, I'd like to see smarter cooling models. Currently we optimize cooling according to voltage use: we just blast until the chips are cool enough. Sometimes we have direct-to-chip cooling, and that helps, but you still end up with hotspots in your data center. So we need to get a little bit more intelligent about how we do cooling. Cooling is overwhelmingly 30 to 55 percent of a data center's energy use, which is a huge percentage; it's much more than your Xeons, it's more than your network. If you're looking for easy places to optimize, that's where I would start looking first.

I think we have some time for Q&A, right? We have about eight minutes. Before we move on to that, I'd like to mention that we have a survey for the Environmental Sustainability TAG. You can scan the QR code up there, and we would really love some responses from all of you in the audience, especially telco-related information; that would really help us. So, we have seven minutes for Q&A.

Thank you, very interesting panel discussion. I just want to ask a maybe stupid question: is it all about CO2 emissions? If you think about data centers, you can put them in a cold place like Iceland, or under the ocean, or wherever, but they're still generating heat. Heat warms the oceans, melts the polar ice caps, et cetera. So do we need to think beyond CO2 emissions, about heat as well?

Absolutely, absolutely. Unfortunately, on the other side, telcos are driven by regulations, and what's driving everything right now is a regulation that everyone needs to comply with, which is reporting CO2 emissions. Personally, I don't believe in the financial tricks of reporting CO2 emissions, like buying offsets; we're just offsetting the problem. Energy efficiency, for me, is far more important at this time, and not because it necessarily leads to power reduction: if we go just by the numbers, 5G is five to ten times more power-efficient than 4G, but on the other hand, it requires up to 80 or 100 times more antennas to cover the same area. So I would like to tackle that type of problem more than CO2 alone, but CO2 is what drives the need, or the activity, today. Anybody else want to take that on? Do we have other questions from the audience?

Maybe this is also a bit of a silly question, but don't you think that clouds and all of these abstraction layers are in contradiction with sustainability? With more and more abstraction layers, the solution becomes less and less efficient.

We're really inefficient when we're talking about cloud. So I'm going to kick that question down the road and ask that we get better at using the resources we have. We're still building bigger and bigger data centers because we're trying to take in more data, but we're not fully utilizing our compute power. If you look at the studies on it, we're using something like 40 percent of the compute power at best, and that doesn't include optimizations. So if we start looking at optimizations, maybe we don't need more compute power. Maybe we can just keep going with what we have, but actually use what we have.
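One starting point for "use what we have" is simply comparing CPU requests to allocatable capacity per node, to spot servers that could be consolidated. A minimal sketch with the Kubernetes Python client, assuming kubeconfig access to the cluster:

```python
# Sketch: flag underutilized nodes by comparing CPU requests against
# allocatable capacity. Requires the `kubernetes` Python client.
from collections import defaultdict
from kubernetes import client, config

def cpu_to_cores(v: str) -> float:
    # Kubernetes CPU quantities are either millicores ("500m") or cores ("2").
    return float(v[:-1]) / 1000 if v.endswith("m") else float(v)

config.load_kube_config()
v1 = client.CoreV1Api()

requested = defaultdict(float)
for pod in v1.list_pod_for_all_namespaces().items:
    if pod.spec.node_name and pod.status.phase == "Running":
        for c in pod.spec.containers:
            requested[pod.spec.node_name] += cpu_to_cores(
                (c.resources.requests or {}).get("cpu", "0")
            )

for node in v1.list_node().items:
    alloc = cpu_to_cores(node.status.allocatable["cpu"])
    pct = 100 * requested[node.metadata.name] / alloc
    print(f"{node.metadata.name}: {pct:.0f}% of allocatable CPU requested")
```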
And just to extend that a little bit, I think another part of it is front-loading: the traditional physical-infrastructure model, the vertical monolith, big-iron kind of thinking, versus the more distributed, dynamic, elastic cloud that we want at the node level or the service level. It's a good way of thinking about utility, because we invest a lot even just in the physical infrastructure, and then it stays underutilized for potentially months, if not years. Whereas if we can reach operating models where we actually distribute that across the nodes, distribute different layers of functionality at the node level, and manage it, then the utility on a per-node basis is higher, and that is ultimately probably going to work out as more sustainable.

This is, sorry, in a similar vein: is anybody else besides William involved in some of the standards bodies' working groups? William, I'm kind of curious, from an O-RAN perspective, what kind of comparisons are being done power-consumption-wise between an O-RAN deployment's footprint and a traditional RAN model?

So in O-RAN in particular, there are three efforts I'm aware of right now. Working Group 4, which looks at it more from the network perspective, is about shutting off the antennas and so on. Working Group 6, on the O-Cloud side, is about how the O-Cloud can, for example, shut off cores and be more optimal there. And the third one is about orchestration, but I think that's still at the research stage, not yet an active area. Beyond that, there's a new group, not a working group itself, about next-generation research for O-RAN, in particular 6G and beyond. There have been some discussions in that area, but not a direct comparison of a traditional model versus Open RAN versus O-RAN, and what exactly that would look like. A proprietary, specialized chipset can always be optimized more than a generic COTS platform; that's a reality we have to live with. We probably just need to be far better at asking our CNF vendors to adopt cloud-native constructs for real; this is, like, for real, do cloud native. That will help.

I think that's all the time we have for today. Thank you for joining us, thank you to our panelists, and let's continue this conversation. Thank you.