What we are going to talk about today is energy management at scale, and a little bit of the pathfinding that I and my team have been doing in the Network and Edge division of Intel, in the CTO office. I'm going to skip the abstract, which you can access through the site, but basically we will go through the motivation for the work I'm sharing here, then talk about what we call energy-proportional systems. Again, everything we're sharing today is pathfinding in the context of sustainability and energy from the Network and Edge division, and then I'll go into the closing and Q&A. I don't know how aware you are of what edge is, but I wanted to share just one slide on the different types of deployment models we are facing from an edge perspective: you go from small IoT devices all the way down to the data center. Traditionally, when I was doing my PhD 13 years ago, we had grid computing; then everything became cloud, and now we're expanding back again into highly distributed systems. Obviously, the farther you go toward the edge, the smaller the devices you typically find, with less power and more requirements or restrictions in terms of deployment model. One thing to keep in mind: we've been working on edge for five or six years, since all these buzzwords started, and lately I've started to use the "edge big bang" as a concept. If you look at 2016, we were doing deployments of 50 to 100 edges at most; now, in real deployment models with real customers at scale, we're talking about hundreds of thousands of edge locations. So what does this imply?
If you look at the context of current edge deployment models, there is really a trillion-dollar space in terms of the business aspect of the edge, so there is a change happening worldwide. But climate change is also a fact, and it's something no one can neglect. We've seen the floods of the last few weeks, with lots of impact: on business and the economy, but also on people, from the human aspect. And this comes with an interesting inflection point: we are moving from sustainability being sort of a buzzword, and a lot of greenwashing I might say, to something that is becoming an imperative for most companies. If you look at the GSMA report on sustainability from last summer, you will see that the top 30 telco companies are basically aiming for net-zero deployments by 2030 (I'll get into the "zero" comment in a few slides), and that is becoming a reality. So here is what we are looking into. When you look at sustainability and energy efficiency at a single point, optimizing energy for, say, one base station providing 5G or 4G connectivity, that's one thing; but when we go to the scale we are talking about in these deployment models, the challenge is obvious. So in the work you will see here, what we are proposing is what we call elastic systems and elastic infrastructure, which is basically about having your systems adapt in proportion to the demand that is happening for the services.
Typically when we do architecture, and I'm coming from the CPU world, you always look at the peak: you want to size the system to make sure you deliver peak performance. But the reality is that if you look at most edge workloads, and I'd bet my salary this holds for 80% of the workloads we have deployed, they have a non-stationary behavior: there will be some time at peak, but then there will be a lot of valleys and things that change over time. So the principle is: can we adapt the systems, software and hardware, to adaptively implement this proportionality? We will show a projection model during the presentation to give a sense of the numbers, of how much we could potentially gain by implementing those elastic systems. First I want to set the context with the three pillars behind this work. The first one: I don't know how familiar you are with the sustainability scopes, but there are basically three. The first is embodied carbon; for example, at Intel, how much carbon is associated with manufacturing the chips. The second is operational: I'm deploying a workload onto a system that is, say, doing video analytics for safety, and the question is how much energy that system consumes; that's scope 2. And scope 3 is operationally derived carbon: I need to replace a failing system, which means someone has to drive out there to do the maintenance, and that has an associated carbon cost.
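As a rough illustration of how these three buckets add up for a single edge site, here is a small sketch. All figures and parameter names are invented for illustration; they are not numbers from the talk or from Intel.

```python
# Hypothetical sketch: lifetime carbon for one edge site, split into the
# three buckets described above (embodied, operational energy, and
# operationally derived carbon such as maintenance truck rolls).
# Every number here is made up for illustration.

def site_carbon_kg(
    embodied_kg: float,                  # carbon embodied in manufacturing
    avg_power_w: float,                  # average draw of the deployed system
    years: float,                        # deployment lifetime
    grid_intensity_kg_per_kwh: float,    # carbon intensity of the local grid
    truck_rolls_per_year: float,         # maintenance/replacement trips
    kg_per_truck_roll: float,            # carbon per maintenance trip
) -> dict:
    hours = years * 365 * 24
    operational = avg_power_w / 1000 * hours * grid_intensity_kg_per_kwh
    derived = truck_rolls_per_year * years * kg_per_truck_roll
    return {
        "embodied": embodied_kg,
        "operational": operational,
        "operationally_derived": derived,
        "total": embodied_kg + operational + derived,
    }

breakdown = site_carbon_kg(
    embodied_kg=300, avg_power_w=250, years=5,
    grid_intensity_kg_per_kwh=0.4, truck_rolls_per_year=1, kg_per_truck_roll=50,
)
print(breakdown)
```

Even with these toy numbers, the operational term dominates for an always-on 250 W box, which is why the talk focuses on scope 2 and scope 3.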
So here we're going to focus on scope 2 and scope 3. One area where I think we, as an ecosystem and community, need to look is how we account for carbon, and being realistic about what we mean when we talk about zero-carbon or carbon-neutral deployments. If you ask me, it is almost impossible to have a 100% green or net-zero architecture, so it's very important that we are honest as an ecosystem and define the metrics for what "carbon zero" or "carbon neutral" actually means. The second pillar is green energy. There is an interesting Microsoft paper that talks about the myths around sustainability, and the second myth they list is that green energy for everyone is possible. This morning we had a talk on green energy, and we know that the distribution of energy is a challenge: not everywhere will there be enough energy to drive the compute that is needed. On the other side, the interplay with the grid is very important because, as the speakers showed this morning, in many situations you have peaks of solar energy, and if you are not using them you have to throw that energy away. So what do we do in those scenarios? The last pillar: assuming the edge big bang I was talking about, you have highly distributed systems, so the question is how you make use of the energy that is distributed across sites and match it with the compute demands of the edge workloads. So where are we today? There are some references in the backup slides at the end, but basically, from Intel's perspective and in my division, we are focused on multiple areas. Hardware is obviously one of them: we are looking at energy efficiency from the silicon perspective, trying to have CPUs or GPUs that are maybe 30% more efficient gen over gen through process and design. We're doing a lot of work on cooling systems, trying to understand, if you deploy a system at the far edge, what the best solution is: air-based, precision cooling, immersion, whatever the best technology is there. And obviously there is the software aspect. You can couple the hardware and the software, so you can have intelligent APIs from the hardware to manage and understand energy and power across resources, but you need some intelligence in the software on top that understands how to use and manage that power. One example is what is called Intent-Driven Orchestration, which we are providing to the community, and we also participate in open source projects to facilitate adoption of the hardware technologies we have. So that's where we are; now let's jump into the topic of the talk, which is the area I'm interested in from a pathfinding perspective, shifting from the status quo into what we believe we need to do from a system-design point of view. We were basically working on two needs. The first is about what we call energy-proportional systems, and I'm going to use an analogy. We are working (that's public information) with a company called Felix; they are infrastructure owners deploying B2B and B2X services in a European corridor, implementing, for example, safety use cases on the road with video analytics. So what happens in this type of
deployment model: when you have rush hour, you have a high density of traffic, so you have to process more objects in your video analytics pipeline. For those who are not experts in this area: for this type of algorithm, the compute load it requires is proportional to the number of objects it needs to process. So if you are in the rush hour, say between 12 and 4 p.m., you need a Formula 1 car to process the information on the road. Then you go into medium density, say 4 to 7 p.m., and you need a MotoGP bike to do the processing. But now there's an accident, so your system has to react: you need the Formula 1 car back again to do the processing the algorithm requires. Then you go into the low valley, 8 p.m. onwards, and you just need a scooter, because maybe there are only one or two cars to process. So the key question is: I need a system that, in terms of energy consumption, works very efficiently when it has to operate as a Formula 1 car, but also when it has to operate as a scooter. That's the first area we want to address. The second one is elastic time and space shifting. This morning's talk about the energy grid was interesting because it was about managing the energy distribution, but you can also think about elastically moving compute between the different edges depending on the energy availability in the different locations. In the corridor example, you may have a roadside unit that at a given point in time is processing a video analytics service on a camera stream, identifying cars, accidents, or objects that may become a threat to the cars driving there. And here is what happens (if you look at the related work on the slide, the systems we've been working with are off-grid, on solar panels): at a particular time of day, the renewable energy ratio of that unit may be only medium, not that good. But guess what, you happen to have another site close by, two milliseconds away with a 25 Gig connection between this roadside unit and the other one, and it has a high energy ratio. That probably means it's getting energy that is being thrown away because it doesn't fit: the battery is fully charged, the compute is already satisfied, so maybe you're throwing away 100 watts at that site. What you can elastically do is decide: OK, I can move the service that is processing this stream from the first roadside unit to the second one. That's what we call elastic time and space shifting, and these two needs are the ones we're going to talk about for the rest of the presentation. Now, if you recall the previous slide, we had the hardware and the software on top of it, with traditional orchestration policies like Kubernetes and so on; here I'm depicting where we want to go (it's a bit of an eye chart). The first area to look at is at the bottom, the hardware, what we call adaptive systems, and I'll go through that. Traditionally, hardware systems that have multiple CPUs, GPUs, NICs and so on, provided by OEMs partnered with us like Lenovo or whomever, come as a single box: that's what you get.
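Going back to the space-shifting idea above, here is a toy sketch of that migration decision: move a service to a neighbor only when the neighbor is curtailing renewable energy and the link still meets the latency constraint. All names, fields, and thresholds are hypothetical; this is not the actual control plane.

```python
# Toy sketch (not Intel's implementation) of the elastic space-shifting
# decision: prefer a neighboring roadside unit that is throwing renewable
# energy away, provided the link latency still meets the service constraint.

from dataclasses import dataclass

@dataclass
class Site:
    name: str
    renewable_ratio: float   # fraction of demand covered by local renewables
    battery_full: bool       # surplus solar is curtailed once the battery is full
    link_latency_ms: float   # latency from the current site to this one

def pick_site(current: Site, neighbors: list[Site], max_latency_ms: float) -> Site:
    """Pick the best-energy neighbor that is curtailing power, if reachable."""
    candidates = [
        n for n in neighbors
        if n.link_latency_ms <= max_latency_ms
        and n.battery_full                           # energy being thrown away
        and n.renewable_ratio > current.renewable_ratio
    ]
    if not candidates:
        return current                               # stay put
    return max(candidates, key=lambda n: n.renewable_ratio)

rsu_a = Site("rsu-a", renewable_ratio=0.3, battery_full=False, link_latency_ms=0.0)
rsu_b = Site("rsu-b", renewable_ratio=0.9, battery_full=True, link_latency_ms=2.0)
print(pick_site(rsu_a, [rsu_b], max_latency_ms=5.0).name)  # rsu-b
```

With a tighter latency budget (say 1 ms), the same call would keep the service on rsu-a, which is the kind of constraint trade-off discussed later in the talk.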
You can do some power management for the different resources, but that's it: you have what you have. The hypothesis we have here is that you can really implement this level of mutable systems that can morph depending on the requirements. The second important aspect: if you have systems that can change from a Formula 1 car to a scooter, you need some intelligence that tells you how much, and when, it makes sense to change from one state to the other; and if you want capacity planning, moving workloads from A to B, you need a level of intelligence that allows you to do that. Here we are looking at federated approaches, like federated learning, to learn how highly distributed systems behave and to derive, for example, the likelihood of benefiting from moving a workload from one place to another. The last part is the elastic orchestration policies: typically you have Kubernetes managing one or multiple nodes within a cluster, but when you go to scale-out systems with hundreds of thousands of nodes, how do you implement these elastic policies? Let me go through the three areas, cover the motivation and what we're looking into, and provide a first level of numbers on the prototypes we've been doing. I'm going to start with the first one, the proportional systems. Let's assume a system that is typically composed of a host: it can be a Xeon, it can be AMD, whatever type of compute. Here I'm going to be using the IPU; for those who don't know, that is an infrastructure processing unit, basically a network card that has some level of compute (cores, and some ASICs to accelerate things like crypto), so it's essentially another system connected to the Xeon. So the question is: say you have something that is a Xeon, or an IPU, or another small core-based SoC. What you see here is a very simple, intuitive graph. On the y-axis you see units of work per second per system watt (we removed the specific use case to keep it generic), and on the x-axis the number of units of work per second that a particular compute technology can drive. Now look at the right side of the picture: in the situations where I need a Formula 1 car, sure, I get 0.7 units of work per second per watt on a Xeon system, which is good. But in the situations where I go from 600 down to somewhere between 0 and 100 units per second, it becomes more effective to move my workload and execute it on the small SoC, which has less capacity but is more efficient in terms of energy. That is the first principle we're using in these proportional systems. From an architecture perspective, this is real-life data extracted from a database in the UK that has all the traffic for the different roads over the last 10 years; you can get it and extrapolate this load graph, where you can see, with respect to the peak load at a given point in time (your design point, needed to keep the service level objective of the workload), the percentage of that peak you actually utilize on average through the day.
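The crossover on that graph can be sketched with a small model: given the demanded units of work per second, pick the platform with the best work-per-watt that can still sustain the demand. The efficiency curves below are invented placeholders, not Intel measurements.

```python
# Minimal sketch of the "which engine should run the workload" question
# behind the energy-proportionality graph. The platform numbers are
# invented placeholders for illustration only.

PLATFORMS = {
    # name: (max units/s it can sustain, idle watts, watts per unit/s)
    "xeon":    (1000.0, 150.0, 0.10),
    "ipu_soc": (120.0,  20.0,  0.30),
}

def efficiency(name: str, demand_units_s: float) -> float:
    """Units of work per second per system watt at a given demand level."""
    max_units, idle_w, w_per_unit = PLATFORMS[name]
    if demand_units_s > max_units:
        return 0.0  # this platform cannot meet the demand at all
    watts = idle_w + w_per_unit * demand_units_s
    return demand_units_s / watts

def best_platform(demand_units_s: float) -> str:
    """Pick the platform that does the most work per watt at this demand."""
    return max(PLATFORMS, key=lambda n: efficiency(n, demand_units_s))

print(best_platform(800.0))   # high load: the big host wins   -> xeon
print(best_platform(50.0))    # low valley: the small SoC wins -> ipu_soc
```

The key property is the crossover: above some demand level only the big host qualifies, while in the valley the small SoC's low idle power makes it the more efficient choice, which is exactly the Formula 1 versus scooter trade-off.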
Through the day you can see the low valleys, an incremental ramp to the peak, loads starting to go down, and then the low valley again. So let's combine this with what I just said. If you have a proportional system, at peak you run the service on the Xeon (or AMD, the high-end part of the system); then, when you go into the low valley, if you have silicon that can implement this proportionality and switch off part of the SoC, you can power down the Xeon and move the service, in this case, into the network card. You go from a system consuming 250 watts to something consuming 120 watts. Now, to give a sense of the potential implications when you go into a deployment model (the modeling assumptions are in the backup): using the load graph I showed on the previous slide, the current cost of energy per kilowatt-hour, and assuming a deployment model with about 42,000 sites, what savings could you have if you implemented those elastic systems? This graph shows the millions of US dollars of savings you can have across the different parts of the day. I'm using dollars because I'm at an American company; if I used euros people would get confused, and commas versus dots are an important aspect when you present a slide. Jokes aside: the blue is the US dollar savings per year, and the orange is the millions of kilograms of CO2 you can save per year. Just with this proportionality, and again this is a model, so even if it's only 10 or 20 percent of this it's still a huge number, for a deployment model with 42,000 edges you can save about 70 million dollars of energy per year, so there is obviously a business reasoning behind it, and you can save 140 million kilograms of CO2 emissions. Something I still have pending is translating that into how many airplane flights it equals, because that's probably a good way to get perspective, but this is a lot. So that's the projection model, but we are also implementing the policies I just talked about in real hardware and systems. What we have done so far: we have an IPU, the small network card I was talking about, connected to a Xeon, and on this IPU we're implementing the control plane and telemetry for the whole system. So you have a small brain that runs on this small SoC, monitors the workload and the system, and can dynamically move the workload from the main host into the network card, applying the level of proportionality I was talking about. This is the prototype, and again, with the current technology we can't power off the whole Xeon, because there are some dependencies: if I turn the Xeon off, the IPU stops working properly, so there are things we are still working on. But we have a first level of prototyping where we monitor a real video analytics workload that can be processing 32, 15, or one object in the region of interest.
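A back-of-envelope version of that fleet projection can be written in a few lines. The parameters below (energy price, carbon intensity, peak-hour profile) are my own placeholder assumptions, not the talk's modeling assumptions, so the output will not reproduce the talk's $70M figure.

```python
# Back-of-envelope fleet savings sketch: each site runs the high-power host
# during peak hours and shifts to the low-power NIC/SoC path in the valleys.
# All modeling assumptions here are invented placeholders.

SITES = 42_000                    # fleet size from the talk's example
PRICE_USD_PER_KWH = 0.25          # assumed energy price
CO2_KG_PER_KWH = 0.4              # assumed grid carbon intensity
HIGH_W, LOW_W = 250.0, 120.0      # the two power states from the example

# Hypothetical 24-hour profile: True = peak hour (needs the big host).
peak_hours = [False] * 7 + [True] * 12 + [False] * 5   # 12 peak hours/day

def annual_savings():
    # In every off-peak hour a site saves (HIGH_W - LOW_W) watts.
    saved_kwh_per_site_day = sum(
        (HIGH_W - LOW_W) / 1000.0 for is_peak in peak_hours if not is_peak
    )
    kwh = saved_kwh_per_site_day * 365 * SITES
    return kwh * PRICE_USD_PER_KWH, kwh * CO2_KG_PER_KWH

usd, co2_kg = annual_savings()
print(f"~${usd/1e6:.0f}M and ~{co2_kg/1e6:.0f}M kg CO2 per year")
```

Even with these conservative placeholder numbers the savings land in the millions of dollars and millions of kilograms of CO2 per year, which is the point of the projection: small per-site deltas become substantial at 42,000 sites.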
Depending on the load, the system can go from 240 watts, to 160, to 110, and in the 110-watt case we are moving the workload to execute on the small SoC. The concept is really to have this adaptive system. And to compare with the modeling we did: with the current prototype, this is a real execution of the workload modeled over 24 hours, and what you can see is that the shape is very similar. The blue line is the amount of savings you get through the day, the orange is the typical energy consumption you would see in a standard system architecture, the yellow is the power consumed by the Xeon, and the gray is the power consumed by the network card. The team is showing this this week at the innovation event in San Jose, but with the current technology we can already aim at 30 percent savings. Now think: if I can truly implement these proportional systems and fully shut off the host while keeping the workloads on the IPU, that 30 percent grows. The second part is observability. I don't have performance data here, but what we are doing is this: OK, now I have these proportional systems, so how do I actually use them? The example I showed is just one location, but if I have 30,000 or 100,000 locations I need to really understand the bigger picture. For example, in the previous slide I wasn't assuming systems connected to solar panels; maybe I should apply proportionality to the systems that are on grid energy or low on solar energy, but if a system has excess energy, I can move my load there and maybe not apply proportionality at all. To do that you obviously need some level of observability that helps you decide whether I have a workload and should apply proportionality locally, or should move this workload to another place because the likelihood of better energy efficiency there is much higher. I also need some intelligence that tells me whether to switch from one state to the other, because when you move a workload from the Xeon to the IPU there are implications: if it's a stateful application I need to suspend and resume it, and there are tolls to pay to make that change. So the second area we are looking at is observability. Here, for example, you can see one of the models we were training in a federated manner, similar to what was shown this morning: I have, say, 60,000 off-grid sites with solar energy, so I can collectively train on the data I get about current solar energy and predict what's coming in the next 24 and 48 hours. We are also looking at a concept we call elastic AI training, which is this: if I need to train models at the edge and it costs me 20 percent of my energy while saving 10 percent, better to stay at home. So what we're looking into is: I have excess energy in this highly distributed system, and I can use this excess energy to train those models.
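The two ideas above (train only on surplus energy, and federate the result) can be sketched together. This is an illustrative toy, assuming a simple FedAvg-style weight average; the real system's training and aggregation details are not described in the talk.

```python
# Sketch of "elastic AI training": a site trains its local solar-forecast
# model only when it would otherwise curtail energy (battery full, surplus
# still available), and a coordinator averages the locally trained model
# weights FedAvg-style. Everything here is illustrative.

def can_train(battery_soc: float, surplus_w: float, training_w: float) -> bool:
    """Train only if the site would otherwise throw away enough energy."""
    return battery_soc >= 1.0 and surplus_w >= training_w

def fedavg(weight_vectors: list[list[float]]) -> list[float]:
    """Element-wise average of locally trained model weight vectors."""
    n = len(weight_vectors)
    return [sum(col) / n for col in zip(*weight_vectors)]

# Two sites contribute their latest locally trained weights.
local_models = [[0.2, 0.8], [0.4, 0.6]]
print(can_train(battery_soc=1.0, surplus_w=150, training_w=100))  # True
print(fedavg(local_models))
```

The gating function is what keeps training from adding a carbon penalty: the energy it consumes is energy that would have been curtailed anyway.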
That way I'm not adding penalties to the carbon emissions, because I'm using energy that would otherwise be wasted; that's another area we're focusing on. The last part is this elastic time-and-space shifting. Connected to the slide I shared at the beginning, it's very intuitive: we are implementing local control-plane policies that monitor the workloads, the performance they deliver, and the energy, and that work with a system energy optimizer to adapt the power based on the application requirements, the typical closed loop of observe and act. And now we're implementing the global intelligence: we're looking into federated orchestration policies (I'll share some of the work we're doing with one of our partners), basically taking all the observability stacks I showed a slide ago, feeding them back into a distributed orchestrator, and getting telemetry from the infrastructure as well. So if I have two edges, the way we're implementing this now, edge one may say, "look, the likelihood that I will not be energy-effective is high"; another site sends back to this federated orchestration its energy capacity plan for the next 24 hours, or 24 minutes, whatever it is; and intelligence from the infrastructure tells this global energy manager how the network is behaving. Now this component can factor in all three elements and decide to move the workload. In this case, obviously, you can't move just any workload to any place: you have constraints, such as reliability, time, and cost. To give a bit of perspective on what we can achieve: the previous slide was the energy-proportional system from the modeling perspective; now let's assume I can federate edge appliances in groups of 10, in the very same example of 42,000 edge locations, assuming 40 percent of the locations are off-grid, based on solar panels and wind turbines. What you can see in the second graph is real data from one of the partners we're working with; it's a bit of an eye chart, but through the day, the blue bars are the amount of energy in the battery of that system, and the orange line shows the irradiance you get from the sun at that particular hour of the day. In this area here, you have energy coming from the sun but your batteries are full, so anything from that orange line that you are not using is wasted. So, assuming this model, grouping the edges in groups of 10, how much potential saving do you get, just to get a feeling for how much we could add on top of the proportionality? If you look at the modeling we're doing here, you can add almost another 10 percent of energy and cost savings by applying these energy policies. If you start adding all these things up at scale, the numbers are substantial, and again, this is not an Intel-only thing.
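The "wasted orange line" in that battery/irradiance graph can be made concrete with a small simulation: once the battery is full, any solar production beyond the compute load is curtailed, and that curtailed energy is the headroom the federated orchestrator can move workloads into. The hourly figures below are invented.

```python
# Rough sketch of curtailed solar energy at one off-grid site: surplus
# production charges the battery until it is full; after that, surplus is
# thrown away. Hourly figures are invented for illustration.

def curtailed_wh(solar_w, load_w, battery_wh, capacity_wh):
    """Return (curtailed energy in Wh, final battery charge) over the hours."""
    wasted = 0.0
    for s, l in zip(solar_w, load_w):
        surplus = s - l                                  # production minus demand
        if surplus > 0:
            stored = min(surplus, capacity_wh - battery_wh)
            battery_wh += stored
            wasted += surplus - stored                   # battery full: wasted
        else:
            battery_wh = max(0.0, battery_wh + surplus)  # discharge otherwise
    return wasted, battery_wh

solar = [0, 0, 50, 200, 400, 400, 300, 100, 0]   # W in each hour slot
load  = [80] * 9                                  # steady 80 W compute load
waste, soc = curtailed_wh(solar, load, battery_wh=400.0, capacity_wh=500.0)
print(waste)  # 710.0
```

Those 710 Wh are exactly what the space-shifting policy tries to absorb by moving compute to this site instead of letting the energy be thrown away.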
This is ecosystem work; I believe Intel has something to contribute, but making these kinds of large-scale policies work is an ecosystem effort. In this case, to give a sense of the type of implementations we are doing: I talked about the capacity to monitor the workloads and do this elastic management with what we call Intent-Driven Orchestration. Recall I have three states: in the first I'm running on the Xeon; in the second I'm still running on the Xeon but trying to switch off part of it; and in the third I'm hopping onto the small SoC. With Intent-Driven Orchestration, when you are still on the high-end Xeon but not at the high end of the load, something monitors the workload and, based on that, adaptively tries to switch off parts of the Xeon using SST, a technology that allows you to do the power management. If you look at Intent-Driven Orchestration, one of the interesting parts is how it works: we don't ask the application to say "run the workload on N cores at 800 MHz or 1200 MHz or whatever," because typically applications are not aware; they don't really understand how we are messing with the hardware underneath. The way we work this out is that the application tells the orchestration, this local scheduler, "hey, I need to do 10 frames per second to do my job," because that is typically something the application domain expert knows. When we work with companies doing B2X safety use cases, they typically tell us: for a particular camera to identify that there's an accident, I need a minimum of 10 frames per second. OK, thumbs up: we incorporate that into the CRD, the YAML manifest on Kubernetes. We also provide APIs so that the application, as it gets deployed, reports to this component, "I'm getting 12 frames per second," or 15, or 8, whatever it is. With this, Intent-Driven Orchestration keeps adapting the system at runtime to keep the application throughput as close as possible to its service level objective while always trying to use the minimum power. What you can basically see here are four bars. On the far left, I go full speed: running in performance mode, not doing any smart management of the compute power. The second one is called balanced performance, which starts to look at the SLO I was talking about on the application, but tries to keep it 20 to 30 percent above the target SLO, staying in between: not being super strict, but also not wasting a lot of energy. And the last one, balanced power, uses this intent-driven orchestration to really get close to the SLO and minimize the power consumption. Here you see the power consumed by the system, and here the latency experienced by the workload, in this case the safety use case on video analytics; the green line is the SLO. Anything above this green line is not acceptable; anything below is good, so the lower the better.
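The intent-driven loop described above can be sketched as a tiny controller: the application only reports its achieved FPS against a declared target, and the controller nudges an abstract power level up or down to sit just above the SLO. This is an illustration of the idea, not Intel's Intent-Driven Orchestration code, and the power-level knob is a stand-in for mechanisms like SST.

```python
# Minimal closed-loop sketch of intent-driven power management: keep the
# measured FPS just above the declared SLO while minimizing the power level.
# The power level (0 = deepest savings, 10 = full speed) is a hypothetical
# stand-in for real knobs such as SST profiles.

def control_step(measured_fps: float, target_fps: float, power_level: int,
                 margin: float = 0.2, min_level: int = 0, max_level: int = 10) -> int:
    """Return the next power level for one observe-and-act interval."""
    if measured_fps < target_fps:                  # SLO violated: speed up
        return min(max_level, power_level + 1)
    if measured_fps > target_fps * (1 + margin):   # too much headroom: save power
        return max(min_level, power_level - 1)
    return power_level                             # within the band: hold

level = 5
for fps in [8.0, 9.5, 14.0, 11.0]:   # simulated per-interval measurements
    level = control_step(fps, target_fps=10.0, power_level=level)
print(level)  # 6
```

The `margin` parameter is what separates the modes in the talk: a large margin behaves like balanced performance (lots of headroom), while a small one behaves like balanced power (hug the SLO, minimize watts).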
The balance-power mode, which tracks close to the SLO threshold, actually gives 40 percent energy savings. So now let me connect the different pieces. You have systems that can adapt proportionally; you have telemetry telling you how the system and the workloads will behave; and in the local orchestration you monitor the workloads and, while they run on the Xeon, manage the power to keep the SLO. The last part that is missing is the global control loops: I have the local ones, and I need the global ones. Here I want to share a little of the work we are doing with a company called NearbyComputing, based in Spain, a spin-off of the Barcelona Supercomputing Center. We don't have performance data yet — hopefully next time we present this work we will be able to show more details — but basically we are implementing the APIs between these local orchestration policies and this federated orchestration. And here the goal — and this is a bit of an ask for the ecosystem in general — is: if we are starting to implement these types of policies, what are the APIs we need to define between the different components of the software stack? We talk about power management, telemetry, local orchestration, federated orchestration, and it's obvious that when you go into deployment you may have to take pieces from different parts of the ecosystem. So agreeing on what those APIs look like, and on their semantics, so that we can stitch things together, is something that is going to be really required.
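As one way to picture what such an API could carry, here is an illustrative sketch of a local-to-federated exchange: each site reports its SLO attainment and power draw upward, and a toy global policy hands back power-shedding hints. Every name, field, and threshold here is an assumption for illustration — none of this is a published Intel or NearbyComputing interface.

```python
# Sketch of a local <-> federated orchestration exchange: local sites report
# SLO attainment and power; the federated loop returns power hints. The 20%
# shed limit and the ranking heuristic are invented for the example.

from dataclasses import dataclass

@dataclass
class SiteReport:
    site_id: str
    slo_attainment: float   # observed/target throughput; 1.0 == exactly at SLO
    power_watts: float

def federated_hint(reports, power_budget_watts):
    """Toy global policy: if total power exceeds the budget, ask the sites
    with the most SLO headroom to shed power first."""
    total = sum(r.power_watts for r in reports)
    if total <= power_budget_watts:
        return {}                       # within budget: no action needed
    # Rank by headroom: attainment above 1.0 means the site can afford to slow down.
    by_headroom = sorted(reports, key=lambda r: r.slo_attainment, reverse=True)
    excess = total - power_budget_watts
    hints = {}
    for r in by_headroom:
        if excess <= 0 or r.slo_attainment <= 1.0:
            break
        shed = min(excess, r.power_watts * 0.2)  # shed at most 20% per site
        hints[r.site_id] = -shed
        excess -= shed
    return hints
```

The point of the sketch is the shape of the contract — what flows up, what flows down — which is exactly the part the ecosystem would need to agree on.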
Right — I think there are four areas that are interesting from this type of work. The first is standardization and ecosystem work, which is very important: when you go across the different solutions in the ecosystem — not only at Intel; this morning they talked about 30 different software initiatives for green energy management — do we really have an understanding of what the standard APIs are, and how we hook things together? Can we build, not a single common stack, but a language that the different stacks can speak in a consistent manner? There are European initiatives like CAMARA, and some of the GSMA efforts, that have been driving standardization in other areas of edge computing; I think sustainability needs one of those. The second area is AI — and maybe not just because AI is a buzzword; I truly believe AI can help us a lot. For example, today when we do orchestration we usually use handcrafted rules to decide: if A happens, do B, then C, then D. But why can't the systems learn for themselves how to orchestrate? So we are starting to look at whether we can use LLMs to automatically derive the best orchestration policies for different deployments. Again, the challenge here is the scale: there is no one-size-fits-all, so we really need systems and policies that can learn by themselves. Honestly, knowing the state of the art in AI today, maybe I'm being too optimistic, but I think there is a lot of potential here. Now, the other two aspects are on scope and on attestation.
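As an aside on the AI point above: the handcrafted orchestration rules we use today are essentially condition-to-action tables, which is what a learned policy would have to replace. A minimal illustrative sketch, with the thresholds and action names invented for the example:

```python
# Illustrative handcrafted orchestration policy: fixed thresholds mapping the
# observed state to an action. A learned policy would replace this table.

def handcrafted_policy(observed_fps, target_fps, on_big_core):
    """Return an action for the current state of one workload."""
    ratio = observed_fps / target_fps if target_fps else 1.0
    if ratio < 1.0:
        # Missing the SLO: scale up, or migrate back to the big core (Xeon).
        return "scale_up" if on_big_core else "migrate_to_xeon"
    if ratio > 1.3 and on_big_core:
        # Large headroom: try the low-power SoC instead.
        return "migrate_to_soc"
    if ratio > 1.1:
        return "scale_down"
    return "hold"
```

Rules like these work for the deployment they were tuned for, which is exactly why they do not transfer across a hundred thousand heterogeneous edge sites without relearning.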
On scope: I think there is a lot of work that can be done in sustainability and carbon emissions beyond energy. We typically think about energy as the main driver, but there are lots of other things. I'm not an expert on energy, but any time you put hardware in a place — solar panels or whatever — you have to ask: what are the implications of having that system there? We really need a mindset where we can honestly look at the complete solution and its implications — not only the energy or the hardware you deploy, but the full chain. There is very interesting work happening in the ecosystem around blockchain and traceability of things, and I believe the community is doing awesome work there. And the last one is attestation. Like you, probably, I'm seeing tons of reports: "this company is saving 40,000 in CO2 emissions" — and it's like, okay, what do you mean by that? How do you prove that you are really making a change? Honestly — Intel is my job, but for my kids, what I really care about is what's happening. So we need mechanisms so that when we say "I'm saving X," that claim can be attested. I don't know exactly what the solution is, but in my opinion it's very important. So that's it — I think we have a couple more minutes for Q&A. I know I threw a lot of things at you in 45 minutes, but hopefully some of it made sense. Questions?

That's a good question. We have tested it. Typically, the latency requirement is: when an event is detected, you have 100 milliseconds to do something. Now, with the current implementation that
we have, we can react in 20-30 milliseconds, so we are within the boundaries. The question is, if we go much more aggressive — not just putting the system into a C-state, but powering it off completely — how much can we do? With the current state of the art I think it's possible to do those things. One caveat, though: in what I'm going to say here, you have to cheat a little. For example, say there's an accident happening and a workload needs to react within 100 milliseconds. What we do now is keep that workload on the system in a sort of sleeping state, and another small workload running on the IPU says "hey, we have to wake this guy up," and then it reacts. But keeping that workload sleeping is not zero cost in energy: you have the RAM state, and some uncore cost. Hopefully we will change that, so that maybe you do some level of processing on the IPU and only then start the workload on the Xeon — there's more work to be done there, but honestly, technology-wise it's doable. And again, it depends on the latency requirements: there are deployment models, like industrial, where the latency is in milliseconds or microseconds, and there you may not be able to do anything of this type. But in general, latency-wise, you're okay. Any other questions?

Now, that's interesting, because if you look at the telemetry we're getting in the latest generations — okay, let me step back for a second. Yes, the answer is yes: you can report power-consumption telemetry per application.
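As a toy illustration of what such per-application reporting involves (not the eBPF tooling itself): a common approach is to split a measured package-energy delta across processes in proportion to the CPU time each consumed, then convert to operational carbon via grid intensity. The function names and figures here are invented for the example.

```python
# Toy per-application energy attribution: split a measured package energy
# delta across apps in proportion to CPU time over the same interval.
# Real tools do this with kernel-level (e.g. eBPF) counters.

def attribute_energy(package_joules, cpu_seconds_by_app):
    """Return joules attributed to each app, proportional to CPU time."""
    total_cpu = sum(cpu_seconds_by_app.values())
    if total_cpu == 0:
        return {app: 0.0 for app in cpu_seconds_by_app}
    return {app: package_joules * t / total_cpu
            for app, t in cpu_seconds_by_app.items()}

def to_carbon_grams(joules, grid_intensity_g_per_kwh):
    """Convert energy to operational carbon using the grid's carbon intensity."""
    kwh = joules / 3.6e6  # joules per kWh
    return kwh * grid_intensity_g_per_kwh
```

The proportional split is exactly where the accuracy question mentioned below comes in: shared resources (memory, uncore, idle power) are not cleanly divisible by CPU time.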
There is some great work happening, for example, around eBPF. If you want to look into carbon, I recommend the HotCarbon conference — we're on the committee, and there are lots of good papers there from the last editions; one of them actually uses eBPF and kernel telemetry to estimate carbon per application. So something is already there. One of the questions that I think still needs to be solved is, when you report carbon, how accurate is it? You're getting some telemetry from the hardware and extrapolating your carbon consumption, but I'm not 100% sure we are really doing that at the level of detail, or depth, that needs to happen. Okay — thank you, and I know that was the last one. Thanks for coming; now you can go and have beers and enjoy Bilbao, which is a very nice city. Thank you.