Welcome to the CNCF on-demand webinar, Reduce the Carbon Footprint of Your Cloud Native Workloads Now. I'm Robert Duffner from the product team at IT Renew. Today, we welcome Andy Randall, Chief Commercial Officer at Kinvolk, and Eric Riedel, Senior VP of Engineering at IT Renew. At the end of the discussion, we will answer some questions, so please stay with us. With that, I'll turn it over to Eric to get the discussion going.

All right, thanks, Robert, for the introduction. Thanks, Andy, for joining us for the webinar. What we wanna talk about today is a topic that maybe doesn't come up that often in CNCF circles: the interaction with the hardware that underlies all of the infrastructure that CNCF and the world build on from day to day. Luckily, most people don't have to worry about their hardware, because those of us in our corners of the industry are taking care of what lies underneath, and occasionally we like to surface a little bit of what we're doing. Our focus today, based on some of the Sesame products that we have at IT Renew, and in this week of Earth Day, is the carbon footprint of that hardware as well as how it interacts with the large-scale ecosystem. That's why we asked Andy to join us, a longtime open source software advocate and implementer. Hopefully we'll outline how openness applies to hardware as well as it does to software, and how all of it can be used to be more efficient: more efficient in the software, the systems, and the applications, and more efficient in the way that we provide and produce the hardware underneath. Andy, do you wanna talk a little bit about Kinvolk and the software infrastructure? We'll start at the top and work our way down to the hardware.

Yeah, absolutely. And I think the first thing to point out is that when we thought about how to build this solution, we wanted it to be open from top to bottom. So it's an open hardware architecture and an open software architecture, and IT Renew and Kinvolk have really collaborated as a team to deliver it. That goes back to some of the founding principles that Kinvolk was established on over five years ago now, right at the beginning of the cloud native revolution. We started with a team that had a lot of expertise in Linux and the low-level layers of the cloud native stack, and built on that with container technologies and Kubernetes expertise as well. The values that we set the company up around were all about open source: contributing, cooperating, community, welcoming. And this embodies both how we work with IT Renew to deliver the open systems that we're gonna talk about, as well as how we wanna work with the users, the customer community, and other vendors and partners out there. So that's a little bit of how we at Kinvolk think about things. We take this expertise and these values, and the direction we're pushing in is all about how to build a truly enterprise-grade cloud native stack for deploying applications that's 100% open source and community driven. Of course, we base it around Kubernetes, we put a lot of other software together with that, and we do integration with the hardware systems, and that's what we're gonna talk about in the rest of the session today.

Beautiful, yeah, thanks Andy.
So just to briefly introduce us: IT Renew as a company has been around almost 20 years, but Sesame, as a line of rack-integrated server, storage, and networking, is a little more than two years old. So we're a little bit in the startup phase, but building on an established base, right? And the other thing that we're building on, similar to the community of CNCF and the community of Linux that so much of our software is based on, is the Open Compute Project. The Open Compute Project is just under 10 years old and was originally started by Facebook. There were other hyperscalers before, most notably Google and Amazon, that had started innovating in hardware on their own, and then Facebook, Arista, and a small set of vendors were responsible for bringing that into the open, right? And saying, hey, can we do hardware innovation in the same ways that we do software innovation? We already collaborate globally; the hardware industry is global in its implementation, with lots of dependencies and lots of supply chain, as we've seen recently, certainly with pluses and minuses. But this is how we've built the industry, with global and worldwide collaboration, and it was often being done one-on-one, or between one vendor and a small number of others, right? I personally spent nine of the last 10 years at EMC, and then Dell after the merger, and we did lots of collaborations with lots of vendors, different hardware partners, different software partners, but it was often done in the service of our ultimately proprietary platforms, right? So what Open Compute does is bring that community explicitly into the open, and what you see in the visual that I have up is the huge breadth of projects that community has been able to bring together over the last nine years. There are over a hundred active projects, nearly 200 projects all told, and they span a very wide dynamic range of hardware and related systems, all the way to data center facilities. This is literally about the physical infrastructure, the concrete and the pipes and the cooling and so on of the data center, where the teams have been able to innovate in incredible ways to drive efficiency way up: any amount of electricity and water that is wasted on cooling or otherwise managing the data center could better be applied to the actual computing, right? And there are a number of metrics there that have been much reduced, from 30 and 40% overhead down to sometimes 4 and 5% overhead in the data centers of today. Similarly, in terms of server innovation, there has been a rethinking, starting nine years ago, sometimes longer with some of the other hyperscale vendors, of what is really necessary inside a server: what is core and what is context? And similar design simplifications have taken place over that timeframe that reduce the total number of components, which makes the system simpler, but as a side effect also makes it more reliable: if there are fewer components, there are fewer components that can fail. It also makes it more efficient: if there is a smaller number of larger components, then there can be greater mechanical and electrical efficiency. For example, we use larger fans that move a larger volume of air for the same amount of input electrons, and we use larger power supplies that have less waste in their conversion, right? And all of those end up being multiplicative effects across both the server-specific design and the rack and power design.
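To put those overhead numbers in context: they correspond to the data center industry's Power Usage Effectiveness (PUE) metric, the ratio of total facility energy to IT equipment energy. A minimal sketch of the arithmetic (the PUE values below are illustrative, not figures from the webinar):

```python
def overhead_vs_it(pue: float) -> float:
    """Non-IT energy (cooling, power conversion, etc.) as a fraction
    of the energy delivered to the IT equipment itself: PUE - 1."""
    return pue - 1.0

# A traditional facility near PUE 1.4 burns ~40% extra energy on
# overhead; an OCP-style facility near PUE 1.05 burns only ~5% extra.
for pue in (1.4, 1.3, 1.1, 1.05):
    print(f"PUE {pue:.2f} -> {overhead_vs_it(pue):.0%} overhead")
```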
The final thing I wanna mention while we have this visual up is that you'll notice Open Compute includes not just hardware vendors in the list, Intel, Wiwynn, Quanta, Arista, et cetera, but also end-user companies such as Facebook, AT&T, and Microsoft, and component vendors: you see Edgecore, Delta Electronics. So we're really bringing to the table the component vendors, those who designed the power supplies and know more about the power supply than you ever really wanted to know; the system integrators, who bring those together into operational systems; and then the end users, who can say, this is how it actually operates in the data center. And that has been incredibly powerful for us in the hardware space, as I would assume is very similar in the software space, Andy?

Yeah, absolutely, Eric. I mean, if you look at the velocity that the cloud native community has moved at over the last few years, it wouldn't have been possible in a proprietary world. It's all a result of the collaborations that we see between vendors and end users, all coming together, working in the open.

Right, yeah, absolutely. And then just to nod to the worldwide aspect of what we're doing: Robert is in California, in Silicon Valley; I'm actually in a seaside town south of Boston; and Andy is in Germany, in the metropolis of Berlin. So even this webinar is global in its production, and of course the audience will likely be in every time zone around the world. That has really provided us a lot of power and synchronicity. So let me talk a little bit about the hardware footprints that underlie what we're doing here, right? I've given the backdrop of Open Compute designs, and what we're showing now in the visual is a couple of the solutions from IT Renew. If you start in the center, what we have there is an integrated Sesame rack, with storage, compute, and networking, as it's ready to ship to one of our customers. We have a mix of compute nodes and storage nodes so that different workloads can use different aspects of the system, right? When we then add Kubernetes, the container orchestration system, customers are able to get exactly the same type of flexibility that they're used to in the public cloud, which has exactly the same hardware underneath, right? There are lots of servers in serverless, no matter how you do it. But what we're able to provide at the design level is that type of flexibility. So when we work with our customers, we ask: how many containers are you gonna run? How many cores, how much networking bandwidth, and so on? And then we'll take care of fitting that into a rack. As you see in the center here, there's a rack with about a dozen nodes and some high-density storage at the very bottom of the rack. What you see on the right-hand side is a three-rack build-out that is actually one part of a build-out we're doing together with a customer and partner, Blockheating, in Amsterdam. Those three racks are part of an 18-rack footprint that Blockheating is using to manage a computing infrastructure. But they're getting a second benefit, in that they're using the exhaust heat, the waste heat from the CPUs in those racks, to heat up water, which then heats their greenhouses, shown at the bottom. So not only do we use the hardware for cost-efficient, energy-efficient computing, but the waste heat that is generated gets used a second time: conservation of energy.
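The arithmetic of that heat reuse is straightforward; here is a rough, hypothetical sketch (the 18 racks come from the example above, while the per-rack power draw and uptime are assumptions for illustration only):

```python
# Back-of-the-envelope estimate of recoverable waste heat from an
# 18-rack footprint. The per-rack IT load is an assumed figure.
RACKS = 18
KW_PER_RACK = 12.0        # assumed average draw per rack
HOURS_PER_YEAR = 8760     # continuous operation

it_load_kw = RACKS * KW_PER_RACK
# Essentially every watt delivered to the servers leaves as heat,
# so thermal output tracks electrical input.
annual_heat_mwh = it_load_kw * HOURS_PER_YEAR / 1000
print(f"~{it_load_kw:.0f} kW of continuous heat, "
      f"~{annual_heat_mwh:.0f} MWh of greenhouse heating per year")
```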
So every electron that goes in as power has to be exhausted as heat, and Blockheating uses that heat again to warm up their greenhouses and grow tomatoes. So not only have we taken the efficiency of the hardware to kind of the ultimate level, we've also made additional use, in this case, of the energy that's dissipated, right? And then finally, on the left-hand side is a desk-side unit. This is something that has been quite popular with our engineers and some of our customers during the pandemic: it's a way to have a four- or five-node hyperscale cluster under your desk. The unit plugs into a standard 110-volt electrical outlet, sits underneath your desk, and has exactly the same nodes as would be found in the rack. Our engineers use it to design and develop the systems, and our customers use it to do benchmarking and POCs. It also gives many of our customers a sense of what's possible in reimagining the footprint of computing. There are a number of designs I won't go into in this discussion, but for edge computing, in all different types of wiring closets, or the corner of a mall, or a real estate scenario, or a manufacturing facility, it really makes sense to bring not an entire rack but maybe three, four, or five nodes, across a hundred or a thousand locations. So imagine a retail customer with a thousand physical stores: each store has three or four servers, and they wanna treat that as a 3,000 to 5,000 node Kubernetes cluster, because it really is globally distributed, has a global workload, and needs orchestration and monitoring just as a data center infrastructure does, but now it's widely distributed. With our capabilities of reimagining the hardware, we're able to bring that to bear.

And just because you were talking about global there, Eric, it's not just available with 110 volts; there's also a 230 or 240 volt option, right?

That's right, Andy, thanks for the reminder. The crate, the box, in fact comes with two cables, so the same power supply can also be used in 220-volt countries. It actually gives a little bit more juice in some cases when we use it in the racks, right? So Andy, do you wanna talk a bit about the analogous software side? What I've tried to present is some of the components and the details of how we build up a hardware stack; now let's go in reverse and move further up the stack.

Yeah, yeah, of course. Each of us always looks at it from a different perspective. The software folks think, oh, just give us some hardware, that's the easy bit, and the hardware folks think, oh, the software that runs on top, that's the easy bit. But it's where they come together that the magic happens, and that's what's so exciting about what we're doing here. You see that in this chart: it all starts with lifecycle management, right? What is the experience when you first start to use a Sesame rack? We've put a lot of effort into working together so that when you get that rack delivered, everything is pre-configured. All the software you need is located on the management node, so we can deploy to the rack in a matter of a few minutes; we know what servers are there, and we don't have to pull down all of the images. This enables us to do literally a single command to provision the rack how you want.
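As a hedged illustration of what that one-command provision can look like: Kinvolk's Lokomotive distribution is driven by the `lokoctl` CLI, and a thin wrapper on the management node might look like the sketch below. The config path is hypothetical, and `lokoctl cluster apply --confirm` is based on Lokomotive's public CLI rather than anything shown in this webinar:

```python
# Minimal sketch: kicking off a pre-staged cluster deploy from the
# management node. The directory below is illustrative only.
import subprocess
import sys

def provision_rack(config_dir: str = "/opt/sesame/lokomotive") -> None:
    """Apply the pre-staged Lokomotive cluster configuration."""
    result = subprocess.run(
        ["lokoctl", "cluster", "apply", "--confirm"],  # non-interactive apply
        cwd=config_dir,
    )
    if result.returncode != 0:
        sys.exit("provisioning failed; see lokoctl output above")

if __name__ == "__main__":
    provision_rack()
```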
Now, there may be some configuration options you wanna set to adjust how it integrates with your network, things like that, but essentially everything is there within the rack. And not just at deploy time: when there are updates, the whole stack is designed to take updates, automatically deploy them into the rack, and do rolling upgrades across the cluster. That's super important from an operational perspective and just for time to value. You don't have to think about assembling a whole set of components for networking and storage and monitoring, the actual Kubernetes piece itself, the operating system, security patches, and all of that; it's all just automated and streamlined. So that's a lot of value right there. The next layer up from the hardware: we build the system around the Flatcar Container Linux operating system, which has a lot of advantages for systems like this. It's optimized for running containers: it is a minimal distro, so it has just what you need for running containers, but we've also tested, qualified, and verified it on the Sesame hardware, so you know that it's gonna work and it's gonna keep working. It's also a very secure base. The fact that you're running everything within containers means that you can think of the operating system as an immutable thing that only ever gets updated when you do a full OS update, switching from the base partition that's currently running to an update partition. If that upgrade doesn't work, you switch back. You don't have to worry about package management, and you don't have to worry about attack vectors where malicious actors modify the operating system on disk; all of that's protected. So it's basically the best basis, from an OS perspective, for running cloud native workloads. Onto that we deploy Kubernetes, where the core of the Kubernetes experience is just vanilla upstream. There's no special distro version here with modified pieces; this is the open source community Kubernetes. But on top of that we have a curated set of components that we deploy: for networking, for storage, for monitoring, et cetera. And with those components, it's not just that we select the right components and that they work together; we test them and give you defaults out of the box, so you rarely have to think about all of the configuration options needed to get these things working together. That's all set up by the installer and by the defaults of the Lokomotive infrastructure. Part of those components are for monitoring and telemetry, so there's Prometheus with dashboards, and you can see what's going on from the top of the stack through to some of the hardware monitoring pieces as well, all in one dashboard. And then at the very top level, you have a management UI, a clean, extensible UI for seeing what's happening within the cluster: which nodes there are, what pods are running. We're increasingly building this out with more and more capabilities in terms of what we call systems intelligence, starting with plugins based on a technology called eBPF to do things like trace monitoring of the syscalls that your applications are performing. It even stores these on disk out of a ring buffer from the kernel, so in the event that something crashes, you can see what was happening up to the point of the crash. That helps you diagnose and debug things happening in your cluster, and also identify where you can enhance security.
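As a flavor of the eBPF technique Andy is describing, here is a minimal, hypothetical sketch using the BCC toolkit's Python bindings. It is not Kinvolk's actual plugin, just the same underlying idea of tracing syscalls through a kernel probe and the kernel's trace ring buffer:

```python
# Minimal eBPF syscall-tracing sketch with BCC
# (https://github.com/iovisor/bcc). Requires root and eBPF support.
from bcc import BPF

prog = r"""
int trace_exec(struct pt_regs *ctx) {
    // Emit one record into the kernel trace ring buffer per execve().
    bpf_trace_printk("execve called\n");
    return 0;
}
"""

b = BPF(text=prog)
# Resolve the kernel symbol for the execve syscall and attach a kprobe.
b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="trace_exec")

print("Tracing execve syscalls... Ctrl-C to stop")
b.trace_print()  # stream records out of the kernel ring buffer
```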
So, for example, for defining network policies: it can listen to the network traffic that's happening and then suggest network policies for you to apply into the cluster to further increase your security (a sketch of applying such a policy follows below). So those are the layers of the stack, and I guess the key point here is that everything is 100% open source, 100% community driven. There are no proprietary pieces we're trying to sneak in at some layer. What Kinvolk brings is both development and curation of this stack, plus updates made available in an automatic way, which is not just a day-zero and day-one issue but a day-two-and-thereafter issue, really thinking about the full lifecycle of the experience you're gonna have with this software deployment.

Right, yeah, Andy, and that day two through day N is really the most important part of all of this. The experience that we've been optimizing together with Kinvolk is 60 minutes from truck to workload. That's an analogy, or a focus, I often give our customers: we can deliver a rack on roughly a week's lead time, fully cabled and fully integrated, so once it arrives on the truck we can roll it right into the data center, plug it into the floor for power and the wall for networking, and be running workload 60 or 90 minutes later. One key to that is the pre-qualification and implementation of that software infrastructure, so that there are no sudden surprises, like, oh, the network driver doesn't work, or this cable doesn't plug into this connector, which can often lead to days and weeks of delay, right? And the last part to point out in that vein is that the install time, the setup time, whether it's 60 minutes or 90 minutes, is only a small fraction of the lifetime of the system. The most important thing is that it's gonna spend many years running workload. And as much as we go back and forth about software versus hardware, all we've talked about here is the infrastructure level, right? The real heroes, the real people doing the work, are the application developers who, since we're taking care of hardware infrastructure and software infrastructure, are able to worry about mobile applications, web applications, APIs, databases, all the things that really make computing work, because we're taking care of the plumbing of both the hardware and the software.

Absolutely, yeah. Another way of putting what we're trying to do here, Eric, is to say we're trying to deliver an experience which is as close as possible to a managed Kubernetes service, but you get it on-prem, you own the whole stack, and it's open. That allows you and your developers to focus on what's running on top of that infrastructure stack. You should not be spending time worrying about which version of your networking plugin or your storage plugin you're running. Just let us keep that up to date, let us make sure that the software works together with the hardware; you focus on the application that's getting deployed and everything around it, because that's enough of a job, that's a full-time job.

Right, right, and it's similar for us. We also don't want anyone to have to worry about the difference between RJ45 and SFP and QSFP and all the other protocols and standards at the hardware level. We'll take care of that in our qualification labs and in our production facilities, right?
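Here is that network-policy sketch: a minimal, hypothetical example of applying one suggested policy with the official Kubernetes Python client. The namespace and pod labels are illustrative, not from the webinar:

```python
# Apply a suggested NetworkPolicy: only pods labeled app=frontend may
# reach pods labeled app=api. Labels and namespace are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a pod

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="allow-frontend-to-api"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "api"}),
        policy_types=["Ingress"],
        ingress=[client.V1NetworkPolicyIngressRule(
            _from=[client.V1NetworkPolicyPeer(
                pod_selector=client.V1LabelSelector(
                    match_labels={"app": "frontend"}),
            )],
        )],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(
    namespace="default", body=policy)
```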
So then the last piece that I wanna get to is what we advertised up front: the carbon footprint, right? The next visual that I've put up is specific to the carbon footprint of this equipment, and we've now laid in all the pieces that make this picture relevant. What we've done in our hardware designs, with the Open Compute community, is make the systems as efficient as possible when they're running: low PUE, high density, high translation of electrons into useful work. We've made the software infrastructure as efficient as it can be: a streamlined operating system, containers, orchestration, and monitoring that looks for things that are out of whack or using an unnecessary set of resources. So the infrastructure has done all it can to make the system as efficient as possible, to remove waste, if you will: waste in the hardware design, waste in the software layers. But at the end of the day, we're still gonna have a carbon footprint. And there's one more aspect that the IT Renew approach, the Sesame approach, brings to the picture, which is illustrated here. What we're showing in this chart, on the left-hand side, is a sense of the total number of new servers that were delivered in 2020: 46 million servers, which constitute over 3 million tons of CO2 equivalents, the equivalent of over 6 million cars added to the road, right? And as we know, the pandemic caused an increase in computing demand, and so also an increase in computing production; even with the delays and hiccups in the supply chain, the numbers for 2021 will be significantly larger than this. So, a very large carbon footprint. What we're showing on the right-hand side is a nine-year CO2-equivalent comparison of a traditional model, the big blue bar and the big orange bar, versus a Sesame model. And we're saving in two ways. One, we're reducing the operational footprint, and those are the efficiencies that we've just been talking about: the efficiencies of Open Compute design, the efficiencies in the data center, efficiencies in software, and so on. The operational phase, the power that is used by the systems while they are running, is the orange bar, and that's reduced by being more efficient in the way that we design and build the systems. But that still leaves the energy that's expended in what is here called the pre-use phase, what is also referred to as the embodied energy of those servers. All of those servers, all the components, the highly integrated CPUs, the integrated memories and networking interfaces, super dense, super complicated technology, have to be fabricated with processes that are energy intensive and, in a lot of cases, water intensive. They're not people intensive, because we've automated a lot of it, but the robots again need to be fed with electrons and water, right? And what we show on the right-hand side, in the standard model of today, is a customer that refreshes their hardware every three years. That means there is a pre-use burden, an embodied-energy burden, every time they install new hardware in the system.
So what we do at IT Renew, and for the last 18 years we've been in the business of decommissioning data center equipment, is help companies like the largest hyperscalers, including Microsoft, Google, Facebook, Uber, Dropbox, a general who's who of the hyperscale business, extend the life of their equipment. And then, when they are done with it, we create secondary and sometimes third uses for that equipment to reduce the blue bars. So we're squeezing out the pre-use phase by not building new servers. If we're able to take a set of servers coming off the line from an existing hyperscale customer and extend their life through years four through six, often the sweet spot, though years seven through nine can also help reduce the footprint, then we achieve a massive savings that is additive to the savings in the orange bar. The orange-bar savings remain if we're more efficient with the electrons, more efficient with the infrastructure, and of course more efficient with the applications. Also think of the algorithms, right? O(log N) rather than O(N) still matters ultimately. But on top of that, we're able to provide an efficiency at the hardware layer. So that is a super powerful innovation that we're bringing to the marketplace for our customers today. Andy, any commentary to wrap us up, or about the footprint on the software side?

No, I mean, I think it's increasingly a concern for companies across every industry: what is their environmental footprint? And at the same time, digitalization is increasing across every industry, so it's logical that the two come together. The other thing that intersects with this is cloud usage. One of the things we're trying to do here is enable customers to mix and match where they put their workloads. In some cases it'll make sense to put workloads in the cloud, and that might be energy efficient in some scenarios, but you can also put workloads on-prem, in a data center that you rent, using this kind of solution, and do better in many ways on environmental footprint, and also much better from a cost perspective. So this is another tool in your toolbox to reduce the environmental footprint and cost-optimize at the same time.

Right, yeah, absolutely. Andy, I didn't mention this up front, but for the Sesame product line at the moment, almost 50% of our customers are cloud service providers themselves, right? So the end users that are creating the applications and running the servers and systems are renting this infrastructure from the cloud service providers that are our customers. We also work with some of the hyperscalers to recertify and life-extend their own hardware. So the footprint reduction that I'm talking about, and the innovations in the hardware space, are accessible to everyone: to a small business, a medium enterprise, a large enterprise that already has an on-premise footprint for whatever reason, as well as to service providers of all shapes and sizes around the world.

Yeah, and one of the things that having Kubernetes-based cloud native software infrastructure allows you to do is workload placement, either within this rack or another rack, on-prem or in the cloud, mixing and matching for maximum efficiency. Yeah, yeah, absolutely.
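To make the blue and orange bars concrete, here is a rough, hypothetical comparison of the nine-year picture. Every number below is an assumption for illustration; the point is only that skipping manufacturing cycles removes embodied carbon that operational efficiency alone cannot touch:

```python
# Nine-year CO2e for a refresh-every-3-years model vs. a
# life-extension model. All figures are illustrative assumptions.
EMBODIED_T = 1.0     # assumed tons CO2e to manufacture one server
ANNUAL_OP_T = 0.5    # assumed tons CO2e per server-year of operation
YEARS = 9

refresh = 3 * EMBODIED_T + YEARS * ANNUAL_OP_T  # new server every 3 years
extend = 1 * EMBODIED_T + YEARS * ANNUAL_OP_T   # one server, life-extended

print(f"refresh model:        {refresh:.1f} t CO2e over {YEARS} years")
print(f"life-extension model: {extend:.1f} t CO2e over {YEARS} years")
print(f"pre-use savings:      {(1 - extend / refresh):.0%}")
# Any operational (orange-bar) savings from more efficient hardware
# and software would stack on top of this pre-use reduction.
```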
Yep, so hopefully we've characterized the hardware interaction and the software interaction. It's a solution that the two companies have worked on together, and we're happy to make it available to the marketplace, so please find us for additional details. Robert, do we have any questions based on our session so far?

Thanks, Eric. Well, we have some questions here. I guess we'll start with you, Eric: do you expect to see the traditional hardware vendors joining the Open Compute Project?

Oh, absolutely, Robert. As I alluded to, the Open Compute Project is almost 10 years old, and we've seen participation across the board. HP, Dell, and other vendors you would term traditional have been active participants throughout, right? They may not participate in all the tracks, but different vendors decide on the different things they share and are interested in. The benefits of the community have really accrued even to the traditional players. One place you see it very specifically: there's now a standard for an OCP networking card that is used in servers across the industry, because it was just a no-brainer for everyone to do. But absolutely, the innovations have also folded into what would otherwise be considered proprietary products, just as has happened with Linux and containers and so on.

Okay, the next question's for Andy. Andy, how do you manage updates to the cloud native stack, specifically your Flatcar Container Linux and your Kubernetes engine?

Yeah, it's a great question, Robert, because it's something we think a lot about: how do we do this in a secure way, and in a way that works operationally, because people want to have control over when updates are applied. We actually have a product, an open source project, which is our update server, and it basically allows each of the hosts within the rack to query what the latest version of the OS is, and you can apply policies on that update server as to how fast and how automatically you want those hosts to update. That's coordinated between the OS and the Kubernetes layer. So when a new OS version goes out, which is fairly frequently because there are security updates from time to time, it becomes available on the management node, and then each of the hosts within the rack, based on the policies you define, will just pick it up. But first each host winds down the workloads that are running on it; because they're containers, you can do that. It's called cordoning and draining the node in Kubernetes speak (sketched below). Then it applies the new OS update and reboots. The Kubernetes-layer updates and the component-layer updates are somewhat similar: they can be detected, you can go into the UI, and it'll say, hey, this version is out of date, there's a new one, do you want to apply it? Or you can do it via the command line as well. So, a lot of flexibility, and we try to automate as much as possible.
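For readers who haven't met cordon-and-drain, here is a minimal, hypothetical sketch with the official Kubernetes Python client. Real drain logic (DaemonSets, PodDisruptionBudgets, retries) is more involved, and the node name is a placeholder:

```python
# Cordon a node (mark unschedulable), then evict its pods so an OS
# update can be applied and the node rebooted. Simplified sketch.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
NODE = "worker-3"  # placeholder node name

# Cordon: no new pods will be scheduled onto this node.
core.patch_node(NODE, {"spec": {"unschedulable": True}})

# Drain: evict every pod currently running on the node.
pods = core.list_pod_for_all_namespaces(
    field_selector=f"spec.nodeName={NODE}")
for pod in pods.items:
    eviction = client.V1Eviction(
        metadata=client.V1ObjectMeta(
            name=pod.metadata.name,
            namespace=pod.metadata.namespace))
    core.create_namespaced_pod_eviction(
        name=pod.metadata.name,
        namespace=pod.metadata.namespace,
        body=eviction)
# The host can now apply the OS update, reboot, and be uncordoned.
```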
Okay, I'll throw this next question to both of you, Eric and Andy. What are the key drivers your customers consider when choosing to deploy their own hardware infrastructure versus going straight to one of the public cloud service providers?

Yeah, Robert, let me address this straightforwardly. I've been working in high-scale computing for my entire career and have watched the evolution of the public clouds over the last 15 years. The real truth of the matter is that what the public cloud has allowed customers to do is separate out their different sets of concerns. We have customers that do every possible model you could imagine, from on-premise infrastructure with proprietary software to public cloud with shared software. We have customers that own their facilities but want to lease the hardware. We have customers that buy their hardware and use managed services for the infrastructure layer, so they pay a company a monthly fee to manage their software infrastructure, and then they pay only their developers, not an infrastructure company. What cloud has really allowed customers to do is choose a la carte which pieces are critical for them and which pieces can be outsourced and converted into a monthly fee. The public cloud converts it into an all-in monthly fee that includes development of the hardware, deployment of the hardware, development of the infrastructure, and deployment of the infrastructure, but we have customers that do every piece in between. So it really isn't a question of, do I do on-premise or do I do cloud? The important thing is something Andy said in the main part of the session: I design my applications as a set of services or microservices against a common set of APIs and a common set of paradigms, and then I'm able to run that application on top of whichever infrastructure makes the most sense, and float that infrastructure on top of whatever hardware makes sense. So we have customers that are 20% public cloud and customers that are 80% public cloud, but whenever there is a need for on-premise hardware, IT Renew can step in. And of course the software stacks, like Kubernetes and Flatcar and Lokomotive, are applicable whether it's on-prem or public. Andy, do you wanna add to that?

Yeah, I think it's a great answer. This idea that cloud is both a pricing model and a separation of operational concerns, that's the innovation it brought, and a lot of those things can be applied on-prem as well. That's one of the things we've been trying to do: take on that operational concern, as we were saying in the main session. We don't want you to spend a lot of time managing the infrastructure, and if we can take away a lot of that effort from you, maybe we can make it almost as easy to have hardware on-prem as it is to consume virtual machines in the cloud.

Okay, a couple more questions. Eric, can you comment on the typical timeframes for the hardware operational phase you see with your customers?

Yeah, absolutely, Robert. The most important thing is that there is no such thing as typical. We absolutely have customers that are focused on improving their infrastructure, increasing their infrastructure, squeezing out the last bit of efficiency. Most recently, for example, in AI types of workloads, there really is still a Moore's-law pace of innovation, where a piece of hardware that was just announced last week at a GPU event is a factor of two or three more powerful than the system of just 18 months ago. But there is also a kind of heavy center, a centroid, of computing where Moore's law isn't advancing as quickly as it was. And then there's a set of what are often considered secondary use cases, in storage and other aspects of the infrastructure, where the laws have always been different; Kryder's law in storage was always a very different curve than Moore's law.
And so, as a result, once you think about which workload is being applied to which hardware and what the lifecycles of the various hardware are, you find that there is no clean distinction. There are some drivers at three years, for financial reasons, but there are customers that will have storage systems in place for six, seven, eight years, because that's where the data lives, and whether the data is growing or not, they're very comfortable with those systems and APIs. At the other extreme, there are compute workloads, maybe GPU-type workloads, where there's a turnover in 16 or 18 months, right? And in between, maybe, is networking: we still have customers today that we're bringing from the one-gigabit era into 25 gigabit, and at the other end of the spectrum, we have customers that are starting to deploy 200-gigabit and 400-gigabit solutions, right? So the most important thing, Robert, and for folks on the webinar, is that there is no typical, and the systems we've described allow you to optimize within whatever your timeframes are. If your timeframe is three years, two years, or seven or eight years, the important thing is that we can look at the optimization across workloads, across on-premise, across public cloud, as Andy has alluded to, and get the most efficiency across the board. And it's all enabled because we're collaborating and sharing on the lingua franca of the APIs and the workloads and the deployments.

Okay, last question; I guess I'll put it to both of you. Are you seeing a changing mindset with regards to how organizations are addressing sustainability? Andy, do you wanna take it?

Sure, yeah. I think it's definitely coming front and center. Interestingly, I moved from the States to Europe two years ago, and the US two years ago was not necessarily at the forefront of concern about climate change; moving to Europe, you definitely saw a lot more political tailwind pushing for climate change regulation. Companies are aware of that and certainly have an eye on what regulation is coming. But as much as anything, I think generational change is pushing it. Companies have people coming into the workforce, new employees, who want to work for companies that are doing good for the world and good for the environment; for the millennial generation, these are important topics. And I see companies responding to demands from their employees and their communities to be more proactive in these areas. You see this with, for example, the net-zero commitments by companies like Microsoft and Amazon, and also with European countries committing to have zero traditional-fuel vehicles by 2030; these are the kinds of things that are really pushing forward environmental awareness. Having said that, I think it is still just one of many factors. Overall cost is still the key driver for compute infrastructure. But from what I see, we can make this a win-win, right? If we're doing well by the environment, we can also make solutions that cost less, and if we can make those two go hand in hand, then it's good for everyone.

Right, yeah. Speaking from my perspective, and the first perspective I should bring is that of an engineer: the politics of sustainability is only catching up with the scientific reality, right?
If we are inefficient in our use of resources, then we will use more resources, right? And that's been a focus of mine for my entire engineering career: we always try to create the most efficient solution to a particular problem. That's how I've approached the designs that we do in OCP, the designs that we use for power consumption, and so on. Don't waste the resources if we don't have to; get a given amount of work from fewer inputs, whether those inputs are electrons of energy, or rare-earth metals, or water in some of the processes. As an engineer, I've always been focused on optimizing the processes and doing the same with less input material. So in some sense, we're really just bringing all of that together at the level of a rack, or at the level of a data center. And that said, as Andy noted, there is certainly a drive from employees at companies, but also from companies that are just now starting to do the accounting: what is our footprint? As soon as you look at the numbers, you realize, hey, our footprint could be smaller. Where is the waste? Where is the return on investment for reducing that waste? And the most important thing about our solution is that when a customer buys into a Sesame rack, it is more efficient without compromise. It's not, oh, you're paying more for the system but it's sustainable on the back end; you're actually paying less. Our customers typically pay between 30% and 50% less than with a traditional solution, and they get the sustainability benefit. So we think that with those kinds of innovations, which as Andy said are a win-win, a no-brainer, you just do it this way, because it's more efficient and it reduces the impact on the environment.

Okay, that's a wrap. Thank you, Eric and Andy. A big thanks to the good folks at CNCF, and thank you for joining us on this webinar.

Thank you, Robert. Bye, everybody. Thanks, Robert.