Welcome back everyone to Open Source Summit 2023 here in Vancouver. I'm John Furrier with theCUBE. We are here breaking down all the action. Rob Strechay here, analyst on theCUBE, breaking down the analysis. Rob, this is a great segment on energy. Another one. Huamin Chen, who's a senior engineer at Red Hat, is here. Huamin, talk about what's going on at Red Hat. We've had multiple Red Hat folks on here. Lots of engineering going on in an open source company, Red Hat, which we love, doing some really cool sustainability things. What are you working on?

As a matter of fact, I'm leading sustainability development in the Office of the CTO, on the emerging technologies team. This is one of the projects we have created in the last two years, and one of the first things we delivered is called Project Kepler. Kepler is one of the first projects that enables the cloud-native community to manage their workloads' energy consumption.

This is truly fantastic news. A lot of times people are asking, what's my energy profile, how much energy does my workload consume? If I run my database application, what's my carbon footprint, what's my energy consumption?

So we are able to answer these questions by providing you this very simple tool with a very intuitive UI. You can just look at it, see how much energy you use, and take actions from there.

I see in the notes here that IBM Research and Intel contributed to this project. It's a community-driven, open source project that captures power metrics across the platform. It sounds like a data acquisition opportunity. The data's in there. There we go. Let's get into the data. How do you look at that? What's the vision?

You are definitely right. The number one task is to get the data. What kind of data relate to the energy of the workloads? That's question number one. For background, I've seen a lot of academic research done in this area.
You see research professors and research scientists, and tons of publications on how to measure energy based on this kind of data acquisition. We were able to translate that research into practical software packages and stacks. From that point, you can use scientific methods to measure energy consumption. We built the foundation on a transparent methodology, in an open source manner. That's something we believe will help the community, help the end users, and help the industry achieve that goal.

What kind of engineers are involved? Because what I love about the energy thing, and I learned a lot, by the way, from the LFenergy.org talk on theCUBE here, it really opened up my eyes to so much nerdiness, which was cool. What kind of software engineering is going on in the group? What's the makeup of the personnel? Can you share?

That's a very good question. One of the things I'm most proud of about this project is that the community consists of development engineers like me, research scientists, like people from IBM's research labs, and people from the community working toward advanced degrees, master's degrees, PhDs. These are the folks behind the project who make it happen. So we have a very diverse community, diverse backgrounds, and very fresh minds.

And some pedigree too, on the degrees.

Yes, we see that as a true testament. People are very interested in this.

Yeah, and that's where I wanted to go, because I think again, this goes to the UN's SDG 7 and all of that. And it will probably be talked about at COP28.

Right.

I think that's over in Dubai this year, coming up. So who are the consumers of Kepler right now?

We have a diverse customer base. Not customers in the sense of paying customers, but interested customers looking for solutions to measure workload energy in their data centers and clouds.
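The data-acquisition step Huamin describes can be sketched in a few lines. This is a minimal illustration, assuming Kepler-style cumulative energy counters in joules; the function name and sample values are hypothetical, not Kepler's actual API:

```python
# Minimal sketch: energy meters like Kepler's typically export cumulative
# joule counters per workload; average power over a window is the counter
# delta divided by elapsed seconds. Names and numbers are illustrative.

def avg_power_watts(joules_start: float, joules_end: float, seconds: float) -> float:
    """Average power (W) drawn between two cumulative joule samples."""
    if seconds <= 0:
        raise ValueError("sampling interval must be positive")
    return (joules_end - joules_start) / seconds

# A database container's counter grows from 1200 J to 1800 J over 60 s:
print(avg_power_watts(1200.0, 1800.0, 60.0))  # 10.0 (watts)
```

The same delta-over-time calculation is what a Prometheus `rate()` query would compute over such a counter.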
Most of the folks come from financial services institutions, like banks and insurance companies; these are the people with the first requests and interest. We also see manufacturing companies, big-name manufacturing companies, with the same interest, mostly from their IT departments. And a big chunk of the segment is providers offering software services to other customers. These are the people who have an obligation to show carbon footprint and energy consumption to their users.

Because I think what was interesting, to me, to John's point, is that LF Energy was really exciting. My brother happens to work in sustainability for Mars, the candy company. They do a lot with COP and NGOs, a very dedicated private company. And they're looking at it from sourcing, all the way from sourcing to manufacturing to getting the candy to you. And what I think is really interesting about Kepler is that, and I think there were some talks a couple weeks back at KubeCon + CloudNativeCon on this, you can use things like Kepler to decide where you run your containers, on what day, at the cheapest or most efficient hour.

Yeah, that's the point. LF Energy and Kepler can work side by side. Kepler is able to help you identify the energy consumption, while LF Energy can help with how the energy is produced. You don't want energy reduction just for its own sake; you want to save carbon, that's the overall goal. If you're able to identify the energy source at different times and different locations, then you can achieve the best carbon reduction. So we believe there are a lot of collaboration opportunities between Kepler and the broader community.

I want to ask you, as we have limited time left, about auto scheduling. We had a description, hey, I want to run my washing machine at a certain time.
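The idea Rob raises, choosing when to run containers based on how the energy is produced, can be sketched as a simple carbon-aware picker. A minimal sketch, assuming you already have per-hour grid carbon-intensity figures; the numbers below are invented for illustration:

```python
# Hypothetical sketch of carbon-aware scheduling: given grid carbon
# intensity (gCO2/kWh) for candidate hours, run the deferrable workload
# at the greenest one. Intensity figures are invented for illustration.

def greenest_hour(intensity_by_hour: dict) -> str:
    """Return the hour with the lowest carbon intensity."""
    return min(intensity_by_hour, key=intensity_by_hour.get)

intensity = {"02:00": 120, "08:00": 310, "14:00": 90, "20:00": 260}
print(greenest_hour(intensity))  # 14:00 (e.g. midday solar surplus)
```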
We've heard common use cases in the average home, but when you get into the data center, you've got Kubernetes clusters, and when you run workloads, auto scheduling, workload scheduling, and auto scaling are big parts. What kind of innovation are you guys seeing with power awareness? Can you share your vision on that?

When you see the metrics, you know how much energy is used by your workloads, by your database, for example. If at a quiet time of day, in the morning, the database is using 10 watts, and at a busy time it's 20 watts, you know pretty much how many resources you should give to that database. So by scaling the database up and down based on requests, the number of people using the database, and with a power cap, say I give it 10 watts no matter how much it wants, that is a combination you can use to adjust your resource assignments and performance, as well as the energy consumption.

Just like rationing.

Right.

Here's your watts.

That's rationing, that's right.

Use it and save, and marshal those resources.

Yes.

Okay, I've got to ask you, since you brought it up, since I brought it up actually, maybe it was for a reason. I heard, and we haven't verified this, but I heard that OpenAI is generating a lot of carbon footprint. What do you know about that? It's not my wheelhouse. I imagine the large language models are compute intensive. I heard it's worse than crypto. Not crypto exactly, but worse than crypto mining in terms of GPU usage. So what do you think about that? What does your expertise tell you?

The good news is that OpenAI is providing a service that we like. The downside is that for the service we like, they have to use a lot of energy, a lot of carbon. But the good news is that we have a solution. We have some mechanisms behind it. The talk we just gave 20 minutes ago, on cloud-native sustainable AI, is in that direction.
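The scale-up-and-down-under-a-power-cap combination Huamin describes can be sketched as follows. This is an illustrative toy, not Kepler's or Kubernetes' actual autoscaling logic; the per-replica wattage is an assumed measurement:

```python
import math

# Toy power-aware autoscaler: follow demand, but never let total measured
# power exceed the cap. All numbers are illustrative assumptions.

def replicas_under_cap(watts_per_replica: float, demand: int, cap_watts: float) -> int:
    """Replica count that tracks demand without exceeding the power cap
    (always keeps at least one replica running)."""
    max_replicas = math.floor(cap_watts / watts_per_replica)
    return max(1, min(demand, max_replicas))

# Demand wants 5 replicas at ~4 W each, but a 10 W cap allows only 2:
print(replicas_under_cap(4.0, 5, 10.0))  # 2
```

The same shape of decision could drive a real autoscaler fed by live Kepler metrics instead of a fixed per-replica estimate.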
We can tune the GPUs that OpenAI is using, for the service they are providing, so that we can achieve the best performance per watt. Not just the best performance; performance per watt. So we can achieve the performance, the usability goal, as well as reduce the energy consumption.

So that's the use case vector that will help OpenAI with their energy and carbon footprint problem that's developing as a result of their success.

Yes, we can make them successful without using a lot of power.

Well, they're successful now, but they're eating a lot of carbon.

But they can be efficient, to your point. We can make that happen.

Awesome. And I think what is really exciting about it is that for so many years there's been a lot of greenwashing, people saying that they're green and carbon-neutral. And I think it comes back to, how does this help determine carbon footprint?

Yeah, that's a very good point. A lot of claims are not backed by facts and are not backed by solid data points. We are providing the data points. We can compare the data on what you are using right now versus what you're going to use in the future with all this optimization. So that is not greenwashing; that is washing you through to a green world. We hopefully can carry people into that future by using this methodology.

Huamin, in the last minute we have left, take some time to talk to the audience, put a plug in for the project, what you guys are looking for, share some data. Take a minute to explain what you're doing, what your needs are, what your goals are. Take a minute to put a commercial out there.

Okay, so thank you for watching this episode. We believe that Kepler has a lot of potential. I think the community in general should look at how they can use the project, and how the project can be improved based on their own use cases and requirements. We'd love to see more contributors, adopters, and evangelists to help us grow bigger and better.
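The tuning goal Huamin describes, best performance per watt subject to a usability floor, can be sketched as a simple selection over measured operating points. All figures below are invented for illustration:

```python
# Hypothetical sketch: given measured (setting, throughput, watts) operating
# points for a GPU, pick the one maximizing throughput per watt among those
# that still meet a minimum-throughput usability goal. Numbers are invented.

def best_setting(points, min_throughput):
    """Most efficient operating point that still meets the usability goal."""
    usable = [p for p in points if p[1] >= min_throughput]
    return max(usable, key=lambda p: p[1] / p[2])

points = [
    ("max-clock", 100.0, 300.0),  # fastest, least efficient
    ("mid-clock", 85.0, 200.0),
    ("low-clock", 60.0, 120.0),   # most efficient, but too slow
]
print(best_setting(points, min_throughput=80.0)[0])  # mid-clock
```

Without the usability floor the picker would always drift to the slowest, most efficient point, which is why the transcript stresses performance per watt rather than minimum power alone.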
Huamin Chen here with Red Hat, doing their part, contributing. Again, save the planet, make the world a better place, change the world. That's what we all want to do. It's very mission-driven, but it's important, and open source is changing the game. And we heard the AI angle here: as you get into more power management, power-aware situations, and auto-scaling, it can all be done responsibly. So thank you for watching. Thank you for taking the time. We appreciate you coming on theCUBE; we're doing our part, sharing the data with you. We're here in Vancouver, beautiful Vancouver. You can see how beautiful our office is today. We're sitting out here watching the boats go by, and you can see it's just a beautiful day in the mountains. This is our office. They're all looking at that shot now. This is Open Source Summit, the best minds gathering to create the future. theCUBE is here. We'll be right back with more of day two, and we've got day three tomorrow. Stay with us. We'll be right back.