Welcome back to theCUBE's continuous coverage of AWS re:Invent 2021. I'm Lisa Martin. We are running one of the industry's most important hybrid tech events this year with AWS and its enormous ecosystem of partners. We have two live sets going on right now — there's a dueling set right across from me — plus two remote studios and over 100 guests on the program. They'll be digging into really the next decade of cloud innovation. I'm pleased to welcome the two guests who sit next to me here. We've got Erin Arnoldson, Partner at BCG Gamma, and Adi Zolotov, Associate Director of Data Science at BCG Gamma. Guys, welcome to the program. Thanks for having us. Thanks for having us. Adi, let's go ahead and start with you. Give us the lowdown on what's going on at BCG Gamma. We are focused on building responsible, sustainable, and efficient AI at scale to solve pressing business problems. Good, we're going to dig into that more. There was a lot of talk about AI during the keynote this morning, and yesterday as well. And one of the things, Erin, that we've talked about over the last day and a half is that every company these days has to be a data company. But the volume of data is so great that we've got to have AI to help all the humans process it and find the nuggets that are buried within these volumes of data for companies to be competitive. You talk about sustainable and efficient. Let's go ahead and talk about what you mean by efficient AI. It sounds great, but help unpack what that actually means and how an organization in any industry actually achieves it. Yeah, so when we talk about efficient AI, we're really talking about resilience, scale, and adoption. We all know that the environments in which AI tools and systems are deployed change and update very frequently, and those changes and updates can lead to errors and downtime, which erode user trust.
And so when you're designing your AI, it's really critical to build it and ensure it's resilient to those types of changes in the operational environment. And that really means designing it up front to adhere to company standards around documentation, testing, and bias, as well as approved model architectures. So that piece is really critical. The other piece of efficient AI is really about using better code structure to ensure that teams can search, learn, and really clone AI IP to bring AI at scale across company silos. So what efficient AI does is ensure that companies can go from proof of concept and exploration to deploying AI at scale. And the final piece is really about solving the right business problems quickly, in a way that ensures that users and customers will adopt and actually use the tool and capability. That adoption there is absolutely critical. Yeah, when we're talking about AI, most of the time we're talking about three components. We call it the 10/20/70 rule: 10% of the change is really about the better AI algorithms that are coming out. 20% is the better architecture, the technology, all of those components. But 70% of it is really about how we are influencing our business partners to make better decisions. How are we making sure AI is built right into the operational decision flow? And that's really when we start talking about better AI: we move it away from kind of a pet project, buzzword bingo, into operational decision flows. And there's a journey there. There's a journey that we are all on. You see the evolution of AI right now, and I liken it a lot to myself. I'm a big football fan, right? Fantasy football is like my passion. And when I look at the decisions I made 10 years ago versus now — now I actually have my own models that I'm running.
I'm very much into the details of what the data is telling me, but it's not until I bring that together with my decision-making process that I really have bragging rights on Sundays. I wouldn't want to compete against Erin. I mean, I've got a lot of friends who do fantasy football, but I don't think they're actually taking data-driven approaches the way you are. One of the things — I'm glad that you talked about the 10/20/70 formula for dividing investments in AI. One of the things that really surprised me, and I'm looking at my notes here because I was writing this down, was that you said 10% is AI and machine learning algorithms, 20% is software and technology infrastructure, but 70% is change management. That is hard, especially at the speed with which every industry is operating today. In the last 22 months, we've seen a massive acceleration to the cloud, every business pivoting many times. Where are customers in terms of understanding the challenges that they can solve with AI, given the fact that we're still in such a dynamic global environment? Adi, what are you seeing? So, I think it's actually quite bimodal. Some companies, including the public sector, are really leaning in and exploring all the different applications and all the different solutions. Unfortunately, if they're not emphasizing that 70% on change management, the culture change, and user adoption, those investments are substantial but you don't get the return on the investment. On the other hand, the other part of that bimodal distribution is the folks who are still really reluctant, because they have made investments and it hasn't brought about the change that they were hoping for. And so I think it's really critical to bring that holistic approach — bringing AI and advanced analytics tools to really change the way a company is attacking its problems and bringing solutions to its users and customers.
Yeah, I liken it a lot to us as adults when we teach our kids about math, right? Less of my time with my own kids is focused on teaching them the principles and all those things; it's more about teaching them to be comfortable. Why are they learning math? What are they doing? How is that going to prepare them to be more competitive later on in life? And so the same thing's happening in this evolution of AI, right? There are these big tech and AI transformations happening, but the question we need to ask ourselves is: are we taking the time to make sure our companies and our people are on the journey with us, and that they understand that this is going to be better for them and give them a competitive advantage? That's critical. We talk a lot on every show about people, process, and technology, and people is part of that. But I've definitely seen more of a focus, I think, in the last two and a half days on the people side. We have to upskill our people. We have to train our people. We have to make sure that they understand how this technology can partner with them and enable them, rather than take things away. So it's nice to hear you talking about the big focus there being on the people, because without that, then, to Adi's point, a lot of those projects aren't successful. And I think the other piece there, in terms of bringing the user along for the journey, is you don't want them to feel like this is just another tool, right? Just another addition to their workflow. You want to take the burden away. You want it to not add to their list of daily tasks, but subtract from it and make things easier. And I think that's really critical for a lot of companies as well. Well, I think along with what you're talking about, we have to teach people to be responsible.
So it's one thing to do the job better, but it's another thing to be responsible, because in today's world we have to think about our responsibilities back to our communities, to our consumers, to our shareholders, and ultimately to the environment itself. And so, as we are thinking about AI, we need to think differently too, because let's face it, data is fuel, and we can accidentally make the wrong decisions for the globe by making the right decisions for stakeholders. We have to do a better job of understanding why we're doing what we're doing — not only the intended consequences of our decisions, but also the unintended consequences. And then we need to be responsible in the ways that we're using AI, and transparent in our use thereof. Right. Yeah, Erin, I think that's incredibly critical. I think responsible AI has to be at the heart of AI transformation. And one of the interesting things that we have found is that organizations perceive their responsible AI maturity to be substantially higher than it actually is. Is that right? Less than half of organizations that have fully implemented AI at scale have a responsible AI capability. And so at BCG we've been working quite hard to integrate our Gamma responsible AI program into these big AI transformations, because it's so critical, so absolutely important. There are a lot of facets to that, but one of the critical ones is that it ensures the goals and the outcomes of the AI systems are fair, unbiased, and explainable, which is so important. Absolutely. I think it also ensures that we follow best practices for data governance to protect user privacy, which is another critical piece here, as well as minimizing any negative social or environmental impact, which again has got to be at the forefront of AI development. And I think that there's a tech part to that too.
So one thing that we're working on, called Gamma FACET, speaks to that. For the longest time in this AI transformation, AI has been kind of a black box, kind of mystical, but we optimize our results. The transformation, when we talk about better AI, is that the decision maker is at the center and knows the outcome, and so we make it a clear box. So we're working a lot on the most common Python packages to make them more transparent, so that the business user and the data scientist understand the decisions that they're making and how those decisions will impact the company and, longer term, society. And what about the sustainability front? I can understand why you have the 10/20/70 approach, and why that 70% is really important — there are companies that think they're much further advanced in terms of using AI responsibly than they really are. But we talk about sustainability all the time. It's a buzzword, but it's also something that's incredibly important to companies like AWS, and I imagine to companies like yours. What does sustainable AI look like, and how do organizations implement it along with responsible AI and efficient AI? Yeah, I think it's the question in some ways right now, given everything that's happening around the world. And so AI for sustainability is really critical. I think we all have a part to play in this fight to protect our global environment. We need to take the same AI expertise, the same AI technology that we bring to maximizing revenue and minimizing cost, and apply it to minimizing a company's footprint long term. I think that's really critical. One of the things we've seen is that 85% of companies want to reduce their emissions, but less than 10% of them know how to accurately measure their footprint. And so we've been focusing on AI for sustainability across a couple of different pillars. The first is measuring the current impact from operations.
The second is data mining for optimal decisions to reduce that footprint. And the third is scenario planning for better strategies to alter our impact. Excellent. Well, there's so much work to be done. Guys, thank you for joining me and talking about what BCG is doing for responsible, efficient, ethical, sustainable AI. A lot of opportunities, I'm sure, for you guys with AWS and your list of clients. But we thank you for taking the time out to talk with us this morning. All right, thank you. All right. Take care. For my guests, I'm Lisa Martin. You're watching theCUBE, the global leader in live tech coverage.