Good morning, good afternoon, and good evening to everyone. Thank you for joining us today. I'm Alison Snyder, a managing editor at Axios, and thank you for joining us for today's session. Algorithms used in decision-making for hiring, lending, policing and more have raised serious concerns about bias and discrimination. And even if the math, or some of the math, can be fixed, there are questions about how these technologies are responsibly and justly developed and applied. In response, countries, companies and technologists have laid out dozens of AI ethics principles, and we're going to look at how those ethical principles are being put into practice. At the end of the session, there will be a more detailed discussion among World Economic Forum members. If you're a part of that and you'll be participating, please stay on at the end of the discussion for some instructions about how that will work. On the panel today are Herman Gref, CEO and Chairman of the Executive Board of Sberbank, Russia's largest bank, which is also involved in developing Russia's national AI strategy. Ya-Qin Zhang, a scientist and entrepreneur who is now Chair Professor of AI Science at Tsinghua University. Welcome. Athina Kanioura, Executive VP and Chief Strategy and Transformation Officer at PepsiCo. And finally, C. Vijayakumar, who goes by CVK, President and CEO of HCL Technologies, a nearly 40-year-old Indian multinational IT services and consulting company. So thank you all for being here. I really appreciate it. I was hoping we could start by talking about, from each of your vantage points: are ethics principles being integrated fast enough to keep up with the technologies in the industries you work in? And maybe CVK, if you could start, that would be great. Yeah, thank you, Alison. When we talk about responsible AI, I think a lot of aspects are still evolving.
And if we talk about the policies and principles, what I personally believe is we need an overriding principle, like the Hippocratic Oath in the medical world, where the first rule is: do no harm, either help or do not harm the patient. An equivalent analogy is what we should be looking for. I think there are four key principles. One is, of course, ethical, which is what everyone is highlighting. To be more specific, I think it's important to responsibly source the data, manage the data and the code, and ensure there is an absence of bias and that everything is done in a fair and inclusive manner. The second aspect is explainable AI, which means there needs to be transparency. The systems and algorithms should be transparent and should be able to explain to users and regulators why an algorithm is predicting what it is predicting and why it is making the decisions it is making. I think this is a very important aspect. The third element is security. Whatever systems and software we put in place should follow the highest standards of security, because anything that gets compromised can have a devastating impact, especially in areas like autonomous driving where AI is playing an important role. The fourth element is accountability and governance. Who is accountable for these algorithms? How are they governed? What is the validation process we go through? How thorough is that validation process, and is the data used to build these algorithms free of any bias? I would say ethical, explainable, secure and accountable are the four key principles that should drive the development of AI. Mr. Gref, may I ask, in the banking industry, how do you see it? Are the ethical principles keeping pace with the technology, and if not, what are the gaps? Hello, everybody.
Thank you very much, Alison, for the opportunity to participate in this discussion. If we speak about banking, Sber is not just a bank, it's a huge ecosystem, but at the core of our business is banking. And if we speak about ethics, for us it's also a key question. If we speak about the ethics of AI, I think we need to divide this question into two parts. The first: who is responsible? Because when you create a machine learning process, we need to speak about ethics on the side of the developers, because everything that is at the core of your corporate culture can be implemented in the algorithms through machine learning. And the second question: what kind of ethical principles are most important for the banking business? I would say the first is security. Secure AI, secure your data, and protect the customer data. This is question number one. The second, for us, is accuracy. The AI, the models, the results need to be accurate. And as a result of the first two questions comes the third: trust. We need to have the trust of our customers. In our case, we invest a lot of effort in this problem, and last year we created a special declaration inside Sber on how we implement AI in our processes and in our products. We divided the principles across each stage of the product process: product creation, product development, and product placement. And I think we also need to speak about the second part of this problem. We need to have a right to make mistakes, because we are speaking about people. In our organization, we have more than 2,000 data scientists and people who are responsible for AI. And we need to understand that if your company is as big as Sberbank, you need to be ready to see mistakes. And what I say is: please, guys, in the first two stages, product creation and product development, it's acceptable to have mistakes.
But in production, it is crucially important for a bank to eliminate all mistakes, because they would be very painful for our millions, over 100 million, of customers. And this is the very important question: how we can organize the whole process to verify our models, how we can organize two different parts of our business, different departments, that try to validate each model we implement. This is a very important question. May I ask Athina and Ya-Qin, maybe Athina first, to weigh in on that, and also to talk a little bit from your perspective. Yes, thank you, Alison. I'm glad to participate in this panel with such distinguished guests. Let me just start with a fact. The fact is that the development of AI technologies is happening so fast that the frameworks, principles and policies that go with it are lagging behind. And we have to acknowledge that, because no matter what we do as corporations and as governments, this lag will still exist, and therefore we have to take action now. That said, what do we do as an industry? We at PepsiCo, being at the lead of the consumer goods industry, also have a responsibility. If you look at how many consumers are with us in this industry, billions of consumers, every CPG company, and we at PepsiCo, have a responsibility to, number one, address the inherent bias that comes with these systems: there is bias in how we manage the data, there can be bias in how we mine the information, the algorithmic bias that comes with that, and of course bias in how we approach consumers through technologies like computer vision and hyper-personalization. If we look at this whole framework, what do we do with it?
So number one, what we have established is a governance layer, the operating model for how you manage the three facets: data, algorithmic bias, and the technology. The connectivity of AI across the value chain needs to happen with strict governance. As Herman said earlier, who is responsible and accountable for the development, deployment and maintenance of all those AI solutions? Second, we have established our playbooks: who across the value chain touches those systems, so we can have transparency, flexibility, and actual visibility into the outcome of the full value chain. Third, we have clear rules and guidelines, aligned with the PepsiCo Way, as to where we develop AI systems and where we don't. Because AI is inherently used to mine information, and that has to be done under the broader social-good umbrella. So we are very clear when it comes to capabilities around recruitment: we are extremely careful not to rely fully on AI to do that. When it comes to hyper-personalization, we are adamant: we don't host personal data in order to mine information about how we target consumers. So, to wrap up and give the overarching framework: the CPG industry has a very big responsibility because of its reach, to consumers and to the broader ecosystem that comes with it, our associates, our employees, and of course the small and medium-sized businesses that work around this ecosystem. Therefore, setting AI standards for the industry, being one of the advocates, and working with governments is imperative. Ya-Qin, you work specifically on autonomous vehicles, and I'm curious, in that realm, where do things stand, and where do you think the biggest gaps and the most urgent priorities are in terms of the ethical questions that are arising? Because there are a lot, right? Oh, I'm sorry, I think you might be muted still.
Is it better, can you hear me? Okay, I can hear you. Okay, thank you. Honestly, it's nice to be a part... Oh no, I'm sorry. Ya-Qin, I think we might be having some audio challenges. I'm going to give him a second and move on to another question really quickly. Again, this is for all of you. How do you, within your organizations, empower the people who are tasked with developing AI ethics principles? How do you empower them to do the work that they do, especially when it might run up against research or the bottom line? CVK, do you want to take that one? Yes. First of all, we are a technology services company, so there is a large development community focused on analyzing data and building algorithms to manage a lot of the technology landscape for our clients. While we empower them to do the right technical work, we have a governance layer to ensure security and data privacy, and to make sure we are within the guidelines prescribed by different countries, GDPR and the like, for how we use data. That's one aspect. The second aspect is that we are largely a B2B business, but there are a lot of use cases where we have a large workforce, 160,000 people delivering work for maybe 500-plus clients. Sometimes selecting people for an assignment is done through some level of automation, and there is some amount of machine learning and AI getting embedded, and it is still very experimental in nature. So we are making sure that the underlying logic and the underlying foundational data are accurate, and that we can really trace back the reasons and the logic. That's what we are doing. While we empower the developers to do what is required to get the right insights, it needs an overarching governance layer, and that's what we are attempting to put in place. I hope that's helpful. Yep, absolutely. Thank you. I want to try Ya-Qin again, see if we've got the audio fixed? No, I still can't hear you.
No. Okay, Athina, are you able to weigh in on that question as well? Yeah, of course. A couple of points. We are in the food industry, so we have two communities when it comes to AI. We have the developer community, the people who develop the AI products, and then we have the user community, the different business stakeholders, whether it's our supply chain teams, our commercial teams, our service teams, or our R&D teams, who are the adopters of those solutions. So our approach is twofold. One is, for the developer community, we make sure they have the necessary platform support, service layers and technology to drive that, but, as I said before, under very strict governance, so we can internally audit what this community does without introducing, I would say, bias into the activities they are executing. Equally important is the user community: what is the level of adoption, usability and learning experience they have around those AI systems? We have therefore established literacy programs around technology, data and AI to upskill and educate them on the opportunities, but also the risks, of those AI systems, how responsibly they can take advantage of the new technologies, and to ensure that the proliferation of solutions and capabilities we develop at PepsiCo is ingested in a way that makes it natural for all the business stakeholders to use them, adopt them, and enhance the human experience. Because we cannot forget that AI is here to enhance the human experience, not to replace it. Can I ask each of you, and Mr. Gref, I'd like to bring you back in here: last year, IBM said that they would not develop or sell facial recognition technology without clear regulations.
And picking up on your last point: are there red lines that you would draw around AI technologies today, either because they're not necessary in a given situation, because they're inherently unethical from your perspective, or because they could potentially do more harm than good in their current state? Are there technologies where you would say: not now, not yet, not ever? CVK, would you like to start? Yeah. I personally do not think technology per se is the challenge. It's about how the technology is deployed, how you build it into business processes. That's where some potential challenges could come in. I personally think we should continue to develop technologies, because that will always be the leading indicator for innovation. But how do we use it? Are we using it in the right way? Do we have secure frameworks? Do we have governance around it? Are we able to explain it, all the points that we talked about? I think there should be more emphasis on that, rather than saying we would not invest in or develop some technology. So not inherent in the technology, but in the governance? Okay. Mr. Gref, are there red lines around different AI-based technologies that you would draw today? If we speak about AI, it's a special case. The first question was about research and how we empower people. We have 16 labs in our organization, and each of these 16 labs uses AI technology, because now everything is based on AI. I think you need to give 100% freedom to the people who work at the first stage, on technology creation and product creation, where they try to create something new or disrupt your business model. And I think what is crucially important is that the red lines need to be different for different companies, depending on their AI maturity. In our case, we are in Russia and at the scale of Sberbank.
We created a special AI maturity index. We evaluate each part of our organization twice a year, and we provide this index to the whole organization, including the government, to understand how mature the different parts of our business and the different organizations inside it are. We divide the whole organization into three parts. It's an artificial classification, but it helps us. The first is organizations that are just starting to implement AI. The second is AI-ready organizations. And the third is what we call AI-native organizations. For that third group, I think there must be no borders on implementing AI, and we need to speak only about the generally accepted AI principles CVK described: secure, explainable, reliable, and so on. This is the main framework. But it's the beginning of the journey. I think it's very important that we share our experience, and I have offered this idea of an AI maturity index to the WEF. If the WEF could organize and publish an AI maturity index each year, it would be very interesting and very helpful for everybody. Athina, may I ask, what do you think? Are there red lines in the use of AI within your industry, or even more broadly? I think there should be some red lines. No one wants to stop the evolution of technology; I think everyone will benefit from it. However, it is in everyone's interest, and especially that of societies and the more vulnerable segments of the population, to be able to trace back how the AI models and the algorithms operate. What is the use? What is the target we run the models against? And what is the traceability and explainability of those models?
Because whether we like it or not, there are many unknown parameters when it comes to AI systems. And of course you can fine-tune them as you see fit based on your corporate priorities or your government priorities. But ultimately, we shouldn't forget that we cannot undermine key social parameters, targeting consumers just to benefit the company's growth, or targeting vulnerable segments of the population so we can benefit at the expense of competitors. So the red lines that we should put together collectively as an industry should aspire to, number one, traceability: we need to be able to track, from the beginning of the value chain to the end, why we use those AI models. Number two, to be able to explain the usage, who touches the models and for what reason. And number three, there has to be, as we all discussed, clear governance and a maturity assessment, not just of the companies, but also of the recipients, the ultimate audience affected by the use of those technologies. If we are able to put this framework together and then align it with government policies that reach beyond country borders, then we will all benefit. We've got about five minutes and two questions. So, AI is often framed as a race, a race of immense geopolitical importance and consequence. And my question is: does that framing run the risk of downplaying ethics, out of fear that someone else will jump ahead and develop a technology first? In other words, is that the right framing, and how do you integrate ethics into it? I think, Alison... yes, I've just been able to dial back in. I missed your earlier question, but let me say I'm very happy to be part of the panel. From my industry, the tech industry, software and internet, we have really come a very, very long way in terms of where we stand on AI ethics.
So there are three things you have to do to make this work. First, of course, is the level of awareness; we have to start from the very top. When I started the Tsinghua Institute for AI Industry Research, one of my first emails was to state our three R principles: responsive AI, resilient AI, and responsible AI. When you work on technology, you have to be responsive to the needs of industry and society, whether that's autonomous driving, work that accelerates drug discovery, or technology that improves personal health. You also have to be resilient: transparent, explainable and robust, working on things that reduce data bias, model vulnerability and security risks. And responsible as well: especially for engineers and scientists, you have to make sure you put ethics and values beyond just the technology itself. I was the president of Baidu for a few years, and at Baidu there was a committee on data privacy, security and governance. I was the chair of that committee, making sure we had the right people and a cross-company initiative. The second element, which is very important, is that you have to map this into the right domain for your product and your industry. You have to start with data: to avoid data bias, you have to start with the right kind of data, its integrity, the scope of use, and the life cycle of that data. You also have to develop and apply the right type of algorithms. Deep learning is a little bit opaque; there are other AI algorithms that are more transparent, and you need to make sure there is something logic-based, rule-based, knowledge-based as part of the AI algorithm. And you have to control the training, learning and inference. An AI algorithm is like a baby, or like your dog.
You have to train it and make sure it evolves and has the right kind of environment. And the last element, which is super important, is to have an operational framework, the right type of workflow, toolbox and decision-making process; otherwise it won't get anywhere. In fact, at Baidu we had a data agent that actually owned the flow and management of data and was also held accountable. By the way, I just read the work from the World Economic Forum on unlocking public-sector AI, the "AI Procurement in a Box" workflow and toolset. I thought that was really important, because a lot of times when you talk about AI, you have the level of awareness but you don't have the right execution framework, the right tools, to make it work. Let me just stop there. Thank you. We don't have a lot of time, but we were just having a conversation about drawing red lines around technologies, for whatever reasons. And I wanted to ask you, actually: you've seen this from different perspectives, right? You have the U.S. view from your time at Microsoft, and you have the China view. What do business leaders need to understand about how countries view these issues differently? Well, there is obviously a common set of principles, values and ethics; we just talked about some of the responsibility and accountability frameworks. But I think it's also important to recognize there are differences. It's just like a product you build: a product built for a Chinese customer might be different from one built for U.S. customers. You also have to understand the regulatory differences and the rules and regulations. And I'm actually very happy to see that the U.S., Europe and China have all developed, over the last few years, sets of rules, regulations and policies; in China there's a lot of work from different ministries and agencies on security, privacy and data issues.
So I see progress, but I also think we need to recognize the differences in markets, users and industries. We're out of time, but I want to ask everyone one more really quick question, a short lightning round. A year ago in Davos, almost exactly a year ago, some big tech companies called for more regulation of AI. I want to know from each of you: what would be most effective in the near term for ensuring the responsible use of AI? Is it international normative red lines? Is it informal agreements between private-sector players? Is it something else altogether? And I'll start with CVK. I think the most important thing is how data is used; that's the most important governance or regulation that's required. Great. Athina? Yeah, for sure more standardization of policies, both around data and around the use of AI. Currently it's very fragmented, driven by individual countries rather than common standards. Mr. Gref? I think it's very difficult to predict what we will be discussing in one year, because AI and everything connected with it develops so fast that we may need to revisit this a little later. But on the borders of using AI, I think the Ten Commandments from the Bible capture one very important principle we need to remember whenever we work with any new technology. I call AI the technology of the 21st century: it penetrates everything and is an enabler for every business and every part of our life. I think those principles will be relevant for every period of time, and we can discuss how to implement all ten of them in our life with the support of the new technologies. Professor Zhang, final word? Yes. Number one, there has to be an open and candid dialogue among governments, NGOs, academia and industry.
And the second thing is, let's make sure we have the right technology, the tools to reduce data bias and to make sure we're dealing with the right kind of framework for privacy, security and data sovereignty. There has been a lot of great progress on this in the last few years: for example, homomorphic encryption, which allows operations on encrypted data; federated learning, which can learn without actually sharing the original data; and differential privacy, among other things. There are also technologies that reduce data leakage, and work on adding rules and logic to algorithms, moving them from black box to glass box. We have a lot of scientists working on this, on knowledge extraction, making sure AI is knowledge-driven, not just data-driven. Thank you all again, and thank you to everyone who's watching online.
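As an editor's aside to Professor Zhang's closing remark, the differential-privacy idea he names can be sketched in a few lines of Python. This is a minimal illustration of the classic Laplace mechanism for a counting query, not code from any panelist's organization; all function names here are our own.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5  # uniform on (-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace(1/epsilon) noise suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Illustrative: report how many customers are over 40 without the
# released number pinning down any individual's record.
random.seed(0)
ages = [23, 45, 67, 34, 52, 41, 29, 70]
noisy = private_count(ages, lambda a: a > 40, epsilon=1.0)
```

Smaller `epsilon` values add more noise and give stronger privacy; the released `noisy` value hovers around the true count of 5 but never reveals whether any single record was included.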