The biggest risk with AI may be failing to work on it and make more progress with it, because it can impact billions of people.

Welcome, Sundar Pichai. Some of you will remember the discussion we had last year. And in the meantime, you have moved even further up, not only being the CEO of Google, but also being the CEO of Alphabet. So thank you for joining again this exchange of ideas and opinions. My first question is this: you have called yourself a technology optimist, and we sometimes hear a lot of concerns about technologies. What actually makes you an optimist?

Well, first of all, Professor Schwab, thank you for having me here. It's a pleasure to be back. What makes me a technology optimist? I think it's how I got introduced to technology. Growing up, I had to wait for a long time. I would hear about things, but I had to literally wait before I got my hands on either a telephone or a television. And each thing, when it came into our household, I distinctly remember how it changed our lives. Television allowed me access to world news, and to football and cricket, which I'm passionate about. So I always had this first-hand experience of how gaining access to technology changes people's lives. Later on, I was inspired by the One Laptop per Child project, the goal of giving $100 laptops to every child. They didn't quite get there, but I think it was a very inspiring goal and it drove a lot of progress in the industry. And later, we were able to make progress with Android. Each year we bring hundreds of millions of people access to computing for the first time. We do this with low-cost, affordable Chromebooks. And seeing the difference it makes in people's lives gives me great hope for the path ahead. More recently with AI, just in the last month alone, we have seen how AI can clearly help doctors detect breast cancer with more accuracy.
We just recently launched better rainfall prediction. Over time, AI can play a role in tackling climate change. So I see these examples first-hand. I'm clear-eyed about the risks with technology, but the biggest risk with AI may be failing to work on it and make more progress with it, because it can impact billions of people.

Yeah, but Sundar, if I look at what has happened in technology over the last, I would even say, 30 years, there was one big breakthrough. It was actually when AlphaGo beat Lee Sedol. I think we haven't really understood yet the implications of this breakthrough. And now your company, Google, is again at the forefront of another revolution which may have even more consequences, positive or negative. It's actually what you just announced in quantum computing; it's a breakthrough. And I have to say it's very difficult to understand. I just know it could have tremendous implications. Can you explain what we can expect from quantum computing? You are now the leader; you have made a big breakthrough.

No, it's an extraordinarily important milestone. You know, last year we achieved something known in the field as quantum supremacy. It is when quantum computers can do something which classical computers cannot. And I like the way you characterize it: it's as inspiring a milestone as the Deep Blue moment, or AlphaGo playing Lee Sedol. To me, you know, nature at a fundamental level works in a quantum way. At a subatomic level, things can exist in many different states at the same time. Classical computers work in ones and zeros, so we know that's an imperfect way to simulate nature. Nature works differently. So what's exciting about quantum computing, and why we are so excited about the possibilities, is that it will allow us to understand the world in a deeper way. We can simulate nature better. So that means simulating molecular structures, so maybe we can discover better drugs.
Understanding climate in a deeper way, so that we can predict weather patterns and tackle climate change. We can design better batteries. Nitrogen fixation, which is the process by which we make the world's fertilizers, accounts for 2% of carbon emissions. And the process hasn't changed in a long time because it's very complicated. Quantum computers one day give us the hope that we can make that process more efficient. So it's very profound. We've all been dealing in technology with the end of Moore's law. You know, it really revolutionized the past 40 years, but it has leveled off. So when I look at the future and ask how we drive improvements, quantum would be one of the tools in our arsenal by which we can keep something like Moore's law continuing to evolve. So the potential is huge. And we'll have challenges in a five- to ten-year time frame. Quantum computing will break encryption as we know it today. But we can work around it; we need to do quantum encryption. So there are challenges, as always, with any evolving technology. But I think the combination of AI and quantum will help us tackle some of the biggest problems we see.

And you add also, to a certain extent, genetics. I mean, I think quantum computing and biology will have great potential, positive and negative. The positive one, as you rightly say, is to simulate molecules, protein folding, et cetera. It's very, very complex today. We cannot do it with classical computers; with quantum computers, we can.

But we have to be clear-eyed about all these powerful technologies. And this is why I think we need to be deliberate and regulate technologies like AI. And as a society, we need to engage on it.

But that leads me to the next question, actually. Because in an editorial in the Financial Times, which I read just before the annual meeting, you stated, and I quote, "Google's role starts with recognizing the need for a principled and regulated approach for applying artificial intelligence."
What does it really mean?

You know, I've said this before: AI is one of the most profound things we are working on as humanity. It's more profound than fire or electricity or any of the other big things we have worked on. It has tremendous positive sides to it, but it has real negative consequences. When you think about technologies like facial recognition, it can be used for benefit; it can be used to find missing people. But it can be used for mass surveillance. And as democratic countries with a shared set of values, we need to build on those values and make sure that when we approach AI, we are doing it in a way that serves society. And that means making sure AI doesn't have bias, that we build and test it for safety, that there is human agency, and that it is ultimately accountable to people. About 18 months ago, we published a set of principles under which we would develop AI as Google. But it's been very encouraging to see the European Commission identify AI and sustainability as its top priorities. And the US has put out a set of principles just last week. And be it the OECD or the G20, they are talking about this, which I think is very, very encouraging. And I think we need a common framework by which we approach AI.

Are you satisfied with those frameworks which have been developed until now? I mean, you refer to the OECD framework, the G20 framework.

It's an early start. But I'm very encouraged that they have a lot of commonality, and that's because they are rooted in common human values. So I think it's a great start, but we need to get more specific and evolve it significantly. I think the European Commission is working on a white paper around AI, and I think that's an important first step. And we all need to engage. As a company, we are committed to engaging in the process. But it's going to need everyone from around the world. AI is no different from climate.
You can't get safety by just having one country or a set of countries working on it. You need a global framework to arrive at a safer world.

But, Sundar, you emphasize a global framework now. The question is, how much is China actually engaged in those efforts? And don't you see the danger that in the end we end up with two different frameworks, one coming more out of Beijing, and one developed inside the OECD context?

I think there is concern that we could bifurcate here. But I think it's important not to do so. I'm optimistic because, just like in climate, I think there's more alignment. We have things like the Paris Agreement. The world comes together because everyone shares the Earth's climate. And I think that's true for AI. So down the line, I think there'll be a common gravitational pull, regardless of who you are, to try and converge. Otherwise, you won't be able to achieve peace and prosperity. So I think there'll be a gravitational pull.

No, we need it. And actually, the Forum's Centre for the Fourth Industrial Revolution is trying to bring the parties together. Now let me change the subject for a moment. When you look at the GDPR and the California Consumer Privacy Act, regulators are starting to take meaningful action to protect consumer privacy. And let me address a second issue: the growing antitrust concerns, Google buying up all the startups which are in, let's say, the AI area, and so on. And some believe that companies like Google should actually pay what I think was called a digital dividend. Can you share with us what the policy of Google actually is? So I come back here to two issues: privacy and antitrust.

It's a great question. First of all, maybe I'll talk about privacy. GDPR has been a great template. I think it gives a standardized privacy framework.
And often, when we are in other countries and they are thinking about privacy regulation, we point to GDPR as a template. I'm glad Europe took the lead on it, and I think that gives a good framework for all of us to work on. For us, privacy is at the heart of what we do. Users come to Google at very important moments and ask us questions. We deal with people's sensitive information in Gmail, Google Photos, and so on. And so we have to earn that trust. Today, we do it by giving them control, transparency, and choice around it. And over time, I think AI actually allows us to do this better; we can do more for our users. Most of the data we deal with today is to help users with their information needs, and we can do that with less data over time. It's counterintuitive. For example, if you use Google's keyboard, we now learn what to suggest, but we don't send the raw data back. We only compute our models, and the data stays on the phones. So over time, I think we can do more things on-device. We can use AI to actually preserve privacy as we improve user experiences. And I do think it's important that products work for everyone; it's a foundational principle. So today, if you take a product like YouTube, we allow users to pay for it and get it on an ad-free basis, or you have an ad-supported product. That's what allows us to take information and provide many services to billions of people around the world. And privacy cannot be a luxury good. We need to make sure we develop services in a way that works for everyone, but puts them first and is privacy-enhancing. That's the journey we are all on. But ultimately, it's up to users to choose.

On your second question, I think with our scale rightfully comes scrutiny. You're right, we have bought startups. But as a company, we invest every single year in hundreds of startups through our venture arms. We support entrepreneurs and incubators around the world.
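The on-device keyboard learning Pichai describes above, where suggestions improve but raw typing data never leaves the phone, is broadly the idea behind what is publicly known as federated learning. Here is a minimal sketch of federated averaging on a toy linear model; the data, model, and all parameter choices are hypothetical illustrations, not anything from Google's actual system.

```python
# Sketch of federated averaging: each simulated "phone" trains locally on
# its own private data, and only model weights (never the raw data) are
# sent back and averaged by the server.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, steps=20):
    """Run a few gradient steps of linear regression on one device's data."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Hypothetical relationship the devices collectively learn: y = 3*x1 - 2*x2
true_w = np.array([3.0, -2.0])
devices = []
for _ in range(5):  # five simulated phones, each with private data
    X = rng.normal(size=(50, 2))
    devices.append((X, X @ true_w + rng.normal(scale=0.01, size=50)))

global_w = np.zeros(2)
for _ in range(10):  # ten federated rounds
    # Each device improves the model locally; the server averages weights.
    local_ws = [local_update(global_w, X, y) for X, y in devices]
    global_w = np.mean(local_ws, axis=0)

print(global_w)  # converges close to [3, -2]; no raw data left a "device"
```

The design point the sketch illustrates is that the server only ever sees weight vectors, so the raw per-device observations stay local by construction.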
Through our Grow with Google program, we are trying to digitally skill millions of people. In Europe alone, we have skilled over five million Europeans. So with scale comes the chance to work on things, to take a long-term view on important technologies like AI and quantum computing. And so it gives us a chance to do that. But ultimately, we have to do it all in a way that works for society. That's the real test. And society has to judge whether what we are doing is beneficial. And we want to engage constructively in the process and earn our right to do that. But we are not building up scale for scale's sake. We are trying to do important things for our users.

I'm sure. Let's integrate the audience for a moment. Even if you don't have a microphone, just speak loudly. I'm sure there might be a follow-up. Anybody? And let's stick to the questions of privacy or antitrust, please. Yes?

Hi, Sundar. My name is Samar Sikha. I'm from the Global Shapers community, from India. First of all, I'm a huge admirer and a tech enthusiast myself. I wanted to ask you: India right now is a huge market for you. Your program on the next billion users is aimed at India. India is right now in the middle of a data protection bill, which has seen a lot of changes over the last year. It was first seen as very restrictive to global multinationals like Google and other big tech companies. Now that it is being debated, the Indian government has eased some of those regulations. At neither of these points has Google actively lobbied for or fought this bill; you have said that you will comply with what the government says. What is your view on it? Because you're one of the few CEOs, unlike I think many of your counterparts, who takes a balanced view, saying that there are risks, but there are also benefits. CEOs generally come out on either end of the spectrum.
What would you like to see come out of the data protection bill, so that the next billion users in India could benefit, but Google could also move towards its vision?

I would just enlarge your question and not relate it only to India. So the question would be: if you were to design, from your point of view, an ideal data protection bill, what would it look like?

It's a great question. We are at the World Economic Forum; we bring people together. That's what the internet is all about. The value of the internet comes from connecting the world, and to do that, you need a free, open internet to work. At the same time, and I see this not just in India, as Professor Schwab mentioned, it's a big topic in Europe and in all other countries around the world, politicians, rightfully, are charged with protecting their citizens. And as part of that, data sovereignty is an important topic as well. But it is inherently a balance. I think countries need to focus on the highest-risk areas and maybe add protection around them. But you want to help preserve a common internet. Even in India, for example, if you take a product like YouTube, for many creators in India, more than 50% of their views come from outside of India. The internet is essentially an export product. Regardless of where you build a service, you can reach people around the world. That's what's great about the digital economy: it creates new opportunities. And so that's the balance countries have to strike. But I think there are good regulations; GDPR is a good framework. As we think about how you can protect privacy for your users, for your citizens, it doesn't always mean data has to be siloed in a particular way. And I think we need to evolve this framework carefully.

Any other comments? Let me see, maybe someone at the back. Yes?

Hello, my name is Guan from Learnable AI. I started at the Harvard Innovation Lab, and we do a lot of explainable AI.
And I'm a big fan of Google's technology, especially DeepMind. So I want to know, in terms of transparency and explainability for AI, what does Google think, and what is Google's plan? Thank you.

Such a great question. One of our AI principles is that AI is ultimately accountable to humans, and to do that well, explainability is a big part of it. You can imagine a self-driving car making a decision and us needing to be able to explain it. I think it's worth remembering that humans can't always explain how we make our decisions. We think we can, and we say some things, but that's not how it really happens. So it's worth remembering that. But we are building it; it's one of our most active areas of research. For example, to counter AI bias, last year we published research. If you have an image recognition algorithm and it predicts, "These are doctors," we can now ask: what are the variables you're using to predict that these images are doctors? It may say white coats; that makes sense. But sometimes the model can say "male," because it has seen only pictures of male doctors. And then you know it's not working well. That's an example of explainability, which we do, and we are working hard to drive that. But it is an area of research. I think it's an important principle to do this before we use AI in high-risk applications. But it's exciting as well: in areas where humans are actually biased and reinforce their biases, AI gives us a chance to understand that, counter it, and do things in a better way. So we need to invest to get there.

Let me take one more.

Hello. I'm Claudio Laverance, also from the Global Shapers community. And I wanted to know your thoughts about data privacy and health-related data. In what direction do you think Google will lead with health data?

The good thing about the health care sector is that there are already strong regulations in place.
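The doctor-image example above, asking a model which variables drive its prediction and spotting an unwanted "male" signal, can be sketched with a simple permutation-importance check on a toy classifier. Every feature, data point, and parameter here is an illustrative assumption, not from any Google research or system.

```python
# Sketch of probing a model for biased features: train a toy "doctor"
# classifier on skewed data where 'male' spuriously correlates with the
# label, then shuffle each feature and measure the accuracy drop. A large
# drop for 'male' exposes the model's reliance on the biased feature.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical feature columns: [bias_term, white_coat, stethoscope, male]
n = 1000
coat = rng.integers(0, 2, n)
steth = rng.integers(0, 2, n)
male = coat.copy()  # skewed data: every pictured doctor happens to be male
X = np.column_stack([np.ones(n), coat, steth, male]).astype(float)
y = coat.astype(float)  # ground truth: the white coat is the real signal

def fit_logreg(X, y, lr=0.5, steps=500):
    """Plain gradient-descent logistic regression."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(X, y, w):
    return float(np.mean(((X @ w) > 0) == (y == 1)))

w = fit_logreg(X, y)
base = accuracy(X, y, w)

# Permutation importance: shuffle one column, re-measure accuracy.
for i, name in [(1, "white_coat"), (2, "stethoscope"), (3, "male")]:
    Xp = X.copy()
    Xp[:, i] = rng.permutation(Xp[:, i])
    print(f"{name}: accuracy drop {base - accuracy(Xp, y, w):.3f}")
```

The model ends up leaning on the `male` column even though the label is defined purely by the white coat, which is exactly the failure mode the attribution check is meant to surface.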
As we think about regulating AI, I think it's important to leverage regulations where they exist, and health care has good privacy-protecting regulations in place. As Google, we see a huge opportunity in health care. But when we work on health care, when we work with hospitals, the data belongs to the hospitals. That's how we approach it. Where we can, we encrypt the data, and the hospitals hold the key for it. But look at the potential here. When we look at an area like radiology, oftentimes cancer gets missed, and the difference in outcomes is profound. When you take an area like lung cancer and you show the pathology results to experts, very often, if you show them to ten experts, five agree one way and five agree the other way. We know we can use AI to make this better, and so I think it's important that we do it. But these are areas in which you have to work with privacy in mind. I'm encouraged that there is strong privacy-protecting regulation already in place, which gives us a framework to do it well. But I think health care offers the biggest potential over the next five to ten years to really improve outcomes. And so we are committed to doing that.

So the Forum is working together with the Japanese government to address this issue, to find the right balance between privacy and access to data. Now, if I look at Alphabet, and particularly your transition from Google to Alphabet, you are involved in so many different areas, I mean Waymo, Sidewalk Labs, and so on and so on. How do you see the future of the company? Is it a giant which sucks up everything? Or how do you see Google in five years from now?

Look, we know we will do well only if others do well along with us. That's how Google works today through search. We help users reach the information they want, including from businesses, and businesses grow along with search. In the US alone last year, we created $335 billion of economic opportunity.
And that's true in every country around the world. We think with Alphabet there is a real chance to take a long-term view and work on technology which can improve people's lives. But we won't do it alone. In many of the Other Bets which we are working on, where we can, we take outside investments. These companies are independent, so you can imagine we'll do it in partnership with other companies. And Alphabet gives us the flexibility to have different structures for different areas, in the way we need them. If it's health care, we can partner deeply with other companies. Today, we partner with the leading health care companies as we work on these efforts. So we understand that for Alphabet to do well, we inherently need to do it in a way that works with other companies and creates an ecosystem around it. This is why last year, just through our venture arms, we invested in over 100 companies. We are just investors in these companies; they are going to be independent companies, and we want them to thrive and succeed. And so, you know, that's the way we think about it. But I think it gives us a real chance, be it self-driving cars or AI, to approach things with a long-term view in mind.

So now, one last question. You are now at the top of one of the three or four, whatever you say, not only most valuable companies, but most powerful companies. You said you are an optimist. When you wake up at night and you cannot sleep anymore, what worries you at such a time?

You are pretty insightful; that is literally true. Yeah, I do wake up at night. And what worries me at night? You know, I think technology has a chance to transform society for the good, but we need to learn to harness it to work for society's good. I do worry that we turn our backs on technology, and I worry that when people do that, they get left behind too. And so to me, the question is: how do you do it in an inclusive way?
You know, just coming here, I was in Belgium and I went to MolenGeek. It's a startup incubator in Molenbeek. In that community, you see people who may not have gone to school, but when you give them access to digital skills, they're so hungry for it. People want to learn technology and be a part of it. That's the desire you see around the world when we travel, when I go to emerging markets. It's a big source of opportunity. And so I think it's our duty and responsibility to drive this growth in an inclusive way. That's what keeps me up at night.

If you translate it into the political field: in the First Industrial Revolution, we created new ideologies like Marxism, socialism, and capitalism. Today, I think the dividing line in society is between those who embrace technology and those who reject technology. So you also have an important role, as a prophet or missionary, to explain that all those new technologies in the end are beneficial for humankind.

You know, very much so. Search, at its heart, works the same for everyone. It does not matter whether you're a student in Africa or a professor at Stanford; search works the same way as long as you have access to a computer and connectivity. That's an equalizing force, and that's how we need to approach technology over time. I think that's the real opportunity we have.

So, if I take your role, you don't have to only sell a product or manage your, I don't know how many, people, but you have to be out front to explain continuously that what you are doing is good. Now, how much time do you devote to this aspect of your responsibility?

You know, naturally, with scale comes that responsibility. I do see a big role for us to engage externally and partner with other institutions, be it governments, regulators, non-profits, educational institutions...

The Forum.

The World Economic Forum. And engage.
And I think it's a big part of what we do.

No, thank you very much for having responded to those sometimes critical questions. We hope we can continue this discussion in the coming years, and we wish you all the best. And in the meantime, we will all use Google substantially. Thank you.

Thank you very much. Thank you.