Good evening, everyone out there in the virtual realm. My name is Mirja Mekkel, and I'm delighted to host this conversation tonight with the CEO of Alphabet and Google, Sundar Pichai. Good morning, Sundar.

Hi, Mirja. How are you?

I'm good, actually. I'm far away, but I'm good. It would have been much nicer to talk to each other in person, but hopefully that will be possible again next year.

Well, I'm glad we can do this virtually, and, as you said, hopefully it changes over the course of the year.

I think so. So maybe we just jump into our conversation about insights and ideas by connecting the last year and the year that has just begun. A year of restrictions lies behind us, the recession is still going on, and 2021 has just begun. How do you encourage the people around you, the teams at Google, maybe even yourself, to still trust in the power of technology, in the power of technological innovation? Give us a streak of light, please.

Yeah, it's something I think about a lot. You know, I'm a technology optimist. Not because I believe in technology, but because I believe in how people come together to use technology for good. It has always resonated with me from my personal experience, because technology created opportunities in my own life. And I see countless examples around the world of how technology is driving progress. COVID itself has been a moment of reflection. While you see all the hard impact around the world, you also see the role technology plays in how most of the economy is still functioning. With the vaccine development, you realize it's happening on a foundation of technological progress built over many decades. When you see advances like mRNA come to fruition, you know they are going to be applicable to other hard problems down the line. Within Alphabet, as DeepMind made progress with AlphaFold, you had a moment where AI, in a few years, made an advance.
You know, it's a 50-year breakthrough in terms of protein folding. And I can see its potential to streamline aspects of drug development. So it gives me hope that 10 to 20 years down the line, for the next challenge, it's a foundation which will help drive progress. And that's how the virtuous cycle continues. As a company, you know, I take a long-term view. Whenever you step back, you see the long arc of technological progress. That's what I talk to my teams about, and it gives me optimism through this very, very difficult time.

Let's break this down a bit. I'm totally with you regarding messenger RNA. I think that's really a breakthrough. I mean, a vaccine regularly takes five years and longer to be researched and developed, and now we already have it in different forms. On the other hand, what I observed is that artificial intelligence, for example, didn't play such a big role in the analysis of COVID in many regards. It was kind of underwhelming, let me put it like that. And now we have huge difficulties with the distribution of the vaccine. Couldn't AI help with that?

You know, we have technology tools to help today: cloud computing, machine learning, and algorithms all exist today. But I think the real potential of AI is still to come. We are still in the very early days of AI, and all of this will come into play in a 10-to-20-year timeframe. Having said that, I see countless examples. For instance, when you look at how we as a company dealt with making sure users got the right information around COVID, and with preventing misinformation, we did rely on AI algorithms to make a lot of decisions. It's a difficult area, and I think AI contributed significantly to the progress we made there. But, you know, the pandemic is a one-in-a-hundred-year event globally.
And you're right, I don't think AI played any significant role in helping us navigate it. What carried us was the foundation of all the technological progress over the past many years, and I think AI is laying the foundation for us to tackle future problems: our current problems like climate change, the next advances in health care, and so on.

If you imagine a pandemic in, let's say, 10 or 15 years, what would be different then, with the progress of AI you were just describing for our future?

You know, our ability to detect, plan, coordinate, and drive the next generation of therapeutics and vaccines, and so on. Everything will improve, assuming we as humanity use this moment to learn from it and apply our creativity and resources. I think we will definitely tackle it better the next time around. And I see all of this helping drive progress in a cross-cutting way. For example, even through this pandemic, we had a shortage of masks, and we learned the right mask guidance over the course of the pandemic. These are all simple things to get right the next time. And in how you accelerate therapeutics, I think AI will play a big role by the time we have to deal with the next one.

So it's not just about machine learning systems, it's still about human learning systems as well, to tackle issues like that.

Absolutely.

Now, you just mentioned the problem of misinformation around COVID, and there was quite a lot. I would like to take it to a bit of a meta level for a moment and also look at what happened during the start of this year. In many ways, 2021 has started very unexpectedly, or maybe not, but maybe yes. What went through your mind when you saw the pictures of a mob storming the US Capitol on January 6th?
It was a horrible feeling, you know. Never in a million years did I think you would see that in one of the best-functioning democracies in the world; peaceful transitions of power are a hallmark of democracy working well. It definitely shocked me the same way I think it shocked people around the world. And I'm glad the reaction was what it was. It's a learning moment for all of us. You realize the foundations of democracies are essentially fragile, and it takes all of us to understand that and reinforce it every way we can.

I think we can find a twist also in how the public sees the whole issue of misinformation. There was a study published today that says something like 59% of the American people surveyed in an online poll support breaking up big tech, because of the issues of monopolies, of dominance, and of misinformation. What do you make of that?

I think there are real, important societal issues to be debated: where do the boundaries lie with freedom of speech? What do we need for a well-functioning society? These are important questions, and no single company can solve them. I am glad there are conversations, including about potential regulatory approaches, to give clearer rules of the road. And I think large technology companies have an important role and responsibility to get this right. For us at Google, it's foundational to our mission. Every single day in Search, we get billions of queries, 15% of which we have never seen before, and we work hard to present the most accurate information possible. But having said that, the internet is so distributed; there are conversations happening across every aspect of the internet.
Even just yesterday, when you see the conversations in the US around the stock market movements, and where those conversations are happening, it shows that the internet is disseminating information at a scale never seen before. This phenomenon is bigger than any single company, and it is here to stay. As a society, we need to develop the next set of frameworks for us to function through that, and I think that's the debate we are in the middle of. We take our role as a company seriously, and rightfully, the focus is on the larger companies. But I would argue that it's an internet-wide issue, and I think the debate needs to go deeper.

I like the idea of working on a new framework. I think that is probably necessary, and it will take a while. And the whole issue of freedom of speech is a really difficult question we are facing right now. Because even though you rightly say that single companies can't decide about freedom of speech, about what stays on the internet and what is banned from it, in a situation like the 6th of January and the days after, some decisions had to be taken. So a very practical question: do you think Twitter CEO Jack Dorsey took the right decision in banning former US President Donald Trump from Twitter?

I mean, I do think platforms operate in a context, and every platform needs to have clear, transparent rules about what is allowed and what's not allowed. You have to do it in the context of what's right for your platform, so I definitely don't want to comment on another platform. What I would say is that collectively, all of us were dealing with an extraordinary moment, where we saw the potential for real harm in the physical world and the potential for incitement to violence. And free speech is foundational.
As a company, we have always stood for free speech. But there are boundaries, which I think we as societies need to agree on, and incitement to violence is an example of one of those boundaries. So that's the context in which I think we need to look at the moment. But to go back to my earlier point, I do think it's important for governments to debate this and give clear guidance. The answers are going to vary; there is no one-size-fits-all. I think it's important that different companies have different approaches; that gives people choice. But we need clear rules of the road, and there are a variety of approaches governments are looking at. For example, ensuring that policies are transparent, that decisions are explained, that people have a way to appeal those decisions, and that companies issue transparency reports overall: those are all examples of clear rules. Like we have done in privacy with GDPR, I think there are advances like that which we need to undertake.

Google just opened a second safety engineering center, in Dublin in Ireland, and to my knowledge it's the first one specifically focused on content responsibility. That sounds like a small step toward a clearer acknowledgement that the technical platform and the content provided on it can't be separated. Is it?

You know, I'm very excited about it. It's our second Google safety engineering center; the first is in Munich in Germany and focuses on privacy. This one is focused on content responsibility. It not only brings together our technical and policy experts and our trust and safety teams, but also allows us to engage with other stakeholders, including academic and government experts, et cetera. Content responsibility is an ongoing effort; we will never be done with this work. We take it very seriously, and it's one of our largest areas of investment. I gave the search example, and the same is true in YouTube.
It's been our biggest area of focus for the past four years. I look at many areas and I see the progress we have made. But I already mentioned the scale of the internet, and the context keeps changing; something like COVID didn't exist a year ago, right? So on any given day, the nature of the information and of the problems is constantly evolving. That's why we are setting up this center: it gives us a way to engage externally. I do think these are societal-level problems, and you're going to need multiple stakeholders; you need public-private partnerships to get this right. For us, we want to set up organizations and a way by which we can engage, particularly in Europe. So I'm really excited about it.

The political year is expected to bring something like a paradigm change in terms of tech governance. What do you anticipate from the new Biden-Harris administration?

You know, it's definitely early days. As a company, we've always been committed to engaging with the government, regardless of the administration; we want to do our part. For example, on COVID, we've been focused on what our role can be in economic recovery, supporting small businesses, and helping people gain digital skills. We have announced initiatives around racial equity and around vaccination: how can we help in vaccination? So we are going to engage on areas where we can find common ground and advocate for it. Sustainability has been an important area of focus for us, and I've been encouraged along certain dimensions. I think the transatlantic alliance is something that needs to be invested in and restored, and that's really important: when I look at challenges like sustainability or AI regulation and safety, we are going to need more global cooperation. So I'm encouraged by the early signals from the administration.
But the question of how the Biden-Harris administration will deal with big tech is, of course, not just about cooperation. We are talking hard facts there, in terms of competition law and issues of monopolies. How do you prepare for a time that might be much more uncomfortable for you as a company, as well as for other big tech companies?

You know, I've long held the view that it's appropriate for democratic societies to hold large companies accountable, and this is not new to us with the change of administration. We have had similar processes and procedures in the context of previous administrations as well. I'm glad there are proper processes; they allow us to engage and explain what we do. As a company, we've been focused for 21 years on making sure we are innovating, contributing value to society, and driving products and services that benefit users. As part of that, we will engage constructively. There may be feedback on changes we need to undertake, and we feel it's our responsibility to do that as well. So I don't necessarily see it as a new thing. It is something that definitely comes with scale, and I think it's our duty to engage with regulatory bodies, with democratic societies, and answer the questions they have. I see it as a natural part of our evolution.

Okay. We have talked a lot about stakeholder capitalism at this World Economic Forum. It's not a new concept, but especially in these days, when we are recalibrating the values of how we work together and how our economy thrives, it's a concept to be reconsidered. And I would like to connect it to the fact that a few days ago, Google workers across the globe announced Alpha Global, an international union alliance, to hold Alphabet accountable, in a way.
Do the employees have the feeling, or do you have the feeling the employees might think, that they have to take the matter of stakeholder capitalism into their own hands now?

As a company, we've always held the view that employees are an important constituency for us. We deeply care about our employees, and giving voice to employees is a core part of our culture. We engage with many employee resource groups in many forums, including in Europe; we've had European works councils for a long time. So this is not new for us. Listening is an important part of our culture, and we want to work with our employees to drive progress together, be it on areas like sustainability or diversity and inclusion. I continue to believe that companies which are able to engage with their employees and involve them are stronger for it. That's how I think about it.

I would like to press you on that a little bit more with one more question, because Google has been struggling with a series of employee issues these days, the one with the most public awareness being the, depending on the perspective, resignation or firing of Timnit Gebru, a well-reputed researcher on Google's ethical AI team. How seriously does Google take the critical questioning required to develop a responsible AI? And how independent can research within a company be, to really address these questions?

I mean, AI is one of the most profound things we are working on, and developing it responsibly is one of the most important things for us to get right. As a company, we have publicly committed to AI principles. We are one of the leaders in the industry in terms of the research we do around AI ethics, and we published over 200 papers in the past year or so.
But you're raising an important question. As a company, we are committed to doing the best we can in terms of approaching our work in a responsible way and pursuing AI ethics. But there is an extraordinarily important role for regulation, and I've said AI is too important an area not to be regulated, and for independent institutions, academic and other non-profit institutions, to do this work. I don't think companies alone can get this right. Part of the reason you see a lot of debate is that, as a company, we engage and we're a lot more transparent than most companies, so you do see us in the middle of these issues. I take that as a sign that we allow debate to happen around this area. We need to get better as a company, and we are committed to doing so. But when I look at the state of how we approach this, I am confident about the effort we are putting into it and our commitment to do better here over time.

Maybe let's use the last minutes to take a look at new technologies and how they might help our economies and our societies to thrive. At the end of November last year, Google's DeepMind, you mentioned it at the beginning of our conversation, announced a gigantic leap in solving protein structures: the deep learning software AlphaFold manages to accurately predict protein structures from their amino acid sequences. Can you give us an example of the practical value of this breakthrough? What will we do with that?

I mean, today drug development is an extraordinarily involved process, both in terms of time and in terms of resources, people and money. What something like AlphaFold offers is the chance for biotech and pharmaceutical companies, anyone in the drug development process, to become much more efficient, and to find the right protein structures for a target problem, effectively streamlining the development of a solution. So I view this as very profound.
But it's one step of a complicated drug development process. That's how advances happen: we made progress with messenger RNA technology, and today we are deploying it in the context of a pandemic. So I view it as a foundational advance, and I see countless examples across the company. About 18 months ago, we demonstrated quantum supremacy as a company, and I think that's another example of a foundational advance, one which may help us model climate better in the future. Progress takes time; all of the internet today is built on foundational technologies which were developed 30, 40 years ago. And that's how we achieve progress.

I still don't like the term quantum supremacy; it sounds like a very military phrase. I prefer quantum advantage. But let's stick to what might come out of it. When will this technology, quantum computing, be available for a broader market?

I think what you will see over the next three to five years is that a few places will give outside companies and developers access to a quantum cloud, if you will, for them to tackle early research problems. So we are definitely early in the cycle. The demonstrations show the potential of what's possible, and that's why we are all excited about it. But you are on a 10-to-20-year journey of really bringing it to actual practical use. The next three to five years will be exciting, though, because you can harness the creativity of the world. That is, you can let other developers test quantum applications by providing the quantum infrastructure we are building, with APIs, to the outside world. I'm really excited about that.

Are you worried that Chinese researchers announced at the end of last year that they had reached quantum advantage as well? Will that be the next battle of the tech titan nations, somehow?

There is no doubt to me that across the world there is going to be progress in areas like AI and quantum. These are foundational areas.
China, with its technology resources and the investment from its government, is definitely going to make progress across these areas as well. Which is why, to my earlier point, we are going to need better global frameworks for AI safety. Just like, to tackle something like climate change, we have the notion of the Paris Agreement: we need countries to come together, because no single country can solve a planetary issue. For issues around technological progress with AI and quantum computing, we are going to need additional frameworks like that to ensure safety for the planet. So I think it's important to acknowledge that it's going to be global progress, and we should think early and come together to solve the bigger, longer-term safety issues from these technologies.

So we somehow need a global quantum regulation framework. Who is going to provide for that?

It's not just quantum, it's around AI safety as well. Just like in the past, we have had other frameworks globally, and they are definitely not easy to do. It needs willing countries and governments and a set of committed people to drive progress here, and it will be a multi-decade effort. But I expect, and hope, that in the G20 and the OECD, which are the forums where discussions around this are happening, progress will be made. And again, you're going to need public-private partnerships to drive all of this forward.

Let me challenge you with one more meta perspective regarding quantum computing. Quantum computers work by the principle of entanglement from quantum mechanics, which means that each particle of a pair or group cannot be described independently of the state of the others. What I thought is: isn't that kind of a wonderful metaphor for the world we are living in, and might it teach us something about humbly accepting complexity, accepting ambiguity, in the state of the world we are living in right now?

Definitely.
Quantum computing is one of those wonderful things where the more time you spend trying to understand it, the more humbling it is, and you get very philosophical in nature. None other than the great Richard Feynman said that anyone who claims to understand quantum mechanics doesn't really understand it; I may be getting the exact quote wrong. But I think there is definitely something to the notion that the physical world, as we perceive it, doesn't represent the underlying reality, and as humanity we have a long way to go before we understand the true nature of things in a deeper way. What excites me about quantum physics and quantum computing is that they give us a chance to get closer to that understanding one day.

One last question. Looking at your background, what is that little dinosaur right behind you on the shelf?

Oh, when we were developing Chrome, the browser, we built in the dino game you could play when there was no internet connectivity. It's something we did in our early days, and so it's a little bit of an Easter egg that I carry with me at times.

I thought it might mean something like: never look back to the things that have gone extinct forever, but look to the future, maybe.

I think that's a good perspective to carry, too. So, we're running out of time. Thank you very much for this conversation and the insights and ideas, Sundar. It was a pleasure. And thanks to everyone out there in the virtual realm.

Thank you. Thanks, Mirja. And thanks to the World Economic Forum for organizing this. I appreciate it.