This special session, and I have to say maybe because my heart is ultimately an engineer's heart, is one I am always looking forward to; it has become a tradition. I don't have to introduce, of course, Satya Nadella. We have this dialogue trying to get an understanding of what's happening, and you are the leader of probably one of, if not the, leading tech companies. You will enlighten us, certainly, so we will have a dialogue. But I want to thank you also, here in this context, for a great cooperation that we have together with Accenture. I know two years ago we talked about it; we had a discussion about how we could, in some way, democratize the World Economic Forum and allow people who cannot come to Davos to participate. And of course the metaverse, or the three-dimensional virtual room, however you want to call it, I think provides a tremendous opportunity. So we started our cooperation to build the Global Collaboration Village. We started this cooperation just two years ago. At the last annual meeting we showed the first demonstration, and now it has become reality. If you are ever interested, please go to the Global Collaboration Village and see the tremendous potential. We actually have the first inhabitants. We have four or five of our partners, plus an international organization, which have established their presence now in the Global Collaboration Village to reach out, not mainly to customers, but mainly to the public, to talk particularly about what they are doing in the social area. So Satya, when we met last time, generative AI was a very young baby. Now, I would say, it has become a teenager, very fast, from a small baby. It has taken less than a year. How do you see the situation?

First of all, it's wonderful to be back here at the World Economic Forum, Klaus, and also it's unbelievable to see the vision you had for the Global Collaboration Village from two years ago take shape.
You're right. I mean, the interesting thing is, I distinctly remember November of '93 is when, I think, Mosaic first came out. And obviously, for me, that was a very big event. I joined Microsoft in '92, and the web changed a lot. And so November of 2022 is when ChatGPT, I think, really helped all of us, perhaps for the first time, relate to this new generation of AI. And since then, as you said, things have been scaling. This rate of diffusion across countries, across industries, has been really fast and furious. And it's just been fantastic to see. For me, by the way, the first time I became a real believer that something had drastically changed was when I saw GitHub Copilot. It was probably six to eight months before even ChatGPT when I started seeing, for the most elite knowledge work there is, software engineering, a new tool that changes it: in fact, the joy of software engineering was back and the drudgery was out. That's what made me a real convert, that this is pretty magical. And since then, we have obviously launched Copilot for broad horizontal work and frontline work. We have Copilots for security operations. You now have Copilots in healthcare, really ensuring that you can reduce the physician burden when it comes to having the dialogue with their patients. The UAE has rolled out a personalized tutor for every student in the country. And so this rate of diffusion, this ability to take... in fact, I think Bill first talked about information at your fingertips at Comdex in '93. And this is more about intelligence at your fingertips, or expertise at your fingertips. That's, I think, the era we are definitely in. And I think '24 will probably be the year where all of this will scale.

Satya, I have to confess, some of the introductions of people which I had to write, I wrote with ChatGPT. But don't tell anybody. There's a lot of discussion about AI during this meeting.
And we talk about impact. We may later talk about productivity, particularly in the context of the economy. But what is overlooked in this discussion? We talk about impact on skills and so on. What is overlooked, in your opinion?

I mean, it's not overlooked, but I think what is salient, obviously, is... I talked a lot about what it's going to do to horizontal knowledge work and frontline work. But what AI will do to science is perhaps the most interesting thing to me. In fact, just last week we announced something which I sort of felt could be done, but I had not understood that it could be done. We took one of our models called MatterGen, which is a generative model to generate new molecules and materials. And we put it through an entire round trip where we came up with new molecules for a new material, worked in collaboration with one of the national labs in the United States, the Pacific Northwest National Laboratory, and figured out how to produce a new battery that's got 70% less lithium. That's just phenomenal. When we think about the energy transition, it's about taking 250 years of chemistry and somehow bringing it down to 25 years. So this is a proof point of that. The same thing is happening in biology. I see Jim in the front row here. Think about what we're doing even with Paige and what we can do with cancer detection, or what we're doing with the Broad Institute in biology, to be able to use AI to simulate molecular behavior. So I think that science is probably the place where we will start seeing real acceleration. Up to now, the digitization revolution has brought new tools to science but has not fundamentally accelerated science. But if you can fundamentally accelerate science, the cures to diseases, the energy transition, fundamental new materials science, all of these, I think, are going to be pretty, pretty profound.
Now everybody talks about AI, but actually there are many other technologies in the Fourth Industrial Revolution, and I think it's particularly the combination of AI with some of those technologies. What other technologies create this progress for society, in your opinion?

On the technology front, I'm always anchoring back to three things. One is, when it comes to the core compute infrastructure, we just need more of it. We have the von Neumann machine that still rules the world, and the question is, can we birth the new quantum revolution? So I'm always excited about quantum. In fact, some of what we're seeing is AI as the emulation layer for what is going to be the simulation layer, which is quantum. So if I think about these two things, that's very powerful. Quantum is one, AI of course. The other one is mixed reality, presence. Embodied AI is the other way to think about it, which is whether it's sensors on us, which is a little more of the devices like VR, AR, mixed reality, or humanoid robots is another one, or automobiles that are autonomous. So I think of these three things, compute, AI, and fundamentally autonomous and mixed reality devices, as perhaps where everything is going to come together to create, I think, the platforms of innovation.

Would you agree, when we look back in history, a similar invention was probably the invention of printing, and it created the Renaissance. Do you feel that those technologies can really create a new renaissance for humankind?

I think that the general purpose technologies, that's the power of these, right? Even for me, one of the things that's a real privilege is to be able to come to a forum like this, to have a chance to meet with people in retail, people in pharma, people in banking, people in every sector of the economy across every region of the world, and say, wow, digital technology is being used in profound ways. That means this is a general purpose technology.
So any time there are real breakthroughs in general purpose technology and the frontier is shifted, I think that broad ability to have a renaissance, right, where you'll have better medical outcomes, better educational outcomes, better products and services in our lives, that abundance, that innovation, is I think what drives human societies forward.

During the meeting here, and of course before, I had quite a number of discussions with heads of government and state, and particularly if we take less developed countries, they are afraid that this could become a new divider and create a new tension between North and South. Do you feel along the same lines?

I think it is something that we have to be very, very mindful of, because the last thing the world needs is technologies that create more of the divide, right? I mean, if anything, my hope for sure is centered on realizing what we just talked about. Think about it. You now have a technology like GPT-4 that essentially can be used to create a personal tutor for every student in the world. It's absolutely economically feasible, even with just the government spending that's happening in the global south. So it's not just in the UAE; it can happen everywhere. Same thing with medical advice. You can have in the pockets of every person in the world, all eight billion of us, better medical advice, better advice on how to exercise your rights or your ability to get assistance from your own local governments, and so on. So I think really the potential is there. There are always barriers, right? One of the barriers can be access to computing, and one of the things I feel great about is the fact that over the last 15 years, cloud and mobile have become ubiquitous. That's one of the reasons why I feel, Klaus, that this generation of technology will diffuse a lot faster.
What may have taken 15 years, or 10 years depending on how you count with the cloud computing era, may take five or even less, and this will cover even the global south.

Now, what do we really need to ensure a better tomorrow? We have as a theme here rebuilding trust, and I mentioned this morning in my opening speech that the fast speed of technology also leads to fear, and maybe to pessimism. And it's a source of what we see today in terms of polarization of opinions and so on. How can we make sure that we get it right? What would be your most significant advice?

Yeah, to me, I think the thing for us as a digital technology industry, the biggest lesson learned perhaps, is that we have to take the unintended consequences of any new technology along with all the benefits, and think about them simultaneously, as opposed to waiting for the unintended consequences to show up and then addressing them. And I think that's the fundamental change in the last 10 years, because I feel like our license to operate as an industry depends on that. I don't think the world will put up any more with any of us coming up with something that has not thought through safety, trust, equity. These are big issues for everyone in the world. And this, by the way, is not new for many other industries, but it is a little new for the tech industry. And we have to rise to the occasion, if you will. In that context, I'm very optimistic because of the dialogue that's happening. People in our own industry are stepping up to say, okay, here are the ways we are going to raise the standards on safety. For example, the amount of time that was spent on alignment work and safety work before GPT-4 was released: that is a substantial investment that OpenAI and we made, and that's in fact becoming the norm across all foundation models. And that's great to see. And also, of course, it's not just being left to the industry.
The governments all over, whether it's the executive order in the United States, the safety summit we had in the U.K., the European Union cares deeply about it, China cares about it. So everybody is converging. That's also good to see, the world coming together and saying we need new technology, we need some guardrails, and we need norms of how we deploy this technology. And that combination of private innovation with a safety-first approach to engineering, I would call it, and then regulation that allows us to ensure that the broad societal benefits are amplified and the unintended consequences are dampened, I think is going to be the way forward.

But it will be very important to develop global regulations, and in the present fragmented situation it will be very difficult. We see already now Europe and the U.S. have different approaches. I was very happy to hear the Chinese premier this morning making a commitment also to a global regulatory approach. Is there any realistic chance that we may see something similar to the environmental area, where we have the COP or the International Energy Agency? Do you feel, A, it is desirable, and B, is it realistic?

I mean, I think it's very desirable, because at this point these are global challenges and require global norms and global standards. Otherwise it's going to be very tough to contain, tough to enforce, and tough to, quite frankly, move the needle even on some of the core research that is needed. But that said, I must say that there seems to be broad consensus that is emerging. If I had to summarize the state of play, the way I think we're all talking about it is that it's clear that when it comes to large foundation models, we should have really rigorous evaluations and red teaming and safety and guardrails before we launch anything new. And then when it comes to applications, we should have a risk-based assessment of how to deploy this technology.
If you're deploying it in healthcare, you should apply healthcare regulations to AI. If you're deploying it in financial services, you should apply, again, the financial risk considerations. So I think that if we take even something as simple as that as a basis to build some consensus and norms, I think we can come together. So I'm hopeful. I don't know whether there will be a new agency. But at least in all the capitals that I'm at where people are talking about it, they're not talking about it differently. They're all essentially talking around the same set of issues.

But sadly, I do think that the politicians... I mean, as an engineer, I'm always trying to be at the forefront of technology and so on. But I have difficulties capturing what artificial intelligence, particularly the new forms, really is. I had to learn to use, let's say, ChatGPT not just like I use a search engine, but to really use it as a colleague and to ask questions. So I ask a very, how shall I say, pertinent question: do you think politicians have the sophistication to understand, and in such a way regulate, AI?

Look, at the end of the day, I think nothing can outstrip our ability to govern it, right? At the end of the day, the biggest lesson of human history is not to be so much in awe of some technology that we feel we cannot control it, that we cannot use it for the betterment of our people. So in that context, we need our politicians to lean in, and I see it, right? The other thing, Klaus, you said, which... if I look at the 70-year arc of computing history, this is the easiest technology, right? In other words, the breakthrough here is, in fact, that for 70 years we've been striving to find the most natural user interface, so that computers understand us, not us understanding computers.
So I don't think it's really about the politicians; it's more that the technology needs to be simple enough, straightforward enough. It may be very high tech inside, but it should be governable, and the principles of governance should be clear, and I'm very optimistic that that will happen.

So regulating the applications... I mean, these are different approaches: regulating more the input side, and regulating, on the other hand, more the applications. What would you argue for?

I would do both ends of it, right? Because at the frontier of it, for example, there are two types of things. There are risks that are here and now. For example, you can take something like deepfakes and what they could do to the democratic process, or you could take bioterrorism. These are here-and-now issues, and those things should be dealt with by regulation of the application domain and the dissemination of information and what have you. Then there is the existential risk: this is, after all, a self-improving technology, or if it does become a self-improving technology and we lose control. That's the control problem and the AI takeoff, and that's viewed as the existential issue. And so the bottom line is, in order for us to ensure that that doesn't happen, you have to have a set of safety measures around it, and regulation around it, before anybody uses lots of compute to produce something. So those are, I think, the two ends of it.

If everybody uses a lot of artificial intelligence, don't you see a limitation given by the energy consumption which will be necessary? So we will have a curve pressuring down, because it becomes environmentally unsustainable.

Yeah, it's a great question, because there are a couple of ways at least I come at it. One is: what is the compute draw of global power? It is around two to three percent today. Let's say it doubles. Then let's go to the other side of it.
The output of compute, if you measure it, even take something like artificial intelligence in terms of cost per token: Moore's law is very much alive when it comes to AI. In fact, the prices are dropping like in the best days of Moore's law. So that means the world is benefiting from the most malleable source of input as a factor of production, where the costs are coming down. Then let's combine it with what I said previously, which is: if you want to compress 250 years of chemistry to 25 years, find new materials, find new breakthroughs in biology, this is the input. So this is going to be the one that will really help us create that abundance, that acceleration and what have you, and do it, by the way, in the most efficient way possible. Then the other side of it is upstream. For example, these are all greenfield projects. Today we are one of the largest buyers, if not the largest buyer, of renewables when it comes to all our data centers. We are stimulating, in fact, the demand for wind or solar or nuclear and everything in between. And so there's some of the innovation happening around sustainability. By the way, it's not just on carbon; it's carbon, water and waste. And so I feel that some of what we will stimulate is really the energy transition, which will then power what is the most efficient factor of production, which will then lead to that fundamental acceleration in productivity.

Satya, when we had the emergence of the internet 20, 25 years ago, people spoke very much about the impact on the economy. And at that time many people argued this would increase productivity. But we have seen sluggish productivity growth. Now we have the same situation, many people saying we will have a great increase in economic productivity, which will of course drive global prosperity. Will we be positively surprised, or will we be disappointed, as we were to a certain extent with the internet?
Yeah, this is something that I'm very, very, let's say, both passionate about and very grounded in. Because to your point, right now, as we speak, inflation adjusted, there is no economic growth in the world, I would say. And that's a pretty disappointing state. In fact, the developed world may have negative economic growth. And so in a world like that, we may need a new input. And that's why I'm very optimistic about AI being that general purpose technology that drives economic growth. By the way, here's the interesting economic fact. Even Robert Gordon, who has written most eloquently the critique of information technology and productivity, will acknowledge that PCs were the last time actual economic growth came about. So the last time it showed up in the productivity stats was when PCs became ubiquitous. In fact, I think of this AI technology very much like the PC generation. Because if you think about it, I shudder to think, Klaus, I don't know how the heck we managed to do work before PCs. Think about doing a forecast as a multinational company. Before email and spreadsheets, I don't even know how we would do forecasting. Somehow I guess we did it, but now we do forecasts and the business process changed. Similarly, I think in this copilot era, as it spreads, what's going to happen is work, work artifacts and workflow are fundamentally going to change. And that is going to lead to economic output. That will also lead to that scientific acceleration. So these two things, I do believe, should get us back to inflation-adjusted 2% to 3% economic growth. That has to be our target, and to me that's the bar. So to your point, look, over the last 15 years there have been arguments about how you measure this. But one other observation I'd make is that the mobile phone revolution in particular was phenomenal. It changed a lot of the consumer workflow. It changed consumption patterns.
After all, we were able to consume more video. We were able to consume more social media. Lots of interesting information was available to us. Lots of news. Whereas now, again, if you think about it, you're back to creation. In fact, it was fascinating. One of my colleagues was telling me about her four-year-old son, who one weekend said, Mom, I need more Designer time. So the kid wants to create more designs and prompt more designs. That is the beginning, I think, of that productivity revolution.

Yeah, I just want to take you up on that. I have a 15-, 16-year-old grandson who told me, look, when you are back from Davos, I want your advice. What should I study in order to be a big guy in artificial intelligence? What would you recommend I tell him?

That's a very key word, big guy. The beauty, quite honestly, of artificial intelligence is you can study whatever. In fact, one of my pet things: I'm a trained electrical engineer. When I was studying electrical engineering, I never understood Maxwell's equations. But now I finally get it. Because, guess what? In fact, I got it before the artificial intelligence, because somebody wrote a lovely website in JavaScript which visualized Maxwell's equations. Now anyone can pick up any field of study, any science, anything, and see that AI can be really helpful in helping them learn the most difficult concepts.

If I switch for a moment, we are coming to an end, but you know the soul of the World Economic Forum is stakeholder capitalism. Now there have been some counter-movements. What is your present thinking about stakeholder capitalism?

I mean, it is a fact. The way I've thought about it, the way you've conceptualized the WEF, is the social contract. Take Microsoft: our license to operate in the world comes from us finding profitable solutions to the challenges of people and planet. I picked this definition from Colin Mayer at Oxford. I like that succinct way of saying what the social purpose of a corporation is.
I think of the two key words. The profitable piece: after all, the best known mechanism we have found to allocate our scarce resources, to create profit, is the best way to generate innovation. I respect that. But then the second part is the other key phrase, which is solutions to the challenges of people and planet. What do you create, ultimately, if it is not a solution, not just for your investors' returns, but a solution for the challenges of people and planet? That's the way, in fact, our investors... So to your point about multi-stakeholderism, it's not just a nice thing to do. Our investors should care about multiple stakeholders, because that's the only way they can get long-term returns and the license to operate for the company exists. So that's the view I've taken. And I'm glad you really brought about, I would say, that awakening in business and, quite frankly, broad society, of thinking about all the stakeholders. One of the other things I've realized in my 10 years as CEO, perhaps, if there's one lesson I've learned, it's not about one stakeholder at a time. It's about all the stakeholders all the time. And it's tough. It's not easy. But that is the job. Because if you don't do that, you are not serving the one stakeholder that you care about, which is the investor.

Yeah, it's not an easy role. You just mentioned you have been CEO now for 10 years. Now, the context has changed so much for leading a company. And this means also you have to adapt your leadership capabilities. So if you compare yourself, Satya, with what you were 10 years before, how would you say you have developed new facets to your leadership style which are necessary in today's context? And what would you advise the audience? What can they learn? I mean, you are one of, if not the, most successful business leaders in the world. What should people learn from you?

That's a great question, Klaus.
The honest answer is I haven't really reflected on the 10 years, in some sense, as much as I've been reflecting on this being my 32nd year at Microsoft, and the second year of AI. And my 32 years have been punctuated by three other real paradigm shifts that I've been privileged enough to participate in. The PC client-server was the first, the second was the web, the third was mobile and cloud, and now the fourth is AI. And so in some sense, I'm trying to go back to year two of the other three. I'm trying to relearn how I should operate when it's year two of any paradigm shift, which is different, right? Because when you're building something new, it requires you to have a very different profile of what is risk, what is scale, what is the investment required. It's always difficult. In fact, in our business, there is no franchise value. So you have to be all in on the new while at the same time maintaining what currently runs the business. And I think that's a very difficult task. We've done it, I would say, successfully, but there are no guarantees. And so with that sort of humility, I would say, that's the thing. You can't have hubris, because that's what's brought down civilizations and companies and people, from ancient Greece to modern Silicon Valley. But you do need to have some confidence that you've done it before, and some humility that you can learn something new. And so that's the posture I would take.

I would add maybe that the capability to think conceptually has become much more important. And I feel the horizon of the concepts has so much widened. And that's probably, I mean, that's what I'm experiencing in the forum, where we deal with all the issues. It was easy 20 years ago, and now it's dealing with complexity.

Dealing with complexity.
And someone was mentioning to me in the forum, it's true, I mean, even when I look at, let's face it, the last 10 or 15 years with the interest rates where they were, we have all forgotten, I myself have forgotten, when was the last time we had a recession. It's been a long time. And that is true in the United States. But to your point, somebody said, go send your management teams to many other countries that have dealt with this all the time. And so to some degree, learning from what is happening around the world is another conceptual understanding that I think we can all take.

To finish our session, would you agree that we moved from the agricultural to the industrial age, and then we moved to a service-dominated economy, and now we move to an intelligence economy, but much faster than the other transitions took?

I think so. I think they all build on each other, obviously, but the thing that I feel is that we've never had a broad general-purpose technology that diffused to all corners of the globe and created abundance equally. That's the dream. What will the world look like if we were able to solve global problems as one community? We have a shot at it.

So we should take technology in the context of our theme of the annual meeting: technology can help to rebuild confidence and trust. Thank you very much.