So, we now come to our first keynote speaker, Professor Daron Acemoglu, who will discuss the topic, Can We Have a Better Future for Work, Wages and Democracy? Professor Acemoglu is a professor at the Massachusetts Institute of Technology; he has authored various books on related topics and received a number of awards for his work. Viewers can read more details about our speakers on the ESRB website under Conference Speakers. And now, Professor Acemoglu, the floor is yours.

Thank you very much. It's a true pleasure to talk to the European Systemic Risk Board about a set of issues that I think many of you have thought about a little bit, but that I believe need to be more of the focus of questions about systemic risk: the effects of new digital technologies, especially artificial intelligence, on broader society. There is an incredible amount of excitement about AI, and there is no doubt that there have been huge advances in both applied and theoretical research in AI. The fruits of these are visible in the fact that AI is now permeating many different parts of the economy, the financial system, communication, entertainment, health and other areas. There is a huge amount of optimism as well. For example, the Economist magazine wrote recently that fears of job losses from AI are completely exaggerated: we might get fewer checkout attendants at supermarkets but more massage therapists, and there is nothing to worry about, because productivity growth is going to make all of us beneficiaries of these new technologies. McKinsey's Davos statement for 2022 was in the same vein, arguing that AI is going to create a transformed manufacturing sector and help build fulfilling, rewarding and sustainable careers. And there is, of course, huge potential for AI technologies in risk regulation, financial transactions, health and many other areas.
But when you look at the way that many industrialized economies are working today, there are also clear reasons for concern. Even well before the pandemic and before the rapid spread of AI, not all was well. In many advanced nations, for example here I'm showing the data for the US, the labor share has plummeted: much more of the income in the economy is now going to capital rather than labor. Again focusing on the US, this is not just a story of capital versus labor. There has been a systemic increase in inequality within labor groups, and in a way that looks very jarring. In this picture I'm showing the evolution of real wages for 10 demographic groups in the United States, all the way from high school dropouts to workers with postgraduate degrees, men and women. What you see is that in the 1960s and 70s, and the same was true even more in the 1950s from different data sets, there was a period of shared prosperity in which the real wages of all 10 demographic groups were growing in tandem, at about 2.5% a year in real terms. Then from around 1980 onwards, you see a complete reversal: a much larger increase in inequality, but even more remarkably, a decline in the real wages of low-education groups, for example high school graduate and high school dropout men, and the same for women. Across the industrialized nations, this picture is fairly unique. No other country has labor market regulations protecting low-wage workers as weak as those of the United States, but the overall trend towards inequality, both at the top and in the middle of the distribution, and stagnant wages at the bottom, is fairly common. So if you look at the omnibus measure of inequality, the Gini coefficient, the US is ahead of other countries, a leader in inequality, and has had a very large increase in inequality since the early 1980s, but the same is more or less the case for other industrialized nations.
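[Editor's note: the Gini coefficient mentioned above can be computed directly from its mean-absolute-difference definition. The sketch below is illustrative only and is not part of the talk; the example incomes are hypothetical.]

```python
# Illustrative sketch (not from the talk): the Gini coefficient computed
# from its mean-absolute-difference definition,
#   G = (sum of |x_i - x_j| over all ordered pairs) / (2 * n^2 * mean).
def gini(incomes):
    n = len(incomes)
    mean = sum(incomes) / n
    # Sum of absolute differences over all ordered pairs of incomes.
    total_diff = sum(abs(x - y) for x in incomes for y in incomes)
    return total_diff / (2 * n * n * mean)

# Perfect equality gives 0; concentrating all income in one hand
# pushes the coefficient toward 1.
print(gini([1, 1, 1, 1]))    # 0.0
print(gini([0, 0, 0, 100]))  # 0.75
```

The pairwise definition is O(n^2) but makes the meaning transparent: the Gini coefficient is the average gap between any two people's incomes, scaled by twice the mean income.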
Another very important parallel across these countries is that the employment structure has evolved in a very similar way. Again, as the Economist and McKinsey's Davos statement pointed out, this may or may not be a cause for concern: perhaps some jobs are going to disappear and new jobs are going to come. But the extent to which the employment structure has changed is quite remarkable. If you look at all countries in the industrialized world for which we have data, and you look at the employment structure by occupation, separating occupations into the lowest-paying group (very unskilled service work, home work, protective services, etc.), the highest-skill work (professional jobs, managerial jobs, technicians, engineers) in green, and the middle-paying ones (clerical work, production work on factory floors) in red, what you see is that these middle-paying occupations, the middle class, have contracted in all nations, not just the United States. These are all pre-AI developments, and my own work has emphasized that they are intimately linked to automation trends, in particular both digital automation in offices and robot-based automation in factories. Another slide I was going to share shows, for example, what happens across the United States to employment and manufacturing employment in local labor markets, commuting zones, where there is more intensive adoption of robots. You see a decline in both employment and wages. It's emblematic of the heartland of heavy industry in the United States, like the Midwest, but it's present throughout the economy. What this chart shows is the contribution of pre-AI automation to inequality, by looking at the real wage changes for detailed demographic groups distinguished by education, gender, age and ethnicity, and, on the horizontal axis, at whether the tasks that they used to perform in 1980 have been automated.
So for example, a number of 25 or 30% shows that for that demographic group, many of the tasks in which they used to specialize have been automated. On the left-hand side, you see that before the wave of automation, there wasn't much of a relationship between what was happening before the 1980s and this post-1980 automation. But on the right, you see a very sharp negative relationship, indicating that much of the inequality that I depicted in the first few slides in the US is related to automation. So this is again mostly before AI; it's taking place with more basic digital technologies in offices. But it is a worrying pattern, especially for any type of shared prosperity, and it brings all of the risks that high levels of unemployment do, as do high levels of inequality and low levels of opportunity. Now the question is, what is AI going to do to this picture? And can we see the footprints of AI in the labor market already? In the next slide, what I show is that if you look at various measures of AI, and here I'm focusing on AI-related job postings in the United States, you see there's very little of it in 2007, 2010, 2011 and so on. But around 2015, there is a huge pickup in AI activity. And it's exactly as AI proponents are arguing: it's quite widespread across industries. You see it in pretty much every broad industry. Now the question is, what is AI going to do? How is it going to reorganize jobs? Is it going to create opportunities, and so on? In the next slide, I talk briefly about this, because one promise of AI is that it could in fact create new human tasks, complementing workers, facilitating trade and matchmaking, and reorganizing production. And in fact, there were many early pioneers of digital technologies, such as MIT's Norbert Wiener, Douglas Engelbart and J.C.R. Licklider, who advocated this type of perspective, thinking that the biggest use of machines would be to increase human capabilities.
And there were significant advances in this direction, including the mouse and hypertext, which Engelbart introduced in this effort, and many others. But if you look at the recent past, much of AI activity within the economy has focused on automation and data collection. On the automation side, one way of seeing that is in the next slide, where you can look at which companies, which workplaces and establishments, are adopting AI technologies. What these three charts do is divide US establishments into four quartiles. The blue and the green are the ones that have a lot of tasks that can be automated by AI, and the red and the yellow don't have many, using three different measures of which tasks can be automated. What you see is that the bulk of the increase in AI comes from businesses that seem to want to automate things. And in fact, in the next picture, you see that those same businesses, again shown in green and blue, slow down their hiring. In fact, they stop hiring more workers from 2015, whereas the establishments not much affected by AI continue to increase their hiring. This is true at a much more detailed, granular level, but I'm showing it in broad brush to keep the presentation simple. So where does this leave us? I think it leaves us with the question of whether we are using AI to continue a trend that had started earlier, automating excessively. And there are reasons for worrying about excessive automation, because if you look at the direction of AI, it's very heavily influenced by the priorities of a handful of companies. According to McKinsey, two-thirds of all spending on AI comes from a handful of US and Chinese companies. Many companies are excessively focused on cost cutting. And there are also other factors; let me mention two of them. The tax code in most of the industrialized nations today powerfully supports capital, but not labor.
So here I'm showing the data for the US. If you look at the marginal tax rate that a worker and a firm would pay, it's about 25 percent, and it's remained at 25 percent for over 40 years. But if you look at the tax that a company would pay for installing capital to replace workers, that used to be around 15 to 20 percent, but it's now less than 5 percent. So there are quite significant financial inducements for companies to invest in machines instead of workers. To add to this, the whole field of AI has gone in a direction where things like reaching human parity, performing human tasks and becoming more human-like are the most valued and prized achievements. That provides another impetus for going in a more automation-heavy direction. If we go to the next slide, this picture adds one little piece to that narrative by emphasizing the role of cost cutting. Cost cutting has become a major concern for many corporations in the United States and in Europe. In recent work, for example, with my co-authors Alex He and Daniel le Maire, we look at what happens in Denmark and the United States when CEOs trained in business education, for instance with an MBA degree, take the helm at a company. What we find is that in both countries, one of the major things, actually the most important thing, that they do is slow down wage growth and reduce the labor share. And a lot of that we trace to the priorities that these new managers bring to the job: making the corporation leaner, re-engineering the corporation, looking after shareholder interests and so on. So there is a synergy here between the tools that are made available by digital technologies, including AI, and what these new managers are envisaging as the most interesting direction. Going on to the next slides, you see that these firms are not much more productive, actually; it's just a wage reduction.
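[Editor's note: the tax asymmetry described above can be made concrete with a back-of-the-envelope calculation. The 25 percent labor and 5 percent capital marginal rates are the US figures cited in the talk; the identical pre-tax costs below are purely hypothetical, chosen to isolate the effect of the tax wedge.]

```python
# Illustrative sketch: how asymmetric tax treatment tilts the
# worker-vs-machine decision even when pre-tax costs are identical.
# Rates are the approximate US marginal rates cited in the talk;
# the 50,000 pre-tax cost is a hypothetical round number.
def after_tax_cost(pre_tax_cost, marginal_tax_rate):
    # Total cost to the firm = pre-tax cost plus the tax wedge on it.
    return pre_tax_cost * (1 + marginal_tax_rate)

worker = after_tax_cost(50_000, 0.25)   # labor taxed at ~25%
machine = after_tax_cost(50_000, 0.05)  # equivalent capital taxed at ~5%

print(f"Worker:  {worker:,.0f}")   # 62,500
print(f"Machine: {machine:,.0f}")  # 52,500
# At identical pre-tax costs, the machine is ~16% cheaper after tax,
# an inducement created by the tax code rather than by productivity.
```

The point of the comparison is that the wedge favors automation even when the machine is no more productive than the worker it replaces, which is what the talk calls excessive automation.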
But what I want to emphasize next is that the risks are not just in the financial domain and in the job domain, but also in the democracy domain. Growing inequality has coincided around the world with a retrenchment of democracy. From the late 1970s to around 2016, every year there were improvements in democracy around the world: more countries were becoming democratic, and existing democratic countries were improving their democratic quality. But around 2016, we see a reversal. From then on, more and more countries are abandoning democratic institutions or clamping down on political rights, freedom of speech and so on. In the next slide, I discuss what the reasons for that might be, and there are obviously direct and indirect reasons. Growing economic disparities have become a major lightning rod. But there is also growing evidence that how we are using AI is damaging democracy. One aspect is on the government side: AI-powered surveillance technologies in the hands of non-democratic governments are creating major pressures on democracy around the world. But also in the industrialized world, ad-based business models are transforming many of the platforms that more and more people are using, and the structure of these platforms is creating both the wrong types of incentives for democratic discourse and the wrong type of engagement from bottom-up processes, which are, of course, critically important for democratic continuity. There are additional things; again, this brings us back to AI researchers and what their ethical responsibilities are, but let me not get into that. Let me make two more points, and then I will conclude. If we go to the next slide, what I start arguing is that we shouldn't actually fear AI as such. It really depends. New technologies are things that give us additional tools.
They expand our capabilities, and it's how we use them that's critical. That's why regulation here is absolutely important. If we regulate them right, and if society directs them in a more socially beneficial direction, they become very powerful tools in our hands for achieving our objectives: creating shared prosperity, greater stability, less unemployment, less inequality. And in fact, there are great examples of how this was done during the last several decades. If we look at Germany, Japan and South Korea, we see that robots have had very different effects there than they had in the United States. Why? The next picture shows a very important part of the answer. Actually, sorry, skip to the next one; I'm not going to talk about this given that time is short. What you see is that countries like South Korea and Germany have aged very rapidly. But as they aged and started facing labor shortages, they also hugely increased their investments in robots. On the left, you see in green and gray the very large increase in aging, the fraction of the population that's above 55, in South Korea and Germany. And on the right, you see these two countries, again in green and gray, much ahead of others in terms of investments in robots. The consequence has been that productivity has actually increased. They have been able to deal with labor shortages and have in fact increased their exports in the sectors that rely on robotic production. So let me now conclude. I'm going to skip the next slide, which shows the same pattern for many more countries, and conclude with the final slide. The future of work, the future of democracy: I cannot imagine more important topics, and I think today we cannot talk of the future of work and the future of democracy without mentioning the word risk. AI is necessarily going to have a major impact on these things.
But how it will impact these and other outcomes is not predetermined. For example, we can have AI for good automation, high-productivity automation technologies accompanied by new tasks and new opportunities for workers, or we can have bad automation, reducing employment and increasing inequality, as I've shown has been the case in a few countries. We can have good communication technologies, which was the hope in the 1990s when digital technologies were first spreading, creating new public spheres and foundations for more decentralized action and greater democratic responsibility, or we can have bad communication technologies facilitating surveillance, misinformation, emotional outrage and polarization. And I think regulatory responsibility, researchers' responsibility, especially in the AI field, and also society's responsibility, bottom-up participation in the direction of these technologies, are going to be the crucial factors determining this future. Thank you.

Thank you very much, Professor Acemoglu. We have a few questions for you before we let you go. The first question is: you pointed out very well the benefits and the challenges of AI. Now, how can we ensure that investments into AI and technological progress are put to good use for the economy?

That's the million-dollar question, and unfortunately I'm going to let you down: I don't have a perfect answer to it. If only I did. I think every technology is different. In some cases, naturally, there are going to be some players, researchers, companies that push in different directions. In the case of AI, I think we got off to a bad start, because AI is already under the control of the largest corporations that humanity has ever witnessed, and these corporations have very specific business models. So we're not getting the sort of diversity that would be the foundation for many different interesting uses.
So what that means is that we need two sorts of countervailing forces to create a playing field in which different types of applications of AI can be experimented with, and those that have social benefits can be pursued further. One of those is regulatory forces: we need to regulate how, for example, online platforms work. The other is societal pressure: the democratic process, even if it's impaired right now, as I've mentioned, needs to work in order to provide input from society on how we want these AI technologies to be used. Do we want them to be used for ceaseless data collection, for more discrimination, for less choice for individuals, or for other things? So regulators are going to have an important say in this, but it's not just regulators' business; I think it's a more bottom-up process that we need.

Thank you very much for your answer. Another question we have here is on income inequality, and you spoke quite a bit about that already. What are the implications of higher income inequality for financial stability?

I think that's another great question. Obviously, in industrialized countries we are going to have some amount of inequality, and we need some amount of inequality. If we tried to get everybody to have the same income, that wouldn't be good for entrepreneurship, it wouldn't be good for investment, and it would not generate incentives for people to go into different fields or to be more ambitious. But we are way past those levels of inequality. Again, the US is an extreme case, but the US was already a fairly unequal economy in 1980, and inequality has increased in almost every year since 1980. Today there is a huge gulf between people in the top 1%, for example, and those at the bottom. Those sorts of levels of inequality, I think, create economic, social and risk problems.
Economically, I think they imply that economic opportunities are not going to be open to everybody: those who have much more money are going to be able to monopolize the best education, the best jobs, the best social networks. Socially, it's going to create a lot of discontent. I don't think you can understand what's going on with, for example, the populist challenge, especially from the right in many industrialized nations, without taking this huge increase in inequality into account. And that's probably as big a risk as any financial crisis. But also financially, I think inequality creates more risk-taking, more irresponsible behavior, because people are trying to make up for what they cannot achieve through economic opportunities. So I don't think it is a coincidence that people are after get-rich-quick schemes such as crypto at a time when economic opportunities have dwindled. I think we need a more holistic approach, and financial risk and general risk assessment have to be a central part of that.

Thank you very much. And one last question, going one step further: AI can impact financial stability. So how will markets change when more financial decisions are made by AI rather than by humans?

Well, I think that's another fantastic issue. For instance, AI has the capacity to help regulation greatly. One of the problems of regulation is that regulators don't know about many of the risks that are brewing and hidden in balance sheets; they cannot keep up in real time with new products and new risk-taking opportunities. Shadow banking, for example, remained unregulated for a long time partly as a result of that. So AI tools can be hugely useful in the hands of regulators when they are used correctly. But at the same time, we also know that as more machine-based investment takes over, that is going to introduce new risks as well.
In an ideal world, you could say, well, machines are going to invest better than humans and avoid some of the human-created risks. I don't think there is any evidence of that as yet, and I don't think that's going to be the case in the near future. So we need a new sort of regulatory framework: how do you regulate the machines that do the investing as well?

All right, thank you very much. We will let you go at this point, but thank you very much for your time.