Hello, and welcome to this special edition of the TSE Digital Economics Conference. The organizing committee, composed of Jacques Crémer, Daniel Ershov, Paul Seabright, who is on sabbatical this year, and myself, is very happy to offer you this special Zoom session with two excellent keynote speakers, John Van Reenen and Giacomo Calzolari. We would of course have preferred to welcome you in person, as was the initial plan, but judging by the large number of registered participants, this event is much more than a consolation prize.

The first presentation today is the Susan Scotchmer Memorial Lecture. Susan was a professor of economics and law at UC Berkeley, a brilliant theorist and a leading scholar in the field of innovation economics, with pioneering contributions in particular on the question of cumulative innovation. Like many of you, I am sure, I have learned a lot from her papers, especially on cumulative innovation, characterized by simple yet illuminating theory, and from her wonderful book on innovation and incentives, which I encourage you to read if you haven't done so yet. Susan was also a friend of TSE, and many of us in Toulouse keep fond memories of her frequent visits and participation in this conference. This year, we are delighted that John Van Reenen has accepted to give this lecture in memory of Susan. John is the Ronald Coase Chair in Economics and School Professor at the London School of Economics and one of the best economists working on innovation and firm performance. He will talk about the rise of superstar firms. But before giving the floor to John, let me just remind you of two things. First, the lectures are recorded and will be posted on our website. And second, you can ask questions using the chat, and we'll try to keep a bit of time for Q&A between the two lectures and at the end of Giacomo's lecture. So without further ado, John, the floor is yours.

Thank you very much for the introduction. Let me just see if I can get my slides up and go full screen. Can you see that, Alexander? Is that clear? Yes. Great, fantastic. OK, so first of all, thanks to everybody for coming, and I'm sad that I can't be in Toulouse. I'd love to come to the beautiful city of Toulouse, where I have a lot of happy memories, but hopefully next year we'll be able to do this in person. So I'm going to talk about the rise of superstar firms, causes and consequences. And I'm really delighted that this is in memory of Susan, whom I knew well when she was with us, and who gave me lots of encouragement, especially when I was a more junior person. One of the things which was just mentioned is that she wrote this fantastic book on innovation and incentives. When I first started teaching courses on the economics of innovation, there wasn't really a textbook you could use apart from Susan's; it was a really excellent introduction to the whole field. So it was the core text back when I first started teaching innovation economics at UCL many, many years ago.

So I'm going to give you an overview of issues around superstar firms, drawing on lots of different ongoing work with several co-authors. In particular, as you may know, this is work with David Autor from MIT, Larry Katz, Christina Patterson and David Dorn on superstar firms and the labor share.
I'll draw on some of the ideas in that, but I'm also going to be drawing on our ongoing work in that area with David, and also new work with Jan De Loecker on the UK for the Deaton inequality review that I've been working on, which Angus has been giving me lots of comments on. Also some new work with Mary Amiti from the New York Fed and many other people. And I'll just mention that this has also been inspired by the annual conferences I run with Chad Syverson; if you're interested, please apply. We have one every year on mega firms and on the politics and economics of very large firms.

OK, so I decided to make this the subject of the talk today partly because of a headline I saw in Forbes at the beginning of the year, which said that Apple had become the first company worth three trillion dollars, greater than the GDP of the UK. Now, that's partly Apple doing very well, and partly the UK doing very badly over the last decade or more. But either way, it's a remarkable fact. And of course, it's not just Apple which has this very large footprint. Microsoft was worth about two and a half trillion dollars as of the beginning of this year, Google just under two trillion, Amazon 1.7 trillion, and Facebook just under one trillion. These are sometimes called the GAFAM firms. I like the phrase "gaffer" because in London cockney slang, the gaffer is the boss, your patron. And these are boss firms, the big firms of the world economy. Now, the growth of these firms has been supercharged by Covid, as so much, like what we're doing right now, has moved online, along with retail and other kinds of activities. But the growth of these firms obviously predates the pandemic by a long period. And part of what we want to do is to understand the reasons for the rise of such superstar firms, and also the consequences.

So I'll give a bit of introduction, talk about what I'm going to call increasing differences between firms, say a bit about markups, and then link this with work on imperfect competition in the product market and the labor market, which I think is one way to frame these questions, before giving some assessments and brief policy conclusions. The first thing to emphasize is that the growth of superstar firms isn't just a phenomenon of the classical digital sectors, as important as the GAFAMs are. I'm going to argue that it's a much more general phenomenon that we see in many industries. And the increase in the footprint of these types of firms has raised the concern that product market power may have generally increased, with potential welfare costs in terms of living standards, such as higher quality-adjusted prices and therefore lower real wages; perhaps negative effects on productivity growth and innovation; and finally effects on labor markets, such as the falling share of workers in value added and maybe also increases in inequality. But the concerns go broader than these economic ones. There are also concerns around the future of democracy, such as whether these very large firms will lobby to shift the rules of the game, and concerns over privacy. I'll touch on these last ones, but not very much.
My focus is going to be on the more standard economic issues. The reason I got interested in this area initially is something I've exploited myself in much of my career: the explosion of micro data that we've been fortunate enough to have. We live in a golden age of data, really, with the opening up of administrative data sets and private sector databases on companies. This has enabled the documentation of huge cross-sectional differences across firms in terms of size, productivity, exports and management practices. And of course there's a long tradition in economics on this. So I put Robert Gibrat here, of Gibrat's law, who did the classic work in the 1930s on firm inequality, and, maybe less well known, Francis Walker, the first president of MIT, who, in the first edition of the Quarterly Journal of Economics, argued that a lot of the differences we see in performance are related to wide heterogeneity in management practices.

On management practices, for example, the work I've done with Nick Bloom and Raffaella Sadun has documented very big differences across firms. I've highlighted the most important country in the world here, France, as well as a few other countries. What this diagram shows is the variation: a histogram of our measure of management quality from the World Management Survey, collected over many years. Firms in the left tail, say with scores of less than two, are firms that are basically not collecting data or information on their inventories or on what's happening in the firm, not setting any sensible goals, and not promoting, paying or hiring people based on effort and ability. It's amazing how these firms manage to exist and persist in the modern market economy, but they do. So this heterogeneity, not just in management but in a range of other dimensions, is a really first-order fact that we need to understand. And it has had a big influence throughout fields of economics. It's obviously been critical to IO, but I think it's now generally accepted in most fields: in trade, the Melitz model; in labor, some of the new models linking wages and labor shares to firm heterogeneity; and in macro, think of Hsieh and Klenow and so on.

So although this has been accepted and has influenced many fields, less well understood is the fact that these big differences have actually been getting wider over time. One of the things I'm going to show you is that these differences, both in size and in other dimensions, have got much larger in the US and in many, if not most, other OECD countries. One dimension of this, the one I'm going to focus on, is industrial concentration, the relative size of sales of firms in an industry, which has generally increased since the 1980s. Secondly, and related to that, if we look at aggregate markups, prices over marginal costs, which are harder to measure, those also seem to have increased since the early 1980s. And these factors, I'm going to argue, can be used to help understand some labor market changes, such as the fall of the labor share of GDP. Now, there's an important caveat, so buyer beware: these are moments of the data.
Linking these observed changes to welfare changes requires more assumptions and more modelling. Obviously we have to think hard about what the relevant market is and how we link concentration to other measures. These are things to take into account; the point is that these moments don't have a trivial relationship to welfare, and you have to think harder than just documenting the facts.

Now, what are the explanations for these rises in concentration and markups? There are several different stories, which we'll talk about. One I sometimes call the Google or Apple story, and it's to do with the increased importance of platform competition and network effects. This is core to the digital economy, the kinds of industries and firms we've talked about, and it's happening in digital markets. That's clearly part of what's going on, but I'll argue it goes beyond that. The second story I call the Walmart story, because even if we look beyond those digital markets to markets which are more about the use of new technology, these also appear to show the same kinds of trends. To understand that, one argument is that what's happening is the increased importance of fixed costs. For example, if intangible capital, such as ICT and software, has become more important, and this has a big fixed-cost component, then larger firms are going to be better at using it because they have larger scale, and this will give an advantage to bigger firms. I call this the Walmart story because, if you think about retail, in many parts of the sector you have dominant, superstar firms like Walmart, and one reason Walmart is so successful is that it's been able to invest hundreds of millions or billions of dollars in software systems which enable it to track its inventory as it moves around the world in global value chains, and to run just-in-time inventory management which lets it adjust the product mix across different stores and distribution centres around the world. There's no way a smaller independent chain can do that, least of all the small mom-and-pop shops. So that's part of the story you might have there. Now, that's an anecdote, but one bit of systematic evidence comes from nice work by Lashkari and co-authors on French data, which shows that the relative intensity of the use of software, compared to other forms of investment, increases very sharply with firm size. Larger firms are much more software- and IT-intensive than other types of firms.

A third story relates broadly to competition. I think the most common story about why things have changed, the one you often read about in the media, is falling competition. People often immediately assume that these changes in concentration and markups are due to the fact that competition has fallen, particularly due to the weakening of antitrust enforcement that you hear a lot about in the US: weakening competition policy allowing too many mergers and too much anti-competitive activity. Thomas Philippon has argued this strongly, as have many other authors. So that's certainly a possibility, and I'm going to show you that in some sectors it might be true.
But there's also an argument that things may have moved in the opposite direction. With globalization, lower communication costs and trade liberalization, there are forces which tend to allocate market share towards the more efficient firms; think of a Melitz-type model. And if that's the case, you can actually generate a lot of these patterns through positive changes in competition. You can imagine that if globalization has become more important, and superstar firms are more efficient or more innovative, producing better quality products, then more activity shifts towards those larger firms. That will increase concentration. And if that reallocation force is strong enough, it can also increase aggregate markups. So that's something we'll look at as well, and we have some evidence for it in some markets. Many macroeconomic models are now trying to take some of these facts and put them into formal models; I mention a few examples here, including colleagues like Maarten De Ridder and Philippe Aghion. But as useful as those exercises are, the idea that one kind of model of the macroeconomy can fit all the facts may not be right. An alternative way to think about this is that there may be different explanations at work in different parts of the economy, and I think that alternative approach is worth taking seriously rather than asking a single macroeconomic model to explain all the facts. If we could get a compelling macro model to do that, it would be great, but it may be more challenging than it sounds.

OK, so let me start by giving you some facts about the data. If we're trying to measure the relative size of firms, a natural thing to do is to look at jobs. So I downloaded, about a week ago, the latest data from the US, which go back to the late 1970s, and we can track the fraction of all jobs in large firms. What this line shows you is the fraction of all jobs in the US in firms with more than 5,000 employees. There are about 2,500 firms, out of the 5.3 million firms in the US, in this superstar category as defined by more than 5,000 workers. And as we know from Gibrat onwards, there's a power law in the firm size distribution. So this small number of firms, well under 0.1% of all firms in the US economy, employ a lot of people: over a quarter of all workers, about 28%, in the late 1980s. But the startling thing is that by the eve of the pandemic, the fraction of workers in these firms had risen very dramatically, from about 28% to 35%, a rise of 7 percentage points. That's a very startling and large increase in the number of people working in these big firms, and it's the simplest possible measure of the increased importance of superstar firms. Now, there are several problems with this. One, of course, is that employment may not be a very good measure of the footprint of these large firms. I've been to the Google headquarters in Mountain View a couple of times, and one of the striking things is how relatively few people are employed in a company like Google, compared to how important it is in terms of its influence in the world. My former colleague Erik Brynjolfsson often calls this the phenomenon of scale without mass.
Many of these firms have very large scale, but without the mass of a large number of workers. Some of that is because workers are outsourced, either domestically or internationally. But a lot of it is because the sales of these firms actually outstrip their employment. So in a sense, a better measure is to use sales, and to think about sales relative to the industry you're in. The work that David and I and others did, just over a year ago, tried to document this using US census data, so the population of all firms in the US, with simple measures of concentration such as the share of sales of the top four firms, or Herfindahl indices. These are based on four-digit industries, so quite narrow industries like synthetic cement, maybe not narrow enough, but relatively narrow, across the whole of the US. For manufacturing there are about four or five hundred different industries, and what's plotted are weighted average changes in concentration. You can see that no matter which broad sector of the economy you look at, whether retail, wholesale, manufacturing, finance (note, "too big to fail": you might think of the banks), or services, where a lot of the GAFAM firms sit, and even in the more traditional industries like retail or manufacturing, you see on average this important increase in concentration. In fact, we've since updated this work, and these trends seem to be continuing. Is this just a US phenomenon? The answer is no. If you do a similar type of analysis in Europe, and this is from the OECD's MultiProd database, again using population census and administrative data, you also see these increases in market concentration, albeit over a slightly shorter period of time. There's also some very nice work by the chief economist team at DG Competition, which uses publicly available data on private and publicly listed companies from Orbis, combined with other data sets that enable you to split out sales accurately across industries. These are the five biggest economies in Europe over the last 20 years, and again you see the share of the largest firms increasing within industries over time. We could go on, but this does seem to be a phenomenon common across many countries.
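To fix ideas on these two measures, here is a minimal sketch (an illustration of mine with made-up numbers, not code from any of the papers) of the four-firm concentration ratio and the Herfindahl index for a single narrowly defined industry:

```python
import numpy as np

def concentration(sales):
    """Return (CR4, HHI) for an array of firm sales within one industry."""
    shares = np.asarray(sales, dtype=float)
    shares /= shares.sum()                # market shares
    cr4 = np.sort(shares)[-4:].sum()      # sales share of the top four firms
    hhi = np.sum(shares ** 2)             # Herfindahl index, between 0 and 1
    return cr4, hhi

# A hypothetical industry: one dominant firm plus a long fringe of small ones
sales = np.array([500, 120, 80, 60] + [10] * 40)
cr4, hhi = concentration(sales)
print(f"CR4 = {cr4:.2f}, HHI = {hhi:.3f}")   # CR4 = 0.66, HHI = 0.207
```

The aggregate series shown in the talk are then weighted averages of such industry-level measures across the (roughly four-digit) industries.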
Now, it has to be emphasized that industrial concentration is not the same as market power. Ideally, if you want to look at a particular antitrust market, you want a better-defined and typically narrower market. There's very nice work by Benkard and co-authors which has tried to do that in retail, taking local competition into account. We'd also want to take imports into account; there's nice work by Amiti and Heise trying to do that for the US. But fundamentally it's very hard to define markets, and an alternative way to think about how this might relate to market power is to look at price-cost margins. I'll do that in a second. Before I do, I just want to mention a couple of other dimensions of firm inequality beyond size, and show you that these have also increased. This is the standard deviation of labor productivity in the US over time, from John Haltiwanger's work: from 1996 onwards there have been general increases in the dispersion of labor productivity. This is work from the OECD, looking across 16 OECD countries at the 90-10 ratio, the difference between the top and bottom 10% of firms in terms of productivity, either TFP or labor productivity. Again, these gaps have been increasing. And this is brand new work with Jan De Loecker that I've been doing on the UK for the Deaton Review. It looks at the trends for the median firm in terms of productivity, and it actually mirrors to some extent what's happening in the aggregate UK economy: after the Great Recession there was this big fall in productivity growth, and for the median firm there has been basically no change in productivity over this 20-year period. All the increases in productivity are happening towards the top of the distribution, and the firms at the bottom of the distribution are, if anything, seeing falling productivity. So, again, big increases in the dispersion of productivity.

And then finally, for wages, this goes back to US work by Song et al. in the Quarterly Journal of Economics, looking at changes in different parts of the wage distribution. The blue line shows what's well known: there's been a big increase in inequality, with people at the top having had big increases and people at the bottom just about none. But the news in this paper is that if you split this into what's happening within versus between firms, all of the increase in inequality (this is the red line) is happening because the differences between firms are getting bigger. Inequality within firms has been completely flat, apart from at the very, very top, which is basically the CEO. So this again says that if you want to understand inequality, you really need to understand what's happening between different firms and why this dispersion of firm performance has been increasing.

OK, so those are a few facts on the differences between firms along those dimensions. Let's talk about markups. Markups are, of course, harder to measure than things like concentration. There are a couple of different approaches. One is a demand approach: you estimate the demand system and, with a supply-side assumption, back out marginal costs. This is the Berry-Levinsohn-Pakes (BLP) type of approach. But you can't really use that to describe the whole economy, because we just don't have brand-specific prices across a wide range of the economy. The alternative is a production-function-based approach, building on work by Robert Hall back in the 1980s. The idea is that we can use the wedge between the output elasticity of a factor of production, a variable factor, and its share in total revenue. Under perfect competition the output elasticity is equal to the factor's revenue share; that's the Solow (1957) result. To the extent that there's a wedge between the technological output elasticity and the share of the factor in revenue, that's an indicator of the markup. We can implement this through accounting methods or by econometrically estimating production functions, as in De Loecker and Warzynski. But whichever way we do it, the following patterns seem to come out.
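In symbols, and this is my compact restatement of the logic just described rather than the slide's exact notation: for a variable input $V$ with estimated output elasticity $\theta^{V}$ and revenue share $\alpha^{V}$, the markup is identified as

$$
\mu_{it} \equiv \frac{P_{it}}{MC_{it}} = \frac{\theta^{V}_{it}}{\alpha^{V}_{it}},
\qquad
\alpha^{V}_{it} = \frac{P^{V}_{it} V_{it}}{P_{it} Q_{it}}.
$$

Under perfect competition $\theta^{V} = \alpha^{V}$ (the Solow 1957 case), so $\mu = 1$; a revenue share persistently below the output elasticity indicates a markup above one.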
This is Jan De Loecker's QJE paper using Compustat, so publicly listed U.S. firms, showing what's happened to estimated markups: they've increased from about 20% to about 60% in terms of gross markups since the early 1980s, the same period over which concentration rose. You might worry a lot about using publicly listed U.S. firms, because they only cover about a third of all employment and they're very selected. But if you do the same type of analysis on census data, this is for U.S. manufacturing for example, you see a qualitatively similar pattern: since the early 1980s there has been quite a large increase in the aggregate markup. It's worth emphasizing that this is the aggregate, size-weighted markup. If you looked at individual firms, say the markup of the median firm, that's actually remained pretty flat, and in many other sectors (this is manufacturing) it has actually fallen. So what's happening here is a reallocation of activity toward the larger firms, who tend to have higher markups. Part of what's going on is some increase in markups at the top firms, but the bigger phenomenon is that more of the economy has shifted towards these superstar firms with very high markups. For example, if market share shifts from a firm with a markup of 1.1 to a firm with a markup of 1.6, the sales-weighted average markup rises even if neither firm changes its own markup. We document very similar things in this new work on the UK, where we can look at both listed and unlisted firms. Consistent with what I showed you before, the unlisted, smaller firms have low markups, but there are positive trends in weighted average markups in both groups of firms. And finally, looking more broadly, drawing on the two Jans' work using listed firms across different parts of the world, you can see that this increase in markups seems to be happening not just in North America but in Europe, in Asia, and in many other parts of the world as well.

So, to take stock of where we are: industrial concentration has risen, especially if you measure it with sales. Markups over marginal cost also seem to have risen. This seems to be driven primarily by reallocation towards larger firms, who generally have higher markups, rather than by a general rise in markups across all firms. And it has happened not just in the US but also in other regions, like the EU. So, is this a good thing or a bad thing? Of course, it depends on what the underlying forces are. On the positive side, these superstar firms are more productive, so the reallocation towards them should be beneficial for productivity. And these superstars are not classical monopolists. If you look at the industries which are concentrating more strongly, they actually seem to have a greater rate of innovation, as measured by patents, R&D or patent citation rates. Productivity has also gone up more quickly in these industries. And prices haven't gone up more quickly in these industries; as you might think from the examples of Google or Facebook, many of these products are free. So these are not classical monopoly industries of the slow and sluggish kind. A third advantage of these superstar firms, and this comes out of recent work with Mary Amiti, Jozef Konings and Cédric Duprez, is that there appear to be positive productivity spillovers from these firms onto other firms.
If you look at the multinational literature, a lot of the benefits of multinationals come not just from their being more productive, but from the fact that when they form relationships with other firms, they help spread productivity through technology transfer. And we've done some recent work, hot off the press, on Belgian data, where we can actually see the population of all sales between all firms in the economy. What we show, and here's one bit of evidence, is an event study of what happens when a local firm starts trading with a superstar firm: there appears to be quite a big increase in productivity. There's no pre-trend, and when you start selling to a superstar firm you get a big increase in your productivity, eventually of about 8 to 10%. So this is one of the ways in which these types of firms create benefits not just for themselves, but also for other firms in the economy.

That said, we should also be aware of the costs. The fact that they are larger could give them the ability to exercise market power and lead to negative outcomes. There is a question of the extent to which superstars attained their size through the exercise of this power. And even if they didn't, now that they have this power, are they becoming better at creating barriers to smaller rivals? Although there's a positive relationship with things like patents, could these patents be used to create barriers to the diffusion of new innovations and productivity? Could they be lobbying to change the rules of the game? Could they be using tax arbitrage to avoid paying taxes? Those are all things we'll come back to at the end.

I want to talk now about the implications for labor markets and inequality, where I think it's important to link what's happening in the product market and the labor market. So let me give you a little bit of a framework for thinking about this. The model I'm going to show you is a generalization of the model we presented in the QJE paper. As there, we'll have heterogeneous productivity, which we saw is first order, and we'll allow for some product market power: firms face downward-sloping product demand curves. We can model this in different ways; we have a relatively simple monopolistic competition type of framework. But what we'll add is labor market power: firms also face upward-sloping labor supply curves. One way to model that is a wage-posting monopsony model, or monopsonistic competition. This builds on a large and really exciting literature now emerging which tries to bring imperfect competition in the labor market and the product market together. I mention several of these papers; most are empirical papers documenting evidence for important elements of monopsony power, the wage-setting power that firms have. And of course this builds on an earlier literature which tried to put these things together in more of a bargaining framework, like my job market paper many years ago.

OK, so let's think about this type of model. The static first-order condition with respect to labor yields a labor share of revenue, payroll relative to value added, which depends on three parameters.
There are going to be three elements. The first is a technological element, the output elasticity of output with respect to labor. If we had perfect competition in factor markets and output markets, the other two terms would be equal to one, and you'd just get the classical Solow result: the share of labor in value added equals the output elasticity with respect to labor. The second element, which was already in the original paper, is a markup term, price over marginal cost. To the extent that there is a wedge between price and marginal cost, this will tend to reduce the labor share, because wages are an important part of marginal cost: if you increase your markup, the labor share shifts downwards. This term will generally depend on the own- and cross-price product demand elasticities. And the new element is an inverse markdown parameter, the psi parameter: think of it as the ratio of the wage to the marginal revenue product of labor. This monopsony power will depend on the labor supply elasticity the firm faces, which could come from the fact that workers value the amenities of firms in different ways. They may live nearer or farther from different firms, or place different values on amenities such as flexibility over when you can work, which may be particularly important for women. In any case, this drives the wedge, the markdown, between the marginal revenue product of labor and the wage. So if we think about the change in the labor share for an individual firm i, we can write it as the change in these three parameters: the technology parameter, the markup parameter and the markdown parameter. The final thing we need in order to take this to industry or macro data is that the economy-wide labor share depends on each firm's labor share multiplied by the relative size of that firm, its market share. This is important because it means that changes in the aggregate labor share depend on changes in the firm size distribution and on the covariance of firm size with the labor share. So if the environment changes, say platform competition becomes more important, or fixed costs become more important, favoring superstar firms, this can depress the labor share without necessarily changing any of the individual firm-level parameters. It could be that what's happening is just the movement of activity, of market share, to the firms which already had low labor shares; we show some of that in the US data.
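In symbols, again as my own compact restatement of the framework just described rather than the slide's exact notation, the firm-level labor share is

$$
s_i = \frac{w_i L_i}{P_i Q_i} = \alpha_i \cdot \frac{1}{\mu_i} \cdot \psi_i,
\qquad
\Delta \ln s_i = \Delta \ln \alpha_i - \Delta \ln \mu_i + \Delta \ln \psi_i,
$$

where $\alpha_i$ is the output elasticity with respect to labor, $\mu_i = P_i/MC_i$ is the markup, and $\psi_i = w_i/MRPL_i$ is the inverse markdown, all equal to one under perfect competition. The aggregate labor share is then the size-weighted average

$$
S_L = \sum_i \omega_i \, s_i,
$$

with $\omega_i$ firm i's share of aggregate activity, so $S_L$ can fall purely through reallocation, that is, rising weights $\omega_i$ on low-$s_i$ superstar firms, even if no individual firm's $\alpha$, $\mu$ or $\psi$ changes.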
Why does it matter if the labor share falls? Well, one reason is income inequality: since most people's income comes from labor income, if the share of labor income declines, this has a direct effect on overall income inequality. So there's a clear link there. So what has happened to the labor share? This is the US since the Second World War: you can see that after a period of stability, the famous Kaldor fact, it started to fall from the early 1980s, and fell very dramatically in the 2000s. And the labor share has not just fallen in the US; it has also fallen in Japan, China and Germany, the other three largest economies in the world.

This is what's happened in the UK: there's been a general drift down, although not in every period, and I'll mention why that might be later. In this new paper with Jan, we've tried to apply this framework to the UK. The change in the aggregate labor share is the change in the elements I just showed you. As a first pass, suppose technology and markdowns were constant, so we can take them out of the brackets; then the only thing driving the change in the labor share is what happened to the aggregate markup. In the UK, as I showed you earlier, the aggregate markup has risen by about 0.4% per annum over the 1981 to 2019 period, and from this formula that implies a fall in the labor share of about 7 percentage points. In fact, the fall we estimate for the UK is about half that: the labor share hasn't fallen by as much as you might expect given the change in the aggregate markup. In the context of this framework, that must mean something has moved in the other direction, at least in the case of the UK: either monopsony power or technology. One story is that technology could have changed in favor of labor. That's pretty unlikely; if anything, we think automation has made things worse for labor rather than better, as Daron Acemoglu and colleagues have argued. But one thing which may have changed is monopsony power. Could there be smaller markdowns over this period? The most obvious reason why that might have happened in the UK is the introduction, for the first time, of a national minimum wage in 1999. What this diagram shows is that over the period, the bite of the minimum wage, its value relative to median hourly earnings, has become much stronger. The UK now has one of the toughest minimum wages in the OECD. To the extent that a minimum wage weighs against monopsony power, this could be the reason why we haven't had as big a fall in the labor share in the UK as in the US. Some evidence for that comes from work I did a few years ago with Mirko Draca and Steve Machin, which looked at the impact of the minimum wage in microdata. Consistent with many other studies, including famous ones by David Card, minimum wages increased wages and didn't have much of an effect on jobs, but we showed that they did squeeze profits quite significantly. So this is consistent with that account of why the labor share hasn't fallen as much, of course.

OK, given the time, I'm going to skip the next slide and get to the conclusion, which is an assessment of what's been happening. I gave you three broad explanations: institutional factors, technological factors and globalisation. The fact that we see qualitatively similar patterns across countries in terms of markups and concentration suggests some common underlying forces. So I think it's unlikely that country-specific institutions, such as weaker antitrust enforcement in the US, are the fundamental explanation. If you look at the European Union, no one is arguing, I think, that DG Comp has got weaker over time; people complain it might be too tough on some of the firms, but nobody's saying it's got weaker.
So although these might help explain some of the differences in magnitudes, I think the fundamental story can't really be an institutional one like declining antitrust, given the similarities we see across countries. It's much more likely that the main explanations are technology stories: what's happening in the digital-producing sectors, like the GAFAMs, and the adoption of intangible capital, with its high fixed costs, in the intensive digital-using industries.

So what does this mean for policy? A knee-jerk reaction would be to say we're going to break these firms up, but that's likely to be quite costly. Even if superstar success is not due to weaker antitrust institutions, in this winner-take-all world we live in it's going to be very important to modernize antitrust policies. Going forward, we have to be very alert to the potential harms: thinking of ex ante regulation, such as the EU's Digital Markets Act; and giving more of a role to future competition in merger decisions, looking not just at current market structure but at how market structure and innovation might change, which has to be a critical part of merger assessment standards. I think the standards of proof need to shift, particularly when dominant platforms acquire smaller start-ups that could be the competitors of the future; at the moment, especially in the US, the burden of proof falls too much on the regulators. And finding ways to increase structural competition, such as through the EU single market, is also critical. The other policy message, which comes out of some of the work we've been doing on the UK, is that it's really important to have labour market policies and institutions which can form a counterweight to some of the power of the superstar firms: things like minimum wages, collective bargaining, labour standards such as in the gig economy, strengthening job mobility, and of course human capital, as ways of counterbalancing some of the power of the very largest firms.

So, in conclusion, I hope to stimulate thinking and research. We see these growing differences between the top firms, the superstar firms, and the rest of the economy, as indicated by increased concentration and markups. I think this helps explain some labour market phenomena as well, such as the changing labour share, though we also need to consider labour market institutions when we do that. In terms of the overall explanation, I think technology is the dominant factor, especially in the digital-producing and digital-using sectors, although there's still a lot of work to be done in thinking about whether, in certain sectors, other factors, globalisation or institutions, are important to understanding the changes taking place. So I think it's an extremely rich area for those of us interested in the digital economy: lots of work to do, and an exciting time to do it. Thank you very much.

Thanks a lot, John, for this very stimulating talk. Yes, to answer Jacques's question, we have time for a couple of questions. If I may start, I haven't seen any question in the chat, but, oh, OK, actually Jacques has a question. So Jacques, do you want to ask your question?
I was going to ask something similar, yeah. So thank you very much, John; it's great to have an overall framework to think about those things. Daron Acemoglu stresses a factor which you mention only briefly, and not at all in your assessment at the end: that we should pay lots of attention to the nature of innovation in automation and robotics, and in particular whether it's a complement or a substitute to labour. But you don't mention this at all in your conclusions. So do you disagree with Daron's analysis, or do you think it's something beyond what policy makers can do? What's happening here?

No, I think it's an alternative explanation of what's been happening. Daron's argument, in terms of the framework I put down, is that it's the alpha parameter, the technology, which is changing in a way that works against labour: automation is pushing down the labour share, and it's not being compensated by what he calls the reinstatement effect, of new skills and new jobs being created. So I think that's a serious hypothesis on the table, and it would imply that some of the other elements are not being measured correctly and that this is what's changing. And I do agree with Daron, and we've talked about this and argued for this, that it is possible to change the direction of technical change, and I think we should try to do that. We'd like technical change to be more pro-labour than anti-labour, in the same way that we can direct technical change to some extent with respect to climate change and green technologies, through carbon taxes and other things; I've argued for that, and I think it's possible. But I think it's still an open question how able we are to do that. The elasticity of the direction of technical change with respect to policy, making it pro-labour, is a difficult one. Maybe it's possible, and I think we should work more on it, but I haven't seen compelling evidence of a very strong effect. And it may well be right; maybe it is more to do with robots and other things than with the product market changes I've been emphasising. One issue with the robot story is that robots are a relatively small fraction of the economy: as a share of overall capital, they're pretty tiny. So you have to argue it's a much broader type of automation. I see that as an alternative hypothesis for understanding what's going on, and certainly something where there could be a lot more interesting work done.

Yes, I also have a question. I just wanted to react to one point you mentioned on the last slides, on labor market interventions; you're talking about policies such as raising the minimum wage. I'm struggling a bit to see how that could improve things if we think of superstar firms, because they are very productive, and from what I understand they pay their workers quite well on average. So I'm a bit surprised, because it doesn't seem like the minimum wage would be binding for those guys. What am I missing here? Well, I think it's a more subtle thing than just saying it's going to bind on the superstar
firms themselves. I think it's more that there are going to be equilibrium effects in the entire labor market, and potentially, not necessarily but potentially, as you get an economy which is dominated by a smaller number of firms, that can in principle lead to a situation where wages fall below marginal revenue products. Another way to say this, and here I'm channelling Alan Manning, is that wages are generally set below marginal revenue products, and institutions like the minimum wage can be a way of keeping wages above what they might otherwise be. Now, that's not to say it's the right policy for every country. In France, for example, I think the minimum wage is already pretty strong. But in an economy like the UK, which is very weakly unionized, with quite liberalized labor markets, the equilibrium wage is likely to be pretty low were it not for interventions like the minimum wage. So, as ever, different sets of institutions are appropriate for different countries, but I think those institutions have a role to play as we go forward.

OK, and there's another question by Ellen Ralston. Ellen, do you want to open your mic and ask your question? Sure, thank you very much. It's very interesting, but I was just wondering whether there might or might not be a long-term impact of stronger labor policy. We see very few super-giants in Europe; do you think labor policies are a factor? Yeah, so I have to apply what's called the cross-paper constraint. I have a paper with Philippe Aghion where we looked at some of the French labor market institutions, and there is clearly an impact of them, especially when you get above 50 employees, where a lot of labor market regulation kicks in: it tends to discourage firms from growing above the 50-employee threshold and also has a negative effect on innovation. So I think there are some negative effects. The other thing we find in that paper, though, is that these regulations tend to affect incremental innovations but not radical innovations. If you're going to be big, you're going to be big; paying some extra cost of regulation is not going to be a large factor in preventing you from being large. So my feeling is that, at the very top, the more radical, more important innovation is not fundamentally held back by labor market regulation, but it's certainly something to be aware of. And I'm not arguing that all the labor market regulation we have is a good thing; you have to think about how to do it in a smart way, combined with other kinds of policies. But I do think there is a role for labor standards and regulation, particularly in parts of the economy which are becoming very fragmented, like the gig economy, where having some minimum standards is quite important.
OK, thanks a lot, John, thanks again for this terrific presentation. Now it's time to move to our second keynote speaker. Our second keynote lecture is by Giacomo Calzolari from the European University Institute in Florence. Not only is Giacomo a TSE alumnus and a great economist, he's also at the forefront of a super exciting research agenda at the frontier of economics and computer science, and he will talk today about product recommendation and market concentration. Giacomo, we just set these things up, and here we go. Can you hear me? Can you see my slides? Excellent, very good.

So let me start by really thanking all the people organizing this conference. I'm thrilled and honored to give this talk today, in particular a talk in memory and in honor of Susan. Let me also say that this is joint work with a great group of co-authors whom I want to mention: Emilio Calvano, Vincenzo Denicolò and Sergio Pastorello. I think it's difficult to think of a better group of co-authors and friends with whom to do research and have great fun. What I'm going to talk about today, as you see here, is product recommendation and market concentration, and in fact it's part of a broader research agenda we have been working on for quite a bit of time, on artificial intelligence and its implications for markets. We all know that AI is already impacting markets, and we are all excited about the great successes AI is showing us, but at the same time I think we all share some worries about potential consequences: something can go wrong, and we have this idea in the back of our minds. So with this research agenda, we want to better understand how AI, once embedded into markets, may actually work, and what the consequences could be. We think this requires somehow bringing together computer science and economics, and the way we do this is by studying actual AI tools in realistic economic environments, as we did in a series of papers on AI and algorithmic pricing. More to the topic of today: I think we all agree, and we have experienced it as consumers, that nowadays we have an immense set of alternatives, and most of the alternatives are unknown to us. Think about the products you could buy, the news you could read, the movies, the songs, the financial assets to buy, the posts in social media to read, and the academic papers to read as well. There are really many, many alternatives. Let me just give you a couple of examples from digital platforms. Currently in the U.S.,
in Amazon Marketplace you can buy more than 300 million different items, on Spotify you can listen to more than 90 million songs, the national catalog of Netflix is several thousand movies, and the videos on YouTube nowadays number more than 26 million. Now, clearly we would never be able to explore these oceans of alternatives; there are too many. And even if we were somehow able to become aware of the existence of all these alternatives, we wouldn't know our preferences or tastes with respect to them. So we need some help. Let me go on with this maritime metaphor: in these oceans we need a new sextant to navigate. What is it? Well, one possibility is precisely one remarkable application of artificial intelligence, namely recommender systems. Recommender systems are software programs that are specifically designed to provide personalized suggestions to users and consumers about items and products they might try. They are designed to solve a prediction problem, predicting users' preferences for unknown items, and they do this using assessments of other users and items; in fact, in computer science they are called collaborative tools. I will come back to this important element. So, as I said, this is a quite remarkable application of AI, already quite common in markets, mediating the interaction between consumers or users and producers, that is, items, with platforms using recommender systems in between.

Why do we care about this? We care because recommender systems are already having a huge impact in markets. In the examples I was giving before, Netflix and Amazon, a large fraction of the actual choices of individuals, if not the majority, are generated or induced by the suggestions of recommender systems. Look, for example, at what happens in Netflix, where 35% of the movies watched have been recommended. So these systems are already impacting markets quite significantly, and there is a general worry about algorithmic recommendation. There is a heated policy debate on the risks these tools pose for competition, and even for democracy, and this has been debated on both sides of the ocean: the European Commission, the US Congress, the FTC, and several papers by national competition authorities. The idea there is that we may risk what is called a rich-get-richer effect, whereby recommender systems end up exacerbating popularity; there are some papers in computer science pointing to this risk. Actually, this rich-get-richer effect is in part related to what John was mentioning before on superstars; you see that these two talks are very complementary, though of course I will take a much more micro perspective here. Now, this rich-get-richer effect may be caused by an important and essential feature of AI algorithms once they are deployed in markets: these tools, these pieces of software, need to be trained, and actually retrained over time, on data which, once the tools are in use in the market, they themselves have contributed to generate. In economics we would say that we have a big endogeneity issue here. So the general claim is that recommender systems may reinforce market power, somehow amplifying competitive advantages. We decided to look into it, and this is our research agenda on recommender systems. We think there are a number of important questions to look at. First: are recommender systems
just another example of a technology that reduces search costs, like the internet at the very beginning, or is there something specific, different and new? Are the recommendations that users obtain from recommender systems biased, and will, as I said before, the dominance of some sellers and products be further reinforced by recommendations? And in the end, what is the ultimate impact on competition and market power?

Now, there are different approaches you could use to address these questions. One approach is to look at the problem from the theoretical point of view, and this has been done by a series of very interesting papers, for example associating recommender systems with a reduction in search costs, or looking at the possibility that recommendations could be steered or manipulated, as in the case of self-preferencing; you may see here in this list of citations a big bunch of people from Toulouse, very active on this topic. Recommender systems can also be seen from the point of view of information design, with the tools of Bayesian persuasion. These theoretical approaches are very useful and very important, but one might claim that they are not enough, because they use a very stylized version of what a recommender system is in reality, and to some extent they miss the collaborative element I mentioned; more on this in a moment. You can also try empirical work on actual algorithms; there are some papers doing this, studying the causal effect of algorithms on individual choices. The difficulty there is that opportunities to obtain good data are rare, and it's difficult to generalize from a specific setting. So we went along the lines of the general approach and agenda I was mentioning at the very beginning: the idea of studying this problem with an experimental approach, using realistic simulations, as will become clearer in a moment. What we do, essentially, with this research agenda is to operate actual recommender systems in synthetic and controlled environments. Synthetic means that we generate the preferences and products; controlled means that we control the data on which the algorithm is trained. This approach faces at least two important challenges that we have to take care of: first, the algorithms must be sufficiently similar to those used in actual markets, and second, the economic environments must also be sufficiently realistic. The intended contribution here is first methodological, bringing together, or bridging, economics and computer science, which we try to do by using sound economic models with realistic AI algorithms; and second, more specific: we want to study the links between users, items and recommendations, and the implications for product market competition.

So let me give you a general framework that is useful for thinking about a recommender system. Imagine an environment with capital-I users, capital-J items, and a rating matrix R, which is I times J. You have a very simple example here: there are four items, A, B, C, D, and four users, 1, 2, 3, 4, and the content of some of these cells tells you, for example in cell (2, A), the number 4, the rating that user 2 gave to product A. So the rating matrix contains the observed ratings, which we call r_ij, the rating that user i gave to item j.
Now, in reality these rating matrices are huge: as I mentioned, there are thousands of movies and thousands, if not millions, of viewers. So they are very large matrices, and, this is the most important element, they are sparse, actually very sparse, meaning that the non-blank, non-empty cells are, depending on the environment, between 1 percent and 10 percent of the total. The problem of a recommender system is precisely to predict the missing ratings, filling in the blanks, and then to make personalized recommendations.
So now let me tell you, in a nutshell, how a state-of-the-art recommendation algorithm works. What I'm going to present is what is called model-based collaborative filtering, in four steps. In the first step, the recommender system assumes a factor model of the ratings with a certain number of latent factors, say K. For each factor h there is a user dimension, which we denote θ_ih, the proclivity of user i for factor h, and an item dimension, β_jh, the intensity with which factor h is present in item j. These models do not attach any semantic meaning to the K factors, but if you want some intuition, θ_ih could be the taste of individual i for sugar, and β_jh the sugar content of product j. The second step, once we have this model, is for the recommender system to actually estimate these user dimensions and item dimensions, the θ's and the β's, and this is typically done by minimizing some accuracy loss on the observed ratings, that is, of course, on the non-blank cells of the rating matrix. Step three: once you have recovered θ and β for every user and every item, you can impute all the missing entries simply by taking the vector product of the estimated θ_i and β_j; even if user i never tried item j, I can reconstruct the estimated rating for this user-item pair. Then, step four: once the blanks of the matrix are filled, for each user, that is for each row, I can order the ratings and recommend to user i the product with the highest estimated rating. That's how a recommender system works.
You may have noticed that the problem is in fact similar to what in mathematics is called matrix factorization. The only difference is that matrix factorization normally starts from a complete matrix; here we start from an incomplete matrix, but we end up with two matrices, a user matrix and an item matrix, of much smaller size. The important thing to notice, as I was saying before, is the collaborative component of this matrix factorization: what these recommender systems exploit is user-item correlation. This is a simple, in a sense obvious, observation, but it is important, because it tells you that recommender systems are not just another example of a reduction in search costs; they add a collaborative component to the search process.
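As a rough sketch of these four steps, assuming a plain squared-error loss minimized by batch gradient descent (real systems add regularization and use solvers such as alternating least squares, so this is illustrative only; all function names and the learning-rate and step-count defaults are my own):

```python
import numpy as np

def factorize(R, k=2, steps=2000, lr=0.01, seed=0):
    """Steps 1-2: fit user factors theta (I x k) and item factors
    beta (J x k) by minimizing squared error on observed cells only."""
    rng = np.random.default_rng(seed)
    I, J = R.shape
    theta = rng.normal(scale=0.1, size=(I, k))
    beta = rng.normal(scale=0.1, size=(J, k))
    mask = ~np.isnan(R)                      # the non-blank cells
    R0 = np.where(mask, R, 0.0)
    for _ in range(steps):
        err = mask * (R0 - theta @ beta.T)   # residuals on observed cells
        theta += lr * err @ beta             # gradient step on user factors
        beta += lr * err.T @ theta           # gradient step on item factors
    return theta, beta

def recommend(R, theta, beta):
    """Steps 3-4: impute every missing rating with the dot product
    theta_i . beta_j, then recommend to each user the highest-rated
    item among those not yet rated."""
    R_hat = theta @ beta.T
    R_hat[~np.isnan(R)] = -np.inf            # don't re-recommend rated items
    return R_hat.argmax(axis=1)              # one item index per user
```

With the toy matrix above, `theta, beta = factorize(R, k=2)` followed by `recommend(R, theta, beta)` returns one imputed-best item per user.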
Now let me tell you about the economic environment. As I said, our general approach is to control everything, so we control preferences, and we design them in the following way. We assume exactly the same preference model as the algorithm, in order to eliminate any uncontrolled bias. So in the end we use a random utility model with logit errors, where the utility of user i for item j at period t is the vector product of the parameters θ_i and β_j plus a logit error. We design the tastes of the consumers, the θ's, by picking them in a way that I will show you in a moment with an example, and we do the same for items: we design products that are substitutes by picking the β's. I will show you results for pure horizontal differentiation, for pure vertical differentiation, where there is one product that is the best product in the market, and for mixed versions in between. For the moment we abstract from prices: products are associated with firms, but these firms are for now completely passive. The baseline environment I'm going to show you has 100 users, 100 items and two factors.
Here we go: this is an example of pure horizontal differentiation. In this graph, each of these brownish dots is one item, whose two coordinates are the β's, the content of the item with respect to the two dimensions β1 and β2. You see the 100 products displayed uniformly over an arc of a circle, and you also see a map of indifference curves of a consumer with some ratio of taste parameters θ1 and θ2. As you can see, there is exactly one product in this economy that is the ideal product for this individual. So in this example of pure horizontal differentiation, we make sure that for each user there is one preferred item in the market, an item the user may not know, may be unaware of, and vice versa.
Now, the experimental protocol, and then I will soon come to some of the results. We study a repeated consumption and recommendation environment over 100 periods, t = 1 to 100, and in each period our protocol does three things. First, we update the rating matrix, that is, we add for each user i the rating of one and only one item; which one, hold on, I will explain in a moment. Second, we estimate: we feed the updated rating matrix R_t into the algorithm, and the algorithm gives us back the completed rating matrix. Third, we recommend, recommending the single best product to each of the users. Now, how do we update the rating matrix? We do this important step in two ways. The first way, as would happen in reality, is to add as a rating the observed utility, for user i, of the item that this user was recommended in the previous period. The second way we populate the rating matrix is with exogenous data: the utility of an item drawn at random. I will clarify in a moment why we run these two different exercises. We run this exact procedure, period after period, for 1000 sessions, and I'm going to show you averages. Now, an important caveat: what we want to do here is really understand the properties of the recommender system itself, and hence we don't want biases embedded in our analysis. So there is no room for steered recommendations here: users follow the recommendations, and they truthfully report ratings. This is probably not very close to what happens in reality, but we think it is the only way to properly understand the characteristics of these AI algorithms. One last point: we need a benchmark to compare with.
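Before turning to the benchmark, here is a minimal sketch of the environment and of one session of this protocol, reusing the `factorize` and `recommend` functions sketched above. The arc, the Gumbel noise scale, and the single seed rating per user are my own assumptions, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(1)
I = J = 100                    # 100 users, 100 items (baseline environment)

# Pure horizontal differentiation: true tastes and item contents spread
# uniformly over an arc of the unit circle, so each user has one ideal item.
ang_u = rng.uniform(0, np.pi / 2, I)
ang_j = rng.uniform(0, np.pi / 2, J)
theta_true = np.c_[np.cos(ang_u), np.sin(ang_u)]   # user tastes (K = 2)
beta_true = np.c_[np.cos(ang_j), np.sin(ang_j)]    # item contents (K = 2)

def utility(i, j):
    # Random utility: match value theta_i . beta_j plus a logit (Gumbel) error.
    return theta_true[i] @ beta_true[j] + rng.gumbel(scale=0.1)

R = np.full((I, J), np.nan)    # start from an empty rating matrix
for i in range(I):             # seed one random rating per user
    j = rng.integers(J)
    R[i, j] = utility(i, j)

for t in range(100):           # one session of 100 periods
    theta, beta = factorize(R)           # re-estimate on the updated data
    recs = recommend(R, theta, beta)     # one recommended item per user
    for i, j in enumerate(recs):
        R[i, j] = utility(i, j)          # endogenous data: follow & report
        # Exogenous-data variant: rate a random item instead, e.g.
        # jr = rng.integers(J); R[i, jr] = utility(i, jr)
```

The paper averages over 1000 such sessions; note that in the very last periods every item may already be rated, so a full implementation would handle that edge case explicitly.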
The benchmark we're going to use is something standard in economics: an individual search benchmark, where at any point in time the individual randomly samples among the products and observes the associated match utility. The individual perfectly recalls past observations and then chooses the best option across all observed utilities. And you see that as time passes and the individual has done more search, the consideration set in this benchmark becomes larger and larger; similarly, in the recommender system case, the rating matrix becomes less and less sparse.
So here we go into some of the findings. I'm going to show you first a set of results with exogenous data, so the rating matrix is populated exogenously with random draws, and then I will show you the consequences of having endogenous data instead. What you see here is the quality of the recommendations: the left panel is the case of horizontal differentiation, the right panel is the case of vertical differentiation, and mixed differentiation is in the middle. The figure shows a measure of the normalized expected utility of our users; the red line is the individual search benchmark and the blue line is the recommender system, as I said, with exogenous data. Now, there will be a series of graphs of this type, so I want to clarify that there are essentially two important parts to look at in this figure. The first is the first 10 periods, and the reason these are important is that they correspond to a level of sparsity close to reality. You may also want to look around iteration 50, because that is in some sense showing us the limit properties of the recommender system, although at a level of sparsity that is not very realistic. The first observation here is that utility with the recommender system tends to be higher than in the individual search benchmark, provided the rating matrix is not too sparse; for example, here the benchmark does better than the recommender system, and this may reflect what is called in computer science a cold-start problem, when there are too few data. The second effect is that as you move from horizontal towards vertical differentiation, the recommendations become better and better relative to the benchmark. The idea there is that when you go to vertical differentiation, since the recommender exploits similarity, and since with vertical differentiation consumers tend to agree on what the best product is, the recommender does a better job.
Now look at what happens to market concentration; this goes back to some of the comments in John's talk earlier. We look at an HHI index of the induced market shares in these two environments, with individual search and with the recommender system, and you see systematically that the recommender system, the blue line, generates a substantial increase in concentration with respect to the benchmark. This can also be seen by looking at the number of products that receive a non-zero market share: with the recommender system, fewer items get a positive market share than with individual search.
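For concreteness, the concentration measures just discussed can be computed directly from the per-period choices; a small sketch (the 0-to-1 HHI normalization is my choice):

```python
import numpy as np

def market_shares(choices, J):
    """Share of users consuming each of the J items in a period, where
    choices are the recommendations followed (recommender system) or
    the items picked under individual search (benchmark)."""
    return np.bincount(choices, minlength=J) / len(choices)

def hhi(shares):
    """Herfindahl-Hirschman index on a 0-1 scale: 1/J when shares are
    symmetric, 1 when a single item takes the whole market."""
    return float(np.sum(shares ** 2))

def n_positive(shares):
    """Number of items with a non-zero market share."""
    return int(np.sum(shares > 0))
```

With the session sketched earlier, `hhi(market_shares(recs, J))` traced period by period gives the analogue of the blue line in the figure.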
With horizontal differentiation, the selling firms also change, something you don't see in the figure, but we observe it: the firms with positive market share change across sessions, yet some firms in the population are systematically favored, and within a session there is some persistence, period after period, which we think is important to document. Now, if you look at the market share of the dominant firm: here the recommender system does a good job. In the case of horizontal differentiation there is no dominant firm, and hence the recommender system, like the individual search benchmark, cannot find one; but if you look at vertical differentiation, the recommender system does a much better job at identifying the superior products when they exist.
Okay, so, summarizing: we have seen that recommender systems with exogenous data increase concentration, and per se, for us, this means there is a bias in the algorithm; I will come back to this. As I said before, some of the concerns about recommendation involve the idea of items becoming more and more popular through a cumulative feedback loop that creates entrenchment. So we want to see whether the fact that in reality the data are endogenous plays a role in addition to the bias in the algorithm, because if this is the case, we can say there is also a bias in the data, which is the part that has been mostly emphasized, for example, in the computer science literature. What you see here is the HHI, the Herfindahl index of concentration. The novelty is the green line, which is the HHI when the recommender system uses endogenous data that the system itself contributed to create, period after period. The punchline is that concentration does increase slightly with endogenous data, and remember, you should look around the first 10 periods here in order to have a realistic sparsity of the data, but this is clearly a second-order effect with respect to the bias in the algorithm I was documenting before. So let me summarize the findings so far. Recommender systems may help consumers, especially for products where vertical differentiation is important and when the matrix is not too sparse; they induce a significant increase in market concentration; and the worry about the cumulative feedback generated by endogenous data is not really supported by our results.
There are two more steps I want to cover. The first is that, as John was saying before, concentration is not bad per se, so is this increase in market concentration really an issue? And the second is: where does this excessive concentration come from, in the end? We want to unpack the bias I mentioned before. First point: is the concentration bias an issue? Well, we know that it depends on competition. We may have a highly concentrated market because a few lucky firms, through some random histories and no particular merit, obtain market power; or it could be that the market is very competitive and selects the best firms, as John was saying before. So what is the effect of recommenders on the implied intensity of competition? To address this question, we measure the intensity of competition in our environment with the implicit Nash equilibrium prices that would emerge at any point in time. At any point in time we take the demand implicitly generated or mediated by the recommender system, and given this demand we calculate static Nash equilibrium prices; we do the same for the individual search benchmark. So our Nash equilibrium prices are an inverse measure of competition, and you should notice that we are not modeling strategic firms here; as before, we are using these prices purely as an inverse measure of competition.
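Here is a sketch of how such implicit Nash equilibrium prices can be computed. The talk specifies that demand is formed from estimated utility minus price; the unit price sensitivity, zero marginal cost, unit demand, and absence of an outside option below are my own simplifying assumptions.

```python
import numpy as np

def nash_prices(V, lam=1.0, cost=0.0, iters=1000, tol=1e-10):
    """Static Nash equilibrium prices for single-product firms facing
    logit demand, where V[i, j] is user i's (estimated) gross utility
    for item j and net utility is V[i, j] - lam * p[j]. Iterates the
    first-order condition
        p_j = cost + sum_i s_ij / (lam * sum_i s_ij * (1 - s_ij))."""
    p = np.full(V.shape[1], cost + 1.0 / lam)      # initial guess
    for _ in range(iters):
        u = V - lam * p                            # net utilities
        u -= u.max(axis=1, keepdims=True)          # numerical stability
        e = np.exp(u)
        s = e / e.sum(axis=1, keepdims=True)       # logit choice probs s_ij
        p_new = cost + s.sum(0) / (lam * (s * (1 - s)).sum(0))
        if np.max(np.abs(p_new - p)) < tol:
            break
        p = p_new
    return p, s.mean(axis=0)                       # prices, market shares

# Lower equilibrium prices = more intense competition. Feed
# V = theta @ beta.T (the recommender's estimates) for the recommender
# case, or the utilities revealed by search for the benchmark.
```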
So here is what we see. The figure shows, for individual search and for the recommender system with endogenous and exogenous data respectively, the green and the blue line, the weighted average Nash equilibrium prices. There is a first clear, first-order effect: recommender systems intensify competition, and prices are significantly lower. Prices also reflect the concentration bias, as you may notice in these first periods here, and the data also have a rich effect on competition, because perceived differentiation is reduced when sparsity is high; we'll come back to this later. And again, the endogenous data make little additional difference.
Now, the last point I want to cover. We have observed biases in the algorithm, but we want to know more, and we can do more, because we control the environment; that is one of the strengths of our research agenda, I think. We want to know whether the causes of the algorithmic bias are failures in estimating product characteristics, or consumer preferences, or potentially both. And this is what we do; I'm going to present our results with this figure, which shows an example in the case of horizontal differentiation with exogenous data at period 10, that is, when the rating matrix has sparsity around 10 percent, the non-blank cells being around 10 percent of the total. What you have here is an entire economy that we are studying. The red line shows you the true items and users, which are uniformly distributed over the red arc, as I said before for the example of horizontal differentiation. All the rest is estimates: in particular, the blue disks are the estimated consumers, if you want, the estimated pairs of θ's for each of our consumers, and the dots are the estimated items, the estimated β's; and you see the market shares with the gray disks. This shows, we think, a very clear pattern: we observe three systematic estimation biases. First, consumers are bunched together; remember, consumers should be spread over the red arc, while here they are all bunched together in this region. Second, items get bunched together as well: instead of being spread over the red arc, they sit in this area here. The third element is that some items obtain an overstated, too high quality: these are the ones out in this direction, which have higher β's; as a consequence they promise higher utility, and as a consequence they are recommended more by the recommender system. Now notice that bunching consumers and items together would in general increase competition, because differentiation is reduced; but, on the contrary, having some products with overstated quality may actually reduce competition.
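The talk compares estimates to the truth graphically; one way one might quantify the three biases in the simulated environment is sketched below. These diagnostics are my own construction, not the paper's, and they assume the estimates are first rotated onto the true factors, since factor models are identified only up to a rotation.

```python
import numpy as np

def align(est, true):
    """Rotate estimated factors onto the true ones (orthogonal
    Procrustes via SVD); rotation leaves norms unchanged."""
    u, _, vt = np.linalg.svd(est.T @ true)
    return est @ (u @ vt)

def bias_report(theta_hat, beta_hat, theta_true, beta_true):
    th = align(theta_hat, theta_true)
    be = align(beta_hat, beta_true)
    spread = lambda x: np.arctan2(x[:, 1], x[:, 0]).std()
    # Bias 1: estimated consumers bunched together (small angular spread
    # relative to the truth).
    print("user spread est/true:", spread(th), spread(theta_true))
    # Bias 2: estimated items bunched together.
    print("item spread est/true:", spread(be), spread(beta_true))
    # Bias 3: overstated quality -- items whose estimated beta has a norm
    # well above the unit circle the true betas lie on.
    norms = np.linalg.norm(be, axis=1)
    print("largest item norms:", np.round(np.sort(norms)[-5:], 2))
```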
Now I'm getting close to the conclusion of my talk. How robust are these results, the concentration bias, the prices, and the unpacking of the biases I was showing you? We are doing a lot of robustness checks in this research. We have worked with very large matrices, and this is an important step because, remember, it is not only a matter of the sparsity of the matrix being close to reality; it is also a matter of the number of parameters the algorithm has to estimate relative to the number of observations, and with large matrices we account for this. We are using categorical ratings, where the information content of a rating in the matrix is coarser than an exact utility, for example the five-star ratings you have on Amazon. We are studying the role of hyperparameters and of cross-validation, which is one way these algorithms select their hyperparameters. We are studying the consequences of possibly misspecifying the model; remember, in what I showed you before, the number of factors in the economic model was exactly the same as in the model of the algorithm. We are studying the possibility that the algorithm, instead of recommending just one product, recommends, say, the first five or the first ten products, and the consumer then chooses among them. We are studying other market reactions, not only pricing, but also the possibility that a seller that is never able to sell for some time exits the market, or that the market attracts new firms. And we are also studying what happens if certain users have a different amount of ratings; we think this is important because this ability to control the amount of information for given users may allow a discussion of the debate on data portability on platforms. The last point is that we are investigating what happens with recommenders that come with some constraints. There is an interesting recent article in AI Magazine, the magazine of the Association for the Advancement of Artificial Intelligence, these are computer scientists, where they note that the focus so far has been very much on consumer-centric recommenders, and indeed, remember, our recommender systems minimize an accuracy loss in terms of the match value for consumers; but, they say, one could start thinking more about the side of producers, or about more esoteric things like serendipity, which means somehow giving users the impression that they discovered something on their own, while in the end it was the recommender that recommended it.
So, concluding: let me say again that this is preliminary work, but I think there are already some important and interesting takeaways from this research. The design of actual recommender systems contains biases. They generate homogenization of products and of consumers' tastes, and this has the potential to increase competition, as I was saying before. They tend to overstate the quality of some items, and this has the potential to reduce competition. All these biases go in one direction in terms of concentration, they increase concentration, but overall they also increase competition, as I showed you before. And the last point is that the general concern about the feedback loop, the diabolic feedback loop, that is, the bias that endogenous data may generate, has received, at least from what we have seen so far, too much emphasis: it turns out to be a second-order effect in our analysis. So if I want to summarize our findings, I think there is a kind of positive message here. These algorithms have biases, but these biases are not unavoidable, as they would be if they were driven by endogenous data. There are biases in the algorithms, and these systematic biases, which in the end negatively affect some of the players, either the consumers or some of the firms, could eventually be eliminated, this is our view as economists, by future, better algorithms. Let me stop here, and thank you for your attention.
Thanks a lot, thanks Giacomo. So far there are no questions in the chat, so let me ask you one question; it's maybe more of a clarification question. Okay, can you go back to the slides where you were looking at
the effects of a recommender system on prices, and explain a bit the mechanism? I didn't get the mechanism through which prices are reduced because of recommender systems. Yeah, okay. So the demand here, for the recommender system, is the demand that is implicitly generated by the recommender system: we use a utility that is the estimated utility minus the price, and this net utility is the element from which we form our demand. With this demand we then calculate the Nash equilibrium prices, and we compare the Nash equilibrium prices that emerge from this procedure with the recommender system with those that would emerge, under the same procedure, with the individual search benchmark. So the first important observation is that the combination of the biases I mentioned before actually reduces the equilibrium prices, which for us are a measure, an inverse measure, of competition. In other terms, the biases embedded in the recommender systems induce more intense competition, or if you want, the implicit demand elasticities are higher in the case of the recommender system. And the point, as I said, it's the third bullet here, is that there is a complex combination of the three biases I mentioned later in the talk: the bias of homogenization of items and products, and also the bias on the vertical dimension. Since products are bunched very much together, they are perceived as more homogeneous, and this very much increases competition.
Okay, Amelia Fletcher has a question. Amelia, if you want to ask it. So, I think it's really, really interesting work, fantastic, but you said that you had completely abstracted from behavioral biases, and it struck me that in the computer science literature it's those behavioral biases that quite often drive some of the data biases. For example, the fact that when people write reviews, they tend to review things they loved and things they hated, but not things they feel just okay about, and that creates a bias in the data that is collected. I have no doubt that you're going to go on and look at this sort of thing, but do you think that you might be missing some of the data-bias effect by excluding and abstracting from those behavioral biases, and therefore do you think it might be a bit early to make a call on that particular point?
Thanks a lot, Amelia; thank you, very much to the point, and I agree. You may have noticed that there are no policy conclusions here; it is certainly too early. Our impression, reading the computer science literature, is that they have done a lot of things, but in a not very well controlled way, mixing too many things together, and for us it is very difficult to understand what is causing what. So what we are doing here is really a first step: we wanted to have everything clean and understandable, piece by piece, and it is only by doing this that we are able, for example, to disentangle these three elements. Then I completely agree with you, there may be other biases in the data; one bias is that ratings do not come at random, they have certain patterns that have been recognized. I completely agree, and for me the implication is that this is going to be a very long line of research.
Okay, let's see; we're a bit over time, but there are a couple of questions, so let's take one. Haski has a question; Haski, do you want to open your mic? So, I mean, the media have sort of focused on the consumer
side of this; I'm a little bit more focused on the producer side. It's easy to imagine that this kind of recommender system affects the behavior of firms. Here, you know, you've abstracted from firm behavior, you've just taken static Nash pricing, but one could easily imagine bait-and-switch-type things, where I build up a reputation and then milk it. And maybe building up a position in the algorithm early on intensifies competition. These are complicated things to think about, but again, it speaks to there being a big agenda to pursue on this.
Thanks a lot, Haski. In the interest of time, I will simply thank you for this comment; I agree with you. Just note that the Nash equilibrium prices are not used as a real description of the behavior of the firms. There are ways to manipulate the recommender system, if you drop your price you will be recommended more, and this is already happening in reality; we really use prices only to calculate the implied intensity of competition. Yeah, thanks a lot.
Okay, thanks Giacomo. I received a couple of questions in a private chat, but as you see we're already almost 10 minutes past the scheduled time, so I'll send Giacomo the questions and send you the answers. So let me thank again John and Giacomo for great presentations, and let me thank all of you for attending; we had more than 100 participants, so that's great. And, we already said this last year, but let's hope that next year we will be able to see you in person in January, so keep an eye on your mailbox for the call for papers for next year. Thanks to you. Thank you. Thank you, John, thank you, Giacomo, and thank you for organizing it.