Good afternoon. I would like to welcome you all very warmly to the Seventh ECB Annual Research Conference. After two years of holding this conference virtually, we are very happy to see many of you in person. Thank you so much for coming to Frankfurt. An equally warm welcome to those of you who can only join us online. Thanks to Luc and his team, we have an exciting program with high-quality research on very important topics. One set of presentations will help us to better understand business cycle dynamics and the effects of monetary policy. The focus will be on price rigidities in microeconomic data, the role of temporary layoffs for unemployment dynamics, the effect of macroeconomic uncertainty on household spending decisions, and the potential crowding out of bank lending by quantitative easing policies. We will also hear about a new econometric methodology for constructing monetary policy counterfactuals. Another group of presentations will cover structural changes in the economy, in particular the data economy and automation, and what they imply for the functioning of our economy and for monetary policy. The two highlights of the conference will be the Jean Monnet lecture by Jean Tirole, and the debate about the outlook for inflation in the euro area, as seen from the outside, with Paul Krugman and Larry Summers, moderated by Beatrice Weder di Mauro.

In my remarks today, I will discuss how recent research, including that presented here, helps us to understand the transmission of our monetary policy. I will argue that we have made considerable progress in combining the evidence obtained from micro price data and the macro evidence on the effects of monetary policy. This gives us a better understanding of how monetary policy affects inflation and economic activity, thereby supporting our efforts to respond properly to the current situation.

In the wake of the coronavirus pandemic and Russia's invasion of Ukraine, the global economy has experienced strong inflationary pressures together with very large relative price changes. These developments contrast sharply with the preceding long period of subdued inflation. In the euro area, HICP inflation had for many years been below our 2% target. When the pandemic hit, the inflation rate fell even further, going into negative territory for a period of five months at the end of 2020. I remember an interview with an Austrian newspaper in early 2021, when I was asked by the journalist whether positive inflation still existed at all. I assured him that inflation was not dead, pointing to the expected increase in inflation over the year, without anticipating what was about to happen. Inflation indeed rebounded and began its sharp upward move to 5% by the end of that year and, according to preliminary data, further to 9.1% last month. And inflation may actually rise further in the near term. While all HICP components added to this high number, energy price inflation in the euro area stands at close to 40%, illustrating the magnitude of relative price changes in the recent period.

Central banks around the world are responding forcefully to these developments. The ECB raised its policy rates by 50 basis points in July and by another 75 basis points last week. Based on our current assessment, over the Governing Council's next several meetings we expect to raise interest rates further towards levels that will ensure the timely return of inflation to our 2% medium-term target.
The current normalization phase refocuses the attention of monetary policymakers on one of the classic questions in macroeconomics: how do central bank policy rates affect inflation and aggregate economic activity? Over the past 40 years, economists have made significant progress in answering this question. I would like to summarize the key lessons learned and pose a few questions that remain open. One stream of the literature investigates the effects of policy rates using macro data, time series econometric models and dynamic stochastic general equilibrium models. A broad consensus has emerged about the aggregate effects of interest rate policy. The econometric models, mainly structural vector autoregressions, show that policy rate changes have significant effects on both inflation and economic activity. Maximum effects occur with a lag, with the impact on inflation taking somewhat longer to materialize than the impact on the real economy. Furthermore, at least at times when inflation is not too far from the 2% target, the models indicate that the effect on inflation tends to be modest relative to the effect on the real economy. In other words, the Phillips curve is rather flat. DSGE models can replicate the effects of interest rate changes found in the macro data, shedding some light on the transmission mechanism of monetary policy.

With the advent of more and more granular data, research shifted its focus to micro data on prices. Researchers documented how individual firms set prices, aiming at building models that would be consistent with both the price setting behavior at the micro level and the effects of monetary policy in the macro data. Meeting these two objectives simultaneously proved challenging. When economists studied the micro data underlying the consumer price index in the United States and other countries, they found that individual price changes are infrequent but typically large in absolute terms, on the order of 10%. In their famous paper "Menu Costs and Phillips Curves", Mikhail Golosov and Bob Lucas investigated the implications of this evidence for the effects of monetary policy in a dynamic equilibrium model with idiosyncratic shocks and a fixed cost of nominal price changes. When calibrated to match the average frequency and size of price changes in the micro data in a low inflation environment, the model predicted that monetary policy would have a strong effect on inflation and a weak effect on output, that is, a steep Phillips curve, in contrast to the relationship found in the macro data. The key reason behind this result was the selection effect. In a menu cost model, after a change in the policy rate, prices further away from their optimum adjust sooner than others. In the Golosov-Lucas model, this selection effect is quite strong. When the policy rate rises, prices previously far above the optimum decrease by large amounts, while substantial price increases that were about to occur are postponed. Therefore, in that model, price changes are infrequent, but monetary policy has a strong effect on inflation and a weak effect on the real economy. In order to reconcile these partly contradictory findings, more work was required to match the evidence from the micro price data and the macro evidence regarding the effects of monetary policy. Since the Golosov-Lucas paper was published, economists have been studying numerous micro price data sets from different countries.
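To fix ideas, here is a stylized statement, in illustrative notation rather than the paper's own, of the fixed menu cost rule behind the selection effect:

\[
x_{it} = p_{it} - p^{*}_{it}, \qquad
\text{firm } i \text{ adjusts at } t \iff |x_{it}| \ge \bar{x}, \qquad
x_{it} = 0 \text{ after adjustment},
\]

where \(x_{it}\) is the gap between the actual price and the optimal price and \(\bar{x}\) is the inaction threshold implied by the menu cost. An aggregate shock shifts all gaps at once, so the prices pushed beyond the threshold, those farthest from their optimum, are exactly the ones that adjust.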
Researchers have been busy constructing models that can match various features of the micro data, including the average frequency and size of price changes. In 2018, the European System of Central Banks established PRISMA, the price-setting microdata analysis network, to collect and study various kinds of micro data, aiming to deepen our understanding of price setting behavior and inflation dynamics. Today, Peter Karadi will present the findings from a research project undertaken as part of the PRISMA network. In the paper, Peter and his co-authors, Raphael Schoenle and Jesse Wursten, set out to measure the selection effect in micro data. To give you a preview of their findings in a nutshell: the selection effect is absent. The probability that a given price will change increases to a certain extent when that price is further away from the optimum. However, the probability of price adjustment, conditional on an aggregate shock, does not seem to depend on the distance from the optimum. The authors also discuss which models can match such price setting behavior. The promising candidates are state-dependent models with random menu costs and models of information-constrained price setting. In both classes of models, the selection effect can be weak. Some prices fail to adjust, even though the distance from the optimum may be large, while other prices change despite the distance being small. The transmission of monetary policy in these models depends not only on nominal rigidities, governed by the frequency of price changes and the selection effect, but also on whether a change in the policy rate triggers large or small price adjustments. Real rigidities are features of the economy that dampen the size of these price adjustments. Think of real wage rigidity, sticky intermediate input prices, or inattention to inflation. Models with a weak selection effect and with real rigidities imply that, conditional on an aggregate shock, policy rate changes have a modest effect on inflation compared with the effect on economic activity, suggesting that the Phillips curve is rather flat. These models therefore meet the challenge of matching both the micro price data and the macro evidence on the effects of monetary policy.

At the same time, the evidence I have summarized so far comes from a period when inflation rates were fairly stable and not far from 2%. Would we expect the same patterns in an environment of high inflation? For example, a prominent analysis of the US micro price data from the great inflation of the 1970s and early 1980s finds a substantial increase in the frequency of price changes. This was a period when the inflation rate remained above 5% for many years, and inflation expectations were not well anchored. Similarly, model-based simulations from the PRISMA network show that in the euro area, the frequency of price changes increases with the inflation rate, something that we are indeed observing today, with a historically high share of firms expecting to increase their prices over the coming months. According to these simulations, a material increase in the slope of the Phillips curve may occur if inflation were to stay persistently above a certain threshold for an extended period, once again challenging previous findings. Let me conclude. Recent research to be presented today suggests that the slope of the Phillips curve may depend on the inflation environment, with the curve potentially becoming steeper when inflation is high.
The evidence is still scarce, however, and more research is needed. Moreover, numerous questions remain unanswered. How high and how persistent does inflation need to be for the frequency of price changes to increase significantly? How would this affect the strength of the selection effect? And could real rigidities become weaker, for example because real wages adjust faster or price setters pay more attention to inflation? Improving our grasp of these issues is essential to foster our understanding of the effects of monetary policy. Research is the backbone of good policy making. And we count on you helping us to advance our knowledge of these fundamental issues, which helps us to deliver on our mandate of price stability. I would like to thank you very much for your attention, and I wish you all a very interesting and productive conference. Thank you very much.

So, welcome to this first session of the Seventh Annual Research Conference. I'm very happy that I was asked to chair this session. And I would like to give the floor right away to Peter Karadi and his paper on price selection in the micro data, written together with his co-authors, Raphael Schoenle and Jesse Wursten.

Thanks a lot to the organizers for having the paper in the program. Yeah, so this is joint work with Raphael Schoenle and Jesse Wursten, and the usual disclaimer applies. So what I'm going to talk about is price selection in the micro data, and the motivation is quite clear. It is a classic question in macroeconomics that the rigidity of the price level influences the real effects of monetary policy, and also the amplification through demand-type channels. From previous research, we know that prices change infrequently. And in standard price setting models, this low frequency of price changes implies that the aggregate price level is rigid. But in models where price rigidity is microfounded by a fixed cost of price adjustment, a menu cost, like the model of Golosov and Lucas, the price level can actually stay flexible even if only a small fraction of prices adjust. And the reason is that in these frameworks, large price changes are endogenously selected. Why is this the case? If there is a fixed cost of adjusting prices, then it is optimal for firms to concentrate on the products whose prices are the most misaligned and adjust those; this way, they can mitigate the cost of price adjustment. But then, if an aggregate shock hits, these most misaligned prices are the ones that get adjusted. They will change a lot, because they reflect not just the aggregate shock but also the misalignment at the product and firm level. So there is going to be an interaction between the idiosyncratic product-level shocks and the aggregate shock, and this will raise the flexibility of the aggregate price level. So what are we going to do in this paper? We are going to revisit this Golosov and Lucas critique of price rigidity by looking at micro data, and we would like to measure the strength of the selection effect there. We do this by measuring price misalignment, to capture the product-level driver of selection, and by identifying the aggregate shocks that trigger these effects. And what are we going to find? We find evidence for state dependence in price setting.
So we will find that the probability of adjustment increases with price misalignment unconditionally, that is, if we pool the data over time. But importantly, we find that selection, as we defined it, is not there: conditional on an aggregate shock, the size of the misalignment is immaterial. Instead, what we find is that state-dependent adjustment is best described by an active gross extensive margin. There is a shift between the frequency of price increases versus price decreases: if there is a tightening, there are going to be fewer increases and more decreases, and that accounts for most of the adjustment in the data. We think this provides some guidance for model choice and policy implications. These results are consistent with mildly state-dependent models with a linear adjustment hazard and, actually, sizeable monetary non-neutrality. In the talk, I'm going to first present the framework, explaining in a bit more detail how we define selection, and then go to the data. In the presentation I'm going to concentrate on supermarket data; as a price gap proxy we are going to use the distance from competitors' prices, and for an aggregate shock we are going to use a credit shock. But in the paper we show that the results are robust to using more general producer price index micro data, other proxies for the price gap, and other aggregate shocks. Then we combine these to look at selection and show some robustness. If I have time, I'm going to talk about the literature a bit more.

So let me jump into the conceptual framework. The goal is to identify the channels through which the price level adjusts to an aggregate shock. Basically, you should think of this as an accounting framework in an environment with sticky prices. In the original Caballero-Engel framework, they identified two channels. One is the intensive margin: when an aggregate shock hits, all firms that adjust their prices anyway are going to adjust them by more, so there is a larger adjustment for each adjusting firm. And then there is an extensive margin channel, which is that there are going to be new adjusters; this channel is active only in state-dependent price setting models. Our contribution is to separate this extensive margin channel into two channels. One is what we call the gross extensive margin, which is a shift between price increases and decreases. The other is the selection effect, which we define as prices with larger gaps adjusting with higher probability, conditional on an aggregate shock. And we argue, drawing on recent research showing that in a lot of these models the dynamics after a monetary shock are quite similar, that it is sufficient, at least approximately, to concentrate on either the impact effect or the effect at a particular horizon.

The starting point is a model with a price adjustment friction, so price adjustment is lumpy. What Caballero, Engel and others pointed out is that it is very useful to concentrate on the price gap, which is defined as the difference of the price from a theoretical optimal price. This optimal price is influenced continuously by both product-level and aggregate factors, while the actual price, because of these price adjustment frictions, adjusts only occasionally.
Using the price gap, we can decompose inflation into the product of several terms: the density of the price gap, multiplied by the probability of adjustment, this is lambda of x, the hazard function, and the size of the adjustment conditional on the gap, which is just minus the gap itself. This is illustrated on this figure, which shows how adjustment happens in an Ss pricing framework. The shaded area shows the density of the price gap; the purple step hazard function shows the probability of adjustment, which is zero if the gap is smaller than a threshold and one above the threshold. The dark shaded area shows what happens: these are the price decreases in this example, and these are what actually contributes to inflation itself. Here you can see that the decreases are large. But importantly, the fact that price decreases are large in the Ss pricing framework does not imply selection as we define it. Here you see two versions of the model: one, as before, where the Ss hazard is a step function, the other with a linear adjustment hazard function. There you can see that the probability of adjustment is still increasing in the gap, but it is not jumping like in the Ss pricing framework.

What we propose, and this has already been done in the literature, is to decompose inflation into two components: one based on the averages, the average size and average frequency of adjustment, and a covariance term. The covariance term asks how the size of adjustment covaries with the probability of adjustment. If this covariance is positive, then you have state dependence in price setting. Importantly, in models where the hazard is flat, like the Calvo price setting model, this covariance term is zero, so there is no state dependence in price setting. But this is not what we call selection. Instead, for selection we want to ask, when an aggregate shock hits, which channels are active, and you can show that three terms become active. One is the intensive margin, which is just that all the prices which are adjusting anyway are going to be adjusted by more. And on the extensive margin there are going to be two terms. One is the gross extensive margin, which is that there are going to be more decreases than increases. And the selection term asks how the positions of the new adjusters, those who are changing now because of the shock, covary with the size of adjustment: basically, is it the prices with larger gaps that are now going to be changed with higher probability than before?

In the figure you can see that, in terms of selection, there is going to be a big difference between a Golosov-Lucas model, which is on the left-hand side, and a mildly state-dependent model where you have this linear hazard. In both cases the dark shaded areas show the position of the new decreases, the ones that get triggered by the aggregate shock. You see that in the Golosov-Lucas case the new decreases are concentrated far from zero, so they are going to be changed by a large amount, while in the mildly state-dependent model on the right-hand side they are dispersed, and in this case there is going to be no selection. This table just summarizes what I have said. In the time-dependent model, only the intensive margin is active.
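In symbols, a stylized version of this accounting (illustrative notation; the paper's own derivation refines the split of the second term) reads:

\[
\pi \;=\; \int (-x)\,\Lambda(x)\,f(x)\,dx,
\]

where \(x\) is the price gap, \(\Lambda\) the generalized hazard function and \(f\) the gap density. The impact response to a small aggregate shock \(\delta\) that shifts all gaps down by \(\delta\) is then

\[
\frac{d\pi}{d\delta}\Big|_{\delta=0}
\;=\; \underbrace{\int \Lambda(x)\,f(x)\,dx}_{\text{intensive margin}}
\;+\; \underbrace{\int x\,\Lambda'(x)\,f(x)\,dx}_{\text{new adjusters}},
\]

with the second term further split in the paper into the gross extensive margin and the selection effect. Under Calvo, \(\Lambda' = 0\) and only the first term survives; under Golosov-Lucas, \(\Lambda'\) spikes at the band edges, making the new-adjuster term, and hence selection, strong.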
In Ss and convex-hazard models, the intensive margin, the gross extensive margin and selection are all active; in a linear-hazard model, selection is not active, and the effects come instead from the gross extensive margin. The next part of the paper measures the shape of the hazard function and the gap density in the data, as well as the strength of the margins of adjustment, both unconditionally and conditional on an aggregate shock. So let me jump into this.

The data that we are using is supermarket data from the United States. The advantage of this data is that it is very granular, it covers 170,000 products; it has wide coverage in the US, over 50 markets; and it has a long time series, 12 years. So it is very suitable for our purposes, because it has both the granularity which gives us high-quality information about the prices of the exact same product at close competitors, and a long time series, so we can identify aggregate fluctuations. We do some cleaning of the data: we filter out temporary discounts and do some time aggregation to go from weekly to monthly data.

To look at price gaps, we argue that the relevant component of the gap is actually observable. What we use is the distance from the average price of close competitors. We observe a particular price, we can see the exact same product in other stores, and we will use how far a given price is from this average of competitors' prices. We control for store fixed effects to account for regional variation or amenities. The point is that if stores want to avoid price misalignment, then this is going to be a reasonable measure of the price gap. And they want to avoid it in both directions: they don't want their prices to be higher than the competitors', because then they will face low demand, but they also don't want them to be too low, because then they could increase profits by raising the price some more. Formally, the competitor reference price gap is defined as below, where we take the average price of the exact same product at competitors, after taking out store fixed effects. We also control for unobserved heterogeneity: from the gaps we subtract estimated product-store fixed effects, and this is actually important for our results.

This is one of the main figures of the paper. What you see here is how the probability of a price adjustment changes with the competitor price gap. Importantly, the probability of adjustment increases significantly with the distance from zero. It is also approximately, though not exactly, linear, positive at zero, and mildly asymmetric. These results are in line with previous results, which usually use narrower data. If you look at the average size of adjustment as a function of the price gap, you get the following figure. What is striking here is that there is an almost one-to-one linear relationship between the size of adjustment and the gap. So if a firm faces a gap, on average it wants to close this gap. This shows that our measure is indeed a relevant component of the gap. And the last figure shows the density of the gaps, which shows that despite the sales filtering and the store fixed effects, there is still sizeable dispersion, and fat tails in the distribution.
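Schematically, the competitor reference price gap underlying these figures can be written along the following lines (the notation is illustrative, since the slide formula is not reproduced in the transcript):

\[
\hat{x}_{ist} \;=\; \Bigl(p_{ist} - \frac{1}{|C_{is}|}\sum_{j \in C_{is}} p_{ijt}\Bigr) - \hat{\mu}_{is},
\]

where \(p_{ist}\) is the log price of product \(i\) in store \(s\) at time \(t\), \(C_{is}\) is the set of competitor stores selling the exact same product, and \(\hat{\mu}_{is}\) is an estimated product-store fixed effect absorbing persistent regional or amenity differences.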
One thing we can do right away, just using this hazard function and density, is the decomposition that we proposed at the beginning, because for this we need just these objects and a calculation. Our goal is to separate the three channels. If we do this, what we find is that the intensive margin is the most important, the gross extensive margin increases the effect by around one third of the intensive margin, and the selection effect in this exercise is minuscule. Just to emphasize: the extensive margin is important; what these results say is that it is driven mostly by the shift between increases and decreases after a shock, not by selection.

So far we have done this unconditionally, without looking at how the economy responds to an aggregate shock. Next we try to reassess the same thing using an aggregate shock: we want to see whether these results are borne out in the data. What we use for this is a credit shock. We look at an exogenous tightening of credit conditions, which we identify using timing restrictions. The idea is to look at an increase in the excess bond premium, basically a default-risk-adjusted corporate bond spread constructed by Simon Gilchrist and Egon Zakrajšek, without any contemporaneous effect on activity, prices or interest rates. This is how we identify exogenous, causal shifts in credit conditions.

Just to show you how the economy responds to a shock like this, let me first run a series of local projection impulse responses, where we look at different variables of interest and ask how this credit shock passes through to their behavior. For controls, we use one to 12 months of lags of the consumer price index, industrial production, the one-year Treasury rate, and the excess bond premium. These figures show what the impulse responses look like: there is a shift in the excess bond premium which dies out within a year. Interest rates respond, monetary policy eases, but this is not enough to offset the effect on industrial production, which is sizeable and persistent; core CPI, the price index that we look at, declines slowly, and the effect peaks around 24 months after the shock. If we run the same regression using our supermarket price index, just to show you that it makes sense, the results are similar to what happens to core CPI; the effects do not appear before 24 months. This motivates us to look at the 24-month horizon, which is basically the peak-level effect of the shock.

To look at selection, we would like to combine this product-level proxy with this aggregate shock. The question is, again, whether the new adjusters after the shock have large gaps, and our approach is to treat selection as the interaction between the aggregate shock and the product-level gap proxy, and to ask whether it influences the probability of price adjustment. The linear probability model that we follow is shown here. The dependent variables are the indicators of a price increase or a price decrease between period t and h periods in the future,
for a particular product in a particular store. As explanatory variables we have the price gap in the month before the shock, to control for the regular effect of the price gap; the excess bond premium, to control for the effect of the aggregate shock on the probability of adjustment, this is going to be the average effect, the gross extensive margin effect; and the interaction term. This is our focus: the interaction term asks whether, when an aggregate shock hits, the prices that change are those with a larger gap or not. This is the selection effect. We also have various controls. For example, we control for the age of the price, to control for time dependence. We have a series of aggregate controls, the same ones as in the local projection exercise I showed you before. And we also have product-store fixed effects as well as calendar-month fixed effects, to control for unexplained cross-sectional heterogeneity as well as seasonality. And we cluster standard errors by category and time.

This is the main table of the paper, which shows how the probability of price increases (left column) and the probability of price decreases (right column) respond to these various factors. What you see is that the gap itself has a significant effect. The shock itself also has a significant effect. But their interaction term is insignificant, and this is consistent across various robustness exercises. And in terms of quantities, these effects are sizeable. If you move the gap from the first quartile to the third quartile, the probability of a price increase is 26 percentage points lower. The adjustment on the gross extensive margin is also sizeable: a one-standard-deviation credit tightening, which is 33 basis points, decreases the probability of a price increase by one percentage point and, at the same time, increases the probability of a price decrease by a similar, symmetric one percentage point. But we find no selection, and some evidence of time dependence. So if we put together the theoretical and the empirical results: in the data there seems to be an active intensive margin and an active gross extensive margin, but no selection, and this is consistent with models with a linear hazard, but inconsistent both with time-dependent models like the Calvo model, which assumes a constant hazard, and with Ss and convex-hazard models like Golosov and Lucas.

Just quickly, let me show you one robustness exercise. One thing you might worry about is that the linearity assumption in the regressions might be a bit too strong. Here we relax it: instead of assuming that the relationship between the gap and the probability of adjustment is linear, we create groups of firms with different gap sizes and look at how the probability changes. What you can see is, first, that the relationship is quite linear in this case, not exactly linear but close. The red lines show that, unconditionally, there is this linear relationship between the gap and price increases and price decreases. And for the interaction term, the blue lines, the relationship is insignificant, at zero. So if we relax the linearity assumption, the results survive.
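Written out, the linear probability model described above takes roughly the following form (illustrative notation; the exact specification is in the paper):

\[
\mathbb{1}\{\Delta p_{ist,\;t \to t+h} > 0\}
= \beta_1\,\hat{x}_{is,t-1}
+ \beta_2\,\varepsilon_t
+ \beta_3\,\hat{x}_{is,t-1}\,\varepsilon_t
+ \Gamma' Z_{ist}
+ \mu_{is} + \tau_{m(t)} + u_{ist},
\]

with an analogous equation for price decreases. Here \(\varepsilon_t\) is the identified credit shock, \(Z_{ist}\) collects the price-age and aggregate controls, \(\mu_{is}\) and \(\tau_{m(t)}\) are product-store and calendar-month fixed effects, \(\beta_2\) captures the gross extensive margin, and \(\beta_3\), the coefficient the authors find to be insignificant, captures selection.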
In the paper we run a battery of other robustness checks, and this result survives in all cases. So I have some time to talk quickly about part of the literature. In the literature, selection is a robust prediction of various menu cost models with steep hazard functions; the classic papers are Caplin and Spulber, and Golosov and Lucas. And in more recent iterations, it has been found that selection comes back. For example, I have work with Adam Reiff where we assume that the idiosyncratic shocks have fat tails, as in the famous paper of Virgiliu Midrigan, and we find that if you assume these shocks have a particular, robust form, then selection comes back. Similarly, Bonomo and co-authors find that if you have multi-product firms, but assume that firms still face a fixed cost of adjustment for each product they change, then the selection effect can come back. But importantly, selection weakens if the hazard function is flatter, because of information frictions or random menu costs. Our paper addresses the same question empirically: what does the hazard function look like? And we are not the first to look at the hazard function. There are two strands of literature. One looks at the hazard function implicitly, estimating the density and hazard function by matching moments; for example, Francesco has a great paper doing that and finding that the hazard function shape that fits the data best is a quadratic one. There are also papers looking at explicit hazard functions, and interestingly, these papers tend to find that the hazard functions are close to linear. There are also other papers that try to look at selection directly by constructing informative measures, including work by Luca Dedola and co-authors.

So let me just conclude. We looked at granular supermarket and PPI data in the paper to measure selection. We find evidence for state dependence, but no evidence for selection. Instead, we find that the effects come from the gross extensive margin, and this is consistent with linear-hazard, state-dependent models. The implication that we draw from this is that the shift between price increases versus decreases is what determines the extensive margin, and the shape of the hazard function is informative about the strength of this shift. So it makes a lot of sense to concentrate on and learn more about the shape and the slope of the hazard function. Thank you very much.

Thank you, Peter, yes. Now we have Francesco Lippi from the Einaudi Institute for Economics and Finance, giving the discussion of this paper.

Thank you. Okay, how to put this? Okay, so thank you very much for the invitation. It's a pleasure to be here, and it's a pleasure to discuss this paper, which is very interesting, especially for someone like me who has been working with these models for many years. So what do they do in the paper? They consider sticky prices and want to answer questions about the propagation of monetary shocks, in particular credit shocks, but think more generally of how monetary policy works, like Isabel put it. And they have great data that are useful to empirically analyze what firms actually do when it comes to changing prices. That is, whether or not they change prices, what Peter called the extensive margin, and by how much prices are changed.
So first they characterize firm behavior, then they discuss the implications of this behavior for the propagation of aggregate shocks, which is indeed the name of the game. In this literature, you have two polar models. You have Calvo, where firms adjust prices just because: the Calvo fairy arrives and they adjust prices. Obviously, we are never happy nor proud to use such a model; you don't tell your friends in business that this is how you think they set prices. On the other hand, you have Golosov-Lucas, where you know everything: there is a fixed cost and you only adjust when you reach that critical threshold. That is also an incredibly exaggerated model. It is probably not going to be true, and it is not true. And I think the main interesting result of this paper is to show very clearly that both models are wrong, wrong in a big way, and that you can do something more with this data: you can get a very precise idea of where you are in the space spanned between these two extremes. That is what they do.

So specifically, they measure these x's, these desired adjustments. A firm is happy if x is zero; I am putting a hat on it because that is really the empirical measure of the x. At some point, I will bring in the theory x. And they embrace this Caballero-Engel framework, a very nice framework where, if you tell me your x, I am going to tell you the probability that you adjust. In the extreme models, this is a simple object. Calvo: this probability is just a number; it does not depend on x, it is a constant, flat function. Golosov-Lucas: it is kind of an L-shaped object, zero until you reach a critical x-bar, where it shoots to infinity in continuous time, to one in discrete time. Okay, that is what they do. So they estimate these lambdas. What do they find? They find that it is linear. It is very nice. Eichenbaum, Jaimovich, and Rebelo found basically the same using data from one supermarket chain: linear in the absolute value of x. And then they want to do more. They say, well, what if I have aggregate shocks? How do aggregate shocks affect this probability of price changes? And so they run linear regressions of the probability of adjustment on x, epsilon, and the interaction term. And I have already sort of summarized the main results.

So let me summarize the framework we are using to think about this problem. This is Caballero-Engel, and it is a nice framework because both Calvo and Golosov-Lucas are nested as limiting cases. So there is a firm i who controls its x. And now I do not have the hat, because this is the theory object: mu star is like the ideal markup, and the gap is the price relative to the ideal markup over marginal cost. You can see how the aggregate shocks that they have will affect the decisions of the firm, because they will affect marginal cost. If credit becomes more expensive, my marginal costs are higher, I may want to change my prices. That is the idea. An assumption in this model is that there is no inflation; you know, these are models written before 2021, so there was very little inflation. So the x's are bouncing around because of idiosyncratic shocks; that is mostly what it is. And then the theory, the optimal policy, will produce one of these hazard functions. As for why it is probabilistic and not zero-one, there are several ways to justify this. You could think that these fixed costs are random: you draw them from a distribution.
So, you know, Peter and I have the same x, but he draws a low adjustment cost, then he adjusts; I draw a high one, I do not adjust. That is the idea. And intuitively, I think it makes sense in many models: the bigger the gap, the bigger the probability. You have bigger motives for doing something. Another big assumption in this model: if you decide to adjust, you close the gap. You change your price such that you jump on top of mu star. That is not an assumption that has to be true in all models. For instance, in models with sales, or models with high inflation, you do some front-loading of inflation: when you adjust, you do not close the gap, you may want to start with a high price. These are just different models. Okay, so if you give me one of these models, if you give me those primitives, then I can basically aggregate for the economy and work out what the economy will look like. What is f? By the way, f, the density, in this model will be convex; I wrote down a little note with the Kolmogorov equation. Just like he finds. You can compute the aggregate frequency. You can compute several observable moments, like the distribution of price changes.

And these results in figure two of the paper are a real beauty. I think they are a real beauty because, I mean, for me they are a real beauty because I work on these things. You give me these three figures. First of all, I can say: first figure, great, they are closing the gap, the slope is minus one, just like the theory suggests. Remember, they did not have to; you could have seen something very different. The middle figure: there are no Calvo fairies. Again, I feel good. My brother is in business; I do not want to tell him that I am working with fairies. Today I am at the ECB; I want to tell him we are serious researchers looking at data seriously. No fairies, that is what we find. And the density is convex and symmetric, just like the theory suggests. So, if you are a Caballero-Engel guy, you can stop here. Okay, let us analyze how a shock propagates in this model. They spend a lot of time in the paper discussing selection, which is okay, but selection is a little bit like watching a soccer game and counting shots on goal. That is nice, but really you want to count goals, because that is in the end what determines whether or not you win. So I am happy to do stuff about selection, but there is something more interesting. Once you give me a GHF, a generalized hazard function, that is all I need to study how shocks propagate.

So here I am using some theory results. These models can be solved pencil and paper pretty accurately, so these are some results I produced using a recent paper we have. On the left-hand side, you see the primitives. These are three generalized hazard functions: the flat one, the black one, is Calvo, obviously, the probability does not depend on the state; then there is the Golosov-Lucas one; and then there is a linear, absolute-value hazard function. Just for fun, the middle panel shows what these three functions predict in terms of the shape of the observable distribution of the size of price adjustments. And in the right panel, you see the implications for the propagation of aggregate shocks. And this is not one simulation. These are analytic results; I can tell you, it is a proposition.
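In stylized form, the three generalized hazard functions being compared are:

\[
\Lambda_{\text{Calvo}}(x) = \lambda, \qquad
\Lambda_{\text{GL}}(x) = \begin{cases} 0, & |x| < \bar{x},\\ 1, & |x| \ge \bar{x}, \end{cases} \qquad
\Lambda_{\text{lin}}(x) = \min\{1,\ k\,|x|\},
\]

written here in discrete time, with \(\lambda\), \(\bar{x}\) and \(k\) as free parameters (the exact parameterizations behind the figures are in the discussant's paper, not the transcript).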
So if you give me these models, I can tell you exactly how much bigger the output response is in Calvo, the black line, compared to Golosov-Lucas: it is six times bigger in terms of the area. And the linear hazard is actually in between, a little bit closer to Golosov-Lucas. So in spite of the lack of selection, this model has state dependence, and the key is state dependence: the fact that this hazard is not flat, which means that the firms that need to respond will respond when there is a shock. And does it matter? Here is the answer: it does.

So what they do, they focus a lot of their analysis on selection. In particular, they define selection, I am sorry, it is a bit small, but you saw it in Peter's slides, by running these linear probability models where, say, the probability of price increases is regressed on a measure of the aggregate shock, on the gap, and on their interaction. And the interaction is their intuitive idea of whether or not you have selection, because given an aggregate shock, the probability that you adjust should be bigger if the gap is bigger. Fine. And that is what they find: the coefficient on the selection term is not significant. Okay, so what do I think about this? To me, it is a bit of a distraction, but let me think about it nevertheless. So let me explore the theory behind these regressions, the theory behind the 'metrics. Should I expect, or should I not expect, the interaction term to matter in these regressions? Well, remember the theory has a definition of x, and the theory x includes the aggregate shocks. The way they measure the x's, this x-hat, is the difference between one firm's price and other firms' prices. Now, because everybody is affected by the aggregate shock, their x-hats do not contain the aggregate shock: by construction, it washes out. So let me construct the theory-based gap, which is their x-hat plus the epsilon. The epsilon is a credit shock. I am putting an alpha on it because I am not really sure about the units: x is measured in units of price deviations, and for the credit shock you need some elasticities. So suppose we ask some good micro guys about the alpha; anyway, there is an alpha there. So now the hazard function that I expect to be at work in this data is some function of x-hat plus alpha epsilon.

So what is the question? Should we expect the interaction term? Well, it depends on the shape of lambda. If lambda is linear, then a linear function does not have any interaction term: all the higher derivatives are zero. Now, here it is a bit tricky. It is not really linear; it is linear in the absolute value, and the absolute value is not a linear function. It is also not differentiable at zero. But say you do something quick and dirty and approximate the absolute value: you are actually going to find some role for an interaction, because it is a nonlinear function. Let us do something simple to clear ideas. Suppose the hazard were quadratic. Well, then of course you should have an interaction term, because we all know the math for the square of a binomial, but you should also have x squared and epsilon squared, not just the levels. So that is why I find it useful to think of the theory: it guides me to what I should look for in the empirics. So I set up a model and simulated some data. It was cheap, so I simulated billions of data points. In the first regression you see, these are like asymptotics; that is why the t-statistics are humongous.
You know, if I estimate the hazard function, well, that is the true model I am using, so of course I recover it, and I do not see any interaction from the product term. I should not; I am just checking that my code is written correctly, and hopefully it is. Then, and this is a bit of a detail, the model tells you that if your gap is high today, you adjust today. Now, I understand they have real data, and we saw that the shock takes time to materialize, so they look at effects after 24 months. But you have to be careful in pushing the horizon for the outcome so far away, because if you push it really too far away, something is going to happen: at some point, everybody will adjust. For instance, in my simulation, if I take a one-year horizon, everybody adjusts within the year; I am using a model with two adjustments per year. Okay, so what if I mis-specify the model? Instead of estimating the absolute value, I just throw in x, epsilon and their product, and I actually find that the product is kind of borderline significant, much less than the x. Now, here there is also an issue of inference. I have billions and billions of data points; I do not think they have as many aggregate shocks. So let me assume that in the 10 years they have, they can observe 20 aggregate shocks: basically, divide all my t-stats by 10. Then I find that the interaction and the epsilon are only borderline significant. I am just doing this to say: look, what you find really depends, first of all, on functional form assumptions, the specification of the regression really depends on that, and on the number of observations. And in my simulations, it is harder to estimate the effects of epsilon, because I have a large cross-section but not that long a time series.

So in the end, I was thinking, maybe Jean Tirole is in the room and he does not know about monetary policy, at least what we do. So, are we really lost? Have we learned anything in the past 20 years? I think we have. This nice paper shows: are price setters attentive? Yes, they are. Decisions depend on the state. Do we care about time- versus state-dependent models? I think we do. If I were a policymaker today, with these big energy shocks, the COVID supply bottlenecks, the trade wars, models like this behave very differently from time-dependent models where firms are just waiting for the Calvo fairy to do an adjustment. And we have lots of evidence, say from the big surprise appreciation of the Swiss franc, that when these big events occur, firms are fast. Peter's own paper on VAT changes in Hungary; Alvarez, my co-author, has evidence from Argentina of utility prices changing dramatically and prices changing fast. The bottom line is: when there are big shocks, there are big reactions. I think as economists we should be proud of it, because it is kind of reassuring about our job. Suppose firms were just doing things "just because"; that would be a bit depressing. My final comment is something I would do with this data. Even in the little model I use, the aggregate response is very small. One way to pump up these responses is to think of models where there are strategic complementarities, where each firm's decision about its little x also depends on the big X, the aggregate price. That is a different lambda. And I think this is a really important, big question that they could do a lot with, with the data they have. Thank you.
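To make the functional-form point above concrete, here is a minimal simulation sketch (not the discussant's actual code; the panel sizes and all parameter values are illustrative assumptions) of a panel with a hazard linear in the absolute gap, followed by the naive linear probability regression:

import numpy as np

# Minimal sketch of the discussant's thought experiment (assumed setup).
rng = np.random.default_rng(0)
N, T = 5_000, 240            # firms, months (hypothetical sizes)
sigma = 0.05                 # s.d. of idiosyncratic gap shocks
alpha = 1.0                  # assumed pass-through of the aggregate shock to gaps
k = 4.0                      # hazard slope: lambda(x) = min(1, k*|x|)

eps = 0.01 * rng.standard_normal(T)        # aggregate (e.g., credit) shock series
x = np.zeros(N)                            # price gaps
gap_pre, adjust, shock = [], [], []

for t in range(T):
    x += sigma * rng.standard_normal(N)    # idiosyncratic drift of the gaps
    gap_pre.append(x.copy())               # pre-shock gap, as in the paper's regressions
    x -= alpha * eps[t]                    # the aggregate shock shifts every gap
    prob = np.minimum(1.0, k * np.abs(x))  # generalized hazard, linear in |gap|
    adj = rng.random(N) < prob
    adjust.append(adj)
    shock.append(np.full(N, eps[t]))
    x[adj] = 0.0                           # adjusters close their gap

# Misspecified linear probability model: regress adjustment on the signed
# gap, the shock and their product, omitting the absolute-value nonlinearity.
X = np.concatenate(gap_pre)
A = np.concatenate(adjust).astype(float)
E = np.concatenate(shock)
D = np.column_stack([np.ones_like(X), X, E, X * E])
beta, *_ = np.linalg.lstsq(D, A, rcond=None)
print("const, gap, shock, interaction:", np.round(beta, 3))
# Because the true hazard is k*|x - alpha*eps|, a nonlinear function, the
# estimated interaction coefficient depends on the specification and on how
# many aggregate shocks the sample contains, the caution raised above.

Rescaling the t-statistics by the effective number of aggregate shocks in the sample, as suggested in the discussion, is then a quick way to gauge how fragile the interaction term is.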
Thank you, Francesco. Maybe, Peter, you want to react quickly before we open up to the whole audience. That way people can still think a bit about questions, but maybe you want to react directly to what Francesco suggested.

On almost everything, we agree. And I think it is very important to emphasize this, because there is potential confusion about what we say and do not say. Importantly, when we say that selection is not there, we are not saying that state dependence is not there. And I think it is very useful that you pointed out that, in this case, the relationship we show between the gap and the probability of a price change will have an influence on how the aggregate economy responds, and so we need to go away from a Calvo model. In your discussion, you suggested we go and figure out much more of the theory behind the 'metrics. I think this is well taken; we want to go there. Just to reassure you: it is true that if you go 24 months out, the probability of price adjustment is much higher than if you look one month out, but I can assure you that we still have this V-shaped relationship, so the effects are still there. And I think you are absolutely right about strategic complementarities; this data might be useful to also learn about that. Thanks a lot again.

Thanks to you. It is a great paper. So, are there any questions from the audience? There is also the Slido tool, of course. For everybody who is attending online, please feel free to ask your questions in Slido. If not, I have a question, actually, more from a practitioner's point of view. You did your study basically on US data, covering a span from 2001 to 2012, when inflation was actually relatively low compared with now. And if you look at the shocks: nowadays we are confronted a lot with, let us say, aggregate supply shocks. To what extent would your results change in, let us say, a higher inflation regime, or a regime shift where you actually have more of these aggregate supply shocks than demand shocks, for example?

Yeah, so thanks for the question. One thing is that in this PRISMA network, the price-setting microdata analysis network, we have actually acquired similar data for the euro area, and we found that the results are qualitatively comparable, and we can also compare quantities. So this is something I want to point out. Unluckily for this question, though luckily in an actual sense, inflation was low also for the period for which we have the data. So in some sense we cannot directly look at evidence on how it changes, but we would like to in the future. What we can do is use these theoretical models to get some idea of how the results would change if the shocks were larger or inflation were higher. And what we find is that if the shocks are large, then there will be a much larger effect: many more firms would adjust. And if trend inflation is higher, then we should expect more firms adjusting. So overall, the slope of the Phillips curve should be higher in these situations. Quantitative models could try to give quantitative numbers for this, but that might depend a lot on the particular details and assumptions.

Thank you. Are there any questions from the audience? Yes, there is a raised hand.
I think there is somebody with a microphone that could come over, yes. Maybe introduce yourself quickly.

Hi, I'm Alisdair McKay. I was hoping Peter could elaborate on the motivation for looking at the dynamics conditional on an identified shock, because, as Francesco pointed out, we can calculate the aggregate dynamics directly from the GHF. So I just want to hear the rationale for the value of the identified shock.

I think, just go ahead. Okay, yeah, so I think there is some difference between the philosophy of the paper and, for example, Francesco's approach. Here we wanted to establish empirical results with minimal assumptions on the economy. It is true that if we are ready to use the assumptions of the model, then we basically have everything we need. But if we are not ready to make the strong assumptions of a structural model, then it is useful to establish some evidence using an aggregate shock and just look at what happens there. And I think in some sense we are lucky that these results are more or less borne out by the theoretical models that we use, so it supports these models. You can say that we would not have needed to do this, but I think it is still useful for validating structural models, and then we can compare different types of models as well. We also hope that for an audience that is not that deep into structural models, these results are interesting and provide some intuition.

Okay, can I add one thing? Yes. So my take on that was, as I said after his figure two, I would stop there and would be happy to calculate aggregate impulse responses. Of course, that is under the assumption that this is the data generating process. But one thing I find interesting is that they are testing this model, right? We know, for instance, rational inattention models where people pay different attention to different types of shocks, aggregate versus idiosyncratic. So I like their experiment. They are trying to set up: let us see if they respond to x as they respond to epsilon. Though I would maybe do it only for positive gaps, so that there is no functional form confusion, and just focus on the straight line, running the regression of these probabilities on epsilon and x. But then, you know, maybe you could find that they do not respond to epsilon; maybe they are not paying attention to epsilon. So that is additional information, in my view.

Thank you. Anybody else? So there, there is a raised hand. Morten Ravn.

No, I was wondering whether, so you have all this micro data. It is a bit special, because it is from supermarkets: lots of goods, but also lots of things that we do not have in there. But should we really think that the price of milk is set the same way as the price of a car? I am just wondering about the extent to which we should think of one model of price setting or not. We know that in the core CPI they are different from commodities and so on. So to what extent should we aggregate at this level here? Is one model of price setting useful for monetary policy?

Maybe we take one more question then. I saw a raised hand before, but, okay, Luc, there is somebody with a mic. Yeah, maybe just to round it off. Francesco, you said we have learned a lot.
At least we know which models are incorrect, and you can tell us, at least from a theoretical perspective, where things will be going. So I wanted to push the chair's question a little bit more, since there are many on the call who are actually doing monetary policy and may be less familiar with these models that you write down. What would be the prescription for monetary policy from your theory for where we are today, in terms of not just large shocks, but also a high inflation environment?

Maybe I give the floor for the last three minutes back to Peter. Okay, so on Morten's point: I think it is very well taken. When people look at particular markets, they usually find price setting that is very different in different areas. In some sense, what we are trying to do is simplify reality and ask whether we can learn something useful. And in the paper we also look at PPI, producer price index, micro data, which covers not just supermarkets but basically the whole economy, and we find that the results we look at are consistent, so robust, there as well: there is this state dependence, but no selection. So we hope that it is useful, but if someone wants to dig deeper, there are great gains to be had from really understanding the details of price setting in different markets. As to what we can learn for policy, I think this is a very hard question. But one potential answer is that a lot of what the literature is after is really the effective slope of the Phillips curve, and what we get out of looking at micro data is that this slope is actually higher than previously assumed. So we need to design optimal policy based on this. What to do depends on a lot of factors, for example what kind of shock is hitting you, but looking at the literature, we already know quite a lot about what you should do based on the slope of the Phillips curve.

Yes, Francesco? Yeah, I agree with Peter, it is obviously a difficult question, as most policy questions are. But there is some high-level thing that we understand using this model. In normal times, we do not see that many price changes, and we think prices are not changing that often; we do not really know why that is, whether firms are not paying attention or something else. What we do know, if we think these state-dependent models are behind the process, is that once a big shock arrives, firms will not wait: you will have a cluster of price changes, as we are seeing; you will see more frequent price changes and a much larger fraction of price increases. And the reason I would worry as a policymaker, similarly to what happened when many countries joined the euro, is that consumers who are not educated in following these tendencies see lots of price increases, they see prices changing everywhere, and this thing can get out of hand. This is a very delicate time. It is very different behavior from the unfolding of one of these shocks in a time-dependent model, where you have to wait for the fairies. Firms do not wait for the fairies. So now we are in the middle of the storm, and we need to reassure the markets that we know what is going on and that we are taking measures to avoid second-round effects, et cetera.
Thank you very much. I can completely concur that what we also see at the moment is a much quicker pass-through of these shocks to the consumer price level, and I think the work that you, Peter, and others are doing in the PRISMA network is very useful for us to gauge such effects and be more aware for the future. So thank you very much, and with this you can leave the stage, and I would like to ask the next speakers to come up. We have the next two people: first Antonella Trigari, and we also have the discussant Fabien Postel-Vinay, so please join me here on stage. Thank you very much. We will proceed as we did in the first session: you can go up to the lectern and give your presentation, you have 30 minutes for this, and then we will have the discussion by Fabien. So the floor is all yours, Antonella, for your paper, joint with Mark Gertler and Christopher Huckfeldt, on temporary layoffs, loss of recall and cyclical unemployment dynamics. Thank you so much. So thank you so much for having me in the program. It is really a great pleasure, though not so great a pleasure to be second after this very, very interesting paper. This is a different topic: a talk about temporary layoffs, loss of recall, which I am going to define, and cyclical unemployment dynamics, and, as has been said, it is joint work with Mark Gertler and Chris Huckfeldt, who just joined the Federal Reserve Board. In this paper, what we basically do is both measure and model temporary layoffs and the role they play in cyclical unemployment dynamics. We are motivated, as you can imagine, by the unprecedented increase in temporary layoffs in the recent recessionary episode. It is going to be a paper about the US; things are very different in the euro area. Essentially, about 15% of employed workers were moved to temporary layoff at the onset of the COVID crisis. At the same time, because this recent recession has very unusual features, we also want to look at earlier evidence. We want a framework that can capture the recent data but also historical episodes, so that it can potentially be used to study future episodes. So we first document the contribution of temporary layoffs to unemployment dynamics starting in 1978. Let me introduce a few definitions. First of all, both ex ante and ex post, layoffs can be temporary or permanent. Many workers have some expectation, and that is how temporary layoffs are measured, that they will go back to their previous job, and naturally, many workers with such an expectation actually do go back. Because workers on temporary layoff have a high recall rate, that is, high re-employment probabilities relative to workers in permanent unemployment, the flow from employment to temporary layoff is typically perceived as a flow that moderates cyclical unemployment dynamics. That is the traditional view. What we are going to emphasize instead is a different factor, related to the fact that workers who exit employment to temporary layoff may lose attachment, lose their connection to the previous employer, over time: a phenomenon that we are going to call loss of recall, loss of the recall option.
And in this case, what happens is that layoffs that were ex ante expected to be temporary become ex post permanent, and in particular they inherit the lower re-employment probability of permanently laid-off workers. We are going to argue that this second factor instead plays a destabilizing role for cyclical unemployment dynamics. So what we do: we develop a model of unemployment fluctuations that distinguishes between endogenous layoffs, both temporary and permanent, as well as endogenous flows between the three labor market states in the model: employment and two different types of unemployment. We adopt the terminology introduced recently by Hall and Kudlyak: jobless unemployment and temporary layoff unemployment. Jobless unemployed workers have no expectation of going back to their previous job, and so they search for a new job. Temporary layoff unemployed workers have some expectation of being recalled, and so they wait for recall. A particularly important flow is going to be the one between these two states, from temporary layoff unemployment to jobless unemployment. We then calibrate the model to pre-pandemic data and show that it does well in replicating some business cycle facts. Finally, we turn to the COVID-19 labor market. Let me be quick on these slides. We do this because we want to add to the traditional view that the temporary layoff component of total unemployment plays a stabilizing role because of the high recall rates. We are not the first to think about this phenomenon: Katz and Meyer in the 1990s discussed the possibility of losing the connection with the previous employer over time once you are on temporary layoff. But we are the first to quantify this flow, and to emphasize that workers who lose the recall option do so in a cyclical manner, at higher rates during recessions. We are also going to measure a new stock, the stock of workers in jobless unemployment from temporary layoff, meaning workers who last exited employment via a temporary layoff and then over time lost their recall option. And let me note that one theme is going to be that these flows, recall and loss of recall, are endogenous and thus policy dependent. We then want to understand what happened in the COVID-19 pandemic, where about 15% of employed workers were moved to temporary layoff, which is a distinguishing feature of this recession. A second distinguishing feature is that there was a very important policy response to these labor market dynamics: a fiscal package, the Paycheck Protection Program, larger than the 2009 Recovery Act, introduced to deliver forgivable loans to firms to preserve jobs and encourage job retention. So we are going to study what role PPP played in shaping the employment recovery, and we find that the program was indeed successful. We find large effects, and in particular we are going to show that these effects acted through preventing loss of recall. Oh, this was Peter's, okay. We put this here.
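To fix ideas before the empirics, here is a minimal flow-accounting sketch of the three states and the loss-of-recall flow just described. The notation is mine, not the paper's, and it abstracts from the inactivity flows that the empirical part includes:

$$
\begin{aligned}
TL_t &= (1 - \rho_t - \lambda_t)\,TL_{t-1} + \pi^{TL}_t\,E_{t-1},\\
JL_t &= (1 - f_t)\,JL_{t-1} + \lambda_t\,TL_{t-1} + \pi^{JL}_t\,E_{t-1},\\
JL^{TL}_t &= (1 - f_t)\,JL^{TL}_{t-1} + \lambda_t\,TL_{t-1},
\end{aligned}
$$

where $\rho_t$ is the recall rate, $\lambda_t$ the loss-of-recall rate, $f_t$ the job-finding rate of the jobless, and $\pi^{TL}_t$, $\pi^{JL}_t$ the two layoff rates out of employment $E_{t-1}$. The third line, which cumulates the $\lambda_t TL_{t-1}$ inflow, is one plausible way to construct the "jobless unemployment from temporary layoff" stock, the indirect effect discussed below.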
So I am going to focus mainly on the empirics of temporary layoff unemployment, and very briefly give you the flavor of the model. The model is very rich; if I went through its details, I would spend quite some time. Then I will briefly tell you about the calibration of the model and move to the application. In this part, we argue that temporary layoffs are indeed important for shaping the dynamics of unemployment over the cycle. We start by looking at the stocks, and then I will look at the flows in the next slide. You see a table with first and second moments for total unemployment, jobless unemployment and temporary layoff unemployment. Both jobless and temporary layoff unemployment are strongly countercyclical, and they are also highly volatile. At the same time, in red, you see that temporary layoff unemployment is only one eighth of total unemployment. So you might conclude from this that temporary layoff unemployment is not going to play a big role for unemployment dynamics. But the fact that the stock is low does not mean, of course, that the flows are not large. So here we are looking at the flows between the four states; we also include inactivity, in pre-pandemic data. First of all, in blue, you see that temporary layoffs actually account for one third of layoffs. So it has to be important to think about temporary layoffs along this dimension. At the same time, if you focus on the second line, you see that temporary layoff unemployment is a very transient state, for two reasons. First, these workers have a high exit rate to employment: high re-employment probabilities, as you can see in red. But they also have a high chance of exiting that state to permanent jobless unemployment: in green, you see the probability of losing the recall option, of moving from TL to JL. This is actually an important table, because it emphasizes, first of all, that the re-employment probability from temporary layoff unemployment is almost double the re-employment probability of workers in jobless unemployment. In addition, we also look at the transition rates of workers in jobless unemployment conditional on having been in temporary layoff unemployment in the previous period, that is, workers who just lost the recall option. Why do we do that? Because we want to provide further support for the idea that loss of recall, which is measured from survey responses, is a meaningful phenomenon. If it is, then we should expect the transition probabilities of workers in JL, conditional on being in TL yesterday, to be very close to the unconditional probabilities of workers in JL. And that is what we find: they are almost indistinguishable, and different from the probabilities of workers in temporary layoff unemployment. This also brings additional support to the idea that these two states, as measured in the CPS, are distinct labor market states. Now, the last piece of evidence I want to show you is about cyclicality. Here you see the cyclical properties of the five flows that we consider, and the main message is that employment-to-temporary-layoff flows are particularly important during recessions. First of all, more employed workers are put on temporary layoff: countercyclical temporary layoffs.
In recessions, fewer workers on temporary layoff are recalled to employment: procyclical recall probabilities. More workers move from TL to JL: countercyclical loss of recall. This suggests what we are going to call a direct and an indirect effect of temporary layoffs, and in particular we are going to argue that there are two ways in which temporary layoffs contribute to the increase in unemployment over a recession. What is the direct effect? It is simply measured by the stock of workers in temporary layoff unemployment, which increases because of a higher temporary layoff probability and a lower recall probability. But there is also an indirect effect, which cannot be measured by simply looking at the stocks. We also need the flows, because the stocks do not capture those workers who initially exit to temporary layoff unemployment and then lose their recall option over time. The stock of workers in jobless unemployment coming from temporary layoff unemployment is an indirect effect, which in turn is associated with a destabilizing effect. So we develop a method to estimate this indirect effect: we estimate a time series of workers in jobless unemployment who came from temporary layoff unemployment, and we look at the properties of that time series. Let me now show you some data. This is a plot of temporary layoff unemployment in pre-pandemic data. As you can see, and as we already showed with numbers, it is extremely countercyclical, but you also see some diminishing cyclicality, especially after the 1980s recessions. This is actually one of the reasons why, while the earlier literature focused quite a lot on temporary layoff unemployment, in recent years until COVID there has been less attention to this labor market state. However, if you add to this stock the new stock that we compute, jobless unemployment coming from temporary layoff unemployment, then you see a different picture. In particular, we are now measuring both the direct and the indirect contribution of temporary layoff unemployment to the increase of unemployment over recessions. And we see that, especially in the later years, this is important: during the Great Recession, the contribution of temporary layoff unemployment to overall unemployment almost doubled because of the indirect effect. Of course, once you put COVID on the chart you don't see anything else; but what is important to see is that during COVID things are different, because most of the increase in total unemployment is accounted for by the direct effect. And we are going to argue that the fact that the indirect effect was so small was largely due to policy, in particular the PPP fiscal plan. Okay, so let me now show you the model. This is a rich model, because we have three states and we model endogenous flows between them. The starting point is an RBC model with search and matching frictions: a general equilibrium model with perfect consumption insurance and wage rigidity via staggered Nash bargaining, as in previous work I have with Mark Gertler. The key variations to that setup are endogenous separation into temporary layoff unemployment as well as endogenous separation into jobless unemployment.
So we move away from the typical exogenous separation assumption, and we have separation both into temporary and into permanent unemployment. We have recall hiring from temporary layoff unemployment and standard new hiring from jobless unemployment. There is also a twist that we don't have time to talk about: we allow for temporary pay cuts to limit the extent of inefficient separations due to wage rigidity. Okay, some details and then I will stop with the model. Workers who are unemployed, if they are in the jobless unemployment state, search for work in a very standard DMP-style matching market; if they are in temporary layoff unemployment, they wait for recall or loss of recall. To model temporary and permanent separations, we assume overhead costs. In particular, idiosyncratic firm-specific overhead costs lead the firm to shut down, and so to separation of existing workers into jobless unemployment. At the same time, the firm will have some stock of workers on temporary layoff, and those workers also endogenously lose their recall option because the firm shuts down. Then we have worker-specific overhead costs, which lead to separation into temporary layoff: firms put the fraction of their workers above a certain overhead-cost threshold on temporary layoff. Surviving firms, after separations, run their business: they rent capital, hire from jobless unemployment and recall from temporary layoff. The way we model hiring and recall is through adjustment costs, costs of adjusting the labor force. They are symmetric but with different parameters, which we estimate in the data to match the different elasticities of recalls and new hires to the firm's job value. And I already mentioned Nash bargaining. Okay, so we calibrate the model on pre-pandemic data. We match standard labor market stocks and flows, but of course we also match moments regarding temporary layoffs: the stocks of temporary layoff and the flows in and out of temporary layoff. We use both long-run moments and business cycle features, and there is just one small thing I want to say here. We target some labor market volatilities, the volatility of TL unemployment and of JL unemployment; we do not ask the model to match the others, but we tie our hands in a number of ways, and the model does pretty well in matching those data. So let me now conclude with the application to the COVID-19 recession. First of all, let me clarify that we do not have an epidemiological model, so we do not model the endogenous spread of the virus. We capture the economic effects of COVID by introducing two structural shocks. One is lockdown shocks; this is not new relative to the literature, because other papers have also been modeling COVID through lockdown shocks. These are MIT shocks that move workers from employment to temporary layoff unemployment, and there is going to be a distinction between workers who are on temporary layoff through lockdowns and workers who are on temporary layoff through endogenous temporary layoffs. Then we model the consequences of social distancing through shocks to effective TFP, which we interpret as a reduction in the utilization of capital and labor.
And we add two parameters that are specific to the workers in lockdown, which we estimate together with the shocks. We allow for the possibility that workers who are on temporary layoff due to lockdown have a different probability of losing their recall option and moving to jobless unemployment, and indeed we estimate that workers on temporary layoff due to lockdown have a slightly higher degree of attachment than those on standard temporary layoff. We also allow for the possibility that recalling those workers during COVID implies lower, or at least different, adjustment costs for firms. We also introduce PPP, following Kaplan, Moll and Violante: PPP is modeled as a direct factor payment. We calibrate the size of the program, 12.5% of GDP over the first two rounds, of which about 5% of GDP in the second round. We assume that 85% of that amount was actually forgiven, that the program is unexpected, that the funds are used when they are allocated, and that after the announcement the availability of the funds and how they evolve is known. Then we estimate the series for the shocks and the two additional parameters to match the evolution of the stocks and the flows during COVID. And we do well. So finally, I have a couple of slides and then I am done. We study the role of policy: we keep the decision rules, parameters and shocks, but we remove PPP. And what we find is that PPP was indeed successful in what it was intended to do: prevent the destruction of matches and encourage job retention. We find monthly employment gains of more than two percentage points in the first six months, with those gains fading out over time but still at about 1% after more than a year. In particular, the mechanism is the following: PPP stimulated recalls, the cumulative number of recalls over the first six months actually doubled because of PPP, and because of the higher recalls this also induced a reduction in loss of recall. To make this point more powerful, more visually, we plot here three series; it is a bit of an unusual way to show a counterfactual. In blue, we have temporary layoff unemployment in the data. In red, we have temporary layoff unemployment plus jobless unemployment from temporary layoff unemployment in the data: that is the direct and the indirect effect of temporary layoff unemployment in the data during the COVID recession. And in the third series, the difference from the blue line is the increase in jobless unemployment from temporary layoff unemployment in the counterfactual absent PPP. As you can see, there is a significant effect in terms of preventing workers from moving from temporary layoff unemployment to jobless unemployment. So let me conclude by mentioning directions for future work. In the paper, the cost of loss of recall is that moving to jobless unemployment means inheriting lower re-employment rates. But there is a different type of cost that we do not have in the model, which is that loss of recall also dissipates match-specific capital. So it would be interesting to consider heterogeneous match quality.
And once you start thinking about match-specific capital, there is also the idea, put forward by Steve Davis and coauthors, that these programs might actually have a cost: they preserve matches, they preserve employment, but they might have hindered reallocation. This could be particularly true for PPP, because PPP was targeting smaller firms, and so it might have hindered efficient reallocation. So I think I am on time. I wanted to show another picture, but maybe later, if it comes up in the discussion; can I go back to the slides in that case? Okay, good. So thanks. Thank you very much, Antonella, thanks a lot. I think I forgot to mention that you come from Bocconi University, so at least I should do it now. And with this, I would like to give the floor right away to Fabien Postel-Vinay from University College London and the Institute for Fiscal Studies for his discussion of the paper. Thank you very much. Thank you very much. So thank you, Antonella, for a crystal clear presentation, and thanks to the organizers for giving me the opportunity to read this paper and educate myself about temporary layoffs more thoroughly than I probably would have done otherwise; I am glad I did. Let me start with a short history of the thinking about temporary layoffs in, I suppose, macro labor. For a long time, as Antonella has hinted, the conventional view was that temporary layoffs are not important because they are a very small fraction of the total stock of unemployment. More recently, a literature starting with Hall and Kudlyak, Fujita and Moscarini, et cetera, has emphasized that flows between temporary layoff and employment are important and cyclical, and that therefore temporary layoffs are important for understanding unemployment dynamics. I should mention as well that, obviously, even the stock of workers on temporary layoff, as we have seen in a picture Antonella showed, became of first-order importance in April 2020 with the pandemic recession, and that has been emphasized in a very recent literature. Now, this paper highlights the role of a new flow, or another flow, which is large and cyclical: the flow between temporary layoff and what the authors call jobless unemployment, what others have called unemployment without recall. I cannot remember what the other terms were, but it is what standard labor economists would think of as just unemployment. So that is my brief history. I think the point on which the authors motivate their paper is extremely fair: in the data, these flows are large. The authors put special emphasis on the flow from temporary layoff to jobless unemployment, which they term loss of recall. If you look at the numbers, this is the table that Antonella showed at the beginning of her presentation: flow rates between the four labor market states emphasized in the paper. The flow from temporary layoff to employment is 43.5%, and from jobless unemployment to employment, on average over the 35-year period Antonella is looking at, 24.4%, so a little over half the temporary-layoff-to-employment rate. So workers on temporary layoff have a much higher probability of returning to employment than workers who are just unemployed. What this paper emphasizes is the number highlighted in red here: the flow from temporary layoff into jobless unemployment is quite substantial.
It is almost 20%. Now, one number that is not much emphasized in the paper, and which I would like to emphasize now, is the 2.2% flow from jobless unemployment to temporary layoff. If you crank out the numbers very quickly: the stock of jobless unemployment is a little less than seven times larger than the stock of workers on temporary layoff. So if you take 2.2% of a stock that is seven times larger, you get, in levels, a number of workers moving from jobless unemployment to temporary layoff that is not very far from, or at least of a similar order of magnitude to, the number moving from temporary layoff to jobless unemployment. And that raises the question of what it is that we are measuring here. What does it mean to go from jobless unemployment back to temporary layoff? Temporary layoff is a situation where you expect to be recalled to your old employer, so what does it mean to move back into it after a certain duration in unemployment? More generally, the question I want to ask is: what exactly does this temporary layoff label, constructed by the authors from CPS data, measure? The way the authors interpret a move from temporary layoff to jobless unemployment is as a loss of recall. I am going to give a quote from the paper: if a transition from temporary layoff to jobless unemployment represents a true loss of recall (and I have just lost my timer; it's back), then we would expect the re-employment probability of such workers to be similar to the unconditional re-employment probability of workers in jobless unemployment; otherwise, we would expect the re-employment probability of workers moving from temporary layoff to jobless unemployment to remain high. Now, that might well be true in a world with constant hazards, or in a completely unconditional world. But if you introduce, for any reason, duration dependence, for example in the process of returning to employment from unemployment, and if the temporary layoff label is correlated in some way with short durations, then we would expect the exact same patterns as those highlighted by the authors. So I have this half-jokey title here: my silly model of temporary layoffs. Let me give you what I cannot characterize otherwise than as a silly model of temporary layoffs. A labor market in steady state; I have nothing to say about cyclicality here. Workers can be either employed or unemployed, just as in the authors' theory part; I condition out inactivity for simplicity. When employed, workers in my simplified world all face the same IID job loss risk. When unemployed, each worker, which I label i, has an individual-specific job finding probability f_i. Some workers have a job finding probability of one per period, so they find a job with certainty at the end of one month, and a fraction alpha of workers have a lower job finding probability, some number between zero and one. So, not a very sophisticated model. And then, on top of that, I affix a label to each worker, a type T_i, either TL or JL. That type changes stochastically over time following some stochastic process, which in my simulation is not exactly first-order Markov, but essentially that. It is completely independent of their job finding rate.
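A compact simulation of this silly model makes the point concrete. The parameter values below are my own illustrative choices, not Fabien's, picked only so that the steady-state rates land near the table's; the mechanics follow the description above exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, BURN = 200_000, 240, 60
alpha, f_low = 0.6, 0.24   # share of "slow" job finders and their monthly finding prob.
sep = 0.03                 # IID monthly job-loss probability, identical for everyone
p_flip = 0.4               # monthly prob. that the meaningless TL tag flips to JL

f = np.where(rng.random(N) < alpha, f_low, 1.0)  # worker-specific job-finding prob f_i
emp = np.ones(N, dtype=bool)
tl = np.zeros(N, dtype=bool)        # TL tag, only meaningful while unemployed
from_tl = np.zeros(N, dtype=bool)   # unemployed workers whose tag used to be TL

num = {k: 0 for k in ["TL", "TL_E", "TL_JL", "JL", "JL_E", "JLfTL", "JLfTL_E"]}
for t in range(T):
    u = ~emp
    find = u & (rng.random(N) < f)                    # unemployed who find a job
    lose = emp & (rng.random(N) < sep)                # employed who lose their job
    flip = u & ~find & tl & (rng.random(N) < p_flip)  # TL tag flips to JL

    if t >= BURN:  # tally monthly transitions once near the steady state
        tl_u, jl_u = u & tl, u & ~tl
        num["TL"] += tl_u.sum();   num["TL_E"] += (tl_u & find).sum()
        num["TL_JL"] += flip.sum()
        num["JL"] += jl_u.sum();   num["JL_E"] += (jl_u & find).sum()
        jft = jl_u & from_tl
        num["JLfTL"] += jft.sum(); num["JLfTL_E"] += (jft & find).sum()

    from_tl = (from_tl & u & ~find) | flip  # tag history persists while unemployed
    tl = (tl & u & ~find & ~flip) | lose    # job losers enter with the TL tag
    emp = (emp & ~lose) | find

for a, b, label in [("TL_E", "TL", "TL -> E"), ("JL_E", "JL", "JL -> E"),
                    ("TL_JL", "TL", "TL -> JL"),
                    ("JLfTL_E", "JLfTL", "JL -> E, came from TL")]:
    print(f"{label:22s}: {num[a] / num[b]:.3f}")
```

With these values, TL to E comes out around 0.45, JL to E around 0.24, and TL to JL around 0.22, while the conditional JL-to-E rate equals the unconditional one, since every worker still jobless after the tag flips is a slow finder: the selection mechanism described next.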
So in that sense, my TL or JL label in this model is completely meaningless; it is just a label. I then try to fit my model to the authors' data, to the transition matrices that the authors show in the paper. And here is the result. It is not perfect, but it is not bad for such a silly model. In particular, it is able to capture the fact that the probability of moving from temporary layoff to employment is apparently much higher than that of moving from jobless unemployment to employment, and also the fact that, conditional on having previously been on temporary layoff, people in jobless unemployment have a much lower probability of returning to a job. So the key numbers that the authors emphasize are actually replicated by a model in which temporary layoff versus jobless unemployment is a meaningless label. The mechanism, of course, is that it takes at least one month for workers to make a temporary-layoff-to-jobless-unemployment transition, because the label gets changed with some probability every month. Thus all of the high-job-finding-rate workers are gone by the time the first label change occurs, and so workers in the JL-previously-TL sample, who used to be in the temporary layoff state and are now in jobless unemployment, are negatively selected, in the sense that they are all the people with low job finding rates. Now, again, let me emphasize: I do not believe this is the true model of what is going on. This is a silly model; I cannot emphasize that enough. But it does replicate at least this aspect of the data, and I was wondering what the authors could say about how much of that is going on in the data. I think the empirical part of the paper would benefit from a little more in-depth conditioning on observables, and maybe duration analysis, in the diagnosis of what it is that is measured by temporary layoffs. So those were my points about the empirical part of the paper. Let me very briefly move to the model, the G-H-T model, as in Gertler, Huckfeldt, Trigari. This slide is a festival of acronyms; I am sorry about that. The G-H-T model builds upon the Gertler and Trigari model, a classic paper published 13 years ago in the JPE. Contrary to my silly model, it is a very sophisticated DSGE model with matching frictions a la Diamond-Mortensen-Pissarides, featuring a large number of moving parts: worker-level transitory idiosyncratic cost shocks causing temporary layoffs; job- or firm-level idiosyncratic cost shocks causing permanent firm closures and job destruction; and a whole array of other things such as real wage inertia, capital, capacity utilization, et cetera. A very sophisticated machinery. On the other hand, the model has only one aggregate shock, to TFP, at least outside of COVID times. And it does a very good job of mimicking roughly 35 years of aggregate data on labor market dynamics. So that is quite impressive. Now, I want to make two points about the model, or one and a half, I guess. The authors describe temporary layoffs, as Antonella emphasized in her conclusion, as destabilizing.
They say: we place particular emphasis on the following destabilizing effect of temporary layoffs, namely that a sizable fraction of workers who initially exit employment for temporary layoffs are not recalled. Well, that is true. But in the model, is it a good thing or a bad thing? In the model, temporary layoffs are a good thing, right? The possibility of recall is a way for firms to escape search costs. You would not want to get rid of temporary layoffs in that model. In fact, just as an aside, because I come from the UK: this same model could probably be used as a representation of UK-style zero-hours contracts, some kind of flexible work scheduling, and it would say that those are also a good thing, which is something that is up for debate. But in this model, where workers are essentially risk-neutral because there is insurance within the family, and where there are search costs that can be circumvented using temporary layoffs, temporary layoffs are a good thing. So the question now is what policy conclusions can be drawn. We are in a very sophisticated model with rigidities, matching frictions, externalities, et cetera. So is it the case that, in this model, private job destruction and job creation decisions, and separations into temporary layoffs, are suboptimal in some way? Do they differ from the planner's? Is there some form of dynamic inefficiency? Those are questions that I think would be very interesting to address with this model. And finally, in my last minute and a half, the pandemic and the PPP policy. The model has to be given a little bit of help with extra shocks, but it does an impressive job of capturing aggregate labor market dynamics during the pandemic, even though it was calibrated on pre-pandemic data. One of the authors' messages in the paper is that PPP was successful in fulfilling its intended purpose of encouraging firms to rehire workers on temporary layoff. This is just a small point I want to make here, and it is kind of unfair to make it now, because it could apply to a lot of the literature around PPP: the way PPP is modeled in the paper looks very much like a free lunch given to the economy. It is a bit more sophisticated than that, but it is essentially a positive productivity shock that partially offsets the negative COVID shock. As such, it is not entirely surprising that it encourages firms to offset some of the negative effects of the COVID shock. So my final point is that it would probably be useful to go a little further and build in anticipations of having to pay for PPP later, and see how that changes the results. And with my five seconds to go, I am just going to close here. Thank you very much. Thanks. Thank you very much, also for sticking to the time limit so perfectly. Maybe I first give the floor quickly to you, Antonella; maybe you want to react directly. Yeah, sure, sure. Thanks, Fabien, for this insightful discussion. Of course, all the points you raised are well taken. Let me start with a very brief comment on this measure of temporary layoff versus jobless unemployment. I would like to emphasize that, in using this measure, we are within a very large literature that goes back to Katz and Meyer in the 1990s and that has been revamped recently because of COVID.
And of course there is also an alternative literature that thinks about recalls, associated in particular with the seminal paper by Fujita and Moscarini, which thinks about recalls and unemployment in a slightly different way, without really distinguishing between these two states in the CPS data as we do. That important paper has actually pushed all of us who work with this definition to justify it better, and we entertained this a bit later on. What we did was to produce this table with these transition rates, and let me say that with this transition matrix we can actually replicate some of the duration dependence that has been documented by others. But there is also another type of evidence pointing to the fact that workers in temporary layoff unemployment do behave differently: they search much less than workers in jobless unemployment. So even if it were only for that fact, this would be a meaningful distinction in workers' behavior and in their actual perception of whether they are going to be recalled. But yes, it is a well-taken point that we need to deepen this aspect a bit more, as has been done by others. Now, regarding the 2.2% flow of people moving from JL to TL: we do not have that in the model, because it is much smaller, at least in terms of probabilities. You can think of part of it as measurement error, of course, and part of it as workers who initially did not think they had an option to be recalled and who realize at a later stage that they actually do. So I do not think it is a completely absurd phenomenon. Regarding the policy prescription, we did not entertain this, for a number of reasons, including that we would be missing the very important aspects I mentioned at the end of my presentation: to think about the costs of recalls we would need heterogeneous match quality, and we would need to think about reallocation and preserving match capital. And finally, yes, to your last small comment: it is true that it is maybe not so surprising that PPP achieved what it did. But first of all, the point of the paper is to quantify it; it is a quantitative paper, so what we want to do is quantify it. We also have a structural model, so we do not only quantify it but also emphasize the mechanism, and in particular the importance of mitigating this additional flow that we document. But yes, we could somehow take into account the fact that this is not a free lunch and there will be some... thank you. So the floor is open to questions from the audience. If there are none, then I of course have a question; but if there is one, please go ahead. My question is the following: this is again a paper based on US data, and when you look at the success story that we had in Europe during COVID with these short-time work schemes, they were a bit different in their setup; it was not so much loans to the companies but rather direct transfers. Would you think that it matters for these kinds of mechanisms whether you do it as a loan or as a direct transfer, or would that not change the results?
Okay, so actually the picture I wanted to show was exactly about this difference, because I was expecting people to think about the euro area versus the US. It is the very last picture; I added it today, and it is not part of the paper, but I thought it was interesting to make this point here. In the first plot you see the unemployment rate, on different scales just to make the dynamics comparable, in the United States and in the euro area. When you look at the COVID recession, you see this enormous jump in unemployment in the US and, in relative terms, a very small increase in the unemployment rate in the euro area. Some time ago I was very surprised about that, and then I discovered that temporary layoffs do exist in the euro area too. They have a different definition, but they have been used; the job retention schemes you mention are just one example. They are simply counted differently: temporary layoff workers in the US are counted among the unemployed, while temporary layoff workers in the euro area are counted among the employed. So I constructed two counterfactuals: in one case I take temporary layoff workers out of US unemployment, and in the other I add temporary layoff workers in the euro area to the unemployed. One is the middle plot and the other is the last one, and you see that the dynamics are strikingly similar when you do that. Now, of course, the more substantial question is whether these objects are the same and whether they work differently or not. Even the definitions are different. In the United States, workers are classified as being on temporary layoff if they have an indication of a date in the future at which they will go back to their previous employer, or if they have some expectation that within the next six months they will go back. In the euro area, the definition is stricter: you are on temporary layoff if you have an expectation to go back to work within the next three months and, and not or, you are receiving at least 50% of your wage. So that is a definition with much stronger attachment than in the US. Presumably it would be interesting to try to measure loss of recall, which must be present also in Europe, and the extent of it, but I expect it to be different. Now, this is a bit orthogonal to what you asked, which is more about, I guess, loans versus subsidies; I have not thought about that. No, but it goes directly in this direction: you still show that with a different mechanism, yeah, it does not seem to matter too much for the results of your paper, apparently. So that, I think, is an interesting result. With this, I would like to give the audience the chance to ask questions. Let me check whether there is anything going on online; apparently not. Luc has a question. I think we can. So, Antonella, I understand you had to rush a bit over your model given the time constraints, but an important parameter is this firm-specific overhead cost that you introduce. I was just wondering, for the benefit of the audience, if you could give an example of what it would capture in the real world.
Yeah, so it would capture costs that are not directly associated with the use of a particular input. Think of the costs of running the business: paying rent, paying utilities, administrative costs. That is the idea. And actually PPP was designed with this in mind: firms could use PPP both to pay wages and to pay this kind of cost, at least part of which was still in place even when they were not operating during COVID. Yes, Francesco Lippi, he has the mic. Thank you. Just a clarification about a point that was raised by Fabien: who are these agents moving from jobless unemployment to recall unemployment? Is this some statistical misclassification margin, or is there really a type that can move back? Because if I lose a job, I lose the job, and then I am not attached to anything. So what does it mean to go back to temporary layoff unemployment? Yeah, so we have not dug into that flow, because it is smaller, at least in terms of the probability of making that transition, and we do not have that flow in the model. So maybe we could think a bit more about that. The way we thought about it was either measurement error, since this is the CPS, a survey, and we do have measurement error, or it could capture an actual phenomenon: workers who initially think they have lost their job permanently, and then some new information comes out and they realize they actually have a recall option standing, or maybe there is some effort by the firm in the very short term to reconnect. Yeah, this kind of thing. So thank you very much to both of you, Antonella and Fabien. With this we conclude the first session of this conference, and we now have the chance to go for coffee here in the hall. The break is 15 minutes, and then I hope to see you all back, either online or here in the room; I am looking forward to the next session. So thanks again. Thank you very much. Yeah, I am just checking: where is my speaker? Welcome back to the second session of this conference. It is my pleasure to introduce to you Yiming Ma from Columbia Business School. She will be presenting her paper, which will be discussed by my dear colleague on my right-hand side. And you have 30 minutes. Hi, good afternoon, everyone. Thank you so much for being here. It is a great pleasure to be sharing this work with you. This is "The Reserve Supply Channel of Unconventional Monetary Policy"; it is joint work with Will Diamond from Wharton and Zhao Jiang at Kellogg. We started out by observing that there has been a very large and continued expansion of central bank reserves around the world. In the US, for example, reserves outstanding before the crisis were very low, at just 50 billion in 2006. They jumped up once during the financial crisis, to 2.8 trillion by 2015, and then another time, following the COVID interventions, to 4.1 trillion in 2021. The US is not alone: for the ECB as well, the balance sheet has grown substantially, reaching above 8 trillion in 2021. One of the key contributors to this rise in reserves is quantitative easing, the APP program in the ECB's case. What this involves is the purchase of securities, such as government bonds, et cetera. Importantly, though, these purchases are financed by reserves: safe and liquid assets that can only be held by the banking sector.
Now, this is important to note because the banking sector after the crisis has also been constrained by increasing amounts of regulation. For example, leverage ratios and supplementary leverage ratios essentially impose a cost on how large banks' balance sheets can get. Against this backdrop, we wanted to understand: what is the effect of this increase in reserve supply, now concentrated in the banking sector? In particular, how does it affect the functioning of the core activities that banks perform, including lending to the real economy? Could there be side effects of having such a large supply? We think this question is important for thinking about how central bank policy should be designed going forward, and in particular how large the optimal central bank balance sheet should be. Just to take a step back: banking theory has given different potential answers to this question. There are seminal theories about how banks are in the business of maturity transformation, with assets of longer maturity than their liabilities, so having more liquid assets such as reserves could actually reduce some of the risk of this maturity mismatch and help banks lend more by providing a liquid asset buffer. For this first set of theories, a larger supply of a potentially scarce liquid asset could improve lending to the real economy. More recently, however, a second set of theories has pointed to balance sheet costs that constrain banks' total size: if you have more of one particular asset, such as reserves, you could be crowding out other types of assets, such as loans to the real economy. So ex ante we are not really sure which of these channels dominates, which is why we want to look at the data. One very simple way to look at the data is over time. Here, in red, I have the amount of reserves held by US depository institutions, and you can see very clearly an almost flat line leading up to the financial crisis, which then distinctly jumps up during the financial crisis, and jumps again during COVID. In blue, I am plotting the ratio of illiquid assets on bank balance sheets, where illiquid assets are all assets excluding cash, reserves, fed funds, repos, Treasuries and agency securities. So think of the blue line as essentially representing the lending that the banking sector is extending to the real economy. And what you see is almost a mirror image: when the red line jumps up, there is an almost simultaneous drop in the blue line. The simple interpretation would be that, as reserves go up, the banking sector lends less as a proportion of its asset size. But as we all know in this room, quantitative easing and the APP were not random policies; they were implemented because there was a recession. So perhaps what we see in the contraction of lending is not so much the result of the policy but the result of the recession in general, because, as we know, loan demand tends to drop in recessions. So although this is preliminary evidence that there may be a crowding-out effect of reserves on bank lending, it is definitely not conclusive, because of the endogeneity with the business cycle.
So instead of looking at the time series, we want a framework that we estimate using variation that does not come from QE, variation unrelated to underlying business cycle fluctuations and demand shocks. By estimating a framework with demand and supply, we can then run a counterfactual in which we move only reserve supply in the system and observe how much bank lending, deposit taking and mortgage lending change. That is the goal of this paper. And what we find, as the reserve supply channel of unconventional monetary policy, is that additional reserves, all else equal, crowd out bank lending. Over the period from 2008 to 2017, on average, each dollar of reserves injected crowded out 19 cents of corporate bank lending. Deposit and mortgage quantities are relatively less affected, because those markets are less elastic than the corporate loan market. The underlying mechanism, we think, is that reserves can only be held by banks and cannot be freely reallocated to non-bank intermediaries, while banks are subject to regulation that makes their balance sheet space costly, which can lead to the crowding out of other assets when there is an additional injection of reserves. Okay, I want to stress that this is not the only effect of QE. Many of you have probably looked at the other channels, for example the effects of the asset purchases themselves, which are definitely important and should be thought of as operating in parallel to what we are looking at here. What we wanted to highlight is that when looking at QE, it is not only the effect of the asset purchases that matters: the flip side of the coin is the reserves injected, which are the same amount as the assets purchased. So we really wanted to contribute to the literature by taking a closer look at the reserves that have been created, which have so far received much less attention. Our crowding-out result should be thought of as complementary to everything we know so far about the effects on asset markets through the purchases of different securities. In quantifying that effect, we also put some numbers on seminal theories in banking about how the different components of banks' activities relate to each other: theories that say deposit taking and lending can provide positive synergies, theories that say liquid assets can facilitate the holding of illiquid assets while reducing run risk, et cetera. The setting we use is a structural model, again because we think that relying on the time series alone is not an effective way of understanding causation. So with that, let me give you a one-slide overview of the model. I promise this is by far the densest slide, and after this it gets a lot easier. We have a bank m facing a residual demand curve: it sets its own lending rate, knowing that it is also affected by the rates that its competing banks set. The bank then maximizes its profits, which comprise the revenue it earns from lending to corporations and from mortgages, minus its deposit costs. And importantly, as you see from the large C at the end of the profit function, the bank has to pay a cost.
And this cost exceeds just the deposit interest rate that it has to pay its depositors. Think of it as everything else the bank has to incur: we talked about overhead costs earlier, but for a bank this could include regulatory costs, expected bankruptcy costs, et cetera. Importantly, we allow this cost to be a function of the different bank balance sheet components, including the quantities of loans, deposits, mortgages and securities. So we allow, for example, this cost to vary as the amount of reserves, the amount of liquid securities in the economy, changes. With that cost function, very standard: banks set their marginal return equal to their marginal cost for loans, mortgages and deposits, and for liquid securities, because it is a competitive market, the marginal cost is just equal to the price. In a figure, this looks a lot better. We have, very standard, in blue the marginal revenue from lending; banks set marginal revenue equal to marginal cost, at the intersection of the blue and red lines. So that is the equilibrium. Now we want to understand: suppose we had more reserves in the system, suppose we did more APP, more QE, what happens to the marginal cost of providing loans? And suppose, just for example, since we are going to ask the data, that it increases the marginal cost of lending: that is the upward shift of the red curve to the red dotted line. In that case, there would be an increase in the interest rate on loans, and tracing that down the demand curve in green, we would observe a contraction in the quantity of loans. This graphical illustration is what we want to take to the data and estimate with real quantities. Okay, demand and supply, very standard; let me start with the demand system. We use micro data for deposits, mortgages and loans, thinking of deposits and mortgages as county-by-time level markets and of loans as a state-level market. When we want to estimate the loan demand curve and the deposit demand curve, we need supply shocks to trace out the demand curve, and here again we want to use a shock that is unrelated to QE. We use the reallocation of bank funding after natural disasters. Natural disasters happen, think of hurricanes or flooding, and there is an increase in local loan demand; this has been shown in the reduced-form literature. This increase in loan demand at a given branch translates into a loan supply shock at other branches of the same bank, assuming that internal capital markets are efficient and that the bank is really reallocating funding. As long as loan demand at the branches unaffected by disasters is not correlated with loan demand at the affected branches, this constitutes a valid supply shock for tracing out the demand for deposits, mortgages and loans. So this is precisely how we construct the instrument: we calculate how much a given bank's branches are exposed in total to natural disaster losses, and then we look at how much lending, mortgages and deposits change at the unexposed branches. In a logit demand system, we then ask how differences in the deposit rate, in observable characteristics and in unobserved characteristics explain differences in the log market shares.
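For concreteness, a standard logit demand specification of the kind being described, in my own notation, which may differ from the paper's exact system, would be:

$$
\ln s_{jmt} - \ln s_{0mt} = \alpha\, r_{jmt} + \beta' X_{jmt} + \xi_{jmt},
$$

where $s_{jmt}$ is bank $j$'s market share in market $m$ at time $t$, $s_{0mt}$ is the outside option's share, $r_{jmt}$ is the deposit (or loan) rate, $X_{jmt}$ are observable characteristics and $\xi_{jmt}$ is unobserved demand. Since $r_{jmt}$ is correlated with $\xi_{jmt}$, the rate is instrumented with the disaster-driven funding reallocation just described.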
Standard two-stage least squares, with the instrument we just mentioned, gives us the following results. For a given bank, if it unilaterally changes its deposit rate by, for example, 10 basis points, its deposit volume increases by about 4.6%. Now, if you look at columns two and three, you see coefficients that are about ten times larger: the price elasticities in loan and mortgage markets are a lot larger, so the same rate change has a much larger volume response. That is perhaps not surprising if you think of firms and borrowers as much more rate-sensitive than your sleepy depositors. Now, this is about how one bank changes its rate. What if every bank in a given economy changes its deposit or loan rates? That should not only shift business between banks; it also changes the size of the entire market. To understand how the size of the outside option, the size of the market, changes, we aggregate our instrument to the market level, basically asking how much the branches of the banks in a given market, in a given county, are exposed to natural disaster losses, and we use that as an instrument to identify the aggregate effect of a demand shock. We find that if all banks in a given market change their rates by 10 basis points, total deposit volumes change by 1.3%, mortgages by 4.0%, and loans, by far the largest again, by 16.1%. For the same rate pass-through, loans and mortgages are, once more, the more responsive markets. So we know how much quantities change for a given rate pass-through; now we need to understand the pass-through itself. Once again, we have observations of marginal revenue from our demand estimation, which in equilibrium equals the realized marginal cost. But now we want to understand how this marginal cost changes when the central bank injects more reserves, when there is more QE. It is a complex function, the one in the first line, that is the functional form of the cost function we allow, and importantly, we want these interactions between different balance sheet quantities: we want to allow, for example, the cost of bank lending and the cost of extending mortgages to change when the amount of reserves changes. This interaction effect in the cost function makes our supply system a bit more difficult to estimate than a standard demand-and-supply system, in which a demand shock for deposits, for example, changes only the deposit costs. Here, if there is a deposit demand shock, it is not just deposit quantities and costs that change: loans and mortgages also adjust at the same time, because of the interrelated cost function we set up in the first place. So instead of a one-dimensional supply system, we have a multi-dimensional one, and we need multiple instruments to pin down all the cost function parameters. And again, we want shocks that are completely unrelated to QE, to get rid of the endogeneity problem. The first is that we essentially reuse the natural disasters, but instead of looking at how a bank transfers the initial demand shock, we use the initial demand shock itself, because now we need a demand shock to trace out the supply curve.
Okay, the second one is a very standard Bartik instrument, where we look at how the deposit growth of different counties changes over time; we think of banks as being exposed to that deposit growth, which is not something coming from the bank supply side. With these instruments, we can run the banks' marginal costs, as well as their quantities of loans, deposits and mortgages, against each of the instruments, and we show that using these regression coefficients we can jointly pin down the coefficients of the cost function. Remember, that's the big function I showed you over here, with the different parameters that help us understand how banks' marginal costs relate to their balance sheet composition. So what we find is the following. In panel A, we are running the different costs and volumes on the natural disaster shock, and you can see that the volumes are all going up, but for the costs, the mortgage and loan costs are higher, so when you have more demand, it's more costly to give out lending, and deposits are more valuable, so their cost is relatively lower, as seen by the negative sign in the first column. In the second panel, this is a Bartik deposit shock: you are exposed to a lot of deposit growth, and here you see that mortgages and loans are cheaper to lend out. So this all seems consistent with what we would think these shocks are doing. The second step, again, is to use these coefficients to pin down the cost function, and here is what we have; probably the most interesting result of the paper is in this matrix. This matrix shows you, for a given change in the quantity of deposits, mortgages, loans and securities, going row by row, how much the marginal cost of deposits, mortgages, loans and securities changes, going column by column. If you look at the diagonal of this matrix, you see that the coefficients are positive. This means that if you have more deposits, the additional unit of deposits is going to be more costly, and similarly for mortgages and loans, as we would expect. Now, what's really interesting is if you zoom in on the last row, that is, on how a unit of securities changes the marginal cost of the different banking activities. The middle two columns show positive coefficients, 0.317 and 0.264, which means that having more reserves, more liquid securities, on bank balance sheets increases rather than decreases the marginal cost of lending to firms and the marginal cost of extending mortgages. So it's not the case, at least in our sample period, that having more reserves makes lending cheaper; on the contrary, it makes it more expensive, right? And so it seems that, relative to the two sets of theories, it is the set of balance sheet cost theories that dominates the overall results. Now, quantitatively, what these coefficients mean is that for a 100 million dollar increase in reserves at the average bank branch, the marginal cost of mortgages increases by 31.7 basis points and the marginal cost of loans increases by 26.4 basis points. Now, you may think these are basis points, so probably not something we want to worry about, but I would argue that it depends, first, on how many of those millions of reserves we are injecting.
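To make the units concrete, here is a minimal sketch in Python of that arithmetic, assuming a quadratic cost function C(q) = c'q + 0.5 q'\Gamma q, so that the marginal cost of product i is MC_i = c_i + \sum_j \Gamma_{ij} q_j, linear in balance sheet quantities; the variable names and the functional form are our own simplification, and only the two coefficients are the ones quoted in the talk.

    # Sketch: cross effects of reserves on marginal costs, assuming marginal
    # costs are linear in balance sheet quantities (quadratic cost function).
    # Coefficients are the two quoted in the talk, read as basis points of
    # marginal cost per million dollars of securities/reserves at the
    # average branch; everything else here is illustrative.

    GAMMA_SEC_TO_MORTGAGE_BP_PER_M = 0.317
    GAMMA_SEC_TO_LOAN_BP_PER_M = 0.264

    def mc_change_bp(gamma_bp_per_million, injection_millions):
        """Change in marginal cost (bp) from a reserve injection ($m)."""
        return gamma_bp_per_million * injection_millions

    print(mc_change_bp(GAMMA_SEC_TO_MORTGAGE_BP_PER_M, 100.0))  # about 31.7 bp
    print(mc_change_bp(GAMMA_SEC_TO_LOAN_BP_PER_M, 100.0))      # about 26.4 bp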
It also depends on how the change in marginal costs translates into changes in interest rates, and how that change in interest rates eventually translates into changes in quantities. For that second part, remember, that's what we did in the demand system, where we estimated how a given change in interest rates translates into quantity changes depending on the elasticity of demand. So in the last section, what we do is put everything we have so far together. We have a demand system that tells us how quantities change given interest rate changes, and we have a cost function, a supply side, that tells us how, when reserve quantities change, the marginal costs of lending, of extending mortgages, of issuing deposits change. So we run a counterfactual analysis in which we inject into our system the amount of reserves that we observe in practice from quantitative easing over the years 2008 to 2017. We inject the reserves we observe, we let the system equilibrate, we see what the increase in the marginal cost of everything is, we see what the increase in markups is given banks' market power, and then we see how the eventual changes in interest rates translate into quantity changes. In the new equilibrium, what we find first is that if you have more reserves, you need to reward the holding of reserves more in a closed system. Relative to the market-wide risk-free rate, the interest on excess reserves, or the reserve spread, increases by an average of 16 basis points. Okay, and if you compare that to something like the spread between the interest on excess reserves and the Fed funds rate, which in the data over this time period is 11.6 basis points, then I would say it's not exactly the same, but it's quite surprising that it's in the same ballpark, because we have not used any directly related data of that sort. The correlation of these two series over time is also very high. So we really do seem to observe that as you have more reserves, you are forced to reward the holding of these reserves more, and in the market, because the Fed is setting the interest on excess reserves, it's the Fed funds rate that changes to move the spread. Now, this initial change in the spread on reserves passes through to the different markets that banks operate in. For deposits, the deposit spread increases by 12.7 basis points, and for mortgages and loans it's 18.8 and 15.6 basis points, respectively. So in terms of the price response, you don't see such a big difference: deposits are the lowest and mortgages the highest, but you're roughly in the 12 to 18 basis point range. What is really different, however, is how much these rate changes pass through to quantities. And here we find that it's really bank loans that see the largest effect: for a dollar injected, we find a 19 cent drop in the amount of loans extended to firms, whereas for mortgages and deposits the quantities are much less affected. The reason, again, goes back to the demand systems we estimated: we found that loan demand is much more elastic than deposit and mortgage demand, so similar rate pass-throughs have a much larger effect in the loan market. And once again, I stress that this is not the only channel of quantitative easing, or of the APP; this is the result of injecting reserves, right?
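As a rough back-of-envelope check on this chain, one can multiply the spread changes by the market-level semi-elasticities quoted earlier. This is only a sketch with variable names of our own: it ignores the joint equilibration across markets that the paper's counterfactual performs, and turning percentage changes into the cents-per-dollar figure requires the balance sheet levels from the paper.

    # Back-of-envelope: QE-induced spread changes times market-level
    # semi-elasticities (both as quoted in the talk). Semi-elasticities are
    # percent quantity change per 10 bp; spread changes are in bp.
    semi_elasticity_per_10bp = {"deposits": 1.3, "mortgages": 4.0, "loans": 16.1}
    spread_change_bp = {"deposits": 12.7, "mortgages": 18.8, "loans": 15.6}

    for market, elas in semi_elasticity_per_10bp.items():
        dq_pct = elas * spread_change_bp[market] / 10.0
        print(f"{market}: roughly {dq_pct:.1f}% quantity response")
    # Loans come out by far the most responsive, which is why the full
    # counterfactual finds the largest effect there (about 19 cents of
    # loans per dollar of reserves).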
In total, you would also expect some effects from the assets being purchased, effects from changes in long-term interest rates and in the yield curve, and those are all important. But here we highlight one channel that was probably not paid much attention to before, which is that the addition of reserves, which has to happen if the central bank is purchasing anything, and if reserves are confined within the banking sector, has the side effect of not helping bank lending but crowding out bank lending. And the effect is 19 cents per dollar. Over time, you see in blue and red how our projections and the data align, so it really seems that over time we are seeing evidence of increases in reserves leading to declines in loan amounts. So again, this is the reserve supply channel of unconventional monetary policy. We think it's an additional factor that really should be considered when we think about how much QE we should be doing and what the optimal size of the central bank balance sheet should be, given that bank regulation is in place and is unlikely to change. We came to this conclusion not by looking at QE directly, because we were worried about the endogeneity; we set up a demand and supply system that we estimated using exogenous variation, and we ran a counterfactual that showed us that a dollar of reserves crowds out 19 cents of loans from bank balance sheets. I think one implication of these results going forward concerns the amount of reserves: if you have larger reserves, one of the costs is potential crowding out. If you want to alleviate some of the negative effects, one thing that has been done in the US during COVID, for example, is to exempt reserves from the supplementary leverage ratio regulation, and that is something one might want to think about if you want to keep having large central bank balance sheet sizes going forward. The other possibility is to no longer restrict access to reserves to the banking system. All of the problems that arise here are because you have a lot of reserves that cannot leave the banking system, and the banking system has a lot of costs in holding these reserves. Suppose you were able to have other types of intermediaries, say money market funds, with access to central bank reserves; then the injection of reserves no longer has to be fully absorbed by bank balance sheets, and at least you can allow the market to tell who is in a better position to absorb these reserves, and perhaps banks can then have a greater capacity for extending lending to the economy. With that, thank you very much. I really look forward to the discussion and all of your comments. Thank you, Yiming. Your paper will be discussed by my colleague Agnese Leonello. Good afternoon, everybody. It's a great pleasure to have the chance to discuss this super interesting paper. You got it already that for this audience, for this place, this is a really topical paper, something that everybody in this audience should read. So this paper is about QE, or the APP, any asset purchase program by a central bank, and it zooms in on a specific consequence of asset purchases: the creation of reserves and the accumulation of central bank reserves on banks' balance sheets. Banks' balance sheets only, because, as Yiming mentioned at the very end, these reserves, no matter who is the final counterpart in the asset purchases by the central bank, have to stay and have to be held on banks' balance sheets.
And under some circumstances, this means that QE translates into an expansion of banks' balance sheets. So the question that Yiming and her coauthors are asking in this paper is whether this increase in the supply of reserves actually has an effect on banks' other decisions, in particular the decision to provide loans and mortgages and to raise deposits. And they have a very thought-provoking finding: the increase in reserve supply during the 2008 to 2017 period actually crowded out bank lending. What they show is that for every dollar of reserves that was injected, there was a 19 cent reduction in bank loans. And they call this the reserve supply channel. As Yiming mentioned several times, this is not capturing the overall effect of QE, luckily for us, but is rather showing a counter-effect, a force that is counterweighing the stimulative impact of QE. And if you think about the overall amount of reserves that were injected, this is not a small number, so it's a very sizable effect. So what's the idea, what's the mechanism behind the emergence of this reserve supply channel? The idea is that the injection of reserves, and the expansion of the balance sheet that comes with it, is actually changing the cost of providing other banking services: loans, mortgages and also deposits. And in principle, this effect could go either way. Reserves are a very safe and liquid asset, so banks with more reserves are banks that are better prepared to face, for example, a liquidity shock, and this should give them the ability, and the willingness, to lend more. But the effect that Yiming and her coauthors show goes the other way around: you have more reserves, and somehow this increases the cost of providing other services, in particular loans, and so the overall effect is the one I mentioned at the beginning, a crowding out of bank loans due to the increase in reserves. Now, I know that this doesn't really do justice to the paper and to the authors and the great work they put into their estimation, and I think Yiming did a great job summarizing the estimation strategy. They build a structural model of the demand and supply of loans and other banking services, and they estimate this model through instrumental variables. It's a really challenging task, and I think they tackle it very well with a very sophisticated estimation strategy, which she went through in her presentation. And as I said at the beginning, this is a very interesting and policy-relevant paper, and it potentially presents a controversial result, because this is something that I think no one in the central banks was expecting or paid much attention to, and there are some Fed colleagues who, in a very different setting, actually find a sort of opposite result, namely that this accumulation of reserves led to an increase in bank loans. So what I will try to do in my discussion is not to go through the estimation strategy, but rather to focus on the mechanism behind the result and try to understand better why an increase in reserves on banks' balance sheets is leading to a reduction in bank loans; to understand under which circumstances we should be worried about this reserve supply channel of QE, and when instead we can sleep better. So let me make a sort of list of the basic ingredients that you need for this reserve supply channel to emerge.
So first of all, you need banks not to be the ultimate sellers of the assets that are purchased by the central bank. This, for example, is very different between the US and Europe: in Europe, treasury and sovereign bonds were mostly held on banks' balance sheets, so banks were the ultimate sellers. And if banks are the ultimate sellers, there is essentially only a swap between treasuries and reserves, which are very similar assets with very similar properties, so there's no expansion of banks' balance sheets; and this expansion is an ingredient that is absolutely needed to have the reserve channel at play. The second ingredient you need is that there is some sort of constraint, as Yiming pointed out, that the banks are facing, and this constraint must really bind for a bank; you also need reserves and loans to enter this constraint as substitutes. So if you increase the amount of reserves, you bring the bank closer to the point at which the constraint binds, or is no longer satisfied, and then you have to reduce something else on the balance sheet, or change something else on the balance sheet, for the constraint to be satisfied again. This is what captures the substitution between reserves and loans, and the existence of this constraint is really the driving force behind the fact that holding more reserves makes lending more costly. So what could this constraint be? Yiming mentioned at the very beginning of her presentation a leverage constraint, and the paper also has a policy implications section at the very end in which they discuss what the source of this cost could be. So let's think together about the constraint that banks might be facing. This is hardly a risk-weighted capital constraint or a liquidity constraint, because reserves and loans do not enter those constraints as substitutes: reserves carry no risk weight, and reserves are actually there to improve your liquidity position, so they are not treated the same way as loans in those constraints. As Yiming mentioned, a potential candidate, or suspect, could be the leverage requirement, or the supplementary leverage ratio, which, as she mentioned, received a different treatment during COVID both in the US and in Europe; and the Bank of England already excluded reserves from the computation of the supplementary leverage ratio in 2016. But my point about this constraint is that it only entered into force in the US as a requirement in 2018. Of course, banks knew about this constraint already, it was a rule already in 2014, and they most likely front-loaded it. So this constraint might be a candidate. And what I think is that the sample period that they have, which goes from 2001 to 2017, is really an exciting time for someone who is interested in bank regulation, because this is the time in which we had the crisis, then we had the Basel III regulation, and then the proposal, the implementation and the entry into force of the various requirements. So they have a very long sample period, and I was wondering whether it would be possible to zoom in on some shorter time periods, where, say, a regulation entered into force, and also to try to understand which of these regulatory constraints might be the one that is actually driving the result.
Also because, if you think of banks before the crisis, right after the crisis, and nowadays, or closer to the end of the sample period, their ability to satisfy these constraints was very different. I was checking with European data: the average European bank now has an SLR of 5.2%, while the requirement is only 3%. So they are way above the constraint, which means that it is no longer binding, and I wonder when this happened. So I think zooming in on shorter time periods could give a better idea of which of these constraints, if it really is a regulatory constraint, is binding, or whether it is something else. The other point that I wanted to make is about the balance sheet expansion. In the asset purchases as described for the US, banks were not the ultimate counterparty in the purchase of assets; they were acting as intermediaries with non-banks, and I think you have very convincing data on this, which you mention at the very beginning of the paper and also in the policy discussion. So what happened, essentially, if we look at banks' balance sheets, was that on the asset side we saw an increase in reserves, and we saw something else also moving on the liability side, which was an increase in deposits. And, I didn't put up the wonderful matrix that you have, but deposits and reserves actually move the marginal cost of lending in opposite directions: if you have one euro more of deposits, you reduce the marginal cost of lending, while if you have one euro more of reserves, you increase the marginal cost of lending. Of course, the magnitudes of the two effects are not the same, but they are quite close. So I was wondering whether the counterfactual exercise, which simulated an injection of reserves only, without thinking much about how this expansion of reserves on the asset side is funded on the liability side, is telling us everything in terms of the magnitude of the effect. I don't think it would change the direction, but it might say something about the magnitude. And the other point that I wanted to make, looking at the liability side, and I know this is my personal bias of always looking at the liability side, is that there are very different types of deposits, and the ones with which banks were backing the increase in reserves were mostly wholesale funding, something that is extremely runnable and unstable, which is not really something that goes in the direction of pushing up your incentive to lend. So this might also be a force at play, a consequence of the increase in reserves and of all the points you were making about banks being merely intermediaries in these QE asset purchases, and that could be an interesting angle to explore. Let me skip the smaller comments, because I think we can discuss them separately between the two of us. So let me just summarize. I really enjoyed reading this paper. I was in the office reading it over the last few days, and all my comments were: oh my God, really? Oh really? I really want to know more. And it starts from the abstract, where you see something that is super interesting, and I don't think it's just because I'm in a policy institution that did QE; it's really because it's a very well-written paper, very interesting, a very careful analysis.
And I think there are some avenues, maybe not even for this paper but for future work, in which one could deepen the discussion of the underlying mechanism behind the result and, for example, explore in more detail this interaction between QE, so monetary policy, and regulation. I think that would be really interesting to do, and it would also make it possible to strengthen the policy implications of the paper altogether. So with this, I stop here. Thank you. Thank you, Agnese. So for those of you who are listening online, you can submit your questions via Slido, and for those of you who are on site here, just raise your hand if you want to ask a question. But before that, Yiming, do you want to first reply to Agnese? Yes, thank you so much for such a thoughtful and kind discussion. I just want to quickly touch on two things; I think we need to discuss some of the deeper comments much more. Maybe let me start from the end and think a bit about the accounting. As you correctly point out, at least at the very beginning, when there is an injection of reserves through the purchase of securities, and if these securities are originally held in the non-bank sector, the expansion on the commercial bank's asset side in terms of reserves is automatically matched by an expansion in its liabilities, most likely deposits. It has to be, just for things to balance: something has to adjust on the liability side. And that should be one-to-one in its immediate mechanical effect. However, that does not mean that that amount of expanded deposits is there to stay, right? After the fact, the bank should equilibrate. It's going to look at: okay, this is how large my balance sheet now is, this is how large I want it to be, or how large I can afford it to be; how do I adjust? It moves its asset and liability sides to, let's say, a new equilibrium, in which it is no longer obliged to hold on to all the deposits that initially expanded. And I think what we want to understand is the world that has equilibrated, rather than the very initial world right after the injection, where I fully agree that there is a one-to-one expansion in deposits. That said, it could still be that in the equilibrium-adjusted world there is a simultaneous expansion in deposits, and that's something we could definitely think about running: a counterfactual with a simultaneous increase in reserves and an increase in deposits. And I fully agree that the liability side matters. Actually, Viral and Raghu agree with your point: they wrote a very recent paper, and you should all take a look at it if you haven't, in which they look at potential side effects on the liability side, where they argue that larger reserves could actually increase, let's say, instability by exacerbating the possibility of runs. So side effects on the liability side may also be very important, and so far we have focused mostly on the asset side. Yes, also in terms of regulation, I agree. I think we are a little bound by how many parameters we can estimate. We can definitely try to have, let's say, time-specific coefficients, so that we can better understand over what time period which regulations were at play and how much these constraints were affecting lending.
But we will need to experiment a bit to see how much flexibility the data gives us and how much power we have in the estimation. But once again, thank you so much. There's one question here in the back, Morten. Yeah, I was wondering about the result that the elasticity is much higher for loans. One interpretation would be that this is very costly, because then the corporate sector can invest less. But I guess the other one is that large firms have very close substitutes: they can issue equity or corporate bonds and so on. So the cost may actually be very small. It seems important to figure out what happens on the corporate side. That's very much true. I think what we can say is that there are fewer loans that the banks are able to finance. But it would also be important to understand whether that is really a drop in the total amount of borrowing, or whether it is substitution towards capital markets. Perhaps one way to read the current results is to say that there could be some substitution, and especially for the largest firms the substitution will probably be very good. But that also means that if you do not have the option to substitute, if you are a smaller firm, or a lower-rated firm, or you don't have a credit rating, if you are more credit-constrained, you are going to be especially affected if banks are constrained by the injection of reserves. But I fully agree: ideally we would have a bigger model that also has a capital market in it. Next question, here in the middle, on your left. Great paper. I don't know if you can look at this in the data, but you could possibly argue that the reserve holdings force banks to drop the riskier loans, which are actually bad: they shouldn't have been made in the first place, but banks were engaging in risk-shifting behavior; at least during the financial crisis you can maybe argue for that. And those were the loans that were dropped, and that was actually the purpose of QE. Of course, I'm playing devil's advocate here. But is there any way for you to look at the quality or risk of the loans that were dropped? That's a very good point as well. I guess there is a deeper philosophical question: do we want a very safe banking sector, which would probably be like narrow banking, or do we care about the liquidity transformation and lending that the banking sector performs? I guess there's a sweet spot in between, where banks take, let's say, positive NPV projects rather than financing a lot of zombies. The counterfactual currently cannot do this, because we would have to model quality inside the supply side. But we should be able to look at the data around that time, corresponding to the same period, because we do have granular loan-level data, to see what types of firms are likely to lose their financing, and that can probably get at that. Thank you. Next question, Peter, here in the middle. I liked the paper a lot. One question is whether you have clarified what kind of assumptions you need to go from the analysis based on local shocks to the general equilibrium effects that you get to, since these elasticities are potentially different. Yes, absolutely.
The assumption would be that the regions of the demand curves and the regions of the cost functions we have estimated, in response to local disaster shocks and in response to deposit growth from a Bartik shock, are representative of what happens with a large injection of reserves. And of course that is not exactly true: the responses in the counterfactual are of a much larger magnitude, let's say, than those for an average deposit or loan market experiencing the average shock. But with these shocks, we do have quite some variation. There are some very small shocks, like a lot of local rainfall with a very small loss. But it is true that this dataset also has things like Hurricane Katrina, very large-scale events that led to very large losses in the local economy, and for those types of shocks we do see a much larger region of, let's say, the parameter space being used to estimate the coefficients. But yes, it is the assumption that the responses to the shocks we use are able to represent what we would observe following a system-wide reserve injection. Next question, all the way in the back. Yiming, thanks for the great presentation. I was wondering about how you interpret the securities market in the context of your model. The reason is that it seems to me there is a marginal cost for banks of intermediating, basically, between the non-banks and the central bank, in terms of balance sheet cost. But the banks seem to pass this balance sheet cost through to borrowers instead of to the non-banks. So you could think of a market where, because the banks have the power, basically, to do QE, they pass the balance sheet cost through to the non-banks instead of the borrowers, as in your setting. That's a very interesting way to think about it. I think perhaps, by revealed preference, the fact that we observe what we observe means that they were unable to pass everything through to the non-banks. From some other work, my personal take is that they do have quite considerable market power over non-banks, but perhaps it's present yet incomplete. Next question. Okay, then let me look at Slido. There's one question on the way you separate out demand and supply: you use natural disasters to capture local demand. So, also for a European audience, could you elaborate a little more on the extent to which banking in the US is truly local? Because here we often think you have these large banks that are basically active in different markets. That's a great question. It's certainly not purely local. What probably is more local, if a disaster hits a region, is the housing, the infrastructure in that place, the residential homes, all those things that need to be rebuilt. So that definitely is very local. Now, the assumption would have to be that those borrowers and those depositors go primarily to local banks in that county, for which we would have the disproportionate effect. If they also go to other banks in other counties, that is something we would have to net out. So what we use here, you can think of as the relative increase in exposure of the local bank branches due to local disasters, and this is going to be a larger effect the more locally oriented the banking sector is.
So I guess if you take another economy with banks that are even more local, this kind of shock is probably going to be an even better instrument for generating the variation. It would not be useful in a world in which there are no local markets at all, where everyone, no matter where they are, can go to any bank branch with a similar propensity. Thank you. The next question is: you abstract from loan maturity in your setup, so every loan contract has the same maturity. One question would be, if one were to model some sort of term premium. At least in this house, we've been saying that one of the ways QE reaches different parts of the economy is by extracting duration, and it should in principle compress the term spread. So if, in your model, you had two loans with different maturities, would that be another channel worth looking at? And would it perhaps go in the opposite direction? Right, I think it's an interesting heterogeneity, because there should be two differences depending on the maturity of the loan. One is exactly the one you said: if QE's goal is to flatten the yield curve, then longer-maturity loans may benefit more on the asset side, let's say. But it should also be the case that longer-maturity loans have a different duration risk and different considerations from the bank supply side, and that could also affect how the cost function parameters differ for these two types of loans. So there would likely be very interesting interactions there. We can try to separate those out; we do observe at least the maturities at issuance for these loans. Thank you for the suggestion. Okay, my last question steals from Agnese's discussion. If I push the rationale you provided in comparing the US and Europe, to the extent that European banks are also sellers of securities, Agnese was stipulating that the effect could be smaller. Is that something you would agree with: if you were to run this type of estimate for Europe, would you expect, because of this, to find smaller effects? Let's say, first, the mechanical effect: yes. If, ex ante, all else equal, some banks hold more of the securities that the central bank is buying relative to a banking sector that holds less, then the second one would respond more, given the channel I'm proposing. But I think it's also important to ask why a certain banking system is holding more of a certain security in the first place. And if there is any anticipation effect, that perhaps these are the things that the central bank is more likely to buy going forward, which contributed to a larger holding of certain types of securities, then I would argue that maybe that's just the intermediate step of the channel you're proposing. Because, remember, the intermediate step has to be that the banks are first buying the securities from their clients, asset managers, and then offloading them to the central bank; so there exists a very short period of time, at least in the US most often, in which the banks are holding these things on their balance sheets. And then perhaps that is a different timeframe, a different type of equilibrium, that we see in Europe. But I think it's yes and no, depending on what the ex ante expectations and incentives for holding securities were in the first place.
Good, well, when I hear yes and no, I think more research is needed. Well, thank you so much, Yiming, and of course Agnese, for making this an interesting session. Thank you so much. Welcome back to the ECB's annual research conference. It's a tremendous pleasure to introduce Professor Jean Tirole to you. He's Professor of Economics at the Toulouse School of Economics and the 2014 Nobel Prize winner in economic sciences. He's by far the most cited European economist alive, and an entire generation of economists has grown up with his papers and textbooks on industrial organization. When I asked students and colleagues how best to describe Jean, the first words that came to mind were: intellectual, inquisitive, hardworking, and, above all, down to earth. It is indeed quite telling that Jean was the first to send in his presentation for today's event. Professor Tirole will be delivering the Jean Monnet lecture. The goal of this lecture series here at the ECB is to debate topics of relevance to strengthening European Monetary Union. The lecture is named in honor of another Jean, Jean Monnet, one of the founding fathers of the European Union. He was a French businessman and diplomat; he drafted the declaration which led to the creation of the European Coal and Steel Community, the precursor to the European Economic Community, and he went on to be its first president. At the ECB, we admire Professor Tirole for his tireless efforts to put his prolific research at the service of the European common good. His applied work on the prudential regulation of banks, competition policy and financial crises has been highly influential in these circles. Today, Professor Tirole will talk about regulation and competition policy, or not, in the technology sector. He will speak for about 40 minutes, followed by Q&A. You can, as before, submit your questions via Slido, or, if you're here in the audience with us, simply raise your hand. Professor Tirole, the floor is yours. Thank you so much, Luc, for those very, very kind words. It's a great honor to be back here and give this Jean Monnet lecture. You may be a little bit surprised, I mean, by the title of this lecture, which is not the usual central bank lecture, I would say. And don't worry, the European Central Bank is not engaging in mission creep; the monetary policymakers still have a job. But the digital economy is everywhere, so even though that kind of stuff may not be what a number of you work on every day, it has an interface, as you will see, with some of the central banking topics. The outline will not be all the challenges raised by the digital revolution and AI, the impact on labor, health, privacy, politics, and the like; that would be too much. It will already be a lot, as you will see. Most of the lecture will be about competition policy, which is at a crossroads. I guess most of us think we don't have a choice between laissez-faire and just some populist intervention; there are more interesting things to do. But we have to address this new winner-takes-all world, so I will be talking about that. I won't be talking about industrial policy, because I won't have time, but if you want to raise it during the questions, that's fine. And I will have a shorter part, which is maybe a little closer to your interests, on fintech, CBDC and private money, where I will give some personal views. So, of course, competition policy is not new.
It was already around in the late 19th century, but there is more and more concern, because we have seen an increase in markups. And as a whole, we are not fans of monopolies: they charge high prices; they don't innovate that much, on average, because they are afraid of cannibalizing their own products; they engage in lobbying to keep their monopoly position; and the like. Now, when I grew up as an economist, things were simple, because you had regulation on one side, for telecoms, electricity, railroads, gas, and so on, with a regulator for those network industries, and competition policy was for the rest. You know, basically, industrial policy was kind of a source of shame for the rest of the family. Technology has changed this a little bit with the rise of multi-sided platforms. They challenge our institutions, and they have blurred the lines between industrial regulation and antitrust. Those platforms, in a sense, resemble public utilities. They are different, we'll discuss that, but they have very large investment costs, they have network externalities, and they have very low, actually often zero, marginal costs. Do they have very high prices? Often the GAFAs, you know, the Amazons and Googles and Facebooks of this world, will say: look, the prices are very low; actually, they are zero; most prices for consumers are equal to zero; we provide all those great services for free. This is a wrong argument, because there are two sides. Google and Facebook and the others charge a lot of money to advertisers for targeted advertising, and they charge a lot of money to the merchants: you may be paying 15% to Amazon, 20% to Booking, 25% on the app stores, and the like. So it's actually a lot of money, and of course this raises the cost of doing business, which means that the consumers, in some way, are going to pay for it. So look at this slide. It's both extremely naive, and it also looks complex. Basically, what you have in the middle is a platform, and it has different names; I'm going to use different names, but in Europe it's called a core service, or in antitrust it's called an essential facility or a bottleneck; there are lots of different names. And this platform is basically interacting with two sides of the market. On one side, there are the consumers, you and me. On the other side, there are the apps or merchants who want to sell on those platforms. The merchants or apps may be in-house, so they belong to the platform, or they may be third-party apps or merchants. So there is this balancing act that we have studied over the last 20 years, in which the platform tries to bring both consumers and merchants on board. Now, you have three different kinds of questions in this economy. One, which we are going to discuss very little, is whether there are behavioral manipulations and wrong information given to consumers. The other two we are going to discuss. One is whether this monopoly platform at the center of the system, think about Google's search engine or Facebook's social network or the Amazon or Apple marketplaces, can be contested in some way: could a more efficient Google or a more efficient Facebook come in and take over the market? That's called contestability. And if you look at merchants or apps, there is the question of whether third-party apps or merchants are fairly compensated and get fair access to the platform. It's not quite the same thing.
Fair access to the platform means you compare the terms and conditions for a third-party app or merchant with those for an in-house one. And the European Union has been very active here; there are also lots of disagreements, but I think it's great and I applaud it. I put in green here the kinds of laws and regulations which have been put forward. The one we are going to focus on is the DMA, the Digital Markets Act, which has to do with market power and the issues of contestability and fair access. Now, you might tell me: what's new? After all, I told you that those platforms look very much like old-style public utilities, telecoms, railroads and so on: large fixed costs, network externalities and low marginal costs, and that's why you have a winner-takes-all situation. But you just cannot regulate Google and Facebook or Apple or Amazon the way you regulated telecoms and railroads and electricity companies, for at least two reasons. The first reason is that those firms are global firms. All the other ones I mentioned were always national firms, which means you had one regulator, and this regulator had very good information about what was going on within the firm. Google and Facebook are all over the world, okay? And that raises the issue of having good data about the firm, but also of who is going to be in charge of providing those firms with a fair rate of return. We don't know how to do that. The other issue is that those firms are not followed along their life cycle. Sure, you see that Google is making a lot of money right now, but we were not monitoring Google when Google started up. And if you wanted to set a fair rate of return for Google, you'd have to factor in how much Google spent, but also what the probability was that Google would become Google, right? Because there are a lot of wannabe Googles, even now, actually. And finally, regulation is a cat and mouse game, just like banking regulation: it's always difficult to catch the ones you are monitoring, and if things change very fast, and that's the case in tech, just as it is in banking, it's very hard to regulate. Okay, let me maybe skip this: there's a very interesting piece of legislation in Europe about curating content. That's more on the consumer side; it tries to make sure that those platforms don't have too much illegal content, fake news, or defective products, that they don't try to exploit our cognitive weaknesses, for example, and that they have recommender systems, like Google's and Amazon's, which hopefully serve the consumer. Now, this is not to be taken for granted, because since 1996, basically, there has been no liability of those firms for illegal content, defective products, fake news, and the like. This is changing, but we have to invent a new regulation for that. So let me start with fairness. As I mentioned, there are two dimensions to fairness. One is whether the platform charges too much to the business users for access to the platform. The other is whether there is a level playing field between in-house merchants and third-party merchants. And that brings us to the notion of the hybrid platform. If you look at the digital world, at one end you have the pure broker: think about Airbnb, for example; Airbnb doesn't own the apartments, it's just a marketplace putting together apartment owners and people who want to rent from them.
At the other end of the spectrum, you have vertical integration, where basically the platform does everything itself. It is hard to find pure examples, but Apple, traditionally, has been mostly vertically integrated into hardware and the like. But mostly now you have the hybrid platform, which operates the marketplace but also competes in the marketplace, so you have competition between in-house and third-party merchants or apps. And there are good reasons why this is the case: platforms sometimes introduce their own applications or merchants, think about Amazon with Whole Foods, with Amazon Basics and the like, because they want to control the experience, because they innovate themselves, but also possibly because they want to behave in an anti-competitive manner. So let's look at this. I'm going back to the same diagram, and let's look at what's called the Chicago School argument. The Chicago School argument was actually enunciated in a different framework, because the Chicago School didn't have two-sided markets in mind, but let me rephrase what they were saying. Why would a monopoly platform exclude or handicap third-party apps or merchants? In the past you would have called that foreclosure; today, in Europe, it's called self-preferencing. The Chicago School would say: no, no, don't worry, they're not going to do that, except for efficiency reasons. Why? Simply because if you let third-party apps develop and trade on your platform, then you will have a very rich ecosystem that consumers will like, and therefore you can charge those consumers more. If you have a better overall product to offer consumers, you can charge them more, and therefore you have no anti-competitive incentive to foreclose access for third-party merchants or apps. This Chicago School argument is a little bit like Modigliani-Miller: we know it's wrong, but at the same time it's kind of a guiding principle, and we have to ask why it is wrong. And there are actually four reasons why it's wrong. So look at this diagram again. Start with the consumer, and start with the statement of the Chicago School: if you have a richer ecosystem of merchants and apps, you can raise the consumer price. Now, what if your price is zero? In the digital economy, your price is mostly zero, which means that your optimal price is negative. And the only reason you would like to subsidize consumers to be on your platform is that you make a lot of money on the other side; that's where the two-sided market part comes in. You would like to subsidize consumers to come onto your platform, and then you make money on data, advertisers, merchants. So the optimal price is negative, but of course you cannot charge negative prices, because otherwise you will have bots pretending to be consumers and making money out of you. So what happens then is that there is a zero lower bound. By the way, macroeconomists, you don't have a monopoly on the zero lower bound; the ZLB is also in IO, right? You see a lot of ZLBs in IO as well with two-sided platforms. So you don't want to raise your price above zero, because in the first place you would like to charge a negative price. That's the first reason why the Chicago School argument doesn't apply in this context.
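To see this first reason in symbols, here is a minimal sketch in the spirit of the two-sided market models mentioned above; the notation is our own. Suppose the platform earns a margin \mu > 0 per consumer on the other side (advertisers, data, merchant fees) and faces consumer demand D(p_c) at consumer price p_c with per-consumer cost c_c. It solves

    \max_{p_c} \; (p_c - c_c + \mu) \, D(p_c) \quad \text{subject to} \quad p_c \ge 0.

The effective marginal cost is c_c - \mu, so the unconstrained optimum p_c^* can easily be negative when \mu is large. When the constraint p_c \ge 0 binds, a richer ecosystem of third-party apps raises D but cannot be monetized through a higher consumer price, which is exactly where the Chicago School conclusion breaks down.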
The second reason: let's move to contestability. You may want to exclude third-party apps or merchants for two reasons. The first is that if you have only in-house apps, or a majority of in-house apps, it will be very difficult for an entrant in the core market to come in, because there will be what's called an applications barrier to entry. You cannot be a platform if you don't have apps to offer, and if you control the apps yourself, nobody can enter. The other reason, again linked with contestability, is a bit different: it comes from the fact that a third-party app which is initially a complement to the platform might become a substitute. And that was exactly the Microsoft browser case in the 90s. Netscape, a rival of Microsoft's Internet Explorer, was a complement to Windows; however, by adding lines of code, it could have become a substitute for Windows. The case was built exactly on that: Netscape was a complement, everybody agreed on that, but it could have become a substitute, and therefore Microsoft had an incentive to foreclose Netscape. That's the third reason. The fourth reason is more complicated and a bit more subtle. Let's go now to the apps. It turns out that you have a second zero lower bound there: there are lots of apps which are free as well. If those apps make money in other ways, like getting the data and so on, then competition between app developers will bring the price to zero, at least for some of them. And the winner of that competition will get supranormal profits, which won't be competed away, because you have the zero bound. So you have a zero bound on the consumer side, and you may have a zero bound on the app side. Okay, so four reasons, and I went very fast, why the Chicago School argument doesn't hold; two of them have to do with zero bounds, which are very typical of two-sided markets and the digital economy, where you end up having a negative opportunity cost because you make money elsewhere from having consumers. Okay, so that raises the issue of what the right compensation is, and it's a broader issue actually: what is the right compensation for Google and Facebook and Amazon and Apple and so on, Samsung? And that raises an issue having to do with what's called the access charge. It has 200 names, it's called a merchant fee, whatever, but basically it is how much rival app stores, for example, or the merchants, have to pay for access to the core service. And I will come back to that. You really have to find the right balance between an access charge which is too low, meaning the platform cannot make money on access and will have a strong incentive to foreclose rivals, and an access charge which is too high, which will penalize efficient competitors. We'll come back briefly to that. It's a very crucial issue, because if you look at the DMA in Europe, it talks a lot about those access charges without having a guiding principle for setting them, and it's actually a very difficult issue. The second aspect of the DMA in Europe is to protect contestability. So this is not about fairness toward merchants and apps; it's about whether the core segment, the monopoly segment, is contestable or not: can a more efficient entrant enter that market? The DMA has a number of provisions, and most of them seem all right. The DMA insists on multi-homing. What is multi-homing, for those of you who have never seen the word? It is simply that one side, or maybe both sides, of the market connect to multiple platforms, okay?
If you have a MasterCard and PayPal in your pocket, or a Visa in your pocket, you multi-home. Maybe the merchant does as well, okay? And multi-homing is very, very important for having competition in the core market. Let me give you an example, and I could give you another one: if you look at the apps written for iOS, which is Apple, and for Android, the biggest ones multi-home, and that's very important; that's why you still have two systems, iOS and Android. Let me give you another example. In the US, you have Uber and Lyft. Uber is bigger, Lyft is smaller. And people multi-home, at least some people, on Uber and Lyft, and many drivers multi-home on Uber and Lyft. Now, imagine that Uber says: if you want to be a driver on Uber, you must not deal with Lyft; you cannot be a driver on both Uber and Lyft, you have to choose. Well, what will a driver do? Uber has more customers, so the drivers will go to Uber. So there will be few drivers on Lyft. The consumers will basically stop multi-homing, at least those who were doing so, and soon enough you will have only Uber, and the market will be monopolized. So it's very important, and there are many examples of this, to have multi-homing if you want to sustain competition, in the few cases in which there is no monopoly so far; otherwise you will end up with a monopoly. And there are lots of variants, some of which are considered in the DMA. Another thing which the DMA requires is interoperability. The reason we don't have a telecom monopoly is simply that there is regulation so that telecom operators have to interconnect: if you are not on my network, I can still call you. Well, it's the same for Facebook, though with different issues. For Facebook, it's very hard for a customer to move to another social network if the customer has to port the same material, the content, the posts, the contacts, every time. If you have to do it twice, you are not going to do it for very long, okay? So the DMA is actually requiring static and dynamic portability, so that you can post your content, for example, on several platforms at the same time. And there is a more controversial provision, which is that the DMA will require that the big platforms do not combine data from their different services or data obtained from third parties, moving to data silos within the company. That's more controversial because, of course, it's going to make Google maybe more contestable, but also less efficient, because Google is very good at combining data from different services. Okay, what is the issue with contestability? The issue is that the entrant may not be able to enter because of the applications barrier to entry, and also, as I mentioned, because the app may later become a substitute, which gives the platform an incentive to foreclose. But the entrant may also be able to enter and yet never end up entering: what we have seen lately is what's called the defensive merger. A defensive merger simply means that the big tech companies buy their future rivals, okay? And you might think, you know, we have had laws for over a century saying that's not allowed. But it's very hard for the competition authority to challenge. It's very hard to challenge because, first, there are thresholds for notification.
And if you think about Facebook buying Instagram and WhatsApp, which are two other social networks, WhatsApp and Instagram had no sales, no profits, basically no revenue when they were purchased, more or less. And that's true for most of those mergers. So basically, you have things which have no revenues, and they fall below the radar. Now the DMA is going to require notification by the designated platforms, which are the big platforms. But even if there is a notification, that's not the end, because the burden of proof is very hard to meet: you don't have any data. Imagine that you are the European Commission or the DOJ in the US, and you try to prove that the purchase by Facebook of WhatsApp and Instagram, which are social networks that could become competitors of Facebook, is actually illegal because it reduces competition. The competition hasn't taken place yet; nothing has happened yet; you have no data. So you have to be a very good econometrician to find something in the data. No, I'm just kidding; this is really an issue. And that's precisely why they do it so early: because there's no data, you cannot prove it's anti-competitive. Now you could say: we do it ex post. Okay, now we decide that Facebook should not have purchased WhatsApp and Instagram. But the trajectory of WhatsApp and Instagram has of course changed by being purchased by Facebook, and on top of that, the eggs have been scrambled, and it's very hard to unscramble the eggs. So that's really an issue. Let me skip that and tell you a little bit about MFNs. MFNs are very important, and they are a very, very nice trick which has been used by many platforms. And here it's going to touch a little bit, as a particular illustration, on payment systems. So an MFN is basically a pledge by the merchant that you won't find a better price anywhere else, either on another platform or if you call the merchant directly. It's a best-price guarantee. So, for example, if I want to list my property on Booking, in the good old times, then I have to promise Booking that I'm not going to charge a lower price anywhere else. Okay, so that's the requirement. Now Booking can say: look, come to me and you'll get the lowest price, whatever you choose. That sounds great for the consumer, right? You get the lowest price regardless. On a single platform, that's great, no? No, there's a catch, okay? So just look at this diagram. You have, well, you probably cannot see it, but you have a card payment system, for example; that might be American Express, or it could be Visa, MasterCard or whatnot. It could be an online booking system like Booking, or it could be Amazon. And on the other side, there's a merchant. The platform will charge some fee, usually a proportional fee; it's not quite proportional, but to simplify: American Express might charge 3% of the transaction, the Amazon marketplace might charge 15%, Booking 20%, and some of the app stores charge 30%, Samsung and Apple and so on. So there's an access charge. But if the merchant wants to be on the platform, it has to guarantee the best price to the consumers of the platform, okay? So it's a wonderful deal. Now, once you know you get the best price, there's no incentive to multi-home. You go to Booking, you find the hotel you want, and you don't look anywhere else, because you know you get the lowest price on Booking, okay? So the platform user is never going to multi-home.
The user is a Booking customer and is going to stay on Booking, especially if all or most of the hotels are on Booking. Okay, now Booking can go and see the merchant: here are my conditions, 20%, and I'm the only gateway, the gatekeeper, to those consumers I'm serving. You won't reach them anywhere else, because they don't multi-home. So you have to accept my terms and conditions. So far so good, but here's the catch. If Booking charges 20% of the transaction, who pays the 20%? Is it the Booking customers, or is it other people? Well, the Booking customers are going to be charged a bit more, because the hotel will have to pass through the cost in some way. However, remember the best-price guarantee, the MFN, the most-favored-nation clause: the hotel cannot charge less to other customers. So the hotel has to pass through the 20% to every customer, not only the Booking customers but every other customer, whether they call the hotel directly or go through Expedia or whatever. So this is a wonderful trick, because it basically means that you can tax your rivals. It's one of the rare cases in which an oligopolist can actually tax its rivals. Now, that has been noted; researchers have worked on it, and it has been noted by authorities, and a number of countries in Europe have actually forbidden those most-favored-nation clauses. Our current president in France, for example, did that in 2015, when he was finance minister, but many other countries have done the same. Now, it turned out that the prohibition of most-favored-nation clauses didn't work very well, simply because, for example, the hotels on Booking and elsewhere keep an MFN even though they are not obliged to. Why? Because the search engine of Booking or of Google can downlist them: if there are far lower prices elsewhere, the platform can downlist the hotel. So there was pressure to apply an MFN even though you were not required to. So what economists and researchers, and this is a research conference, are doing now is trying to find guiding principles to regulate those MFNs, given that the prohibition, the structural remedy, did not work, okay? And I could say much more about this. Maybe the biggest challenge is actually the design of access price regulation. Lots of us, including myself and Michele Bisceglia but others as well, are trying to design those access price regulations, and we have to take two-sided markets seriously. By the way, in the US, the Supreme Court's Amex decision, like many of those decisions, talks a lot about two-sided markets, about my paper with Jean-Charles Rochet and so on. And then they conclude that the total price charged on the two sides of the market is the only thing that matters, when we say exactly the reverse. It's kind of embarrassing, but we have to apply the theory to try to think about what the right access price is. And there are very few cases in which it has been done. In Europe, with payment cards, the rule actually comes from a paper I've written with Jean-Charles, on capping the merchant fee at the merchant's convenience benefit of using cards. There's an empirical issue there, of course; it's not that easy to compute, but at least you have a theoretical principle that tells you what it should be. The same thing is going on now; this rule is being applied in Brazil and so on. But, you know, it's fair to say that we still have very little theoretical guidance on what this should be. And that's very crucial.
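To pin down the pass-through arithmetic above, here is a stylized calculation with made-up numbers, an illustration rather than the general model. Suppose a hotel has a cost of $c = 100$ per room, sells a fraction $\alpha = 0.5$ of its rooms through a platform charging an ad valorem fee $\tau = 0.2$, and the MFN forces one price $p$ across all channels. If competition pushes net revenue per room down to cost, then

\[
p\,(1 - \alpha\tau) = c
\quad\Longrightarrow\quad
p = \frac{100}{1 - 0.5 \times 0.2} = \frac{100}{0.9} \approx 111.
\]

Every customer pays the roughly 11% markup, including those who book directly or through a rival channel; that is the precise sense in which the platform's fee taxes its rivals. And the payment card rule mentioned above can be summarized, in its simplest form, as capping the merchant fee $m$ at the merchant's convenience benefit $b_S$ of accepting the card rather than cash, $m \le b_S$; the empirical difficulty is estimating $b_S$.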
So let me talk a little bit about a topic which is closer to your interests, maybe, even though I hope you have enjoyed this introduction to tech regulation. It's a huge thing. The digital economy is everywhere. People complain about monopolies and the like, and we have to find solutions. Our community has to find solutions. And that's going on. I'm very proud of Europe, because Europe is moving ahead. Not perfectly, but it's moving ahead, and that's very important. So let me say a few things about another aspect of the digital economy, which is fintech and CBDC. I'm not going to tell you anything new here. We have lots of public cryptocurrencies. We have occasional attempts at creating private currencies. And then of course we have digital money issued by the central bank. Now, if you want to have a discussion of those issues, you need to ask: what does money mean? Like in your first class in macro: is it a store of value, for savings or speculation? Or is it a basis for transactions, a medium of exchange and a unit of account? So far, for example, cryptocurrencies have been the former; they are certainly too risky to actually be a unit of account and a means of transaction. Just like any economic phenomenon, they come from a demand side and a supply side. On the demand side, users want low transaction costs, especially for cross-border payments. Sometimes they want to escape from a dysfunctional monetary system; of course, that's not the case for Europe or the US. And sometimes they have less palatable aims: money laundering, crime, tax evasion, some libertarian ideals that I have never understood. And on the other side of the market, the supply side, you have entrepreneurs who are going to try to make a profit out of the seigniorage on new coins, or out of other business models based on intermediation fees. For private money it's a bit different: for a private sponsor, you may have consumer lock-in, data collection and the like. Okay. And cryptocurrencies, as I mentioned, even though there are thousands of them, are so far basically speculative stores of value. They are still too expensive and slow. They don't have a two-sided platform model; they still haven't understood exactly the business model of payment systems. And they have this issue that you all know about, price instability, because the risk, of course, is a bubble bursting, and there's a risk of forking, as we have seen lately. They can be highly volatile, and that's why people have tried to introduce stablecoins, okay? Which sounds quite reasonable. But of course, if you have stablecoins, it's just like prudential regulation. They tell you: you can put your deposits there and don't worry, you'll get them back. Well, then you must have collateral, it must be segregated, it must be prudentially supervised. The question is, who does that? Who is supplying this global public good of doing all the supervision? Because if you have reserves, those reserves don't give a high yield; it's exactly the same problem as for a bank. And who is going to be the lender of last resort? We're not there yet, okay? And personally, I'm not a big friend of either cryptocurrencies or private digital currencies. In the case of cryptocurrencies, you have a loss of seigniorage, and I think the seigniorage belongs to the central bank and to the treasury. There's an issue with cryptocurrencies about mining and the environment, the waste in servers and the like.
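As a toy sketch of the prudential requirements just listed (collateral, segregation, supervision), here is a minimal illustration; the class and its rules are hypothetical, not any real stablecoin's design.

```python
# Hypothetical fully-backed stablecoin issuer: every coin outstanding
# must be covered by segregated safe collateral, and any minting that
# would break full backing is refused. This is the invariant a
# prudential supervisor would have to verify and enforce.

from dataclasses import dataclass

@dataclass
class StablecoinIssuer:
    reserves: float          # segregated safe collateral, e.g. T-bills
    outstanding: float = 0.0

    def mint(self, amount: float) -> None:
        # Prudential rule: coins must stay 100% collateralized.
        if self.outstanding + amount > self.reserves:
            raise ValueError("mint would break full collateralization")
        self.outstanding += amount

    def redeem(self, amount: float) -> None:
        # Redemption pays out collateral one-for-one.
        amount = min(amount, self.outstanding)
        self.outstanding -= amount
        self.reserves -= amount

issuer = StablecoinIssuer(reserves=1_000_000.0)
issuer.mint(900_000.0)                       # fine: still fully backed
print(issuer.reserves - issuer.outstanding)  # 100000.0 buffer remains
# issuer.mint(200_000.0)  # would raise: not enough collateral
```

The catch noted above is visible here: the safe reserves earn very little, which is exactly why an unsupervised issuer is tempted to reach for yield and quietly weaken the invariant that mint enforces.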
You have challenges for financial stability. I would like to see how cryptocurrencies would have taken care of 2008, of the European crisis, of COVID and the like; that would be interesting. And there is a danger that if consumers, SMEs or financial intermediaries invest in Bitcoin, they might have to be bailed out, and that would be terrible. So I don't really see the point. However, we also see, and it's not a criticism, that central banks have been slow at introducing rival currencies, despite some competitive advantages: basically, the state can decide what is legal tender; the state can decide what currency taxes must be paid in; it can compel banks and fintechs to join the platform, and so on. So there are lots of advantages for state institutions. And of course, most central banks are catching up now. There are still big issues about who is going to get access to CBDC. Do you want just retail depositors, and to what extent? Do you want to have wholesale depositors as well? Everyone? That kind of stuff. And then we go back to the previous paper, where you have an extension of reserves and the like. It seems like the tendency is towards saying it's only retail depositors, and it's not going to compete with the banks' normal business. And that would mean that the CBDC will be something like a deposit: it will be insured by the government, it will pay for deposit insurance, which would be the logical conclusion, and it will be limited in amount, to 100,000 euros or something like that. And you're going to tell me: what's the point of any CBDC then, if it's just a retail deposit? Well, you could have direct transactions by the consumers, so that the banks could not set fees. Now, there's a lot of regulation in Europe, fortunately, about that already. But that might be a point, I don't know. I just want to make a point about narrow banking. Not all deposits are meant to be safe, protected from bail-in, okay? There is no reason why every item on the liability side of a bank should be protected from bail-in. That's inefficient. Retail deposits are protected; they are insured by the state, because the state is a player who can actually insure people and banks against risk. But in normal times, the deposits are covered by loans, okay? Neither should all deposits be short-term and demandable, which I will assume CBDCs will be. It's good actually to have some longer-term deposits, all right? So we have to think about all those things. There is some logic in the infrastructure of those things. And I don't have a definite conclusion, but it seems to me that we should not throw the baby out with the bathwater. CBDCs will be very useful, if only to fight against those new things which have the potential to be harmful. But they should not compete too much with the banks either, okay? Thank you so much. And thank you. Thank you, Professor Tirole. So we have the pleasure of being joined by two of our board members: Isabel Schnabel, responsible for markets, relating to the first part of your talk, and Fabio Panetta, responsible, among other things, for the digital euro, related to the second part of your talk. So I'm counting on questions from the two of you, just to put you on the spot. Can I have other questions as well? Please, Fabio. There's a mic, yeah, it's coming. First of all, thank you very much for your lecture. I found it very interesting and stimulating. I would ask you two questions, and then let's concentrate on CBDC, because this is the topic that I'm working on here at the ECB.
The first is the issue that you raised about who should have access to CBDC. You raised this as a question, and we have been reflecting on it. Of course, in terms of efficiency, it would be a good idea for everybody to have access to CBDC, because if you introduce this digital means of payment, it would, at least to me, look natural to give access to consumers, merchants, firms, simply for efficiency reasons. But then, as you know very well, this could raise issues in terms of financial stability, because you might be inducing a crowding out of the banking sector. So what is your view on this? Because this is something on which we are reflecting from an operational viewpoint. Of course, the intermediate, prudent point of view would be: let's start by having limited access to CBDC, and then let's see how the system looks. Let's start with access for consumers, with a maximum value for each consumer, in which case the possibility of triggering, of inducing, digital runs would be limited. But there are many who criticize us on this view, because they claim that in this case we are limiting efficiency: by restricting access we lower the demand for CBDC from a broader potential group of users, and this would, of course, by itself induce lower demand and make the CBDC less attractive. The second is more of a comment: you mentioned the need for deposit insurance in relation to CBDC. I'm not sure I understand, because deposit insurance is necessary when you have a claim on a private entity. A CBDC is a liability of the central bank. So why do you make any reference to deposit insurance? The reason why a CBDC could crowd out banks is that it is a risk-less asset. And of course, if you have a risk-less asset that has zero storage costs and is totally liquid, then you might want to introduce some constraint, otherwise it would naturally crowd out any other asset that has similar characteristics. So there's something that I did not get in your analysis. On stablecoins: please don't call them money. They're not money. I don't pretend to be objective here, but that is, again, a liability of a private issuer, and unless the issuer is heavily regulated, in the limit like a bank, you cannot call it money, because the risk on the assets of the stablecoin will not be zero. Only the central bank has the capacity to issue risk-less liabilities. So, thank you very much again for your lecture. Well, thank you. Let's start with the end, on the stablecoins. I agree: if you have perfect prudential supervision, there is no risk, and there is not going to be money creation. But I'm concerned it's not going to work. I'm very concerned that at some point those platforms will want to gamble, because they will get a very low yield on this. And if you don't pay attention, then they will de facto issue money and take risk. So I don't know, maybe I'm too pessimistic, but I don't see the gain that is obtained from that, given that you already have a central bank currency. What is the point of having this? You're adding another layer, by platforms which are global, and that raises the issues of who is going to supervise those platforms and who is going to be the lender of last resort. So you're creating problems without adding value, as far as I can tell. You know, maybe I'm too French about it, I see centralization everywhere. But on CBDC and on the level of access:
as I said, yeah, I agree that deposit insurance is perhaps not the right term, but the point is that safe assets sell at a premium, so they carry an interest rate discount in a sense, and that should be the case. Now, I agree that nowadays, with inflation, we may not be too concerned about that, but in a sense that's really an issue. If you issue CBDCs and everybody can get them, that means that everybody has access to an asset which is completely safe and demandable, and that strikes me as strange; there is a big risk of disintermediation. I don't see the government actually engaging in providing loans. The government cannot do that, and the central bank, of course, cannot do that either; it's not equipped for that. And in the case of the government, political pressure would mean that the wrong loans get made, and so on. So I don't think it's the right way to do it. You cannot substitute for the banks; you have to keep the banks in operation, and I think that's something you would agree with. But the risk of disintermediation is very big, and the point I want to make is that you're offering a big service: demandable plus safe is really something that everybody wants. And if you're offering this service, whatever you want to call it, you have to charge for this service. So in my limited interpretation, where there would just be demand deposits, 100,000 euros maximum, then the banks, if the banks manage that, would have to pay for deposit insurance for that service, which is offered by the central bank. So you are basically duplicating the current state of affairs, but with people being able to do the transactions themselves with those CBDCs. Now, I don't know, I haven't studied the thing in detail; you probably know ten times as much as I do on the topic. But it seems to me that we have to ask ourselves, and that's something I'm sure of, why we have the financial system we have, right? It has taken centuries to get to some kind of financial system and to some kind of central banking. And my view is that there are good reasons for that. We have a technological innovation, and every new technology is good, but it's not because you have a new technology that the fundamentals of economics change. They don't change. The information asymmetries and all those things are still around. And therefore we have to make sure that when we take advantage of this new technology, we don't throw the baby out with the bathwater. Good. Well, if you have questions, please raise your hand. So, Jean, at the beginning you spoke a lot about how regulation should be global in some form; at least this phenomenon of tech firms is global, contrary to the utilities. And then all the examples, if I'm not mistaken, that you used, all the firms that you named, are U.S. firms. So, three related questions on this. One is: should EU regulation be tougher than U.S. regulation on these tech firms, since they operate globally? And if we're going to be too tough, are these U.S. firms going to leave Europe? Or, the flip side of it: how will we ever develop superstar firms in Europe like the ones that you mentioned? Okay, there are lots of questions in your question. So, yes, right now, if you look at the top 20 tech companies, 11 are from the U.S. and nine are from China, none from Europe. That's bad. We could discuss the causes for that, the reasons for that, but it's a very dangerous state of affairs. Now, the U.S. used to be the country of antitrust.
Actually, we learned antitrust from the U.S. Now, it turns out that in the last 30 years, antitrust has been in decline. It started with Reagan, not to mention Trump. It's making a comeback, but not a comeback I like, or any economist likes: it's a more populist comeback, as opposed to a comeback grounded in economics. So that's really an issue. Europe is now taking the lead, both in antitrust, as I mentioned, though there are similar rules now under the Biden administration, and of course for privacy as well. By the way, if you ask me about GDPR, I will say lots of bad things about GDPR, but I'm very proud, I'm very proud that Europe did GDPR. You know, that's one of the areas: we may not be the best in business, but at least we are moving in terms of regulation, and there is no question to me that we have to regulate. But we have to regulate in a smart way. So, you're right, those firms are global and they operate in many different markets. But they still have to abide by the European rules if they want to be in Europe, which is a big market. And you see that with GDPR now, which is spreading to different countries. Now, in the old times, I remember there was more collaboration: the antitrust authorities in Europe would collaborate with the American antitrust authorities, and they would not have the same rules, but they would try to work together on a given case, say the Microsoft case, on many of those cases. Now it has become harder; multilateral cooperation in general hasn't been very popular, and so on. But I think regulation is not going to prevent the firms from coming to Europe, from being established in Europe. Actually, they are already here. I mean, none of them comes from Europe, which I would say is a disaster, but they will want to sell to Europe. I think the reasons why there are no big tech companies in Europe are different, and they have to do with our talent leaving for the US. Tomorrow they will leave for China, when China has a less authoritarian regime, I guess. It has to do with venture capital; it has to do with many other things. There is not one single cause. And it also has to do with our top scientists not always working with industry; at least that's true in France, although it's improving now. So that's really a big issue for Europe, because a lot of the wealth which is being created is taking place elsewhere. But I don't think that if we just copy U.S. institutions, we'll have more of those big tech firms. So let's see if there are any questions from the floor at this point. Isabel, here in the front. Thank you very much for your excellent lecture; it's extremely interesting. One point I would like to raise is that at the moment we are talking a lot about sovereignty, European sovereignty, and that brings me to a point that you did not discuss in your speech, which is actually industrial policy. Let me give you an example. We are discussing a lot whether to put data in the cloud. And of course, putting data in the cloud raises severe sovereignty issues. But when we think about possible solutions, there basically exist only US-based cloud providers, even though we have discussed these issues for many, many years in Europe. And I remember discussions a long time ago about GAIA-X and so on. But for some reason, the European cloud is not there. So is this an issue for industrial policy, do you think?
I haven't studied the cloud computing market, but I agree; it's obvious that it's a dangerous state of affairs for sovereignty and privacy. I don't think we can mimic Amazon or Microsoft just like that, right? We tried to mimic Google, and that was a huge failure. And the point is that industrial policy in Europe is very often done the wrong way, even though we need it. Those are two different statements: I am very much against industrial policy the way it's done, but at the same time we need it. Now, the alternative was of course to obtain contracts with, say, American firms so that the servers are in Europe and so on, and try to make sure that the data don't go back to the US, and that's not easy to do. In terms of industrial policy, my view is that it can be extremely useful, and it has been done well. Surprisingly, it has been done well in the US, where you would not expect it, and in Korea and a couple of other countries, because they have thought about governance. In Europe, at least in France, we think about our friends as opposed to governance. So if you want to have a DARPA in Europe, for example, or the equivalent of DARPA, an NSF or an NIH, all those things which have given rise to all those innovations in the US, innovations which have benefited the entire world but mainly the US, of course, because of the tacit knowledge and the local networks generated there: those institutions have governance, very clean governance. It's not the European Commission; it's the scientists who are in charge, and they have a very broad mandate. They can finance a small number of highly promising projects. They can stop those projects, which governments in Europe usually don't know how to do: once you start a project, you don't want to close it, because you want to say that you were right in the first place. So you can stop the projects, you use scientists along the way, you evaluate, you finance things where the soil is fertile, in a sense. I have conducted interviews in France, asking: why are you financing this? And the answer was: oh, it would be nice in our city to have something on biology, or the environment, or something like that. And then my next question was: do you have the people who are going to make this happen? And then people looked at me: why? And I said: then it's not going to happen. Your cancer institute is not going to happen if you don't have the right scientists, who are going to bring their own students and their good colleagues and the like. It's not going to work. You can have beautiful buildings and make big announcements about your cancer institute, but it's not going to work. So that's the kind of thing that is important. Now, in Europe we have failed miserably, except for the ERC for research; there are a couple of examples like that. And also we think in a silo way: each country thinks that it should be doing its own industrial policy, when Europe has the right scale. France is too small, I'm sorry, France is too small. But everybody wants to keep control, and the European Commission wants to keep control with this new European Innovation Council, whatever it's called, instead of putting the scientists in charge; you get people who may not have the knowledge to actually choose and supervise the projects. Even though you have a scientific advisory council, it's just an advisory council.
So I have written at length, actually, in the book that you see in the back, in chapter 13, and also in the report for Emmanuel Macron that I wrote with Olivier Blanchard, trying to explain how you do industrial policy. So industrial policy is important, but it has to be done right. Well, thank you on that note. I think the glasses are full. And indeed, I highly recommend the book. Yes, industrial policy in Europe has been too parochial, right? So unfortunately, we have run out of time, and we know your time is very precious; we will still benefit from your presence here over dinner. I want to thank all of you who have been joining us online, and I want to remind you that the event will continue tomorrow morning at 9.30 Frankfurt time. And all of you who are here, please join us for dinner later today if you can. Thank you very much. Thank you, Professor Tirole. Thank you very much.