Good morning. I'm very happy to start this session, which is a very interesting one with a very nice paper by Emi, who challenges some of the welfare implications that one conventionally extracts from Calvo models of inflation determination. So I'll give Emi the floor immediately, and then Jordi will take it up. Okay, thank you very much. Thank you for inviting me. This is joint work with Jón Steinsson and two of our PhD students who are now gainfully employed, Patrick Sun and Daniel Villar. So until recently there was more or less an agreement in the community of central bankers that inflation targets should be about 2%. But recently there have been a lot of questions about the optimal inflation target by prominent people, including some in this room, about the possibility that perhaps we should consider a higher inflation target. And one reason for that is the idea that at the zero lower bound a low inflation target might constrain the ability of the central bank to achieve low real interest rates. So this is a topic which has moved from the realm of being something that central banks wouldn't even discuss, five years ago, when the issue was brought up, central banks would basically deny that they'd even thought about it, to something that has entered the serious realm of policy discussion more recently. So in the context of the New Keynesian models that are used in central banks around the world, the main cost of higher inflation has to do with higher price dispersion. The intuition for this is that if you have a lot of inflation in the economy, relative prices in the economy get messed up. Relative prices are the fundamental way of allocating production in the economy, and if there's a lot of inflation, the idea is that you get more price dispersion, and that this price dispersion fundamentally distorts the allocative role of the price system in the economy.
So that leads to inefficiency. And this force seems qualitatively reasonable: if you have a lot of inflation, relative prices might get somewhat distorted. But in the context of these standard New Keynesian models, these costs are actually incredibly large. Going from steady-state inflation of zero to 12% leads to a 10% loss in welfare. So this is bigger than even a large recession. It's no wonder that in these conventional models, you would end up with low inflation targets being optimal. So even though you might think of this as a qualitatively reasonable prediction, and in fact analyses of these types of predictions were carried out even before New Keynesian models, in the context of these New Keynesian models the costs are quantitatively extremely large. So in this paper, we're going to try to bring some new empirical evidence to bear on this topic. The evidence has been limited in the United States, partly by the fact that the data has been limited. The older work on this topic focused mainly on looking across different industries. But across different sectors, if you look at inflation dispersion, it's very much affected by things like oil price shocks, which affect one sector versus another. So what you need to be able to look at price dispersion that is not based on these kinds of cross-industry differences is micro price data. But until recently, the micro price data available for the United States only went back to 1988. This is a picture of US inflation going back to the late 1970s, and the previous micro price data went back only to the red line. And you can see that that was a pretty boring period in US inflation history.
So in particular, it did not include the great inflation period and subsequent Volcker disinflation that occurred in the early 1980s, which is the major event of US inflation history. So the data contribution in this paper was to extend this CPI micro data set back to the late 1970s, so that we could have evidence on this great inflation period. What we're going to do is use this new data to bring some direct evidence to bear on the question of whether we in fact do see greater inefficient price dispersion associated with high inflation. And we're going to propose a new way of analyzing this question by focusing on the absolute size of price changes, which I'll talk more about. So just to preview our findings, we find no evidence of increases in inefficient price dispersion during this great inflation period. I came into this project with the prior that the predictions of the New Keynesian model on this point were pretty extreme. So quantitatively, I wasn't expecting to see that big an increase in inefficient price dispersion, but the results were a little more stark than we anticipated, in the sense that we really find no evidence of this channel whatsoever. In particular, when we look at this absolute size of price change metric, which I'll show you how it's related, we find that it's really completely flat over this 1978 to 2014 period. And that's supportive of the view that there really wasn't a big increase in inefficient price dispersion associated with the large inflation we saw in the late 1970s. So in this sense, we find that the main cost of inflation in New Keynesian models is completely elusive in the data. And I certainly don't want to argue that these particular costs that are formalized in this New Keynesian model are the only costs of inflation that one could contemplate.
In fact, they may not even be the most intuitive costs by any stretch of the imagination. I'll talk more about this at the end. But I do think that the formalization of these costs in the context of microfounded loss functions has had an important role in the debate on optimal inflation targets. So I think that's where our paper plays a role. But there certainly are other important costs of inflation, and other important potential benefits of inflation, that we need to think more about. So let me start by briefly talking about why inflation is so costly in standard sticky price models. What we do is look at a very plain vanilla New Keynesian model, and the idea is just to give an intuition for why these costs of inflation are so large in these models. So it's a model where households consume and supply labor. There's monopolistic competition, and firms face costs of changing prices. And we compare what happens in a Calvo model, where there's a fixed probability of changing prices, to a menu cost model. We look at a pretty standard calibration along most dimensions. We consider two different values for the elasticity of substitution, four and seven, and I'll show you how that matters. And then we're going to look at different values of the inflation rate and look at the implications for welfare in this model relative to a flexible price benchmark. So here are the implications of the Calvo and menu cost models for welfare losses relative to this steady-state benchmark, for these two values of the elasticity of substitution, four and seven. The black lines, solid and dashed, are the menu cost model. The red lines, solid and dashed, are the Calvo model, with the highest line being theta equals seven, the elasticity of substitution being seven, and the lower line being theta equals four.
And then on the x-axis I have steady-state inflation, and on the y-axis I have the fraction of flexible price consumption lost associated with these higher values of inflation. What you can see is that while in the menu cost model, these are the two bottom black lines, the increase in inflation is essentially associated with no welfare costs. There is actually a slight slope, but you basically can't see it; it looks essentially completely flat. There are these very large increases in welfare losses in the Calvo model, and it's particularly large for the higher elasticity of substitution. There's some range in the literature in terms of what people assume; macroeconomists will often assume higher values for this elasticity of substitution. So where does that come from? Well, here are pictures of what happens to output and labor. Here I'm choosing theta equals four, so the less extreme calibration of the Calvo model. And you can see that as steady-state inflation rises in the Calvo model, you have a substantial decline in output at higher steady-state values of inflation. And at the same time, you have an increase in labor relative to this flexible price benchmark. So what is going on? Well, the bottom line is that in the menu cost model, the welfare costs are small and essentially unresponsive to moderate values of inflation, whereas in the Calvo model, these welfare costs rise rapidly with inflation. And the intuition for where these welfare costs come from is these distortions in the price system. What's happening in the model is that the role of prices is driven by shocks to productivity. There are idiosyncratic shocks to productivity (you see there's this A_it term), so there's variation across firms in their productivity.
And from an efficiency standpoint, who should be the firms that are producing more? It should be the higher productivity firms. And if prices are adjusting flexibly, that's what happens. But when prices are not adjusting flexibly, those kinds of adjustments don't happen. You have some firms that have very low prices but aren't really all that efficient, and other firms that have very high prices but are very efficient. And it's this allocative distortion across firms that increases very rapidly in the context of the Calvo model. So that's the intuition for why it is that in this model, price dispersion is so inefficient. As for the Calvo model relative to the menu cost model: the Calvo model, remember, is a model where there's just a fixed probability of price change. So even if you're very far from your optimal price, you still have the same probability of adjusting your price. Whereas the menu cost model has this sort of escape valve: if you get far enough from your optimal price, then you just pay the menu cost and adjust. So that's the central difference, and you can see how that essentially puts a limit on how inefficient your price can be. So this is a picture of inefficient price dispersion, and it's showing you the same thing. The black line is the menu cost model, and the red line is the Calvo model. The x-axis is showing steady-state annual inflation. And you can see that the menu cost model is essentially totally flat, whereas in the Calvo model there's a steep increase. Okay, so now let me talk to you a little bit about the data set. This was sort of an epic data construction process.
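Before turning to the data, the contrast between the two adjustment rules can be illustrated with a small simulation. This is a toy sketch, not the paper's calibrated model: each firm's log price gap drifts down with inflation plus idiosyncratic shocks, and the gap is reset to zero either with a fixed probability (Calvo) or when it leaves an Ss band (menu cost). All parameter values here (adjustment probability, band width, shock size) are made up for illustration.

```python
# Toy sketch: cross-sectional dispersion of price gaps under Calvo vs. menu cost
# adjustment. Parameters are illustrative, not a calibration of the paper's model.
import random
import statistics

def simulate_dispersion(rule, monthly_inflation, n_firms=1000, periods=360,
                        calvo_prob=0.10, ss_band=0.08, shock_sd=0.02, seed=0):
    rng = random.Random(seed)
    gaps = [0.0] * n_firms  # log deviation of each price from its optimal value
    for _ in range(periods):
        for i in range(n_firms):
            # Inflation and an idiosyncratic shock erode the real price each month.
            gaps[i] -= monthly_inflation + rng.gauss(0.0, shock_sd)
            if rule == "calvo":
                # Fixed probability of adjustment, no matter how large the gap.
                if rng.random() < calvo_prob:
                    gaps[i] = 0.0
            else:
                # Menu cost "escape valve": adjust once the gap leaves the Ss band.
                if abs(gaps[i]) > ss_band:
                    gaps[i] = 0.0
    return statistics.pstdev(gaps)  # cross-sectional dispersion of price gaps

for pi_annual in (0.0, 0.12):
    pi_m = pi_annual / 12
    print(pi_annual,
          round(simulate_dispersion("calvo", pi_m), 3),
          round(simulate_dispersion("menu", pi_m), 3))
```

With the fixed-probability rule, dispersion of price gaps rises sharply with trend inflation; with the Ss rule, the band caps how far any price can drift, so dispersion stays roughly flat.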
I want to tell you a little bit about the process, as well as give you a sense of some of the important features of constructing this data set, because there were some data quality issues that were very important for us to overcome. As I mentioned, the contribution of our new data was to go back in time relative to the original beginning of this CPI micro price data in 1988, adding the years going back to the late 1970s to capture this period of the US great inflation. What this involved was, first, at some point we realized that this data existed at the Bureau of Labor Statistics. It was literally in a filing cabinet. Jón and I had been hanging around the Bureau of Labor Statistics for many years, and there was one person who was about to retire and knew that there was a filing cabinet in the back of some room which had some microfilm, and we came to realize that these data were there. But the next challenge was that the last cartridge reader that could be used to actually read these data had, first, been lost. Then, once we found it, it had broken. And then, once we realized it had broken, we came to realize that the cartridges were so old that no modern cartridge readers could actually read them. And then we came to the problem that during this period, and this is a more general issue, BLS funding, in fact funding for all measurement of the US economy, has been falling. But the BLS had to pay for the entire data construction process, because as any of you know who may have worked with the government, you can't give more than $20 to the government or it's bribery. So by remarkable ingenuity on the part of the people who work in the research department at the BLS, we were, over the course of many years, able to bring retrofitted scanning machines into the Bureau of Labor Statistics, which allowed us to do this.
At any point in time, the optimal thing would have been to take the data out of the building and send it to China or India to have it digitized. But rest assured that the BLS is keeping price data safe, even if it's the price of Cheerios from 1982. Those data are not released and are kept confidential, so we can be confident that the prices will not be used for any bad purposes. So this was the process that we went through to get the data into a digital form. But even once we had these cartridge pictures, so that was the first thing, to get PDFs from the literal cartridges, there was still the process of turning that into machine readable form. And for that, again, because this all had to be done on site at the Bureau of Labor Statistics, it was necessary to use optical character recognition technology. The reason I'm telling you this is because, for any of you who've used optical character recognition technology, it's not a foolproof process. I think the only reason that this worked, and I'm fairly confident that in the end we actually got a pretty accurate data set, is because there are various sources of inherent redundancy in the data that allow us to check our process, which I will tell you about. So in the end, we got a data set: we were able to extend the data back from 1988 to 1978, so we include this great inflation period, and the full data set goes from 1978 to 2014. There's one six-month gap, which is just because, for whatever reason, we could not locate those particular cartridges. The nature of this data set will be familiar to those of you who've worked on similar data sets. It's literally looking at individual products in a particular store, say in a particular Safeway at a particular location in New York, and then following that particular product in that particular store over time.
And the sample varies from something like 80,000 to 100,000 prices per month. We're going to focus on the prices of those identical products over time. So as I said, there were these two phases: the scanning of these microfilm images from cartridges, and second, the conversion of these scanned images to machine readable form using optical character recognition software. And I should note, in relation to the paper this morning, that international trade played an enormous role in this last part. When we first got quotes on doing the last part from the US government, it was in the millions of dollars. But eventually, one of our co-authors found a company which ostensibly was located in California but seemed to run on Chinese time (they took Chinese New Year off), and they could do it at a cost two orders of magnitude less than the original quotes, at higher quality. So that was a fundamental input into the production process here. As I mentioned, I think it was absolutely crucial in the optical character recognition process that we were able to make use of important sources of redundancy in these scanned images. In particular, the images are images of what is called a price trend listing, and each one of the images from these cartridges presents prices for the whole last year, that is, 12 months of prices. And we see that in every month. So in principle, each individual price may appear in the data set as many as 12 times. That's the first important source of redundancy in the data set that allows us to do a lot of data checking: we can check whether the price that we recover in one month looks the same as the price for that month recovered from the next month's listing. The second thing is that on each of these cartridges, the prices are reported both in levels and in percentage changes. So that's a second source of redundancy.
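A hypothetical sketch of how these two verification checks might be combined is below. The field names and tolerances are illustrative inventions, not the BLS's actual record layout: a scanned price is kept only if it matches the overlapping value from an adjacent month's listing, or if the reported percent change agrees with the change implied by the reported levels.

```python
# Hypothetical sketch of the two redundancy checks described above.
# Field names and tolerances are illustrative, not the actual BLS layout.

def consistent_with_overlap(price, same_price_next_listing, tol=1e-6):
    """Check 1: the same month's price, recovered from two different listings."""
    return (same_price_next_listing is not None
            and abs(price - same_price_next_listing) < tol)

def consistent_with_pct_change(price, prev_price, reported_pct_change, tol=0.005):
    """Check 2: the reported percent change vs. the change implied by the levels."""
    if prev_price is None or reported_pct_change is None:
        return False
    implied = (price - prev_price) / prev_price
    return abs(implied - reported_pct_change) < tol

def keep_price(price, same_price_next_listing, prev_price, reported_pct_change):
    """Keep a scanned price only if at least one redundancy check passes."""
    return (consistent_with_overlap(price, same_price_next_listing)
            or consistent_with_pct_change(price, prev_price, reported_pct_change))

# An OCR misread (1.89 scanned as 7.89) fails both checks and is dropped:
print(keep_price(7.89, 1.89, 1.79, 0.056))  # False
print(keep_price(1.89, 1.89, 1.79, 0.056))  # True
```

The design choice is conservative: a price enters the data set only when at least one independent source of redundancy confirms it, which is what keeps OCR errors out of sensitive statistics like the frequency of price change.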
And this is kind of a boring detail, but I think it was very important in us coming up with a credible data set using these optical character recognition technologies. You have to remember that at the end of the day, we're going to be looking at statistics like the frequency of price change, which are obviously super sensitive to any kind of measurement error that we might have. So we're only going to include prices in our data set if they can be verified using one of these two procedures. Okay, so now let me present empirical results from this data set. After this process, which took many years, it was a very satisfying experience when we were finally able to create series where you really can't see the seam between the old data and the new data. And as I said, I think this is an achievement, because the statistics we're looking at, like the frequency of price change, should be quite sensitive to this issue of measurement error. Okay, so now I want to talk about how you might try to learn from this data set about the extent of inefficient price dispersion. The simplest thing that you might think of doing, if you want to assess whether higher steady-state inflation leads to more price dispersion, would just be to calculate price dispersion directly. So you might think of just looking at the cross-sectional variance or standard deviation of the level of prices. But in practice, this is problematic. The reason is that even if you look within reasonably narrow product categories, there's a huge amount of variation in prices that has nothing to do with inefficiency of the type we're focused on, but is just due to unobserved variability in product quality.
So for example, I have little kids, we buy a lot of milk, and milk seems like a pretty homogeneous product category. But even within milk, if you compare organic milk to store brand milk, these things can vary in price by a factor of two. And if you have time series variation in the extent of that kind of product heterogeneity, and in fact there's other evidence that there has been an expansion in product heterogeneity, that could really cause problems for your measure. This kind of efficient price dispersion, which is really associated with product quality, could completely dwarf the type of forces we're thinking about with inefficient price dispersion. So what do we get if we just calculate the cross-sectional dispersion of prices? We get this line, which shows an enormous increase in the interquartile range of prices at a given point in time. By the way, this goes in the opposite direction from what you would expect if you thought that higher inflation was leading to more price dispersion, because inflation has been generally falling over this time period. But we thought this was not very convincing evidence on the question, because an alternative interpretation would be that there's been an increase in unobserved differentiation of products over this time, and that might be dominating this particular measure. So this has been a major challenge in this literature. The older literature tended to look at cross-sectional variation in inflation; we're going to focus on a different measure, which is the absolute size of price changes. So why might the absolute size of price changes tell you something about inefficient price dispersion? The intuition is this: in these models, what's going on? You have a price that you set at some point, and then the price is sticky. And over time, various shocks occur.
There are aggregate shocks and idiosyncratic shocks, and your real price drifts away from its optimal value. Then at some point in the future, you get an opportunity to adjust your price. If your price has drifted really far from its optimal value, then when you finally do get to adjust, you will adjust by more. That's the basic intuition: how much you adjust, when you finally get a chance to adjust your price, is related to the difference between your current price and the efficient price. That's the intuition for why looking at the absolute size of price changes would provide a metric of the extent of inefficient price dispersion. And we verify that this intuition holds in these macro models. So as I said, the basic idea is that you're drifting away, and the question is how much you adjust, conditional on adjusting. That's going to be a larger amount if you've drifted further from your optimal price. So here is a graph of the mean absolute size of price changes as a function of the steady-state inflation level, and this verifies that the intuition works in the menu cost model and the Calvo model. In the menu cost model, what the graph is showing is that as steady-state inflation rises from 0 to 16%, the absolute size of price changes remains essentially constant at around 8%. The intuition for that is that in the menu cost model, you allow your price to drift away from its optimal value until you hit the Ss band, and then you adjust to the optimal value. But if the Ss bands are relatively invariant to steady-state inflation, then intuitively, you're always going to hit the Ss bands at around 8%, and so this absolute size of price changes is not going to change much as a function of inflation.
In contrast, in the Calvo model, where the frequency of price change is fixed, as steady-state inflation rises, prices are drifting further and further away from the optimal value before they adjust. And so this absolute size of price change is much more responsive to inflation. Intuitively, when inflation goes up, there are two levers that you can adjust in pricing: you could either adjust more frequently, the frequency of price change, or you could adjust by more. In some sense, this statistic is giving you a metric of that. In the Calvo model, the frequency of price change is completely fixed, so you're allowing yourself to drift much further away from the optimal price before adjusting. In contrast, in the menu cost model, you have these fixed Ss bands and you're adjusting much more frequently as steady-state inflation rises. Now, all the analysis I've been doing so far had to do with steady-state inflation, and you might worry that in a dynamic analysis things would turn out differently. But we also analyzed things in the context of a dynamic model, that's these dots here instead of the lines, and it turns out that these basic intuitions still go through. So what do we get in the data? This is a picture of the absolute size of price changes, that's on the y-axis, over time, going back to the beginning of our data set in 1978. The black line is regular price changes and the red line is price changes including sales. What the statistic is showing is that if you look at, for example, regular price changes, but the message is the same including sales, the absolute size of price changes has been right around 8% going back all the way to the late 1970s. And there's really no evidence that this number was higher in the late 1970s or early 1980s, despite the fact that inflation at that time was much higher: it was more like 10% a year.
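The two-lever intuition above can be sketched in a small simulation. Again this is a toy illustration, not the paper's model: price gaps drift down with inflation plus idiosyncratic shocks, and we record the absolute size of each adjustment, conditional on adjusting, under the two rules. All parameter values are made up for exposition.

```python
# Toy sketch: mean absolute size of price changes, conditional on adjustment,
# under Calvo vs. menu cost rules. Parameters are illustrative placeholders.
import random

def mean_abs_change(rule, monthly_inflation, n_firms=1000, periods=360,
                    calvo_prob=0.10, ss_band=0.08, shock_sd=0.02, seed=1):
    rng = random.Random(seed)
    gaps = [0.0] * n_firms  # log deviation of each price from its optimal value
    sizes = []
    for _ in range(periods):
        for i in range(n_firms):
            gaps[i] -= monthly_inflation + rng.gauss(0.0, shock_sd)
            adjust = (rng.random() < calvo_prob) if rule == "calvo" \
                else (abs(gaps[i]) > ss_band)
            if adjust:
                sizes.append(abs(gaps[i]))  # absolute size of the log price change
                gaps[i] = 0.0
    return sum(sizes) / len(sizes)

for pi_annual in (0.0, 0.12):
    pi_m = pi_annual / 12
    print(pi_annual,
          round(mean_abs_change("calvo", pi_m), 3),
          round(mean_abs_change("menu", pi_m), 3))
```

Under the menu cost rule, adjustments happen right at the edge of the band, so the mean absolute change is pinned near the band width regardless of trend inflation; under the Calvo rule, prices drift further before adjusting as inflation rises, so the mean absolute change grows with inflation.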
So intuitively, you might have thought that given the fact that inflation was so much higher, if your frequency of price change didn't adjust that much, maybe you would drift further away from your optimal price and you would adjust by larger amounts, conditional on adjustment. In the plain vanilla menu cost model, that doesn't really happen, because the Ss bands are pretty invariant, and that's essentially what we find in the data as well. Now, you might potentially be concerned that while this first moment of the absolute size of price changes wasn't adjusting, maybe some higher moments of the distribution were. Maybe while it's true that the average price change was not any bigger back in the 1970s and 80s, maybe there was some tail of the distribution that was larger. And these price changes might be particularly important from a welfare standpoint, because prices that are really, really far off might lead to really big inefficiency. But even if we look at the standard deviation of the absolute size of price changes, we find that there's not much of an impact. So in the model, here is the standard deviation of the absolute size of price changes relative to steady-state annual inflation. You see that there's this dramatic increase in the Calvo model, while in the menu cost model this is totally flat. But in the data, there's no sign that the standard deviation of the absolute size of price changes was any larger in the late 1970s or early 1980s. In fact, if anything, maybe there's an increase over time. So from the perspective of the mean and standard deviation of the absolute size of price changes, we really find no evidence of this channel, in terms of the cost of inflation, during the Great Inflation in the United States. And in this sense, this particular cost of inflation seems to be completely elusive in the data.
The last thing I want to show you is the frequency of price change. In some sense, this is the flip side of the absolute size facts that I was showing you: if the absolute size wasn't moving, then it must have been that prices were changing more frequently. And in some sense, these are more interesting graphs because they actually move. So a basic feature of the Calvo model is that the frequency of price change is constant; that's an assumption. Whereas in the menu cost model, as steady-state inflation rises, the frequency of price change rises. So we can look at this in the data as well as in the model. Annual CPI inflation here is plotted in red, on the right axis, and the frequency of price change from the data is in black, on the left axis. You can see that these co-move very closely. We can also ask how the simple menu cost model fits these facts in the data, and for this it's useful to break things out between increases and decreases. So here I'm plotting the frequency of price increases, that's the black line, and again CPI inflation, and the frequency of price decreases. You see that all the variation in the frequency of price change is coming from the frequency of price increases, not decreases. That is actually a prediction of the model, and so in this sense the simple menu cost model fits the data pretty well. I'll show you in a moment an actual picture of model versus data. One question for which this comparison between model and data is interesting: you might have thought that, given all the technological change that has occurred over this sample period, obviously a huge amount relative to the 1970s and 80s, if the main obstacle to adjusting prices was technological in nature, we would see an increase in the frequency of price change.
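As a side note on measurement, the frequency series plotted here can be computed from micro data in a straightforward way. A minimal sketch, with a made-up toy panel (the product names and prices are purely illustrative):

```python
# Minimal sketch: monthly frequency of price increases and decreases
# from a micro price panel. The tiny sample panel below is made up.

def price_change_frequencies(panel):
    """panel: dict mapping product id -> list of monthly prices (time order).
    Returns (share of product-months with an increase,
             share of product-months with a decrease)."""
    ups = downs = obs = 0
    for prices in panel.values():
        for prev, curr in zip(prices, prices[1:]):
            obs += 1
            if curr > prev:
                ups += 1
            elif curr < prev:
                downs += 1
    return ups / obs, downs / obs

panel = {
    "cheerios_store1": [2.99, 2.99, 3.19, 3.19],
    "milk_store2":     [1.79, 1.69, 1.69, 1.89],
}
# 2 of the 6 month-to-month comparisons are increases, 1 is a decrease.
print(price_change_frequencies(panel))
```

In practice one would compute these shares month by month, and separately for regular prices and sale prices, but the counting logic is the same.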
So the question I'm going to ask now is whether a simple menu cost model with a constant menu cost over the whole sample period can explain the data in terms of time series variation in the frequency of price change. And here's the picture of model versus data. Again, I'm breaking things out between the frequency of increases and decreases. The dashed black line is the frequency of increases in the data, and the dashed red line is the frequency of decreases in the data. The solid lines are the corresponding values from the model. You can see that, first of all, the model fits the data well in terms of capturing this correlation between the frequency of change and the inflation rate. And also, there's really no tendency for there to be a trend deviation between model and data over time. If this menu cost model with a constant menu cost were failing to capture the fact that menu costs should perhaps be falling over time with technological change, then you might expect the data to trend up relative to the model. And we don't see that. So from this perspective, it looks like despite all this technological change, sticky prices may be here to stay. And to me, what this suggests is that the costs of changing prices probably don't have a lot to do with technology. There's a lot of survey evidence that suggests that if you ask managers why they don't change prices, they talk about issues like upsetting their customers. And those might be things that you wouldn't expect to change over time, even with rapid technological change. So for regular prices, we find essentially no evidence of menu costs falling over time. The way in which technological change has changed price adjustment is in terms of sales. So here's the frequency of sales. Sales don't occur in every sector.
For example, sales leave the service sector, which of course accounts for a huge fraction of the economy, essentially untouched. But in several sectors, sales are important in the US and in other countries. In particular, in food, clothing and household furnishings, there's been a trend increase in sales over this time period. So in that sense, there has been an increase in price flexibility. But there's an ongoing literature on the extent to which sales contribute to aggregate price flexibility, and it seems like they're probably much less important than regular prices in terms of leading to fluctuations in aggregate inflation. And of course, there's also the fact that they just don't occur in large sectors of the economy, like services. Okay. So to conclude, the main conclusion of our analysis is that if we look at this admittedly narrow definition of the cost of inflation in the context of these New Keynesian models, having to do with price dispersion, we really find no evidence in the data for this particular cost. Clearly this is only a narrow part of the cost of inflation. My sense is that the case against having a higher inflation target comes from at least a trio of different sources. I do think that these kinds of micro-founded welfare losses based on New Keynesian models have played a role. Partly they play a role because even if you write down a model which has other costs or other benefits of inflation, these costs of inflation coming from price dispersion are so large that they tend to drown out the consequences of any other force that you might have in the model. So I think that's one part of the case against a higher inflation target. But there clearly are other kinds of intuitions that many economists have about higher inflation: the notion of a slippery slope toward truly high inflation, higher inflation volatility, the notion that people have a hard time making decisions with higher inflation.
And these are clearly things that we need to think more about modeling. Finally, when I ask myself why regular people dislike inflation so much, why inflation was public enemy number one in the late 1970s and the early 1980s, my sense is that it didn't have to do with price dispersion, but that at some level people saw prices going up and their wages not going up, and some form of money illusion essentially played a role in the political aspects. So I think people's negative feelings about higher inflation targets come from a variety of sources. Our paper is a contribution to thinking about one of these sources, but clearly thinking about these other sources is an important part of future research. Thank you. Okay. It's always a pleasure to read Emi and John's papers, and this one has been no exception. I'm very grateful to the organizers for having invited me to discuss this paper. So let me start. This paper falls into a literature whose ultimate aim, I guess, is to answer a very practical question for central banks: what is the optimal level of inflation? I mean, most central banks, at least in advanced economies, have as a target a 2% inflation rate, but we have to ask ourselves, does this make sense? Do we have a real justification for why it is 2% and not 3%, 1%, or just full price stability? So obviously this is an old question, but one which has been reexamined in recent years, and a number of factors have usually been put on the table in order to answer it. First, there are frictions that require the use of money in order to make transactions. That calls for following the Friedman rule, which implies that steady-state inflation should be negative if the steady-state real interest rate is positive.
The existence of menu costs would call for zero inflation on average, in order to minimize the waste associated with paying those menu costs. Inefficient price dispersion, which is associated with models in which price decisions are staggered rather than synchronized, would also call for an inflation target of zero. But if on top of that you add the possibility of a zero lower bound, and hence the risk of hitting the zero lower bound with all the terrible consequences that brings about, that calls, if anything, for adjusting your preferred inflation target upward at the margin, in order to reduce that risk. And on top of that, downward nominal wage rigidity would also call for an upward adjustment. Now, the present paper doesn't try to provide an answer to the question at the top of the slide, what is the optimal level of inflation. Instead, the authors focus on one particular aspect, the issue of inefficient price dispersion, and try to reassess the costs of inflation linked to that particular aspect. Emi made a great presentation here; it was very clear what they do. So the starting point, which is something that is well known in the literature, is that the New Keynesian model with Calvo pricing involves very large costs of deviations from price stability for any reasonable calibration, and Emi mentioned this particular result as a bit of a headline of the paper: if you go from an inflation rate of zero to 12%, that is equivalent to a permanent drop in consumption of 10%. That's a huge negative effect. Let me give you a different perspective on this cost of inflation implied by the Calvo model.
If you look, as many researchers do, at the welfare losses from fluctuations in the output gap and inflation in this Calvo model, and here I'm using a calibration from my textbook, you get a welfare loss function that looks like the one you see on the slide. So y tilde is the output gap and pi is inflation, and you see that the coefficient on inflation is much larger than the coefficient on the output gap. And many of us feel really uncomfortable about this; we just can't believe this is right. That's why, personally, in my own work I put less and less emphasis on these measures of welfare, and instead I focus on reporting the volatility of inflation and the volatility of the output gap, and the reader can make up his or her own mind on how good or bad that is. Now, where does this come from? Well, we understand where it comes from. It's the fact that the Calvo model implies a probability of price adjustment that is independent of the gap that any firm faces, at any point in time, between its current price and its desired price. So that means that an increase in average inflation leads to a rapid increase in price dispersion in that model, and that price dispersion in turn generates an inefficient dispersion in the quantities produced and consumed by households in the model. And this helps explain the fact that many papers in the literature that have examined the question of optimal inflation in the context of the Calvo model come up with a value that is less than 2%, and sometimes very close to zero. So that's the background of the present paper. What the authors do in this paper is go beyond that observation, which, as I said, is relatively well known, and ask themselves: let's forget about the amount of price dispersion in the model and how that price dispersion is related to inflation; let's look at price dispersion in an actual economy and how it relates to inflation.
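For reference, the loss function being described is, in the notation of the standard textbook treatment (my transcription, so treat the exact coefficients as indicative rather than authoritative):

```latex
% Average per-period welfare loss, as a fraction of steady-state
% consumption, from a second-order approximation in the basic
% New Keynesian model with Calvo pricing:
\mathbb{W} \;\simeq\; \frac{1}{2}\left[\left(\sigma+\frac{\varphi+\alpha}{1-\alpha}\right)
\operatorname{var}(\tilde{y}_t)\;+\;\frac{\epsilon}{\lambda}\operatorname{var}(\pi_t)\right]
% Under standard calibrations the inflation weight \epsilon/\lambda
% exceeds the output-gap weight by one to two orders of magnitude,
% which is exactly the discomfort being described.
```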
So they look at the US economy. But when one wants to address this question, there are two challenges. First, heterogeneity. You cannot compare the price of an orange with the price of a BMW, obviously. And even within a narrow category, as Emi made clear, there are differences in quality, in the format in which products are sold, and so on. So it's hard; you cannot just look at the raw data. In other words, the inefficient price dispersion, which one can think of as the dispersion in the gaps between price and marginal cost, is not really observable. And on top of that, there's a data problem, because the micro-price data available, at least until they wrote this paper, covered only a period of low and stable inflation. So, for the US, there was no way to answer the question that was raised. The response to the first challenge is to propose an alternative measure, the mean of absolute price changes, as a proxy for price dispersion. And here is, I guess, a model-based definition of this mean of absolute price changes, where p_{t,i} is the price of good i and C_t is the fraction of goods whose price is not adjusted in period t, so C_t corresponds, I guess, to the Calvo parameter, the measure of stickiness, in period t. Okay, so that's the measure that is proposed; let me call it Delta from now on. And the response to the second challenge is to extend the BLS price data set back to 1977. It took me three seconds to write that line; it took them many years and a huge amount of effort. So I won't have anything to say about the data construction, other than that I'm truly impressed and struck by how patient they are, actually. I wouldn't be able to wait so long to write the paper.
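The slide's formula does not survive in the transcript, so the following is only my guess at the model-based definition being described: Delta is the mean absolute (log) price change among the goods whose price actually changed, with C_t the fraction of goods whose price was not adjusted:

```latex
% Hypothetical reconstruction of the slide's definition (not verbatim):
\Delta_t \;=\; \mathbb{E}\!\left[\,\big|p_{t,i}-p_{t-1,i}\big|\;\middle|\;p_{t,i}\neq p_{t-1,i}\right],
\qquad
C_t \;=\; \Pr\!\left[\,p_{t,i}=p_{t-1,i}\,\right].
```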
Main findings: using this new, or rather extended, data set, the authors show that this Delta, the proxy that they use for relative price dispersion, shows no relation whatsoever with average inflation. That's the main finding, I would say. And on top of that, they show that the frequency of price adjustment increases with inflation. So what do we make of those findings? Well, clearly, those findings by themselves, at face value, imply a rejection of the Calvo model, at least of the standard Calvo model. The Calvo model, by assumption, implies a constant frequency of price adjustment, and one can show analytically that it implies a mean of absolute price changes given by the expression that you see here, which is increasing in steady-state inflation. That expression involves the steady-state value of the stickiness parameter and is, by the way, equal to steady-state inflation times the average duration of prices, so it has a very intuitive interpretation. On the other hand, the standard menu cost model, with large idiosyncratic shocks and so on but a constant menu cost, is consistent with this near invariance of Delta to inflation, and it's also consistent, amazingly so, with the pattern of frequencies of price adjustment that we observe in the data in the US. So the two corollaries from this are that the menu cost model is likely to be more suitable than the New Keynesian Calvo model for the analysis of what the optimal inflation rate is, and also, given that most of the literature so far has used the Calvo model, their analysis suggests that optimal inflation is likely to be higher than implied by those papers. An exception to that literature is Blanco (2015), which actually uses a menu cost model to address the issue of the optimal inflation rate and comes up with a number that is above 2%, I believe; in the latest version it is about 4%. Okay, so comments.
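The description of the Calvo expression, "steady-state inflation times the average duration of prices", can be written out explicitly; assuming the standard steady-state algebra, with C-bar the per-period probability of not adjusting:

```latex
% Steady-state mean of absolute price changes under Calvo pricing:
% each adjusting price has, on average, been fixed for the average
% duration 1/(1-\bar{C}), accumulating \bar{\pi} of drift per period.
\Delta \;=\; \bar{\pi}\times\underbrace{\frac{1}{1-\bar{C}}}_{\text{average price duration}}
% Illustrative numbers: with monthly data and \bar{C}=0.9, the average
% duration is 10 months, so 2\% annual inflation
% (\bar{\pi}\approx 0.17\% per month) gives \Delta\approx 1.7\%.
```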
As I said, I think this is a very nice paper, but no one has ever written the perfect paper, so as a discussant I have to find some way to show that this is not the perfect paper. So I think the evidence on Delta, this mean of absolute price changes, is extremely useful, and the authors make a very good case for the menu cost model versus the Calvo model. But remember that the original motivation for proposing this measure, this Delta, was to come up with a measure of price dispersion. Now what is price dispersion? Here I have a definition of price dispersion, which is the one that they also use in the paper: it's the cross-sectional variance of q_i, where q_i is the gap between the price and marginal cost. In terms of their simple model, it would be given by the sum of the price and the productivity term, which is idiosyncratic in their model. So that's what we would be interested in measuring, but obviously we don't observe the idiosyncratic productivity a_{t,i}, so we cannot measure it. Now why do we care about this particular measure of price dispersion? Well, there are many reasons why we may think this is interesting, but one reason is that this is the measure that enters approximations of steady-state welfare losses in models with sticky prices, and this is important, independently of the particular specification of price stickiness, whether it's Calvo, menu cost, and so on. Up to a second-order approximation around the flexible-price steady state, welfare is given by this expression that involves the output gap and the steady-state markup, all of which are endogenous and may vary with inflation in general, but it also involves, very importantly, this particular measure of price dispersion.
Now, Emi has given some intuition for why the two are likely to be related, this mean of absolute price changes and price dispersion, but what the paper doesn't really do explicitly is establish that this mean of absolute price changes is a good proxy for the variance measure of inefficient price dispersion. So strictly speaking, it does not establish that price dispersion has been invariant to inflation. Now, as Emi made clear, in the Calvo model, and also in the menu cost model, there is a link between the two measures. In particular, in the Calvo model you can derive an analytical expression for the two measures, and you have it here, so you can see that there is a monotonic relationship between inefficient price dispersion and this mean of absolute price changes. It's not linear, as you can see, there is a non-linearity, but there is a monotonic relationship. Now, does this hold in general? And hence, does this necessarily hold in the data? Well, not necessarily. And here I have two counterexamples. The first one is not particularly interesting, but it makes clear that the relationship doesn't have to hold. Imagine that half of the firms set a price p_{t,i} in period t that is proportional to the log of the money supply, that's little m, where t is time and sigma is just a constant, and half of the firms follow the pricing rule that you see below. Now, you can check that the mean of absolute price changes in this economy will be equal to inflation, which is given by the growth rate of the money supply. But the relative price dispersion will be given by sigma squared. These are two completely independent parameters. So in this cooked-up example, the two are completely decoupled.
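This first counterexample is easy to verify numerically. The slide's second pricing rule is not reproduced in the transcript, so the version below is my guess at its spirit: half of the firms post log price m_t + sigma, the other half m_t - sigma. Then every price changes by the money growth rate mu each period, so the mean of absolute price changes equals inflation, while cross-sectional price dispersion equals sigma squared, a completely separate parameter.

```python
# Toy check of the decoupling counterexample. The slide's second pricing
# rule is not shown in the transcript; here I *assume* the two rules are
# p_it = m_t + sigma and p_it = m_t - sigma, with log money m_t growing
# at rate mu each period (so inflation equals mu).
def two_rule_economy(mu, sigma, n_firms=100, n_periods=20):
    m = 0.0
    prices = [m + (sigma if i < n_firms // 2 else -sigma)
              for i in range(n_firms)]
    abs_changes = []
    for _ in range(n_periods):
        m += mu  # money supply growth = inflation in this economy
        new_prices = [m + (sigma if i < n_firms // 2 else -sigma)
                      for i in range(n_firms)]
        abs_changes += [abs(new - old)
                        for old, new in zip(prices, new_prices)]
        prices = new_prices
    mean_abs_change = sum(abs_changes) / len(abs_changes)          # -> mu
    mean_p = sum(prices) / n_firms
    dispersion = sum((p - mean_p) ** 2 for p in prices) / n_firms  # -> sigma**2
    return mean_abs_change, dispersion
```

With mu = 0.02 and sigma = 0.1, the mean absolute price change comes out at 0.02 (inflation) while dispersion is 0.01 (sigma squared): the two statistics are governed by independent parameters.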
Perhaps a more interesting counterexample would be a Calvo model with a time-varying C, the stickiness parameter. So look at the expression on top. Suppose that C was not constant, as in the baseline Calvo model, but varied, maybe with the average rate of inflation; price stickiness may be lower when inflation is higher. Then we could very well observe in our sample period that Delta, the mean of absolute price changes, is roughly constant over time and independent of average inflation. However, the measure of relative price dispersion would change over time with C. So again, these are just two counterexamples. I'm not saying that they prove in any sense that their point is not valid, but I just wanted to point to that little loose end in the paper. Now, some final comments. As Emi made clear at the very end of her presentation, this is a critique; the paper shouldn't be taken as an overall indictment of the New Keynesian model with Calvo pricing. It's a critique of one particular misuse of that model, a use for which it was certainly not intended when the model was developed to begin with, which is to analyze the welfare effects of different average inflation rates. So I think it's important to keep that in mind. And just to conclude, let me point to a practical question that the authors don't address explicitly in the paper, but one that I think should be of interest to central banks, and let's see how their evidence may inform the answer to that question. Should the Fed or the ECB raise their inflation target today? They have a 2% inflation target, at least. Is there a reason to keep it? Should they raise it? Let me be a bit more focused. Let's take it as given,
that, as many people have suggested and provided evidence for, the steady-state real interest rate has come down because of factors that have nothing to do with monetary policy. And let's suppose that it has come down from 2% to 1%; that's a relatively conservative assessment given the evidence provided in several recent papers. Okay, so no discussion is complete without a reference to one's own work, so let me complete the discussion. This is some preliminary work with Philippe Andrade, Hervé Le Bihan, and Julien Matheron from the Banque de France. What we do is estimate a full DSGE model, with all the bells and whistles and more, using US data and euro-area data, and we look at the optimal rate of inflation implied by that model, and we study how that optimal rate of inflation relates to the steady-state real interest rate, which is given by forces that, again, are independent of monetary policy, by real factors. So on the vertical axis you have the optimal rate of inflation, and on the horizontal axis you have the steady-state real interest rate. The case of interest now, the one that would seem of relevance, is going from a real interest rate of 2% to 1%, and that implies, in our analysis, a nearly one-for-one increase in the inflation target. So in the US it would go from 2.5% to 3.5%, and in the euro area roughly from 2% to something close to 3%. It's not exactly one for one, the slope is minus 0.9, but what that suggests, again based on the estimates of this model, is that these relative price distortions are not overwhelming at these low levels of inflation. What is really critical here in getting these results is the zero lower bound, the probability of hitting the zero lower bound.
If you increase the inflation target one for one, or nearly one for one, with the change in the steady-state real interest rate, essentially what you're doing is keeping the probability of hitting the zero lower bound unchanged, while disregarding the increased costs of price dispersion. Now, in their analysis, as I said, they don't address this issue, but if you take their menu cost model, it is clear that the inflation target should be raised, because there is no increase in the cost. Their analysis ignores the zero lower bound, but it's clear that if you just want to keep the probability of hitting the zero lower bound constant, you would want to raise the inflation target, because the menu cost model implies that there is no increase in welfare losses from raising the target. And how about the Calvo model? Well, it's not completely flat, but as you can see, at these rates of inflation it's nearly flat. So the increase in the costs associated with relative price dispersion is not that large in the Calvo model at low levels of inflation, like the ones that we have in the US or the euro area. As I said, great paper. Okay, thank you.