So welcome to this first session of the 7th annual research conference. I'm very happy to chair this session that I was asked to chair. And I would like to give the floor right away to Peter Karadi and the paper that he wrote together with his co-authors Raphael Schoenle and Jesse Wursten on price selection in the microdata.

So thanks a lot to the organizers for having the paper in the program. This is joint work with Raphael Schoenle and Jesse Wursten, and the usual disclaimer applies. What I'm going to talk about is price selection in the microdata, and the motivation is quite clear: this is a classic question in macroeconomics. The rigidity of the price level influences the real effects of monetary policy and also the amplification through demand-type channels. From previous research we know that prices change infrequently, and in standard price-setting models this low frequency of price changes implies that the aggregate price level is rigid. But in models where price rigidity is microfounded by a fixed cost, a menu cost of price adjustment, like the model of Golosov and Lucas, the price level can actually stay flexible even if only a small fraction of prices adjust. The reason is that in these frameworks large price changes are endogenously selected. Why is this the case? If there is a fixed cost of adjusting prices, then it is optimal for firms to concentrate on the products whose prices are the most misaligned and to adjust those; this way they can mitigate the cost of price adjustment. But then, if an aggregate shock hits, it is these most misaligned prices that get adjusted. They will change a lot, because they are not just adjusted in response to the aggregate shock but are also misaligned at the product and firm level.
So there is going to be an interaction between the idiosyncratic product-level shocks and the aggregate shock, and this raises the flexibility of the aggregate price level. What are we going to do in this paper? We revisit this Golosov and Lucas critique of price rigidity by looking at microdata, and we would like to measure the strength of the selection effect there. We do this by measuring price misalignment, to capture the product-level causes of selection, and by identifying aggregate shocks, the macro shocks that trigger these effects. What do we find? We find evidence for state dependence in price setting: the probability of adjustment increases with price misalignment unconditionally, that is, if we pool the data over time. But importantly, we find that selection, as we define it, is not there: conditional on an aggregate shock, the size of the misalignment is immaterial. Instead, we find that the state-dependent adjustment is best described by an active gross extensive margin, that is, a shift between the frequency of price increases versus price decreases. If there is a tightening, there are fewer increases and more decreases, and that accounts for most of the adjustment in the data. We think this provides some guidance for model choice and policy implications: these results are consistent with mildly state-dependent models with a linear adjustment hazard and actually sizeable monetary non-neutrality. In the talk I'm first going to present the framework, explaining in a bit more detail how we define selection, and then go to the data. In the data part I'm going to concentrate on supermarket data and estimate both a price gap proxy and an aggregate shock.
For the price gap we are going to use the distance from competitors' prices, and for the aggregate shock a credit shock. In the paper we show that our results are robust to using more general producer price index microdata, other price gap proxies, and other aggregate shocks. We then combine these data to look at selection and show some robustness. If I have time, I'll talk about the literature in a bit more detail.

So let me jump into the conceptual framework. The goal is to identify the channels through which the price level adjusts to an aggregate shock; basically, you should think of this as an accounting framework in an environment with sticky prices. The original Caballero-Engel framework identified two channels. One is the intensive margin: when an aggregate shock hits, all firms that adjust their prices anyway adjust by more, so there is a larger adjustment for each adjusting firm. The other is an extensive margin channel: there are going to be new adjusters. This channel is active only in state-dependent price-setting models. Our contribution is to separate this extensive margin channel into two. One is what we call the gross extensive margin, a shift between price increases and decreases. The other is the selection effect, which, as we define it, asks whether prices with larger gaps adjust with higher probability conditional on an aggregate shock. We also note that recent research has shown that in many of these models the dynamics after a monetary shock are quite similar, so it is sufficient, at least approximately, to concentrate on either the impact effect or the effect at a particular horizon. So the starting point is a model with a price adjustment friction: price adjustment is lumpy.
What Caballero, Engel and others pointed out is that it is very useful to concentrate on the price gap, defined as the difference between the price and a theoretical optimal price. This optimal price is influenced continuously by both product-level and aggregate factors, while the actual price, because of the adjustment frictions, adjusts only occasionally. Using the price gap, we can decompose inflation into the product of several terms: the density of the price gap, multiplied by the probability of adjustment (this is the lambda of x, the hazard function) and the size of the adjustment conditional on the gap, which is just minus the gap itself. This is illustrated in this figure, which shows how adjustment happens in an Ss pricing framework. The shaded area shows the density of the price gap. The purple hazard function shows the probability of adjustment: it is zero if the gap is smaller than a threshold and one above it. The dark shaded area then shows the adjustments, price decreases in this example, and these are what actually contribute to inflation. Here you can see that the decreases are large. But importantly, the fact that price decreases are large in the Ss framework does not by itself imply selection as we define it. Here you see two versions of the model: one, as before, where the Ss hazard is a step function, and one with a linear adjustment hazard. In the linear case the probability of adjustment is still increasing in the gap, but it does not jump as in the Ss framework. What we propose, and this has already been done in the literature, is that you can decompose inflation into two components: one based on the average size and average frequency of adjustment, and a covariance term.
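The decomposition just described can be written compactly. This is a sketch in my own notation (gap x, hazard lambda, gap density f), not necessarily the paper's exact symbols:

```latex
% Inflation from price adjustment: density x probability x size of adjustment
\pi \;=\; \int \lambda(x)\,(-x)\, f(x)\,dx
\;=\; \mathbb{E}\big[\lambda(x)\big]\,\mathbb{E}[-x]
\;+\; \operatorname{Cov}\!\big(\lambda(x),\,-x\big)
```

where the expectations are taken under the gap density f. The first term is average frequency times average size; the covariance term is zero under a flat (Calvo) hazard, so a positive covariance is the signature of state dependence.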
The covariance term asks how the size of adjustment covaries with the probability of adjustment; if this covariance is positive, you have state dependence in price setting. Importantly, in models like the Calvo price-setting model, where the hazard is flat, this covariance term is zero, so there is no state dependence. But this is not what we call selection. For selection, we instead ask which channels become active when an aggregate shock hits, and you can show that three terms become active. One is the intensive margin: all the prices that are adjusting anyway adjust by more. On the extensive margin there are two terms. One is the gross extensive margin: there are going to be more decreases and fewer increases. The selection term then asks how much the gaps of the new adjusters, those who are changing now because of the shock, covary with the size of adjustment. So, basically, is it true that larger price gaps are now being closed with higher probability than before? In the figure you can see that, in terms of selection, there is a big difference between a model like Golosov-Lucas, on the left-hand side, and a mildly state-dependent model with a linear hazard. In both cases the dark shaded areas show the position of the new decreases, the ones that get triggered by the aggregate shock. In the Golosov-Lucas case the new decreases are concentrated at a point with a large gap, so they change by a large amount, while in the mildly state-dependent model, on the right-hand side, they are dispersed, and in that case there is no selection. This table summarizes what I just said. In a time-dependent model only the intensive margin is at work.
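A sketch of how the three terms appear, again in my own notation: consider a small aggregate shock delta that lowers all desired prices, shifting every gap from x to x + delta. Differentiating the inflation integral at delta = 0 gives

```latex
\frac{d\pi}{d\delta}\bigg|_{\delta=0}
\;=\; -\underbrace{\int \lambda(x)\, f(x)\,dx}_{\text{intensive margin}}
\;-\; \underbrace{\int x\,\lambda'(x)\, f(x)\,dx}_{\text{extensive margin}}
```

The paper further splits the extensive term into the gross extensive margin and selection. The polar cases line up with the discussion: under Calvo, lambda' = 0 and only the intensive margin survives; under a linear hazard, say lambda(x) = lambda_0 + k|x|, the extensive term becomes k times the integral of |x| f(x), spread over all gap sizes (a gross extensive margin without selection); in Golosov-Lucas, lambda' is concentrated at the adjustment thresholds, so the extensive term loads on the largest gaps, which is selection.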
In Ss and convex hazard models, the intensive margin, the gross extensive margin and selection are all active, while in a linear hazard model selection is not active and the effects instead come from the gross extensive margin. So the next part of the paper measures the shape of the hazard function and the gap density in the data, and assesses the strength of the margins of adjustment both unconditionally and conditional on the aggregate shock. Let me jump into this. The data we are using is supermarket data from the United States. The advantage of this data is that it is very granular, with 170,000 products, it has wide coverage in the US, over 50 markets, and it has a long time series, available for 12 years. So it is very suitable for our purposes: the granularity gives us high-quality information about close substitutes of the exact same product, and the long time series lets us identify aggregate fluctuations. We do some cleaning of the data: we filter out temporary discounts and do some time aggregation to go from weekly to monthly data. To look at price gaps, we argue that a relevant component of the gap is actually observable. What we use is the distance from the average price of close competitors: we observe a particular price, we see the exact same product in other stores, and we use how far the price is from this average of competitors. We control for store fixed effects to account for regional variation or amenities, but the point is that if stores want to avoid price misalignment, this is a reasonable measure of the price gap. And they want to avoid it in both directions: they don't want their prices to be higher than the competitors', because then they face low demand, but they also don't want them to be too low, because then they could increase profits by raising the price somewhat.
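The competitor-based gap described above can be sketched as follows; the notation is mine and the paper's exact specification may differ in details:

```latex
\hat{x}_{ist} \;=\; \big(p_{ist} - \bar{p}^{\,-s}_{it}\big) - \hat{\gamma}_{is},
\qquad
\bar{p}^{\,-s}_{it} \;=\; \frac{1}{\lvert \mathcal{S}_{it}\setminus\{s\}\rvert}
\sum_{s' \in \mathcal{S}_{it}\setminus\{s\}} p_{is't}
```

where p_ist is the (log) price of product i in store s at time t, the average is taken over competing stores carrying the exact same product, and the gamma-hat term is an estimated product-store fixed effect absorbing store amenities and persistent regional differences.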
So formally the competitor reference price gap is defined as below, where we take the average price of the exact same product at competitors, after taking out store fixed effects. We also control for unobserved heterogeneity: from the gaps we deduct estimated product-store fixed effects, and this is actually important for our results. So this is one of the main figures of the paper. It shows how the probability of a price adjustment changes with the competitor price gap, and what you see is that, importantly, the probability of adjustment increases significantly with the distance from zero. It is also approximately, though not exactly, linear, positive at zero, and mildly asymmetric. These results are in line with previous results, which usually use narrower data. If you look at the average size of adjustment as a function of the price gap, you get the following figure. What is striking here is that there is an almost one-to-one linear relationship between the size of adjustment and the gap: if the firm faces a gap, on average it wants to close it. This shows that our measure is indeed a relevant component of the gap. The last figure shows the density of the gaps: despite the sales filtering and the store fixed effects, there is still sizeable dispersion, and the distribution has fat tails. One thing we can do right away, just using this hazard function and density, is the decomposition we proposed at the beginning, because for that we only need these objects and a calculation. Our goal is to separate the three channels. If you do this, what we find is that, among the relative contributions of the channels, the intensive margin is the most important.
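The hazard-estimation step, binning observed gaps and computing the frequency of adjustment per bin, can be sketched as follows. This is an illustrative reconstruction on simulated data, not the authors' code; the function name and the particular hazard used to simulate the data are my own assumptions.

```python
import numpy as np

def estimate_hazard(gaps, adjusted, n_bins=30, lo=-0.5, hi=0.5, min_obs=50):
    """Empirical hazard: bin the price gaps and compute the frequency of
    adjustment within each bin (the analogue of lambda(x) in the figures)."""
    edges = np.linspace(lo, hi, n_bins + 1)
    idx = np.digitize(gaps, edges) - 1
    centers, freq = [], []
    for b in range(n_bins):
        mask = idx == b
        if mask.sum() < min_obs:      # skip thinly populated bins
            continue
        centers.append(0.5 * (edges[b] + edges[b + 1]))
        freq.append(adjusted[mask].mean())
    return np.array(centers), np.array(freq)

# Illustration on simulated data with a linear-in-|x| hazard,
# lambda(x) = min(0.1 + 1.5*|x|, 1): an assumed shape, for the sketch only.
rng = np.random.default_rng(0)
gaps = rng.normal(0.0, 0.15, size=200_000)
p_adj = np.minimum(0.1 + 1.5 * np.abs(gaps), 1.0)
adjusted = rng.random(gaps.size) < p_adj

x_grid, lam_hat = estimate_hazard(gaps, adjusted)
# Plotting lam_hat against x_grid reproduces the V shape described above:
# positive at zero and rising roughly linearly with |x|.
```

Plotting the returned bin centers against the bin frequencies gives the empirical analogue of the figure: a hazard that is positive at zero and increasing in the absolute gap.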
The gross extensive margin increases the effect by around one-third of the intensive margin, and the selection effect in this exercise is minuscule. Just to emphasize: this means the extensive margin effect is important, we are not saying otherwise. What these results say is that it is mostly the shift between increases and decreases after a shock that drives it. The next thing we do is reassess the same question using an aggregate shock. So far we were doing it unconditionally, without looking at how the economy responds to an aggregate shock; now we want to see whether these results are borne out in the data. What we use for this is a credit shock: we look at a sizeable and exogenous tightening of credit conditions, which we identify using timing restrictions. The idea is to look at an increase in a measure of the excess bond premium (basically the component of corporate bond spreads over and above default risk, constructed by Simon Gilchrist and Egon Zakrajšek) without any contemporaneous effect on activity, prices or interest rates. This is how we identify an exogenous, causal shift in credit. Just to show you how the economy responds to a shock like this, let me first run a series of local projections, where we look at different variables of interest and ask how this credit shock passes through to their behavior. As controls we use one to twelve months of lags of the consumer price index, industrial production, the one-year Treasury rate, and the excess bond premium. These figures show what the impulse responses look like: there is a shift in the excess bond premium which dies out within a year.
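The local projections just described can be sketched like this. It is a minimal illustration on simulated data; the variable names and the toy data-generating process are mine, not the paper's. The idea is to regress the outcome at horizon h on the shock plus lags of the controls, and read off the shock coefficient at each horizon.

```python
import numpy as np

def local_projection(y, shock, controls, horizons, n_lags=12):
    """Jorda-style local projections: regress y_{t+h} on shock_t plus
    n_lags lags of each control; return the shock coefficient per horizon."""
    T = len(y)
    betas = []
    for h in horizons:
        rows_y, rows_x = [], []
        for t in range(n_lags, T - h):
            lagged = controls[t - n_lags:t][::-1].ravel()  # lags 1..n_lags
            rows_x.append(np.concatenate(([1.0, shock[t]], lagged)))
            rows_y.append(y[t + h])
        X, Y = np.array(rows_x), np.array(rows_y)
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        betas.append(beta[1])                              # shock coefficient
    return np.array(betas)

# Illustration: y responds to the shock with geometric decay 0.8**h.
rng = np.random.default_rng(1)
T = 2_000
shock = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.8 * y[t - 1] + shock[t] + 0.1 * rng.normal()
controls = np.column_stack([y, shock])
irf = local_projection(y, shock, controls, horizons=range(0, 6))
# irf should trace out approximately 0.8**h across horizons.
```

In the paper's application the outcome would be, say, log industrial production or the core CPI, the shock the identified excess bond premium innovation, and the controls the lag set listed above.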
Interest rates, that is monetary policy, respond to it and ease, but not enough to offset the effect on industrial production, which is sizeable and persistent. The core CPI, which is the price index we look at, declines slowly, with a peak effect around 24 months after the shock. If you run the same regression using our supermarket price index, just to show that it makes sense, the results are similar to what happens to the core CPI; the effects do not appear before 24 months. This motivates us to look at this 24-month horizon, which is basically the peak effect of the shock. To look at selection, we want to combine the product-level proxy with this aggregate shock. The question, again, is whether the new adjusters after the shock have large gaps. Our approach is to treat selection as an interaction between the aggregate shock and the product-level proxy, and to ask whether it influences the probability of price adjustment. The linear probability models we run are shown here. The dependent variables are indicators of a price increase or a price decrease between period t and h periods in the future, for a particular product in a particular store. As explanatory variables we have the price gap in the month before the shock, to control for the regular effect of the price gap; the excess bond premium, to control for the effect of the aggregate shock on the probability of adjustment, which captures the average effect, the gross extensive margin; and the interaction term, which is our focus: it asks whether, when an aggregate shock hits, the prices that change have larger gaps. This is the selection effect. We also have various controls; for example, we control for the age of the price to control for time dependence.
We have a series of aggregate controls, the same ones as in the local projection exercise I showed you before, as well as product-store fixed effects and calendar month fixed effects, to control for unexplained cross-sectional heterogeneity and seasonality, and we cluster standard errors across categories and time. This is the main table of the paper, which shows, in the left column, how the probability of price increases and, in the right column, how the probability of price decreases respond to these various factors. What you see is that the gap itself has a significant effect, the shock itself also has a significant effect, but their interaction term is insignificant, and this is consistent across various robustness exercises. In terms of quantities, these effects are sizeable: if you move the gap from the first quartile to the third quartile, the probability of a price increase is 26 percentage points lower. The adjustment on the gross extensive margin is also sizeable: a one-standard-deviation credit tightening, which is 33 basis points, decreases the probability of a price increase by one percentage point and at the same time increases the probability of a price decrease by a symmetric one percentage point. So we find no selection, but some evidence of time dependence. If we put together the theoretical and the empirical results, what we find in the data is an effective intensive margin and an effective gross extensive margin, but no selection, and this is consistent with a model with a linear hazard but inconsistent both with time-dependent models like the Calvo model, which assumes a constant hazard, and with Ss and convex hazard models like Golosov and Lucas.
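A minimal sketch of the linear probability model with the interaction term, run on simulated data built to have a gross extensive margin but, by construction, no selection. The coefficient magnitudes are assumptions, and I use plain homoskedastic standard errors for brevity, whereas the paper clusters by category and time and includes fixed effects.

```python
import numpy as np

def lpm_selection(increase, gap, shock):
    """Linear probability model sketch:
    1{increase} ~ const + gap + shock + gap*shock, by OLS.
    Returns coefficients and homoskedastic standard errors."""
    X = np.column_stack([np.ones_like(gap), gap, shock, gap * shock])
    beta, *_ = np.linalg.lstsq(X, increase, rcond=None)
    resid = increase - X @ beta
    cov = resid.var() * np.linalg.inv(X.T @ X)
    return beta, np.sqrt(np.diag(cov))

# Simulated data: the shock lowers the probability of an increase uniformly,
# regardless of the size of the gap (no selection built in).
rng = np.random.default_rng(2)
n = 100_000
gap = rng.normal(0.0, 0.15, n)
shock = rng.normal(0.0, 1.0, n)              # stand-in for the credit shock
p_up = np.clip(0.25 - 0.5 * gap - 0.02 * shock, 0.0, 1.0)
increase = (rng.random(n) < p_up).astype(float)

beta, se = lpm_selection(increase, gap, shock)
# Expected pattern: beta[1] < 0 (larger gap, fewer increases),
# beta[2] < 0 (tightening, fewer increases), beta[3] close to zero.
```

The pattern in the main table corresponds to significant gap and shock coefficients with an insignificant interaction, which is what this data-generating process delivers by design.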
Let me quickly show you one robustness exercise. One thing you might worry about is that the linearity assumption in the regressions is a bit too strong. Here we relax it: instead of assuming that the relationship between the gap and the probability of adjustment is linear, we create groups of firms with different gap sizes and look at how the probability changes with the average gap in each group. What you can see is, first, that the relationship is quite close to linear, though not exactly: the red lines show that, unconditionally, there is this linear relationship between the gap and both price increases and price decreases. Second, for the interaction term, the blue lines, the relationship is insignificant at zero. So if we relax the linearity assumption, the results survive. In the paper we run a battery of other robustness checks, and this result survives in all cases.

I have some time to talk quickly about part of the literature. In the literature, selection is a robust prediction of many models with steep hazard functions; the classic papers are Caplin and Spulber and Golosov and Lucas, and in more recent iterations it has been found that selection comes back. For example, I have work with Adam Reiff where we assume that the idiosyncratic shocks have fat tails, as in the famous paper of Virgiliu Midrigan, but we find that if these shocks take a particular, robust form, selection comes back. Similarly, Bonomo and co-authors find that if you have multiproduct firms, but firms still face a fixed cost of adjustment for each product they change, the selection effect can come back. Importantly, selection weakens if the hazard function is flatter because of information frictions or random menu costs. In our paper we address the same question but treat it as an empirical one: what does the hazard function look like? We are not the first to look at the hazard function. There are two strands of literature. One looks at the hazard function implicitly, estimating density and hazard by matching moments; Francesco, for example, has a great paper doing that, which finds that the hazard shape that best fits the data is quadratic. There are also papers looking at explicit hazard functions, and interestingly these tend to find hazard functions that are close to linear. Other papers try to look at selection directly by constructing informative models, including work by Luca Dedola and co-authors.

So let me conclude. We looked at granular supermarket data, and PPI data in the paper, to measure selection. We find evidence for state dependence but no evidence for selection; instead the effects come from the gross extensive margin, and this is consistent with linear-hazard, mildly state-dependent models. The implication we draw is that a shift between price increases versus decreases is what determines the extensive margin, and the shape of the hazard function is informative about the strength of this shift, so it makes a lot of sense to concentrate on and learn more about the shape and the slope of the hazard function. Thank you very much.

Thank you, Peter. Now we have Francesco Lippi, from the Einaudi Institute for Economics and Finance, giving the discussion of this paper.

Okay, so thank you very much for the invitation. It's a pleasure to be here, and it's a pleasure to discuss this paper, which is very interesting, especially for someone like me who has been working with these models for many years. So what do they do in the paper?
they consider sticky prices and study they want to answer questions about the propagation of monetary shocks in particular credit shocks but think more generally how monetary policy works like Isabel put it and they have great data that are useful to empirically analyze what firms actually do when they come to changing prices that is whether or not they change the prices what Peter's called the extensive margin and how much, by how much prices are changed so first they characterize firm behavior then they discuss the implications of this behavior for the propagation of aggregate shocks which is indeed kind of the name of the game in this literature you have two polar models you have Calvo where firms adjust prices just because you know the Calvo ferry arrives and they adjust prices obviously we're never happy nor proud to use such a model like you know you don't tell it to your friends in the business industry you're doing that because I think you guys are doing this but on the other hand you have goals where you know everything there's a fixed cost and you only adjust when you reach that critical threshold that's also an incredibly exaggerated model you know it's probably not going to be true and it's not true and Peter I think the main interesting result of this paper is to show very clearly both models are wrong they're wrong big way and you can do something more with this data you can kind of get a very precise idea of where you are in this space span between these two extremes that's what I'm trying to do so specifically what they do they measure these X's these desired adjustments okay so a firm is happy if X is zero I'm putting a hat because that's really kind of the empirical measure of the X at some point I'll bring in the theory X and they embrace this Caballero angle very nice framework where if you tell me your X I'm going to tell you what's the probability that you adjust so in the extreme models this is an extreme it's a simple object however this 
probability is just a number it doesn't depend on X it's a constant function flat Gullos of Lucas it's kind of an L shaped object zero and then you reach a critical X bar and it shoots to infinity in continuous time to one in discrete time okay that's what they do so they estimate these lambdas what do they find they find that it's linear it's very nice Eichenbaum, Jaimovich and Rebello found basically the same using just one supermarket data linear in the absolute value of X and then they study then they want to do more they say well but what if I have aggregate shocks how do the aggregate shocks affect this probability of price changes and so they run linear regressions of the probability of adjustment on X, Epsilon and the interaction term and I already sort of summarized what are the main results so let me summarize you know like the framework that we're using to think about this problem is this caballero angle and it's a nice framework because both Calvo and Golozov and Lucas are nested as limiting cases so there's a firm who control the Zex firm I and now I don't have the head because so M star is like the ideal markup and P is the price the marginal cost you can see how the aggregate shocks that they have will affect the decisions of the firm because it will affect marginal cost if credit becomes more expensive you know my marginal costs are higher I may want to change my prices that's the idea assumption in this model is if there is no you know these are models written before 2021 so there was very little inflation so X's are bouncing around because of idiosyncratic shocks that's mostly what it is and then the theory the optimal policy will produce one of these hazard functions the reason why it's probabilistic and it's not zero one there's several ways to justify this you know you could think that these fixed costs are random you draw them from a distribution so you know Peter and I have the same max but he draws a low adjustment cost then he adjusts I 
don't draw it I don't adjust that's the idea and intuitively I think it makes sense in many models the bigger the gap the bigger the probability you know you have bigger motives for doing something another big assumption in this model if you decide to adjust you close the gap you do you change your price such that you jump on top of you know mu star that's not an assumption that has to be true in all models for instance models with sales or models with high inflation you do some front loading when you adjust you don't close the gap you may want to start with a high price these are just different models okay so if you give me one of these models if you give me those primitives then you know I can basically aggregate for the economy and work out what the economy will look like what is F by the way F in this model will be convex you know I plot I wrote down a little note for the Kolmogorov equation just like he finds you can compute the aggregate frequency you can compute several observable moments like distribution of price changes and these results in figure two from the paper are real beauty I think they're a real beauty because if you give them to someone like I mean for me they're a real beauty because I work on these things so you know you give me these three figures first of all I can say okay first figure great they're closing the gap the slope is minus one it's just like the theory suggests remember they didn't have to you could have seen something very different the middle figure there are no Calvo Ferris again I feel good my brother is in business I don't want to tell him that I'm working with Ferris you know today I'm at the ECB I want to tell him we are serious researchers looking at data seriously no Ferris that's what we find and the density is convex and symmetric just like the theory will suggest so if you are like a caballero angled guy you can stop here okay let's you know analyze how a shock propagates in this model you don't need to I mean they 
spend a lot of time in the paper discussing selection which you know it's okay but selection is a little bit like looking at a soccer game and counting shooting on the goal that's nice but really you want to count goals because that's in the end what determines whether or not you win so I'm happy to do stuff about selection but there's something more interesting once you give me a GHF a generalized hazard function that's all I need to study how shocks propagate so this is I'm using some theory results these models can be solved pencil and paper pretty accurately so these are some results I produce using you know recent paper we have so in the left hand side you see the primitives these are three generalized hazard function you know the flat one here the black one is is calvo obviously probability doesn't depend on the state then there's the goal of Lucas like the rugby goal and then there's a linear absolute value hazard function okay just for fun in the middle panel is what these three functions predict in terms of shape of observable price distribution of the size of the adjustments and on the right panel you see like the implications for the propagation of aggregate shocks and this is not one simulation these are analytic results I can tell you like you know it's a proposition so if you give me these models I can tell you exactly how much bigger the output response is in calvo the black line compared to goals of Lucas it's six times bigger in terms of the area and the linear hazard is actually you know in between a little bit closer to goals of Lucas so in spite of the lack of selection in spite this model has the state dependence the key is state dependence the fact that this hazard is not flat which means that ages that need to respond will respond when there's a shock and does it matter here's the answer it does so what they do they focus a lot of their analysis on selection in particular they define selection I'm sorry it's a bit small but you saw it in 
Peter's light they run these linear probability models where say probability of price increases is run on a measure of the aggregate shock of the gap and of their interaction and the interaction is their intuitive idea of whether or not you have selection because given an aggregate shock the probability that you adjust should be bigger if the gap is bigger they don't define and that's what they find they don't find this coefficient on the selection term being significant okay so what do I think about this to me it's a bit of a distraction but let me think about it nevertheless so let me explore the theory behind these regressions the theory behind the metrics should I expect or should I not expect the interaction term to matter in these regressions well remember the theory is the x includes the aggregate shocks the way they measure the x's this x hat is the difference between one firm x and the other firm x now because everybody is affected by the aggregate shocks their x's do not have the aggregate shock by construction it washes out so let me construct the theory base gap which is their x hat plus their epsilon epsilon is a credit shock I'm putting an alpha because you know the units x is measured in units of price deviations the credit shocks you need to have some elasticity so suppose we ask some good micro guys about the alpha anyway there's an alpha there so now the hazard function that I expect to be working in this data is some function of x hat plus epsilon so what's the question should we expect the interaction term well it depends on the shape of lambda if lambda is linear then you know a linear function doesn't have any interaction term it's a linear function all the higher derivatives are zero now here is a bit tricky well it's not really linear it's linear in the absolute value the absolute value is not a linear function so actually you know if you it's also not differentiable at zero but let's say you do something quick and dirty and you kind of 
approximate the absolute value, because it's a non-linear function. Let's do something simple to make this clear: suppose the hazard were quadratic. Well, then of course you should have an interaction term, because we all know the math for the square of a binomial, but you should also have x squared and epsilon squared, not the levels. That's why I find it useful to think about the theory: it guides me to what I should look for in the empirics. So I simulated some data from a model; it was cheap, so I simulated billions of data points. In the first regression, these are like asymptotics, which is why the t-statistics are humongous. If I estimate the hazard function itself, well, that's the model I'm using, that's the true model, so of course I recover it, and I don't see any interaction with the product term. I shouldn't; I'm just checking that my code is written correctly, and hopefully it is. Also, this is a bit of a detail, but the model tells you that if your gap is high today, you adjust today. Now, I understand they have real data, and we saw that the shock takes time to realize, so they look at effects after 24 months, but you have to be careful in pushing the horizon for the outcome so far away, because if you push it really too far, at some point everybody will adjust. In my simulation, if I put one year, everybody adjusts within the year, because I'm using a model with two adjustments per year. So what if I mis-specify the model, and instead of estimating the absolute value I just throw in x, epsilon, and their product? I actually find that the product is kind of borderline significant, much less so than the x. Now, here there is also an issue of inference: I have billions and billions of data points with many aggregate shocks. So let me assume instead that in the ten years they have, they can observe 20 aggregate shocks, and basically divide all my t-stats by 10; then I get that the interaction and epsilon are at best borderline significant. So I'm just
doing this to say: look, what you find really depends, first of all, on functional form assumptions and the specification of the regression, and also on the number of observations. In my simulations it's harder to estimate the effects of epsilon, because I have a large cross-section but not that long a time series. So in the end, I was thinking: maybe Jean Tirole is in the room and he doesn't know about monetary policy, at least not what we do. So, are we really lost? Did we learn anything in the past 20 years? I think we did. This nice paper shows that, yes, price setters are attentive; yes, their decisions depend on the state. Do we care about time-dependent versus state-dependent models? I think we do. If I were a policy maker today, with these big energy shocks, the Covid supply bottlenecks, the trade wars, models like this behave very differently from time-dependent models where firms are just waiting for the Calvo fairy to make an adjustment. And we have lots of evidence about when these big events occur: firms are fast. Think of the big surprise appreciation of the Swiss franc; Peter's own paper on VAT changes in Hungary; Alvarez, my co-author, on Argentina, where utility prices changed dramatically and prices adjusted fast. The bottom line is: when there are big shocks, there are big reactions. I think as economists we should be proud of this, because it's reassuring about our job; if firms were just doing things just because, it would be a bit depressing. My final comment is something I would do with this data. Even in the little model I use, the aggregate response is very small. One way to pump up these responses is to think of models with strategic complementarities, where each firm's decision about its little x also depends on the big X. That's a different idea, and I think it's a really important big question that they could do a lot with, given the data they have. Thank you.
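Francesco's two main points, that the shape of the generalized hazard governs selection, and that whether an interaction term shows up depends on the hazard's functional form, can be sketched in a small simulation. This is purely illustrative (parameter values, shock sizes, and the shock process are my own assumptions, not taken from the paper or the discussion):

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Part 1: three stylized generalized hazard functions Lambda(x) ---
# Probability of adjusting as a function of the price gap x.
# All parameter values are illustrative, not calibrated.
def calvo(x, p=0.1):
    return np.full_like(x, p)                    # flat: state-independent

def golosov_lucas(x, band=0.15):
    return (np.abs(x) > band).astype(float)      # adjust iff outside the band

def linear_abs(x, slope=0.8):
    return np.clip(slope * np.abs(x), 0.0, 1.0)  # hazard linear in |x|

x = rng.normal(0.0, 0.10, 1_000_000)             # cross-section of price gaps
stats = {}
for name, lam in [("Calvo", calvo), ("Golosov-Lucas", golosov_lucas),
                  ("linear |x|", linear_abs)]:
    adjust = rng.random(x.size) < lam(x)
    # a price change of -x closes the gap, so |change| = |gap| of adjusters
    stats[name] = (adjust.mean(), np.abs(x[adjust]).mean())
    print(f"{name:14s} freq={stats[name][0]:.3f}  mean |dp|={stats[name][1]:.3f}")

# --- Part 2: when does the interaction term appear? ---
# Evaluate the hazard at x + eps and regress the hazard values on
# x, eps, their product, and their squares. A hazard linear in (x + eps)
# has a zero cross term; a quadratic hazard mechanically produces one,
# together with x^2 and eps^2 (the square-of-a-binomial point).
n = 200_000
xg = rng.normal(0.0, 0.3, n)                     # idiosyncratic gap
eps = rng.normal(0.0, 0.1, n)                    # aggregate shock (iid here)
X = np.column_stack([np.ones(n), xg, eps, xg * eps, xg**2, eps**2])
betas = {}
for name, lam in [("linear", lambda g: 0.2 + 0.5 * g),
                  ("quadratic", lambda g: 2.0 * g**2)]:
    betas[name], *_ = np.linalg.lstsq(X, lam(xg + eps), rcond=None)
    print(f"{name:9s} hazard: coef on x*eps = {betas[name][3]: .3f}")
```

Part 1 shows selection directly: under Calvo the adjusters are a random sample of gaps, while under Golosov–Lucas only the largest gaps adjust, so the average absolute price change is much bigger. Part 2 shows that 2(x + eps)^2 = 2x^2 + 4*x*eps + 2*eps^2, so the cross-term coefficient is exactly 4 under the quadratic hazard and exactly 0 under the linear one.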
Thank you, Francesco. Maybe, Peter, you want to react quickly before we open up to the whole audience, so people can still think a bit about questions, but maybe you want to react directly to what Francesco suggested. On almost everything we agree, and I think it's very important to emphasize that there is this potential confusion about what we say and don't say. Importantly, when we say that selection is not there, we are not saying that state dependence is not there, and I think it's very useful that you pointed this out. What we show is that there is this relationship between the gap and the probability of a price change, and it will have an influence on how the aggregate economy responds, so we need to move away from a Calvo model. In your discussion you suggested we figure out much more of the theory behind the metrics; I think this is well taken, and we want to go there. Just to reassure you: it's true that if you go 24 months out, the probability of price adjustment is much higher than if you are looking one month out, but I can assure you that we still have this V-shaped relationship, so some of the effects are still there. And I think you are absolutely right about strategic complementarities; this data might be useful for learning about that as well. Thanks a lot again. Thanks to you, it's a great paper. So, are there any questions from the audience? There's also the Slido tool, of course, so everybody who is attending online, please feel free to ask your questions in Slido. If not, I have a question, actually, from a practitioner's view. You did your study basically on US data, covering a span from 2001 to 2012, when inflation was relatively low in comparison to now, and if you look at the shocks, nowadays we are confronted a lot with, let's say, aggregate supply shocks. How far would your results change
with, let's say, a higher inflation regime, or a regime shift where you actually have more of these aggregate supply shocks than demand shocks, for example? Yeah, so thanks for the question. One thing is that within this PRISMA network, the price-setting microdata analysis network, we have actually acquired similar data for the euro area, and we found that the results are qualitatively comparable, and we can compare quantities as well, so this is something I want to point out. Unluckily, or luckily in some sense, inflation was low also for the period for which we have the data, so we cannot directly look at the evidence on how the results change, but we would like to in the future. We can, however, use theoretical models to get some idea of how the results would change if the shocks were larger or inflation were higher. And what we find is that if the shocks are large, then there will be a much larger effect, many more firms would adjust, and also if trend inflation is higher, then we should expect more firms adjusting. So overall, the slope of the Phillips curve should actually be higher in these situations, and quantitative models could try to give quantitative numbers for this, though that might depend a lot on the particular details and assumptions. Thank you. Are there any questions from the audience?
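The mechanism Peter describes, that larger aggregate shocks trigger many more adjustments under state dependence, can be illustrated with a tiny sketch (a hypothetical hazard with made-up parameters, not the paper's estimates):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustration: under a state-dependent hazard (here linear in |gap|,
# illustrative slope 0.8), the expected fraction of firms adjusting
# rises with the size of the aggregate shock, whereas under Calvo it
# would be constant by construction.
x = rng.normal(0.0, 0.10, 1_000_000)   # idiosyncratic price gaps
freqs = []
for shock in [0.0, 0.05, 0.10, 0.20]:
    # the shock shifts every gap; average the hazard over the cross-section
    freq = np.clip(0.8 * np.abs(x + shock), 0.0, 1.0).mean()
    freqs.append(freq)
    print(f"aggregate shock {shock:.2f}: expected adjustment frequency {freq:.3f}")
```

The adjustment frequency rises monotonically with the shock, which is the sense in which the effective slope of the Phillips curve steepens when shocks are large or trend inflation is high.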
Yes, there is. I think there's somebody with a microphone who could come over. Yes, maybe introduce yourself quickly. Hi, I'm Alisdair McKay. I was hoping Peter could elaborate on the motivation for looking at the dynamics conditional on an identified shock, because, as Francesco pointed out, we can calculate the aggregate dynamics directly from the GHF, so I just want to hear the rationale for the value of the identified shock. I think in some sense there is some difference between the philosophy of the paper and, for example, Francesco's approach. Here we wanted to establish empirical results with minimal assumptions on the economy. It's true that if we are ready to use the assumptions of the model, then we basically have everything we need, but if we are not ready to make the strong assumptions of a structural model, then it is useful to establish some evidence using an aggregate shock and just look at what happens. And in some sense these results are borne out, more or less, by the theoretical models that we use, so it supports these models. You could then say that we wouldn't need to do this, but I think it's still useful to deliver structural effects that we can compare to models, to different types of models as well. We also hope that for an audience that is not that deep into structural models, these results are interesting and provide some intuition. Can I add one thing? Yes.
So my take on that was, you know, after their figure two I would stop there and be happy, and be able to calculate aggregate impulse responses; of course that's the data-generating process. But one thing I find interesting is that they are in a way testing the model. We know, for instance, about rational inattention models in which people pay different amounts of attention to different types of shocks, aggregate versus idiosyncratic. So I like their experiment: they are trying to see whether firms respond to x-hat the same way they respond to epsilon. Once you frame it that way, there's no functional-form confusion; just focus on the straight line and run regressions of the adjustment probabilities on epsilon and x-hat. Then maybe you would find that firms don't respond to epsilon, maybe they're not paying attention to epsilon, and that's additional information, in my view. Thank you. Anybody else? There's a raised hand. Morten Ravn. I was wondering: you have all this microdata, and it is a bit special because it is from supermarkets, lots of goods, but also lots of things we don't have in there. Should we really think that the price of milk is set the same way as the price of a car? I'm just wondering about the extent to which we should think of one model of price setting or not. We know that in the core CPI prices are set differently from commodities and so on. So to what extent should we aggregate at this level? Is one model of price setting useful for monetary policy? Maybe we take one more question; I saw a raised hand before, but, ah, okay, is somebody with a mic here? Luke?
Maybe just a follow-up for Francesco. You said we've learned a lot, at least we know which models are incorrect, and you can tell us, at least from a theoretical perspective, where things will be going. So I wanted to push the chair's question a little bit more, since there are many on the call who are actually doing monetary policy and may be less familiar with these models: what is your prescription for monetary policy, from your theory, for where we are today, in terms of not just large shocks but also a high-inflation environment? Maybe I give the floor for the last three minutes back to Peter. Okay. So, Morten's point I think is very well taken. When people look at particular markets, they usually find that price setting is very different across areas. In some sense what we are trying to do is simplify reality and ask whether we can learn something that is useful. Actually, in the paper we also look at the PPI, the producer price index, which covers not just supermarkets but basically the whole economy, and we find that our results are consistent, so robust, there: we find this state dependence as well, but not the selection. So we hope that it is useful, but if someone wants to dig deeper, great gains could be had from really understanding the details of price setting in different markets. As for what we can learn, I think this is a very hard question, but one potential answer is that a lot of what the literature is after is really the effective slope of the Phillips curve, and what we get out of looking at microdata is that this slope is actually higher than previously assumed, so we need to design optimal policy based on this. What to do really depends on a lot of factors, on what kind of shock is hitting you, but I think, looking at the literature, we already know quite
a lot about what you should do, given the slope of the Phillips curve. Yes, Francesco? I agree with Peter: it's obviously a difficult question, as most policy questions are, but there's a high-level thing that we understand using these models. In normal times we don't see that many price changes, and we think prices are not changing that often; we don't really know why, whether firms are simply not paying attention. But what we do know, if we think these state-dependent models are behind the process, is that once a big shock arrives, firms will not wait: you will have a cluster of price changes, as we are seeing; you will see more frequent price changes; you will see the fraction of price increases going up big time. And the reason I would worry as a policy maker, similarly to what happened when many countries joined the euro, is that consumers, who are not trained in following these tendencies, see lots of price increases, they see prices changing everywhere, and this thing can get out of hand. It's a very delicate time, and it's very different behavior from the unfolding of one of these shocks in a time-dependent model, where you have to wait for the fairies; firms don't wait for the fairies. So now we're in the middle of the storm, and we need to reassure the markets that we know what's going on and are taking measures to avoid second-round effects, et cetera. Thank you very much. I can completely concur that what we see at the moment is indeed a much quicker pass-through of these shocks to the consumer price level, and I think this work that you are doing, Peter, and that others are doing in this PRISMA network, is very useful for us to gauge such effects and be more aware, also for the future. So thank you very much.