First of all, welcome to everybody. We will have the presentations in each session one after the other, and we will take questions all together at the end. I will give the speakers 22 minutes each; I'll keep the time. For questions, please write them in the chat. I will not look at the raised hands in Webex, I will look at the chat. Please put your affiliation and your name, if it is not already clear, and your question, so that I can maybe group them together. And if it is a clarification question, can you please write "clarify"? That way the co-authors of the speaker can already resolve some of these clarification questions in the chat, and we have more time for more substantial discussion at the end. You are all muted. At some point I may ask some of you to read your question aloud or simply to ask it; in that case, please accept the request to unmute, and then you will be able to speak. Once you mute yourself again, you will have to be given the possibility again, so you are locked, so to speak, for this session.

Now, let me get to the content and put in a plug for PRISMA. Isabel already mentioned that we have a research network within the European System of Central Banks called PRISMA, which stands for the Price-setting Microdata Analysis Network. The following two papers are indeed by members, by participants in this research network, and so I thought to remind you of that. You have information on the web page of the ECB; subscribe to that page, put it in your bookmarks, because as the papers come out we will list them there, so you can stay on track with the research production of this network. Now, Andrea, you can start to share your slides. You are presenter now, so you should be able to share your slides. This is a paper joint with Alvarez, Gautier, Le Bihan, and Lippi. Andrea, yes, we see your slides. Very good.

Hello, everyone. Thank you very much for having us. I am going to present the paper "Empirical Investigation of a Sufficient Statistic for Monetary Shocks", co-authored with Alvarez, Gautier, Le Bihan, and Lippi. In recent work, Alvarez, Lippi, and coauthors have established a new theoretical result for the propagation of monetary shocks, called the sufficient statistic proposition. This proposition tells us that, in a multi-sector economy with frictional price setting, the cumulative output response of an industry to a monetary shock is equal to the ratio of the kurtosis to the frequency of price changes, times delta, the monetary shock, divided by six times epsilon, where epsilon is the labor elasticity. The cumulative response, to be precise, is the cumulative impulse response, that is, the integral of the impulse response function; from now on I will simply call this object the response. The novelty of this result lies in underlining the importance of kurtosis in explaining the propagation of monetary shocks. Kurtosis, intuitively, is important because it measures the lack of the so-called selection effect. The importance of frequency, on the other hand, is well known in the literature, because frequency sets the time units of the model. This is a theoretical result that holds in a broad class of models: for example, it holds in random menu cost models, such as the Calvo, Calvo-plus, or Golosov-Lucas models; it holds in models of rational inattentiveness, such as the one of Reis; and it also holds in multi-product models. The result holds under some assumptions, which I list next; for reference, the proposition is written out just below.
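For reference, the proposition as just described can be written compactly as follows (my notation, a sketch rather than the authors' exact statement):

\[
\mathrm{CIR}^{\,y}_{j} \;=\; \frac{\delta}{6\,\varepsilon}\,\frac{\mathrm{Kur}_j(\Delta p)}{\mathrm{Freq}_j(\Delta p)},
\]

where \(\mathrm{CIR}^{\,y}_{j}\) is the cumulative impulse response of output in sector \(j\) (the integral of the impulse response function), \(\delta\) is the monetary shock, \(\varepsilon\) is the labor elasticity, and \(\mathrm{Kur}_j(\Delta p)\) and \(\mathrm{Freq}_j(\Delta p)\) are the kurtosis and the frequency of price changes in sector \(j\).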
For example, the model must feature small inflation. Furthermore, when firms adjust their prices, they must set their price equal to the optimal one; in the literature we say that firms must close their gap. Furthermore, the shocks in the model are Brownian. This result does not hold, for example, in a model with price plans or in a model with temporary price changes. The aim of our paper is to empirically test and explore the validity of this theoretical result. To do this, we use micro and sectoral data on consumer and producer prices for the French economy, and we exploit the across-sector variability. The main challenge of our work is to estimate the response of different sectors to a monetary shock and to estimate the micro-moments of the same sectors, for example the frequency and the kurtosis of price changes.

Before describing the empirical strategy that we adopt, I want to state the sufficient statistic proposition in terms of the response of prices rather than the response of output. I want to do this because we have better and more data for prices. After some algebra that I am not showing, we can state the sufficient statistic proposition in the following way: the response of prices in sector j up to time T must be equal to the monetary shock delta times T, minus the ratio of the kurtosis to the frequency of price changes, times delta, divided by six. Furthermore, we can disentangle the effects of kurtosis and frequency, to see whether both of them are statistically significant. To do this, we take a first-order Taylor expansion around the means of the frequency and the kurtosis. When we do this, we obtain that the response of prices equals a constant, plus a coefficient, equal to minus delta times the mean of the kurtosis divided by six times the mean of the frequency, multiplied by the kurtosis, and minus the same coefficient multiplied by the frequency. From these two equations it is quite clear how we are going to test the sufficient statistic proposition: we run cross-sector regressions based on these equations.

This is exactly what we do, and we report the two regressions in this table. The first regression is to regress the response of prices on a constant and on the ratio of the kurtosis to the frequency. Once we normalize the monetary shock delta to minus 1%, we have predictions for our coefficients: according to the theory, alpha, the constant, must be negative and equal to minus T, and the coefficient on the ratio must be equal to one over six. However, if the model allows for strategic complementarities, we have one more degree of freedom for the slope coefficient. As I told you, we also take a first-order Taylor expansion to inspect the significance of frequency and kurtosis separately: in that case we run a regression of the response of prices on a constant, the frequency, and the kurtosis, and the theory predicts that the coefficients on the frequency and the kurtosis must be the same in absolute value. Please keep these two regressions in mind, because they are the two baseline regressions that we estimate later. Before estimating them, let me describe the empirical strategy that we adopt; for reference, the two specifications are sketched just below.
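For reference, the price-level version of the proposition and the two baseline specifications just described can be sketched as follows (my notation, paraphrasing the slides):

\[
\mathrm{CIR}^{\,p}_{j}(T) \;=\; \delta\,T \;-\; \frac{\delta}{6}\,\frac{\mathrm{Kur}_j}{\mathrm{Freq}_j},
\]

\[
\text{Regression 1:}\qquad \mathrm{CIR}^{\,p}_{j} \;=\; \alpha + \beta\,\frac{\mathrm{Kur}_j}{\mathrm{Freq}_j} + u_j,
\]

where the theory predicts \(\alpha = \delta T < 0\) (so \(\alpha = -T\) with \(\delta\) normalized to \(-1\%\)) and \(\beta = -\delta/6 = 1/6\), while with strategic complementarities \(\beta\) becomes a free parameter; and

\[
\text{Regression 2:}\qquad \mathrm{CIR}^{\,p}_{j} \;=\; \alpha + \beta_f\,\mathrm{Freq}_j + \beta_k\,\mathrm{Kur}_j + u_j,
\]

where, as stated in the talk, the theory predicts \(\beta_f < 0\), \(\beta_k > 0\), and \(|\beta_f| = |\beta_k|\).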
Our empirical strategy consists of three steps, using French data on PPI and CPI. In the first step, we construct the left-hand-side variable of the above-mentioned regressions. To do this, we implement a factor-augmented VAR (FAVAR) using sectoral and aggregate time series; this way, we can obtain the price responses of the different sectors. In the second step, we use micro data on CPI and PPI to construct the frequency and the kurtosis of price changes. Once we have constructed the frequency and the kurtosis, we are done with the right-hand-side objects of our regressions. It is then possible, in the third step, to relate the two using a cross-sector regression, regressing the response of prices on the frequency and the kurtosis; this is exactly our third step.

Let me start with the first step. In the first step, as I told you, we implement a factor-augmented VAR following Bernanke, Boivin, and Eliasz. In our case, the FAVAR is just a VAR in which the interest rate, in our case the three-month rate, is observed, and there are some unobserved factors that must be estimated. We estimate these unobserved factors using a large number of time series, among which we include the sectoral PPIs and CPIs. Using the FAVAR, it is possible to retrieve the impulse response functions of this large number of time series used to estimate the factors; for example, in our case we are able to estimate the sectoral impulse response functions of PPI and CPI prices. Once we have the impulse response functions, it is straightforward to calculate the cumulative impulse response function, which is just the integral of the impulse response function; we integrate the impulse response function up to two or three years. As with every VAR, we need an identification assumption to retrieve the monetary shocks. Our baseline identification is a Cholesky identification plus a long-run restriction. The long-run restriction, in our case, is that all sectoral prices must have the same response in the long run; in our case, a response of minus 1%. The two alternative identifications that we use are a Cholesky identification without the long-run restriction, and high-frequency data used as an instrumental variable.

In this figure, we report the estimated impulse response functions using our benchmark identification, that is, the long-run restriction plus Cholesky. On the left panel, we have the impulse response functions for PPI; on the right, those for CPI. The dashed red lines are the impulse response functions of individual sectors, the thick red line is an unweighted average of them, and the blue line is the impulse response function of an aggregate measure of the PPI in the left panel and of the CPI in the right panel. On the x-axis, we have the time after the shock; on the y-axis, log-point deviations from the steady state. From this figure, for example from the PPI in the left panel, you can see that all the impulse response functions go back in the long run to the level of minus 1%. Furthermore, you can see that there is variability in how the different sectors respond to a monetary shock. Once we have the impulse response functions, as I told you, it is possible to construct the cumulative impulse responses by integrating the dashed red lines for each sector up to two or three years. This way, we have constructed the left-hand-side variables of our main regressions.

We can now proceed with the second step. In the second step, we want to construct the right-hand-side objects, which are the frequency and the kurtosis of price changes; a minimal sketch of how these sector-level objects can be computed is shown below.
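As a purely illustrative sketch of the mechanics just described (this is not the authors' code, and the file and column names are hypothetical), the sector-level moments and the cumulative response could be computed roughly as follows:

```python
# Minimal sketch: sector-level frequency and kurtosis of price changes from
# micro price records, plus a cumulative response obtained by summing an
# estimated monthly impulse response over a 24-month horizon.
import numpy as np
import pandas as pd
from scipy.stats import kurtosis

# Hypothetical input: one row per product-month, columns
# ["sector", "product_id", "month", "log_price"].
micro = pd.read_csv("micro_prices.csv")
micro = micro.sort_values(["product_id", "month"])
micro["dp"] = micro.groupby("product_id")["log_price"].diff()

def sector_moments(g):
    dp = g["dp"].dropna()
    changes = dp[dp != 0.0]                 # keep non-zero price changes
    freq = len(changes) / len(dp)           # monthly frequency of adjustment
    kurt = kurtosis(changes, fisher=False)  # raw (non-excess) kurtosis
    return pd.Series({"freq": freq, "kurt": kurt})

moments = micro.groupby("sector").apply(sector_moments)

# Cumulative impulse response for one sector: the "integral" of the estimated
# monthly price IRF, here approximated by a simple sum over 24 months.
irf_j = np.loadtxt("irf_sector_j.txt")      # hypothetical: one value per month
cir_j = irf_j[:24].sum()
print(moments.head(), cir_j)
```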
For PPI, we have around 120 sectors; for CPI, instead, around 220. In the figure, the top panel shows the histograms of the frequency of price changes, and the bottom panel shows the histograms of the kurtosis of price changes; in yellow, the histograms for PPI, and in white, those for CPI. From the top panel, you can see that some CPI sectors have a very low frequency: these are the services. Furthermore, from the bottom panel, you can see that the distribution of the kurtosis for CPI is more spread out than the one for PPI; however, both have a mean kurtosis of around 4.5.

Now that we have constructed the left-hand-side and right-hand-side objects of our baseline specifications, we can run our regressions across sectors, and this is exactly what we do now. So we regress the response of prices on a constant and on the ratio of the kurtosis to the frequency of price changes. In this table, we report the results using producer prices. In the first two columns, the response is estimated using the Cholesky identification plus the long-run restriction; in the third and fourth columns, using the Cholesky identification without the long-run restriction; and in the last two columns, using high-frequency data as an instrumental variable plus the long-run restriction. In the first row, you can see the coefficient on the ratio of the kurtosis to the frequency: it is always positive and statistically significantly different from zero, as the sufficient statistic result predicts. Furthermore, the theory also predicts that the constant should be negative and statistically significantly different from zero, and this is exactly what we obtain.

Furthermore, as I told you, we can disentangle the effects of kurtosis and frequency using a first-order Taylor expansion. When we do this, we can run a regression of the response of prices on a constant, the frequency, and the kurtosis of price changes. The results are reported in this table, again using only producer prices; the columns are the same as in the previous table. In the first row, we report the coefficient on the frequency, which is always negative and statistically significantly different from zero, as the sufficient statistic result predicts. In the second row, we can see that the coefficient on the kurtosis is always positive and statistically significantly different from zero. The constant, furthermore, is negative and statistically significant. These results are all in line with what the theory predicts. The theory also predicts that the coefficients on frequency and kurtosis should be the same in absolute value; for example, inspecting the second column, we can see that the two coefficients are not far from each other, and indeed, when we run an F-test under the null that the two coefficients are the same in absolute value, we cannot reject the null. (There is a message in the chat, but okay. Go ahead. No problem. Sorry, sorry.) Before pushing the theory further, a sketch of these two baseline cross-sector regressions is given below.
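As a minimal sketch of the two baseline cross-sector regressions and of the F-test (synthetic data generated only so the example runs; this is not the authors' code or data):

```python
# Minimal sketch of the two baseline cross-sector regressions and of the
# F-test that the frequency and kurtosis coefficients are equal in absolute
# value. The data below are synthetic, constructed to mimic the theoretical
# relationship with delta normalized to -1% and a 24-month horizon.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120                                   # roughly the number of PPI sectors
freq = rng.uniform(0.05, 0.5, n)          # monthly frequency of price changes
kurt = rng.uniform(2.0, 8.0, n)           # kurtosis of price changes
cir = -24.0 + (1.0 / 6.0) * kurt / freq + rng.normal(0, 1.0, n)
df = pd.DataFrame({"cir": cir, "freq": freq, "kurt": kurt,
                   "ratio": kurt / freq})

# Regression 1: response on the ratio kurtosis/frequency.
m1 = smf.ols("cir ~ ratio", data=df).fit(cov_type="HC1")

# Regression 2: frequency and kurtosis entered separately (first-order expansion).
m2 = smf.ols("cir ~ freq + kurt", data=df).fit(cov_type="HC1")

print(m1.params)
print(m2.params)
# Test the restriction |beta_freq| = |beta_kurt|, i.e. the two coefficients sum to zero.
print(m2.f_test("freq + kurt = 0"))
```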
We can push the theory even further. Indeed, the theory predicts that the derivative of the response of prices with respect to odd moments, for example the mean and the skewness, calculated at zero mean and skewness, must be equal to zero. This means that these other moments should not explain the propagation of monetary shocks. We can therefore run a regression in which we add these two moments, and we can also add the standard deviation of price changes; these moments, too, should not be informative about the propagation of monetary shocks. To do this, we regress the response of prices on a constant, the ratio of the kurtosis to the frequency, and the mean, the standard deviation, and the skewness of price changes. The results of this specification are reported in this table, still using only producer price data; the columns are exactly the same as in the previous two tables. In the first row, we see that the coefficient on the ratio of the kurtosis to the frequency is, also in this case, positive and statistically significantly different from zero. Furthermore, the coefficient is very close to the one obtained in our first regression, the one not including the mean, the skewness, and the standard deviation. From the second, third, and fourth rows, you can see that the other moments are not statistically significant in explaining the propagation of monetary shocks.

Until now, I have only spoken about the results for producer prices; now let me turn to the results using consumer prices. The results are shown in this table, where we report only the two main regressions shown before: in the top panel, the results from regressing the response of prices on the ratio of the kurtosis to the frequency, and in the bottom panel, the results when we regress the response of prices on the frequency and the kurtosis. From the first row, we can see that, in the majority of cases, the coefficient on the ratio of the kurtosis to the frequency is positive and statistically significant; however, this is not the case in the first two columns. (Five more minutes.) Yes, thank you. Furthermore, we can see that the constant in the top panel is, also in this case, negative and statistically significantly different from zero. In the bottom panel, the coefficient on the frequency is, in all our specifications, negative and statistically significantly different from zero. The coefficient on the kurtosis, instead, is always positive, but statistically significant only in the majority of cases; it is not significant in the third and fourth columns. We believe that the results for CPI are weaker than those for PPI because the CPI includes some sectors with lots of discounts, and the sufficient statistic result does not hold in a model with discounts. Indeed, although not reported, when we run the CPI regressions excluding clothing and food, the results for CPI are much more in line with those for PPI.

Let me conclude. In this paper, we have provided an empirical test of the sufficient statistic result. We have done this by exploiting across-sector variability using PPI and CPI data. We have found evidence for the sufficient statistic, in particular using PPI; our results are less robust when we use consumer data. What we have found is that the ratio of the kurtosis to the frequency has the predicted sign and is statistically significantly different from zero. Furthermore, we could not reject the hypothesis that the coefficients on the kurtosis and the frequency have the same absolute value, implying that kurtosis, too, is important in explaining the propagation of monetary shocks, which is exactly the novelty behind the sufficient statistic proposition.
Furthermore, we have also found that other moments, such as the mean, the skewness, and the standard deviation, are not statistically significant in explaining the propagation of monetary shocks. Our results hold across several robustness checks that we have not reported here; for example, they hold when we remove potential outliers in the response of prices, in the kurtosis, or in the frequency. We believe that the results for CPI are weaker because the CPI includes some sectors with lots of discounts, and the sufficient statistic result does not apply in a model with discounts. Thank you very much. I am happy to answer any questions later, but now there is the presentation by Peter. Peter, the floor is yours.

Okay, so thanks a lot. It's great to be here. This is joint work with Raphael Schoenle and Jesse Wursten, who may be here answering some of your questions during the presentation. The paper is about measuring price selection in micro data, and the question we are revisiting is basically a classic one: the rigidity of the price level influences the real effects of monetary policy as well as its amplification through demand channels. We know from previous research that prices change infrequently, and in standard models low frequency implies that the price level is rigid. However, in models that are micro-founded with a fixed menu cost of adjustment, the price level can stay flexible even if only a small fraction of prices adjusts, because large price changes are going to be selected. Why is this the case? Because with a menu cost, a fixed cost of adjustment, it is optimal for you as a price setter to concentrate on the most mispriced products. Then, when an aggregate shock hits, the adjusting prices are going to be those that are most mispriced, and because of this they change a lot. This interaction of individual mispricing and the aggregate shock raises the flexibility of the price level, and it can potentially be so big as to make monetary policy completely ineffective in influencing the real economy.

So in this paper we revisit this Golosov-Lucas critique of price rigidity by establishing new facts using micro data. We generate proxies for mispricing, we identify aggregate shocks, and we measure selection as the impact of this gap-and-shock interaction, a micro-macro interaction, on the probability of price changes. In particular, we ask whether it is true that prices with large gaps are changed with higher probability than those with small gaps when an aggregate shock hits. In this sense, we measure whether selection is present in the data.

And what do we find? We do find evidence for state dependence, in the sense that the price change probability, as well as the size of price changes, increases with the price gap. But we find no evidence of selection, and I will be clear about how exactly we define it. What we find is that the gap, this mispricing, is basically immaterial when an aggregate shock hits. Instead, the price level response is state dependent, but through a gross extensive margin: it adjusts through a shift in the share of price increases versus price decreases. That is the main state dependence channel. We find that this evidence provides guidance for model choice and has some policy implications.
In particular, the results are consistent with mildly state-dependent models and with sizeable monetary non-neutrality.

So let me clarify the notions behind the main idea of selection, and for this let me use an intuitive framework following Caballero and Engel. The starting point is that there are price adjustment frictions which lead to lumpy price adjustment. An important product-level state variable in these models is the price gap: how far the price of the product is from its optimal price. The important thing is that the optimal price is unobserved. Because of these lumpy adjustment costs, the price of the product only adjusts occasionally, while the optimal price is influenced continuously by both product-level and aggregate factors. What comes out of these micro-founded models is a dispersed distribution of this price gap, and this is what is shown on the right-hand side.

The focus of our analysis is the shape of the adjustment hazard. The adjustment hazard simply reports the probability of adjustment as a function of the gap, as a function of the mispricing. In a simple menu cost model, this becomes a step function: for small gaps below a threshold there is no adjustment, and above the threshold there is adjustment with probability one. The gap density together with this hazard function then implies the price change distribution, shown in this case in the dark black area; these are the price decreases. In this setup, price changes are large because only those prices change whose gaps lie beyond this inaction band, or threshold. Importantly, this is not what we call selection; these are just normal-times large price changes, and this is what we refer to as state dependence.

Selection, instead, is what happens when an aggregate shock hits. An aggregate shock in this setup shifts the hazard to the left, and because of the aggregate shock new decreases appear. Selection is large if these new decreases are large, that is, far from their optimum, because these new decreases determine the flexibility of the price level, how they influence aggregate prices. In Ss models, the new adjusters are far from their optimum, so their adjustments are large and the price level is flexible. In the Calvo model, the hazard is flat, so there are basically no new adjusters and there is no selection. What Golosov and Lucas showed is that in a model with large selection, the impulse responses after a monetary policy shock, so the real effects shown here, are smaller and temporary, while in a Calvo model, without any selection, they are large and persistent. So selection is really important, even though the frequency of price adjustment is the same across the two models.
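To make this hazard-shift logic concrete, here is a small numerical toy example of my own (made-up parameter values, not the paper's model): a menu-cost hazard produces new adjusters with large gaps when the hazard shifts, while a flat Calvo hazard produces none.

```python
# Toy illustration of selection: a dispersed price-gap distribution, two
# adjustment hazards (menu cost vs Calvo), and the "new adjusters" created
# when an aggregate shock shifts the hazard.
import numpy as np

rng = np.random.default_rng(1)
gaps = rng.laplace(0.0, 0.05, 100_000)           # fat-tailed price gaps

def hazard_menu_cost(gap, band=0.10):
    # Ss / menu-cost hazard: adjust with probability one outside the inaction band.
    return (np.abs(gap) > band).astype(float)

def hazard_calvo(gap, freq=0.15):
    # Calvo hazard: flat, independent of the gap.
    return np.full_like(gap, freq)

shock = 0.02                                     # aggregate shock shifts the gaps/hazard
for name, hz in [("menu cost", hazard_menu_cost), ("Calvo", hazard_calvo)]:
    p_before = hz(gaps)                          # adjustment probability before the shock
    p_after = hz(gaps + shock)                   # after the shock
    new = np.clip(p_after - p_before, 0.0, None) # probability mass of new adjusters
    avg_gap_new = (np.average(np.abs(gaps), weights=new)
                   if new.sum() > 0 else float("nan"))
    print(f"{name}: share of new adjusters {new.mean():.3f}, "
          f"average |gap| of new adjusters {avg_gap_new:.3f}")
```

In the menu-cost case the new adjusters sit near the edge of the inaction band, so their gaps are large relative to the typical gap; in the Calvo case there are simply no new adjusters.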
So, what we do in the paper: the data I am going to show you is supermarket scanner data; in the paper, we also show robustness results using PPI data. We like the scanner data because it is very granular, with products identified at the barcode level, and it also has wide coverage in the US, as well as 12 years of weekly data. The granularity gives us high-quality information about close substitutes, and the time series is long enough for us to identify aggregate fluctuations. We do some standard cleaning of the data: we filter out temporary discounts, and we move from weekly data to a monthly frequency using the monthly mode as the monthly price.

This figure shows how our supermarket price index does in the aggregate. The orange line shows the CPI food-at-home sub-index, while the dark red line shows the reference price index, that is, the sales-filtered supermarket index. As you can see, the index captures the business-cycle variation of the series well. It is not perfect in matching the level of inflation, because it ignores new product introductions; that is well known, but it is not something we focus on, since our question is really about business-cycle responses to shocks.

Our starting point is that even though the optimal price at the product level, and therefore the gap, is unobservable, an important component of it is observable: how far a price is from the average price of close competitors, after controlling for the competitors', or the other stores', persistent characteristics, that is, price differences due to regional variation and amenities. The idea here is that, when we compare the exact same product, stores do not want mispricing: they do not want their price to be higher than the competitors', because then they would face low demand, but they also do not want their price to be lower, because then their markup is too low and they could increase their profits by raising the price. So, in particular, we take the sales-filtered reference prices and calculate the gap as the difference from the price of the exact same product in competing stores, and we also control for a store fixed effect, using a fixed-effects regression of the form seen on this slide; with this, we eliminate permanent differences in store pricing. In our analysis, we also use the lagged gap; in that sense it is predetermined, measured before an aggregate shock hits, a measure of initial mispricing.

So let me show you some data. Here, we calculate these gaps and simply show the density of the gap. What you can see is that the density has a really sizable dispersion and also fat tails, even though we filter out sales and store fixed effects. It is also important to point out that most of the distribution lies between minus 20 and plus 20 percent, so this is the relevant range. Next, what we show here is the size of the price change conditional on having a particular price gap, and you can see that, on average, there is almost a one-to-one relationship between the gap and the size: basically, on average, when firms change their prices, they close the gap. In that sense, this justifies using this measure as a relevant component of the gap. A minimal sketch of how such a gap proxy could be constructed is given below.
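For concreteness, here is a minimal sketch of how such a gap proxy could be constructed (hypothetical file and column names; not the authors' code, which also handles sales filtering and other details):

```python
# Minimal sketch of the price-gap proxy: the deviation of a store's
# sales-filtered reference log price from the average log price of the exact
# same product (barcode) in other stores in the same month, with permanent
# store-product differences removed by demeaning (a fixed-effects step).
import pandas as pd

panel = pd.read_csv("reference_prices.csv")   # columns: product, store, month, log_p

# Cross-store average price of the same product in the same month.
panel["avg_log_p"] = panel.groupby(["product", "month"])["log_p"].transform("mean")
panel["raw_gap"] = panel["log_p"] - panel["avg_log_p"]

# Remove permanent store-product differences (regional variation, amenities).
panel["gap"] = (panel["raw_gap"]
                - panel.groupby(["store", "product"])["raw_gap"].transform("mean"))

# Lagged gap: a predetermined measure of mispricing before the aggregate shock.
panel = panel.sort_values(["store", "product", "month"])
panel["gap_lag"] = panel.groupby(["store", "product"])["gap"].shift(1)
```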
The last figure I am showing you here is the adjustment hazard in the data: the probability of adjustment conditional on the gap. What you can see right away is that it has the familiar V-shaped form, which is in line with state dependence, and that it clearly increases with the distance from zero. What you can also see is that it is asymmetric: there is a higher probability of adjustment if your price gap is negative, that is, if your price is below your competitors' prices; you like that even less than when it is above. And, it is in the eye of the beholder, but we would like you to see that in the relevant range, minus 20 to plus 20 percent, it is almost piecewise linear, so there is not much nonlinearity happening in this range. It becomes even less nonlinear if we control for unobserved heterogeneity using store-item fixed effects, as we do in the regressions to come.

So now we have this measure of the price gap; next we need an aggregate shock. Because our sample runs from 2001 to 2012, we concentrate on a credit shock, but in the paper we also show that the results are robust to using monetary policy shocks. We identify the credit shock using timing restrictions, following the Gilchrist and Zakrajšek paper: we use their excess bond premium measure, which is a corporate bond spread purged of default risk, and identify credit shocks as an increase in the excess bond premium without any contemporaneous effect on activity, prices, and the interest rate. To show you how these shocks look in the dynamics, we run a series of OLS regressions, a local projections analysis, where we look at how the credit shock influences different aggregate variables using a set of controls. As controls, we use 1 to 12 lags of the CPI, industrial production, the one-year rate, and the excess bond premium.

So how does the shock look? If there is a tightening of credit conditions, the excess bond premium increases and stays high for almost a year. Monetary policy eases, but not by enough to offset the real effects of this tightening: industrial production drops in a hump-shaped pattern, and core CPI declines, with the peak effect of the decline around two years after the shock. If we look, for the same shock, at what happens to our supermarket price index, we see that the effects are very similar to core CPI: there is a gradual response, not unlike core CPI, and the peak effect comes no earlier than 24 months. So we use this 24-month horizon in the analysis that follows.

With this product-level proxy and the aggregate shock, we can now assess whether it is true that the new adjusters after a shock have large gaps. We do this by looking at selection as the interaction of the aggregate shock and the product-level proxy in a linear probability model of price adjustment. The form of our main regression is the following: on the left-hand side, we have an indicator of price increases and of price decreases, separately, for a product in a particular store between the current period and 24 months in the future. (Peter, you have five minutes. Thanks a lot.) Our regressors are the price gap itself, the aggregate shock itself, and their interaction, and our focus is on whether the interaction is significant, because if it is, it means that, over and above the normal effects, products with large gaps respond with higher probability to the aggregate shock. We use a series of aggregate controls as before, and we also include the age of the price as a control variable.
We then saturate the regression with product-store fixed effects as well as calendar-month fixed effects, and we cluster the standard errors by category and time.

This is the main result of the paper; in the paper we have a large number of robustness tests, and the results stay similar. What we find is that the gap itself has a very strong effect on increases and decreases, as expected: a higher gap decreases the probability of a price increase and increases the probability of a price decrease. The shock itself also has a significant effect: a tightening of credit conditions reduces the probability of a price increase and raises the probability of a price decrease. But the selection effects, the interaction terms, are not significant, and this is consistent across a lot of the robustness regressions. In that sense, we find evidence for state dependence: the gap raises the frequency of adjustment in the cross-section, and the aggregate shock shifts the probability of increases versus decreases irrespective of the gap, basically all across the distribution. But there is no evidence for selection.

You might worry that we are imposing linearity even though this is not necessarily the case. So here, as a robustness check, we create groups and run a regression looking at how these different groups respond to the aggregate shock, as a function of the gap and as a function of the interaction. What you see here is that this regression actually justifies our assumption of linearity: it is not exactly linear, but it is very close to linear, and the effects are very significant for the direct effect of the gap but stay insignificant for the interaction terms. And, as I said, in the paper we have a series of robustness tests.

So what did we learn from this? (We're getting to two minutes. Thanks.) What we point out is that it is useful to separate the extensive-margin effect that Caballero and Engel emphasized into a gross extensive-margin effect and selection. The gross extensive margin is the shift between price increases and price decreases, and selection is whether large gaps adjust with higher probability conditional on the shock. We point out that if you have a hazard function that is linear, or indeed flat, as I am showing here, then an aggregate shock shifts the hazard function to the left, but this shift has basically the same effect as we have seen in our data: there is a uniform shift in the probability of adjustment. For example, here, for a tightening, the decreases go up, but this is uniform across the distribution of the gaps; there is no selection, and the new adjusters are not larger than they would otherwise be.

So, just to summarize: we argue that our data point to a gross extensive margin and no selection. This is inconsistent with time-dependent models, which have neither, but it is also inconsistent with Ss models with strongly convex hazards, because they should have both selection and a gross extensive margin. If the hazard in the model is linear or close to linear, then it is consistent with the data: in particular, there is no selection, but there is a gross extensive-margin effect, so there are state-dependent effects but no selection. A minimal sketch of the main selection regression is given below.
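Here is a minimal sketch of the selection regression just summarized (hypothetical variable names, building on the gap sketch above; the paper's calendar-month fixed effects and two-way clustering are simplified here to a store-product within transformation and one-way clustering by category):

```python
# Minimal sketch of the linear probability model with a gap-shock interaction.
# `panel` is assumed to extend the earlier gap sketch with: incr_24m (0/1
# indicator of a price increase within 24 months), shock (aggregate credit
# shock), price_age, a store_product identifier, and a category identifier.
import statsmodels.formula.api as smf

panel["gap_x_shock"] = panel["gap_lag"] * panel["shock"]

# Within transformation: demean by store-product to absorb fixed effects.
cols = ["incr_24m", "gap_lag", "shock", "gap_x_shock", "price_age"]
demeaned = panel[cols] - panel.groupby("store_product")[cols].transform("mean")
demeaned["category"] = panel["category"]

lpm = smf.ols("incr_24m ~ gap_lag + shock + gap_x_shock + price_age",
              data=demeaned).fit(cov_type="cluster",
                                 cov_kwds={"groups": demeaned["category"]})

# Selection would show up as a significant interaction term.
print(lpm.params["gap_x_shock"], lpm.pvalues["gap_x_shock"])
```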
And just to show how the data can be used, we take a model off the shelf, in particular Woodford's rational inattention extension of the Golosov and Lucas menu cost model, and use it to calibrate the density of the price gaps and the hazard function, to show that, with a model in hand, we can use the evidence we have to assess the degree of monetary non-neutrality. Here you can see how we managed to calibrate the model to the data; it is broadly in line with the evidence. After calibrating the model, we can ask about the extent of monetary non-neutrality. We do the same exercise as Woodford, in particular looking at monetary shocks and how they influence the price level in a series of regressions, and we ask how close the price-level reaction is to the Calvo case, which obtains when the information friction parameter is very high, versus the case when the information friction parameter is very low. What we find is that, for the calibration that matches our data, the effects are 20% higher than in the Calvo case, even though in an Ss framework the price level would be almost five and a half times as flexible. So the estimated information friction parameter implies high monetary non-neutrality.

I am right out of time, so let me just conclude. We use granular supermarket data, and in the paper also PPI data, to measure selection. We find evidence for state dependence, in the sense that the adjustment probability and the size of adjustment increase with the gap, but no evidence for selection, in the sense that, conditional on the aggregate shock, the adjustment is independent of the price gap. Instead, the state-dependent adjustment goes through the gross extensive margin, the shift between increases and decreases. These results are inconsistent with standard time-dependent and state-dependent models, but consistent with mildly state-dependent models, for example ones with tight information constraints as in Woodford, which also imply sizeable monetary non-neutrality. Thanks a lot, and I am looking forward to the discussion.

We have a very dense chat, but for now mostly questions about the first paper, and Francesco has been very active in responding to some of the questions as a co-author. I think there is still one outstanding from Daniel, about the econometrics: Daniel asks how worried you are, Andrea, about having generated regressors in your regression, and what the effect could be on the significance of your results.

Okay, I could not unmute myself, I do not like that; now I can. He is right, the regressors are generated, but we have lots of micro price-change data, so we should be able to estimate the kurtosis and the frequency quite precisely. Furthermore, if that were not the case, we should have attenuation bias in our baseline regression on the ratio, so the coefficient should be biased downward, and our results should be quite consistent with that. But he is right, the frequency and the kurtosis are generated regressors.

Then there was a question, if I may, where Francesco already partly replied: Matt was asking about fixed effects, and there has been a back and forth in the chat about whether you controlled for fixed effects. But can I add a twist to that? Something I do not yet understand myself: when you want to test for a sufficient statistic, what is really that test? It is within a model; then what is the role of unobservables, which you would try to capture with fixed effects, in
the test for the sufficient statistic? So would you want to saturate, or do you think it is actually effective?

Yeah, exactly, that is a really good point that we also try to address. For us it is not obvious that you should add fixed effects: if there are differences across sectors, we want exactly to exploit those differences, so our baseline is indeed without fixed effects. When you add fixed effects, our results weaken, but that is also, as Francesco was saying, because we have about 40 dummies and 120 sectors, so you end up with on average roughly three observations per dummy, and it is quite normal that the results are weakened. I do not know if I answered your question.

To me, yes; I think Matt can write if he is not satisfied. But now we also have a question for Peter, which Peter can read, but I will read it aloud for everybody: did you try some form of ordered probit, Peter, instead of the linear probability model, and maybe, why did you choose the linear probability model?

Yes, so let me try to share a slide showing those results. This is a very important question: linear probability models are an approximation, and one needs to check whether the results stay robust in more demanding setups. So here, as a robustness check, we show those results; the ordered probit is the third column. The coefficients obviously cannot be directly compared to the linear probability model, but in terms of the patterns they are very similar: the gap has the same significant effects, the shock itself also has a significant effect with the expected sign, and the selection term stays insignificant. We also ran regressions with a multinomial probit; that was the only place where we found some significance for the selection effect, but it was not very strong and only for price decreases. The detailed results you can also find in the paper.

If I see no other really pending questions in the chat, it is more of a discussion, maybe also between Anton and Francesco and everybody else as well. Okay, so we have a question from Cristiane: Peter, what would it take to apply the results, the methodology, to the PRISMA data?

Well, these are, I think, part of the PRISMA data, but it is a completely fair question. This analysis concentrates on US data, but, just to confirm, we are working on another analysis using euro area supermarket data. The difference there is that for the euro area data we do not yet have a long enough time series to run this kind of selection analysis, combining the cross-section with the aggregate shock. Instead, what we do is document the cross-sectional moments, identify the parameters of a theoretical model, and then try to draw comparisons between the US and the euro area. The work is in progress, so we do not yet have results, but we are close, and hopefully I will be able to report these results soon enough. And, unfortunately, we had a big shock, so as we get the data from 2019-20 and so on, that will help.

Peter, you have a question from Alexander Jung from the ECB; you can read it, or I can read it: how dependent are your results on selection versus state dependence on the data set? And pricing may be different at different geographic levels.

Yes, so I think that is fair. What we tried, so how we compare prices is, we compare
it nationally, because we want a large enough comparison, but we control for permanent differences in geography, since we control for store fixed effects; in that sense we take some of this heterogeneity into account. It is true, though, that with our data we could run robustness checks looking only at certain geographies, certain markets. We have not done that yet, and we would need to restrict attention to goods that are popular enough to be available in more of the stores, but I think it is a good idea; we should probably do that.

And we have, and it will have to be the final question I am afraid, a question from Christoph: is there evidence that, once they have adjusted, firms remain at the average price level, or do they move around it, or do they just start opening up another gap?

So the evidence we showed is that, on average, conditional on changing, they close the gap over a two-year horizon. After that, we have not really looked at the dynamics once the gap has been closed. I expect the gap to open up again over time and then be closed again, so firms are going to adjust only occasionally, but this is something we need to look into in the data.